Python Computer Science
Beginner
80 mins
Teacher/Student led
What you need:
Chromebook/Laptop/PC

Algorithmic Efficiency and Complexity

In this lesson, you'll explore time and space complexity along with Big O notation. You'll implement sorting algorithms in Python using VS Code, measure their performance, and analyse the results to understand algorithmic efficiency better.

    1 - Introduction

    In this lesson you'll learn about time and space complexity, Big O notation, and apply these concepts by implementing and timing sorting algorithms in Python using VS Code.

    By the end of this lesson, you will be able to:

    • Explain algorithmic efficiency in terms of time and space complexity.
    • Describe Big O notation and apply it to algorithms.
    • Implement sorting algorithms and measure their performance.
    • Analyse results to compare efficiencies.

    You'll need VS Code installed with the Python extension. Work in a new folder called 'AlgoEfficiency'.

    2 - Understanding Algorithmic Efficiency

    Algorithmic efficiency refers to how well an algorithm performs in terms of time (how fast it runs) and space (how much memory it uses). Efficient algorithms solve problems quickly and with minimal resources. This matters most for large datasets, such as those in social media apps or scientific simulations, where speed and low resource use make a big difference.

    Time complexity measures the time an algorithm takes as the input size grows. Space complexity measures the memory used.


    For example, consider searching a list: a linear search checks each item one by one, which can take time proportional to the list size, while binary search is much faster on a sorted list because it repeatedly halves the search space.
    Imagine a phone book with a million names: linear search might require up to a million checks in the worst case, but binary search would typically need only about 20 checks.
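
    To see the difference in code, here's a minimal Python sketch of both approaches (the function names are our own, chosen for illustration):

    def linear_search(items, target):
        # O(n): check each item one by one
        for index, value in enumerate(items):
            if value == target:
                return index
        return -1

    def binary_search(sorted_items, target):
        # O(log n): halve the search space each step (the list must be sorted)
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1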

    In the next steps, we'll explore these concepts in detail.

    3 - Time Complexity and Big O Notation

    Time complexity helps us understand how the running time of an algorithm increases as the size of the input (often denoted as 'n') gets larger. It's a way to predict how efficient an algorithm will be for big problems, without worrying about the exact hardware or small details.

    This is expressed using Big O notation, which focuses on the worst-case scenario – the maximum time an algorithm might take for a given input size. Big O gives us an upper bound on the growth rate, ignoring constant factors and smaller terms to focus on the big picture.

    Here are some common Big O notations, explained with examples; a short code sketch follows the list:

    O(1): Constant time
    The algorithm takes the same amount of time no matter how large the input is. For example, accessing a single element in an array by its index – it's instant, like picking a book from a shelf when you know exactly where it is.
    O(n): Linear time
    The time grows directly with the input size. For instance, looping through a list to find the maximum value – if the list doubles in size, the time roughly doubles, like checking each page in a book one by one.
    O(n²): Quadratic time
    This often happens with nested loops, where for each item, you check every other item. Bubble sort is a classic example – for large lists, it can be very slow because if n doubles, the time quadruples (since (2n)² = 4n²). Imagine comparing every pair of students in a class to sort them by height; with twice as many students, you'd have four times as many comparisons.
    O(log n): Logarithmic time
    The time grows very slowly as n increases, often seen in divide-and-conquer strategies. Binary search on a sorted list is a great example – it halves the search space each step, so for a list of a million items, it might only take about 20 steps. It's like guessing a number between 1 and a million by repeatedly asking if it's higher or lower – you narrow it down quickly.
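
    To make these growth rates concrete, here's a minimal Python sketch (the function names are ours; binary search, shown earlier, covers the O(log n) case):

    def get_first(items):
        # O(1): one step, however long the list is
        return items[0]

    def find_max(items):
        # O(n): visits every item exactly once
        biggest = items[0]
        for value in items:
            if value > biggest:
                biggest = value
        return biggest

    def count_duplicate_pairs(items):
        # O(n²): nested loops examine every pair of items
        count = 0
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    count += 1
        return count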

    In sorting algorithms, bubble sort has a time complexity of O(n²) because of its nested loops, making it inefficient for large datasets. Quicksort, on the other hand, has an average time complexity of O(n log n), which is much better – it grows slower than quadratic but faster than linear.

    Remember, Big O simplifies things by ignoring constants and less dominant terms to focus on the overall growth rate as the input size becomes very large. For example, if an algorithm's time is described by 3n² + 2n + 1, we drop the constant 3 (so it's just n²), and ignore the 2n and +1 terms, leaving O(n²). This is because as n gets really big – say n=1000 – the n² term (1,000,000) dominates over 2n (2000) and the constant (1), which become negligible in comparison.

    Constants are ignored because whether it's 3n² or 5n², both grow at the same rate category; it's the n² that defines the behaviour for large n. Think of it like comparing the speed of cars on a long journey – the top speed matters more than whether one has a slightly better acceleration from a stop.
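
    As a quick sanity check, you can print both expressions for a growing n and watch the smaller terms fade into insignificance (a throwaway sketch, not part of the lesson files):

    for n in (10, 100, 1000, 10000):
        full = 3 * n**2 + 2 * n + 1   # the exact cost expression
        dominant = n**2               # the only term Big O keeps
        print(n, full, round(full / dominant, 4))
        # the ratio settles near the constant 3, so the growth rate is O(n²)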

    Understanding Big O helps you choose the right algorithm for the job, especially when dealing with large amounts of data.

    4 - Space Complexity

    Space complexity measures how the memory usage of an algorithm increases as the size of the input (often denoted as 'n') gets larger. Just like time complexity, it's expressed using Big O notation, focusing on the worst-case scenario – the maximum memory an algorithm might require for a given input size.

    Space complexity considers both the space needed for the input itself and any additional (auxiliary) space used by the algorithm, such as temporary variables, recursion stacks, or extra data structures. Efficient space usage is important in environments with limited memory, like mobile devices or embedded systems, where using too much RAM can cause crashes or slow performance.

    Here are some common Big O notations for space complexity, explained with examples; a short sketch follows the list:

    O(1): Constant space
    The algorithm uses a fixed amount of memory, regardless of input size. For example, swapping two variables or finding the maximum in a list (you only need a few variables to track the max) – it's like using a single notepad no matter how long your shopping list is.
    O(n): Linear space
    Memory usage grows directly with the input size. For instance, creating a copy of the input list or using an array of size n – if the input doubles, the memory needed doubles, like needing a bigger backpack for more books.
    O(n²): Quadratic space
    This occurs when using 2D structures, like a matrix of size n x n. For example, some graph algorithms might create an adjacency matrix – if n doubles, memory quadruples, which can be problematic for large n, like storing distances between every pair of cities in a large map.
    O(log n): Logarithmic space
    Memory grows slowly as input size increases, typically because of the call stack in divide-and-conquer algorithms. Each recursive call halves the problem, so the stack is about log n (base 2) deep: for 1,024 elements (2^10), that's ~10 levels, and even millions of elements need only ~20-30 stack frames, each storing minimal data. (Merge sort's call stack is this deep too, but note that its merging step also needs O(n) extra array space, so merge sort's overall space complexity is O(n).) Imagine sorting a deck of 1,024 cards by splitting it into two piles, then splitting those piles, and so on, until you have single cards. The "stack" of steps you track is only about 10 deep, no matter how many cards you start with, keeping memory use low.
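
    Here's a minimal sketch contrasting constant and linear extra space (the function names are ours):

    def swap_first_last(items):
        # O(1) extra space: works in place, no extra structures needed
        items[0], items[-1] = items[-1], items[0]

    def reversed_copy(items):
        # O(n) extra space: builds a brand-new list the same size as the input
        return list(reversed(items))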

    In sorting algorithms, bubble sort and insertion sort have O(1) space complexity because they sort in place, using only a few extra variables. Quicksort, however, has O(log n) space on average due to the recursion stack, though worst-case can be O(n).

    Understanding space complexity helps you design algorithms that are not only fast but also memory-efficient, which is crucial for real-world applications.

    Now, let's apply these concepts by implementing sorting algorithms and analysing their efficiencies.

    5 - Setting Up Your Project

    Open VS Code and create a new folder called 'AlgoEfficiency'. Inside it, create a file named 'sorting.py'.

    We'll implement three sorting algorithms: bubble sort (O(n²)), insertion sort (O(n²)), and quicksort (O(n log n) average).

    We'll use Python's time module to measure how long each sorting algorithm takes, and the random module to generate random lists of numbers for testing.

    Add this import code to 'sorting.py':

    import time
    import random

    Save the file. We'll add functions next.
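
    As a preview of what those functions will involve, here's a sketch of one sorting function plus a simple timing check (a sketch only; the lesson's own code may organise things differently):

    import time
    import random

    def bubble_sort(items):
        # sort in place with nested loops - O(n²) time, O(1) extra space
        n = len(items)
        for i in range(n):
            for j in range(n - i - 1):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    data = [random.randint(0, 10000) for _ in range(2000)]  # random test data
    start = time.perf_counter()   # perf_counter is designed for timing code
    bubble_sort(data)
    elapsed = time.perf_counter() - start
    print(f"Bubble sort on 2,000 items took {elapsed:.3f} seconds")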

