
Big O Notation — Time & Space Complexity in Ruby!

 

[Image: Big O cheat sheet]

In this post, we will look at Big O notation, covering both time and space complexity! For all our examples we will be using Ruby.

What is Big O Notation?

Big O notation describes how an algorithm's running time or memory use grows as the size of its input grows, ignoring constant factors and keeping only the dominant term.

[Image: Big O complexity chart]

Time Complexity

[Image: Big O algorithms breakdown]

O(1)

Examples of Constant Time Functions

add simply returns the sum of its two arguments. Whether it is 1 + 1 or 100 + 500, this is constant time because it always takes the same couple of steps to complete.
value_at takes an array and a position and returns the value at that position. If there is no value at that position it returns an error, otherwise it returns the value. Regardless of the size of the array, we are only picking out a single value, so this is constant time as well.
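The original post embeds these two functions as a code snippet; here is a minimal sketch of what add and value_at might look like (the exact error handling is an assumption):

```ruby
# O(1): one addition and one return, regardless of the size of the inputs
def add(a, b)
  a + b
end

# O(1): indexing into an array takes the same time for any array size
def value_at(array, position)
  # Assumption: return an error message when there is no value at that position
  return "error: no value at position #{position}" if array[position].nil?

  array[position]
end

puts add(1, 1)                 # => 2
puts value_at([5, 10, 15], 1)  # => 10
puts value_at([5, 10, 15], 9)  # => error: no value at position 9
```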

We usually write constant time as O(1), but any fixed number of steps is still constant. We simplify it to 1 because the exact number of steps does not matter; what matters is that the number of steps does not grow with the input.

O(log n)

[Image: Binary Search]

This is what a binary search looks like in Ruby.
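Since the original code is embedded as an image, here is a minimal iterative sketch of binary search on a sorted array (the author's exact implementation may differ):

```ruby
# O(log n): each comparison halves the remaining search range
def binary_search(array, target)
  low = 0
  high = array.length - 1

  while low <= high
    mid = (low + high) / 2
    return mid if array[mid] == target

    if array[mid] < target
      low = mid + 1   # target is in the upper half
    else
      high = mid - 1  # target is in the lower half
    end
  end

  nil # target not found
end

puts binary_search([1, 3, 5, 7, 9, 11], 7)  # => 3
```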

O(n)


sum simply takes an array, loops through each element adding everything up, and then prints the total. An array of 10 items will definitely be processed faster than an array of 1 million items.
A lot of code that iterates over arrays is O(n).
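As a sketch (the original embeds the code separately), a sum function like the one described might look like this:

```ruby
# O(n): the loop body runs once per element, so the time grows
# linearly with the size of the array
def sum(array)
  total = 0
  array.each { |element| total += element }
  puts total
end

sum([1, 2, 3])                 # => 6
sum(Array.new(1_000_000, 1))   # noticeably slower than the 3-element array
```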

O(n log n)

[Image: Merge Sort]

The function below shows how a merge sort can be implemented.
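Here is a minimal sketch of a recursive merge sort (the author's embedded version may differ in details):

```ruby
# O(n log n): the array is halved log n times, and each level of
# merging touches all n elements
def merge_sort(array)
  return array if array.length <= 1

  mid = array.length / 2
  left = merge_sort(array[0...mid])
  right = merge_sort(array[mid..-1])

  merge(left, right)
end

# Merge two sorted arrays into one sorted array
def merge(left, right)
  result = []
  until left.empty? || right.empty?
    result << (left.first <= right.first ? left.shift : right.shift)
  end
  result + left + right
end

puts merge_sort([5, 2, 9, 1, 7]).inspect  # => [1, 2, 5, 7, 9]
```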

O(n²)

[Image: Bubble Sort]

Here is what a bubble sort looks like in Ruby.
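A minimal sketch of bubble sort (again, the exact code in the original is embedded elsewhere):

```ruby
# O(n²): for each element we may compare it against every other
# element, giving roughly n * n comparisons in the worst case
def bubble_sort(array)
  n = array.length
  loop do
    swapped = false
    (n - 1).times do |i|
      if array[i] > array[i + 1]
        array[i], array[i + 1] = array[i + 1], array[i]  # swap neighbours
        swapped = true
      end
    end
    break unless swapped  # no swaps means the array is sorted
  end
  array
end

puts bubble_sort([4, 2, 7, 1]).inspect  # => [1, 2, 4, 7]
```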

This also means that the more nesting there is, the higher the power of n: a for loop with 2 nested for loops (3 loops in total) gives O(n³), 3 nested for loops give O(n⁴), and so on.
So be very careful when nesting those for loops!
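For example, a hypothetical triple loop like this one (count_triples is made up for illustration) runs its innermost body for every combination of three indices, which is why it is O(n³):

```ruby
# O(n³): three nested loops over the same array run the innermost
# body n * n * n times
def count_triples(array)
  count = 0
  array.each do |a|
    array.each do |b|
      array.each do |c|
        count += 1 if a + b + c == 0  # arbitrary work in the innermost loop
      end
    end
  end
  count
end

puts count_triples([-1, 0, 1])  # => 7 (after 3 * 3 * 3 = 27 iterations)
```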

O(2^n)

Here is what our subsets function looks like in Ruby.
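A minimal sketch of a function that generates every subset of an array (the original embeds its own version):

```ruby
# O(2^n): every element doubles the number of subsets, so the output
# (and the work to build it) grows exponentially with the input size
def subsets(array)
  result = [[]]
  array.each do |element|
    # for each existing subset, create a new one that also contains element
    result += result.map { |subset| subset + [element] }
  end
  result
end

puts subsets([1, 2]).inspect    # => [[], [1], [2], [1, 2]]  (4 = 2² subsets)
puts subsets([1, 2, 3]).length  # => 8 (2³)
```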

When the input array has 2 elements we get 4 subsets in the output, which is 2²; when there are 3 elements at the input we get 8 at the output, 2³.
The output is growing exponentially.

How to calculate time complexity

Case 1
We have a sequential function that loops through an array to find a specific element; the .each method works like a for loop here.
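A sketch of what sequential_search might look like (the original code is embedded as a snippet):

```ruby
# O(n): in the worst case we look at every element before finding the target
def sequential_search(array, target)
  array.each_with_index do |element, index|
    return index if element == target
  end
  nil # not found
end

puts sequential_search(['orange'], 'orange')  # best case: found immediately
puts sequential_search(['here', 'a', 'b', 'c', 'd', 'e', 'orange', 'f', 'g'], 'orange')  # near-worst case
```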

The best scenario would be:
sequential_search(['orange'], 'orange')
where the element is found on the very first step. But remember, Big O describes the worst case, meaning we could have:
sequential_search(['here', 'a', 'b', 'c', 'd', 'e', 'orange', 'f', 'g'], 'orange')
where we have to go through almost every element to find what we are looking for. This is O(n).

Case 2
We have a sorted list of movies, and each movie has a rating attached to it. We want to find a movie, then use the rating of that movie to find other movies with a similar rating.
We could use a binary search to find the movie, as this would be faster; that step is O(log n). But that is just one part: to find the other movies we go through the list again, maybe using our sequential function, which is O(n).
Doing each step once gives O(log n) + O(n), which simplifies to O(n), since the linear pass dominates. We would only get O(n log n) if an O(log n) step had to be repeated for every one of the n elements, which is exactly what happens inside merge sort.
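As a purely illustrative sketch (the movie list, the find_similar_movies name, and the rating tolerance are all made up for this example), combining one binary search with one linear pass might look like this:

```ruby
# Hypothetical movie list, sorted by title so that bsearch can be used
MOVIES = [
  { title: "Alien",  rating: 8.5 },
  { title: "Brazil", rating: 8.0 },
  { title: "Clue",   rating: 7.2 },
  { title: "Dune",   rating: 8.3 },
].freeze

def find_similar_movies(movies, title, tolerance = 0.5)
  # O(log n): binary search on the title-sorted list
  movie = movies.bsearch { |m| title <=> m[:title] }
  return [] unless movie

  # O(n): one sequential pass to collect movies with a similar rating
  movies.select do |m|
    m[:title] != title && (m[:rating] - movie[:rating]).abs <= tolerance
  end
end

p find_similar_movies(MOVIES, "Alien")  # => movies rated within 0.5 of Alien
```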

We can now, at a high level, work out the complexity of our algorithms, now that we know:
halving => O(log n)
for loops => O(n)
nested for loops => O(n²)…

Here are a couple of links for a better understanding of how to calculate time complexity
- With discrete maths
- Some exercises with solutions

Space Complexity

Space complexity uses the same notation as time complexity so expect O(n), O(1) etc.
Let's look at this simple function in Ruby and try to figure out its space complexity.
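The function itself is embedded in the original post; based on the description below (an array plus the two integers sum and i), a sketch might be:

```ruby
# Space: the array takes space proportional to n; sum and i are two integers
def sum(array)
  sum = 0
  i = 0
  while i < array.length
    sum += array[i]
    i += 1
  end
  puts sum
end

sum([1, 2, 3])  # prints 6
```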


In the sum function, we have an array of 3 elements that we loop through, adding up the elements as we go, and when we are done we print the sum of the elements.
Now to figure out its space complexity: before we dive in, it's important to know that an integer takes O(1) space. Looking at the function, we have an array and 2 integers (sum and i).
An array can have any number of elements, n. Assuming 4 bytes per integer, the array occupies 4n bytes and our two integers occupy 8 bytes, so the total space is 4n + 8. Dropping the constant term and the constant factor leaves n, so the space complexity of our function is O(n): linear.

Here is a Big O cheat sheet showing the different space and time complexities for different data structures and array sorting algorithms.
