# Big O Notation in JS

---

Hi guys! In this blog we will discuss Big O Notation in JS. We will cover these basic concepts of Big O Notation:

**1> Motivate the need for something like Big O Notation**

**2> Describe what Big O Notation is**

**3> Simplify Big O Expressions.**

**4> Define “time complexity” and “space complexity”.**

**5> Evaluate the time complexity and space complexity of different algorithms using Big O Notation.**

**6> Built-in data structures through the lens of Big O.**

So let’s get started …

**1> Motivate the need for something like Big O Notation :**

Let’s get motivated first :) Why do we need Big O Notation? Let’s start with a question: **imagine we have multiple implementations of the same function. How can we determine which one is the “best”?**

Let’s look at an example. Suppose we have a problem statement like:

**Write a function that accepts a string input and returns a reversed copy**

There are multiple ways to solve this problem. Informally, whenever we measure the performance of some code, we might rate it on a scale, like a Richter scale for earthquakes :)
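For instance, here are two possible implementations (illustrative sketches; the function names are mine, not from the original):

```javascript
// Approach 1: build the reversed string character by character
function reverseLoop(str) {
  let reversed = "";
  for (let i = str.length - 1; i >= 0; i--) {
    reversed += str[i];
  }
  return reversed;
}

// Approach 2: lean on built-in array methods
function reverseBuiltIn(str) {
  return str.split("").reverse().join("");
}
```

Both return a reversed copy of the input, so which is "better"? That's exactly the question Big O helps us answer precisely.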

But that’s a very informal, non-technical way of describing performance. It’s important to have a precise vocabulary to talk about how our code performs, and for that we have **Big O Notation** to measure and compare the performance of our code.

But you may think: **who cares about performance or Big O Notation?** If the code works, shouldn’t that be enough? Not quite. Practically every developer cares about performance; it’s a key thing that differentiates your application from others, especially for a high-traffic website. **Big O Notation** gives developers a shared vocabulary for discussing trade-offs between different approaches. And when your code slows down or crashes, **Big O Notation** helps identify the inefficient parts of the code, the pain points in our applications.

Suppose we want to **write a function that calculates the sum of all numbers from 1 up to (and including) some number n**. There are two ways to solve this problem:
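A sketch of the two solutions (reconstructed, since the original snippets are not embedded here):

```javascript
// Solution 1: loop from 1 to n, accumulating the sum
function addUpto(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// Solution 2: use the closed-form formula n * (n + 1) / 2
function addUptoFormula(n) {
  return n * (n + 1) / 2;
}
```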

So which one is better? Before answering, we have to consider what “better” means. Is it:

**a> Faster ?**

**b> Less memory intensive ?**

**c> More Readable ?**

Here, faster and less memory-intensive will usually take priority when judging the performance of a piece of code.

Let’s see how we can measure the speed of a piece of code. The first approach we can follow is to use a timer, like this:
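Here is a sketch of the timing approach, reconstructed from the description that follows (the original snippet isn’t embedded here):

```javascript
function addUpto(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

// performance.now() returns a high-resolution timestamp in milliseconds
const t1 = performance.now();
addUpto(1000000000);
const t2 = performance.now();
console.log(`Time elapsed: ${(t2 - t1) / 1000} seconds.`);
```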

Let’s see what we have done here. We defined a function **addUpto(n)**: we initialise a variable **total** that will store the sum, run a for loop from 1 to n adding `i` to total in each iteration, and once the iteration is done we return total. That’s the definition; to measure the performance of the function we use `performance.now()`. The `performance.now()` method returns a high-resolution timestamp in milliseconds. It represents the time elapsed since `Performance.timeOrigin` (the time when navigation started in window contexts, or the time when the worker is run in `Worker` and `ServiceWorker` contexts). So first we capture **t1**, the timestamp just before the call; then we call the function with 1000000000 as the value of n **(Note: we should always use a large value as the parameter when testing the performance of a function)**; then we capture **t2**, the timestamp just after the call. If we subtract **t1** from **t2** and divide by **1000** (since the result is in milliseconds) we get the function’s execution time in seconds. But there is a catch here: if you run the same code on someone else’s machine, will the result be consistent? No! So this approach to measuring performance has a few drawbacks:

- Different machines will record different times
- The *same* machine will record different times!
- For fast algorithms, speed measurements may not be precise enough

So our next question is: **if not time, then what?**

Rather than counting *seconds*, which are so variable…

Let’s count the *number* of simple operations the computer has to perform!

Let’s analyse this piece of code:
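This is the formula-based solution (a reconstruction, since the original snippet isn’t embedded here):

```javascript
function addUpto(n) {
  // 1 multiplication, 1 addition, 1 division: 3 operations total
  return n * (n + 1) / 2;
}
```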

Let’s count the number of operations in this code: one multiplication, one addition and one division. So: 3 simple operations, regardless of the size of *n*.

Now let’s look at another approach to the same problem:
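The loop-based solution, annotated with the operation counts discussed below (a reconstruction):

```javascript
function addUpto(n) {
  let total = 0;          // 1 assignment
  for (
    let i = 1;            // 1 assignment
    i <= n;               // n comparisons
    i++                   // n additions and n assignments
  ) {
    total += i;           // n additions and n assignments
  }
  return total;
}
```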

So `total += i` contributes n additions and n assignments; `i++` contributes n additions and n assignments; `let total = 0` is 1 assignment; `let i = 1` is 1 assignment; and `i <= n` contributes n comparisons. Depending on what we count, the number of operations can be as low as 2n or as high as 5n + 2. But regardless of the exact number, the number of operations grows roughly *proportionally* with n: if n doubles, the number of operations will also roughly double. So this function’s running time is entirely dependent on the value of n. If n = 100, 5n + 2 is 502; but when n = 100000, 5n + 2 is 500002.

**2> Describe what Big O Notation is:**

Okay, now it’s time to introduce… **Big O Notation**

Big O Notation is a way to formalize fuzzy counting. It allows us to talk formally about how the runtime of an algorithm grows as the inputs grow.

**Big O Notation Definition**

We say that an algorithm is **O(f(n))** if the number of simple operations the computer has to do is eventually less than a constant times **f(n)**, as **n** increases

- f(n) could be linear (f(n) = n)
- f(n) could be quadratic (f(n) = n²)
- f(n) could be constant (f(n) = 1)
- f(n) could be something entirely different!

Let’s go through a few examples:
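First, a constant-time example (an illustrative sketch, since the original snippet isn’t embedded here):

```javascript
// Always 3 operations (multiply, add, divide), no matter how big n is
function addUpto(n) {
  return n * (n + 1) / 2;
}
```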

Here, irrespective of the value of n, the number of operations will be 3, so **f(n)** does not depend on n and the time complexity will be **O(1)**.
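Next, a linear example (again an illustrative sketch):

```javascript
// The loop body runs n times, so the operation count grows with n
function addUpto(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
```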

Here the number of operations is (eventually) bounded by a multiple of *n*, so the time complexity will be **O(n)**.
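Another linear example: two consecutive loops (a sketch; the function name is mine):

```javascript
function countUpAndDown(n) {
  // first loop: about n operations
  for (let i = 0; i < n; i++) {
    console.log(i);
  }
  // second loop: about n more; 2n in total, which is still O(n)
  for (let j = n - 1; j >= 0; j--) {
    console.log(j);
  }
}
```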

Here too the number of operations is (eventually) bounded by a multiple of *n*, so the time complexity will be **O(n)**.
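Finally, a nested-loop example (a sketch; the function name is mine):

```javascript
function printAllPairs(n) {
  // outer loop runs n times...
  for (let i = 0; i < n; i++) {
    // ...and for each i, the inner loop runs n times: n * n operations
    for (let j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
```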

Here an O(n) operation sits inside another O(n) operation, so the time complexity will be **O(n²)**.

**3> Simplify Big O Expressions**

When determining the time complexity of an algorithm, there are some helpful rules of thumb for big O expressions.

These rules of thumb are consequences of the definition of big O notation.

**a> Constants do not matter**

If we have a time complexity like **O(2n)** we can ignore the constant 2 and treat it as **O(n)**. **O(500)** becomes **O(1)**, constant time complexity, and **O(13n²)** becomes **O(n²)**.

**b> Smaller terms do not matter**

If we have a time complexity like **O(n+10)** we can ignore the smaller term 10 and treat it as **O(n)**. **O(1000n+50)** becomes **O(n)** and **O(n²+5n+8)** becomes **O(n²)**.

Analysing complexity with big O can get complicated, so there are several rules of thumb that can help. These rules won’t always work, but they are a helpful starting point. Let’s see the **Big O Shorthands**…

**a> Arithmetic operations are constant:** A computer takes roughly the same amount of time to evaluate 2 + 2 and 10000 + 10000. So any arithmetic operation executes in constant time; it does not depend on the size of the operands.

**b> Variable assignment is constant:** The time complexity of assigning x = 2 and x = 10000 is the same.

**c> Accessing elements in an array (by index) or object (by key) is constant:**

For arrays we access elements by index and for objects we access them by key; the time complexity of accessing elements is constant in both cases.
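A quick illustration (hypothetical data, names are mine):

```javascript
const letters = ["a", "b", "c", "d"];
const ages = { alice: 30, bob: 25 };

// Both lookups take the same time no matter how large the collection is
console.log(letters[3]); // index access: O(1)
console.log(ages.bob);   // key access: O(1)
```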

**d> In a loop, the complexity is the length of the loop times the complexity of whatever happens inside of the loop:**
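A hypothetical illustration of this rule (the function name is mine):

```javascript
function logAllPairSums(n) {
  // outer loop: n iterations
  for (let i = 0; i < n; i++) {
    // inner loop: O(n) work per outer iteration
    for (let j = 0; j < n; j++) {
      console.log(i + j);
    }
  }
}
```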

In this example the complexity is n (the length of the loop) times n (since there is another loop inside), that is O(n²).

Let’s see two scenarios to improve our understanding of time complexity:
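First scenario (a sketch; the function name is mine):

```javascript
// Logs the numbers from 1 up to n, but always at least up to 5
function logAtLeast5(n) {
  for (let i = 1; i <= Math.max(5, n); i++) {
    console.log(i);
  }
}
```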

In this function the loop runs at least 5 times: it runs 5 times if n = 1, 2, 3, 4 or 5, and n times if n > 5. So the time complexity of this code is **O(n)**, because for large n the running time is proportional to the value of n. Let’s see another example:
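Second scenario (again a sketch, with a hypothetical name):

```javascript
// Logs the numbers from 1 up to n, but never more than 5 of them
function logAtMost5(n) {
  for (let i = 1; i <= Math.min(5, n); i++) {
    console.log(i);
  }
}
```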

In this function the loop runs at most 5 times, regardless of the value of n: it runs n times if n = 1, 2, 3, 4 or 5, and 5 times if n > 5. So the complexity of this code is **O(5)**, and since constants don’t matter, that is **O(1)**, constant time.

Now let’s see how the different time complexities compare:

**O(1)**, which stands for constant time complexity, is the best: the amount of work your algorithm does doesn’t grow with the input at all. Then there’s **O(log n)**, which is good, and the others rank as shown below:

- **O(1)** — Excellent/Best
- **O(log n)** — Good
- **O(n)** — Fair
- **O(n log n)** — Bad
- **O(n²)**, **O(2ⁿ)** and **O(n!)** — Horrible/Worst

You now understand the various time complexities, and you can recognise the best, good, and fair ones, as well as the bad and worst ones (always avoid the bad and worst time complexity).

The next question that comes to mind is how you know which algorithm has which time complexity.

- When your calculation is not dependent on the input size, it is a constant time complexity (**O(1)**).
- When the input size is reduced by half, maybe when iterating, handling recursion, or whatsoever, it is a logarithmic time complexity (**O(log n)**).
- When you have a single loop within your algorithm, it is linear time complexity (**O(n)**).
- When you have nested loops within your algorithm, meaning a loop in a loop, it is quadratic time complexity (**O(n²)**).
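The "input reduced by half" case may be the least familiar, so here is a classic example of it (a sketch, not from the original blog): binary search over a sorted array.

```javascript
// The search range is halved on every iteration, so this is O(log n)
function binarySearch(sortedArr, target) {
  let low = 0;
  let high = sortedArr.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sortedArr[mid] === target) return mid;
    if (sortedArr[mid] < target) low = mid + 1; // discard the left half
    else high = mid - 1;                        // discard the right half
  }
  return -1; // target not present
}
```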

**4> Define “time complexity” and “space complexity”**

So far we’ve been focusing on **time complexity**: how can we analyse the *runtime* of an algorithm as the size of the inputs increases?

We can also use big O notation to analyse **space** **complexity**: how much additional memory do we need to allocate in order to run the code in our algorithm?

Sometimes you’ll hear the term **auxiliary space complexity** to refer to the space required by the algorithm itself, not including the space taken up by the inputs. Unless otherwise noted, when we talk about space complexity we’ll technically be talking about auxiliary space complexity.

Let’s see the **rules of thumb** when it comes to space complexity:

a> Most primitives (booleans, numbers, undefined, null) are constant space.

b> Strings require O(*n*) space (where *n* is the string length).

c> Reference types are generally O(*n*), where *n* is the length (for arrays) or the number of keys (for objects).

**5> Evaluate the space complexity of different algorithms using Big O Notation**

Let’s see some examples:
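A constant-space example (a sketch; the original snippet isn’t embedded here):

```javascript
// Only two variables are allocated, no matter how long arr is
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}
```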

In this piece of code, `total` is one number and `i` is another number, so the space complexity of this code is O(1).
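And a linear-space example (again a sketch):

```javascript
// The new array grows in proportion to the input, so space is O(n)
function double(arr) {
  const newArr = [];
  for (let i = 0; i < arr.length; i++) {
    newArr.push(2 * arr[i]);
  }
  return newArr;
}
```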

Here the space complexity of this piece of code will be O(n), where n is the number of elements in `arr` (`arr.length`).

**6> Built-in data structures through the lens of Big O.**

Now that we’ve covered Big O, let’s spend a couple of minutes analysing the things we do all the time in JS: working with arrays, objects, and built-in methods. Much of the rest of this discussion is about data structures.

Let’s start by discussing the ones we get for free, so we will discuss:

- How objects and arrays work, through the lens of Big O
- Why adding elements to the beginning of an array is costly
- Compare and contrast the runtime for arrays and objects, as well as built-in methods.

**Big O of Objects :**

Objects are unordered collections of key-value pairs.

**When to use objects:**

- When you don’t need order.
- When you need fast access / insertion and removal.

When you don’t need any ordering, objects are an excellent choice!

Let’s see the time complexity of several operations we can perform on an object:

**a> Insertion** — Insertion of any element into an object is always constant time, **O(1)**.

**b> Removal** — Removal of any element from an object is always constant time, **O(1)**.

**c> Searching** — Searching for a value in an object is **O(n)**, where n is the number of key-value pairs in the object.

**d> Access** — Like insertion and removal, accessing an element in an object by key is **O(1)**.
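A quick demonstration of these operations (hypothetical data, names are mine):

```javascript
const person = { name: "Ada", city: "London" };
person.age = 36;          // insertion: O(1)
delete person.city;       // removal: O(1)
console.log(person.name); // access by key: O(1)

// Searching for a *value* means checking every entry: O(n)
const hasAda = Object.values(person).includes("Ada");
```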

**Big O of Object Method:**

**a> Object.keys** — The `Object.keys()` static method returns an array of a given object's own enumerable string-keyed property names. So the time complexity of `Object.keys` is **O(N)**.

**b> Object.values** — The `Object.values()` static method returns an array of a given object's own enumerable string-keyed property values. So the time complexity of `Object.values` is **O(N)**.

**c> Object.entries** — The `Object.entries()` static method returns an array of a given object's own enumerable string-keyed property key-value pairs. So the time complexity of `Object.entries` is **O(N)**.

**d> hasOwnProperty** — The `hasOwnProperty()` method of `Object` instances returns a boolean indicating whether the object has the specified property as its own property. So the time complexity of **hasOwnProperty** is **O(1)**.
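Here's what those methods look like in use (hypothetical data, names are mine):

```javascript
const scores = { alice: 10, bob: 8 };

const keys = Object.keys(scores);       // ["alice", "bob"]          → O(N)
const values = Object.values(scores);   // [10, 8]                   → O(N)
const entries = Object.entries(scores); // [["alice",10],["bob",8]]  → O(N)
const hasAlice = scores.hasOwnProperty("alice"); // true             → O(1)
```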

**Big O of Array:**

An array is an ordered list of elements.

**When to use arrays:**

- When you need order
- When you need fast access / insertion and removal (sort of….)

Let’s see the time complexity of several operations we can perform on an array:

**a> Insertion** — Insertion of an element into an array depends on where the element is inserted. Adding an element at the first position costs more than adding at the last position, because inserting at the front requires reindexing every element in the array.

**b> Removal** — Like insertion, removal depends on where the element is removed from. Removing an element from the first position costs more than removing from the last, because removing from the front requires reindexing every element in the array.

**c> Searching** — Searching for an element in an array is **O(n)**, where n is the number of elements in the array.

**d> Access** — Accessing an element in an array by index is **O(1)**.

**Big O of Array Operations:**

**a> push** — The `push()` method of `Array` instances adds the specified elements to the end of an array and returns the new length of the array. So the time complexity of `Array.push` is **O(1)**.

**b> pop** — The `pop()` method of `Array` instances removes the last element from an array and returns that element. This method changes the length of the array. So the time complexity of `Array.pop` is **O(1)**.

**c> shift** — The `shift()` method of `Array` instances removes the first element from an array and returns that removed element. This method changes the length of the array. So the time complexity of `Array.shift` is **O(N)**.

**d> unshift** — The `unshift()` method of `Array` instances adds the specified elements to the beginning of an array and returns the new length of the array. So the time complexity of `Array.unshift` is **O(N)**.

**e> concat** — The `concat()` method of `Array` instances is used to merge two or more arrays. This method does not change the existing arrays, but instead returns a new array. So the time complexity of `Array.concat` is **O(N)**.

**f> slice** — The `slice()` method of `Array` instances returns a shallow copy of a portion of an array into a new array object selected from `start` to `end` (`end` not included), where `start` and `end` represent the index of items in that array. The original array will not be modified. So the time complexity of `Array.slice` is **O(N)**.

**g> splice** — The `splice()` method of `Array` instances changes the contents of an array by removing or replacing existing elements and/or adding new elements in place. So the time complexity of `Array.splice` is **O(N)**.

**h> sort** — The `sort()` method of `Array` instances sorts the elements of an array *in place* and returns the reference to the same array, now sorted. The default sort order is ascending, built upon converting the elements into strings, then comparing their sequences of UTF-16 code unit values. So the time complexity of `Array.sort` is **O(N log N)**.

**i> forEach/map/filter/reduce/etc.** — For all of these iteration methods of arrays, the time complexity is **O(N)**.
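A small demonstration of the cheap versus costly operations (illustrative data, names are mine):

```javascript
const nums = [3, 1, 2];

nums.push(4);    // O(1): add at the end            → [3, 1, 2, 4]
nums.pop();      // O(1): remove from the end       → [3, 1, 2]
nums.unshift(0); // O(N): every element is shifted  → [0, 3, 1, 2]
nums.shift();    // O(N): same reindexing on removal → [3, 1, 2]

const copy = nums.slice();                 // O(N) shallow copy → [3, 1, 2]
const sorted = copy.sort((a, b) => a - b); // O(N log N)        → [1, 2, 3]
```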

**Summary :**

- To analyse the performance of an algorithm, we use Big O Notation
- Big O Notation can give us a high level understanding of the time or space complexity of an algorithm
- Big O Notation doesn’t care about precision, only about general trends (linear? quadratic? constant?)
- The time or space complexity (as measured by Big O) depends only on the algorithm, not the hardware used to run the algorithm

**Conclusion:**

That was a really long one, but congrats, we finally did it! I think we now have a better understanding of Big O Notation. But this is not enough: we have mostly covered the theoretical side of **Big O Notation**, and writing performance-efficient code with Big O in mind really needs practice and hands-on experience. Please check the link below; it's really helpful for analysing time complexity graphically for different functions:

I am attaching my GitHub repo below for the code discussed in this blog:

HAPPY CODING !!!! :)