Imagine we have multiple implementations of the same function.
How can we determine which one is the "best?"
Ideally, we'd place each one somewhere on a scale: Great! → Pretty Good → Only OK → Ehhhhh → Awful.
Suppose we want to write a function that calculates the sum of all numbers from 1 up to (and including) some number n.

Solution 1:

function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

Solution 2:

function addUpTo(n) {
  return n * (n + 1) / 2;
}
Which one is better?
Let's focus on the loop version first.
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
let t1 = performance.now();
addUpTo(1000000000);
let t2 = performance.now();
console.log(`Time Elapsed: ${(t2 - t1) / 1000} seconds.`)
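As a rough comparison (our own sketch, not part of the original snippet), we could time the formula-based version the same way; the exact numbers will vary between machines and between runs:

// addUpToFormula is a made-up name, just so both versions can exist side by side.
function addUpToFormula(n) {
  return n * (n + 1) / 2;
}

let t3 = performance.now();
addUpToFormula(1000000000);
let t4 = performance.now();
console.log(`Time Elapsed: ${(t4 - t3) / 1000} seconds.`);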
Rather than counting seconds, which are so variable...
Let's count the number of simple operations the computer has to perform!
function addUpTo(n) {
  return n * (n + 1) / 2;
}
1 multiplication
1 addition
1 division
3 simple operations, regardless of the size of n
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
total += i: n additions and n assignments
i++: n additions and n assignments
let total = 0: 1 assignment
let i = 1: 1 assignment
i <= n: n comparisons
Counting every single operation exactly gets messy fast!
Depending on what we count, the number of operations can be as low as 2n or as high as 5n + 2
But regardless of the exact number, the number of operations grows roughly proportionally with n
If n doubles, the number of operations will also roughly double
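As a quick sanity check (a sketch of our own, not from the original material), you can see that proportional growth by timing the loop version of addUpTo at n and at 2n; the second call should take roughly twice as long, give or take measurement noise:

// Rough demonstration that doubling n roughly doubles the runtime of the loop version.
// timeIt is a made-up helper; results are noisy and machine-dependent.
function timeIt(fn, n) {
  let start = performance.now();
  fn(n);
  return performance.now() - start;
}

console.log(timeIt(addUpTo, 500000000));  // some number of milliseconds
console.log(timeIt(addUpTo, 1000000000)); // roughly twice as many milliseconds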
Big O Notation is a way to formalize fuzzy counting
It allows us to talk formally about how the runtime of an algorithm grows as the inputs grow
We won't care about the details, only the trends
We say that an algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n), as n increases.
For example, the loop version's count of roughly 5n + 2 operations is O(n), since 5n + 2 ≤ 6n once n ≥ 2.
function addUpTo(n) {
  return n * (n + 1) / 2;
}

function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
The formula version is always 3 operations: O(1).
The loop version's number of operations is (eventually) bounded by a multiple of n (say, 10n): O(n).
function countUpAndDown(n) {
  console.log("Going up!");
  for (let i = 0; i < n; i++) {
    console.log(i);
  }
  console.log("At the top!\nGoing down...");
  for (let j = n - 1; j >= 0; j--) {
    console.log(j);
  }
  console.log("Back down. Bye!");
}
Each of the two loops is O(n) on its own, and the total number of operations is still (eventually) bounded by a multiple of n (say, 10n), so countUpAndDown is O(n).
function printAllPairs(n) {
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}
Here we have an O(n) operation inside of an O(n) operation: each loop is O(n) on its own, so the total is O(n * n), i.e., O(n²).
When determining the time complexity of an algorithm, there are some helpful rules of thumb for big O expressions.
These rules of thumb are consequences of the definition of big O notation.
Constants don't matter:
O(2n) simplifies to O(n)
O(500) simplifies to O(1)
O(13n²) simplifies to O(n²)

Smaller terms don't matter:
O(n + 10) simplifies to O(n)
O(1000n + 50) simplifies to O(n)
O(n² + 5n + 8) simplifies to O(n²)
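To make the second rule concrete, here's a small sketch of our own (not from the material above): the nested loops below do roughly n * n logs and the final loop adds roughly n more, about n² + n operations in total; the smaller n term drops out, leaving O(n²).

// Roughly n^2 + n console.logs; the smaller n term doesn't matter as n grows, so O(n^2).
function pairsThenSingles(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
  for (let k = 0; k < n; k++) {
    console.log(k);
  }
}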
function logAtLeast5(n) {
  for (var i = 1; i <= Math.max(5, n); i++) {
    console.log(i);
  }
}

function logAtMost5(n) {
  for (var i = 1; i <= Math.min(5, n); i++) {
    console.log(i);
  }
}
logAtLeast5 is O(n): for large n, the loop runs n times.
logAtMost5 is O(1): the loop never runs more than 5 times, no matter how big n is.
Common big O complexities, from fastest to slowest: O(1), O(log n), O(n), O(n log n), O(n²).
So far, we've been focusing on time complexity: how can we analyze the runtime of an algorithm as the size of the inputs increases?
We can also use big O notation to analyze space complexity: how much additional memory do we need to allocate in order to run the code in our algorithm?
Sometimes you'll hear the term auxiliary space complexity to refer to space required by the algorithm, not including space taken up by the inputs.
Unless otherwise noted, when we talk about space complexity, technically we'll be talking about auxiliary space complexity.
Rules of Thumb
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}
total is one number and i is another number, no matter how long the input array is: O(1) space!
function double(arr) {
  let newArr = [];
  for (let i = 0; i < arr.length; i++) {
    newArr.push(2 * arr[i]);
  }
  return newArr;
}
newArr ends up holding n numbers, one for each element of the input: O(n) space!
We've encountered some of the most common complexities: O(1), O(n), O(n²).
Sometimes big O expressions involve more complex mathematical expressions
One that appears more often than you might like is the logarithm!
log₂(8) = 3 because 2^3 = 8
In general: log₂(value) = exponent means 2^exponent = value
We'll omit the 2: when we write log, we mean log₂.
This isn't a math course, so here's a rule of thumb.
The logarithm of a number roughly measures the number of times you can divide that number by 2 before you get a value that's less than or equal to one.
Halving 8: 8 → 4 → 2 → 1 takes three divisions by 2, so log(8) = 3.
Halving 25: 25 → 12.5 → 6.25 → 3.125 → 1.5625 → 0.78125 drops to 1 or below on the fifth division, which fits log(25) ≈ 4.64.
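Here's a tiny sketch of that rule of thumb in code (the helper name countHalvings is ours, not from the material above):

// Count how many times num can be divided by 2 before it is 1 or less.
// For powers of 2 this matches log exactly; otherwise it rounds the log up.
function countHalvings(num) {
  let count = 0;
  while (num > 1) {
    num = num / 2;
    count++;
  }
  return count;
}

countHalvings(8);  // 3, matching log(8) = 3
countHalvings(25); // 5, close to log(25) ≈ 4.64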
Logarithmic time complexity is great!
From fastest to slowest: O(1), O(log n), O(n), O(n log n), O(n²).
Certain searching algorithms have logarithmic time complexity (see the binary search sketch after this list).
Efficient sorting algorithms involve logarithms.
Recursion sometimes involves logarithmic space complexity.
...and more!
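As one concrete example (our own sketch, not code from the material above), binary search on a sorted array is a classic O(log n) search: each pass halves the remaining range, so the number of passes is roughly the log of the array's length.

// Minimal binary search sketch; assumes sortedArr is sorted in ascending order.
function binarySearch(sortedArr, target) {
  let left = 0;
  let right = sortedArr.length - 1;
  while (left <= right) {
    // Each pass cuts the remaining search range in half.
    let middle = Math.floor((left + right) / 2);
    if (sortedArr[middle] === target) return middle;
    if (sortedArr[middle] < target) {
      left = middle + 1;
    } else {
      right = middle - 1;
    }
  }
  return -1; // target not found
}

binarySearch([1, 3, 5, 7, 9, 11], 7); // 3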