Big O Notation
This topic matters because it introduces Big O notation, which is a fundamental part of this course.
Big O
Big O notation is used in Computer Science to describe the performance or complexity of an algorithm. It specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used by an algorithm.
- O(1)
Describes an algorithm that will always execute in the same time or space regardless of the size of the input data set.
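A minimal Python sketch of constant time. The function name `get_first` is my own illustrative choice; indexing into a list takes the same time whether the list holds ten items or ten million.

```python
def get_first(items):
    """Return the first element: O(1), constant time for any input size."""
    return items[0]

print(get_first([7, 2, 9]))  # → 7
```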
- O(N)
Describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.
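A sketch of linear time, assuming a simple membership scan (the name `contains` is hypothetical). In the worst case, the loop touches every element once, so the work grows in direct proportion to the input size.

```python
def contains(items, target):
    """Linear scan: worst case checks every element, so O(N)."""
    for item in items:
        if item == target:
            return True
    return False

print(contains([4, 8, 15, 16], 15))  # → True
```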
- O(N²)
Represents an algorithm whose performance is directly proportional to the square of the size of the input data set. This is common with algorithms that involve nested iterations over the data set. Deeper nested iterations will result in O(N³), O(N⁴) etc.
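One way to see the nested iteration in code: a naive duplicate check (the function name is illustrative). The outer loop runs N times and the inner loop runs up to N times for each, giving on the order of N² comparisons.

```python
def has_duplicate(items):
    """Compare every pair via nested loops: O(N^2) in the worst case."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(has_duplicate([3, 1, 4, 1]))  # → True
```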
- O(2^N)
Denotes an algorithm whose growth doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential.
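A classic example of exponential growth, assuming the usual naive recursive Fibonacci: each call spawns two more calls, so the number of calls roughly doubles as N increases.

```python
def fibonacci(n):
    """Naive recursion: two recursive calls per step, roughly O(2^N)."""
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(10))  # → 55
```

Even modest inputs like `fibonacci(40)` become noticeably slow, which is the practical warning behind this notation.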
- Logarithms
A logarithm is the power or exponent to which a base number must be raised to produce another number. For example, log₂(8) = 3 because 2³ = 8. Logarithms matter here because algorithms that halve the input at each step, such as binary search, run in O(log N) time.
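A sketch of O(log N) behaviour using binary search on a sorted list. Each comparison discards half of the remaining range, so the number of steps grows with the logarithm of the input size, not the size itself.

```python
def binary_search(sorted_items, target):
    """Halve the search range each step: O(log N) comparisons."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1   # target is in the upper half
        else:
            high = mid - 1  # target is in the lower half
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```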
Things I want to know more about
- I would like to learn how to implement the different Big O notation approaches.