Complexity

Factors that affect program performance

Environment

  • the Operating System
  • the Compiler
  • the Hardware

Implementation

  • the Algorithm (our code)

Testing

Empirical testing of the efficiency of algorithms

  • Choose a representative set of inputs
  • Measure the actual runtime (the environment matters; see the sketch below)
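
For example, a minimal sketch of in-program timing using clock(3) from the C standard library (sum_to is just a stand-in workload of mine, not from the notes):

#include <stdio.h>
#include <time.h>

/* stand-in workload; the algorithm under test goes here */
static long sum_to(long n) {
    long total = 0;
    for (long i = 1; i <= n; i++) total += i;
    return total;
}

int main(void) {
    clock_t start = clock();
    long result = sum_to(100000000L);
    clock_t end = clock();
    printf("result = %ld, took %.3f s\n",
           result, (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}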

Timing

Environmental factors

As different environments will produce different absolute times, we want to focus on the relative changes: how the runtime grows as the input size grows, rather than the absolute numbers.

Time Complexity

When calculating the time complexity of an algorithm, we consider the worst-case scenario.
(e.g. for a search algorithm, the key not being in the array A)
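
A minimal sketch of that worst case, assuming a plain linear search over an int array A:

/* Linear search: returns the index of key in A[0..n-1], or -1.
   Worst case: key is not in A, so the loop runs all n times -> O(n). */
int linear_search(const int A[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (A[i] == key) return i;
    return -1;   /* key not in A: every element was checked */
}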

Big-O Notation

From best to worst timing

Type    Time                    Cost
best    Constant                O(1)
        Logarithmic             O(log n)
        Linear                  O(n)
        n-log-n                 O(n log n)
        Quadratic               O(n^2)
        Cubic                   O(n^3)
        Exponential             O(2^n)
        Factorial               O(n!)
worst   all hell breaks loose   O(n^n)

An algorithm's worst-case execution time falls somewhere on this scale, between the fastest (constant) and slowest (O(n^n)) growth rates.

Suggested: Searching Algorithms

Calculating Big-O Values

// Remember: The Big-O value refers to the worst-case scenario

Step through the algorithm and count the number of operations performed.
Each loop over the input contributes a factor of n (or a variant of n): one loop gives a term in n, nested loops give n^2, and so on.

The resulting Big-O is the highest-order term in n, with its coefficient dropped.

Example :: 4 + 3n
Highest-order term: 3n
Big-O: O(n) - linear

Example :: 4 + 3n + 3n^2
Highest-order term: 3n^2
Big-O: O(n^2) - quadratic
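
As an illustrative sketch (this function is mine, not from the notes), here is code whose operation count matches the second example:

/* Illustrative only: a function whose operation count is 4 + 3n + 3n^2. */
int example(int n) {
    int a = 0, b = 0, c = 0, d = 0;     /* 4 constant-time operations         */
    for (int i = 0; i < n; i++) {       /* simple loop: body runs n times     */
        a++; b++; c++;                  /* 3 operations per iteration -> 3n   */
    }
    for (int i = 0; i < n; i++)         /* nested loops: n * n iterations     */
        for (int j = 0; j < n; j++) {
            a++; b++; d++;              /* 3 operations per iteration -> 3n^2 */
        }
    return a + b + c + d;               /* counted: 4 + 3n + 3n^2 -> O(n^2)   */
}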

Maths-y stuff

f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n).
f(n) is Omega(g(n)) if f(n) is asymptotically greater than or equal to g(n).
f(n) is Theta(g(n)) if f(n) is asymptotically equal to g(n).

// Given f(n) and g(n), we say f(n) is O(g(n)) if we have positive constants c and n_0 such that f(n) ≤ c·g(n) for all n ≥ n_0
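
For example, 4 + 3n is O(n): pick c = 7 and n_0 = 1, since 4 + 3n ≤ 7n for all n ≥ 1.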

Tractable - a polynomial-time algorithm exists; polynomial worst-case performance, e.g. O(n^2).
Useful and usable in practical applications.

Intractable - no polynomial-time algorithm is known (usually NP-complete problems).
Worse-than-polynomial performance, e.g. O(2^n); feasible only for small n.

Non-computable – no algorithm exists (or can exist)

Tools

time(1)

Measures the execution time of a command.
Most shells provide time as a built-in; a standalone version is also available at /usr/bin/time.
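
Typical usage (output format from bash's built-in time; the numbers are illustrative):

$ time ./a.out

real    0m1.234s
user    0m1.200s
sys     0m0.031s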

Recursion

Components

  • Base Case - aka our stopping case
  • Recursive Case - where the function calls itself on a smaller input, moving toward the base case

Calculating Factorials

Iterative Approach

int fac(int n) {
  int result = 1;
  for (int i = 1; i <= n; i++) result *= i;  /* multiply up: 1 * 2 * ... * n */
  return result;
}

Recursive Approach

int fac(int n) {
  if (n <= 1) return 1;     /* base case (also guards against n == 0) */
  return n * fac(n - 1);    /* recursive case */
}

To calculate fac(4) the program will need to do

fac(4) = 4 * fac(3)
             fac(3) = 3 * fac(2)
                          fac(2) = 2 * fac(1)
                                       fac(1) = 1
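
Unwinding: fac(1) returns 1, so fac(2) = 2, fac(3) = 6, and finally fac(4) = 24.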

That’s a lot of stack frames to set up and destroy…
It would be better to just iterate for a factorial function.

Fibonacci

So this section alone got quite large…

Click here to read the Fibonacci post

Optimisation

Tail-call optimisation

Tail-call optimisation is where you are able to avoid allocating a new stack frame for a function because the calling function will simply return the value that it gets from the called function.

A function ends with a tail call if the last operation before the function returns is another function call. If this call invokes the same function, it is tail-recursive.

// A function can probably be optimised if its last operation is calling itself

Source: StackOverflow
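
For example, a sketch of fac() rewritten to be tail-recursive (the accumulator parameter acc is my addition, not from the source):

/* acc carries the partial product, so the recursive call is the very
   last operation and its result is returned unchanged: a tail call.
   A compiler that performs TCO can then reuse the same stack frame. */
int fac_tail(int n, int acc) {
  if (n <= 1) return acc;             /* base case: product is complete */
  return fac_tail(n - 1, n * acc);    /* tail call */
}

/* usage: fac_tail(4, 1) == 24 */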