# Resources and efficiency, with a focus on time

From Module I: The mystery of the computer we recall that computers are physical artefacts
whose operation is constrained by the available physical resources.
Accordingly, *computing and computations consume and utilize physical, measurable resources*, such as:

- energy,
- time,
- storage (e.g. bits of memory, disk space),
- communication (e.g. bits transmitted/received over the network), and
- processors (e.g. execution cores, CPUs, compute nodes).

In this round we concentrate on *time* as a measure of *efficiency*.
Naturally, we want our programs to run as fast (in as short a time)
as possible, so that we get our results quickly, our customers do not
experience delays in our service, and so on.

**Scalability**.
Here our main interest is in how *scalable* our programs are in
terms of their resource consumption, such as running time.
That is, we are not too interested in how much time our program spends
on a specific, *fixed* input or function call with fixed arguments.
For example, it is not very interesting to consider how much time
it takes to multiply two specific matrices, such as
\(\left(\begin{smallmatrix}1 & 2\\3 & 4\end{smallmatrix}\right)\left(\begin{smallmatrix}5 & 6\\7 & 8\end{smallmatrix}\right)\).
Indeed, once we know the answer, we can store it and use it directly
if this is the only matrix product that we are interested in.
Rather, what is more interesting is how well our program (algorithm)
performs *in general*, *as we increase the size of the input*.
For example, what we would like to achieve in the context of matrix
multiplication is that our program (algorithm for multiplying matrices)
runs fast for two *arbitrary* \(n\)-by-\(n\) matrices given as input.
That is, our measure for problem size is \(n\), the dimension of the
input matrices, and we want our algorithm to do well (run fast,
whatever the \(n\)-by-\(n\) input matrices are) in terms of the *scaling* of
the running time, as a function of the size parameter *n*.
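To make this concrete, the following sketch times a naive \(O(n^3)\) matrix multiplication on random \(n\)-by-\(n\) inputs of increasing size. The function `matrix_multiply` and the chosen sizes are illustrative, not part of the course material; the point is simply to observe how the measured time grows with \(n\).

```python
import random
import time

def matrix_multiply(a, b):
    """Multiply two n-by-n matrices given as lists of rows (naive O(n^3) algorithm)."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Measure how the running time scales as we double n.
for n in [50, 100, 200]:
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    matrix_multiply(a, b)
    elapsed = time.perf_counter() - start
    print(f"n = {n:4d}: {elapsed:.4f} s")
```

With the cubic-time algorithm above, each doubling of \(n\) should multiply the measured time by roughly eight, whatever machine the experiment is run on.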

**Measurement and analysis**.
Our intent in this round is to introduce you to basic tools and practices
for measuring the running time of an algorithm or function. We will observe
that there can be a big difference between best-case, typical, and
worst-case behaviours of an algorithm.
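As a small illustration of this difference, consider linear search through a list (a hypothetical example, not from the course material): in the best case the target is the first element and we do one comparison, while in the worst case the target is absent and every element is inspected.

```python
import time

def linear_search(xs, target):
    """Return the index of target in xs, or -1 if it is not present."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

xs = list(range(10**6))

# Best case: the target is the very first element (one comparison).
start = time.perf_counter()
linear_search(xs, 0)
best = time.perf_counter() - start

# Worst case: the target is absent (all one million elements are inspected).
start = time.perf_counter()
linear_search(xs, -1)
worst = time.perf_counter() - start

print(f"best case:  {best:.6f} s")
print(f"worst case: {worst:.6f} s")
```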
We then introduce a standard mathematical tool for
expressing and comparing run-times: the **big O notation**.
This notation concentrates on *scalability* by abstracting away small
sizes as well as multiplicative constants (caused by CPU clock speed,
pipeline efficiency, memory bandwidth, memory latency, and other factors
that affect the running time by some constant factor).
Although we focus on the running time, the big O notation can be, and
regularly is, used to quantify scalability in terms of other resources as well.
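A doubling experiment is one way to see this abstraction at work empirically. The sketch below (illustrative functions of our own choosing, not from the course material) times a linear-time and a quadratic-time computation as the input size doubles: the ratio of successive times tends towards 2 for the \(O(n)\) function and towards 4 for the \(O(n^2)\) function, regardless of the constant factors of the machine running the experiment.

```python
import time

def sum_squares(xs):
    """O(n): a single pass over the input."""
    return sum(x * x for x in xs)

def pairwise_products(xs):
    """O(n^2): considers every ordered pair of elements."""
    return sum(x * y for x in xs for y in xs)

# Doubling n should roughly double the O(n) time
# and roughly quadruple the O(n^2) time.
for f in [sum_squares, pairwise_products]:
    times = []
    for n in [1000, 2000, 4000]:
        xs = list(range(n))
        start = time.perf_counter()
        f(xs)
        times.append(time.perf_counter() - start)
    ratios = [t2 / t1 for t1, t2 in zip(times, times[1:])]
    print(f.__name__, [f"{r:.1f}" for r in ratios])
```

In practice, measurements at small sizes are noisy, so such ratios only stabilise once the inputs are large enough that the asymptotic term dominates.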