Software and hardware
Let us again reflect briefly on where we are now. We are essentially done resolving the mystery of the computer at the level of hardware and low-level software. The computer is, essentially, just a piece of amazingly fast and programmable sequential logic physically implemented with semiconductors.
Indeed, by building the
armlet from individual logic gates,
together with a minimal suite of tools that provide enough
convenience to start writing serious programs for
armlet, we can now argue that we understand at least many of
the central principles underlying a modern programmable computer.
Certainly we have gained a fair understanding of the hardware and the low-level
software that runs on the hardware, even if we have neglected essentially
all the physical constraints in hardware design. Most of the advanced
features of processors, including support for virtualization and parallelism,
and support for external interfaces ranging from a graphics processing unit
to the network interface and a bus for accessing peripheral devices, have also
been left outside our treatment.
But the hardware in itself is of little consequence without the programs and programming that put the hardware to use. And herein still lies a large part of the remaining mystery of the computer. We may reflect that it must take a great deal of programming to turn a piece of silicon into a machine that interactively accepts Scala code. Not to mention the programs that operate and run on the Internet, with billions of processors automatically transporting information, processing and indexing it, and serving user requests.
A great deal of programming indeed.
The operating system (*)
Let us continue our quest to understand the computer, from the perspective of software that runs on a computer. Indeed, from this perspective our understanding is far from complete. For example, how exactly do program binaries get executed? How can the machine execute multiple programs simultaneously? How do we display something on screen, or track the movement of the mouse? Communicate with the network? Play or record sound? Take pictures or record video? And so forth.
Without question the most important program that runs on any computer
is the operating system, whose responsibility
is to load user programs for execution and manage the
programs while they are being executed, controlling and serving access to the
hardware. In fact, this access control and service includes access to the memory itself.
Unlike in our simplified
armlet design, user programs typically do not
have complete access to the memory, but rather access is managed through extra
functionality in the processor hardware that enables virtualization of
the memory space seen by a user program.
This also implies that the program code of the core of the operating system
(the kernel) enjoys a more privileged level of access to the physical
hardware than auxiliary operating system code or user code, whose access is
restricted and managed with built-in processor functionality, both to abstract
away cumbersome details of management, and also to limit access to provide
stability to the system even when user code is not behaving as intended.
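The idea of memory virtualization described above can be sketched in a few lines of Scala. The page size, the page table, and all names below are purely illustrative assumptions for this sketch; real processors perform this translation in dedicated hardware (the memory management unit), not in software.

```scala
// A toy sketch of virtual-to-physical address translation.
// Page size and page-table contents are invented for illustration.
object ToyMMU {
  val pageSize = 16
  // Page table: virtual page number -> physical page number.
  // Pages not in the table are inaccessible to the user program.
  val pageTable = Map(0 -> 3, 1 -> 7, 2 -> 1)

  def translate(virtualAddr: Int): Int = {
    val vpn    = virtualAddr / pageSize   // virtual page number
    val offset = virtualAddr % pageSize   // offset within the page
    pageTable.get(vpn) match {
      case Some(ppn) => ppn * pageSize + offset
      case None      => sys.error(s"page fault at address $virtualAddr")
    }
  }
}
```

For example, virtual address 18 lies in virtual page 1 at offset 2, so it translates to physical address 7 * 16 + 2 = 114. Each user program can be given its own page table, so two programs may use the same virtual addresses without ever touching each other's physical memory.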
Our ambition during this course is directed towards user programs, so
we will not enter into detailed discussion on programming operating systems
and hardware support for virtualization at the operating system level, even
if to really understand the computer from an advanced- or
systems-programming perspective, an understanding of operating systems is
vital. But operating systems and detailed hardware architecture easily deserve
dedicated courses of their own, so suffice it to say that even if user
code may see and access the hardware from its perspective as something
roughly equivalent to the
armlet, in fact this is not quite
the full picture of what is really going on at the software/hardware interface.
Virtualization and simulation (*)
Virtualization is a principle that extends far beyond the
hardware-level virtualization of program memory. In fact, if you think about
it, for example our
armlet is a virtual design – a small, fictitious
but fully capable computer simulated by a computer. Or, in fact, a small,
fictitious computer simulated by yet another fictitious computer simulated
by a computer, since Scala programs themselves run on a virtual machine –
more precisely, the Java Virtual Machine – which
is simulated by the actual physical hardware
(or, in some cases, one or more further layers of virtualization).
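To make the idea of a computer simulated by a computer concrete, here is a minimal sketch of a fictitious accumulator machine interpreted by a Scala program. The two-instruction design below is invented for illustration; it is far smaller than the armlet, but the principle is the same: the "hardware" is just data and code in the simulating program.

```scala
// A fictitious two-instruction accumulator machine, simulated in Scala.
sealed trait Instr
case class Add(v: Int) extends Instr   // accumulator = accumulator + v
case class Mul(v: Int) extends Instr   // accumulator = accumulator * v

object ToyMachine {
  // Execute a program, starting from accumulator value 0,
  // and return the final accumulator value.
  def run(program: Seq[Instr]): Int =
    program.foldLeft(0) {
      case (acc, Add(v)) => acc + v
      case (acc, Mul(v)) => acc * v
    }
}
```

For instance, `ToyMachine.run(Seq(Add(2), Mul(10), Add(1)))` evaluates to 21. Note the layering: the fictitious machine runs on the JVM, which in turn runs on the physical hardware.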
Given the inevitable loss in performance – the overhead – incurred by virtualization and simulation, it should of course be asked why one would want to go virtual and engage in simulation, instead of doing it for real?
From a pedagogical perspective the advantages of virtualization and
simulation are obvious. Think about the
armlet, for example. We have
complete control over the simulation, and with minor effort we can
isolate any aspect of the design for detailed study and instrumentation
(say, a load to
Ticker) as we please.
In contrast, try tracking an individual gate or toggling the value of
a memory element in a real, physical processor design! It is also easy
to play and experiment with the virtual design, since design changes
are typically just a few lines of Scala code, instead of, say, a physical
and costly manufacturing process. Virtualization also enables us to
abstract away the nitty-gritty annoying details of physical reality
that physical hardware designers must obsess about, but which are not
relevant from a programming perspective, such as routing power to the
gates and heat out of the circuit.
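The kind of instrumentation described above is a few lines of Scala in a simulated design. Below is a hedged sketch of wrapping one simulated register so that every write is observed; the class and its interface are invented for illustration and are not part of the armlet simulator. Probing one memory element like this in a physical chip would require laboratory equipment.

```scala
// A simulated register instrumented to observe every write.
class ObservedRegister(name: String) {
  private var value = 0
  var writeCount = 0           // how many writes have occurred

  def get: Int = value

  def set(v: Int): Unit = {
    writeCount += 1
    println(s"[$name] write #$writeCount: $value -> $v")
    value = v
  }
}
```

A usage example: `val r = new ObservedRegister("Ticker"); r.set(5); r.set(8)` logs both writes and leaves `r.writeCount` at 2, with no change needed anywhere else in the simulated design.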
Of course the same advantages apply also to non-pedagogical,
professional settings. Virtualization enables complete control over
and isolation of whatever is being simulated. For example, code
running on a virtual machine cannot (well, at least in principle)
escape the virtual machine. Try writing an
armlet program that
crashes your machine! (Note: in fact the assembler is easily crashed,
but try crashing your computer from an armlet program!)
Errors are in many cases easier to catch and recover
from, since the simulator can be programmed to detect and handle
them transparently while running the simulation.
Virtualization is portable across different computing platforms.
As soon as you have the simulator running on a platform, you have all
the programs that run on the simulator running on the platform.
The advantages of abstraction and easy experimentation also apply
in professional settings. Programming is about automation, and the more
annoying nitty-gritty details we can forget about and abstract away,
the more time we have to experiment and play with the more relevant
aspects of a design, also professionally.
Programming environments (*)
A fundamentally important skill for a professional programmer is to be aware of, select, and, if necessary, create the tools that enable a comfortable and productive programming environment where one can
focus on the important aspects of a design, and
get the job done efficiently.
Here the programming environment should be viewed as including both software and hardware, and efficiency should be interpreted broadly to consist of resource consumption both in physical terms (computer time, electricity, and so forth) and in human terms (programmer time, money, and so forth).
In our case we have chosen the Scala programming language and the associated set of tools to study and practice programming. We believe Scala to be a versatile and scalable choice that not only balances (human) productivity and (physical) efficiency, but also enables one to pick up other programming languages with relative ease.
In Module II, we proceed to look at programming abstractions and analysis common to essentially all of programming.
Conclusion – it is important to understand hardware
Let us conclude by summarising some rationale why a successful programmer must understand the basic principles of hardware, and low-level representations of information, as per our quest in Module I.
Bits hide nothing and enable everything
When all else fails, a programmer can always inspect and manipulate information at the level of bits. At this level nothing is hidden, but working with bits is cumbersome since every single piece of detail is visible and not hidden under successive layers of abstraction and standards of representation. In many respects this is akin to disassembling a complex, packaged machine into its atomic parts.
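To illustrate that nothing is hidden at the bit level, here is a small sketch of inspecting and manipulating an `Int` with standard Scala shift and mask operations; the helper names are our own for this example.

```scala
// Inspecting and manipulating individual bits of an Int.
object Bits {
  // Render the low 8 bits of x as a binary string, most significant first.
  def lowByte(x: Int): String =
    (7 to 0 by -1).map(i => (x >> i) & 1).mkString

  // Test whether bit i of x is set.
  def bitIsSet(x: Int, i: Int): Boolean = ((x >> i) & 1) == 1

  // Return x with bit i set.
  def setBit(x: Int, i: Int): Int = x | (1 << i)
}
```

For example, `Bits.lowByte(13)` is `"00001101"`, since 13 = 8 + 4 + 1; `Bits.bitIsSet(13, 2)` is `true`; and `Bits.setBit(13, 1)` is 15. Every value a program works with, however abstract, bottoms out in such bits.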
Hardware is the basis of all computing
In practice computing is a physically constrained (hardware-constrained) activity, even if programmers try their very best at virtualization and abstraction. Every program ultimately runs on a machine: silicon and plastic in your hand, or in a rack thousands of kilometres away. Understanding the scale and relevance of different hardware-induced constraints such as memory hierarchies forms a prerequisite for advanced and efficient programming.
Hardware and software interact
Understanding the basics of only one leaves a limited worldview.
Modern hardware design is programming.
Hardware constraints guide how software is written.
For example, the importance and emergence of parallel programming is precisely because hardware can deliver further performance increasingly only through multiple parallel processors cores. To deliver performance in software, such physical constraints and trends in hardware design must be understood and taken into account.
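As a small taste of the parallel programming mentioned above, the sketch below splits a sum across two Scala futures, which the runtime may schedule on different processor cores. This is an illustrative sketch only, not a tuned parallel implementation.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Sum an array by computing the two halves concurrently.
object ParallelSum {
  def sum(xs: Array[Long]): Long = {
    val mid   = xs.length / 2
    val left  = Future(xs.slice(0, mid).sum)       // first half
    val right = Future(xs.slice(mid, xs.length).sum) // second half
    val (a, b) = Await.result(left.zip(right), 10.seconds)
    a + b
  }
}
```

For example, `ParallelSum.sum((1L to 100L).toArray)` yields 5050. Whether the two halves actually execute in parallel depends on the hardware underneath – which is precisely the point.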
Software requirements guide hardware design.
The requirements of dedicated applications often translate into hardware designs supplying increased performance in that application, and beyond. Dedicated graphics processing units (GPUs) originated to provide increased performance in graphics applications, but are now used in many other applications as well.
To appreciate the ingenuity of software and the intellectual pleasure and challenge of programming
Through the understanding of hardware it is possible to properly appreciate software and achieve new levels of programming skill and creativity.