When I was in elementary school, one of the weekly vocabulary exercises I had to complete was on the topic of “fields of study.” This basically meant that our class had to learn the etymology of many of those terms ending in the root “-ology,” or “field of study” – a task I’m sure we all had to complete at some point in our education.
Our vocabulary quiz at the end of that week consisted of questions such as, “what does a biologist study?” or “what is sociology?” – all of which we usually answered by reverse-engineering the Greek and Latin roots in the words, leading to formulaic answers such as, “one who studies life,” or “the study of society.”
Deciphering the term “computer scientist,” however, is not so simple.
Surprisingly, that’s not because it poses a sophisticated definitional problem, but rather a historical one – one best addressed by stepping through the history of the field.
Computer Science Codex
The very first computer scientists were mathematicians. Computer science sits on a mathematical bedrock, drawing from areas such as linear algebra and combinatorial math to solve challenging problems, such as displaying three-dimensional objects on computer monitors.
In fact, computer science and computer scientists predate the creation of the first modern computer – machines with intricate electronic processors that can carry out complex tasks – by several hundred years. At first, that may seem hard to believe, but consider the definition of computer: something that computes.
From this, it’s clear that the first computer did not need to be a machine.
In fact, one of the earliest uses of the term computer was in the seventeenth century, when it referred to people who carried out mathematical calculations. That convention persisted until about the end of the nineteenth century, when, owing to the spread of machinery during the Industrial Revolution, the term came to refer specifically to machines that carried out calculations.
The inventors of these computational machines were predominantly mathematicians who had a knack for tinkering with parts such as gears and pulleys. For example, in the early nineteenth century, a mathematician by the name of Charles Babbage conceptualized and began developing a machine called the Difference Engine, which is considered the first automatic mechanical computer. The Difference Engine could tabulate the values of polynomial functions: turning a handle set many gears into motion, each crank producing the next value in the table. There were many such instances of machinery being used to perform ever more complex computations, though the notion of the computer remained tied to mathematics, save for a few exceptions – most notably the Jacquard loom, a machine that wove patterned fabrics according to instructions encoded on punched cards.
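The trick behind the Difference Engine is the method of finite differences: for a polynomial of degree n, the n-th differences between successive values are constant, so an entire table of values can be produced by repeated addition alone – no multiplication required, which is exactly what a train of gears can do. A minimal sketch in Python (the function name is illustrative, not Babbage’s terminology):

```python
def tabulate(initial_diffs, steps):
    """Tabulate a polynomial by repeated addition, mimicking the
    mechanical principle of the Difference Engine.

    initial_diffs: [p(0), first difference, second difference, ...];
    for a degree-n polynomial, the last entry is constant.
    """
    diffs = list(initial_diffs)
    values = []
    for _ in range(steps):
        values.append(diffs[0])
        # Each "column" of the engine is incremented by the column
        # to its right; the final, constant difference never changes.
        for i in range(len(diffs) - 1):
            diffs[i] += diffs[i + 1]
    return values

# Example: p(x) = x^2, so p(0) = 0, first difference p(1) - p(0) = 1,
# and the second difference is the constant 2.
print(tabulate([0, 1, 2], 6))  # → [0, 1, 4, 9, 16, 25]
```

Notice that the inner loop performs only additions, mirroring how each turn of the engine’s handle propagated carries through its gear columns.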
Arguably, it wasn’t until a scientific paper published in the mid-nineteenth century by Augusta Ada King, Countess of Lovelace and daughter of the poet Lord Byron, that the notion of programmable computers – a foundational basis for modern-day PCs and laptops – was conceived. Through studying Babbage’s machines and working closely with him, Lady Lovelace recognized a distinction between mere calculation and programmability, as exemplified by the Jacquard loom, and she eventually became what’s considered the world’s first computer programmer, writing sequences of instructions designed to be fed to a machine on punched cards.
From there, the idea of programmable computers took flight in various forms, whether in the tabulating machine patented by Herman Hollerith in 1884 to aid in census data collection, or in the Differential Analyzer created by Vannevar Bush around 1930 to solve complex differential equations – a complicated task with implications for dynamic simulations, such as aircraft flight systems. These machines all shared the central characteristic of being algorithmic: each solved its specific problem through a step-by-step, mechanical process.
Unsurprisingly, it’s this mechanical nature that caught the eye of Alan Turing, the father of the modern computer, who proposed that it was possible to create a computing machine that could do more than just arithmetic: one that could also carry out logical operations and store data, such as letters, in its internal states. Turing’s proposal and later work ultimately laid the groundwork for a number of modern computer science concepts, including the basis for artificial intelligence. And, from there, the rest is history – computers began incorporating electromagnetic components, and continuous developments in hardware systems led to the computers as we know them today.
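Turing’s abstraction – a machine with a finite set of internal states that reads and writes symbols on a tape, one cell at a time – is simple enough to sketch in a few lines. This is a generic illustration rather than Turing’s own notation, and the example transition table (a toy machine that inverts a binary string) is my own:

```python
def run_turing_machine(transitions, tape, state="start", halt="halt"):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is +1 (right), -1 (left), or 0 (stay).
    """
    cells = dict(enumerate(tape))  # sparse tape; blank cells read as " "
    head = 0
    while state != halt:
        symbol = cells.get(head, " ")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# A toy machine: sweep right, flipping 0s and 1s; halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", " "): ("halt", " ", 0),
}
print(run_turing_machine(flip, "10110"))  # → 01001
```

The point of the sketch is how little machinery is needed: a handful of states and a lookup table suffice to express an algorithm, which is precisely why Turing’s model could capture logic and data storage, not just arithmetic.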
Man and Machine
Through understanding the history of computers and how they evolved from pure calculating machines into tools that can carry out a wide variety of tasks, it becomes increasingly clear that computer science, at its core, is about combining man and machine to solve problems algorithmically.
Consequently, computer scientists are the middlemen and women who bridge the gap between an idea or a challenging problem and a set of understandable instructions that a machine or program can carry out. But there’s a deeper implication of that answer worth mentioning: the notion of versatility.
As discussed before, computer science is an ever-changing field that is becoming increasingly accessible as it advances, and, more often than not, it is being woven into other fields of study. In this light, a computer scientist is more than just a programmer or a network security administrator; rather, computer scientists are versatile people who wield technology to solve problems.
Considering the scope of that answer, I don’t think it’s far-fetched to say that everyone has an inner computer scientist – and that’s an empowering and encouraging idea.