a primer on AI for non-technical readers
what to read to understand what is going on with AI if you're not a technical person
If you landed on Earth today after an interstellar journey, no one would blame you for thinking that 2022 was the year when a tsunami called Artificial Intelligence (AI) swept earthlings off their feet, after a virtually unknown company – OpenAI – released two generative AI tools – DALL-E 2 and ChatGPT – that captured the world’s attention. Even Bill Gates felt it necessary to declare in his newsletter that The Age of AI has begun.
What do I need to know about Artificial Intelligence?
There are plenty of technical books on AI for technical people. Fortunately, you don’t need to learn computer science, advanced linear algebra, optimization or advanced statistics to understand AI from a non-technical perspective. All you need is a good non-technical grasp of three topics: algorithms, data and computing. These three books will give you the overview you need. They weave together history, important milestones, mind-blowing anecdotes and distilled technical insights in a way that is accessible to all:
AI is nothing but a set of instructions (algorithms) performed on data by computers. Chris Wiggins and Matthew Jones’ book How Data Happened (2023) is the most complete historical account of how data came about and how we became dependent on algorithms performed by computers. An unexpected takeaway: the term AI has been around for almost 70 years, but what we call AI today was not originally intended to create intelligent machines.
AI algorithms are instructions that make computers identify, extract and replicate patterns in data. We call that “learning”, and what makes AI special is that it “learns” without being explicitly “told” which patterns to “learn”. Can we be sure that what AI “learns” is, and will always be, compatible with human values and intentions? Brian Christian’s book The Alignment Problem (2020) answers that question through an accessible overview of how machine learning has evolved, pinpointing the challenges of aligning machines with humans. An unexpected takeaway: technical applications advance faster than research on alignment, but the field is quickly attracting talent and gaining the traction needed to tackle the problem.
AI cannot happen without computers, chips and computing power. Chris Miller’s book Chip War (2022) does a fantastic job of tracing the history of chips and microprocessors since the invention of the integrated circuit in the late 1950s, while explaining what they are, how they cemented Silicon Valley as we know it today and how very few suppliers dominate the market. An unexpected takeaway: the history of chips and computers has always been inescapably intertwined with decisions in entrepreneurship, business and geopolitics, which is why the United States treats threats to Taiwan and South Korea as threats to its own national security. That is where AI becomes geopolitical.
The AI Dystopia
If you haven’t been hiding in a technology detox cave, you also know that 2023 was the year of the AI dystopia. It all became quite public in March of that year, when the Future of Life Institute published an open letter calling for a pause in AI experimentation to better understand the risks it may pose to humanity. In May, the Center for AI Safety released a statement warning about the risk of extinction that AI poses to humanity. Both have been signed by academics, researchers, and CEOs of AI labs.
Not everyone shares this view, though. Nowhere is the disagreement clearer than among the three godfathers of AI – joint winners of the 2018 Turing Award (aka the Nobel Prize of Computing). They all agree that AI will become smarter than humans, but they disagree about the threat it poses to humanity.
Geoff Hinton quit Google in May 2023 and has since spent most of his time publicly sharing his views about the existential threat of AI.
Yoshua Bengio has spelled out his concerns about rogue AI in many publications, but perhaps nowhere more clearly than in his testimony before the U.S. Senate – The urgency to act against AI threats to democracy, society and national security (2023).
Yann LeCun disagrees with Hinton and Bengio and made his position eloquently clear in a blog post co-authored with Anthony Zador – Don’t Fear the Terminator (2019) – in Scientific American. The title of the article is self-explanatory.
Other AI Risks
Even if human extinction from AI is not in the cards, there are other risks associated with how we use AI that have been getting some attention over the last few years.
The two most researched risks of AI are fairness – the human biases reproduced by AI – and ethics – the implications of the “choices” AI makes. Cathy O’Neil’s now classic book Weapons of Math Destruction (2016) is one of the most accessible accounts of these very technical problems. She manages to show how issues with ethics and biases appear everywhere in your daily life, from the algorithms that set differentiated prices for car insurance to the algorithms that decide what you will see in your Facebook feed. Though she uses the terms “mathematical models” and “Big Data”, she is referring to what we call AI today.
A more recent concern is the set of social and economic risks associated with the widespread use of AI. Daron Acemoğlu and Simon Johnson’s book Power and Progress (2023) does an extraordinary job of unpacking a powerful argument supported by a thousand years of economic history: there is nothing inevitable about technology translating into benefits for everyone. That only happened during a few decades after World War II. Only under certain conditions – which they clearly identify – does technological progress become a tide that lifts all boats. Those conditions are not present now, but we could choose to create them.
Old wine in new bottles…
The discussion about the risks of machine intelligence is not truly new, however. Norbert Wiener – the brain behind the field of cybernetics – warned about this more than 60 years ago in an article – Some Moral and Technical Consequences of Automation (1960) – in Science. His main point was as brutal as it was prescient: “when a machine constructed by us is capable of operating on its incoming data at a pace which we cannot keep, we may not know, until too late, when to turn it off”.
Paraphrasing my friend Jorge G. Castañeda, literature somehow always manages to get there earlier and better than science. So it’s not surprising that technological dystopia has been a literary theme since H. G. Wells’ novel The Time Machine (1895), in which technology segregated people into two different species – the Eloi and the Morlocks – depending on how each group benefitted or was harmed by it.
The theme continued with Karel Čapek’s play R.U.R. (1920), which introduced the term robot to describe fictional artificial humans created to work in factories who soon turn against their human masters and exterminate them.
Perhaps the tension is most clearly laid out in Isaac Asimov’s short story Runaround (1942) – later published in the collection I, Robot (1950) – where he introduces the “three laws of robotics” intended to protect humans from robots. Asimov uses his stories to show the unintended problems created by the logical application of those laws, providing clear examples of the so-called alignment problem before it was formally delineated. You may have seen the 2004 film version starring Will Smith.
Did we spark your interest? Then also read:
What about AI? – to learn more about whether AI has rendered Data Science obsolete. Spoiler alert: it hasn’t!