Artificial Intelligence (AI) is among the hottest and most discussed technological topics today. AI covers many different subjects, has several competing definitions, and is interpreted in many ways depending on the background of whoever is talking. There is, however, one thing that most can agree on: AI is going to change the lives of hundreds of millions of people in the decades to come.
In 2016, analysts from the McKinsey Global Institute wrote that “60% of all occupations have at least 30% of activities that technically are automate-able,” and in a report on Artificial Intelligence, Automation, and the Economy, the U.S. White House estimated that “47% of U.S. jobs are at risk of being replaced by AI technologies” over the next 10–20 years.
While much of the discussion could imply that AI is something that belongs only to the world of tomorrow, the truth is that its capabilities are already in wide use. AI is being used for speech recognition, smart cars, fraud detection, security surveillance, music recommendations, digital assistants and much more.
While the disruptive potential of AI is hard to deny, it is not easy to predict where and in what way the change will manifest itself in the world of tomorrow.
The investment patterns do, however, show a clear trend: in the last five years almost $15 BN has been raised in venture capital across 2,250 deals in the AI space, according to the tech research firm CB Insights, indicating extreme interest in the area from investors. Most of the $15 BN can be traced to investors in the U.S., predominantly with roots in Silicon Valley.
On top of these investments come the substantial resources that tech giants such as Amazon, Google, Apple, NVIDIA, Intel, IBM and Facebook are pouring into internal organizational development in order to deliver the most powerful AI solutions in the future - competing with each other for AI talent and hiring engineers, data scientists, mathematicians and more by the hundreds.
The companies working with AI focus on a broad range of domains and issues. Listed below are nine “AI unicorns” (a unicorn being a company, usually a start-up without an established performance record, with a stock market valuation or estimated valuation of more than $1 BN), of which seven are based in the U.S. Together they cover a diverse field stretching from credit risk modelling to drug discovery and autonomous vehicles, showing the breadth of applied AI.
The potential size of the AI market is staggering if the analysts are right in their predictions. But the predictions vary a lot, which also reflects the different definitions and scopes applied to AI.
Forrester Research expects the market for “Cognitive computing technologies” to be worth $1200 BN by 2020. By 2035, Accenture expects the market for “IT systems that sense, comprehend, act and learn” to be worth $8300 BN, while a more conservative market estimate from Bank of America Merrill Lynch expects “artificial intelligence-based systems” to be worth $70 BN in 2020, combined with $83 BN for “robots”.
In their “Sizing the Prize” report from 2017, PwC predicts that the global GDP will have increased by 14 % ($15.7 trillion) in 2030 due to adoption and implementation of AI solutions across a variety of industries.
For an overview on a number of well-researched projections of AI’s growth and market value see this piece written by Techemergence.
Many prominent people have an opinion about AI. The theme is broad and can be discussed from many viewpoints; it potentially affects almost all business sectors, most people’s jobs, government, transportation and much more.
Here are some of the voices.
Three key drivers for the current change
AI has been underway for many years, but the current surge of investment and interest is driven by three converging factors:
Where does all the money go? What kind of companies and solutions are being developed? All kinds of solutions see the light of day: AI frameworks facilitating the use and development of AI; AI chatbots for conversations; AI for cybersecurity, robots, autonomous vehicles, healthcare and much more. In the figure below, CB Insights has mapped 100 hot startup companies and grouped them by theme and solution focus.
It all started more than 50 years ago
The origin of the modern AI concept is traced by many back to one event in 1956, known as the “Dartmouth Summer Research Project on Artificial Intelligence”. Organized by John McCarthy, then a young Assistant Professor of Mathematics at Dartmouth College, the project was an 8-week workshop where a small number of invited scientists and scholars met to discuss, clarify and develop ideas about “thinking machines”.
It was in the proposal for the project that the term ‘artificial intelligence’ was first officially introduced:
“…. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. ...”
That same proposal also went on to discuss the concepts of natural language processing, neural networks, theory of computation, abstraction and creativity, which are still central elements of the field of AI to this day.
(Five of the attendees of the 1956 Dartmouth Summer Research Project, reunited in July 2006, at the “AI@50 conference”. John McCarthy is number two from left. Source: Dartmouth.edu)
Today, AI is not only discussed by the experts
The discussion on everything AI only continued and intensified in the decades following the Dartmouth workshop; it was however mostly confined to the “tech ecosystems” for the greater part of the period.
Technology has become an increasingly pervasive part of our personal lives, and the digitalisation of "everything" is a goal that public and private organisations actively pursue. AI has therefore gradually entered the public discussion, where everyone has an opinion of what it is, what it should do and where it is heading.
The discussion is challenging because of varying ideas about the true definition of AI, which leads to confusion and talking past each other. A psychologist, a computer scientist and a political scientist may have different ideas of the exact definition of the terms, and of the values embedded in them. “What are we really talking about here?”, “Are we talking about the same thing?”, “Where does machine learning fit into this? And what about deep learning?”, “What is ‘natural intelligence’ – and is that the opposite of ‘artificial intelligence’?” are questions that often appear in the discussion around AI.
In the next sections, we will shed light on three categories that are frequently discussed when trying to find out how far we have come with AI.
According to John McCarthy, AI is “The science and engineering of making intelligent machines, especially intelligent computer programs”. Today the term “artificial intelligence” is widely used, but some talk about “machine learning”, and still others use the term “deep learning”. But what is the difference between the three terms?
Work on computer-based intelligence started before McCarthy. In 1950, Alan Turing introduced the famous “Turing test” in a paper titled “Computing Machinery and Intelligence”. The Turing test is a test of a machine’s ability to generate human-like responses and conduct a conversation. A machine passes the test if a human is unable to determine whether it is a machine after engaging in a conversation with it.
The test is limited and simplistic, and represents a very narrow approach to defining intelligence. But it was a start, and to this day no machine has passed it unconditionally.
Early AI solutions developed in the decades that followed experimented with simple tasks, like playing games such as checkers. Later on, based on the belief that knowledge was “just” a large number of rules, “expert systems” using rule-based reasoning were developed. One of the most famous “traditional” AI solutions is Deep Blue, a chess-playing computer developed by IBM that defeated world champion Garry Kasparov in 1997.
In the 80s a new way of programming gained ground, and the ability for machines to learn saw the light of day: machine learning. Instead of hard-coding a software program with specific instructions to complete a task, this approach allows a system to learn to recognize patterns on its own and make predictions. This makes the solution dynamic and removes the need for human experts to intervene and adjust the rules by hand.
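To make the contrast concrete, here is a minimal, hypothetical sketch of the idea (not from the source): instead of writing a rule like "small animals are cats" by hand, the program derives its decision boundary from labelled examples. The nearest-centroid classifier below is one of the simplest possible learners; the function names and toy data are illustrative only.

```python
# Learn one centroid (mean feature vector) per class from labelled examples,
# then classify new points by whichever centroid is closest.

def train(examples):
    """Derive class centroids from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Classify a new point as the class whose centroid is closest."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)

# Toy labelled data: (height_cm, weight_kg) pairs.
data = [([25, 4], "cat"), ([23, 5], "cat"), ([60, 30], "dog"), ([55, 25], "dog")]
model = train(data)
print(predict(model, [24, 4]))   # a small animal -> "cat"
print(predict(model, [58, 28]))  # a large animal -> "dog"
```

The point is that no "if height > X" rule is ever written; the boundary emerges from the data, so retraining on new data updates the behaviour without changing the code.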
Deep learning is a subset of machine learning (which again is a subset of AI). Deep learning is based on the same principles as much of machine learning, but the model behind the learning is larger and more complex, typically requiring a lot of processing power and a lot of “learning data”. Deep learning relies on large, “deep” neural networks to store the intelligence. The latest breakthroughs in the algorithms behind deep learning made it possible to reach new levels of accuracy for many important problems, such as image recognition, sound recognition, recommender systems, etc.
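To illustrate what "deep" means here, the hypothetical sketch below builds the forward pass of a small three-layer network in plain Python: each layer multiplies its inputs by weights, adds a bias and applies a non-linearity, and "depth" is simply stacking several such layers. The weights are random and untrained; a real deep learning system would additionally tune them with backpropagation on large amounts of data.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def layer(inputs, n_out):
    """One fully connected layer: random weights + bias, sigmoid activation."""
    n_in = len(inputs)
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [random.uniform(-1, 1) for _ in range(n_out)]
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

# Forward pass: 4 input features -> two hidden layers -> 1 output score.
x = [0.5, -1.2, 3.0, 0.1]
h1 = layer(x, 8)     # first hidden layer
h2 = layer(h1, 8)    # second hidden layer: a "deeper" representation
out = layer(h2, 1)   # output: a probability-like score in (0, 1)
print(out)
```

Each extra layer lets the network combine features of features, which is what gives deep models their expressive power, and why they need so much data and compute to train.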
And – to get all of this intelligence going, this new form of AI needs to be trained. Several forms of training, or learning, exist; the three most common are unsupervised learning, supervised learning and reinforcement learning.
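The three paradigms differ in what feedback the learner receives. The toy snippets below are hypothetical illustrations of each (all names and data invented for this sketch): supervised learning sees correct answers, unsupervised learning sees only raw inputs, and reinforcement learning sees only rewards for its actions.

```python
import random

# 1. Supervised: learn from (input, correct answer) pairs.
labelled = [(1, "odd"), (2, "even"), (3, "odd"), (4, "even")]
# From such pairs a learner could infer the rule below:
supervised_rule = lambda n: "even" if n % 2 == 0 else "odd"

# 2. Unsupervised: only inputs, no answers; the learner finds structure.
points = [1.0, 1.2, 0.9, 9.8, 10.1, 10.3]
threshold = (min(points) + max(points)) / 2        # crude two-cluster split
clusters = [0 if p < threshold else 1 for p in points]

# 3. Reinforcement: the learner acts, receives rewards, and adapts.
random.seed(1)
value = {"left": 0.0, "right": 0.0}                # learner's value estimates
reward = {"left": 0.0, "right": 1.0}               # environment's hidden payoff
for _ in range(200):                               # trial and error
    action = random.choice(["left", "right"])      # explore both actions
    value[action] += 0.1 * (reward[action] - value[action])  # learn from reward
best_action = max(value, key=value.get)            # converges on "right"
```

Supervised learning is told what is correct, unsupervised learning is told nothing, and reinforcement learning is only told how well it did; the right choice depends on what feedback the problem can provide.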
Much more can be said, and other definitions can be found. We therefore urge you to explore articles, blogs, whitepapers and videos - all available online - to build a deeper understanding of AI.
The “intelligence” definition
Another dimension of AI is to look at how much it can “accomplish”. Using this approach, the technology can be broadly categorized into three groups - Narrow, General and Super: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI).
In this context, Artificial Narrow Intelligence is the only form of AI that humanity has achieved so far. Solutions have been developed where the AI performs a single task, such as playing chess or Go, making purchase suggestions, sales predictions or weather forecasts. Computer vision and natural language processing are also, at the current stage, considered narrow AI, as are speech and image recognition, even if their advances seem fascinating. Self-driving car technology is still considered a type of narrow AI, or more precisely a coordination of several narrow AIs. Even Google’s translation engine, sophisticated as it is, is a form of narrow Artificial Intelligence.
While some see narrow AI as weak, immature or limited, we perform a lot of narrow tasks every day, at home and at work. AI will therefore play a larger and larger role in our lives, as a “digital assistant” both at home and at work.
Artificial General Intelligence, also known as human-level AI or strong AI, is the type of Artificial Intelligence that can understand and interact with its environment as a human would. Developing an AGI is, however, a significant challenge. Some actors within the AI field believe we will see AGI in the not too distant future, while others think it will be impossible for machines to ever master human-level intelligence. Still others say the definition and discussion are irrelevant, since we do not need machines with General AI capabilities.
When AI becomes much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills, we have achieved Artificial Super Intelligence. If AGI is hard to grasp and define, ASI is even harder.
The “system” definition
Another approach to understanding, or defining, artificial intelligence, is to look at the “elements” in an intelligent application. AI company Ayasdi speaks of “five pillars” required for a system to be intelligent, depicted below:
Discovery is the ability of an intelligent system to learn from data without being presented with an explicit target. Once the data is understood through discovery, prediction is the ability to provide output on what will happen in the future.
Justification is probably the most important, but also the least discussed, AI capability. For a prediction to have value, the system must justify its assertions. This capability is what makes the predictions intelligent. Justification builds transparency, and transparency builds trust.
Action, as a consequence of discovering, predicting and justifying, is the next element. The intelligent system acts on insight and is “alive”. The frequency of action can vary a lot depending on the needs, but with more advanced solutions seeing the light of day, such as AI supporting stock trading or autonomous cars, many actions can be performed every minute or even every second.
The last element is learning. Based on the action, and the following evaluation of the outcome of that action, the system needs to learn: reflect, understand the outcome - and learn, then start the “discover, predict, justify, act” loop again. Many solutions provide an answer at a single point in time - an intelligent system is one that is always learning through processes similar to the one outlined here.
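The full loop can be sketched as a class skeleton. This is a hypothetical illustration of the five pillars, not Ayasdi's actual API; the class, method names and toy anomaly-detection logic are all invented for the sketch.

```python
# Schematic "discover, predict, justify, act, learn" loop.

class IntelligentSystem:
    def __init__(self):
        self.threshold = 0.5              # the only "knowledge" this toy keeps

    def discover(self, data):
        """Find structure in data without an explicit target (here: a baseline)."""
        return sum(data) / len(data)

    def predict(self, baseline, new_value):
        """Forecast: will the new value exceed the learned baseline?"""
        return new_value > baseline * (1 + self.threshold)

    def justify(self, baseline, new_value):
        """Expose the reasoning behind the prediction (transparency -> trust)."""
        return f"{new_value} vs baseline {baseline:.2f} at threshold {self.threshold}"

    def act(self, alarm):
        """Act on the insight, e.g. raise an alert."""
        return "ALERT" if alarm else "OK"

    def learn(self, was_correct):
        """Adjust from the outcome; the loop then restarts with discover()."""
        if not was_correct:
            self.threshold *= 0.9         # be a bit more sensitive next time

system = IntelligentSystem()
baseline = system.discover([10, 12, 11, 9])   # discover a normal level
alarm = system.predict(baseline, 20)          # predict: 20 is anomalous
print(system.justify(baseline, 20))           # justify the call
print(system.act(alarm))                      # act -> "ALERT"
system.learn(was_correct=True)                # learn, then loop again
```

The key design point is the cycle itself: each pass through the methods feeds the next, so the system keeps learning rather than answering once and stopping.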