
Making Sense of AI: The History, the Hype, and the Hard Truth

Always wanted to know more about Artificial Intelligence (AI) but never dared to ask? This fascinating introduction to AI debunks many of the myths and looks objectively at key developments and potential outcomes.

AI. What’s the first thing that comes to mind when you hear or see the term? A film like Ex Machina, Chappie, RoboCop, Blade Runner or 2001: A Space Odyssey? Or perhaps you think of more specific concepts or images along the lines of facial recognition technology, Bitcoin, job losses, autonomous vehicles or killer robots? What is AI, exactly? Margaret Boden, OBE, ScD, FBA, Research Professor of Cognitive Science at the University of Sussex, states quite simply that “AI seeks to make computers do the sorts of things that minds can do.” That’s a pretty broad definition. And now AI suddenly seems to be the hot topic. Let’s explore why AI has seemingly mushroomed overnight, review some lesser-known historic developments in the field, consider how the future of work is likely to be impacted and, finally, reflect on some important aspects of how the media reports on the topic.

Understanding Frequency Illusion, Selective Attention and Confirmation Bias

If you’re anything like me, it does seem that AI is cropping up everywhere. It’s like that time I heard about an obscure chamber music-electro-punk band one day, and then, later in the week, a song of theirs suddenly appeared in my recommended Spotify playlist! Was it simply a coincidence... or did the social media algorithms have me pegged? It could very well be the latter, but equally likely I experienced the “frequency illusion.” Stanford linguistics professor Arnold Zwicky coined the term back in 2006 to describe the syndrome in which a concept or thing one has just learned about suddenly seems to crop up everywhere. Zwicky attributes this to the combination of two psychological processes. The first, selective attention, is activated upon noticing a new word, object or concept; after that, it’s on the radar, triggering an unconscious hypersensitivity to its existence and, as a result, it is found more often. The second, confirmation bias, suggests to your mind that each sighting is additional proof of the impression that the item has become omnipresent overnight. This is also known as the Baader-Meinhof phenomenon, a name first established in 1994 after a commenter on the online discussion board of a US newspaper heard the name of the ultra-left-wing German terrorist group twice in 24 hours.

While the “frequency illusion” or Baader-Meinhof phenomenon may be at work when it comes to the seemingly ubiquitous presence of AI, a number of catalysts have contributed to genuine acceleration in the field, namely significant advances in computer processing power, the miniaturization of materials, rapid prototyping, increased connectivity and the significantly lower cost of storage (P. Anderson).

AI is not a recent discovery

In terms of the history of AI, one of the first pioneers was Lady Ada Lovelace in the 1840s, but the term “artificial intelligence” was coined in 1956 by Stanford professor John McCarthy, and this date is generally considered the modern birth of AI.

And while it is true that AI has grown in leaps and bounds very recently, a number of impressive developments have been around for some time (Artificial Intelligence: a timeline with key highlights). For example, in 1961 the first industrial robot, Unimate, started working on a General Motors assembly line at the company’s New Jersey plant. Then in 1972, Stanford University developed an early expert system called MYCIN to identify bacteria causing severe infections and to recommend antibiotics. AI beat a chess master in 1997, when Deep Blue became the first computer chess-playing program to defeat a reigning world chess champion. And right up to the present day, robots now deliver food in Berkeley, California and in Milton Keynes here in the UK.

The impact of AI at work

Given the great recent advances in the field, how should we think about the near-term impact on the workplace? The World Economic Forum’s Future of Jobs Report offers some interesting observations from its wide-reaching survey and suggests that the future isn’t all gloomy. For example, while most companies expect that automation will lead to a reduction in the full-time workforce by 2022, 38% of businesses surveyed actually expect to expand their workforce into new productivity-enhancing roles, and more than 25% expect automation to lead to the creation of completely new enterprise roles. By 2022, emerging professions are set to experience 11% growth in the total employee base, whereas the employment share of declining roles is set to decrease by 10%. And the bulk of employment across industries - about half of today’s core jobs - is expected to remain stable in the period up to 2022. While this may be less negative than you may have anticipated (thanks to media coverage, as we will see later), we should still anticipate dramatic shifts as the AI landscape continues to evolve.


What types of jobs are likely to benefit and what skills will employers value most? According to the World Economic Forum’s research, we’re likely to see increasing demand for roles along the lines of Data Analysts and Scientists, Software and Applications Developers, and Ecommerce and Social Media Specialists. In conjunction, roles requiring distinctively ‘human’ skills - such as Customer Service Workers, Sales and Marketing Professionals, Training and Development, People and Culture, and Organizational Development Specialists, as well as Innovation Managers - will be valued in order to facilitate the change associated with these advances in technology. And while it could be argued that some of these roles will themselves be AI-driven (think chatbots), the survey results also point towards accelerating demand for completely new specialist roles focused on understanding and leveraging the latest emerging technologies: AI and Machine Learning Specialists, Big Data Specialists, Process Automation Experts, Information Security Analysts, User Experience and Human-Machine Interaction Designers, Robotics Engineers, and Blockchain Specialists.


Which skills will employers be seeking?


And for those uninterested in a purely tech-focused role, here’s a glimpse into the skills tipped to be most sought after in a mere three years.


Table 1: Comparison of the top ten skills in demand, 2018 vs. 2022

Rank   Today, 2018                                Trending, 2022
1      Analytical thinking and innovation         Analytical thinking and innovation
2      Complex problem-solving                    Active learning and learning strategies
3      Critical thinking and analysis             Creativity, originality and initiative
4      Active learning and learning strategies    Technology design and programming
5      Creativity, originality and initiative     Critical thinking and analysis
6      Attention to detail, trustworthiness       Complex problem-solving
7      Emotional intelligence                     Leadership and social influence
8      Reasoning, problem-solving and ideation    Emotional intelligence
9      Leadership and social influence            Reasoning, problem-solving and ideation
10     Coordination and time management           Systems analysis and evaluation

For most workers, up-skilling will be crucial to navigating the new workplace landscape. And for all workers, there will be “an unquestionable need to take personal responsibility for one’s own lifelong learning and career development”. Many individuals will require support from governments and employers through periods of job transition, retraining and up-skilling - and both will be looking for the right formula to encourage individuals to voluntarily undergo periodic skills upgrading.

How can we respond to these changes?

But forget about the future - what about right now? Paul Armstrong’s book Disruptive Technologies: Understand, Evaluate, Respond is a great place to begin if you’re keen to grapple with the impact of new technologies in your own workplace. Armstrong outlines the steps you can take to engage with emerging technologies today in order to serve the consumer of tomorrow. It is a practical book offering a distinct response to emerging technologies - including blockchain (Bitcoin), artificial intelligence, graphene and nanotechnology, among others - and to external factors such as the sharing economy, mobile penetration, the millennial workforce and ageing populations, all of which impact business, client-service and product models. Armstrong provides a clear roadmap for assessing, responding and problem-solving: what are the upcoming changes in technology, when is the right time to respond to those changes, and what is the best response?

“AI’s future has been hyped since its inception,” says Margaret Boden. We should always be mindful that the media has a tremendous amount of power to shape our most basic views on the topic. Consider these recent headlines:

  • ‘The AI that can tell you when you’ll DIE...’ (MailOnline, 23 February 2018)
  • ‘DeepMind has trained an AI to unlock the mysteries of your brain’ (Wired UK, 9 May 2018)

The Oxford Martin School and the Reuters Institute recently conducted an analysis of UK media coverage of AI. They determined that news coverage is significantly biased, with clear political leanings. Their findings may align with your own observations - but here’s the proof: left-leaning outlets tend to highlight issues of ethics such as discrimination, algorithmic bias and privacy, while right-leaning outlets draw attention to economics and geopolitics, including automation, national security and investment. In addition, content is overwhelmingly drawn from industry sources and CEOs amplifying their own self-interest. To make matters worse, serious cuts to journalism budgets have resulted in an overwhelming reliance on basic press releases for day-to-day science and technology news stories; some outlets have even eliminated their science and/or technology desks entirely. To present a more balanced view of the AI landscape, greater input is needed from scientists, activists and others who can offer alternative and independent views. The bottom line regarding the media: to be well informed and to differentiate between what is possible and what is aspirational, seek out varied sources of information rather than relying on newspapers alone - for example, New Scientist magazine or academic centres such as the Alan Turing Institute.

In the midst of tremendous change, some basic truths remain – the human characteristics of creativity, persuasion, adaptability, critical thinking and collaboration underpin our positive steps forward. Only time will tell whether AI will help, or hinder, these very human traits and aspirations.  For more, head over to the Barbican to experience their current exhibition “AI: More than Human”.

Paula Kienert, CWN Events Committee Chair and Executive Director, Fidelity Investments

References