Encapsulating the Entire Evolution of Artificial Intelligence

The idea of Artificial Intelligence was first recorded centuries ago, in the antiquity of ancient Greek mythology. When Aristotle developed the syllogism for deductive reasoning, it was one of humanity’s first attempts to understand its own intellect. But AI as we know it now has a brief history, albeit one marked by significant events that took the technology from humble beginnings to an agent that is transforming the way we imagine our world.

Although modern AI saw the light of day circa 1943, one of the most crucial breakthroughs in Artificial Intelligence was the development of the Bombe machine by Alan Turing, a British scientist and researcher. The machine successfully cracked German messages encrypted by the infamous Enigma machine during World War Two. This development was one of the most important factors in helping the Allied forces win the war against the might of Germany, shaping the future of the world.

This was also the beginning of Machine Learning, one of the most widely used domains of Artificial Intelligence today. Turing went on to propose that a machine could be considered ‘intelligent’ if it could converse with human beings without them realizing they were talking to a machine. His legacy is honored by the Association for Computing Machinery (ACM), which presents the Turing Award in his name. This year Yoshua Bengio, Geoffrey Hinton, and Yann LeCun received the prestigious award for their innovations in Deep Learning, another crucial aspect of AI.

Below is a summary of the important developments in Artificial Intelligence, year by year.

1943

Warren McCulloch and Walter Pitts publish the paper ‘A Logical Calculus of the Ideas Immanent in Nervous Activity.’ The paper proposed the first mathematical model of a neural network, built from simple threshold units.
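
To make the concept concrete, below is a minimal sketch of such a threshold unit in Python. It is purely illustrative; the function name and the logic-gate example are our own assumptions, not notation taken from the 1943 paper.

    # Illustrative McCulloch-Pitts style neuron: it fires (returns 1) only when
    # the weighted sum of its binary inputs reaches a fixed threshold.
    def mcculloch_pitts_neuron(inputs, weights, threshold):
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0

    # Example: with a threshold of 2, the unit behaves like a logical AND gate.
    print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # -> 1
    print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # -> 0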

1949

Donald Hebb writes the book ‘The Organization of Behavior: A Neuropsychological Theory,’ proposing that neural pathways are created and strengthened by experience. The crux of the book is that the strength of the connection between two neurons grows in direct proportion to how frequently they are activated together. This proposition continues to be a key model in developing AI.
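
As an illustration of that principle, often summarized as ‘cells that fire together, wire together,’ here is a toy Python sketch of a Hebbian weight update. The function name, learning rate, and activity values are assumptions chosen for demonstration, not anything taken from Hebb’s book.

    # Toy Hebbian rule: the connection grows in proportion to how often the two
    # units are active at the same time.
    def hebbian_update(weight, pre_activity, post_activity, learning_rate=0.1):
        return weight + learning_rate * pre_activity * post_activity

    weight = 0.0
    for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]:
        weight = hebbian_update(weight, pre, post)
    print(weight)  # ~0.3 after three co-activations; the other pairings add nothing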

1950

1950 was a crucial year in the life of AI, with four major developments that advanced its capabilities manifold. These were –

  • Coming back to Alan Turing, this was the year he published ‘Computing Machinery and Intelligence.’ The ideas of that paper live on today as ‘the Turing Test,’ one of the basic tests applied to computers to gauge their intelligence.
  • Marvin Minsky and Dean Edmonds, two Harvard undergraduates, developed SNARC (Stochastic Neural Analog Reinforcement Calculator), the world’s first neural network computer.
  • ‘Programming a Computer for Playing Chess’ was published by Claude Shannon. The title is self-explanatory: the intent was to program a computer so that it could play chess against a human being (which IBM’s Deep Blue went on to do decades later).
  • Isaac Asimov publishes his ‘Three Laws of Robotics.’ The work is often credited with inspiring what we now know as Robotic Process Automation (RPA).

1952

The world could now play checkers against a computer, thanks to Arthur Samuel developing the first checkers-playing program.

1954

The Georgetown-IBM experiment in Machine Translation automatically translated 60 carefully selected Russian sentences into English.

1956

Artificial Intelligence is officially coined as a term at the ‘Dartmouth Summer Research Project on Artificial Intelligence.’ John McCarthy was one of the key figures of the conference, which defined the vision of Artificial Intelligence for the future.

1959

Another critical year for AI, just like 1950, this one saw four significant developments –

  1. Allen Newell, Herbert Simon, and J.C. Shaw develop a program designed to replicate the problem-solving ability of human beings, naming it the General Problem Solver (GPS).
  2. Herbert Gelernter develops the Geometry Theorem Prover program.
  3. Machine Learning gets its name from Arthur Samuel, then at IBM.
  4. MIT Artificial Intelligence Project is founded by John McCarthy and Marvin Minsky.

1963

John McCarthy becomes a vanguard of Artificial Intelligence by founding the AI Lab at Stanford University.

1966

The Automatic Language Processing Advisory Committee (ALPAC) publishes a report conveying the U.S. Government’s dissatisfaction with the progress of Machine Translation technology. The Cold War with the U.S.S.R. was looming large, and the United States badly needed a program that could efficiently translate large amounts of Russian communication into English. The report prompted the Government to cancel its funding of Machine Translation projects.

1969

The year saw AI’s initial penetration into diagnostics and lab testing. Expert systems such as DENDRAL (for identifying chemical compounds) and MYCIN (for diagnosing blood infections) were developed.

1972

PROLOG, one of the earliest logic programming languages and a mainstay of early AI research, was created in 1972.

1973

What the U.S. did in 1966, England did in 1973. The Lighthill Report, detailing the disappointments of AI research, is published by the British Government. The report was a setback for AI in Britain, as AI research faced severe cutbacks in government funding.

1974-1980

AI progress was perceived to be at a standstill by authoritative agencies such as DARPA (Defense Advanced Research Projects Agency). Combined with the earlier ALPAC report from 1966 and the Lighthill Report from 1973, AI funding had almost dried up, bringing AI research to a near stop. This period is known as ‘the First AI Winter.’

1980

Digital Equipment Corporation, a now-defunct computing giant of yesteryear, developed a highly successful commercial expert system that the company named R1 (also known as XCON). Developed to configure orders for new computer systems, R1 officially ended the ‘AI Winter’ and attracted a huge investment boom in expert systems that sustained AI for the next decade.

1982

Japan’s Ministry of International Trade and Industry announced the launch of the extremely ambitious Fifth Generation Computer Systems project (FGCS). The main motive behind the initiative was to build supercomputers and a functional, robust platform for developing Artificial Intelligence.

1983

The United States counters Japan’s FGCS by launching its own computing endeavor, funded by DARPA, to build supercomputers and a functional, robust platform for developing Artificial Intelligence.

1985

Enterprises spend a billion dollars on expert systems, and an entirely new industry, the Lisp machine market, emerges to support the expert systems business. Companies such as Symbolics and Lisp Machines Inc. build specialized computers designed to run Lisp, the preferred language for AI development.

1987-1993

The period saw ‘AI’s Second Winter’:

  • As the technology evolved, the Lisp machine market collapsed in 1987 due to the easy availability of cheaper alternatives. Expert systems also became obsolete, as they were expensive to maintain and update.
  • Japan aborts its FGCS project in 1992, having failed to achieve the AI-related goals it set at the project’s launch.
  • The U.S. follows Japan for similar reasons, after spending $1 billion.

However, private corporations, rather than governments, began to show renewed interest in Artificial Intelligence, eventually ending AI’s Second Winter.

1997

In one of the most provocative events in the history of computing, IBM’s Deep Blue defeats the reigning world chess champion, Garry Kasparov.

2005

The self-driving car Stanley wins DARPA’s Grand Challenge. The U.S. military starts investing in autonomous robots such as ‘BigDog’ and ‘PackBot.’

2008

Google makes a breakthrough in speech recognition and introduces the feature in its iPhone app.

2011

IBM repeats its 1997 triumph, this time with Watson winning Jeopardy! against human champions.

2012

Andrew Ng, founder of the Google Brain Deep Learning project, feeds 10 million images taken from YouTube videos into a neural network using deep learning code. The network learned to recognize a cat without ever being explicitly told what a cat is. This is hailed as one of the biggest breakthroughs for neural networks, and it ushered in millions of dollars in funding for Deep Learning.

2014

Google’s self-driving car passes the driving test.

2016

Lee Sedol, world champion in the ancient Chinese game of Go, is defeated by Google DeepMind’s AlphaGo. Given how complex the game is, this was a major hurdle cleared and another significant breakthrough for AI.

2017

NVIDIA researchers use generative adversarial networks (GANs) to create artificial human faces that are nearly indistinguishable from real ones.

2018

Researchers at the prestigious University of Cambridge in the UK may just have discovered how life is made. The team grew mouse embryos using only stem cells, with no eggs or sperm involved; the researchers developed the embryos from cells plucked from another embryo.

This was our account of the history of AI. The technology has seen its ups and downs, but it survives and is now growing at lightning speed.

What’s next in AI has perhaps become the most intriguing question for enthusiasts and enterprises alike.
