What Is AI: Types, History, and Future

8 June 2023
Reading: 8 min

Artificial Intelligence (AI) is a field of computer science that focuses on creating computer systems capable of performing tasks that typically require human intelligence. Its importance in today’s world lies in its ability to automate processes, make informed decisions, entertain people, improve efficiency, and drive innovation across various industries and domains.

History of artificial intelligence

Of course, the advent of artificial intelligence would not have been possible without computers. But computers have taken very different forms. Originally, the term computer referred to anyone who did calculations, human beings included. It derives from the Latin verb ‘computare’, “to count, reckon, settle accounts”:

  • Latin ‘com’ = “with, together”
  • Latin ‘putare’ = “to settle, clear up, reckon”

Hardware milestones

Conceptually, the first computer emerged in the 19th century, thanks to Charles Babbage, an English mathematician.

  • In 1822, he proposed a machine for computing mathematical tables, the difference engine
  • In 1823, he began building the difference engine, his first computer, but never finished it
  • In 1834, Babbage designed the analytical engine, a programmable mechanical computer that could perform complex calculations and introduced the concepts of a CPU and memory

Babbage’s designs were mechanical, meant to be driven by a steam engine. Of course, fully electric computers that don’t puff out steam were next in line.

  • In 1942, the Atanasoff-Berry Computer (ABC), an electronic computer that used binary (yes/no) representation and electronic circuits for its calculations, was completed
  • In 1946, ENIAC (Electronic Numerical Integrator and Computer), the first general-purpose electronic computer, was introduced
  • In 1949, EDSAC (Electronic Delay Storage Automatic Calculator), the first practical stored-program computer, was created, keeping both programs and data in its delay-line memory so that tasks could be changed without rewiring the machine
  • In 1956, the IBM 305 RAMAC (Random Access Method of Accounting and Control) shipped with the first commercial hard disk drive, introducing the concept of random-access storage
  • In 1968, Douglas Engelbart demonstrated a prototype of the modern computer featuring a mouse, a graphical user interface, and other elements now integral to personal computers

Those are the key hardware milestones in computer development, but artificial intelligence, of course, has a software side too.

Software milestones

Before EDSAC’s arrival in 1949, computers could only execute tasks; they could not store programs and intermediate results. Once stored-program memory was in place, the feedback loop behind machine learning became possible: execute a task > get the data > optimize the execution.
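
To make that loop concrete, here is a minimal sketch in Python with a made-up numeric task; the target value and the 0.5 step size are arbitrary assumptions for illustration, not anything the early machines actually ran.

```python
# Toy feedback loop: execute a task > get the data > optimize the execution.
target = 73          # the unknown value the machine is trying to reach
guess = 0.0          # the current "execution" parameter
history = []         # data gathered from each run

for step in range(20):
    error = target - guess   # execute the task and measure the outcome
    history.append(error)    # get the data
    guess += 0.5 * error     # optimize the next execution

print(round(guess, 2), "after", len(history), "runs")  # converges toward 73
```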

  • In 1956, the Dartmouth Conference was held, widely considered the founding event of artificial intelligence as a discipline
  • In 1965, DENDRAL, an expert system specializing in organic chemistry, was created to mirror human reasoning and provide expert-level answers
  • In the late 1970s and early 1980s, artificial intelligence experienced a setback, the so-called first AI winter, due to challenges in programming and maintaining expert systems
  • In 1997, Deep Blue, IBM’s chess computer, beat Garry Kasparov, the reigning world champion, in a chess match (hopefully, no computer will ever beat a human at Go)
  • The same year, Dragon Systems introduced speech recognition software that ran on Windows
  • In 2011, IBM’s Watson, a question-answering system, beat Brad Rutter and Ken Jennings, two of the show’s most successful champions, in a game of Jeopardy! (We still have Go)
  • In 2012, Google’s neural network learned to recognize cats in YouTube videos
  • In 2014, Eugene Goostman, a chatbot, was reported to have passed the formerly irreproachable Turing test, used to tell artificial intelligence apart from human intelligence
  • In 2017, Google DeepMind’s AlphaGo beat Ke Jie, the world’s top-ranked player, in a game of Go (We still have… oh wait)

Artificial intelligence tends to evolve in bursts, with gigantic leaps followed by periods of stagnation. People have claimed many times that no artificial intelligence could ever replace humans in a given field: decision-making, gaming, translation, design. Time and again, machine learning has pushed past those limits; the Midjourney bot, for instance, is more than capable of producing impressive artwork.

Types of artificial intelligence

Artificial intelligence comes in different forms, and machine learning is not always part of the picture; sometimes even data gathering is omitted. Below is a classification of the most common types of artificial intelligence.

  1. Purely Reactive artificial intelligence: these machines specialize in one field and have no memory, layers, or stored data to work with. They make decisions based solely on the current situation, without considering context or past experience. A good example is a chess-playing machine that observes the board and makes the best move it can find.
  2. Limited Memory artificial intelligence: these machines can collect and store previous data in their memory. They make decisions based on that limited memory, i.e., on the experience they have accumulated. For instance, a machine can suggest a restaurant based on location data it has gathered, or a game AI can adapt to the players.
  3. Theory of Mind artificial intelligence: this type of artificial intelligence is yet to be built, but it is characterized by machines that can understand thoughts and emotions and interact socially. They would have the ability to perceive and respond to human-like behavior. One prototype that comes to mind is Kismet.
  4. Self-Aware artificial intelligence: self-aware machines represent the future generation of artificial intelligence. They are envisioned as intelligent, sentient, and conscious entities with a self-awareness similar to that of human beings.

For a machine to be truly intelligent, it has to go beyond simply collecting layers of data. After all, human intelligence can find patterns in a dataset and reason its way to conclusions without conducting physical experiments. When a machine learns from data without being explicitly programmed, that is machine learning. It is the foundation of modern artificial intelligence, at least in the mainstream discourse.
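
As a rough illustration, here is a minimal sketch in Python of learning a rule from data rather than hand-coding it; the apartment sizes, rents, and the single-variable least-squares model are all made up for the example.

```python
# Instead of hard-coding a pricing rule, estimate it from examples
# with one-variable least squares (toy data, illustration only).
sizes = [30, 45, 60, 75, 90]          # apartment size in square meters
rents = [620, 890, 1180, 1450, 1700]  # observed rent in dollars

n = len(sizes)
mean_x = sum(sizes) / n
mean_y = sum(rents) / n

# closed-form least-squares slope and intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, rents)) \
        / sum((x - mean_x) ** 2 for x in sizes)
intercept = mean_y - slope * mean_x

def predict(size):
    """The 'learned' rule: rent estimated from size."""
    return intercept + slope * size

print(f"predicted rent for 70 m^2: {predict(70):.0f}")
```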

The next step is deep learning. It’s when the machine can make sense of data despite noise and other sources of confusion. Deep learning involves multiple layers of artificial neural networks: an input layer, hidden layers, and an output layer. Each layer performs specific computations or feature extraction on its inputs, and the output layer provides the final result.
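
To give a feel for what those layers look like in code, here is a minimal sketch in Python of a forward pass through a tiny network; the layer sizes, random weights, and sigmoid activation are arbitrary choices, and training (e.g. backpropagation) is omitted to keep it short.

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid squashes to (0, 1)
    return outputs

# 3 input features -> 4 hidden units -> 2 output scores
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b_hidden = [0.0] * 4
w_out = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b_out = [0.0] * 2

x = [0.2, 0.7, 0.1]                # input layer: raw features
h = layer(x, w_hidden, b_hidden)   # hidden layer: extracted features
y = layer(h, w_out, b_out)         # output layer: final scores
print(y)
```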

There are many emerging trends in artificial intelligence, and describing them all deserves a separate article, which is why we will simply name a few for context:

  • Reinforcement Learning
  • Generative Adversarial Networks (GANs)
  • Transfer Learning
  • Explainable artificial intelligence (XAI)
  • Federated Learning
  • Quantum Machine Learning
  • AI Ethics and Responsible AI

Pros and cons of artificial intelligence

All the usual disadvantages of artificial intelligence boil down to ethics, job displacement, data dependency, the myth of uniquely human creativity, and the lack of human supervision. Of course, people are afraid of becoming obsolete and inferior.

Proper artificial intelligence, with its complex layers of data, needs little human supervision: it makes data-driven decisions, is less prone to individual human bias, and, of course, relies on machine learning to keep accumulating data. The lack of creativity is a poor excuse, considering that some humans can’t draw at all, while machine learning never stops for a second. Throughout history, people have insisted that humans are irreplaceable in labor, planning, gaming, creativity, and philosophy, yet computers have repeatedly beaten humans in these fields thanks to machine learning. As for ethics and job displacement, the ethics debate is itself biased toward humans, and job displacement is a logical consequence of economics: get more for less.

As for the advantages of artificial intelligence, here is an incomplete list:

  1. Efficiency and automation — thanks to its complex data layers, artificial intelligence can handle more tasks, and handle them faster, as long as they are standardized.
  2. Decision-making — artificial intelligence and computers can minimize the risk of human errors and bias, thanks to complex data layers that account for a myriad of factors.
  3. Problem-solving — over the course of history, artificial intelligence has processed more data than any human brain could, letting it take on problems at a scale people simply cannot.
  4. Repetitive and dangerous tasks — of course, artificial intelligence is better suited for repetitive and hazardous work, thanks to its fast thinking and machine learning.
  5. Personalization — artificial intelligence is often better than humans at understanding what a person wants. That’s where improved user satisfaction and engagement come from.
  6. Predictive analytics — artificial intelligence algorithms and layers can analyze historical data and make predictions to choose the best course of action.
  7. Entertainment — modern computer games are impossible without artificial intelligence. In fact, game AI is deliberately handicapped; otherwise, players would have no fun in a game rigged against them from the start.

Conclusion

Artificial intelligence (AI) is a transformative field that automates processes, improves efficiency, and drives innovation across industries. It has evolved through hardware and software advancements, including machine learning and deep learning. Commonly, these machines rely on complex data layers to perform tedious, repetitive, or otherwise impossible tasks to help human beings in their endeavors.
