In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin Man from the Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis.
Turing suggested that humans use available information, as well as reason, to solve problems and make decisions, so why couldn’t machines do the same? This was the logical framework of his 1950 paper, “Computing Machinery and Intelligence.” In this blog, you’ll learn about the history of Artificial Intelligence and the stages of its evolution.
Making the Pursuit Possible:
First, computers needed to fundamentally change. Before 1949, computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, a computer could be told what to do but couldn’t remember what it did.
Second, computers were very expensive, and only prestigious universities and big technology companies could afford them.
The Conference that Started AI:
After much struggle, a proof of concept arrived in the form of Allen Newell and Herbert Simon’s Logic Theorist, a program designed to mimic the problem-solving skills of a human. It is considered by many to be the first artificial intelligence program, and it was presented at the 1956 Dartmouth Summer Research Project on Artificial Intelligence, hosted by John McCarthy, one of the founders of the field.
At the time, no one was entirely pleased with the new technology. Despite this, everyone whole-heartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated: it catalyzed the next twenty years of AI research.
Roller Coaster of Success:
From 1957 to 1974, Artificial Intelligence flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to which problem. Early demonstrations showed promise toward the goals of problem-solving and the interpretation of spoken language.
These successes, along with the advocacy of leading researchers, convinced government agencies to fund AI research at several institutions. However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.
Time Heals all Wounds:
We hadn’t gotten any smarter about how artificial intelligence could be coded; what changed was hardware. The fundamental limit of computer storage that had been holding us back 30 years earlier was no longer a problem. Moore’s law, which estimates that the memory and speed of computers double roughly every two years, had finally caught up with, and in many cases surpassed, our needs.
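The doubling described above compounds quickly, which is why three decades of hardware progress changed what was feasible. A minimal sketch of that arithmetic, with an illustrative two-year doubling period and a normalized starting capacity (both assumptions, not figures from this article):

```python
def projected_capacity(base: float, years: float, doubling_period: float = 2.0) -> float:
    """Capacity after `years`, assuming it doubles every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

# Starting from 1 unit of capacity, 30 years of doubling every two years
# means 15 doublings: 2**15 = 32768 times the original capacity.
print(projected_capacity(1, 30))
```

Even from a tiny baseline, exponential growth like this quickly overtakes any fixed requirement, which is the sense in which hardware “caught up” with AI’s needs.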
AI is Everywhere:
The present age is all about “Big Data,” an age in which we have the capacity to collect sums of information too cumbersome for a person to process. The application of Artificial Intelligence in this regard has already been quite fruitful across several industries.
There may be evidence that Moore’s law is slowing down a tad, but the growth of data certainly hasn’t lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience could all serve as potential routes through the ceiling of Moore’s law.
So, what is in store for the future? In the immediate future, AI language understanding looks like the next big thing. In fact, it’s already underway: when was the last time you called a company and spoke directly with a human? One can imagine interacting with an expert system in a fluid conversation. For now, though, we’ll watch AI steadily improve and work its way into everyday life.