History of AI

Can robots think?


In the first half of the 20th century, science fiction popularized the idea of artificially intelligent robots. It began with the "heartless" Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, a generation of scientists, mathematicians, and philosophers had culturally assimilated the concept of artificial intelligence, or AI. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing reasoned that humans use available information as well as reason to solve problems and make decisions, so why can't machines do the same? This was the logical framework of his 1950 paper, "Computing Machinery and Intelligence," in which he discussed how to build intelligent machines and how to test their intelligence.



Making the Pursuit Possible

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949, computers lacked a key prerequisite for intelligence: they could execute commands, but they couldn't store them. In other words, computers could be told what to do but couldn't remember what they did. Second, computing was extremely expensive. In the early 1950s, leasing a computer could cost up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept, as well as advocacy from high-profile people, was needed to persuade funding sources that machine intelligence was worth pursuing.


The Conference That Started It All

Five years later, the proof of concept arrived in the form of Allen Newell, Cliff Shaw, and Herbert Simon's Logic Theorist, a program funded by the Research and Development (RAND) Corporation and designed to mimic the problem-solving skills of a human. Considered by many to be the first artificial intelligence program, it was presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence, the term he coined at the very event. Unfortunately, the conference fell short of McCarthy's expectations: people came and went as they pleased, and no consensus was reached on standard methods for the field. Despite this, everyone wholeheartedly agreed that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.


From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon's General Problem Solver and Joseph Weizenbaum's ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, along with the advocacy of leading researchers (namely the attendees of the DSRPAI), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language as well as process data at high throughput. Optimism was high, and expectations were even higher. In 1970, Marvin Minsky told Life Magazine, "from three to eight years we will have a machine with the general intelligence of an average human being." Yet while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.


[Figure: SITN timeline of AI milestones]

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn't store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy's at the time, stated that "computers were still millions of times too weak to exhibit intelligence." As patience dwindled, so did the funding, and research slowed to a crawl for ten years.


Two factors rekindled AI in the 1980s: an increase in funding and an expansion of the algorithmic toolkit. John Hopfield and David Rumelhart popularized "deep learning" techniques, which allowed computers to learn from experience. Separately, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask a human expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from the program. Expert systems were widely used in industry. The Japanese government heavily funded expert systems and other AI-related endeavors as part of their Fifth Generation Computer Project (FGCP). From 1982 to 1990, they invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding for the FGCP ceased, and AI fell out of the limelight.
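To make the expert-system workflow described above concrete, here is a minimal sketch in Python: answers elicited from a human expert are stored as condition-to-advice rules, which non-experts can then query. The rules, condition names, and `advise` helper are hypothetical illustrations invented for this sketch, not code from Feigenbaum's or any other historical system.

```python
# Minimal sketch of a rule-based expert system: expert knowledge is
# encoded as condition -> advice rules, queried by non-experts.
# All rules below are hypothetical examples.

RULES = {
    ("fever", "rash"): "Consider measles; refer to a physician.",
    ("fever",): "Monitor temperature and rest.",
    ("rash",): "Apply topical treatment; watch for spreading.",
}

def advise(symptoms: set) -> str:
    """Return the advice whose conditions best match the given symptoms."""
    # Keep only rules whose entire condition set is satisfied.
    matches = [cond for cond in RULES if set(cond) <= symptoms]
    if not matches:
        return "No rule applies; consult the human expert."
    # Prefer the most specific rule (the largest matching condition set).
    best = max(matches, key=len)
    return RULES[best]

print(advise({"fever", "rash"}))  # -> "Consider measles; refer to a physician."
```

Real expert systems used far larger rule bases and dedicated inference engines, but the core pattern of matching a situation against encoded expert rules is similar in spirit.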


Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer, and it served as a huge step toward artificially intelligent decision-making programs. In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another great stride, this time toward the spoken language interpretation endeavor. It seemed there wasn't a problem machines couldn't handle. Even human emotion was fair game, as demonstrated by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.


Time Heals All Wounds

So what has changed in our approach to coding artificial intelligence? It turns out that the fundamental storage limitation that held computers back 30 years ago is no longer an issue. Moore's Law, the observation that the memory and speed of computers roughly double every two years, has finally caught up with and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google's AlphaGo was able to defeat Chinese Go champion Ke Jie in 2017. It offers a partial explanation for the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore's Law to catch up again.
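As a rough back-of-the-envelope illustration of that catch-up dynamic, the sketch below models capacity as doubling at a fixed interval. The base capacity of 1.0 and the two-year doubling period are illustrative assumptions for this sketch, not figures from the article.

```python
# Minimal sketch of Moore's Law-style exponential growth.
# The base capacity and two-year doubling period are illustrative
# assumptions, not measurements.

def capacity_after(years: float, base: float = 1.0,
                   doubling_period: float = 2.0) -> float:
    """Capacity after `years`, doubling once every `doubling_period` years."""
    return base * 2 ** (years / doubling_period)

# Over the 20 years between Deep Blue (1997) and AlphaGo's win over
# Ke Jie (2017), a two-year doubling implies roughly a 1000x increase:
print(capacity_after(20))  # 1024.0
```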
