Artificial Intelligence (AI) by Definition
Artificial intelligence is the set of theories and techniques used to build machines capable of simulating intelligence. It therefore corresponds to a set of concepts and technologies more than to a fully constituted, autonomous discipline.
Artificial Intelligence (AI)
The term “artificial intelligence”, coined by John McCarthy, is often abbreviated as “AI”. It is defined by one of its creators, Marvin Lee Minsky, as “the construction of computer programs that perform tasks that are, for the moment, performed more satisfactorily by human beings because they require high-level mental processes such as perceptual learning, memory organization, and critical reasoning”. There is, therefore, the “artificial” side, achieved through the use of computers or elaborate electronic processes, and the “intelligence” side, associated with the goal of imitating intelligent behavior. This imitation can take place in reasoning (for example in games or the practice of mathematics), in the understanding of natural languages, in perception, whether visual (interpretation of images and scenes), auditory (understanding of spoken language), or through other sensors, and in the control of a robot in an unknown or hostile environment.
Although they are broadly consistent with Minsky’s definition, the various definitions of AI differ on two fundamental points:
- definitions that link AI to a human aspect of intelligence, and those that link it to an ideal model of intelligence, not necessarily human, called rationality;
- definitions that insist that AI should have all the appearances of intelligence (human or rational), and those that insist that the inner workings of the AI system must also resemble those of a human being and be at least as rational.
History of AI
Historically, the idea of artificial intelligence emerged in the 1950s, when Alan Turing asked whether a machine could “think”. In his article “Computing Machinery and Intelligence” (Mind, October 1950), Turing explored this problem and proposed an experiment (now called the Turing test) aimed at determining at what point a machine could be said to be “conscious”. He then developed this idea in several forums: in the lecture “The Intelligence of the Machine, a Heretical Idea”, in the talk he gave on the BBC Third Programme on May 15, 1951, “Can Digital Computers Think?”, and in a discussion with M.H.A. Newman, Sir Geoffrey Jefferson, and R.B. Braithwaite in January 1952 on the theme “Can Computers Think?”.
Another probable source is a memorandum on automatic machine translation published by Warren Weaver in 1949, which suggested that a machine could perform a task normally requiring human intelligence.
The development of computing technology (computing power) then led to several advances:
- in the 1980s, machine learning developed: the computer began to deduce “rules to follow” simply by analyzing data (see the decision-tree sketch after this list);
- at the same time, “learning” algorithms were created that prefigured future neural networks, reinforcement learning, support vector machines, and so on; this is what allowed, for example, the Deep Blue computer to beat Garry Kasparov at chess in May 1997;
- artificial intelligence became a field of international research, marked by a conference at Dartmouth College in the summer of 1956 attended by those who would go on to shape the discipline;
- since the 1980s, research has mainly been conducted in the United States, notably at Stanford University under the leadership of John McCarthy, at MIT under Marvin Minsky, at Carnegie Mellon University under Allen Newell and Herbert Simon, and at the University of Edinburgh under Donald Michie, and later in Europe and China. In France, one of the pioneers was Jacques Pitrat;
- in the 2000s, Web 2.0, big data, and new computing power and infrastructure allowed some computers to explore unprecedented masses of data; this is deep learning;
- the boundaries of the field vary: optimizing a route, for example, was considered an artificial intelligence problem in the 1950s, whereas today it is regarded as merely a simple algorithmic problem (a short sketch of this follows below).
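To illustrate the last point, here is a minimal sketch of route optimization treated as a plain algorithmic problem, using Dijkstra’s shortest-path algorithm; the road network, place names, and travel times below are invented purely for the example.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: return (total_cost, path) for the cheapest route."""
    # Priority queue of (cost so far, current node, path taken so far)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# A small invented road network: travel times in minutes between places.
road_network = {
    "A": {"B": 7, "C": 3},
    "B": {"D": 2},
    "C": {"B": 1, "D": 8},
    "D": {},
}

print(shortest_route(road_network, "A", "D"))  # -> (6, ['A', 'C', 'B', 'D'])
```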
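And for the bullet on machine learning above, a minimal sketch of a computer “deducing rules to follow” from data alone: it assumes the scikit-learn library is available, and the tiny weather dataset is invented for illustration only.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy, invented dataset: [temperature in °C, raining (0/1)] -> play outside? (0 = no, 1 = yes)
X = [[30, 0], [25, 0], [22, 1], [10, 0], [5, 1], [28, 1], [15, 0], [8, 0]]
y = [1, 1, 0, 0, 0, 0, 1, 0]

# The model infers decision rules purely from the examples above.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned rules as human-readable if/else conditions.
print(export_text(tree, feature_names=["temperature", "raining"]))
```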
Around 2015, the artificial intelligence sector was addressing three challenges: perception of the environment, understanding of a situation, and decision-making by an AI. Producing and organizing massive, high-quality data, that is, data that is correlated, complete, qualified (sourced, dated, georeferenced …), and historized, is another challenge. And a computer’s ability to deduce and generalize appropriately from little data or from a small number of events is another, more distant goal.
Between 2010 and 2017, investment is said to have increased tenfold, exceeding 5 billion euros in 2017.
Translated from source: fr.wikipedia.org/wiki/Intelligence_artificielle
Google AI – Artificial Intelligence at Google
At Google AI, they are conducting research that advances the state of the art in the field, applying AI to products and to new domains, and developing tools to ensure that everyone can access AI.
Google’s mission is to organize the world’s information and make it universally accessible and useful. AI is helping them do that in exciting new ways, solving problems for their users, their customers, and the world.
AI is making it easier for people to do things every day, whether it’s searching for photos of loved ones, breaking down language barriers in Google Translate, typing emails on the go, or getting things done with the Google Assistant. AI also provides new ways of looking at existing problems, from rethinking healthcare to advancing scientific discovery.