Defining artificial intelligence (AI) is not easy. The field is so vast that it cannot be restricted to a specific area of research: it is more like a multidisciplinary programme. Originally, it sought to imitate the cognitive processes of human beings. Its current objective is to develop automatons that solve certain problems better than humans do, by whatever means are available.

AI is at the crossroads of several disciplines: computer science, mathematics (logic, optimization, analysis, probability, linear algebra), and cognitive science, not to mention the specialized knowledge of the fields to which we want to apply it. The algorithms that underpin it draw on equally varied approaches: semantic analysis, symbolic representation, statistical and exploratory learning, neural networks, and so on. The recent boom in AI is due to significant advances in machine learning. Learning techniques are revolutionary compared with AI's historical approaches: instead of the machine being programmed with the rules that govern a task (rules often far more complex than one might think), it now discovers them itself from data.
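The shift described above can be sketched with a toy example (hypothetical data and function names): rather than hard-coding the threshold that separates two classes of measurements, the program estimates it from labeled examples.

```python
def learn_threshold(samples):
    """Learn a 1-D decision rule from labeled data: the threshold is
    placed at the midpoint between the two class means, instead of
    being written into the program by hand."""
    xs0 = [x for x, label in samples if label == 0]
    xs1 = [x for x, label in samples if label == 1]
    mean0 = sum(xs0) / len(xs0)
    mean1 = sum(xs1) / len(xs1)
    return (mean0 + mean1) / 2

def predict(threshold, x):
    """Apply the learned rule to a new measurement."""
    return 1 if x >= threshold else 0

# Hypothetical training data: (measurement, class label)
training = [(1.0, 0), (1.2, 0), (0.8, 0), (3.9, 1), (4.1, 1), (4.0, 1)]
t = learn_threshold(training)  # the rule is discovered, not programmed
```

Here `predict(t, 0.9)` returns 0 and `predict(t, 4.2)` returns 1: the decision rule was never written explicitly, only inferred from examples, which is the essence of the learning approach the text contrasts with classical rule-based programming.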

AI is also developing quickly due to the international "datafication" of all sectors (i.e. big data) and the exponential increase in computing power and data storage capacities. Applications are multiplying and directly affecting our daily lives: image recognition, self-driving cars, disease detection, and content recommendation are among the many possibilities being explored. The universal nature of AI and its many variations herald a new revolution, with its share of pitfalls and opportunities.