How do we translate common sense into rules for teaching machines?

The future of humanity is largely linked to how we manage two fundamental resources: energy and information. The advance of information technology carries uncertainties comparable to those of the climate crisis, in both its opportunities and its threats.

The evolution of computing resources will help solve many problems and open new possibilities, and that progress will be driven by algorithms and the advancement of artificial intelligence.

Artificial intelligence (AI) today is used mainly to analyze large data sets (big data), recognize images and speech, translate languages, and solve complex problems, chiefly in finance, technology, and games. All of this rests on what we call machine learning: networks of interconnected logical processing units that are taught, or trained, to detect patterns in the available data.

During the training or learning process, small adjustments are made to the connections between units in the network until the system produces the correct result. After thousands or even millions of these iterations, it begins to give correct, reliable answers for inputs it has never seen. Machine learning is a powerful resource.
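The training loop described above can be sketched with the simplest possible case: a single artificial unit (a perceptron) learning the logical OR pattern from examples. This is a minimal illustration of the idea, not how production systems are built; the data, learning rate, and seed here are arbitrary choices for the sketch.

```python
import random

# Training data: pairs of inputs and the expected output (logical OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(42)
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs, passed through a simple threshold.
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Training: after each wrong answer, make a small adjustment to the
# connection weights, repeated over many iterations.
for _ in range(100):
    for x, expected in data:
        error = expected - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

# After training, the unit classifies all four cases correctly.
print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```

A real network stacks millions of such units and adjusts all of their weights at once, but the principle is the same: repeated small corrections until the outputs match the training examples.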

Language translation algorithms, for example, are among the simplest applications and still far from perfect, but they have come a long way with technological advances and deliver more accurate results each year. Combined with AI speech recognition, these systems already enable real-time translation of spoken language.

But even with all these advances, how do we teach machines common sense?

The problem, some point out, is that we simply don’t know what is going on inside that black box. The principles of machine learning are clear, but the reasoning by which a trained algorithm reaches its conclusions is not. Occasionally these machines produce a peculiar response that a human would hardly think of.

In this respect, it is important to address an issue that many scholars point to as a threat: the realization of what the mathematician Alan Turing predicted. A pioneer of computer science, recognized as the originator of the notion of machine intelligence, Turing argued in his 1950 article “Computing Machinery and Intelligence” that the goal of making a machine that thinks was both feasible and desirable. This, however, would have profound implications for humanity, since it would not take long for machines to overtake humans. And so, he concluded, “we can expect the machines to take over.”

The thing is, AI would need what we call common sense, and we simply don’t know how to express that in formal rules that can be used to program a computer. In other words, AI may be able to do some things better than we can, but it will never think in the same way, or with the same rationality, as humans.

As artificial intelligence evolves and becomes increasingly powerful, we entrust it with important decisions. But can we be sure it will not present solutions that are, from our point of view, immoral, disastrous, or dangerous to human beings?