The news that around 1,000 surveillance cameras connected to real-time facial recognition software have been installed on the streets of Belgrade, Serbia’s capital, has rekindled the alarm. The debate over the boundaries between privacy and security has returned to newspaper headlines and to the parliamentary sessions of the main countries in Europe. This discussion is also of interest to Brazilians.
The pivot of the topic is consent. It is difficult to determine in which situations and for which purposes this information, in this case a person’s image and habits, can be obtained without prior knowledge and authorization. These systems make it possible, for example, to automatically compare a face captured by a camera against the records in a file of suspects and, if they match, trigger an arrest alert.
The adoption of these systems should be accompanied by comprehensive and transparent information. What algorithmic models were used? What behavioral traits and information are being monitored? How is this data recorded? How long is it kept, where, and under whose responsibility?
The General Data Protection Law (GDPL), already in force in some countries and, with great difficulty, in the process of adoption in others, is an attempt to regulate the collection and use of this data in the business context. Even so, the boundaries between what is allowed and what is not are unclear to most people. Defining those boundaries across such varied situations is hard enough; add to that the limitations of the technology itself.
Reproduction of biases and failures in facial recognition
Our faces are as unique as our fingerprints. Facial features include the distance between the pupils, the size of the nose, the shape of the smile, and the contour of the jaw. Computers use photographs to map these features and build a mathematical representation that algorithms can then compare.
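The comparison described above can be sketched in a few lines. In a real system a neural network converts the photograph into a vector of measurements (an "embedding"); here the vectors, the distance threshold, and all the numbers are invented purely for illustration:

```python
import math

# Hypothetical face embeddings. In practice, a neural network maps a face
# photo to a vector encoding traits such as the distance between pupils or
# the shape of the jaw. These values are made up for illustration.
face_on_file = [0.42, 1.10, 0.33, 0.78]     # record in the suspects database
face_from_camera = [0.40, 1.12, 0.35, 0.80]  # same person, new photo
unrelated_face = [0.90, 0.20, 1.40, 0.10]    # a different person

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical threshold: below it, the system declares a match.
THRESHOLD = 0.1

def is_match(a, b):
    return distance(a, b) < THRESHOLD

print(is_match(face_on_file, face_from_camera))  # True: treated as the same person
print(is_match(face_on_file, unrelated_face))    # False: no alert triggered
```

The threshold is where the real-world trade-off lives: set it too loose and innocent passers-by trigger alerts (false positives); set it too tight and genuine matches are missed.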
However, facial recognition systems have flaws that manifest mainly when features are very similar. And, as has been shown, algorithms reproduce the biases of those who program them, of those who define the criteria, and even of the databases on which they learn to “recognize” us.
A US government study analyzed more than 200 facial recognition algorithms and identified higher false-positive rates for Asian and African American faces than for white faces, and even algorithms that assigned the wrong sex to Black women in more than a third of cases.
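The metric behind that finding, the false-positive rate, is simple to compute per demographic group. A minimal sketch, using invented evaluation records rather than any real benchmark data:

```python
# Hypothetical evaluation records: (group, truly_same_person, system_said_match).
# All entries are invented for illustration only.
results = [
    ("group_a", False, True),   # false positive: different people, system matched
    ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True, True),
]

def false_positive_rate(records, group):
    """Share of different-person pairs the system wrongly declared a match."""
    negatives = [r for r in records if r[0] == group and not r[1]]
    false_pos = [r for r in negatives if r[2]]
    return len(false_pos) / len(negatives)

print(false_positive_rate(results, "group_a"))  # 0.5
print(false_positive_rate(results, "group_b"))  # 0.0
```

When this rate differs sharply between groups, as the study found, the same camera system exposes some populations to far more wrongful alerts than others.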
Although some countries on the European continent use camera surveillance systems, usually monitored by the police, one aspect of the equipment in Serbia has caused concern: the fact that it is developed and supplied by Huawei, the Chinese telecommunications multinational. This reveals another key point in this debate: the intricate link between politics and business.
To expand its business, China has been financing technological infrastructure projects in countries with scarce resources. In the last decade, this focus has fallen on the Balkans, a peninsula in Southeastern Europe whose countries have been seeking entry into the European Union.
As it expands its databases with information from diverse populations, China also improves its own systems, refining them for use anywhere. In favor of Chinese companies is the fact that their systems are more developed than those of many other big techs, and that they charge lower prices.
Concern for the privacy of citizens has led the European Union toward a kind of moratorium on this technology. A regulation proposed by the European Commission a few months ago, and currently under consideration, would prohibit the use of these systems in public spaces. In the United States, facial recognition technology is widely used by police and federal institutions such as the FBI and the Drug Enforcement Administration (DEA), by immigration services, and by the armed forces, although it has been banned in some cities.
New technologies such as facial recognition bring benefits, including greater security in transactions and the prevention of fraud and cybercrime. But, like almost any innovation, they also bring risks and threats. Whether they turn out to be good or bad will be determined by how they are used, by whom, and for what purpose. That requires deepening the debate and analysis, and preventing biased interests from overriding the common good.