Fairness, robustness and security for AI

Artificial intelligence (AI) holds great potential for applications in the field of homeland security, for example the live analysis of facial images at airports. However, there is one major challenge: statistical distortions in the underlying data can lead to biased results, which in the worst case disadvantage or favour certain groups of people. secunet is researching solutions that counteract this serious effect and thus help to ensure that AI applications work fairly and can be trusted.

In order to guarantee the security and fairness of AI applications, the EU is currently developing the EU AI Act, which will divide AI applications into different risk categories and prescribe corresponding tests. In Germany, the Federal Office for Information Security (BSI) established a framework of testing criteria for AI applications early on with its AIC4 catalogue.

An important component of these tests is the question of whether and to what extent a trained AI model is subject to statistical distortions - a so-called bias. A bias arises, for example, from insufficiently balanced training data or from an overrepresentation of certain combinations of characteristics, which the AI model then learns and generalises. In the worst case, certain groups of people or even individuals are disadvantaged or favoured when the AI model is applied. Especially in AI applications that work with images or videos of people, a bias would have serious consequences.
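As a minimal illustration of how such a distortion can be quantified, the sketch below compares the frequency of attribute values in a dataset and the spread of recognition accuracy across groups. All numbers, group labels and function names here are hypothetical assumptions for illustration, not secunet data or tooling:

```python
from collections import Counter

def attribute_balance(labels):
    """Relative frequency of each attribute value in a dataset."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

def max_disparity(per_group_accuracy):
    """Gap between the best- and worst-served group - a simple bias score."""
    return max(per_group_accuracy.values()) - min(per_group_accuracy.values())

# Hypothetical age labels of a face dataset: the 51+ group is underrepresented
ages = ["18-30"] * 700 + ["31-50"] * 250 + ["51+"] * 50
print(attribute_balance(ages))  # {'18-30': 0.7, '31-50': 0.25, '51+': 0.05}

# Hypothetical recognition accuracy measured separately per group
print(max_disparity({"18-30": 0.99, "31-50": 0.97, "51+": 0.88}))
```

A skewed balance in the first check is exactly the kind of problem that later shows up as a large disparity in the second.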

The problem is a deep-seated one: image-processing AI models with architectures like CNNs (Convolutional Neural Networks) are designed to identify patterns. Even if features such as age, gender or ethnicity are not explicitly labelled in the training data, an AI model can construct an indirect representation of such or similar characteristics from the available image information. Such a so-called indirect bias is difficult to detect and even more difficult to correct. An analysis requires a very large, differentiated data set. To test all combinations of characteristics for bias in the conventional way, thousands of new images would have to be produced, which is simply not feasible in terms of time and cost.
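The notion of an indirect representation can be made concrete with a small, purely illustrative simulation. The "embeddings" and group labels below are synthetic assumptions, not real model outputs: if a simple probe predicts an attribute from the embedding far better than chance, the model encodes that attribute even though it was never a training label:

```python
import random

random.seed(0)

def make_embedding(attribute):
    """Simulated face embedding: the model was never told the attribute,
    but one dimension still correlates with it (an indirect representation)."""
    base = [random.gauss(0.0, 1.0) for _ in range(4)]
    base[0] += 1.5 if attribute == "group_a" else -1.5
    return base

def probe_accuracy(embeddings, attributes):
    """Fit the simplest possible probe (a sign threshold on dimension 0)
    and report how often it predicts the attribute correctly."""
    preds = ["group_a" if e[0] > 0 else "group_b" for e in embeddings]
    hits = sum(p == a for p, a in zip(preds, attributes))
    return hits / len(attributes)

attrs = ["group_a", "group_b"] * 500
embs = [make_embedding(a) for a in attrs]
print(probe_accuracy(embs, attrs))  # well above the chance level of 0.5
```

In a real analysis the probe would be trained on actual model embeddings; the point of the sketch is only that above-chance probe accuracy reveals an indirectly encoded attribute.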

secunet has intensively studied the topic of bias in images of people and in the corresponding AI models, and has developed a solution that not only detects bias in the data, but for the first time can also test the AI model itself for bias. With this solution, it is possible to further develop a model so that bias is eliminated. For this purpose, the training data is adjusted so that all features are fairly distributed and represented, and, as a consequence, a possible bias is mitigated in the model. Incidentally, this not only ensures fair and discrimination-free AI models, but also increases their security and robustness. With additional functions such as the variation of environmental factors, the limits of the AI model can be determined and solutions developed.

Real or not? These are in fact synthetic faces, assembled from a variety of possible features and then used as the basis for virtual images. Such images are also used to check AI models for bias.

The analysis and solution of the bias problem takes place in three steps:

  1. The training and test data are analysed for bias in relation to characteristics such as age, gender and ethnicity. This allows problems to be identified at an early stage, which would later be reflected in the AI model.
  2. secunet's solution then tests the AI model itself. To do so, it generates a large number of photorealistic artificial identities that differ, for example, in age, gender and ethnicity. The original identities in the test data are exchanged for these artificial identities in order to test the recognition performance of the AI model. Arbitrary combinations of characteristics are generated, reflecting the fact that actual human diversity is very large. No fixed number of ethnicities is specified, as this would introduce a new bias. The adjustable identities with fluid transitions between characteristics thus represent the full range of human diversity.
  3. If a bias is detected, new identities can be created for the training data and the model can be re-trained. This process is repeated until no bias is detectable.
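The three steps above form a feedback loop, which can be sketched as follows. This is a deliberately simplified simulation: the accuracy figures, the tolerance and the modelled effect of retraining with new identities are assumptions for illustration, not secunet's actual method:

```python
def detect_bias(per_group_acc, tolerance=0.02):
    """Steps 1 and 2: bias is flagged when group accuracies differ
    by more than a tolerance."""
    return max(per_group_acc.values()) - min(per_group_acc.values()) > tolerance

def rebalance(per_group_acc, step=0.03):
    """Step 3 (simulated): adding synthetic identities for the weaker
    groups and re-training is modelled as closing part of the gap."""
    best = max(per_group_acc.values())
    return {group: min(best, acc + step) if acc < best else acc
            for group, acc in per_group_acc.items()}

# Hypothetical per-group recognition accuracy of an initial model
accuracy = {"group_a": 0.99, "group_b": 0.90}
rounds = 0
while detect_bias(accuracy):      # repeat until no bias is detectable
    accuracy = rebalance(accuracy)
    rounds += 1
print(rounds, accuracy)
```

The loop terminates exactly when the detector no longer flags a disparity, mirroring the "repeated until no bias is detectable" condition in step 3.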

The process is simple and quick. Once an AI model has gone through it, there is proof that it is fair and free from discrimination. This creates trust in the AI application - on the part of the public, but also on the part of the operators, who can now be confident that their model is fair.

Furthermore, the analysis of AI models is not only about bias. Other important aspects, which are also reflected in the EU AI Act and in the BSI's AIC4 catalogue, are the robustness and security of recognition performance. For example, there is a risk that an AI application will fail to recognise relevant characteristics under certain weather or lighting conditions, which would result in security risks. With secunet's research and development, various environmental factors can be tested in order to show the limits of an application's recognition performance and identify potential risks.
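How such an environmental stress test can look in principle is sketched below. The brightness model and the toy similarity score are illustrative assumptions, not secunet's actual test procedure; the idea is simply to sweep an environmental factor and observe where recognition performance degrades:

```python
def adjust_brightness(pixels, factor):
    """Simulate a lighting change by scaling pixel intensities (clipped to 0-255)."""
    return [min(255, max(0, int(p * factor))) for p in pixels]

def match_score(probe, reference):
    """Toy similarity: 1 minus the mean absolute pixel difference, normalised."""
    diff = sum(abs(a - b) for a, b in zip(probe, reference)) / len(probe)
    return 1.0 - diff / 255.0

# Hypothetical reference image, reduced to a handful of pixel values
reference = [120, 130, 125, 140]
for factor in (0.3, 0.7, 1.0, 1.3):
    degraded = adjust_brightness(reference, factor)
    print(factor, round(match_score(degraded, reference), 3))
```

The factor at which the score falls below an acceptance threshold marks a limit of the application's recognition performance under that lighting condition.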

The risks of AI are widely debated in public, including in the context of its use for homeland security. A thorough and impartial examination and optimisation of the relevant AI models is the best way to respond to this debate and to increase the acceptance of AI applications.


Contact:

Florian Domin
secunet Security Networks AG


secuview is the online magazine of secunet, Germany's leading cybersecurity company. Here you will find news, trends, viewpoints and background information from the world of cybersecurity for public authorities and companies. Whether cloud, IIoT, home office, eGovernment or autonomous driving - there can be no digitisation without security.


In addition to the online magazine, secuview is published twice a year as a journal, which you can subscribe to free of charge in printed form or download as a PDF.

© 2024 secunet Security Networks AG