The proposed requirements for high-risk AI systems include, among others:
- Establishing a risk management system,
- Error-free training, validation and testing data,
- Establishing appropriate data governance,
- Cybersecurity measures that prevent manipulation of the training data or of system behaviour,
- Detailed technical documentation demonstrating compliance with all necessary risk-minimising measures,
- etc.
The following are considered high-risk areas:
- critical infrastructures, where human life or health could be put at risk,
- determining access to education,
- personnel recruitment,
- determining access to social assistance or other services from the public purse,
- assessing creditworthiness,
- deciding on asylum applications, as well as visa and residence approvals,
- criminal prosecution and the administration of justice.
The goal the AIA pursues – ensuring that AI contributes sustainably to the benefit of people and society by balancing risks and benefits – is both right and important. Whether the AIA is the appropriate instrument for this purpose, and what risks and side effects it entails for Europe as a hub of innovation, is still the subject of intensive debate (and will remain so). Two thoughts immediately occur to me on this subject. One:
- In all the domains mentioned, the alternative to the statistical machine (which, put crudely, is what an AI system actually is) is people. Today it is mostly (still) people who make decisions in these areas, and some of these decisions are far-reaching. People who are perhaps having a bad day; people who perhaps want to get something else done before going home for the day; people who perhaps have prejudices or are poorly trained; people who perhaps profit from their own decisions; and people who perhaps have been duped with incorrect information. This doesn’t happen every time – but it has been known to happen.
- Looking at it the other way round: Don’t the “opportunity risks” also have to be included in the risk assessment of an AI system? In other words, shouldn’t the analysis also consider which “human” risks can be eliminated, or at least reduced, through the use of AI? And in fact – with a view to ruling out this human bias – should the use of AI actually be mandated in certain critical domains?
And the other:
- What did my math teacher in the 1980s always say?
“Yes, you may use your calculator. As soon as you can work out for yourselves whether the results are actually right…”
- And that still seems to me to be a very good principle.
(And while googling this topic, I also found an interesting article: “Man versus machine: Why naïvely trusting algorithms is evidence of incapacity for the industry” (horizont.net).) Voilà – what do you think?