Artificial Intelligence (AI) is one of the most hyped topics of the moment. The launch of ChatGPT at the end of 2022 made AI available to the general public and sparked the discussion on how AI can be used to improve our lives, but also on the risks it poses. The potential risks to the health, safety and fundamental rights of its citizens are one of the reasons why the European Union (EU) introduced the AI Act.

What qualifies as AI?
To determine what is actually meant by AI, the AI Act uses the following definition:
“AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate output such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
If we take this definition, then almost every computer program qualifies as AI. Neither adaptiveness nor implicit objectives are strict requirements. So, an algorithm only needs to generate, from some input, an output that influences a physical or virtual environment. This implies that both an advanced deep neural network, whose outcomes are hard to explain, and a simple rule-based system count as AI. Hence, the algorithms that we develop at Doing The Math also fall into that category under this definition.
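To make the breadth of this definition concrete, here is a deliberately trivial, hypothetical Python sketch (the function and threshold are made up for illustration, not one of our actual tools): a single fixed rule that nonetheless receives an input and infers an output that can influence a physical environment.

```python
# Hypothetical single-rule system: despite containing no learning at all,
# it fits the broad AI Act definition, since it infers from its input
# (a stock level) an output (a reorder recommendation) that can influence
# a physical environment (the warehouse).

def reorder_recommendation(stock_level: int, reorder_point: int = 100) -> str:
    """Recommend a reorder when stock drops below a fixed threshold."""
    if stock_level < reorder_point:
        return "place replenishment order"
    return "no action needed"


if __name__ == "__main__":
    print(reorder_recommendation(stock_level=42))  # -> "place replenishment order"
```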
Keeping humans in the loop
The level of autonomy is an important factor in the risk assessment of an AI system. At Doing The Math we see our tools as a way to enhance human decision-making. The models and data that are used are a good approximation of reality, but they will never include all details. There are only a few situations in which no human in the loop is required, for example a very well-defined environment with controlled decisions, such as deciding whether to switch on the heating in a house. In all other cases, a human should always verify, and adjust if necessary, the solution proposed by an AI system, especially because humans remain responsible for the consequences of those decisions.
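As a rough illustration of what keeping a human in the loop can look like in code, the Python sketch below (hypothetical names, not one of our actual tools) separates the proposal step from the decision step: the algorithm only suggests a plan, and the operator's adjustments are applied before anything is executed.

```python
# A minimal sketch of a human-in-the-loop pattern (all names hypothetical):
# the system only proposes a plan; an operator reviews it and may override
# individual values before anything is executed.

def propose_plan(forecast_demand: list[float]) -> list[float]:
    """Naive proposal: plan to supply exactly the forecast demand per hour."""
    return list(forecast_demand)


def apply_human_review(proposal: list[float], overrides: dict[int, float]) -> list[float]:
    """Replace proposed values with the operator's adjustments, where given."""
    return [overrides.get(hour, value) for hour, value in enumerate(proposal)]


if __name__ == "__main__":
    proposal = propose_plan([10.0, 12.5, 9.0])
    # The operator disagrees with hour 1 and adjusts it; the rest is accepted.
    final_plan = apply_human_review(proposal, overrides={1: 11.0})
    print("Plan to execute:", final_plan)  # -> [10.0, 11.0, 9.0]
```

The point of the split is that the algorithm never acts on its own output; the operator's review is a mandatory step before execution.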
Algorithms have played a crucial role over the last decades in shaping the world that we currently live in. For example, managing complex systems such as energy networks, supply chains, and railway operations would be impossible without them. Most of these algorithms might not have been considered AI a couple of years ago. It is also not very relevant whether an application is labelled as AI. As these algorithms have contributed a lot to our current welfare, we should not shy away from using them. However, as the EU AI Act also emphasises, it is really important that humans remain in charge of the decisions that are made. We should be careful about handing over control to an algorithm, especially if it affects something we care about as a society.


