"The spread of artificial intelligence will be one of the driving forces behind societal restructuring"

Holding our breath, we all watch as artificial intelligence (AI) continues to expand its presence. The chorus of concerned voices is growing: who will set limits to this rapid progress? Will legislation be able to keep up with the changes in the market? Which applications will be off-limits to develop? And will robots take over our jobs in the next decade? We spoke with Zoltán Karászi, Chairman of QTICS Group, a company conducting testing, inspection, and certification across multiple industries.

Artificial intelligence is gaining ever more ground. Can current regulations set limits to this, or will we soon find ourselves in a Black Mirror episode?

It is already becoming clear that certain AI applications will simply be prohibited from development. The upcoming AI Act (the European Union's regulation on artificial intelligence) sorts all applications into four risk categories, the fourth of which covers completely prohibited tools. One example would be real-time biometric identification that transmits immediate data about a candidate's physical reactions during a job interview.

Indeed, we can imagine an app that automatically deems a candidate unsuitable based on their behaviour, tone of voice, or even their family history gathered from the internet. Artificial intelligence can go only so far in profiling and supporting HR activities; developing such applications is considered unethical. We could also mention predictions of voting behaviour, gender-based approaches, and numerous other ways AI could intrude into our intimate sphere. These developments, however, would clash with our rights to privacy and the protection of personal data.

The recurring concern is that AI will eventually render masses of jobs redundant. Or will it not?

In this matter, I would take a middle-ground stance; I'm not excessively pessimistic. It's obvious that the rise of AI will create significant job opportunities. Providers of AI solutions will need European representation. Once the regulatory environment is established, a multitude of professionals, from quality assurance experts to test engineers, will find employment in roles that don't even exist today.

On the other hand, in fields where extensive effort goes into data collection, or where items are categorised based on simple criteria, machine algorithms will eventually replace human labour. Certain jobs may therefore disappear, but new roles will emerge in exchange. So I would rather say that the spread of artificial intelligence will be one of the driving forces behind societal restructuring.

If not this, then what do you consider the biggest risk?

IT systems powered by artificial intelligence already control numerous critical operations today, from hospitals to chemical plants. Such systems are frequent targets of hacking attempts that probe for vulnerabilities, and AI can deduce these weaknesses from the system's responses. On the other hand, the same AI can effectively identify the type, nature, and possibly the perpetrator of an attack based on patterns. The vulnerability of IT systems is therefore part of the toolkit of both the defender and the attacker.

Just imagine, for example, what would happen if the protective mechanisms of a chemical plant's production system were hacked. Raising the temperature of the substances there even slightly could lead to an explosion. Similarly, in a pharmaceutical factory, even a minor alteration to the recipe can result in deadly poisoning. After such malicious experiments, it is vital to identify, through data analysis, which vulnerabilities in the system need to be corrected.

 

Zoltán Karászi, QTICS Group

It's understandable to feel like we're in the technological Wild West. When can we expect legislation regarding the use of AI to be enacted?

The certification of AI-driven devices is partially covered by existing regulations, but there are currently no harmonized, European-level standards that would provide an objective, repeatable, and generally accepted certification procedure. It's important to note, however, that the so-called AI Act, a legislative package on artificial intelligence, is already in preparation. The European Parliament has adopted the draft law, and the Parliament, the Council, and the Commission will now work together to agree on the final text, hopefully before the European Parliament elections next year.

The AI Act will specify high-risk applications and prescribe mandatory conformity assessment and certification processes for them, which can be carried out by designated (notified) bodies such as QTICS. However, this process also requires harmonized standards, which are not yet in place. All of this is a serious and complex procedure, and it may seem somewhat overregulated, as is the style in Europe, especially considering that artificial intelligence is a pervasive technology with far-reaching implications.

If European standards haven't been established yet, what can be the basis for reviewing new inventions today?

Engineering judgment remains the approach, although it is not yet an accredited testing methodology. The regulatory framework and the associated institutional system first need to be created by the accrediting authority. At QTICS, we are working tirelessly on establishing a testing laboratory so that, as a quality control organisation dealing with artificial intelligence, we can be among the pioneers in both testing and certification. We can already perform such tests today, but they are non-accredited, as the necessary infrastructure is not yet in place.

But let's not forget that some existing regulations already govern the use of AI! Because such tools handle vast amounts of data, they also fall under data protection regulations like the GDPR and under cybersecurity laws. Both include a high-risk category, and medical devices, for example, are automatically classified into it. These devices also fall under the recently issued MDR/IVDR (Medical Device Regulation and In Vitro Diagnostic Regulation), so they intersect with multiple laws simultaneously.

Many of us are not even aware of where the boundary lies between robots and AI devices.

Robotics is often mystified in the popular imagination: most people picture a robot as, say, a robot chef that even gives Uncle Joe a back massage. In reality, a robot is 'just' a machine controlled by artificial intelligence. There are logistics robots, for instance, that simply pick goods off the shelves and transport them, yet the learning algorithms they use are solving the shortest-path problem more and more accurately. We can also mention inspection and condition-monitoring robots, or aerial devices. The AI Act will apply to all of these, because it's vital that a robot cannot be easily manipulated into causing harm to people or property.

At QTICS, you have covered several significant industrial areas. Where do you see artificial intelligence advancing the most?

Since AI is a horizontal technology, we have encountered developers in every industry who have incorporated it into their toolkit. It can appear everywhere from children's toys to space exploration. Among our four divisions, we encounter it most prominently in advanced medical devices, closely followed by mobility, as it is firmly present in the worlds of drones and autonomous vehicles. It also doesn't lag far behind in the energy and IoT sectors.

Which recent developments have convinced you the most? What, in your opinion, holds significant potential?

There are numerous image analysis applications on the market that, thanks to huge amounts of data and patterns, can determine with astonishing precision whether, for example, a medical X-ray shows a benign or malignant condition. Image processing algorithms are gaining increasing practical significance, especially in life-improving technologies. We can also mention the development of drones, whose algorithms can now process raw images and determine very accurately whether, say, a certain section of a power line requires maintenance. And then there are the language models used in customer relations, so we are talking about a technology that is entering every field.

How do Hungarians feel about this? Will Hungary ever become an AI superpower?

I could respond with the cliché: "Why is energy production better in Hungary? Because Hungarian solar panels receive Hungarian sunshine!" However, setting bias aside, there's no real basis to assume that Hungary starts from fundamentally different conditions in the AI competition than other countries. The level of support for specific industries and the amounts invested in training experts can vary significantly from one member state to another. Those who invest earlier and more substantially in these fields and adopt international best practices in time may advance further in the competition. At the starting line, however, Hungary is neither ahead of nor behind the others.

Can we state that this competition has already started?

In my opinion, yes, and we are certainly not leading it at the moment. However, with the world being technologically democratic, anyone can create something ground-breaking with a computer and good software. Hungary has these opportunities and can hold its ground in this sense. Could it become an AI superpower on its own? I don't think so, but with education policy measures, the situation could be significantly improved. I hope that the right people in the right positions are already thinking about this.
