In the beginning was the computer mouse, then came the pen, and then gestures. But when it came to language, people needed keywords a computer understood in order to communicate with it, says computer scientist Mária Bieliková, laureate of the main category of this year’s ESET Science Award. Today, we can communicate with machines using our own language, but that means we have to recognise the nuances of that communication, she tells The Slovak Spectator.
Sometimes, AI’s ability to replicate how humans talk can be surprising. But then it does something that we would never do, like when a chatbot hallucinates. What is going on inside AI?
Inside AI, or a machine learning model to be precise, nothing magical is going on. If by AI we mean a chatbot, then we are talking about a system of several parts, and not all of them have AI. At the core is a model trained on vast amounts of data using machine learning methods; this is based on statistics. Its knowledge of how to complete or generate text is expressed in billions of parameters assigned in a multilayer structure of neurons. The model has no understanding of what it is doing at all. Depending on the task for which it was created and the data on which it was trained and subsequently fine-tuned, it might, based on probability, consider something utterly stupid to be the best answer. The less a model is trained for specific situations, domains, or tasks, the more it will make things up, meaning there will be moments when it connects words and sentences that don’t make sense.
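The probability-driven generation she describes can be sketched in a few lines of toy Python. Everything here – the tokens, the probabilities, the example prompt – is invented purely for illustration and comes from no real model; the point is only that the model picks whatever continuation its training data made statistically likely, with no notion of truth:

```python
import random

# Hypothetical next-token distribution a model might assign after the
# prompt "The capital of Australia is". A wrong but frequent answer can
# outrank the correct one if the training data skews that way.
next_token_probs = {
    "Sydney": 0.55,    # common in training text, so most probable...
    "Canberra": 0.40,  # ...even though this is the correct answer
    "Melbourne": 0.05,
}

def greedy_decode(probs):
    """Pick the single most probable token (greedy decoding)."""
    return max(probs, key=probs.get)

def sample_decode(probs, rng):
    """Sample a token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
print(greedy_decode(next_token_probs))       # always "Sydney" here
print(sample_decode(next_token_probs, rng))  # varies with the random seed
```

Fine-tuning on domain data amounts to reshaping these distributions so that, in the relevant contexts, the correct continuation becomes the probable one – which is why, as she notes, a model trained for a specific domain makes fewer things up.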
What can be considered to be AI?
Although a very broad term, it is often narrowed down to machine learning and especially deep neural networks, which lie behind the current boom. However, it is much more than that. The term itself is not a very good one, as we ourselves still don’t know how to properly define what intelligence is. But we point to intelligence, usually because something learns and acts on what it perceives, which leads to a meaningful solution. AI systems try to imitate this behaviour. The important thing is that their behaviour is not predetermined; they can learn from new inputs and even react in situations that are new to them.
To what extent can we trust AI? Would it be better to first understand the data on which it has been trained?
Although very important, data is only one part [of making an AI system trustworthy]. Many AI systems are based on finding patterns in data, so biased or low-quality data will fundamentally influence a result. When it comes to large foundation models, the situation is different because there is no chance of analysing all the data on which they are trained. Therefore, how we work with them, train them and fine-tune them is very important.
Trust is not a one-off thing, it needs to be earned gradually. Machine learning-based AI systems are trained on data produced by humans and as such are distorted and imperfect. Trust in humans usually comes from humans being open and not having a hidden agenda, bad intentions.
AI systems do not understand what they produce and therefore have no agenda; the people who introduce them to the market and use them have an agenda. This makes the situation complicated. What we are trying to do – at least in Europe – is to develop human-centric AI systems, maximising their benefits while minimising their risks. AI systems should be ethical, meaning they should follow ethical standards, but also robust, meaning they do not cause unintended harm. Today, practice has significantly overtaken science, and so we have systems capable of great things, but we do not fully understand them; often we do not even know why they give us specific results. In my opinion, investments in research strongly linked with practice are crucial.
What do you think is the biggest threat AI poses and, conversely, its biggest benefit?
The former is using AI to directly cause harm or subliminally influence people’s opinions, attitudes and subsequently their actions. In the past, all of us were influenced by a few people we knew personally and then a few others, partly through newspapers and television. Today, we take in dozens, hundreds of opinions from thousands of people from all over the world every second, without having a real opportunity to verify that information to a sufficient extent.
This is a ticking bomb. AI systems are already significantly improving our lives in many ways, without us even being aware of it. Without AI, there are many medicines and vaccines we would not have, diagnostics would be less accurate, and there are certain types of medical procedures we would not be able to perform. Even weather forecasts would look completely different. In particular, the advent of generative AI is a revolution in human-machine communication.
Do we have to learn to communicate with AI?
Yes, but it’s not just AI; generally speaking, we have to learn to communicate with computers, or even more broadly with machines. When Doug Engelbart invented the computer mouse in the 1960s, what followed was a fundamental change in the way we communicate with computers. Then came the pen and gestures, but when it came to language, there still had to be keywords a computer understood. With AI has come communication in natural language, in the sense that we ask fully formed questions using the full range of a given language.
But just as humans have learned to communicate differently in different contexts with different people, we have done the same with machines. It is good to know what principles they work on and what their limitations are, so that we can ask good questions and also know what to ask so we get good answers.
What AI research is being carried out in Slovakia?
There are several research institutes that have been working on AI for a long time, mainly at universities and the Slovak Academy of Sciences, and, for more than four years now, the Kempelen Institute of Intelligent Technologies (KInIT), which currently has significant research capacity in the field in Slovakia.
Speech, natural language, image processing and robotics are Slovakia’s strengths in the field. There are also several companies with innovative products doing their own research, usually applied or focused on using the latest knowledge for a specific product or process. With the advent of generative AI, more and more companies are experimenting and looking for opportunities for their business.
What kind of research is done at KInIT?
Research at KInIT is carried out with the needs of local business in mind, to which the latest knowledge is transferred, and at the same time so that our excellence stands out internationally and helps Slovakia get out there. The main area is so-called low-resource AI. We are looking at ways to research and develop AI systems using limited data or limited labelled data, saving computing capacity in the training of models or their operation. Together with AI agents (autonomous systems for specific tasks operating without human intervention – Ed. note), this is becoming a major future area of AI.
When it comes to low-resource AI, much of the work goes into information processing, especially in text form: examining the properties of large language models with respect to low-resource languages such as Slovak, and fine-tuning models so that their outputs in low-resource languages are as good as possible despite the data handicap compared to languages such as English. We also address environment-related issues, especially energy, as well as the efficient use of our planet’s resources. Prediction, anomaly detection and optimisation are also among the typical problems we solve.
I must also mention the ethics and regulation of AI as an important topic at KInIT, not just for research but also for practical use in the development of AI systems.
So Slovakia has something to offer in terms of AI?
We have a lot of smart people. Many have left Slovakia for abroad, but several have also returned, or are helping from abroad. AI research and development needs quality data, computing capacity and talent.
I believe that the greatest asset is people. Hopefully in the near future we in Slovakia will move forward in computing capacity. A new supercomputer focused on AI computing is planned to be built as part of the recovery and resilience plan. Together with talent, a strategic location in the centre of Europe and a balanced energy system provide good conditions for the creation of even larger European computing infrastructure in Slovakia.
What rules currently limit AI research?
This August, the AI Act came into force, and Europe and individual EU members are preparing to specify and apply its rules. Even before that, research faced many requirements, such as ensuring that systems – not just AI systems – are trustworthy. This is very important in our part of the world. And just as no one debates whether a digital system should have a good user experience (UX), it is becoming just as important for users to view systems as trustworthy. In this regard, I consider it very important to find a balance between the uncertainty that comes with every new technology and the desire to know more about it and try new approaches.
What ethical or moral dilemmas do researchers face?
There are quite a few – for example, how to ensure that we do not influence real users in any way when we audit social media for various biases using artificial bots, or how to transform a model trained on data for research purposes into a model that will help improve a product, so that it complies with data protection and privacy while at the same time delivering a sufficiently high-quality solution.
It is a very fragile system: having the courage to try, while always keeping in mind the fundamental values that technology must respect. The same applies to those who monitor and check compliance with the regulation.
This year you won the main category of the ESET Science Award. What did it mean to you?
I was overjoyed, but I also felt grateful to everyone I’ve met on my journey. Awards like this are never individual. Receiving the award from Emmanuelle Charpentier, a Nobel Prize winner, was a great honour. It is also important for the Kempelen Institute, because it strengthens our motivation to increase scientific excellence and bridge academia and industry. And it is important for the fields of computer science and artificial intelligence, which were recognised in the main category of this prestigious award for the first time.
This article is supported by the ESET Foundation, whose annual ESET Science Award recognises exceptional scientists.