Alan Turing, widely regarded as the father of modern computing, famously predicted that one day it would be impossible to tell whether one is communicating with a person or a machine. A milestone that appeared elusive for decades is, many now argue, happening right now. We have reached that point, and the future is uncharted.
Artificial Intelligence has been with us for decades, but it is only recently that it has taken the world by storm. Freely accessible tools such as Midjourney and ChatGPT, which manage tasks that were, until recently, virtually impossible for a machine to carry out, have started to lift the curtain on the immense possibilities and remarkable power of AI.
Innovation has many faces. Thousands of years ago, our forefathers built temples with tools that, in their era, stood as the pinnacle of innovation. Today, we develop complex algorithms and empower machines to learn to solve problems. With any new tool or technology, a degree of uncertainty and fear is only natural. But, as the adage goes, with great power comes great responsibility, and the right person to expand on the subject is Gavril Flores, Chief Officer at the Malta Digital Innovation Authority (MDIA), the entity responsible for the regulation and promotion of innovative technology.
“We are in the business of balancing the desired effects or ‘benefits’ of innovative technology against its undesired effects or ‘risks’, and we do this by directing and facilitating the secure and optimal uptake of digital innovation through supervision, recognition and promotion. For example, in the field of AI, we are working on an EU-wide regulatory framework that protects users without stifling innovation,” says Mr Flores, who has spent a long career in a number of the country’s regulatory bodies, including the Medicines Authority and the MCCAA.
AI has been at the centre of a pan-European debate, with the MDIA participating on Malta’s behalf in the drafting of ground-breaking legislation to regulate the new technology at European level. The AI Act, which is expected to come into force by early 2024, is intended to provide clarity and do away with misconceptions.
“We understand that new and powerful technologies such as AI can bring with them a host of misunderstandings, which are often fuelled by fiction rather than fact. It is our job to ensure that a solid framework is in place to explain clearly what the technology will be able to do and what it will be limited from doing, striking the right balance between risk and benefit,” Mr Flores explains.
All systems carry a level of risk, which can be mitigated through solid regulation. The AI Act aims to identify the areas in which AI may be used, while clearly marking the absolute no-go areas.
“For example, we are aware that in Asia, AI is used for social scoring. In Europe this is unacceptable and such an application of the technology will be prohibited. We are interested in identifying areas where AI can enhance the human experience, making us more productive and competitive in an ethical and transparent manner,” he continues. In fact, the AI Act is expected to provide a concrete structure against which the technology will operate in the European Union.
The Act will oblige operators to identify and assess risk, with the participation of independent conformity assessment bodies to ensure that it is thoroughly mitigated. Human oversight will be central to this process: AI systems will need to operate under strict, well-defined protocols, within clear parameters they cannot override, and with the possibility of human intervention at any given moment.
Under the Act, users of AI systems that generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events, and would falsely appear to a person to be authentic or truthful (a ‘deep fake’), must disclose that the content has been artificially generated or manipulated.
For the first time, the Act will also introduce avenues for redress on the subject, which are not always easily available today. Citizens will be able to raise their concerns with specially established authorities that will investigate each case on its own merits. The Act provides for the setting up of national supervisory authorities, a step on which Malta, through the MDIA, is already well ahead of its European counterparts. A central AI office in Brussels will, in turn, ensure the required coordination and advice at European level.
“Furthermore, we truly believe in the opportunities that this new frontier brings with it as a human-centric tool. That is why the Act will see to the setting up of national AI sandboxes for safe innovation, encouraging the development and application of AI within a sound environment. We have already established an active technology sandbox, and our focus will be to extend it to cover the regulatory aspect once the framework is in place. We want to draw every opportunity from AI, ultimately resulting in a better distribution of wealth and improved skills for the economy, based on a just transition for the benefit of all stakeholders,” Mr Flores concludes.
Photos by Inigo Taylor