July 5, 2021
April 28, 2021
Dear Governor Dukakis, Dear Tuan Nguyen and Mark Rotenberg, Distinguished Members of the AIWS Committee, Ladies and Gentlemen,
Thank you for this distinction, which I accept with gratitude and humility. I am especially moved to receive it in the presence of Governor Dukakis, who not only epitomizes public service and leadership in this country, but has also been a source of great pride in his ancestral one. Σας ευχαριστώ για τη μεγάλη τιμή που μου κάνετε, thank you for this high honor!
Ambassadors, as is well known, are asked to speak about many topics on which they are not themselves experts. But in fact, Artificial Intelligence and human rights in the digital age are two topics that I have followed closely for a long time. They are a large portion of my current work as European Union Ambassador to the United States, and long before that, I followed these issues closely as a Greek Member of the European Parliament. While Vice President of the Parliament’s civil liberties committee, I wrote the very first report on security and privacy in the digital age, exactly 11 years ago, when the topic was hardly as popular as it is today.
Perhaps my personal experience of growing up under a dictatorship in Greece endowed me with an acute awareness of the fragility of our open and free societies. As a child, I saw first-hand that even strong democracies can fall under authoritarian spells. I remember the dictatorship holding “files” on different citizens (including my parents), with personal information revealing their political activities and preferences, to be used against them, or to scare them into submission or complacency.
Perhaps this explains best why, when I look at the promise but also the challenges posed by digital technologies, I have always been guided by two fundamental principles, in politics and now diplomacy:
- First – in real democracies, it is the people who should have the power to judge the thoughts and actions of their governments, and to hold governments and companies to account; not governments or companies who are supposed to observe and judge the daily actions or thoughts of their citizens. If the unrestrained use of technology leads to the latter instead of the former, we will have flipped Democracy on its head.
- Second — In today’s democracies, a “Big Brother” will materialize slowly and by stealth, not suddenly in the form of an authoritarian figure who takes away our rights in one fell swoop. If it happens, it will be gradual, by a thousand cuts, with our own explicit or tacit “consent,” with our complacency.
In the mid-2000s, an example that illustrated the conundrum was the unfolding mass use of cameras in the streets. What should be their proper use? For regulating traffic? Sounds reasonable. Protecting us from terrorist attacks? Sounds reasonable too. But given that by their very nature they could be used on a 24-hour basis for many more things – identifying all participants at a protest in case just a few of them turned violent? Catching a pickpocket, in addition to a terrorist? – the question quickly became, “Where do democracies draw the line for the use of technology to avoid dangerous slippery slopes? What is necessary, appropriate and proportionate usage? Who should have access to the data and who should not? Where should such personal data be stored in order to be kept safe, and when should it be permanently deleted?” And, soon thereafter, similar questions started to be raised about the collection and use of citizens’ personal data by major private companies and digital platforms as well.
The argument used by some governments and businesses at the time, the one that most troubled me, was: “If you have nothing to hide, you have nothing to fear.” It troubled me, because it in essence encouraged the “innocent” to offer their consent to their very own unfettered surveillance, in the name of catching the few “guilty.” If successful, it could indeed lead to the gradual and irreversible salami-slicing of our rights, “with our consent.”
So my answer to that pseudo-dilemma was, “If you have nothing to hide, you don’t have a life!” Because in fact, we all have thousands of elements of our private transactions, relationships, histories, or beliefs, all perfectly legal, that we do not wish others to have unrestricted access to.
That was then. The reality today is that we are all thinking about technology, innovation, and privacy quite differently than we did a few years ago.
We’ve seen — and are seeing more every day— that Americans want baseline privacy protections, that the status quo isn’t good enough, and that the time is ripe for new actions to improve citizens’ rights and trust in technology.
In Europe, we have always been forward-leaning when it comes to the protection of privacy – perhaps for historic reasons – and withstood significant criticism for pioneering it early on, not least with the General Data Protection Regulation (GDPR).
Without a doubt, we are a better and stronger Union today for the privacy safeguards we have in place. And we are now committed to seeing that our privacy protection also safeguards innovation and competition. Our economies and societies need both.
Whether you call it AI or machine learning, both – in the broadest sense – represent change. Change makes many of us uncomfortable, because it creates a new reality, something different from what we have come to know. It is easy to fear the unknown.
I will not lecture you on the textile revolution, the industrial revolution, or the introduction of the automobile. They represented massive shifts in our technological progress, and the economic benefits as well as the social upheaval that accompanied them are well-documented.
Artificial Intelligence is different, more complex and far-reaching. In creating a tool that can make judgements – that can decide for us between multiple alternatives – we have introduced a new form of change into our daily lives. It is, if you like, change “to the nth power,” scaling exponentially in a way we have not yet experienced.
As policymakers, citizens and consumers – even as ordinary human beings – we must ask ourselves: Who do we want to make the rules for tools that are becoming increasingly embedded, invisibly, in the fabric of our society? How do we ensure that the AI embedded in the cars we drive, the buildings we use, the energy we consume, the health services we receive, the messages we send, the news we read, even in the refrigerators we use – is safe, controllable, unbiased, and trustworthy? That AI does not discriminate, is not used to “observe and judge,” or to impinge on our universal human rights?
In the final analysis, how do we ensure that AI technologies enhance and protect our freedoms, our well-being, and our democracies rather than diminish them?
Europe’s Balance between Innovation and Fundamental Rights
In Europe, we believe that there is a clear interrelation between Innovation and Fundamental Rights – that one can promote the other.
We value, champion, and thrive from innovation. Last year, as the deadly COVID-19 pandemic spread rapidly across the globe, AI demonstrated its potential to aid humanity by helping to predict the geographical spread of the disease, diagnose the infection through computed tomography scans, and develop the first vaccines and drugs against the virus.
European companies and innovators have been at the forefront in every aspect of that effort. Last year’s Future Unicorn Award, presented annually by the European Union to the start-up with the greatest potential, went to a Danish company, Corti, which uses AI and voice recognition to help doctors predict heart attacks.
Clearly, the possibilities and opportunities for AI are immense – from turning on wind turbines to produce the clean energy for our green transition, to detecting cyber-attacks faster than any human being, or cancer in mammograms earlier and more reliably than trained doctors. We hope that AI will even help us to detect the next infectious outbreak, before it becomes a deadly pandemic.
We want AI to do all of these great things.
At the same time, just as in every technological evolution that has come before it, we must prepare for the unexpected. With the increasing adoption of AI, our rights to privacy, dignity, freedom, equality, and justice are all at stake. These are fundamental to our lives as Europeans, and enshrined in the European Charter of Fundamental Rights.
If it is our aspiration to create machines that are able to do more and more of our thinking, selections and decision-making, we must also take care to ensure they do not make the same mistakes that we humans have been prone to make. Let me offer two examples to illustrate the point:
- First – the use of facial, voice, and movement recognition systems in public places can help make our lives more secure. However, it can also allow governments to engage in mass surveillance, intimidation, and repression, as China has shown, in the most cynical and calculated way, in Xinjiang.
- Second – the use of AI in recruitment decisions can be helpful. However, if a computer compares resumes of senior managers and concludes that being male is a good predictor of success, the data simply reflects bias – a bias within our society, which historically has favored men for leadership positions. We do not want AI to reinforce existing biases by copying and infinitely replicating them.
These are just two examples that illustrate why we must not become bystanders to the development and deployment of AI. If we, the major world democracies, do not move to establish a regulatory framework, if we do not move fast, smart and strategically to build alliances and set standards for human-centric, trustworthy, and human rights-respecting AI with countries big and small from all over the world, I dread to think who might.
EU Proposal for AI
In Europe, we have been thinking about these questions for many years now. We see that technology is an inescapable, necessary and desirable part of our future. But without trust in it, our progress as a society will simply not be sustainable.
Wearing both my Ambassador and European citizen hats, I am immensely proud that the European Commission has just presented a ground-breaking proposal for a regulatory framework on AI. It is the first proposal of its kind in the world and it builds on years of work, analysis and consultation with citizens, academics, social partners, NGOs, businesses (including U.S. businesses), and EU Member states.
It is not a regulation for regulation’s sake. It responds to calls for a comprehensive approach across the European Union to protect basic rights, encourage innovation consistent with our values, provide legal certainty to innovative companies, spur technological leadership, and prevent the fragmentation of our single market.
In terms of scope, the draft regulation is actually quite limited. It will introduce a simple classification system with four levels of risk – unacceptable, high, limited, and minimal.
“Unacceptable” and thus prohibited AI practices are those which deploy subliminal techniques beyond a human’s consciousness, such as toys or equipment using voice assistance that could lead to dangerous behavior, or the exploitation of the vulnerabilities of specific groups of persons due to their age, physical, or mental disability. Real-time remote biometric identification systems used in public spaces are also classified as an “unacceptable risk” – with extremely narrow exceptions when strictly necessary. As are “social scoring” practices, where governments “score” their citizens as opposed to the other way around.
When enacted, the regulation will also set binding requirements for a small fraction of so-called “high-risk” uses of AI, like credit scoring, sorting software for recruitment, verification of travel documents, robot assisted surgery, the management of critical infrastructure (e.g. electricity), or when an AI assists a judicial authority, to name a few practical examples.
The binding requirements ensure that in such cases, high-quality data sets are used, risks are adequately managed, documentation and logs are kept, and human oversight is provided for – in order to ensure the AI systems are robust, secure and accurate.
At the end of the day, the purpose of the regulation is two-fold: (a) to ensure that Europeans can trust what AI has to offer and embrace AI-based solutions with confidence that they are safe, and (b) to encourage innovation to develop in an ecosystem of trust. As the European Commission’s Executive Vice President Vestager put it recently: “Trust is a must, not a ‘nice to have.’”
As was the case with the General Data Protection Regulation, the Commission’s AI proposal will be subject to legislative scrutiny before it can become law in all EU countries.
Transatlantic and Multilateral Context
Without a doubt, it will be a topic of some debate here in the United States as well. And the European Union looks forward to these discussions with our like-minded partners.
This is because, on the global stage, AI has become an area of strategic importance, at the crossroads of geopolitics and security. Having taken this pioneering step, the EU will work to deepen partnerships, coalitions and alliances with third countries and with likeminded partners to promote trustworthy, ethical AI. Exploring a Social Contract for the AI Age – a framework to ensure an AI “Bill of Rights” in the digital age – is fundamental in international relations today.
And in this work, our relationship with the United States is paramount. For Europe and the United States in particular, our shared values make us natural partners in the face of rival systems of digital governance. Together, we must rise to the occasion.
That is why European Commission President Ursula von der Leyen called for a Transatlantic Agreement on AI that protects human dignity, individual rights and democratic principles, to also serve as a “blueprint” for broader global outreach.
My hope is that Europe and the United States will work more closely together, continuously and at all levels – with engineers, policymakers, thought leaders, civil society, scientists, and businesses on both sides of the Atlantic – to guide our technological progress and help us improve, evolve, and become more just, equitable, and free societies. To help us ensure that AI enhances the human condition and experience for all mankind.
And I rely on all AI students and researchers, innovators, policymakers, and business leaders listening in today, to help us turn this aspiration into our new reality.
We have a window of opportunity to act – and we should do so. During my time as EU Ambassador to the United States, I will do all in my power to bring it about.
Once again, please accept my warm gratitude and appreciation for this award.