
President Kaljulaid at the North Star AI conference


I have to start far away in history to help you understand what Estonia is today. It was the year 1987, and a bunch of Estonian school kids had access to really good computing power at the Institute of Cybernetics in Tallinn. I must remind you, this was still the Soviet Union. The computer functioned like one big processor, and access was provided through separate terminals. While we were creating our little programs, we realized that some of us managed to create programs which the big processor seemed to process quicker than others. This led us to try to figure out how we could one day create solutions to our problems which would actually outrace the other people drawing on the same resource in the same processor, by making our problems more likeable and more comfortable for a computer to solve. We had huge debates about whether computers can feel, and how long it would take before computers started to feel.

This was 1987. One of us later became the father of the Estonian e-voting system, another became the youngest-ever academician in computer science in the Estonian Academy of Sciences, and the third one is me. There was also a fourth, but unfortunately I do not know what became of him.

You are in a country where politicians really get it – most of us know what is automatic, what is autonomous, what is narrow AI and what is the singularity. It is not very common among heads of state, I can assure you, but here it is.

The reason is that twenty years ago in Estonia we realized that if we did what other developed countries did, then we would never catch up with their living standards. We were told that if we simply did what others did, then we would be as good as they are. We said: “Hold on!” We would have been slow followers. Even if we had been quick followers, we still would not have been able to catch up. We decided that we needed to do things differently.

Since then Estonia has been a rebel state. Our monetary reform went against the advice of the managing director of the IMF, Michel Camdessus. Our flat tax reform was what everybody said we should do in theory, but also agreed was impossible in practice. Our digital ID likewise started because we felt that others were not doing it, hence we would. What we did not predict in the case of digital ID is that the world would be so slow in following. No, not the world. The private sector was racing ahead, and for some reason that we do not understand – because in Estonia you do not separate people into private and public sectors – governments were left behind. They are now trying hard to catch up, but they never can. Do you know why? Because we are in a positive spiral: whatever appears in the private sector, Estonian people demand from our public sector. Frankly speaking, we may one day lose out in this game simply because we are not as rich as Facebook or Amazon, but so far so good.

In this country we are no longer talking about a digitalized state; we are talking about proactive services, because the private sector has started to offer them. Our people demand that algorithms come together, make decisions about them, and then act on them. Do you believe this?! Here in Estonia, in an EU member state, you have Europeans demanding that algorithms independently take decisions about them.

We already have such services in this country. If you are a retired person, then you may be entitled to a top-up of your pension. You do not have to know that such a service exists. I just read that 35 percent of French people who have the right to demand help to survive do not apply, because they do not know such a service exists. Well, our pensioners do not all know about this support for a single pensioner living alone either. They do not have to, because it just comes to them – we have our address registry, and we know they are retired because we pay out their pensions every month, so the two systems can easily decide.
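The cross-registry check described above can be pictured in a few lines of code. This is a hypothetical sketch only – the registry names, fields, and the eligibility rule are illustrative assumptions, not Estonia's actual systems:

```python
# Hypothetical sketch of the proactive pension top-up described above.
# All registry structures and the eligibility rule are assumed for
# illustration; real registries and rules will differ.

def lives_alone(person_id, address_registry):
    """A person lives alone if nobody else shares their registered address."""
    address = address_registry[person_id]
    cohabitants = [pid for pid, addr in address_registry.items()
                   if addr == address and pid != person_id]
    return not cohabitants

def proactive_topups(pension_registry, address_registry):
    """No application needed: cross-check the pension registry (who is
    retired) with the address registry (who lives alone) and return
    everyone who qualifies for the top-up automatically."""
    return [pid for pid in pension_registry
            if lives_alone(pid, address_registry)]

# Toy data: three residents, two pensioners, one living alone.
addresses = {"A": "Pikk 1", "B": "Pikk 2", "C": "Pikk 2"}
pensioners = ["A", "C"]

print(proactive_topups(pensioners, addresses))  # ['A']
```

The point is that neither registry was built for this service; the decision falls out of joining two data sets the state already maintains.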

Another example. People in Estonia say: “I just had a baby and am entitled to social services – why do I need to go through the trouble of ticking boxes online to get them, when the government knows I had a baby, knows I am entitled, and knows my account number because I pay taxes?”

Here comes the interesting question for you. We cannot really move further now, because our current legal space is fine for providing 99 percent of Estonian public services online, but they are all “on demand” – I go online and I demand, hence I opt in. Now we are moving into a domain where you have to ask: “If everybody is in and the algorithms decide, where is my opt-in or opt-out? Am I automatically opted in when I receive my digital ID?” Lots of ethical questions come into this game, and these are the questions you sitting in this room need to answer for us. That classmate of mine who became the youngest-ever academician told me something about ten years ago which I did not get then, but understand really well now: “Look, Kersti! The systems will make decisions about us, and it is not going to be the case that engineers can always come and explain to us why and how the systems decide – the systems themselves have to be able to do that; they have to be accountable.”

This is probably one of the biggest challenges of our world. You are now literally sitting in the black box, and you have huge power. We may even think you have more power than you really have to put a certain bias into what you are doing and to influence us. Quite a lot of digital systems have lost people's trust, and politicians are racing in – not to rebuild that trust, but to regulate something far away from them, somewhere in the private sector.

We in Estonia are not like that. We are really trying to grasp what you people can do, and we are trying to incorporate it into our legal space in such a way that we become a safe place to play with all those technologies, to practice and use them. And do you know why? Because our people demand that we stay at the cutting edge of technological development. But even we cannot trust you just like that; we need to understand. I see this common miscomprehension daily when I talk with my colleagues. A year ago at the Munich Security Conference we repeatedly tried to discuss AI and security, AI and nuclear weapons. I regularly saw the discussions fall from AI down to the level of the merely automated, and this happens constantly. This is not safe or good for the world. Together – you in this room and Estonia – we can prove to the world that AI can be used, understood, and regulated in a way that does not hamper development, but actually supports it.

I know that the discussions that follow are extremely technical, but please keep this in mind – you cannot thrive without the necessary legal space. Digital systems and the internet as a whole functioned for quite a long time largely without a legal space, and now the blame is put on digital, on the internet, on the private sector, while we are still fighting to convince our colleagues that it is our fault. On the internet, problems start with the lack of identity. Who has the right to provide safe identity, passports? Governments. So why blame Google, Facebook or Amazon for allowing nicknames? They cannot issue passports. That is our job.

We will face similar misunderstandings daily. You need to help us solve them here in Estonia, and this way we can all work together.

What are the main problems that politicians see with AI and its development? I have a little worm in my pocket that I normally bring out when I need to explain this dilemma to people. I know it is an extreme simplification, but after all, I am just a politician. Let us agree that this little worm is narrow AI, relatively well developed though, and that it is sent into somebody's nuclear system. Now, what will it do there? Those who sent it there will of course think they know, because they knew the system and what was supposed to be in it. In a context where the system only contains the programs my little worm knew about, it will do exactly what it was meant to do. Now let us imagine someone has made the most common cyber hygiene error and logged into the system with a computer that has also been used to read the news, and the news remains somewhere, available in the system. To make it more interesting, let us imagine this is the news the narrow AI reads: that the UN is preparing a Security Council vote to ban AI in military use. My little worm has found an additional piece of information in the system that it was not meant to find. It knows that it is AI, and it also knows that it is for military use. What will it do now? My fellow politicians do not know the answer; I do not know the answer. Maybe you will tell me this is impossible, but I am afraid we have to think the impossible.

My example is extreme, but I think we will daily be facing this kind of situation, where those who deploy systems with narrow AI use them in a context where they think they know the whole technological and human environment. But there will be flaws, errors, different bits and pieces of information. How do we manage all this mess? For you it is not a mess, but for me it is, I am sorry. I badly want all this kind of narrow AI to take responsibility for the mundane tasks of human beings, because this will allow us – me and other human beings – to specialize in something that the machines will never be: compassionate. But for that, I need AI to be demonstrably safe.

There are other aspects. One, in my line of thinking, is how we as humans will cooperate with narrow AI. Almost two years ago Estonia held the Presidency of the Council of the EU, and we had those little package delivery robots running around here in Kultuurikatel with chocolates on board. Suddenly I realized that something weird was happening. The people here were smart – EU commissioners, presidents, ministers – and yet their animistic instinct came out when they saw that little Starship delivery robot. To be honest, it looked a little bit like a dog – same size, runs around – and people wanted to take a selfie with it. To stop such a robot you step in front of it; it gets confused and stops. But in this case people started to call for it: “Come here!”

It is a perfectly normal reaction, and to be expected, because you want your creations to run loose around us. We do not yet know how we will adapt to it. It is easy right now. These Starship robots run loose around Tallinn, and it was relatively easy to educate the whole Estonian population that no, this is not a human being or a dog wanting to cross the road, so you as a car driver do not stop and let it pass, because it will never go – it wants you to be gone, and then it crosses. Imagine, we had to broadcast this to people, and Estonians, being relatively calm about technology, learned. But this is only one system, one piece of technology. Can we really be expected to broadcast to everybody the particularities of, for example, home care robots? If I get a home care robot for my granny, what is the manual I or my granny need to read to understand how to work with this machine? I know that you will do everything to make it act and look like an actual live carer. Yet you know it will always fall a little short of that, especially the first beta versions, which only Estonians would dare to use anyway. Not because we love our grannies more than anybody else, but because our grannies are totally Facebook-compatible – we forced them online 20 years ago by offering free banking services.

You see, we have a lot of questions. So how would you standardize what you are doing, knowing that it will always be that diffuse? This development is not like developing nuclear weapons, which fell under the narrow control of states and military use, so states always roughly knew what the systems were capable of and were able to regulate them. This is different. What kind of standardization do we need to create so that human beings will not have to learn every wave of technology, but only a set of standard principles of interoperability with narrow AI? We need this; otherwise we are not going to convince people that it is safe. Again, work with us – we are with you. Here in Estonia we are actually adamant that we need to provide our people with access to the best technology.

For 20 years we were quick followers. We never created any technology ourselves in the early phases of the Estonian e-state. All this technology was tried and tested, and quite cheap as well. So, you see, we are used to this.

Yes, we are now also turning into somebody who actually develops technology, because we have such a high number of unicorns and start-ups, we have a start-up visa, and you can all come and practice in our wonderful legal sandbox.