
President Kaljulaid at the Hanaholmen Business Forum


16.10.2018

Hi everybody,

I am happy to be standing in front of you once again. Last year we spoke about e-governance for beginners. This year we will take it a step further and talk about the future of our Nordic societies and how Estonia can also catalyse the traditionally wonderful Nordic economic and socio-economic models. Because I do believe that Estonia has already catalysed a lot of changes in the thinking of the Nordic countries.

One of my best friends, the Swedish politician Hans Gustav Vestberg, who unfortunately passed away this year, once told me: “You know, we established the tax reform committee because we were looking at what the Estonians were doing. I mean, your tax environment was so radically different that all our companies were rushing there. We realised we had to change.”

I think that is why the Nordic countries are best positioned to benefit from the Estonian catalysing factor in the digital world and, hopefully in the future, in the AI world as well. Because you are already there. You are close, just as you were when Estonia introduced its innovative tax system, established e-governance and created the Estonian Genome Law – which has received too little attention, but is as much a part of this legal innovation of the state as digital Estonia is.

You see it first-hand, because your companies are there anyway. There are more Nordic companies in the Estonian business environment than companies from any other region globally. That means that, perhaps not consciously but unconsciously, you have been experiencing the Estonian environment. And then you go back home and ask your governments: what is going on there, why don’t we have it? Similarly, I am sure that Swedish businesses pushed Gustav Vestberg and other Swedish politicians to create this tax reform committee. And because your businesses and your people see what is going on in Estonia, they come back home and demand that their governments do something similar (this is probably what happened with the Finnish-Estonian cooperation on Palveluväylä).

Of course we still have a lot of problems with this cooperation. Technically, Palveluväylä and X-Road are fully interoperable. We can all work together, but the difference is that Estonians gather data while Finns gather paper (in the digital format of the PDF). We still have to work better together to make sure that this transformation is smoother and quicker. This is extremely important if we all want to gain further from our little-sandbox way of thinking when it comes to artificial intelligence.

Let me now turn to how the Estonian political leadership sees artificial intelligence. Last year I was at the Digital Summit of the European Union Council Presidency in Tallinn, and you have to accept that there were only smart people in the house that day – commissioners, presidents, prime ministers, high-level civil servants. There were also robots: Starship’s simple parcel delivery robot – a totally stupid thing, I have to say – was offering chocolates to these smart people. What do you think these smart people did? They wanted to get a picture with this robot, of course. And how do you think they did it? Did they do what a normal person who knows what a robot is would do – step in front of it, make it stop, take the photo and then move away to let the robot move on? Do you think they did that? Can you guess what they did instead? They started calling it: come here, I want to take a photo with you! So you see, it is not only that we must start to think about how to develop artificial intelligence, but also about how to adapt our societies to live with it.

You have probably all seen Sophia making waves everywhere. It is as stupid as your Siri, yet everybody tries to talk to it. We constantly overestimate these beings among us, which are becoming more and more numerous and smarter and smarter in specific ways. On the other hand, we know that for a long time they will remain far from singular. Meaning that they will remain very good at something, but – to use human terms – deeply autistic in everything else they do among us in society.

Here comes the big challenge for us. Will we as human beings be able to adapt to these beings operating among us in a safe format? Are we, as the makers of the legal space, able to create a legal space in which our people and all businesses can create these machines and make them operate safely? In Estonia we have realised that we have to legislate for it. That is why our traffic code now covers cars and robots moving on the streets together, and describes who is liable if there is an accident. But this is not enough, and we have realised that now. We cannot keep legislating forever based on currently existing technology. It is too slow. You would have to go through your legal space all the time and legislate for every kind of technology. We cannot do that. But we need to. So, how will we do it?

Our proposal in Estonia is that we legislate for all kinds of algorithms and human beings, their interrelations and their cooperation in society. And we are not only thinking about questions like who will pay the costs if a robot causes or gets into an accident. We realised that, first of all, we as a human society want artificial intelligence to work with us and for us. But that means our mindset has to change completely, because now somebody who is not human thinks about us as humans and makes decisions which will affect our lives. Estonian lawmakers are not yet working particularly actively on this problem, but in the Estonian public sector there are working groups who work daily on this issue alone. How will we manage such a society? I am confident that there are a few Estonian politicians among us who are champions of the smart legal space-setting in which Estonia excels compared to other countries. Whoever wins the next elections will have to do this for the Estonian people and legislate for algorithm-human coexistence in our society.

Why am I saying this? Because Estonia is a fully digital state. Our people can apply for all services online, but they still have to apply. Now they are noticing that private sector companies have already started to offer their services proactively – taking care of people, reminding them. Yes, our system already reminds you that your driver’s license is expiring and asks you to do something about it. But that is not enough. People want more. What people now want is this: you know I have a child, you know I have a right to universal child support in Estonia, and you know my bank account number. So why do I have to log on and apply? Just pay me! But this takes us into unregulated territory. This is a proactive state; this is algorithms coming together, analysing the data they have and making a decision about a human being. Doing something to a human being.

We already have a pioneering service of this kind. We have a database of Estonian retired people and their addresses, and we know who lives alone, because nobody else is registered at that address, so we automatically pay them a premium on top of their pension. Pensioners living alone are considered to be in a more precarious situation than others. It is a proactive service: the pensioners do not have to apply, and they do not even have to know that the service is offered. It happens automatically. Two databases are making a decision and doing something to a human being. Fine. As long as this is a pilot project, you can do it without adjusting your legal space.
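To make the mechanics a little more concrete, here is a minimal sketch of the kind of cross-referencing such a proactive service implies. It is purely illustrative: the registry names, fields and flat premium below are invented placeholders, not the actual Estonian systems or rules.

```python
# Purely illustrative sketch with invented registries and fields, not the real
# Estonian systems. It shows two databases jointly "deciding" to pay a
# living-alone premium without any application from the pensioner.
from dataclasses import dataclass

@dataclass
class Pensioner:
    person_id: str
    iban: str  # account the pension is already paid to

@dataclass
class Residence:
    person_id: str
    address_id: str

def living_alone_payments(pensioners: list[Pensioner],
                          residences: list[Residence],
                          premium_eur: float) -> list[tuple[str, float]]:
    """Cross-reference the pension and residence registries and return the
    automatic payments for pensioners registered alone at their address."""
    # Count how many people are registered at each address.
    residents_at: dict[str, int] = {}
    for r in residences:
        residents_at[r.address_id] = residents_at.get(r.address_id, 0) + 1

    address_of = {r.person_id: r.address_id for r in residences}

    payments = []
    for p in pensioners:
        addr = address_of.get(p.person_id)
        if addr is not None and residents_at[addr] == 1:  # registered alone
            payments.append((p.iban, premium_eur))
    return payments
```

In a real deployment such a decision would of course have to be logged, auditable and grounded in law – which is exactly the legal space this speech argues has to be set.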

But do you think the Estonian people are not going to demand more? I mentioned child support, for example. The Estonian people are going to demand more, and they are going to demand it from whomever is in power. This is why I now see the Estonian digital state as citizen-driven. It is the people, the Estonian citizens, who demand more and more, creating better and better sandboxes for new technology which can be used by the people in both of our countries.

Actually, it is not simple at all. This is the horror part of the story. Artificial intelligence is not easy to regulate, and we need to get everyone in the public sector thinking through the really horrible scenarios. In Estonia we are not afraid, because we have seen these machines coming and we are used to embracing smart technologies. Elsewhere I am not so sure. Yet these machines are coming, and if the legal space is not set, businesses run deep risks.

Two years ago it was extremely hard to explain. Now it is extremely simple. Have Facebook and others had big trouble with lawmakers in many countries, including in the US and the European Parliament? Yes, they have. Have they broken any law? No, they have not. Have they lost value because lots and lots of complaints have been brought against them? Yes, they have. Who is the guilty party? These companies? They offered people services, including identification services which governments did not offer, bar a few countries – Finland, Estonia, Luxembourg, Denmark. Some countries bothered to make sure that their people are not left alone and can identify themselves on the Internet. All others did not. And who is guilty? The companies.

That is why I think it is very important that we in Estonia already understand what the artificial intelligence world will look like, and that we will legislate for artificial intelligence. Yes, I think still in a narrow format, but one much more developed than the delivery robots we have already regulated. This makes sure that people are protected, but not only them: capital, companies and ideas will also be much better protected.

The example of the Estonian genome laws is very similar to the Facebook example. We have regulated how population-level data can be gathered and how it can be used, in a protected format, to develop different kinds of treatments for diseases and also to teach web-doctors. Yes, you can use the genome data of Estonian citizens in the format we have described in the law – and you are safe! Many countries operate digital databases, mostly in the private sector. Are you safe using that data, or do you risk Facebook trouble? I think you risk Facebook trouble.

The same applies to artificial intelligence. If a state has never thought about a robot offering services to handicapped people, retired people or very old people who can no longer manage by themselves, then are you safe creating such a device, one which tries to keep them company and help them? No, in the current legal space you are not. You are risking all kinds of liabilities – unless the state has really thought through what the rights and obligations are for a company that creates such a device. For example, as a lawmaker you would need to think about when exactly artificial intelligence machines can be delivered into society. Will it be with supervision, meaning that, yes, there may be hundreds of such robots taking care of handicapped people at home, but there is one supervisor somewhere? Or are people entirely off the loop, without supervision? Those are two different situations.

However, you need to have legal clarity about which is okay and which is not. It is very easy to say: we regulate – you take all the responsibility, you will be liable if something goes wrong, end of story. The other approach is to recognise that these things could be dangerous to these people. If Siri or Sophia or this parcel delivery robot does something stupid, it is cute and we laugh about it. However, if a robot taking care of a handicapped person does something stupid, it may get nasty and we will not laugh about it. How do we regulate that? First of all, in the first stage it is extremely important to say that these things can operate, but not with people entirely off the loop – just as with self-driving vehicles.

Is it fine to say that we simply forbid them because they are not safe? Well, they are actually much safer than we human drivers are. But should we unleash them into our society without proper supervision? Or will we demand, in the first instance, that there is a person in the loop somewhere? You would not need a driver for every bus, but maybe one for every fifty you would. We, as the setters of the legal space, must think of these situations at home. In Estonia we are thinking. I hope governments elsewhere are also thinking, but I am not so sure. We are thinking, and I am sure that this can catalyse our Nordic economic model to be more competitive globally, because we will create a kind of AI sandbox for you – quicker than it will be available elsewhere. Since you are already here, you will be able to benefit more than, for example, Italian capital. They can benefit as well, but you are closer and more intertwined with our economy.

In Estonia we promote our digital solutions globally. For more than ten years already, we have had the Estonian e-Governance Academy, in cooperation with UNDP, to promote these services worldwide. We have promoted them to African countries for leapfrogging; we promote them to Caribbean countries for resilience building, because their registries regularly drown in hurricanes. We, and especially our businesses, work with a large number of countries on this. We do not want to leave our businesses alone in a world where the understanding of how e-governments are protected by international law is not set and clear. Therefore, even if we are 1.3 million people and our resources are not that big, we need to go out from Estonia to protect our businesses developing these systems in the Gulf countries, in Africa and elsewhere – to make sure that the environments they create for other countries will not create legal trouble for them afterwards.

Having an e-governance model makes a state more sensitive to cyber-attacks, particularly its public services. Anybody can steal all kinds of paperwork from a state, but we know how to handle that. It is probably not such a risk to sovereignty either, because copying paperwork takes time, and people do not expect paper-based services to happen daily, so they will not be too disappointed if they have to come back next month. In the digital world, however, you risk your sovereignty if somebody attacks you. Do we want our companies to be liable if they have created an e-government environment somewhere and then there is an external attack? Or do we want the legal space to at least describe, for these kinds of situations, that the analogue law protecting our sovereignty internationally also applies in cyberspace? This is the first step.

The second step is how exactly we can react on the offensive side. Let us say there is an attack coming against my digital ecosystem. What is my right to protect myself? I would say it is completely uncovered territory. I may see that this attack is coming from some country. That country tells me it is not able to take the attacker out – it is a digitally failed state, as we call these kinds of states. Can I then go after the attacker myself? What kind of penalties do I risk? I do not know! Therefore, if we want to take this further, we have responsibilities to humankind, to our own businesses and to yours, if you work with us as you do: responsibilities to take this discussion to the United Nations Security Council and to try to build international understanding of how analogue law applies in cyberspace. It is our obligation, because we already are a digitally transformed state. We do these things as an extension of what we have been creating in Estonia – our legal ecosystem for e-governance, which we are now developing for artificial intelligence as well. We need to protect it.

Of course, if we think of AI in international warfare – now I am going to really scare you. Let us imagine there is a little AI worm, and this little AI worm is meant to somehow corrupt a nuclear weapon somewhere. We know that these kinds of worms have been used already – not intelligent, but algorithm-based. This is not unheard of; it has already happened. So, let us now imagine that we have an AI worm, a pretty smart one, that we have trained to do something to this system. The smart worm goes there, but we have trained it based on the information we think it will find in the system, the information normally found in that kind of device. And guess what? Our little AI finds something else in the system. Let us take a very common situation: somebody has contaminated the system simply by reading the news on the same computer they use to operate it. It is strictly forbidden, but everybody does it all the time; it is quite a common cyber-hygiene incident. Now our little AI worm goes into the system and sees this news, which it will combine with the information it has about the system and about its job there, which is to do something to this system. What does it read? Let us make it even more interesting. It reads that in the United Nations, countries are gathering to vote to ban the military use of artificial intelligence. So now what do we have? We have a little AI worm which knows that it is in a military device, and therefore knows that it is an AI for military use. And it is reading that somebody is going to ban it – kill it. What will it do?

Do we have an answer today, in our legal space, internationally? No, we do not. Do we even have an answer to how we will monitor the creation of artificial intelligence? With current technology it is probably possible in two ways. One is that we discover a black hole into which energy is disappearing. Do we have a system that then allows us to send a monitoring mission to this spot, because somebody is probably creating artificial intelligence there? The other thing we can physically monitor is neural network computer systems: if they are big enough, there is a risk that somebody is creating an AI. What triggers an international monitoring mission? It might happen within a state, but it may happen – and my advisers tell me it will happen – in the private sector. Do we have a monitoring framework for this kind of creation of a truly singular AI? I think we do not. But if we, politicians, think of these things, we are a little bit ahead.

This is exactly why I think we should not go on regulating one law after another, tediously, for current technology like the delivery robots. If we think in terms of singular AI and regulate for singularity and for AI – specialists predict a 50 percent probability that it will exist by the second half of this century – we will automatically cover all the autonomous and automatic systems which already exist. Therefore, our legal space will become permissive and at the same time protective of humankind. It will take us time to manage this globally – I admit, we are only 1.3 million people. Joined with the Nordic peoples there are many more of us, but I think we all have hard work ahead of us. We can create this kind of legally permissive space within the Nordic countries – one that safeguards the development of technology, keeps our people safe, keeps their data safe, and keeps businesses and investments safe.

I think you need to be in this sandbox with us. Yes, Estonia is probably a little bit ahead in setting the legal space, but we need quick leverage, and you, the other Nordic countries, are the best quick leverage. Your companies already work with us, you demand the same developments and services from your own public sectors, and therefore this Baltic, Estonian, digital sandbox merges with the Nordic, Scandinavian socio-economic space, and together we stand in the 21st century as the most advanced region globally – if we play our cards right.

You did part of your work during the time when we were occupied. We wanted badly to catch up with you, and that is how we learned creative legal space-setting. This is our advantage. If we put these two things together, we create an unbeatable team. And this is what I want us to do. Above all, Estonia is a country where we strongly respect the principle that the public sector creates the legal space and all of you, the businesses, use this space. Nothing more.

Welcome to Estonia, to those of you who have not been there yet!

 

Thank you for listening!