President of the Republic at Latitude59

President of the Republic at Latitude59 © Ahto Sooaru


Dear guests of Latitude59!

It was only two years ago that I explained to you why and how we created our digital state and society in Estonia. What a digitally disrupted society looks like, how it normally works, and what happens in a society when it does not work – this is what will be discussed today.

Two years is a long time in tech development.

Firstly, two years ago the share of the IT sector in Estonian GDP was slightly above 4%. Now it is 7%, and may I assure you, the GDP has not been shrinking, it has been growing, so growth in this sector has been particularly strong. We are growing not so much here in Estonia as globally. Estonian companies are now doing unexpectedly well in Africa – the continent looking to leapfrog, one not held back by frozen structures.

And very importantly for Estonian companies – this is not a continent saturated with big companies who come with their solutions and say ‘please adapt your needs to my solutions’. No. They are open to the Estonian way of thinking – that everything, particularly when you deal with the e-government and public e-services, has to be tailor-made because this preserves the culture.

Estonian companies are well regarded abroad. Because Estonia is short of workforce and our companies are tiny by global standards, they forge partnerships with local SMEs. This makes it a very good example of 21st-century win-win economic cooperation, where you take a respectful view of the need to develop the economy of the country in which you are selling your services.

Thank you, all Estonian IT companies, for being a great example of this kind of international trade and development. I am so proud of you all! Please join me in a round of applause for Estonian tech companies working abroad.

I promised to tell you what happens if the digital society faces a hitch and cannot function properly. In the second half of 2017, Estonia fell victim to the famous chip problem of an international company – I am too polite to name this company – which provides chips for identity cards and similar functional cards, not only here in Estonia but in many other countries.

Millions, maybe a billion people were affected by this problem. But it posed a serious risk only in Estonia, because here it affected at least half of our ID cards. By now, most Estonians do not remember that we had this problem, because it was overcome. But the case study is actually very interesting.

Some countries whose ID cards are based on the same chip simply closed their identity cards. That resulted in practically no newspaper articles and no problem in society – which made us very sad, because we realized that in many countries, where digital IDs do exist, they are not being used; otherwise people would have had problems.

It caused a huge problem in Estonia, but we learned something: once you have digitalised society and passed a certain line, the thought 'okay, if the digital services do not function, I can do it on paper for a couple of days or weeks while I fix the problem' is no longer an option. People are used to digital services, and they don't know where to go to seek paper-based ones. Even if they knew, they would hate it so much that you would have a riot if you forced them to use them.

People coming from developed countries and elsewhere probably don't understand the Estonian population, who get upset if the system for checking and renewing driver's licenses is down for two hours. Come on, I couldn't renew my driver's license – what's going on? But if you suddenly face the horrible situation where you have to go and manually renew your digital ID card certificates at the Police and Border Guard Board – oh my God! The queues were kept down to, let's say, an hour, maybe an hour and a half in the worst cases. We almost had a riot.

This is a lesson to all who are thinking of digitalising their countries and societies. You truly are on a different planet and there is no turning back. You have to have a digital alternative to digital; you can no longer function with paper alternatives. We perhaps could have managed that in the first five years after our digital transformation, but not anymore. We have to make sure that we have several ways to access our digital state, and the necessary level of redundancy in our systems, so that the digital state itself can function at all times. We are helpless when it comes to paper. Most Estonian people are therefore helpless when they need to settle in other countries, because they simply don't know how to do it – which is probably a positive thing, because they will appreciate our own low-bureaucracy environment all the more.

It taught us that once the first step has been taken, you will be citizen-driven. This is now leading us to think about how narrow AI could be deployed in our public sector. There is absolutely no question of whether we should or should not, because our people model their expectations of the digital state on the private sector. They see that the private sector is offering something, and they say 'I want this from the public sector as well'. This means that we now have to start thinking about a pro-active state and pro-active services.

We already have one such service. If you are a retired person and live alone, you are entitled to a top-up of your pension once a year. These people do not even have to know that this service exists, because the systems themselves are able to establish that a person is retired and living alone. Since we know that they receive a pension, we can simply top it up, and people receive this money without any application.
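As a toy illustration of the logic described above – everything here (the record fields, the top-up amount, the function names) is an illustrative assumption, not the real system's design – a proactive, application-free benefit check could be sketched like this:

```python
# Hypothetical sketch of a proactive benefit: the state's registries already
# record who receives a pension and who lives alone, so entitlement can be
# decided and the top-up paid without any application being filed.
from dataclasses import dataclass


@dataclass
class Resident:
    person_id: str
    receives_pension: bool   # fact drawn from the pension registry
    household_size: int      # fact drawn from the population registry


PENSION_TOPUP = 115.0  # illustrative amount, not the real figure


def proactive_topups(residents):
    """Return (person_id, amount) for everyone entitled to the top-up."""
    return [
        (r.person_id, PENSION_TOPUP)
        for r in residents
        if r.receives_pension and r.household_size == 1
    ]


residents = [
    Resident("EE-001", receives_pension=True,  household_size=1),  # entitled
    Resident("EE-002", receives_pension=True,  household_size=3),  # not alone
    Resident("EE-003", receives_pension=False, household_size=1),  # no pension
]
print(proactive_topups(residents))  # only EE-001 qualifies
```

The point of the sketch is that eligibility is derived entirely from data the state already holds, so no form, signature, or application step appears anywhere in the flow.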

People want more of these kinds of services nowadays. For example, if you know you are entitled to universal child support, and you know that your state knows it too – and also your bank account number, because you pay taxes – then our people ask more and more often: 'Why should I even apply? You know I have the right, so just give me this support. Leave me the option to opt out – if I want to opt out, I will tell you I don't want this universal support.'

We are happy to serve our people here and to keep developing these services, which will become more and more complex, with algorithms increasingly often making decisions about people. As a state, you then have to give people an option to opt in or opt out of these services. It is very comfortable if somebody pays you something automatically, but not all state services consist of a payment. For example, it is also possible to find young inactive people by looking at the data we have about them. We can check all school registries, and we can check our employee and employer registries, and notice that a young person does nothing – nor is he staying at home with family or children. We now have a pilot programme in which this data is put together and given to a local government, which can then send its social services to find out what problems these young people have.
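The cross-registry check described above can be sketched as a simple set intersection. Everything in this sketch – the registry names, the age range, the sample data – is an illustrative assumption, not the actual pilot's design:

```python
# Hypothetical sketch of the pilot: a young person who appears in no school,
# employment, or caregiving registry is flagged so the local government's
# social services can reach out. Registry names and ages are assumptions.
def flag_inactive_youth(population, school_reg, employment_reg, caregiver_reg):
    """Return IDs of young people found in none of the activity registries.

    population: dict of person_id -> age
    the three registries: sets of person_ids known to be active there
    """
    active = school_reg | employment_reg | caregiver_reg  # union of registries
    return sorted(
        pid for pid, age in population.items()
        if 16 <= age <= 26 and pid not in active
    )


population = {"A": 19, "B": 22, "C": 24, "D": 40}
school_reg     = {"A"}   # A is studying
employment_reg = set()   # nobody in this toy sample is employed
caregiver_reg  = {"C"}   # C is at home with children
print(flag_inactive_youth(population, school_reg, employment_reg, caregiver_reg))
```

Here only "B" would be flagged: "A" and "C" appear in a registry, and "D" falls outside the age range. The design question the speech raises is exactly where in this pipeline an opt-out should sit.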

It sounds perfectly positive and right. Yet there are people who may want to opt out. At what point does opting out happen? Initially you don't know that somebody is looking at your data. Should we give people an opt-out at an early stage? As it stands, they have an opt-out at the point when somebody comes and says 'Hi, I don't think you are working or doing anything – can I help you?' Then you can simply say 'Go to hell, I don't want to see you.' But technically, is this the right point? Maybe it is, but we have not discussed it as a society.

People constantly ask questions such as: 'Is it okay that a bunch of algorithms is making decisions about me? Who is going to explain this decision to me?' As engineers, you know that you can build systems in such a way that you are able to explain how a decision was arrived at. It is the growing responsibility of the engineering and IT community towards society to make sure we do not lose people's trust in these kinds of developments, these kinds of services.

People need to understand, so these services need to be transparent. If a machine makes a decision about me, and I don't like it and I query it, then the answer cannot be 'computer says no.' We know that with automated visa applications, for example, this is very often the situation globally: you make one stupid error – you tick the wrong box, saying, yes, my 3-year-old has taken narcotics to your country, or something like that – and you can't get the visa. That's enough. But it cannot be this way. If we want society to truly benefit from this, then 'computer says no' should never be the answer to someone's question.

This is a problem we are now tackling here in Estonia. If we start offering our people what they seem to be looking for – more pro-active services – then what are their rights, and what is their understanding of how this system works? This is important, because otherwise we cannot move ahead. And we do need to move ahead, because there is a citizen drive. It is not that our public sector, in all its parts, is very ready to assume future goals – people can be as lazy here as everywhere else. But it is the people's demand, and we need to work on it.

There is another important theoretical point: what if the private sector offers something which we as a state cannot afford? This is very likely, because Estonia is a tiny country. What if our citizens demand something that we in principle cannot afford, but which is very mainstream and common in the wider world?

Therefore, our recipe from the beginning has been to scan the horizon, see what is coming up, and then set a legal space for it. Our politicians do not try to buy it or influence it.

Because, if you haven't noticed, the Estonian e-systems are not, technologically speaking, Estonian developments unheard of globally. They are cheap, widely available mechanisms. What we do differently is legal space-setting: we set an environment in which these technological developments can be used by the public sector, the private sector and all individuals. Through safe legal space-setting, investments are protected, people's data is protected, companies are protected, the state is protected, and everybody knows their rights and obligations when operating in this legal setting, in this technological framework.

It sounds like a no-brainer – you have done it for 20 years, just continue. If you see something new coming, just declare it legal in Estonia and enjoy the proliferation of start-up companies here. True, this is still going on. But the problems are getting more and more complex now, and I will mention just one.

We see these little Starship package delivery robots running around in the room today, and they are among us in our countries now. When they got the right to move alone and unaccompanied on Tallinn streets, it turned out that people actually treated them as other people, or at least as pets. When a Starship arrives at the side of the road and there is a car, the car stops. The robot, of course, stands there and will not cross, because it is trained the other way around. It is not trained like children – the car stopped, please go ahead and cross the pedestrian crossing. It waits for the car to go and crosses when the road is empty, not when the car has stopped. We had to train our drivers, and people in general: this is what happens, please do not treat them as human beings or pets, please treat them as robots.

This led me to think about the next decade. It is only one Starship, and its behaviour was pretty easy to broadcast. What if the next robot on the street has different behavioural patterns? Say I finally have a robot which helps me take care of my old grandmother, and it has some very practical habits: it knows how to do its job but is really bad at understanding people's feelings, and so on.

How will we as human beings live in such a world? How can we regulate and set the legal space for these machines to operate among us, and for us to remain safe in this environment? My conclusion is that you need a certain level of standardization, also in legislation. I am quite sure that Estonia is among the front runners globally in thinking about these problems, because we want to have this sandbox for new technology developed here in Estonia. These narrow-AI functions will not only be algorithms; they will have some kind of physical presence as well. And we will want them here.

We need to tackle this problem, but it is no longer just a legal problem – it is increasingly becoming an educational one. The Estonian digital state has always been an educational challenge: with the Tiger Leap programme we had to teach people to go online and get used to not seeing anybody when the state is serving them. But it gets more and more complex. We now need to bring our children up in schools in such a way that they know that robots are not people, that they do not think like people, and that they only act like people to the extent to which they are trained. Or like dogs, or whatever. Like cars. Like buses – to the extent they are expected to function.

How do we manage this new environment? As a society, Estonia is now thinking about it, and we need to start tackling it. We have robotics in the schools, and in some Estonian schools there are little robots running around, following the dots that the children tell them to follow, and so on. But the trouble is that we do not know what world our children will be living in. What will their technological environment be? For the short term, Estonia has a solution – trying to legislate narrow AI in a mainstream way, so that we have a common understanding that every law applies to all forms of narrow AI in a certain way. But this is a short-term solution, and it will apply only to the future we can foresee, the next 10-15 years. The trouble is: are our kids, going to school now, going to be ready for the world 20-30 years in the future? How can we train them, and what should we teach?

My own personal conclusion is that however high-tech the world becomes – when most menial tasks, and up to a certain level intellectual tasks, are performed by robots living among us, and we have to learn alongside them – the massive change in society will be in training and education.

I know it sounds old-fashioned, but it isn't, because we need to go to the schools and tell our children today: dear kids, we absolutely don't know what your world will look like. One option is a tech-rich environment in which we have conquered climate change and you will live happily ever after – but we do not know on which technological bases.

Or you will have a world in which we have not managed to control climate change, and you will be using all these nice technological developments to fight each other for an increasingly small space on this planet. Either way, we don't know how you will live. The only thing of which we can assure you, dear kids, is that there is one thing we cannot delegate, one area in which we can specialise and will always have a competitive advantage over all kinds of machines: being a compassionate human being.

Being a compassionate human being becomes the most important skill for future generations: being compassionate human beings to one another, providing the human interaction which nowadays we still get more or less automatically along with the services and goods we get from society.

But even now, take the journey from the moment you buy a ticket to fly somewhere until the moment you exit the plane at your destination, and count the interactions with real people on such a journey 20 years ago and today. Today there are almost none.

This will apply more and more often in the future. To simply exist, you do not need human contact. But we need human contact to remain human beings, and this is where our competitive advantage will lie. We need to make sure that we teach our children true values – the value of human life, human rights, all universal rights, and the rights of states and nations to choose their future – and say: dear kids, we know this will be valuable knowledge. The ability to be a compassionate human being is something we want you to get from schools, from training. The rest we cannot teach; we don't know what your technological world will be.

But we can only put trust in this future world if you, the IT people and engineers, make sure that people at least understand what is inside the black box. Without that, I am afraid there will be a vital breakdown between people's trust in new technologies and their technical capacity to deliver. This job is yours. Legal space-setting and pooling – this job is mine. But we both need to do it so that our kids will be able to live in a safe world.

And finally, I already mentioned climate change. Frankly speaking, developing digital societies makes no sense if I know they will only last 100-150 years because the planet will be uninhabitable for the human species by then. I am not interested in such a short-term perspective, and therefore we absolutely need to solve the climate change issue. First and foremost.

Thank you for listening.