
At the Annual Baltic Conference on Defence 2019


30.10.2019

Ladies and Gentlemen, dear participants,

Let me start by stressing that the development of the technologies we commonly call Artificial Intelligence is an opportunity for humankind. First and foremost, it's an opportunity to match, in real time, what we need with where we need it and how we can get it. In the widest sense imaginable.

For example, even a relatively narrow or stupid AI will be able to guide energy grids towards ideally matched production and consumption, with flexibility on both sides.  

The traditional energy grid, by contrast, functioned on the basis of permanent readiness for maximum levels of supply, so that it could cater for inflexible and unmanaged demand. Thus, for example here in Estonia, AI can reduce the maximum needed capacities by at least 20%.

The more sophisticated the AI, the more precisely it can perform for our societies. Technically, it is already perfectly possible today to pay by looking deeply into the blinking eye of an automated checkout system.

In Estonia, we actually do ask algorithms to compare data from the address registry with that of the retirement payment registry, and if the result is as defined, then we pay a top-up to a person’s pension. In principle, such a simple active service was technically possible long before we started calling this kind of computing an AI function. But at the lowest level, that’s what it is – machines making decisions for people.
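To show just how simple such a decision really is, here is a minimal sketch in Python. The registry fields, the "lives alone" condition and the amounts are invented for illustration; they are not the actual Estonian rules.

```python
# Illustrative sketch only: the registry fields and the eligibility rule below
# are invented for this example, not the actual Estonian pension logic.
def monthly_topup(person_id: str, address_registry: dict, pension_registry: dict) -> float:
    """Decide a pension top-up by comparing two registries."""
    address = address_registry.get(person_id)
    pension = pension_registry.get(person_id)
    if address is None or pension is None:
        return 0.0  # missing from one registry: no automatic decision
    # Hypothetical rule: people living alone on a small pension get a top-up.
    if address["lives_alone"] and pension["monthly_amount"] < 600:
        return 50.0
    return 0.0
```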

AI can match our empty fridge with a delivery box, and our calendar with the time when our food has to be in said delivery box.

It can monitor our vital signs and advise us of trouble ahead, or directly advise 112 if – in AI’s opinion – we are not aware of the problem or not able to call for help ourselves.

AI can take care of our elderly relatives, provided it behaves in a way that is safe for elderly people to interact with.

These are all beneficial services we can afford, if we can sort out the necessary legal space for such developments. I think the examples I just provided already give a good indication of what kind of legal guidance is necessary to reap all the benefits from AI.

I am not at all worried about the capability of the engineers to create such devices and services.

What I am worried about is that, due to limited and restrictive development in our legal thinking, we cannot fully benefit from what is already available. And this unnecessarily limits the commercial viability of such developments in the democratic parts of the world.

Because, for some reason incomprehensible to me, we expect some kind of absolute security, including absolute privacy, from the applications of AI technology. While at the same time we totally accept that gossip and the spreading of rumours – which often proved to be true – were, and are, part of our classical analogue societies.

Because, even if it's easy to demonstrate that with sophisticated AI development we can clearly set rules that guarantee the privacy of people and data security – we are for some reason still reluctant to do so. It's also understandable why we have such a fear of even trying to regulate how we can safely use AI. The reason for that is simple – it's just the same kind of fear which paralyzed our ancestors when they didn't understand thunder or fire. It's human. But as we today are so much better equipped with sophisticated scientific approaches than they were, we should be able to overcome our instinctive prejudices.

For example, we might be worried that an AI gathering traffic data might snoop on the people driving the vehicles. In this case we should simply state in the law that this kind of data can be gathered, provided that whoever gathers it anonymises it at the first point where the data enters the system. We shouldn't strive to limit data gathering. Rather, we should clearly say how data has to be treated and what kind of responsibility – including verification by responsible state authorities – these companies must take for the data they handle.
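What "anonymise at the first point of entry" could mean in practice can be sketched in a few lines. This is only an illustration under assumed field names: the raw licence plate is replaced with a salted hash before anything is stored, and everything else that could identify the driver is dropped.

```python
import hashlib
import secrets

# One random salt per deployment, never stored next to the data, so the raw
# plate number cannot be recovered or linked across systems.
_SALT = secrets.token_bytes(16)

def ingest_traffic_event(raw_event: dict) -> dict:
    """Anonymise a traffic observation before it enters the system."""
    plate = raw_event["licence_plate"].encode()
    pseudonym = hashlib.sha256(_SALT + plate).hexdigest()[:16]
    return {
        "vehicle": pseudonym,                  # stable pseudonym, not the plate itself
        "timestamp": raw_event["timestamp"],
        "location": raw_event["location"],
        # everything else that could identify the driver is deliberately dropped
    }
```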

It sounds simple on paper, and in fact this issue shouldn't be treated any differently than in the past, when we simply told doctors, for example, that they must protect the data of their patients, and then left doctors to get on with it. Of course, we need dissuasive penalties for not abiding by the law, but we cannot afford a legal space that dissuades the use of AI altogether.

We must give up on the myths and get practical about telling our private enterprises how we want them to handle all the data they need for their AI applications. And then let them get on with assembling medical cases in order to train web-doctors. We should allow traffic data to inform smart city systems on how to change traffic lights in order to avoid a traffic jam after the end of a crowded football match. We must use AI to find people in distress and offer them help before they harm themselves or their loved ones.

If we do not manage to define an acceptable legal space for AI applications in democratic societies, we will lose out on two counts. First, our people will not be able to benefit anytime soon from technologies which already exist and get smarter every day. Secondly, undemocratic societies will benefit, both in the civil and the military domain, from our inability to participate in the wide-scale application of AI, while they pull ahead.

Undemocratic regimes have two advantages over us – technical advantages, of course, not societal ones. They do not have the concerns we do, and therefore they do not need to define a legal space to protect universal human rights in the AI world. I stress that this is a technical advantage, not a general advantage for their societies – I'm just saying this so you wouldn't think that I'm not serious about data and human rights protection. I am.

Their other advantage is that they are able to control tech development as easily as we could when nuclear weapons were first developed. Back then, states always knew what kind of arsenals were under development, because the private sector was not in the game. Nowadays, AI development in the free world is not under the control of the state. Private companies like DeepMind can easily outspend governments and come up with solutions entirely without any kind of NASA program or European R&D funding. And in principle, that's a good thing.

It just means that we in democracies must learn from the private sector what their AI can currently achieve. And make sure that our legal system allows such AI to make our lives better without us having to give up our privacy and general feeling of security. If AI can find data, it can just as easily lose it – that much is for sure. We just have to say how and when we want AI to lose the data, and how it has to demonstrate to us that it has really done so. Through honest dialogue and smart law-making we can not only be as good as totalitarian regimes, we can be better.

I would also like to point out some AI-featuring services which only the Chinese use to the benefit of society – in their case, also at the cost of people accepting that the state has total control over them. Face recognition programmes, for example. Some tech companies also try to use face recognition programs in Western societies. And here we already see setbacks. This spring San Francisco – yes, San Francisco of all cities – banned the use of face recognition by the police and other city departments, citing mostly civil liberties issues. Some other US cities are already following suit, although there are positive examples of how facial recognition has helped to catch criminals. But as there is no adequate legal space, it had to be simply banned. This is not the fault of the technology or the private companies, but rather the fault of lacking regulation.

In the European Union we are luckier – we have the GDPR in place, which helps us understand how face recognition technology can be used with full data safety. And, as a clear sign that the GDPR works, we also have examples of how it is being breached: a couple of months ago the Swedish Data Protection Authority fined a municipality for using a face recognition program to monitor student attendance at a high school.

The Swedish example is a clear breach of the GDPR, but there is still a lot of uncertainty about how the GDPR has to be applied so that we do not stymie the development of technology, but only snip the services which intrude on privacy. For example, most authorities demand that no personalized data can be in the hands of any private actor. But what should we consider as "being in the hands"? Is it a breach of the GDPR if a company gathering information in real time depersonalizes it, or can it only be a state actor who gathers first, then depersonalizes, and finally releases the data to private companies to feed into their AI systems for learning purposes?

Should states perhaps relinquish some part of their monopoly on data protection regulation to private companies, to maintain momentum in innovation? I do not know the solutions, but I do know that this is one of the things holding back our development. Especially, again, compared to China, which does not seem to have such limitations. As I said, tech people can do whatever we ask of them. We just have to make clear the legal demands reflecting the privacy needs that we as democratic societies have promised to meet.

Then there is the need to protect us from false information, which today is mostly limited to falsified photos, but is getting closer and closer to videos that are entirely fake yet indistinguishable from the real thing. I believe that in a few years' time we will not believe any photo or video which does not carry the digital signature of its creator and time stamps at the beginning and at the end, which can be KSI- or blockchain-based.
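As a minimal sketch of what such a creator signature could look like, the example below hashes a video file, wraps the hash and a timestamp in a manifest, and signs it with the creator's key. It assumes the widely used Python "cryptography" library; anchoring the hash in KSI or a blockchain, and using a trusted timestamping service, are only indicated by comments.

```python
import hashlib
import json
import time

# Assumes the third-party "cryptography" package; key handling is simplified.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_video(path: str, creator_key: Ed25519PrivateKey) -> dict:
    """Produce a signed manifest binding a video file to its creator and time."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    manifest = {
        "video_sha256": digest,
        "created_at": int(time.time()),  # in a real system: a trusted timestamp
        # anchoring the digest in KSI or a blockchain would happen here
    }
    signature = creator_key.sign(json.dumps(manifest, sort_keys=True).encode())
    manifest["signature"] = signature.hex()
    return manifest

# Usage sketch: the creator holds the private key, viewers verify with the public key.
# manifest = sign_video("clip.mp4", Ed25519PrivateKey.generate())
```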

Yes, my Estonian mind can easily understand this kind of solution to the fake news problem, because we have our digital ID infrastructure in place. The whole of Europe is also developing it, but how on earth will American companies protect themselves and their clients? I have no idea, as no common, state-guaranteed digital ID models are being tested in the US. Yes, the ID has to be state-provided, as only a state can create it complete with the legal space to make it as sure as our analogue passports. A bit surer, actually, as no one asks someone presenting a passport to also supply a couple of PIN codes – but never mind that.

And then there is the age-old question of how to keep AI in check so that it does not rearrange our world into paperclips. One might of course ask – and I ask myself this regularly – why we even need AI to do this, as we are already turning energy and other real resources into bitcoin; bitcoin which has absolutely no value, but again – never mind.

We must be sure we control the AI. Here the obvious problem is how to make AI stay under our control when we can only utter our instructions to it so slowly that, to the AI, it sounds like someone speaking a word a week.

How could and should we cope? I tend to like some of the ideas that the well-known computer scientist Stuart Russell has put forth. Namely, that we shouldn't give machines and AI systems any firm and final goals, but should force or program them to constantly doubt their objectives – and to constantly ask humans for confirmation that the goal or next activity is still OK and within the limits of the original task.
Or, to put the same basic idea into a scientific and human context: maybe we should demand that all scientists, while writing their articles or developing AI, always analyse and put forth the possible negative side-effects and risks that the development holds, so that we can also develop ways and means to manage those risks.
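The "constantly ask humans for confirmation" idea can be sketched very simply. This is an illustration, not Russell's actual proposal: a plan is executed one bounded step at a time, and each step proceeds only after explicit human approval.

```python
def execute(step: str) -> None:
    """Placeholder for carrying out one bounded action."""
    print(f"executing: {step}")

def run_with_confirmation(plan: list[str], confirm) -> None:
    """Execute a plan step by step, asking a human before each action.

    `confirm` is any callable that puts the question to a human and
    returns True only on explicit approval.
    """
    for step in plan:
        question = f"Next I intend to: {step}. Is this still within my task? [y/N] "
        if not confirm(question):
            print("Objective no longer confirmed, stopping.")
            return
        execute(step)

# Usage sketch: confirmation can be as simple as a console prompt.
# run_with_confirmation(["gather data", "anonymise data"],
#                       lambda q: input(q).strip().lower() == "y")
```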

We must use our advantage as democratic and free societies to reach common positions on how universal human rights, the protection of national sovereignty and other broad principles of international law apply at every technical level. Academic research dealing with lower levels of computerized systems, such as automatic and autonomous systems, including the cyber domain as defined by NATO, seems to indicate that everything which applies in the analogue world also stands true in the digital one. And therefore there is no need for special new internet or AI human rights, for example. We tend to mystify the new space created by new tech and AI, but we shouldn't. We should simply postulate that everything applies, and then let technology developers demonstrate to us how it can apply.

Let me take a very simple example from e-services 1.0: role management. Simply because we can file taxes online, the set of people who may file the taxes does not change. They still have to be legal representatives of a company to do so. Or parental rights – simply because a child has an individual digital ID, he or she doesn't gain the right to decide things he or she couldn't previously decide in the analogue world. It has never been the lawmaker's problem how to solve this online – the lawmaker simply has to give guidance on how the analogue law applies. It is for the tech developer to show the way forward.
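In code, such a role check is almost trivial – which is exactly the point: the online channel changes nothing about who is allowed to act. The registry below is an invented stand-in for whatever company registry the analogue law already relies on.

```python
# Illustrative only: a stand-in for the registry of legal representatives
# that the analogue law already relies on.
LEGAL_REPRESENTATIVES = {
    "company-123": {"person-A", "person-B"},
}

def may_file_tax_return(person_id: str, company_id: str) -> bool:
    """Online filing does not change who may file: same rule, new channel."""
    return person_id in LEGAL_REPRESENTATIVES.get(company_id, set())
```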

At the AI level, it is as simple as asking it to depersonalize the data first and only then learn from it to get smarter. Or asking all video cameras to create a blockchain marker for their videos for validation. All this has nothing to do with us, the lawmakers, understanding new tech – all we must understand is that those who create the tech must abide by our rules. And since their systems are smart, they must be able to explain to us and demonstrate how they protect our data. They have the computing capacity for that; we just have to demand that it be used not only for developing AI, but for keeping it in check, too.

Ladies and gentlemen,

When we now come to new and disruptive technologies and the security and defence sector, the picture is twofold. On the one hand, this seems to be an area where, as sometimes happens, the defence sector is at the forefront of development, already creating practical solutions, as in the case of autonomous or semi-autonomous systems, and dragging the civilian sector along.

At the same time, I believe that the real face of the dangers new and disruptive technologies create has not yet fully revealed itself. There are already a lot of Halloween scenarios – as somebody said earlier today – for example of terrorists using a couple of truck-loads of small autonomous drones programmed to attack people with devastating effect. This is believed to be something already available to non-state actors, both technologically and financially. We do not know why these kinds of threats have not yet really materialized. One might draw a parallel with the use of biological weapons or dirty bombs. But with regard to the use of drones this comparison does not seem correct. Exploding a dirty bomb or releasing a deadly virus are things whose after-effects become basically uncontrollable once released. Drones, in this case, are controllable, so one can always limit the damage to the intended level. Thus – they have a much higher likelihood of being used.

The current developments and discussions in the defence sector tend to be focused on autonomous weapons systems. If we for a moment disregard the quite bleak consequences of the so-called killer robots – or, for that matter, of any weapons system – then there is a lot of rather healthy discussion about the moral and ethical aspects of using these kinds of weapons. It is an inevitable development that we use more and more automated systems in the military as well. However, I believe that as a basic moral principle we must stick to the rule that lethal action by these systems must always be controlled by humans. A human must, in principle, always remain in the loop.

This basically hasn't been a serious problem so far, at least not in the Western world. But it might not stay that way. Because we have used automated systems – mostly drones – in low-intensity conflicts against technologically underdeveloped non-state actors for almost 15 years now, it might seem that we can always be in constant contact with the drone, get real-time information from it, and make real-time life-and-death decisions. However, if we look at a possible crisis between two relatively equal state actors, this kind of constant control will no longer be possible. The airwaves will most certainly be full of massive electronic jamming, which makes real-time command and control of automated systems very difficult, if not impossible.

This in turn makes taking the human out of the loop not an option, but very likely a dire reality. The other option would probably be simply not using the quite vast and powerful arsenal of automated drones that larger Western states have already developed and amassed. The consequences and moral dilemmas this scenario creates are self-evident and worth pondering.

And I do hope that you will ponder these issues during this conference – both the large and philosophical AI questions and the smaller, more mundane issues that new and disruptive technologies create. But don't forget the time dimension. There are already AI systems which, relative to our time-scale, have gone through centuries during the 20 minutes I have spoken here. We need to think a little bit faster!