How to handle the ethical dilemmas caused by AI, Data Analytics and the Blockchain?

25 March 2019

by Geert Vermeulen

Since my visit to the World Economic Forum in 2018 I have been following the technological developments in IT and the medical sciences with great interest. These two industries have something important in common: technology is progressing at an astonishing pace. Great benefits lie ahead of us. However, the same developments also trigger a whole lot of ethical dilemmas. Organizations struggle with them. They don’t know how to deal with these issues and, as a result, they ask for more legislation. So I have been wondering: are new laws really the best answer to these ethical dilemmas? And who would be in a better position to facilitate this debate than Ethics & Compliance Officers? I will leave the developments in the medical industry for a future occasion and concentrate on IT. In my view, we have recently witnessed a couple of really exciting developments in this area, namely the rise of blockchain, data analytics and artificial intelligence.

Without going into too much detail, I can easily imagine blockchain technology being used to set up a land register or an Ultimate Beneficial Owner (UBO) register. There have already been experiments with blockchain technology in trade finance transactions, and with smart contracts. With blockchain technology you can build a reliable and transparent register, eliminate a lot of duplicated work, red tape and bureaucracy, shorten transaction times and reduce the number of opportunities to demand bribes.
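
To make the idea concrete, here is a minimal sketch (in Python) of the property that makes a blockchain-style register attractive: each entry embeds the hash of the previous one, so tampering with any historical record is immediately detectable. This is an illustration only, not a real distributed ledger; the field names and records are invented, and an actual system would also involve consensus, signatures and replication.

```python
# Minimal sketch of a tamper-evident register (hypothetical UBO records).
import hashlib
import json


def entry_hash(content: dict) -> str:
    """Hash an entry's canonical JSON representation."""
    payload = json.dumps(content, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def append_entry(chain: list, record: dict) -> None:
    """Append a record, linking it to the hash of the previous entry."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "prev_hash": prev}
    entry["hash"] = entry_hash({"record": record, "prev_hash": prev})
    chain.append(entry)


def verify(chain: list) -> bool:
    """Recompute every hash and check that all links are intact."""
    prev = "0" * 64
    for entry in chain:
        expected = entry_hash({"record": entry["record"], "prev_hash": prev})
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True


register: list = []
append_entry(register, {"company": "Acme BV", "ubo": "J. Jansen", "share": 0.6})
append_entry(register, {"company": "Acme BV", "ubo": "P. Peeters", "share": 0.4})
print(verify(register))  # True: the chain is intact

register[0]["record"]["ubo"] = "Someone Else"  # tamper with history
print(verify(register))  # False: the change breaks the hash chain
```

The point of the toy example: once an entry is in the chain, changing it silently is practically impossible. What the code cannot solve is exactly the caveat discussed below: somebody still has to put correct information in.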

Just imagine that you don’t have to check UBOs anymore

That all sounds great. The only caveat, however, is that somebody still needs to launch the blockchain and ensure that the information that goes into it is correct. Once that problem is solved, I predict great benefits from this technology, also for Ethics and Compliance Officers. Just imagine that you don’t have to check UBOs anymore. You can simply ask the client or third party to disclose the information that is in the blockchain and has already been verified. This could even include the results of a screening exercise against denied parties lists, PEP databases and bad news databases. You would still have to assess whether doing a certain type of business with this customer or third party falls within the limits of your risk appetite, but it would take away a lot of the boring work. The client or third party may still choose not to disclose the information recorded in the blockchain. But in that case, you simply wouldn’t do business with them.

The use of artificial intelligence (AI) also has the potential to make the life of the Ethics and Compliance Officer much more interesting. When I am conducting customer or third-party due diligence, I am doing a lot of manual work that can, for a large part, be automated. The same goes for transaction monitoring. Smart algorithms will be much better at finding unusual transaction patterns that need to be investigated further, and you can train these algorithms to become even smarter. This can take away a lot of boring work. These technologies already exist but have not yet been widely implemented. The same techniques can also be used to search for indicators of fraud in a large pile of data; this type of data analytics has already become more common.
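
As an illustration of what such algorithmic transaction monitoring could look like, here is a minimal sketch using an off-the-shelf anomaly detection model (scikit-learn's IsolationForest). The transactions, features and thresholds are all invented for the example; a real system would use far richer features and route every alert to a human investigator rather than act on it automatically.

```python
# Sketch: flag transactions that deviate from the usual pattern.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history of routine payments: [amount in EUR, hour of day].
normal = np.column_stack([
    rng.normal(200, 50, size=1000),  # typical amounts around EUR 200
    rng.normal(14, 2, size=1000),    # mostly during office hours
])

# Fit an unsupervised model on the historical pattern.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New transactions: two routine ones and one large payment at 3 a.m.
new = np.array([[210.0, 15.0], [180.0, 13.0], [9500.0, 3.0]])
flags = model.predict(new)  # -1 = unusual, 1 = looks normal
for tx, flag in zip(new, flags):
    label = "REVIEW" if flag == -1 else "ok"
    print(f"amount={tx[0]:>8.2f} hour={tx[1]:>4.1f} -> {label}")
```

The model learns what "normal" looks like from history and surfaces only the outliers, which is precisely the boring-work reduction described above: humans investigate the flagged cases instead of scanning everything.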

However, these smart technologies also create new challenges. Blockchain technology, for example, can be used in places where you currently need a trusted party to validate a transaction. Wide use of blockchain technology could mean that in the future the world hardly needs accountants or notaries anymore, and far fewer Ethics and Compliance Officers who currently conduct customer due diligence or third-party due diligence. Is that a good thing or a bad thing? I am inclined to say it is a good thing. It will take away a lot of the more boring, check-the-box work from Ethics and Compliance Officers and enable us to concentrate on the high-risk areas. But it also means that a lot of today’s CDD/KYC/TPDD experts, who are in high demand at the moment, may become redundant in the near future.

The use of artificial intelligence (AI) creates even more challenges. A well-known problem, for example, concerns the quality of the data. If there is a bias in the original data, such as a degree of discrimination, it will be amplified once the algorithms start working. Some companies have already stopped using AI in recruitment processes, for example, because it led to unacceptable outcomes. Nevertheless, more and more police departments are using AI and data analytics to adopt a more risk-based approach, known as predictive policing. That sounds great, but the increased use of profiling may also lead to undesirable outcomes.
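
A tiny synthetic example makes the point about biased data concrete. Below, historical hiring decisions are biased against one group; a model fitted to those decisions then scores the two groups differently even at identical skill levels. All data here is made up purely for illustration.

```python
# Sketch: a model trained on biased decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, size=n)  # 0 = group A, 1 = group B
skill = rng.normal(0, 1, size=n)    # identical skill distributions

# Biased historical labels: group B needed visibly higher skill to be hired.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# At exactly the same skill level, the model scores the two groups differently.
same_skill = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_skill)[:, 1])  # group A scores much higher
```

Nothing in the code discriminates on purpose; the model simply learns the pattern in the historical decisions and then applies it consistently, at scale, to every future candidate.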

How would you experience being fired as a precautionary measure before actually doing something wrong?

As an Ethics and Compliance Officer, I try to stimulate a culture of trust. A couple of years ago a UK bank, having gone through several scandals, announced that it had hired a former chief of MI5, in other words a former chief of spies, to ensure compliance. The bank started to use a tool that analyzed how employees expressed themselves in emails and chat messages. From the way employees used certain phrases, the bank claimed it could predict whether they were more or less inclined to behave unethically, so it could already start firing the high-risk employees. Now I wonder: how does this help create a culture of trust? And how would you experience being fired as a precautionary measure, before actually doing something wrong?

Nowadays we can also instruct the algorithms to learn. This creates the risk that at some point the algorithms become so smart that we as humans can no longer explain how they came to a conclusion. This creates ‘computer says no’ types of scenarios, which can be extremely frustrating if you are the victim of them and are repeatedly not hired by employers, stopped by the police, sentenced by a judge, or refused when opening a bank account.

I clearly remember, for example, the moment when the insurance company where I was working as Chief Compliance Officer refused to sell me an insurance policy without telling me why (?!). Many increasingly angry emails and conversations later, it turned out that a person with the same name and date of birth as me had not repaid a debt in the past, and the computer had therefore automatically blocked me. In this case I was able to figure out what had happened, but normally the front desk employees were not allowed to tell clients why they were refused. What if this happens to you constantly?
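
The underlying technical problem is easy to reproduce. If the blocking rule matches only on name and date of birth, everyone who shares that combination gets blocked. A minimal sketch, with invented records:

```python
# Sketch: exact (name, date of birth) matching cannot distinguish
# two different people who happen to share both attributes.
defaulters = {("j. vermeulen", "1970-03-15")}  # hypothetical debtor record

customers = [
    {"name": "J. Vermeulen", "dob": "1970-03-15", "city": "Rotterdam"},
    {"name": "J. Vermeulen", "dob": "1970-03-15", "city": "Utrecht"},
]

for customer in customers:
    key = (customer["name"].lower(), customer["dob"])
    if key in defaulters:
        # Both customers match, but at most one of them is the real debtor.
        print(f"BLOCKED: {customer['name']} ({customer['city']})")
```

Matching on additional attributes, requiring human review before an automatic block, and telling the customer why they were refused would each have prevented exactly the situation described above.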

Other well-known challenges are the use of profiling and the possibility of influencing elections or referenda by feeding certain types of (dis)information to certain types of people without their knowledge. Should this be prohibited? Another much-debated topic is that of autonomous cars. Suppose you are a 50-year-old male ‘driving’ an autonomous car. Suddenly a young woman with two young children crosses the road. There are only two options: either the car hits the woman and her children, causing three young, promising people to die, or the car hits a wall, causing the relatively old driver to die. How should the programmers program the car? Would you buy a car that lets you die if somebody suddenly jumps in front of it? And can the car manufacturer be held liable in these scenarios?

Another technology that is developing rapidly is facial recognition. This creates all kinds of challenges. For example: the next war or terrorist attack may be executed by autonomous machines that are instructed to kill certain people based on facial recognition. This technology already exists. Are we comfortable with that?

China is experimenting with social scores

China is already using facial recognition to monitor the behavior of its citizens and check whether they are breaking the rules, for example by crossing at a red light. It is also experimenting with social scores, where people constantly rate each other when they give or receive a service. Your social score may become one of the factors determining whether or not you can obtain a loan and how high the interest rate will be. Are we comfortable with that? Governments around the world hold different views on this.

In China the authorities don’t seem too bothered by the privacy challenges. In the EU, however, we consider personal privacy a human right, and we recently implemented the GDPR to protect it. But the technology develops so rapidly that some people think the legislation is already out of date. And what about the US? It did not surprise me that US-based Microsoft recently asked for more legislation on the use of facial recognition.

All the challenges described above have one thing in common: the technological developments move faster than lawmakers can anticipate. For that reason alone, I doubt whether laws are going to help us solve these ethical dilemmas. But what should we do instead?

Much to my surprise, it turned out that a group of smart people had already been thinking and debating long and hard about these matters. In the end, they came up with the Asilomar AI Principles. You can read them here: https://futureoflife.org/ai-principles/. So when Microsoft recently asked me about my view on AI, I was more than happy to refer to these principles (see the video).

Though these principles may not be perfect yet, I think they make a lot of sense. We would still have to figure out what to do with ‘bad actors’ who choose to ignore these principles. But I would be happy to trigger a debate on them in the Ethics and Compliance community. After all, isn’t this our job?

The author, Geert Vermeulen, is CEO of ECMC (Ethics & Compliance Management & Consulting), based in Rotterdam, the Netherlands.
