Preserving freedom in an automated decision making world

In April 2018, during the LLW Workshop in Barcelona, I had the great pleasure of participating in a workshop organised by Andy Wilson and focused on understanding how the core values and principles of freedom conveyed by the original Free Software movement can be carried over to the 21st century. The lively discussion that ensued produced results that are worth sharing broadly, especially now that awareness is growing about the issues that lie behind the explosive interest in Machine Learning and AI. Here are the notes I wrote to sum up the results of the discussion.

A changing landscape

Over 30 years ago, Free Software carried a message of freedom: users should be able to access, study, understand, adapt and share the source code of software that they were using. Free Software licences were a means to implement this vision.

Over time, science, technology and society have evolved, with some key turning points changing the landscape in such a way that Free Software licences are no longer necessarily sufficient to implement this vision today. One of these turning points was the "cloud", aka "somebody else's computers".

The other is the generalization of automated decision making, driven in particular by recent spectacular advances in machine learning (which is by far not the only way to realize automated decision making: it has been around for a long time, well before deep learning became fashionable; think of credit ratings, for example).

What we should demand of AI, and of automated decision making in general

In the workshop discussion, it emerged that automated decision making raises expectations that cannot always be met, and we came up with the following three key sets of provisions:

Full end user understanding (aka Explainable AI) of algorithmic decision making

Any end user who is the object of an automated decision should be given a human-understandable explanation of why the decision was made. An analogy was drawn between an AI (or an algorithm) and a judge: when a decision is made, we may not have access to, or understand, the inner workings of the judge's brain (or of the AI system, or of the algorithm) that led to the decision, but we do expect the judgement to contain motivations that provide an explanation accessible to a normal human being.

We tend to expect this property of any automated decision making, not just of systems based on artificial intelligence or machine learning, but the issue has been brought to centre stage by the great public interest raised by the latest incarnation of the AI dream, machine learning.

While for classical algorithmic decision making it is often technically feasible to require an explanation that is understandable by a human, for machine learning techniques, and in particular deep learning, we currently do not know how to provide one, and this is an active area of research.
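
To make the contrast concrete, here is a minimal, purely illustrative sketch (in Python, with hypothetical field names and thresholds that are not taken from any real system) of a classical rule-based decision: because the rules are explicit, a human-readable explanation comes out as a by-product, something a trained deep network does not give us for free.

    # Purely illustrative: a toy rule-based credit decision whose explanation
    # is a direct by-product of the explicit rules. Thresholds and field
    # names are hypothetical assumptions, not a real scoring system.

    def decide_credit(applicant):
        """Return (approved, human-readable reasons) for a toy rule set."""
        reasons = []
        approved = True

        if applicant["income"] < 20_000:
            approved = False
            reasons.append("declared income below the 20,000 threshold")
        if applicant["missed_payments"] > 2:
            approved = False
            reasons.append("more than two missed payments in the last year")

        if approved:
            reasons.append("all rule-based criteria were satisfied")
        return approved, reasons


    if __name__ == "__main__":
        ok, why = decide_credit({"income": 18_000, "missed_payments": 0})
        print("approved:", ok)
        for reason in why:
            print(" -", reason)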

So we cannot always achieve the ideal situation in which any human being subject to algorithmic decisions can get an explanation that is directly understandable, and in these cases we need to lower our expectations.

Accountability and transparency

When a human understandable explanation cannot be obtained, it is in general not possible to assess the outcome of an automated decision just by looking at it.

Hence we believe that the whole decision making process should be transparent and accountable.

This is quite analogous to the requirement we see in FOSS to make the source code of a traditional piece of software available and to allow its modification and reuse.

In the case of automated decision making based on machine learning, this includes, in particular,

  • the (source code of the) software used for the processing
  • the trained machine model(s)
  • the data used in the training
  • and all information necessary to perform independent experiments using all the above
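
As a purely hypothetical illustration of what making these elements available could look like in practice, the following Python sketch declares such a set of artifacts as a manifest and checks that a release actually ships them; the file names and directory layout are assumptions for illustration, not a prescription.

    # Hypothetical "accountability manifest" for a machine-learning release,
    # sketching how the four items listed above could be declared and checked.
    # Paths and file names are illustrative assumptions only.

    from pathlib import Path

    MANIFEST = {
        "source_code":   "src/",                  # software used for the processing
        "trained_model": "models/model-v1.bin",   # the trained machine model(s)
        "training_data": "data/training-set/",    # the data used in the training
        "reproduction":  "REPRODUCE.md",          # how to run independent experiments
    }

    def check_release(root):
        """Return the manifest entries missing from a release directory."""
        base = Path(root)
        return [name for name, rel in MANIFEST.items() if not (base / rel).exists()]

    if __name__ == "__main__":
        missing = check_release(".")
        if missing:
            print("release is not fully accountable; missing:", ", ".join(missing))
        else:
            print("all declared artifacts are present")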

One could argue that all of the above is too technical and complex to be understood by the person in the street, but this objection can easily be refuted, just like the old objections to the requirement that source code be made available to all.

Indeed, we can reuse an important argument we make to explain why access to source code matters for everybody, even for non-technical people who cannot read or understand it: anybody can commission an expert of her own choosing to scrutinize the code and find out whether it works properly.

The analogy with law is interesting here: legal codes are so complex that, despite being written in a language that looks like natural language, a normal person is unable to understand them. And yet we attach the highest importance to the fact that the code of law is publicly accessible, because this enables anybody to pick a lawyer of their choice to look at it and see how it applies to their particular situation.

Nevertheless, there are cases where even this lesser requirement of making the process fully open and accountable cannot be fulfilled: we cannot, for instance, demand that the full credit or medical history of a population used to train certain algorithms be made available to all!

In these cases, we have a last resort: Ethics.

Ethics

We have already seen examples of technical measures put in place with laudable objectives that were turned into nightmarish tools soon after. For example, in the 1930s the Netherlands carried out a census of the population based on religion, with the goal of ensuring equitable funding of the different religions, surely a well-meaning goal. Unfortunately, when the Netherlands was invaded by Germany on May 10th 1940, this well-intentioned record was turned to evil ends.

The only protection, then, seems to be non-technical, and based on human ethics: when we cannot control the outcome of the use of a technology, or we fear we do not sufficiently understand its consequences, we should simply refrain from using it.

This is what ethics committees have been doing for a long time in biomedical research; it may be time to have similar committees for computer technology.

Conclusion

As early actors in the Free and Open Source world, we foresaw quite a few of the promises and dangers that computing brings to humankind, and we believe our contribution has been important in preserving, to some extent, the freedom of the users of technology. Today, the challenge is broader and deeper than what we have faced before, but the key values we rely upon remain the same.