AI in government: considerations for ethics and responsibility

Ethics and responsibility

What role do ethics and responsibility play in Mind Foundry’s approach to AI?

BRIAN MULLINS: This all comes back to what we call our pillars. They sit at the centre of what we make, and we think they lead to the right types of considerations as well.

Mind Foundry provides public and private sector organisations around the world with AI solutions designed to enable consideration of the ethics, impact, and return on investment of their use. Its technology is underpinned by three pillars:

What are the problems associated with deploying AI ethically and responsibly in the Public Sector?

MULLINS: Understand first that doing it the right way is hard; it's harder than simply moving quickly. These technologies can become very seductive when you see short-term, high-speed gains, but we hope a better understanding leads to the realisation that it doesn't have to be a compromise. In fact, if you consider the total cost over the lifecycle of a system, making the right choices and having a fundamental understanding from the beginning can protect against the unforeseen costs of unanticipated outcomes produced by methods that were not understandable, especially when deployed at the scale of the public sector. If you think responsible decisions with AI are expensive and take a long time to get right, you should look at the cost of the irresponsible ones.

AI for student grading became a fiasco in the UK in the summer of 2020.

How do you begin with ethical considerations?

MULLINS: One of the things that we do as an organisation is look at cautionary tales.

What do we mean when we think about ethics in AI? Is there a best practice? Is it like the Hippocratic Oath in medicine, where we simply have to act with our best intentions? Or is it more than that?

ALESSANDRA TOSI: Ethics is defined as a system of moral principles that govern a person's behaviour, and we want to apply a set of moral principles to govern AI behaviour. So there is a set of questions we need to ask here. First of all, what are these principles? There may be no agreement on the answer, just as there is no general agreement in ethics in philosophy, so it's an open discussion. The other important question is: how do we encode those principles into an AI system?

It's impossible to have a "one-size-fits-all" approach to ethics, so every project must start from scratch. Does explainability solve these problems? Is unexplainable or complex modelling inherently bad?

DAVIDE ZILLI: Complex modelling is not inherently bad, and we shouldn't shy away from complex models. In fact, most of the recent advances in Machine Learning and AI have been driven by these complex models. We as humans are not perfect ourselves, so judging the perfection of a Machine Learning model or an AI system might be difficult. At the moment, we don't live in a perfect world with perfect AI systems.

If an algorithm is simply replicating what humans are already doing, and if there is some bias in society today, as there unquestionably is in all manner of ways, is that an issue? Will that bias be conveyed to the algorithm?

TOSI: Bias is the behaviour of a prediction algorithm whose outcomes systematically deviate from the expected or correct outcome. As humans, we know that we do have bias in the form of a prejudice or preconceived notion, but bias also has a clear statistical definition that we can quantify in an AI system. Bias is the difference between the expected value of an estimator and the true value of the quantity it estimates. This seems quite technical, but it's important that we have this definition because it allows us to identify and quantify bias mathematically inside our AI algorithms.
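This statistical definition can be made concrete with a small simulation. The sketch below is illustrative only and is not Mind Foundry's code; it assumes NumPy and a toy variance-estimation task, and it approximates the bias of an estimator as the average of its estimates minus the true value it is estimating.

```python
# A minimal sketch, assuming NumPy and a toy estimation task (not Mind Foundry
# code): bias is the expected value of an estimator minus the true value of
# the quantity it estimates, approximated here by repeated simulation.
import numpy as np

rng = np.random.default_rng(0)
true_variance = 4.0                 # known variance of the data-generating process
n_samples, n_trials = 10, 100_000   # small samples, many repeated experiments

est_unbiased = np.empty(n_trials)   # sample variance with Bessel's correction
est_mle = np.empty(n_trials)        # maximum-likelihood variance (divides by n)
for i in range(n_trials):
    x = rng.normal(0.0, np.sqrt(true_variance), size=n_samples)
    est_unbiased[i] = x.var(ddof=1)
    est_mle[i] = x.var(ddof=0)

# Empirical bias: average estimate minus the true value.
print("bias of corrected estimator:", est_unbiased.mean() - true_variance)  # close to 0
print("bias of MLE estimator:      ", est_mle.mean() - true_variance)       # close to -0.4
```

In this toy example the estimator that divides by n systematically underestimates the true variance, and the simulation recovers that deviation as a number, which is exactly the sense in which bias can be identified and quantified inside an algorithm.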

Would explainability of an AI system actually solve that kind of issue?

Contact Us

We understand that Governments have uniquely demanding requirements for scalability, precision, reliability, compliance, and more.
