Mind Foundry named in ‘Ethical AI startup landscape’ by EAIGG

Mind Foundry
3 min read · Jan 16

(This story was originally featured on the Mind Foundry website in May 2022.)

Mind Foundry is thrilled to have been included in the ‘Ethical AI startup landscape’ mapped by researchers at the EAIGG (Ethical AI Governance Group), who have vetted nearly 150 Ethical AI companies across the globe. The EAIGG conducts this research to bring transparency to the ecosystem of companies working on ethical AI.

Mind Foundry was highlighted in the ‘Targeted AI Solutions’ and ‘ModelOps, Monitoring & Observability’ categories. Both of these subsets of Ethical AI aptly describe how Mind Foundry delivers Responsible AI.

Mind Foundry’s Targeted AI Solutions

Mind Foundry creates Responsible AI for high-stakes applications. We build targeted AI solutions for customers across intelligence management, insurance, central and local government, and security & defence. We emphasise the need for Responsible AI across the development lifecycle of an AI system, including:

1) Use-case-specific risks: ensuring our customers can succeed by fully understanding the benefits and risks of AI for their particular business uses, including where AI should and should not be used.

2) Algorithmic design: favouring interpretable and explainable AI models, with data and model provenance, over black-box approaches. For example, in high-stakes applications it is not always appropriate to use neural networks, as they can make the system’s outputs difficult to trace and interpret for its users as well as for unrepresented stakeholders, such as citizens.

3) Solution design: empowering the human to make the right decision, with UX design that highlights possible limitations in the system itself. For example, by using Bayesian optimisation, we visually represent probabilistic estimates to our insurance customers so that they can see where the system is less confident in its predictions and where additional human input might be required before making a decision (a minimal sketch of this idea appears after this list).

4) Post-deployment monitoring: ensuring our AI systems continue to work as intended through performance monitoring, including predictive power, robustness, and resilience (a simple drift check is also sketched below).
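
To make point 3 concrete, here is a minimal, hypothetical sketch (not Mind Foundry’s actual implementation) of the general idea: a Gaussian process, the surrogate model at the heart of Bayesian optimisation, returns an uncertainty estimate with every prediction, which a system can use to route low-confidence cases to a human. The data, threshold, and routing policy below are all invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy training data: outcomes observed for a handful of risk scores.
X_train = rng.uniform(0.0, 1.0, size=(12, 1))
y_train = np.sin(4.0 * X_train).ravel() + 0.1 * rng.normal(size=12)

# A Gaussian process returns a predictive standard deviation alongside
# each prediction, making the model's confidence visible to users.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
gp.fit(X_train, y_train)

# Query points, including a region (x > 1.0) far from the training data.
X_query = np.linspace(0.0, 1.5, 7).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)

# Hypothetical policy: predictions with high uncertainty go to a human.
CONFIDENCE_THRESHOLD = 0.15
for x, m, s in zip(X_query.ravel(), mean, std):
    action = "auto-approve" if s < CONFIDENCE_THRESHOLD else "refer to human"
    print(f"risk score {x:.2f}: prediction {m:+.2f} +/- {s:.2f} -> {action}")
```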
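
And for point 4, one common post-deployment check (again a sketch under our own assumptions, not a description of Mind Foundry’s monitoring stack) is to compare the distribution of live inputs against a reference window with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Reference window: feature values captured when the model was validated.
reference = rng.normal(loc=0.0, scale=1.0, size=1000)

# Live window: recent production inputs (here deliberately shifted).
live = rng.normal(loc=0.4, scale=1.0, size=1000)

# The two-sample KS test asks whether both windows plausibly come from
# the same distribution; a tiny p-value signals input drift.
stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}); flag for review or retraining.")
else:
    print("No significant drift detected.")
```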

Mind Foundry’s Approach to ModelOps, Monitoring & Observability

One of the fundamental aspects of our work is Continuous Metalearning: we are currently building and implementing the tools and techniques for the next generation of Responsible AI systems.

This research, conducted under an Innovate UK Smart Grant, explores how AI systems can continuously improve, adapt to their surrounding environments, and meta-optimise their learning process by combining cutting-edge Machine Learning techniques with domain expert input.

At its core, Continuous Metalearning proposes a complete end-to-end framework for the operation of algorithms. By prioritising these techniques, we enable our customers to use AI that is resilient to adversarial attacks, such as data poisoning, and that can classify novel trends the system has never seen before; a simplified sketch of such a loop follows.
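
As an illustration only (the model, screening technique, and data below are our assumptions, not the Continuous Metalearning framework itself), an end-to-end loop of this kind might screen each incoming batch for anomalous, possibly poisoned or genuinely novel, points before the model continues to learn from it:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
classes = np.array([0, 1])

# Incrementally trainable classifier plus a screen for anomalous inputs.
model = SGDClassifier(loss="log_loss")
screen = IsolationForest(random_state=0)

# Bootstrap both on an initial, trusted batch.
X0 = rng.normal(size=(200, 4))
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=classes)
screen.fit(X0)

for step in range(5):
    X_batch = rng.normal(size=(50, 4))
    y_batch = (X_batch[:, 0] > 0).astype(int)

    # IsolationForest labels points -1 if they look unlike anything seen
    # so far; such points (possibly poisoned, possibly novel) are held
    # out for expert review instead of being trained on blindly.
    trusted = screen.predict(X_batch) == 1
    print(f"step {step}: {int((~trusted).sum())} points held out for review")

    # The model keeps learning, but only from screened data.
    if trusted.any():
        model.partial_fit(X_batch[trusted], y_batch[trusted])
```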

We hope that our approaches and philosophies for developing Responsible AI continue to spread, and we are grateful for the essential research the EAIGG is carrying out to map this ecosystem.

Find out more about how we’re using Responsible, Explainable AI in high-stakes applications.
