Our Aims
AI systems are now widely used to inform and automate decisions and actions with significant consequences for individuals, including in safety-critical and human-rights-critical contexts: medical diagnostic tools, autonomous vehicles, and biometric identification and verification systems used to inform decisions to allow or deny access to critical resources and opportunities. Addressing the risks these systems pose requires the development of methods that can be integrated into interpretable and accountable legal and ethical governance architectures, enabling lay users to regard such systems as trustworthy.
Our aim is to investigate the adequacy of existing technical methods and governance mechanisms, and to develop new techniques, mechanisms and analytical approaches that can provide the foundations for demonstrable, evidence-based assurance mechanisms capable of safeguarding multiple dimensions of safety and security that otherwise remain under threat, including epistemic security and the safety and security of property, persons and human identity.