Bridging the assurance gap where human and machine certifications overlap in the realm of autonomous systems and AI.
  • Dates: June 2022 to June 2026
  • Sponsor: tuition funded by the Royal Navy

Full autonomy may be achieved by integrating Artificial Intelligence, Machine Learning, and Data Science. As autonomous systems develop rapidly, there is a growing need for assurance and certification processes to ensure their safe deployment.

Currently, the UK MoD ensures safety through civil health and safety regulations, government legislation, and a unique duty holder system established following the 2009 Haddon-Cave report. This research evaluates whether the duty holder construct is suited to AI-based technology. Combining a literature review with interviews of stakeholders from the MoD, industry, and academia, it compares safety assurance methods across domains to develop a new concept for military weapon systems containing AI: "safe to operate itself safely".

Progress update

This doctoral project began in 2022 with a comprehensive literature review across the automotive, industrial, military, engineering, and space sectors to evaluate existing assurance techniques. A knowledge gap emerged where human performance assessment and machine certification overlap once control is transferred to AI, and this led to the development of the "safe to operate itself safely" framework. The framework was refined through insights from 20 high-level stakeholder interviews and is now being tested in workshops ahead of final refinement and publication.

Further information

This research contributed to the House of Lords Select Committee's inquiry into AI in weapon systems and has been discussed in journals published by RUSI. It will be presented at INEC24 in November, and an analysis is currently under peer review with Defence Studies.