Verifiable autonomy

Bremner, Paul (2019) Verifiable autonomy. UWE Bristol

Brief summary of project

The project examined formal verification methods for autonomous systems. Our specific contribution was an architecture in which a robot evaluates possible actions against a code of ethics and selects the most appropriate action according to that code. The architecture was implemented so that it could be both formally verified and experimentally validated.

UWE College/School: College of Arts, Technology and Environment > School of Engineering
Creators: Bremner, Paul
Data collection method: In our architecture, ethical reasoning is handled by a separate layer that augments a typical layered control architecture and ethically moderates the robot's actions. It makes use of a simulation-based internal model, and supports proactive, transparent and verifiable ethical reasoning. The reasoning component of the ethical layer uses our Python-based Beliefs, Desires, Intentions (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To experimentally validate the architecture, and to demonstrate the capabilities and utility of our ethical black-box recorder, we conducted a series of experiments using NAO robots: one acting as a proxy human and one controlled by our ethical architecture.
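To illustrate the kind of reasoning loop described above, here is a minimal, hypothetical sketch of a BDI-style cycle in Python: beliefs inform desires, candidate actions are ranked against a toy code of ethics, and every step is logged for transparency. The action names and ethics rules are invented for illustration and are not the project's actual implementation.

```python
# Minimal sketch of a BDI-style ethical reasoning cycle.
# All action names and ethics rules below are illustrative assumptions,
# not the project's actual code.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ethical_layer")

# Toy code of ethics: helping actions are preferred, then lower harm.
ETHICS_RULES = {
    "warn_human": {"harm": 0, "helps": True},
    "block_path": {"harm": 1, "helps": True},
    "do_nothing": {"harm": 2, "helps": False},
}

def ethical_score(action):
    """Rank an action: (0, harm) if it helps, (1, harm) otherwise."""
    rule = ETHICS_RULES[action]
    return (0 if rule["helps"] else 1, rule["harm"])

def reasoning_cycle(beliefs, candidate_actions):
    """One BDI-style cycle: derive desires from beliefs, deliberate over
    candidate actions, commit to an intention, and log every step so the
    reasoning is transparent (as a black-box recorder would require)."""
    log.info("beliefs: %s", sorted(beliefs))
    desires = ["keep_human_safe"] if "human_at_risk" in beliefs else ["continue_task"]
    log.info("desires: %s", desires)
    # Deliberation: choose the candidate the ethics rules rank best.
    intention = min(candidate_actions, key=ethical_score)
    log.info("intention: %s", intention)
    return intention

chosen = reasoning_cycle({"human_at_risk"},
                         ["do_nothing", "block_path", "warn_human"])
print(chosen)  # -> warn_human
```

The logged beliefs, desires and chosen intention at each cycle are exactly the kind of trace that makes a declarative BDI implementation both auditable and amenable to formal verification.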
Resource language: English


