In our architecture, ethical reasoning is handled by a separate layer that augments a typical layered control architecture and ethically moderates the robot's actions. This layer makes use of a simulation-based internal model and supports proactive, transparent and verifiable ethical reasoning. The reasoning component of the ethical layer uses our Python-based Beliefs, Desires, Intentions (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification. To experimentally validate the architecture, and to demonstrate the capabilities and utility of our ethical black-box recorder, we conducted a series of experiments using two NAO robots: one acting as a proxy human and the other controlled by our ethical architecture.
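To make the connection between BDI's declarative structure and transparency concrete, the following is a minimal sketch of a BDI-style reasoning cycle that logs each step (perceive, intend, act). All names here (`BDIAgent`, `Plan`, the example trigger `human_near_hazard`) are illustrative assumptions, not the paper's actual implementation; the point is only that a declarative perceive–deliberate–act loop yields a trace suitable for an ethical black-box recorder.

```python
from dataclasses import dataclass, field


@dataclass
class Plan:
    """A declarative plan: fires when its trigger belief holds."""
    name: str
    trigger: str          # belief that activates this plan
    actions: list         # ordered actions to execute


class BDIAgent:
    """Toy BDI reasoning cycle with a logged trace (hypothetical sketch)."""

    def __init__(self, plan_library=None):
        self.beliefs = set()
        self.intentions = []              # plans adopted this cycle
        self.plan_library = plan_library or []
        self.trace = []                   # logged reasoning steps

    def perceive(self, percepts):
        # Update beliefs from new percepts and record them.
        self.beliefs |= set(percepts)
        self.trace.append(("perceive", sorted(percepts)))

    def deliberate(self):
        # Adopt every plan whose trigger belief currently holds.
        for plan in self.plan_library:
            if plan.trigger in self.beliefs and plan not in self.intentions:
                self.intentions.append(plan)
                self.trace.append(("intend", plan.name))

    def act(self):
        # Execute adopted intentions in order, logging each action.
        executed = []
        while self.intentions:
            plan = self.intentions.pop(0)
            for action in plan.actions:
                executed.append(action)
                self.trace.append(("act", action))
        return executed

    def step(self, percepts):
        # One full reasoning cycle: perceive -> deliberate -> act.
        self.perceive(percepts)
        self.deliberate()
        return self.act()


if __name__ == "__main__":
    # Hypothetical ethical plan: intervene when a human nears a hazard.
    agent = BDIAgent([Plan("protect_human", "human_near_hazard",
                           ["warn_human", "move_to_intercept"])])
    actions = agent.step({"human_near_hazard"})
    print(actions)
    print(agent.trace)
```

Because every belief update, intention adoption, and action passes through the same logged cycle, the resulting `trace` can be replayed or inspected after the fact, which is the property the ethical black-box recorder relies on.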