About the project
“Transparent Deep Learning Classifier of Driving Scenarios able to Identify and Learn from Unseen Situations” is a project funded by Ford Motor Co. and conducted at Lancaster Intelligent, Robotic and Autonomous Systems Centre (LIRA) – Lancaster University.
Machine learning, and more specifically deep learning, has attracted the attention of the media and the broader public over the last decade due to its potential to revolutionize industries, public services, and society. Deep learning has matched or even surpassed the accuracy of human experts on challenging problems such as image recognition, speech recognition, and language translation. However, deep learning models are often characterized as a “black box” because they are composed of many millions of parameters that are extremely difficult for specialists to interpret. Complex “black box” models can easily mislead users who are unable to inspect the algorithm’s decision, which can lead to dangerous or catastrophic events. Auditable, explainable AI approaches are therefore crucial for developing safe systems, complying with regulations, and gaining societal acceptance of this new technology.
The project, supervised by Professor Plamen Angelov, tries to answer the following research question: is it possible to provide an approach that matches the performance of deep learning while, at the same time, having a transparent (non-black-box) structure?
Towards this goal, we introduce a novel explainable-by-design deep learning architecture (xDNN) that offers both transparency and high accuracy, helping humans understand why a particular machine decision has been reached and whether or not it is trustworthy. Moreover, the proposed prototype-based model has a flexible structure that allows the unsupervised detection of new classes and situations.
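To illustrate the general idea behind a prototype-based classifier that can flag unseen situations, here is a minimal sketch. It is not the xDNN implementation; the feature vectors, labels, and rejection threshold are purely illustrative assumptions. A sample is compared against stored class prototypes, and if no prototype is similar enough, the sample is reported as a candidate new class instead of being forced into an existing one.

```python
import numpy as np

def cosine_similarity(x, prototypes):
    # Cosine similarity between a vector x and each row of the prototype matrix.
    return (prototypes @ x) / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(x))

def classify(x, prototypes, labels, threshold=0.8):
    """Assign x to the label of its most similar prototype, or report it
    as "unseen" when no prototype is similar enough.

    prototypes: (n, d) array of prototype feature vectors
    labels:     length-n list of class labels, one per prototype
    threshold:  illustrative rejection threshold (an assumption,
                not a value taken from xDNN)
    """
    sims = cosine_similarity(x, prototypes)
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return "unseen"  # candidate new class/situation
    return labels[best]

# Hypothetical prototypes for two driving scenarios.
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = ["highway", "urban"]

print(classify(np.array([0.9, 0.1]), prototypes, labels))  # -> highway
print(classify(np.array([0.7, 0.7]), prototypes, labels))  # -> unseen
```

In a real system the prototypes would live in a learned deep feature space, and the rejected "unseen" samples could seed new prototypes, which is what enables learning from situations not present in the training data.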