Related papers:
Git:
xDNN is a prototype-based network that uses data density as its core mechanism.
Prototypes are selected data samples that users can easily view and understand, and whose similarity to other data samples can be analysed directly.
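The density-based prototype idea can be sketched as follows. This is a minimal illustration, not the project's implementation: it assumes a simple Cauchy-type density (peaking at the data mean, with no user-set parameters, in the spirit of the xDNN density layer) and keeps the highest-density sample of a class as its prototype.

```python
import numpy as np

def local_density(x, samples):
    """Cauchy-type data density: peaks near the mean of the samples
    and decays with squared distance; no user-set parameters."""
    mu = samples.mean(axis=0)
    var = (np.linalg.norm(samples - mu, axis=1) ** 2).mean()
    return 1.0 / (1.0 + np.linalg.norm(x - mu) ** 2 / max(var, 1e-12))

# Toy feature vectors for one class; the sample with the highest
# density is kept as that class's prototype (a real, viewable image).
feats = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [2.0, 2.0]])
densities = [local_density(f, feats) for f in feats]
prototype = feats[int(np.argmax(densities))]
```

Because the prototype is an actual data sample rather than an abstract weight vector, a user can inspect it and compare it with new inputs directly.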
These images are the inputs to the Artificial Neural Network. On the left is a dashboard camera view through a windscreen. In the middle of the figure is the Artificial Neural Network (ANN) proposed in the project, xDNN, which has a density layer and a conditional probability layer. At the output stage, there are multiple output images of prototypes.
This image shows the validation process of xDNN. The leftmost image is the windscreen view of an unknown scene (winter, summer, etc.). It passes through a feature extraction layer and is compared with the prototypes, each of which is a real image: one group may represent sunny days, another wintery days, and so on. The best-matching prototype of each class is taken forward as that class's local winner, and the local winners then compete to determine the overall winner, which forms the output. If the winner belongs to the sunny-day class, the network suggests it is a sunny day.
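The two-stage "winners-take-all" decision described above can be sketched like this. It is a simplified illustration, assuming a Cauchy-type similarity kernel; the class labels and prototype vectors are made up for the example.

```python
import numpy as np

def classify(x, prototypes_per_class):
    """Two-stage decision: the best-matching prototype of each class
    is its local winner; the class with the strongest local winner
    is the overall winner."""
    scores = {}
    for label, protos in prototypes_per_class.items():
        # Cauchy-type similarity to each prototype of this class
        sims = [1.0 / (1.0 + np.sum((x - p) ** 2)) for p in protos]
        scores[label] = max(sims)          # local winner per class
    return max(scores, key=scores.get), scores

protos = {
    "sunny":  [np.array([1.0, 0.0]), np.array([0.9, 0.1])],
    "winter": [np.array([0.0, 1.0])],
}
label, scores = classify(np.array([0.95, 0.05]), protos)
```

The returned scores double as an explanation: the decision can always be traced back to one concrete prototype image per class.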
This image shows the multimodal density function; the prototypes are its local peaks, i.e. the points with the highest conditional probability.
Related papers:
Git:
xDNN is used to approximate the DRL model with a set of IF…THEN rules that provide an alternative interpretable model, which is further enhanced by visualizing the rules.
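A rule of this kind can be represented very simply. The sketch below is a hypothetical illustration of the IF…THEN idea (the `Rule` class, action names, and similarity kernel are assumptions, not the project's code): IF the input resembles a prototype, THEN recommend that prototype's action.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Rule:
    """One IF...THEN rule: IF the input resembles `prototype`
    THEN recommend `action`."""
    prototype: np.ndarray
    action: str

def apply_rules(x, rules):
    """Fire the rule whose prototype is most similar to the input;
    return the similarity as well, so the choice is explainable."""
    sims = [1.0 / (1.0 + np.sum((x - r.prototype) ** 2)) for r in rules]
    best = int(np.argmax(sims))
    return rules[best].action, sims[best]

# Hypothetical rules distilled from a driving policy.
rules = [
    Rule(np.array([1.0, 0.0]), "keep lane"),
    Rule(np.array([0.0, 1.0]), "brake"),
]
action, sim = apply_rules(np.array([0.9, 0.1]), rules)
```

Because each rule is anchored to a viewable prototype, the recommended action can also be visualised, which is the enhancement the text refers to.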
This image is a block diagram representing reinforcement learning, a type of machine learning in which outputs are refined according to a performance signal: each error provides feedback that corrects the recommended output, which in this case is an action. The actions are corrected by the deep reinforcement learning algorithm through stages such as sensitivity selection, the prototype layer, the map layer and decision making. The oval box is an xDNN model, which can be used to generate the actions suggested to a driverless car.
This image depicts a potential situation on a motorway, where the yellow car in the middle is the ego car, with other cars in front of and behind it.
Related papers:
xDNN is applied in the object detection context. Thanks to its transparent design, it allows an explainable error analysis, which is crucial for high-stakes applications.
This video shows xDNN for object detection in action. Cars are detected by the AI, with the confidence level colour-coded and displayed next to each object.
This figure (xDNN for object detection) represents a scene in which each vehicle or object on the road is classified. The level of confidence, between 0 and 1, is shown in colour, and the rules are presented on the right. We can see that the similarity between the scene and the prototype is 93%.
This image shows the error analysis for one example of a mislabelled image: a car wrongly labelled as a truck. The plot analyses the error. xDNN may sometimes be wrong, but unlike other networks, we can explain why it is wrong: the visual similarity to the prototype of the truck is almost 63%, and this is why the image has been mislabelled.
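The kind of explanation described here amounts to reporting the per-class similarity evidence behind a decision. A minimal sketch, with made-up similarity values for the mislabelled-truck example:

```python
def explain_prediction(similarities):
    """Given the similarity to the nearest prototype of each class,
    report the decision and the evidence behind it, so a mislabel
    can be traced to a high visual similarity score."""
    ranked = sorted(similarities.items(), key=lambda kv: kv[1], reverse=True)
    (winner, s1), (runner_up, s2) = ranked[0], ranked[1]
    return (f"predicted '{winner}' (similarity {s1:.0%}); "
            f"closest alternative '{runner_up}' at {s2:.0%}")

# A car image mislabelled as a truck: the explanation shows why.
msg = explain_prediction({"truck": 0.63, "car": 0.58, "bus": 0.21})
```

An opaque network would only report the wrong label; here the 63% similarity to the truck prototype makes the cause of the error visible.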
xClass extends xDNN and offers:
In this video, you can see pedestrians and vehicles moving across a road. The footage is from the nuScenes dataset; objects are first detected with YOLO and then classified using xDNN. We provide a confidence value for each detection, with colours indicating the certainty of correct identification.
With xClass, confidence plays an important role: the horizontal axis shows the frame number (the plot covers 5-6 seconds of video), and the blue line shows the confidence. It fluctuates, and when it falls below the dashed line, the object is flagged as novel, i.e. a new object has appeared in the scene. This demonstrates the detection of novelty.
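The thresholding described above can be sketched in a few lines. The threshold value and the confidence series are illustrative assumptions, standing in for the dashed line and the blue curve in the plot:

```python
def detect_novelty(confidences, threshold=0.5):
    """Flag frames whose per-frame confidence drops below the
    threshold (the dashed line): a likely novel object or class."""
    return [i for i, c in enumerate(confidences) if c < threshold]

# Illustrative per-frame confidences; the dip marks a novel object.
conf = [0.9, 0.85, 0.8, 0.4, 0.35, 0.7, 0.9]
novel_frames = detect_novelty(conf)
```

Frames where confidence dips below the line are exactly the frames the video highlights as novelty, which a downstream system could use to trigger a new-class learning step.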