This year's Leipzig Symposium on Intelligent Systems will take place on September 16-17, 2024. On day 1, you may join us either on campus or remotely. On day 2, all events will take place completely online.
Reference ontologies play an essential role in organising knowledge in the life sciences and other domains. They are built and maintained manually, and since this is an expensive process, many reference ontologies cover only a small fraction of their domain. We develop techniques that automatically extend the coverage of a reference ontology with entities that have not yet been added manually. The extension should be faithful to the (often implicit) design decisions of the reference ontology's developers. While this is a generic problem, our use case addresses the Chemical Entities of Biological Interest (ChEBI) ontology, which contains classes of molecules, since the chemical domain is particularly well suited to our approach. ChEBI provides annotations that represent the structure of chemical entities (e.g., molecules and functional groups). We show that classical machine learning approaches can outperform ClassyFire, a rule-based system that represents the state of the art for classifying new molecules and is already being used to extend ChEBI. Moreover, we develop RoBERTa- and Electra-based transformer networks that achieve even better performance. In addition, the axioms of the ontology can be used during the training of prediction models as a form of semantic loss function. Furthermore, we show that ontology pre-training can improve the performance of transformer networks on the task of predicting the toxicity of chemical molecules. Finally, we show that with ontology pre-training our model learns to focus attention on more meaningful chemical groups when making predictions, paving a path towards greater robustness and interpretability. This strategy is generally applicable as a neuro-symbolic approach to embedding meaningful semantics into neural networks.
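The semantic loss idea mentioned in the abstract can be illustrated with a minimal sketch: subsumption axioms (subclass/superclass pairs) penalise predictions in which a subclass receives a higher probability than its superclass. All names, classes, and numbers below are hypothetical illustrations, not the authors' actual implementation.

```python
# Hedged sketch of a semantic loss term derived from ontology subsumption
# axioms. The class names and probabilities are invented for illustration.

# Subsumption axioms as (subclass, superclass) pairs in a ChEBI-like hierarchy.
SUBSUMPTIONS = [("phenol", "alcohol"), ("alcohol", "organic molecule")]

def semantic_loss(probs, axioms):
    """If A is a subclass of B, the predicted probability of A should not
    exceed that of B; sum up the violations as an extra loss term."""
    return sum(max(0.0, probs[sub] - probs[sup]) for sub, sup in axioms)

# A hierarchy-consistent prediction incurs no penalty ...
consistent = {"phenol": 0.2, "alcohol": 0.7, "organic molecule": 0.9}
# ... while an inconsistent one is penalised by the total violation.
inconsistent = {"phenol": 0.9, "alcohol": 0.7, "organic molecule": 0.3}

print(round(semantic_loss(consistent, SUBSUMPTIONS), 2))    # 0.0
print(round(semantic_loss(inconsistent, SUBSUMPTIONS), 2))  # 0.6
```

In practice such a term would be added (with a weighting factor) to the usual classification loss during training, nudging the network towards logically consistent predictions.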
In this talk we outline the neuro-symbolic nature of the LTL synthesis tool "SemML", the winner of SYNTCOMP's realizability track in 2024. Specifically, we demonstrate how its machine-learning-based heuristics enhance the underlying formal algorithms to solve LTL synthesis more efficiently. Finally, we identify criteria that indicate where other algorithms could benefit from such techniques.
Teamwork is better than struggling alone when tackling complex tasks. Such tasks usually involve expertise in multiple domains and require diverse capabilities. Hybrid teams of human and artificial actors (i.e., humans and agents) can contribute their respective strengths to achieve common goals. These goals can be negotiated while taking the actors' diverse capabilities into account; in combination, the goals can then be achieved by performing a suitable set of tasks. To determine this set of tasks, actors need to communicate about and agree on the actions that cause the desired effects. In the proposed system, a shared causal model enables the alignment, distribution and delegation of tasks.
I will give a short overview of my experience with combining different AI techniques for driver assistance and autonomous driving systems. One particularly interesting aspect is safety engineering for AI-based systems.
Recently recorded large-scale neuroimaging datasets allow us to train artificial intelligence models end-to-end on brain activity alone. These large-scale recordings are expected to revolutionize neuroscience, providing new insight into how intelligence works in nature and overcoming simplistic earlier assumptions. The talk will introduce a current neural-network-based method for deriving digital twins of neural information processing from neuroimaging data.