This year's Leipzig Symposium on Intelligent Systems will take place on April 28-29, 2022. On day 1, you may join us either on campus or remotely. On day 2, all events will take place completely online.
In this face-to-face workshop, on-campus participants will together explore the growing, diverse, and exciting field of hybrid AI concepts. In particular, we will discuss not only classical hybrid approaches like neural-symbolic integration, but also more general combinations of strategies from different fields of artificial intelligence. The overall goal of this interactive workshop is to connect the dots between the participants' diverse research topics.
Nigel Davies, Head of the School of Computing and Communications at Lancaster University and Co-Director of its Data Science Institute, will introduce the audience to LEISYS and the campus of Lancaster University in Leipzig.
AI has the potential to sustainably change social, economic and ecological processes. The semiconductor industry plays an important role in this development, because AI applications are usually based on semiconductor solutions. Infineon is already both a user and a provider of AI solutions. Through system expertise in hardware, software and algorithms, Infineon connects the real world with the digital world and hence helps to make AI applications more robust, energy-efficient and secure. In this context, Edge or Embedded AI applications are becoming increasingly important. In Edge AI solutions, data is processed close to the sensors, making applications energy-efficient, fast and secure. Embedded AI therefore creates special opportunities for European semiconductor manufacturers such as Infineon and for the Silicon Saxony industry cluster.
In parallel to the successes of neural networks for computer vision during the past decade, computational visual neuroscience has experienced a paradigm shift: the type of neural network behind this resurgence in computer vision happened to be the convolutional neural network, which had originally been proposed – in the form of the neocognitron – as the mechanism of object recognition behind Hubel & Wiesel's experimental findings in the early visual system in the 1960s. In a rare succession of mutual scientific inspiration, convolutional neural networks trained on object recognition tasks turned out to be the best models of information processing in the human visual cortex as well. Recent approaches take this processing similarity as a given and train the networks directly (end-to-end) on newly recorded large-scale human neuroimaging data sets. This leads to biological properties being learned implicitly, and to data-driven visual cortex models that allow large-scale in-silico exploration of what higher brain areas respond to.
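To make the end-to-end idea concrete, here is a minimal Python sketch (my illustration, not the speaker's model): a small convolutional network is fitted to predict recorded brain responses, such as fMRI voxel activities, directly from the stimulus images, instead of reusing a network pre-trained on object recognition. All sizes and data are stand-in assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch: a CNN with a linear readout into voxel space, trained
# end-to-end to predict measured responses from stimulus images.
n_voxels = 500                                  # hypothetical number of recorded voxels
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, n_voxels),                    # linear readout into voxel space
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(16, 3, 64, 64)             # stand-ins for stimulus images
responses = torch.randn(16, n_voxels)           # stand-ins for measured brain responses

for _ in range(100):                            # fit the network to the recordings
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), responses)
    loss.backward()
    optimizer.step()
```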
In the field of abstract argumentation, one studies the possibility of making decisions about the acceptability of arguments based on the structure of the attack relation between them. For this purpose, multiple argumentation semantics have been proposed. But which of these argumentation semantics give outcomes in line with what humans judge to be rational? This question can be tackled in different ways. In this talk, we focus on the principle-based approach to abstract argumentation, in which one defines certain principles that argumentation semantics should satisfy. One then studies which semantics proposed in the literature satisfy which principles, and if some desired combination of principles is not satisfied by any existing semantics, one studies the possibility of defining a new semantics that does satisfy this combination. We pay special attention to the principle of Irrelevance of Necessarily Rejected Arguments and to two argumentation semantics that were developed to satisfy this principle alongside certain other principles: SCF and choice-preferred semantics.
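To make the setting concrete, here is a minimal Python sketch (my illustration of plain Dung-style frameworks and admissible sets, not the SCF or choice-preferred semantics from the talk): an argumentation framework is a set of arguments plus an attack relation, and a semantics selects sets of jointly acceptable arguments.

```python
from itertools import chain, combinations

# A Dung-style argumentation framework: arguments plus an attack relation.
# We enumerate admissible sets by brute force (fine for tiny examples).

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def conflict_free(S, attacks):
    # No argument in S attacks another argument in S.
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, attacks, args):
    # Every attacker of `a` is itself attacked by some member of S.
    return all(any((d, b) in attacks for d in S)
               for b in args if (b, a) in attacks)

def admissible(S, attacks, args):
    return conflict_free(S, attacks) and all(defends(S, a, attacks, args) for a in S)

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c
for S in map(set, powerset(sorted(args))):
    if admissible(S, attacks, args):
        print(sorted(S))             # prints [], ['a'], ['a', 'c']
```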
The generalization capability of a learning algorithm is intrinsically related to the information that the output hypothesis reveals about the input training dataset: the less information revealed, the better the generalization. This argument has been formalized in recent years by appealing to different notions of information stability. In this talk, I will present a unifying picture of information-stability-based upper bounds on the generalization error of randomized learning algorithms under different assumptions on the loss function. Optimizing these bounds naturally gives rise to a method called Stochastic Complexity Minimization, for which we discuss two practical examples for learning with neural networks, namely Entropy-SGD and PAC-Bayes-SGD.
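As one representative bound of this kind (the mutual-information bound of Xu & Raginsky, which may differ in form from those presented in the talk): if the loss $\ell(w, Z)$ is $\sigma$-sub-Gaussian for every hypothesis $w$, then for a training set $S$ of $n$ i.i.d. samples and algorithm output $W$,

$$\left|\,\mathbb{E}\!\left[L_\mu(W) - L_S(W)\right]\right| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},$$

where $L_\mu$ is the population risk, $L_S$ the empirical risk, and $I(S;W)$ the mutual information between the data and the learned hypothesis. The less the output reveals about the training set, the smaller the expected generalization gap.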
In this face-to-face closing session, we will bring together on-campus participants in order to review day 1 of LEISYS 2022, discuss lessons learned and identify future directions of research.
Machine ethics is concerned with the challenge of constructing ethical and ethically behaving artificial agents and systems. One important theme within machine ethics concerns explicitly ethical agents – those which are not ethical simply because they are constrained by their programming or deployment to be so, but which use a concept of ethics in some way as part of their operation. Normally this requires the provision of rules, utilities or priorities by a programmer, knowledge engineer or user. In this talk I will address the question of how such explicitly ethical programs can be verified. What kind of properties can we consider, and what kind of errors might we find?
This talk explores the physical, behavioral, and computational limits of crowd-assembly for time-critical problem-solving. I will follow several real-world experiments in which we used social media to mobilize the masses for tasks of unprecedented complexity. From finding red weather balloons, to locating thieves in distant cities, to reconstructing shredded classified documents, the potential of crowdsourcing is real, but so are exploitation, sabotage, and hidden biases that undermine the power of crowds.
Sensor networks are becoming increasingly common, but the torrent of data they provide is not without its problems. It's intuitively clear that issues such as the placement of the sensors, their accuracy, the degradations caused by physical wear and tear, and deliberate attacks will all affect the confidence we should place in the conclusions we draw from the data collected, but we have only a limited understanding of how these issues affect what we observe. This talk describes work in progress that represents a sensor system as a tensor -- a three-dimensional generalisation of a matrix -- that can be used to perform data interpolation. It might also help us understand the effects of errors and develop additional algorithms for in-network data analytics.
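To illustrate one way a tensor representation can support interpolation (my sketch under assumed details – a sensor × time × channel layout and a low-rank underlying signal – not necessarily the construction from the talk), the following Python snippet imputes missing readings by alternating between a low-rank fit and the observed data:

```python
import numpy as np

# Sketch: sensor readings as a (sensor x time x channel) tensor with gaps,
# interpolated by repeatedly projecting onto low-rank matrices.
rng = np.random.default_rng(0)
T = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 40))  # latent rank-2 signal
tensor = np.stack([T, 2.0 * T], axis=2)                         # 5 sensors x 40 steps x 2 channels
mask = rng.random(tensor.shape) < 0.8                           # True where a reading was observed

def low_rank(mat, r):
    # Best rank-r approximation via truncated SVD.
    U, s, Vt = np.linalg.svd(mat, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

est = np.where(mask, tensor, 0.0)
for _ in range(50):                        # alternate: low-rank fit vs. observed data
    flat = low_rank(est.reshape(5, -1), r=2).reshape(tensor.shape)
    est = np.where(mask, tensor, flat)     # keep observed entries, impute the rest

print(np.abs(est - tensor)[~mask].max())   # error on the missing entries shrinks toward zero
```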
Clause selection is one of the key decision points within the saturation-style architecture of automated theorem provers (ATPs) for first-order logic. I will describe how machine learning (ML) can be used to greatly improve the clause selection heuristics and thus the prover performance. I will then try to put this prototypical example of the ATP+ML synergy into a broader context.
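As a schematic illustration of where the learning plugs in (my sketch of a generic given-clause loop on propositional clauses, not the code of any particular prover), the `score` function below stands in for a learned model that ranks unprocessed clauses:

```python
import heapq

# Clauses are frozensets of integer literals (negative = negated).

def score(clause, weights):
    # Placeholder for a learned model: here a linear function of a simple
    # clause feature (length), where `weights` would come from training.
    return weights["length"] * len(clause)

def resolvents(c1, c2):
    # All binary resolvents of two clauses.
    for lit in c1:
        if -lit in c2:
            yield frozenset((c1 - {lit}) | (c2 - {-lit}))

def saturate(axioms, weights, max_steps=10_000):
    processed, seen = [], set(axioms)
    heap = [(score(c, weights), i, c) for i, c in enumerate(axioms)]
    heapq.heapify(heap)                       # lowest score = selected first
    tie = len(heap)
    while heap and max_steps > 0:
        max_steps -= 1
        _, _, given = heapq.heappop(heap)     # clause selection: pick the "given clause"
        for other in processed + [given]:     # infer against all processed clauses
            for new in resolvents(given, other):
                if not new:
                    return "proof found"      # empty clause derived
                if new not in seen:
                    seen.add(new)
                    tie += 1
                    heapq.heappush(heap, (score(new, weights), tie, new))
        processed.append(given)
    return "saturated or resource limit"

# Unsatisfiable set {p}, {~p, q}, {~q}: a refutation is found.
axioms = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]
print(saturate(axioms, weights={"length": 1.0}))
```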
Gradient descent is a ubiquitous tool in the area of deep learning. Still, many of the underpinnings of gradient descent in the deep learning context have yet to be understood. In this talk we will explore a line of work that uses backward error analysis to quantify the discretisation drift induced by gradient descent and to shed light on its effects in supervised learning and two-player games. We will uncover the implicit regularisation effect that gradient descent has in supervised learning, and see how it can aid generalisation. In two-player games, however, we will find a more complicated picture: for adversarial games such as GANs, discretisation drift can have a harmful effect, and by cancelling parts of the drift using explicit regularisation we can improve performance and stability.
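To give the flavour of the backward-error-analysis result (this is a first-order form from the implicit-gradient-regularization literature; the coefficient is my reading and may differ from the talk's presentation): gradient descent on a loss $E$ with learning rate $h$ follows, up to $O(h^2)$ corrections, the modified continuous flow

$$\dot{\theta} = -\nabla_\theta \tilde{E}(\theta), \qquad \tilde{E}(\theta) = E(\theta) + \frac{h}{4}\left\|\nabla_\theta E(\theta)\right\|^2,$$

so the discretisation drift acts as an implicit regulariser that penalises steep regions of the loss surface.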
James will consider the Transformer architecture that has become a standard tool for AI and Natural Language Processing (NLP) and examine its relevance for understanding human thinking and planning. He will suggest that human-like instantiations of this architecture might allow us to build models that capture many aspects of human performance using a general-purpose architecture whose roots lie in models of human memory that originated almost 50 years ago, and he will discuss some of the open questions that he and his group are grappling with as they seek to implement such systems to model human goal-directed problem solving and planning.