Lancaster academics spearhead conference on ramifications of new EU AI Act


The ELSA logo next to an image of the conference

The European Lighthouse on Secure and Safe AI (ELSA) recently hosted a conference exploring the legal and ethical aspects of machine learning in light of the EU AI Act, which was finalised in May of this year. The ELSA project, launched in 2022, comprises a network of leading researchers from 26 top research institutions and companies across Europe, intent on delivering cutting-edge AI solutions in a safe and ethically conscious manner. It is funded by €10M in research grants from the EU, UK, and Switzerland, with Lancaster leading one of its three research programmes.

The EU AI Act, first proposed in April 2021, sets out regulations on the commercial use of AI to protect the public’s health, safety, and fundamental rights, including democracy and protection of the environment. The first legislation of its kind, it outlaws the use of AI to manipulate human behaviour in ways that circumvent free will, make criminal risk assessments of people or employ “social scoring”, perform emotion recognition in the workplace, harvest facial images from CCTV to build facial recognition databases, or biometrically categorise people based on protected characteristics or for use within law enforcement. The Act was approved on 21st May of this year but has yet to be fully implemented.

ELSA’s conference, which took place in Windermere on 25th June, brought together experts on AI, machine learning, and security to discuss the new Act and how we can develop trustworthy AI that adheres to the EU’s regulations while also enhancing our lives in measurable ways. The team explored the new regulations in depth and considered a variety of use cases for safe AI, including privacy-preserving machine learning in robotics, combating “deep fake” images through the development of better detection algorithms, and extracting and interpreting information from AI models more accurately.

On the success of the day, organiser Professor Plamen Angelov said: “The workshop was a truly cross-disciplinary event that brought together not only machine learning and AI experts, but also professors of law and experts in data protection as well as external stakeholders such as the Joint Research Centre of the EC and the European Centre for Algorithmic Transparency. The main focus of the event was to discuss the need for human agency and oversight in regards to the AI and machine learning tools in the light of the recent EU AI Act. The workshop was organised as a part of ELSA, which is a large European collaborative effort and one of its work packages that Lancaster University is leading.”
