About TAS-S
Automata have been part of myth since ancient Greece, but they are now very much a reality. Autonomous systems (AS) are appearing in many different social contexts - and their application is growing fast. In TAS-S, system security is reviewed through an ecological lens: as well as the technical aspects of a system, there are interacting human, social and ethical elements to consider.
Take Connected and Autonomous Vehicles (CAVs), for example. We are working with National Highways (NH), a UK leader in road management, to prepare for the proliferation of CAVs. NH's priorities of protecting customers, improving road safety and streamlining road efficiency connect with TAS-S's interests in data ethics, networked communications and more-than-human agency. As drivers negotiate increasingly “hands-off” behaviours, what new rules of the road will emerge, and how will this feed into keeping the public and AS secure?
Autonomous plant machines are already in use in road construction projects. In TAS-S, we focus on how new technologies for road maintenance, manufacturing and transporting goods could, or should, integrate into existing UK infrastructures. How do such technologies interact with the surrounding social contexts into which they might be deployed, and what are the challenges of adapting to unexpected events and security concerns? Through workshops with National Highways, we are exploring some wider ethical issues connected to increasingly automated road maintenance infrastructures, such as the potential consequences for job markets, and what bearing this might have on workers’ rights and responsibilities. And as security becomes more distributed, with more work carried out via networked communications, how can the vulnerabilities that may emerge be accounted for?
As in other areas of AS there are supply chain security challenges too. Supply chains are increasingly vast, knitting together the local and the global in complex configurations. What are the social, organisational and material consequences of increasingly autonomous, distributed supply chains? How might issues of sustainability and climate change feed into an ethical assessment? And crucially, what security questions arise, whether around materials or the data flows of vehicle and road users, regulators, and others interacting with the AS in an increasingly networked context?
These questions don’t have easy answers. One method we’ve used to explore them is the isITethical? card game. This wasn’t your everyday game of Texas hold‘em, though. Players are presented with a series of value cards, each describing ethical, legal and social (ELSI) values. Not all values on the cards necessarily align – some even conflict. Each player receives a value card visible only to them and must make the case either to replace one of the three communal value cards with their own, to discard it, or to argue that a space should be added for it. The aim is for participants to reach consensus on a set of three value cards that form the foundation for an “ELSI-TAS vision”: a framework of values which informs, and negotiates with, future AS practices.
The discussion through each round was challenging, as different perspectives converged and diverged. In the end, we agreed upon three values:
Security - Within computer science, security tends to refer to a system's vulnerability to attack, but a social scientific perspective on security opens up a wider set of issues, including the security of researchers, users and data subjects - as well as the balance between security and civil liberties.
Two-Spirit - Inspired by indigenous protocols for artificial intelligence, Two-Spirit is a value which looks beyond binary differentiation (good/bad, secure/insecure) and recognises the deep co-dependences and entanglements between humans, things, and worlds. This card embraces the uncertain and contradictory dimensions of AS – including their potential to generate controversies and new security challenges.
Accountability - The easiest pick was accountability. There was strong agreement that identifying who is accountable, and when, is essential for our futures with AS. One area of uncertainty is what happens to accountability as more automation enters the scene. For example, can a CAV be accountable, and if so, when?
What would you choose?
Disclaimer
The opinions expressed by our bloggers and those providing comments are personal, and may not necessarily reflect the opinions of Lancaster University. Responsibility for the accuracy of any of the information contained within blog posts belongs to the blogger.