On AI, universities, the sciences, and the humanities



Image: Two large robots working alongside lots of people in a library.

The AI revolution is here. It has taken a while, as the ‘winters’ of AI show. But now that the technology, through Generative AI and its derived applications, has reached a large part of the globalised world, there is a rush to get started, or at least to catch up.

At Lancaster, we have a long tradition of research on Artificial Intelligence: from medicine, cybersecurity and mathematics to history, design, and sustainability.

Society will have to confront and directly engage with AI. There is clearly great potential for good, but there is also a need for critical assessment and for a range of checks and balances. The questions that AI raises are old and fundamental questions of humanity. The dangers include introducing further racism into social and other systems, amplifying misogyny and inequalities, contributing to ecological disaster, and perpetuating colonial exploitation.

Much like other technologies that have substantially impacted the world, AI needs to be carefully and thoughtfully advanced. Nevertheless, an important difference from, say, nuclear energy is the astonishing speed at which AI is evolving, and therefore swift transformation is of the essence.

The emerging literature, and our own experience at Lancaster, quickly suggests that much of the research on the societal effects of AI comes from the Humanities and Social Sciences. This is hardly surprising, as it is in the tradition of these fields to deconstruct methods, assumptions and theories, and to point out their impact and dangers.

Today, more than ever, there is a need to end the long-promoted separation between the ‘sciences’ and the ‘humanities’. In the case of university research, and given the pace of developments in AI, there is an urgent need for interdisciplinary conversations that can lead to a more thoughtful co-creation and co-development of ethical technologies.

We also need to acknowledge that this ‘leviathan’ is being advanced, and mostly controlled, by large companies, and that there are significant difficulties in regulating either these companies or the technology. These are questions of capitalism and ethics in a competitive market that quite literally finds its way directly into billions of people’s pockets through smartphones.

However, we should pause and consider that universities hold enormous, although usually disregarded, power. It is at universities that engineers, designers, managers, lawyers, and implementers of AI in all fields are, and will be, educated.

After all, it is in the classroom where much of the future of our societies is being built. It is therefore imperative that our curricula consider the most pressing global challenges from multiple disciplinary perspectives; that we introduce subjects such as ‘Technologies for Humanities’, but also ‘Humanities for Technologies’; and that we instil in those who will one day lead important transformations the pressing need for a humanistic outlook, beyond profit or individual gain, in the creation of our world.

Professor Patricia Murrieta-Flores, Professor in Digital Humanities


