Research Projects
▶ Setting the Legal Tone: Towards a framework for the protection of rights in voice personality
AI deepfakes enable the ‘misuse’ of an individual’s personal image, but little is known about whether this ‘personal image’ includes the voice. Despite growing public attention to rights over image and voice (see the SAG-AFTRA actors’ strike), the legal and ethical implications remain relatively under-explored. Many of the existing intellectual property tools that have succeeded in protecting intangible concepts fail when it comes to protecting the voice, particularly the voice of the ordinary person. With a focus on the creative industries, we seek to understand the characteristics of the voice and how to protect it by examining the legal, linguistic and ethical dimensions of ‘personality of the voice’. In this project we ask: what are the characteristics of personality in voice, and can they be protected? Can a framework concerning the rights and responsibilities of voice personality be developed? Can we live with AI voices?
▶ UKRI AI Centre for Doctoral Training in Lifelong Safety Assurance of AI-Enabled Autonomous Systems (SAINTS) Co-I and Responsible Innovation Lead
SAINTS at the University of York is training 60 talented PhD students with the research expertise and skills needed to ensure that the benefits of AI-enabled Autonomous Systems (AI-AS) are realised without introducing harm as systems and their environments evolve. SAINTS is a single-university Centre for Doctoral Training with an enthusiastic and diverse leadership team, reflecting its ambition for its student cohorts: Prof Ibrahim Habli (Director), Dr Jenn Chubb, Dr Jo Iacovides, Prof Cynthia Iglesias, Dr Ana MacIntosh, Prof John McDermid, Dr Phillip Morgan, Dr Colin Paterson, Dr Zoe Porter, Prof Tom Stoneham, Prof Richard Wilson.
▶ AI, What’s That Sound?
Artificial Intelligence (AI) and algorithms play an increasingly prevalent role in how we consume music and culture. Generative AI is expanding the boundaries of music, AI personalisation tools tailor listening experiences, and this expansion extends to immersive XR experiences of live music. The impacts of these developments have sparked debate and revolt across the creative industries. Whilst there is research into how listeners and musicians perceive artificially created music, and into the effects of AI consumption on societies, the attitudes of musicians and gig-goers towards the disruption and innovation taking place in our music making, performing and gig-going are only peripherally explored.
This research reflects on 75 survey responses from UK gig-goers and musicians about their attitudes towards AI in music. We find a relative acceptance of AI, provided its use is made transparent and under certain conditions of co-creation. An overall resistance towards AI’s ability to ‘mimic affect’ or reflect (let alone replace) the role of the human in art and music creation and live expression poses questions about human connection, authenticity and relatability in an age of AI music-making. One output was a public exhibition at Streetlife, York.
▶ The Sonic Framing of AI in Documentaries
The potential of sound for storytelling and its impact on public perception are often ignored in favour of other aspects of a story’s production, such as its contextual, technical or visual framing. Yet the soundtrack and sound design of stories allow the storyteller to speak to the audience and can shape their attitudes towards, and understanding of, a narrative object.
The aim of this research is to understand the ways in which a failure to consider the sonic framing of AI influences or undermines attempts to broaden public understanding of AI. Based on our preliminary impressions, we argue that the sonic framing of AI is just as important as other narrative features. Some of this research is published with We and AI, with a forthcoming journal article in AI & Ethics.
▶ ALGorithms
This project aims to bring together expertise across the humanities and social sciences interested in Algorithms, Loss and Grief. In particular, I am interested in algorithmic hauntings or triggers affecting the grief process, digital death (and non-death) and the ethics of AI in understanding grief and loss.
▶ A Vice Framework for AI
In this work, we consider whether a myopic focus on the moral positives of AI risks inadvertently creating corrupting techno-social systems, potentially making our future work environments even worse than they are already likely to be. Contemporary theorising about the ethical dimensions of AI exhibits at least two problematic tendencies.
The first is a default focus on concepts such as autonomy, justice, fairness and non-discrimination, explicable in terms of the entrenchment of a liberal morality of the sort central to much contemporary moral and political philosophy. The second is a focus on concepts that track the positive dimensions of moral practice and experience, such as virtue, flourishing, moral progress and the good life (Vallor, 2016). Taken together, the result is large-scale frameworks for the moral appraisal of new technologies that fail to systematically address the negative aspects of moral life (Jobin et al., 2019): one symptom is an emphasis on virtues and excellences to the occlusion of vices and failings; another is an affirmation of the edifying potential of new technologies to the relative neglect of their corrupting possibilities (Coeckelbergh, 2020, p.59).
We suggest that a more comprehensive assessment of the moral possibilities of new technologies in the future of work requires a more complex set of concepts and sensibilities, and that there are good empirical and theoretical reasons to consider the vicious and corrupting dimensions of new technologies. If so, exploring the negative moral possibilities of these technologies requires a greater sensitivity to what we will call technomoral vices.
▶ AI in Society Lab
Think slowly and fix things! This network, co-led with ethicist Dr Zoe Porter in the Institute for Safe Autonomy (ISA), coordinates and supports thoughtful reflection on ethical AI development across the University.
The AI in Society (AIS) Lab is a forum at the University of York for academics and researchers working on the societal implications of AI.
▶ AI Futures
In collaboration with the Digital Creativity Labs, this project explored the opportunity spaces (and gaps) in living with data and AI. The project yielded several outputs, including invited talks and journal articles exploring AI in science and the missing narratives of AI.