Symposium 2023 presenters' abstracts
UTS Data and AI Ethics 2023 Symposium: Reimagining data and AI for a future worth wanting. Friday 28 July 2023, 8:30am-2pm (lunch from 1-2pm). Register on Humanitix
Below are abstracts of the presentations to be given at the symposium.
Panel 1: Understanding Data and AI
Michael Davis and Monica Attard: Canary in the coalmine or angel in disguise: generative AI, journalism and a worsening information disorder problem
OpenAI’s ChatGPT marks a significant step up in the potential use and application of generative AI in newsrooms, and raises new challenges concerning its potentially profound influence on the way journalism is made and consumed. These challenges include questions around journalistic integrity and accuracy, plagiarism, copyright, bias and impartiality. The copyright issue may influence the positions adopted by media organisations negotiating with the digital platforms for compensation for content in the shadow of the News Media Bargaining Code (NMBC).
The presentation will outline a new research project exploring how the ethics and practice of traditional public interest journalism can be maintained given the inevitable adoption of generative AI models in newsrooms, as well as their impact on an already critical information disorder problem.
The project involves a series of semi-structured interviews with local newsrooms to explore how they have incorporated, or plan to incorporate, generative AI into their production processes, and any ethical and editorial guidelines or processes they have implemented or are planning to implement to mitigate the potential propagation of inaccurate or biased news and information and to address other concerns such as copyright. Following this, we will assess the implications of generative AI systems for local newsrooms and the robustness of their practices for addressing any potential negative implications. This will feed into a broader analysis of the interactions between journalism and other parts of the information ecosystem, including the role that journalism can play in both contributing to and mitigating the propagation of mis- and disinformation, and the implications for regulation and governance of media, digital platforms, and AI.
Simon Knight: Learning for Ethical AI
Four recent reviews identify close to 200 guideline and principle documents for AI ethics. How do these documents help us to navigate and learn about ethical engagement with AI? In reviewing these materials, a high proportion are either simple statements of principles or roadmaps for AI strategy (often at a national level), with very few providing practical guidelines for users, developers, and researchers. I’ll talk about work to analyse the available materials from a learning perspective and to develop models, draft guidelines, and cases. One feature of this work is the (seemingly not too radical) claim: if we want to learn about ethics, we have to talk about ethics, and the implications of that claim, including for scholarly communities via materials like editorial policies.
Adam Berry: Intersections between AI and Disability
For artificial intelligence (AI) to be genuinely human-centred, it depends on the meaningful inclusion of beneficiaries, end-users, data contributors and potentially impacted communities. Historically, though, the voices of minority and marginalised groups in the delivery of AI and data-driven algorithms have been, at worst, absent and, often, tokenistic. The harms resulting from that exclusion have become increasingly commonplace, but no less disturbing, yielding societal and individual impacts that calcify or exacerbate existing disadvantage across domains as critical as criminal justice, health and education. The result is that even well-intentioned AI solutions consistently fail to capture the needs and concerns of marginalised communities, fail to correct for the material risks they foresee, and fail to deliver the societal uplift that AI so richly promises and could so readily achieve.
In response, we have delivered the first-ever national survey that focuses on how members of the Australian disability community think about, worry about and respond to the emergent risk of artificial intelligence in their lives. We hope that the survey and its findings will provide a critical new asset for thinking about how to deliver AI which properly reflects the needs and concerns of people with a lived experience of disability. Without such work, AI developers and adopters alike are left with little more than assumption upon which to pave the path forward.
Kirsty Kitto: Technical democracy: generating a more equal collaboration between context experts and AI experts
Data science and artificial intelligence do not exist in a vacuum: the data they utilise is often very well understood by context experts, the people who are experts in the system itself. Teachers, policy makers, immunologists, and many other professionals have developed a profound understanding of the systems that they work within, and yet this expertise is often forgotten in the rush to apply advanced analytical methods to the data when delivering an AI model. Thus, context experts are rarely granted equal participatory status at the AI table, lacking both the opportunity and the requisite expertise in AI and data science to ask critical questions of the techniques and approaches used.
This often results in AI experts rediscovering widely known facts, or developing models that are irrelevant or difficult to act upon. Context-free analytics can also produce scenarios where a naive understanding of the complexities of a socio-technical system leads to ethical dilemmas, unrecognised bias, and discrimination. How can we bring these two sets of experts into a more equal collaboration?
This talk will discuss an ongoing program of research that aims to generate a technical democracy. This would help context experts bring their expertise into the modelling process, critically examine the results generated, and challenge them if necessary. Methods from a wide array of fields are being harnessed, including graphical causal models, critical questions, and ethical edge cases.
Panel 2: Governing data and AI
Simon Buckingham Shum: Participatory governance of university AI
“How can a university engage its diverse community about their values, concerns and expectations regarding the use of AI?” This is the challenge that our work on Deliberative Democracy (DD) seeks to answer. DD emerged in response to the crisis of confidence in how typical democratic systems engage citizens in decision making. We tested these principles empirically by designing the EdTech Ethics workshop series for students, tutors and academics to co-produce a set of ethical principles to govern UTS educational technology powered by analytics and AI [report]. The rich experience, together with analysis of stakeholder interviews, provides evidence that the DD process cultivated commitment and trust among the participants, which has since continued through the formalisation of the principles in university AI policy and ongoing policy consultation through the UTS Student Partnership in AI, addressing contemporary issues such as generative AI and predictive AI modelling.
David Lindsay and Evana Wright: Regulating Use of Generative AI by Digital Platforms: Implications for News Media
The rise of generative AI has focused attention on rights in the content or data used to train algorithms. For example, artists and musicians have expressed concerns about the uncompensated use of their works to train algorithms for producing images or music. The draft consolidated EU AI Act, as approved by committees of the European Parliament in May 2023, attempts to address these issues by requiring providers of generative AI systems to, first, disclose the use of training data protected under copyright law and, secondly, to ensure ‘adequate safeguards’ against the generation of content that breaches the law, including copyright law. In Australia, specific attention has been given to the use of news content in generative AI deployed by digital platforms.
On the one hand, in a submission to the recent review of copyright enforcement, Google argued for new uncompensated exceptions to copyright law, such as a text and data mining (TDM) exception, to facilitate AI technologies. On the other hand, media companies such as Nine and News Corp support payments from companies that use their news content to train generative AI systems, by analogy with the News Media Bargaining Code. This project analyses the policy issues relating to the use of news content to train generative AI deployed by digital platforms, including whether there is a case for treating news content differently from other forms of content, such as artistic or musical works.
Nicholas Davis: Corporate governance of AI
Calls for public regulation of AI, particularly when linked to vague, future risks, can be misleading. First, such calls imply that organisations operate in a regulatory 'wild west' with regard to AI. HTI’s research shows this is untrue: a wealth of existing laws provide protections, including privacy and anti-discrimination laws. Unfortunately, these haven't been rigorously enforced. Second, AI systems are in wide use today, producing significant harms and risks that matter now to individuals, organisations and society. Two-thirds of Australian organisations report they are already using or planning to use AI, a figure that HTI’s research suggests significantly underestimates actual reliance on AI systems. Third, our research shows that vanishingly few Australian organisations are making a systematic effort to manage the harms of AI. Legal reform is undoubtedly needed to clarify, extend and better enforce our existing laws. However, given that 90% of AI research and the overwhelming majority of citizen encounters with AI are the responsibility of the private sector, an urgent focus is needed on the corporate use, management, control and governance of AI systems, lest these harms scale. See related report here.
Panel 3: Reimagining Data and AI
Suneel Jethani: Does AI need a Hippocratic Oath? (No.)
Amongst the discourse on AI there have been calls for developers, deployers and users of AI to swear to ‘do no harm’ in the same way that medical doctors do. In this talk, I will argue that this is a simplistic solution to a complex problem. A ‘do no harm’ oath brackets broader systemic drivers of risk and harm and relegates moral responsibility to individuals, ignoring important contingencies and relations within sociotechnical systems. Further, such oath-like soft regulatory apparatuses do not accommodate well the dynamic nature of expectations and norms that surround emerging and not yet fully understood technologies.
The talk will conclude with a discussion of speculative proposals for AI regulation and governance that demonstrate alternative approaches: looking beyond a developer/deployer onus for demonstrating the ability to manage risk, taking seriously the notion of a precautionary principle, and framing harm at the intersection of different artificially intelligent systems rather than within them. These approaches have the potential to act as catalysts for cultural change within the technology industry, leading towards more sophisticated understandings of risk and harm in the development and deployment of AI systems. Part of this work was funded by a small FASS grant in 2021 and, though not attached to funding at present, it forms the basis of grant applications planned for 2024. See related paper here.
Michael Falk: The Body of the Machine
What should AI look and sound like? How should it smell, taste, feel? The physical aspects of AI systems are often overlooked, but they are crucial to human-machine interaction. In this paper, I will introduce some old ideas about the embodiment of artificial agents. Today we tend to think of AIs as disembodied chatbots or sleek metallic robots. In the seventeenth and eighteenth centuries, before the rise of digital computers, people dreamt of other physical AIs: defecating ducks, voyeuristic hansom cabs, wooden hand-cranked text generators, ‘software’ of firm but pliable mud, levitating homunculi in glowing glass phials. What new ideas about the design of AI systems can we glean from these wacky dreams of old?
My talk emerges from ‘Artificial Enlightenment’, a project I have pursued for several years with colleagues in design and computer science. The broad aim is to recover fictional AIs from the past, to try and break open our assumptions about what AI can and should be in the present. It is remarkable how narrowly ‘AI’ is defined in any given place and time. By remembering lost definitions of AI, we can hopefully imagine a wider range of futures with it. You can see some results in my papers on Artificial Stupidity and Embodied AI.
Monica Monin and Andrew Burrell: The role of creative practice in encountering and critically understanding AI
We are currently in the middle of the upward turn in a hype cycle centred around AI technologies. Largely this turn is driven by a range of increasingly accessible machine learning systems that can generate (at times) sophisticated images based on short text prompts, and chatbots that appear to understand what we are saying to them and reply with the information we need. Much of the hype speculates as to where this technology may be taking us, and is often polarised between techno-utopianism and dismissal of the technology as anti-creative and as an attempted automation of human creative activity.
This presentation will propose a middle ground, one that takes a critical approach to working with AI technologies in creative practice. Critical making and critical reflection are central to this process. We will present a short case study, Filmic Identity in the Age of Deep Fakes, and point to other examples in creative practice that demonstrate ways of working with emerging AI technologies that allow practice-based researchers to develop a more nuanced understanding of the implications and possibilities of these technologies in creative practice specifically and in contemporary society more generally.
See the Symposium program here.