UTS researchers working towards ethical AI
Welcome to the first edition of the University of Technology Sydney (UTS) Data and AI Ethics Cluster’s newsletter for 2024. Please forward it widely to friends and colleagues who may be interested.
Since the last newsletter, generative AI has exploded onto the public stage, creating new urgency for AI applications to be developed towards ethical ends. UTS academics have been hard at work answering the new questions arising from this development, while continuing to conduct research with organisations and institutions and to work with students to expand the local and global conversation about how AI should be developed, managed and governed.
In our researcher spotlight, Dr Suneel Jethani, Senior Lecturer in the School of Communication, shows how questions about AI are a continuation of, rather than a radical break from, debates about previous technologies. He talks about his own path to data and AI ethics, from his exploration of the Australian debate over embryonic stem cell cloning in the early to mid 2000s to the normalisation of surveillance in the diffusion of wearable health and fitness tracking technology in the 2010s. AI Ethics, he says, is a natural extension of this work, which brings together bioethics, information ethics, communication ethics and technology ethics. Debates about the development of AI tend to be located in the technical fields, but Suneel’s story demonstrates how much those with expertise in the social and political impacts of technology have to contribute. We love hearing about how researchers from across the university have come to study the ethics of data and AI from a variety of perspectives and will be featuring more of them through the year.
We also have updates on AI policy at home and abroad in this edition. Emma Clancy provides a fascinating update on the EU’s AI Act and the corporate lobbying that weakened its rules on generative AI in the final stages of negotiation. In our research centre updates, we hear how researchers from UTS’s Centre for Research on Education in a Digital Society (CREDS) and the Human Technology Institute have been providing vital input into a number of recent government hearings related to AI. We also have an update on recent research relating to data and AI ethics from the Centre for Media Transition (CMT). In our research updates, we highlight three new papers from UTS researchers and collaborators.
This sample from across the university shows how important UTS’s contribution is to stimulating and informing public debate about the social and political implications of AI’s development. Asking the right questions, connecting with other academics and institutions to facilitate public conversations, and providing evidence to inform policy making are essential if we are to ensure that AI is developed towards ethical ends.
By Heather Ford and Emma Clancy.
In each edition, our newsletter features an interview with a UTS researcher working in the area of data and AI ethics. This edition, we spoke to Dr Suneel Jethani, Senior Lecturer in Digital and Social Media in the School of Communication at UTS.
Dr Suneel Jethani – AI Ethics is a natural extension of my work so far
Why have you chosen UTS to work on your research?
The truth is that UTS chose me by employing me. Before UTS, I’d been working in the State Government in Victoria on some open data policy reform projects and was keen to move into a more traditional academic research environment. My background and training are in Science and Technology Studies, so a university of technology makes a lot of sense for me. I knew a lot of people here before I started, and these were people who I respected as friends and colleagues. That makes a big difference – coming into a new workplace and finding the balance between where you align and fit in with work that’s already going on, and how you carve out your own little niche and position within a broader group.
What inspired you to focus your research on data and AI ethics?
My primary interest is in studying the ways that emerging technologies which are not so well understood are framed through discourse. My Masters research looked at scientific advocacy in the Australian debate over embryonic stem cell cloning in the early to mid 2000s, and my PhD looked at the reframing and normalisation of surveillance and discipline in the diffusion of wearable health and fitness tracking technology in the 2010s. AI Ethics is a natural extension of this work, which brings together bioethics, information ethics, communication ethics and technology ethics.
How does your research inspire your teaching on AI ethics?
I teach two AI/Data ethics classes: one in the Faculty of Engineering and IT and the other in the Online Program Management program, which predominantly enrols students from the Business School. My research feeds a constant stream of new material and case studies into the subjects I’m teaching, while discussions in class often lead to new ideas in research.
Can you tell us about an interesting research project you're working on right now?
I’ve got two projects I’m working on. One is about enfeeblement risk among creative practitioners working with generative AI, and some of that work is going to be featured in a book called AI and Culture, edited by Tracy Harwood at De Montfort University in Leicester, that’ll be published through Edward Elgar next year. Enfeeblement risk is a type of risk where individuals voluntarily cede control to AI systems and start to lose the ability to think for themselves, solve problems, etc. The other is some preliminary work looking at strategies and tactics within the global anti-AI movement, from ritualised public protest to attempts to organise tech workers, which is feeding ideas for upcoming conference talks and a journal article.
Corporate lobbying derails EU AI Act’s proposed rules on generative AI
Proposed European Union (EU) rules governing major generative AI programs such as ChatGPT have been significantly weakened in the adoption of the final text of the AI Act in March. The European Parliament signed off on the final version of the AI Act on 13 March, following negotiations with the Council of member states. The negotiation period, from the Parliament’s adoption of its negotiating mandate in June 2023 until March 2024, was marked by intense anti-regulation corporate lobbying of member state governments and MEPs. This lobbying was led by US-based tech giants such as Google and Microsoft on the one hand, and emerging EU AI “champions” such as France’s Mistral and Germany’s Aleph Alpha on the other. The treatment of large-scale generative AI programs was the focus of much of the lobbying.
Months of corporate lobbying
The 24 October 2023 “trilogue” (non-transparent negotiations between the Council, Parliament and European Commission) reached a broad agreement regarding the treatment of so-called general purpose AI (GPAI). GPAI refers to AI systems that can be used for more than one purpose, including large language models and powerful generative models such as ChatGPT. While the Parliament’s mandate had proposed a horizontal approach, with rules applying to all GPAI models, the Spanish presidency of the Council proposed a “tiered” approach, with stronger requirements applying only to the most powerful models. MEPs conceded this point and agreed to a tiered approach, with details to be worked out.
But in November 2023, France pushed strongly for a withdrawal of the tiered approach towards GPAI. Supported by Germany and Italy, France argued that providers of major generative models should not be subject to external regulation, and should only be self-regulated, including through the use of company codes of conduct. Media reports immediately identified French AI company Mistral as influencing France’s position, noting that one of its co-founders, Cedric O, was France’s digital economy minister until 2022 and had close ties with the Macron government. Since its founding, Mistral has lobbied aggressively against the EU AI Act, saying in October 2023 that it would “kill” the company. While framing its lobby campaign in terms of EU champions, competitiveness and sovereignty, Mistral’s policy positions were largely identical to those of the US tech giants.
France leads push against regulating AI
On 19 November, France, Germany and Italy circulated a joint non-paper rejecting regulation of GPAI models and proposing “mandatory self-regulation through codes of conduct”. On 24 November, Corporate Europe Observatory published a report online, 'How Big Tech undermined the AI Act', which made public the private lobbying documents submitted by Google, OpenAI, Microsoft and others to the EU institutions regarding the treatment of GPAI. The report noted that throughout 2023, 86% of the Commission’s high-level meetings on the AI Act were with industry lobbyists.
At a meeting of EU ambassadors at the end of November 2023, the Council gave the Spanish presidency a revised mandate for the outstanding provisions in the AI Act, proposing basic minimal requirements for GPAI systems and additional requirements for “high-impact” GPAI models. Media reports suggested France was in favour of collapsing the negotiations rather than accepting GPAI regulation. Germany and Italy retreated somewhat and indicated willingness to compromise.
Minimal regulation of generative AI
A “political agreement” between the institutions on the final text was reached at the fifth and final political trilogue, held in early December. The final text included minimal horizontal requirements for all GPAI models, such as keeping records and providing these upon request to regulators, as well as limited stronger requirements for models posing “systemic risks”. The text set this threshold at models trained using more than 10^25 floating point operations (FLOPs) of compute – a level that currently applies only to OpenAI’s GPT-4 and (likely) Google’s Gemini Ultra. On 2 February, the Council meeting finally signed off on the provisionally agreed text, coordinated under the new Belgian Council presidency. France resisted until the last moment, reportedly manoeuvring to postpone or even reject the provisional agreement.
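As a rough illustration of what that threshold means in practice, the sketch below applies the widely used rule of thumb that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs. The figures and function names here are illustrative assumptions for this newsletter, not the Act’s own methodology or official estimates for any particular model.

```python
# Back-of-the-envelope check against the AI Act's "systemic risk" compute threshold.
# Assumes the common heuristic: training FLOPs ~= 6 * parameters * training tokens
# (a rough estimate for dense transformers, not the Act's own methodology).

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold set in the final text

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute for a dense transformer."""
    return 6 * parameters * training_tokens

def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    """True if estimated training compute crosses the 10^25 FLOPs threshold."""
    return estimated_training_flops(parameters, training_tokens) > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70-billion-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")      # ~8.4e+23
print("Above EU threshold:", exceeds_threshold(70e9, 2e12))  # False
```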
Tellingly, on 26 February, Microsoft and French “champion” Mistral AI announced a “strategic partnership”, with Microsoft investing €15 million (likely to be converted to equity during Mistral’s next funding round) and Mistral being granted access to Microsoft’s Azure cloud services. While the amount is a minuscule part of the €450 million Mistral has raised this round, the announcement caused anger among MEPs and others, who have requested the Commission investigate whether Mistral was negotiating this deal with Microsoft while lobbying for “EU champions” during the AI Act negotiations. Campaigners have described Mistral as a “Trojan horse” for Big Tech.
By Emma Clancy, PhD candidate, UTS School of Communication
News from UTS Research Centres
Inquiry into the use of generative artificial intelligence in the Australian education system
In January 2024, Associate Professor Simon Knight gave evidence as a witness to a government inquiry into the use of generative AI (genAI) in the Australian education system. Presenting key points from the UTS Centre for Research on Education in a Digital Society (CREDS) submission, he suggested that genAI will have significant implications for learning, but that we need to temper strong claims about unprecedented change and recognise the specific unknowns that remain open questions in research, where evidence is needed to move the debate forward.
There are three ways to frame genAI in learning, with distinctive implications:
First, education is one area where genAI will have an impact, including on how we teach and learn. The practices and tools marking this shift will develop alongside each other. To support that development, we need policy and methods for generating evidence about those tools and practices, and avenues to share this knowledge.
Second, what we teach will also shift, reflecting changes in society and labour markets. This is a cross-sector and cross-discipline challenge, and understanding how to support professional learning in this context will require coordination and dynamism.
Third, to understand ethical engagement with AI, we need to understand how people learn about AI and its applications. This underpins meaningful stakeholder participation, how real ‘informed consent’ is, and whether ‘explainable AI’ actually achieves its end – i.e., AI that is understandable. These questions are crucial for AI that fosters human autonomy. For this, we need sector-based guidelines with examples and ways to share practical cases.
Navigating the genAI in education discourse over the last 18 months, we have been faced with two contradictory concerns: concern regarding the unknown (we don’t know enough, ‘unprecedented’ change, etc.); and concern regarding the known (confident statements that AI can do x, or will lead to y). Strong claims in either space should be tempered; we have examples of previous technologies, and we have existing regulatory models that apply now just as they did a year ago. We can learn from prior tech hype and failures, and in many cases use existing policy to tackle these novel challenges.
On the other hand, we don’t ‘know’ the efficacy of tools or their impact in many contexts – for example, what the implications are of being able to offload ‘lower level’ skills that may be required for more advanced operations. These are open questions for research on learning, and this lack of evidence matters if we want people to make judgements about whether engaging is “worthwhile” – i.e., will genAI help us achieve our aims in education.
By Associate Professor Simon Knight, Director of the UTS Centre for Research on Education in a Digital Society (CREDS).
Human Technology Institute advocates for privacy on facial recognition and Digital ID
A key focus of the Human Technology Institute (HTI) is ensuring the safety and trustworthiness of Digital ID systems as both the NSW and federal governments move towards the rollout of their respective systems.
In the federal sphere, HTI made a submission and appeared before a Senate committee, advocating for a number of amendments to improve privacy and other human rights protections in the Digital ID Bill – many of which were adopted by the Government. These include recommendations around strengthening inclusion and accessibility criteria, providing for a redress mechanism, and ensuring the voluntariness of the scheme. The Bill has passed the Senate and is awaiting sign-off by the Lower House.
In New South Wales, HTI provided independent expert advice to Service NSW on a governance framework and training strategy to support the safe, trustworthy and responsible rollout of NSW’s Digital ID system. This work was supported by a Policy Challenge Grant through the James Martin Institute for Public Policy. You can read more here.
AI Corporate Governance Program – Lighthouse Case Study Series
The AI Corporate Governance Program is an initiative of the Human Technology Institute to broaden understanding of corporate accountability and governance in the use of AI. We have engaged with over 1,000 organisations and individuals across Australia since our inception in 2022, and a consistent request from business leaders has been to hear from peers across diverse organisations who are tackling the challenge of AI governance. In response, in April we launched our Lighthouse Case Study Series to highlight the insights generated and challenges faced by organisations on the frontier of human-centred AI development and deployment. We have launched our first two case studies – Telstra and KPMG Australia. Stay tuned for our UTS case study later this month. Read more about the Lighthouse Case Study Series here.
Thrive program – seeking postgraduate research students
The Thrive: Finishing School Well program is a groundbreaking research collaboration between the Human Technology Institute, Western Sydney University’s TeEACH Research Centre, the Paul Ramsay Foundation and the NSW Department of Education. It combines expertise in statistical machine learning, lived experience and community co-design to develop new methods for discovering which factors affect whether NSW school students finish school well. We are offering generous scholarships for students advancing to Master’s, PhD and Honours studies to support the Thrive research program, in both the Statistical Machine Learning and Data Science stream and the Qualitative and Co-Design Methods stream. Read about our scholarships here.
Centre for Media Transition work on generative AI and journalism
As media organisations begin to grapple with the implications of generative AI on news production, the Centre for Media Transition (CMT) is conducting a multi-year research project into the potential impact of gen AI on the news and information ecosystem and considering possible policy approaches to mitigate risk. In 2023, we investigated how Australian newsrooms have been approaching gen AI and the extent to which they are ready to meet the challenges it presents. Our report, released in December 2023, found that newsrooms are experimenting cautiously with gen AI technology. They see strong upside for news production, particularly around the absorption of menial tasks that have arisen with digitalisation. They also see an opportunity for trusted brands to stand out from the mire of misinformation and other low-quality content. However, editors are attuned to the significant downside of gen AI use if problems of accuracy, authenticity and bias are not adequately dealt with. Newsrooms are also concerned about the use of their news archives to train AI systems without recompense. Meanwhile, efforts at regulating AI are gathering pace around the world. These mostly focus on transparency and safety-testing obligations for AI systems and regulating high-risk uses. Promoting quality news and information will be essential in an increasingly polluted information environment.
In 2024, we are expanding on this research to look at the following issues:
Are industry-wide guidelines or standards necessary to mitigate the risks of gen AI to news and information integrity?
What capacity will news media organisations have to seek compensation for their output when LLM manufacturers use it for training purposes?
How can problems of bias and verification be best addressed, given that newsrooms do not have full control over AI-generated content?
Our research will include a further series of interviews with editorial staff in Australian newsrooms, as well as a number of roundtables, which will aim to forge a set of common principles for implementing gen AI tools and for dealing with AI-generated content.
Research updates
A human-centred design space for AI writing tools
Everywhere we look in the world of digital writing, AI is wriggling its way into the apps, from the longstanding, fully featured tools like Microsoft Word and Overleaf, to the niche products like Grammarly, to the myriad new kids on the block targeting specific commercial sectors with the promise of “writing productivity” from Generative AI. Designers are now faced with an array of choices, some with ethical implications, so we asked, what is the shape and size of the design space for AI writing tools? In a new paper for ACM-CHI2024 we map that space, and offer an interactive tool to filter the literature through the design space dimensions. Read more.
Lee, M., et al. (2024). A Design Space for Intelligent and Interactive Writing Assistants. In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), May 11–16, 2024, Honolulu. ACM, New York, NY, USA, 34 pages. Open access preprint: https://arxiv.org/abs/2403.14117
Co-designing AI ethics in education using deliberative democracy
In late 2021, UTS pioneered the use of deliberative democracy as a methodology for giving students and staff a meaningful voice in shaping the principles to govern the use of analytics and AI in educational technologies. While the informal feedback on this confirmed the positive experience participants had, the formal evaluation through participant interviews has now been published in the International Journal of Artificial Intelligence in Education, also documenting the subsequent impact of the work on university policy, and follow-on student engagements. Read more.
Swist, T., Buckingham Shum, S. & Gulson, K. N. (2024). Co-producing AIED Ethics Under Lockdown: An Empirical Study of Deliberative Democracy in Action. International Journal of Artificial Intelligence in Education. Published online: 27 Feb. 2024. https://doi.org/10.1007/s40593-023-00380-z
Navigating the certainty of generative AI
Following historical tradition, pundits have made dramatic pronouncements about the way that generative AI will transform their field. Public diplomacy is no different, with pundits suggesting that future diplomatic negotiations may be supported by AI models that haggle with one another on myriad points of policy and trade. In this paper, we discuss the underlying tension demonstrated by the current debate about what generative AI will do for human engagements. Some have heralded generative AI models as an opportunity to inform diplomacy and support diplomats’ communication campaigns, while others have argued that generative AI is inherently untrustworthy because it simply manages probabilities and doesn’t consider the truth value of statements. We look at how AI applications are built to smooth over uncertainty by providing a single answer among multiple possible answers and by presenting information in a tone and form that projects authority. We contrast this with the practices of public diplomacy professionals, who must grapple with both epistemic and aleatory uncertainty head-on to effectively manage complexities through negotiation. Read more.
Di Martino, L., Ford, H. Navigating uncertainty: public diplomacy vs. AI. Place Branding and Public Diplomacy (2024). https://doi.org/10.1057/s41254-024-00330-z
Subscribe to UTS Data & AI Ethics Cluster
We are a network of researchers working across disciplines at the University of Technology Sydney in the field of Data and AI Ethics. This is where we share our newsletters, analysis and updates. Visit www.uts.edu.au.