Welcome to the first edition of the UTS Data and AI Ethics Cluster’s newsletter! This is where we will regularly profile the news, analysis, publications and conferences of UTS researchers working across disciplines in this field. Please forward it widely to friends and colleagues who may be interested. If you’ve been forwarded the newsletter, you can subscribe at the button above. Note: We recommend subscribing with your personal email account, as your institutional account may block Substack newsletters.
Depiction of data-labelling workers: Image generated by MidJourney for The Conversation.
ChatGPT prompts unprecedented public debate on AI
OpenAI’s release of ChatGPT last November sparked an unprecedented level of public debate about the impact of data and AI on society. By January, ChatGPT had 100 million monthly active users, making it the fastest-growing app ever. In March, the Future of Life Institute (FLI) published an open letter calling for a six-month pause on “giant AI experiments”, as well as accompanying policy proposals for regulating AI.
The open letter tapped into widespread alarm about the scale and pace of AI development, and a belief that the risks to job security and human agency are too great for AI systems to carry on unchecked. While these concerns are well-founded, critical data scholars have criticised the solutions proposed by the FLI, as well as the politics of the organisation itself.
The FLI was founded by tech billionaires who argue that AI could pose an existential threat to humanity at some unspecified time in the future, but ignore the actual harms resulting from the deployment of AI systems today. It is closely linked to the dangerously elitist philosophies of longtermism and effective altruism that are currently fashionable in Silicon Valley, and have been described by data justice activist Timnit Gebru as “eugenics under a different name”.
Gebru and colleagues Emily Bender, Angelina McMillan-Major and Margaret Mitchell responded to the FLI open letter, saying that – while they agreed with, and had already proposed, some of its recommendations – the letter directed public concerns away from actual risks and harms of AI systems towards imagined future scenarios associated with longtermist beliefs. They noted the actual harms omitted from the letter, including:
“1) worker exploitation and massive data theft to create products that profit a handful of entities
2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and
3) the concentration of power in the hands of a few people which exacerbates social inequities.”
Janet Haven and Jenna Burrell from the Data & Society Institute agree, pointing out that AI harms are not in the distant future, but already here, and they don’t only involve ChatGPT. In contrast to the FLI – which generally favours self-regulation by industry and only light-touch, risk-based state regulation – Haven and Burrell argue that we should focus instead on current legislative efforts by governments to regulate AI.
A new report by the AI Now Institute points out that the massive data and computational resources required by large language models (LLMs) further entrench the power of the largest tech firms. Only Google, Meta and Microsoft (which backs OpenAI) have the resources to build such models right now, consolidating the monopolistic power of these companies. In March, Google released its own LLM chatbot, Bard, against the wishes of the Google employees who tested it, who raised concerns that its advice could “result in serious injury or death”. And while the competition to release large-scale AI models has intensified in recent months, Big Tech companies – including Microsoft, Google, Meta, Amazon and Twitter – have been dismissing entire teams of “responsible AI” staff.
We can’t rely on Big Tech to regulate itself, nor should we allow the industry to set the parameters of public debate about AI systems. The mainly positive and uncritical response to the FLI open letter from the media and industry practitioners highlights the need for critical scholars to reclaim the terms of this debate, and redirect it towards public-interest regulation based on social and democratic values.
From Emma Clancy and Heather Ford, UTS Data and AI Ethics Cluster Co-Coordinators.
Publications: What we’re writing
Simon Knight (CREDS), Shibani Antonette (TDS), and Simon Buckingham Shum (CIC) published a new article in the British Journal of Educational Technology, ‘A reflective design case of practical micro-ethics in learning analytics’ on 3 April 2023.
Linda Przhedetsky’s (HTI) research on opaque algorithmic decision-making processes in the RentTech sector was profiled in the ABC on 18 April 2023 – ‘Inside the “opaque” property tech sector that helps real estate agents decide who gets a lease in a rental shortage’ – coinciding with the launch of a new CHOICE report on the issue.
Suneel Jethani’s (DSM) article on the limits of using a Hippocratic oath-style mechanism in data science was cited in the Data & Policy (2023, 5: e12) article, ‘Think about the stakeholders first! Toward an algorithmic transparency playbook for regulatory compliance’ by Bell, Nov and Stoyanovich.
Conferences and events: What we’re talking about
Simon Buckingham Shum (CIC) contributed to the second TEQSA national webinar on the “implications of ChatGPT for academic integrity in higher education”.
Of interest: What we’re reading
Economies of Virtue – The Circulation of ‘Ethics’ in AI
Authors: Thao Phan, Jake Goldenfein, Declan Kuch, and Monique Mann (eds) (2022)
Published by: Institute of Network Cultures
Seeing the sort: The aesthetic and industrial defence of ‘the algorithm’
Author: Christian Sandvig
Published by: Journal of the New Media Caucus
Algorithmic Governmentality and the Death of Politics
Author: Antoinette Rouvroy (2020)
Published by: Green European Journal
Other news
HTI launches The Future of AI Regulation in Australia project
The UTS Human Technology Institute launched a new research project on ‘The Future of AI Regulation in Australia’ on 27 March 2023. The project “will work collaboratively with civil society, industry and government to... first assess the gaps in Australia’s policy and legal approach to AI [and] then set out a roadmap for reform to ensure Australia’s regulatory framework encourages positive innovation while addressing the real risks of harm associated with AI”.
The Ethics of Data and AI unit
Suneel Jethani (DSM) is teaching unit 57304 ‘The Ethics of Data and AI’ for the first time to FEIT students and reports that the quality of the work and the in-class engagement have been very impressive. He will be compiling a volume of student work in this unit and circulating it to the UTS Data and AI Ethics Cluster. The unit has 13 students in Autumn and will run again in Spring this year.