What kinds of research do we need to humanise AI futures?
Heather Ford reflects on a successful symposium on ‘Humanising AI Futures’
By Heather Ford, co-coordinator of the UTS Data and AI Ethics research cluster
On Friday 28 July, the UTS Data and AI Ethics Research Cluster hosted a symposium at UTS to present research aimed at ‘Humanising AI Futures’. As a network, we’ve been thinking about what UTS should do to support research into the social and ethical implications of data and AI technologies, and the symposium was designed to spark further conversations about this within the university community.
Professor Heather Horst from the ARC Centre of Excellence for Automated Decision-Making and Society delivered an inspiring keynote drawing on examples from her research on the impact of data and AI technology in multiple countries. She highlighted that the context (including its politics, history and cultures) in which a technology is deployed is vital in determining the outcomes of that technology. It is not inevitable that data and AI technologies lead to wealth extraction, alienation or increased inequalities. The futures of data and AI technologies are being determined as we speak – as technologies are adopted, developed, extended and rejected in particular contexts around the world.
Twelve speakers from the Data and AI Ethics Research Cluster, representing eight research groups and five faculties and schools, presented current research projects in three panels focused on 1) understanding (and “learning about”, as Dr Simon Knight put it), 2) governing and 3) reimagining data and AI. As the symposium concluded, I remarked on how extraordinary it was that lawyers, engineers, creative practitioners, anthropologists, social scientists, computer scientists, humanists, digital humanists, designers, literary scholars and historians could agree on so much.
It is clear that there are two processes that require research support in the area of ethical AI. The first is the period before a technology is deployed: we need research that helps engineers understand the social, political, historical and cultural contexts in which the technology will be used, so that we can a) build technology that fits well into the ways that people live and work, and b) mitigate unintended consequences by anticipating the impact of that technology on current policy and practice. The second is the period after a technology is deployed: we need research that examines how people are using (or not using) the technology in situ, and for that knowledge to be fed back to engineers and legislators so that the technology can be improved (and sometimes abandoned completely).
In both cases, research requires meaningful conversations between stakeholders and researchers. Ethical AI, in other words, is determined by the vitality and comprehensiveness of dialogue about it. Finding the best methods for facilitating those conversations, I’m starting to realise, is one of the most valuable areas of research for supporting ethical AI – and it is in this area that I think UTS is really excelling. Across multiple groups, we work to facilitate conversations between stakeholders (the Human Technology Institute with corporate Australia, the Disability Research Network with people with disability, the Connected Intelligence Centre and the Centre for Research on Education in a Digital Society with students and educators, the Centre for Media Transition with journalists and media policy makers, the Data Science Institute with human resources services and job seekers) that lead to the development of participatory, democratic AI tools, processes and policies.
We design, re-design and re-think how oaths, guidelines, principles, sensitising questions, participatory models, ethical edge cases, and critical questions might be used to develop humanistic AI. We create channels that stakeholders can use to hold conversations about ethical AI in the future. What seemed to be disagreement over whether AI principles or guidelines work, for example, was in fact a demonstration of how we are able to evaluate these processes in context and in the public interest.
Another area where UTS excels is in the way that we practise data analytics towards public interest goals. Excellent research in the Faculty of Design, Architecture & Building, for example, shows how experimentation with AI image generators can creatively reflect on how the tools work and in whose interests. Work in the Faculty of Arts and Social Sciences shows how data technologies can be used to surface inequalities in supposedly representative data and to produce reflexive tools that data workers can use in their daily practice to improve the quality of hidden data. The Connected Intelligence Centre is working on analytics that can improve student retention, but is doing so within a larger ethical framework that reflects multiple stakeholders’ needs. Humanising AI futures is, in short, about building better ways to talk about humans’ needs in relation to technological affordances, and about getting our hands dirty by using current tools to make data that surprises, that astounds, that complicates the ways in which we think about that data.
This short symposium really helped to set the stage for the kinds of cross-disciplinary work that can not only mitigate the risks of AI, as our Deputy Vice-Chancellor (Research), Professor Kate McGrath, articulated in her welcome, but also develop technological futures that are better engineered, managed and governed for all stakeholders and in the public interest.