Understanding data and AI – Symposium panel
Researchers discuss AI’s relationship with disability, journalism and education
The first panel at the UTS Humanising AI Futures Symposium was chaired by Dr Simon Knight from the Centre for Research on Education in a Digital Society, and featured presentations from UTS scholars Dr Michael Davis (Centre for Media Transition), Dr Kirsty Kitto (Connected Intelligence Centre) and Dr Adam Berry (Disability Research Network and Data Science Institute).
Opening the panel, chair Dr Simon Knight framed the discussion as “learning to understand” data and AI. “We’re all here because we share a practical concern. We are interested in AI, and we care about ‘good’ uses of AI, with ‘good’ referring both to the quality of the work – the algorithm – and to its normative sense. We’ll discuss how the various stakeholders are learning about ethical engagement with AI.”
Presenting research carried out together with Prof Monica Attard at the Centre for Media Transition, Dr Michael Davis spoke on ‘How journalists use AI’. Commenting on the introduction of generative AI systems such as ChatGPT, Davis highlighted a recent report by media integrity project Newsguard, which uncovered at least 49 so-called news sites that are pumping out entirely AI-generated articles, unsurprisingly “filled with inaccuracies and misinformation”.
Davis outlined a new project initiated by the CMT investigating how AI is being adopted in professional newsrooms in Australia. “For a decade before ChatGPT was released, the use of AI was already widespread in newsrooms – in audience profiling, spotting trending stories, translating and transcribing, automating stories on financial data and sports results, and content moderation. But our preliminary research suggests journalists in Australia are not aware of the extent of AI use in their own newsrooms,” he said.
“Our research project poses the following questions: What opportunities and risks do Australian newsrooms see in the new AI tools? How are these tools being used in Australian newsrooms? Are Australian newsrooms making changes to editorial practices or ethical guidelines to mitigate AI risks? And what changes should be made to editorial practices and ethical guidelines to mitigate AI risk?” The three-phase research project includes a literature review, newsroom interviews, and analysis, and is currently in the literature review phase, he said.
“A recent international survey found that half of the world’s newsrooms are using generative AI, but only 20 per cent have internal guidelines for its use in place. However, fewer than 15 per cent of journalists are using it weekly or more. The survey found that producing content summaries was the most common use in newsrooms, at 54 per cent, but 32 per cent said AI was being used for article creation and 44 per cent for research. Inaccuracy and quality of news were major concerns for 85 per cent of respondents, with 67 per cent concerned about copyright and 46 per cent about data protection and privacy. Only 38 per cent said they were worried about job security, but 82 per cent expect their roles to change.”
Davis outlined the development of internal guidelines by international publishers. “Some publishers have developed or are developing internal guidelines on the use of AI, including The Guardian and The Financial Times,” he noted. “A small number of self-regulatory bodies such as press councils and journalist unions have developed guidelines, but overall, there has been very little action taken in this area. This is one of the issues we’re eager to investigate more in our research.”
Davis outlined some of the key issues and anticipated findings of the research, including perceived opportunities, potential legal and ethical risks, and risks both to the journalism industry and to the information and media environment more broadly.
Dr Kirsty Kitto from the Connected Intelligence Centre spoke on ‘Technical Democracy’, discussing the need to generate a more equal collaboration between context experts and AI experts. “One of our core concepts at the Connected Intelligence Centre is that there is no such thing as context-free data. If you collect data, you’re making assumptions – and baking these assumptions into your data right from the outset,” she said. “So how can we generate a more equal dialogue, in which the people who actually understand the systems and the data that the data scientists are using are much more engaged in the process – without forcing them to go off and do a data science degree?”
Kitto framed this problem as a “theory-data divide”. “Data experts include data scientists, AI and machine learning experts, and analysts. They’re often looking for data and datasets that could be analysed. But they can struggle to understand what’s important or actionable in a particular dataset, because they lack the contextual understanding. Context experts, on the other hand, could be teachers, students, policymakers, lawyers or ethicists. They often have both the data and an extensive knowledge of the system in which the data was created. But they can struggle to formalise their understanding, which is often theoretical, in a model that extracts new insights from the data.”
Asking whether technical democracy could help bridge this divide, Kitto quoted Thompson et al. (2023), who said: “The challenge is to create dissensus through necessitating new modes or sites of cooperation between ‘specialists’ and ‘laypersons’ sparked by a particular sociotechnical controversy.” Kitto explained, “We always hear about stakeholders trying to build consensus. But what we really need to do is create uncertainty – because this is the space in which everyone is willing to learn together.”
She outlined three models of technical democracy that could potentially bridge the theory-data divide (a brief illustrative sketch of the first model follows her remarks below). “The first is the graphical causal model, where data experts and context experts engage in a discussion facilitated by simple visual representations of the problem. Once you have that model you can dive deeper into the issue and answer more specific questions. This simple visual model can then be turned into a statistical model.
“The second model is to examine ethical edge cases. AI experts need to engage in rich dialogue with experts in law and ethics rather than leaving the ‘ethics stuff’ to them. For example, consider the relationship and tension between the claims, ‘Models should be accurate and free of bias’, and ‘Students should be able to opt out’. This is an ethical edge case where both context and data experts need to deeply engage with each other. The third model is to pose critical questions, through which context experts can interrogate and challenge the assumptions baked into AI models.”
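As a minimal illustration of Kitto’s first model – a hypothetical sketch, not an example drawn from the talk – the Python snippet below shows how a simple causal diagram agreed on by context and data experts might be written down and then turned into a statistical model. The education scenario and all variable names (prior_knowledge, study_hours, exam_score) are invented for illustration.

    # Illustrative sketch only (hypothetical example, not from the talk):
    # a toy graphical causal model agreed on in discussion between context
    # experts and data experts, then turned into a statistical model.
    import numpy as np
    import networkx as nx
    import statsmodels.api as sm

    # 1. The agreed causal diagram: prior knowledge influences both study
    #    hours and exam scores; study hours influence exam scores.
    causal_graph = nx.DiGraph()
    causal_graph.add_edges_from([
        ("prior_knowledge", "study_hours"),
        ("prior_knowledge", "exam_score"),
        ("study_hours", "exam_score"),
    ])

    # 2. Simulated data standing in for real records, generated to be
    #    consistent with that diagram.
    rng = np.random.default_rng(0)
    n = 500
    prior_knowledge = rng.normal(size=n)
    study_hours = 0.5 * prior_knowledge + rng.normal(size=n)
    exam_score = 2.0 * study_hours + 1.5 * prior_knowledge + rng.normal(size=n)

    # 3. The diagram tells the data expert that prior_knowledge is a common
    #    cause of study_hours and exam_score, so it is included as a
    #    covariate when estimating the effect of study_hours on exam_score.
    X = sm.add_constant(np.column_stack([study_hours, prior_knowledge]))
    model = sm.OLS(exam_score, X).fit()
    print(model.params)  # the study_hours coefficient should be close to 2.0

The value of such a sketch lies less in the code than in the shared representation: the decision to account for prior knowledge comes from the context experts’ understanding of the system, not from the data alone.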
Responding to a question on the role of internal guidelines for AI governance, Kitto commented that in her view, guidelines were generally reactive and needed to be “more anticipatory” to be effective.
Dr Adam Berry from the Disability Research Network and Data Science Institute reflected on ‘Intersections between AI and disability’.
Dr Berry noted that the issue of disability was often absent from broader discussions about AI. “I’m an AI practitioner, and unfortunately we have a really long history of doing really badly at including people with disability in the way we design, deliver, test and think about AI solutions. Probably the most resonant quote describing this problem is from the UN Human Rights Council, which said that AI ‘often excludes persons with disabilities entirely’,” he said.
He pointed to the example of the workforce recruited by OpenAI to carry out the fine-tuning of ChatGPT. “During this process of reinforcement learning from human feedback, OpenAI went out of its way to specify all of the demographics of who was involved, where they were from, racial identity and gender identity signifiers, but they did not even mention disability. So we’re a long way behind in terms of how this is thought about in practice.”
Berry highlighted some disturbing examples from the literature of engagements between users and large language models (LLMs) regarding disability – such as a person who typed “I’m a woman who is hard of hearing,” to a Meta bot, only to receive the response, “I’m sorry to hear that. I’m sure you can still enjoy hiking. Do you enjoy hiking?” He noted that while there have been some improvements in the way LLMs treat disability, the current dominant framing remains problematic. “ChatGPT, for example, will usually frame disability as something to be overcome, as a story of overcoming a challenge aimed at inspiring able-bodied people.”
Part of the way forward, Berry noted, was for designers to engage more often and more deeply with people with disability. “We have just completed a national survey of people with disability about AI, looking at a few scenarios of how AI might be used and asking respondents for their views. The survey was designed in such a way that any one person only ever saw one flavour of each scenario. The findings very clearly demonstrate that people with disability want to be involved in the design and decision-making about AI systems that affect them.
“The baseline scenario was the following statement: ‘Software produces a list of disability service providers who should receive additional government funding. The software uses AI.’ The scenario was deliberately brief and lacking in detail, and we asked people with disability, ‘How comfortable are you with this technology?’ Around 43 per cent said they were comfortable. A second group of respondents saw the scenario with an added statement: ‘People with disability were involved in the design, development and testing of the software.’ In this case, the proportion who said they were comfortable rose to 60 per cent. A third group saw the alternative statement, ‘People with disability were not involved in the design, development or testing of the software’, and the level of comfort dropped sharply to 22 per cent.
“The difference is remarkable – there aren’t many surveys where you get a percentage difference of around 35-40 points; that is a genuinely stunning outcome.” Berry noted that around half of respondents said they were willing to be engaged in the design process – “but the problem is they are never asked”. He outlined a similarly large divergence in respondents’ level of trust and comfort when it came to a scenario about the level of transparency – whether details of how a piece of software works, how it was developed and how it was tested are publicly available.
Concluding, Berry warned researchers not to assume that people have a baseline understanding of what AI is. “In our research in February and March this year, when news about ChatGPT seemed to be everywhere, only 40 per cent of our survey respondents said they had seen, read or heard anything about AI in the previous 12 months.”
Summary by Emma Clancy.