Governing data and AI – Symposium panel
Researchers discuss AI governance in the corporate, news media and university sectors
The second panel at the UTS Humanising AI Futures Symposium held on 28 July – Governing data and AI – was chaired by Prof Derek Wilding from the Centre for Media Transition and Faculty of Law, and featured presentations from UTS scholars Prof Nicholas Davis (Human Technology Institute), Prof David Lindsay (Faculty of Law), and Prof Simon Buckingham Shum (Connected Intelligence Centre).
Introducing the panel, chair Prof Derek Wilding commented on the Symposium’s earlier discussion about whether internal guidelines were valuable as a tool to govern AI. “One of the really important aspects of developing guidelines is that they give us a common language to identify the issue or problem to be addressed. In an emerging policy area such as this, a common language is crucial. But organisations don’t necessarily take into account diversity, and the diversity of people’s experiences, in producing such guidelines. That’s something we can delve deeper into as researchers,” he said.
Prof Nicholas Davis spoke on ‘The corporate governance of AI systems’. He noted that UTS’s Human Technology Institute (HTI) was launched one year ago, and examines research questions at the intersection of law, policy and computer science. Davis outlined research findings on the corporate governance of AI published in May by the HTI, saying: “Almost every organisation in Australia – in fact, every company and government agency that I interviewed for this project, about 300 in total – relies on AI but only about 60 per cent of them say they do. There is a huge gap in understanding and awareness of what AI systems are actually being used, and consequently, of the risks associated with these uses.”
Davis noted that of the top five uses of AI in Australia, “most are in areas that touch people directly”. “The most common uses include customer service, marketing and sales, human resources, and product or service development. These systems are making really consequential decisions for humans,” he said.
“We’re seeing some significant emerging challenges around the conceptualisation, embodiment, development and use of AI systems. Government is increasingly rolling out the use of AI – in the case of Transport NSW, there are more than 70 active machine-learning systems facing the public. But despite this, most of these systems are managed and run by the private sector: in most cases, private companies are the vendors responsible for developing and deploying them. We’ve seen increasing calls for regulation, but most of this is completely disingenuous when you delve into it. These are not really calls for accountability but for some form of future risk management. So the HTI is focusing on examining internal policies, laws, obligations and regulation.”
Davis noted that there is little disagreement on core AI principles and values around the world. “The problem is, as Luke Munn [2022] has pointed out, in practice our ethical principles tend to be isolated, meaningless and toothless. In our research asking data science, compliance, legal and privacy teams, ‘how do the ethical principles around AI affect your lives?’, the answer is, ‘they don’t’. There’s no real embodiment of these principles in practice. Many representatives are supportive of implementing ethical values in practice, but the channels of communication, accountability and control do not yet exist.”
He cited the example of “one of Australia’s biggest investors in technology”, which provides services to millions of Australians. “This organisation spends millions of dollars each month on compliance systems and risk management systems. But the risks and reporting of its AI models are managed in an Excel spreadsheet – because the organisational maturity just isn’t there yet.”
He outlined the HTI’s efforts to categorise, expand and structure conversations about potential harms of AI systems. “We framed these harms in three main categories – AI system failures; malicious or misleading deployment of AI; and the overuse of AI, or its inappropriate or reckless use. One of the most important points arising from our research is that this conversation is not just about bias, security, or privacy rights – and we need to avoid developing tunnel vision. There are so many important points of failure in AI systems that demand the kind of engagement, collective learning, contextual data gathering and design that we’ve heard about at this Symposium.” See the HTI report here.
Prof David Lindsay (whose co-presenter Dr Evana Wright was unable to attend the Symposium) spoke on their research project, ‘Regulating use of generative AI by digital platforms: Implications for news media’.
Prof Lindsay began by noting that while the Internet transformed access to content, resulting in the rise of powerful gatekeeping platforms, generative AI transforms content creation – posing major challenges to laws protecting content, principally copyright law. Lindsay and Wright’s research focuses specifically on regulating platforms’ use of generative AI to deliver news content.
“In the shadow of the news media bargaining code – which effectively mandates payments for the use of news content from Google and Facebook to content providers, such as Nine and News Corp – the uncompensated use of generative AI to produce news content has been a particular focus of attention in Australia,” he said. “On the one hand, Google argues for greater flexibility in copyright law to promote AI, supporting an exception for text and data mining. On the other hand, Nine and News Corp support payments from businesses that use their news content to train generative AI systems, making an analogy with the news media bargaining code.
“So, our key research question was, ‘How should we regulate the use of generative AI by digital platforms to deliver news content so as to ensure the sustainability of accurate news production?’ We addressed the research problem through analysis of all relevant laws and policy responses, in Australia and elsewhere.
“We described and analysed how generative AI is being integrated into digital platforms. In this, there are important differences between the platforms. In February, Microsoft announced the incorporation of GPT-4 into the Bing search engine. Soon after, Google launched the Bard chatbot – but Google’s keyword advertising business depends upon payments for referrals to content sites, so it has no incentive to change its search business by incorporating a chat interface.
“We then analysed the legal and ethical issues involving rights in the input data that is used by generative AI to produce outputs, such as text or images. In the EU, the proposed AI Act adopted by the EU Parliament in June imposes two new obligations on providers of generative AI. Providers must publish a summary of the use of copyrighted material as input data, and ensure ‘adequate safeguards’ against their systems generating unlawful content, including copyrighted content.
“In the US, the Biden administration announced in July that seven platforms, including Amazon, Google, Meta, Microsoft and OpenAI had agreed to voluntary commitments to manage AI risks, including ‘provenance, watermarking, or both’. While this is aimed at improving transparency on when AI is used to generate content, it seems to fall short of requiring disclosure of input data.” Lindsay noted that in the US, copyright law is based on the fair use doctrine, and under EU law it depends on the application of exceptions for text and data mining.
Commenting on the initial findings of their research, Lindsay said: “To date, the problem that news content creators have faced with digital platforms is generally seen as a competition law problem, not something to be addressed by increasing copyright protection. But it’s too early to predict the precise business models that may be deployed by platforms to use generative AI to deliver news content. It seems to us that platforms’ use of generative AI to deliver news content can be conceptualised as both a subset of the broader copyright issue – the uncompensated and unconsented use of copyright content – and as a problem of undue concentration of advertising markets, involving ongoing siphoning of advertising away from news content providers.”
Noting the distinct features of news content creation, and the society-wide consequences of the ongoing erosion of news content, Lindsay argued that, “at least as an interim measure, there is a good case for establishing a mechanism for dealing with the problem of generative AI free-riding on news content by either (a) extending the news media bargaining code or (b) a new form of compulsory licensing”.
Prof Simon Buckingham Shum spoke on ‘Participatory governance and design of university AI’. Focusing on the two themes of human-centred participatory design and deliberative democracy, Prof Buckingham Shum outlined how universities could implement these models in practice when designing and deploying AI systems and data analytics. “Our research aims to address the questions, ‘How can we give our stakeholders a meaningful voice in shaping ethical AI?’ And UTS is our key site of investigation.
“One answer to this question is human-centred participatory design, which is a well-established field of research that has been active for many decades. We now know a lot about how you involve non-technical people in the design of software systems. We can do this here at UTS because we are building our own analytics and AI tools for teaching and learning. That is not the case, of course, if you are just buying a product and dropping it on your academics.
“A key method in human-centred design is to use high-touch, low-tech techniques – things like sticky notes, pens, paper, cardboard and other objects that allow non-technical people to play around with ideas. Another example of this human-centred design is an AI writing feedback tool that we’ve been developing at UTS since 2015, which has been shaped by the academics who work here. The academics sit down with us to help compose what become the automated feedback messages to students, and test classification thresholds for sentence types.”
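The threshold-testing step Buckingham Shum describes can be pictured with a brief sketch. The snippet below is purely illustrative – it is not the UTS tool – and every name, label and value in it is hypothetical. It simply shows how a confidence threshold over a sentence-type classifier’s scores might determine which academic-authored feedback message a student sees.

```python
# Illustrative sketch only (not the UTS tool): a confidence threshold over
# sentence-type classifier scores gates which academic-authored feedback
# message is shown. All names, labels and values are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SentencePrediction:
    text: str
    label: str         # e.g. "contrast", "emphasis", "background"
    confidence: float  # classifier score in [0, 1]

# Feedback messages an academic might author for each sentence type.
FEEDBACK_MESSAGES = {
    "contrast": "Good: you are contrasting positions. Why does the difference matter?",
    "emphasis": "You are signalling an important point. Is it backed by evidence?",
}

def feedback_for(pred: SentencePrediction, threshold: float = 0.7) -> Optional[str]:
    """Return a feedback message only when the classifier is confident enough.

    A higher threshold gives fewer, higher-precision messages; a lower one
    gives broader coverage but more false positives. That trade-off is what
    academics tune when testing classification thresholds for sentence types.
    """
    if pred.confidence >= threshold:
        return FEEDBACK_MESSAGES.get(pred.label)
    return None

if __name__ == "__main__":
    pred = SentencePrediction(
        text="However, other studies report the opposite effect.",
        label="contrast",
        confidence=0.82,
    )
    print(feedback_for(pred, threshold=0.7))
```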
Buckingham Shum explained how his team used this human-centred design in combination with deliberative democracy. “Deliberative democracy is both a political theory and a practical methodology. It’s a response to the manifold failings of our current democratic systems – failures reflected by research showing people believe they don’t have a real stake in the decision-making process, that they’re not being consulted properly, that the consultation that does happen is just tokenistic, and that there’s no accountability for implementation.” He noted that UTS has domain experts in deliberative democracy in the Institute for Sustainable Futures, which provides short courses in deliberative democracy.
“We wanted to use this model in the way we consulted with our students and staff about the kind of analytics and AI technologies we’re developing at UTS,” he said. “We used the deliberative democracy model effectively throughout our EdTech Ethics consultation held in 2021. It’s very different from just running a workshop – there are specific rules about how these models work, and there are expert organisations that come in and facilitate the process.
“Our first step in the consultation was to recruit a ‘deliberative mini-public’. This can’t be just the usual suspects – volunteers who have the confidence to speak up, or a particular advocacy axe to grind. We recruited 20 students and staff through stratified sampling to ensure accurate representation, considering factors including gender, whether English was a second language, academic discipline, domestic or international student status, and undergraduate or postgraduate status. The participants committed to learning from ‘expert witnesses’, and to respectful, reflective deliberation.
“We then held a series of carefully designed workshops, expertly facilitated, with lots of hands-on activities to help engage with ethical dilemmas. The workshops always had a deliverable – in this case, to propose a set of draft principles to govern the use of AI and analytics in our teaching and learning, with specific examples of what this could look like at UTS. The deliberative mini-public then presented their proposal to the university’s senior leadership who sponsored the process.” Read more about the UTS EdTech Ethics Deliberative Democracy 2021 consultation and report here.
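The recruitment of the ‘deliberative mini-public’ described above rests on stratified sampling. As a rough, hypothetical sketch (the actual UTS strata, proportions and process are not specified here), seats can be allocated to strata in proportion to their share of the candidate pool and then filled at random within each stratum:

```python
# Hypothetical sketch of stratified recruitment; the real UTS strata,
# proportions and procedure are not described here.
import random
from collections import defaultdict

random.seed(42)

# Made-up candidate pool, each person tagged with stratification factors
# of the kind mentioned in the consultation (gender, language background, level).
candidates = [
    {"name": f"person_{i}",
     "gender": random.choice(["woman", "man", "non-binary"]),
     "esl": random.choice([True, False]),
     "level": random.choice(["undergraduate", "postgraduate", "staff"])}
    for i in range(200)
]

def stratified_sample(pool, factors, n_total):
    """Allocate seats to strata in proportion to their size, then sample within each."""
    strata = defaultdict(list)
    for person in pool:
        key = tuple(person[f] for f in factors)
        strata[key].append(person)

    selected = []
    for members in strata.values():
        # Proportional allocation (rounding makes totals approximate in this sketch).
        seats = round(n_total * len(members) / len(pool))
        selected.extend(random.sample(members, min(seats, len(members))))
    return selected[:n_total]

panel = stratified_sample(candidates, factors=["gender", "esl", "level"], n_total=20)
print(len(panel), "participants recruited across",
      len({(p["gender"], p["esl"], p["level"]) for p in panel}), "strata")
```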
Buckingham Shum concluded by welcoming the university’s ongoing commitment to human-centred design and deliberative democracy in developing its AI policy. “This deliberative democracy process led to the publication in June of the UTS AI Operations Policy, which commits to a certain set of AI principles, plus the accompanying AI Operations Procedure, which sets out how we’re going to implement these principles.” The AI principles and procedures are to be governed by an AI Operations Board, which includes representatives from the Students Association. Student workshops on generative AI and predictive AI remain ongoing.
Summary by Emma Clancy.