[SydPhil] CAVE Online workshop: Applied AI ethics: ethical issues with AI at work, in healthcare, and beyond - 21 July

Centre for Agency, Values, and Ethics arts.cave at mq.edu.au
Thu Jul 15 09:48:03 AEST 2021


Hi everyone,

Due to the current COVID situation in Sydney, this workshop has been moved online. It will now be held as a Zoom webinar. All welcome, but please register at https://protect-au.mimecast.com/s/tI5MC81V0PT6JGvJ7sn8Dxc?domain=page.mq.edu.au.


Applied AI Ethics: ethical issues with AI at work, in healthcare, and beyond



Artificial intelligence (AI) systems are increasingly being used in the domains of human resource management and healthcare to make important decisions, such as who gets hired for a job or who gets diagnosed with skin cancer. Such decisions raise important ethical and political concerns around respectful and fair treatment. This one-day workshop, hosted by the Centre for Agency, Values and Ethics (CAVE) at Macquarie University, will explore these and related issues. The workshop will be of interest to anyone exploring the ethical implications of the increasing use of AI in society.


Schedule:

9:30-9:35. Welcome - A/Prof Paul Formosa, Director of CAVE.



9:35-10:15. “Machine intelligence in the workplace: a critical review of recent literature”. Prof Jean-Philippe Deranty and Thomas Corbin, Department of Philosophy, Macquarie University



10:15-10:55. “Slaves to the algorithms? Algocracy and Republican liberty”. Prof Robert Sparrow, Department of Philosophy, Monash University



10:55-11:35. “AI decision-making and dehumanisation. Examining responses to the use of AI in workplace and healthcare contexts”. A/Prof Paul Formosa and Dr Sarah Bankins. Department of Philosophy and Macquarie Business School, Macquarie University.



LUNCH BREAK



1:00-1:40. “Engaging empirically with the ethics of AI-enabled healthcare work”. Dr Yves Aquino & Prof Stacy Carter, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong.



1:40-2:20. “AI and healthcare: future directions and challenges”. Prof Wendy A Rogers, Department of Philosophy and Department of Clinical Medicine, Macquarie University



2:20-2:30. Concluding remarks.


Abstracts



“Machine intelligence in the workplace: a critical review of recent literature”.

Prof Jean-Philippe Deranty and Thomas Corbin, Department of Philosophy, Macquarie University



We first present the bibliographical research we have conducted on issues of automation for the onwork.edu.au repository project. We then discuss the main issues identified by researchers, focusing on methodological, social and political challenges raised by the spread of AI in different sectors of economic activity.



"Slaves to the algorithms? Algocracy and republican liberty"

Prof Robert Sparrow, Department of Philosophy, Monash University



Increasingly, governments are relying on artificial intelligence to make, or inform, important decisions. The possibility of government by AI, which John Danaher has styled “algocracy”, raises hopes of realising utopian visions of efficient and frictionless government: it also evokes dystopian fears of technocracy and totalitarianism. In this paper I turn to republican political theory to evaluate the prospect of algocracy and, in particular, the threat it might pose to the liberty of citizens. I argue that republicanism implies that there are at least four different reasons to be concerned about government by AI. First, decisions made using AI will often be impossible for citizens to contest because the reasons for the decisions will be inscrutable, which, as John Danaher has pointed out, in turn calls into question the legitimacy of the decisions and of any government that relies too heavily on AI. Second, the inability of citizens to contest the outcomes of government decisions made using AI and/or the justification for the use of AI will render these arbitrary in an important sense and therefore inimical to liberty on a republican account. Third, overreliance on AI is likely to undermine important civic virtues that are necessary to the flourishing of the Republic and thus to the defence of liberty. Fourth, AI is such a powerful technology that its use by government may grant states the power to suppress political dissent to such an extent as to essentially free governments from any fear of revolution. As a result, citizens will only be able to pursue their ends as long as the government chooses not to prevent them from doing so, which means that they will, according to republican theory, fundamentally lack liberty. If we wish to benefit from the use of AI in government without sacrificing liberty, we must ensure that there exist institutional checks and balances, in the spirit of a mixed constitution, which ensure that decisions made by AI can be publicly contested in order to ensure that they track the interests of citizens. We must also investigate ways to mitigate the impact of algocracy on the political culture of democratic societies and, in particular, on the willingness of citizens to challenge the decisions of algorithms. Finally, governments must resist the temptation to develop AI for applications that would grant them too much power over their citizens. These are difficult challenges and in practice, I suggest, the use of AI in government poses a significant threat to liberty.



“AI decision-making and dehumanisation. Examining responses to the use of AI in workplace and healthcare contexts”

A/Prof Paul Formosa and Dr Sarah Bankins. Department of Philosophy & Centre for Agency, Values and Ethics, Macquarie University, and Macquarie Business School.



In this paper we outline two studies looking at the use of Artificial Intelligence (AI) decision-making in workplace human resource management and healthcare contexts. Using AI to make decisions raises questions of how fair people perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision-making in various workplace and healthcare scenarios and manipulate the decision-maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals’ experiences of interactional justice, trust, and dehumanisation, and perceptions of decision-maker role appropriateness. We outline our results and their implications for theory, practice, and future research.



“Engaging empirically with the ethics of AI-enabled healthcare work”

Dr Yves Aquino & Prof Stacy Carter, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong.



This presentation focuses on preliminary findings from an ongoing empirical study examining the perspectives of professional stakeholders working on artificial intelligence (AI) applications in diagnosis and screening. The project, funded by NHMRC 1181960, responds to recent calls for AI ethics to proceed in a case-based way, grounded in the social worlds where AI is and will be deployed. Our study design combines a range of empirical and theoretical methods, allowing us to work between the moral intuitions of implicated actors, the detail of AI technological possibilities, and existing theoretical approaches, and thus produce new guidance for publics, decision-makers, and clinicians. For this presentation, we focus on our analysis of the varied epistemic and normative understandings of the impact of AI on healthcare work. First, our findings examine the diverse roles of AI in the clinical context, ranging from augmenting clinicians’ skills to automating specific clinical tasks. Second, our analysis reveals conflicting stakeholder perspectives about clinical deskilling. One view frames deskilling as having a negative impact on both clinicians and patients. A competing view contends that deskilling is a positive—if not necessary—effect of upskilling, wherein medical AI takes over menial tasks and enables clinicians to perform the more complex and meaningful aspects of the clinical encounter. Finally, our analysis investigates the normative assumptions about essential and dispensable clinical skills that underpin the informants’ views about the value of clinical deskilling due to AI-enabled healthcare work.



"AI and healthcare: future directions and challenges"

Prof Wendy A Rogers, Department of Philosophy and Department of Clinical Medicine, Macquarie University



Access to high-quality healthcare is foundational to people’s wellbeing. But providing accessible, high-quality healthcare seems difficult to achieve. Spiralling costs, high rates of iatrogenic harm, inequitable access, regulatory gaps and workforce problems are ubiquitous. AI is now being introduced into this messy, complex arena. The promissory ‘hype’ is intense. AI may have the potential to decrease clinician loads by taking over mundane tasks, streamlining service delivery, unravelling organisational barriers and directing medical attention where it is most needed. These are significant promises that might address some of the current barriers to providing safe, effective, accessible healthcare. But these promises warrant examination. In this paper I explore the extent to which emergent healthcare AI matches the problems faced by healthcare systems. I argue that the current pathways to development and deployment of healthcare AI require scrutiny and revision if AI is to make substantial contributions to healthcare.

All welcome!



Macquarie University Research Centre for Agency, Values and Ethics (CAVE)
Department of Philosophy
Macquarie University
Sydney, NSW 2109, Australia
CAVE website: mq.edu.au/cave
www.facebook.com/MQCAVE
