From sylvie.magerstaedt at nd.edu.au  Tue Jul 13 15:36:05 2021
From: sylvie.magerstaedt at nd.edu.au (Sylvie Magerstaedt)
Date: Tue, 13 Jul 2021 05:36:05 +0000
Subject: [SydPhil] CFP extended for Screening Virtue Workshop
Message-ID: <1626154565466.3940@nd.edu.au>

As I'm aware that many of us have had to adjust to the renewed lockdown in recent weeks, we have extended the deadline for papers for the following workshop to Monday 19 July:

CFP: 'Screening Virtue - What can films and television shows teach us about virtues?'

This one-day workshop at The University of Notre Dame Australia in Sydney will explore the philosophical, cinematic and educational aspects of exploring virtue through fictional film and television programmes.

When: Friday, 27th August 2021 (AEST)
Where: University of Notre Dame Australia, Sydney Broadway Campus (exact location tbc), with all or part of the workshop also live-streamed via Zoom
Keynote Speaker: Prof Joseph Kupfer (Iowa State University) - via Zoom

Fictional films and television programs can entertain, uplift, distract and educate us, but could they also help to make us better people? This workshop is part of an ongoing research project that aims to bring moral philosophy together with film and television studies in order to demonstrate how fictional film and television can help us deepen and expand our understanding of the virtues and how they can be cultivated.

Topics for this workshop include (but are not limited to):

* Looking at individual virtues and the ways in which fictional screen media can offer us new insights
* Discussing what film and television can - and cannot - teach us about virtues
* Representations of vices and their correlating virtues on screen
* Investigating the potential applications of these fictional screen representations for character education

The workshop particularly aims to encourage a discussion of virtues across disciplines, and we welcome proposals from a range of fields, in particular philosophy, theology and religious studies, film and television studies, and education. This workshop is part of a collaborative project between researchers in the School of Arts and Sciences and the Institute for Ethics and Society at The University of Notre Dame Australia.

Short proposals (ca. 300 words) for 20-minute papers, to be presented either in person at the workshop or via Zoom, should be sent to Dr Sylvie Magerstädt (sylvie.magerstaedt at nd.edu.au) by 19 July 2021. If you have any queries about this workshop, please don't hesitate to get in touch. If you would like to attend without presenting a paper, you are most welcome. We will send out an event announcement later this month, but you are welcome to drop me a line to register your interest.

Best wishes
Sylvie

______________________
Dr Sylvie Magerstädt, FHEA
Senior Lecturer in Film and Media
School of Arts and Sciences
University of Notre Dame Australia, Sydney

Recent publication: 'TV Antiquity - Swords, sandals, blood and sand' (2019, Manchester University Press)

The University of Notre Dame Australia, Sydney acknowledges the original custodians of this land, the Cadigal people of the Eora nation, and pays our respects to their Elders past, present and future. For they hold the memories, the traditions, the culture and hopes of Aboriginal Australia.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From admin at aap.org.au  Wed Jul 14 10:11:32 2021
From: admin at aap.org.au (Aap Admin)
Date: Wed, 14 Jul 2021 10:11:32 +1000
Subject: [SydPhil] Public Lecture - Alan Saunders Lecture 2021
Message-ID:

This year's Alan Saunders Lecture, co-hosted with the ABC, will be presented by Professor Stephen Gardiner on Thursday 15th July, 10.00-11.30am AEST.

The topic of the lecture is 'Climate Crisis & Institutional Denialism: Is it Time for a Global Constitutional Convention for the Young & Other Future Generations?'

This is a free, live-streamed event; however, participants will need to register to obtain the Zoom link. For more information and to register: https://protect-au.mimecast.com/s/YnmZCp81lrtnl6AZ1FPYyjR?domain=aap.org.au

--
Australasian Association of Philosophy
www.aap.org.au
ABN 29 152 892 272

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From h.ikaheimo at unsw.edu.au  Wed Jul 14 14:14:46 2021
From: h.ikaheimo at unsw.edu.au (Heikki Ikaheimo)
Date: Wed, 14 Jul 2021 04:14:46 +0000
Subject: [SydPhil] Hannah Tierney (UC Davis): 'Don't Burst my Blame Bubble', UNSW Philosophy Seminar, 20th July 12:30-2pm (on Zoom)
In-Reply-To:
References:
Message-ID:

UNSW PHILOSOPHY SEMINAR SERIES 2021

Speaker: Hannah Tierney (UC Davis)

Title: Don't Burst My Blame Bubble

Abstract: Blame abounds in our everyday lives, perhaps nowhere more so than on social media. With the rise of social networking platforms, we have access to more information about others' blameworthy behaviour and larger audiences to whom we can express our blame. But these audiences, while large, are typically not diverse. Social media tends to create what I call 'blame bubbles': systems in which expressions of blame are shared amongst agents with similar moral outlooks while dissenting views are excluded. Many have criticised the blame expressed on social media, arguing that it is often unfitting, excessive, and counterproductive. In this talk, I'll argue that while blame bubbles can be guilty of these charges, they are also well placed to do important moral work. I'll then attempt to identify the causal source of these bad-making features and explore potential structural interventions that can make blame bubbles better at performing their moral function and less likely to generate harmful consequences.

Presenter: Hannah Tierney is Assistant Professor in the Department of Philosophy at the University of California, Davis. She writes mainly on issues of free will, moral responsibility, and personal identity. For more information about her work, see: https://protect-au.mimecast.com/s/MNTiCVARKgCx4o6xZSGr6bd?domain=philosophy.ucdavis.edu

20th July 2021, 12.30 pm - 2 pm
This event is free.
Click Here for Zoom Link

Enquiries: Heikki Ikäheimo, h.ikaheimo at unsw.edu.au
School of Humanities and Languages

Copyright © 2021 UNSW Sydney. All rights reserved. CRICOS Provider Code 00098G

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arts.cave at mq.edu.au  Thu Jul 15 09:48:03 2021
From: arts.cave at mq.edu.au (Centre for Agency, Values, and Ethics)
Date: Wed, 14 Jul 2021 23:48:03 +0000
Subject: [SydPhil] CAVE Online workshop: Applied AI ethics: ethical issues with AI at work, in healthcare, and beyond - 21 July
Message-ID:

Hi everyone,

Due to the current COVID situation in Sydney, this workshop has been moved online. It will now be held as a Zoom webinar. All welcome, but please register at https://protect-au.mimecast.com/s/tI5MC81V0PT6JGvJ7sn8Dxc?domain=page.mq.edu.au.

Applied AI Ethics: ethical issues with AI at work, in healthcare, and beyond

Artificially intelligent (AI) systems are increasingly being used in the domains of human resource management and healthcare to make important decisions, such as who gets hired for a job or diagnosed with skin cancer. Such decisions raise important ethical and political concerns around respectful and fair treatment. This one-day workshop, hosted by the Centre for Agency, Values and Ethics (CAVE) at Macquarie University, will explore these and related issues. The workshop will be of interest to anyone wishing to explore the ethical implications of the increasing use of AI in society.

Schedule:

9:30-9:35. Welcome - A/Prof Paul Formosa, Director of CAVE.
9:35-10:15. 'Machine intelligence in the workplace: a critical review of recent literature'. Prof Jean-Philippe Deranty and Thomas Corbin, Department of Philosophy, Macquarie University.
10:15-10:55. 'Slaves to the algorithms? Algocracy and Republican liberty'. Prof Robert Sparrow, Department of Philosophy, Monash University.
10:55-11:35. 'AI decision-making and dehumanisation: examining responses to the use of AI in workplace and healthcare contexts'. A/Prof Paul Formosa and Dr Sarah Bankins, Department of Philosophy and Macquarie Business School, Macquarie University.
LUNCH BREAK
1:00-1:40. 'Engaging empirically with the ethics of AI-enabled healthcare work'. Dr Yves Aquino & Prof Stacy Carter, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong.
1:40-2:20. 'AI and healthcare: future directions and challenges'. Prof Wendy A Rogers, Department of Philosophy and Department of Clinical Medicine, Macquarie University.
2:20-2:30. Concluding remarks.

Abstracts

'Machine intelligence in the workplace: a critical review of recent literature'
Prof Jean-Philippe Deranty and Thomas Corbin, Department of Philosophy, Macquarie University

We first present the bibliographical research we have conducted on issues of automation for the onwork.edu.au repository project. We then discuss the main issues identified by researchers, focusing on the methodological, social and political challenges raised by the spread of AI in different sectors of economic activity.
'Slaves to the algorithms? Algocracy and Republican liberty'
Prof Robert Sparrow, Department of Philosophy, Monash University

Increasingly, governments are relying on artificial intelligence to make, or inform, important decisions. The possibility of government by AI, which John Danaher has styled 'algocracy', raises hopes of realising utopian visions of efficient and frictionless government; it also evokes dystopian fears of technocracy and totalitarianism. In this paper I turn to republican political theory to evaluate the prospect of algocracy and, in particular, the threat it might pose to the liberty of citizens. I argue that republicanism implies that there are at least four different reasons to be concerned about government by AI. First, decisions made using AI will often be impossible for citizens to contest because the reasons for the decisions will be inscrutable, which, as John Danaher has pointed out, in turn calls into question the legitimacy of the decisions and of any government that relies too heavily on AI. Second, the inability of citizens to contest the outcomes of government decisions made using AI, and/or the justification for the use of AI, will render these arbitrary in an important sense and therefore inimical to liberty on a republican account. Third, overreliance on AI is likely to undermine important civic virtues that are necessary to the flourishing of the republic and thus to the defence of liberty. Fourth, AI is such a powerful technology that its use by government may grant states the power to suppress political dissent to such an extent as to essentially free governments from any fear of revolution. As a result, citizens will only be able to pursue their ends as long as the government chooses not to prevent them from doing so, which means that they will, according to republican theory, fundamentally lack liberty.

If we wish to benefit from the use of AI in government without sacrificing liberty, we must ensure that there exist institutional checks and balances, in the spirit of a mixed constitution, which ensure that decisions made by AI can be publicly contested so that they track the interests of citizens. We must also investigate ways to mitigate the impact of algocracy on the political culture of democratic societies and, in particular, on the willingness of citizens to challenge the decisions of algorithms. Finally, governments must resist the temptation to develop AI for applications that would grant them too much power over their citizens. These are difficult challenges and, in practice, I suggest, the use of AI in government poses a significant threat to liberty.

'AI decision-making and dehumanisation: examining responses to the use of AI in workplace and healthcare contexts'
A/Prof Paul Formosa and Dr Sarah Bankins, Department of Philosophy & Centre for Agency, Values and Ethics, Macquarie University, and Macquarie Business School

In this paper we outline two studies looking at the use of artificial intelligence (AI) decision-making in workplace human resource management and healthcare contexts. Using AI to make decisions raises questions of how fair people perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision-making in various workplace and healthcare scenarios and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals' experiences of interactional justice, trust, and dehumanisation, and on perceptions of decision-maker role appropriateness. We outline our results and their implications for theory, practice, and future research.
'Engaging empirically with the ethics of AI-enabled healthcare work'
Dr Yves Aquino & Prof Stacy Carter, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong

This presentation focuses on preliminary findings from an ongoing empirical study examining the perspectives of professional stakeholders working on artificial intelligence (AI) applications in diagnosis and screening. The project, funded by NHMRC 1181960, responds to recent calls for AI ethics to proceed in a case-based way, grounded in the social worlds where AI is and will be deployed. Our study design combines a range of empirical and theoretical methods, allowing us to work between the moral intuitions of implicated actors, the detail of AI technological possibilities, and existing theoretical approaches, and thus to produce new guidance for publics, decision-makers, and clinicians. For this presentation, we focus on our analysis of the varied epistemic and normative understandings of the impact of AI on healthcare work. First, our findings examine the diverse roles of AI in the clinical context, ranging from augmenting clinicians' skills to automating specific clinical tasks. Second, our analysis reveals conflicting stakeholder perspectives about clinical deskilling. One view frames deskilling as having a negative impact on both clinicians and patients. A competing view contends that deskilling is a positive, if not necessary, effect of upskilling, wherein medical AI takes over menial tasks and enables clinicians to perform the more complex and meaningful aspects of the clinical encounter. Finally, our analysis investigates the normative assumptions about essential and dispensable clinical skills that underpin the informants' views about the value of clinical deskilling due to AI-enabled healthcare work.

'AI and healthcare: future directions and challenges'
Prof Wendy A Rogers, Department of Philosophy and Department of Clinical Medicine, Macquarie University

Access to high-quality healthcare is foundational to people's wellbeing, but providing accessible, high-quality healthcare seems difficult to achieve. Spiralling costs, high rates of iatrogenic harm, inequitable access, regulatory gaps and workforce problems are ubiquitous. AI is now being introduced into this messy, complex arena, and the promissory 'hype' is intense. AI may have the potential to decrease clinician workloads by taking over mundane tasks, streamlining service delivery, unravelling organisational barriers and directing medical attention where it is most needed. These are significant promises that might address some of the current barriers to providing safe, effective, accessible healthcare. But these promises warrant examination. In this paper I explore the extent to which emergent healthcare AI matches the problems faced by healthcare systems. I argue that the current pathways to the development and deployment of healthcare AI require scrutiny and revision if AI is to make substantial contributions to healthcare.

All welcome!

Macquarie University Research Centre for Agency, Values and Ethics (CAVE)
Department of Philosophy
Macquarie University
Sydney, NSW 2109, Australia
CAVE website: mq.edu.au/cave
www.facebook.com/MQCAVE

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mark.alfano at gmail.com  Fri Jul 16 17:13:40 2021
From: mark.alfano at gmail.com (Mark Alfano)
Date: Fri, 16 Jul 2021 17:13:40 +1000
Subject: [SydPhil] 21-month postdoc opportunity in MQ: Social and virtue epistemology
Message-ID:

Dear Sydneysiders,

I hope you're well, despite our current lockdown conditions. If you know anyone who might be interested in the following position, please let them know about it: https://protect-au.mimecast.com/s/uuYhCyojxQTr9z2DMHZwfo1?domain=mq.wd3.myworkdayjobs.com

It's a 21-month postdoc at Macquarie to work with me and my team on social and virtue epistemology.

Best wishes,
Mark

--
Mark Alfano, Ph.D.
Associate Professor of Philosophy, Macquarie University
www.alfanophilosophy.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: