Showing posts with label philosophers. Show all posts

Tuesday, August 27, 2024

EXAMINING THE WORKS OF C.S. LEWIS: CRITICAL THINKING AND ETHICS; United States Air Force Academy, August 26, 2024

Randy Roughton, U.S. Air Force Academy Strategic Communications, United States Air Force Academy; EXAMINING THE WORKS OF C.S. LEWIS: CRITICAL THINKING AND ETHICS

"Twentieth-century author C.S. Lewis’s books dominate the top shelf in Dr. Adam Pelser’s office. Pelser, who was recently recognized as an Inaugural Fellow of the Inklings Project, has used Lewis’ work to teach critical thinking skills and ethics in his Department of Philosophy course since 2018...

Reading with a critical eye

In Pelser’s course, cadets evaluate and discuss the philosophical arguments and themes in some of Lewis’s most influential non-fiction books and essays. They also observe how Lewis interacted with the philosophers and philosophies of his era, including the Oxford philosopher Elizabeth Anscombe, and the most noteworthy philosophers in history such as Aristotle, Plato, Immanuel Kant and David Hume.

Cadets read a series of Lewis books and learn to approach them with “a critical eye,” Pelser said. Like their professor, the cadets can raise their objections to Lewis’s arguments and study how the author interacted with his era’s other great thinkers...

Pelser has four goals for each course. First, he wants to deepen cadets’ understanding of the philosophical themes in Lewis’s writings. Second is a deeper understanding of the historical and contemporary philosophical influences on Lewis’s thought. The third goal is for cadets to learn to identify and summarize theses and arguments in philosophical texts. Finally, he wants each cadet to write and think through arguments carefully and clearly.

“A major critical thinking component is the dialogue in class when we push each other and challenge ideas,” Pelser said. “That is an important skill they learn in our course.”"

Thursday, August 1, 2024

What do corporations need to ethically implement AI? Turns out, a philosopher; Northeastern Global News, July 26, 2024

Northeastern Global News; What do corporations need to ethically implement AI? Turns out, a philosopher

"As the founder of the AI Ethics Lab, Canca maintains a team of “philosophers and computer scientists, and the goal is to help industry. That means corporations as well as startups, or organizations like law enforcement or hospitals, to develop and deploy AI systems responsibly and ethically,” she says.

Canca has also worked with organizations like the World Economic Forum and Interpol.

But what does “ethical” mean when it comes to AI? That, Canca says, is exactly the point.

“A lot of the companies come to us and say, ‘Here’s a model that we are planning to use. Is this fair?’” 

But, she notes, there are “different definitions of justice, distributive justice, different definitions of fairness. They conflict with each other. It is a big theoretical question. How do we define fairness?”

"Saying that ‘We optimized this for fairness,’ means absolutely nothing until you have a working,  proper definition” — which shifts from project to project, she also notes.

Now, Canca has been named one of Mozilla’s Rise25 honorees, which recognizes individuals “leading the next wave of AI — using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity,” the organization wrote in its announcement."

Tuesday, June 25, 2024

Collaborative ethics: innovating collaboration between ethicists and life scientists; Nature, June 20, 2024

Nature; Collaborative ethics: innovating collaboration between ethicists and life scientists

"Is there a place for ethics in scientific research, not about science or after scientific breakthroughs? We are convinced that there is, and we describe here our model for collaboration between scientists and ethicists.

Timely collaboration with ethicists benefits science, as it can make an essential contribution to the research process. In our view, such critical discussions can improve the efficiency and robustness of outcomes, particularly in groundbreaking or disruptive research. The discussion of ethical implications during the research process can also prepare a team for a formal ethics review and criticism after publication.

The practice of collaborative ethics also advances the humanities, as direct involvement with the sciences allows long-held assumptions and arguments to be put to the test. As philosophers and ethicists, we argue that innovative life sciences research requires new methods in ethics, as disruptive concepts and research outcomes no longer fit traditional notions and norms. Those methods should not be developed at a distance from the proverbial philosopher’s armchair or in after-the-fact ethics analysis. We argue that, rather, we should join scientists and meet where science evolves in real-time: as Knoppers and Chadwick put it in the early days of genomic science, “Ethical thinking will inevitably continue to evolve as the science does” [1]."

Saturday, June 22, 2024

Oxford University institute hosts AI ethics conference; Oxford Mail, June 21, 2024

Jacob Manuschka, Oxford Mail; Oxford University institute hosts AI ethics conference

"On June 20, 'The Lyceum Project: AI Ethics with Aristotle' explored the ethical regulation of AI.

This conference, set adjacent to the ancient site of Aristotle’s school, showcased some of the greatest philosophical minds and featured an address from Greek prime minister, Kyriakos Mitsotakis.

Professor John Tasioulas, director of the Institute for Ethics in AI, said: "The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI.

"We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges...

The conference was held in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research."

Wednesday, June 19, 2024

Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference; University of Oxford, In-Person Event on June 20, 2024

University of Oxford; Oxford Institute for Ethics in AI to host ground-breaking AI Ethics Conference

"The Oxford University Institute for Ethics in AI is hosting an exciting one day conference in Athens on the 20th of June 2024, The Lyceum Project: AI Ethics with Aristotle, in partnership with Stanford University and Demokritos, Greece's National Centre for Scientific Research...

Set in the cradle of philosophy, adjacent to the ancient site of Aristotle’s school, the conference will showcase some of the greatest philosophical minds and feature a special address from the Greek Prime Minister, Kyriakos Mitsotakis, as they discuss the most pressing question of our times – the ethical regulation of AI.

The conference will be free to attend (register to attend).

Professor John Tasioulas, Director of the Institute for Ethics in AI, said: ‘The Aristotelian approach to ethics, with its rich notion of human flourishing, has great potential to help us grapple with the urgent question of what it means to be human in the age of AI. We are excited to bring together philosophers, scientists, policymakers, and entrepreneurs in a day-long dialogue about how ancient wisdom can shed light on contemporary challenges.’

George Nounesis, Director & Chairman of the Board of NCSR Demokritos, said: ‘There is no such thing as ethically neutral AI; and high-quality research on AI cannot ignore its inherent ethical aspects. Ancient Greek philosophy can serve as a valuable resource guiding us in this discourse. In this respect, Aristotelian philosophy can play a pivotal role by nurturing ethical reasoning and a comprehensive understanding of the societal implications of AI, broadening the dialogue with society.’

Alexandra Mitsotaki, President of the World Human Forum, said: ‘This conference is an important first step towards our vision to bring Aristotle’s lyceum alive again by showing the relevance of the teachings of the great philosopher for today’s global challenges. We aspire for the Lyceum to become a global point of connection. This is, after all, the original location where the great philosopher thought, taught and developed many of the ideas that formed Western Civilisation.’"

Friday, April 29, 2022

LSU to Embed Ethics in the Development of New Technologies, Including AI; LSU Office of Research and Economic Development, April 2022

Elsa Hahne, LSU Office of Research and Economic Development; LSU to Embed Ethics in the Development of New Technologies, Including AI

"“If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them,” Goldgaber said. “Leaders don’t just do what they’re told—they make decisions with vision.”

The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls “ethics emergency services” when emerging capabilities have unintended consequences for specific groups of people.

“We can no longer rely on the traditional division of labor between STEM and the humanities, where it’s up to philosophers to worry about ethics,” Goldgaber said. “Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it’s not always right to ‘stay in your lane’ or ‘just do your job.’”"

Saturday, March 24, 2018

Driverless cars raise so many ethical questions. Here are just a few of them.; San Diego Union-Tribune, March 23, 2018

Lawrence M. Hinman, San Diego Union-Tribune; Driverless cars raise so many ethical questions. Here are just a few of them.

"Even more troubling will be the algorithms themselves, even if the engineering works flawlessly. How are we going to program autonomous vehicles when they are faced with a choice among competing evils? Should they be programmed to harm or kill the smallest number of people, swerving to avoid hitting two people but unavoidably hitting one? (This is the famous “trolley problem” that has vexed philosophers and moral psychologists for over half a century.)

Should your car be programmed to avoid crashing into a group of schoolchildren, even if that means driving you off the side of a cliff? Most of us would opt for maximizing the number of lives saved, except when one of those lives belongs to us or our loved ones.

These are questions that take us to the heart of the moral life in a technological society. They are already part of a lively and nuanced discussion among philosophers, engineers, policy makers and technologists. It is a conversation to which the larger public should be invited.

The ethics of dealing with autonomous systems will be a central issue of the coming decades."

Friday, March 2, 2018

Philosophers are building ethical algorithms to help control self-driving cars; Quartz, February 28, 2018

Olivia Goldhill, Quartz; Philosophers are building ethical algorithms to help control self-driving cars

"Artificial intelligence experts and roboticists aren’t the only ones working on the problem of autonomous vehicles. Philosophers are also paying close attention to the development of what, from their perspective, looks like a myriad of ethical quandaries on wheels.

The field has been particularly focused over the past few years on one particular philosophical problem posed by self-driving cars: They are a real-life enactment of a moral conundrum known as the Trolley Problem. In this classic scenario, a trolley is going down the tracks towards five people. You can pull a lever to redirect the trolley, but there is one person stuck on the only alternative track. The scenario exposes the moral tension between actively doing versus allowing harm: Is it morally acceptable to kill one to save five, or should you allow five to die rather than actively hurt one?"
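The purely utilitarian reading of the trolley problem described in the excerpt above can be sketched as a toy decision rule. This is a hypothetical illustration for discussion only, not code from any of the philosophers or vehicle systems mentioned; the function name and harm counts are invented, and the article's whole point is that "fewest harmed" is just one contested definition of the right choice:

```python
# Toy sketch of a utilitarian decision rule for the trolley problem.
# Each option maps to the number of people harmed if it is chosen;
# the rule simply picks the option that harms the fewest.

def choose_option(harms: dict[str, int]) -> str:
    """Return the option with the smallest harm count."""
    return min(harms, key=harms.get)

# Classic setup: let the trolley continue toward five people,
# or pull the lever and divert it toward one.
options = {"do_nothing": 5, "pull_lever": 1}
print(choose_option(options))  # a utilitarian rule picks "pull_lever"
```

The moral tension the excerpt describes is exactly what this sketch leaves out: the rule treats actively diverting harm and passively allowing it as interchangeable, which many ethical frameworks reject.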