
Wednesday, May 17, 2023

Certificates… From a Philosophy Department; Inside Higher Ed, May 17, 2023

Ryan Quinn, Inside Higher Ed; Certificates… From a Philosophy Department

Pennsylvania’s Millersville University has begun offering ethics certificates. It’s among multiple philosophy departments that have shaken things up.

"He said the department wanted to help people understand philosophy’s relevancy “to whatever else they were doing.”

“What we ultimately decided was that the ethics angle was a clear way in which that was the case,” he said. “Our society today is kind of encountering a challenge in terms of the limits of our ability to think through the ethical issues in all of these various kinds of advancements that are taking place.”

Amy E. Ferrer, executive director of the American Philosophical Association, said in an email that her organization “is aware of philosophy programs naming and structuring their degrees, courses and concentrations in ways meant to draw the interest of students that might not have a clear understanding of what philosophy is.” She even provided her association’s own Department Advocacy Toolkit.

“Consider whether some of the traditional names of courses might be failing to attract students,” the guide says. “The appeal of a course on ‘epistemology,’ for instance, might be limited to students who are already ‘in the know’ about philosophy. It is worth considering whether a name change might attract a wider audience. Words like ‘information,’ ‘knowledge,’ ‘truth’ and ‘belief’—common topics in an epistemology course—might draw a student to read the course description more so than ‘epistemology.’”"

Monday, April 22, 2019

A New Model For AI Ethics In R&D; Forbes, March 27, 2019

Cansu Canca, Forbes; A New Model For AI Ethics In R&D

"The ethical framework that evolved for biomedical research—namely, the ethics oversight and compliance model—was developed in reaction to the horrors arising from biomedical research during World War II and which continued all the way into the ’70s.

In response, bioethics principles and ethics review boards guided by these principles were established to prevent unethical research. In the process, these boards were given a heavy hand to regulate research without checks and balances to control them. Despite deep theoretical weaknesses in its framework and massive practical problems in its implementation, this became the default ethics governance model, perhaps due to the lack of competition.

The framework now emerging for AI ethics resembles this model closely. In fact, the latest set of AI principles—drafted by AI4People and forming the basis for the Draft Ethics Guidelines of the European Commission’s High-Level Expert Group on AI—evaluates 47 proposed principles and condenses them into just five.

Four of these are exactly the same as traditional bioethics principles: respect for autonomy, beneficence, non-maleficence, and justice, as defined in the Belmont Report of 1979. There is just one new principle added—explicability. But even that is not really a principle itself, but rather a means of realizing the other principles. In other words, the emerging default model for AI ethics is a direct transplant of bioethics principles and ethics boards to AI ethics. Unfortunately, it leaves much to be desired for effective and meaningful integration of ethics into the field of AI."

Thursday, June 1, 2017

Rethinking Ethics Training in Silicon Valley; The Atlantic, May 26, 2017

Irina Raicu, The Atlantic; Rethinking Ethics Training in Silicon Valley

"I work at an ethics center in Silicon Valley.

I know, I know, “ethics” is not the first word that comes to mind when most people think of Silicon Valley or the tech industry. It’s probably not even in the top 10. But given the outsized role that tech companies now play, it’s time to focus on the ethical responsibilities of the technologists who help shape our lives.

In a recent talk, technologist Maciej Ceglowski argued that “[t]his year especially there’s an uncomfortable feeling in the tech industry that we did something wrong, that in following our credo of ‘move fast and break things,’ some of what we knocked down were the load-bearing walls of our democracy.”...

I work in an applied ethics center, and we do believe that technology can help democracy (we offer a free ethical-decision-making app, for example; we even offer a MOOC—a free online course—on ethical campaigning!). For it to do that, though, we need people who are ready to tackle the ethical questions—both within and outside of tech companies."

Tuesday, October 11, 2016

If War Can Have Ethics, Wall Street Can, Too; New York Times, October 3, 2016

Nathaniel B. Davis, New York Times; If War Can Have Ethics, Wall Street Can, Too

"To demand moral perfection or to succumb in the face of seeming futility is to turn our backs on what can be achieved by acknowledging both the ideal and the limits of reality. Applied ethics guide our interactions in the world as it exists while nudging us incrementally closer to the normative ideal and the world we seek to create.

War is inherently unjust, but the Just War Ethic has made it more just. The economy is not moral, but a foundational ethics of the economy could make it more moral. The product of such ethics would be decidedly imperfect, but it would be better than no ethics at all."