Showing posts with label AI ethics.

Friday, April 26, 2024

Op-Ed: AI’s Most Pressing Ethics Problem; Columbia Journalism Review, April 23, 2024

ANIKA COLLIER NAVAROLI, Columbia Journalism Review; Op-Ed: AI’s Most Pressing Ethics Problem

"I believe that, now more than ever, it’s time for people to organize and demand that AI companies pause their advance toward deploying more powerful systems and work to fix the technology’s current failures. While it may seem like a far-fetched idea, in February, Google decided to suspend its AI chatbot after it was enveloped in a public scandal. And just last month, in the wake of reporting about a rise in scams using the cloned voices of loved ones to solicit ransom, OpenAI announced it would not be releasing its new AI voice generator, citing its “potential for synthetic voice misuse.”

But I believe that society can’t just rely on the promises of American tech companies that have a history of putting profits and power above people. That’s why I argue that Congress needs to create an agency to regulate the industry. In the realm of AI, this agency should address potential harms by prohibiting the use of synthetic data and by requiring companies to audit and clean the original training data being used by their systems.

AI is now an omnipresent part of our lives. If we pause to fix the mistakes of the past and create new ethical guidelines and guardrails, it doesn’t have to become an existential threat to our future."

Wednesday, February 7, 2024

Act now on AI before it’s too late, says UNESCO’s AI lead; Fast Company, February 6, 2024

CHRIS STOKEL-WALKER, Fast Company; Act now on AI before it’s too late, says UNESCO’s AI lead

"Starting today, delegates are gathering in Slovenia at the second Global Forum on the Ethics of AI, organized by UNESCO, the United Nations’ educational, scientific, and cultural arm. The meeting is aimed at broadening the conversation around AI risks and the need to consider AI’s impacts beyond those discussed by first-world countries and business leaders.

Ahead of the conference, Gabriela Ramos, assistant director-general for social and human sciences at UNESCO, spoke with Fast Company...

Countries want to learn from each other. Ethics have become very important. Now there’s not a single conversation I go to that is not at some point referring to ethics—which was not the case one year ago...

Tech companies have previously said they can regulate themselves. Do you think they can with AI?

Let me just ask you something: Which sector has been regulating itself in life? Give me a break."

Wednesday, December 20, 2023

Recent cases raise questions about the ethics of using AI in the legal system; NPR, December 15, 2023

NPR; Recent cases raise questions about the ethics of using AI in the legal system

"NPR's Steve Inskeep asks the director of the Private Law Clinic at Yale University, Andrew Miller, about the ethics of using artificial intelligence in the legal system...

INSKEEP: To what extent does someone have to think about what a large language model produces? I'm thinking about the way that we as consumers are continually given these terms of service that we're supposedly going to read and click I accept, and of course we glance at it and click I accept. You have to do something more than that as a lawyer, don't you?

MILLER: You're exactly right. A professor colleague said to me, you know, when a doctor uses an MRI machine, the doctor doesn't necessarily know every technical detail of the MRI machine, right? And my response was, well, that's true, but the doctor knows enough about how the MRI works to have a sense of the sorts of things that would be picked up on an MRI, the sorts of things that wouldn't be picked up. With ChatGPT we don't have - at least not yet - a particularly well-developed understanding of how our inputs relate to the outputs."

Monday, December 4, 2023

Unmasking AI's Racism And Sexism; NPR, Fresh Air, November 28, 2023

NPR, Fresh Air; Unmasking AI's Racism And Sexism

"Computer scientist and AI expert Joy Buolamwini warns that facial recognition technology is riddled with the biases of its creators. She is the author of Unmasking AI and founder of the Algorithmic Justice League. She coined the term "coded gaze," a cousin to the "white gaze" or "male gaze." She says, "This is ... about who has the power to shape technology and whose preferences and priorities are baked in — as well as also, sometimes, whose prejudices are baked in.""

Saturday, September 23, 2023

NASA’s Mars Rovers Could Inspire a More Ethical Future for AI; The Conversation via Gizmodo, September 23, 2023

Janet Vertesi, The Conversation via Gizmodo; NASA’s Mars Rovers Could Inspire a More Ethical Future for AI

"In industries where AI could be used to replace workers, technology experts might consider how clever human-machine partnerships could enhance human capabilities instead of detracting from them.

Script-writing teams may appreciate an artificial agent that can look up dialog or cross-reference on the fly. Artists could write or curate their own algorithms to fuel creativity and retain credit for their work. Bots to support software teams might improve meeting communication and find errors that emerge from compiling code.

Of course, rejecting replacement does not eliminate all ethical concerns with AI. But many problems associated with human livelihood, agency and bias shift when replacement is no longer the goal.

The replacement fantasy is just one of many possible futures for AI and society. After all, no one would watch Star Wars if the droids replaced all the protagonists. For a more ethical vision of humans’ future with AI, you can look to the human-machine teams that are already alive and well, in space and on Earth."

Monday, September 18, 2023

With self-driving cars, it's the ethics we have to navigate; The Japan Times, September 17, 2023

PETER SINGER, The Japan Times; With self-driving cars, it's the ethics we have to navigate

"One important but often overlooked ethical issue raised by autonomous vehicles is whether they should be programmed to avoid hitting animals and, if so, which ones...How we should value the lives and interests of all sentient beings is a question that AI ethics needs to address."

Thursday, July 13, 2023

Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law; Forbes, July 10, 2023

Dr. Lance B. Eliot, Forbes; Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law

"What amount of due diligence is needed or required on the part of the user when it comes to generative AI and robots?

Nobody can as yet say for sure. Until we end up with legal cases and issues involving presumed harm, this is a gray area. For lawyers that want to get involved in AI and law, these are going to be an exciting and emerging set of legal challenges and legal puzzles that will undoubtedly arise as the use of generative AI becomes further ubiquitous and the advent of robots becomes affordable and practical in our daily lives.

You might also find of interest that some of the AI makers have contractual or licensing clauses that if you are using their generative AI and they get sued for something you did as a result of using their generative AI, you indemnify the AI maker and pledge to pay for their costs and expenses to fight the lawsuit, see my analysis at the link here. This could be daunting for you. Suppose that the house you were cooking in burns to the ground. The insurer sues the AI maker claiming that their generative AI was at fault. But, you agreed whether you know it or not to the indemnification clause, thus the AI maker comes to you and says you need to pay for their defense.

Ouch."

Friday, June 30, 2023

AI ethics toolkit updated to include more assessment components; ZDNet, June 27, 2023

Eileen Yu, ZDNet; AI ethics toolkit updated to include more assessment components

"A software toolkit has been updated to help financial institutions cover more areas in evaluating their "responsible" use of artificial intelligence (AI). 

First launched in February last year, the assessment toolkit focuses on four key principles around fairness, ethics, accountability, and transparency -- collectively called FEAT. It offers a checklist and methodologies for businesses in the financial sector to define the objectives of their AI and data analytics use and identify potential bias.

The toolkit was developed by a consortium led by the Monetary Authority of Singapore (MAS) that comprises 31 industry players, including Bank of China, BNY Mellon, Google Cloud, Microsoft, Goldman Sachs, Visa, OCBC Bank, Amazon Web Services, IBM, and Citibank."

Thursday, June 29, 2023

Tech leaders discuss A.I. ethics and regulation at Aspen Ideas Festival; NBC News, June 26, 2023

NBC News; Tech leaders discuss A.I. ethics and regulation at Aspen Ideas Festival

"Entrepreneur Eric Schmidt, professor Walter Isaacson, and MIT dean Daniel Huttenlocher discuss how to regulate A.I. while maximizing its positive influence. NBCUniversal News Group is the media partner of Aspen Ideas Festival."

Thursday, June 15, 2023

Korea issues first AI ethics checklist; The Korea Times, June 14, 2023

Lee Kyung-min, The Korea Times; Korea issues first AI ethics checklist

"The government has outlined the first national standard on how to use artificial intelligence (AI) ethically, in a move to bolster the emerging industry's sustainability and enhance its global presence, the industry ministry said Wednesday.

Korea Agency for Technology and Standards (KATS), an organization affiliated with the Ministry of Trade, Industry and Energy, issued a checklist of possible ethical issues and reviewed factors to be referenced and considered by service developers, providers and users.

The considerations specified for report and review include ethical issues arising in the process of collecting and processing data, the designing and development of AI, and the provision of such services to customers. 

The guidelines contain considerations such as transparency, fairness, harmlessness, responsibility, privacy protection, convenience, autonomy, reliability, sustainability and solidarity-enhancing qualities."

Tuesday, March 7, 2023

SAS' data ethics chief talks about keeping an ethical eye on AI; Axios, March 7, 2023

"The U.S. is at a crossroads when it comes to the future of artificial intelligence, as the technology takes dramatic leaps forward without much regulation in place, Reggie Townsend, director of SAS Institute's Data Ethics Practice, tells Axios.

Driving the news: Cary-based SAS is a giant in the world of data analytics, and the company and its customers are increasingly using AI to process data and make decisions. Townsend's role at the company puts him at the forefront of the conversation.

Why it matters: Artificial intelligence could soon impact nearly every aspect of our lives, from health care decisions to who gets loans."

Friday, March 3, 2023

A Moral Panic: ChatGPT and the Gamification of Education; Markkula Center for Applied Ethics at Santa Clara University, February 6, 2023

Susan Kennedy, Markkula Center for Applied Ethics at Santa Clara University; A Moral Panic: ChatGPT and the Gamification of Education

"Surprisingly, the panic over ChatGPT doesn’t actually seem to be about ChatGPT. It’s not all that impressive, nor is it significantly more effective than the “old ways” of cheating. Instead, the panic seems to be fueled by the expectation that students won’t be able to resist the temptation to use it and that cheating will become rampant. The release of ChatGPT is forcing educators to confront a much deeper issue that has been taking shape for quite some time; students who are becoming increasingly obsessed with grades, GPAs, and completing a degree, and who are willing to go to great, and sometimes unethical, lengths to achieve these things. 

This transformation that is taking place is best explained by the gamification of education. Gamification refers to the process of adding game-like elements, such as points, scores, rankings and badges, to make non-game activities more pleasurable. As philosopher C. Thi Nguyen has argued, part of what makes gamification so appealing is that it trades complexity for simplicity. Our values and goals become much clearer once we have quantified metrics for measuring our progress and success.

In education, gamification takes the form of metrics like exam scores, course grades, GPA, and the completion of a degree. Without these metrics in place, it would be difficult to know when one has made progress towards, or been successful in, their pursuit of the true values of education. After all, the values associated with a good education are diverse and complex, including personal transformation, the cultivation of skills, exposure to diverse worldviews, becoming a more informed citizen, etc. Gamification offers some relief from this complexity by providing unmistakable metrics for success.

The problem with gamification is that, over time, it can transform our values and the very nature of the activity such that we begin to lose sight of what really matters. When students enter college, they may be motivated by a meaningful set of values that can be realized in the context of education. For some students, their grades and GPA are just a useful means to measure their progress towards those goals. But for other students, their values wind up being replaced by these metrics such that “getting an A” or “graduating with a 4.0” becomes the end. 

For the students who get swept up by gamification, ChatGPT is unlikely to strike them as morally wrong or problematic. If a student no longer values education for its own sake, then there would seem to be nothing to lose by using ChatGPT. They won’t see it as cheating themselves out of an education, but merely an easy avenue for a passing grade in a course or completing a college degree. When framed this way, the panic over ChatGPT starts to make a lot more sense. Educators are afraid because they know that, despite their best efforts to adapt their assessments to promote learning outcomes in the face of ChatGPT, these efforts will fall short until they can loosen the grip that gamification has on their students."

Saturday, February 25, 2023

History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot; The New York Times, February 23, 2023

The New York Times; History May Wonder Why Microsoft Let Its Principles Go for a Creepy, Clingy Bot

"Microsoft’s “responsible A.I.” program started in 2017 with six principles by which it pledged to conduct business. Suddenly, it is on the precipice of violating all but one of those principles. (Though the company says it is still adhering to all six of them.)"

Wednesday, February 1, 2023

In AI arms race, ethics may be the first casualty; Axios, January 31, 2023

"As the tech world embraces ChatGPT and other generative AI programs, the industry's longstanding pledges to deploy AI responsibly could quickly be swamped by beat-the-competition pressures. 

Why it matters: Once again, tech's leaders are playing a game of "build fast and ask questions later" with a new technology that's likely to spark profound changes in society.

  • Social media started two decades ago with a similar rush to market. First came the excitement — later, the damage and regrets.

Catch up quick: While machine learning and related AI techniques hatched in labs over the last decade, scholars and critics sounded alarms about potential harms the technology could promote, including misinformation, bias, hate speech and harassment, loss of privacy and fraud...

Our thought bubble: The innovator's dilemma accurately maps how the tech business has worked for decades. But the AI debate is more than a business issue. The risks could be nation- or planet-wide, and humanity itself is the incumbent with much to lose."

Wednesday, January 25, 2023

Generative AI ChatGPT Is Going To Be Everywhere Once The API Portal Gets Soon Opened, Stupefying AI Ethics And AI Law; Forbes, January 22, 2023

Lance Eliot, Forbes; Generative AI ChatGPT Is Going To Be Everywhere Once The API Portal Gets Soon Opened, Stupefying AI Ethics And AI Law

"Some adamantly believe that this will be akin to letting loose the Kraken, namely that all kinds of bad things are going to arise. Others see this as making available a crucial resource that can boost tons of other apps by leveraging the grand capabilities of ChatGPT. It is either the worst of times or the best of times. We will herein consider both sides of the debate and you can decide for yourself which camp you land in.

Into all of this comes a slew of AI Ethics and AI Law considerations.

Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and erstwhile AI ethicists are trying to ensure that efforts to devise and adopt AI takes into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

There have been growing qualms that ChatGPT and other similar AI apps have an ugly underbelly that maybe we aren’t ready to handle. For example, you might have heard that students in schools are potentially able to cheat when it comes to writing assigned essays via using ChatGPT. The AI does all the writing for them. Meanwhile, the student is able to seemingly scot-free turn in the essay as though they did the writing from their own noggin. Not what we presumably want AI to do for humankind."

Sunday, January 22, 2023

AI experts on whether you should be "terrified" of ChatGPT; CBS News, January 22, 2023

DAVID POGUE, CBS News; AI experts on whether you should be "terrified" of ChatGPT

"Timnit Gebru, an AI researcher who specializes in ethics of artificial intelligence, said, "I think that we should be really terrified of this whole thing."

ChatGPT learned how to write by examining millions of pieces of writing on the Internet. Unfortunately, believe it or not, not everything on the internet is true! "It wasn't taught to understand what is fact, what is fiction, or anything like that," Gebru said. "It'll just sort of parrot back what was on the Internet.""

Monday, January 2, 2023

People Are Eagerly Consulting Generative AI ChatGPT For Mental Health Advice, Stressing Out AI Ethics And AI Law; Forbes, January 1, 2023

Lance Eliot, Forbes; People Are Eagerly Consulting Generative AI ChatGPT For Mental Health Advice, Stressing Out AI Ethics And AI Law

"The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI is able to spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.

That’s why there has been an uproar about students being able to cheat when writing essays outside of the classroom. A teacher cannot merely take the essay that deceitful students assert is their own writing and seek to find out whether it was copied from some other online source. Overall, there won’t be any definitive preexisting essay online that fits the AI-generated essay. All told, the teacher will have to begrudgingly accept that the student wrote the essay as an original piece of work. For ways that this might be combatted, see my detailed coverage at the link here."

Saturday, December 10, 2022

First Global Forum on Ethics of AI held in Prague, one year after the adoption of UNESCO’s global recommendation; UNESCO, To Be Held December 13, 2022

UNESCO; First Global Forum on Ethics of AI held in Prague, one year after the adoption of UNESCO’s global recommendation

The Global Forum on the Ethics of AI, hosted by the Czech Republic on 13 December 2022 in Prague, is the first international ministerial meeting to take place after the adoption of the global recommendation on the ethics of AI a year ago. The forum will place a spotlight on “ensuring inclusion in the AI world,” and take stock of the implementation of the recommendation so far. The event will be held under UNESCO’s patronage.

"Human Rights At Risk

While artificial intelligence (AI) is revolutionizing our lives, its benefits are not being distributed equitably across and within countries. Moreover, the technology continues to be developed in ways that raise risks related to human rights. They may also increase inequalities. While most countries are willing to take steps to minimize the risks associated with AI, many lack the regulatory capacity to do so. UNESCO seeks to bridge this gap by promoting a global and ethical approach to AI and offering guidance on regulatory measures and policies. The Recommendation is the first-ever global normative instrument in this domain, unanimously adopted by 193 Member States of UNESCO."

Monday, December 5, 2022

Can police use robots to kill? San Francisco voted yes.; The Washington Post, November 30, 2022

The Washington Post; Can police use robots to kill? San Francisco voted yes.

"Adam Bercovici, a law enforcement expert and former Los Angeles Police Department lieutenant, told The Post that while policies for robotic lethal force must be carefully written, they could be useful in rare situations. He referenced an active-shooter scenario like the one Dallas officers encountered.

“If I was in charge, and I had that capability, it wouldn’t be the first on my menu,” he said. “But it would be an option if things were really bad.”

Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, worried that San Francisco could instead end up setting a dangerous precedent.

“In my knowledge, this would be the first city to take this step of passing a law authorizing killer robots,” Cahn told The Post.

Cahn expressed concern that the legislation would lead other departments to push for similar provisions, or even to the development of more weaponized robots. In the aftermath of the school shooting in Uvalde, Tex., the police equipment company Axon announced plans to develop drones equipped with Tasers to incapacitate school shooters but canned the idea after nine members of the company’s artificial-intelligence ethics advisory board resigned in protest."

Sunday, November 20, 2022

The everyday ethics of AI

"The AI Act is a proposed European law on artificial intelligence. Though it has not yet taken effect, it’s the first such law on AI to be proposed by a major regulator anywhere, and it’s being studied in detail around the world because so many tech companies do extensive business in the EU.

The law assigns applications of AI to four risk categories, Powell said. First, there’s “minimal risk” – benign applications that don’t hurt people. Think AI-enabled video games or spam filters, for example, and understand that the EU proposal allows unlimited use of those applications.

Then there are “limited risk” systems such as chatbots, in which – the AI Act declares — the user must be made aware that they’re interacting with a machine. That would satisfy the EU’s goal that users decide for themselves whether to continue the interaction or step back.

“High risk” systems can cause real harm – and not only physical harm, as can happen in self-driving cars. These systems also can hurt employment prospects (by sorting resumes, for example, or by tracking productivity on a warehouse floor). They can deny credit or loans or the ability to cross an international border. And they can influence criminal-justice outcomes through AI-enhanced investigation and sentencing programs.

According to the EU, “any producer of this type of technology will have to give not just justifications for the technology and its potential harms, but also business justifications as to why the world needs this type of technology,” Powell said.

“This is the first time in history, as far as I know, that companies are held accountable to their products to this extent of having to explain the business logic of their code.”

Then there is the fourth level: “unacceptable risk.” And under the AI Act, all systems that pose a clear threat to the safety, livelihoods and rights of people will be banned, plain and simple."