Showing posts with label AI algorithms.

Monday, June 24, 2024

AI use must include ethical scrutiny; CT Mirror, June 24, 2024

 Josemari Feliciano, CT Mirror; AI use must include ethical scrutiny

"AI use may deal with data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.

The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or non-representative, the resulting algorithms can reinforce existing disparities."
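The mechanism described above is simple enough to demonstrate directly. Below is a minimal, illustrative sketch, not drawn from the op-ed: the group names, score distributions, and single-threshold "model" are invented assumptions. A decision rule fit to training data dominated by one group ends up noticeably less accurate for the under-represented group.

```python
# Illustrative only: shows how a model fit to unrepresentative training
# data produces unequal error rates. Groups, distributions, and the
# one-threshold "model" are fabricated assumptions, not from the article.
import random
import statistics

random.seed(0)

def sample(group, n):
    """Draw n (score, label) pairs; group B's scores sit at an offset."""
    offset = 0.0 if group == "A" else 1.5
    return [(random.gauss(2.0 * label + offset, 1.0), label)
            for label in random.choices([0, 1], k=n)]

# Training data over-represents group A: 95% A, 5% B.
train = sample("A", 950) + sample("B", 50)

# "Model": one decision threshold fit to the pooled training scores,
# which the majority group inevitably dominates.
threshold = statistics.mean(score for score, _ in train)

def accuracy(data):
    return statistics.mean(
        (score > threshold) == bool(label) for score, label in data)

for group in ("A", "B"):
    print(f"group {group}: accuracy {accuracy(sample(group, 2000)):.1%}")
# The threshold tuned to group A misfires far more often on group B,
# even though nothing in the code mentions groups at decision time.
```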

Friday, April 5, 2024

Assisted living managers say an algorithm prevented hiring enough staff; The Washington Post, April 1, 2024

The Washington Post; Assisted living managers say an algorithm prevented hiring enough staff

"Two decades ago, a group of senior-housing executives came up with a way to raise revenue and reduce costs at assisted-living homes. Using stopwatches, they timed caregivers performing various tasks, from making beds to changing soiled briefs, and fed the information into a program they began using to determine staffing.

Brookdale Senior Living, the leading operator of senior homes with 652 facilities, acquired the algorithm-based system and used it to set staffing at its properties across the nation. But as Brookdale’s empire grew, employees complained the system, known as “Service Alignment,” failed to capture the nuances of caring for vulnerable seniors, documents and interviews show."

Wednesday, January 10, 2024

Addressing equity and ethics in artificial intelligence; American Psychological Association, January 8, 2024

 Zara Abrams, American Psychological Association; Addressing equity and ethics in artificial intelligence

"As artificial intelligence (AI) rapidly permeates our world, researchers and policymakers are scrambling to stay one step ahead. What are the potential harms of these new tools—and how can they be avoided?

“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine who is part of a session at the upcoming 2024 Consumer Electronics Show (CES) on Harnessing the Power of AI Ethically.

Luxton and his colleagues dubbed recent AI advances “super-disruptive technology” because of their potential to profoundly alter society in unexpected ways. In addition to concerns about job displacement and manipulation, AI tools can cause unintended harm to individuals, relationships, and groups. Biased algorithms can promote discrimination or other forms of inaccurate decision-making that can cause systematic and potentially harmful errors; unequal access to AI can exacerbate inequality (Proceedings of the Stanford Existential Risk Conference 2023, 60–74). On the flip side, AI may also hold the potential to reduce unfairness in today’s world—if people can agree on what “fairness” means.

“There’s a lot of pushback against AI because it can promote bias, but humans have been promoting biases for a really long time,” said psychologist Rhoda Au, PhD, a professor of anatomy and neurobiology at the Boston University Chobanian & Avedisian School of Medicine who is also speaking at CES on harnessing AI ethically. “We can’t just be dismissive and say: ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”"

Tuesday, January 2, 2024

How the Federal Government Can Rein In A.I. in Law Enforcement; The New York Times, January 2, 2024

Joy Buolamwini et al., The New York Times; How the Federal Government Can Rein In A.I. in Law Enforcement

"One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights."

Thursday, December 14, 2023

Pope, once a victim of AI-generated imagery, calls for treaty to regulate artificial intelligence; AP, December 14, 2023

Nicole Winfield, AP; Pope, once a victim of AI-generated imagery, calls for treaty to regulate artificial intelligence

"On a more basic level, he warned about the profound repercussions on humanity of automated systems that rank citizens or categorize them. In addition to the threats to jobs around the world that can be done by robots, Francis noted that such technology could determine the reliability of an applicant for a mortgage, the right of a migrant to receive political asylum or the chance of reoffending by someone previously convicted of a crime.

“Algorithms must not be allowed to determine how we understand human rights, to set aside the essential human values of compassion, mercy and forgiveness, or to eliminate the possibility of an individual changing and leaving his or her past behind,” he wrote."

Friday, November 3, 2023

Joe Biden Wants US Government Algorithms Tested for Potential Harm Against Citizens; Wired, November 1, 2023

Wired; Joe Biden Wants US Government Algorithms Tested for Potential Harm Against Citizens

"The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.

Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for not taking steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.

“If the benefits do not meaningfully outweigh the risks, agencies should not use the AI,” the memo says. But the draft memo carves out an exemption for models that deal with national security and allows agencies to effectively issue themselves waivers if ending use of an AI model “would create an unacceptable impediment to critical agency operations.”"

Saturday, February 25, 2023

Science Fiction Magazines Battle a Flood of Chatbot-Generated Stories; The New York Times, February 23, 2023

Michael Levenson, The New York Times; Science Fiction Magazines Battle a Flood of Chatbot-Generated Stories

"Elaborating on his concerns in the interview, Mr. Clarke said that chatbot-generated fiction could raise ethical and legal questions, if it ever passed literary muster. He said he did not want to pay “for the work the algorithm did” on stories generated by someone who had entered prompts into an algorithm.

“Who owns that, technically?” Mr. Clarke said. “Right now, we’re still in the early days of this technology, and there are a lot of unanswered questions.”"

Friday, May 27, 2022

Accused of Cheating by an Algorithm, and a Professor She Had Never Met; The New York Times, May 27, 2022

Kashmir Hill, The New York Times; Accused of Cheating by an Algorithm, and a Professor She Had Never Met

An unsettling glimpse at the digitization of education.

"The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”"

Friday, April 29, 2022

LSU to Embed Ethics in the Development of New Technologies, Including AI; LSU Office of Research and Economic Development, April 2022

Elsa Hahne, LSU Office of Research and Economic Development; LSU to Embed Ethics in the Development of New Technologies, Including AI

"“If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them,” Goldgaber said. “Leaders don’t just do what they’re told—they make decisions with vision.”

The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls “ethics emergency services” when emerging capabilities have unintended consequences for specific groups of people.

“We can no longer rely on the traditional division of labor between STEM and the humanities, where it’s up to philosophers to worry about ethics,” Goldgaber said. “Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it’s not always right to ‘stay in your lane’ or ‘just do your job.’”"

Saturday, March 26, 2022

Even in the digital age, only human-made works are copyrightable in the U.S.; Lexology, March 21, 2022

Susan Kayser and Kristin Wells, K&L Gates LLP, Lexology; Even in the digital age, only human-made works are copyrightable in the U.S.

"The U.S. Copyright Office Review Board refused copyright protection of a two-dimensional artwork created by artificial intelligence, stating that “[c]urrently, ‘the Office will refuse to register a claim if it determines that a human being did not create the work,’” see recent letter. The Compendium of U.S. Copyright Office Practices does not explicitly address AI, but precedent, policy, and practice makes human authorship currently a prerequisite.

A “Creativity Machine” authored the work titled “A Recent Entrance into Paradise.” The applicant, Steven Thaler, an advocate for AI IP rights, named himself as the copyright claimant. Thaler’s application included a unique transfer statement: “ownership of the machine,” and further explained that the work “was autonomously created by a computer algorithm running on a machine.” Thaler sought to register the work as a work-for-hire because he owns the Creativity Machine.

AI’s “kill switch” at the U.S. Copyright Office? AI isn’t human. The Review Board relied on the Office’s compendium of practices and Supreme Court precedent dating back to 1879—long before computers were a concept—to hold that the U.S. Copyright Office will not register a claim if it determines that a human being did not create the work.

The Review Board also denied Thaler’s argument that the work made for hire doctrine allows non-human persons like companies to be authors of copyrighted material. The Board explained that works made for hire must be prepared by “an employee” or by “parties” who “expressly agree in a written instrument” that the work is for hire.

Because Thaler did not claim any human involvement in the work, the Board did not address under which circumstances human involvement in machine-created works might meet the statutory requirements for copyright protection. This is an issue that may soon arise."

Thursday, March 10, 2022

Report of the Pittsburgh Task Force on Public Algorithms; University of Pittsburgh, March 2022

University of Pittsburgh; Report of the Pittsburgh Task Force on Public Algorithms

David J. Hickton: Report for region: People must have voice, stake in algorithms; The Pittsburgh Post-Gazette, March 10, 2022

David J. Hickton, The Pittsburgh Post-Gazette; David J. Hickton: Report for region: People must have voice, stake in algorithms

"The institute that I lead — the University of Pittsburgh’s Institute for Cyber Law, Policy and Security, or simply Pitt Cyber — formed the Pittsburgh Task Force on Public Algorithms to do precisely that for our region.

We brought together a diverse group of experts and leaders from across the region and the country to study how our local governments are using algorithms and the state of public participation and oversight of these systems.

Our findings should be no surprise: Public algorithms are on the rise. And the openness of and public participation in the development and deployment of those systems varies considerably across local governments and agencies...

Our Task Force’s report — the product of our two-year effort — offers concrete recommendations to policymakers. For example, we encourage independent reviews and public involvement in the development of algorithmic systems commensurate with their risks: higher-risk systems, like those involved in decisions affecting liberty, require more public buy-in and examination."

Tuesday, February 15, 2022

What internet outrage reveals about race and TikTok's algorithm; NPR, February 14, 2022

Jess Kung, NPR; What internet outrage reveals about race and TikTok's algorithm

"The more our lives become intertwined and caught up in tech and social media algorithms, the more it's worth trying to understand and unpack just how those algorithms work. Who becomes viral, and why? Who gets harassed, who gets defended, and what are the lasting repercussions? And how does the internet both obscure and exacerbate the racial and gender dynamics that already animate so much of our social interactions?"

Friday, May 28, 2021

Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says; CNBC, May 26, 2021

Emily DeCiccio, CNBC; Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says

"Privacy laws in the U.S. need to be updated, especially after Google struck a deal with a major hospital chain, medical ethics expert Arthur Kaplan said Wednesday.

“Now we’ve got electronic medical records, huge volumes of data, and this is like asking a navigation system from a World War I airplane to navigate us up to the space shuttle,” Caplan, a professor at New York University’s Grossman School of Medicine, told “The News with Shepard Smith.” “We’ve got to update our privacy protection and our informed consent requirements.”

On Wednesday, Google’s cloud unit and hospital chain HCA Healthcare announced a deal that — according to The Wall Street Journal — gives Google access to patient records. The tech giant said it will use that to make algorithms to monitor patients and help doctors make better decisions."

Monday, August 24, 2020

Algorithms can drive inequality. Just look at Britain's school exam chaos; CNN, August 23, 2020

Zamira Rahim, CNN; Algorithms can drive inequality. Just look at Britain's school exam chaos

""Part of the problem is the data being fed in," Crider said.
"Historical data is being fed in [to algorithms] and they are replicating the [existing] bias."
Webb agrees. "A lot of [the issue] is about the data that the algorithm learns from," she said. "For example, a lot of facial recognition technology has come out ... the problem is, a lot of [those] systems were trained on a lot of white, male faces.
"So when the software comes to be used it's very good at recognizing white men, but not so good at recognizing women and people of color. And that comes from the data and the way the data was put into the algorithm."
Webb added that she believed the problems could partly be mitigated through "a greater attention to inclusivity in datasets" and a push to add a greater "multiplicity of voices" around the development of algorithms."

Tuesday, February 26, 2019

When Is Technology Too Dangerous to Release to the Public?; Slate, February 22, 2019

Aaron Mak, Slate; When Is Technology Too Dangerous to Release to the Public?

"The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms...

It’s worth considering, as OpenAI seems to be encouraging us to do, how researchers and society in general should approach powerful A.I. models...

Nevertheless, OpenAI said that it would only be publishing a “much smaller version” of the model due to concerns that it could be abused. The blog post fretted that it could be used to generate false news articles, impersonate people online, and generally flood the internet with spam and vitriol... 

“There’s a general philosophy that when the time has come for some scientific progress to happen, you really can’t stop it,” says [Robert] Frederking [the principal systems scientist at Carnegie Mellon’s Language Technologies Institute]. “You just need to figure out how you’re going to deal with it.”"

Tuesday, February 19, 2019

Drones and big data: the next frontier in the fight against wildlife extinction; The Guardian, February 18, 2019

The Guardian; Drones and big data: the next frontier in the fight against wildlife extinction

"Yet it’s not more widely used because few researchers have the skills to use this type of technology. In biology, where many people are starting to use drones, few can code an algorithm specifically for their conservation or research problem, Wich says. “There’s a lot that needs to be done to bridge those two worlds and to make the AI more user-friendly so that people who can’t code can still use the technology.”

The solutions are more support from tech companies, better teaching in universities to help students overcome their fears of coding, and finding ways to link technologies together in an internet-of-things concept where all the different sensors, including GPS, drones, cameras and sensors, work together."

Thursday, February 14, 2019

Parkland school turns to experimental surveillance software that can flag students as threats; The Washington Post, February 13, 2019

Drew Harwell, The Washington Post; Parkland school turns to experimental surveillance software that can flag students as threats

"The specter of student violence is pushing school leaders across the country to turn their campuses into surveillance testing grounds on the hope it’ll help them detect dangerous people they’d otherwise miss. The supporters and designers of Avigilon, the AI service bought for $1 billion last year by tech giant Motorola Solutions, say its security algorithms could spot risky behavior with superhuman speed and precision, potentially preventing another attack.

But the advanced monitoring technologies ensure that the daily lives of American schoolchildren are subjected to close scrutiny from systems that will automatically flag certain students as suspicious, potentially spurring a response from security or police forces, based on the work of algorithms that are hidden from public view.

The camera software has no proven track record for preventing school violence, some technology and civil liberties experts argue. And the testing of their algorithms for bias and accuracy — how confident the systems are in identifying possible threats — has largely been conducted by the companies themselves."
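The independent testing the experts call for can be quite simple in form. Here is a minimal, hedged sketch, not connected to Avigilon or any real product: given fabricated per-record outcomes, it compares false-positive rates across groups, the basic disparity that a vendor's single headline accuracy number can conceal.

```python
# Illustrative audit sketch: all records below are fabricated.
from collections import defaultdict

# Each record: (group, flagged_by_system, actually_a_threat)
records = [
    ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", True,  True),  ("group_b", False, False),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, flagged, threat in records:
    if not threat:  # only non-threats can be falsely flagged
        counts[group]["negatives"] += 1
        counts[group]["fp"] += flagged

for group, c in sorted(counts.items()):
    rate = c["fp"] / c["negatives"]
    print(f"{group}: false-positive rate {rate:.0%} "
          f"({c['fp']} of {c['negatives']} non-threats flagged)")
# A single pooled accuracy figure would hide the gap these
# per-group rates make visible.
```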

Friday, January 25, 2019

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values; The New Yorker, January 24, 2019

Caroline Lester, The New Yorker; A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values

"The U.S. government has clear guidelines for autonomous weapons—they can’t be programmed to make “kill decisions” on their own—but no formal opinion on the ethics of driverless cars. Germany is the only country that has devised such a framework; in 2017, a German government commission—headed by Udo Di Fabio, a former judge on the country’s highest constitutional court—released a report that suggested a number of guidelines for driverless vehicles. Among the report’s twenty propositions, one stands out: “In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited.” When I sent Di Fabio the Moral Machine data, he was unsurprised by the respondent’s prejudices. Philosophers and lawyers, he noted, often have very different understandings of ethical dilemmas than ordinary people do. This difference may irritate the specialists, he said, but “it should always make them think.” Still, Di Fabio believes that we shouldn’t capitulate to human biases when it comes to life-and-death decisions. “In Germany, people are very sensitive to such discussions,” he told me, by e-mail. “This has to do with a dark past that has divided people up and sorted them out.”

The decisions made by Germany will reverberate beyond its borders. Volkswagen sells more automobiles than any other company in the world. But that manufacturing power comes with a complicated moral responsibility. What should a company do if another country wants its vehicles to reflect different moral calculations? Should a Western car de-prioritize the young in an Eastern country? Shariff leans toward adjusting each model for the country where it’s meant to operate. Car manufacturers, he thinks, “should be sensitive to the cultural differences in the places they’re instituting these ethical decisions.” Otherwise, the algorithms they export might start looking like a form of moral colonialism. But Di Fabio worries about letting autocratic governments tinker with the code. He imagines a future in which China wants the cars to favor people who rank higher in its new social-credit system, which scores citizens based on their civic behavior."

Tuesday, December 11, 2018

Government Is Using Algorithms — Is It Assessing Bias?; Government Technology, December 10, 2018

Michaelle Bond, Government Technology; Government Is Using Algorithms — Is It Assessing Bias?

"“Data science is here to stay. It holds tremendous promise to improve things,” said Julia Stoyanovich, an assistant professor at New York University and former assistant professor in ethical data management at Drexel University. But policymakers need to use it responsibly.

“The first thing we need to teach people is to be skeptical about technology,” she said.

Data review boards, toolkits and software that cities, universities, and data analysts are starting to develop are steps in the right direction to spur policymakers to think critically about data, researchers said."