
Monday, November 25, 2024

OpenAI’s funding into AI morality research: challenges and implications; The Economic Times, November 25, 2024

The Economic Times; OpenAI’s funding into AI morality research: challenges and implications

"OpenAI Inc has awarded Duke University researchers a grant for a project titled ‘Research AI Morality,’ the nonprofit revealed in a filing with the Internal Revenue Service (IRS), according to a TechCrunch report. This is part of a larger three-year, $1-million grant to Duke professors studying “making moral AI.”

The funding was granted to “develop algorithms that can predict human moral judgments in scenarios involving conflicts among morally relevant features in medicine, law and business,” the university said in a press release. Not much is known about this research except the fact that the funding ends in 2025."

Saturday, September 28, 2024

Pulling Back the Silicon Curtain; The New York Times, September 10, 2024

Dennis Duncan, The New York Times; Pulling Back the Silicon Curtain

Review of NEXUS: A Brief History of Information Networks From the Stone Age to AI, by Yuval Noah Harari

"In a nutshell, Harari’s thesis is that the difference between democracies and dictatorships lies in how they handle information...

The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.

The threats A.I. poses are not the ones that filmmakers visualize: Kubrick’s HAL trapping us in the airlock; a fascist RoboCop marching down the sidewalk. They are more insidious, harder to see coming, but potentially existential. They include the catastrophic polarizing of discourse when social media algorithms designed to monopolize our attention feed us extreme, hateful material. Or the outsourcing of human judgment — legal, financial or military decision-making — to an A.I. whose complexity becomes impenetrable to our own understanding.

Echoing Churchill, Harari warns of a “Silicon Curtain” descending between us and the algorithms we have created, shutting us out of our own conversations — how we want to act, or interact, or govern ourselves...

“When the tech giants set their hearts on designing better algorithms,” writes Harari, “they can usually do it.”...

Parts of “Nexus” are wise and bold. They remind us that democratic societies still have the facilities to prevent A.I.’s most dangerous excesses, and that it must not be left to tech companies and their billionaire owners to regulate themselves."

Friday, August 30, 2024

Amy Klobuchar Wants to Stop Algorithms From Ripping You Off; The New York Times, August 30, 2024

The New York Times; Amy Klobuchar Wants to Stop Algorithms From Ripping You Off

"This week I interviewed Senator Amy Klobuchar, Democrat of Minnesota, about her Preventing Algorithmic Collusion Act. If you don’t know what algorithmic collusion is, it’s time to get educated, because you could be its next victim.

Algorithmic collusion is where companies illegally coordinate to raise prices through the use of an algorithm that they supply their data to. There is no explicit or even wink-and-a-nod agreement among the competitors, the usual standard for collusion. Instead, each company has its own contract with the algorithm provider. That provider uses the companies’ data to make pricing recommendations that make them all richer — at the expense of their customers.

Algorithmic collusion made the headlines this month when Vice President Kamala Harris vowed to crack down, should she be elected, on “corporate landlords” that use price-setting software to jack up rents. Last week, the Justice Department sued RealPage, charging that the company, which uses an algorithm powered by artificial intelligence to help landlords set rental rates, referred to its products as “driving every possible opportunity to increase price.”"
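To make the mechanism concrete, here is a deliberately simplified sketch, in Python, of how a single shared pricing service could produce coordinated price increases without any direct agreement among competitors. It is hypothetical and illustrative only; it is not RealPage's software or any real vendor's system, and every name in it is invented.

```python
# Hypothetical sketch of the dynamic described above -- NOT RealPage's
# actual system. Competitors never talk to each other; each one only
# has a contract with the same pricing provider. Because the provider
# pools everyone's confidential data and recommends against the pooled
# average, no client is advised to undercut, and prices ratchet upward.

from statistics import mean

class SharedPricingProvider:
    """Toy pricing service that every competitor feeds data to."""

    def __init__(self):
        self.pooled_rents = []  # confidential rents from all clients

    def submit(self, current_rent):
        self.pooled_rents.append(current_rent)

    def recommend(self, current_rent):
        # Recommend at least the pooled market average, plus a nudge.
        # Since all clients follow the same signal, nobody undercuts.
        market_average = mean(self.pooled_rents)
        return round(max(current_rent, market_average) * 1.02, 2)

provider = SharedPricingProvider()
competitor_rents = [1900.0, 2000.0, 2100.0]  # three rival landlords

for rent in competitor_rents:
    provider.submit(rent)

print([provider.recommend(r) for r in competitor_rents])
# [2040.0, 2040.0, 2142.0] -- every recommendation sits at or above the
# pooled average, so the "market" rises without any explicit agreement.
```

The point of the sketch is the data flow, not the arithmetic: each client's contract looks innocent in isolation, and the coordination lives entirely inside the shared recommendation function.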

Friday, August 23, 2024

U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents; The New York Times, August 23, 2024

Danielle Kaye and Lauren Hirsch, The New York Times; U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents

"The Justice Department filed an antitrust lawsuit on Friday against the real estate software company RealPage, alleging its software enabled landlords to collude to raise rents across the United States.

The suit, joined by North Carolina, California, Colorado, Connecticut, Minnesota, Oregon, Tennessee and Washington, accuses RealPage of facilitating a price-fixing conspiracy that boosted rents beyond market forces for millions of people. It’s the first major civil antitrust lawsuit where the role of an algorithm in pricing manipulation is central to the case, Justice Department officials said."

Monday, July 22, 2024

Landlords Used Software to Set Rents. Then Came the Lawsuits.; The New York Times, July 19, 2024

Danielle Kaye, The New York Times; Landlords Used Software to Set Rents. Then Came the Lawsuits.

"The use of the RealPage software in setting rents was the subject of a ProPublica investigation in 2022. Antitrust experts say the allegations in the lawsuits, if substantiated, paint a clear-cut picture of violations of federal antitrust law, which prohibits agreements among competitors to fix prices.

“There’s an emerging view that these exchanges of confidential business information raise significant competitive concerns,” said Peter Carstensen, an emeritus professor at the University of Wisconsin focused on antitrust law and competition policy. The use of algorithmic software, he added, “speeds up the coordination and makes it possible to coordinate many more players with really good information.”"

Thursday, July 18, 2024

An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.; The New York Times, July 18, 2024

Adam Satariano and Roser Toll Pifarré (photographs by Ana María Arévalo Gosen), The New York Times; An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.

"Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins. At its best, the system has helped police protect vulnerable women and, overall, has reduced the number of repeat attacks in domestic violence cases. But the reliance on VioGén has also resulted in victims, whose risk levels are miscalculated, getting attacked again — sometimes leading to fatal consequences."

Wednesday, July 17, 2024

How Creators Are Facing Hateful Comments Head-On; The New York Times, July 11, 2024

Melina Delkic, The New York Times; How Creators Are Facing Hateful Comments Head-On

"Experts in online behavior also say that the best approach is usually to ignore nasty comments, as hard as that may be.

“I think it’s helpful for people to keep in mind that hateful comments they see are typically posted by people who are the most extreme users,” said William Brady, an assistant professor at Northwestern University, whose research team studied online outrage by looking at 13 million tweets. He added that the instinct to “punish” someone can backfire.

“Giving a toxic user any engagement (view, like, share, comment) ironically can make their content more visible,” he wrote in an email. “For example, when people retweet toxic content in order to comment on it, they are actually increasing the visibility of the content they intend to criticize. But if it is ignored, algorithms are unlikely to pick them up and artificially spread them further.”"
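Brady's point is easy to see in a toy model. The sketch below is hypothetical (no real platform publishes its ranking function): it scores posts purely on engagement volume, which is all a typical feed algorithm observes, so a critical share or comment counts the same as an approving one.

```python
# Toy engagement-based feed ranking -- a hypothetical illustration of
# Brady's point, not any real platform's algorithm. The scorer sees
# only engagement counts; it cannot tell outrage from approval, so
# "dunking on" a toxic post raises its score just like praising it.

def rank_score(views: int, likes: int, shares: int, comments: int) -> int:
    """All interactions are positive ranking signal; intent is invisible."""
    return views + 2 * likes + 3 * shares + 3 * comments

# The same toxic post, ignored vs. widely quote-shared by critics:
ignored   = rank_score(views=500, likes=10, shares=0, comments=0)
dunked_on = rank_score(views=500, likes=10, shares=80, comments=120)

print(ignored)    # 520
print(dunked_on)  # 1120 -- critics' engagement more than doubled its score
```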

Tuesday, July 16, 2024

Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems; American Bar Association (ABA), June 3, 2024

Gary Rhoades, American Bar Association (ABA); Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems

"The Civil Rights Act of 1968, also known as the Fair Housing Act (FHA), banned housing discrimination nationwide on the basis of race, religion, national origin, and color. One key finding that persuaded Dr. Martin Luther King Jr., President Lyndon Johnson, and others to fight for years for the passage of this landmark law confirmed that many Americans were being denied rental housing because of their race. Black families were especially impacted by the discriminatory rejections. They were forced to move on and spend more time and money to find housing and often had to settle for substandard housing in unsafe neighborhoods and poor school districts to avoid homelessness.

April 2024 marked the 56th year of the FHA’s attempt to end such unfair treatment. Despite the law’s broadly stated protections, its numerous state and local counterparts, and decades of enforcement, landlords’ use of high-tech algorithms for tenant screening threatens to erase the progress made. While employing algorithms to mine data such as criminal records, credit reports, and civil court records to make predictions about prospective tenants might partially remove the fallible human element, old and new biases, especially regarding race and source of income, still plague the screening results."

Monday, June 24, 2024

AI use must include ethical scrutiny; CT Mirror, June 24, 2024

Josemari Feliciano, CT Mirror; AI use must include ethical scrutiny

"AI use may deal with data that are deeply intertwined with personal and societal dimensions. The potential for AI to impact societal structures, influence public policy, and reshape economies is immense. This power carries with it an obligation to prevent harm and ensure fairness, necessitating a formal and transparent review process akin to that overseen by IRBs.

The use of AI without meticulous scrutiny of the training data and study parameters can inadvertently perpetuate or exacerbate harm to minority groups. If the data used to train AI systems is biased or non-representative, the resulting algorithms can reinforce existing disparities."

Friday, April 5, 2024

Assisted living managers say an algorithm prevented hiring enough staff; The Washington Post, April 1, 2024

The Washington Post; Assisted living managers say an algorithm prevented hiring enough staff

"Two decades ago, a group of senior-housing executives came up with a way to raise revenue and reduce costs at assisted-living homes. Using stopwatches, they timed caregivers performing various tasks, from making beds to changing soiled briefs, and fed the information into a program they began using to determine staffing.

Brookdale Senior Living, the leading operator of senior homes with 652 facilities, acquired the algorithm-based system and used it to set staffing at its properties across the nation. But as Brookdale’s empire grew, employees complained the system, known as “Service Alignment,” failed to capture the nuances of caring for vulnerable seniors, documents and interviews show."

Wednesday, January 10, 2024

Addressing equity and ethics in artificial intelligence; American Psychological Association, January 8, 2024

Zara Abrams, American Psychological Association; Addressing equity and ethics in artificial intelligence

"As artificial intelligence (AI) rapidly permeates our world, researchers and policymakers are scrambling to stay one step ahead. What are the potential harms of these new tools—and how can they be avoided?

“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine who is part of a session at the upcoming 2024 Consumer Electronics Show (CES) on Harnessing the Power of AI Ethically.

Luxton and his colleagues dubbed recent AI advances “super-disruptive technology” because of their potential to profoundly alter society in unexpected ways. In addition to concerns about job displacement and manipulation, AI tools can cause unintended harm to individuals, relationships, and groups. Biased algorithms can promote discrimination or other forms of inaccurate decision-making that can cause systematic and potentially harmful errors; unequal access to AI can exacerbate inequality (Proceedings of the Stanford Existential Risk Conference 2023, 60–74). On the flip side, AI may also hold the potential to reduce unfairness in today’s world—if people can agree on what “fairness” means.

“There’s a lot of pushback against AI because it can promote bias, but humans have been promoting biases for a really long time,” said psychologist Rhoda Au, PhD, a professor of anatomy and neurobiology at the Boston University Chobanian & Avedisian School of Medicine who is also speaking at CES on harnessing AI ethically. “We can’t just be dismissive and say: ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”"

Tuesday, January 2, 2024

How the Federal Government Can Rein In A.I. in Law Enforcement; The New York Times, January 2, 2024

Joy Buolamwini, The New York Times; How the Federal Government Can Rein In A.I. in Law Enforcement

"One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights."

Thursday, December 14, 2023

Pope, once a victim of AI-generated imagery, calls for treaty to regulate artificial intelligence; AP, December 14, 2023

Nicole Winfield, AP; Pope, once a victim of AI-generated imagery, calls for treaty to regulate artificial intelligence

"On a more basic level, he warned about the profound repercussions on humanity of automated systems that rank citizens or categorize them. In addition to the threats to jobs around the world that can be done by robots, Francis noted that such technology could determine the reliability of an applicant for a mortgage, the right of a migrant to receive political asylum or the chance of reoffending by someone previously convicted of a crime.

“Algorithms must not be allowed to determine how we understand human rights, to set aside the essential human values of compassion, mercy and forgiveness, or to eliminate the possibility of an individual changing and leaving his or her past behind,” he wrote."

Friday, November 3, 2023

Joe Biden Wants US Government Algorithms Tested for Potential Harm Against Citizens; Wired, November 1, 2023

Wired; Joe Biden Wants US Government Algorithms Tested for Potential Harm Against Citizens

"The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.

Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for not taking steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.

“If the benefits do not meaningfully outweigh the risks, agencies should not use the AI,” the memo says. But the draft memo carves out an exemption for models that deal with national security and allows agencies to effectively issue themselves waivers if ending use of an AI model “would create an unacceptable impediment to critical agency operations.”"

Saturday, February 25, 2023

Science Fiction Magazines Battle a Flood of Chatbot-Generated Stories; The New York Times, February 23, 2023

Michael Levenson, The New York Times; Science Fiction Magazines Battle a Flood of Chatbot-Generated Stories

"Elaborating on his concerns in the interview, Mr. Clarke said that chatbot-generated fiction could raise ethical and legal questions, if it ever passed literary muster. He said he did not want to pay “for the work the algorithm did” on stories generated by someone who had entered prompts into an algorithm.

“Who owns that, technically?” Mr. Clarke said. “Right now, we’re still in the early days of this technology, and there are a lot of unanswered questions.”"

Friday, May 27, 2022

Accused of Cheating by an Algorithm, and a Professor She Had Never Met; The New York Times, May 27, 2022

Kashmir Hill, The New York Times; Accused of Cheating by an Algorithm, and a Professor She Had Never Met

An unsettling glimpse at the digitization of education.

"The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”"

Friday, April 29, 2022

LSU to Embed Ethics in the Development of New Technologies, Including AI; LSU Office of Research and Economic Development, April 2022

Elsa Hahne, LSU Office of Research and Economic Development; LSU to Embed Ethics in the Development of New Technologies, Including AI

"“If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them,” Goldgaber said. “Leaders don’t just do what they’re told—they make decisions with vision.”

The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls “ethics emergency services” when emerging capabilities have unintended consequences for specific groups of people.

“We can no longer rely on the traditional division of labor between STEM and the humanities, where it’s up to philosophers to worry about ethics,” Goldgaber said. “Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it’s not always right to ‘stay in your lane’ or ‘just do your job.’”"

Saturday, March 26, 2022

Even in the digital age, only human-made works are copyrightable in the U.S.; Lexology, March 21, 2022

K&L Gates LLP - Susan Kayser and Kristin Wells, Lexology; Even in the digital age, only human-made works are copyrightable in the U.S.

"The U.S. Copyright Office Review Board refused copyright protection of a two-dimensional artwork created by artificial intelligence, stating that “[c]urrently, ‘the Office will refuse to register a claim if it determines that a human being did not create the work,’” see recent letter. The Compendium of U.S. Copyright Office Practices does not explicitly address AI, but precedent, policy, and practice makes human authorship currently a prerequisite.

A “Creativity Machine,” authored the work titled “A Recent Entrance into Paradise.” The applicant, Steven Thaler, an advocate for AI IP rights, named himself as the copyright claimant. Thaler’s application included a unique transfer statement: “ownership of the machine,” and further explained that the work “was autonomously created by a computer algorithm running on a machine.” Thaler sought to register the work as a work-for-hire because he owns the Creativity Machine.

AI’s “kill switch” at the U.S. Copyright Office? AI isn’t human. The Review Board relied on the Office’s compendium of practices and Supreme Court precedent dating back to 1879—long before computers were a concept—to hold that the U.S. Copyright Office will not register a claim if it determines that a human being did not create the work.

The Review Board also denied Thaler’s argument that the work made for hire doctrine allows non-human persons like companies to be authors of copyrighted material. The Board explained that works made for hire must be prepared by “an employee” or by “parties” who “expressly agree in a written instrument” that the work is for hire.

Because Thaler did not claim any human involvement in the work, the Board did not address under which circumstances human involvement in machine-created works might meet the statutory requirements for copyright protection. This is an issue that may soon arise."

Thursday, March 10, 2022

Report of the Pittsburgh Task Force on Public Algorithms; University of Pittsburgh, March 2022

University of Pittsburgh; Report of the Pittsburgh Task Force on Public Algorithms

David J. Hickton: Report for region: People must have voice, stake in algorithms; The Pittsburgh Post-Gazette, March 10, 2022

David J. Hickton, The Pittsburgh Post-Gazette; David J. Hickton: Report for region: People must have voice, stake in algorithms

"The institute that I lead — the University of Pittsburgh’s Institute for Cyber Law, Policy and Security, or simply Pitt Cyber — formed the Pittsburgh Task Force on Public Algorithms to do precisely that for our region.

We brought together a diverse group of experts and leaders from across the region and the country to study how our local governments are using algorithms and the state of public participation and oversight of these systems.

Our findings should be no surprise: Public algorithms are on the rise. And the openness of and public participation in the development and deployment of those systems varies considerably across local governments and agencies...

Our Task Force’s report — the product of our two-year effort — offers concrete recommendations to policymakers. For example, we encourage independent reviews and public involvement in the development of algorithmic systems commensurate with their risks: higher-risk systems, like those involved in decisions affecting liberty, require more public buy-in and examination."