Showing posts with label AI algorithms. Show all posts

Thursday, March 26, 2026

The Terrible Cost of the Infinite Scroll; The New York Times, March 26, 2026

The New York Times; The Terrible Cost of the Infinite Scroll

"It finally happened: Social media companies have been held accountable for the toxicity of their algorithmic grip.

In a first ruling of its kind, a California Superior Court jury found Wednesday that Meta and YouTube harmed a user through their addictive design choices.

The consequences for the industry could be significant. This case is only one of thousands set to be litigated across the country, and courts are seeking to consolidate them. This could wind up with a single significant settlement similar to the agreement that the four largest cigarette makers made in 1998 to resolve lawsuits for an estimated $206 billion as part of a master agreement with 46 states.

Compensating people for the harm caused by their products is just the silver lining. The real win would be if the social media giants were finally forced to design less harmful products."

Is Big Tech Facing a Big Tobacco Moment?; The New York Times, March 26, 2026

Andrew Ross Sorkin, Bernhard Warner, Sarah Kessler, Michael J. de la Merced, Niko Gallogly and Brian O’Keefe, The New York Times; Is Big Tech Facing a Big Tobacco Moment?

Back-to-back courtroom losses have put technology giants, including Meta and Google, in uncertain territory as they face lawsuits and bans on teen users.

"Andrew here. Back in 2018, I moderated a panel at the World Economic Forum that included Marc Benioff of Salesforce. It was then that he essentially declared that Facebook was the modern-day equivalent of cigarettes, and that it and other social media companies should be regulated as such.

Well, Meta’s loss in court on Wednesday, in a case about whether its platforms were designed to be addictive to adolescents, may be a watershed. Investors don’t seem to be fazed — the company’s shares hardly moved after the verdict came out — but the decision could change the conversation around the company yet again. More below...

Some legal experts wonder if Big Tech is staring at a Big Tobacco moment, a reference to how cigarette makers had to overhaul their businesses — at a huge expense — after courts ruled that some of their products were addictive and harmful.

“We’re in a new era, a digital era, where we have to rethink definitions for products based on which entities might have superior information to prevent these injuries and accidents,” Catherine Sharkey, a professor of law at N.Y.U., told The Times. She added that the “implications” of those verdicts were “very, very big.”

“This has potentially large impacts on other areas in tech, A.I. and beyond that,” Jessica Nall, a San Francisco lawyer who represents tech companies and executives, told The Wall Street Journal. “The floodgates are already open.”

Meta and Google plan to appeal. The companies have signaled that they will fight efforts to make them drastically redesign their products and algorithms."

Sunday, March 15, 2026

SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY; Pasadena Now, March 15, 2026

Pasadena Now; SHELLEY’S ‘FRANKENSTEIN’ GETS AN AI REBOOT AT PASADENA’S HASTINGS BRANCH LIBRARY

A discussion today ties the 1818 novel's warnings about creator responsibility to contemporary debates over artificial intelligence, part of the city's One City, One Story program.

"Two centuries before algorithms began analyzing people’s dreams and predicting their crimes, Mary Shelley wrote a novel about a scientist who built something he could not control. That novel, “Frankenstein,” is the subject of a free discussion today at Hastings Branch Library, where presenter Rosemary Choate will connect its 207-year-old themes to the same questions about artificial intelligence that Pasadena’s citywide reading program is exploring all month.

The event, titled “Frankenstein: Myths and the Real Story?” is part of the Pasadena Public Library’s 24th annual One City, One Story program, which this year selected Laila Lalami’s “The Dream Hotel” — a dystopian novel about a woman detained because an algorithm, fed by data from her dreams, deemed her a future criminal. The library has organized a month of lectures, films and book discussions around the novel’s themes of surveillance, technology and freedom, and the Frankenstein session draws a direct line between Shelley’s 1818 tale and the anxieties at the center of Lalami’s story.

Choate, a comparative literature and humanities instructor and founder of the Pomona College Alumni Book Club, will lead the discussion at 3 p.m. She will examine themes including creator responsibility, the consequences of unchecked technological ambition and society’s rejection of the “creation” — questions the library’s event description calls “highly relevant to contemporary debates surrounding the development and governance of AI,” according to the Pasadena Public Library’s event listing.

Shelley published “Frankenstein; or, The Modern Prometheus” anonymously in 1818, when she was 20 years old. The novel tells the story of Victor Frankenstein, a young scientist who assembles a creature from dead body parts and recoils from what he has made. The creature, abandoned by its creator, becomes violent as it fails to find acceptance. The novel is widely considered one of the first works of science fiction.

The One City, One Story program, now in its 24th year, selects a single book each year for citywide reading and discussion. A 19-member committee of community volunteers, led by Senior Librarian Christine Reeder, chose “The Dream Hotel” for its exploration of surveillance, freedom and the reach of technology into private life. The program is sponsored by The Friends of the Pasadena Public Library and the Pasadena Literary Alliance.

The month of events culminates in a conversation with Lalami and Pasadena Public Library Director Tim McDonald on Saturday, March 21, at 2 p.m. at Pasadena Presbyterian Church, 585 E. Colorado Blvd. That event is also free and open to the public."

Thursday, March 5, 2026

Vatican hosts seminar on AI and ethics; Vatican News, March 2, 2026

Edoardo Giribaldi, Vatican News; Vatican hosts seminar on AI and ethics

"“An abundance of means and a confusion of ends.” This phrase, attributed to Albert Einstein, offers a snapshot of a world challenged and shaped by new technologies. The interests at stake are multiple and not “neutral.” In this context, the Holy See — which has no military or commercial objectives — can play a key role in promoting global governance capable of developing systems that are “ethical from their design stage.”

These were some of the themes highlighted during the seminar “Potential and Challenges of Artificial Intelligence,” organized today, Monday 2 March, in Rome, at the Salone San Pio X on Via della Conciliazione 5, by the Secretariat for the Economy and the Office of Labor of the Apostolic See (ULSA)...

To summarize the consequences of the widespread uptake in 2022 of ChatGPT, Bishop Tighe used the acronym VUCA: Volatility, Uncertainty, Complexity, and Ambiguity...

Father Benanti’s presentation focused on the ethical challenges of artificial intelligence, proposing a new “ethics of technology” that questions the “politics” embedded in such models. “Every technological artifact, when it impacts a social context, functions as a configuration of power and a form of order,” the Franciscan stated.

This is an urgent issue, he added, discussed at “various tables”, from the Holy See to the United Nations — Benanti is the only Italian member of the UN Committee on Artificial Intelligence — where these “configurations of power” are increasingly influenced by commercial agreements. This dynamic is also reflected in the field of information: the visibility of an article does not necessarily depend on its quality, but increasingly on the position an algorithm grants it on web pages. It is a “mediation of power,” Benanti concluded."

Thursday, November 20, 2025

Trump Hatches Creepy New Plot to Target ‘Suspicious’ Drivers; The Daily Beast, November 20, 2025

The Daily Beast; Trump Hatches Creepy New Plot to Target ‘Suspicious’ Drivers

"Border Patrol agents armed with hidden cameras and AI-driven algorithms are flagging millions of American drivers as “suspicious” and triggering covert traffic stops across the country, according to a new investigation.

The Trump administration has quietly expanded a vast domestic surveillance web that tracks and analyzes the travel patterns of millions of drivers—feeding local police tips that lead to secretive traffic stops, searches, and arrests, the Associated Press reports.

The intelligence project, built and run by Border Patrol’s parent agency, U.S. Customs and Border Protection (CBP), gathers vehicle movements through a national network of covert license plate readers disguised inside roadside barrels, cones, and job-site equipment, AP reports...

Legal scholars warn that the scale of the data collection—tracking “patterns of life” for millions of ordinary drivers—could violate the Fourth Amendment. “Large-scale surveillance technology that’s capturing everyone and everywhere at every time” may be unconstitutional, Andrew Ferguson, a law professor at George Washington University, told AP.

The program is powered by an enormous expansion of CBP’s intelligence capabilities since President Donald Trump returned to office. Congress has authorized more than $2.7 billion to layer artificial intelligence onto existing surveillance networks. 

Meanwhile, Operation Stonegarden—a two-decade-old federal grant scheme—now channels hundreds of millions of dollars to local sheriff’s offices to buy license-plate readers and drones, and to fund overtime that effectively deputizes local cops into Border Patrol’s mission. Under Trump, congressional Republicans increased Stonegarden to $450 million over four fiscal years."

Wednesday, September 17, 2025

Trump celebrates TikTok deal as Beijing suggests US app would use China’s algorithm; The Guardian, September 16, 2025

Guardian staff and agencies, The Guardian; Trump celebrates TikTok deal as Beijing suggests US app would use China’s algorithm


[Kip Currier: Wasn't fear of the Chinese government's potential ability to manipulate U.S. TikTok users via the TikTok algorithm one of the chief rationales for the past Congress and Biden administration's banning of TikTok? How does this Trump 2.0 deal materially change any of that?

Another rationale for the ban was concerns about China's potential to access and leverage the personal data and impinge on the privacy interests of TikTok users in the U.S. How does this proposed arrangement substantively address these concerns, particularly without comprehensive federal data and privacy legislation to give Americans agency over their own data?

The American people need maximal transparency and oversight of any kind of financial deal like this.]


[Excerpt]

"One of the major questions is the fate of TikTok’s powerful algorithm that helped the app become one of the world’s most popular sources of online entertainment.

At a press conference in Madrid, the deputy head of China’s cyber security regulator said the framework of the deal included “licensing the algorithm and other intellectual property rights”.

Wang Jingtao said ByteDance would “entrust the operation of TikTok’s US user data and content security.”

Some commentators have inferred from these comments that TikTok’s US spinoff will retain the Chinese algorithm."

Sunday, June 29, 2025

ACM FAccT ACM Conference on Fairness, Accountability, and Transparency; June 23-26, 2025, Athens, Greece


ACM FAccT

ACM Conference on Fairness, Accountability, and Transparency

A computer science conference with a cross-disciplinary focus that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.

"Algorithmic systems are being adopted in a growing number of contexts, fueled by big data. These systems filter, sort, score, recommend, personalize, and otherwise shape human experience, increasingly making or informing decisions with major impact on access to, e.g., credit, insurance, healthcare, parole, social security, and immigration. Although these systems may bring myriad benefits, they also contain inherent risks, such as codifying and entrenching biases, reducing accountability, and hindering due process; they also increase the information asymmetry between individuals whose data feed into these systems and big players capable of inferring potentially relevant information.

ACM FAccT is an interdisciplinary conference dedicated to bringing together a diverse community of scholars from computer science, law, social sciences, and humanities to investigate and tackle issues in this emerging area. Research challenges are not limited to technological solutions regarding potential bias, but include the question of whether decisions should be outsourced to data- and code-driven computing systems. We particularly seek to evaluate technical solutions with respect to existing problems, reflecting upon their benefits and risks; to address pivotal questions about economic incentive structures, perverse implications, distribution of power, and redistribution of welfare; and to ground research on fairness, accountability, and transparency in existing legal requirements."

Wednesday, June 25, 2025

Second study finds Uber used opaque algorithm to dramatically boost profits; The Guardian, June 25, 2025

The Guardian; Second study finds Uber used opaque algorithm to dramatically boost profits

"A second major academic institution has accused Uber of using opaque computer code to dramatically increase its profits at the expense of the ride-hailing app’s drivers and passengers.

Research by academics at New York’s Columbia Business School concluded that the Silicon Valley company had implemented “algorithmic price discrimination” that had raised “rider fares and cut driver pay on billions of … trips, systematically, selectively, and opaquely”."

Thursday, May 15, 2025

Republicans propose prohibiting US states from regulating AI for 10 years; The Guardian, May 14, 2025

The Guardian; Republicans propose prohibiting US states from regulating AI for 10 years

"Republicans in US Congress are trying to bar states from being able to introduce or enforce laws that would create guardrails for artificial intelligence or automated decision-making systems for 10 years.

A provision in the proposed budgetary bill now before the House of Representatives would prohibit any state or local governing body from pursuing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” unless the purpose of the law is to “remove legal impediments to, or facilitate the deployment or operation of” these systems...

The bill defines AI systems and models broadly, with anything from facial recognition systems to generative AI qualifying. The proposed law would also apply to systems that use algorithms or AI to make decisions including for hiring, housing and whether someone qualifies for public benefits.

Many of these automated decision-making systems have recently come under fire. The deregulatory proposal comes on the heels of a lawsuit filed by several state attorneys general against the property management software RealPage, which the lawsuit alleges colluded with landlords to raise rents based on the company’s algorithmic recommendations. Another company, SafeRent, recently settled a class-action lawsuit filed by Black and Hispanic renters who say they were denied apartments based on an opaque score the company gave them."

Monday, January 20, 2025

Meta’s Decision to End Fact-Checking Could Have Disastrous Consequences; The New York Times, January 14, 2025

The New York Times; Meta’s Decision to End Fact-Checking Could Have Disastrous Consequences

"What happens on Meta’s platforms is more than just a matter of company policy. The prevalence of false information on social media and the ease with which it can proliferate have helped fuel division and violence in the United States and abroad. The company’s addictive algorithms were so effective in supercharging posts encouraging ethnic cleansing in Myanmar that Amnesty International called upon Meta to pay reparations to the Rohingya people. (The company said “we have been too slow to prevent misinformation and hate on Facebook” in Myanmar, and eventually took steps to proactively identify and remove posts.)

I first learned the importance of fact-checking while working as a reporter in Sri Lanka in 2018, when an episode of violence tied to Meta’s platforms rocked the country."

Sunday, December 1, 2024

5 Underrated Films About AI Ethics Every Tech Leader Should Watch; Forbes, November 26, 2024

Bruce Weinstein, Ph.D., Forbes; 5 Underrated Films About AI Ethics Every Tech Leader Should Watch

"If you’re a tech leader—and even if you’re not—you owe it to yourself to watch at least a couple of the films on this list. Each raises profound ethical questions and is gripping to boot.

So here are 5 lesser-known works of cinema waiting for you online or on old-fashioned DVD or Blu-Ray discs. For each film I’m including:

  • a reference to an ethical question raised by the film
  • a reference for digging more deeply into the film’s ethical issues
  • The Rotten Tomatoes rating at the time of this article’s publication
  • where to watch"

Monday, November 25, 2024

OpenAI’s funding into AI morality research: challenges and implications; The Economic Times, November 25, 2024

The Economic Times; OpenAI’s funding into AI morality research: challenges and implications

"OpenAI Inc has awarded Duke University researchers a grant for a project titled ‘Research AI Morality,’ the nonprofit revealed in a filing with the Internal Revenue Service (IRS), according to a TechCrunch report. This is part of a larger three-year, $1-million grant to Duke professors studying “making moral AI.”

The funding was granted to “develop algorithms that can predict human moral judgments in scenarios involving conflicts among morally relevant features in medicine, law and business,” the university said in a press release. Not much is known about this research except the fact that the funding ends in 2025."

Saturday, September 28, 2024

Pulling Back the Silicon Curtain; The New York Times, September 10, 2024

Dennis Duncan, The New York Times; Pulling Back the Silicon Curtain

Review of NEXUS: A Brief History of Information Networks From the Stone Age to AI, by Yuval Noah Harari

"In a nutshell, Harari’s thesis is that the difference between democracies and dictatorships lies in how they handle information...

The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.

The threats A.I. poses are not the ones that filmmakers visualize: Kubrick’s HAL trapping us in the airlock; a fascist RoboCop marching down the sidewalk. They are more insidious, harder to see coming, but potentially existential. They include the catastrophic polarizing of discourse when social media algorithms designed to monopolize our attention feed us extreme, hateful material. Or the outsourcing of human judgment — legal, financial or military decision-making — to an A.I. whose complexity becomes impenetrable to our own understanding.

Echoing Churchill, Harari warns of a “Silicon Curtain” descending between us and the algorithms we have created, shutting us out of our own conversations — how we want to act, or interact, or govern ourselves...

“When the tech giants set their hearts on designing better algorithms,” writes Harari, “they can usually do it.”...

Parts of “Nexus” are wise and bold. They remind us that democratic societies still have the facilities to prevent A.I.’s most dangerous excesses, and that it must not be left to tech companies and their billionaire owners to regulate themselves."

Friday, August 30, 2024

Amy Klobuchar Wants to Stop Algorithms From Ripping You Off; The New York Times, August 30, 2024

The New York Times; Amy Klobuchar Wants to Stop Algorithms From Ripping You Off

"This week I interviewed Senator Amy Klobuchar, Democrat of Minnesota, about her Preventing Algorithmic Collusion Act. If you don’t know what algorithmic collusion is, it’s time to get educated, because you could be its next victim.

Algorithmic collusion is where companies illegally coordinate to raise prices through the use of an algorithm that they supply their data to. There is no explicit or even wink-and-a-nod agreement among the competitors, the usual standard for collusion. Instead, each company has its own contract with the algorithm provider. That provider uses the companies’ data to make pricing recommendations that make them all richer — at the expense of their customers.

Algorithmic collusion made the headlines this month when Vice President Kamala Harris vowed to crack down, should she be elected, on “corporate landlords” that use price-setting software to jack up rents. Last week, the Justice Department sued RealPage, charging that the company, which uses an algorithm powered by artificial intelligence to help landlords set rental rates, referred to its products as “driving every possible opportunity to increase price.”"
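The mechanism described above — each company independently feeding its data to the same pricing algorithm, which then recommends prices that enrich all of them — can be sketched with a toy economic model. The numbers, function names, and demand curve below are entirely hypothetical illustrations, not from the article or any court filing; the sketch only shows why an algorithm optimizing *joint* revenue tends to recommend higher prices than head-to-head competition would produce.

```python
# Toy model (hypothetical, for illustration only): with linear demand
# D(p) = a - b*p and identical unit costs, price competition between
# interchangeable firms pushes the price toward cost, while a shared
# algorithm maximizing the firms' *combined* profit recommends the
# monopoly price instead.

def shared_algorithm_price(a: float, b: float, cost: float) -> float:
    """Price maximizing joint profit (p - cost) * (a - b*p).

    Setting the derivative a - 2*b*p + b*cost to zero gives
    p = (a + b*cost) / (2*b).
    """
    return (a + b * cost) / (2 * b)

def competitive_price(cost: float) -> float:
    """Bertrand-style competition between identical firms: each can
    profitably undercut any price above cost, so price falls to cost."""
    return cost

# Made-up demand intercept, slope, and unit cost.
a, b, cost = 100.0, 1.0, 20.0

recommended = shared_algorithm_price(a, b, cost)   # 60.0
undercut = competitive_price(cost)                 # 20.0
print(f"shared-algorithm price: {recommended}, competitive price: {undercut}")
```

Under these assumptions the shared algorithm recommends 60 where competition would yield 20 — and, as the excerpt notes, no firm ever has to agree with a rival, only with the algorithm provider.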

Friday, August 23, 2024

U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents; The New York Times, August 23, 2024

Danielle Kaye and Lauren Hirsch, The New York Times; U.S. Accuses Software Maker RealPage of Enabling Collusion on Rents

"The Justice Department filed an antitrust lawsuit on Friday against the real estate software company RealPage, alleging its software enabled landlords to collude to raise rents across the United States.

The suit, joined by North Carolina, California, Colorado, Connecticut, Minnesota, Oregon, Tennessee and Washington, accuses RealPage of facilitating a price-fixing conspiracy that boosted rents beyond market forces for millions of people. It’s the first major civil antitrust lawsuit where the role of an algorithm in pricing manipulation is central to the case, Justice Department officials said."

Monday, July 22, 2024

Landlords Used Software to Set Rents. Then Came the Lawsuits.; The New York Times, July 19, 2024

Danielle Kaye, The New York Times; Landlords Used Software to Set Rents. Then Came the Lawsuits.

"The use of the RealPage software in setting rents was the subject of a ProPublica investigation in 2022. Antitrust experts say the allegations in the lawsuits, if substantiated, paint a clear-cut picture of violations of federal antitrust law, which prohibits agreements among competitors to fix prices.

“There’s an emerging view that these exchanges of confidential business information raise significant competitive concerns,” said Peter Carstensen, an emeritus professor at the University of Wisconsin focused on antitrust law and competition policy. The use of algorithmic software, he added, “speeds up the coordination and makes it possible to coordinate many more players with really good information.”"

Thursday, July 18, 2024

An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.; The New York Times, July 18, 2024

Adam Satariano and Roser Toll Pifarré, photographs by Ana María Arévalo Gosen, The New York Times; An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.

"Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins. At its best, the system has helped police protect vulnerable women and, overall, has reduced the number of repeat attacks in domestic violence cases. But the reliance on VioGén has also resulted in victims, whose risk levels are miscalculated, getting attacked again — sometimes leading to fatal consequences."