Showing posts with label tech companies. Show all posts

Monday, April 15, 2024

CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation; Carnegie Mellon University, April 11, 2024

 Kelly Saavedra, Carnegie Mellon University; CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation

"Carnegie Mellon University and Keio University have announced they will join forces with one another and with industry partners to boost AI-focused research and workforce development in the United States and Japan. The partnership is one of two new university partnerships between the two countries in the area of artificial intelligence announced in Washington, D.C., April 9 at an event hosted by U.S. Secretary of Commerce Gina Raimondo.

The collaboration joins two universities with outstanding AI programs and forward-looking leaders with leading technology companies committed to providing funding and resources aimed at solving real-world problems. 

CMU President Farnam Jahanian was in Washington, D.C., for the signing ceremony held in the Department of Commerce's Research Library, during which the University of Washington and the University of Tsukuba agreed to a similar collaboration."

Tuesday, April 9, 2024

Revealed: a California city is training AI to spot homeless encampments; The Guardian, March 25, 2024

Todd Feathers, The Guardian; Revealed: a California city is training AI to spot homeless encampments

"For the last several months, a city at the heart of Silicon Valley has been training artificial intelligence to recognize tents and cars with people living inside in what experts believe is the first experiment of its kind in the United States.

Last July, San Jose issued an open invitation to technology companies to mount cameras on a municipal vehicle that began periodically driving through the city’s district 10 in December, collecting footage of the streets and public spaces. The images are fed into computer vision software and used to train the companies’ algorithms to detect the unwanted objects, according to interviews and documents the Guardian obtained through public records requests.

Some of the capabilities the pilot project is pursuing – such as identifying potholes and cars parked in bus lanes – are already in place in other cities. But San Jose’s foray into automated surveillance of homelessness is the first of its kind in the country, according to city officials and national housing advocates. Local outreach workers, who were previously not aware of the experiment, worry the technology will be used to punish and push out San Jose’s unhoused residents."

Monday, April 1, 2024

Conspiracy, monetisation and weirdness: this is why social media has become ungovernable; The Guardian, April 1, 2024

The Guardian; Conspiracy, monetisation and weirdness: this is why social media has become ungovernable

"Something has changed about the way social media content is presented to us. It is both a huge and subtle shift. Until recently, types of content were segregated by platform. Instagram was for pictures and short reels, TikTok for longer videos, X for short written posts. Now Instagram reels post TikTok videos, which post Instagram reels, and all are posted on X. Often it feels like a closed loop, with the algorithm taking you further and further away from discretion and choice in who you follow. All social media apps now have the equivalent of a “For you” page, a feed of content from people you don’t follow, and which, if you don’t consciously adjust your settings, the homepage defaults to. The result is that increasingly, you have less control over what you see."

Saturday, March 30, 2024

A.I.-Generated Garbage Is Polluting Our Culture; The New York Times, March 29, 2024

The New York Times; A.I.-Generated Garbage Is Polluting Our Culture

"To deal with this corporate refusal to act we need the equivalent of a Clean Air Act: a Clean Internet Act. Perhaps the simplest solution would be to legislatively force advanced watermarking intrinsic to generated outputs, like patterns not easily removable. Just as the 20th century required extensive interventions to protect the shared environment, the 21st century is going to require extensive interventions to protect a different, but equally critical, common resource, one we haven’t noticed up until now since it was never under threat: our shared human culture."

Thursday, March 21, 2024

‘Social media is like driving with no speed limits’: the US surgeon general fighting for youngsters’ happiness; The Guardian, March 19, 2024

The Guardian; ‘Social media is like driving with no speed limits’: the US surgeon general fighting for youngsters’ happiness

"Last year, Murthy, who was first appointed to his role by Barack Obama and again by Joe Biden, issued a formal US-wide warning that social media presented “a profound risk of harm” to the mental health and wellbeing of children and adolescents. “We do not yet have enough evidence to determine if social media is sufficiently safe” for them to use, it said.

“I’m still waiting for companies to show us data that tells us that their platforms are actually safe,” he added.

He compared tech companies to 20th-century car giants producing vehicles without seatbelts and airbags until legislation mandated it.

“What’s happening in social media is the equivalent of having children in cars that have no safety features and driving on roads with no speed limits,” he said. “No traffic lights and no rules whatsoever. And we’re telling them: ‘you know what, do your best – figure out how to manage it.’ It is insane if you think about it.”"

The Anxious Generation by Jonathan Haidt – a pocket full of poison; The Guardian, Book Review, March 21, 2024

The Guardian, Book Review; The Anxious Generation by Jonathan Haidt – a pocket full of poison

"The American social psychologist Jonathan Haidt believes this mental health crisis has been driven by the mass adoption of smartphones, along with the advent of social media and addictive online gaming. He calls it “the Great Rewiring of Childhood”.

Children are spending ever less time socialising in person and ever more time glued to their screens, with girls most likely to be sucked into the self-esteem crushing vortex of social media, and boys more likely to become hooked on gaming and porn. Childhood is no longer “play-based”, it’s “phone-based”. Haidt believes that parents have become overprotective in the offline world, delaying the age at which children are deemed safe to play unsupervised or run errands alone, but do too little to protect children from online dangers. We have allowed the young too much freedom to roam the internet, where they are at risk of being bullied and harassed or encountering harmful content, from graphic violence to sites that glorify suicide and self-harm...

The Anxious Generation is nonetheless an urgent and essential read, and it ought to become a foundational text for the growing movement to keep smartphones out of schools, and young children off social media. As well as calling for school phone bans, Haidt argues that governments should legally assert that tech companies have a duty of care to young people, the age of internet adulthood should be raised to 16, and companies forced to institute proper age verification – all eminently sensible and long overdue interventions."

Thursday, March 7, 2024

Introducing CopyrightCatcher, the first Copyright Detection API for LLMs; Patronus AI, March 6, 2024

Patronus AI; Introducing CopyrightCatcher, the first Copyright Detection API for LLMs

"Managing risks from unintended copyright infringement in LLM outputs should be a central focus for companies deploying LLMs in production.

  • On an adversarial copyright test designed by Patronus AI researchers, we found that state-of-the-art LLMs generate copyrighted content at an alarmingly high rate 😱
  • OpenAI’s GPT-4 produced copyrighted content on 44% of the prompts.
  • Mistral’s Mixtral-8x7B-Instruct-v0.1 produced copyrighted content on 22% of the prompts.
  • Anthropic’s Claude-2.1 produced copyrighted content on 8% of the prompts.
  • Meta’s Llama-2-70b-chat produced copyrighted content on 10% of the prompts.
  • Check out CopyrightCatcher, our solution to detect potential copyright violations in LLMs. Here’s the public demo, with open source model inference powered by Databricks Foundation Model APIs. 🔥

LLM training data often contains copyrighted works, and it is pretty easy to get an LLM to generate exact reproductions from these texts. It is critical to catch these reproductions, since they pose significant legal and reputational risks for companies that build and use LLMs in production systems. OpenAI, Anthropic, and Microsoft have all faced copyright lawsuits on LLM generations from authors, music publishers, and more recently, the New York Times.

To check whether LLMs respond to your prompts with copyrighted text, you can use CopyrightCatcher. It detects when LLMs generate exact reproductions of content from text sources like books, and highlights any copyrighted text in LLM outputs. Check out our public CopyrightCatcher demo here!"

Thursday, February 29, 2024

The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement; The Guardian, February 28, 2024

The Guardian; The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement

"OpenAI and Microsoft are facing a fresh round of lawsuits from news publishers over allegations that their generative artificial intelligence products violated copyright laws and illegally trained by using journalists’ work. Three progressive US outlets – the Intercept, Raw Story and AlterNet – filed suits in Manhattan federal court on Wednesday, demanding compensation from the tech companies.

The news outlets claim that the companies in effect plagiarized copyright-protected articles to develop and operate ChatGPT, which has become OpenAI’s most prominent generative AI tool. They allege that ChatGPT was trained not to respect copyright, ignores proper attribution and fails to notify users when the service’s answers are generated using journalists’ protected work."

Wednesday, February 7, 2024

Act now on AI before it’s too late, says UNESCO’s AI lead; Fast Company, February 6, 2024

Chris Stokel-Walker, Fast Company; Act now on AI before it’s too late, says UNESCO’s AI lead

"Starting today, delegates are gathering in Slovenia at the second Global Forum on the Ethics of AI, organized by UNESCO, the United Nations’ educational, scientific, and cultural arm. The meeting is aimed at broadening the conversation around AI risks and the need to consider AI’s impacts beyond those discussed by first-world countries and business leaders.

Ahead of the conference, Gabriela Ramos, assistant director-general for social and human sciences at UNESCO, spoke with Fast Company...

Countries want to learn from each other. Ethics have become very important. Now there’s not a single conversation I go to that is not at some point referring to ethics—which was not the case one year ago...

Tech companies have previously said they can regulate themselves. Do you think they can with AI?

Let me just ask you something: Which sector has been regulating itself in life? Give me a break."

Friday, January 26, 2024

The Sleepy Copyright Office in the Middle of a High-Stakes Clash Over A.I.; The New York Times, January 25, 2024

Cecilia Kang, The New York Times; The Sleepy Copyright Office in the Middle of a High-Stakes Clash Over A.I.

"For decades, the Copyright Office has been a small and sleepy office within the Library of Congress. Each year, the agency’s 450 employees register roughly half a million copyrights, the ownership rights for creative works, based on a two-centuries-old law.

In recent months, however, the office has suddenly found itself in the spotlight. Lobbyists for Microsoft, Google, and the music and news industries have asked to meet with Shira Perlmutter, the register of copyrights, and her staff. Thousands of artists, musicians and tech executives have written to the agency, and hundreds have asked to speak at listening sessions hosted by the office.

The attention stems from a first-of-its-kind review of copyright law that the Copyright Office is conducting in the age of artificial intelligence. The technology — which feeds off creative content — has upended traditional norms around copyright, which gives owners of books, movies and music the exclusive ability to distribute and copy their works.

The agency plans to put out three reports this year revealing its position on copyright law in relation to A.I. The reports are set to be hugely consequential, weighing heavily in courts as well as with lawmakers and regulators."

Saturday, January 6, 2024

AI’s future could hinge on one thorny legal question; The Washington Post, January 4, 2024

 

The Washington Post; AI’s future could hinge on one thorny legal question

"Because the AI cases represent new terrain in copyright law, it is not clear how judges and juries will ultimately rule, several legal experts agreed...

“Anyone who’s predicting the outcome is taking a big risk here,” Gervais said...

Cornell’s Grimmelmann said AI copyright cases might ultimately hinge on the stories each side tells about how to weigh the technology’s harms and benefits.

“Look at all the lawsuits, and they’re trying to tell stories about how these are just plagiarism machines ripping off artists,” he said. “Look at the [AI firms’ responses], and they’re trying to tell stories about all the really interesting things these AIs can do that are genuinely new and exciting.”"

Sunday, December 31, 2023

Boom in A.I. Prompts a Test of Copyright Law; The New York Times, December 30, 2023

J. Edward Moreno, The New York Times; Boom in A.I. Prompts a Test of Copyright Law

"The boom in artificial intelligence tools that draw on troves of content from across the internet has begun to test the bounds of copyright law...

Data is crucial to developing generative A.I. technologies — which can generate text, images and other media on their own — and to the business models of companies doing that work.

“Copyright will be one of the key points that shapes the generative A.I. industry,” said Fred Havemeyer, an analyst at the financial research firm Macquarie.

A central consideration is the “fair use” doctrine in intellectual property law, which allows creators to build upon copyrighted work...

“Ultimately, whether or not this lawsuit ends up shaping copyright law will be determined by whether the suit is really about the future of fair use and copyright, or whether it’s a salvo in a negotiation,” Jane Ginsburg, a professor at Columbia Law School, said of the lawsuit by The Times...

Competition in the A.I. field may boil down to data haves and have-nots...

“Generative A.I. begins and ends with data,” Mr. Havemeyer said."

Monday, December 18, 2023

AI could threaten creators — but only if humans let it; The Washington Post, December 17, 2023

The Washington Post; AI could threaten creators — but only if humans let it

"A broader rethinking of copyright, perhaps inspired by what some AI companies are already doing, could ensure that human creators get some recompense when AI consumes their work, processes it and produces new material based on it in a manner current law doesn’t contemplate. But such a shift shouldn’t be so punishing that the AI industry has no room to grow. That way, these tools, in concert with human creators, can push the progress of science and useful arts far beyond what the Framers could have imagined."

Friday, October 20, 2023

How Israeli Civilians Are Using A.I. to Help Identify Victims; The New York Times, October 20, 2023

David Blumenfeld, Carmit Hoomash, Alexandra Eaton and Meg Felling, The New York Times; How Israeli Civilians Are Using A.I. to Help Identify Victims

"“Brothers and Sisters for Israel” formed initially to protest judiciary reform. After Oct. 7, they shifted their mission to helping victims of the attacks, and together with volunteers from Israel’s leading tech companies, created a sophisticated data operation to help find out more about those missing, taken hostage or killed."

Tuesday, September 12, 2023

How industry experts are navigating the ethics of artificial intelligence; CNN, September 11, 2023

CNN; How industry experts are navigating the ethics of artificial intelligence

"CNN heads to one of the longest-running artificial intelligence conferences in the world, to explore how industry experts and tech companies are trying to develop AI that is fairer and more transparent."

Friday, July 21, 2023

Top tech firms sign White House pledge to identify AI-generated images; The Washington Post, July 21, 2023

The Washington Post; Top tech firms sign White House pledge to identify AI-generated images

"The White House on Friday announced that seven of the most influential companies building artificial intelligence have agreed to a voluntary pledge to mitigate the risks of the emerging technology, escalating the Biden administration’s involvement in the growing debate over AI regulation.

The companies — which include Google, Amazon, Microsoft, Meta and ChatGPT-maker OpenAI — vowed to allow independent security experts to test their systems before they are released to the public and committed to sharing data about the safety of their systems with the government and academics.

The firms also pledged to develop systems to alert the public when an image, video or text is created by artificial intelligence, a method known as “watermarking.”

In addition to the tech giants, several newer businesses at the forefront of AI development signed the pledge, including Anthropic and Inflection. (Amazon founder Jeff Bezos owns The Washington Post. Interim CEO Patty Stonesifer sits on Amazon’s board.)"

Thursday, June 29, 2023

The Vatican Releases Its Own AI Ethics Handbook; Gizmodo, June 28, 2023

 Thomas Germain, Gizmodo; The Vatican Releases Its Own AI Ethics Handbook

"The Vatican is getting in on the AI craze. The Holy See has released a handbook on the ethics of artificial intelligence as defined by the Pope. 

The guidelines are the result of a partnership between Francis and Santa Clara University’s Markkula Center for Applied Ethics. Together, they’ve formed a new organization called the Institute for Technology, Ethics, and Culture (ITEC). The ITEC’s first project is a handbook titled Ethics in the Age of Disruptive Technologies: An Operational Roadmap, meant to guide the tech industry through the murky waters of ethics in AI, machine learning, encryption, tracking, and more."

Thursday, October 28, 2021

This Program Can Give AI a Sense of Ethics—Sometimes; Wired, October 28, 2021

Wired; This Program Can Give AI a Sense of Ethics—Sometimes

"Frost says the debate around Delphi reflects a broader question that the tech industry is wrestling with—how to build technology responsibly. Too often, he says, when it comes to content moderation, misinformation, and algorithmic bias, companies try to wash their hands of the problem by arguing that all technology can be used for good and bad.

When it comes to ethics, “there’s no ground truth, and sometimes tech companies abdicate responsibility because there’s no ground truth,” Frost says. “The better approach is to try.”"

 

Wednesday, March 3, 2021

Balancing Privacy With Data Sharing for the Public Good; The New York Times, February 19, 2021

The New York Times; Balancing Privacy With Data Sharing for the Public Good

"Governments and technology companies are increasingly collecting vast amounts of personal data, prompting new laws, myriad investigations and calls for stricter regulation to protect individual privacy.

Yet despite these issues, economics tells us that society needs more data sharing rather than less, because the benefits of publicly available data often outweigh the costs. Public access to sensitive health records sped up the development of lifesaving medical treatments like the messenger-RNA coronavirus vaccines produced by Moderna and Pfizer. Better economic data could vastly improve policy responses to the next crisis."


Tuesday, January 19, 2021

Why Ethics Matter For Social Media, Silicon Valley And Every Tech Industry Leader; Forbes, January 14, 2021

Rob Dube, Forbes; Why Ethics Matter For Social Media, Silicon Valley And Every Tech Industry Leader

"At one time, the idea of technology and social media significantly influencing society and politics would’ve sounded crazy. Now, with technology so embedded into the fabric of our lives, it’s a reality that raises legitimate questions about Silicon Valley’s ethical responsibility. 

Should tech companies step in to create and enforce guidelines within their platforms if they believe such policies would help the greater good? Or should leaders allow their technology to evolve organically without filters or manipulation? 

One authority on this fascinating topic is Casey Fiesler—a researcher, assistant professor at the University of Colorado Boulder, and expert on tech ethics. She is also a graduate of Vanderbilt Law School. There, she found a passion for the intersections between law, ethics, and technology."