Saturday, May 18, 2024

She worked in animal research. Now she’s blocked from commenting on it.; The Washington Post, May 6, 2024

The Washington Post; She worked in animal research. Now she’s blocked from commenting on it.

"For a long time, Madeline Krasno didn’t tell other animal rights advocates that she had worked in a monkey research lab as a college student. It had taken her years to understand her nightmares and fragmented memories as signs of post-traumatic stress disorder. And some activists could be vicious to former lab workers.

But four years after she graduated from the University of Wisconsin at Madison, Krasno started posting online about her experiences. Eventually, she started tagging the school in those posts and then commenting on its pages.

Many of those comments disappeared. As she would later learn, it was not a mistake or a glitch. Both the university and the National Institutes of Health were blocking her comments. Now with support from free speech and animal rights organizations, she is suing both institutions."

Wiley shuts 19 scholarly journals amid AI paper mill problems; The Register, May 16, 2024

 Thomas Claburn, The Register; Wiley shuts 19 scholarly journals amid AI paper mill problems

"US publishing house Wiley this week discontinued 19 scientific journals overseen by its Hindawi subsidiary, the center of a long-running scholarly publishing scandal.

In December 2023 Wiley announced it would stop using the Hindawi brand, acquired in 2021, following its decision in May 2023 to shut four of its journals "to mitigate against systematic manipulation of the publishing process."

Hindawi's journals were found to be publishing papers from paper mills – organizations or groups of individuals who try to subvert the academic publishing process for financial gain. Over the past two years, a Wiley spokesperson told The Register, the publisher has retracted more than 11,300 papers from its Hindawi portfolio.

As described in a Wiley-authored white paper published last December, "Tackling publication manipulation at scale: Hindawi’s journey and lessons for academic publishing," paper mills rely on various unethical practices – such as the use of AI in manuscript fabrication and image manipulations, and gaming the peer review process...

In January, Wiley signed on to United2Act – an industry initiative to combat paper mills.

But the concern over scholarly research integrity isn't confined to Wiley publications. A study published in Nature last July suggests as many as a quarter of clinical trials are problematic or entirely fabricated."

Display at Alito’s Home Renews Questions of Supreme Court’s Impartiality; The New York Times, May 17, 2024

Jodi Kantor, The New York Times; Display at Alito’s Home Renews Questions of Supreme Court’s Impartiality

"News of a “Stop the Steal” symbol that flew at the home of Justice Samuel A. Alito Jr. after the 2020 election has elicited concerns from politicians, legal scholars and others about the Supreme Court’s ethical standards — and, most urgent, whether the public will regard its rulings about Jan. 6, 2021, as fairly decided."

Reddit shares jump after OpenAI ChatGPT deal; BBC, May 17, 2024

  João da Silva, BBC; Reddit shares jump after OpenAI ChatGPT deal

"Shares in Reddit have jumped more than 10% after the firm said it had struck a partnership deal with artificial intelligence (AI) start-up OpenAI.

Under the agreement, the company behind the ChatGPT chatbot will get access to Reddit content, while it will also bring AI-powered features to the social media platform...

Meanwhile, Google announced a partnership in February which allows the technology giant to access Reddit data to train its AI models.

Both in the European Union and US, there are questions around whether it is copyright infringement to train AI tools on such content, or whether it falls under fair use and "temporary copying" exceptions."

Friday, May 17, 2024

Making sure it’s Correkt: a group of UCSB students set out to revolutionize the ethics of AI chatbots; The Daily Nexus, University of California, Santa Barbara, May 16, 2024

The Daily Nexus, University of California, Santa Barbara; Making sure it’s Correkt: a group of UCSB students set out to revolutionize the ethics of AI chatbots

"When second-year computer science major Alexzendor Misra first came to UC Santa Barbara in Fall Quarter 2022, he had no idea an ill-fated encounter with a conspiracy-believing peer would inspire the creation of an artificial intelligence search engine, Correkt. 

Merely two months into college, Misra began a project that he hopes can truly change the ethics of artificial intelligence (AI) chatbots. 

Now, Misra and his team, consisting of first-year statistics and data science major Andre Braga, first-year computer science major Ryan Hung, first-year statistics and data science major Chan Park, first-year computer engineering major Khilan Surapaneni and second-year computer science majors Noah Wang and Ramon Wang, are ready to showcase the outcome of their project. They are preparing themselves to present their product, Correkt, an AI search engine, to the UCSB community at the AI Community of Practice (CoP) Spring Symposium on May 20. 

Correkt is not so different from ChatGPT — in fact, what ChatGPT does, Correkt can do too. Yet, Correkt aims to solve one critical issue with ChatGPT: misinformation. 

It is known that ChatGPT is prone to “hallucinations,” a term that, according to IBM, refers to the generation of false information due to the AI software’s misinterpretation of patterns or objects. Correkt is designed to prevent these instances of misinformation dissemination in two ways. 

Correkt is linked solely to reputable data sources — newspapers, textbooks and peer-reviewed journals. The AI model currently draws its information from an expansive data bank of over 180 million well-established, trustworthy resources — a number set to grow with time. This greatly lowers the risk of receiving inaccurate information by eliminating unreliable sources. However, it still does not give users the freedom to verify the information they access.

This is where Correkt truly sets itself apart from pre-existing AI chatbots: it includes a built-in citation function that details the precise location where every piece of information it presents to the user was retrieved. Essentially, Correkt is a hybrid between a search engine and an AI chatbot. The citation function allows users to judge for themselves the accuracy and validity of the information they receive as they would when conducting research through a search engine. The difference would be that the results will be much more streamlined with the support of AI. 

“[Correkt] has so much more value as a way to find information, like a new generation of [a] search engine,” Misra comments enthusiastically."

Supreme Court Ethics Controversies: Alito’s Upside-Down Flag Flying Draws Concern; Forbes, May 17, 2024

Alison Durkee, Forbes; Supreme Court Ethics Controversies: Alito’s Upside-Down Flag Flying Draws Concern

"Supreme Court Justice Samuel Alito flew an upside-down American flag outside his house after the 2020 election, The New York Times reported Thursday—a symbol of the “Stop the Steal” movement challenging the election results—the latest in a string of recent ethics issues the court has faced that have ramped up criticism of the court and sparked cries for a binding code of ethics from lawmakers and legal experts...

Alito said in a statement to the Times about the flag flying that he “had no involvement whatsoever in the flying of the flag,” claiming “it was briefly placed by Mrs. Alito in response to a neighbor’s use of objectionable and personally insulting language on yard signs.”"

Thursday, May 16, 2024

How to Implement AI — Responsibly; Harvard Business Review (HBR), May 10, 2024

Harvard Business Review (HBR); How to Implement AI — Responsibly

"Regrettably, our research suggests that such proactive measures are the exception rather than the rule. While AI ethics is high on the agenda for many organizations, translating AI principles into practices and behaviors is proving easier said than done. However, with stiff financial penalties at stake for noncompliance, there’s little time to waste. What should leaders do to double-down on their responsible AI initiatives?

To find answers, we engaged with organizations across a variety of industries, each at a different stage of implementing responsible AI. While data engineers and data scientists typically take on most responsibility from conception to production of AI development lifecycles, nontechnical leaders can play a key role in ensuring the integration of responsible AI. We identified four key moves — translate, integrate, calibrate and proliferate — that leaders can make to ensure that responsible AI practices are fully integrated into broader operational standards."

Wednesday, May 15, 2024

In Florida, a bestselling author is building a new community of literary resistance; CNN, May 2, 2024

CNN; In Florida, a bestselling author is building a new community of literary resistance

"Groff is opening the Lynx now to plant a flag in her adopted home state, where she worries free thought is becoming increasingly endangered.

This isn’t just any bookstore, because Groff isn’t just any bookseller. She’s a three-time National Book Award finalist and one of Florida’s most acclaimed living writers. She’s published bestselling novels like “Fates and Furies,” a diptych that documents a marriage from the perspective of both spouses, and “Matrix,” historical fiction set in a 12th-century French convent...

But she and her neighbors are increasingly concerned about laws Florida’s government is passing that make it easier to ban books, restrict what can be taught about Black history and limit the rights of LGBTQ residents.

The city needed a new stronghold against these threats, and Groff knew she was the person to make it."

Illinois Attorney General Kwame Raoul sues company for publishing voters’ personal data; Chicago Sun-Times, May 9, 2024

Chicago Sun-Times; Illinois Attorney General Kwame Raoul sues company for publishing voters’ personal data

"A publishing company whose politically-slanted newspapers have been derided as “pink slime” is being sued by Illinois Attorney General Kwame Raoul for illegally identifying birthdates and home addresses of “hundreds of thousands” of voters.

Raoul’s legal move against Local Government Information Services accuses the company of publishing sensitive personal data that could subject voters across Illinois to identity theft.
Among those whose personal data has been identified on LGIS’ nearly three dozen online websites are current and former judges, police officers, high-ranking state officials and victims of domestic violence and human trafficking, Raoul’s filing said."

The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights; The National Law Review, May 13, 2024

  Danner Kline of Bradley Arant Boult Cummings LLP, The National Law Review; The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights

"As generative AI systems become increasingly sophisticated and widespread, concerns around the use of copyrighted works in their training data continue to intensify. The proposed Generative AI Copyright Disclosure Act of 2024 attempts to address this unease by introducing new transparency requirements for AI developers.

The Bill’s Purpose and Requirements

The primary goal of the bill is to ensure that copyright owners have visibility into whether their intellectual property is being used to train generative AI models. If enacted, the law would require companies to submit notices to the U.S. Copyright Office detailing the copyrighted works used in their AI training datasets. These notices would need to be filed within 30 days before or after the public release of a generative AI system.

The Copyright Office would then maintain a public database of these notices, allowing creators to search and see if their works have been included. The hope is that this transparency will help copyright holders make more informed decisions about licensing their IP and seeking compensation where appropriate."

Intellectual property: Protecting traditional knowledge from Western plunder; Frontline, May 15, 2024

Deutsche Welle, Frontline; Intellectual property: Protecting traditional knowledge from Western plunder

"Stopping the loss of heritage and knowledge

“The problem? When a patent for traditional knowledge is granted to a third party, that party formally becomes the owner of such knowledge,” said Sattigeri. “The nation loses its heritage and its own traditional knowledge.” But now, that could be changing. In May 2024, WIPO’s 193 member states will meet and potentially ratify the first step of a legal instrument aimed at creating greater protections for these assets.

WIPO has broken them down into three areas seen as vulnerable under the current system: genetic resources, traditional knowledge, and traditional cultural expression. Genetic resources are biological materials like plants and animals that contain genetic information, while traditional knowledge encompasses generational wisdom within communities, which is usually passed down orally. This could include knowledge about biodiversity, food, agriculture, healthcare, and more. Traditional cultural expression includes artistic creations reflecting a group’s heritage and identity, like music, art, and design.

“It changes the classic understanding of intellectual property,” said Dornis. “It might break the system that [says that] many things are unprotected.”"

Tuesday, May 14, 2024

AI Challenges, Freedom to Read Top AAP Annual Meeting Discussions; Publishers Weekly, May 13, 2024

Jim Milliot, Publishers Weekly; AI Challenges, Freedom to Read Top AAP Annual Meeting Discussions

"The search for methods of reining in technology companies’ unauthorized copying of copyrighted materials to build generative AI models was the primary theme of this year's annual meeting of the Association of American Publishers, held May 9 over Zoom...

“To protect society, we will need a forward-thinking scheme of legal rules and enforcement authority across numerous jurisdictions and disciplines—not only intellectual property, but also national security, trade, privacy, consumer protection, and human rights, to name a few,” Pallante said. “And we will need ethical conduct.”...

Newton-Rex began in the generative AI space in 2010, and now leads Fairly Trained, which launched in January as a nonprofit that seeks to certify AI companies that don't train models on copyrighted work without creators’ consent (Pallante is an advisor for the company). He founded the nonprofit after leaving a tech company, Stability, that declined to use a licensing model to get permission to use copyrighted materials in training. Stability, Newton-Rex said, “argues that you can train on whatever you want. And it's a fair use in the United States, and I think this is not only incorrect, but I think it's ethically unforgivable. And I think we have to fight it with everything we have.”

“The old rules of copyright are gone,” said Maria Ressa, cofounder of the online news company Rappler and winner of the 2021 Nobel Peace Prize, in her keynote. “We are literally standing on the rubble of the world that was. If we don’t recognize it, we can’t rebuild it.”

Ressa added that, in a social media world drowning in misinformation and manipulation, “it is crucial that we get back to facts.” Ressa advised publishers to “hold the line” in protecting their IP, and to continue to defend the importance of truth: “You cannot have rule of law if you do not have integrity of facts.”"

Ethics and equity in the age of AI; Vanderbilt University Research News, May 7, 2024

Jenna Somers, Vanderbilt University Research News; Ethics and equity in the age of AI

"Throughout the conversation, the role of human intellect in responsible AI use emerged as an essential theme. Because generative AI is trained on a huge body of text on the internet and designed to detect and repeat patterns of language use, it runs the risk of perpetuating societal biases and stereotypes. To mitigate these effects, the panelists emphasized the need to be intentional, critical, and evaluative when using AI, whether users are experts designing and training models at top-tier companies or college students completing an AI-based class assignment.

“There is a lot of work to do around AI literacy, and we can think about this in two parts,” Wise said."

Thursday, May 2, 2024

How One Author Pushed the Limits of AI Copyright; Wired, April 17, 2024

Wired; How One Author Pushed the Limits of AI Copyright

"The novel draws from Shupe’s eventful life, including her advocacy for more inclusive gender recognition. Its registration provides a glimpse of how the USCO is grappling with artificial intelligence, especially as more people incorporate AI tools into creative work. It is among the first creative works to receive a copyright for the arrangement of AI-generated text.

“We’re seeing the Copyright Office struggling with where to draw the line,” intellectual property lawyer Erica Van Loon, a partner at Nixon Peabody, says. Shupe’s case highlights some of the nuances of that struggle—because the approval of her registration comes with a significant caveat.

The USCO’s notice granting Shupe copyright registration of her book does not recognize her as author of the whole text as is conventional for written works. Instead she is considered the author of the “selection, coordination, and arrangement of text generated by artificial intelligence.” This means no one can copy the book without permission, but the actual sentences and paragraphs themselves are not copyrighted and could theoretically be rearranged and republished as a different book.

The agency backdated the copyright registration to October 10, the day that Shupe originally attempted to register her work. It declined to comment on this story. “The Copyright Office does not comment on specific copyright registrations or pending applications for registration,” Nora Scheland, an agency spokesperson says. President Biden’s executive order on AI last fall asked the US Patent and Trademark Office to make recommendations on copyright and AI to the White House in consultation with the Copyright Office, including on the “scope of protection for works produced using AI.”"

T-Mobile, AT&T, Sprint, Verizon slapped with $200M fine — here’s what they illegally did with your data; Mashable, April 30, 2024

Matt Binder, Mashable; T-Mobile, AT&T, Sprint, Verizon slapped with $200M fine — here’s what they illegally did with your data

"AT&T, Verizon, Sprint, and T-Mobile allegedly provided location data to third parties without their users' consent, which is illegal.

“Our communications providers have access to some of the most sensitive information about us," said FCC Chairwoman Jessica Rosenworcel in a statement. "These carriers failed to protect the information entrusted to them. Here, we are talking about some of the most sensitive data in their possession: customers’ real-time location information, revealing where they go and who they are.” 

FCC fines the biggest U.S. mobile carriers

According to the FCC, T-Mobile has been fined the largest amount: $80 million. Sprint, which has merged with T-Mobile since the FCC's investigation began, also received a $12 million fine.

AT&T will have to pay more than $57 million and Verizon will dole out close to $47 million."

Wednesday, May 1, 2024

FTC Challenges ‘junk’ patents held by 10 drugmakers, including for Novo Nordisk’s Ozempic; CNBC, April 30, 2024

Annika Kim Constantino, CNBC; FTC Challenges ‘junk’ patents held by 10 drugmakers, including for Novo Nordisk’s Ozempic

"Most top-selling medications are protected by dozens of patents covering various ingredients, manufacturing processes, and intellectual property. Generic drugmakers can only launch cheaper versions of a branded drug if the patents have expired or are successfully challenged in court.

“By filing bogus patent listings, pharma companies block competition and inflate the cost of prescription drugs, forcing Americans to pay sky-high prices for medicines they rely on,” FTC Chair Lina Khan said in a release. “By challenging junk patent filings, the FTC is fighting these illegal tactics and making sure that Americans can get timely access to innovative and affordable versions of the medicines they need.”

The FTC also notified the Food and Drug Administration about the challenges. The FDA manages patent listings for approved drugs on a document called the Orange Book.

The FTC first challenged dozens of branded drug patents last fall, leading three drugmakers to comply and delist their patents with the FDA. Five other companies did not. 

The Tuesday announcement expands the Biden administration’s effort to crack down on alleged patent abuses by the pharmaceutical industry. The FTC has argued that drugmakers are needlessly listing dozens of extra patents for branded medications to keep their drug prices high and stall generic competitors from entering the U.S. market."

Microsoft’s “responsible AI” chief worries about the open web; The Washington Post, May 1, 2024

The Washington Post; Microsoft’s “responsible AI” chief worries about the open web

"As tech giants move toward a world in which chatbots supplement, and perhaps supplant, search engines, the Microsoft executive assigned to make sure AI is used responsibly said the industry has to be careful not to break the business model of the wider web. Search engines citing and linking to the websites they draw from is “part of the core bargain of search,” Natasha Crampton said in an interview Monday.

Crampton, Microsoft’s chief Responsible AI officer, spoke with The Technology 202 ahead of Microsoft’s release today of its first “Responsible AI Transparency Report.” The 39-page report, which the company is billing as the first of its kind from a major tech firm, details how Microsoft plans to keep its rapidly expanding stable of AI tools from wreaking havoc. 

It makes the case that the company has closely integrated Crampton’s Responsible AI team into its development of new AI products. It also details the progress the company has made toward meeting some of the Voluntary AI Commitments that Microsoft and other tech giants signed on to in September as part of the Biden administration’s push to regulate artificial intelligence. Those include developing safety evaluation systems for its AI cloud tools, expanding its internal AI “red teams,” and allowing users to mark images as AI-generated."

Responsible AI Transparency Report; Microsoft, May 2024

 Microsoft; Responsible AI Transparency Report

Sunday, April 28, 2024

Cisco signs the "Rome Call for AI Ethics"; Vatican News, April 2024

Linda Bordoni, Vatican News; Cisco signs the "Rome Call for AI Ethics"

"Expressing satisfaction that the Multinational Digital Communications Technology – Cisco – has joined other major companies involved in AI, in pledging to adhere to ethical guidelines, Archbishop Vincenzo Paglia, underscored the fact that artificial intelligence is “no longer a topic just for experts” and that the ethics of its development is more urgent than ever.

The President of the Pontifical Academy for Life (PAV) was speaking at an event on Wednesday morning during which the CEO of Cisco System Inc., put his signature to The Call for AI Ethics, a document promoted by the Pontifical Academy and by its RenAIssance Foundation (that supports the anthropological and ethical reflection of new technologies on human life) and has already been endorsed by the likes of Microsoft, IBM, FAO and the Italian Ministry of Innovation."

Stop Using Your Face or Thumb to Unlock Your Phone; Gizmodo, April 26, 2024

 Kyle Barr, Gizmodo; Stop Using Your Face or Thumb to Unlock Your Phone

"Last week, the 9th Circuit Court of Appeals in California released a ruling that concluded state highway police were acting lawfully when they forcibly unlocked a suspect’s phone using their fingerprint. You probably didn’t hear about it. The case didn’t get a lot of coverage, especially because the courts weren’t giving a blanket green light for every cop to shove your thumb to your screen during an arrest. But it’s another toll of the warning bell that reminds you to not trust biometrics to keep your phone’s sensitive info private. In many cases, especially if you think you might interact with the police (at a protest, for example), you should seriously consider turning off biometrics on your phone entirely.

The ruling in United States v. Jeremy Travis Payne found that highway officers acted lawfully by using Payne’s thumbprint to unlock his phone after a drug bust."

Friday, April 26, 2024

Op-Ed: AI’s Most Pressing Ethics Problem; Columbia Journalism Review, April 23, 2024

Anika Collier Navaroli, Columbia Journalism Review; Op-Ed: AI’s Most Pressing Ethics Problem

"I believe that, now more than ever, it’s time for people to organize and demand that AI companies pause their advance toward deploying more powerful systems and work to fix the technology’s current failures. While it may seem like a far-fetched idea, in February, Google decided to suspend its AI chatbot after it was enveloped in a public scandal. And just last month, in the wake of reporting about a rise in scams using the cloned voices of loved ones to solicit ransom, OpenAI announced it would not be releasing its new AI voice generator, citing its “potential for synthetic voice misuse.”

But I believe that society can’t just rely on the promises of American tech companies that have a history of putting profits and power above people. That’s why I argue that Congress needs to create an agency to regulate the industry. In the realm of AI, this agency should address potential harms by prohibiting the use of synthetic data and by requiring companies to audit and clean the original training data being used by their systems.

AI is now an omnipresent part of our lives. If we pause to fix the mistakes of the past and create new ethical guidelines and guardrails, it doesn’t have to become an existential threat to our future."

The return of net neutrality; The Hill, April 25, 2024

Sylvan Lane, The Hill; The return of net neutrality

"The Federal Communications Commission (FCC) voted 3-2 along partisan lines to restore net neutrality rules, barring broadband providers from blocking, throttling or prioritizing internet traffic."

Thursday, April 25, 2024

How G.M. Tricked Millions of Drivers Into Being Spied On (Including Me); The New York Times, April 23, 2024

Kashmir Hill, The New York Times; How G.M. Tricked Millions of Drivers Into Being Spied On (Including Me)

"Automakers have been selling data about the driving behavior of millions of people to the insurance industry. In the case of General Motors, affected drivers weren’t informed, and the tracking led insurance companies to charge some of them more for premiums. I’m the reporter who broke the story. I recently discovered that I’m among the drivers who was spied on."

Wednesday, April 24, 2024

What happens now that Biden has signed the TikTok bill; Axios, April 24, 2024

Meta’s A.I. Assistant Is Fun to Use, but It Can’t Be Trusted; The New York Times, April 24, 2024

Brian X. Chen, The New York Times; Meta’s A.I. Assistant Is Fun to Use, but It Can’t Be Trusted

"“We believe Meta AI is now the most intelligent AI assistant that you can freely use,” Mark Zuckerberg, the company’s chief executive, wrote on Instagram on Thursday.

The new bot invites you to “ask Meta AI anything” — but my advice, after testing it for six days, is to approach it with caution. It makes lots of mistakes when you treat it as a search engine. For now, you can have some fun: Its image generator can be a clever way to express yourself when chatting with friends.

A Meta spokeswoman said that because the technology was new, it might not always return accurate responses, similar to other A.I. systems. There is currently no way to turn off Meta AI inside the apps.

Here’s what doesn’t work well — and what does — in Meta’s AI."

U.S. bans noncompete agreements for nearly all jobs; NPR, April 23, 2024

NPR; U.S. bans noncompete agreements for nearly all jobs

"The Federal Trade Commission narrowly voted Tuesday to ban nearly all noncompetes, employment agreements that typically prevent workers from joining competing businesses or launching ones of their own...

For more than a year, the group has vigorously opposed the ban, saying that noncompetes are vital to companies, by allowing them to better guard trade secrets, and employees, by giving employers greater incentive to invest in workforce training and development."

Tuesday, April 23, 2024

New Group Joins the Political Fight Over Disinformation Online; The New York Times, April 22, 2024

Steven Lee Myers, The New York Times; New Group Joins the Political Fight Over Disinformation Online

"Many of the nation’s most prominent researchers, facing lawsuits, subpoenas and physical threats, have pulled back.

“More and more researchers were getting swept up by this, and their institutions weren’t either allowing them to respond or responding in a way that really just was not rising to meet the moment,” Ms. Jankowicz said in an interview. “And the problem with that, obviously, is that if we don’t push back on these campaigns, then that’s the prevailing narrative.”

That narrative is prevailing at a time when social media companies have abandoned or cut back efforts to enforce their own policies against certain types of content.

Many experts have warned that the problem of false or misleading content is only going to increase with the advent of artificial intelligence.

“Disinformation will remain an issue as long as the strategic gains of engaging in it, promoting it and profiting from it outweigh consequences for spreading it,” Common Cause, the nonpartisan public interest group, wrote in a report published last week that warned of a new wave of disinformation around this year’s vote."

Monday, April 15, 2024

CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation; Carnegie Mellon University, April 11, 2024

 Kelly Saavedra, Carnegie Mellon University; CMU Joins $110M U.S.-Japan Partnership To Accelerate AI Innovation

"Carnegie Mellon University and Keio University have announced they will join forces with one another and with industry partners to boost AI-focused research and workforce development in the United States and Japan. The partnership is one of two new university partnerships between the two countries in the area of artificial intelligence announced in Washington, D.C., April 9 at an event hosted by U.S. Secretary of Commerce Gina Raimondo.

The collaboration joins two universities with outstanding AI programs and forward-looking leaders with leading technology companies committed to providing funding and resources aimed at solving real-world problems. 

CMU President Farnam Jahanian was in Washington, D.C., for the signing ceremony held in the Department of Commerce's Research Library, during which the University of Washington and the University of Tsukuba agreed to a similar collaboration."

Wednesday, April 10, 2024

'Magical Overthinking' author says information overload can stoke irrational thoughts; NPR, Fresh Air, April 9, 2024

NPR, Fresh Air; 'Magical Overthinking' author says information overload can stoke irrational thoughts

"How is it that we are living in the information age — and yet life seems to make less sense than ever? That's the question author and podcast host Amanda Montell set out to answer in her new book, The Age of Magical Overthinking. 

Montell says that our brains are overloaded with a constant stream of information that stokes our innate tendency to believe conspiracy theories and mysticism...

Montell, who co-hosts the podcast Sounds Like A Cult, says this cognitive bias is what allows misinformation and disinformation to spread so easily, particularly online. It also helps explain our tendency to make assumptions about celebrities we admire...

Montell says that in an age of overwhelming access to information, it's important to step away from electronic devices. "We are meant for a physical world. That's what our brains are wired for," she says. "These devices are addictive, but I find that my nervous system really thanks me when I'm able to do that.""

Tuesday, April 9, 2024

Revealed: a California city is training AI to spot homeless encampments; The Guardian, March 25, 2024

Todd Feathers, The Guardian; Revealed: a California city is training AI to spot homeless encampments

"For the last several months, a city at the heart of Silicon Valley has been training artificial intelligence to recognize tents and cars with people living inside in what experts believe is the first experiment of its kind in the United States.

Last July, San Jose issued an open invitation to technology companies to mount cameras on a municipal vehicle that began periodically driving through the city’s district 10 in December, collecting footage of the streets and public spaces. The images are fed into computer vision software and used to train the companies’ algorithms to detect the unwanted objects, according to interviews and documents the Guardian obtained through public records requests.

Some of the capabilities the pilot project is pursuing – such as identifying potholes and cars parked in bus lanes – are already in place in other cities. But San Jose’s foray into automated surveillance of homelessness is the first of its kind in the country, according to city officials and national housing advocates. Local outreach workers, who were previously not aware of the experiment, worry the technology will be used to punish and push out San Jose’s unhoused residents."

Supreme Court Justices Apply New Ethics Code Differently; Newsweek, April 9, 2024

Newsweek; Supreme Court Justices Apply New Ethics Code Differently

"Supreme Court justices are divided along political lines over whether or not to explain their recusals, and legal experts are very concerned."

Saturday, April 6, 2024

Where AI and property law intersect; Arizona State University (ASU) News, April 5, 2024

Dolores Tropiano, Arizona State University (ASU) News; Where AI and property law intersect

"Artificial intelligence is a powerful tool that has the potential to be used to revolutionize education, creativity, everyday life and more.

But as society begins to harness this technology and its many uses — especially in the field of generative AI — there are growing ethical and copyright concerns for both the creative industry and legal sector.

Tyson Winarski is a professor of practice with the Intellectual Property Law program in Arizona State University’s Sandra Day O’Connor College of Law. He teaches an AI and intellectual property module within the course Artificial Intelligence: Law, Ethics and Policy, taught by ASU Law Professor Gary Marchant.

“The course is extremely important for attorneys and law students,” Winarski said. “Generative AI is presenting huge issues in the area of intellectual property rights and copyrights, and we do not have definitive answers as Congress and the courts have not spoken on the issue yet.”"

Friday, April 5, 2024

2024 may be the year online disinformation finally gets the better of us; Politico.eu, March 25, 2024

Seb Butcher, Politico.eu; 2024 may be the year online disinformation finally gets the better of us

"Never before have AI-powered tools been more sophisticated, widespread and accessible to the public.

Generative AI, in its broadest sense, refers to deep learning models that can generate sophisticated text, video, audio, images and other content based on the data they were trained on. And the recent introduction of these tools into the mainstream — including language models and image creators — has made the creation of fake or misleading content incredibly easy, even for those with the most basic tech skills.

We have now entered a new technological era that will change our lives forever — hopefully for the better. But despite the widespread public awe of its capabilities, we must also be aware that this powerful technology has the potential to do incredible damage if mismanaged and abused.

For bad actors, generative AI has supercharged the disinformation and propaganda playbook. False and deceptive content can now be effortlessly produced by these tools, either for free or at low cost, and deployed on a mass scale online. Increasingly, the online ecosystem, which is the source of most of our news and information, is being flooded with fabricated content that’s becoming difficult to distinguish from reality."