
Tuesday, November 12, 2024

BLUESKY SURGES WITH 700,000 NEW MEMBERS AS USERS FLEE X AFTER US ELECTION; CEO Today, November 12, 2024

CEO Today; BLUESKY SURGES WITH 700,000 NEW MEMBERS AS USERS FLEE X AFTER US ELECTION

"Bluesky Surges with 700,000 New Members as Users Flee X After US Election: A Social Media Revolution in the Making

In the wake of the US election, a quiet revolution has been unfolding in the world of social media. The platform Bluesky has seen a dramatic increase in user growth, with over 700,000 new members joining in just one week following the election results. This surge has propelled Bluesky’s user base to 14.5 million globally, up from 9 million in September. The platform’s meteoric rise is largely attributed to disillusioned social media users seeking a safer, more regulated alternative to X (formerly Twitter), especially after the platform underwent a radical transformation under Elon Musk's ownership and his association with US president-elect Donald Trump.

Bluesky, which originated as a project within Twitter before becoming an independent platform in 2022, has quickly become a refuge for those seeking a break from the rising tide of far-right activism, misinformation, and offensive content that has overtaken X in recent months. As X grapples with growing controversy and user dissatisfaction, Bluesky is capitalizing on the opportunity to position itself as a civil and balanced alternative...

The Growing Backlash Against X and Musk’s Vision

The rise of Bluesky is part of a broader trend of backlash against X since Elon Musk's acquisition of the platform. Under Musk’s leadership, X has shifted its focus, alienating a significant portion of its user base. In the aftermath of the US election, many have expressed concerns about the platform's increasing alignment with far-right political groups and its potential transformation into a propaganda tool for Trump and his supporters.

For example, a prominent critic of X, historian Ruth Ben-Ghiat, who had 250,000 followers on X, noted that she picked up 21,000 followers within her first day on Bluesky after moving to the platform. She shared her concerns about X's potential evolution into a far-right radicalization machine under Musk’s stewardship. Ben-Ghiat said, "After January, when X could be owned by a de facto member of the Trump administration, its functions as a Trump propaganda outlet and far-right radicalization machine could be accelerated."

This sentiment reflects the growing sense of unease among users about the political direction of X. As Musk’s political ties become clearer and his rhetoric becomes more controversial, users who once considered X a neutral platform for conversation now see it as a space increasingly hostile to their values. For many, Bluesky is emerging as the antidote to this growing disillusionment."

Tuesday, October 15, 2024

This threat hunter chases U.S. foes exploiting AI to sway the election; The Washington Post, October 13, 2024

The Washington Post; This threat hunter chases U.S. foes exploiting AI to sway the election

"Ben Nimmo, the principal threat investigator for the high-profile AI pioneer, had uncovered evidence that Russia, China and other countries were using its signature product, ChatGPT, to generate social media posts in an effort to sway political discourse online. Nimmo, who had only started at OpenAI in February, was taken aback when he saw that government officials had printed out his report, with key findings about the operations highlighted and tabbed.

That attention underscored Nimmo’s place at the vanguard in confronting the dramatic boost that artificial intelligence can provide to foreign adversaries’ disinformation operations. In 2016, Nimmo was one of the first researchers to identify how the Kremlin interfered in U.S. politics online. Now tech companies, government officials and other researchers are looking to him to ferret out foreign adversaries who are using OpenAI’s tools to stoke chaos in the tenuous weeks before Americans vote again on Nov. 5.

So far, the 52-year-old Englishman says Russia and other foreign actors are largely “experimenting” with AI, often in amateurish and bumbling campaigns that have limited reach with U.S. voters. But OpenAI and the U.S. government are bracing for Russia, Iran and other nations to become more effective with AI, and their best hope of parrying that is by exposing and blunting operations before they gain traction."

Sunday, September 29, 2024

Gavin Newsom vetoes sweeping AI safety bill, siding with Silicon Valley; Politico, September 29, 2024

Lara Korte and Jeremy B. White, Politico; Gavin Newsom vetoes sweeping AI safety bill, siding with Silicon Valley

"Gov. Gavin Newsom vetoed a sweeping California bill meant to impose safety vetting requirements for powerful AI models, siding with much of Silicon Valley and leading congressional Democrats in the most high-profile fight in the Legislature this year."

AI could be an existential threat to publishers – that’s why Mumsnet is fighting back; The Guardian, September 28, 2024

The Guardian; AI could be an existential threat to publishers – that’s why Mumsnet is fighting back

"After nearly 25 years as a founder of Mumsnet, I considered myself pretty unshockable when it came to the workings of big tech. But my jaw hit the floor last week when I read that Google was pushing to overhaul UK copyright law in a way that would allow it to freely mine other publishers’ content for commercial gain without compensation.

At Mumsnet, we’ve been on the sharp end of this practice, and have recently launched the first British legal action against the tech giant OpenAI. Earlier in the year, we became aware that it was scraping our content – presumably to train its large language model (LLM). Such scraping without permission is a breach of copyright laws and explicitly of our terms of use, so we approached OpenAI and suggested a licensing deal. After lengthy talks (and signing a non-disclosure agreement), it told us it wasn’t interested, saying it was after “less open” data sources...

If publishers wither and die because the AIs have hoovered up all their traffic, then who’s left to produce the content to feed the models? And let’s be honest – it’s not as if these tech giants can’t afford to properly compensate publishers. OpenAI is currently fundraising to the tune of $6.5bn, the single largest venture capital round of all time, valuing the enterprise at a cool $150bn. In fact, it has just been reported that the company is planning to change its structure and become a for-profit enterprise...

I’m not anti-AI. It plainly has the potential to advance human progress and improve our lives in myriad ways. We used it at Mumsnet to build MumsGPT, which uncovers and summarises what parents are thinking about – everything from beauty trends to supermarkets to politicians – and we licensed OpenAI’s API (application programming interface) to build it. Plus, we think there are some very good reasons why these AI models should ingest Mumsnet’s conversations to train their models. The 6bn-plus words on Mumsnet are a unique record of 24 years of female interaction about everything from global politics to relationships with in-laws. By contrast, most of the content on the web was written by and for men. AI models have misogyny baked in and we’d love to help counter their gender bias.
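The post above mentions licensing OpenAI's API to build MumsGPT. As a hedged illustration of that kind of summarization (not MumsGPT's actual code; the model name, prompts, and sample data are placeholders), the official openai Python package could be used along these lines:

    # Illustrative sketch only, not MumsGPT's actual implementation.
    # Assumes the official `openai` package and an OPENAI_API_KEY in the
    # environment; the model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def summarize_thread(posts: list[str]) -> str:
        """Summarize what a forum thread's participants are thinking about."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarize the main opinions and concerns "
                            "in this parenting forum thread."},
                {"role": "user", "content": "\n---\n".join(posts)},
            ],
        )
        return response.choices[0].message.content

    print(summarize_thread(["The new pram is overpriced.",
                            "Agreed, but safety ratings matter more."]))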

But Google’s proposal to change our laws would allow billion-dollar companies to waltz untrammelled over any notion of a fair value exchange in the name of rapid “development”. Everything that’s unique and brilliant about smaller publisher sites would be lost, and a handful of Silicon Valley giants would be left with even more control over the world’s content and commerce."

Saturday, September 28, 2024

Pulling Back the Silicon Curtain; The New York Times, September 10, 2024

Dennis Duncan, The New York Times; Pulling Back the Silicon Curtain

Review of NEXUS: A Brief History of Information Networks From the Stone Age to AI, by Yuval Noah Harari

"In a nutshell, Harari’s thesis is that the difference between democracies and dictatorships lies in how they handle information...

The meat of “Nexus” is essentially an extended policy brief on A.I.: What are its risks, and what can be done? (We don’t hear much about the potential benefits because, as Harari points out, “the entrepreneurs leading the A.I. revolution already bombard the public with enough rosy predictions about them.”) It has taken too long to get here, but once we arrive Harari offers a useful, well-informed primer.

The threats A.I. poses are not the ones that filmmakers visualize: Kubrick’s HAL trapping us in the airlock; a fascist RoboCop marching down the sidewalk. They are more insidious, harder to see coming, but potentially existential. They include the catastrophic polarizing of discourse when social media algorithms designed to monopolize our attention feed us extreme, hateful material. Or the outsourcing of human judgment — legal, financial or military decision-making — to an A.I. whose complexity becomes impenetrable to our own understanding.

Echoing Churchill, Harari warns of a “Silicon Curtain” descending between us and the algorithms we have created, shutting us out of our own conversations — how we want to act, or interact, or govern ourselves...

“When the tech giants set their hearts on designing better algorithms,” writes Harari, “they can usually do it.”...

Parts of “Nexus” are wise and bold. They remind us that democratic societies still have the facilities to prevent A.I.’s most dangerous excesses, and that it must not be left to tech companies and their billionaire owners to regulate themselves."

Wednesday, September 25, 2024

Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money.; The New York Times, September 25, 2024

Chris Hughes, The New York Times; Why Do People Like Elon Musk Love Donald Trump? It’s Not Just About Money.

"Mr. Trump appeals to some Silicon Valley elites because they identify with the man. To them, he is a fellow victim of the state, unjustly persecuted for his bold ideas. Practically, he is also the shield they need to escape accountability. Mr. Trump may threaten democratic norms and spread disinformation; he could even set off a recession, but he won’t challenge their ability to build the technology they like, no matter the social cost...

As much as they want to influence Mr. Trump’s policies, they also want to strike back at the Biden-Harris administration, which they believe has unfairly targeted their industry.

More than any other administration in the internet era, President Biden and Ms. Harris have pushed tech companies toward serving the public interest...

Last year, Mr. Andreessen, whose venture capital firm is heavily invested in crypto, wrote a widely discussed “manifesto” claiming that enemy voices of “bureaucracy, vetocracy, gerontocracy” are opposed to the “pursuit of technology, abundance and life.” In a barely concealed critique of the Biden-Harris administration, he argued that those who believe in carefully assessing the impact of new technologies before adopting them are “deeply immoral.”

Mark Zuckerberg Is Done With Politics; The New York Times, September 24, 2024

Theodore Schleifer, The New York Times; Mark Zuckerberg Is Done With Politics

"Instead of publicly engaging with Washington, Mr. Zuckerberg is repairing relationships with politicians behind the scenes. After the “Zuckerbucks” criticism, Mr. Zuckerberg hired Brian Baker, a prominent Republican strategist, to improve his positioning with right-wing media and Republican officials. In the lead-up to November’s election, Mr. Baker has emphasized to Mr. Trump and his top aides that Mr. Zuckerberg has no plans to make similar donations, a person familiar with the discussions said.

Mr. Zuckerberg has yet to forge a relationship with Vice President Kamala Harris. But over the summer, Mr. Zuckerberg had his first conversations with Mr. Trump since he left office, according to people familiar with the conversations."

Mark Zuckerberg Isn’t Done With Politics. His Politics Have Just Changed.; Mother Jones, September 24, 2024

 Tim Murphy, Mother Jones; Mark Zuckerberg Isn’t Done With Politics. His Politics Have Just Changed.

"On Tuesday, the New York Times reported that one of the world’s richest men had recently experienced a major epiphany. After bankrolling a political organization that supported immigration reform, espousing his support for social justice, and donating hundreds of millions of dollars to support local election workers during the 2020 election, “Mark Zuckerberg is done with politics.”

The Facebook founder and part-time Hawaiian feudal lord, according to the piece, “believed that both parties loathed technology and that trying to continue engaging with political causes would only draw further scrutiny to their company,” and felt burned by the criticism he has faced in recent years, on everything from the proliferation of disinformation on Facebook to his investment in election administration (which conservatives dismissively referred to as “Zuckerbucks”). He is mad, in other words, that people are mad at him, and it has made him rethink his entire theory of how the world works.

It’s an interesting piece, which identifies a real switch in how Zuckerberg—who along with his wife, Priscilla Chan, has made a non-binding pledge to give away a majority of his wealth by the end of his lifetime—thinks about his influence and his own ideology. But there’s a fallacy underpinning that headline: Zuckerberg isn’t done with politics. His politics have simply changed."

Tuesday, September 24, 2024

LinkedIn is training AI on you — unless you opt out with this setting; The Washington Post, September 23, 2024

The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting

"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”

Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."
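That non-retroactive caveat can be made concrete with a toy model. This is purely illustrative, not LinkedIn's actual system, and every name in it is invented:

    # Toy model of a non-retroactive opt-out. Purely illustrative, not
    # LinkedIn's actual system; all names are invented.
    class TrainingCorpus:
        def __init__(self):
            self.documents = []   # everything ever ingested stays here
            self.opted_in = {}    # user -> current flag

        def set_opt_in(self, user: str, value: bool) -> None:
            self.opted_in[user] = value

        def ingest(self, user: str, content: str) -> None:
            # Mirrors the article: users count as opted in until they act.
            if self.opted_in.get(user, True):
                self.documents.append((user, content))

    corpus = TrainingCorpus()
    corpus.ingest("alice", "post written before opting out")
    corpus.set_opt_in("alice", False)
    corpus.ingest("alice", "post written after opting out")
    print(corpus.documents)  # the first post remains: opting out is not retroactive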

Sunday, September 15, 2024

‘I quit my job as a content moderator. I can never go back to who I was before.’; The Washington Post, September 9, 2024

The Washington Post; ‘I quit my job as a content moderator. I can never go back to who I was before.’

"Alberto Cuadra worked as a content moderator at a video-streaming platform for just under a year, but he saw things he’ll never forget. He watched videos about murders and suicides, animal abuse and child abuse, sexual violence and teenage bullying — all so you didn’t have to. What shows up when you scroll through social media has been filtered through an army of tens of thousands of content moderators, who protect us at the risk of their own mental health.

Warning: The following illustrations contain references to disturbing content."

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both1,2."

Wednesday, August 28, 2024

Controversial California AI regulation bill finds unlikely ally in Elon Musk; The Mercury News, August 28, 2024

The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk

"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.

Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.

The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."

After a decade of free Alexa, Amazon now wants you to pay; The Washington Post, August 27, 2024

The Washington Post; After a decade of free Alexa, Amazon now wants you to pay

"There was a lot of optimism in the 2010s that digital assistants like Alexa, Apple’s Siri and Google Assistant would become a dominant way we interact with technology, and become as life-changing as smartphones have been.

Those predictions were mostly wrong. The digital assistants were dumber than the companies claimed, and it’s often annoying to speak commands rather than type on a keyboard or tap on a touch screen...

If you’re thinking there’s no chance you’d pay for an AI Alexa, you should see how many people subscribe to OpenAI’s ChatGPT...

The mania over AI is giving companies a new selling point to upcharge you. It’s now in your hands whether the promised features are worth it, or if you can’t stomach any more subscriptions."

Wednesday, August 7, 2024

It’s practically impossible to run a big AI company ethically; Vox, August 5, 2024

Sigal Samuel, Vox; It’s practically impossible to run a big AI company ethically

"Anthropic was supposed to be the good AI company. The ethical one. The safe one.

It was supposed to be different from OpenAI, the maker of ChatGPT. In fact, all of Anthropic’s founders once worked at OpenAI but quit in part because of differences over safety culture there, and moved to spin up their own company that would build AI more responsibly. 

Yet lately, Anthropic has been in the headlines for less noble reasons: It’s pushing back on a landmark California bill to regulate AI. It’s taking money from Google and Amazon in a way that’s drawing antitrust scrutiny. And it’s being accused of aggressively scraping data from websites without permission, harming their performance. 

What’s going on?

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI."

A booming industry of AI age scanners, aimed at children’s faces; The Washington Post, August 7, 2024

The Washington Post; A booming industry of AI age scanners, aimed at children’s faces

"Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine: Employees at Incode, a San Francisco firm that runs more than 100 million verifications a year, now internally track state bills and contact local officials to, as senior director of strategy Fernanda Sottil said, “understand where … our tech fits in.”

But while the systems are promoted for safeguarding kids, they can only work by inspecting everyone — surveying faces, driver’s licenses and other sensitive data in vast quantities. Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'”"
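Stamos's complaint is, at bottom, about data minimization: these systems ingest a face or a driver's license when the question needs a single yes-or-no answer. A toy sketch of the minimal-disclosure alternative (hypothetical, not any vendor's implementation) keeps only the assertion and discards the birth date it came from:

    # Toy sketch of minimal disclosure in age verification; hypothetical,
    # not any vendor's implementation. Only the boolean assertion is
    # kept; the birth date it was derived from is never stored.
    from datetime import date
    from typing import Optional

    def is_over_18(birth_date: date, today: Optional[date] = None) -> bool:
        today = today or date.today()
        # Subtract a year if this year's birthday hasn't happened yet.
        had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
        return today.year - birth_date.year - (0 if had_birthday else 1) >= 18

    assertion = {"over_18": is_over_18(date(2004, 6, 1))}  # store this...
    print(assertion)                                       # ...and nothing else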

Friday, July 26, 2024

In Hiroshima, a call for peaceful, ethical AI; Cisco, The Newsroom, July 18, 2024

Kevin Delaney, Cisco, The Newsroom; In Hiroshima, a call for peaceful, ethical AI

"“Artificial intelligence is a great tool with unlimited possibilities of application,” Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, said in an opening address at the AI Ethics for Peace conference in Hiroshima this month.

But Paglia was quick to add that AI’s great promise is fraught with potential dangers.

“AI can and must be guided so that its potential serves the good since the moment of its design,” he stressed. “This is our common responsibility.”

The two-day conference aimed to further the Rome Call for AI Ethics, a document first signed on February 28, 2020, at the Vatican. It promoted an ethical approach to artificial intelligence through shared responsibility among international organizations, governments, institutions and technology companies.

This month’s Hiroshima conference drew dozens of global religious, government, and technology leaders to a city that has transcended its dark past of tech-driven, atomic destruction to become a center for peace and cooperation.

The overarching goal in Hiroshima? To ensure that, unlike atomic energy, artificial intelligence is used only for peace and positive human advancement. And as an industry leader in AI innovation and its responsible use, Cisco was amply represented by Dave West, Cisco’s president for Asia Pacific, Japan, and Greater China (APJC)."

Thursday, July 18, 2024

The Future of Ethics in AI: A Global Conversation in Hiroshima; JewishLink, July 18, 2024

Rabbi Dr. Ari Berman, JewishLink; The Future of Ethics in AI: A Global Conversation in Hiroshima

"Last week, I had the honor of representing the Jewish people at the AI Ethics for Peace Conference in Hiroshima, Japan, a three day conversation of global faith, political and industry leaders. The conference was held to promote the necessity of ethical guidelines for the future of artificial intelligence. It was quite an experience.

During the conference, I found myself sitting down for lunch with a Japanese Shinto Priest, a Zen Buddhist monk and a leader of the Muslim community from Singapore. Our conversation could not have been more interesting. The developers who devised AI can rightfully boast of many accomplishments, and they can now count among them the unintended effect of bringing together people of diverse backgrounds who are deeply concerned about the future their creations will bring.

AI promises great potential benefits, including global access to education and healthcare, medical breakthroughs, and greater predictability that will lead to efficiencies and a better quality of life, at a level unimaginable just a few years ago. But it also poses threats to the future of humanity, including deepfakes, structural biases in algorithms, a breakdown of human connectivity, and the deterioration of personal privacy."

Saturday, July 6, 2024

THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY; SSRN, July 3, 2024

Daniel J. Solove, George Washington University Law School, and Woodrow Hartzog, Boston University School of Law and Stanford Law School Center for Internet and Society; THE GREAT SCRAPE: THE CLASH BETWEEN SCRAPING AND PRIVACY

"ABSTRACT

Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society.


Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around

these requirements are ignored.


Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others.


This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation."
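One principle on the authors' list, data minimization, translates directly into code: retain only the fields the stated purpose requires and drop personal identifiers before anything is stored. A hedged sketch, with field names invented for illustration:

    # Illustration of the data-minimization principle the Article says
    # scraping ignores: retain only purpose-specific fields and drop
    # personal identifiers before storage. Field names are invented.
    ALLOWED_FIELDS = {"post_text", "timestamp", "topic"}

    def minimize(record: dict) -> dict:
        """Return a scraped record stripped to the allowed fields."""
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    scraped = {
        "post_text": "Great hiking trail near the lake.",
        "timestamp": "2024-07-03T12:00:00Z",
        "topic": "outdoors",
        "author_name": "J. Smith",      # personal data: dropped
        "email": "jsmith@example.com",  # personal data: dropped
    }
    print(minimize(scraped))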

Tuesday, July 2, 2024

AI ETHICS FOR PEACE: WORLD RELIGIONS COMMIT TO THE ROME CALL; July 9 & 10, 2024

AI ETHICS FOR PEACE: WORLD RELIGIONS COMMIT TO THE ROME CALL

"An historic multi-faith event will take place in Hiroshima, Japan, on July 9th and 10th, 2024. Titled AI Ethics for Peace: World Religions commit to the Rome Call, this event holds profound significance as it convenes in Hiroshima, a city that stands as a powerful testament to the consequences of destructive technology and the enduring quest for peace. In this symbolic location, leaders of major world religions will gather to sign the Rome Call for AI Ethics, emphasizing the vital importance of guiding the development of artificial intelligence with ethical principles to ensure it serves the good of humanity.

The event is promoted by the Pontifical Academy of Life, Religions for Peace Japan, the United Arab Emirates’ Abu Dhabi Forum for Peace, and the Chief Rabbinate of Israel’s Commission for Interfaith Relations.

BACKGROUND

The Rome Call for AI Ethics was issued by the Pontifical Academy for Life and furthered by the RenAIssance Foundation in an effort to promote algorethics, i.e. an ethical development of artificial intelligence.

On February 28th, 2020, the Pontifical Academy for Life, together with Microsoft, IBM, the UN Food and Agriculture Organization (FAO) and the Italian Government – and in the presence of the President of the EU Parliament – signed this “Call for AI Ethics” in Rome.

The document aims to foster an ethical approach to Artificial Intelligence (AI) and to promote a sense of responsibility among organizations, governments, multinational technology companies, and institutions, in order to shape a future in which digital innovation and technological progress serve human genius and creativity, while preserving and respecting the dignity of each and every individual, as well as our planet’s.

Following the signing of the Rome Call by leaders of the three Abrahamic religions (Christianity, Islam and Judaism) in 2023, in the name of peaceful coexistence and shared values, the Hiroshima event reinforces the view that a multi-religious approach to vital questions such as AI ethics is the path to follow.

Religions play a crucial role in shaping a world in which the concept of development proceeds hand in hand with protecting the dignity of each individual human being and preserving the planet, our common home. Coming together to call for the development of an AI ethic is a step that all religious traditions must take."

Sunday, June 30, 2024

Tech companies battle content creators over use of copyrighted material to train AI models; The Canadian Press via CBC, June 30, 2024

Anja Karadeglija, The Canadian Press via CBC; Tech companies battle content creators over use of copyrighted material to train AI models

"Canadian creators and publishers want the government to do something about the unauthorized and usually unreported use of their content to train generative artificial intelligence systems.

But AI companies maintain that using the material to train their systems doesn't violate copyright, and say limiting its use would stymie the development of AI in Canada.

The two sides are making their cases in recently published submissions to a consultation on copyright and AI being undertaken by the federal government as it considers how Canada's copyright laws should address the emergence of generative AI systems like OpenAI's ChatGPT."