
Friday, June 7, 2024

Angry Instagram posts won’t stop Meta AI from using your content; Popular Science, June 5, 2024

 Mack DeGeurin, Popular Science; Angry Instagram posts won’t stop Meta AI from using your content

"Meta, the Mark Zuckerberg-owned tech giant behind Instagram, surprised many of the app’s estimated 1.2 billion global users with a shock revelation last month. Images, including original artwork and other creative assets uploaded to the company’s platforms, are now being used to train the company’s AI image generator. That admission, initially made public by Meta executive Chris Cox during an interview with Bloomberg last month, has elicited a fierce backlash from some creators. As of writing, more than 130,000 Instagram users have reshared a message on Instagram telling the company they do not consent to it using their data to train Meta AI. Those pleas, however, are founded on a fundamental misunderstanding of creators’ relationship with extractive social media platforms. These creators already gave away their work, whether they realize it or not."

Thursday, July 13, 2023

Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law; Forbes, July 10, 2023

Dr. Lance B. Eliot, Forbes; Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law

"What amount of due diligence is needed or required on the part of the user when it comes to generative AI and robots?

Nobody can as yet say for sure. Until we end up with legal cases and issues involving presumed harm, this is a gray area. For lawyers that want to get involved in AI and law, these are going to be an exciting and emerging set of legal challenges and legal puzzles that will undoubtedly arise as the use of generative AI becomes further ubiquitous and the advent of robots becomes affordable and practical in our daily lives.

You might also find of interest that some of the AI makers have contractual or licensing clauses that if you are using their generative AI and they get sued for something you did as a result of using their generative AI, you indemnify the AI maker and pledge to pay for their costs and expenses to fight the lawsuit, see my analysis at the link here. This could be daunting for you. Suppose that the house you were cooking in burns to the ground. The insurer sues the AI maker claiming that their generative AI was at fault. But, you agreed whether you know it or not to the indemnification clause, thus the AI maker comes to you and says you need to pay for their defense.

Ouch."

Tuesday, July 21, 2020

Reforming Digital Lending Libraries and the End of the Internet Archive; Jurist, July 20, 2020

Jurist; Reforming Digital Lending Libraries and the End of the Internet Archive

"The lack of certainty relating to the legality of CDL as fair use is hampering its growth by creating a chilling effect. Libraries are under the fear of costly litigations. IA itself is under the risk of bankruptcy, as the publishers are not inclined to take back their suit, even after IA stopped ELP. This is the very problem section 108 intended to resolve. Hence, it is pertinent that the section is amended to meet the needs of the digital age and provide certainty in this regard. Some countries have already moved in this direction. While Canada has permitted a limited right to provide digitized copies to patrons of other libraries, the EU has been considering proposals to allow digitization of cultural heritage institutions, including libraries."

Thursday, July 16, 2020

YouTube’s algorithms could be harming users looking for health information; Fast Company, July 15, 2020

Anjana Susarla, Fast Company; YouTube’s algorithms could be harming users looking for health information

"A significant fraction of the U.S. population is estimated to have limited health literacy, or the capacity to obtain, process, and understand basic health information, such as the ability to read and comprehend prescription bottles, appointment slips, or discharge instructions from health clinics.
Studies of health literacy, such as the National Assessment of Adult Literacy conducted in 2003, estimated that only 12% of adults had proficient health literacy skills. This has been corroborated in subsequent studies.
I’m a professor of information systems, and my own research has examined how social media platforms such as YouTube widen such health literacy disparities by steering users toward questionable content."

Sunday, April 5, 2020

Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech; ZDNet, April 1, 2020

ZDNet; Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech

The transformative potential of algorithms means that developers are now expected to think about the ethics of technology -- and that wasn't part of the job description.

"Crucially, most guidelines also insist that thought be given to the ethical implications of the technology from the very first stage of conceptualising a new tool, and all the way through its implementation and commercialisation. 

This principle of 'ethics by design' goes hand in hand with that of responsibility and can be translated, roughly, as: 'coders be warned'. In other words, it's now on developers and their teams to make sure that their program doesn't harm users. And the only way to make sure it doesn't is to make the AI ethical from day one.

The trouble with the concept of ethics by design, is that tech wasn't necessarily designed for ethics."

Thursday, February 13, 2020

Copyright could be the next way for Congress to take on Big Tech; The Verge, February 13, 2020

The Verge; Copyright could be the next way for Congress to take on Big Tech

"By the end of the year, Tillis — who chairs the Senate’s intellectual property subcommittee — plans to draft changes to the DMCA. He and co-chair Sen. Chris Coons (D-DE) kicked off the process this week with an introductory hearing, speaking to eight legal experts and former congressional staffers. The hearing helped set the stage to re-fight some long-running battles over the balance between protecting copyrighted content and keeping the internet open — but at a time where internet companies are already facing a large-scale backlash.

The 1998 DMCA attempted to outline how copyright should work on the then-nascent internet, where you could almost freely and infinitely copy a piece of media. But it’s been widely criticized by people with very different stances on intellectual property."

Friday, January 31, 2020

Users Lament PAIR Changes During USPTO Forum; IP Watchdog, January 30, 2020

Eileen McDermott, IP Watchdog; Users Lament PAIR Changes During USPTO Forum

"Jamie Holcombe, Chief Information Officer at the U.S. Patent and Trademark Office (USPTO), seemed surprised to learn on Wednesday that both the Public and Private versions of the USPTO’s Patent Application Information Retrieval (PAIR) System have serious issues that are making workflows untenable for users.

Holcombe was participating in a public Forum on the PAIR system, where USPTO staff listened to stakeholders’ experiences since the Office implemented major security changes to the system on November 15, 2019. “The USPTO disabled the ability to look up public cases outside of a customer number using Private PAIR,” explained Shawn Lillemo, Software Product Manager at Harrity LLP, who attended the Forum. “Most patent professionals prior to the change could retrieve all the PAIR information they needed from Private PAIR. That is no longer true.”"

Tuesday, April 23, 2019

What the EU’s copyright overhaul means — and what might change for big tech; NiemanLab, Nieman Foundation at Harvard, April 22, 2019

Marcello Rossi, NiemanLab, Nieman Foundation at Harvard; What the EU’s copyright overhaul means — and what might change for big tech

"The activity indeed now moves to the member states. Each of the 28 countries in the EU now has two years to transpose it into its own national laws. Until we see how those laws shake out, especially in countries with struggles over press and internet freedom, both sides of the debate will likely have plenty of room to continue arguing their sides — that it marks a groundbreaking step toward a more balanced, fair internet, or that it will result in a set of legal ambiguities that threaten the freedom of the web."

Tuesday, March 19, 2019

Myspace loses all content uploaded before 2016; The Guardian, March 18, 2019

Alex Hern, The Guardian; Myspace loses all content uploaded before 2016 

Faulty server migration blamed for mass deletion of songs, photos and video

"Myspace, the once mighty social network, has lost every single piece of content uploaded to its site before 2016, including millions of songs, photos and videos with no other home on the internet.
 
The company is blaming a faulty server migration for the mass deletion, which appears to have happened more than a year ago, when the first reports appeared of users unable to access older content. The company has confirmed to online archivists that music has been lost permanently, dashing hopes that a backup could be used to permanently protect the collection for future generations...

Some have questioned how the embattled company, which was purchased by Time Inc in 2016, could make such a blunder."

Sunday, January 6, 2019

Our privacy regime is broken. Congress needs to create new norms for a digital age.; The Washington Post, January 5, 2019

Saturday, December 1, 2018

It’s Almost 2019. Do You Know Where Your Photos Are?; The New York Times, November 29, 2018

John Herrman, The New York Times; It’s Almost 2019. Do You Know Where Your Photos Are?

"Jason Scott is a founder of Archive Team, a loose network of archivists and programmers that creates tools for extracting data from services that are at risk of disappearing. Flickr has given users options to export everything from the site; the Archive Team is working on alternatives, just in case.

“The sad thing about the tech industry is they built everything on subsidized lies: ‘This is going to cost you nothing and you’re going to get amazing things,’” Mr. Scott said. It’s not as easy to imagine a future without Google as it might have been to imagine a future without Zing, or even Yahoo. But it shouldn’t be hard.

“It’s 100 percent like Flickr,” Mr. Scott said. “Tech companies are still selling a lot of very neophyte people a lot of problematic lies about things that matter a lot to them.”"

Thursday, July 19, 2018

“I Was Devastated”: Tim Berners-Lee, the Man Who Created the World Wide Web, Has Some Regrets; Vanity Fair, July 1, 2018

Katrina Brooker, Vanity Fair; “I Was Devastated”: Tim Berners-Lee, the Man Who Created the World Wide Web, Has Some Regrets


"For now, chastened by bad press and public outrage, tech behemoths and other corporations say they are willing to make changes to ensure privacy and protect their users. “I’m committed to getting this right,” Facebook’s Zuckerberg told Congress in April. Google recently rolled out new privacy features to Gmail which would allow users to control how their messages get forwarded, copied, downloaded, or printed. And as revelations of spying, manipulation, and other abuses emerge, more governments are pushing for change. Last year the European Union fined Google $2.7 billion for manipulating online shopping markets. This year new regulations will require it and other tech companies to ask for users’ consent for their data. In the U.S., Congress and regulators are mulling ways to check the powers of Facebook and others.

But laws written now don’t anticipate future technologies. Nor do lawmakers—many badgered by corporate lobbyists—always choose to protect individual rights. In December, lobbyists for telecom companies pushed the Federal Communications Commission to roll back net-neutrality rules, which protect equal access to the Internet. In January, the U.S. Senate voted to advance a bill that would allow the National Security Agency to continue its mass online-surveillance program. Google’s lobbyists are now working to modify rules on how companies can gather and store biometric data, such as fingerprints, iris scans, and facial-recognition images."

Thursday, May 31, 2018

An American Alternative to Europe’s Privacy Law; The New York Times, May 30, 2018

Tim Wu, The New York Times; An American Alternative to Europe’s Privacy Law

"To be sure, a European-style regulatory system operates faster and has clearer rules than an American-style common-law approach. But the European approach runs the risk of being insensitive to context and may not match our ethical intuitions in individual cases. If the past decade of technology has taught us anything, it is that we face a complex and varied array of privacy problems. Case-by-case consideration might be the best way to find good solutions to many of them and, when the time comes (ifthe time comes), to guide the writing of general federal privacy legislation.

A defining fact of our existence today is that we share more of ourselves with Silicon Valley than with our accountants, lawyers and doctors. It is about time the law caught up with that."

Tuesday, May 1, 2018

Westworld Spoilers Club season 2, episode 2: Reunion; The Verge, April 30, 2018

Bryan Bishop, The Verge; Westworld Spoilers Club season 2, episode 2: Reunion

The second episode of the season drops subtle clues with big ramifications

[SPOILERS BELOW]

"...[O]n the matter of the true agenda of the parks themselves, the episode’s revelations raise questions that the show will almost certainly have to engage. For 30 years, Delos parks have been secretly gathering data on their guests. How is that data being used? Have guests been blackmailed, extorted, or otherwise had the records of their trips used against them as futuristic, Wild West kompromat? And what would the corporate consequences be if the existence of such a project was made public? Given that Bernard was not giving proper access to the drone host lab, it seems evident that only people at the highest levels are aware of the data collection initiative, with non-networked hosts used in the facility to help cut down on the chance of leaks.

Given all that, Peter Abernathy — and the data he’s carrying in his head — becomes much more than just a moving plot device. He is quite literally the future of Delos, Inc. itself. Should he fall into the wrong hands, with the data collection initiative made public, it could take down the entire company. It’s a timely storyline, coming right at the time that online services like Facebook are facing more public scrutiny than ever. And no doubt that’s exactly what Joy and Nolan are aiming for."

Wednesday, April 25, 2018

In global AI race, Europe pins hopes on ethics; Politico, April 25, 2018

Janosch Delcker, Politico; In global AI race, Europe pins hopes on ethics

"One of the central goals in the EU strategy is to provide customers with insight into the systems.

That could be easier said than done.

“Algorithmic transparency doesn’t mean [platforms] have to publish their algorithms,” Ansip said, “but ‘explainability’ is something we want to get.”

AI experts say that to achieve such explainability, companies will, indeed, have to disclose the codes they’re using – and more.

Virginia Dignum, an AI researcher at the Delft University of Technology, said “transparency of AI is more than just making the algorithm transparent,” adding that companies should also have to disclose details such as which data was used to train their algorithms, which data are used to make decisions, how this data was collected, or at which point in the process humans were involved in the decision-making process."
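
To make the kinds of disclosures Dignum describes more concrete, here is a minimal sketch of what a structured transparency record for an AI system could look like. The schema is entirely hypothetical: the class name AlgorithmDisclosure and the fields training_data_sources, decision_inputs, collection_methods, and human_oversight_points are illustrative inventions, not part of any actual regulation or standard.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class AlgorithmDisclosure:
    """Hypothetical transparency record covering the items Dignum lists."""
    system_name: str
    training_data_sources: list[str]   # which data was used to train the model
    decision_inputs: list[str]         # which data each individual decision uses
    collection_methods: list[str]      # how that data was collected
    human_oversight_points: list[str]  # where humans enter the decision-making process


# Illustrative example for an imaginary recommendation system.
disclosure = AlgorithmDisclosure(
    system_name="ExampleRecommender",
    training_data_sources=["logged viewing history", "public item metadata"],
    decision_inputs=["current session activity", "stated user preferences"],
    collection_methods=["in-app behavioral logging", "account sign-up form"],
    human_oversight_points=["manual review of flagged recommendations"],
)

print(json.dumps(asdict(disclosure), indent=2))
```

A record like this discloses how a system is trained and governed without publishing the algorithm itself, which is the distinction Ansip draws between "explainability" and forcing platforms to publish their algorithms.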

Tuesday, April 10, 2018

Zuckerberg tells Congress ‘we didn’t do enough’ to prevent privacy crises that rock Facebook; Washington Post, April 10, 2018

Craig Timberg, Tony Romm and Elizabeth Dwoskin, Washington Post; Zuckerberg tells Congress ‘we didn’t do enough’ to prevent privacy crises that rock Facebook

"Another pointed exchange took place when Sen. Richard Durbin (D-IL), asked Zuckerberg what hotel he stayed at Monday night and the names of anyone he messaged this week. Zuckerberg, appearing somewhat amused by the question, declined to answer.

Durbin shot back, “I think that may be what this is all about: your right to privacy, the limits of your right to privacy and how much you give away in modern America in the name of quote, 'connecting people around the world.' ”"

Full transcript: Apple CEO Tim Cook with Recode’s Kara Swisher and MSNBC’s Chris Hayes; Recode, April 6, 2018

Meghann Farnsworth, Recode; Full transcript: Apple CEO Tim Cook with Recode’s Kara Swisher and MSNBC’s Chris Hayes

"Recode’s Kara Swisher and MSNBC’s Chris Hayes interviewed Apple CEO Tim Cook in Chicago, IL. The interview was taped on Tuesday, March 27, and aired on Friday, April 6, 2018. Read the full transcript below.

The full video is not available online but you can listen to the full, uncut interview on Recode Decode, hosted by Kara Swisher. The audio is embedded below, or you can find the podcast on Apple Podcasts, Spotify, Pocket Casts, Overcast or wherever you listen to podcasts."

Thursday, April 5, 2018

Sorry, Facebook was never ‘free’; The New York Post, March 21, 2018

John Podhoretz, The New York Post; Sorry, Facebook was never ‘free’


[Kip Currier: On today's MSNBC Morning Joe show, The New York Post's John Podhoretz pontificated on the same provocative assertions that he wrote about in his March 21, 2018 opinion piece, excerpted below. It’s a post-Cambridge Analytica “Open Letter polemic” directed at anyone (or, to use Podhoretz’s term, any fool) who signed up for Facebook “back in the day” and who may now be concerned about how free social media sites like Facebook use—as well as how Facebook et al enable third parties to “harvest”, “scrape”, and leverage—people’s personal data.

Podhoretz’s argument is flawed on so many levels it’s challenging to know where to begin. (Full disclosure: As someone working in academia in a computing and information science school, who signed up for Facebook some years ago to see what all the “fuss” was about, I’ve never used my Facebook account because of ongoing privacy concerns about it. Much to the chagrin of some family and friends who have exhorted me, unsuccessfully, to use it.)

Certainly, there is some level of “ownership” that each of us needs to take when we sign up for a social media site or app by clicking on the Terms and Conditions and/or End User License Agreement (EULA). But it’s also common knowledge now (ridiculed by self-aware super-speed-talking advertisers in TV and radio ads!) that these agreements are written in legalese that doesn’t fully convey the scope and potential ramifications of these agreements’ terms and conditions. (Aside: For a clever satirical take on the purposeful impenetrability and abstruseness of these lawyer-crafted agreements, see R. Sikoryak’s 2017 graphic novel Terms and Conditions, which visually lampoons an Apple iTunes user contract.)

Over the course of decades, for example, in the wake of the Tuskegee Syphilis experiments and other medical research abuses and controversies, medical research practitioners were legally coerced to come to terms with the fact that laws, ethics, and policies about “informed consent” needed to evolve to better inform and protect “human subjects” (translation: you and me).

A similar argument can be made regarding Facebook and its social media kin: namely, that tech companies and app developers need to voluntarily adopt (or be required to adopt) HIPAA-esque protections and promote more “informed” consumer awareness.

We also need more computer science ethics training and education for undergraduates, as well as more widespread digital citizenship education in K-12 settings, to ensure a level playing field of digital life awareness. (Hint, hint, Education Secretary Betsy DeVos or First Lady Melania Trump…here’s a mission-critical cause for your patronage.)

Podhoretz’s simplistic Facebook user-as-deplorable-fool rant puts all of the blame on users, while negating any responsibility on the part of bait-and-switch tech companies like Facebook and data-sticky-fingered accomplices like Cambridge Analytica. “Free” doesn’t mean tech companies and app designers should be free from enhanced and reasonable informed consent responsibilities they owe to their users. Expecting or allowing anything less would be foolish.]


"The science fiction writer Robert A. Heinlein said it best: “There ain’t no such thing as a free lunch.” Everything has a cost. If you forgot that, or refused to see it in your relationship with Facebook, or believe any of these things, sorry, you are a fool. So the politicians and pundits who are working to soak your outrage for their own ideological purposes are gulling you. But of course you knew.

You just didn’t care . . . until you cared. Until, that is, you decided this was a convenient way of explaining away the victory of Donald Trump in the 2016 election.

You’re so invested in the idea that Trump stole the election, you are willing to believe anything other than that your candidate lost because she made a lousy argument and ran a lousy campaign and didn’t know how to run a race that would put her over the top in the Electoral College — which is how you prevail in a presidential election and has been for 220-plus years.

The rage and anger against Facebook over the past week provide just the latest examples of the self-infantilization and flight from responsibility on the part of the American people and the refusal of Trump haters and American liberals to accept the results of 2016.

Honestly, it’s time to stop being fools and start owning up to our role in all this."