Showing posts with label Big Tech. Show all posts

Sunday, January 9, 2022

Artificial intelligence author kicks off Friends of the Library nonfiction lecture series; Naples Daily News, January 7, 2022

Vicky Bowles, Naples Daily News; Artificial intelligence author kicks off Friends of the Library nonfiction lecture series

"Over the past few decades, a bunch of smart guys built artificial intelligence systems that have had deep impact on our everyday lives. But do they — and their billion-dollar companies — have the human intelligence to keep artificial intelligence safe and ethical?

Questions like this are part of the history and overview of artificial intelligence in Cade Metz’s book “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.”

On Monday, Jan. 17, Metz, a technology correspondent for The New York Times and former senior writer for Wired magazine, is the first speaker in the 2022 Nonfiction Author Series, sponsored by the nonprofit Friends of the Library of Collier County, which raises money for public library programs and resources...

"NDN: This was such a wonderful sentence early on in your book: “As an undergraduate at Harvard (in the 1940s), using over three thousand vacuum tubes and a few parts from an old B-52 bomber, (Marvin) Minsky built what may have been the first neural network.” Is that kind of amateur, garage-built science still possible, given the speed of innovation now and the billions of dollars that are thrown at development?

CM: It certainly is. It happens all the time, inside universities and out. But in the AI field, this has been eclipsed by the work at giant companies like Google and Facebook. That is one of the major threads in my book: academia struggling to keep up with the rapid rate of progress in the tech industry. It is a real problem. So much of the talent is moving into industry, leaving the cupboard bare at universities. Who will teach the next generation? Who will keep the big tech companies in check? 

NDN: I was amused to see that Google and DeepMind built a team “dedicated to what they called ‘AI safety,’ an effort to ensure that the lab’s technologies did no harm.” My question is, who defines harm within this race to monetize new technologies? Isn’t, for example, the staggering amount of electrical power used to run these systems harmful to the globe?

CM: I am glad you were amused. These companies say we should trust them to ensure AI "safety" and "ethics," but the reality is that safety and ethics are in the eye of the beholder. They can shape these terms to mean whatever they like. Many of the AI researchers at the heart of my book are genuinely concerned about how AI will be misused — how it will cause harm — but when they get inside these large companies, they find that their views clash with the economic aims of these tech giants."

Friday, December 31, 2021

Americans widely distrust Facebook, TikTok and Instagram with their data, poll finds; The Washington Post, December 22, 2021

 

The Washington Post; Americans widely distrust Facebook, TikTok and Instagram with their data, poll finds

"According to the survey, 72 percent of Internet users trust Facebook “not much” or “not at all” to responsibly handle their personal information and data on their Internet activity. About 6 in 10 distrust TikTok and Instagram, while slight majorities distrust WhatsApp and YouTube. Google, Apple and Microsoft receive mixed marks for trust, while Amazon is slightly positive with 53 percent trusting the company at least “a good amount.” (Amazon founder Jeff Bezos owns The Washington Post.)

Only 10 percent say Facebook has a positive impact on society, while 56 percent say it has a negative impact and 33 percent say its impact is neither positive nor negative. Even among those who use Facebook daily, more than three times as many say the social network has a negative rather than a positive impact."

Friday, May 21, 2021

Privacy activists are winning fights with tech giants. Why does victory feel hollow?; The Guardian, May 15, 2021

The Guardian; Privacy activists are winning fights with tech giants. Why does victory feel hollow?

"Something similar is likely to happen in other domains marked by recent moral panics over digital technologies. The tech industry will address mounting public anxieties over fake news and digital addiction by doubling down on what I call “solutionism”, with digital platforms mobilizing new technologies to offer their users a bespoke, secure and completely controllable experience...

What we want is something genuinely new: an institution that will know what parts of existing laws and regulations to suspend – like the library does with intellectual property law, for example – in order to fully leverage the potential inherent in digital technologies in the name of great public good."

Friday, April 16, 2021

Big Tech’s guide to talking about AI ethics; Wired, April 13, 2021

Wired; Big Tech’s guide to talking about AI ethics

"AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.

The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.

Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in...

diversity, equity, and inclusion (ph) - The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them...

ethics board (ph) - A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).

ethics principles (ph) - A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI."

Friday, February 21, 2020

Your DNA is a valuable asset, so why give it to ancestry websites for free?; The Guardian, February 16, 2020

Your DNA is a valuable asset, so why give it to ancestry websites for free?

"The announcement by 23andMe, a company that sells home DNA testing kits, that it has sold the rights to a promising new anti-inflammatory drug to a Spanish pharmaceutical company is cause for celebration. The collected health data of 23andMe’s millions of customers have potentially produced a medical advance – the first of its kind. But a few weeks later the same company announced that it was laying off workers amid a shrinking market that its CEO put down to the public’s concerns about privacy.

These two developments are linked, because the most intimate data we can provide about ourselves – our genetic make-up – is already being harvested for ends we aren’t aware of and can’t always control. Some of them, such as better medicines, are desirable, but some of them should worry us...

These are the privacy concerns that may be behind layoffs, not only at 23andMe, but also at other DTC companies, and that we need to resolve urgently to avoid the pitfalls of genetic testing while realising its undoubted promise. In the meantime, we should all start reading the small print."

Thursday, February 13, 2020

Copyright could be the next way for Congress to take on Big Tech; The Verge, February 13, 2020

The Verge; Copyright could be the next way for Congress to take on Big Tech

"By the end of the year, Tillis — who chairs the Senate’s intellectual property subcommittee — plans to draft changes to the DMCA. He and co-chair Sen. Chris Coons (D-DE) kicked off the process this week with an introductory hearing, speaking to eight legal experts and former congressional staffers. The hearing helped set the stage to re-fight some long-running battles over the balance between protecting copyrighted content and keeping the internet open — but at a time where internet companies are already facing a large-scale backlash.

The 1998 DMCA attempted to outline how copyright should work on the then-nascent internet, where you could almost freely and infinitely copy a piece of media. But it’s been widely criticized by people with very different stances on intellectual property."

Monday, January 20, 2020

A Practical Guide for Building Ethical Tech; Wired, January 20, 2020

Zvika Krieger, Wired; A Practical Guide for Building Ethical Tech

Companies are hiring "chief ethics officers," hoping to regain public trust. The World Economic Forum's head of technology policy has a few words of advice.

""Techlash," the rising public animosity toward big tech companies and their impacts on society, will continue to define the state of the tech world in 2020. Government leaders, historically the stewards of protecting society from the impacts of new innovations, are becoming exasperated at the inability of traditional policymaking to keep up with the unprecedented speed and scale of technological change. In that governance vacuum, corporate leaders are recognizing a growing crisis of trust with the public. Rising consumer demands and employee activism require more aggressive self-regulation.

In response, some companies are creating new offices or executive positions, such as a chief ethics officer, focused on ensuring that ethical considerations are integrated across product development and deployment. Over the past year, the World Economic Forum has convened these new “ethics executives” from over 40 technology companies from across the world to discuss shared challenges of implementing such a far-reaching and nebulous mandate. These executives are working through some of the most contentious issues in the public eye, and ways to drive cultural change within organizations that pride themselves on their willingness to “move fast and break things.”"

Monday, November 4, 2019

Facebook and Twitter spread Trump’s lies, so we must break them up; The Guardian, November 3, 2019

Robert Reich, The Guardian; Facebook and Twitter spread Trump’s lies, so we must break them up 

"The reason 45% of Americans rely on Facebook for news and Trump’s tweets reach 66 million is because these platforms are near monopolies, dominating the information marketplace. No TV network, cable giant or newspaper even comes close. Fox News’ viewership rarely exceeds 3 million. The New York Times has 4.7 million subscribers.

Facebook and Twitter aren’t just participants in the information marketplace. They’re quickly becoming the information marketplace."

Thursday, October 24, 2019

The Black-and-White World of Big Tech; The New York Times, October 24, 2019

The New York Times; The Black-and-White World of Big Tech

Mark Zuckerberg presented us with an either-or choice of free speech — either we have it or we’re China. Tech leaders have a duty to admit it’s much more complicated.

"Mr. Zuckerberg presented us with an either-or choice of free speech — either we have free speech or we’re China. “Whether you like Facebook or not, I think we need to come together and stand for voice and free expression,” he said with an isn’t-this-obvious tone.

But, as anyone who has lived in the real world knows, it’s much more complex. And that was the main problem with his speech — and it’s also what is at the crux of the myriad concerns we have with tech these days: Big Tech’s leaders frame the debate in binary terms. 

Mr. Zuckerberg missed an opportunity to recognize that there has been a hidden and high price of all the dazzling tech of the last decade. In offering us a binary view for considering the impact of their inventions, many digital leaders avoid thinking harder about the costs of technological progress."

Tuesday, April 23, 2019

What the EU’s copyright overhaul means — and what might change for big tech; NiemanLab, Nieman Foundation at Harvard, April 22, 2019

Marcello Rossi, NiemanLab, Nieman Foundation at Harvard; What the EU’s copyright overhaul means — and what might change for big tech

"The activity indeed now moves to the member states. Each of the 28 countries in the EU now has two years to transpose it into its own national laws. Until we see how those laws shake out, especially in countries with struggles over press and internet freedom, both sides of the debate will likely have plenty of room to continue arguing their sides — that it marks a groundbreaking step toward a more balanced, fair internet, or that it will result in a set of legal ambiguities that threaten the freedom of the web."

Once upon a time in Silicon Valley: How Facebook's open-data nirvana fell apart; NBC News, April 19, 2019

David Ingram and Jason Abbruzzese, NBC News; Once upon a time in Silicon Valley: How Facebook's open-data nirvana fell apart

"Facebook’s missteps have raised awareness about the possible abuse of technology, and created momentum for digital privacy laws in Congress and in state legislatures.

“The surreptitious sharing with third parties because of some ‘gotcha’ in the terms of service is always going to upset people because it seems unfair,” said Michelle Richardson, director of the data and privacy project at the Center for Democracy & Technology.

After the past two years, she said, “you can just see the lightbulb going off over the public’s head.”"

Monday, April 8, 2019

Are big tech’s efforts to show it cares about data ethics another diversion?; The Guardian, April 7, 2019

John Naughton, The Guardian; Are big tech’s efforts to show it cares about data ethics another diversion?

"No less a source than Gartner, the technology analysis company, for example, has also sussed it and indeed has logged “data ethics” as one of its top 10 strategic trends for 2019...

Google’s half-baked “ethical” initiative is par for the tech course at the moment. Which is only to be expected, given that it’s not really about morality at all. What’s going on here is ethics theatre modelled on airport-security theatre – ie security measures that make people feel more secure without doing anything to actually improve their security.

The tech companies see their newfound piety about ethics as a way of persuading governments that they don’t really need the legal regulation that is coming their way. Nice try, boys (and they’re still mostly boys), but it won’t wash. 

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity."

Thursday, April 4, 2019

The Problem With AI Ethics; The Verge, April 3, 2019

James Vincent, The Verge; The Problem With AI Ethics

Is Big Tech’s embrace of AI ethics boards actually helping anyone?

"Part of the problem is that Silicon Valley is convinced that it can police itself, says Chowdhury.

“It’s just ingrained in the thinking there that, ‘We’re the good guys, we’re trying to help,’” she says. The cultural influences of libertarianism and cyberutopianism have made many engineers distrustful of government intervention. But now these companies have as much power as nation states without the checks and balances to match. “This is not about technology; this is about systems of democracy and governance,” says Chowdhury. “And when you have technologists, VCs, and business people thinking they know what democracy is, that is a problem.”

The solution many experts suggest is government regulation. It’s the only way to ensure real oversight and accountability. In a political climate where breaking up big tech companies has become a presidential platform, the timing seems right."

Tuesday, January 29, 2019

FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated; The Atlantic, January 29, 2019

Ian Bogost, The Atlantic; FaceTime Is Eroding Trust in Tech

Privacy paranoiacs have been totally vindicated.

"Trustworthy is hardly a word many people use to characterize big tech these days. Facebook’s careless infrastructure upended democracy. Abuse is so rampant on Twitter and Instagram that those services feel designed to deliver harassment rather than updates from friends. Hacks, leaks, and other breaches of privacy, at companies from Facebook to Equifax, have become so common that it’s hard to believe that any digital information is secure. The tech economy seems designed to steal and resell data."

New definition of privacy needed for the social media age; The San Francisco Chronicle, January 28, 2019

Jordan Cunningham, The San Francisco Chronicle; New definition of privacy needed for the social media age

"To bring about meaningful change, we need to fundamentally overhaul the way we define privacy in the social media age.

We need to stop looking at consumers’ data as a commodity and start seeing it as private information that belongs to individuals. We need to look at the impact of technology on young kids with developing brains. And we need to give consumers an easy way to ensure their privacy in homes filled with connected devices.

That’s why I’ve worked with a group of state lawmakers to create the “Your Data, Your Way” package of legislation."

The unnatural ethics of AI could be its undoing; The Outline, January 29, 2019

The Outline; The unnatural ethics of AI could be its undoing

"When I used to teach philosophy at universities, I always resented having to cover the Trolley Problem, which struck me as everything the subject should not be: presenting an extreme situation, wildly detached from most dilemmas the students would normally face, in which our agency is unrealistically restricted, and using it as some sort of ideal model for ethical reasoning (the first model of ethical reasoning that many students will come across, no less). Ethics should be about things like the power structures we enter into at work, what relationships we decide to pursue, who we are or want to become — not this fringe-case intuition-pump nonsense.

But maybe I’m wrong. Because, if we believe tech gurus at least, the Trolley Problem is about to become of huge real-world importance. Human beings might not find themselves in all that many Trolley Problem-style scenarios over the course of their lives, but soon we're going to start seeing self-driving cars on our streets, and they're going to have to make these judgments all the time. Self-driving cars are potentially going to find themselves in all sorts of accident scenarios where the AI controlling them has to decide which human lives it ought to preserve. But in practice what this means is that human beings will have to grapple with the Trolley Problem — since they're going to be responsible for programming the AIs...

I'm much more sympathetic to the “AI is bad” line. We have little reason to trust that big tech companies (i.e. the people responsible for developing this technology) are doing it to help us, given how wildly their interests diverge from our own."

Meet the data guardians taking on the tech giants; BBC, January 29, 2019

Matthew Wall, BBC; Meet the data guardians taking on the tech giants

"Ever since the world wide web went public in 1993, we have traded our personal data in return for free services from the tech giants. Now a growing number of start-ups think it's about time we took control of our own data and even started making money from it. But do we care enough to bother?"

Big tech firms still don’t care about your privacy; The Washington Post, January 28, 2019

Rob Pegoraro, The Washington Post; Big tech firms still don’t care about your privacy

"Today is Data Privacy Day. Please clap.

This is an actual holiday of sorts, recognized as such in 2007 by the Council of Europe to mark the anniversary of the 1981 opening of Europe’s Convention for the Protection of Individuals With Regard to Automatic Processing of Personal Data — the grandfather of such strict European privacy rules as the General Data Protection Regulation.

In the United States, Data Privacy Day has yet to win more official acknowledgment than a few congressional resolutions. It mainly serves as an opportunity for tech companies to publish blog posts about their commitment to helping customers understand their privacy choices.

But in a parallel universe, today might feature different headlines. Consider the following possibilities."

Tuesday, January 22, 2019

The AI Arms Race Means We Need AI Ethics; Forbes, January 22, 2019

Kasia Borowska, Forbes; The AI Arms Race Means We Need AI Ethics

"In an AI world, the currency is data. Consumers and citizens trade data for convenience and cheaper services. The likes of Facebook, Google, Amazon, Netflix and others process this data to make decisions that influence likes, the adverts we see, purchasing decisions or even who we vote for. There are questions to ask on the implications of everything we access, view or read being controlled by a few global elite. There are also major implications if small companies or emerging markets are unable to compete from being priced out of the data pool. This is why access to AI is so important: not only does it enable more positives from AI to come to the fore, but it also helps to prevent monopolies forming. Despite industry-led efforts, there are no internationally agreed ethical rules to regulate the AI market."

Wednesday, January 2, 2019

Wielding Rocks and Knives, Arizonans Attack Self-Driving Cars; The New York Times, December 31, 2018

Simon Romero, The New York Times; Wielding Rocks and Knives, Arizonans Attack Self-Driving Cars

“They said they need real-world examples, but I don’t want to be their real-world mistake,” said Mr. O’Polka, who runs his own company providing information technology to small businesses.

“They didn’t ask us if we wanted to be part of their beta test,” added his wife, who helps run the business.

At least 21 such attacks have been leveled at Waymo vans in Chandler, as first reported by The Arizona Republic. Some analysts say they expect more such behavior as the nation moves into a broader discussion about the potential for driverless cars to unleash colossal changes in American society. The debate touches on fears ranging from eliminating jobs for drivers to ceding control over mobility to autonomous vehicles.

“People are lashing out justifiably,” said Douglas Rushkoff, a media theorist at City University of New York and author of the book “Throwing Rocks at the Google Bus.” He likened driverless cars to robotic incarnations of scabs — workers who refuse to join strikes or who take the place of those on strike.

“There’s a growing sense that the giant corporations honing driverless technologies do not have our best interests at heart,” Mr. Rushkoff said. “Just think about the humans inside these vehicles, who are essentially training the artificial intelligence that will replace them.””