Showing posts with label new technologies. Show all posts

Tuesday, December 5, 2023

Teaching kids to spot fake news: media literacy to be required in California schools; The Guardian, December 5, 2023

The Guardian; Teaching kids to spot fake news: media literacy to be required in California schools

"California next year will become one of the few US states to teach students media literacy, a move experts say is imperative at a time when distrust in the media is at an all-time high and new technologies pose unprecedented challenges to identifying false information.

A state bill signed into law this fall mandates public schools to instruct media literacy, a set of skills that includes recognizing falsified data, identifying fake news and generating responsible internet content.

Researchers have long warned that the current digital ecosystem has had dire consequences on young people, and have argued that such instruction could make a difference. The US surgeon general has cited digital and media literacy support as one way to combat the youth mental health crisis spurred by social media. The American Psychological Association already has urged parents and schools to teach media literacy before they expose young people to social media platforms."

Friday, April 29, 2022

Facebook whistleblower kicks off William R. Rhodes ’57 Lecture Series on Ethics of Capitalism; News from Brown, April 27, 2022

 Lauren Borsa-Curran, News from Brown; Facebook whistleblower kicks off William R. Rhodes ’57 Lecture Series on Ethics of Capitalism

"Haugen wrapped up the evening likening the issues surrounding social media to those previous generations faced with the advent of communications tools such as the printing presses, radio, television and cinema.

“Every single time we've invented a new form of media we've realized our limitations,” she said. “It's our burden to fight for what's next. But the thing I want you to take away from this is we've done it every single time before. Humans are incredibly resilient.”"

LSU to Embed Ethics in the Development of New Technologies, Including AI; LSU Office of Research and Economic Development, April 2022

Elsa Hahne, LSU Office of Research and Economic Development; LSU to Embed Ethics in the Development of New Technologies, Including AI

"“If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them,” Goldgaber said. “Leaders don’t just do what they’re told—they make decisions with vision.”

The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls “ethics emergency services” when emerging capabilities have unintended consequences for specific groups of people.

“We can no longer rely on the traditional division of labor between STEM and the humanities, where it’s up to philosophers to worry about ethics,” Goldgaber said. “Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it’s not always right to ‘stay in your lane’ or ‘just do your job.’”

Wednesday, November 10, 2021

Thinking Through the Ethics of New Tech…Before There’s a Problem; Harvard Business Review, November 9, 2021

Beena Ammanath, Harvard Business Review; Thinking Through the Ethics of New Tech…Before There’s a Problem


"Appoint a Chief Tech Ethics Officer

The best methods to address the ethics of new technologies are not going to be one size fits all. A broad range of potential impacts may need to be examined and a varied collection of potential risks may have to be mitigated. But most organizations would likely benefit from placing a single individual in charge of these processes. This is why organizations should consider a chief ethics officer — or a chief technology ethics officer — who would have both the responsibility and the authority to marshal necessary resources.

Some industries have grappled with trust and ethics challenges for decades. Hospitals and research centers have long employed ethics officers to oversee questions in research projects and clinical medical practice, for instance. Technology can certainly raise new concerns, even here: Think of a medical school implementing a VR tool to help augment the competency of surgeons and the importance of examining whether the tool works equally well across race or gender. But the broader point is that trust and ethics issues can be managed effectively — as long as the proper leadership commitments are made.

With a chief technology ethics officer in place, it remains important to involve specialists from a number of different disciplines, as discussed previously. These people may come from the fields of anthropology, sociology, philosophy, and other areas. Depending on the issues presented by a specific technology or application, it may be necessary to seek out people who bring knowledge of law, politics, regulation, education, or media."

Tuesday, April 9, 2019

Real or artificial? Tech titans declare AI ethics concerns; AP, April 7, 2019

Matt O'Brien and Rachel Lerman, AP; Real or artificial? Tech titans declare AI ethics concerns

"The biggest tech companies want you to know that they’re taking special care to ensure that their use of artificial intelligence to sift through mountains of data, analyze faces or build virtual assistants doesn’t spill over to the dark side.

But their efforts to assuage concerns that their machines may be used for nefarious ends have not been universally embraced. Some skeptics see it as mere window dressing by corporations more interested in profit than what’s in society’s best interests.

“Ethical AI” has become a new corporate buzz phrase, slapped on internal review committees, fancy job titles, research projects and philanthropic initiatives. The moves are meant to address concerns over racial and gender bias emerging in facial recognition and other AI systems, as well as address anxieties about job losses to the technology and its use by law enforcement and the military.

But how much substance lies behind the increasingly public ethics campaigns? And who gets to decide which technological pursuits do no harm?"

Wednesday, January 2, 2019

Tech predictions for 2019: It gets worse before it gets better; The Washington Post, December 27, 2018

Geoffrey A. Fowler, The Washington Post; Tech predictions for 2019: It gets worse before it gets better

"2018 is a year the tech industry wishes it could forget. But 2018’s problems aren’t going anywhere.

It was the year we came to grips with how little we can trust Facebook and how much we’re addicted to our screens. It was the year that online hate and misinformation became an unavoidable reality and Google, Microsoft and Amazon faced revolts from their own employees over ethical lapses. It was the year Apple became the first trillion-dollar company — and then lost a quarter of that when we yawned at its new iPhones.

Even YouTube’s “Rewind 2018” video is already the most-disliked video in history.

When my Post colleagues and I looked into a crystal ball to make this list of nine intentionally provocative headlines we might see in 2019, it was hard to see past the problems we’re bringing with us into the new year.

New technologies like 5G networks, alternative transportation and artificial intelligence promise to change our lives. But even these carry lots of caveats in the near term.

I’m still optimistic technology can make our world better. So here’s a glass half-full of hope for the new year: 2019 is tech’s chance to make it right."

Monday, December 31, 2018

Question Technology; Kip Currier, Ethics in a Tangled Web, December 31, 2018


Kip Currier; Question Technology

Ars Technica’s Timothy B. Lee’s 12/30/18 article “The hype around driverless cars came crashing down in 2018” is a highly recommended overview of the annus horribilis that the ending year proved to be for the self-driving vehicle industry. Lee references the Gartner consulting group’s "hype cycle" for new innovations and technologies:

In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality. 

We’ve seen the hype cycle replayed over and over again throughout the World Wide Web age (and throughout recorded history), albeit with new players and new innovations. Sometimes the hype delivers. Sometimes it comes with an unexpected price tag and consequences. Social media was hyped by many through branding and slogans. It offers benefits; chief among them, expanded opportunities for communication and connection. But it also has significant weaknesses that can and have been exploited by actors foreign and domestic.

Since 2016, for example, we’ve acutely learned—and are still learning—how social media, such as Facebook, can be used to weaponize information, misinform citizenry, and subvert democracy. From Facebook’s “inflated expectations” Version 1.0 through multiple iterations of hype and rehype, to its 2018 “trough of disillusionment”—which may or may not represent its nadir—much of the public’s perception of Facebook appears to finally be aligning with a more realistic picture of the company’s technology, as well as its less than transparent and accountable leadership. Indeed, consider how many times this year, and in the preceding decade and a half, Planet Earth’s social media-using citizens have heard Facebook CEO Mark Zuckerberg essentially say some version of “Trust me. Trust Facebook. We’re going to fix this.” (See CNBC’s well-documented 12/19/18 piece “Mark Zuckerberg has been talking and apologizing about privacy since 2003 — here’s a reminder of what he’s said.”) Only for the public, like Charlie Brown, to have the proverbial football once again yanked away with seemingly never-ending revelations of deliberate omissions by Facebook leadership concerning users’ data collection and use.

To better grasp the impacts and lessons we can learn from recognition of the hype cycle, it’s useful to remind ourselves of some other near-recent examples of highly-hyped technologies:

In the past decade, many talked about "the death of the print book"—supplanted by the ebook—and the extinction of independent (i.e. non-Amazon) booksellers. Now, print books are thriving again and independent bookstores are making a gradual comeback in some communities. See the 11/3/18 Observer article "Are E-Books Finally Over? The Publishing Industry Unexpectedly Tilts Back to Print" and Vox’s 12/18/18 piece, “Instagram is helping save the indie bookstore”.

More recently, Massive Open Online Courses (MOOCs) were touted as the game-changer that would have higher education quaking in its ivory tower-climbing boots. See Thomas L. Friedman's 2013 New York Times Opinion piece "Revolution Hits the Universities"; five years later, in 2018, a MOOCs-driven revolution seems less inevitable, or perhaps even less desirable, than postulated when MOOCs had become all the rage in some quarters. Even a few months before Friedman’s article, his New York Times employer had declared 2012 “The Year of the MOOC”. In pertinent part from that article:


“I like to call this the year of disruption,” says Anant Agarwal, president of edX, “and the year is not over yet.”

MOOCs have been around for a few years as collaborative techie learning events, but this is the year everyone wants in. [Note to the author: you might just want to qualify and/or substantiate that hyperbolic assertion a bit about “everyone”!] Elite universities are partnering with Coursera at a furious pace. It now offers courses from 33 of the biggest names in postsecondary education, including Princeton, Brown, Columbia and Duke. In September, Google unleashed a MOOC-building online tool, and Stanford unveiled Class2Go with two courses.

Nick McKeown is teaching one of them, on computer networking, with Philip Levis (the one with a shock of magenta hair in the introductory video). Dr. McKeown sums up the energy of this grand experiment when he gushes, “We’re both very excited.” 

But read on, to the very next two sentences in the piece:

Casually draped over auditorium seats, the professors also acknowledge that they are not exactly sure how this MOOC stuff works.

“We are just going to see how this goes over the next few weeks,” says Dr. McKeown.

Yes, you read that right: 

“…they are not exactly sure how this MOOC stuff works.” And “‘We are just going to see how this goes over the next few weeks,’ says Dr. McKeown.”

Now, in 2018, who is even talking about MOOCs? Certainly, MOOCs are neither totally dead nor completely out of the education picture. But the fever-pitch exhortations around the First Coming of the MOOC have ebbed, as hype machines—and change consultants—have inevitably moved on to “the next bright shiny object”.

Technology has many good points, as well as bad points, and, shall we say, aspects that cause legitimate concern. It’s here to stay. I get that. Appreciating the many positive aspects of technology in our lives does not mean that we can’t and shouldn’t still ask questions about the adoption and use of technology. As a mentor of mine often points out, society frequently pushes people to make binary choices, to select either X or Y, when we may, rather, select X and Y. The phrase Question Authority was popularized in the boundary-changing 1960s. Its pedigree is murky and may actually trace back to ancient Greek society. That’s a topic for another piece by someone else. But the phrase, modified to Question Technology, can serve as an inspirational springboard for today.

Happily, 2018 also saw more and more calls for AI ethics, data ethics, ethics courses in computer science and other educational programs, and more permutations of ethics in technology. (And that’s not even getting at all the calls for ethics in government!) Arguably, 2018 was the year that ethics was writ large.

In sum, we need to remind ourselves to be wary of anyone or any entity touting that they know with absolute certainty what a new technology will or will not do today, a year from now, or 10+ years in the fast-moving future, particularly absent the provision of hard evidence to support such claims. Just because someone says it’s so doesn’t make it so. Or, that it should be so.

In this era of digitally-dispersed disinformation, misinformation, and “alternate facts”, we all need to remind ourselves to think critically, question pronouncements and projections, and verify the truthfulness of assertions with evidence-based analysis and bona fide facts.


The hype around driverless cars came crashing down in 2018; Ars Technica, December 30, 2018

Timothy B. Lee, Ars Technica; The hype around driverless cars came crashing down in 2018

"In the self-driving world, there's been a lot of discussion recently about the hype cycle, a model for new technologies that was developed by the Gartner consulting firm. In this model, new technologies reach a "peak of inflated expectations" (think the Internet circa 1999) before falling into a "trough of disillusionment." It's only after these initial overreactions—first too optimistic, then too pessimistic—that public perceptions start to line up with reality."

Monday, June 5, 2017

How a rigid fair-use standard would harm free speech and fundamentally undermine the Internet; Los Angeles Times, June 1, 2017

Art Neill, Los Angeles Times; How a rigid fair-use standard would harm free speech and fundamentally undermine the Internet

"In a recent Times op-ed article, Jonathan Taplin of the USC Annenberg Innovation Lab claimed that an “ambiguous” fair use definition is emboldening users of new technologies to challenge copyright infringement allegations, including takedown notices. He proposes rewriting fair use to limit reuses of audio or video clips to 30 seconds or less, a standard he mysteriously claims is “widely accepted.”

In fact, this is not a widely accepted standard, and weakening fair use in this way will not address copyright infringement concerns on the Internet. It would hurt the music, film and TV industries as much as it would hurt individual creators...

Fair use is inextricably linked to our 1st Amendment right to free speech. We are careful with fair use because it’s the primary way consumers, creators and innovators share new ideas. It’s a good thing, and it is worth protecting."