Showing posts with label algorithms. Show all posts

Thursday, March 14, 2024

The Dubious Ethics of “the World’s Most Ethical Companies”; The Nation, March 14, 2024

Jess McAllen, The Nation; The Dubious Ethics of “the World’s Most Ethical Companies”

"Ethisphere, a for-profit institution started in 2006, which describes itself as a tool to “accelerate ethical business,” has expanded significantly since its creation. In addition to producing its yearly list of the World’s Most Ethical Companies, Ethisphere hosts a podcast (Ethicast), a newsletter (“Ethisphere Insights”), and runs the Business Ethics Leadership Alliance, which allows members to access a “concierge” service where business executives can ponder ethical quandaries (“How do I measure and assess third-party risk?”). There is also a Global Ethics Summit, which occurs in tandem with the honoree gala. “It becomes evermore necessary to have a discussion about ethics,” the 2024 summit agenda reads, “and how we can maintain morality in our world.” Both nonprofits and for-profit companies can apply, with the exception of NGOs, government agencies, and nonprofit colleges and universities...

Some of this year’s honorees include...

The University of Pittsburgh Medical Center (UPMC) was awarded for the sixth time, despite being ordered to pay the federal government $8.5 million last year in a lawsuit over falsely billing federal programs, as well as jeopardizing patient health. In January of this year, a nurse filed a lawsuit against the hospital over UPMC’s unfair business practices."

Saturday, March 26, 2022

Online Copyright Piracy Debate Ramps Up Over Proposed Legal Fix; Bloomberg Law, March 23, 2022

Riddhi Setty, Bloomberg Law; Online Copyright Piracy Debate Ramps Up Over Proposed Legal Fix

"Sen. Patrick Leahy (D-Vt.) and Sen. Thom Tillis (R-N.C.), the leaders of the Senate Judiciary Committee’s Intellectual Property Subcommittee, recently proposed the SMART (Strengthening Measures to Advance Rights Technologies) Copyright Act of 2022, which aims to hold service providers accountable for fighting copyright theft... 

New Tools

The SMART Act proposes to create a new Section 514 of the Online Copyright Infringement Liability Limitation Act, with a new set of technical measures, called Designated Technical Measures or DTMs, which would be automated tools for identifying and protecting copyrighted works online. 

The Librarian of Congress would be responsible for designating DTMs. Failure to accommodate these technical measures would result in statutory damages for service providers, but wouldn’t threaten their safe harbor. The damages range from a minimum of $200 to $2.4 million per action of a copyright holder, according to the draft law.

Tillis and Leahy said in a fact sheet that the bill would require the agency to hire a chief technology adviser and chief economist and that the office would start a public process to assess which existing technologies should be made standard for public use.

Free Speech

One of the primary concerns about the bill is how it might impact free speech if it becomes law. 

The SMART Act doesn’t provide technical details about how the filters would be set or what percentage of uploaded material would be required to be a match to an underlying copyrighted work to be flagged.

“Algorithms are designed to be over-inclusive—when you’re designing them, you want them to catch as much as possible, and the problem is you can’t have a computer tell what is fair use and what is not,” said Rose. She anticipates that while the protective filters the Copyright Office would set up under this act would fix the problem for some, the collateral damage would be the free speech of possibly millions of internet users.
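The over-inclusiveness problem described above can be illustrated with a minimal sketch, assuming a simple fingerprint-overlap model; the function names and the 30% threshold are hypothetical, not from the bill, which specifies no matching details:

```python
# Toy sketch (illustrative only): a naive upload filter that flags a clip when
# enough of its fingerprint overlaps a copyrighted work. The filter sees only
# overlap, not context or purpose, so a short quotation used for commentary
# (likely fair use) is flagged exactly like wholesale copying.

def match_ratio(upload: set, work: set) -> float:
    """Fraction of the upload's fingerprint hashes also found in the work."""
    if not upload:
        return 0.0
    return len(upload & work) / len(upload)

def flag(upload: set, work: set, threshold: float = 0.3) -> bool:
    # Hypothetical threshold; the SMART Act draft specifies none.
    return match_ratio(upload, work) >= threshold

# A short review clip drawn entirely from a film: every hash matches,
# so the clip is flagged even though its use may well be fair.
film = set(range(1000))        # stand-in fingerprint of the full work
review_clip = set(range(40))   # brief excerpt; the commentary isn't modeled
print(flag(review_clip, film)) # flagged
```

The sketch shows why the threshold alone cannot encode fair use: purpose, transformation, and market effect are invisible to an overlap score.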

Joshua S. Lamel, executive director of a coalition of creators called Re:Create, said he didn’t think the Copyright Office could find the balance between taking down copyright infringing content and taking down content that is covered by fair use. “We as a society shouldn’t be violating privacy to that level and creating so much of a Big Brother-like situation in the name of policing for copyright infringement,” he said."

Monday, March 14, 2022

Sandy Hook review: anatomy of an American tragedy – and the obscenity of social media; The Guardian, March 13, 2022

The Guardian; Sandy Hook review: anatomy of an American tragedy – and the obscenity of social media

"Those recommendations are the result of the infernal algorithms which are at the heart of the business models of Facebook and YouTube and are probably more responsible for the breakdown in civil society in the US and the world than anything else invented.

“We thought the internet would give us this accelerated society of science and information,” says Lenny Pozner, whose son Noah was one of the Sandy Hook victims. But “really, we’ve gone back to flat earth”."

Wednesday, October 6, 2021

Here are 4 key points from the Facebook whistleblower's testimony on Capitol Hill; NPR, October 5, 2021

Bobby Allyn, NPR; Here are 4 key points from the Facebook whistleblower's testimony on Capitol Hill

"Research shows Facebook coveted young users, despite health concerns.

Of particular concern to lawmakers on Tuesday was Instagram's impact on young children.

Haugen has leaked one Facebook study that found that 13.5 percent of U.K. teen girls in one survey say their suicidal thoughts became more frequent.

Another leaked study found 17% of teen girls say their eating disorders got worse after using Instagram.

About 32% of teen girls said that when they felt bad about their bodies, Instagram made them feel worse, Facebook's researchers found, which was first reported by the Journal. 

Sen. Marsha Blackburn, R-Tenn., accused Facebook of intentionally targeting children under age 13 with an "addictive" product — despite the app requiring users be 13 years or older. 

"It is clear that Facebook prioritizes profit over the well-being of children and all users," she said. 

Blumenthal echoed this concern. 

"Facebook exploited teens using powerful algorithms that amplified their insecurities," Blumenthal said. "I hope we will discuss as to whether there is such a thing as a safe algorithm.""

Thursday, February 25, 2021

Life amid the ruins of QAnon: ‘I wanted my family back’; The Washington Post, February 23, 2021

Greg Jaffe, The Washington Post; Life amid the ruins of QAnon: ‘I wanted my family back’. An epidemic of conspiracy theories, fanned by social media and self-serving politicians, is tearing families apart.

"Like many conspiracy theories, QAnon supplied a good-versus-evil narrative into which complicated world events could be easily incorporated. “Especially during the pandemic, Q provided a structure to explain what was going on,” said Mike Rothschild, author of “The Storm Is Upon Us,” which documents QAnon’s rise.

And it offered believers a sense of meaning and purpose. “We want to believe that we matter enough [that someone wants] to crush us,” Rothschild said. “It’s comforting to think that the New World Order would single us out for destruction.”

A big part of what made it novel was that it was interactive, allowing its followers to take part in the hunt for clues as if they were playing a video game. Social media algorithms, built to capture and keep consumers’ attention, helped expand the pool of hardcore believers by leading curious individuals to online groups of believers and feeding them fresh QAnon conspiracy theories."

Thursday, July 16, 2020

YouTube’s algorithms could be harming users looking for health information; Fast Company, July 15, 2020

Anjana Susarla, Fast Company; YouTube’s algorithms could be harming users looking for health information

"A significant fraction of the U.S. population is estimated to have limited health literacy, or the capacity to obtain, process, and understand basic health information, such as the ability to read and comprehend prescription bottles, appointment slips, or discharge instructions from health clinics.

Studies of health literacy, such as the National Assessment of Adult Literacy conducted in 2003, estimated that only 12% of adults had proficient health literacy skills. This has been corroborated in subsequent studies.

I’m a professor of information systems, and my own research has examined how social media platforms such as YouTube widen such health literacy disparities by steering users toward questionable content."

Saturday, July 11, 2020

Wrongfully Accused by an Algorithm; The New York Times, June 24, 2020

The New York Times; Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

"Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology, has written about problems with the government’s use of facial recognition. She argues that low-quality search images — such as a still image from a grainy surveillance video — should be banned, and that the systems currently in use should be tested rigorously for accuracy and bias.

“There are mediocre algorithms and there are good ones, and law enforcement should only buy the good ones,” Ms. Garvie said.

About Mr. Williams’s experience in Michigan, she added: “I strongly suspect this is not the first case to misidentify someone to arrest them for a crime they didn’t commit. This is just the first time we know about it.”"

Sunday, May 31, 2020

Think Outside the Box, Jack; The New York Times, May 30, 2020

The New York Times; Think Outside the Box, Jack. Trump, Twitter and the society-crushing pursuit of monetized rage.

"The Wall Street Journal had a chilling report a few days ago that Facebook’s own research in 2018 revealed that “our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

Mark Zuckerberg shelved the research...

“The shareholders of Facebook decided, ‘If you can increase my stock tenfold, we can put up with a lot of rage and hate,’” says Scott Galloway, professor of marketing at New York University’s Stern School of Business.

“These platforms have very dangerous profit motives. When you monetize rage at such an exponential rate, it’s bad for the world. These guys don’t look left or right; they just look down. They’re willing to promote white nationalism if there’s money in it. The rise of social media will be seen as directly correlating to the decline of Western civilization.”"

Thursday, April 23, 2020

Fair and unfair algorithms: What to take into account when developing AI systems to fight COVID-19; JD Supra, April 17, 2020

Fabia Cairoli and Giangiacomo Olivi, JD Supra; Fair and unfair algorithms: What to take into account when developing AI systems to fight COVID-19

"The regulatory framework includes a number of sources from which to draw inspiration when developing AI technology. One of the most recent ones, the White Paper on Artificial Intelligence of the European Commission, is aimed at defining the risks associated with the implementation of AI systems, as well as determining the key features that should be implemented to ensure that data subjects’ rights are complied with (please see our articles The EU White Paper on Artificial Intelligence: the five requirements and Shaping EU regulations on Artificial Intelligence: the five improvements for a more detailed analysis).

It is worth noting that, particularly in relation to the development of AI technologies to fight the pandemic, the legislator is required to pay close attention to principles and security safeguards. Risks associated with AI relate both to rights and to technical functionality. EU member states intending to use AI against COVID-19 will also need to ensure that any AI technology is ethical and is constructed and operated in a safe way.

With regards to ethics, it is worth noting that the European Commission issued Ethics Guidelines for Trustworthy AI in April 2019. Those guidelines stressed the need for AI systems to be lawful, ethical and robust (more particularly, AI should comply with all applicable laws and regulations, as well as ensure adherence to ethical principles / values and be designed in a way that does not cause unintentional harm).

With the aim of ensuring that fundamental rights are upheld, the legislator should consider whether an AI system will maintain respect for human dignity, equality, non-discrimination and solidarity. Some of these rights may be restricted for extraordinary and overriding reasons – such as fighting against a pandemic – but this should take place under specific legal provisions and only so far as is necessary to achieve the main purpose. Indeed, the use of tracking apps and systems that profile citizens in order to determine which ones may suffer from COVID-19 entails the risk that an individual’s freedom and democratic rights could be seriously restricted."

Sunday, April 5, 2020

Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech; ZDNet, April 1, 2020

ZDNet; Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech

The transformative potential of algorithms means that developers are now expected to think about the ethics of technology -- and that wasn't part of the job description.

"Crucially, most guidelines also insist that thought be given to the ethical implications of the technology from the very first stage of conceptualising a new tool, and all the way through its implementation and commercialisation. 

This principle of 'ethics by design' goes hand in hand with that of responsibility and can be translated, roughly, as: 'coders be warned'. In other words, it's now on developers and their teams to make sure that their program doesn't harm users. And the only way to make sure it doesn't is to make the AI ethical from day one.

The trouble with the concept of ethics by design is that tech wasn't necessarily designed for ethics."

Tuesday, November 26, 2019

NYC wants a chief algorithm officer to counter bias, build transparency; Ars Technica, November 25, 2019

Kate Cox, Ars Technica; NYC wants a chief algorithm officer to counter bias, build transparency

"It takes a lot of automation to make the nation's largest city run, but it's easy for that kind of automation to perpetuate existing problems and fall unevenly on the residents it's supposed to serve. So to mitigate the harms and ideally increase the benefits, New York City has created a high-level city government position essentially to manage algorithms."

Thursday, November 21, 2019

Why Business Leaders Need to Understand Their Algorithms; Harvard Business Review, November 19, 2019

Mike Walsh, Harvard Business Review; Why Business Leaders Need to Understand Their Algorithms

"Leaders will be challenged by shareholders, customers, and regulators on what they optimize for. There will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination. Document your decisions carefully and make sure you understand, or at the very least trust, the algorithmic processes at the heart of your business.

Simply arguing that your AI platform was a black box that no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as “the algorithm made me do it.”"

Wednesday, November 6, 2019

Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices; The Washington Post, November 6, 2019

Drew Harwell, The Washington Post; Rights group files federal complaint against AI-hiring firm HireVue, citing ‘unfair and deceptive’ practices

"The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue’s business practices, saying the company’s use of unproven artificial intelligence systems that scan people’s faces and voices constituted a wide-scale threat to American workers."

Elisa Celis and the fight for fairness in artificial intelligence; Yale News, November 6, 2019

Jim Shelton, Yale News; Elisa Celis and the fight for fairness in artificial intelligence

"What can you tell us about the new undergraduate course you’re teaching at Yale?

It’s called “Data Science Ethics.” I came in with an idea of what I wanted to do, but I also wanted to incorporate a lot of feedback from students. The first week was spent asking: “What is normative ethics? How do we even go about thinking in terms of ethical decisions in this context?” With that foundation, we began talking about different areas where ethical questions come out, throughout the entire data science pipeline. Everything from how you collect data to the algorithms themselves and how they end up encoding these biases, and how the results of biased algorithms directly affect people. The goal is to introduce students to all the things they should have in their mind when talking about ethics in the technical sphere.

The class doesn’t require coding or technical background, because that allows students from other departments to participate. We have students from anthropology, sociology, and economics, and other departments, which broadens the discussion. That’s very valuable when grappling with these inherently interdisciplinary problems."

Wednesday, October 23, 2019

Trump housing plan would make bias by algorithm 'nearly impossible to fight'; The Guardian, October 23, 2019

Kari Paul, The Guardian; Trump housing plan would make bias by algorithm 'nearly impossible to fight'

"Under the Department of Housing and Urban Development’s (HUD) new rules, businesses would be shielded from liability when their algorithms are accused of bias through three different loopholes:
  • When the algorithm in question is vetted by a “neutral third party”.
  • When the algorithm itself was created by a third party.
  • If an algorithm used did not use race or a proxy for it in the computer model.
In the letter, groups in opposition to the change noted many pieces of data can be proxies for race – discriminating by a zip code, for example, can enable a racial bias. The rule would give “unprecedented deference” to mortgage lenders, landlords, banks, insurance companies, and others in the housing industry, the letter said."

Tuesday, October 15, 2019

Student tracking, secret scores: How college admissions offices rank prospects before they apply; The Washington Post, October 14, 2019

Douglas MacMillan and Nick Anderson, The Washington Post; Student tracking, secret scores: How college admissions offices rank prospects before they apply

"Admissions officers say behavioral tracking helps them serve students in the application process. When a college sees that a qualified student is serious about applying based on the student’s Web behavior, it can dedicate more staffers to follow up...

But Web tracking may unfairly provide an advantage to students with better access to technology, said Bradley Shear, a Maryland lawyer who has pushed for better regulation of students’ online privacy. A low-income student may be a strong academic candidate but receive less attention from recruiters because the student does not own a smartphone or have high-speed Internet access at home, he said.

“I don’t think the algorithm should run the admissions department,” Shear said."

Friday, March 29, 2019

'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley; The Guardian, March 29, 2019

Sam Levin, The Guardian; 'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley

"“Algorithms determine who gets housing loans and who doesn’t, who goes to jail and who doesn’t, who gets to go to what school,” said Malkia Devich Cyril, the executive director of the Center for Media Justice. “There is a real risk and real danger to people’s lives and people’s freedom.”

Universities and ethics boards could play a vital role in counteracting these trends. But they rarely work with people who are affected by the tech, said Laura Montoya, the cofounder and president of the Latinx in AI Coalition: “It’s one thing to really observe bias and recognize it, but it’s a completely different thing to really understand it from a personal perspective and to have experienced it yourself throughout your life.”

It’s not hard to find AI ethics groups that replicate power structures and inequality in society – and altogether exclude marginalized groups.

The Partnership on AI, an ethics-focused industry group launched by Google, Facebook, Amazon, IBM and Microsoft, does not appear to have black board members or staff listed on its site, and has a board dominated by men. A separate Microsoft research group dedicated to “fairness, accountability, transparency, and ethics in AI” also excludes black voices."
 

Wednesday, March 6, 2019

The ethical side of big data; Statistics Netherlands, March 4, 2019

Masja de Ree, Statistics Netherlands; The ethical side of big data

"The power of data

Why do we need to highlight the importance of ethical data use? Dechesne explains: ‘I am a mathematician. My world is a world of numbers. My education did not put much emphasis on the power of data in our society, however. Numbers frequently have a veneer of objectivity, but any conclusions drawn on the basis of data are always contingent on the definitions maintained and the decisions made when designing a research project. These choices can have a huge impact on certain groups in our society. This is something we need to be aware of. Decisions have to be made. That is fine, of course, as long as everyone is mindful and transparent when making decisions.’"

Wednesday, July 25, 2018

Artificial Intelligence Shows Why Atheism Is Unpopular; The Atlantic, July 23, 2018

Sigal Samuel, The Atlantic; Artificial Intelligence Shows Why Atheism Is Unpopular

"Even harder to sway may be those concerned not with the methodology’s technical complications, but with its ethical complications. As Wildman told me, “These models are equal-opportunity insight generators. If you want to go militaristic, then these models tell you what the targets should be.”...

Nevertheless, just like Wildman, Shults told me, “I lose sleep at night on this. ... It is social engineering. It just is—there’s no pretending like it’s not.” But he added that other groups, like Cambridge Analytica, are doing this kind of computational work, too. And various bad actors will do it without transparency or public accountability. “It’s going to be done. So not doing it is not the answer.” Instead, he and Wildman believe the answer is to do the work with transparency and simultaneously speak out about the ethical danger inherent in it.

“That’s why our work here is two-pronged: I’m operating as a modeler and as an ethicist,” Wildman said. “It’s the best I can do.”"

Monday, July 23, 2018

We Need Transparency in Algorithms, But Too Much Can Backfire; Harvard Business Review, July 23, 2018

Kartik Hosanagar and Vivian Jair, Harvard Business Review; We Need Transparency in Algorithms, But Too Much Can Backfire

"Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals, to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But as Nass’s experience makes clear, transparency can backfire if not implemented carefully. Fortunately, there is a smart way forward."