Showing posts with label machine learning.

Thursday, June 29, 2023

The Vatican Releases Its Own AI Ethics Handbook; Gizmodo, June 28, 2023

 Thomas Germain, Gizmodo; The Vatican Releases Its Own AI Ethics Handbook

"The Vatican is getting in on the AI craze. The Holy See has released a handbook on the ethics of artificial intelligence as defined by the Pope. 

The guidelines are the result of a partnership between Francis and Santa Clara University’s Markkula Center for Applied Ethics. Together, they’ve formed a new organization called the Institute for Technology, Ethics, and Culture (ITEC). The ITEC’s first project is a handbook titled Ethics in the Age of Disruptive Technologies: An Operational Roadmap, meant to guide the tech industry through the murky waters of ethics in AI, machine learning, encryption, tracking, and more."

Saturday, January 15, 2022

We’re failing at the ethics of AI. Here’s how we make real impact; World Economic Forum, January 14, 2022

Monday, October 25, 2021

Copyright Law and Machine Learning for AI: Where Are We and Where Are We Going?; Co-Sponsored by the United States Copyright Office and the United States Patent and Trademark Office, Tuesday, October 26, 2021 10 AM - 3 PM EDT

 Copyright Law and Machine Learning for AI: Where Are We and Where Are We Going?

Co-Sponsored by the United States Copyright Office and the United States Patent and Trademark Office


"The U.S. Copyright Office and the U.S. Patent and Trademark Office are hosting an October 26, 2021, conference that will explore machine learning in practice, how existing copyright laws apply to the training of artificial intelligence, and what the future may hold in this fast-moving policy space. The event will comprise three one-hour sessions, with a lunch break, and is expected to run from 10:00 a.m. to 2:30 p.m. eastern time. 

Due to the state of the COVID-19 pandemic, the on-site portion of the program initially scheduled to take place at the Library of Congress's Montpelier Room has been canceled. All sessions will still take place online as planned. Participants must register to attend this free, public event.


Download the agenda here."

Friday, June 4, 2021

Is A.I. the problem? Or are we?; The Ezra Klein Show, The New York Times, June 4, 2021

The Ezra Klein Show, The New York Times; Is A.I. the problem? Or are we?

"One of my projects this year is to get a better handle on this debate. A.I., after all, isn’t some force only future human beings will face. It’s here now, deciding what advertisements are served to us online, how bail is set after we commit crimes and whether our jobs will exist in a couple of years. It is both shaped by and reshaping politics, economics and society. It’s worth understanding.

Brian Christian’s recent book “The Alignment Problem” is the best book on the key technical and moral questions of A.I. that I’ve read. At its center is the term from which the book gets its name. “Alignment problem” originated in economics as a way to describe the fact that the systems and incentives we create often fail to align with our goals. And that’s a central worry with A.I., too: that we will create something to help us that will instead harm us, in part because we didn’t understand how it really worked or what we had actually asked it to do.

So this conversation is about the various alignment problems associated with A.I. We discuss what machine learning is and how it works, how governments and corporations are using it right now, what it has taught us about human learning, the ethics of how humans should treat sentient robots, the all-important question of how A.I. developers plan to make profits, what kinds of regulatory structures are possible when we’re dealing with algorithms we don’t really understand, the way A.I. reflects and then supercharges the inequities that exist in our society, the saddest Super Mario Bros. game I’ve ever heard of, why the problem of automation isn’t so much job loss as dignity loss and much more."
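The proxy-versus-goal worry is easy to make concrete. Below is a tiny, made-up sketch (not from the book or the episode) in which a system told only to maximize clicks prefers the option that scores worse on the goal we actually care about; the strategies and numbers are invented purely for illustration.

```python
# Toy illustration (not from the episode): a recommender that optimizes a
# proxy metric (clicks) can drift away from the true goal (satisfaction).
# All strategies and numbers here are made up for demonstration.

import random

random.seed(0)

# Two hypothetical content strategies the system can choose between.
# "sensational" items get more clicks but leave users less satisfied.
STRATEGIES = {
    "informative": {"click_rate": 0.10, "satisfaction": 0.8},
    "sensational": {"click_rate": 0.30, "satisfaction": 0.3},
}

def simulate_clicks(strategy, n_users=10_000):
    """Count simulated clicks for one strategy across a synthetic audience."""
    rate = STRATEGIES[strategy]["click_rate"]
    return sum(random.random() < rate for _ in range(n_users))

# A system told only to maximize clicks prefers the strategy that scores
# worse on the objective we actually care about.
best_by_clicks = max(STRATEGIES, key=simulate_clicks)
best_by_satisfaction = max(STRATEGIES, key=lambda s: STRATEGIES[s]["satisfaction"])

print("Chosen when optimizing the proxy (clicks):", best_by_clicks)        # sensational
print("Chosen when optimizing the true goal:     ", best_by_satisfaction)  # informative
```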

Friday, April 16, 2021

Big Tech’s guide to talking about AI ethics; Wired, April 13, 2021

Wired; Big Tech’s guide to talking about AI ethics

"AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.

The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.

Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in...

diversity, equity, and inclusion (ph) - The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them...

ethics board (ph) - A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).

ethics principles (ph) - A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI."

Saturday, March 27, 2021

Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here; NPR, March 11, 2021

Emma Bowman, NPR; Slick Tom Cruise Deepfakes Signal That Near Flawless Forgeries May Be Here


"In a crop of viral videos featuring Tom Cruise, it's not the actor's magic trick nor his joke-telling that's deceptive — but the fact that it's not actually Tom Cruise at all.

The videos, uploaded to TikTok in recent weeks by the account @deeptomcruise, have raised new fears over the proliferation of believable deepfakes — the nickname for media generated by artificial intelligence technology showing phony events that often seem realistic enough to dupe an audience.

Hany Farid, a professor at the University of California, Berkeley, told NPR's All Things Considered that the Cruise videos demonstrate a step up in the technology's evolving sophistication."

Friday, July 17, 2020

If AI is going to help us in a crisis, we need a new kind of ethics; MIT Technology Review, June 24, 2020

MIT Technology Review; If AI is going to help us in a crisis, we need a new kind of ethics

Ethics for urgency means making ethics a core part of AI rather than an afterthought, says Jess Whittlestone.

"What needs to change?

We need to think about ethics differently. It shouldn’t be something that happens on the side or afterwards—something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design...

You’ve said that we need people with technical expertise at all levels of AI design and use. Why is that?

I’m not saying that technical expertise is the be-all and end-all of ethics, but it’s a perspective that needs to be represented. And I don’t want to sound like I’m saying all the responsibility is on researchers, because a lot of the important decisions about how AI gets used are made further up the chain, by industry or by governments.

But I worry that the people who are making those decisions don’t always fully understand the ways it might go wrong. So you need to involve people with technical expertise. Our intuitions about what AI can and can’t do are not very reliable.

What you need at all levels of AI development are people who really understand the details of machine learning to work with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by privacy may be very different from what a lawyer means by privacy, and you can end up with people talking past each other. That’s why it’s important for these different groups to get used to working together."

Tuesday, April 21, 2020

The Ethics of Developing COVID-19 Treatments and Vaccination; Carnegie Mellon University, April 7, 2020

Jason Maderer, Carnegie Mellon University; The Ethics of Developing COVID-19 Treatments and Vaccination

CMU's experts explore the options


"In the rush to do science quickly, Carnegie Mellon University ethicist Alex John London says it is easy to make mistakes. 

"The point of research is to reduce uncertainty — to sort out dead ends from fruitful treatment strategies," said London, the Clara L. West Professor of Ethics and Philosophy and director of the Center for Ethics and Policy. But if you don’t do rigorous science, you can wind up increasing uncertainty, which can actually make things worse."

London’s research in Carnegie Mellon’s Dietrich College of Humanities and Social Sciences focuses on ethical and policy issues surrounding the development and deployment of novel technologies in medicine...

One strategy to expedite the vaccine process for COVID-19 is turning to the power of artificial intelligence (AI). London’s colleague, Carnegie Mellon professor David Danks, looks at the intersection of ethics and machine learning."

Friday, March 27, 2020

COVID-19 first target of new AI research consortium; Berkeley News, March 26, 2020

Sarah Yang, College of Engineering, Berkeley News; COVID-19 first target of new AI research consortium


"The University of California, Berkeley, and the University of Illinois at Urbana-Champaign (UIUC) are the headquarters of a bold new research consortium established by enterprise AI software company C3.ai to leverage the convergence of artificial intelligence (AI), machine learning and the internet of things (IoT) to transform societal-scale systems.

C3.ai announced the creation of the C3.ai Digital Transformation Institute (C3.ai DTI) today, along with a call for research proposals for AI techniques to mitigate the effects of COVID-19 and possible future pandemics.

“The C3.ai Digital Transformation Institute is a consortium of leading scientists, researchers, innovators and executives from academia and industry, joining forces to accelerate the social and economic benefits of digital transformation,” said Thomas M. Siebel, CEO of C3.ai, in a statement. “We have the opportunity through public-private partnership to change the course of a global pandemic. I cannot imagine a more important use of AI.”

The first call for proposals, due May 1, 2020, targets research that addresses the application of AI and machine learning to mitigate the spread of COVID-19, rigorous approaches to design sampling and testing strategies, and methods to improve societal resilience in response to the COVID-19 pandemic, among other areas relevant to pandemic mitigation."

Wednesday, November 6, 2019

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."
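To see how machine learning pushes fairness to be spelled out precisely, here is a minimal sketch (not from the article) that computes two common, often mutually incompatible fairness metrics on a tiny set of made-up predictions: demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates).

```python
# A minimal sketch (not from the article) of two fairness definitions made
# concrete, computed on a tiny, invented set of predictions.

def rate(values):
    """Fraction of 1s in a list, or 0.0 if the list is empty."""
    return sum(values) / len(values) if values else 0.0

# Hypothetical records: (group, true_label, predicted_label)
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0), ("B", 1, 0),
]

def selection_rate(group):
    return rate([pred for g, _, pred in records if g == group])

def true_positive_rate(group):
    return rate([pred for g, label, pred in records if g == group and label == 1])

# Demographic parity: do groups receive positive predictions at equal rates?
dp_gap = abs(selection_rate("A") - selection_rate("B"))

# Equal opportunity: among truly qualified people, are groups accepted equally?
eo_gap = abs(true_positive_rate("A") - true_positive_rate("B"))

print(f"Demographic parity gap: {dp_gap:.2f}")
print(f"Equal opportunity gap:  {eo_gap:.2f}")
```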

Wednesday, October 23, 2019

A face-scanning algorithm increasingly decides whether you deserve the job; The Washington Post, October 22, 2019

Drew Harwell, The Washington Post; A face-scanning algorithm increasingly decides whether you deserve the job 

HireVue claims it uses artificial intelligence to decide who’s best for a job. Outside experts call it ‘profoundly disturbing.’

"“It’s a profoundly disturbing development that we have proprietary technology that claims to differentiate between a productive worker and a worker who isn’t fit, based on their facial movements, their tone of voice, their mannerisms,” said Meredith Whittaker, a co-founder of the AI Now Institute, a research center in New York...

Loren Larsen, HireVue’s chief technology officer, said that such criticism is uninformed and that “most AI researchers have a limited understanding” of the psychology behind how workers think and behave...

“People are rejected all the time based on how they look, their shoes, how they tucked in their shirts and how ‘hot’ they are,” he told The Washington Post. “Algorithms eliminate most of that in a way that hasn’t been possible before.”...

HireVue’s growth, however, is running into some regulatory snags. In August, Illinois Gov. J.B. Pritzker (D) signed a first-in-the-nation law that will force employers to tell job applicants how their AI-hiring system works and get their consent before running them through the test. The measure, which HireVue said it supports, will take effect Jan. 1."

Thursday, April 25, 2019

The Legal and Ethical Implications of Using AI in Hiring; Harvard Business Review, April 25, 2019

Ben Dattner, Tomas Chamorro-Premuzic, Richard Buchband, and Lucinda Schettler, Harvard Business Review; The Legal and Ethical Implications of Using AI in Hiring


    "Using AI, big data, social media, and machine learning, employers will have ever-greater access to candidates’ private lives, private attributes, and private challenges and states of mind. There are no easy answers to many of the new questions about privacy we have raised here, but we believe that they are all worthy of public discussion and debate."

    Tuesday, January 8, 2019

    Genetic data on half a million Brits reveal ongoing evolution and Neanderthal legacy; Science, January 3, 2019

    Ann Gibbons, Science; Genetic data on half a million Brits reveal ongoing evolution and Neanderthal legacy

    "For years, evolutionary biologists couldn't get their rubber-gloved hands on enough people's genomes to detect the relatively rare bits of Neanderthal DNA, much less to see whether or how our extinct cousins' genetic legacy might influence disease or physical traits.

    But a few years ago, Kelso and her colleagues at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, turned to a new tool—the UK Biobank (UKB), a large database that holds genetic and health records for half a million British volunteers. The researchers analyzed data from 112,338 of those Britons—enough that "we could actually look and say: ‘We see a Neanderthal version of the gene and we can measure its effect on phenotype in many people—how often they get sunburned, what color their hair is, and what color their eyes are,’" Kelso says. They found Neanderthal variants that boost the odds that a person smokes, is an evening person rather than a morning person, and is prone to sunburn and depression."
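As a rough illustration of the kind of genotype-phenotype association the excerpt describes, the sketch below builds a synthetic cohort with an assumed carrier frequency and trait rates and computes an odds ratio; none of these numbers come from the study, and this is not its actual analysis pipeline.

```python
# A minimal sketch of a genotype-phenotype association on synthetic data:
# compare how often a trait appears in carriers vs. non-carriers of a variant.
# Cohort size, allele frequency, and trait rates below are assumptions.

import random

random.seed(42)

N = 100_000                  # synthetic cohort size (the study analyzed 112,338 real records)
CARRIER_FREQ = 0.02          # assumed frequency of the archaic-derived allele
P_TRAIT_CARRIER = 0.30       # assumed trait rate (e.g., easily sunburned) in carriers
P_TRAIT_NONCARRIER = 0.22    # assumed trait rate in non-carriers

# 2x2 contingency table keyed by (carrier, has_trait).
counts = {(c, t): 0 for c in (0, 1) for t in (0, 1)}
for _ in range(N):
    carrier = random.random() < CARRIER_FREQ
    p = P_TRAIT_CARRIER if carrier else P_TRAIT_NONCARRIER
    trait = random.random() < p
    counts[(int(carrier), int(trait))] += 1

# Odds ratio: (carrier & trait * non-carrier & no-trait) /
#             (carrier & no-trait * non-carrier & trait)
odds_ratio = (counts[(1, 1)] * counts[(0, 0)]) / (counts[(1, 0)] * counts[(0, 1)])
print(f"Estimated odds ratio for the trait among carriers: {odds_ratio:.2f}")
```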

    Wednesday, July 25, 2018

    Artificial Intelligence Shows Why Atheism Is Unpopular; The Atlantic, July 23, 2018

    Sigal Samuel, The Atlantic; Artificial Intelligence Shows Why Atheism Is Unpopular


    "Even harder to sway may be those concerned not with the methodology’s technical complications, but with its ethical complications. As Wildman told me, “These models are equal-opportunity insight generators. If you want to go militaristic, then these models tell you what the targets should be.”...

    Nevertheless, just like Wildman, Shults told me, “I lose sleep at night on this. ... It is social engineering. It just is—there’s no pretending like it’s not.” But he added that other groups, like Cambridge Analytica, are doing this kind of computational work, too. And various bad actors will do it without transparency or public accountability. “It’s going to be done. So not doing it is not the answer.” Instead, he and Wildman believe the answer is to do the work with transparency and simultaneously speak out about the ethical danger inherent in it.

    “That’s why our work here is two-pronged: I’m operating as a modeler and as an ethicist,” Wildman said. “It’s the best I can do.”"

    Monday, July 23, 2018

    We Need Transparency in Algorithms, But Too Much Can Backfire; Harvard Business Review, July 23, 2018

    Kartik Hosanagar and Vivian Jair, Harvard Business Review; We Need Transparency in Algorithms, But Too Much Can Backfire

    "Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals, to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But as Nass’s experience makes clear, transparency can backfire if not implemented carefully. Fortunately, there is a smart way forward."

    Facebook's pledge to eliminate misinformation is itself fake news ; The Guardian, July 20, 2018

    Judd Legum, The Guardian; Facebook's pledge to eliminate misinformation is itself fake news

    "The production values are high and the message is compelling. In an 11-minute mini-documentary, Facebook acknowledges its mistakes and pledges to “fight against misinformation”.

    “With connecting people, particularly at our scale, comes an immense amount of responsibility,” an unidentified Facebook executive in the film solemnly tells a nodding audience of new company employees.

    An outdoor ad campaign by Facebook strikes a similar note, plastering slogans like “Fake news is not your friend” at bus stops around the country.

    But the reality of what’s happening on the Facebook platform belies its gauzy public relations campaign."

    Sunday, June 10, 2018

    How data scientists are using AI for suicide prevention; Vox, June 9, 2018

    Brian Resnick, Vox; How data scientists are using AI for suicide prevention

    "At the Crisis Text Line, a text messaging-based crisis counseling hotline, these deluges have the potential to overwhelm the human staff.

    So data scientists at Crisis Text Line are using machine learning, a type of artificial intelligence, to pull out the words and emojis that can signal a person at higher risk of suicide ideation or self-harm. The computer tells them who on hold needs to jump to the front of the line to be helped.

    They can do this because Crisis Text Line does something radical for a crisis counseling service: It collects a massive amount of data on the 30 million texts it has exchanged with users. While Netflix and Amazon are collecting data on tastes and shopping habits, the Crisis Text Line is collecting data on despair.

    The data, some of which is available here, has turned up all kinds of interesting insights on mental health."
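The general technique here, scoring incoming messages with a learned classifier and moving the highest-scoring ones to the front of the queue, can be sketched in a few lines. The model, features, and example texts below are invented for illustration; this is not Crisis Text Line's actual system or data.

```python
# A minimal sketch of text-based triage: train a classifier on labeled
# messages, then rank an incoming queue by predicted risk score.
# Training texts, labels, and the model choice are all illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up training set: 1 = flagged as higher risk, 0 = lower risk.
train_texts = [
    "i feel hopeless and i don't want to be here anymore",
    "i can't stop thinking about hurting myself",
    "i had a fight with my roommate and need advice",
    "stressed about exams but hanging in there",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Rank the waiting queue so higher-risk texters are seen first.
queue = [
    "worried about a job interview tomorrow",
    "everything feels pointless and i'm scared of what i might do",
]
scores = model.predict_proba(queue)[:, 1]
for score, text in sorted(zip(scores, queue), reverse=True):
    print(f"{score:.2f}  {text}")
```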

    Tuesday, May 29, 2018

    Why thousands of AI researchers are boycotting the new Nature journal ; Guardian, May 29, 2018

    Neil Lawrence, Guardian; Why thousands of AI researchers are boycotting the new Nature journal

    "Many in our research community see the Nature brand as a poor proxy for academic quality. We resist the intrusion of for-profit publishing into our field. As a result, at the time of writing, more than 3,000 researchers, including many leading names in the field from both industry and academia, have signed a statement refusing to submit, review or edit for this new journal. We see no role for closed access or author-fee publication in the future of machine-learning research. We believe the adoption of this new journal as an outlet of record for the machine-learning community would be a retrograde step."

    Monday, April 2, 2018

    Machine learning as a service: Can privacy be taught?; ZDnet, April 2, 2018

    Robin Harris, ZDNet; Machine learning as a service: Can privacy be taught?

    "Machine learning is one of the hottest disciplines in computer science today. So hot, in fact, that cloud providers are doing a good and rapidly growing business in machine-learning-as-a-service (MLaaS).

    But these services come with a caveat: all the training data must be revealed to the service operator. Even if the service operator does not intentionally access the data, someone with nefarious motives may. Or there may be legal reasons to preserve privacy, such as with health data.

    In a recent paper, Chiron: Privacy-preserving Machine Learning as a Service, Tyler Hunt of the University of Texas and others present a system that preserves privacy while enabling the use of cloud MLaaS."
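Chiron itself relies on trusted hardware enclaves, which a short snippet cannot reproduce, so the sketch below illustrates a different, complementary privacy-preserving idea: training with clipped, noised gradient updates in the spirit of differentially private SGD, so that no single training example dominates the model. The data and hyperparameters are invented.

```python
# A minimal sketch of one privacy-preserving training idea (noised, clipped
# gradient updates), NOT Chiron's enclave-based design. Synthetic data only.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # synthetic "private" features
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(float)

w = np.zeros(5)
lr, clip_norm, noise_std = 0.1, 1.0, 0.5             # illustrative hyperparameters

for _ in range(100):
    preds = 1 / (1 + np.exp(-(X @ w)))               # logistic model
    per_example_grads = (preds - y)[:, None] * X     # log-loss gradient per example
    # Clip each example's gradient so its influence on the update is bounded...
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # ...then average and add Gaussian noise before applying the update.
    noisy_grad = clipped.mean(axis=0) + rng.normal(scale=noise_std / len(X), size=5)
    w -= lr * noisy_grad

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"Training accuracy with noised updates: {accuracy:.2f}")
```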

    Thursday, March 22, 2018

    A Huge Global Study On Driverless Car Ethics Found The Elderly Are Expendable; Forbes, March 21, 2018

    Oliver Smith, Forbes; A Huge Global Study On Driverless Car Ethics Found The Elderly Are Expendable

    "Over the last year, 4 million people took part by answering ethical questions in Moral Machine's many scenarios – which include different combinations of genders, ages, and even other species like cats and dogs, crossing the road.

    On Sunday, the day before the first pedestrian fatality by an autonomous car in America, MIT's Professor Iyad Rahwan revealed the first results of the Moral Machine study at the Global Education and Skills Forum in Dubai."
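A study like this turns millions of forced choices into attribute-level preferences. The sketch below shows one way such responses could be tallied, counting how often characters with a given attribute are spared; the responses are invented and the aggregation is only illustrative of the general approach, not the Moral Machine team's actual methodology.

```python
# A minimal sketch (with invented responses) of aggregating Moral Machine-style
# forced choices into per-attribute "spared" rates.

from collections import defaultdict

# Each made-up response lists the attributes of the group the participant chose
# to spare and of the group they chose to sacrifice.
responses = [
    {"spared": ["young", "human"], "sacrificed": ["elderly", "human"]},
    {"spared": ["young", "human"], "sacrificed": ["elderly", "human"]},
    {"spared": ["elderly", "human"], "sacrificed": ["young", "human"]},
    {"spared": ["human"], "sacrificed": ["dog"]},
    {"spared": ["human"], "sacrificed": ["cat"]},
]

spared = defaultdict(int)
appeared = defaultdict(int)
for r in responses:
    for attr in r["spared"]:
        spared[attr] += 1
        appeared[attr] += 1
    for attr in r["sacrificed"]:
        appeared[attr] += 1

for attr in sorted(appeared):
    print(f"{attr:8s} spared in {spared[attr] / appeared[attr]:.0%} of appearances")
```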