Showing posts with label computer scientists.

Tuesday, August 25, 2020

This Guy is Suing the Patent Office for Deciding an AI Can't Invent Things; Vice, August 24, 2020

Todd Feathers, Vice; This Guy is Suing the Patent Office for Deciding an AI Can't Invent Things

The USPTO rejected two patent applications written by a "creativity engine" named DABUS. Now a lawsuit raises fundamental questions about what it means to be creative.

"A computer scientist who created an artificial intelligence system capable of generating original inventions is suing the US Patent and Trademark Office (USPTO) over its decision earlier this year to reject two patent applications which list the algorithmic system, known as DABUS, as the inventor.

The lawsuit is the latest step in an effort by Stephen Thaler and an international group of lawyers and academics to win inventorship rights for non-human AI systems, a prospect that raises fundamental questions about what it means to be creative and also carries potentially paradigm-shifting implications for certain industries."

Thursday, July 30, 2020

Study: Only 18% of data science students are learning about AI ethics; TNW, July 3, 2020

Thomas Macaulay, TNW; Study: Only 18% of data science students are learning about AI ethics
The neglect of AI ethics extends from universities to industry

"At least we can rely on universities to teach the next generation of computer scientists to make. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

Notably, the worryingly low figures aren’t due to a lack of interest. Nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula."

Wednesday, July 15, 2020

AI gatekeepers are taking baby steps toward raising ethical standards; Quartz, June 26, 2020

Nicolás Rivero, Quartz; AI gatekeepers are taking baby steps toward raising ethical standards

"This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.

The Annual Conference on Neural Information Processing Systems will require a “broader impact statement” addressing the effect a piece of research might have on society. The Conference on Empirical Methods in Natural Language Processing will begin rejecting papers on ethical grounds. Others have emphasized their voluntary guidelines.

The new standards follow the publication of several ethically dubious papers. Microsoft collaborated with researchers at Beihang University to algorithmically generate fake comments on news stories. Harrisburg University researchers developed a tool to predict the likelihood someone will commit a crime based on their face. Researchers clashed on Twitter over the wisdom of publishing these and other papers.

“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute. Scientists have an obligation to think about applications and consider restricting research, she says, especially in fields like facial recognition with a high potential for misuse."

Thursday, February 13, 2020

How To Teach Artificial Intelligence; Forbes, February 12, 2020

Tom Vander Ark, Forbes; How To Teach Artificial Intelligence

"Artificial intelligence—code that learns—is likely to be humankind’s most important invention. It’s a 60-year-old idea that took off five years ago when fast chips enabled massive computing and sensors, cameras, and robots fed data-hungry algorithms...

A World Economic Forum report indicated that 89% of U.S.-based companies are planning to adopt user and entity big data analytics by 2022, while more than 70% want to integrate the Internet of Things, explore web and app-enabled markets, and take advantage of machine learning and cloud computing.

Given these important and rapid shifts, it’s a good time to consider what young people need to know about AI and information technology. First, everyone needs to be able to recognize AI and its influence on people and systems, and be proactive as a user and citizen. Second, everyone should have the opportunity to use AI and big data to solve problems. And third, young people interested in computer science as a career should have a pathway for building AI...

The MIT Media Lab developed a middle school AI+Ethics course that hits many of these learning objectives. It was piloted by Montour Public Schools outside of Pittsburgh, Pennsylvania, which has incorporated the three-day course in its media arts class."

Wednesday, November 6, 2019

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."

Monday, September 16, 2019

Maths and tech specialists need Hippocratic oath, says academic; The Guardian, August 16, 2019

Ian Sample, The Guardian; Maths and tech specialists need Hippocratic oath, says academic

"“We need a Hippocratic oath in the same way it exists for medicine,” Fry said. “In medicine, you learn about ethics from day one. In mathematics, it’s a bolt-on at best. It has to be there from day one and at the forefront of your mind in every step you take.”...

The genetics testing firm 23andMe was a case in point, she said.

“We literally hand over our most private data, our DNA, but we’re not just consenting for ourselves, we are consenting for our children, and our children’s children. Maybe we don’t live in a world where people are genetically discriminated against now, but who’s to say in 100 years that we won’t? And we are paying to add our DNA to that dataset.”"

Thursday, September 5, 2019

Teaching ethics in computer science the right way with Georgia Tech's Charles Isbell; TechCrunch, September 5, 2019

Greg Epstein, TechCrunch; Teaching ethics in computer science the right way with Georgia Tech's Charles Isbell

"The new fall semester is upon us, and at elite private colleges and universities, it’s hard to find a trendier major than Computer Science. It’s also becoming more common for such institutions to prioritize integrating ethics into their CS studies, so students don’t just learn about how to build software, but whether or not they should build it in the first place. Of course, this begs questions about how much the ethics lessons such prestigious schools are teaching are actually making a positive impression on students.

But at a time when demand for qualified computer scientists is skyrocketing around the world and far exceeds supply, another kind of question might be even more important: Can computer science be transformed from a field largely led by elites into a profession that empowers vastly more working people, and one that trains them in a way that promotes ethics and an awareness of their impact on the world around them?

Enter Charles Isbell of Georgia Tech, a humble and unassuming star of inclusive and ethical computer science. Isbell, a longtime CS professor at Georgia Tech, enters this fall as the new Dean and John P. Imlay Chair of Georgia Tech’s rapidly expanding College of Computing."

Monday, January 28, 2019

Embedding ethics in computer science curriculum: Harvard initiative seen as a national model; Harvard, John A. Paulson School of Engineering and Applied Sciences, January 28, 2019

Paul Karoff, Harvard, John A. Paulson School of Engineering and Applied Sciences; Embedding ethics in computer science curriculum: Harvard initiative seen as a national model

"Barbara Grosz has a fantasy that every time a computer scientist logs on to write an algorithm or build a system, a message will flash across the screen that asks, “Have you thought about the ethical implications of what you’re doing?”
 
Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.

“Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”"

Wednesday, March 28, 2018

Cambridge Analytica controversy must spur researchers to update data ethics; Nature, March 27, 2018

Editorial, Nature; Cambridge Analytica controversy must spur researchers to update data ethics

"Ethics training on research should be extended to computer scientists who have not conventionally worked with human study participants.

Academics across many fields know well how technology can outpace its regulation. All researchers have a duty to consider the ethics of their work beyond the strict limits of law or today’s regulations. If they don’t, they will face serious and continued loss of public trust."