
Wednesday, October 16, 2024

Computer scientist speaks of effects of AI on humanity; Allied News, October 15, 2024

Hailey Rogenski, Allied News; Computer scientist speaks of effects of AI on humanity

"What role will we let artificial intelligence play in our lives, and what effect will AI have on religion and the world? Can it replace human roles that require empathy?

Dr. Derek Schuurman, a Christian computer scientist from Calvin University in Grand Rapids, Mich., delved into those issues Oct. 7 at Grove City College in the college’s Albert A. Hopeman Jr. Memorial Lecture in Faith & Technology.

Schuurman is a member of the American Scientific Affiliation, an adviser for AI and Faith, a contributor to the Christian Scholars Review blog, a columnist for the Christian Courier, the author of Shaping the Digital World: Faith, Culture and Computer Technology and a co-author of A Christian Field Guide to Technology for Engineers and Designers...

“I think at that point we have to get back to that question and say, ‘what does it mean to be human?’” Schuurman said. “What does it mean to be made in the image of God? What does that imply for certain types of relationships and work about having a human doing that, because we choose to have someone who can actually have empathy for us, someone whose words can be influenced and shaped by the Holy Spirit speaking into our lives. There’s certain roles that require empathy, care (and) wisdom.”

Schuurman said he thinks some roles that require this kind of empathy, such as being a pastor or teacher, will remain untouched by AI.

He said the best way to use AI is to maintain a “hybrid approach” where “people do what people do well and machines do what machines do well.”"

Thursday, August 1, 2024

What do corporations need to ethically implement AI? Turns out, a philosopher; Northeastern Global News, July 26, 2024

Northeastern Global News; What do corporations need to ethically implement AI? Turns out, a philosopher

"As the founder of the AI Ethics Lab, Canca maintains a team of “philosophers and computer scientists, and the goal is to help industry. That means corporations as well as startups, or organizations like law enforcement or hospitals, to develop and deploy AI systems responsibly and ethically,” she says.

Canca has also worked with organizations like the World Economic Forum and Interpol.

But what does “ethical” mean when it comes to AI? That, Canca says, is exactly the point.

“A lot of the companies come to us and say, ‘Here’s a model that we are planning to use. Is this fair?’” 

But, she notes, there are “different definitions of justice, distributive justice, different definitions of fairness. They conflict with each other. It is a big theoretical question. How do we define fairness?”

"Saying that ‘We optimized this for fairness,’ means absolutely nothing until you have a working,  proper definition” — which shifts from project to project, she also notes.

Now, Canca has been named one of Mozilla’s Rise25 honorees, which recognizes individuals “leading the next wave of AI — using philanthropy, collective power, and the principles of open source to make sure the future of AI is responsible, trustworthy, inclusive and centered around human dignity,” the organization wrote in its announcement."
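A minimal Python sketch of the conflict Canca describes, using invented toy data (none of this comes from the article): the same predictions can satisfy one common fairness definition, demographic parity, while violating another, equal false-positive rates, so claiming a model was "optimized for fairness" says nothing until a specific definition is chosen.

# Toy illustration, not from the article: two fairness definitions
# evaluated on the same invented predictions for two groups.

def positive_rate(preds):
    # Share of people the model approves (demographic parity compares this).
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    # Share of truly unqualified people the model wrongly approves.
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    return fp / sum(1 for y in labels if y == 0)

# Invented data: 1 = approved/qualified, 0 = not.
labels_a, preds_a = [1, 1, 0, 0], [1, 0, 1, 0]   # group A
labels_b, preds_b = [1, 1, 1, 0], [1, 1, 0, 0]   # group B

print(positive_rate(preds_a), positive_rate(preds_b))   # 0.5 0.5 -> demographic parity holds
print(false_positive_rate(preds_a, labels_a),
      false_positive_rate(preds_b, labels_b))           # 0.5 0.0 -> equal FPR fails

Both groups are approved at the same rate, yet group A's unqualified members are wrongly approved half the time and group B's never, which is exactly the kind of definitional conflict Canca's team is brought in to adjudicate.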

Thursday, June 27, 2024

God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?; Scientific American, March 19, 2024

Scientific American; God Chatbots Offer Spiritual Insights on Demand. What Could Go Wrong?

"QuranGPT—which has now been used by about 230,000 people around the world—is just one of a litany of chatbots trained on religious texts that have recently appeared online. There’s Bible.Ai, Gita GPT, Buddhabot, Apostle Paul AI, a chatbot trained to imitate 16th-century German theologian Martin Luther, another trained on the works of Confucius, and yet another designed to imitate the Delphic oracle. For millennia adherents of various faiths have spent long hours—or entire lifetimes—studying scripture to glean insights into the deepest mysteries of human existence, say, the fate of the soul after death.

The creators of these chatbots don’t necessarily believe large language models (LLMs) will put these age-old theological enigmas to rest. But they do think that with their ability to identify subtle linguistic patterns within vast quantities of text and provide responses to user prompts in humanlike language (a feature called natural-language processing, or NLP), the bots can theoretically synthesize spiritual insights in a matter of seconds, saving users both time and energy. It’s divine wisdom on demand.

Many professional theologians, however, have serious concerns about blending LLMs with religion...

The danger of hallucination in this context is compounded by the fact that religiously oriented chatbots are likely to attract acutely sensitive questions—questions one might feel too embarrassed or ashamed to ask a priest, an imam, a rabbi or even a close friend. During a software update to QuranGPT last year, Khan had a brief glimpse into user prompts, which are usually invisible to him. He recalls seeing that one person had asked, “I caught my wife cheating on me—how should I respond?” Another, more troublingly, had asked, “Can I beat my wife?”

Khan was pleased with the system’s responses (it urged discussion and nonviolence on both counts), but the experience underscored the ethical gravity behind his undertaking."
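The excerpt describes bots "trained on religious texts," but whether any given product fine-tunes a model or retrieves passages at answer time is not public. What follows is a deliberately simplified, hypothetical Python sketch of the retrieval-grounded variant: the corpus, the word-overlap retriever, and the prompt format are all invented stand-ins (a real system would use an embedding index and an actual language-model call).

import re

# Hypothetical stand-in for a scripture corpus; real systems index full texts.
CORPUS = [
    "Passage A: a teaching on patience and endurance in hardship.",
    "Passage B: a teaching on charity toward neighbors and strangers.",
    "Passage C: a teaching on forgiveness and reconciliation.",
]

def words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, corpus, k=1):
    # Naive word-overlap ranking; a production bot would use embeddings.
    return sorted(corpus, key=lambda p: len(words(question) & words(p)),
                  reverse=True)[:k]

def build_prompt(question, corpus):
    # The grounded prompt a language model would receive, instead of
    # answering from its own weights alone.
    context = "\n".join(retrieve(question, corpus))
    return ("Answer only from the passages below, citing them.\n\n"
            f"{context}\n\nQuestion: {question}")

print(build_prompt("What does it say about forgiveness?", CORPUS))

The hallucination risk the theologians worry about lives in the gap this sketch elides: the model is merely asked to stick to the retrieved passages, and nothing guarantees it will.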

Tuesday, August 25, 2020

This Guy is Suing the Patent Office for Deciding an AI Can't Invent Things; Vice, August 24, 2020

Todd Feathers, Vice; This Guy is Suing the Patent Office for Deciding an AI Can't Invent Things

The USPTO rejected two patent applications written by a "creativity engine" named DABUS. Now a lawsuit raises fundamental questions about what it means to be creative.

"A computer scientist who created an artificial intelligence system capable of generating original inventions is suing the US Patent and Trademark Office (USPTO) over its decision earlier this year to reject two patent applications which list the algorithmic system, known as DABUS, as the inventor.

The lawsuit is the latest step in an effort by Stephen Thaler and an international group of lawyers and academics to win inventorship rights for non-human AI systems, a prospect that raises fundamental questions about what it means to be creative and also carries potentially paradigm-shifting implications for certain industries."

Thursday, July 30, 2020

Study: Only 18% of data science students are learning about AI ethics; TNW, July 3, 2020

Thomas Macaulay, TNW; Study: Only 18% of data science students are learning about AI ethics
The neglect of AI ethics extends from universities to industry

"At least we can rely on universities to teach the next generation of computer scientists to make. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.

Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.

Notably, the worryingly low figures aren’t due to a lack of interest. Nearly half of respondents said the social impacts of bias or privacy were the “biggest problem to tackle in the AI/ML arena today.” But those concerns clearly aren’t reflected in their curricula."

Wednesday, July 15, 2020

AI gatekeepers are taking baby steps toward raising ethical standards; Quartz, June 26, 2020

Nicolás Rivero, Quartz; AI gatekeepers are taking baby steps toward raising ethical standards


"This year, for the first time, major AI conferences—the gatekeepers for publishing research—are forcing computer scientists to think about those consequences.

The Annual Conference on Neural Information Processing Systems will require a “broader impact statement” addressing the effect a piece of research might have on society. The Conference on Empirical Methods in Natural Language Processing will begin rejecting papers on ethical grounds. Others have emphasized their voluntary guidelines.

The new standards follow the publication of several ethically dubious papers. Microsoft collaborated with researchers at Beihang University to algorithmically generate fake comments on news stories. Harrisburg University researchers developed a tool to predict the likelihood someone will commit a crime based on their face. Researchers clashed on Twitter over the wisdom of publishing these and other papers.

“The research community is beginning to acknowledge that we have some level of responsibility for how these systems are used,” says Inioluwa Raji, a tech fellow at NYU’s AI Now Institute. Scientists have an obligation to think about applications and consider restricting research, she says, especially in fields like facial recognition with a high potential for misuse."

Thursday, February 13, 2020

How To Teach Artificial Intelligence; Forbes, February 12, 2020

Tom Vander Ark, Forbes; How To Teach Artificial Intelligence

"Artificial intelligence—code that learns—is likely to be humankind’s most important invention. It’s a 60-year-old idea that took off five years ago when fast chips enabled massive computing and sensors, cameras, and robots fed data-hungry algorithms...

A World Economic Forum report indicated that 89% of U.S.-based companies are planning to adopt user and entity big data analytics by 2022, while more than 70% want to integrate the Internet of Things, explore web and app-enabled markets, and take advantage of machine learning and cloud computing.

Given these important and rapid shifts, it’s a good time to consider what young people need to know about AI and information technology. First, everyone needs to be able to recognize AI and its influence on people and systems, and be proactive as a user and citizen. Second, everyone should have the opportunity to use AI and big data to solve problems. And third, young people interested in computer science as a career should have a pathway for building AI...

The MIT Media Lab developed a middle school AI+Ethics course that hits many of these learning objectives. It was piloted by Montour Public Schools outside of Pittsburgh, Pennsylvania, which has incorporated the three-day course in its media arts class."

Wednesday, November 6, 2019

How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019

David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness

"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI. 

Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."

Monday, September 16, 2019

Maths and tech specialists need Hippocratic oath, says academic; The Guardian, August 16, 2019

Ian Sample, The Guardian; Maths and tech specialists need Hippocratic oath, says academic

"“We need a Hippocratic oath in the same way it exists for medicine,” Fry said. “In medicine, you learn about ethics from day one. In mathematics, it’s a bolt-on at best. It has to be there from day one and at the forefront of your mind in every step you take.”...

The genetics testing firm 23andMe was a case in point, she said.

“We literally hand over our most private data, our DNA, but we’re not just consenting for ourselves, we are consenting for our children, and our children’s children. Maybe we don’t live in a world where people are genetically discriminated against now, but who’s to say in 100 years that we won’t? And we are paying to add our DNA to that dataset.”"

Thursday, September 5, 2019

Teaching ethics in computer science the right way with Georgia Tech's Charles Isbell; TechCrunch, September 5, 2019

Greg Epstein, TechCrunch; Teaching ethics in computer science the right way with Georgia Tech's Charles Isbell

"The new fall semester is upon us, and at elite private colleges and universities, it’s hard to find a trendier major than Computer Science. It’s also becoming more common for such institutions to prioritize integrating ethics into their CS studies, so students don’t just learn about how to build software, but whether or not they should build it in the first place. Of course, this begs questions about how much the ethics lessons such prestigious schools are teaching are actually making a positive impression on students.

But at a time when demand for qualified computer scientists is skyrocketing around the world and far exceeds supply, another kind of question might be even more important: Can computer science be transformed from a field largely led by elites into a profession that empowers vastly more working people, and one that trains them in a way that promotes ethics and an awareness of their impact on the world around them?

Enter Charles Isbell of Georgia Tech, a humble and unassuming star of inclusive and ethical computer science. Isbell, a longtime CS professor at Georgia Tech, enters this fall as the new Dean and John P. Imlay Chair of Georgia Tech’s rapidly expanding College of Computing."

Monday, January 28, 2019

Embedding ethics in computer science curriculum: Harvard initiative seen as a national model; Harvard, John A. Paulson School of Engineering and Applied Sciences, January 28, 2019

Paul Karoff, Harvard, John A. Paulson School of Engineering and Applied Sciences; Embedding ethics in computer science curriculum: Harvard initiative seen as a national model

"Barbara Grosz has a fantasy that every time a computer scientist logs on to write an algorithm or build a system, a message will flash across the screen that asks, “Have you thought about the ethical implications of what you’re doing?”
 
Until that day arrives, Grosz, the Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is working to instill in the next generation of computer scientists a mindset that considers the societal impact of their work, and the ethical reasoning and communications skills to do so.

“Ethics permeates the design of almost every computer system or algorithm that’s going out in the world,” Grosz said. “We want to educate our students to think not only about what systems they could build, but whether they should build those systems and how they should design those systems.”"

Wednesday, March 28, 2018

Cambridge Analytica controversy must spur researchers to update data ethics; Nature, March 27, 2018

Editorial, Nature; Cambridge Analytica controversy must spur researchers to update data ethics

"Ethics training on research should be extended to computer scientists who have not conventionally worked with human study participants.

Academics across many fields know well how technology can outpace its regulation. All researchers have a duty to consider the ethics of their work beyond the strict limits of law or today’s regulations. If they don’t, they will face serious and continued loss of public trust."