Showing posts with label research. Show all posts

Friday, October 4, 2024

I Quit Teaching Because of ChatGPT; Time, September 30, 2024

Victoria Livingstone, Time; I Quit Teaching Because of ChatGPT

"Students who outsource their writing to AI lose an opportunity to think more deeply about their research. In a recent article on art and generative AI, author Ted Chiang put it this way: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.” Chiang also notes that the hundreds of small choices we make as writers are just as important as the initial conception. Chiang is a writer of fiction, but the logic applies equally to scholarly writing. Decisions regarding syntax, vocabulary, and other elements of style imbue a text with meaning nearly as much as the underlying research...

Generative AI is, in some ways, a democratizing tool...

The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from mechanical activities or assigning simple summaries. They will find ways to encourage students to think critically and learn that writing is a way of generating ideas, revealing contradictions, and clarifying methodologies.

However, those lessons require that students be willing to sit with the temporary discomfort of not knowing. Students must learn to move forward with faith in their own cognitive abilities as they write and revise their way into clarity. With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing."

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both [1,2]."

Tuesday, June 27, 2023

A Dishonesty Expert Stands Accused of Fraud. Scholars Who Worked With Her Are Scrambling.; The Chronicle of Higher Education, June 22, 2023

Nell Gluckman, The Chronicle of Higher Education; A Dishonesty Expert Stands Accused of Fraud. Scholars Who Worked With Her Are Scrambling.

"To Maurice Schweitzer, a University of Pennsylvania professor, it seemed logical to team up with Francesca Gino, a rising star at Harvard Business School. They were both fascinated by the unseemly side of human behavior — misleading, cheating, lying in order to profit — and together, they published eight studies over nearly a decade.

Now, Schweitzer wonders if he was the one being deceived."

Thursday, June 8, 2023

How ethics is becoming a key part of research in tech; The Stanford Daily, June 7, 2023

Cassandra Huff, The Stanford Daily; How ethics is becoming a key part of research in tech

"Building off the IRB model, in 2020 the Ethics in Society Review (ESR) board was created under the McCoy Family Center, the Center for Advanced Study in Behavioral Sciences (CASBS) and Human-Centered AI (HAI) to make ethics a core part of research in computer science. The ESR acts similarly to the IRB by examining ethical concerns to minimize potential harm of the research before a project is approved for funding.

This process is integrated into grant proposal applications in HAI. After HAI reviews the technical merits of a proposal, it is handed off to the ESR, which assigns an interdisciplinary panel of faculty to review each of them. This panel acts as advisors on ethical issues to identify challenges and provide additional guidance on the ethical component of the research. Once completed, the panel will either release research funds, or recommend more iterations of the review process.

The ESR is not meant to determine whether the proposal should be funded, but rather to analyze the unintended consequences of the research prior to the start of the project. In discussing what ESR does, Betsy Rajala, Program Director at CASBS said, “Everytime you touch research these questions come up and [it’s better to] think about them sooner rather than later.”"

Friday, April 16, 2021

Scientists Create Early Embryos That Are Part Human, Part Monkey; NPR, April 15, 2021

NPR; Scientists Create Early Embryos That Are Part Human, Part Monkey

""This work is an important step that provides very compelling evidence that someday when we understand fully what the process is we could make them develop into a heart or a kidney or lungs," said Dr. Jeffrey Platt, a professor of microbiology and immunology at the University of Michigan, who is doing related experiments but was not involved in the new research.

But this type of scientific work and the possibilities it opens up raises serious questions for some ethicists."

Tuesday, March 31, 2020

A Revolution in Science Publishing, or Business as Usual?; UNDARK, March 30, 2020

Michael Schulson, UNDARK; A Revolution in Science Publishing, or Business as Usual?

"Some advocates see corporate open-access as a pragmatic way of opening up research to the masses. But others see the new model as a corruption of the original vision — one that will continue to funnel billions of dollars into big publishing companies, marginalize scientists in lower income countries, and fail to fix deeper, systemic problems in scientific publishing.

As it stands, all trends point to an open-access future. The question now is what kind of open-access model it will be — and what that future may mean for the way new science gets evaluated, published, and shared. “We don’t know why we should accept that open access is a market,” said Dominique Babini, the open-access adviser to the Latin American Council of Social Sciences and a prominent critic of commercial open-access models. “If knowledge is a human right, why can’t we manage it as a commons, in collaborative ways managed by the academic community, not by for-profit initiatives?”"

Friday, March 29, 2019

A study of ethicists finds they’re no more ethical than the rest of us (and no better at calling their mothers); Quartz, March 27, 2019

Olivia Goldhill, Quartz; A study of ethicists finds they’re no more ethical than the rest of us (and no better at calling their mothers)

"For all their pontificating and complex moral theories, ethicists are just as disappointingly flawed as the rest of humanity. A study of 417 professors published last week in Philosophical Psychology found that, though the 151 ethics professors expressed stricter moral views, they were no better at behaving ethically."

Wednesday, March 6, 2019

Open data needs ethical, efficient management; University of Cape Town News, March 6, 2019

Helen Swingler, University of Cape Town News; Open data needs ethical, efficient management

"Ethics in data management

Niklas Zimmer, manager of digital services at UCT Libraries, said that ethical management of data is key. Several of the lightning presentations made at the event underscored this.
UCT Gender Health and Justice Research Unit (GHJRU) research officer Kristen Daskilewicz cited an important example when she said the use of open data is not always appropriate for research where there are heightened safety concerns.
Her example described a collaborative two-year cross-sectional research project on LGBTI health, safety and other rights that the unit undertook on behalf of the Southern and Eastern Africa Research Collective on Health (SEARCH). SEARCH is a collective of 23 civil society organisations in nine countries.
The project participants had to be “very careful” with data collection and dissemination in the study countries, particularly those where aspects of same-sex relationships have been criminalised. There were concerns about protecting the survey participants and the unit’s civil society partners, who were the data collectors."

Wednesday, February 13, 2019

Defying Parents, A Teen Decides To Get Vaccinated; NPR, February 9, 2019

Amanda Morris and Scott Simon, NPR; Defying Parents, A Teen Decides To Get Vaccinated

"Ethan Lindenberger is getting vaccinated for well, just about everything.

He's 18 years old, but had never received vaccines for diseases like hepatitis, polio, measles, mumps, rubella, or the chickenpox.

Lindenberger's mother, Jill Wheeler, is anti-vaccine. He said she has been influenced by online misinformation, such as a debunked study that claimed certain vaccines were linked with autism, or a theory that vaccines cause brain damage. Incorrect ideas like these have spread like wildfire, so much so that the CDC has explicitly tried to combat them, posting pages like "Vaccines Do Not Cause Autism.""

Facebook under pressure to halt rise of anti-vaccination groups; The Guardian, February 12, 2019

Ed Pilkington and Jessica Glenza, The Guardian; Facebook under pressure to halt rise of anti-vaccination groups

"Dr Noni MacDonald, a professor of pediatrics at Dalhousie University in Halifax, Nova Scotia, Canada, who has worked as an expert adviser to the WHO on immunization, questioned why Facebook was unrestrained by the stringent controls against misinformation put on drug companies. “We don’t let big pharma or big food or big radio companies do this, so why should we let this happen in this venue?”

She added: “When a drug company puts a drug up in the formal media, they can’t tell you something false or they will be sued. So why is this different? Why is this allowed?”"

Tuesday, February 12, 2019

A.I. Shows Promise Assisting Physicians; The New York Times, February 11, 2019

Cade Metz, The New York Times; A.I. Shows Promise Assisting Physicians

"Each year, millions of Americans walk out of a doctor’s office with a misdiagnosis. Physicians try to be systematic when identifying illness and disease, but bias creeps in. Alternatives are overlooked.

Now a group of researchers in the United States and China has tested a potential remedy for all-too-human frailties: artificial intelligence.

In a paper published on Monday in Nature Medicine, the scientists reported that they had built a system that automatically diagnoses common childhood conditions — from influenza to meningitis — after processing the patient’s symptoms, history, lab results and other clinical data."

Thursday, January 31, 2019

Facebook has been paying teens $20 a month for access to all of their personal data; Vox, January 30, 2019

Kaitlyn Tiffany, Vox; Facebook has been paying teens $20 a month for access to all of their personal data

"The shocking “research” program has restarted a long-standing feud between Facebook and Apple.

"Facebook, now entering a second year of huge data-collection scandals, can’t really afford this particular news story. However, it’s possible the company just weighed the risks of public outrage against the benefits of the data and made a deliberate choice: Knowing which apps people are using, how they’re using them, and for how long is extremely useful information for Facebook."

Thursday, January 24, 2019

This Time It’s Russia’s Emails Getting Leaked; The Daily Beast, January 24, 2019

Kevin Poulsen, The Daily Beast; This Time It’s Russia’s Emails Getting Leaked

"Russian oligarchs and Kremlin apparatchiks may find the tables turned on them later this week when a new leak site unleashes a compilation of hundreds of thousands of hacked emails and gigabytes of leaked documents. Think of it as WikiLeaks, but without Julian Assange’s aversion for posting Russian secrets.

The site, Distributed Denial of Secrets, was founded last month by transparency activists. Co-founder Emma Best said the Russian leaks, slated for release on Friday, will bring into one place dozens of different archives of hacked material that at best has been difficult to locate, and in some cases appears to have disappeared entirely from the web...

Distributed Denial of Secrets, or DDoS, is a volunteer effort that launched last month. Its objective is to provide researchers and journalists with a central repository where they can find the terabytes of hacked and leaked documents that are appearing on the internet with growing regularity. The site is a kind of academic library or a museum for leak scholars, housing such diverse artifacts as the files North Korea stole from Sony in 2014, and a leak from the Special State Protection Service of Azerbaijan."

Friday, November 23, 2018

Addressing the Crisis in Academic Publishing; Inside Higher Ed, November 5, 2018

Hans De Wit and Phillip G. Altbach and Betty Leask, Inside Higher Ed; Addressing the Crisis in Academic Publishing

[Kip Currier: Important reading and a much-needed perspective to challenge the status quo!

I just recently was expressing aspects of this article to an academic colleague: For too long the dominant view of what constitutes "an academic" has been too parochial and prescriptive.

The academy should and must expand its notions of teaching, research, and service, in order to be more truly inclusive and acknowledge diverse kinds of knowledge and humans extant in our world.]

"We must find ways to ensure that equal respect, recognition and reward is given to excellence in teaching, research and service by institutional leaders, governments, publishers, university ranking and accreditation schemes."

Thursday, October 4, 2018

The push to create AI-friendly ethics codes is stripping all nuance from morality; Quartz, October 4, 2018

Olivia Goldhill, Quartz; The push to create AI-friendly ethics codes is stripping all nuance from morality

"A paper led by Veljko Dubljević, neuroethics researcher at North Carolina State University, published yesterday (Oct. 2) in PLOS ONE, claims to establish not just the answer to one ethical question, but the entire groundwork for how moral judgements are made.

According to the paper’s “Agent Deed Consequence model,” three things are taken into account when making a moral decision: the person doing the action, the moral action itself, and the consequences of that action. To test this theory, the researchers created moral scenarios that varied details about the agent, the action, and the consequences."

Saturday, September 1, 2018

Letter to the Editor: "Get the Facts on Readers", Emailed to The Pittsburgh Post-Gazette; Kip Currier, September 1, 2018


[Kip Currier: I'm copying below a Letter to the Editor--titled "Get the Facts on Readers"--that I emailed today (September 1, 2018) to The Pittsburgh Post-Gazette. For additional background, see this story.]

Get the Facts on Readers

Dear Editor,

The Post-Gazette is running a multi-platform ad campaign that weaponizes variations of the line “I will never go digital” to make fun of older readers, depicted as fuddy-duddy Luddites. In one particularly offensive TV spot, a digitally-savvy granddaughter openly mocks her grandmother who prefers print.

Research refutes the ageist “messages” in the P-G’s divisive marketing campaign. Many adult U.S. readers—of all ages—are hybrid readers who want the choice of information in both print and digital formats.

As evidence, take a look at some of the key findings from a Jan. 3-10, 2018 national survey of 2,002 U.S. adults, reported by the well-respected, non-partisan Pew Research Center:

Despite some growth in certain digital formats, it remains the case that relatively few Americans consume digital books (which include audiobooks and e-books) to the exclusion of print. Some 39% of Americans say they read only print books, while 29% read in these digital formats and also read print books.

And the coup de grace to the P-G’s graceless stereotyping:

Some demographic groups are more likely than others to be digital-only book readers, but in general this behavior is relatively rare across a wide range of demographics. For example, 10% of 18- to 29-year-olds only read books in digital formats, compared with 5% of those ages 50-64 and 4% of those 65 and older.

The P-G’s preening effort to digitally divide users borders on farce, given that P-G writers and staff repeatedly concede the deplorable state of the newspaper’s digital search and archival features. 

The P-G’s tagline is “One of America’s Great Newspapers”. Unfortunately, for a variety of reasons, that tagline is not supported by facts. So, here’s a “message” for P-G ownership:

Hire some of the Pittsburgh region’s highly educated information professionals to help the P-G become a bona fide leader in print and digital content, search, and delivery. Give the Pittsburgh region a truly great newspaper that inclusively serves and respects all of its readers and residents.


James “Kip” Currier

Mt. Lebanon

Thursday, July 19, 2018

Shadow Politics: Meet the Digital Sleuth Exposing Fake News; Wired, July 18, 2018

Issie Lapowsky, Wired; Shadow Politics: Meet the Digital Sleuth Exposing Fake News

"After about 36 hours of work, during which his software crashed dozens of times under the weight of all that data, he was able to map out these links, transforming the list into an impossibly intricate data visualization. “It was a picture of the entire ecosystem of misinformation a few days after the election,” Albright says, still in awe of his discovery. “I saw these insights I’d never thought of.”

And smack in the center of the monstrous web, was a giant node labeled YouTube."

Monday, June 4, 2018

Stanford to step-up teaching of ethics in technology; Financial Times, June 3, 2018

Financial Times; Stanford to step-up teaching of ethics in technology

"The university at the heart of Silicon Valley is to inject ethics into its technology teaching and research amid growing criticism of the excesses of the industry it helped spawn.

The board of Stanford University, one of the world’s richest higher education institutions with an endowment of $27bn, will meet this month to agree funding and a plan to implement the findings of an internal review that recommends a new initiative focused on “ethics, society and technology” and improved access to those on lower incomes."

Thursday, May 31, 2018

Why Are Academics Upset With Facebook's New Privacy Rules?; Forbes, May 4, 2018

Kalev Leetaru, Forbes; Why Are Academics Upset With Facebook's New Privacy Rules?

"Putting this all together, there is something inherently wrong with a world in which academics condemn Facebook for conducting consent-free research on its users, only to turn around and condemn the company again when it tries to institute greater privacy protections that would prevent academics from doing the same, all while those very same academics partner with Facebook to create a new research initiative that entirely removes consent from the equation and where ethical considerations are unilaterally TBD, to be figured out after researchers decide what they want to do with two billion people’s private information. Cambridge University’s ethics panel gives us hope that there are still some institutions that believe in the ethical protections that took decades to build, only to fall like dominoes in the digital “big data” era. In the end, it is not just the social media giants and private companies rushing to commercialize our digital selves and stave off any discussion of privacy protections – the academic community is running right alongside helping to clear the way."

Friday, May 25, 2018

Schools See Steep Drop in Librarians, New Analysis Finds; Education Week, May 16, 2018

Education Week; Schools See Steep Drop in Librarians, New Analysis Finds

"“When we’ve talked to districts that have chosen to put resources elsewhere, we really do see more than one who have then come back and wanted to reinstate [the librarian],” said Steven Yates, the president of the American Association of School Librarians. “Not only do you lose the person curating the resources for informational and pleasure reading, but you lose the person who can work with the students on the ethical side—how do you cite? How do you determine a credible source of information?”"