Showing posts with label MIT.

Tuesday, July 23, 2024

The Data That Powers A.I. Is Disappearing Fast; The New York Times, July 19, 2024

Kevin Roose, The New York Times; The Data That Powers A.I. Is Disappearing Fast

"For years, the people building powerful artificial intelligence systems have used enormous troves of text, images and videos pulled from the internet to train their models.

Now, that data is drying up.

Over the past year, many of the most important web sources used for training A.I. models have restricted the use of their data, according to a study published this week by the Data Provenance Initiative, an M.I.T.-led research group.

The study, which looked at 14,000 web domains that are included in three commonly used A.I. training data sets, discovered an “emerging crisis in consent,” as publishers and online platforms have taken steps to prevent their data from being harvested.

The researchers estimate that in the three data sets — called C4, RefinedWeb and Dolma — 5 percent of all data, and 25 percent of data from the highest-quality sources, has been restricted. Those restrictions are set up through the Robots Exclusion Protocol, a decades-old method for website owners to prevent automated bots from crawling their pages using a file called robots.txt."
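The article mentions the Robots Exclusion Protocol only in passing, so here is a minimal sketch of how those restrictions work in practice, assuming Python's standard urllib.robotparser; the crawler names GPTBot and CCBot are illustrative stand-ins for AI user agents a publisher might block, not details taken from the study.

from urllib.robotparser import RobotFileParser

# Example robots.txt rules a publisher might serve: block two AI crawlers,
# allow everything else. (Illustrative content, not from the study.)
sample_robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(sample_robots_txt)

# can_fetch(user_agent, url) returns False when the rules block that agent.
print(parser.can_fetch("GPTBot", "https://example.com/article"))         # False
print(parser.can_fetch("SomeSearchBot", "https://example.com/article"))  # True

Nothing in robots.txt technically prevents crawling; it is a convention that well-behaved crawlers are expected to check before fetching pages, which is why the study frames the shift as a question of consent.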

Friday, February 18, 2022

The government dropped its case against Gang Chen. Scientists still see damage done; WBUR, February 16, 2022

Max Larkin, WBUR; The government dropped its case against Gang Chen. Scientists still see damage done

"When federal prosecutors dropped all charges against MIT professor Gang Chen in late January, many researchers rejoiced in Greater Boston and beyond.

Chen had spent the previous year fighting charges that he had lied and omitted information on U.S. federal grant applications. His vindication was a setback for the "China Initiative," a controversial Trump-era legal campaign aimed at cracking down on the theft of American research and intellectual property by the Chinese government.

Researchers working in the United States say the China Initiative has harmed both their fellow scientists and science itself — as a global cooperative endeavor. But as U.S.-China tensions remain high, the initiative remains in place." 

Wednesday, May 6, 2020

For Jeffrey Epstein, MIT Was Just a Safety School; Wired, May 4, 2020

Noam Cohen, Wired; For Jeffrey Epstein, MIT Was Just a Safety School

"The MIT and Harvard reports are most illuminating when read together. They overlap in revealing ways and share certain observations...

In part, we can chalk up the difference to bad timing. Harvard came first in Epstein’s mind, which, I suppose, says something about its reputation among status-obsessed faux-intellectuals. When Harvard was accepting Epstein’s donations, it was dealing with a disreputable character; MIT, by contrast, was dealing with a convicted sex offender...

What remains is the hard-baked irony that MIT, which got relatively little from Epstein, drew the bad headlines; whereas Harvard, which took 10 times as much of Epstein’s money, could almost claim its hands were clean. MIT announced last year that it would be donating to a charity benefiting sexual-abuse survivors all of its Epstein monies ($850,000 collected before and after his conviction). Harvard on Friday announced that it would be donating to organizations that support victims of human trafficking and sexual assault exactly what was left over from Epstein’s multimillion-dollar donations: $200,937."

Wednesday, September 11, 2019

How an Élite University Research Center Concealed Its Relationship with Jeffrey Epstein; The New Yorker, September 6, 2019

Ronan Farrow, The New Yorker; How an Élite University Research Center Concealed Its Relationship with Jeffrey Epstein

New documents show that the M.I.T. Media Lab was aware of Epstein’s status as a convicted sex offender, and that Epstein directed contributions to the lab far exceeding the amounts M.I.T. has publicly admitted.

"Current and former faculty and staff of the media lab described a pattern of concealing Epstein’s involvement with the institution. Signe Swenson, a former development associate and alumni coordinator at the lab, told me that she resigned in 2016 in part because of her discomfort about the lab’s work with Epstein. She said that the lab’s leadership made it explicit, even in her earliest conversations with them, that Epstein’s donations had to be kept secret...

Swenson said that, even though she resigned over the lab’s relationship with Epstein, her participation in what she took to be a coverup of his contributions has weighed heavily on her since. Her feelings of guilt were revived when she learned of recent statements from Ito and M.I.T. leadership that she believed to be lies. “I was a participant in covering up for Epstein in 2014,” she told me. “Listening to what comments are coming out of the lab or M.I.T. about the relationship—I just see exactly the same thing happening again.”"

The Moral Rot of the MIT Media Lab; Slate, September 8, 2019

Justin Peters, Slate; The Moral Rot of the MIT Media Lab

"Over the course of the past century, MIT became one of the best brands in the world, a name that confers instant credibility and stature on all who are associated with it. Rather than protect the inherent specialness of this brand, the Media Lab soiled it again and again by selling its prestige to banks, drug companies, petroleum companies, carmakers, multinational retailers, at least one serial sexual predator, and others who hoped to camouflage their avarice with the sheen of innovation. There is a big difference between taking money from someone like Epstein and taking it from Nike or the Department of Defense, but the latter choices pave the way for the former."

Wednesday, September 4, 2019

MIT developed a course to teach tweens about the ethics of AI; Quartz, September 4, 2019

Jenny Anderson, Quartz; MIT developed a course to teach tweens about the ethics of AI

"This summer, Blakeley Payne, a graduate student at MIT, ran a week-long course on ethics in artificial intelligence for 10-14 year olds. In one exercise, she asked the group what they thought YouTube’s recommendation algorithm was used for.

“To get us to see more ads,” one student replied.

“These kids know way more than we give them credit for,” Payne said.

Payne created an open source, middle-school AI ethics curriculum to make kids aware of how AI systems mediate their everyday lives, from YouTube and Amazon’s Alexa to Google search and social media. By starting early, she hopes the kids will become more conscious of how AI is designed and how it can manipulate them. These lessons also help prepare them for the jobs of the future, and potentially become AI designers rather than just consumers."

Tuesday, March 19, 2019

Ethics, Computing, and AI: Perspectives from MIT; MIT News, March 18, 2019

MIT News; Ethics, Computing, and AI: Perspectives from MIT

Faculty representing all five MIT schools offer views on the ethical and societal implications of new technologies.

"The MIT Stephen A. Schwarzman College of Computing will reorient the Institute to bring the power of computing and AI to all fields at MIT; allow the future of computing and AI to be shaped by all MIT disciplines; and advance research and education in ethics and public policy to help ensure that new technologies benefit the greater good.

To support ongoing planning for the new college, Dean Melissa Nobles invited faculty from all five MIT schools to offer perspectives on the societal and ethical dimensions of emerging technologies. This series presents the resulting commentaries — practical, inspiring, concerned, and clear-eyed views from an optimistic community deeply engaged with issues that are among the most consequential of our time. 

The commentaries represent diverse branches of knowledge, but they sound some common themes, including: the vision of an MIT culture in which all of us are equipped and encouraged to discern the impact and ethical implications of our endeavors."

Wednesday, March 6, 2019

Making a path to ethical, socially-beneficial artificial intelligence; MIT News, March 5, 2019

School of Humanities, Arts, and Social Sciences, MIT News; Making a path to ethical, socially-beneficial artificial intelligence

Leaders from government, philanthropy, academia, and industry say collaboration is key to make sure artificial intelligence serves the public good.

"Many speakers at the three-day celebration, which was held on Feb. 26-28, called for an approach to education, research, and tool-making that combines collective knowledge from the technology, humanities, arts, and social science fields, throwing the double-edged promise of the new machine age into stark relief...

The final panel was “Computing for the People: Ethics and AI,” moderated by New York Times columnist Thomas Friedman. In a conversation afterward, Nobles also emphasized that the goal of the new college is to advance computation and to give all students a greater “awareness of the larger political, social context in which we’re all living.” That is the MIT vision for developing “bilinguals” — engineers, scholars, professionals, civic leaders, and policymakers who have both superb technical expertise and an understanding of complex societal issues gained from study in the humanities, arts, and social science fields.

The perils of speed and limited perspective
 
The five panelists on “Computing for the People” — representing industry, academia, government, and philanthropy — contributed particulars to the vision of a society infused with those bilinguals, and attested to the perils posed by an overly-swift integration of advanced computing into all domains of modern existence.
 
"I think of AI as jetpacks and blindfolds that will send us careening in whatever direction we're already headed," said Joi Ito, director of the MIT Media Lab. "It's going to make us more powerful but not necessarily more wise."


The key problem, according to Ito, is that machine learning and AI have to date been exclusively the province of engineers, who tend to talk only with each other. This means they can deny accountability when their work proves socially, politically, or economically destructive. "Asked to explain their code, technological people say: ‘We're just technical people, we don't deal with racial or political problems,’" Ito said." 

Tuesday, March 22, 2011

Gains, and Drawbacks, for Female Professors; New York Times, March 21, 2011

Kate Zernike, New York Times; Gains, and Drawbacks, for Female Professors

"Despite an effort to educate colleagues about bias in letters of recommendation for tenure, those for men tend to focus on intellect while those for women dwell on temperament.

“To women in my generation, these residual issues can sound small because we see so much progress,” said Nancy H. Hopkins, a molecular biologist who instigated the first report. “But they’re not small; they still create an unequal playing field for women — not just at universities, and certainly not just at M.I.T. And they’re harder to change because they are a reflection of where women stand in society.”"