
Thursday, November 21, 2024

AI task force proposes ‘artificial intelligence, ethics and society’ minor in BCLA; The Los Angeles Loyolan, November 18, 2024

Coleman Standifer, asst. managing editor; Grace McNeill, asst. managing editor, The Los Angeles Loyolan; AI task force proposes ‘artificial intelligence, ethics and society’ minor in BCLA

"The Bellarmine College of Liberal Arts (BCLA) is taking steps to further educate students on artificial intelligence (AI) through the development of an “artificial intelligence, ethics and society," spearheaded by an AI task force. This proposed addition comes two years after the widespread adoption of OpenAI's ChatGPT in classrooms.

Prior to stepping into his role as the new dean of BCLA, Richard Fox, Ph.D., surveyed BCLA’s 175 faculty about how the college could best support their teaching. Among the top three responses from faculty were concerns about navigating AI in the classroom, Fox told the Loyolan.

As of now, BCLA has no college-wide policy on AI usage and allows instructors to determine how AI is — or is not — utilized in the classroom.

“We usually don't dictate how people teach. That is the essence of academic freedom," said Fox. “What I want to make sure we're doing is we're preparing students to enter a world where they have these myriad different expectations on writing from their faculty members.”

Headed by Roberto Dell’Oro, Ph.D., professor of theological studies and director of the Bioethics Institute, the task force met over the summer and culminated in a proposal for a minor in BCLA. The proposal — which Dell'Oro sent to the Loyolan — was delivered to Fox in August and now awaits a formal proposal to be drawn up before approval, according to Dell’Oro.

The minor must then be approved by the Academic Planning and Review Committee (APRC), a committee tasked with advising Provost Thomas Poon, Ph.D., on evaluating proposals for new programs.

According to the proposal, the minor aims “to raise awareness about the implications of AI technologies, emphasize the importance of ethical considerations in its development and promote interdisciplinary research at the intersection of AI, ethics, and society.”

The minor — if approved by the APRC — would have “four or five classes,” with the possibility of having an introductory course taught by faculty in the Seaver College of Science and Engineering, according to the proposal.

Most of the sample courses in the proposal are rooted in philosophy and ethics, such as “AI, Robots, and the Philosophy of the Person,” “Could Robots Have Rights?” and “Introduction to Bioethics.” According to Dell’Oro, the hope is to have courses available for enrollment by Fall 2025."

Monday, July 15, 2024

One-third of US military could be robotic by 2039: Milley; Military Times, July 14, 2024

Military Times; One-third of US military could be robotic by 2039: Milley

"The 20th chairman of the Joint Chiefs of Staff believes growing artificial intelligence and unmanned technology could lead to robotic military forces in the future.

“Ten to fifteen years from now, my guess is a third, maybe 25% to a third of the U.S. military will be robotic,” said retired Army Gen. Mark Milley at an Axios event Thursday launching the publication’s Future of Defense newsletter.

He noted these robots could be commanded and controlled by AI systems."

Saturday, March 9, 2024

The robots are coming. And that’s a good thing.; MIT Technology Review, March 5, 2024

MIT Technology Review; The robots are coming. And that’s a good thing.

"In this excerpt from the new book, The Heart and the Chip: Our Bright Future with Robots, CSAIL Director Daniela Rus explores how robots can extend the reach of human capabilities...

These examples of how we can pair the heart with the chip to extend our perceptual reach range from the whimsical to the profound. And the potential for other applications is vast. Environmental and government organizations tasked with protecting our landscapes could dispatch eyes to autonomously monitor land for illegal deforestation without putting people at risk. Remote workers could use robots to extend their hands into dangerous environments, manipulating or moving objects at hazardous nuclear sites. Scientists could peek or listen into the secret lives of the many amazing species on this planet. Or we could harness our efforts to find a way to remotely experience Paris or Tokyo or Tangier. The possibilities are endless and endlessly exciting. We just need effort, ingenuity, strategy, and the most precious resource of all.

No, not funding, although that is helpful.

We need time."

Thursday, November 9, 2023

How robots can learn to follow a moral code; Nature, October 26, 2023

Neil Savage, Nature; How robots can learn to follow a moral code

"Many computer scientists are investigating whether autonomous systems can be taught to make ethical choices, or to promote behaviour that aligns with human values. Could a robot that provides care, for example, be trusted to make choices in the best interests of its charges? Or could an algorithm be relied on to work out the most ethically appropriate way to distribute a limited supply of transplant organs? Drawing on insights from cognitive science, psychology and moral philosophy, computer scientists are beginning to develop tools that can not only make AI systems behave in specific ways, but also perhaps help societies to define how an ethical machine should act...

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple."
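
The community-tuning approach Vosoughi describes amounts, at its core, to ordinary supervised fine-tuning: gather a community's own examples of moral judgments and continue training a language model on them. A minimal sketch of that idea follows, assuming the Hugging Face transformers and datasets libraries; the model (gpt2), the two toy examples, and the hyperparameters are illustrative stand-ins, not details from the Nature article.

# A minimal sketch: fine-tune a small causal language model on
# community-provided examples of moral judgments so its outputs lean
# toward those values. Model, data, and settings are illustrative.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical community-supplied corpus: a statement plus the
# community's judgment of it. A real corpus would be far larger.
examples = [
    {"text": "Statement: Cutting the deli line.\nJudgment: wrong"},
    {"text": "Statement: Returning for a dropped spoon without requeuing.\nJudgment: acceptable"},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=64)
    # Standard causal-LM objective: predict the text itself. A careful
    # setup would also mask the padding positions out of the loss.
    enc["labels"] = [ids.copy() for ids in enc["input_ids"]]
    return enc

dataset = Dataset.from_list(examples).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="values-tuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
)
trainer.train()  # generations now skew toward the community's judgments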

Thursday, July 13, 2023

Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law; Forbes, July 10, 2023

Dr. Lance B. Eliot, Forbes; Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law

"What amount of due diligence is needed or required on the part of the user when it comes to generative AI and robots?

Nobody can as yet say for sure. Until we end up with legal cases and issues involving presumed harm, this is a gray area. For lawyers who want to get involved in AI and law, this is going to be an exciting and emerging set of legal challenges and legal puzzles that will undoubtedly arise as the use of generative AI becomes further ubiquitous and the advent of robots becomes affordable and practical in our daily lives.

You might also find of interest that some of the AI makers have contractual or licensing clauses stipulating that if you are using their generative AI and they get sued for something you did as a result of using their generative AI, you indemnify the AI maker and pledge to pay for their costs and expenses to fight the lawsuit, see my analysis at the link here. This could be daunting for you. Suppose that the house you were cooking in burns to the ground. The insurer sues the AI maker claiming that their generative AI was at fault. But you agreed, whether you know it or not, to the indemnification clause, thus the AI maker comes to you and says you need to pay for their defense.

Ouch."

Thursday, April 28, 2022

War ethics: Are drones in Ukraine a step toward robots that kill?; The Christian Science Monitor, April 28, 2022

The Christian Science Monitor; War ethics: Are drones in Ukraine a step toward robots that kill?

"Amid the bewildering array of brutality on and off the battlefields of Ukraine, military ethicists have been keeping a close eye on whether the war could also become a proving ground for drones that use artificial intelligence to decide whom to hurt."

Sunday, April 10, 2022

AI.Humanity Ethics Lecture Series will explore the ethics of artificial intelligence; Emory University, Emory News Center, April 5, 2022

Emory University, Emory News Center; AI.Humanity Ethics Lecture Series will explore the ethics of artificial intelligence

"As society increasingly relies on artificial intelligence (AI) technologies, how can ethically committed individuals and institutions articulate values to guide their development and respond to emerging problems? Join the Office of the Provost to explore the ethical implications of AI in a new AI.Humanity Ethics Lecture Series.

Over four weeks in April and May, world-renowned AI scholars will visit Emory to discuss the moral and social complexities of AI and how it may be shaped for the benefit of humanity. A reception will follow each lecture.

Matthias Scheutz: “Moral Robots? How to Make AI Agents Fit for Human Societies” 

Monday, April 11

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)

AI is different from other technologies in that it enables and creates machines that can perceive the world and act on it autonomously. We are on the verge of creating sentient machines that could significantly improve our lives and better human societies. Yet AI also poses dangers that are ours to mitigate. In this presentation, Scheutz will argue that AI-enabled systems — in particular, autonomous robots — must have moral competence: they need to be aware of human social and moral norms, be able to follow these norms and justify their decisions in ways that humans understand. Throughout the presentation, Scheutz will give examples from his work on AI robots and human-robot interaction to demonstrate a vision for ethical autonomous robots...

Seth Lazar: “The Nature and Justification of Algorithmic Power” 

Monday, April 18

Lecture at 4 p.m., reception at 5:30 p.m.

Convocation Hall — Community Room (210)

Algorithms increasingly mediate and govern our social relations. In doing so, they exercise a distinct kind of intermediary power: they exercise power over us; they shape power relations between us; and they shape our overarching social structures. Sometimes, when new forms of power emerge, our task is simply to eliminate them. However, algorithmic intermediaries can enable new kinds of human flourishing and could advance social structures that are otherwise resistant to progress. Our task, then, is to understand and diagnose algorithmic power and determine whether and how it can be justified. In this lecture, Lazar will propose a framework to guide our efforts, with particular attention to the conditions under which private algorithmic power either can, or must not, be tolerated.

Ifeoma Ajunwa: “The Unrealized Promise of Artificial Intelligence” 

Thursday, April 28

Lecture at 4 p.m., reception at 5:30 p.m.

Oxford Road Building — Presentation Room and Living Room/Patio

AI was forecast to revolutionize the world for the better. Yet this promise is still unrealized. Instead, there is a growing mountain of evidence that automated decision making is not revolutionary; rather, it has tended to replicate the status quo, including the biases embedded in our societal systems. The question, then, is what can be done? The answer is twofold: One part looks to what can be done to prevent automated decision making from enabling and obscuring human bias. The second looks toward proactive measures that could allow AI to work for the greater good...

Carissa Véliz: “On Privacy and Self-Presentation Online” 

Thursday, May 5

Lecture at 4 p.m. 

Online via Zoom 

A long tradition in philosophy and sociology considers self-presentation as the main reason why privacy is valuable, often equating control over self-presentation and privacy. Véliz argues that, even though control over self-presentation and privacy are tightly connected, they are not the same — and overvaluing self-presentation leads us to misunderstand the threat to privacy online. Véliz argues that to combat some of the negative trends we witness online, we need, on the one hand, to cultivate a culture of privacy, in contrast to a culture of exposure (for example, the pressure on social media to be on display at all times). On the other hand, we need to readjust how we understand self-presentation online."

Sunday, June 7, 2020

The Drones Were Ready for This Moment; The New York Times, May 23, 2020

The New York Times; The Drones Were Ready for This Moment

"Coronavirus has been devastating to humans, but may well prove a decisive step toward a long-prophesied Drone Age, when aerial robots begin to shed their Orwellian image as tools of war and surveillance and become a common feature of daily life, serving as helpers and, perhaps soon, companions.

“Robots are so often cast as the bad guys,” said Daniel H. Wilson, a former roboticist and the author of the 2011 science fiction novel “Robopocalypse.” “But what’s happening now is weirdly utopic, as opposed to dystopic. Robots are designed to solve problems that are dull, dirty and dangerous, and now we have a sudden global emergency in which the machines we’re used to fearing are uniquely well suited to swoop in and save the day.”"

Thursday, September 27, 2018

92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built; Forbes, September 26, 2018

John Koetsier, Forbes; 92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built

""Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence, said in a statement. “Organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm.’ They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.""

Wednesday, July 18, 2018

One Job AI Won't Replace? Chief Ethics Officer; Fortune, July 17, 2018

Robert Hackett, Fortune; One Job AI Won't Replace? Chief Ethics Officer

"We’ve heard the warnings: The robots are coming, and they’re coming for your job.

Whose roles will be safe as the usurper, artificial intelligence, enters the workforce? Jeetu Patel, chief product officer at Box, a cloud storage and file-sharing company, says the secure ones will be those who fine-tune the machines’ moral compasses.

“I think chief ethics officer will be a big role in the AI world,” Patel said at a breakfast roundtable at Fortune’s Brainstorm Tech conference in Aspen, Colo. on Tuesday morning. “Lots of jobs will be killed, but ethics jobs will move forward.”"

Monday, April 23, 2018

It’s Westworld. What’s Wrong With Cruelty to Robots?; The New York Times, April 23, 2018

Paul Bloom and Sam Harris, The New York Times; It’s Westworld. What’s Wrong With Cruelty to Robots?

"This is where actually watching “Westworld” matters. The pleasure of entertainment aside, the makers of the series have produced a powerful work of philosophy. It’s one thing to sit in a seminar and argue about what it would mean, morally, if robots were conscious. It’s quite another to witness the torments of such creatures, as portrayed by actors such as Evan Rachel Wood and Thandie Newton. You may still raise the question intellectually, but in your heart and your gut, you already know the answer."

Thursday, March 1, 2018

Professor Tells UN, Governments Of Coming “Tsunami” Of Data And Artificial Intelligence; Intellectual Property Watch, February 21, 2018

William New, Intellectual Property Watch; Professor Tells UN, Governments Of Coming “Tsunami” Of Data And Artificial Intelligence

"[Prof. Shmuel (Mooly) Eden of the University of Haifa, Israel] said this fourth revolution in human history is made up of four factors. First, computing power is at levels that were unimaginable. This power is what makes artificial intelligence now possible. The smartphone in your hand has 1,000 times the components of the first rocket to the moon, he said, which led to a chorus of “wows” from the audience.

Second is big data. Every time you speak on the phone or go on the internet, someone records it, he said. The amount of data is unlimited. Eden said he would be surprised if we use 2 percent of the data we generate, but in the future “we will.”

Third is artificial intelligence (AI). No one could analyse all of that data, so AI came into play.

Fourth is robots. He noted that they don’t always look like human forms. Most robots are just software doing some function...

Eden ended by quoting a hero of his, former Israeli Prime Minister Shimon Peres, who told him: “Technology without ethics is evil. Ethics without technology is poverty. That’s why we have to combine the two.”

Eden challenged the governments, the UN and all others to think about how to address this rapid change and come up with ideas. Exponentially."

Tuesday, June 13, 2017

Experts Think Through Ethical, Legal, Social Challenges Of The Rise Of Robots; Intellectual Property Watch, June 13, 2017

Catherine Saez, Intellectual Property Watch; Experts Think Through Ethical, Legal, Social Challenges Of The Rise Of Robots

"Who thought that the laws of robotics described by famous science fiction author Isaac Asimov would one day resonate with real life issues on robots? Last week’s summit on artificial intelligence sought to imagine a world increasingly manned by machines and robots, even self-taught ones, and explore the legal, ethical, economic, and social consequences of this new world. And some panellists underlined a need to establish frameworks to manage this new species."

Friday, February 17, 2017

When Machines Create Intellectual Property, Who Owns What?; Intellectual Property Watch, February 16, 2017

Bruce Gain, Intellectual Property Watch; When Machines Create Intellectual Property, Who Owns What?

"The concept of machines that can think and create in ways that are indistinguishable from humans has been the stuff of science fiction for decades. Now, following major advances in artificial intelligence (AI), intellectual property created by machines without human input is fast becoming a reality. The development thus begs the question among legal scholars, legislative bodies, and judiciary branches of governments worldwide of who owns the intellectual property that humans did not create."

Wednesday, December 7, 2016

TV for the fake news generation: why Westworld is the defining show of 2016; Guardian, December 7, 2016

Paul MacInnes, Guardian; TV for the fake news generation: why Westworld is the defining show of 2016

"Westworld is a hit. Viewing figures released this week confirmed that the first season of HBO’s sci-fi western drama received a bigger audience than any other debut in the channel’s history...

The producers deliberately reached out to an audience that enjoys obsessing. They knew some fans would watch the show again and again on their laptops. They knew they would freeze-frame the screen and zoom in on details that would pass the casual viewer by. From there the fans would try to make connections, to unravel the mysteries, to find deeper meaning. Things were left uncertain enough that people could believe what they wanted. Whether a theory was “true” was less important than the fact that someone believed in it. Sound familiar?

I’m not calling HBO a purveyor of fake news, and neither am I suggesting that Westworld has been captured by the alt-right like Pepe the Frog. But the drama has certainly tapped into an audience of young people who love video games and cracking codes, and understands both technology and identity politics."