Showing posts with label autonomous systems.

Sunday, May 18, 2025

RIP American innovation; The Washington Post, May 12, 2025

The Washington Post; RIP American innovation

"That U.S. businesses have led the recent revolution in artificial intelligence is owed to the decades of research supported by the U.S. government in computing, neuroscience, autonomous systems, biology and beyond that far precedes those companies’ investments. Virtually the entire U.S. biotech industry — which brought us treatments for diabetes, breast cancer and HIV — has its roots in publicly funded research. Even a small boost to NIH funding has been shown to increase overall patents for biotech and pharmaceutical companies...

Giving out grants for what might look frivolous or wasteful on the surface is a feature, not a bug, of publicly funded research. Consider that Agriculture Department and NIH grants to study chemicals in wild yams led to cortisone and medical steroids becoming widely affordable. Or that knowing more about the fruit fly has aided discoveries related to human aging, Parkinson’s disease and cancer.

For obvious reasons, companies don’t tend to invest in shared scientific knowledge that then allows lots of innovation to flourish. That would mean spending money on something that does not reap quick rewards just for that particular company.

Current business trends are more likely to help kill the U.S. innovation engine. A growing share of the country’s research and development is now being carried out by big, old companies, as opposed to start-ups and universities — and, in the process, the U.S. as a whole is spending more on R&D without getting commensurately more economic growth."

Thursday, November 9, 2023

How robots can learn to follow a moral code; Nature, October 26, 2023

 Neil Savage, Nature; How robots can learn to follow a moral code

"Many computer scientists are investigating whether autonomous systems can be taught to make ethical choices, or to promote behaviour that aligns with human values. Could a robot that provides care, for example, be trusted to make choices in the best interests of its charges? Or could an algorithm be relied on to work out the most ethically appropriate way to distribute a limited supply of transplant organs? Drawing on insights from cognitive science, psychology and moral philosophy, computer scientists are beginning to develop tools that can not only make AI systems behave in specific ways, but also perhaps help societies to define how an ethical machine should act...

Defining ethics

The ability to fine-tune an AI system’s behaviour to promote certain values has inevitably led to debates on who gets to play the moral arbiter. Vosoughi suggests that his work could be used to allow societies to tune models to their own taste — if a community provides examples of its moral and ethical values, then with these techniques it could develop an LLM more aligned with those values, he says. However, he is well aware of the possibility for the technology to be used for harm. “If it becomes a free for all, then you’d be competing with bad actors trying to use our technology to push antisocial views,” he says.

Precisely what constitutes an antisocial view or unethical behaviour, however, isn’t always easy to define. Although there is widespread agreement about many moral and ethical issues — the idea that your car shouldn’t run someone over is pretty universal — on other topics there is strong disagreement, such as abortion. Even seemingly simple issues, such as the idea that you shouldn’t jump a queue, can be more nuanced than is immediately obvious, says Sydney Levine, a cognitive scientist at the Allen Institute. If a person has already been served at a deli counter but drops their spoon while walking away, most people would agree it’s okay to go back for a new one without waiting in line again, so the rule ‘don’t cut the line’ is too simple."
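The value tuning Vosoughi describes is easiest to picture as ordinary supervised fine-tuning on community-supplied examples of moral judgments. The sketch below is a minimal, hypothetical illustration using the Hugging Face transformers library; the base model, the example judgments, and the training settings are placeholder assumptions, not details reported in the Nature article.

```python
# Hypothetical sketch: fine-tune a small causal language model on examples
# of a community's moral judgments, so its outputs lean toward those values.

import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for whichever base LLM a community starts from
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Invented examples of a community's values, phrased as situation -> judgment.
examples = [
    "Situation: jumping a queue at a deli counter. Judgment: unacceptable.",
    "Situation: returning for a new spoon after already being served. Judgment: acceptable.",
]

class ValueDataset(torch.utils.data.Dataset):
    """Wraps the example texts for a standard causal-LM training objective."""
    def __init__(self, texts):
        enc = tokenizer(texts, truncation=True, padding="max_length",
                        max_length=64, return_tensors="pt")
        self.input_ids = enc["input_ids"]
        self.attention_mask = enc["attention_mask"]
    def __len__(self):
        return len(self.input_ids)
    def __getitem__(self, i):
        return {"input_ids": self.input_ids[i],
                "attention_mask": self.attention_mask[i],
                "labels": self.input_ids[i]}  # predict the text itself

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="value-tuned", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ValueDataset(examples),
)
trainer.train()  # the tuned model now reflects the supplied judgments, in miniature
```

In practice a community would need far more examples, plus evaluation, before a tuned model could be said to reflect its values; the sketch only shows the mechanics.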

Tuesday, October 1, 2019

Roboethics: The Human Ethics Applied to Robots; Interesting Engineering, September 22, 2019

Interesting Engineering; Roboethics: The Human Ethics Applied to Robots

Who or what is going to be held responsible when or if an autonomous system malfunctions or harms humans?

"On ethics and roboethics

Ethics is the branch of philosophy that studies human conduct, moral assessments, the concepts of good and evil, right and wrong, and justice and injustice. Roboethics raises a fundamental ethical reflection on the particular issues and moral dilemmas generated by the development of robotic applications.

Roboethics (also called machine ethics) deals with the code of conduct that robot design engineers must implement in a robot's artificial intelligence. Through this kind of artificial ethics, roboticists must guarantee that autonomous systems will exhibit ethically acceptable behavior in situations where robots, or any other autonomous systems such as autonomous vehicles, interact with humans.

Ethical issues will continue to multiply as more advanced robots come into the picture. In The Ethical Landscape of Robotics (PDF) by Pawel Lichocki et al., published in IEEE Robotics and Automation Magazine, the researchers list various ethical issues emerging in two sets of robotic applications: service robots and lethal robots."
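One common way to realize the code of conduct described above in software is a rule-based filter (sometimes called an "ethical governor") that vets an autonomous system's candidate actions against hard constraints before any of them execute. The sketch below is a hypothetical illustration of that idea only; the action fields, the risk threshold, and the fallback behavior are invented for the example and are not drawn from the article.

```python
# Hypothetical sketch: a rule-based action filter for an autonomous vehicle.
# Impermissible actions are rejected outright; among what remains, the
# lowest-risk action wins; if nothing passes, fall back to a safe default.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk_to_humans: float    # estimated probability of harming a human
    violates_traffic_law: bool

def ethically_permissible(action: Action) -> bool:
    """Hard constraints checked before any action may execute."""
    if action.risk_to_humans > 0.01:   # invented tolerance threshold
        return False
    if action.violates_traffic_law:
        return False
    return True

def choose_action(candidates: list[Action]) -> Action:
    permitted = [a for a in candidates if ethically_permissible(a)]
    if not permitted:
        # Refuse to act unethically; stop instead.
        return Action("emergency_stop", 0.0, False)
    return min(permitted, key=lambda a: a.risk_to_humans)

# Example: the vehicle prefers hard braking over swerving onto a sidewalk.
print(choose_action([
    Action("swerve_onto_sidewalk", 0.4, True),
    Action("brake_hard", 0.005, False),
]).name)  # -> brake_hard
```

A hard-constraint filter like this is only one design among several; the learning-based approaches discussed in the Nature piece above tune the underlying model's behavior instead of screening its outputs.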