Showing posts with label moral agency.

Friday, October 10, 2025

Here's who owns what when it comes to AI, creativity and intellectual property; World Economic Forum, October 10, 2025

Seemantani Sharma, Co-Founder, Mabill Technologies | Intellectual Property & Innovation Expert, World Economic Forum; Here's who owns what when it comes to AI, creativity and intellectual property

"Rethinking ownership

The intersection of AI, consciousness and intellectual property requires us to rethink how ownership should evolve. Keeping intellectual property strictly human-centred safeguards accountability, moral agency and the recognition of human creativity. At the same time, acknowledging AI’s expanding role in production may call for new approaches in law. These could take the form of shared ownership models, new categories of liability or entirely new rights frameworks.
For now, the legal balance remains with humans. As long as AI lacks consciousness, it cannot be considered a rights-holder under existing intellectual property theories. Nonetheless, as machine intelligence advances, society faces a pivotal choice. Do we reinforce a human-centred system to protect dignity and creativity or do we adapt the law to reflect emerging realities of collaboration between humans and machines?
This is more than a legal debate. It is a test of how much we value human creativity in an age of intelligent machines. The decisions we take today will shape the future of intellectual property and the meaning of authorship, innovation and human identity itself."

Thursday, March 7, 2019

Does AI Ethics Have A Bad Name?; Forbes, March 7, 2019

Calum Chace, Forbes; Does AI Ethics Have A Bad Name?

"One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent...

The issues explored in the field of AI ethics are important but it would help to clarify them if some of the heat was taken out of the discussion.  It might help if instead of talking about AI ethics, we talked about beneficial AI and AI safety.  When an engineer designs a bridge she does not finish the design and then consider how to stop it from falling down.  The ability to remain standing in all foreseeable circumstances is part of the design criteria, not a separate discipline called “bridge ethics”. Likewise, if an AI system has deleterious effects it is simply a badly designed AI system.

Interestingly, this change has already happened in the field of AGI research, the study of whether and how to create artificial general intelligence, and how to avoid the potential downsides of that development, if and when it does happen.  Here, researchers talk about AI safety. Why not make the same move in the field of shorter-term AI challenges?"