Gilad Edelman, Wired: Congress Takes Aim at the Algorithms
"“I agree in principle that there should be liability, but I don’t think we’ve found the right set of terms to describe the processes we’re concerned about,” said Jonathan Stray, a visiting scholar at the Berkeley Center for Human-Compatible AI who studies recommendation algorithms. “What’s amplification, what’s enhancement, what’s personalization, what’s recommendation?”...
[Mary Anne] Franks proposes something both simpler and more sweeping: that Section 230 not apply to any company that “manifests deliberate indifference to unlawful material or conduct.” Her collaborator Danielle Citron has argued that companies should have to prove they took reasonable steps to prevent a certain type of harm before being granted immunity. If something like that became law, engagement-based algorithms wouldn’t go away—but the change could still be significant. The Facebook Papers revealed by Haugen, for example, show that until very recently Facebook had little or no content-moderation infrastructure in regions like the Middle East and Africa, where hundreds of millions of its users live. Currently Section 230 largely protects US companies even in foreign markets. But imagine if someone who was defamed or targeted for harassment by an Instagram post in Afghanistan, where as of 2020 Facebook hadn’t even fully translated its forms for reporting hate speech, could sue under an “indifference” standard. The company would suddenly have a much stronger incentive to make sure its algorithms aren’t favoring material that could land it in court."