The recent call for a six-month “AI pause”-in the form of an online letter demanding a temporary artificial intelligence moratorium-has elicited concern among IEEE members and the larger technology world. The Institute contacted some of the members who signed the open letter, which was published online on 29 March. The signatories expressed a range of fears and apprehensions including about rampant growth of AI large-language models (LLMs) as well as of unchecked AI media hype.

The open letter, titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.” It’s the latest of a host of recent “AI pause” proposals, including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.

In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead. IEEE members have expressed a similar diversity of opinions.

“AI can be manipulated by a programmer to achieve objectives contrary to moral, ethical, and political standards of a healthy society,” says IEEE Fellow Duncan Steel, a professor of electrical engineering, computer science, and physics at the University of Michigan, in Ann Arbor. “I would like to see an unbiased group without personal or commercial agendas to create a set of standards that has to be followed by all users and providers of AI.”

IEEE Senior Life Member Stephen Deiss, a retired neuromorphic engineer from the University of California, San Diego, says he signed the letter because the AI industry is “unfettered and unregulated.” “This technology is as important as the coming of electricity or the Net,” Deiss says. “There are too many ways these systems could be abused. They are being freely distributed, and there is no review or regulation in place to prevent harm.”

Eleanor “Nell” Watson, an AI ethicist who has taught IEEE courses on the subject, says the open letter raises awareness over such near-term concerns as AI systems cloning voices and performing automated conversations-which she says presents a “serious threat to social trust and well-being.” Although Watson says she’s glad the open letter has sparked debate, she confesses “to having some doubts about the actionability of a moratorium, as less scrupulous actors are especially unlikely to heed it.”

IEEE Fellow Peter Stone, a computer science professor at the University of Texas at Austin, says some of the biggest threats posed by LLMs and similar big-AI systems remain unknown. “We are still seeing new, creative, unforeseen uses-and possible misuses-of existing models,” Stone says.

“I would have written it differently,” he says of the letter. “I decided to sign it and hope for an opportunity to explain a more nuanced view than is expressed in the letter. My biggest concern is that the letter will be perceived as calling for more than it is,” he adds.