In interviews and public statements, many in the AI community pushed back at the engineer's claims, while some pointed out that his tale highlights how the technology can lead people to assign human attributes to it. But the belief that Google's AI could be sentient arguably highlights both our fears and expectations for what this technology can do.

LaMDA, which stands for "Language Model for Dialog Applications," is one of several large-scale AI systems that have been trained on large swaths of text from the internet and can respond to written prompts. They are tasked, essentially, with finding patterns and predicting what word or words should come next. Such systems have become increasingly good at answering questions and writing in ways that can seem convincingly human, and Google itself presented LaMDA last May in a blog post as one that can "engage in a free-flowing way about a seemingly endless number of topics." But results can also be wacky, weird, disturbing, and prone to rambling.

The engineer, Blake Lemoine, reportedly told the Washington Post that he shared evidence with Google that LaMDA was sentient, but the company didn't agree. In a statement, Google said Monday that its team, which includes ethicists and technologists, "reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims."

On June 6, Lemoine posted on Medium that Google put him on paid administrative leave "in connection to an investigation of AI ethics concerns I was raising within the company" and that he may be fired "soon." (He mentioned the experience of Margaret Mitchell, who had been a leader of Google's Ethical AI team until Google fired her in early 2021 following her outspokenness regarding the late 2020 exit of then-co-leader Timnit Gebru. Gebru was ousted after internal scuffles, including one related to a research paper the company's AI leadership told her to retract from consideration for presentation at a conference, or remove her name from.)

A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was placed on leave for violating the company's confidentiality policy. Lemoine was not available for comment on Monday.

The continued emergence of powerful computing programs trained on massive troves of data has also given rise to concerns over the ethics governing the development and use of such technology. And sometimes advancements are viewed through the lens of what may come, rather than what's currently possible.

Responses from those in the AI community to Lemoine's experience ricocheted around social media over the weekend, and they generally arrived at the same conclusion: Google's AI is nowhere close to consciousness. Abeba Birhane, a senior fellow in trustworthy AI at Mozilla, tweeted on Sunday, "we have entered a new era of 'this neural net is conscious' and this time it's going to drain so much energy to refute."

Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust," called the idea of LaMDA as sentient "nonsense on stilts" in a tweet. He quickly wrote a blog post pointing out that all such AI systems do is match patterns by pulling from enormous databases of language.