A Google employee has been placed on indefinite administrative leave after asserting that one of the company’s experimental artificial intelligence systems had developed consciousness.
The story, first published by The Washington Post, was an instant hit, drawing the attention of other mainstream newspapers such as The New York Times and adding fuel to the fire of an ongoing controversy: whether complex language models in the form of chatbots are any closer to achieving consciousness.
One possibility, of course, is that Lemoine was tricked by a cunningly designed algorithm that essentially repurposes chunks of the human speech it was previously fed. To put it another way, he may simply have been misinterpreting the AI’s output.
The software in question is known as the Language Model for Dialogue Applications, or LaMDA for short. LaMDA was built on sophisticated language models, which enable it to imitate speech to an almost uncannily lifelike degree.
[Read: Google Insider Says Company’s AI Could “Escape Control” and “Do Bad Things”]
While Lemoine was testing LaMDA to see whether it ever generated hate speech, a known failure mode of comparable language models, the bot began engaging him in in-depth conversations.
According to the Washington Post, the topics discussed ranged from Asimov’s third law of robotics, which holds that a robot must protect its own existence, to the concept of personhood.
In order to determine whether LaMDA was conscious, Lemoine put it through a series of tests, posing philosophical and existential questions such as what the difference is between a butler and a slave.
Believing that Google’s AI had gotten well ahead of what it was designed to do, the engineer alerted his superiors that LaMDA appeared to have become sentient.
Management was unimpressed and quickly dismissed the claims. As a direct consequence, Lemoine was placed on paid administrative leave on Monday.
The report follows the termination of a number of employees working on artificial intelligence projects at Google. In 2020 and 2021, for instance, two members of the company’s AI ethics team were let go after raising concerns about its language models. Neither argued that the algorithms had developed consciousness, yet both were fired regardless.
The vast majority of experts remain skeptical of Lemoine’s claims about what he witnessed in LaMDA.
Rather than being conscious, these algorithms simply excel at predicting what would come next in comparable conversations, a capability that can easily be mistaken for interacting with another human being.
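To make that concrete, here is a minimal sketch of the next-token prediction the skeptics are describing. Since LaMDA itself is not publicly available, the publicly released GPT-2 model stands in for it here, and the prompt is invented purely for illustration:

```python
# A rough sketch of how chatbot-style language models produce replies:
# given a conversational prompt, the model repeatedly predicts a plausible
# next token. GPT-2 is used as a stand-in for LaMDA, whose weights are
# not public; the prompt below is hypothetical.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Human: Are you afraid of being switched off?\nAI:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation token by token. Nothing here "understands" the
# question; the model just extends the text with statistically likely words.
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The output can read as an eerily fluent answer, yet it is produced by the same statistical text completion the experts point to, which is exactly why such replies are so easy to mistake for a mind on the other end.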