Introduction
In June of 2022 a man called Blake Lemoine told reporters at The Washington Post that he thought the computer system he worked with was sentient.[i] By itself, that does not seem strange. The Post is one of the United States’ finest newspapers and its reporters are used to hearing from people who think that the CIA is attempting to read their brainwaves or that prominent politicians are running a child sex trafficking ring from the basement of a pizzeria.[ii] (It is worth noting that the pizzeria had no basement.) But Mr. Lemoine was different. For one thing, he was not some random person off the street. He was a Google engineer. Google has since fired him. For another thing, the “computer system” wasn’t an apparently malevolent Excel program, or Apple’s Siri giving replies that sounded prescient. It was LaMDA, Google’s Language Model for Dialogue Applications[iii]—that is, an enormously sophisticated chatbot. Imagine a software system that vacuums up billions of pieces of text from the internet and uses them to predict what the next sentence in a paragraph or the answer to a question would be.
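The prediction idea at the heart of such a system can be illustrated with a deliberately tiny sketch. The following Python snippet is not LaMDA, and it is nothing like a neural network; it is a hypothetical toy that simply counts which word follows which in a small sample of text and then "predicts" the most frequent successor. It only shows the shape of the task: learn from existing text, then guess what comes next.

```python
from collections import Counter, defaultdict

# A tiny stand-in for "billions of pieces of text from the internet".
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words have followed it.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

# "cat" follows "the" twice in the corpus, "mat" only once.
print(predict_next("the"))
```

A real language model replaces these raw counts with a neural network trained on vastly more text, which lets it generalize to sentences it has never seen, but the underlying objective, predicting what text comes next, is the same.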