
Talking Like a Human: How ChatGPT Plays the Imitation Game


In 1950, British computer scientist Alan Turing was already asking whether machines could one day achieve human levels of intelligence. In a scenario he called the Imitation Game, a machine that could successfully imitate the language of a human counterpart would be, in its intelligence, functionally indistinguishable from a human. Later known as the Turing Test, this premise has served as a foundation of modern artificial intelligence. And today, with the development of ChatGPT, modern AI is one step closer to passing the Turing Test.

I was sitting in front of my computer the other day, eager to try out OpenAI’s latest technological innovation: ChatGPT. After a few moments of tinkering around, I felt ready to type in my first question. I held my breath as I pressed the enter key on my keyboard. To my surprise, ChatGPT replied with an accurate and detailed answer. It sounded human — like something I could have said.

As a young programmer myself, I was nothing short of amazed when I met ChatGPT: an algorithm — an intangible world of 1s and 0s — capable of composing love letters, debugging code, and answering virtually every question you throw at it. But how exactly does ChatGPT work? Before it can seamlessly generate brand-new text, the AI model first needs to understand what the user is asking. According to OpenAI, previous AI models were “trained to predict the next word on a large dataset of Internet text, rather than to safely perform the language task that the user wants,” often replicating the racist and misogynistic language present in those datasets. The team implemented a technique called reinforcement learning from human feedback, fine-tuning the model by having human labelers demonstrate desired outputs for randomly selected prompts and rank the model’s responses. The result? Far less discriminatory and biased output — though the problem was reduced, not eliminated.
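The heart of that feedback step can be sketched in a few lines. In reward-model training, a human-preferred response should score higher than a rejected one; the standard pairwise loss below penalizes the model whenever it mis-ranks the pair. This is a minimal illustration with plain numbers standing in for the scalar rewards a large neural network would actually produce — the function name and values are mine, not OpenAI’s.

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise preference loss used when training a reward model
    from human rankings: -log(sigmoid(r_chosen - r_rejected)).
    The loss is small when the human-preferred response already
    scores higher, and large when the ranking is inverted."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correct ranking (preferred answer scores higher) -> small loss.
good = preference_loss(2.0, -1.0)
# Inverted ranking -> large loss, pushing the model to fix its scores.
bad = preference_loss(-1.0, 2.0)
print(good < bad)  # prints True
```

Minimizing this loss over many labeled comparisons teaches the reward model what humans prefer; that reward signal then steers the language model’s fine-tuning.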

Before ChatGPT spits out another beautifully articulated response, the model first simplifies the user’s request: for example, the sentence “Why is the sky blue?” may become “Why sky blue?” The model then applies a concept called self-attention to establish the question’s context — in this case, determining that the subject is the sky rather than the color blue. Finally, it selects the word with the highest probability, adds it to the generated output, and repeats the process word by word until the response is complete.
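That word-by-word loop can be sketched with a toy next-word table. The probabilities below are invented for illustration, and real models sample from the distribution rather than always taking the top word — but the greedy version shows the core idea: look up the most likely next word, append it, and repeat.

```python
# Toy next-word probabilities (invented for illustration; a real model
# computes these with a neural network over tens of thousands of tokens).
NEXT_WORD = {
    "why":  {"sky": 0.7, "grass": 0.3},
    "sky":  {"blue": 0.8, "cloudy": 0.2},
    "blue": {"<end>": 1.0},
}

def generate(prompt_word, max_words=10):
    """Greedy decoding: repeatedly append the highest-probability word
    until the model predicts an end token or runs out of options."""
    words = [prompt_word]
    while len(words) < max_words:
        choices = NEXT_WORD.get(words[-1])
        if not choices:
            break
        best = max(choices, key=choices.get)
        if best == "<end>":
            break
        words.append(best)
    return " ".join(words)

print(generate("why"))  # prints "why sky blue"
```

ChatGPT’s actual decoding adds randomness (temperature sampling) so responses vary, but the generate-one-word-then-repeat structure is the same.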

According to OpenAI, the goal was to create a language model that was “helpful, honest, and harmless.” Thus, ChatGPT was born. As OpenAI’s most ambitious project yet, ChatGPT could revolutionize the way we interact with technology, ultimately blurring the lines between human and machine intelligence. While the innovations behind ChatGPT are certainly impressive, the rapid advancement of AI has left many concerned: beyond replacing jobs or helping students cheat, ChatGPT’s uncanny ability to mimic human speech foreshadows an increasingly automated future. In the words of Kevin Roose for the New York Times, “We are not ready.” ChatGPT will be around for the rest of our lives, and it will only keep getting better — learning to embrace it and use the technology responsibly will be among humanity’s next great challenges.

Works Cited

Heilweil, Rebecca. “AI is finally good at stuff, and that’s a problem.” Vox, 7 Dec. 2022,

OpenAI. “Aligning Language Models to Follow Instructions.” OpenAI, 27 Jan. 2022,

OpenAI. “ChatGPT: Optimizing Language Models for Dialogue.” OpenAI, 30 Nov. 2022,

OpenAI. “Training Language Models to Follow Instructions with Human Feedback.” OpenAI, 4 Mar. 2022,

Ramponi, Marco. “How ChatGPT Actually Works.” AssemblyAI, 23 Dec. 2023,

Roose, Kevin. “The Brilliance and Weirdness of ChatGPT.” The New York Times, 5 Dec. 2022,

Turing, Alan. “Computing Machinery and Intelligence.” Mind, vol. 59, no. 236, Oct. 1950,