A Word Calculator…What is GPT?

How Will It Improve the World? How Will It Not?


a humanoid computer from the 90’s playing chess digital art, DALL-E

When I was 12, my dad took some math classes at our community college.  The advanced material required a new calculator.  One night, he pulled me over to his homework table: “Look at this!”  He typed an equation, pressed enter, and a squiggly line darted from left to right like when the patient gets zapped back to life in an ER episode.  “Ba-bump, ba-bump,” my dad made sound effects as the line bent up and down across the screen.

At the time, I didn’t know what a sine curve was.  It didn’t matter.  The output was exciting.

y = sin(x²) …exciting was relative, pre-YouTube

Years later, I took those same classes. The green LCD screens had become as ubiquitous in math as pencils.  Now I know what sine is.  I still don’t know how a TI-83 calculates it.  What happens in that black box when you press enter?  

If cryptocurrency was the conversation of 2022 (is that finally behind us?), GPT will be the conversation of 2023.  Perhaps you need a primer: GPT is a calculator for text.

What is GPT?

GPT (short for “Generative Pre-training Transformer”) is a type of artificial intelligence model that is used for natural language processing tasks such as language translation, summarization, question answering, and language generation. It is a deep learning model that was developed by OpenAI, and it is designed to be able to learn and generate human-like text.

I didn’t write that.  I typed “What is GPT?” into OpenAI’s ChatGPT, and it generated that output. 

See:

What is GPT, ChatGPT

So What is OpenAI’s ChatGPT?

OpenAI’s ChatGPT is a variant of the GPT (Generative Pre-training Transformer) language model specifically designed for chatbot applications. It was trained on a large dataset of human-human conversations and is able to generate human-like responses to a given input.

One of the key features of ChatGPT is its ability to generate responses that are appropriate to the context of the conversation. This allows it to engage in more natural and coherent conversations, as it is able to take into account the previous exchanges and respond in a way that is relevant to the current topic of discussion.

I didn’t write that either.  It’s not from some Frequently Asked Questions list.  When I prompted “What is OpenAI’s ChatGPT?”, the application (pre-trained on massive amounts of text) calculated that the string of letters in those paragraphs was probably a good response.

Proof, ChatGPT

Infinite monkeys pounding on infinite typewriters for infinite years could produce Shakespeare.  ChatGPT supposes that with some training and reference materials, a language model can produce useful output.  Of course, “some training” understates the endeavor: ChatGPT was trained on 45TB of text. All the data in the Library of Congress is only 15TB. Certainly more text than Shakespeare read in his lifetime.
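To make the “calculator for text” idea concrete, here is a toy sketch. It is emphatically not how GPT works under the hood (GPT is a transformer trained on those 45TB, not a table of word-pair counts), but it shows the basic move: look at which words tend to follow which, then generate new text by repeatedly picking a probable next word.

```python
import random
from collections import Counter, defaultdict

# Toy "word calculator": a bigram model that picks the next word based on
# how often it followed the previous word in the training text.
# (GPT uses a transformer over tokens; this is only an illustrative sketch.)
training_text = "the cat sat on the mat and the dog slept on the mat"

words = training_text.split()
next_words = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    next_words[prev][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        counts = next_words.get(out[-1])
        if not counts:
            break
        candidates, weights = zip(*counts.items())
        out.append(random.choices(candidates, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the dog"
```

Scale that basic idea up by billions of parameters and terabytes of text, and “calculated that the string of letters was probably a good response” starts to look a lot less mysterious.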

Try it yourself here!

Artificial Intelligence

ChatGPT does more than just describe itself: it crafts sea shanties, and writes code, and plays chess, and holds imaginary conversations, and answers existential questions.

Move over, Alexa: this is Artificial Intelligence.

It also does homework!  I asked ChatGPT to write an essay on symbolism in Lord of the Flies. In 30 seconds, it whipped up text freshman-me struggled for weeks to produce.  (And it was more legible…overdue apologies to the teachers forced to review my high school work.)

AI researcher Eliezer Yudkowsky comments cynically, “Any jobs that require mediocre essays can now, *in real life*, be done with ChatGPT assistance! Students who use ChatGPT to generate essays are *realistically* showing they can handle the mediocre-essay-writing jobs for which modern universities are vigorously training them.”

It’s early.  Beyond cheating on homework, we are only beginning to explore GPT’s productive possibilities.  The technology is still improving, but even if ChatGPT only ever manages ‘mediocre’ responses, half of human responses are…below average.  There’s a lot of low-hanging fruit.

Average is Over

This is what progress looks like.  The industrial revolution applied steam power to an increasing number of tasks.  Our digital revolution applies computer power to an increasing number of tasks.

In Average is Over (2013), economist Tyler Cowen examined the productivity-enhancing relationships between people and machines.  Computers provide leverage.  That leverage promises massive rewards to people who can apply computers to productive tasks.

Google software engineer compensation plans are an obvious data point, but the opportunity extends beyond programmers.  Efficient googling is now a core skill for success in any industry – computer leverage for everyone!  Even people not “good at computers” can benefit from computer-enhanced productivity.  As Cowen notes, “If you have an unusual ability to spot, recruit, and direct those who work well with computers, even if you don’t work well with computers yourself, the contemporary world will make you rich.”  Still true a decade after publication.

Cowen’s instructive example is freestyle chess. Top computer engines have long outperformed top human players.  Enter freestyle chess: human-computer teams working together against other human-computer teams. I can beat Magnus Carlsen with help from my phone’s chess app, but what about a team of people comparing data from multiple chess engines?  What happens when they optimize those engines and play against each other?  These team games have pushed chess play to its highest levels. (“though who actually is to judge,” Cowen wonders.)

And the best freestyle chess team members are not necessarily elite chess players.  The relevant skill is not “play great chess”; it is “understand the strengths and weaknesses of different chess engines, and when their decisions should be overruled”.  In other words: make computer play better chess.

With GPT as a text engine, soon elite writers might not produce the best writing.  The relevant skill may become prompting and tweaking GPT: make computer make better words.

GPT is a new, advanced tool in the writing arsenal.  “GPT-prompt engineer” may become a thing.  Or maybe GPT-prompting will become a general-purpose ability.  When were you last asked “are you good at Google”?

Credit Where It Is Due

ChatGPT passes the Turing Test.  The computer response convincingly imitates real human output.  If I don’t tell you a response is ChatGPT, how would you know?  That’s a problem, right?

We don’t yet have norms around attributing credit in AI-generated text.

In fairness, writers use lots of tools.  How much credit should the tools get?  Everybody uses spell-check.  Nobody puts a disclaimer at the end of their work: “I’d like to acknowledge the use of spell-check in the production of this manuscript.  I am not a naturally gifted speller…I also occasionally used a thesaurus for fancy words.”  I use Evernote, Twitter, and even Google Search (gasp) while writing and researching.  Is it important to admit my memory isn’t as good nor my thoughts as clear as they seem in polished drafts?

Some corners of internet writing focus on attention metrics.  Writing algorithms are optimized for views and engagement – “99% of X do Y…a 🧵” is the form that launched a thousand Twitter discourses.  Now there’s an algorithm for more than the structure – GPT can fill in the composition as well.  GPT will reduce the intellectual-effort cost of creating this type of engagement-seeking content.  As with most things, for better or worse, as the cost goes down, more will be produced.

There is still skill in deriving these outputs: skill in crafting creative GPT inputs, skill in evaluating and curating GPT outputs, skill in nudging mediocre responses into great ones.  With the right prompting, these paragraphs may be interesting and informative.  Is GPT-text-generation creative work, or just Google search results with a level of abstraction?  

Attention is finite.  Status is a zero-sum game.  As the status of people who craft good ChatGPT prompts goes up, the status of essayists who don’t rely on algorithms in their work might go down. Still, “original intellectual output” will likely maintain prestige over “really good GPT-prompting”…so expect a lot of GPT content passed off as original intellectual output.

Will there be a backlash like PEDs in baseball?  A movement to esteem “all-natural” writers who “did it the right way”?

In a recent chess-world scandal, Magnus Carlsen accused his opponent, Hans Niemann, of what could be described as “discreetly playing freestyle chess in a tournament designed for human-only play”.  The controversy was magnified by the salacious methods allegedly involved. It seems we still care about intellectual dishonesty, sometimes.

Did you know a significant portion of New York Times Best Sellers are ghostwritten?  I actually don’t know if that’s true.  I heard it and can’t disprove it.  How does it make you feel?  When does a team of editors become a group of co-writers?  Are the “authors” of ghostwritten books lying when they put their names on the cover?  Pop-science books like Sapiens don’t contain any original research.  Still, their authors are lauded as intellectual titans.  Are they frauds?

Is ChatGPT more spell-check or editor or ghostwriter?  If the content is good, does it matter?

Maybe it depends on the job to be done.  I’m reminded of the guy who professionally churned out essays for college students.  Most students paid, got their essay, turned it in, took their passing grade, and moved on.  A certain class of student, however, would receive the work, then demand the illicit draft be improved in some regard before turning it in.  The writer reflected that maybe this exchange prepared the students for the professional-managerial class jobs they sought better than writing the essay ever would.

What Can’t GPT Do?   

Writing is an exploratory process.  I write to figure out what I think.  Having the summary of a book is not the same thing as having read it.  Having read it is not the same thing as understanding.  Even once you understand, you still have to figure out how to apply it in your life.  

Wisdom is hard-won.  That’s why most self-help books don’t have the intended effect.

GPT can’t do that work for you.

There are margins where we substitute tools for thinking.  This is a concern with every new tool.  Are calculators making us bad at math?  Is writing ruining our memory?  I don’t know.  I know I can figure out 12×16 in my head: 12×12…+12×4…carry the 1…= 192.  But it takes too long. The effort is unpleasant. I’m not always confident in my mental arithmetic. So I use a calculator.  Now the machine is a crutch when I try comparing grocery prices.

“You can’t take a calculator with you everywhere,” my grade-school teacher’s voice echoes in my mind.  Surprise! Now you can.  You can even wear it on your wrist.

A calculator gives nice, confident outputs:  2+3×4+5 = 25; sin(90) = 0.894.  ChatGPT, with its coherent sentences, also gives confident outputs.  But there’s a problem: sometimes those outputs are wrong.  The confidence is unfounded.  2+3×4+5 = 19 (order of operations; multiply before you add).  A basic calculator works left to right and doesn’t account for that.  If you typed sin(90), you probably meant degrees instead of radians…in which case the answer is 1.
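The same gotchas show up in code. A quick sketch (Python here, purely for illustration) of both traps:

```python
import math

# Order of operations: Python, like a scientific calculator, multiplies before adding.
print(2 + 3 * 4 + 5)               # 19
# A basic four-function calculator just works left to right:
print(((2 + 3) * 4) + 5)           # 25

# math.sin expects radians, so sin(90) means ninety *radians*, not a right angle.
print(round(math.sin(90), 3))      # 0.894
print(math.sin(math.radians(90)))  # 1.0 -- probably what you meant
```

Both answers come back looking equally confident; the tool has no idea which question you actually meant to ask.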

GPT’s legible paragraphs have the sophistry of a well-formatted Excel spreadsheet.  It is easy to be misled by things that are legible. So we get leverage when these AI tools solve problems for us, but we become vulnerable when we don’t (or can’t) understand those solutions.  That can be catastrophic with any tool, especially when fueled by financial leverage (see mortgage-backed securities, 2008, or cryptocurrency, 2022).

If I trained a machine learning model on the picture below, it could do a regression on the bullet holes and tell me where to expect them.  But the model itself can’t ask, “Are those all the planes?  Are those the ones that count?”

Survivorship bias, Wikipedia

The world will always be more complex than our models.  The AI tools can’t verify themselves.
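Here is a toy version of that bullet-hole regression, using hypothetical data and a kernel density estimate standing in for whatever model you prefer. It dutifully reports where holes cluster among the planes that returned; it has no way of asking about the planes that didn’t.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Hypothetical (x, y) bullet-hole locations, recorded only from planes
# that made it back to base.
rng = np.random.default_rng(0)
wing_hits = rng.normal(loc=[0.7, 0.5], scale=0.10, size=(200, 2))
tail_hits = rng.normal(loc=[0.2, 0.8], scale=0.05, size=(100, 2))
holes = np.vstack([wing_hits, tail_hits])

# "Where should we expect bullet holes?" -- fit a density model to the data.
model = KernelDensity(bandwidth=0.05).fit(holes)

# The engine area shows almost no holes in this data, so the model calls it
# an unlikely place to be hit. Exactly backwards: planes hit there never
# returned to be counted, and the model cannot ask that question.
print(model.score_samples([[0.7, 0.5]]))  # wing: high log-density
print(model.score_samples([[0.5, 0.2]]))  # engine area: low log-density
```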

The Future of GPT

AI tools will keep getting better.

One consequence: it will be harder to identify and disregard bad information.  Internet quackery used to be easy to spot through bad grammar and punctuation.  A frequently stuck CAPS LOCK KEY was a sure sign.  With GPT to edit, reformat, and replicate bad information at scale, that first-pass filter becomes less effective for readers.  Of course, bad information at scale is a human-generated problem we’ve struggled with on the internet for some time.

Content will continue multiplying in quantity and variety, though not necessarily in insight and accuracy.  Better personal tools for filtration and curation will help.  Individuals serving as idea and content curators will become more important (see pop-science books), but there will also be a growing place for AI tools.  Facebook and TikTok already use AI tools to curate your feeds, though their success metrics (attention, engagement) might not mirror your personal success metrics (purpose, growth).

AI tools for creation, AI tools for curation. Staring into the future of AI can be daunting. If the computers outperform people on all the tasks, what’s left for a human to do? I think of the response Robin Williams’ character gives Matt Damon’s super-genius character in Good Will Hunting:

So if I asked you about art you’d probably give me the skinny on every art book ever written. Michelangelo? …You can’t tell me what it smells like in the Sistine Chapel. You’ve never actually stood there and looked up at that beautiful ceiling.

I ask you about war, and you’d probably, uh, throw Shakespeare at me, right? “Once more into the breach, dear friends.” …you’ve never been near one. 

I ask you about love, and you’d probably quote me a sonnet. …you’ve never looked at a woman and been totally vulnerable. Known someone who could level you with her eyes. Feeling like God put an angel on earth just for you.

You don’t know about real loss, because that only occurs when you love something more than you love yourself. I doubt you’ve ever dared to love anybody that much.

My brother JT wrote something similar in a recent exchange on GPT and AI tools: “Conversations around what AI can do are often framed around ‘how can X change the world’, but it’s also worth asking ‘what do people already need’. I think we have a fundamental need to feel heard by another person who is capable of feeling what we feel. I don’t think that’s a role a machine can ever fulfill.”

Let’s lean into what’s human.  

If you type 58,008 into a calculator and turn it upside down, will GPT ever laugh?

Your move, chief.

*I’d like to acknowledge the use of spell-check in the production of this manuscript.  I am not a naturally gifted speller…I also occasionally used a thesaurus for fancy words.

**Thanks to Margae Knox, Peter Agbo, Robert Heaton, and Tobi Emonts-Holley for reading drafts of this.
