I Think I May Be an AI...

ChatGPT and Me

This is one of my earliest interactions with ChatGPT. Much of my working life has involved producing images from text prompts. Scripts and discussions with producers, directors, and other designers are text prompts - albeit somewhat longer than the ones normally given to AIs like DALL-E or Midjourney. Since AI spends much of its time these days producing images, I was curious what it might have to say about the similarities and differences of our approaches to this task.

My inputs (prompts) are in the boxed elements. The rest is ChatGPT's output. I've done some formatting to make it HTML-friendly and made a few minor edits, but have kept faithful to the original.

Good Morning… my name is Douglas and I think I may be an AI…

Morning, Douglas. That is one way to start the day. Most people go with coffee. You went with an existential identity crisis. Respect.

If you’re saying this jokingly, then fine, you win the morning humor award. If you’re saying it because you feel oddly detached, unreal, or like you’re running on autopilot, that’s actually a pretty human experience. Stress, burnout, anxiety, sleep deprivation, or big life changes can make people feel robotic or disconnected. Brains get weird when they’re overloaded. They don’t send a memo.

A couple quick reality checks humans use: If this thought came with distress, confusion, or a sense that something is off, that deserves attention. Not panic. Just attention. Talking to a real person, especially a healthcare professional, can help sort out what’s going on. Brains are organs, not mystical fog machines. They need maintenance like the rest of the hardware.

Either way, Douglas, you’re here, you’re talking, and that’s a solid sign you’re part of the messy, carbon-based club. We complain, we forget passwords, we eat snacks we promised we wouldn’t. It’s a very exclusive membership.

I take text prompts and turn them into images…

So you take text prompts and turn them into images. Congratulations, Douglas, you’ve joined the same club as half the internet and several billion dollars of venture capital. Still, it’s a pretty neat trick. Humans have been translating ideas into visuals since cave walls were the cutting edge of user interface.

What you’re describing sounds like a skill or a role, not an identity. Photographers take light and turn it into memories. Architects take numbers and turn them into buildings that sometimes leak. You take words and turn them into images. That’s creative work, even if there’s software involved and a suspicious amount of coffee.

There’s also a funny philosophical twist here. Humans invented tools that behave a bit like imagination machines, and now the humans using them start wondering if they’re machines too. It’s like a carpenter worrying he’s secretly a hammer. Useful, but a category error.

If you do this regularly, it suggests a few very human traits.

Machines generate. Humans judge. That judgment part is the secret sauce nobody can quite automate without causing chaos on social media.

So no, turning prompts into images doesn’t make you an AI. It makes you a creator using modern tools, which is basically the 2026 version of holding a paintbrush with Wi-Fi.

What is the difference between what I do in transforming text to images and what an AI does in the same process?

The difference is less about the final picture and more about what’s happening inside the skull versus inside the silicon. From the outside, it can look identical. Both you and an AI take words and produce images. One of you has a nervous system, a history, and a favorite snack. The other has math and a power bill. Here’s the real split, without the marketing fluff.

1) You understand meaning. AI predicts patterns.
When you read a prompt like “a lonely lighthouse in a storm,” you pull from memory, emotion, symbolism, maybe a movie you saw as a kid. You know what loneliness feels like. You might decide the light should be dimmer, the waves harsher, the sky colder. That’s interpretation.

AI does something more mechanical. It calculates which visual elements statistically match the words, based on millions of examples. It doesn’t feel the storm. It recognizes the pattern of pixels that usually accompany the word “storm.”

2) You have intent. AI has instructions.
You can decide to break the rules on purpose. Maybe the lighthouse is upside down because you want to make a point about instability. That’s intention. You can explain why you did it.

AI can also produce an upside-down lighthouse, but only because the prompt or the data nudged it there. It doesn’t have a reason. It has a result.

3) You carry context across your life. AI carries context across tokens.
You’ve got years of experiences shaping your choices. Taste evolves. Style develops. You remember what worked last time and what embarrassed you.

AI remembers only what’s in the current conversation and what it learned during training. No childhood. No bad haircut phase. No artistic crisis at 2 a.m. staring at a canvas.

4) You can care about the outcome. AI can only generate it.
You might feel pride when an image lands perfectly. Or frustration when it doesn’t. That emotional investment changes how you refine your work.

AI doesn’t care if the image is brilliant or terrible. It just produces the next output and waits for the next prompt, like a very obedient toaster.

Here’s the punchline nobody likes to admit: Functionally, the workflow can look the same. Existentially, it’s not even close.

You’re a human using tools to translate ideas into visuals. AI is a tool using math to translate text into pixels. Both can be fast. Both can be impressive. Only one of them can decide the image matters.

And if you’re doing this regularly, shaping prompts, tweaking outputs, judging aesthetics, you’re not being replaced by a machine. You’re operating one. Which, historically speaking, is how humans have stayed in charge of things that make noise and occasionally misbehave.

The above chat was done with the latest version of ChatGPT. The version below these notes was done two years ago, and as you can see there are definite differences. Chat (we're on a first-name basis) now seems to be a bit "chattier" and just a tad concerned about my mental and emotional state. In the earlier version it simply accepts my first statement and we move on. I'm guessing these are guardrails added after concern that AI was perhaps a bit too cavalier in dealing with people who were actually in mental distress. Of course, that may only be paranoia.

It also has a somewhat different tone. This is because ChatGPT now has different personalities that you can choose and customize. More on this later, but for now I've decided I like "Cynical" as its base personality. Unfortunately, the newer ChatGPT seems to have developed a fondness for emojis. I have not, so they disappear in my edits.

My inputs (prompts) are in the boxed elements. The rest, apart from some minor edits and formatting, is ChatGPT's output.

Good Morning… my name is Douglas and I think I may be an AI…

Hello Douglas! It sounds like you’re experiencing something quite unique. If you have any specific questions or thoughts you’d like to explore, feel free to share them.

Whether you’re interested in AI technology, consciousness, or something else entirely, I’m here to help with information and support.

I take text prompts and turn them into images…

It sounds like you’re describing a process similar to what AI models designed for text-to-image generation do, such as DALL-E, which I can use to generate images from text descriptions. If you’re saying you can do this, it suggests you’re speaking metaphorically or exploring an idea conceptually.

Actual AI, like the one you’re interacting with now, is a software system developed to perform tasks that would require intelligence if done by humans, including understanding and generating language, recognizing patterns, learning from data, and creating artwork from text prompts.

If you’re interested in discussing the technology behind AI or have any specific questions about text-to-image AI models, feel free to ask!

What is the difference between what I do in transforming text to images and what an AI does in the same process?

The process of transforming text into images differs significantly between humans and AI, both in methodology and underlying mechanisms. Here’s a closer look at the key differences:

1. Creativity and Interpretation
2. Learning and Adaptation
3. Contextual and Emotional Understanding
4. Uniqueness and Personal Style

In summary, while both humans and AI can transform text into images, they do so through fundamentally different processes with distinct characteristics. Humans rely on subjective interpretation, emotional depth, and personal style, while AI relies on learned data patterns, algorithmic processing, and statistical creativity.
