Five prompt tricks that made GPT Image 2 click for me

By Devon Park · 5 min read

I have been using GPT Image 2 for about two months now, mostly for design exploration on a side project. I am not a prompt engineer. I am a guy with a Notion doc full of "things that worked." Here are five of those things.

1. Name the camera, not just the mood

"Cinematic" does very little. "Shot on an Arri Alexa with a 50mm Master Prime" does a lot. You don't have to know what those words mean — you just have to use them. The model has seen enough metadata that it associates lens names with specific looks.

I tend to default to one of three camera framings: "shot on a 35mm lens, shallow depth of field" for product, "medium format, natural light through a window" for portrait, "wide angle, slight lens distortion" for environment. Pick one and stop typing "cinematic."
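If it helps to see that habit as code, here is a minimal sketch. The framing strings are the ones above; the helper name and the example subject are just illustration.

```python
# The three default framings from this section, keyed by what I'm shooting.
CAMERA_FRAMINGS = {
    "product": "shot on a 35mm lens, shallow depth of field",
    "portrait": "medium format, natural light through a window",
    "environment": "wide angle, slight lens distortion",
}

def frame(subject: str, kind: str) -> str:
    """Append a concrete camera framing instead of the word 'cinematic'."""
    return f"{subject}, {CAMERA_FRAMINGS[kind]}"

print(frame("a ceramic mug on a walnut desk", "product"))
# -> a ceramic mug on a walnut desk, shot on a 35mm lens, shallow depth of field
```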

2. Put the subject first, the style last

This sounds obvious but I keep forgetting. The model weighs the start of the prompt more heavily. If you start with "in the style of a 1970s Polaroid" you get a 1970s Polaroid that happens to have your subject in it. If you start with the subject and end with the style, you get your subject rendered in that style. Different outcomes.
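To make the ordering concrete, here are the two versions side by side. The truck is a made-up example; the ordering is the point.

```python
# Style-first: you tend to get a 1970s Polaroid that happens to contain your subject.
style_first = "in the style of a 1970s Polaroid, a rusted pickup truck in a wheat field"

# Subject-first: you tend to get your subject, rendered as a 1970s Polaroid.
subject_first = "a rusted pickup truck in a wheat field, in the style of a 1970s Polaroid"
```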

3. For text, spell it like a sign painter would

If you want the image to render the words "Open Late," put them in quotes in the prompt: render the sign reading "Open Late" in a hand-painted script. Telling the model where the text lives ("the sign," "the cover," "the t-shirt") helps a lot more than just listing the words.
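One practical wrinkle if you assemble prompts in code: the double quotes around the words have to survive into the prompt string, because they are what marks the words as literal. A tiny sketch; the helper and its arguments are made up.

```python
def text_prompt(words: str, surface: str, style: str) -> str:
    """Quote the exact words and name the surface they live on."""
    return f'render the {surface} reading "{words}" in {style}'

print(text_prompt("Open Late", "sign", "a hand-painted script"))
# -> render the sign reading "Open Late" in a hand-painted script
```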

Text rendering is the thing GPT Image 2 does best relative to anything I had used before. Use it.

4. Reference images do not have to look like the result

I used to think reference images were for "make it look like this." They are also great for "match this color palette but make a totally different scene." Or "use the lighting from this photo." Or "match this person's hair color." You can be specific in the prompt about what you want from each reference.

I keep a folder called "references" with about 40 images grouped by what they're useful for: lighting, color, composition, texture. I drop two or three in for almost every generation now.
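Here is roughly what that looks like as an API call. This is a sketch, not gospel: it assumes an OpenAI-style Python SDK whose images.edit endpoint accepts a list of reference images (as the current SDK does for its image models), and the model string and file paths are placeholders.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

result = client.images.edit(
    model="gpt-image-2",  # placeholder: use whatever model name your account exposes
    image=[
        open("references/lighting/window-light.png", "rb"),
        open("references/color/teal-orange.png", "rb"),
    ],
    prompt=(
        "A totally different scene: a pottery studio at dusk. "
        "Use the lighting from the first reference image and "
        "match the color palette of the second."
    ),
)

# The API returns base64-encoded image data.
with open("out.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```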

5. Negative space is a real direction

If your image keeps coming out cluttered, add "generous negative space, minimal composition, the subject occupies the lower third of the frame". The model responds to compositional language better than I expected. It is not just "what is in the image" — it is "where is it in the image."
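Concretely, the compositional directives just get appended after the subject like anything else. The espresso cup is a made-up example.

```python
prompt = (
    "a single espresso cup on a concrete counter, "  # subject first (trick 2)
    "generous negative space, minimal composition, "
    "the subject occupies the lower third of the frame"
)
```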

This is the trick that made the difference for me when I started using outputs as actual layouts, not just standalone visuals.

None of this is a formula

I want to be clear: there is no magic prompt template. These are habits, not recipes. The thing that has helped me most is keeping a running log of prompts that worked and what about them was specific. Look at your own log after a month and you will see your patterns.

That, more than any guide on the internet, is how you get good at this.

#prompts #tutorial #gpt-image-2