The Week of Nano Banana + A Prompt Trick You Need to Try
Google’s new image model took over the AI world, and I’ll show you the trending prompt hack creators are using right now

The Week of Nano Banana
Last week was all about one thing: Google’s Nano Banana (Gemini 2.5 Flash Image).
I’ve been using AI image and video tools nonstop, and this release really feels like the biggest since Veo 3. Nano Banana isn’t just another model; it’s a full-on editing and creation system that changes the way we approach visuals.
Nano Banana Takes Over
Google dropped Gemini 2.5 Flash Image, nicknamed Nano Banana, and the hype was real. Here’s why:
You can blend multiple images while keeping consistency.
You can edit photos with natural language instead of clunky Photoshop-style workflows.
It’s designed for image-to-video pipelines, giving us way more control over consistency across frames.
It’s everywhere: Gemini, Freepik, Higgsfield, and via API in AI Studio and Vertex AI.
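If you'd rather script your edits than click through a UI, here's a minimal sketch of a conversational edit call using Google's `google-genai` Python SDK. The model id `gemini-2.5-flash-image-preview` reflects the preview release and may change, so treat this as a starting point under those assumptions, not a definitive recipe:

```python
def build_edit_instruction(change: str, keep: str = "everything else") -> str:
    """Compose a conversational edit prompt in the style Google's guide recommends:
    say what to change, and explicitly say what to keep."""
    return f"{change}, but keep {keep} the same."


def edit_image(image_path: str, instruction: str, api_key: str):
    # Imports live inside the function so the prompt helper above works
    # even without the SDK installed.
    from google import genai
    from PIL import Image

    client = genai.Client(api_key=api_key)
    # Pass the source image plus a natural-language instruction as contents.
    response = client.models.generate_content(
        model="gemini-2.5-flash-image-preview",  # "Nano Banana" preview model id
        contents=[Image.open(image_path), build_edit_instruction(instruction)],
    )
    return response


if __name__ == "__main__":
    # No API call here -- just show the prompt the helper would send.
    print(build_edit_instruction("Switch the blue sofa to brown leather"))
```

The helper bakes in the "change X, keep everything else" pattern from Google's guide, so every edit you send stays locked to the original composition by default.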
My Take on Google’s Prompt & Editing Guide
Google didn’t just ship the model; they also released a prompting and editing guide. And after reading it, here’s what stuck with me:
Details beat keywords. They want you to describe the scene like you’re telling a story, not just tossing in nouns. “A cat in sunlight shot on an 85mm lens” will always beat “cat sunset.”
Editing is conversational. You can literally say, “Make the sofa leather but keep everything else the same,” and it locks the style in while making just that change.
Fusion is powerful. You can upload two or three images (like a character design and a color scheme), and it blends them into one coherent shot.
Iterate naturally. You don’t need to start from scratch every time. You can keep refining: “Now make the lighting warmer,” or “Change the expression.”
Everything is watermarked. Every image carries an invisible SynthID tag, which means we’re moving toward traceable AI content.
👉 My advice: read the guide yourself and play around. It’s going to be one of those documents people reference for months. Here’s the link straight from Google: How to Prompt Gemini 2.5 Flash Image

Nano Banana Prompting Cheat Sheet
5 Best Ways to Get the Most Out of Google’s Gemini 2.5 Flash Image (Nano Banana)
1. Go Descriptive, Not Keyword-Heavy
✅ Say: “A photorealistic close-up of a cat on a wooden fence at golden hour, soft sunlight, shot on an 85mm lens.”
❌ Don’t say: “cat sunset.”
2. Edit with Natural Language
✅ “Switch the blue sofa to brown leather, keep everything else the same.”
– The model preserves lighting, style, and composition.
3. Use Multi-Image Fusion
✅ Upload two or three images (like a character design + color palette) and ask Nano Banana to combine them into one coherent shot.
4. Iterate in Conversation
✅ After the first render, refine it naturally: “Make the lighting warmer” or “Change the expression to a smile.”
📌 Pro Tip: Google released a full guide with deeper examples and workflows. Check it out here: Google’s Prompt & Editing Guide
Tutorial Time
How I Made the Trending Nano Banana Toy Model Come to Life
Here’s the full detailed breakdown you can follow inside your own workflow. I’ll include the exact prompts, tools, and little tweaks I used so you can replicate it (or remix it) for your own projects.
Step 1: Generate the Base Image

Tool: Leonardo AI (Lucid Origin model)
Why Lucid Origin? It has a polished, cinematic look and I had credits to use up. You could swap this with another model you like.
Prompt:
A fashion editorial shot of a confident brown skinned female model posing on a minimalist concrete rooftop, dressed in high-end swimsuit riding a horse, dramatic shadows and strong contrast enhancing textures, early afternoon light with sharp angles, shot with a Canon EOS R5, 85mm f/1.2 lens,
Export the image at full resolution. This will be your starting frame.
Step 2: Create a Variation in Gemini

Tip: crop out the watermark before generating the video.
Tool: Gemini
Prompt Source: Found online (Twitter/X thread).
Prompt:
Create a 1/7 scale commercialized figurine of the characters in the picture, in a realistic style, in a real environment. The figurine is placed on a computer desk. The figurine has a round transparent acrylic base, with no text on the base. The content on the computer screen is a 3D modeling process of this figurine. Next to the computer screen is a toy packaging box, designed in a style reminiscent of high-quality collectible figures, printed with original artwork. The packaging features two-dimensional flat illustrations.
Gemini generates a new image. This becomes your end frame.
Step 3: Animate the Transition in Flow

Tool: Flow (Veo 2 Start & End Frame feature)
Upload the Leonardo image as your start frame and the Gemini variation as your end frame.

Why Veo 2? The Start & End Frame feature is not yet available with Veo 3 — it only works on Veo 2 at this point. (Sept. 2025)
Prompt Example:
The horse gallops off the left frame and lands on the desk as it transforms into a toy
Run the generation. Flow outputs a short clip that morphs from one frame to the other.
Step 4: Add the “Toy Pickup” Clip

Take the Gemini toy image (your end frame from Step 2) and drop it back into Flow with a new prompt.
Prompt Example:
Someone picks up the toy and inspects it
This gives you the second clip that matches the trending style you’ve seen online.
Step 5: Quick Edit in CapCut
Import both clips into CapCut (or your editor of choice).
Trim 1–2 seconds off the beginning and end to keep pacing tight.
Add music or sound effects if needed.
Final Result
That’s it: from raw images to a polished clip in about 3–4 minutes. No huge editing job, no waiting 3–4 years for AI to catch up. This is already here and usable today.
💡 Pro Tip: Keep a library of your favorite prompts in Notion or Google Docs. Reuse and tweak them — it saves tons of time when you’re experimenting.
The AI Income System™ is turning everyday people into digital entrepreneurs. Packed with 100 proven AI side hustles, 500+ ready-to-use prompts, 300 bonus income ideas, and a step-by-step 90-day plan, this system shows you exactly how to turn AI into real income — even if you’re starting from scratch. 👉 Don’t just read about the AI revolution. Profit from it.
Other Stories From the Week
🔹 Kling 2.1 released its start and end frame upgrade. Now transitions actually look cinematic. Drop in two frames, and you get a seamless cut that feels like pro editing.
🔹 Hailuo wasn’t far behind, launching their own start and end frame update. That sets up a real showdown: Kling vs. Hailuo for transition dominance.
🔹 Higgsfield launched Speak 2.0, giving AI-generated dialogue way more emotional realism. Plus, they integrated Kling into their platform, solidifying their push to be the fastest-growing all-in-one AI video hub.
Final Thoughts
Last week will probably be remembered as the week of Nano Banana. But the surrounding updates — Kling, Hailuo, and Higgsfield — show just how competitive and fast-moving this space is becoming.
That’s what excites me the most: we’re moving past “cool tricks” and into tools that give us real cinematic control.
👉 What about you? Was Nano Banana the biggest release for you too, or did something else grab your attention? Hit reply and let me know what rocked your AI world.
Until next time,
Khalil
#AI #NanoBanana #Gemini #AIforRealLife #KlingAI #Higgsfield

Please Support My Work
The best way to support continued testing and tutorials:
👉 Check out our sponsors below
Every click helps fund the tools and resources that make these insights possible. Thank you for your support!
The Gold standard for AI news
AI keeps coming up at work, but you still don't get it?
That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.
Here's what you get:
Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.
Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.
New AI tools tested and reviewed - We try everything to deliver tools that drive real results.
All in just 3 minutes a day