NEW Google NANO BANANA Just Destroyed All AI Image Editors - 15 Examples


Category: AI Tools

Tags: AI, Editing, Google, Image Tools

Entities: ChatGPT, Flux Kontext, Gemini 2 Flash, Google, Google Pixel, LMArena, Nano Banana AI


Summary

    Introduction to Nano Banana AI
    • Nano Banana AI is a new image generator and contextual editor developed by Google.
    • It allows for easy modification of images without losing context, such as adding or removing elements.
    • The tool is creating excitement due to its superior capabilities compared to competitors.
    Accessing Nano Banana AI
    • Currently, Nano Banana AI can be accessed through the website lmarena.ai.
    • Users should be cautious about fake websites claiming to use Nano Banana.
    • The tool operates in a 'battle mode' where it compares results from different models.
    Features and Use Cases
    • Nano Banana AI excels in maintaining consistency and context in images.
    • It can handle complex tasks like decorating rooms or creating dynamic video compositions.
    • The tool is effective for creating professional ad images with a single prompt.
    • It offers superior results in retouching portraits and colorizing images.
    Comparisons with Competitors
    • Nano Banana outperforms competitors like Flux Kontext and ChatGPT in image quality and consistency.
    • Examples show Nano Banana's ability to maintain details and context better than others.
    Future of Nano Banana AI
    • Rumored to be part of Google Pixel, Gemini app, and Google Flow app.
    • Expected to be officially released soon with potential to change the AI landscape.

    Transcript

    00:00

    This man looks a bit lonely, sitting all alone in this cafe. What if we want this woman to give him some company?

    Very, very easy to do. Now, what if we want a cinematic portrait of this man standing next to this sports car?

    Not a big deal

    00:16

    anymore. What if we want two of the most iconic figures from different eras to come together and just hang out with each other?

    Not a problem at all. What if we want to turn the scene into something much more dramatic with it being

    00:31

    overcast and raining and all these people holding umbrellas? Never ever been easier.

    What if we have this particular shot and we want to turn it into a video, so we need different angles, but we want to create all those new shots without

    00:46

    losing consistency? Now you can create any type of shot.

    Neither the characters nor the objects will deviate from the original picture. What if we want this woman to be holding this perfume bottle, and to create a professional, advertisement-like image?

    Again, very, very

    01:04

    easy to do. All this with a single click.

    >> Wait, I was created with one click. Seriously, just wake me up when the robots take over.

    I'm so done. >> It may not be a robot, but it is more than capable of taking over.

    This new

    01:20

    tool has got even the most seasoned deep fakers licking their lips because this is Google's very new Nano Banana AI. And don't let the name fool you because it is not here to play.

    In this tutorial, we'll be seeing first of all what is

    01:35

    Nano Banana AI, how to access it and use it completely for free. Then we'll be seeing 15 different examples, and we'll also be comparing these examples with its competitors, mainly Flux Kontext.

    So let's get started. So Nano Banana is

    01:51

    basically an image generator and a contextual editor which just means that it should be able to modify anything in an image without really losing the context or the essence. For example, in this particular image, if you want to remove the people, it should be able to do that with ease.

    But this is a simple

    02:07

    example, but we'll be seeing way more complex examples than this one. It was Google who actually brought contextual editing into the limelight in March 2025 with the Gemini 2 Flash image generator.

    But the quality wasn't that great. It was very easily overshadowed by the ChatGPT-4o

    02:26

    image generator, which literally came out a week later in March. Even that was forgotten when Flux Kontext came out, and now Nano Banana aims to dethrone all contextual editors out there.

    Once it is released officially, it is rumored that

    02:43

    this will be a part of the Google Pixel phone, the Gemini app and also the Google Flow app, so that it can be used along with Veo 3. So it has huge potential.

    Now let's see how to access it. Before that you should also know how not to access it.

    So, if you go over to Google

    03:00

    and you type in Nano Banana, there are a couple of websites with the same domain name which are claiming that they're using the Nano Banana model, but the reality is that they aren't, and they are even taking payments and just giving you images generated by some other models. So, be wary of that.

    The only way to access

    03:16

    Nano Banana right now is through a website called lmarena.ai. I have given the links, as well as the images that we will be using in this tutorial, in the description, so that you will be able to work along with me.

    Once you go to lmarena.ai,

    03:32

    you really don't have to do anything. You don't even have to create an account.

    First of all, just make sure that right at the top here, Battle mode is selected. And then make sure you hit the option which says Generate Image.

    And this is where we can upload

    03:48

    our images and start writing in the prompts. However, LMArena works in a slightly different way.

    Once we do upload our files and write the prompt, there's no guarantee that we are going to get a result by Nano Banana. It's almost like a lottery: we just have to wait until we see a result

    04:05

    which says nano banana. Let me show you how this works.

    So for this first example, we have got this image that we saw before, and let's say we want to place this woman inside that frame in a realistic way. So, once we've selected the image option here, the first thing we're going to do is

    04:21

    upload both these images right here. Once that is the case, I'm just going to write in a simple prompt.

    A woman sitting with a man, they are both looking at the phone and laughing. We're now going to hit generate.

    And now you'll see that it will basically start to give out results by generating this

    04:38

    image with two different models. So, that's why it's called battle mode.

    And we will be able to see which one gives out a better image. And if we're lucky right in the first go, we might just get one of these images generated by Nano Banana.

    If not, then we just do it the second time around. In order

    04:54

    to increase the probability of this happening, one of the things that you can do is open multiple tabs of LMArena and run the prompt in each simultaneously. But often I have seen that right on the first go, there's a very high probability these days of getting Nano Banana.
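    The multi-tab trick is just independent trials: if each battle features Nano Banana with some per-battle probability p (a number the video never states; the 0.3 below is purely illustrative), opening n tabs raises the chance of at least one hit to 1 βˆ’ (1 βˆ’ p)^n. A minimal sketch of that arithmetic:

```python
def hit_probability(p: float, tabs: int) -> float:
    """Chance that at least one of `tabs` independent battles
    features the model, if each battle hits with probability p."""
    return 1 - (1 - p) ** tabs

# Illustrative only -- the real per-battle odds on LMArena are unknown.
for n in (1, 2, 4, 8):
    print(f"{n} tab(s): {hit_probability(0.3, n):.2f}")
```

    The returns diminish quickly, which matches the observation that a hit usually "comes around in round two."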

    If not, it

    05:11

    definitely comes around in round two. So we've got both these images.

    Right now we don't know which is the model being used. For that to happen we actually have to vote first.

    So we have to tell LMArena which image we think is better: the left, the right, are both of them bad, or is

    05:28

    it a tie? So let's say in this case, the left definitely looks better in terms of consistency.

    So I say left is better. And that is when you will actually see what is exactly the name of the model that has generated the image.

    Sometimes you might get this security

    05:43

    verification window right here. But you can see clearly, as expected, this is by Nano Banana.

    And right here you can see the difference in the quality. So if I open up this image just see the consistency when we compare it to the original.

    05:59

    This is absolutely fantastic, and just see: in this case the other competitor is ChatGPT, and this is absolutely a disaster. So, right in this first example, I hope that now you know why

    06:14

    Nano Banana has been creating so much noise. But we've got 14 other examples to see, and trust me, they will blow your mind, too.

    So, let's get started. In this next example, I took both these portraits of Michael Jackson and Albert Einstein.

    The prompt was very

    06:29

    simple. Portrait of Michael Jackson and Albert Einstein standing in a room like they are the best of friends.

    And this is what Nano Banana gave out. This was absolutely stunning, especially when you're soon going to see what Flux Kontext gave out.

    Here's another result

    06:45

    by Nano Banana where the only change I made was that give me a full body portrait. And this also in my opinion looked very very amazing.

    But just see the result by its competitor, which is Flux Kontext in this case. And you can see that, as compared to the Nano Banana

    07:01

    image, this just lags behind too much. Let's see the next example.

    This time the prompt was very easy. Make the weather overcast and rainy and the people in the photo should be holding umbrellas.

    This is the result that Nano Banana gave out. Absolutely fantastic.

    07:17

    Barring that one person who didn't have that umbrella. But that's okay because again just see the result of flux context.

    Just notice the umbrella. Nobody's actually even holding that umbrella.

    The photograph looks pixelated. It has changed the face and

    07:33

    overall even the rain just looks really really terrible. So no competition at all.

    Let's look at the next example. I uploaded these two images.

    One room which was decorated with this Buddhist-themed setting, and one was an empty room. And this can be a great use

    07:50

    case of Nano Banana because I've only seen this tool being able to handle such a complex task. Decorate the empty room with a Buddhist theme.

    And just see: this is what Nano Banana gave us, exactly that same room. And if you look at both these

    08:06

    pictures, it is very, very close to the setting that we had asked for. And here is the result by Flux Kontext: it kept the theme consistent, but the room was totally different.

    So there was no point at all. Next up was this image where I

    08:21

    wanted this guy to be standing next to this car. And initially I had actually only written cinematic shot of a man standing next to a sports car.

    But I had not actually uploaded the image of this sports car. Just wanted to see what happens if you just give it the portrait itself.

    This is what Nano Banana gave

    08:39

    out and I absolutely loved it. Was it cinematic?

    Absolutely yes. Was it the same person?

    Yes. Now I uploaded this image and I just wrote in a simple prompt by also uploading the image of the yellow car.

    It changed the car over to the yellow sports car, and it

    08:54

    gave me a wide-angle shot this time, which was absolutely fantastic. Just see the result by Gemini 2 Flash.

    This is also by Google but the result here was just absolutely terrible. So you can see how far ahead Nano Banana is of its competitors.

    This is one of the best use

    09:11

    cases of Nano Banana, which can help not just people who are into image editing but also those who are into video production, because here we have a real-life shot of this couple sitting in this restaurant. We're going to test out different angles, and we're going to see if Nano Banana can maintain that consistency. So

    09:28

    I started writing very simple prompts, like medium close-up shot of the man, close-up shot of the woman, and I just wanted to see whether it keeps things consistent. So just see: for the man we had this, and if you compare it with the original image, everything is as it is,

    09:46

    right from all the glasses to the setting to the man himself. We've just never seen anything like this before. And just see: this is the result that ChatGPT gave out. It is in no way a medium close-up shot, because we can't see anything.

    This is a full-on close-up

    10:02

    shot, so it didn't really make too much sense.

    You can see how important context is for Nano Banana. It really really maintains that.

    Same thing happened when I wrote a medium close-up shot for the woman. This was a fantastic result.

    Again,

    10:18

    here I wrote overhead shot of the same scene. This was by Nano Banana.

    And just see the result by its competitors. I think this was by Flux Kontext.

    It just changed the scene altogether because we never had a window like this in the original scene. So now with Nano Banana

    10:35

    from a single scene you can have different compositions. Just imagine how amazing this is if you're into video production, because oftentimes in such scenes you're not going to have that one constant angle. You need different compositions to keep things dynamic.

    But in real life this would require an entire crew and

    10:51

    maybe even a crane for the overhead shot. But right now just by typing in different prompts you can get different compositions, keep everything as it is and just turn it into a dynamic video.

    No wonder Google wants this inside Google Flow, because this is going to be a match made in heaven with Veo 3. Then we

    11:08

    had this very simple image of this leather couch, and I just typed in: this chair in a cigar lounge. I just wanted to test the context part of it, and you're going to be amazed again.

    This is the result that Nano Banana gave. I think this was absolutely fantastic.

    We've got

    11:24

    the cigar, we've got the whiskey, the colors, everything that you would expect inside a cigar lounge. And of course it kept the couch very very consistent.

    Just see the result by Flux Kontext. The only thing here is that, yes, it maintained the consistency when it comes to the couch.

    But where is the context?

    11:41

    Remember the prompt said cigar lounge. This doesn't really look like that.

    Right? So you can again see that even for something like product lifestyle shots.

    This can be amazing. Very similar to this was the next one.

    This coffee machine is kept in a kitchen. A cup of

    11:57

    coffee is being made using it. Check out the result by Nano Banana.

    Fantastic. Look at those beans behind and that bean pouch.

    This is fantastic. We know that yes, this is coffee being made.

    The result by Flux Kontext wasn't bad, but you can see that it just didn't add

    12:13

    those extra things. And even the image quality wasn't as great when you compare side by side to Nano Banana.

    If you've watched a couple of my tutorials in the past, you would know that two weeks back, I released a very long video on this particular task where we had the

    12:29

    headshot of this woman and we wanted her to hold this perfume bottle. It should look very real, like a professional ad image.

    If you remember, we did this through Ideogram Character, but it involved two steps, because we also had to replace the bottle using Ideogram's Magic

    12:45

    Fill. But right now with Nano Banana, all this can be done with a single prompt.

    A woman holding a blue perfume bottle sitting on a luxurious couch and wearing pink leather boots. Pretty much the same prompt that I used in that video.

    And just see the result. The one on the left is by Nano Banana and the

    13:02

    right one I just put through an enhancer to upscale it, just to make it even more real. But you can see that even the image on the left, the original image,

    also had fantastic quality. The product had exactly that same label.

    So basically we achieved everything that we did in that video with a single prompt

    13:19

    and have a look at Flux Kontext. Nano Banana is just on another level right now.

    Let's look at the next example. Again this was very important from a context point of view.

    Man sitting in a sports car on a busy street in downtown.

    13:35

    This is what Nano Banana gave us. Looked amazing.

    Followed everything that we described in the prompt. The only thing I would probably have liked is if he was sitting in the driver's seat, but that's okay.

    We didn't mention that in the prompt. But you can see the downtown is there, the sports car is there, and

    13:51

    more importantly, check him out. The consistency is pretty much 100%.

    Now, see the result by Flux Kontext. It also follows the same things.

    But again, you can see the essence of that shot just goes away when you can't see the buildings in the downtown because that

    14:06

    is probably what we want when we mention something like "in downtown." So you can see Flux Kontext is not bad.

    It's just that Nano Banana understands things in a far superior manner. Next, I wanted to test this out for a clothes

    14:22

    swap. And how consistent does it keep the design?

    This is again from one of my past tutorials, where the only tool that could do this was SellerPic AI, if you've seen that video. And that is an expensive tool.

    So this was a pretty complex looking design and I wanted to

    14:39

    see if it could handle such a complex-looking pose for the person as well. So: man wearing a t-shirt which says Stay Rad.

    This was the prompt. This is what Nano Banana gave out. Everything looked amazing.

    Even those little blue droplets that were there on the left top corner

    14:54

    it maintained. Everything looked amazing, as good a result as SellerPic AI had produced. Just see the result by ChatGPT this time: it just changed the pose altogether.

    The next example was a simple prompt for this image of a bike: cinematic

    15:11

    advertisement for a bicycle. Simple enough.

    And this is what Nano Banana gave out. One of the very important things here was that it kept that text, the Bulls text, absolutely consistent.

    It even added, on the bottom, that

    15:26

    little tag or label there, because ultimately I had asked for an ad. Even Flux Kontext didn't do a bad job, as you can see here.

    But the problem was with consistency. You can see the text this time doesn't say the same thing.

    So that itself is huge: it

    15:42

    just keeps even the minute details very, very consistent. The next prompt: design the van in the same style as the logo.

    Brilliant result by Nano Banana. Again, I would probably have liked the design to be slightly louder.

    I'm sure that can also be

    15:58

    achieved by tweaking the prompt a bit, but overall this looked very real. It didn't change anything, especially when you look at the result which Gemini 2 Flash gave out.

    It just basically looked like an illustration. The next example here is something that we have never ever seen any image editor achieve

    16:15

    until now, which is asking it to make the skin smoother. Up till now, most of the editors out there, like Gemini Flash, Flux Kontext, and ChatGPT, would all change the face to a significant degree.

    But have a look at the results here. The

    16:31

    result on the left is of course by Nano Banana. And if you compare it with the original, that's exactly the same woman.

    We've just never seen this before. And have a look at the image on the right.

    That is by Flux Kontext. That is not just a small change.

    That is a

    16:48

    completely different person altogether. Therefore, for the first time, even portrait retouchers have something at their disposal that is, at least for now, available for free, because right now something like this can only be done

    17:04

    using expensive AI software like Evoto or Retouch4me. Finally, I wanted to see how it handles colorizing.

    So, simple enough prompt, add color to the image. Both results weren't that bad.

    The one on the left is by Nano Banana. The one on the right is by Flux Kontext.

    17:20

    I still feel that even here the Nano Banana image just looks more natural, because the Flux Kontext image made it look a bit cinematic, whereas we never asked for that. We just said "add color to the image."

    It is

    17:36

    rumored that within a few weeks Google will be releasing Nano Banana officially with a different name. How will we access it then?

    Will it still remain free? These are things we'll just have to wait to find out.

    One thing is sure though, just like in the case of V3,

    17:53

    once it does come around, it is bound to change the AI landscape forever. And when that happens, I will be here to review it.

    In case this video helped you out, do give it a like. And for more AI tutorials like this one, make sure you subscribe.

    And I will see you next