How to Build an ENTIRE iPhone App From Scratch (AI + No code)


Category: App Development

Tags: AI App Development, Community Features, Insect Identification, Mobile Apps, No Code Tools

Entities: Expo, Google Gemini, Picture Insect, ROR, Superbase


Summary

    Introduction
    • The video demonstrates how to build a fully functional iPhone app from scratch using AI and zero coding skills.
    • The app being recreated is similar to Picture Insect, which generates significant monthly revenue.
    AI Tools and App Development
    • ROR is introduced as the AI tool used to build iPhone and Android apps without coding.
    • The app includes features like insect scanning, threat level assessment, and pest control recommendations.
    • Google's Gemini API is used for image detection and detailed bug information.
    App Features and Functionality
    • The app has five main tabs: home/scanner, history, community, learn, and profile.
    • An AI chat feature is integrated to allow users to ask questions about scanned insects.
    • A community section is added for users to share discoveries and interact with others.
    • A learning section offers educational resources on entomology.
    Backend and User Experience
    • Superbase is used for backend integration, enabling login and signup functionality.
    • A predictive widget provides insect activity analysis based on user location.
    Publishing and Accessibility
    • The app is published to the App Store using Expo, and can be shared pre-publication via QR code.
    • A new ROR mobile app is introduced, allowing app development directly from a phone.

    Transcript

    00:00

    This app from the App Store is making $400,000 every single month. And I just built the exact same iPhone app in under 60 minutes without writing a single line of code.

    So today, I'm going to show you exactly how to build an entire iPhone

    00:15

    app from scratch using AI and zero coding skills. We're going to recreate our own version of Picture Insect, an app that's made millions of dollars, and I'm going to walk you through every single step of the process.

    But here's what makes this even more incredible.

    00:31

    Not only will you have a fully functional app by the end of this video, but I'm also going to show you the one AI tool that makes all of this possible, how to get your app on your phone in under a minute, and how to build complex features with simple prompts. This is

    00:48

    literally the future of app development, and it's happening right now. Now, I've tested dozens of AI coding tools, and this one is hands down the best one available right now.

    Everything you're about to see was built with zero coding knowledge. It's just AI doing all the

    01:04

    heavy lifting for us. Now, the AI tool that allows you to build iPhone and Android apps is ROR.

    I added a link in the description below, so you can go ahead and check it out, too. All right.

    So, this app lets you point your camera at any insect and instantly AI tells you

    01:20

    exactly what species it is. So, there's no more guessing.

    You get real time insect scanning with full camera integration. And it doesn't stop there.

    The app shows you whether the insect is harmful, beneficial, or neutral with even a threat level assessment. You also

    01:35

    get pest control and treatment recommendations if it's something you don't want around. And if you are curious about what you are looking at, well, you can go ahead and ask questions like, "What is the lifespan of this insect?" And you can get an answer

    01:50

    immediately through the built-in AI chat. And just like that, you can publish the results.

    Yeah, publish the results straight to your phone with a QR code using Expo Go. Now, I built the entire app with Ror, Superbase, and the Gemini API.

    And it's all connected in

    02:06

    under a minute with just simple steps. So before anything else, we do need to have something on the screen.

    It doesn't have to be pretty just yet. It just needs to be enough to click through and then know that we're headed in the right direction.

    So we're starting with a basic setup. Five tabs, one scanner, and

    02:24

    no distractions. We'll be using Ror here, which is an AI-powered platform that lets us build mobile apps without writing a single line of code.

    Now, what I'm going to do is just describe what I want and within minutes, I'm going to have a working mobile app ready to go. Okay, so here's what I'm going to drop

    02:40

    into ROR. Hi, Ror.

    Let's build an awesome bug scanner app. For starters, let's build the app's base.

    Please create an app with five tabs: home/scanner, history, community, learn, and profile. We'll build more functionality later, but please focus on

    02:56

    those five. Then make sure the upload photo and scanning feature works.
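For a sense of what that first prompt actually produces, the five-tab structure can be pictured as a simple navigation config. The sketch below is purely illustrative: Ror generates its own code, and the `Tab` type, icon names, and `getTab` helper here are my own invention, not its output.

```typescript
// Illustrative model of the five-tab layout from the prompt.
// None of this is Ror's generated code; names are assumptions.
type Tab = { name: string; icon: string };

const TABS: Tab[] = [
  { name: "Home/Scanner", icon: "camera" },
  { name: "History", icon: "clock" },
  { name: "Community", icon: "people" },
  { name: "Learn", icon: "book" },
  { name: "Profile", icon: "person" },
];

// Look up a tab by name (case-insensitive), e.g. for deep linking.
function getTab(name: string): Tab | undefined {
  return TABS.find((t) => t.name.toLowerCase() === name.toLowerCase());
}
```

In an Expo app this kind of structure typically maps onto a tab navigator, with one screen component per entry.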

    All right, so first important disclaimer here. Building apps and making money online is not easy despite what other YouTubers are saying.

    So when I show you how to build an app, I'm demonstrating

    03:12

    the technical process and the potential. I'm not guaranteeing our results.

    These AI tools are legit and the techniques do work, but success depends on your execution, your timing, and honestly some luck. And most apps don't make money right away.

    And many

    03:28

    don't even make money at all. So, I'm not promising that you'll get rich or that you'll quit your job.

    This is not financial advice or a get-rich-quick scheme. I'm just showing you what's possible with these tools.

    Now, what you do with that knowledge, well, that's entirely up

    03:44

    to you. All right, with those expectations set, let's go ahead and continue to build the app.

    Ror responds with exactly what we asked for: a super simple app with a green theme and five tabs.

    Nothing's live yet inside those sections. They're just placeholders right now.

    Placeholder

    04:01

    content across the board, but that's totally okay because at least now we know the bones are there. The rest we're going to fill in as we go.

    Now that we've got the basic structure in place, it's time to make things look a little bit better. The layout works, the tabs are there, but right now the design is

    04:16

    still pretty plain. So, for this part, we're going to get some visual inspiration from another app and then ask Ror to kind of match that style.

    And to do that, I'm going to give Ror a screenshot of the app we want to base it on, something we like the look of. And

    04:32

    here's the exact prompt I'm going to use. Great job.

    Now, I have a screenshot of the app that I'd like to get some design inspiration from. Can you implement a similar style here?

    And with that, the look of the app changes right away. The color scheme shifts to that

    04:49

    clean iOS style blue and the homepage shows a bit more detail than before. Everything else still follows the same layout, just with a fresher and more familiar design.

    It's not final, but it already does feel like a better version of what we first had. And

    05:05

    here you can see how the updated version stacks up against the first one. What do you think?

    So, I'll go through each section again just to show the differences. And even though most of the features are still just placeholders, the overall vibe is starting to come

    05:20

    together. The layout's in place and the design is looking better.

    So, it's time to give the scanner some real functionality. Now, this part's all about getting the image detection to work.

    And we're going to do that by connecting it to Google's Gemini. And to make that happen, we will first head on

    05:37

    over to Gemini's API dashboard. We're going to create a new key and then copy it.

    Then we're going to go back to ROR, open the integrations panel here, and then add the key under environment variables so the app can access it securely. After setting that part up, we're going to tell ROR to connect

    05:54

    Google Gemini to the scanner. Now, we're going to let it know that we want the app to scan user uploaded photos using Gemini and that the API key we just added should be used.

    We're also going to ask it to generate an env file since we're going to need it when we clone the

    06:11

    project later. And once all of that is in, the scanner kicks in properly.

    And at this point, uploading a photo triggers Gemini. And then the app starts identifying the bugs that it sees.
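For context on what happens behind that scan, the Gemini API accepts images as inline base64 data alongside a text prompt. The helper below is a hedged, hand-written sketch of how such a request could be assembled; the model name, prompt wording, and function name are my assumptions, not what Ror actually generates.

```typescript
// Sketch of a Gemini generateContent request for insect identification.
// Model choice and prompt text are illustrative assumptions.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent";

function buildScanRequest(base64Jpeg: string, apiKey: string) {
  return {
    url: `${GEMINI_URL}?key=${apiKey}`,
    body: {
      contents: [
        {
          parts: [
            {
              text:
                "Identify the insect in this photo. Give species, threat level, and key facts.",
            },
            // Image travels inline as base64 with its MIME type.
            { inline_data: { mime_type: "image/jpeg", data: base64Jpeg } },
          ],
        },
      ],
    },
  };
}
```

In the app itself the key lives in an environment variable rather than in code, which is exactly why the video adds it under Ror's integrations panel.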

    It's working. Though the results are still pretty basic just for now.

    You'll see as we try it with a few different images.

    06:27

    The detection does go through, but there's definitely some room to make the output smarter in the next step. So, like I mentioned earlier, the scanner works, but the output still feels a little thin.

    We're getting basic results. Sure, but not much beyond that.

    So, in this part,

    06:44

    we're going to ask Ror to expand what Gemini gives us after each scan. Now, here's what we're going to say.

    Great job so far. Now, please expand the scanning capabilities and include more info.

    Please include the following

    06:59

    information for the scanned bugs: threat level assessment, pest control recommendations, environmental impact information, similar species comparison, and conservation status. And with that, ROR updates the scan function to match exactly what we asked for.

    And after

    07:16

    scanning, we now have a full set of details for each bug. All of that extra info shows up under the details tab.

    The scans are a lot more informative now, and this gives the app a more complete feeling without changing how the rest of it works. So far so good.
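One way to picture that expanded output is as a typed result object. The interface below is my own guess at a reasonable shape for the fields the prompt asks for; Ror's actual internal representation isn't shown in the video.

```typescript
// Hypothetical shape for the expanded scan result; field names are my own.
type ThreatLevel = "harmful" | "beneficial" | "neutral";

interface ScanResult {
  species: string;
  threatLevel: ThreatLevel;
  pestControl: string[]; // treatment recommendations
  environmentalImpact: string;
  similarSpecies: string[];
  conservationStatus: string;
}

// Render a short summary line for the details tab.
function summarize(r: ScanResult): string {
  return `${r.species} (${r.threatLevel}) - similar to ${r.similarSpecies.join(", ")}`;
}
```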

    And this is

    07:33

    the first time we're actually trying everything out inside the app. So, after allowing camera access, we're going to go through each of the tabs one by one.

    Most of them are still placeholders for now, but it already gives us a very clear idea of what features we'll have

    07:48

    later on and how everything is structured. After that, we're going to head to the scanner and use the phone's camera to take a photo of the ant image we set aside earlier.

    The app scans it right away and Gemini pulls up a full set of details based on what it detects.

    08:05

    Then we're going to jump over to the AI chat tab and ask a few follow-up questions based on the scan results. Now the replies show up right away.

    No issues. They are great, which means the integration is working just the way we wanted it to.

    And so far everything we've

    08:20

    set up is running as expected. The scanner, the extra details, the chat.

    It's a simple initial version, but the main flow is already there. The next thing we're building is the AI chat.

    And for this part, we're going to switch over to Cursor. Before we can do

    08:35

    anything with it, we do need to set up a GitHub repository that Cursor can clone. I'm going to start by clicking the integrations button in the top right here.

    I'm going to link the account if it isn't already, and then I'll create a new repository. In this case, the

    08:51

    accounts are already linked. So, we're going to go ahead straight to creating the repository.

    Once that's done, we're going to copy the link since we're going to need it to clone the project into Cursor. Inside Cursor, I'm going to ask it to clone the repository using that

    09:06

    same link. And after the clone finishes, we're going to open up the project and we'll get ready to prompt it for the AI chat setup.

    Here's what we're going to tell Cursor. Hi, Cursor.

    Please assess the project files. I want you to build an AI chat companion for the app we have.

    The platform we're using is Ror.

    09:24

    Please use their native AI dependency. Please create an AI chat where the users can ask questions about the bug they scanned to give you more info.

    We have a bug scanner app. Once a bug has been scanned, it's saved and info about it is

    09:39

    given by Gemini. Please use that info as the basis topic for the AI chat.

    Now, it's taken a bit of back and forth, but Cursor eventually pulls it off. The AI chat is fully integrated, and it connects directly with the bug details we're getting from Gemini.

    After

    09:55

    scanning something like an ant, we're going to scroll down. We're going to open up the AI chat modal and then ask questions based on that scan.

    And everything works right inside of the app. And just like that, the chat feature is live.
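Conceptually, grounding the chat in the scan just means folding the saved Gemini bug details into each question before it goes to the model. Here's a minimal sketch of that idea; the types and function names are invented for illustration, not taken from the generated code.

```typescript
// Toy sketch: prepend the saved scan details to the user's question so
// the model answers about the scanned bug. Names are illustrative.
interface BugInfo {
  species: string;
  facts: string[];
}

function buildChatPrompt(bug: BugInfo, question: string): string {
  const context = `Scanned bug: ${bug.species}. Known facts: ${bug.facts.join("; ")}.`;
  return `${context}\nUser question: ${question}\nAnswer using the context above.`;
}
```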

    All right. So far, the app looks great, at least to me, and the

    10:10

    main features are working. But right now it is still wide open, meaning there's no way for users to sign in to save anything or come back to their data later, which does kind of break the whole point of it.

    All right, but to fix that, we're going to set up a proper login and signup system. And we're going

    10:27

    to use Superbase to handle the backend side of things. And we're going to begin by heading over to Superbase and creating a new project.

    Once that's ready, we're going to go into the API section, copy the anon key, then grab the project URL by clicking connect.

    10:43

    Now, these two keys will be used later when we link everything together. Now, it's time to bring this over to Ror, and we tell it to handle the back-end setup, install Superbase, and create both the login and signup pages along with the full authentication flow.

    Now, here's

    10:59

    what we're going to say. Great job so far.

    Now, let's do some back-end integration. Please install Superbase and create a login and sign up page.

    Please create the authentication process as well. Afterwards, I will give you the API keys for Superbase so we can have a

    11:16

    full login and signup system. Once that part is done, we're now going to move to the integration tab in Ror and paste both the anon key and the project URL into the environment variables.

    Doing that connects everything behind the scenes so that we can move on to

    11:32

    building the actual database. Now, what's really impressive here is how ROR handles complex back-end functionality through simple prompts because I'm not writing any scripts or setting up servers.

    Basically, we're just describing what we want and then it

    11:48

    builds the logic automatically for us. Now, we're going to let ROR know the keys are in and it's time to ask for the SQL queries we'll need to create the tables in Superbase.

    So let's go ahead and say this. The keys are now added in the environment variables.

    Now please

    12:03

    send me the queries needed to create the tables in Superbase. Here, Ror gives us the full set of SQL queries.

    So let's go ahead and copy those. Go back to Superbase.

    Open the SQL editor. Paste them in and then run the command.

    And right away the tables show up in the

    12:19

    table editor. And just like that, our backend is live.
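The video doesn't show the generated queries themselves, so as a rough illustration only, table definitions for this kind of app might look something like the following. The table and column names are entirely my guess, not Ror's output; `auth.users` is the backend's built-in users table.

```sql
-- Hypothetical schema sketch; the actual queries come from Ror.
create table profiles (
  id uuid primary key references auth.users (id),
  username text unique,
  created_at timestamptz default now()
);

create table scans (
  id bigint generated always as identity primary key,
  user_id uuid references profiles (id),
  species text,
  threat_level text,
  details jsonb,       -- full Gemini output for the scan
  created_at timestamptz default now()
);
```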

    After going through all of those steps, we now have a smooth, modern-looking login and signup page. So here I'm going to try to create an account.

    And after I log in, we're successfully inside the application. So

    12:36

    far, everything works seamlessly. The authentication flow, the transitions, and basically the overall experience, it's both clean and responsive.

    The next thing we want to bring in is the community feature. Because up until now, the app's been focused on scanning and

    12:51

    storing information. But if users can't share what they find,

    it starts to feel a little isolated. So, we're going to build out a social section where people can post their discoveries, join groups, and interact with others who are just as into this stuff.

    And to make

    13:07

    that happen, we will tell ROR to set up a full community system. We will ask Ror to add the capability for our users to join or create communities, react to posts, leave comments, and share their own posts with captions and geolocation.

    We're also going to

    13:23

    mention that after scanning a bug, pressing the share now button should also let them post it directly into the community of their choice. Now, this one does take a bit of back and forth, but what's great is that whenever something doesn't quite work the way we want or

    13:39

    expect it to, all we have to do is just hit send to AI and ROR will handle the fix automatically. Now, this takes a ton of pressure off when you're building something this interactive.

    After trying a few times, we can see that everything is finally coming together. The

    13:55

    community section is live and it's working. I can make a post.

    I can tag my location. I can comment on someone else's scan or I can start my own group.

    The share button from the scanner also links straight to this. So posting a find is super easy.

    Everything's working

    14:10

    smoothly so far. So it's a good time to build out one more important feature, a proper learning section.

    Now the idea here is to give users a space where they can dive deeper into entomology, not just scan bugs and then move on. So we're going to turn that old learn tab

    14:27

    into something actually useful. Here's what we tell Ror.

    Everything's going great so far. Now, let's move on.

    Please create a dedicated learning section. Let's use the learn placeholder section we did before.

    Please add some thorough

    14:42

    educational resources such as insect facts, detailed taxonomic information, educational articles, and some interactive learning modules about entomology. Now, admittedly, it does take a few rounds of back and forth prompting to get it just right, but we

    14:59

    eventually land on a working version. The learn tab now has multiple categories, including general insect info, deep dive articles, and even a quiz module where users can test what they've learned.

    Everything feels organized and easy to explore, and the

    15:14

    section adds a whole new layer of value to the app. All right.

    So, to wrap up the community features, we're going to add one last piece, a dedicated journal or log where users can keep track of everything they've scanned or collected. It is a simple feature, but it does give

    15:30

    users a more personal way to document their discoveries and their observations. So, here's what we're going to tell ROR in building this out.

    Great. Now, please create a dedicated journal log for the user.

    This is a place where they should be able to journal their collections and scans.

    15:47

    Please put this in the community tab at the very top of that section. And with that, Ror sets it up.

    And now there's a fully working journal. Users can create entries, write down notes, and organize what they found.

    It is a small addition, but it really ties the whole experience

    16:04

    together, and it makes the app feel a lot more personal. At this point, we've covered all the basics: scanning, chat, community, learning.

    But to really push the app just a little bit more, we're going to add one final advanced feature. Now, this one's designed to make the

    16:19

    experience more dynamic by using location-based data to predict insect activity. Pretty cool.

    Now, to set that up, we're going to tell Ror: Okay, great job so far. Now, lastly, let's expand the app by building an advanced feature. Please create a

    16:35

    section at the scanner page, sort of like a widget at the top, where based on your current location or chosen location, the user can see predictive analysis based on the current weather at that location and seasonal insect

    16:50

    activity. If the user presses the widget, it should open up and provide more detail.

    All right, after generating that, the app now has a live predictive widget built into the scanner page. It shows the current weather and expected insect activity for that area or any

    17:06

    location that our user selects. And whenever the widget is tapped, it expands to show more detailed insight.

    Everything adjusts based on that location. And it brings a whole new level of relevance to the whole scanning feature.
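The widget's prediction logic isn't shown in the video, but conceptually it maps weather and season to an activity level. Here's a toy heuristic of that idea; the thresholds, hemisphere assumption, and function name are invented for illustration and are not the logic Ror generated.

```typescript
// Toy heuristic for seasonal insect activity. Purely illustrative.
type Activity = "low" | "moderate" | "high";

function predictInsectActivity(tempC: number, month: number): Activity {
  // Insects are ectothermic, so activity roughly tracks temperature.
  if (tempC < 10) return "low";
  // Assume northern hemisphere: Apr-Sep is the warm season.
  const warmSeason = month >= 4 && month <= 9;
  if (tempC >= 20 && warmSeason) return "high";
  return "moderate";
}
```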

    All right, so everything is built, the features are in, and now it's

    17:22

    time to push the app live. The publishing process is pretty straightforward, and we're going to do it through the App Store using Expo.

    We'll start by clicking the publish button here at the top right corner of the screen. And from there, all we have to do is choose the app store as our

    17:38

    target platform. The next step is to enter our Apple developer credentials and paste in the Expo API key.

    And once everything's filled out, we're going to go ahead and finish the setup. And if you want to test your build before going live, you can instantly try it on your phone using a QR code with the Expo Go

    17:56

    app. It's fast, it's simple, and it works without any complicated deployment steps.

    And that's it. The app is officially ready to go live.

    But here's the cooler part. Even before publishing, you can already share your app with family and friends without needing app

    18:11

    store approval. ROR makes that possible by giving you a direct share link or a QR code that works instantly.

    It's actually the only platform I've seen so far that makes pre-publication sharing this easy. Okay, so before we wrap things up, there's actually one more

    18:27

    thing I want to show you. Something really exciting that just dropped.

    So, I actually got an exclusive TestFlight invite from Ror through their X account to try out their brand new mobile app. And I'm going to give you an exclusive

    18:43

    first look at what this looks like. Now, for those of us who don't know, TestFlight is Apple's official beta testing platform.

    This is where developers share their apps before they hit the app store. So getting access to this is pretty special.

    And here it is,

    18:59

    the ROR mobile app. This is literally cutting-edge stuff that most people haven't even seen yet.

    And what's crazy is that this mobile app lets you build other mobile apps right from your phone. Like we just built this entire insect identification app on desktop.

    But now

    19:16

    imagine being able to do this from anywhere. This is honestly game-changing for no-code app development.

    Now, if you want access to exclusive updates just like this, and potentially your own TestFlight invite,

    19:31

    please do make sure you're following Ror on X. That's where they share these early access opportunities.

    And I'm going to go ahead and put the link in the description below. Okay, so now that everything is built, published, and even accessible on mobile, let's do one full

    19:47

    walkthrough. Okay, from start to finish just to see how the entire app works in action.

    So, we're going to start by logging into the account and once we're in, we're going to go straight to the scanner and use the phone's camera to capture an insect. The scan runs and right after that, we will now open the

    20:02

    AI chat to ask a few questions based on the result. And from there, I will hit the share button, and I'm going to post the scanned bug directly into a community.

    Then we're going to switch over to the collections tab to check if the scan was logged properly and saved to our list.

    20:20

    Next up, we're going to head into the community we just posted in. We're going to drop a quick comment on the post and add a reaction and explore what other people are sharing.

    After that, we will try to tap into the learn section and read through a few articles and complete one of the interactive modules. And to

    20:37

    finish everything off, I'm going to try to scroll up to the top of the scanner page and I'm going to check the predictive widget. And right there, as you can see, it's working properly.

    Everything works together seamlessly. And the app feels complete from end to end.

    And that wraps things up. We just

    20:54

    built an entire AI-powered app from the ground up. No code, no dev team, just smart prompts and a clear goal.

    And what's even more exciting is how smooth the process really was. Seeing everything come together shows how far

    21:09

    no code tools have come. So if you're also thinking about building your own app, or you just want to keep up with how fast this space is moving (and it is fast), go follow Ror on X.

    That's where all the early drops, the invites and

    21:24

    updates usually land first. And if you want more builds exactly like this, or you want me to try cloning a different kind of app, I'm totally game.

    Let me know in the comment section below. I'll see you at the next one, and thank you for spending your time with me.