00:00
Today, I'm super excited to share with you guys the ultimate media agent. Not only does this agent have access to personal assistant functions like email, Google Drive, and calendar, but can also do so much on the creative side of things like creating an image, editing an image, creating a video, or turning an image into video.
And then finally,
00:15
it can post any of the media that you create on X, TikTok, or Instagram. And finally, something that's really important, it logs everything it does, even if there are errors, so that you can have full visibility into what your media agent is doing.
As always, I'm going to give you guys this entire system for free so that you can get
00:30
hooked up and start using this thing just like I am. So, stick around to the end of the video and I'll show you how to set everything up.
So, I don't want to waste any time. Let's start seeing how this thing works.
Okay, so we talk to our agent through Telegram, and on this left-hand side, you can see I'm going to send over an image. And what the agent's going to do is first of all process this into our Google Drive
00:46
environment and then it's going to come back and say, "Hey, what do you want to call this file?" That way, we can keep it in our database for later. And while we're letting this spin, I'll just show you a quick sneak peek of something this exact team of agents did for me earlier.
So, here you can see the agent's pretty eager to help us out.
It's asking us about how we want to name
01:01
this and also if we want to change any sharing settings for this picture. So, right now we're not going to touch the sharing settings.
I'm just going to give it a name. So, here you can see I just sent off this message that says to just name it speaker.
What it's going to do is it's going to go to our Google Drive agent right here and that is the one that has the right tools to actually
01:17
change the name. As you can see, this tool right here is called change name.
So, we'll watch it hit the Google Drive agent and then we'll go check in our actual Google Drive folder to see if it has been changed. Okay, so it just responded to us and of course it's logging all of its actions which I'll show you guys later.
But if I pull back up Telegram, it says that it renamed the
01:34
file to speaker. I can click on this link right here and we should see that exact picture that I just uploaded.
And you can see it exists in our folder called media. It is called speaker and it's right here.
So now what I'm going to do is send off this other message that says please edit that image. Turn it into a studio looking image.
It
01:49
should be energetic, colorful, and highlight the feeling of listening to music on a speaker, whatever that means. The media team will figure it out.
So, what this agent is going to do is it's using GPT-5 Mini, which is its brain, to figure out which agents it actually wants to use in order to edit that image and turn it into a studio-looking sort
02:05
of advertisement. And so, what it's going to do is it's going to hit its creative agent right here.
And this is the one that has access to that tool called edit image. And we should see it basically be able to grab that file from our Google Drive and edit it.
It's also using a think tool because it has so many different actions to process that
02:21
sometimes it needs to think. And if you haven't seen what this tool does, I'll link a video right up here where I cover that.
As you can see, just as we predicted, the creative agent is using its edit image tool. So, I'll check back in with you guys once we get that final product back.
All right, look at that. The agent actually ended up creating three different images for us.
So, we
02:37
have a couple different styles that we can choose from, which is pretty cool. So, now that it logged its actions, it said, "I created preview edits and saved them to Drive, but they're 1024 x 1024 proofs.
Confirm before I render final 2048 deliverables." That's fine.
So, as you guys saw, it sent us three different ones. As you can see, they're pretty
02:52
much all named the same thing, just speaker studio vibrant. So, then it asks us what we want to do next.
First thing I'm going to do is just go to our media folder and make sure they're there. So, we have the original source file, and then we have the three edited ones right here.
So, here's number one. Here's number two.
Here's number three. I think honestly number one is probably my
03:08
favorite. So, this is the one that we're going to use to turn it into a video.
All right. So, it just fired off this new message where I said, "Actually, what I want you to do is take that first preview file." So, this was the one that we liked the best.
And then I said, "Turn that into a video. Create a VFX ad with music and lights that sync to the
03:24
beat as an advertisement for a JBL speaker." So, once again, this is going to hit the creative agent. It's going to use that image-to-video tool.
And let me check back in with you guys once that has been processed. And of course, real quick, we're using GPT-Image-1 for the image generation, and we're using Veo 3 Fast for
03:39
the video generation. Real quick, wanted to update you guys.
The creative agent has full autonomy to use its tools to help us out with our media. And you can see it's going to turn that image into a video, but it also wanted to try out creating its own video with just text.
So, I'm not too confident how good this one will be, but we'll definitely take a
03:55
look. All right, so that just finished up.
I'm very, very impressed. So, this first one is the one that was actually the image turned into video.
So, let's click into this real quick. We'll hear the audio.
04:12
That one's super cool. Very impressed.
And now this one is just text to video. [Music] So obviously in that one we don't get the same like JBL branding or the actual
04:28
speaker, but still could be some really good B-roll. And both of these were generated with Veo 3 Fast, which is cheaper and faster than Veo 3.
So imagine if we had these results in Veo 3. Also, you can customize these prompts a lot more for your use case.
You guys will see how I did it here and it's very very minimal. So very impressive.
And then basically
04:44
it responds to us and says, "Okay, here are the two different files we just made. What do you want to do next?" All right.
So I'm going to shoot off this one. I'm saying to send the JBL speaker VFX video to Dexter Morgan.
So what it's going to have to do is find in our Google Drive the right file that we're talking about. It's going to have to go to our contact agent to find Dexter
05:00
Morgan's email and then figure out if it can send it to him over email. It also may have to go make the file sharable.
So, it's got two options. It can share with what specific email or it can just share and make everyone a viewer of that file.
So, we'll see what it does. Okay, cool.
So, it got Dexter Morgan's email.
05:16
It's searching through media right now. I'm assuming it's going to go back and either make it sharable if it's not already.
And then, okay, there we go. It's sharing the file right now.
And now it should kick off the actual email. All right, sweet.
So, that just finished up. It said done.
I found his contact information. Set the video to anyone
05:32
with the link and sent the email. So, it signed off with "Best, [your name]" placeholders, which I don't like, but that was my fault for not prompting it that way.
But either way, I'm just going to hop into the email and let's take a look. All right, so here's the email.
I'm sure you guys are glad that Dexter Morgan made another return. Dextermiami.com.
Although, if you've been watching the newer seasons, it's now dexterl.com.
05:50
And then, who knows? But either way, please find the 15-sec VFX ad draft for the JBL speaker here.
I don't think it's 15 seconds, but either way, if we click on this link, it should now just pop up as our video file, which isn't fully processed yet, so we'd have to download it. Let me do that real quick.
And when I hit download, this thing pops up. And
06:06
that's the exact ad that we wanted to send. So, it was able to find the right one.
And man, I just love how like the blue and red, they come out with music beats and it's like syncing to the audio. So cool.
Okay, so let's give the other agents some love. We haven't used the social media one or the posting one or the create doc tool.
So, what I'm
06:22
going to do is kick off this message that says, "Find me two high-performing videos about n8n on each of these platforms: TikTok, Instagram, and YouTube." So, it's going to go down here to the social media agent, and that's going to do all of the searching. And then we will basically tell it to compile those results into a doc, and
06:39
then we'll come back later and have it post one of those ads on TikTok or something just to make sure that it's working. You can see it's doing searches right now on all three of these simultaneously down here: YouTube, Instagram, and TikTok.
These are all happening through Apify, by the way, and I'll show you guys exactly what's going on there. Don't forget to use the code down below if you want to get
06:56
30% off Apify. Okay, workflow just finished up.
It said, "Got it. I searched through these platforms and I found two high-performing videos on each." So, for TikTok, we have "these four editing workflows are the silent killers."
We have the link to the video. We have the creator.
We can see different statistics about it. Same thing with the second video.
We'll come
07:12
down here to YouTube. We can see "You need to use n8n right now. Free, local, private." That one's by Network Chuck.
We've got a URL and then the second one is also about local setup. This one's also by Network Chuck and we've got a video right there.
And then for Instagram, we have our URL, our
07:27
creator, we have the caption, all this kind of information. So, we're able to actually search through based on, you know, a search term or a hashtag and we can specify how many videos we want from each platform.
Then it said, "What do you want me to do next?" I'm just going to shoot this off, which says, "Put those insights into Google Doc." We've got a tool right over here. It didn't
07:43
really make sense to throw this in with a different agent, so I just gave the main agent a tool to create a doc, and that goes to a separate n8n workflow that we built out and does that whole thing.
Okay, cool. So, that one's finishing up right now.
We should be receiving a link to the actual Google doc it just made. So, here it is.
It
07:58
called it "n8n high-performing videos, TikTok, Instagram, YouTube." Here's the link.
Let me click on that real quick. And it pulls up our Google doc right here where we can see it pretty much gave us those insights with a brief summary.
And of course, it titled it as well. And it should have put this document in a folder in our drive called
08:14
media analysis where you can see it popped up right here. Earlier I was testing this and I made a file called brainstorm and it just says don't forget to make your bed.
All right, so I'm thinking you guys have probably seen the functionality of all these agents. You don't need to see calendar.
You've seen that before. You don't need to see the web.
You've seen that before. Let's just
08:29
do a quick one with the posting agent. And what I'm going to do is tell it to just go grab that VFX of the JBL that we made and just post it to TikTok.
All right. So I'm telling it to post that JBL VFX video ad on TikTok with the caption "music to my ears."
So it'll be interesting to see what it does here. It
08:45
has memory. It has a context window of the last five messages, but we weren't most recently talking about this ad.
You know, we were talking about some other stuff. So, it's going to have to probably search through the actual media folder in order to pull back the ID of that file.
And then
09:00
once it has that ID, it can go ahead and hit the posting agent to make that post. So, as you can see, it looks like it's making sure the file is actually public, because in order to post it using Blotato, the file's got to be public.
So, it's doing that and then it's basically going to shoot it off over here and hit
09:17
the TikTok post tool. All right, so it just said it posted the ad to TikTok.
It gave us a submission ID and it captioned it "music to my ears." Let me just hop over to TikTok real quick and make sure it's there.
And as you can see, 1 minute ago, this just got posted and we have our ad.
09:32
All right, so where to even begin? We've pretty much seen most of the agents here and I wanted to start actually breaking down how this kind of stuff works.
So, I'll start with the two things up top, which is the input and the output. What happens over here is we're just doing a quick switch to see what type of input it is.
So, if a photo exists, then we're
09:48
going to go up to the photo path, and that's where we download it, put it to our drive, and then set the text. And if it's a regular text message that comes through, we're just going to feed it straight to the AI agent.
And the reason why we have to set the text here is we have to make sure that both of the inputs equal message.text so that
10:06
whichever way it comes, the agent will at least receive something from there. Obviously, it has its massive collection of agents and tools below it.
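If it helps to see that routing spelled out, here's a rough sketch of the idea in Python, assuming a Telegram-style update payload. The field names mirror Telegram's API, but the function and its output shape are illustrative, not the exact n8n node output.

```python
def normalize_update(update: dict) -> dict:
    """Photo updates and plain text updates both end up as one message_text
    field, so the downstream agent always receives text either way."""
    message = update.get("message", {})
    if "photo" in message:
        # Photo path: the file would be downloaded and uploaded to Drive here,
        # then we set a text stand-in so the agent still gets a message.
        file_id = message["photo"][-1]["file_id"]  # largest resolution variant
        text = message.get("caption") or (
            f"User uploaded a photo (file_id={file_id}). Ask what to call it."
        )
    else:
        # Text path: pass the user's message straight through.
        text = message.get("text", "")
    return {"chat_id": message["chat"]["id"], "message_text": text}
```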
But then what happens on this right side is it's going to clean up its intermediate steps and then log those in our tracker sheet and then send back a message to us in Telegram. And so what that tracker sheet
10:22
looks like is right here. We have timestamp, workflow, input, output, actions, tokens, and then total tokens.
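If you wanted to reproduce that tracker outside of n8n, one row per run could look something like this. A minimal sketch using the gspread library; the spreadsheet title, tab name, and the shape of the token-usage dict are assumptions.

```python
import json
from datetime import datetime, timezone

import gspread  # pip install gspread; uses a Google service-account credential


def log_run(workflow: str, user_input: str, output: str,
            actions: list, token_usage: dict) -> None:
    """Append one row: timestamp, workflow, input, output, actions, tokens, total tokens."""
    sheet = gspread.service_account().open("Media Agent Logger").worksheet("Logs")
    sheet.append_row([
        datetime.now(timezone.utc).isoformat(),
        workflow,
        user_input,
        output,
        json.dumps(actions),       # tool calls plus their inputs, kept inspectable
        json.dumps(token_usage),   # per-call prompt/completion tokens and model
        sum(call.get("total_tokens", 0) for call in token_usage.get("calls", [])),
    ])
```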
And let me just go all the way down to the bottom where our most recent runs would have been. Okay, so here's the most recent one where I said, "Post my JBL VFX video ad on TikTok with the
10:38
caption." Here's what how it responded to us. And then we can see right here the actions that were taken.
If we clicked into here, we could see the different tools it called and the different inputs and the stuff like that. And of course, we have our tokens.
It shows us how many were prompt tokens, completion tokens, the total, and also which model was used for each of these
10:54
little objects. And so, obviously, this is going to be really nice because you can see exactly what time different things were triggered.
You can see inputs and outputs, and based on these patterns of actions and things like token usage, you can make adjustments from there. And if you're confused about how you actually get this in your agent,
11:10
there's a setting down at the bottom, an option where you can return intermediate steps. If you turn that on, that's how you get this extra output on the right-hand side of your agent, which is this big array of intermediate steps containing all of the actions.
And the reason that we have both a
11:26
success branch and an error branch is because we go to the agent settings and we want it to continue using an error output on error. If we didn't have it like this, basically that means if the agent failed for some reason, it would just stop the whole flow and we wouldn't get a log or notification or anything like that.
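Outside of n8n, that setting boils down to plain try/except control flow where both outcomes still reach the logger. A rough sketch; run_agent and log_run here are stand-ins, not real node names.

```python
def run_agent(message_text: str) -> dict:
    """Stand-in for the real agent call; raise inside to simulate a failure."""
    return {"output": f"echo: {message_text}", "actions": [], "tokens": {}}


def log_run(workflow: str, user_input: str, output: str,
            actions: list, token_usage: dict) -> None:
    """Stand-in for the tracker-sheet logger sketched above."""
    print(workflow, user_input, output, actions, token_usage)


def handle_message(message_text: str, chat_id: int) -> str:
    try:
        result = run_agent(message_text)                       # success branch
        log_run("media_agent", message_text, result["output"],
                result["actions"], result["tokens"])
        return result["output"]
    except Exception as exc:                                   # error branch: flow keeps going
        log_run("media_agent", message_text, f"ERROR: {exc}", [], {})
        return "Something went wrong, but the run was still logged."
```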
So that's why we're able to
11:41
split it up into two different branches. So real quick, before I show you guys some of these tools and how these sub-agents work, I thought we'd just actually read the system prompt for the main AI agent.
So the overview is that you are the ultimate manager agent. Your job is to help the user out with their task by using your tools to delegate the task to the correct tool.
11:58
You yourself should not be writing emails or creating summaries. Your sole responsibility is just to call the correct tool.
It's got a ton of tools, right? It's got the Google Drive agent.
It's got the email agent, calendar, contact, social media, creative, posting, web agent, create doc tool, and then a think tool. These descriptions of
12:15
these tools are very, very high level. And the reason they're so high level is because in each of the sub-agents or the sub-tools, there's another description that's a little more detailed on when to use that tool.
It's good to send as much information in the description over to the sub tool or sub agent because otherwise every single
12:30
time your agent's processing all of this. So, if you make this really chunky, you're just going to be using more tokens.
So, just something to think about. And then I have seven little key notes here that I decided to give it.
I said, if the user submits a photo, ask them what to call the photo, then change the name. Some actions require you to look up contact information first, like
12:46
these three. Images and videos are found in the database.
Use the Google Drive agent to get to those. Before asking follow-up questions, use your Think Tool to figure out what to do next.
And we've seen that it's been using the Think Tool a ton, which is awesome. Before posting anything, that file must be shared to anyone in Google Drive.
When creating videos, don't ask how long they should
13:02
be. You know, Veo 3 is basically just 8 seconds every time.
And then always output a message back to the user. Never say nothing.
So, as you can see, that's not even too difficult or sophisticated of a system prompt. you'll receive an input, figure out which tools to use, and just use them.
Couldn't be simpler. And then one other thing I will show you
13:18
guys is that we're using GPT5 Mini through Open Router as the main model, but we also have a fallback model where we're using GPT5 Mini as well, but we're just doing it through OpenAI rather than Open Router. And maybe that's not the smartest because if OpenAI goes down, then both of these are screwed.
But I
13:35
could easily just switch this out to, you know, Anthropic or Google or something else. But if you didn't know about the fallback model, it's just a cool feature that they added in the settings of the agent where you can enable a fallback model like I said.
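Since OpenRouter exposes an OpenAI-compatible API, the fallback idea is really just "try one base URL, then the other." A hedged sketch: the model identifiers follow what's named in the video plus OpenRouter's usual provider/model naming, so treat them as assumptions.

```python
import os

from openai import OpenAI  # pip install openai

# Primary: GPT-5 Mini via OpenRouter. Fallback: the same model directly from OpenAI.
# Swapping the fallback to another provider is just another (client, model) pair.
PRIMARY = OpenAI(base_url="https://openrouter.ai/api/v1",
                 api_key=os.environ["OPENROUTER_API_KEY"])
FALLBACK = OpenAI(api_key=os.environ["OPENAI_API_KEY"])


def chat(messages: list) -> str:
    for client, model in [(PRIMARY, "openai/gpt-5-mini"), (FALLBACK, "gpt-5-mini")]:
        try:
            response = client.chat.completions.create(model=model, messages=messages)
            return response.choices[0].message.content
        except Exception:
            continue  # primary failed (outage, rate limit) -> try the fallback
    raise RuntimeError("Both the primary and fallback model calls failed.")
```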
All right, so the first thing that I'm going to do is talk about these custom tools
13:50
that I built. So we have the edit image, the create image, the image to video, and the create video.
And then we have these three for posting. And then we have one for creating a doc.
So like I said, these are all workflows that I built in n8n, and we're just having these sub-agents call on those workflows. But they're all very simple.
The reason
14:06
why we're going through sub workflows is because we have to handle binary data. And sometimes binary data can be really annoying to work with sending between flows.
So that's why you guys may have noticed that when the tools would finish up, we would get a response in Telegram before the main agent actually responded to us in Telegram. And it's just because
14:22
I'm sending the binary in the subflows. So if you don't care about getting like super technical and you don't want to see what these tools are doing, then skip towards the end where I show you guys like what you need to know to set this stuff up.
But I'm just going to talk about these subflows real quick. So let's just start with the create image tool.
As you can see, the trigger has to
14:38
be the "When Executed by Another Workflow" trigger. And what we do in here is we define the specific inputs that we need.
So in this case, we need the name that the image will be called. We need the image prompt.
And we need the chat ID so that we can send all of the chats back to our Telegram. And we use the image name to
14:54
name it in the drive. And we use the prompt here.
So I'm not going to dive into all these flows and how I set up these requests, but essentially we're hitting OpenAI to create the image. We're turning that URL into a binary file and then sending it to ourselves as well as putting it in the Google Drive folder.
And like I said, we have these
15:10
inputs that we're sending over. And the way that we're able to specify how those work is in the tool.
So that was the create image workflow. And if we click into this create image tool, you can see that it gives us these three things to send over.
And I'm letting the AI model define what's the image name, what's the
15:25
image prompt. But for the chat ID, we're referencing a variable.
And I'm basically just able to grab this from the Telegram trigger node and pull over the chat ID that kicked off the original message. And that's really important to understand because we're doing that throughout all of these subtools.
For
15:41
edit image, what we're sending over is image name, image request, chat ID, and picture ID. And this is the edit image tool.
It's basically identical to the create image tool except for we actually have to download that file. So we need the file ID of the original one that we
15:56
want to edit because that lives in our Google Drive. And once we are able to pull that in, we can feed it into this edit image node, which is another node request to OpenAI's image generator.
And now we're able to actually give it the original, give it the prompt, and it makes a new one. And then once again, it
16:11
sends that new one to us in Telegram and also puts it in our Google Drive. Now, these two get a little more complicated, but not too bad.
First, let's look at the create video. What we're telling it to send over is a video prompt, chat ID, video title, and an aspect ratio.
And
16:26
then the actual tool looks like this. It's going to capture all those variables, feed them into Fal AI, which is where we're accessing Google Veo 3 Fast.
We set up a polling flow. So, it's going to continuously check in to make sure that the video is actually done.
And once it finally is done, it's going to move on, download that file, and then
16:42
send it to us in Telegram and also put it back in our media folder in Drive. And then the image-to-video tool is basically identical to the create video tool, except we actually need that file ID to hand the original image to the video generation model,
very similar to what we did earlier with the edit image tool.
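Both video tools lean on that same polling idea: kick off the generation job, then keep checking in until it reports done. Here's a generic sketch of the loop; the status-endpoint shape and the status strings are placeholders, not Fal's exact queue API.

```python
import time

import requests


def poll_until_done(status_url: str, interval_s: float = 10,
                    timeout_s: float = 600) -> dict:
    """Keep checking a status URL until the async video job finishes, then
    return its payload (which would contain the downloadable video URL)."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = requests.get(status_url, timeout=30).json()
        if job.get("status") == "COMPLETED":
            return job
        if job.get("status") in ("FAILED", "CANCELLED"):
            raise RuntimeError(f"Video job failed: {job}")
        time.sleep(interval_s)  # wait before checking in again
    raise TimeoutError("Video generation did not finish in time.")
```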
16:58
So you can see we pass over a file ID, a video prompt, a chat ID, and the name of the original image. And what that workflow looks like is this.
It's very similar to that first one, except for we actually have to download the file first. So basically, we get the file ID,
17:13
we share it, we download it, we get a URL for it, and then we pass that URL into Fal to create the video based on that original source file image. Same thing.
We poll, we get it done, and then we send it to us in Telegram and we put it in our Google Drive. So, I know this may seem kind of intimidating, but
17:29
hopefully you can see the pattern here: all we're doing is passing small variables between flows, and with those variables we have everything we need in each workflow.
This workflow in itself is not complicated. And I think if you guys, you know, download this workflow and you dive into it, you'll see it's not complicated.
I think sometimes when
17:46
you think about like having to pass stuff from agent to tool, that's where it gets a little intimidating. But hopefully now you can see that's a pretty simple agent setup.
And it was pretty cool that it had full autonomy to decide which of these tools to actually use. While I was editing this video, I realized that I didn't touch on something in as much
18:01
depth as I'd like to. So, real quick, just going to come in here and explain this.
So, I pulled back in this execution where we had the creative agent use our image of that speaker to turn it into a video ad. And then it also created its own video.
But what I wanted to show you guys is that if we go into this sub-execution here, what you
18:18
can see is that there's no AI step in here. We're basically feeding in our prompt and everything right from that main workflow that called on it.
So all the magic of the prompting is happening right here. As you can see, we basically tell this main creative agent, this is its full system prompt instructions.
18:36
Image prompts should be detailed and stylized. Video prompts should be concise, energetic, and should be one seamless video with no cuts.
Explain the sounds in the video or any dialogue. And then up here, its overview is that you're a creative agent:
use your tools to take action as requested, and you are an expert AI image/video prompt
18:52
generator. So this creative agent receives the request from the main agent up here, and it itself sends over these really high-quality prompts for image-to-video and for video, and also the same thing with the images.
And so the reason I wanted to bring this up is because this is GPT-5 Mini that's
19:10
creating these prompts over here. As you can see, it's sending over this full video prompt to the workflow.
And when I was brainstorming about building this system, my idea was that I was going to go into this workflow. I was going to have the main one send over an initial small prompt and then I was
19:26
going to use a prompt AI agent in this flow that would be heavily prompted on how to create a structured JSON prompt. I'm sure you guys have seen Veo 3 JSON prompting all over X, LinkedIn, stuff like that.
And so that's what I was going to do. And I started off doing that and then it just became like not very consistent and
19:43
it it wasn't super super good and it also was like very specific to one use case and then I just basically gave the agent a little more autonomy and I was really happy with the output. So for now it's like this.
I'm happy with it. But once again, if you had a very specific use case where you would need this media
19:59
agent to make certain types of content, really easy to just basically move over this trigger, put an AI agent right here, and have the AI agent create a very specific prompt for you. You know, I think even if I go into the workflow history real quick, we can see that that's the route that I was taking
20:14
initially earlier today. Yep, here it is.
So, this is the latest saved and this was about 20 minutes before. You can see I had this prompt agent in here and I was giving it like structured JSON prompts to do.
And like I said, I was just getting issues where it was like throwing in the reference image randomly and maybe I'm not the best at JSON
20:29
prompting and I have some learning to do here, but it's kind of cool and I wanted to show you guys behind the scenes that I had tried this and I ended up taking it away and I was happier with those results. And I know that I'm not diving into all of these different prompts and all of these different agents.
They're all pretty standard, but keep in mind when you download the template, you'll
20:44
be able to look at all of this and just dissect whatever you want to. And then when it comes to these posting flows, these are all very, very identical.
There's just one thing switched. And let me just pull up the X one to show you guys.
It's super simple. What we're capturing over here is the file ID of course and then the text which will be
21:01
like the caption of the post. And then with those two things, we basically upload the Google Drive file to Blotato, and then Blotato posts it to X.
And so all of these are the same. We have X, we have TikTok, we have Instagram.
The only thing that changed is within each of
21:16
these nodes. We just change the platform that we're posting on.
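The shape of those posting flows is basically two requests: hand Blotato the public media URL, then create the post, with the platform as the only thing that changes between the X, TikTok, and Instagram tools. A heavily hedged sketch, since the endpoint paths, header name, and field names below are illustrative, not Blotato's documented API.

```python
import os

import requests

BLOTATO_KEY = os.environ["BLOTATO_API_KEY"]


def post_to_platform(platform: str, media_url: str, caption: str) -> dict:
    """Two-step posting sketch: (1) upload the media by URL, (2) create the post."""
    headers = {"blotato-api-key": BLOTATO_KEY}       # hypothetical header name

    media = requests.post(
        "https://backend.blotato.com/v2/media",      # hypothetical path
        headers=headers, json={"url": media_url}, timeout=120,
    ).json()

    post = requests.post(
        "https://backend.blotato.com/v2/posts",      # hypothetical path
        headers=headers,
        json={"platform": platform, "mediaUrls": [media["url"]], "text": caption},
        timeout=120,
    )
    post.raise_for_status()
    return post.json()                               # e.g. a submission ID
```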
And so that's why I said when you get this template and you want to change like what you're posting to, it'd be super easy to just customize those tools for your liking. And then the final tool over here is creating a doc.
What we're doing is we're sending over the title of the doc as well as the content of the doc. And
21:32
if I click into this button to show you guys what this subworkflow looks like, you can see this one's very, very simple as well. We're capturing the title and the content.
When we go to create the doc, we're basically just creating a document with a title. And then when we make that, it gives us a document ID.
And we use that document ID right here
21:47
to update it. And all we do then is we pass in the actual content.
So it's a two-step thing. And then all I did here and then all I did on this side is I have a variable just to show the the doc ID so that we get our clickable link right away.
Okay. Next, let me explain what's going on down here in the social media agent.
So what we're doing here is
22:04
we are doing three different requests to Apify. And Apify is kind of like a marketplace for different actors we can use.
And an actor is just a fancy word for a scraper. So, the first thing I knew I wanted to do was scrape YouTube.
So, I went here and I grabbed this YouTube scraper
22:21
and then I just set up this request. So, I'm not going to dive too deep into it, but what you're going to do is you're going to go to Apify, create an API token.
You will then come up here to authentication and create a predefined credential type. You will go to Appify API and then just put in your API key.
And this will already be completely
22:37
configured for you, where the AI is basically making this request to Apify and it's saying, "Okay, here's how many results the user wants and here's the search term they're looking for." And pretty much this exact same thing is happening within each of these two requests. The only difference is that they're all different actors.
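A rough sketch of one of those requests against Apify's synchronous run endpoint. The actor ID and the input field names vary per scraper, so treat them as placeholders; only the search term and result count are what the agent actually fills in.

```python
import os

import requests


def run_scraper(actor_id: str, search_term: str, max_results: int) -> list:
    """Run an Apify actor synchronously and return its dataset items."""
    resp = requests.post(
        f"https://api.apify.com/v2/acts/{actor_id}/run-sync-get-dataset-items",
        params={"token": os.environ["APIFY_API_TOKEN"]},
        # Input field names differ per actor; these two are placeholders.
        json={"searchQueries": [search_term], "maxResults": max_results},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()
```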
So, they
22:53
accept data a little bit differently, but essentially it's the same thing of search term and how many, you know, posts do you want back. And this is where you can really feel free to switch out different scrapers that you may want to use and what exactly you're looking to do when you're using like a social media agent.
But I just really wanted to
23:08
show you guys how easy this can be to make these different systems of autonomous agents. Give them tools based on like things that kind of make sense, different buckets.
And once again, I'm using GBT5 Mini here to power this entire thing. All right, to wrap this one up here, we're going to do a quick
23:23
cost breakdown of everything that's associated with running the system. And then I'm also going to show you guys like what you need to do if you actually want to get this set up.
So, first of all, if you do want to get this set up, you're going to go to my free Skool community. You can also access this full guide if you go there.
The free Skool community is linked down in the description. And once you get here, you
23:39
just need to search for the title of this video or it might be pinned or in the YouTube resources, but it will be here. And once you click on that post, you'll be able to download the JSON file.
In this case, it'll probably be a zip file with like eight or nine total files, but you'll download those. and then I'll show you like the full setup
23:55
guide of how to get going with this. So, first of all, we have the cost breakdown.
Your first cost to be thinking about is your chat models, your token usage there. And so, today I was using a mix of GPT-5 Mini.
So, pretty cool that the entire system, the main agent, was running on GPT-5 Mini. And then
24:10
for some of these sub agents, I was using 4.1 just because I'd had them on 4.1 for so long and they worked so well. So, for this video, just didn't touch them.
And you can actually see here that GPT-5 Mini is cheaper on the input but more expensive on the output. So you can do the math and kind of even that out there.
Then we have the pricing of the
24:27
image generation and the video generation. So we were using GPT-Image-1 for the image generation.
And so this doesn't matter as much like in my mind at least. This is more about like the input tokens on the prompts.
I think what people are more concerned about is per image pricing. So for lowquality
24:43
square image it's going to be roughly a cent. For medium it's roughly 4 cents and for high roughly 17 cents.
And I think in this case we were using the medium images for 5 cents a pop. And then we were using V3 fast to make videos.
And the pricing is a little bit different if you're doing text to video or image to video. So for text to video,
24:59
for every second you're going to be charged 25 cents, but we were doing audio on, so that would have been 40. And then for image to video, it's the same price for audio on per second, but if you don't do audio, it's going to be a little more expensive per second.
But it also said that this is experimental pricing and could change. And then
25:15
there's a couple subscriptions that you may want if you want to duplicate exactly what I have here. The first one would be Blotato, which the first plan is 29 bucks a month, but you can get 30% off for 6 months if you use code Nate30 at checkout.
And then you also have the Apify scrapers down here for different social media platforms. And with Apify, you've got
25:31
different tiers of pricing, but you can also use code 30 Nate Herk for 30% off your first 3 months at Apify. And then down here in the web agent, we had a few APIs as well.
We had Perplexity, we had Tavily, we had OpenWeatherMap, and all of these are fairly cost-efficient. All right, so now moving on to part two, the
25:47
setup instructions, because I'll be honest, this is not going to be like a super simple two-minute setup. You're just going to have to do a few things, but it's not going to be too bad.
And also, if you hear jets in the background, they're like practicing for the air show. And um yeah, there's just a bunch of jets.
It's getting super loud. But either way, let's talk about
26:02
setup. So, when you go to my free school community and you click on the post, there's going to be a zip file right here.
And the zip file will have nine different workflows. It's going to first of all have the four for the creative agent down here.
So it's going to have edit image, create image, image to video, and create video. And then for the posting agent, it'll have these
26:18
three X, Tik Tok, Instagram. And then you'll have the create doc tool, which will also be a workflow.
And then you'll have this entire thing as its own workflow, too. So that's going to be nine total files to download and import into niten.
But the rest of these agents will seamlessly integrate because they're already going to be baked into
26:34
the main agent workflow. Luckily, we have this functionality.
Otherwise, this would have been like 15 workflows to download. And so once you have all of those downloaded, what's going to be really important is that you import them into your n8n.
You name them something and then once you name them,
26:49
you're going to have to link them to each tool. So in this case, the edit image tool.
I called it edit image tool and I sync them up. You can tell because if I click on it, it's going to pull up the actual tool itself.
So that's a good test. Make sure that once you connect these, if you click on this button,
27:04
it'll pull up that actual tool and the workflow that you just downloaded. So, like I said, you're going to have to do that for all of these down here that are custom tools.
And then after that, the last thing you'll have to configure is your own Google environment. So, keep in mind what's happening is our creative agent is putting all of the videos and
27:20
images in a folder in my Google Drive called media. So, if you just create a folder in Google Drive called media and you sync them up to those nodes, you'll be set.
And then what's happening is once our agent creates a Google doc, it puts those in a Google Drive folder called media analysis. So, if you sync
27:35
those things up, and then you come in here and make sure that the search media tool links up to your folder called media and that the search docs tool over here links up to your folder called media analysis. And then the other piece is in the actual workflows that create something at the
27:50
end. If you remember, we were uploading them back to our drive and this is where I chose the folder called media to put them in.
But when we have our Google doc tool, which I think was this one, we can see that what it does is it just creates a doc. And this is where we actually choose to put it in the folder called media analysis.
So like I said, a couple
28:07
things you'll have to configure, but once you do, you're all set. And then finally, the last thing to do would be the actual Google Sheets.
So we have two Google Sheets nodes up here, and that's what links up to this media agent logger. And I'm going to give you guys the copy of this template, and all the
28:23
names will be set up. So all you'll have to do is come in here and just make sure that you're looking at the right document and the right sheet, and you should be set.
So, I know there's a lot going on in this flow. Tried to do my best here to make that holistic.
You guys will have access to this document. You'll have access to a little setup
28:38
guide in here once you download this main flow. So, hopefully that's all you guys need to actually get this up and running.
And if you're looking to take these skills further or you're really having trouble getting this thing set up and working, then definitely check out my plus community. The link for that's down in the description.
We've got a great community of members who are sharing what they're doing with end
28:54
every single day. People in here are building businesses around this n8n stuff.
It's super cool to be in an environment where everyone is as obsessed with this as you are. Besides our two full courses and unlimited tech support, you're also going to get one live call a week with me.
And every single month, we're running hackathons with over $6,000 of prizes every single
29:12
month. And you can essentially get paid for building cool automations.
Anyways, I'd love to see you guys in this community. But that is going to do it for the video.
If you enjoyed the video or you learned something new, please give it a like. Definitely helps me out a ton.
And as always, I appreciate you guys making it to the end of the video, even as we have jets flying through and
29:28
have been making me stop recording like every 20 seconds to calm down. But either way, thanks so much, guys.
appreciate you and we'll see you in the next