Vibe Coding a $1B App from SCRATCH with Gemini 3 + Claude Opus (beginner friendly)

Olly Rosewell | 00:38:46 | Mar 28, 2026
Chapters (15)
Intro to SlideForge, a simplified AI presentation tool that converts documents into slides, with a canvas editor and a fullscreen presentation mode.

Olly Rosewell builds a from-scratch AI presentation tool called SlideForge, wiring Next.js, Supabase, and Gemini-powered AI to convert documents into editable slide decks and presentations.

Summary

Olly Rosewell walks through building SlideForge, a beginner-friendly prototype that turns documents into AI-generated presentations. He compares it to Gamma and shows how the front end, backend, and AI calls fit together, using Supabase for storage and Next.js for the app shell. The demo covers uploading a document or pasting text, having Gemini-powered AI generate slides, and editing slides in a canvas editor with custom backgrounds. He details two backend functions (process document and AI call) and explains edge functions on Supabase for hosting the Gemini API calls. Along the way, Olly narrates setting up a Supabase project, creating storage buckets for documents and backgrounds, wiring API keys, and iterating on the UI (including a clunky landing page and the imperfect thumbnail experience). He also documents debugging steps, upgrading to Gemini 3 Flash, and resolving issues with background uploads and slide previews. The video doubles as a live coding session, showing plan drafting in Composer, switching models (Opus 4.5), and refining the landing page with visuals and animations. The takeaway is a practical, if rough-hewn, blueprint for building an AI-assisted presentation tool using modern stack components and Gemini API capabilities.

Key Takeaways

  • SlideForge uses a two-bucket storage approach in Supabase: one for documents and one for presentation backgrounds.
  • Gemini-based generation is wired through Supabase Edge Functions to generate slide content from uploaded documents or pasted text.
  • Upgrading to Gemini 3 Flash and correcting API secrets were pivotal steps to getting the edge function flow to produce slides.
  • The onboarding flow is intentionally simple: sign up, create a project, upload a document, and click generate slides to see a canvas editor and fullscreen presentation.
  • Front-end iterations revealed UI gaps (thumbnails, background handling, and slide previews) that Olly planned to address with design and layout improvements.
  • Composer was used to plan app architecture and prompts, illustrating a practical approach to scaffolding a Next.js + Supabase app.
  • The tutorial emphasizes live debugging and incremental testing, including inspecting edge function logs and adjusting storage bucket policies.
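The Gemini-via-Edge-Functions flow in the second takeaway can be sketched from the client side. Only the `/functions/v1/<name>` endpoint pattern is standard Supabase behavior; the function name `generate-slides`, the payload fields, and the error handling below are assumptions for illustration, not code from the video.

```typescript
// Hypothetical client-side call into a Supabase Edge Function.
// The function name ("generate-slides") and payload fields are assumptions;
// the /functions/v1/<name> URL pattern is how Supabase serves Edge Functions.

interface GenerateSlidesRequest {
  projectId: string;
  documentPath: string; // object key inside the "documents" bucket
}

// Build the Edge Function endpoint from the project URL.
function edgeFunctionUrl(projectUrl: string, fnName: string): string {
  return `${projectUrl.replace(/\/+$/, "")}/functions/v1/${fnName}`;
}

// Invoke the function with the project's anon key as a bearer token.
async function generateSlides(
  projectUrl: string,
  anonKey: string,
  body: GenerateSlidesRequest
): Promise<unknown> {
  const res = await fetch(edgeFunctionUrl(projectUrl, "generate-slides"), {
    method: "POST",
    headers: {
      Authorization: `Bearer ${anonKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Edge Function failed: ${res.status}`);
  return res.json();
}
```

Failures here surface in the Supabase edge function logs, which is exactly where the video goes digging when generation breaks.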

Who Is This For?

Developers and startup builders who want a practical, hands-on roadmap to build AI-assisted presentation tools using Next.js, Supabase, and Gemini APIs. Great for learners who enjoy watching real-time debugging and incremental deployment.

Notable Quotes

“SlideForge turns documents into presentations using AI.”
Intro defining the product goal and core feature.
“We’re using the Gemini Flash API to generate the slides.”
Describes the AI tech stack and integration point.
“Edge function failed logs... you go into the edge function that failed.”
Shows the debugging approach and troubleshooting mindset.
“Upgrade to Gemini 3 Flash.”
An important step in resolving generation issues.
“The problem is that one, the thumbnails look absolutely dreadful.”
Honest evaluation of UI/UX issues and next steps.

Questions This Video Answers

  • How do you convert a document into a slide deck using Gemini AI in a Next.js app?
  • What are edge functions in Supabase and how are they used to call Gemini APIs?
  • How can you set up storage buckets in Supabase for documents and presentation backgrounds?
  • What challenges arise when rendering slide thumbnails and backgrounds in an AI presentation builder?
  • What changes are needed to upgrade from Gemini Flash to Gemini 3 Flash in a live project?
Olly Rosewell · SlideForge · Gemini Flash API · Gemini 3 Flash · Claude Opus · Next.js · Supabase · Edge Functions · Presentation generator · Canvas editor
Full Transcript
What's going on, guys? This is Oliver, formerly from Response AI and a few other software tools, now running papers.com and teaching people how to build software and make money from it with rosewell.dev. In this video, we're going to build, from scratch, a full AI presentation builder. In general, you upload a document. It's very similar to Gamma, for example. Gamma is a tool that lets you upload documents and text, and it creates these presentations, right? Obviously, my version is very, very simplified. It's just to show you guys how the front end and back end work and how to call AI APIs, so you can actually build this. Essentially, you upload a document and it creates the actual presentation with text. It's very, very simple. It's obviously not a full clone, because that would take me weeks, even months, to build, but it's enough to show you how the front end and back end work. So, what does the app do? SlideForge turns documents into presentations using AI. Upload a document or paste text, the AI extracts the content and creates slides, you edit the slides in a canvas editor and then present in full screen, and you can use custom backgrounds. What does the user see? The landing page, dashboard, agent, canvas, and then the presentation mode. How does it work? It's just components on the front end. The Supabase backend and edge functions handle the document processing. On the backend side, you've got two functions: process-document, which actually ingests the document itself to see what it is, and then the AI call. We're using the Gemini Flash API to generate the slides, and obviously people can log in and log out. So, in general, I'm just going to go through exactly how it looks. The landing page itself is pretty simplified. You've got a carousel of fake companies here.
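The two-step backend described here (process-document, then the AI call) can be sketched with a deterministic stand-in for the model: split the extracted text into a handful of titled slides. In the real app a Gemini call does this step; the `Slide` shape and the 8-slide cap below are assumptions based on the video's description, not the actual implementation.

```typescript
// Deterministic stand-in for the "AI extracts content and creates slides" step.
// In SlideForge this is a Gemini call; here we just chunk paragraphs so the
// pipeline shape is visible.

interface Slide {
  title: string;
  body: string;
}

function splitIntoSlides(text: string, maxSlides = 8): Slide[] {
  const paragraphs = text
    .split(/\n\s*\n/)          // paragraphs separated by blank lines
    .map((p) => p.trim())
    .filter((p) => p.length > 0)
    .slice(0, maxSlides);

  return paragraphs.map((p) => {
    // Use the first sentence (capped at 60 chars) as a crude title.
    const firstSentence = p.split(/(?<=[.!?])\s/)[0] ?? p;
    return { title: firstSentence.slice(0, 60), body: p };
  });
}
```

The real flow swaps this function for an Edge Function that sends the document text to Gemini and parses titled slides out of the response.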
Beautiful decks, clean spacing, that kind of thing. You've got images of the actual program itself: upload a document, create the slides, and then present and share. There are some fake testimonials, a features section, and then a pricing table. So, we're going to start for free. I'm going to log out and start this from scratch. From here, I'm going to click sign up, use a random email, and create the account. From here, we are in. I'm going to start a new project, and we're going to upload a document. So, if I upload a text document, which is an analysis of whether Tom Cruise is better than Brad Pitt at acting (very, very random), or you can obviously paste the text. Right, the project name is going to be Brad Pitt versus Tom Cruise, and we're going to click generate slides. It's going to upload the content to the backend, analyze it, and then generate the slides for us. Now, really quickly, guys, here's how this works on the backend: there are two storage buckets that we'll create in the full demo, presentation-backgrounds and documents. The documents bucket holds the Brad Pitt document we just uploaded, and presentation-backgrounds holds the design images for the presentations. In the table editor, we've got the projects themselves, so the names of the projects; we've got the slide content, so all of the content from the slides is saved here; then you've got the slides themselves; and then you've got users. And obviously there are just two test users. So, we go back to SlideForge. Our Brad Pitt versus Tom Cruise presentation has been created.
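The four tables visible in the table editor can be sketched as row types. Only the table names (projects, slide content, slides, users) come from the video; every column name below is an assumption for illustration.

```typescript
// Hypothetical row shapes for the tables seen in the Supabase table editor.
// Column names are assumptions; only the table names come from the video.

interface Project {
  id: string;
  userId: string;
  name: string;            // e.g. "Brad Pitt versus Tom Cruise"
  backgroundPath?: string; // object key in the presentation-backgrounds bucket
}

interface SlideRow {
  id: string;
  projectId: string;
  position: number; // ordering within the deck
}

interface SlideContent {
  slideId: string;
  title: string;
  body: string;
}

// Minimal helper: order a project's slides for presentation mode.
function orderSlides(slides: SlideRow[]): SlideRow[] {
  return [...slides].sort((a, b) => a.position - b.position);
}
```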
So, we're going to go to present, and you can see: the last great movie stars, defining the pinnacle of Hollywood, origins and archetypes, which one is better, acting methodologies, awards and industry, etc., and then the final verdict, whatever it may be. From there, because the presentation looks really ugly, we're going to upload a background. I'm going to choose this one, which is quite pretty; it's a watercolor background of the sky, and we're going to open it. This background has been uploaded. As you can see, on the backend the background itself is going to be in storage. So, if we go to presentation-backgrounds, it should be one of these, background.png. There we are. It's on the backend now. When we uploaded the image, it was sent to the backend, and we're going to present now. As you can see, there's a pretty background with a drop shadow, and you can just page through all of these and have a look. The next thing I would obviously add is images to the backgrounds; sorry, images on the actual slides themselves, plus the ability to change the font and stuff like that. But for now, I'm just going to jump into how I built this: the landing page, the front end, the back end, etc. And I'll see you guys soon. Okay, guys. So, we are currently in Cursor, inside a completely blank folder. And the first thing we've actually done is copy and paste in the SlideForge AI product requirements.
Now, I'm obviously going to pop this in the description for you guys to copy and paste: the actual prompt, and then the raw output of the PRD, the final PRD, which is this. I'll put the link in the description, but in general I just said: look, I'm trying to clone Gamma. I want a simple flow where a user can upload a document or paste text, the AI ingests that document and creates a canvas of 8 to 10 slides with titles, subtitles, etc. We'll use Supabase, Next.js, etc. We're going to use Supabase Edge Functions to call the Gemini API to create the documents. And then I said, below is the design PRD as well. Now, like I said, I'm not going to go through all of this because it would take ages, but in general it covers the whole setup and flow of the app itself. So, I'm going to paste that in, and I'm going to say at the top: please run npx create-next-app@latest to create a Next.js app. Now, this might be a waste of tokens, guys, but I like being really concise here; I need to make sure the AI understands the plan itself. So, I'm going to switch to Composer. I'm already on Composer, but I'm going to choose plan mode from the dropdown here, and I'm going to plan this. Okay, so it says planning next moves: reviewing the workspace and drafting the plan, check what's already in place, that kind of thing. And it's going to ask me a few questions. Do you have a Supabase project set up, or should the plan include instructions for creating one? Let's say include Supabase setup instructions. Do you have Google Cloud Gemini API credentials, or should the plan include setup instructions? Again, include Gemini setup instructions, just so I can show you guys how to do it. Which package manager should we use?
I'm going to use npm, and I think that's it. So, if we click continue, it's going to start writing (as you can see, it's going nuts) the implementation document. Okay: architectural overview, Next.js, Tailwind, Supabase, Google Gemini Flash, whatever it may be. And it's going to create this whole plan, right? I don't know what the jittering is about, but it's already created the plan pretty quickly. What I'm going to do now is switch to quite an expensive model. You guys don't have to use this model; I'm going to choose Opus 4.5. And then I'm going to click build, and I'll be back when it's done. Right, guys. So, as you can see, it literally ran for ages and ages, and what we've got to show for it so far is this dreadful landing page. And this is Opus 4.5, so it's quite surprising how bad this is. But what we have got is a plan at the bottom here: set up Supabase, get a Gemini API key, and go from there. So, if we find the architecture, the success criteria... where is the readme? Okay, here's the readme. So, the first thing we're going to do is create a Supabase project. If we go over here, let's create a Supabase project: create a new project, sign up (what are we doing, plus YouTube), and sign up. I'm just going to confirm this; I'll skip ahead. Okay. So, the organization: let's say YouTube Slide Forge, personal is fine, Educator, free, $0, create organization. And now, YouTube slide forge; let's call it slide forge, database password, region is Europe, that's fine, create new project. And then we're just going to wait for this project to actually be created. Looks like it's already done: project overview. Let's just copy and paste all of my Supabase project stuff. What are we doing?
I can go to the API docs. You should never show people these documents, guys, but we'll do it. API reference; no, wrong one. Project settings, Data API. Okay, there we are. API keys, default, pop that in there. Again, guys, do not do this; do not post these things. I'm going to delete the project after this anyway. And: tell me what to do next. So, it says run the database migrations. I've gone into the initial schema file, which is the SQL. Remember, guys, the SQL is what Cursor has written for me to create the actual backend. So, if we go to the table editor (sorry, SQL editor), paste this in, run it, and wait for the "success, no rows returned". Then we go back to the RLS policies SQL and paste that in as well. Run, and that's absolutely fine. We should now go to the table editor and see projects, slide content, slides, users, etc. Copy the contents, click run, create another query; you should see four tables. Next, create a storage bucket. So, we go into the dashboard, go to storage, and create a new bucket, then the policies for the documents bucket. Let's call it documents. It should be "create policy from scratch". Public bucket, restrict file size; pop this in. Go in here and look for edit bucket, bucket settings. I've actually not done this before; where is it? Oh, policies. We're going to click new policy, then create a policy from scratch. The policy name can be "users can upload docs". For the policy definition, I'm going to paste this in: users can upload documents. Click review and save policy. Obviously copy this as well, review: "please allow at least one operation for your policy". So, let's do SELECT, INSERT, UPDATE, and DELETE, whatever it may be. Review, save policy, and there's an error. So, I'm going to say: errors with the SQL code.
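The SQL route Cursor falls back to in the next step typically generates statements like the ones below, following Supabase's documented storage schema (`storage.buckets` for buckets, row-level-security policies on `storage.objects`). The exact bucket id and policy wording are assumptions; the video doesn't show the generated SQL on screen.

```typescript
// Sketch of the SQL that creates a storage bucket and an upload policy on it,
// following Supabase's storage schema. Bucket id and policy name are
// assumptions; adjust to your project.

function createBucketSql(bucketId: string, isPublic: boolean): string {
  return `insert into storage.buckets (id, name, public)
values ('${bucketId}', '${bucketId}', ${isPublic});`;
}

function uploadPolicySql(bucketId: string): string {
  return `create policy "Users can upload to ${bucketId}"
on storage.objects for insert to authenticated
with check (bucket_id = '${bucketId}');`;
}
```

Running strings like these in the SQL editor is what finally creates the documents bucket in the video, after the dashboard policy form errors out.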
Okay, so it's given me SQL to paste in. I'm going to go back to the SQL editor and paste this in. And now we have a new bucket called documents, which is absolutely fine. In the end I just said, "I don't know how to do this, can you use SQL to create it?", and it did. So, I'm going to say: okay, what next? What it's asked me to do is find the Supabase service role key. If we go into project settings, API keys, I'm going to go to the service role API key. Reveal this (obviously, never reveal this in front of people, guys), copy, and paste it into the service role key entry here. Then it says: go to Google AI Studio, sign in with your Google account, create an API key, copy the key, and add it to the .env.local. So, we go over to AI Studio, go to get API key, and create an API key; let's call it "test for YouTube". Now that this API key has been created, we're going to copy and paste it in, pop that in, and save it. Then we're going to go to Supabase, to authentication (or project settings, sorry, then authentication), and turn "confirm email" off. This means users can create an account without having to verify their email, because obviously we haven't set that up just yet. Then we go into authentication and see there are no users in the project. The first thing we're going to do is get started, or go to the sign-up. So, let's go to localhost, click get started, and sign up as Oliver 3 plus YouTube test with test 199. And I don't like that button, actually. Create an account, and we're inside the app. I don't really remember what to do next, so: quick checklist, database migration, upload document. Go to the agent and upload a PDF or text document. So, what I'm actually going to do is go to Gemini.
I'm going to say: write me a 1,000-word passage about apples versus oranges for health, so that I can test software that creates slides from documents. Obviously, this is going to create a 1,000-word passage about apples versus oranges, which is just a random example. I'm going to copy and paste this into a Google Doc, get it as a DOCX, and then upload it to the tool. So, I've got this dumb [laughter] passage about apples versus oranges, and we're popping it in: "example doc for YouTube", and we're going to download it as a DOCX. Now, what I've actually asked Cursor is: what about my edge functions? It turns out it's using API routes instead of edge functions. "This is simpler," it says. "Your API is already set up and will work automatically; these run on your Next.js server versus requiring separate deployment." So, what it's done, guys, is it's created Next.js API routes instead of Supabase Edge Functions. I'm going to say: no, I do not want Vercel serverless functions or API routes. I want full-scale edge functions hosted on Supabase. Now that we have the URL and data, etc., let's connect to Supabase and create the edge functions for the Gemini API with my API key, and set up all the secrets. So, here's what we're going to do. We need to create the edge functions, which are like the little jobs. Let's actually just refresh this so we know I'm in. That's cool; we're in as a user inside the authentication screen. We're going to create the edge functions, and we're obviously going to have to upload our API keys as secrets, so the edge functions (which are just the jobs) can use them. It's going to start creating the Supabase edge functions and setting up the structure, so I'll just skip ahead. Okay, guys, so I asked it to set up Supabase.
So, we've connected Supabase to Cursor, and it's asked me to set up the API keys inside of Supabase. We've created the edge functions themselves, and we're just going to manually set up the edge function secrets. So, if we go to Supabase secrets: let's say Google API key; the Google API key is here, so let's do this. And again, guys, never ever show people this. I'm going to add another: it says Supabase URL; the Supabase URL is down here, I think that's fine. And then the Supabase anon key is there, so that's fine. Let's just bulk save. Let's delete that one. Save. Edge functions. Next up: run supabase login, link the project, set your Supabase secrets, deploy the functions. Okay. So, how do we test this? I have my DOCX file to upload and set up the first campaign and slide creation. Okay, so it says: make sure the dev server is running, sign up, upload your DOCX file, generate the slides, and you'll be redirected to the canvas editor, where you can view and edit slides; you should see thumbnails of all the slides. So, if we go over to localhost and refresh (we're currently logged in, which is fine), we're going to click get started and create a new presentation. If we go into choose file, let's click on "example doc for YouTube", and we're going to name this "example test". We're going to generate the slides. I hate how those buttons turn white when you hover over them. So: generate slides. I'm just going to skip ahead and see what happens. Okay, so the edge function failed, and let's go into the logs and paste this in: edge function failed, logs. Now, here's what you're going to do, guys; it's so important. You go into the edge function that failed. I presume it's process-document, or maybe that was fine; that seems to be fine. The generate-slides edge function: there are no logs. There are no logs for the generate-slides edge function at all.
So, what it's likely going to do is debug what's going on: checking the edge function's error handling and the front-end call. So, try rerunning the command; a 401 indicates such-and-such, and it's going to rework the edge function. Okay. So, the main problems it found were with my API keys. I just had to update the secrets in the edge functions, and I also had to upgrade to Gemini 3 Flash. All I did for that, guys, is I went into the documentation on Google AI Studio (aistudio.google.com, API quick start), then went to the specific model, Gemini 3, copy and pasted the developer guide, and it set the specific model for the edge function. In other words, my app is now using Gemini 3. So, if we go into SlideForge, let's create a new presentation. We're going to choose "example doc for YouTube", call it "test one two three", and generate the slides. Now, as you can see, it's pretty nice: it says analyzing content, generating slides, about 30 seconds remaining. This is kind of what I expect; it's animated, that kind of thing. I'll just skip ahead to when it says almost done, which should be in about 5 to 10 seconds. Oh, there we are. And here it is. [laughter] Okay, that looks terrible, but we're getting there. We are absolutely getting there. So, what we have so far is an app that lets you upload a document of your choice, or paste raw text, and it creates slideshows. So, let's present the slideshow. [laughter] Okay, it's looking pretty bad. Oh, it's looking terrible. But we have, guys, a presentation flow. Not only can you open the presentation: the case for apples, fiber and gut health, antioxidant specialization, flavonoids, conclusion. So, what's the conclusion it's come to? Integrating both staples. Ah, right.
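The model upgrade described here amounts to changing one string in the REST call. The `models/<model>:generateContent` endpoint shape below is Google's documented Gemini REST pattern; the exact model id the video's edge function ends up using is never shown on screen, so the id in the comment is a placeholder, not a confirmed identifier.

```typescript
// Build a Gemini generateContent request. The endpoint pattern
//   https://generativelanguage.googleapis.com/v1beta/models/<model>:generateContent
// is Google's documented REST shape. The model id is whatever "Gemini 3 Flash"
// id the developer guide specifies; treat the example id in the test as a
// placeholder.

const GEMINI_BASE = "https://generativelanguage.googleapis.com/v1beta";

function geminiEndpoint(model: string): string {
  return `${GEMINI_BASE}/models/${model}:generateContent`;
}

// Request body for "turn this document into slides" (prompt wording assumed).
function slidePrompt(documentText: string): object {
  return {
    contents: [
      {
        parts: [
          {
            text:
              "Create 8 to 10 presentation slides (title plus bullet points) " +
              "from the following document:\n\n" + documentText,
          },
        ],
      },
    ],
  };
}
```

Switching models is then a one-line change in the edge function: pass a different id to `geminiEndpoint`.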
It didn't even say whether apples were better than oranges or vice versa. So, pretty cool. If we go back to the dashboard, it's actually saved as well. Now, let's analyze, guys. Let's really get down to business. What's the problem here? The problem is that, one, the thumbnails look absolutely dreadful; two, when you present, I don't even [laughter] know where to start with this. It kind of looks okay, but I can't really decide what the problem is. I suppose what we could do is go into Dribbble. I'm going to say: okay, this is great and the process works perfectly. Okay, guys, so I actually lost about two hours of footage, so I'm coming back to this and starting all over again. Currently, the actual slides inside the app are really ugly. One of the issues is that the thumbnails themselves are extremely skewed. They look really ugly, really small, strange-looking; they're either too zoomed in or too zoomed out, that kind of thing. So, I'd like all of the slides in the previews (sorry, all the slides in the actual live presentation) to be standardized and much smaller, so that the UI and thumbnail previews are basically just smaller versions of the slides themselves. I don't know why, but the previews also look different from the slides: the font and text aren't the same, and the spacing isn't the same. It shouldn't be a jump scare when you go from the preview to the actual slide. As you can see, all of the thumbnails look really terrible, and on top of that, there's a bias towards the top left of the actual slides. So, the next issue is that the slides themselves effectively need to be reworked or edited in a way that lets us see them better.
In general, I don't like the bias to the top left; it just looks really ugly. The text is ugly and a bit too big, and I think the division of text to the left is ugly as well. So, we're going to fix that too. Here I'm just going to say: I will share some examples of the slides, then I will share some UI inspiration from images of slides, so we can really start dialing in the aesthetic of the slides, and obviously of the previews, because currently they look dreadful. Now, this is really important: I've just taken a screenshot of the actual slides from our UI, because especially with Composer, or with Claude or Opus, it's really important that you share images with them; they can read images and know what the images look like. So, we're just going to copy and paste this one as well, and pop that in there. Image two is how ugly the slides look, image one is how ugly the previews look, and images three, four, and five are inspirational slide designs that we found online. We're going to use Dribbble for this: dribbble.com, Dribbble with three Bs. For now, we can ignore the designs and photos in the slides; all we're going to do is show what the actual slides look like in terms of text alignment, size, and area. Now, something else we're doing here: I'd also like the option to upload an image as the background of the presentation. I don't mean the background in the slide itself; I mean behind the white slide, similar to how we're doing this now, guys. You can see there's the actual video that we're recording, and then there's a background, a pretty watercolor art background. I want that effect: I want to be able to present with a nice background.
So, the white slides stay normal, but there's a subtle drop shadow, just like there is on my current video, and then a background with an image of our choice, very similar to how Screen Studio (which is what I use) lets users present on a drop-shadowed background. Now, this is going to be a bit tedious. We're going to go and look for slides, and I think I'm going to be searching for a while here, so I'll probably skip ahead to when I actually have slides we can use. I'm just looking for text-based slides, which is actually [laughter] harder than you'd think: you've got to find slideshows that only have text on them. I didn't really know how to articulate this; I didn't know what I wanted from the slides. I probably should have just said that the text should be smaller and the alignment should be better. But I'm going to go ahead and find some slide decks; I'll skip ahead now. Okay, so I'm coming to the end of the search. We've got a load of different pitch decks with prettier formatting, and I'm just going to pop all of these into the dashboard. The next thing I was going to do is go to Paper Schedule, go to the landing page, and show what I mean by the drop-shadowed background by going to my own website that does it. From here, I pop it into Composer. I'm going to create a plan instead, because this seems like a pretty intensive, detailed agent request that I've given to Cursor. So, I'm just going to plan it, and I'll hop ahead to when it's done. Okay, the plan is done. It's going to fix the thumbnail rendering, reduce the title size, add the formatting improvements, and then obviously add the feature that lets us upload a background for the presentation.
Now, what this has done, guys, is it's obviously had to create backend functionality to actually save the image itself. So, there is now a new bucket on Supabase for storing images: in storage, there's the bucket for the docs that you upload, and now there's also storage for the images we want to upload as backgrounds. So, we're going to test the background upload. If we click on the background and upload an image, it says "failed to upload background image". So, we go into inspect element, and there's just a load of errors, right? Remember, guys, this is so important: you need to always go into the console, copy and paste all of the errors, and show them to Cursor. So, what happened here is it sent all the SQL forward and built the tables and such, but what we didn't do is create the actual Supabase public bucket. It says: you need to create the storage bucket in Supabase, name it presentation-backgrounds, and enable it as a public bucket. So, we're just going to go and do that, and the upload feature will work then. I'm going to head over to storage, go to new bucket, call it presentation-backgrounds, enable it as a public bucket, and create. That's all it said we needed to do. So, I go back to SlideForge and try to upload a background again. I'm just going to test it out; I mean, the zooming and the formatting of this isn't ideal, but that's not what we need to focus on right now. We need to focus on the actual design. So, I've just uploaded another background, and it says "background updated". Whoa. No, that is not what I meant, is it? That is obviously not what I meant. I did not mean that the canvas... oh, okay. Okay.
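Once presentation-backgrounds exists as a public bucket, the client can construct public object URLs directly. The `/storage/v1/object/public/<bucket>/<path>` pattern below is standard Supabase storage behavior (it is what supabase-js `getPublicUrl()` returns); the bucket and file names are the ones from the video.

```typescript
// Public objects in Supabase storage are served at a predictable URL.
// This mirrors the string that supabase-js getPublicUrl() produces.

function publicObjectUrl(
  projectUrl: string,
  bucket: string,
  objectPath: string
): string {
  const base = projectUrl.replace(/\/+$/, ""); // drop any trailing slash
  return `${base}/storage/v1/object/public/${bucket}/${objectPath}`;
}
```

A private bucket (like documents) would instead need a signed URL or an authenticated download, which is why the upload failed until the bucket was created as public.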
So, it has done what I wanted it to do: it's created the beautiful background with the drop shadow, right? The problem is that there's a background on the entire app. [laughter] That's not what I wanted at all; that looks so bad. So, we're going to have to say: the slides look great, but the background of the actual entire app has changed. We're just going to write that in, because the background when presenting looks great, but we don't need the background across the entire canvas. Now, before we actually post this, I want the background and preview of the project we're working on to be the first slide, with the background image showing. So, if the background image has been set to some nice watercolor art, I want the first slide to be shown, and I also want it to populate inside the preview, because otherwise, with a blank preview, we have no idea what the projects are, what they're about, or what we were working on. That recent projects tab needs to show the project's first slide. And obviously, we need to get rid of the app-wide background, because that looks ridiculous. Okay. So, while that's building, the next thing we're going to do is create a new document, just for another test. This document is an analysis of whether Tom Cruise is better than Brad Pitt at acting; just another random document. We're going to save it as a DOCX and head over to recent projects. You can see the slide preview doesn't quite work, which is interesting; we'll fix that. I'm going to drag this in, choose the file, name it "test two", and generate the slides. So, from scratch, we'll see how this goes. I'm going to head back into Midjourney as well to get a new image we can use as the background for this. And this presentation is done.
So if we go into Present to see what it looks like: "Tinseltown's Comparative Analysis of Tom Cruise and Brad Pitt", and the text looks much better now. You've got working titles that are centered, the body text is centered, and we've got a decent little presentation here. Going to try to add the background; hopefully it should work first time this time. Going to upload it, and it shouldn't apply to the whole app. Okay, that's good. The background has been uploaded, but it hasn't changed the background of the entire app. And there it is. So we've got a nice AI-generated presentation now, without any of the ugly stuff. As you can see, all of my tests work now, but in Recent Projects the slide still isn't showing as a preview, so we're just going to fix that. Now, the next thing I want to do, and I've skipped ahead a bit here, is prompt: "Please rehash my entire landing page to add a hero section, fake review images, a features section, that kind of thing." We just need a new, better-looking landing page to start as a canvas: some Lottie-style animations, some pricing tables, that kind of thing. In general, it's going to look pretty rough, but we'll add more to it at the end, and I'll show you how I design the landing page from scratch. Now, pretty simple, guys. To design the landing page itself, if we go down to where we'd gotten to, I said: "Thanks. Please now add these images where relevant. Add some additional sections with actual Lottie-style animations and make sure the page is complete and pretty. The hero image is called hero image." What it did then is add all the images to the landing page itself.
It didn't actually add any of the specific designs or animations I asked for, because I used Composer just to save some tokens and, to be honest, save a bit of money. What I had originally said is: "Please rehash the landing page to make it way more beautiful. Add a placeholder hero image, more animations, prettier, better interactions, etc." And that is basically the last of the app design and app build. I know it's been a long one. Obviously, at the start of the video I'll have shown the actual finished product. But if any of you guys have any questions or anything, just let me know. Take care, and I'll speak to you soon.
