As a developer, you know how important it is to stay ahead of the curve with the latest advances in technology, and Artificial Intelligence (AI) is no exception. In the coming years, AI will be an increasingly popular tool for developers looking to create smart, profitable, intuitive, and dynamic applications.
However, building an AI-powered app from scratch can be a time-consuming and complex process, and time-to-market is critical for AI-powered apps. But what if you could automate much of it and build your app with just a few drag-and-drop steps?
In this post, we'll explore how to build an AI-powered app with the sczhou/codeformer and Replicate integration. We'll take a deep dive into what these tools are, how they work, and how you can use them to create your own AI-powered apps with ease.
What is the sczhou/codeformer model?
First, let's take a look at the sczhou/codeformer model. This is a powerful AI model trained to restore degraded images: simply upload an image you want to restore, and the model will significantly improve its quality for you. This makes it an ideal choice for building an AI-powered app that restores images.
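For context, Replicate exposes models like this one through its HTTP predictions API. The sketch below shows roughly what a request to that API looks like; the helper function, the token value, and the placeholder version ID are ours for illustration only (look up the current sczhou/codeformer version on replicate.com), not code from this tutorial:

```typescript
// Hypothetical helper: assembles the arguments for a Replicate
// prediction request. "<codeformer-version-id>" is a placeholder,
// not a real version hash.
function buildPredictionRequest(
  token: string,
  version: string,
  imageUrl: string
) {
  return {
    url: "https://api.replicate.com/v1/predictions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Token ${token}`,
        "Content-Type": "application/json",
      },
      // The model receives the image to restore as its input.
      body: JSON.stringify({ version, input: { image: imageUrl } }),
    },
  };
}

const req = buildPredictionRequest(
  "r8_example_token",
  "<codeformer-version-id>",
  "https://example.com/old-photo.jpg"
);
// A client would then call: fetch(req.url, req.options)
```

In the rest of this tutorial we won't call this API by hand; the Altogic integration takes care of it for us.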
What is Altogic?
Altogic is a platform that allows you to build, deploy, and manage the backend of your AI-powered apps. It's a great choice for developers who want to build AI-powered apps without worrying about the technical details. Altogic provides a simple, intuitive interface that lets you build and deploy your backend in minutes with just a few clicks.
In our case, we'll be using Altogic to integrate the Replicate sczhou/codeformer model into our app. This will allow us to send requests to the model and store the results, all within the Altogic platform. With its intuitive interface and robust API, Altogic makes it easy to add AI functionality to your app.
Building an AI-Powered App with sczhou/codeformer Model and Altogic Integration
Now that we've explored what the sczhou/codeformer model and Altogic integration are, let's dive into how to use them to build an AI-powered app. We'll be using Replicate to run the sczhou/codeformer model and integrate Replicate with Altogic to store the results. Here's how it will work:
1. Create Replicate account
First, you'll need to create a Replicate account. This will allow you to run the sczhou/codeformer model and integrate it with Altogic. You can create a free account here:
Once you've created your account, you'll need to create an API token. To do this, click on your profile picture in the top right corner of the Replicate dashboard and select the Account option.

2. Create a new Altogic app
Next, you'll need to create a new Altogic app. This will allow you to build and deploy the backend of your AI-powered app. You can create a free account using the button below:
3. Replicate Integration with Altogic
Once you've created your Altogic account, you can integrate it with Replicate. This will allow you to run the sczhou/codeformer model with Replicate and store the results within Altogic. To do this, you'll need to create a new project in Altogic and add the API token you created in the previous step.
To create a new app, follow the video tutorial below:
In this video, we create a new project called image-restoration in Altogic. Once you've created your project, you'll need to add the API token you created in the previous step. To do this, follow the video tutorial below:
In this video, we add the API token as a parameter called Replicate to our project. Once you've added the API token, you will be able to run the sczhou/codeformer model with Replicate.
info
Note that with the Altogic and Replicate integration, you can run many different Replicate models, not just this one.
4. Creating a backend for your app
Now that we've integrated Replicate with Altogic, we can create a backend for our app. This will allow us to send requests to the sczhou/codeformer model and store the results within Altogic. To do this, we'll create a database model and endpoints in Altogic.
Before creating the data model for the response, take a look at the diagram below. We will check the status field of the response to see whether the prediction has completed. If the status is succeeded, we store the response in the database; otherwise, we return the response to the user without storing it.
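That decision can be sketched as a small helper. This is an illustrative sketch of the flow just described, not code from the actual Altogic backend; the type and function names are ours:

```typescript
// Illustrative sketch: only a finished prediction gets persisted;
// everything else is returned untouched so the UI can keep showing
// a loading state.
type PredictionStatus = "starting" | "processing" | "succeeded" | "failed";

interface PredictionResponse {
  id: string;
  status: PredictionStatus;
  output: string | null;
}

function shouldStore(prediction: PredictionResponse): boolean {
  return prediction.status === "succeeded";
}

function handlePrediction(
  prediction: PredictionResponse,
  save: (p: PredictionResponse) => void
): PredictionResponse {
  if (shouldStore(prediction)) save(prediction); // persist finished result
  return prediction; // always hand the response back to the client
}
```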

4.1 Creating a database
Now, let's open Altogic Designer and create a new model. Click on the Add button and select Model from JSON. Copy and paste the following JSON into the editor and click the Next button.
```json
{
  "id": "rrr4z55ocneqzikepnug6xezpe",
  "version": "be04660a5b93ef2aff61e3668dedb4cbeb14941e62a3fd5998364a32d613e35e",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/rrr4z55ocneqzikepnug6xezpe",
    "cancel": "https://api.replicate.com/v1/predictions/rrr4z55ocneqzikepnug6xezpe/cancel"
  },
  "created_at": "2022-09-13T22:54:18.578761Z",
  "started_at": "2022-09-13T22:54:19.438525Z",
  "completed_at": "2022-09-13T22:54:23.236610Z",
  "source": "api",
  "status": "succeeded",
  "input": {
    "prompt": "oak tree with boletus growing on its branches"
  },
  "output": null,
  "error": null,
  "logs": "Using seed: 36941...",
  "metrics": {
    "predict_time": 4.484541
  }
}
```
After checking the field types, click Next, name the model predictions, and click Finish. Watch the 7-minute video below to prepare the database and endpoints for your AI-powered app:
In the above video, we:

- Create a database to store the model's responses.
- Create a /create-prediction endpoint to send requests to the sczhou/codeformer model.
- Create a /get-prediction endpoint to fetch the response from Replicate; if the condition status == succeeded matches, we store the response in the database, otherwise we show a loading icon to the user.
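On the client side, this /get-prediction flow amounts to polling until the status flips to succeeded. Here is a minimal, hedged sketch of such a loop; the function name is ours, and the status fetcher is injected so the actual endpoint name and transport stay up to you:

```typescript
// Generic polling helper: repeatedly calls `getStatus` until it
// reports "succeeded" (resolving with the output URL), "failed"
// (rejecting), or the attempt budget runs out.
async function pollPrediction(
  getStatus: () => Promise<{ status: string; output: string | null }>,
  intervalMs = 1000,
  maxAttempts = 30
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await getStatus();
    if (res.status === "succeeded" && res.output) return res.output;
    if (res.status === "failed") throw new Error("Prediction failed");
    // Still processing: wait before asking again.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for the prediction");
}
```

In a real app, `getStatus` would wrap a fetch to your own /get-prediction endpoint.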
Once our backend is ready as shown in the video above, let's continue with the front-end part.
5. Create a frontend for the app
Our front-end will be built with Next.js and Tailwind CSS. We'll use the react-dropzone, zustand, and altogic libraries to let users upload images, manage state, and send requests to the endpoints we created in the previous steps.
Now let's run the following command to install the dependencies:
```bash
yarn add altogic zustand react-dropzone react-toastify
```
Open the /src directory of your Next.js 13 app, create a folder named /components, then create and open a new file for the Dropzone component. Copy and paste the following component code into the file.
~/src/components/Dropzone.tsx
```tsx
import Dropzone from "react-dropzone";
import altogic from "@/libs/altogic";
import { Prediction } from "@/types";
import { cn } from "@/helpers";
import Loading from "@/components/Loading";
import { useStore } from "@/store";
import { toast } from "react-toastify";

export default function MyDropzone({ className }: { className?: string }) {
  const {
    uploading,
    setUploading,
    setProcessing,
    setOriginalImage,
    setProcessedImage,
  } = useStore();

  // react-dropzone passes an array of accepted files; we only use the first.
  async function onDrop([file]: File[]) {
    if (!file) return;
    setUploading(true);
    const { data, errors } = await upload(file);
    if (errors) {
      toast.error("Something went wrong, please try again");
      setUploading(false);
      return;
    }
    setOriginalImage(data.input.image);
    setUploading(false);
    setProcessing(true);
    const { outputImage, error } = await getGeneratedImage(data.id);
    if (error) {
      toast.error(error);
      setProcessing(false);
      return;
    }
    setProcessedImage(outputImage);
    setProcessing(false);
  }

  async function getGeneratedImage(id: string) {
    const res = await fetch("/api/get-image", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ id }),
    });
    if (!res.ok) {
      return {
        outputImage: null,
        error: "Something went wrong, please try again",
      };
    }
    return (await res.json()) as {
      outputImage: string | null;
      error: string | null;
    };
  }

  async function upload(file: File) {
    const formData = new FormData();
    formData.append("image", file);
    const { data, errors } = await altogic.endpoint.post("/prediction", formData);
    return {
      data: data as Prediction,
      errors,
    };
  }

  return (
    <Dropzone
      accept={{ "image/jpeg": [], "image/png": [], "image/jpg": [] }}
      onDropAccepted={onDrop}
      multiple={false}
    >
      {({ getRootProps, getInputProps, isDragActive }) => (
        <div
          className={cn(
            "group p-10 rounded-md border-2 border-dashed transition hover:bg-gray-50 cursor-pointer flex flex-col items-center justify-center",
            isDragActive ? "border-indigo-700" : "border-gray-300",
            className
          )}
          {...getRootProps()}
        >
          <input {...getInputProps()} />
          <div className="flex justify-center items-center flex-col gap-4">
            {uploading ? (
              <Loading />
            ) : (
              <>
                <span className="text-lg bg-indigo-700 text-white px-6 py-3 rounded-full">
                  Upload an image
                </span>
                <p className="text-gray-500 group-hover:text-gray-700">
                  Or drag and drop an image
                </p>
                <p className="text-gray-500 text-xs">
                  Supported formats <strong>.jpg, .jpeg, .png</strong>
                </p>
              </>
            )}
          </div>
        </div>
      )}
    </Dropzone>
  );
}
```
In the Dropzone component, we use the onDrop function to upload the image to the /prediction endpoint and read the response. If the upload succeeds, we store the original image in the zustand store, then call the getGeneratedImage function with the response id to fetch the generated image from the /api/get-image endpoint. If that request succeeds, we store the outputImage in the zustand store.
~/src/components/Loading.tsx
```tsx
export default function Loading() {
  return (
    <svg className="pl" width="240" height="240" viewBox="0 0 240 240">
      <circle
        className="pl__ring pl__ring--a"
        cx="120"
        cy="120"
        r="105"
        fill="none"
        stroke="#000"
        strokeWidth="20"
        strokeDasharray="0 660"
        strokeDashoffset="-330"
        strokeLinecap="round"
      />
      <circle
        className="pl__ring pl__ring--b"
        cx="120"
        cy="120"
        r="35"
        fill="none"
        stroke="#000"
        strokeWidth="20"
        strokeDasharray="0 220"
        strokeDashoffset="-110"
        strokeLinecap="round"
      />
      <circle
        className="pl__ring pl__ring--c"
        cx="85"
        cy="120"
        r="70"
        fill="none"
        stroke="#000"
        strokeWidth="20"
        strokeDasharray="0 440"
        strokeLinecap="round"
      />
      <circle
        className="pl__ring pl__ring--d"
        cx="155"
        cy="120"
        r="70"
        fill="none"
        stroke="#000"
        strokeWidth="20"
        strokeDasharray="0 440"
        strokeLinecap="round"
      />
    </svg>
  );
}
```
The Loading component is a simple SVG animation that we use to show the user that the image is being processed.
~/src/components/Footer.tsx
```tsx
export default function Footer() {
  return (
    <footer className="h-header bg-white z-50 fixed left-0 right-0 bottom-0">
      <div className="px-4 h-full container mx-auto">
        <div className="border-t flex justify-center items-center h-full">
          <p className="text-gray-500 text-center">
            Made with ❤️ and powered by{" "}
            <a
              href="https://altogic.com/"
              target="_blank"
              className="text-sky-600 font-semibold"
            >
              Altogic
            </a>{" "}
            &{" "}
            <a
              className="text-sky-600 font-semibold"
              target="_blank"
              href="https://replicate.com/"
            >
              Replicate
            </a>
          </p>
        </div>
      </div>
    </footer>
  );
}
```
The Footer component is a simple footer that displays "Made with ❤️ and powered by Altogic & Replicate."
~/src/components/Header.tsx
```tsx
export default function Header() {
  return (
    <header className="h-header border-b flex items-center">
      <img className="h-10" src="/svg/logo_light.svg" alt="App Logo" />
    </header>
  );
}
```
The Header component is a simple header that shows the app's logo.
~/src/components/ShowImage.tsx
```tsx
import { useStore } from "@/store";
import Loading from "@/components/Loading";
import { useRouter } from "next/router";
import { useState } from "react";

export default function ShowPictures() {
  const { originalImage, processedImage, processing, reset } = useStore();
  const [downloading, setDownloading] = useState(false);
  const router = useRouter();

  function download() {
    if (!processedImage || downloading) return;
    setDownloading(true);
    fetch(processedImage)
      .then((res) => res.blob())
      .then((blob) => {
        const link = document.createElement("a");
        const url = URL.createObjectURL(blob);
        link.href = url;
        link.download = "processed-image.png";
        link.click();
        URL.revokeObjectURL(url);
        setDownloading(false);
      });
  }

  function resetHandler() {
    reset();
    router.push("/");
  }

  return (
    <div className="flex flex-col gap-10 items-center">
      <div className="grid md:grid-cols-2 gap-10">
        <div className="w-full">
          <h2 className="text-2xl font-bold text-center mb-2">
            Original Image
          </h2>
          <img
            className="rounded-2xl w-full h-auto max-h-[90vh] drop-shadow"
            src={originalImage!}
            alt="Original Image"
          />
        </div>
        <div className="w-full">
          <h2 className="text-2xl font-bold text-center mb-2">
            Processed Image
          </h2>
          {processing ? (
            <div className="flex justify-center items-center h-full w-full">
              <Loading />
            </div>
          ) : (
            <img
              className="rounded-2xl w-full h-auto max-h-[90vh] drop-shadow"
              src={processedImage!}
              alt="Processed Image"
            />
          )}
        </div>
      </div>
      <div className="flex justify-center items-center gap-2">
        {processedImage && (
          <>
            <button
              className="border px-4 py-2 shadow rounded-lg bg-indigo-600 transition hover:bg-indigo-700 text-white"
              onClick={resetHandler}
            >
              Reset and try again
            </button>
            <button
              disabled={downloading}
              onClick={download}
              className="border px-4 py-2 shadow rounded-lg bg-green-600 transition hover:bg-green-700 text-white"
            >
              {downloading ? "Downloading..." : "Download Image"}
            </button>
          </>
        )}
      </div>
    </div>
  );
}
```
In the ShowImage component, we use the useStore hook to get the state from the store. We also use the useRouter hook to navigate back to the home page when the user clicks the Reset and try again button.
Let's create a new file ~/src/store/index.ts and add the following code to create a store for our app:
~/src/store/index.ts
```ts
import { create } from "zustand";
import { devtools } from "zustand/middleware";

interface Store {
  uploading: boolean;
  processing: boolean;
  originalImage: string | null;
  processedImage: string | null;
  reset: () => void;
  setProcessedImage: (processedImage: string | null) => void;
  setOriginalImage: (originalImage: string | null) => void;
  setUploading: (uploading: boolean) => void;
  setProcessing: (processing: boolean) => void;
}

export const useStore = create<Store>()(
  devtools(
    (set) => ({
      uploading: false,
      processing: false,
      originalImage: null,
      processedImage: null,
      reset: () =>
        set({
          originalImage: null,
          processedImage: null,
          uploading: false,
          processing: false,
        }),
      setProcessedImage: (processedImage: string | null) =>
        set({ processedImage }),
      setOriginalImage: (originalImage: string | null) =>
        set({ originalImage }),
      setUploading: (uploading: boolean) => set({ uploading }),
      setProcessing: (processing: boolean) => set({ processing }),
    }),
    {
      name: "general-storage",
    }
  )
);
```
In the above code, we create a store using the zustand package, a small global state management library. We use the devtools middleware to enable the Redux DevTools browser extension, which helps us debug the store. The store holds the following data: uploading, processing, originalImage, and processedImage, along with functions to update each of them. The reset function returns the store to its initial state.
Let's open the file ~/src/pages/index.tsx and add the following code:
~/src/pages/index.tsx
```tsx
import MyDropzone from "@/components/Dropzone";
import { useStore } from "@/store";
import { useRouter } from "next/router";
import { useEffect } from "react";

export default function Home() {
  const { originalImage } = useStore();
  const router = useRouter();

  useEffect(() => {
    if (originalImage) router.push("/restored");
  }, [originalImage]);

  return (
    <div className="flex flex-col gap-6 md:gap-12 h-full pb-20">
      <div>
        <h1 className="animate-text text-center text-4xl md:text-6xl bg-gradient-to-r from-teal-500 via-purple-500 to-orange-500 bg-clip-text text-transparent font-black">
          Restore any photo
        </h1>
      </div>
      <div className="flex flex-col items-center gap-10">
        <MyDropzone className="w-full max-w-lg h-56" />
      </div>
    </div>
  );
}
```
Now we need to create a client so our app can communicate with the Altogic API. Let's create a new file ~/src/libs/altogic.ts and add the following code:
```ts
import { createClient } from "altogic";

const ENV_URL = process.env.NEXT_PUBLIC_ALTOGIC_API_BASE_URL;
const CLIENT_KEY = process.env.NEXT_PUBLIC_ALTOGIC_CLIENT_KEY;

if (!ENV_URL || !CLIENT_KEY) {
  throw new Error("Missing Altogic API base URL or client key, check .env");
}

const client = createClient(ENV_URL, CLIENT_KEY);

export default client;
```
In the above code, we create a client using the createClient function from the altogic package. The createClient function takes two arguments: the API base URL and the client key, both of which you can get from the Altogic dashboard. The client key is a secret, so make sure to keep it safe.
info
Rather than explaining every line of code in this tutorial, you can access the GitHub repository for this sczhou/codeformer tutorial here.
Cloning the repository
You can clone the repository and run the app locally by running the following commands:
```bash
git clone \
  --depth 2 \
  --filter=blob:none \
  --sparse \
  https://github.com/altogic/altogic

cd altogic
git sparse-checkout set examples/nextjs-image-restoration
cd examples/nextjs-image-restoration
npm install
```
If you have any questions about AI-powered apps or want to share what you have built, please post a message in our community forum or Discord channel.
After you clone the repository, create a new environment file ~/src/.env and add the following code:
```
NEXT_PUBLIC_ALTOGIC_API_BASE_URL=https://lowa-ai.altogic.com/api/v1
NEXT_PUBLIC_ALTOGIC_CLIENT_KEY=e2e8b9b0-3b5a-4b1f-9c1f-8c1f8c1f8c1f
```
Okay, now we are ready to start our app. Let's run the following command to start the development server:
```bash
npm run dev
```
Now open your browser and go to http://localhost:3000/ to see the app in action.
Conclusion
In this tutorial, we learned how to build a photo restoration app using the sczhou/codeformer model and Replicate. We also learned how to create a database model, endpoints, and services for our backend, and how to manage client-side state with the zustand package.