This year, I became serious about publishing video content on my YouTube channel. Recently, I thought, 'Why not display these videos on my portfolio as well?'
I already had a JSON file where I manually added video data, but I hadn’t touched it in over six months. When I finally checked, I realized I was about twelve videos behind.
Fortunately, I was in a productive mood, and I knew I needed automation. The straightforward idea was to just use the YouTube API… until I opened the YouTube Data API documentation and saw that it only allows 10,000 quota units per day — and that each search.list call costs 100 units, while each videos.list call costs 1 unit.
I did the math… If my portfolio site got just 1,000 visitors a day and each one loaded my videos page, even a single search.list call per visit would add up to 100,000 units — ten times the daily limit — so I’d burn through my quota in a few hours, and that’s assuming they only loaded it once. Suddenly, my “simple” feature had turned into a potential API nightmare.
So instead, I decided to build a small, efficient script that fetches my channel videos once, handles pagination automatically, retries gracefully on rate limits, and writes everything to a local static file.
This is not anything crazy. I’m not scraping or mirroring YouTube. I only need a list of my own public videos (title, link, duration, date, thumbnail) so I can render them on my site and link back to YouTube.
From there, my site simply imports that file (no live API calls, no waiting, and no wasted quota). The result is a setup that’s fast, cost-efficient, and reliable. I’ve effectively reduced my API usage by 99.9%, and my site stays up to date with a single command.
In this article, I’ll walk you through the thought process, the architecture, and the full code behind this YouTube video fetcher — from the initial idea to a production-ready solution you can adapt for your own projects.
The approach: fetch once, use forever
The idea was simple: if the API was expensive to call frequently, then I’d call it once, store the results, and make them part of my site’s codebase.
Instead of fetching videos every time someone visits my site, I’d run a small script that:
- Calls the YouTube Data API only when I need fresh data.
- Fetches all my channel’s videos (with pagination).
- Retries automatically if rate-limited.
- Formats the response data.
- Saves everything into a local static file inside my project.
That file becomes a permanent snapshot of my YouTube channel, one that my site can import and render instantly, without needing to interact with the YouTube API again.
Project setup and structure
Before writing any code, I wanted a structure that felt clean and predictable, something easy to maintain, but also flexible enough to reuse across projects later.
Here’s the basic layout I ended up with:
lib/
├── youtube-utils.ts # Helper functions
└── fetch-youtube-videos.ts # Core API logic
scripts/
└── update-youtube-videos.ts # CLI script that runs the whole process
data/
└── youtubevideos.ts # Generated output file
Here is a breakdown of the structure above:
- **youtube-utils.ts** → Handles all the small but essential pieces: formatting durations, parsing ISO 8601 strings, formatting dates, adding delays between API calls, and implementing retry logic with exponential backoff.
- **fetch-youtube-videos.ts** → The core module that talks to the YouTube API, handles pagination, applies the retry logic, and formats the response into a uniform shape that your site can easily consume.
- **update-youtube-videos.ts** → A small CLI script that ties everything together. It loads environment variables, parses command-line arguments, calls the fetch function, and writes the result into a static file.
- **youtubevideos.ts** → The final output: a generated file containing all your YouTube video data — clean, typed, and ready to import anywhere in your app.
Once the foundation was in place, I started building out the utilities that would make the script reliable — because fetching data once is great, but fetching it safely and consistently is better.
Building the utilities
Before fetching anything, I wanted to make sure the script could handle the messy parts of working with APIs — things like parsing YouTube’s oddly formatted duration strings, handling dates consistently, waiting between requests to respect rate limits, and retrying failed requests safely.
These helper functions became the backbone of the entire script.
1. Parsing video duration
YouTube returns video durations in ISO 8601 format — something like PT15M33S. It’s great for machines, not so much for humans. I needed a simple, readable HH:MM:SS or MM:SS format for the final output.
This little function converts the data from ISO 8601 format into 15:33 or 01:15:33 — depending on the video’s length:
export function parseISO8601Duration(duration: string): string {
const timeString = duration.replace('PT', '');
const hoursMatch = timeString.match(/(\d+)H/);
const minutesMatch = timeString.match(/(\d+)M/);
const secondsMatch = timeString.match(/(\d+)S/);
const hours = hoursMatch ? parseInt(hoursMatch[1], 10) : 0;
const minutes = minutesMatch ? parseInt(minutesMatch[1], 10) : 0;
const seconds = secondsMatch ? parseInt(secondsMatch[1], 10) : 0;
return hours > 0
? `${hours.toString().padStart(2, '0')}:${minutes
.toString()
.padStart(2, '0')}:${seconds.toString().padStart(2, '0')}`
: `${minutes.toString().padStart(2, '0')}:${seconds
.toString()
.padStart(2, '0')}`;
}
It’s small details like this that make the final data easier to work with anywhere you render it.
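A couple of quick examples of what it returns (the import path assumes the same @/ alias used later in this article):
import { parseISO8601Duration } from '@/lib/youtube-utils';

parseISO8601Duration('PT15M33S');   // "15:33"
parseISO8601Duration('PT1H15M33S'); // "01:15:33"
parseISO8601Duration('PT45S');      // "00:45"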
2. Formatting dates
I also wanted consistent date formats. YouTube returns timestamps like 2024-10-31T12:45:00Z, but for display or filtering, I prefer YYYY-MM-DD.
export function formatDateForVideo(dateString: string): string {
  const date = new Date(dateString);
  // Use UTC getters so the date matches YouTube's UTC timestamp,
  // regardless of the timezone the script happens to run in.
  const year = date.getUTCFullYear();
  const month = String(date.getUTCMonth() + 1).padStart(2, '0');
  const day = String(date.getUTCDate()).padStart(2, '0');
  return `${year}-${month}-${day}`;
}
3. Adding delay between requests
To avoid rate limits (HTTP 429 errors), I added a simple sleep utility. Sometimes the safest thing you can do when fetching in batches is just to pause for a bit.
export function sleep(ms: number): Promise<void> {
return new Promise((resolve) => setTimeout(resolve, ms));
}
By spacing out API requests (I use a 500ms delay between pages), the script stays polite to YouTube’s servers and reduces the chance of hitting quota throttling.
4. Retrying failed requests
APIs fail sometimes due to rate limits, network issues, or temporary outages. Instead of letting one bad request break everything, I built a fetchWithRetry function with exponential backoff.
export async function fetchWithRetry(
url: string,
options: Partial<RetryOptions> = {}
): Promise<Response> {
const {
maxRetries = 3,
initialDelay = 1000,
maxDelay = 10000,
backoffMultiplier = 2,
} = options;
let lastError: Error;
let delay = initialDelay;
for (let attempt = 0; attempt <= maxRetries; attempt++) {
try {
const response = await fetch(url);
if (response.status === 429) {
const retryAfter = response.headers.get('Retry-After');
const waitTime = retryAfter ? parseInt(retryAfter) * 1000 : delay;
console.warn(`⚠️ Rate limited. Retrying in ${waitTime / 1000}s...`);
await sleep(waitTime);
delay = Math.min(delay * backoffMultiplier, maxDelay);
continue;
}
if (response.ok) return response;
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
} catch (error: any) {
lastError = error;
if (attempt === maxRetries) throw lastError;
console.warn(`⚠️ ${error.message}. Retrying in ${delay / 1000}s...`);
await sleep(delay);
delay = Math.min(delay * backoffMultiplier, maxDelay);
}
}
throw lastError!;
}
This small block of logic makes sure that if a request fails — whether from network hiccups or temporary API throttling — the script automatically retries instead of crashing.
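The RetryOptions type isn’t shown in the snippet; a minimal version that matches the defaults being destructured could look like this, along with a typical call:
export interface RetryOptions {
  maxRetries: number;        // retries after the first attempt
  initialDelay: number;      // ms to wait before the first retry
  maxDelay: number;          // upper bound on the backoff delay (ms)
  backoffMultiplier: number; // how quickly the delay grows between attempts
}

// Example: retry a flaky endpoint up to 5 times, starting with a 2-second delay.
const response = await fetchWithRetry('https://example.com/videos.json', {
  maxRetries: 5,
  initialDelay: 2000,
});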
Fetching and formatting YouTube videos
With the utilities in place, the next step was to build the main logic that actually interacts with the YouTube Data API, handles pagination, and formats the data into a reusable structure.
I wanted this function to be reliable, extensible, and simple to use — something that could fetch videos from any channel or handle, without repeating code.
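All of the step-by-step snippets below live inside one exported function. The article doesn’t show its full signature, but judging from how the variables are used, it’s roughly this shape — the parameter names and the YouTubeVideoItem type here are my reconstruction, not the exact original:
export interface YouTubeVideoItem {
  id: number;          // assigned after all pages are fetched (Step 6)
  title: string;
  link: string;        // https://www.youtube.com/watch?v=...
  timeLength: string;  // "MM:SS" or "HH:MM:SS"
  datePosted: string;  // "YYYY-MM-DD"
  featuredImg: string; // thumbnail URL
}

export async function fetchYouTubeVideos(
  channelId: string, // a channel ID ("UC...") or a handle ("@...")
  apiKey: string,
  maxResults = 0     // 0 is treated as "no limit" (see Step 6)
): Promise<YouTubeVideoItem[]> {
  let actualChannelId = channelId; // Step 1 may overwrite this for handles

  // ...Steps 1–6 below run here...
  return []; // placeholder — the real function returns videosWithIds (Step 6)
}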
Step 1: Accept channel ID or handle
YouTube supports both channel IDs (like UCabc123...) and handles (like @joelolawanle).
For flexibility, I wanted my function to work with either, so if a handle is provided, it first resolves it to the corresponding channel ID.
if (channelId.startsWith('@')) {
const channelResponse = await fetchWithRetry(
`https://www.googleapis.com/youtube/v3/search?part=snippet&q=${encodeURIComponent(
channelId
)}&type=channel&key=${apiKey}&maxResults=1`
);
const channelData = await channelResponse.json();
actualChannelId = channelData.items[0].id.channelId;
}
That quick search call ensures the script can start from a handle, just like users do when they type it on YouTube.
Step 2: Get the uploads playlist
Every YouTube channel has a hidden “uploads” playlist that contains all videos. Once we know the channel ID, we fetch its content details to extract that playlist ID.
const channelInfoResponse = await fetchWithRetry(
`https://www.googleapis.com/youtube/v3/channels?part=contentDetails&id=${actualChannelId}&key=${apiKey}`
);
const channelInfo = await channelInfoResponse.json();
const uploadsPlaylistId =
channelInfo.items[0]?.contentDetails?.relatedPlaylists?.uploads;
That uploadsPlaylistId becomes our key to fetching every video from the channel.
Step 3: Handle pagination
The YouTube API only returns 50 items per page. So, to fetch all videos, the function loops through pages using the nextPageToken.
let allVideos: YouTubeVideoItem[] = [];
let nextPageToken: string | undefined = undefined;
do {
const playlistUrl =
`https://www.googleapis.com/youtube/v3/playlistItems?` +
`part=snippet&playlistId=${uploadsPlaylistId}&maxResults=50` +
`${nextPageToken ? `&pageToken=${nextPageToken}` : ''}&key=${apiKey}`;
const playlistResponse = await fetchWithRetry(playlistUrl);
const playlistData = await playlistResponse.json();
const videoIds = playlistData.items
.map((item: any) => item.snippet.resourceId.videoId)
.join(',');
At the end of each loop, the script waits a few hundred milliseconds before fetching the next page, to stay well within rate limits.
Step 4: Fetch full video details
Once we have all the video IDs from a playlist page, we fetch detailed data for each — including duration and high-resolution thumbnails.
const videosResponse = await fetchWithRetry(
`https://www.googleapis.com/youtube/v3/videos?part=snippet,contentDetails&id=${videoIds}&key=${apiKey}`
);
const videosData = await videosResponse.json();
This call combines snippet and contentDetails into a single request for efficiency — saving extra quota and round trips.
Step 5: Format the data
Finally, each video gets formatted into a consistent shape that’s easy to use anywhere in the app.
const pageVideos: YouTubeVideoItem[] = videosData.items.map((video: any) => {
const duration = parseISO8601Duration(video.contentDetails.duration);
const thumbnail =
video.snippet.thumbnails.maxres?.url || video.snippet.thumbnails.high.url;
return {
id: 0,
title: video.snippet.title,
link: `https://www.youtube.com/watch?v=${video.id}`,
timeLength: duration,
datePosted: formatDateForVideo(video.snippet.publishedAt),
featuredImg: thumbnail,
};
});
Each formatted video now contains:
- A clean title
- A watchable YouTube link
- A human-readable duration
- A standardized publish date
- A featured thumbnail URL
These objects get added to an array, which keeps growing until all pages are processed — or until a specified maxResults limit is reached.
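The excerpts stop before the end of the do…while loop. The remaining few lines — not shown in the article — roughly append the page’s videos, grab the next page token, and pause before the next request:
  // Still inside the do { ... } block from Step 3:
  allVideos.push(...pageVideos);

  nextPageToken = playlistData.nextPageToken;

  // Stop early once we have enough videos; otherwise wait briefly
  // before requesting the next page (see the sleep helper above).
  if (maxResults > 0 && allVideos.length >= maxResults) {
    nextPageToken = undefined;
  } else if (nextPageToken) {
    await sleep(500);
  }
} while (nextPageToken);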
Step 6: Assign IDs and return
After fetching all pages, we assign incremental IDs and return the list. This keeps the final output organized and ready for rendering in your frontend.
const finalVideos = maxResults > 0 ? allVideos.slice(0, maxResults) : allVideos;
const videosWithIds = finalVideos.map((video, index) => ({
...video,
id: index + 1,
}));
return videosWithIds;
Automating everything with a CLI script
Once the core logic was ready, I wanted a convenient way to run it — ideally without opening a Node REPL or writing throwaway code each time.
The goal was to fetch videos, generate the file, and update my portfolio — all in one command.
So, I built a small CLI script called update-youtube-videos.ts that ties everything together.
Running the script
The script can be executed directly from your terminal using tsx, or through an npm script defined in package.json:
npm run update-youtube-videos
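The article doesn’t show the package.json entry itself; assuming tsx is installed as a dev dependency, it can be as small as this:
{
  "scripts": {
    "update-youtube-videos": "tsx scripts/update-youtube-videos.ts"
  }
}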
I also added a few handy flags for flexibility:
npm run update-youtube-videos -- --all # Fetch all videos
npm run update-youtube-videos -- --max=100 # Fetch up to 100 videos
npm run update-youtube-videos -- --help # Show available options
This gives me full control over how much data I fetch — whether I want every video or just the latest few.
How it works
The CLI script does four main things:
- Loads environment variables: Using dotenv, it reads API keys and settings from a .env.local file (see the sketch after this list).
- Parses command-line options: Flags like --all or --max determine how many videos to fetch.
- Calls the main fetch function: It passes those options through to fetchYouTubeVideos() from the lib/ folder.
- Writes the output file: Once the data is ready, it writes a new youtubevideos.ts file to the /data folder.
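Here’s what that first step might look like at the top of the script — a sketch, assuming the variables are named YOUTUBE_API_KEY and YOUTUBE_CHANNEL_ID (use whatever names you put in your own .env.local):
import { config } from 'dotenv';

// Load .env.local so the API key never ends up hard-coded in the repo.
config({ path: '.env.local' });

const apiKey = process.env.YOUTUBE_API_KEY;
const channelId = process.env.YOUTUBE_CHANNEL_ID;

if (!apiKey || !channelId) {
  console.error('❌ Missing YOUTUBE_API_KEY or YOUTUBE_CHANNEL_ID in .env.local');
  process.exit(1);
}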
Command-line parsing
Here’s the part that handles user arguments and ensures you don’t accidentally run the script with invalid options:
function parseArgs() {
const args = process.argv.slice(2);
if (args.includes('--help') || args.includes('-h')) {
console.log(`
Usage:
npm run update-youtube-videos [options]
Options:
--all Fetch all videos from channel
--max=<number> Maximum number of videos to fetch (default: 50)
--help, -h Show this help message
`);
process.exit(0);
}
const fetchAll = args.includes('--all');
const maxArg = args.find((arg) => arg.startsWith('--max='));
const maxResults = maxArg ? parseInt(maxArg.split('=')[1], 10) : undefined;
if (maxResults !== undefined && (isNaN(maxResults) || maxResults < 1)) {
console.error('❌ Error: --max must be a positive number');
process.exit(1);
}
return { fetchAll, maxResults };
}
It’s simple, self-documenting, and makes the script feel like a real command-line tool rather than a one-off Node script.
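The rest of the entry point is mostly glue. Roughly — reusing the fetchYouTubeVideos signature and environment variable names sketched earlier (both reconstructions, not the article’s exact code):
async function main() {
  const { fetchAll, maxResults } = parseArgs();

  const videos = await fetchYouTubeVideos(
    process.env.YOUTUBE_CHANNEL_ID!, // validated when the env file was loaded
    process.env.YOUTUBE_API_KEY!,
    fetchAll ? 0 : maxResults ?? 50  // 0 = fetch everything
  );

  console.log(`✅ Fetched ${videos.length} videos`);
  // ...reverse the list, reassign IDs, and write the data file (next section)...
}

main().catch((error) => {
  console.error('❌ Update failed:', error);
  process.exit(1);
});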
Writing the data file
Once the videos are fetched, the script reverses the order so that oldest videos come first and latest appear last — useful for chronological rendering.
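The videosWithNewIds array used in the template below comes from that reversal — a one-liner along these lines (my sketch, not the article’s exact code):
// Reverse so the oldest video becomes id 1, then reassign IDs in order.
const videosWithNewIds = [...videos].reverse().map((video, index) => ({
  ...video,
  id: index + 1,
}));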
It then generates a TypeScript file that exports the entire list:
const fileContent = `export const youTubeVideos = [
${videosWithNewIds
.map(
(video) => ` {
id: ${video.id},
title: ${JSON.stringify(video.title)},
link: ${JSON.stringify(video.link)},
timeLength: ${JSON.stringify(video.timeLength)},
datePosted: ${JSON.stringify(video.datePosted)},
featuredImg: ${JSON.stringify(video.featuredImg)},
}`
)
.join(',\n')},
];
`;
That output gets written to /data/youtubevideos.ts:
fs.writeFileSync(filePath, fileContent, 'utf-8');
console.log(`✅ Updated ${filePath}`);
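The filePath variable isn’t defined in the excerpt; it simply points at the /data output file, resolved from the project root — for example:
import fs from 'fs';
import path from 'path';

// Assumption: the script is run from the project root, as npm scripts are.
const filePath = path.join(process.cwd(), 'data', 'youtubevideos.ts');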
The result is a neatly formatted, importable file that looks like this:
export const youTubeVideos = [
{
id: 1,
title: "Build a Slackbot With Node.js",
link: "https://www.youtube.com/watch?v=7Ys-6MkekBw",
timeLength: "38:16",
datePosted: "2023-12-19",
featuredImg: "https://i.ytimg.com/vi/7Ys-6MkekBw/maxresdefault.jpg",
},
...
];
Next, I’ll show how I integrated the generated data into my Next.js site — so the videos appear instantly, without any API calls or loading spinners.
Using the generated data in Next.js
Once the script was generating a clean youtubevideos.ts file, integration into my site became effortless.
Instead of fetching data from the YouTube API on every request, I could now import my videos directly as static data — just like any other local file.
Importing the data
The output file lives in the /data directory and exports an array of formatted video objects:
import { youTubeVideos } from '@/data/youtubevideos';
That single import gives me immediate access to all my videos — including their titles, durations, thumbnails, and upload dates — without making a single network call.
Rendering the videos
Here’s a simple example of how I render them in a page component:
export default function VideosPage() {
return (
<section className="grid gap-6 md:grid-cols-2 lg:grid-cols-3">
{youTubeVideos.map((video) => (
<a
key={video.id}
href={video.link}
target="_blank"
rel="noopener noreferrer"
className="rounded-xl overflow-hidden border hover:shadow-lg transition"
>
<img
src={video.featuredImg}
alt={video.title}
className="w-full aspect-video object-cover"
/>
<div className="p-3">
<h3 className="font-medium text-lg">{video.title}</h3>
<p className="text-sm text-gray-500">
{video.timeLength} • {video.datePosted}
</p>
</div>
</a>
))}
</section>
);
}
It’s static, fast, and SEO-friendly — no loading spinners, no API errors, no useEffect fetching logic.
Handling images in Next.js
Since YouTube thumbnails come from i.ytimg.com, you’ll need to allow that hostname in your next.config.js before Next.js’s image component can serve them:
// next.config.js
module.exports = {
images: {
remotePatterns: [
{
protocol: 'https',
hostname: 'i.ytimg.com',
pathname: '/vi/**',
},
],
},
};
Then you can safely use the next/image component instead of a raw <img> tag:
import Image from 'next/image';
<Image
src={video.featuredImg}
alt={video.title}
width={1280}
height={720}
className="rounded-t-xl"
/>;
This gives you optimized image loading, responsive sizes, and automatic caching.
Updating the data
Whenever you upload new videos, you simply run:
npm run update-youtube-videos
This regenerates the file with the latest uploads, and you can commit the updated data to your repo.
Wrapping up
What started as a quick fix for my portfolio turned into a small but powerful workflow. Instead of relying on live API calls, I now have a single script that keeps my YouTube data fresh, reduces quota usage by 99.9%, and makes my site load instantly.
And the best part is: once you have the script, you can take it as far as you want. If you’d like to explore the full implementation, the source code is available here:
👉 github.com/olawanlejoel/youtube-video-fetcher
If you want everything to stay in sync automatically, you can set up a GitHub Action (or any CI pipeline) to run the script periodically — for example, once a week. It can fetch new videos, update the static file, and even commit the changes back to your repository.
That means you’ll never have to manually update anything again.
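As a starting point, such a workflow might look like this — a sketch, not from the article, assuming the environment variable names used earlier and a YOUTUBE_API_KEY repository secret:
# .github/workflows/update-youtube-videos.yml
name: Update YouTube videos

on:
  schedule:
    - cron: '0 6 * * 1' # every Monday at 06:00 UTC
  workflow_dispatch:

jobs:
  update:
    runs-on: ubuntu-latest
    permissions:
      contents: write
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run update-youtube-videos -- --all
        env:
          YOUTUBE_API_KEY: ${{ secrets.YOUTUBE_API_KEY }}
          YOUTUBE_CHANNEL_ID: ${{ secrets.YOUTUBE_CHANNEL_ID }} # not secret, just convenient to store here
      - name: Commit updated data
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add data/youtubevideos.ts
          git diff --cached --quiet || git commit -m "chore: update YouTube videos"
          git push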
You can also tweak how aggressive the script is with API calls. The built-in rate limiting, retry logic, and delays between pages make it safe to run in production or CI environments without risking API throttling.