Hyperframes: HTML-to-Video
Snaplyze Digest
GitHub Repos · Intermediate · 2 min read · Apr 23, 2026


“A 9,428-star repo is betting that AI video should start as HTML, not React.”

In Short

Hyperframes has 9,428 GitHub stars and a v0.4.14 release published on 2026-04-23. It turns HTML, CSS, and JavaScript compositions into deterministic video through headless Chrome and FFmpeg. You write the video like a web page, then Hyperframes seeks each frame and captures pixels with Chrome's BeginFrame API. The strongest caveat is in the docs: rendering runs on a single machine today, and the adapter API is still v0.

Tags: ai · video · html · typescript · ffmpeg
Why It Matters
The practical pain point this digest is really about.

You know that feeling when you want generated videos, but the tool asks you to think like a video editor or to rewrite web code in React first? Hyperframes addresses that by making the source file plain HTML with timing data attributes. The before state is translating web layouts and GSAP animations into a video framework; the after state is writing HTML, previewing it in a browser, and rendering it to MP4.

How It Works
The mechanism, architecture, or workflow behind it.

Think of it like stop-motion for a web page. You define a composition in HTML, add timing with data attributes, and preview it in the browser. During render, Hyperframes computes the exact time for each frame, asks a frame adapter such as GSAP what the screen should look like, captures that frame through Chrome's BeginFrame API, and sends the frames to FFmpeg. The key idea is that the animation clock comes from `frame / fps`, not wall time.
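The frame-seek loop described above can be sketched in a few lines. The `seek` and `capture` callbacks below stand in for a frame adapter (such as a GSAP timeline seek) and Chrome's BeginFrame capture; their names and signatures are assumptions for illustration, not Hyperframes' actual API, and the real capture step would be asynchronous.

```typescript
// Sketch of the deterministic render loop: the animation clock is
// derived from frame / fps, never from wall time, so repeated renders
// produce identical output.
function renderFrames(
  durationSec: number,
  fps: number,
  seek: (t: number) => void,        // adapter hook: put the page at time t
  capture: (frame: number) => void, // stand-in for BeginFrame pixel capture
): number {
  const totalFrames = Math.round(durationSec * fps);
  for (let frame = 0; frame < totalFrames; frame++) {
    const t = frame / fps;          // deterministic clock, not Date.now()
    seek(t);
    capture(frame);
  }
  return totalFrames;               // count of frames handed to FFmpeg
}
```

For example, a 2-second composition at 30 fps yields exactly 60 captures at t = 0, 1/30, 2/30, and so on, regardless of how slowly the machine renders each frame.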

Key Takeaways
7 fast bullets that make the core value obvious.
  • HTML authoring — you write compositions as HTML files instead of React components, so you can paste and animate web content with less translation work.
  • Frame-by-frame rendering — you get deterministic output because Hyperframes seeks each frame before capture instead of recording wall-clock playback.
  • GSAP adapter — you can keep library-clock animations in sync by seeking timelines to `frame / fps` before each frame capture.
  • Docker render mode — you can get closer reproduction across machines by pinning Chrome, fonts, and FFmpeg, while accepting slower startup and no container GPU path.
  • CLI workflow — you can scaffold, preview, render, lint, transcribe, and run diagnostics from commands that agents can call without prompts.
  • Catalog blocks — you can add 50+ documented blocks and components such as social overlays, shader transitions, and data charts.
  • Studio and player packages — you get a browser composition editor and an embeddable player as part of the monorepo.
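The GSAP adapter bullet above boils down to one move: seek the library's timeline to the frame clock before each capture. A minimal sketch of what such an adapter could look like follows; the `FrameAdapter` interface and class names are assumptions for illustration (the source notes the adapter API is still v0 and may change), though GSAP timelines really do expose a `seek(time)` method.

```typescript
// Hypothetical adapter shape; names are illustrative, not the v0 API.
interface FrameAdapter {
  seekTo(timeSec: number): void;
}

// GSAP timelines expose seek(time) to jump to an absolute time,
// which is exactly what a frame-clock renderer needs.
interface GsapTimelineLike {
  seek(timeSec: number): void;
}

class GsapAdapter implements FrameAdapter {
  constructor(private timeline: GsapTimelineLike) {}
  seekTo(timeSec: number): void {
    // Pin the library's clock to the render clock before capture,
    // so wall-time playback speed never affects the output.
    this.timeline.seek(timeSec);
  }
}
```

Because the adapter only needs a way to jump to an absolute time, the same pattern should extend to any animation library with a seekable timeline.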
Should You Care?
Audience fit, decision signal, and the original source in one place.

Who It Is For

If you build agent workflows, video automation, or code-driven marketing clips, Hyperframes is worth a hands-on spike. It fits you if you prefer HTML, CSS, GSAP, and FFmpeg over React component video code. It is not for you if you need mature distributed rendering today or a stable adapter API.

Worth Exploring?

Worth exploring as an experimental repo with strong activity: 9,428 stars, 49 releases, 423 commits, and a release on 2026-04-23. Do not treat it as a drop-in Remotion Lambda replacement because the docs say Hyperframes runs on a single machine today. Start with a local proof of concept and test Docker rendering before you tie it to production output.

View original source