Don't just read research: watch it think. Synth Lab is an AI-agent-powered architect that transforms the static text of multiple arXiv papers into "living" structural models. It is a multimodal "Live Lab Notebook" that doesn't just explain research papers; it architecturally reconstructs them. As the agent analyzes the methodology of one or more arXiv papers, it simultaneously "draws" the logic in real time using interleaved D3.js and Mermaid.js diagrams.
Demo
Synth Lab — live walkthrough
Screenshots
Problem Statement
Academic research papers on arXiv are dense, static documents. Most AI summaries and outlines are walls of text that flatten a paper's structure. Readers must manually trace architectures, methodology flows, and hierarchical relationships buried in prose and raw figures, a cognitive bottleneck that slows understanding and knowledge synthesis, especially in fast-moving fields like ML & AI, markets, and drug discovery.
There was no automated way to instantly convert a paper's structural logic into an interactive, navigable diagram — forcing researchers and engineers to spend hours building mental models that an AI could generate in seconds.
Solution
Synth Lab is an application built around an AI agent pipeline, implemented on the Google Agent Development Kit, that parses arXiv papers end-to-end and synthesizes their architecture into interactive, hierarchical diagrams, turning static research into explorable technical synthesis, meta-analysis, and comparative analysis.
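The snippet below is a minimal sketch, not the actual Synth Lab pipeline, of how a two-stage agent built on the Google Agent Development Kit could extract a paper's structure and emit a Mermaid diagram for the front end to render. The agent names, model choice, prompts, and the `paper_structure` state key are illustrative assumptions.

```python
# Hypothetical two-stage ADK pipeline: paper text -> structure -> Mermaid diagram.
from google.adk.agents import LlmAgent, SequentialAgent

# Stage 1: extract the paper's architectural components from raw text.
structure_extractor = LlmAgent(
    name="structure_extractor",
    model="gemini-2.0-flash",
    instruction=(
        "Read the arXiv paper text provided by the user and list its "
        "architectural components, methodology steps, and their relationships."
    ),
    output_key="paper_structure",  # saved to session state for the next stage
)

# Stage 2: turn the extracted structure into a Mermaid flowchart definition.
diagram_synthesizer = LlmAgent(
    name="diagram_synthesizer",
    model="gemini-2.0-flash",
    instruction=(
        "Using {paper_structure}, output a single Mermaid flowchart (graph TD) "
        "that captures the paper's architecture."
    ),
)

# Run the stages in order; the UI would render the resulting Mermaid text.
paper_to_diagram = SequentialAgent(
    name="paper_to_diagram",
    sub_agents=[structure_extractor, diagram_synthesizer],
)
```

In a design like this, the diagram text streams back to the browser, where Mermaid.js renders the flowchart and D3.js layers on the interactive navigation described above.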
Future
Next milestones focus on scaling the agent workflow, improving synthesis quality, and broadening visualization output formats so researchers can move from reading to structured understanding even faster.
Sponsors