25 Mar 2025
Hi Devs,
Hope you’ve been well! It’s been a big month for us at JigsawStack. Between funding milestones, major product updates, team expansion, and community events, we’ve been moving quickly on all fronts.
Let’s get into what’s been happening.
First, we’re excited to welcome Harsha Vardhan Khurdula as our new Founding AI Researcher! Harsha will be leading model training, fine-tuning, and benchmarking. In his own words, he’s ready for “endless training loops and the occasional glitch that reminds us even AI has off days.” He’s already off to a strong start.
We’re proud to share that JigsawStack has raised a total of $1.5M, with our most recent $1M round led by Ada Ventures. This follows our earlier pre-seed from Antler.
Michael Tefula from Ada Ventures has been an incredible partner. Within minutes of our first meeting, he pulled out his laptop, installed the SDK, and began sending API requests on the spot. That energy is exactly what we’re building for: developers who want fast, seamless AI infrastructure they can trust.
You can read more about why Ada invested here.
JigsawStack vOCR vs Mistral OCR
We put our vOCR model up against Mistral OCR in a series of tests covering multilingual text, handwriting, document structure, and bounding box accuracy.
Mistral OCR struggled with handwritten content and lacked bounding box support. Meanwhile, JigsawStack vOCR extracted text with high precision—even from noisy, multilingual, or handwritten sources—and returned structured outputs with full position data.
You can explore the full comparison here.
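If you want to poke at the bounding-box output yourself, here’s a minimal sketch using the TypeScript SDK. The image URL is a placeholder and the method and field names are illustrative, so check the vOCR docs for the exact signature.

```typescript
import { JigsawStack } from "jigsawstack";

const jigsaw = JigsawStack({ apiKey: process.env.JIGSAWSTACK_API_KEY! });

// Run vOCR on a noisy, handwritten, or multilingual image (placeholder URL).
const result = await jigsaw.vision.vocr({
  url: "https://example.com/handwritten-receipt.jpg",
});

// The response includes the extracted text along with position data
// (bounding boxes) for each detected region.
console.log(result);
```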
JigsawStack STT vs OpenAI Audio STT
After OpenAI’s latest update to Whisper, we ran a comparison using real-world audio files.
JigsawStack’s Speech-to-Text delivered faster results, supported speaker labels out of the box, and offered cleaner structured output—without sacrificing accuracy. If you're working with meetings, voice notes, or podcasts, our transcription model is built for both speed and clarity.
Read the full breakdown here.
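For comparison, here’s a rough sketch of a transcription call in the same TypeScript SDK. The audio URL is a placeholder and the speaker-label option is illustrative, so confirm the exact parameter names against the STT docs.

```typescript
import { JigsawStack } from "jigsawstack";

const jigsaw = JigsawStack({ apiKey: process.env.JIGSAWSTACK_API_KEY! });

// Transcribe a meeting recording (placeholder URL). The speaker-label
// flag shown here is illustrative; see the STT docs for exact naming.
const transcript = await jigsaw.audio.speech_to_text({
  url: "https://example.com/team-standup.mp3",
  by_speaker: true,
});

console.log(transcript);
```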
We’ve updated the JigsawStack SDKs for TypeScript and Python, refreshed our cURL examples, and overhauled our documentation to make onboarding smoother. There are now clearer error messages, new CLI tooling, and expanded code samples.
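To give a feel for the smoother onboarding, here’s roughly all the setup a new project needs, plus a try/catch to surface those clearer error messages. The call and the error shape are illustrative, not a guaranteed SDK contract.

```typescript
// Install first: npm install jigsawstack
import { JigsawStack } from "jigsawstack";

const jigsaw = JigsawStack({ apiKey: process.env.JIGSAWSTACK_API_KEY! });

try {
  // Any product call works the same way; vOCR is just an example here.
  const result = await jigsaw.vision.vocr({
    url: "https://example.com/sample-doc.png", // placeholder
  });
  console.log(result);
} catch (err) {
  // Failed requests surface the improved, more actionable error messages.
  console.error("JigsawStack request failed:", err);
}
```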
You can now try any of our models directly on the API landing pages—no signup or login needed. Each tool comes with 20 free requests per day for testing. This is especially helpful for no-code builders or developers exploring fast prototyping.
The AI Scraper has evolved. It now supports full web crawling with automatic link discovery, infinite scroll handling, custom headers, and proxy support. Whether you’re scraping structured data from pricing pages or crawling an entire site like Wikipedia, the process is now as simple as giving a few prompts and running one API call.
We’ve put together a blog walking through how it works here, and you can try it live on the Scraper product page. No login required.
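To show just how little code that one API call is, here’s a hedged sketch: the target URL is a placeholder, and the parameter names (like element_prompts) are illustrative, so check the Scraper docs before copying.

```typescript
import { JigsawStack } from "jigsawstack";

const jigsaw = JigsawStack({ apiKey: process.env.JIGSAWSTACK_API_KEY! });

// Describe what you want in plain language; the scraper finds it.
const pricing = await jigsaw.web.ai_scrape({
  url: "https://example.com/pricing", // placeholder target page
  element_prompts: ["plan name", "monthly price", "feature list"],
});

console.log(pricing);
```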
This month, I tested the top three AI coding assistants (Lovable, v0, and Bolt) by building the same JigsawStack app with each. Some flew through it; others needed a bit more help.
From API setup to debugging, I break down what worked and what didn’t.
Check out the full video here.
We launched the JigsawStack MCP Server—our open-source implementation of the Model Context Protocol (MCP) that allows any compliant agent (like Claude Desktop or Smithery) to use our tools on demand. You can now plug JigsawStack’s services into autonomous agents, without hardcoding API logic.
More about it here, or check out the GitHub repo.
Thanks for being on this journey with us. These updates represent a significant leap forward in our mission to simplify backend AI. We’re building a platform that helps developers move fast without friction—tools that work out of the box and scale when you need them to.
If you’re experimenting with JigsawStack or thinking about integrating it into your project, we’d love to hear what you’re building. You can always reach us on X, Discord, or just by replying directly.
Talk soon,
Angel & The JigsawStack Team