Prerequisites

Before you begin, make sure you have:

  - An existing Next.js project
  - Node.js and npm installed
  - A JigsawStack API key

Installation

Add the JigsawStack SDK to your Next.js project:

npm install jigsawstack
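
If your project uses yarn or pnpm instead of npm, the equivalent commands are:

yarn add jigsawstack
pnpm add jigsawstack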

Configuration

Setting up environment variables

  1. Create or modify your .env.local file in the root of your Next.js project:
.env.local
JIGSAWSTACK_API_KEY='your-api-key'
  2. For production, make sure to add the environment variable to your deployment platform (Vercel, Netlify, etc.)
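
If you prefer not to rely on the automatic environment lookup, you can also pass the key to the constructor yourself. This is a minimal sketch, assuming the SDK's apiKey option and using lib/jigsawstack.ts only as an example location for a shared client:

lib/jigsawstack.ts
import { JigsawStack } from "jigsawstack";

// Pass the key explicitly instead of relying on the automatic env lookup.
// The `apiKey` option name is an assumption here; verify it against the SDK reference.
export const jigsawstack = JigsawStack({
  apiKey: process.env.JIGSAWSTACK_API_KEY,
});

Importing this single shared instance into your API routes keeps the configuration in one place.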

API Route Implementation

JigsawStack can be easily integrated into Next.js API routes. Here’s an example of a text-to-speech API endpoint:

pages/api/tts.ts
import type { NextApiRequest, NextApiResponse } from "next";
import { JigsawStack } from "jigsawstack";

const jigsawstack = JigsawStack(); // API key will be read from environment

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    const result = await jigsawstack.audio.text_to_speech({
      text: "Hello, how are you doing?",
    });

    // result.blob() resolves to the generated audio; store or stream it as needed
    const audio = await result.blob();

    res.status(200).json({
      success: true,
    });
  } catch (error) {
    const message = error instanceof Error ? error.message : "Unknown error";
    return res.status(500).json({ success: false, error: message });
  }
}
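
The handler above only reports success; it never sends the generated audio to the caller. If you want to return the audio to the browser instead, one option is to convert the blob into a Buffer and respond with an audio content type. This is a sketch rather than an official example: it assumes result.blob() resolves to a standard Blob, uses audio/mpeg as a guess at the output format, and the pages/api/tts-audio.ts path is purely illustrative.

pages/api/tts-audio.ts
import type { NextApiRequest, NextApiResponse } from "next";
import { JigsawStack } from "jigsawstack";

const jigsawstack = JigsawStack();

export default async function handler(req: NextApiRequest, res: NextApiResponse) {
  try {
    const result = await jigsawstack.audio.text_to_speech({
      text: "Hello, how are you doing?",
    });

    // Turn the Blob into a Node.js Buffer so it can be written to the response
    const blob = await result.blob();
    const buffer = Buffer.from(await blob.arrayBuffer());

    res.setHeader("Content-Type", "audio/mpeg"); // assumed output format
    res.status(200).send(buffer);
  } catch (error) {
    const message = error instanceof Error ? error.message : "Unknown error";
    res.status(500).json({ success: false, error: message });
  }
}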

Client-Side Implementation

For client components that need to use JigsawStack, you should make requests through your API routes rather than directly from the client to protect your API key.

components/TextToSpeechButton.jsx
"use client";
import { useState } from "react";

export default function TextToSpeechButton() {
  const [loading, setLoading] = useState(false);

  const handleTextToSpeech = async () => {
    setLoading(true);
    try {
      // POST with a JSON body so the same component also works with the
      // App Router handler shown below, which only accepts POST requests
      const response = await fetch("/api/tts", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text: "Hello, how are you doing?" }),
      });
      const data = await response.json();

      if (response.ok && data.success) {
        // Handle successful response
        console.log("Text-to-speech generated successfully");
      }
    } catch (error) {
      console.error("Error generating speech:", error);
    } finally {
      setLoading(false);
    }
  };

  return (
    <button 
      onClick={handleTextToSpeech}
      disabled={loading}
      className="px-4 py-2 bg-blue-500 text-white rounded"
    >
      {loading ? "Generating..." : "Convert to Speech"}
    </button>
  );
}
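
If your route returns raw audio instead of JSON (as in the tts-audio sketch above), the client can play it straight from the response. A minimal sketch; /api/tts-audio is the hypothetical route from that example:

// Fetch the generated audio and play it in the browser
async function playGeneratedSpeech() {
  const response = await fetch("/api/tts-audio");
  if (!response.ok) {
    throw new Error("Text-to-speech request failed");
  }
  const blob = await response.blob();
  const audio = new Audio(URL.createObjectURL(blob));
  await audio.play();
}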

App Router Support

If you’re using Next.js App Router, you can create a route handler:

app/api/tts/route.ts
import { NextResponse } from "next/server";
import { JigsawStack } from "jigsawstack";

const jigsawstack = JigsawStack();

export async function POST(request: Request) {
  try {
    const { text } = await request.json();

    const result = await jigsawstack.audio.text_to_speech({
      text: text || "Hello, how are you doing?",
    });

    // result.blob() resolves to the generated audio; store or stream it as needed
    const audio = await result.blob();

    return NextResponse.json({ success: true });
  } catch (error) {
    const message = error instanceof Error ? error.message : "Unknown error";
    return NextResponse.json({ success: false, error: message }, { status: 500 });
  }
}
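
The same variation works here: because App Router route handlers return standard Response objects, you can hand the audio blob back directly instead of JSON. Again a sketch, reusing the hypothetical tts-audio naming and the assumed audio/mpeg content type:

app/api/tts-audio/route.ts
import { JigsawStack } from "jigsawstack";

const jigsawstack = JigsawStack();

export async function POST(request: Request) {
  const { text } = await request.json();

  const result = await jigsawstack.audio.text_to_speech({
    text: text || "Hello, how are you doing?",
  });

  // A Blob is a valid Response body, so it can be returned as-is
  const audio = await result.blob();
  return new Response(audio, {
    headers: { "Content-Type": "audio/mpeg" }, // assumed output format
  });
}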

JigsawStack seamlessly integrates with Next.js to provide powerful AI capabilities while maintaining best practices for security and performance.