Getting Started

Get your AI-powered documentation chatbot running in minutes. This guide covers installation, configuration, and integration with your Next.js application.

Prerequisites: Node.js 20+, a Next.js 15+ project (App Router), and API keys for Gemini/OpenAI and Qdrant.
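You can confirm the Node.js requirement before proceeding:

```shell
# Should print v20 or higher
node --version
```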


Install Docsy

Install the CLI globally and add the core package to your project.

pnpm add -g @gaureshart/docsy-cli
pnpm add @gaureshart/docsy-core ai

The ai package is Vercel’s AI SDK, required for streaming responses via useChat().


Initialize Configuration

Run the interactive setup wizard from your project root.

npx @gaureshart/docsy-cli init

This creates two files:

  • docsy.config.ts
  • .env.example

What happens:

  • Interactive prompts ask for your GitHub repo, embedding provider, and vector database settings.
  • Generates a type-safe configuration file.
  • Creates an environment variable template.

Configure Environment Variables

Copy .env.example to .env.local and add your API keys.

cp .env.example .env.local
.env.local
# GitHub Token (optional for public repos)
GITHUB_TOKEN=ghp_your_github_personal_access_token

# Embedding Provider API Key
GOOGLE_GENERATIVE_AI_API_KEY=your_gemini_api_key
# or
# OPENAI_API_KEY=your_openai_api_key

# Vector Database
QDRANT_URL=https://your-cluster.qdrant.io
QDRANT_API_KEY=your_qdrant_api_key

Security: Never commit .env.local to version control. Add it to .gitignore.
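Recent Next.js starters already ignore .env*.local files by default; if your .gitignore doesn't, this one-liner adds the entry only when it's missing:

```shell
# Append .env.local to .gitignore unless it's already listed
grep -qx '.env.local' .gitignore 2>/dev/null || echo '.env.local' >> .gitignore
```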


Review Your Configuration

The CLI generated docsy.config.ts. Review and adjust as needed.

docsy.config.ts
import { defineConfig } from '@gaureshart/docsy-core'

export default defineConfig({
  source: {
    type: 'github',
    owner: 'facebook', // Your target repo owner
    repo: 'react', // Your target repo name
    branch: 'main',
  },
  processing: {
    maxFiles: 100, // Max files to index
    chunkSize: 1000, // Characters per chunk
    chunkOverlap: 200, // Overlap between chunks
  },
  embeddings: {
    provider: 'google', // 'google' or 'openai'
    model: 'gemini-embedding-001',
    taskType: 'QUESTION_ANSWERING',
  },
  vectorDatabase: {
    provider: 'qdrant',
    collection: 'react-docs',
  },
})

Type safety: defineConfig() provides full autocomplete and type checking in your editor.
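Helpers like this are typically just typed identity functions. A hypothetical sketch (not Docsy's actual source; the real DocsyConfig type lives in @gaureshart/docsy-core) shows where the type checking comes from:

```typescript
// Hypothetical config shape, simplified for illustration.
interface DocsyConfig {
  source: { type: 'github'; owner: string; repo: string; branch: string }
  processing?: { maxFiles?: number; chunkSize?: number; chunkOverlap?: number }
}

// No runtime logic: the value passes straight through, but the typed
// parameter gives the editor autocomplete and catches typos at compile time.
function defineConfig(config: DocsyConfig): DocsyConfig {
  return config
}

const config = defineConfig({
  source: { type: 'github', owner: 'facebook', repo: 'react', branch: 'main' },
})
```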


Run Ingestion

Process your documentation and store it in the vector database.

npx @gaureshart/docsy-cli ingest

What happens:

  1. Fetches markdown files from your GitHub repo.
  2. Filters and cleans the content.
  3. Chunks documents intelligently (preserves code blocks, headers).
  4. Generates embeddings via Gemini/OpenAI.
  5. Stores vectors in Qdrant.

Time: Typically takes 1-3 minutes for ~100 docs. Progress is shown in your terminal.
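As an illustration of step 3, a naive sliding-window chunker with overlap might look like the sketch below. This is an assumption about the general technique, not Docsy's implementation, which additionally keeps code blocks and headers intact:

```typescript
// Naive sliding-window chunker: fixed-size character chunks where each
// chunk repeats the last `chunkOverlap` characters of the previous one,
// so context spanning a boundary appears in both chunks.
function chunkText(text: string, chunkSize = 1000, chunkOverlap = 200): string[] {
  const step = chunkSize - chunkOverlap
  if (step <= 0) throw new Error('chunkOverlap must be smaller than chunkSize')
  const chunks: string[] = []
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize))
    if (start + chunkSize >= text.length) break
  }
  return chunks
}
```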


Create the API Route

Set up the backend endpoint to handle chat requests.

app/api/docsy/route.ts
import { createDocsy } from '@gaureshart/docsy-core'

export const maxDuration = 30 // Vercel function timeout

export async function POST(req: Request) {
  const { messages } = await req.json()

  // Extract the latest user query
  const query =
    messages.at(-1)?.content || messages.at(-1)?.parts?.at(-1)?.text

  // Run the RAG pipeline
  const result = await createDocsy({
    pattern: 'naive',
    query,
    messages,
    vectorDatabase: {
      provider: 'qdrant',
      collection: 'react-docs', // Must match your config
    },
    llmConfig: {
      provider: 'google',
      model: 'gemini-2.5-flash',
    },
    embeddings: {
      provider: 'google',
      model: 'gemini-embedding-001',
      taskType: 'QUESTION_ANSWERING',
    },
  })

  // Stream the response
  return result
}

Streaming: The response streams token-by-token for a better user experience.
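The query-extraction line in the route handles both message shapes a client may send: a plain content string or a parts array. Isolated into a helper (a sketch for illustration, with hypothetical minimal types), the logic is:

```typescript
// The client may send either { content: string } or { parts: [{ text }] }
// depending on the transport version, so check both shapes.
type Part = { type: string; text?: string }
type Message = { role: string; content?: string; parts?: Part[] }

function latestQuery(messages: Message[]): string | undefined {
  const last = messages.at(-1)
  return last?.content ?? last?.parts?.at(-1)?.text
}
```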


Add the Chat UI

Create a chat component using Vercel AI SDK’s useChat hook. Docsy handles the backend, leaving you free to design the UI using Shadcn or any custom components you prefer.

components/DocsChat.tsx
'use client'

import { useChat } from '@ai-sdk/react'
import { DefaultChatTransport } from 'ai'
import { useState } from 'react'

export default function DocsChat() {
  const { messages, sendMessage, status } = useChat({
    transport: new DefaultChatTransport({
      api: '/api/docsy',
    }),
  })
  const [input, setInput] = useState('')

  return (
    <>
      {messages.map((message) => (
        <div key={message.id}>
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, index) =>
            part.type === 'text' ? <span key={index}>{part.text}</span> : null,
          )}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault()
          if (input.trim()) {
            sendMessage({ text: input })
            setInput('')
          }
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={status !== 'ready'}
          placeholder="Say something..."
        />
        <button type="submit" disabled={status !== 'ready'}>
          Submit
        </button>
      </form>
    </>
  )
}

Done! Your AI-powered documentation chatbot is ready. Users can now ask questions and get intelligent answers directly from your repository.

Need help? Open an issue on GitHub.
