ScribeNova is a sophisticated, fully local AI companion built with the latest web technologies. It combines high-performance inference via Ollama, semantic long-term memory using Qdrant, and a beautiful, interactive UI to provide a private and powerful assistant experience directly on your machine.
- Conversation Memory: Remembers past interactions using semantic search, allowing for deep contextual continuity.
- Custom Fact Store: Proactively learns and recalls specific user-provided facts (preferences, names, projects) to personalize every response.
- Deep Crawling: Built-in Playwright crawler that navigates websites, extracts meaningful content, and indexes it into your local vector database.
- Instant Q&A: Ask complex questions about any website and get structured, cited answers based on the crawled data.
- On-device Processing: Apply filters, edge detection, and enhancements using OpenCV.js directly in the browser.
- Vision Helpers: Use AI to analyze and describe uploaded images (requires vision-capable local models).
- Canvas-Rendered Avatar: Meet Kiro, your animated assistant rendered with high-performance canvas physics.
- Emotional Intelligence: Kiro reacts to chat states—thinking, sleeping, being happy, or surprised—making the interaction feel alive.
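The memory features above rest on embedding-based semantic search. As a minimal sketch of the idea (cosine similarity over precomputed vectors — the actual project delegates this to Qdrant's ANN search, and the `Memory`/`retrieve` names here are illustrative):

```typescript
// Sketch of semantic retrieval over stored memories.
// Embeddings are assumed precomputed (e.g., by an embedding model);
// the real app stores and searches them in Qdrant instead.

type Memory = { text: string; embedding: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the top-k stored memories most similar to the query embedding.
function retrieve(query: number[], store: Memory[], k = 3): Memory[] {
  return [...store]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

A vector database like Qdrant performs the same nearest-neighbor ranking, but with an index that scales far beyond a linear scan.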
| Chat UI | Customization | Output |
|---|---|---|
| ![]() | ![]() | ![]() |
ScribeNova utilizes a multi-layered agentic architecture designed for privacy and speed.
```mermaid
graph TD
    User((User)) <--> UI[Next.js Frontend / React 19]
    UI <--> API[Next.js API Routes]

    subgraph Agent_Orchestrator [Agentic Core]
        API <--> LG[LangGraph ReAct Agent]
        LG <--> LLM[Ollama LLM - Qwen 2.5]
    end

    subgraph Knowledge_Base [Vector Storage]
        LG <--> Qdrant[(Qdrant Vector DB)]
        Qdrant --- CM[Conversation Memory]
        Qdrant --- UM[User Facts]
        Qdrant --- WC[Website Chunks]
    end

    subgraph Tools [External Tools]
        LG --- WQA[Website Q&A Tool]
        LG --- Search[DuckDuckGo Search]
        LG --- Calc[Calculator]
        WQA --- Crawler[Playwright Crawler]
    end

    subgraph Client_Side [Local Browser Tools]
        UI --- CV[OpenCV.js Processing]
        UI --- Mascot[Canvas Kiro Mascot]
    end
```
- Input: User sends a message via the React 19 interface.
- Context Retrieval: The LangGraph agent queries Qdrant for relevant past conversations and user facts.
- Reasoning: Ollama processes the combined context and decides whether to use a tool (e.g., search or crawl).
- Action: If a tool is called, the system executes it (e.g., Playwright crawls a site) and feeds the results back.
- Generation: The final response is generated, sanitized, and streamed back to the UI.
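The reasoning/action steps above follow the ReAct pattern: the model either answers directly or requests a tool call whose result is fed back as context. A minimal sketch of that dispatch loop (the tool names and `ModelStep` shape here are illustrative — the real project uses LangGraph's prebuilt agent):

```typescript
// Sketch of a ReAct-style tool-dispatch loop.
// Each "model step" is stubbed as data; in the real system the LLM
// decides at runtime whether to answer or call a tool.

type ToolCall = { tool: string; input: string };
type ModelStep = { answer?: string; toolCall?: ToolCall };

// Hypothetical tool registry; the project defines its own in lib/tools.ts.
const tools: Record<string, (input: string) => string> = {
  calculator: (expr) => String(Function(`"use strict"; return (${expr})`)()),
};

// Run steps until the model produces a final answer, feeding each
// tool result back as an observation for the next step.
function runAgent(steps: ModelStep[]): string {
  const observations: string[] = [];
  for (const step of steps) {
    if (step.answer) return step.answer;
    if (step.toolCall) {
      const result = tools[step.toolCall.tool](step.toolCall.input);
      observations.push(result); // becomes context for the next model call
    }
  }
  return observations.join("; ");
}
```

In the real agent the loop is driven by the LLM's own output rather than a precomputed list of steps, but the control flow — observe, decide, act, repeat — is the same.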
```
llm-next-app/
├── app/                    # Next.js App Router
│   ├── api/                # Backend API Endpoints (Agent, Memory, Website)
│   ├── components/         # React Components (Chat, KiroMascot, Settings)
│   ├── globals.css         # Tailwind & Global Styles
│   └── layout.tsx          # Root Layout
├── lib/                    # Core Logic & Utilities
│   ├── agent.ts            # LangChain/LangGraph Agent Definition
│   ├── crawler.ts          # Playwright Web Crawler
│   ├── imageProcessing.ts  # OpenCV.js Vision Tools
│   ├── vectorMemory.ts     # Conversation Persistence
│   ├── customMemory.ts     # User Fact Management
│   └── tools.ts            # Tool Definitions (Search, Calculator, etc.)
├── public/                 # Static Assets & Demo Images
├── .env.example            # Environment Variable Template
├── SYSTEM.md               # Deep Technical Reference
└── package.json            # Dependencies & Scripts
```
- Frontend: Next.js 16, TypeScript, Tailwind CSS 4, Framer Motion
- AI Core: LangChain, LangGraph, Ollama
- Database: Qdrant (Vector Search)
- Scraping: Playwright, Cheerio
- Vision: OpenCV.js
- Node.js (v20.x+)
- Ollama: installed and running locally
- Docker: recommended for running Qdrant
```bash
ollama pull qwen2.5:1.5b
ollama pull nomic-embed-text
```

```bash
docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant
```

```bash
npm install
npx playwright install chromium
cp .env.example .env.local
```
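A `.env.local` based on the template would typically point the app at the local services started above. The variable names below are assumptions for illustration; the authoritative keys live in `.env.example`:

```env
# Hypothetical variable names — confirm against .env.example
OLLAMA_BASE_URL=http://localhost:11434
QDRANT_URL=http://localhost:6333
```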
```bash
npm run dev
```

Built with ❤️ by the AI Community.


