
A comprehensive, microservice-driven AI backend designed for flexibility, scalability, and real-world value

AI · Microservices · Backend · Architecture
🚀 Multi-modal AI processing platform

Quick Start Guide

Nexus AI is a microservice-based AI platform that combines embeddings, RAG, function calling, and content analysis capabilities.

Prerequisites

  • Docker & Docker Compose
  • Bun (JavaScript runtime)
  • Ollama (for AI models)
  • NVIDIA GPU (recommended)

Setup & Run

# 1. Install dependencies and setup
pnpm run setup

# 2. Start all services (Docker)
pnpm run start

# OR run in development mode
pnpm run dev

Architecture

  • API Gateway (3000) - Unified interface
  • Embedding Service (3001) - Vector embeddings
  • RAG Service (3002) - Retrieval Augmented Generation
  • Function Calling Service (3003) - Tool execution
  • Content Analysis Service (3004) - Content analysis
  • Model Management Service (3005) - Model testing
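
To make the split concrete, here is a minimal TypeScript sketch of how the gateway could map requests onto the downstream services listed above. The ports come from that list; the routing helper and its names are assumptions for illustration, not the project's actual gateway code.

// Service map mirroring the architecture list above.
const services = {
  embedding: "http://localhost:3001",
  rag: "http://localhost:3002",
  functionCalling: "http://localhost:3003",
  contentAnalysis: "http://localhost:3004",
  modelManagement: "http://localhost:3005",
} as const;

type ServiceName = keyof typeof services;

// Hypothetical helper: forward a JSON body to one service and return its response.
async function forwardTo(service: ServiceName, path: string, body: unknown) {
  const res = await fetch(`${services[service]}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`${service} responded with ${res.status}`);
  return res.json();
}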

Key Endpoints

  • Main API: http://localhost:3000/api/v1/ai/process
  • Health Check: http://localhost:3000/health
  • API Docs: http://localhost:3000/api/v1/docs
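
As a quick smoke test from TypeScript: hit the health check, then post to the main endpoint. The request body fields (task, input) are placeholders for illustration; the real schema lives at the API docs URL above.

const BASE = "http://localhost:3000";

// 1. Verify the gateway is up before sending work to it.
const health = await fetch(`${BASE}/health`);
console.log("health:", health.status);

// 2. Send a request to the unified processing endpoint.
//    The body shape here is an assumption; check /api/v1/docs for the actual contract.
const res = await fetch(`${BASE}/api/v1/ai/process`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ task: "embedding", input: "hello world" }),
});
console.log(await res.json());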

Development Commands

# Run individual services
pnpm run dev:embedding
pnpm run dev:rag
pnpm run dev:function
pnpm run dev:content
pnpm run dev:model

# Stop services
pnpm run stop

The setup script automatically installs dependencies, pulls required Ollama models, and creates configuration files. The platform runs entirely locally with Docker containers for Ollama, ChromaDB, and Redis.
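
For reference, the sketch below shows the kind of work such a setup step typically does, written in TypeScript for Bun or Node. The model names and .env contents are placeholders, not the project's actual configuration.

import { execSync } from "node:child_process";
import { existsSync, writeFileSync } from "node:fs";

// Hypothetical model list; substitute whatever models the services expect.
const MODELS = ["llama3", "nomic-embed-text"];

// Pull each required Ollama model; ollama skips layers it already has.
for (const model of MODELS) {
  execSync(`ollama pull ${model}`, { stdio: "inherit" });
}

// Write a default environment file on first run, pointing at the local containers.
if (!existsSync(".env")) {
  writeFileSync(
    ".env",
    "OLLAMA_URL=http://localhost:11434\nCHROMA_URL=http://localhost:8000\nREDIS_URL=redis://localhost:6379\n"
  );
}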