# Embedding Service
**Package:** `@nexusai/embedding-service`
**Location:** `packages/embedding-service`
**Deployed on:** Mini PC 1 (192.168.0.81)
**Port:** 3003
## Purpose
Converts text into vector embeddings for storage in Qdrant, keeping the embedding workload off the main inference node.
## Dependencies
- `express` — HTTP API
- `ollama` — Ollama client for embedding model
- `dotenv` — environment variable loading
- `@nexusai/shared` — shared utilities
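The `ollama` dependency talks to the Ollama REST API. As a rough sketch of what the core embedding call might look like — the function name, defaults, and error handling here are illustrative assumptions, not taken from this package's source (it goes through the `ollama` client rather than raw `fetch`):

```typescript
// Hypothetical sketch: request an embedding from Ollama's REST API.
// Requires Node 18+ for the global fetch.
async function embedText(
  text: string,
  baseUrl: string = "http://localhost:11434",
  model: string = "nomic-embed-text",
): Promise<number[]> {
  const res = await fetch(`${baseUrl}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // Ollama's embeddings endpoint takes the model name and a prompt.
    body: JSON.stringify({ model, prompt: text }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = (await res.json()) as { embedding: number[] };
  return data.embedding;
}
```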
## Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| PORT | No | 3003 | Port to listen on |
| OLLAMA_URL | No | http://localhost:11434 | Ollama instance URL |
| EMBEDDING_MODEL | No | nomic-embed-text | Ollama embedding model to use |
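An example `.env` for this node might look like the following. All values simply restate the documented defaults; `OLLAMA_URL` pointing at localhost assumes Ollama runs on the same machine as the service.

```
# .env — values shown are the documented defaults
PORT=3003
OLLAMA_URL=http://localhost:11434
EMBEDDING_MODEL=nomic-embed-text
```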
## Endpoints
| Method | Path | Description |
|---|---|---|
| GET | /health | Service health check |
> Further endpoints will be documented as the service is built out.
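Since only `/health` exists so far, here is a hypothetical sketch of its handler. The response body shape (`status`, `service`, `uptimeSeconds`) is an assumption — the endpoint's actual output is not yet documented:

```typescript
// Hypothetical /health response — field names are illustrative.
type HealthResponse = { status: "ok"; service: string; uptimeSeconds: number };

function healthCheck(): HealthResponse {
  return {
    status: "ok",
    service: "embedding-service",
    // process.uptime() reports seconds since the Node process started.
    uptimeSeconds: Math.floor(process.uptime()),
  };
}
```

With express, this could be wired up as `app.get('/health', (_req, res) => res.json(healthCheck()))`.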