Node Neo exposes an OpenAI-compatible HTTP API on your computer so tools like Zed, LangChain, or custom scripts can send chat, completions, and embeddings through the same Morpheus sessions as the app — with your wallet and API keys staying local.
Internally, Node Neo uses the embedded Morpheus proxy-router SDK for inference.
The AI Gateway is a small HTTP server in front of that SDK that speaks the same JSON shape as OpenAI's API: /v1/chat/completions, /v1/models, and related routes.
External apps point their “base URL” at Node Neo instead of OpenAI.
The gateway does not replace the Morpheus network or run models offline — it routes requests through your wallet's on-chain sessions on Base, just like chat in the UI. API-driven chats can appear in the app as conversations sourced from the API.
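Because the gateway speaks the OpenAI JSON shape, any OpenAI-style client can target it just by swapping the base URL and key. A minimal sketch using only Python's standard library; the port, key, and model id below are placeholders you'd replace with your own values:

```python
import json
import urllib.request

# Placeholders: substitute your gateway port, API key, and a model id
# returned by GET /v1/models.
BASE_URL = "http://127.0.0.1:8083/v1"
API_KEY = "sk-your-key-here"


def build_chat_request(messages, model):
    """Build an OpenAI-style chat completion request aimed at the local gateway."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + API_KEY,
        },
        method="POST",
    )


req = build_chat_request(
    [{"role": "user", "content": "Hello"}], model="your-model-id"
)
# urllib.request.urlopen(req) would send it to a running gateway.
```

The same request works from any OpenAI SDK that accepts a custom base URL; only the host, port, and key differ.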
Platform: In the shipping app, AI Gateway controls appear under Expert Mode on desktop platforms (macOS today; Linux and Windows builds follow the same Expert Mode layout when available). Mobile builds focus on consumer chat and do not expose this server.
The Expert screen lets you choose the bind address and port (for example 127.0.0.1:8083 or 0.0.0.0:8083).
The Expert screen also shows a base URL hint for OpenAI-style clients (for example http://127.0.0.1:8083/v1) and reminds you to use an API key from the section below.
| Mode | Bind address | Who can connect | Typical use |
|---|---|---|---|
| Local only | 127.0.0.1 | Processes on this Mac only | Zed, Terminal scripts, or local agents on the same machine as Node Neo. |
| Network | 0.0.0.0 | Any device that can reach your machine on your LAN IP and port (same Wi-Fi / Ethernet segment) | Another laptop, a Linux box on your desk, or a teammate on the office subnet hitting your gateway. |
In Network mode, the Expert UI shows a base URL using your detected LAN IP when available so you can paste http://<lan-ip>:<port>/v1 into clients on other hosts.
You must still supply a valid Bearer API key; the gateway does not accept unauthenticated requests on those routes.
Firewall and router rules are outside Node Neo: ensure your OS allows inbound TCP on the chosen port when using Network mode.
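The two bind modes above determine which host a client should put in its base URL. A small illustrative sketch of that mapping (client_base_url is a hypothetical helper for this doc, not part of Node Neo):

```python
def client_base_url(bind_addr, port, lan_ip=None):
    """Return the base URL an OpenAI-style client should use.

    A 0.0.0.0 bind listens on every interface, so remote clients need the
    machine's LAN IP; a 127.0.0.1 bind is only reachable from the same host.
    """
    host = lan_ip if bind_addr == "0.0.0.0" and lan_ip else "127.0.0.1"
    return "http://{}:{}/v1".format(host, port)


print(client_base_url("127.0.0.1", 8083))                # http://127.0.0.1:8083/v1
print(client_base_url("0.0.0.0", 8083, "192.168.1.20"))  # http://192.168.1.20:8083/v1
```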
When you create a key, copy the sk-... secret immediately; the app stores only a hash and cannot show it again. Send Authorization: Bearer <your-key> on every gateway request except GET /health.

| Path | Auth | Notes |
|---|---|---|
| GET /health | None | Sanity check that the HTTP server is up. |
| GET /v1/models | Bearer | Lists Morpheus models (cached; suitable for filling editor dropdowns). |
| POST /v1/chat/completions | Bearer | OpenAI chat shape; streaming supported. Optional header X-Chat-Id ties multi-turn traffic to a stable external id. |
| POST /v1/completions | Bearer | Legacy completions path for tools that still call it. |
| POST /v1/embeddings | Bearer | OpenAI embeddings shape for clients that need embeddings. |
Replace 8083 and the token with your port and key.

```shell
curl -sS http://127.0.0.1:8083/health

curl -sS http://127.0.0.1:8083/v1/models \
  -H "Authorization: Bearer sk-your-key-here"
```
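Streamed chat completions arrive as OpenAI-style server-sent events. Assuming the standard `data:` framing with a `[DONE]` sentinel, the content deltas can be reassembled like this (an illustrative sketch, not Node Neo code):

```python
import json


def iter_stream_content(lines):
    """Yield content deltas from OpenAI-style SSE lines.

    Assumes the standard 'data: {json chunk}' framing terminated by
    'data: [DONE]'; non-data lines (comments, keep-alives) are skipped.
    """
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":
            return
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            yield delta["content"]


sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_content(sample)))  # prints Hello
```

In practice you would feed the function the decoded lines of the HTTP response body instead of the sample list.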
Session length for new on-chain sessions follows the same preference as the UI (Settings → session duration), with a floor enforced by the protocol.
Zed documents OpenAI-compatible providers under LLM Providers: you add a provider with a custom api_url and list the models Zed should show in the Agent panel.
Official reference: zed.dev/docs/ai/llm-providers (see the “OpenAI API Compatible” section).
1. Keep the gateway in Local only mode bound to 127.0.0.1.
2. Use the base URL http://127.0.0.1:<port>/v1; include the /v1 suffix.
3. Call GET /v1/models (with the curl above) and note the exact model id strings you want in Zed.
4. Add the provider to settings.json, adjusting provider label, model names, and URL to match your machine:

```json
{
  "language_models": {
    "openai_compatible": {
      "Node Neo": {
        "api_url": "http://127.0.0.1:8083/v1",
        "available_models": [
          {
            "name": "your-model-id-from-/v1/models",
            "display_name": "Morpheus (Node Neo)"
          }
        ]
      }
    }
  }
}
```
Zed expects API keys via the Agent UI or environment variables derived from the provider name; keys are not embedded in the JSON file. Follow the current Zed docs for the exact variable name pattern and keychain behavior.
For Zed on another machine, run Node Neo in Network mode and point api_url at http://<your-lan-ip>:<port>/v1, still using Bearer auth.
Some IDE features that accept a custom OpenAI base URL still route traffic through vendor infrastructure, which cannot reach a gateway bound to localhost on your machine.
For Cursor, the Node Neo repo ships an MCP (Model Context Protocol) server that talks to your local gateway over HTTP from a stdio subprocess instead.
Advanced setup lives in the repository (mcp-server/) and requires building the Node package, pointing NODENEO_GATEWAY_URL at your running gateway (must match your chosen port), and setting NODENEO_API_KEY.
See the project's llms.txt and MCP README when you are ready; details change with releases.
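Assuming you have built the Node package, the two environment variables named above can be set like this before launching the MCP server (the port and key are placeholders for your own values):

```shell
# Point the MCP server at your running gateway (must match the port
# chosen in Expert Mode) and supply one of your gateway API keys.
export NODENEO_GATEWAY_URL="http://127.0.0.1:8083"
export NODENEO_API_KEY="sk-your-key-here"
```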
- Base URLs use http://, not https://.
- If a model id stops working, re-check GET /v1/models; Morpheus rotates advertised models over time.
- More context: Support, Deep dive, or open an issue on GitHub.