OpenCode is an open-source AI coding agent that runs in your terminal (and optionally via a desktop app and editor extensions) to help you understand, change, and ship code in real repositories. You launch it with opencode, which opens an interactive TUI where you describe a task in plain English, attach files, and iterate until you're happy with the result. The core workflow is "chat + tools": OpenCode reads project context, proposes edits, and can guide you through applying changes in a controlled way. It's provider-agnostic, meaning the tool itself is not tied to one model vendor; you connect the providers you want, then pick a default model per user or per project. The mainline OpenCode experience centers on the opencode.ai docs and the anomalyco/opencode repository, which provide an install script, a JSON/JSONC config system, and interactive commands like /connect and /models.
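For concreteness, a first session might look like the following on a Unix-like system. This is a sketch based on the install script and slash-commands mentioned above; verify the exact URL and command names against the current opencode.ai docs before running anything:

```bash
# Install the CLI (script URL per the opencode.ai docs; inspect before piping to a shell)
curl -fsSL https://opencode.ai/install | bash

# Start an interactive session from the repo you care about,
# so OpenCode picks up your actual project context
cd ~/code/my-project
opencode

# Inside the TUI, slash-commands manage providers and models, e.g.:
#   /connect   - authenticate a model provider
#   /models    - list and switch between available models
```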
Under the hood, OpenCode is structured like a developer tool rather than a chatbot: it has a configuration hierarchy, a local data directory for credentials and logs, and features aimed at day-to-day coding workflows. The config is explicit: you can set a global default model, override it for a specific repo, and even accept organization defaults via a remote .well-known/opencode endpoint. That makes it easier to standardize behavior across a team without locking everyone into an identical setup. OpenCode also emphasizes practical integration points: CLI automation (opencode run …), model switching, and a "start in the repo you care about" flow that keeps prompts grounded in your actual codebase instead of generic examples. In real use, you might ask it to trace a bug from logs to code paths, refactor a module boundary, generate tests, or explain an unfamiliar subsystem, then review and apply the changes using your normal Git workflow.
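To illustrate that hierarchy, here is a sketch of the two config layers. The opencode.json file name, the $schema URL, and the provider/model identifier format follow the opencode.ai config docs, but the model IDs below are placeholders; check the published schema for your version:

```jsonc
// ~/.config/opencode/opencode.json — global default for your user
{
  "$schema": "https://opencode.ai/config.json",
  "model": "anthropic/claude-sonnet-4" // placeholder provider/model id
}

// ./opencode.json at a repo root — overrides the global default for that project
{
  "$schema": "https://opencode.ai/config.json",
  "model": "openai/gpt-4.1" // this project pins a different model
}
```

The same resolution order applies to non-interactive runs, so a script invoking opencode run "summarize the failing tests" from a project root would pick up that project's model.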
If you want to stretch OpenCode beyond “today’s coding session” into “project memory,” the natural extension is retrieval: indexing architectural notes, ADRs, past incidents, or key code snippets so the agent can pull relevant context on demand. OpenCode doesn’t require a vector database to be useful, but vector retrieval is a good fit when your knowledge base is large and you want fast semantic lookup instead of grepping files. A common pattern is to embed your docs or decision records and store them in a vector database such as Milvus or a managed option like Zilliz Cloud, then have a small wrapper script (or a custom OpenCode command) fetch top-k context and attach it to your prompt. That keeps OpenCode focused on coding and orchestration, while Milvus/Zilliz Cloud handle scalable similarity search when your “memory” grows beyond what fits comfortably in a single prompt.
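Here is a minimal sketch of that wrapper, assuming pymilvus with the bundled Milvus Lite backend and sentence-transformers for embeddings. The collection name, embedding model, and the idea of shelling out to opencode run are illustrative choices, not part of OpenCode itself:

```python
"""Fetch top-k project memory from Milvus and feed it to OpenCode as prompt context."""
import subprocess

from pymilvus import MilvusClient
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings
client = MilvusClient("project_memory.db")  # Milvus Lite local file; point at a server or Zilliz Cloud URI at scale


def index_notes(notes: list[str]) -> None:
    """Embed ADRs, incident write-ups, etc. and store them once."""
    if not client.has_collection("memory"):
        client.create_collection(collection_name="memory", dimension=384)
    data = [
        {"id": i, "vector": encoder.encode(text).tolist(), "text": text}
        for i, text in enumerate(notes)
    ]
    client.insert(collection_name="memory", data=data)


def ask_with_context(task: str, k: int = 5) -> None:
    """Retrieve the k most relevant notes and prepend them to the prompt."""
    hits = client.search(
        collection_name="memory",
        data=[encoder.encode(task).tolist()],
        limit=k,
        output_fields=["text"],
    )[0]
    context = "\n---\n".join(hit["entity"]["text"] for hit in hits)
    prompt = f"Relevant project notes:\n{context}\n\nTask: {task}"
    # Hand the enriched prompt to OpenCode's non-interactive mode
    subprocess.run(["opencode", "run", prompt], check=True)


if __name__ == "__main__":
    index_notes(["ADR-007: the auth service owns session tokens; the gateway only validates JWTs."])
    ask_with_context("Where should refresh-token rotation live?")
```

The division of labor is the point: Milvus (or Zilliz Cloud) answers "what is relevant?" with a fast similarity search, and OpenCode answers "what should we do about it?" with the retrieved notes already in front of it.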