Semantic context for your LLMs, generated automatically.
No more copying schemas. No manual documentation. Just accurate answers.
Website • Quickstart • Data Sources • Contributing
Databao Context Engine is a CLI tool that automatically generates governed semantic context from your databases, BI tools, documents, and spreadsheets.
Integrate it with any LLM to deliver accurate, context-aware answers without copying schemas or writing documentation by hand.
Your data sources → Context Engine → Unified semantic graph → Any LLM
| Feature | What it means for you |
|---|---|
| Auto-generated context | Extracts schemas, relationships, and semantics automatically |
| Runs locally | Your data never leaves your environment |
| MCP integration | Works with Claude Desktop, Cursor, and any MCP-compatible tool |
| Multiple sources | Databases, dbt projects, spreadsheets, documents |
| Built-in benchmarks | Measure and improve context quality over time |
| LLM agnostic | OpenAI, Anthropic, Ollama, Gemini: use any model |
| Governed & versioned | Track, version, and share context across your team |
| Dynamic or static | Serve context via MCP server or export as artifact |
Databao Context Engine is available on PyPI and can be installed with uv, pip, or another package manager.
**With uv:**

1. Install Databao Context Engine:

   ```shell
   uv tool install databao-context-engine
   ```

2. Add it to your PATH:

   ```shell
   uv tool update-shell
   ```

3. Verify the installation:

   ```shell
   dce --help
   ```

**With pip:**

1. Install Databao Context Engine:

   ```shell
   pip install databao-context-engine
   ```

2. Verify the installation:

   ```shell
   dce --help
   ```
| Provider | Configuration |
|---|---|
| Ollama | `languageModel: OLLAMA` (runs locally, free) |
| OpenAI | `languageModel: OPENAI` (requires an API key) |
| Anthropic | `languageModel: CLAUDE` (requires an API key) |
| Gemini | `languageModel: GEMINI` (requires an API key) |
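As a sketch, selecting a provider in your project configuration might look like the following. Only the `languageModel` values come from the table above; the file layout around this key is an assumption:

```yaml
# Illustrative fragment: choose a model provider for context generation
languageModel: OLLAMA    # runs locally, free, no API key needed
# languageModel: OPENAI  # or CLAUDE / GEMINI, each requiring an API key
```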
1. Create a new directory for your project and navigate to it:

   ```shell
   mkdir dce-project && cd dce-project
   ```

2. Initialize a new project:

   ```shell
   dce init
   ```

3. When prompted, agree to create a new data source. You can also use the `dce datasource add` command.

4. Provide the data source type and its name.

5. Open the config file that was created for you in your editor and fill in the connection details.

6. Repeat these steps for all data sources you want to include in your project.

7. If you have data in Markdown or text files, you can add them to the `dce/src/files` directory.

8. To build the context, run:

   ```shell
   dce build
   ```
Option A: Dynamic via MCP Server
Databao Context Engine exposes the context through a local MCP Server, so your agent can access the latest context at runtime.
1. In Claude Desktop, Cursor, or another MCP-compatible agent, add the following configuration to `claude_desktop_config.json`, `mcp.json`, or similar. Replace `dce-project/` with the path to your project directory:

   ```json
   {
     "mcpServers": {
       "dce": {
         "command": "dce",
         "args": ["mcp", "--project-dir", "dce-project/"]
       }
     }
   }
   ```

2. Save the file and restart your agent.

3. Open a new chat, select the `dce` server in the chat window, and ask questions related to your project context.
Option B: Static artifact
Even if you don't have Claude or Cursor installed on your local machine, you can still use the context built by Databao Context Engine by pasting it directly into your chat with an AI assistant.
1. Navigate to `dce-project/output/` and open the directory with the latest run.

2. Attach the `all_results.yaml` file to your chat with the AI assistant, or copy and paste its contents into the chat.
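If you script this step, a minimal sketch can pick the newest run automatically. `latest_context_file` is a hypothetical helper, not part of the Databao API; it assumes run directory names under `output/` (e.g. timestamps) sort chronologically:

```python
from pathlib import Path


def latest_context_file(project_dir: str) -> Path:
    """Return the all_results.yaml from the most recent run directory.

    Assumes run directory names sort chronologically (e.g. timestamps).
    """
    output_dir = Path(project_dir) / "output"
    runs = sorted(d for d in output_dir.iterdir() if d.is_dir())
    if not runs:
        raise FileNotFoundError(f"No runs found in {output_dir}")
    return runs[-1] / "all_results.yaml"


# Usage: print(latest_context_file("dce-project").read_text())
```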
```python
from pathlib import Path
import tempfile

from databao_context_engine import init_dce_project

# Initialise the project in an existing directory
project_manager = init_dce_project(Path(tempfile.mkdtemp()))
```

```python
from pathlib import Path

from databao_context_engine import DatabaoContextProjectManager

# Or use an existing project
project_manager = DatabaoContextProjectManager(project_dir=Path("path/to/project"))
```

```python
from databao_context_engine import (
    DatasourceConnectionStatus,
    DatasourceType,
)

# Create a new datasource
postgres_datasource_id = project_manager.create_datasource_config(
    DatasourceType(full_type="postgres"),
    datasource_name="my_postgres_datasource",
    config_content={
        "connection": {"host": "localhost", "user": "dev", "password": "pass"}
    },
).datasource.id

# Check that the connection to the datasource is valid
check_result = project_manager.check_datasource_connection()
assert len(check_result) == 1
assert check_result[0].datasource_id == postgres_datasource_id
assert check_result[0].connection_status == DatasourceConnectionStatus.VALID
```

```python
# Build the context for all configured datasources
build_result = project_manager.build_context()
assert len(build_result) == 1
assert build_result[0].datasource_id == postgres_datasource_id
assert build_result[0].datasource_type == DatasourceType(full_type="postgres")
assert build_result[0].context_file_path.is_file()
```

```python
# Switch to the engine if you're already using a project_manager
context_engine = project_manager.get_engine_for_project()

# Or directly create a context engine from the path to your DCE project
from pathlib import Path

from databao_context_engine import DatabaoContextEngine

context_engine = DatabaoContextEngine(project_dir=Path("path/to/project"))
```

```python
# Use the built context
all_built_contexts = context_engine.get_all_contexts()
assert len(all_built_contexts) == 1
assert all_built_contexts[0].datasource_id == postgres_datasource_id
print(all_built_contexts[0].context)
```

```python
# Run a vector similarity search
results = context_engine.search_context("my search query")
print(f"Found {len(results)} results for query")
print(
    "\n\n".join(
        f"{result.datasource_id}\n{result.context_result}" for result in results
    )
)
```

We'd love your help! Here's how to get involved:
- ⭐ Star this repo: it helps others find us!
- 🐛 Found a bug? Open an issue
- 💡 Have an idea? We're all ears: create a feature request
- 👍 Upvote issues you care about: helps us prioritize
- 🔧 Submit a PR
- 📖 Improve docs: typos, examples, tutorials, everything helps!

New to open source? No worries! We're friendly and happy to help you get started. 🌱
For more details, see CONTRIBUTING.
Apache 2.0: use it however you want. See the LICENSE file for details.
Like Databao Context Engine? Give us a ⭐, it means a lot!