This repository implements an HTTP API for submitting OpenQASM 3 (QASM3) quantum circuits for asynchronous execution and retrieving their results by task ID.
- API: FastAPI (HTTP layer, validation, task submission, result retrieval)
- Worker: Celery (background execution)
- Broker: RabbitMQ (durable queueing)
- Database: Postgres (system-of-record for task state + results)
- Quantum execution: Qiskit + Aer simulator
Why this design:
- Task integrity / “no lost tasks”:
  - RabbitMQ provides durable queueing + acknowledgements.
  - Postgres persists task records and results independently of Celery’s internal state.
- Clear separation of concerns:
  - Routers (`app/api/routers`) → Services (`app/services`) → Worker tasks (`app/worker`).
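The router → service → worker layering above can be sketched as follows. This is an illustrative sketch only: the function names and signatures here are hypothetical stand-ins, not the repository's actual identifiers, and the persistence and queueing backends are injected so the service stays testable.

```python
import uuid

def submit_task(qasm3: str, save_task, enqueue) -> dict:
    """Service-layer submit: persist a pending record, then enqueue.

    Illustrative sketch; in the real stack `save_task` would write a
    Postgres row and `enqueue` would publish to RabbitMQ via Celery.
    """
    task_id = str(uuid.uuid4())
    save_task(task_id, status="pending", qc=qasm3)
    enqueue(task_id)
    return {"task_id": task_id, "message": "Task submitted successfully."}

# A router would be a thin wrapper that validates the request body and
# delegates to submit_task(); the worker later consumes task_id from the queue.
```

Keeping orchestration in the service layer means the HTTP layer and the worker never talk to each other directly; both go through the persisted task record.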
`docker-compose.yml` defines:

- `api` (FastAPI on port 8000)
- `worker` (Celery worker)
- `rabbitmq` (broker; management UI on 15672)
- `postgres` (DB)
- `migrate` (one-shot Alembic migrations before the app starts)
- `tests` (integration tests; runs only with Compose profile `test`)
Build and run the full stack:

```bash
docker compose up --build
```

- API: http://localhost:8000
- Demo UI: http://localhost:8000/
- OpenAPI docs: http://localhost:8000/docs
- RabbitMQ UI: http://localhost:15672 (user/pass: app/app)
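The containers can take a few seconds to become ready after `docker compose up`. A small readiness helper like the following can be used before running the client examples. This is a sketch, not part of the repository: the default URL assumes the Compose port mapping above, and the probe is injectable for testing.

```python
import time
import urllib.error
import urllib.request

def wait_for_api(url: str = "http://localhost:8000/docs",
                 timeout_s: float = 60.0,
                 probe=None) -> bool:
    """Return True once probe(url) succeeds, False if timeout_s elapses.

    The default probe is a plain HTTP GET; any exception from it is
    treated as "not ready yet".
    """
    if probe is None:
        def probe(u):
            urllib.request.urlopen(u, timeout=2)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            probe(url)
            return True
        except (urllib.error.URLError, OSError):
            time.sleep(1.0)  # back off briefly before retrying
    return False
```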
Stop:

```bash
docker compose down
```

Reset DB/broker state (delete volumes):

```bash
docker compose down -v
```

Submit a QASM3 program:
```bash
python - <<'PY'
import json
import urllib.request

qasm3 = """OPENQASM 3;
include "stdgates.inc";
qubit[2] q;
bit[2] c;
h q[0];
cx q[0], q[1];
c[0] = measure q[0];
c[1] = measure q[1];
"""

payload = json.dumps({"qc": qasm3}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8000/tasks",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode("utf-8"))
PY
```

Response:
```json
{
  "task_id": "…",
  "message": "Task submitted successfully."
}
```

Possible responses when retrieving a task by ID:

- Pending: `{ "status": "pending", "message": "Task is still in progress." }`
- Completed: `{ "status": "completed", "result": { "00": 512, "11": 512 } }`
- Not found (HTTP 404): `{ "status": "error", "message": "Task not found." }`

The stack is configured via environment variables (Compose sets these by default).
- `DATABASE_URL` (default: `postgresql://app:app@postgres:5432/qc`)
- `CELERY_BROKER_URL` (default: `amqp://app:app@rabbitmq:5672//`)
- `CELERY_RESULT_BACKEND` (default: `rpc://`)
- `TASK_QUEUE_NAME` (default: `qc`)
- `NUM_SHOTS` (default: `1024`)
- `WORKER_MAX_RETRIES` (default: `3`)
- `WORKER_RETRY_DELAY_S` (default: `5`)
- `TASK_SOFT_TIME_LIMIT_S` (default: `60`)
- `TASK_TIME_LIMIT_S` (default: `90`)
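A minimal sketch of reading a few of these variables with their Compose defaults. The `Settings` helper itself is illustrative (the repository may structure its configuration differently); the variable names and defaults match the list above.

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Illustrative settings holder mirroring the environment variables above."""
    database_url: str
    celery_broker_url: str
    num_shots: int
    worker_max_retries: int

def load_settings(env=os.environ) -> Settings:
    # Fall back to the Compose defaults when a variable is unset.
    return Settings(
        database_url=env.get("DATABASE_URL", "postgresql://app:app@postgres:5432/qc"),
        celery_broker_url=env.get("CELERY_BROKER_URL", "amqp://app:app@rabbitmq:5672//"),
        num_shots=int(env.get("NUM_SHOTS", "1024")),
        worker_max_retries=int(env.get("WORKER_MAX_RETRIES", "3")),
    )
```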
For non-Docker local runs, you can create a `.env` file (see `env.example`) or export the variables.
Integration tests run against the live Compose stack:

```bash
docker compose --profile test up --build --abort-on-container-exit --exit-code-from tests tests
```

- At-least-once execution: Celery is configured with late acknowledgements (`task_acks_late=True`) and requeue-on-worker-loss (`task_reject_on_worker_lost=True`). If a worker crashes mid-task, the broker can re-deliver the message.
- Idempotency: the worker checks the persisted DB status; if a task is already `completed`, it returns the stored result rather than executing again.
- Source of truth: Postgres is used to reliably distinguish “pending” vs “not found” and to persist results beyond Celery backend retention.
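The idempotency guard described above can be sketched like this. The `tasks` dict stands in for the Postgres task table and `execute` for the Qiskit + Aer run; all names here are hypothetical, not the repository's actual code.

```python
def run_task(task_id: str, tasks: dict, execute) -> dict:
    """Illustrative worker body with an idempotency guard.

    Under at-least-once delivery (task_acks_late=True plus
    task_reject_on_worker_lost=True) the same message can arrive twice,
    e.g. after a worker crash, so a task that already completed must
    return its stored result instead of executing again.
    """
    record = tasks[task_id]
    if record["status"] == "completed":
        return record["result"]          # redelivery: skip re-execution
    result = execute(record["qc"])       # Qiskit + Aer in the real worker
    record["status"] = "completed"
    record["result"] = result
    return result
```

Because the status check and the result both live in Postgres, the guard holds even if Celery's own result backend has expired the entry.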