- Introduction
- Routes
- Authentication & Security
- Observability & Distributed Tracing
- Async Processing with RabbitMQ
- Caching (Redis)
- User Preferences
- AI Recipe Assistant
- Analytics
- Database Transactional Outbox Pattern
- Database Scaling
- Automated Testing & CI
- Project setup
- Environment variables
- Compile and run the project
- Run tests
- Deployment
- License
SuperChef is an AI-powered chef assistant designed to analyze existing recipes, suggest meaningful improvements, and help you create better dishes using your current ingredients.
It works on top of your current database, providing practical, cooking-focused recommendations rather than generic advice.
TLDR features:
- Real world backend concerns
- Async workflows
- Security best practices
- Clean NestJS architecture
- Pragmatic use of message queue
All API endpoints are documented using Swagger (OpenAPI).
Once the application is running, the interactive API documentation is available at:
GET /apis
The Swagger UI provides request/response schemas, parameters, and example payloads for each endpoint.
Below is a high level overview of the available routes:
| Verb | Resource | Description | Scope | Role Access |
|---|---|---|---|---|
| POST | /auth/login | Superchef sign in | Public | Admin, Viewer |
| POST | /auth/refresh | Token refresh | Protected | Admin, Viewer |
| POST | /auth/logout | Sign out | Protected | Admin, Viewer |
| GET | /ingredients | Get ingredients list | Protected | Admin, Viewer |
| GET | /ingredients/:id | Get a single ingredient | Protected | Admin, Viewer |
| POST | /ingredients | Create an ingredient | Protected | Admin, Viewer |
| PUT | /ingredients/:id | Update an ingredient | Protected | Admin, Viewer |
| DELETE | /ingredients/:id | Delete ingredient | Protected | Admin, Viewer |
| GET | /recipes | Get the recipes list | Protected | Admin, Viewer |
| GET | /recipes/:id | Get a single recipe | Protected | Admin, Viewer |
| POST | /recipes | Create a recipe | Protected | Admin, Viewer |
| PUT | /recipes/:id | Update a recipe | Protected | Admin, Viewer |
| DELETE | /recipes/:id | Delete a recipe | Protected | Admin, Viewer |
| GET | /users | Get users list | Protected | Admin |
| GET | /users/:id | Get a single user | Protected | Admin |
| POST | /users | Create a user | Protected | Admin |
| PUT | /users/:id | Update a user | Protected | Admin |
| DELETE | /users/:id | Delete a user | Protected | Admin |
| POST | /chat | Send a message to the SuperChef agent | Protected | Admin |
| GET | /analytics/top-recipes | Get the top ten most improved recipes | Protected | Admin, Viewer |
- Stateless authentication using JWT access tokens
- Tokens are issued on login and required for protected routes
- Designed to be compatible with API clients and frontends.
- Refresh tokens enable token rotation and secure session renewal without re-authentication.
- Users can have one or more roles
- Example roles:
- admin
- viewer
RBAC is applied at the route level, ensuring fine-grained authorization.
- Global authentication guard ensures all protected routes require a valid JWT.
- Public routes are explicitly marked.
- Authorization logic is separated from controllers.
- Built-in rate limiter to protect the API from abuse.
- Prevents excessive requests to sensitive endpoints
- Configurable limits per route or globally.
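The route-level role check can be sketched as a small pure function. This is an illustrative sketch, not the project's actual guard; in NestJS this logic would live inside a guard that reads the required roles from route metadata, and all names here are hypothetical.

```typescript
// Minimal sketch of a route-level RBAC check (hypothetical names).
type Role = 'admin' | 'viewer';

interface AuthenticatedUser {
  id: string;
  roles: Role[];
}

// Returns true when the user holds at least one of the roles the route requires.
function canActivate(user: AuthenticatedUser, requiredRoles: Role[]): boolean {
  if (requiredRoles.length === 0) return true; // route declares no role restriction
  return requiredRoles.some((role) => user.roles.includes(role));
}

const viewer: AuthenticatedUser = { id: 'u1', roles: ['viewer'] };
console.log(canActivate(viewer, ['admin', 'viewer'])); // true
console.log(canActivate(viewer, ['admin'])); // false
```

Keeping the check in one function is what lets the authorization logic stay out of the controllers.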
This project implements a high-level observability pattern using OpenTelemetry and Honeycomb. Instead of traditional isolated logs, we use Distributed Tracing to correlate every log entry with a specific request flow.
- Log Correlation: Every log entry is automatically enriched with a `traceId` and `spanId`, allowing for end-to-end debugging.
- Span Events: Application logs are pushed to Honeycomb as "Span Events," providing a millisecond-accurate timeline of events within a request.
- Automatic Context Propagation: Tracing context is maintained across asynchronous boundaries and distributed systems (e.g., RabbitMQ).
- Custom Telemetry Logger: A specialized logger extends the NestJS `ConsoleLogger` to handle tracing logic without polluting the business logic.
The system is designed to be transparent for developers. Simply use the standard NestJS Logger service:
```typescript
private readonly logger = new Logger(AuthService.name);

async handle() {
  this.logger.log('User logged in'); // Automatically appears in Honeycomb's trace timeline
}
```

Traces, errors, and performance metrics are available in Honeycomb. Search by `traceId` to see the full "waterfall" view of any operation.
- RabbitMQ is used to handle async workflows
- Examples:
- User registration triggers a welcome email
- Extensible to notifications
- API publishes domain events
- Workers consume and process them independently
- Designed to be monolith-friendly, without premature microservices.
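The publish/consume split above can be sketched with an in-memory stand-in for RabbitMQ. The broker class, topic, and handler names below are illustrative only, not the project's actual code:

```typescript
// Illustrative sketch: API publishes domain events, workers consume them
// independently (in-memory stand-in for RabbitMQ; names are hypothetical).
interface DomainEvent {
  topic: string;
  payload: Record<string, unknown>;
}

type Handler = (event: DomainEvent) => void;

class InMemoryBroker {
  private handlers = new Map<string, Handler[]>();

  // Worker side: subscribe a handler to a topic.
  subscribe(topic: string, handler: Handler): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  // API side: publish a domain event; consumers process it on their own.
  publish(event: DomainEvent): void {
    for (const handler of this.handlers.get(event.topic) ?? []) {
      handler(event);
    }
  }
}

const broker = new InMemoryBroker();
const sentEmails: string[] = [];

broker.subscribe('user.registered', (event) => {
  sentEmails.push(`welcome:${event.payload.email}`);
});

broker.publish({ topic: 'user.registered', payload: { email: 'a@b.com' } });
console.log(sentEmails); // [ 'welcome:a@b.com' ]
```

The API never knows who consumes the event, which is what keeps the workflow monolith-friendly and extensible to new consumers such as notifications.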
Superchef uses Redis as an in-memory cache to reduce latency and decrease load on the primary database.
The cache is applied to read-heavy endpoints, which keeps PostgreSQL as the single source of truth while improving response times for frequent reads.
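The read path follows the classic cache-aside shape. Below is a minimal sketch with a `Map` standing in for Redis and a stub loader standing in for the PostgreSQL query; the function names and TTL are illustrative, not the project's actual code:

```typescript
// Cache-aside sketch for a read-heavy endpoint (illustrative names).
const cache = new Map<string, { value: string; expiresAt: number }>();
const TTL_MS = 60_000; // hypothetical TTL

async function loadRecipeFromDb(id: string): Promise<string> {
  return `recipe-${id}`; // stub for the real PostgreSQL query
}

async function getRecipe(id: string): Promise<string> {
  const hit = cache.get(id);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await loadRecipeFromDb(id); // miss: read from source of truth
  cache.set(id, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```

Because the database stays authoritative, an expired or evicted entry is simply repopulated on the next read.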
Each user can configure dietary preferences that are stored as a JSON object inside the user table. Supported fields:
```typescript
diet: "none" | "vegetarian" | "vegan" | "omnivore"
allergies: string[]
```
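A type guard like the following could validate the preferences JSON before persisting it. This is a hedged sketch; the guard's name and placement are hypothetical, not part of the project:

```typescript
// Sketch of validating the dietary-preferences JSON (hypothetical helper).
const DIETS = ['none', 'vegetarian', 'vegan', 'omnivore'] as const;
type Diet = (typeof DIETS)[number];

interface UserPreferences {
  diet: Diet;
  allergies: string[];
}

// Narrow an unknown JSON value to UserPreferences.
function isUserPreferences(input: unknown): input is UserPreferences {
  if (typeof input !== 'object' || input === null) return false;
  const candidate = input as Record<string, unknown>;
  return (
    DIETS.includes(candidate.diet as Diet) &&
    Array.isArray(candidate.allergies) &&
    candidate.allergies.every((a) => typeof a === 'string')
  );
}
```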
Superchef includes an AI-powered assistant, exposed via the /chat endpoint, that helps users improve existing recipes by suggesting variations, optimizations, or substitutions based on natural language prompts.
The assistant is implemented as a backend agent powered by OpenAI and orchestrated server side.
All suggestions are generated in the context of a real recipe stored in the database.
This module handles the processing and delivery of recipe popularity metrics using a decoupled, event-driven architecture. By offloading analytics from the main API, we ensure high performance and system resilience.
The system implements separation of concerns by decoupling read and write operations through an event bus:
- Ingestion: Every time a recipe is improved through the AI agent, a `recipe.improvement` event is published to Kafka.
- Processing: The analytics microservice reads this event and performs an atomic increment in Redis.
- Consumption: The API retrieves the ranking from the analytics microservice through an RPC-like request, fetching the pre-aggregated data from Redis.
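The processing and consumption steps amount to an increment-and-rank over a Redis sorted set (ZINCRBY / ZREVRANGE). A minimal sketch, with a `Map` standing in for Redis and illustrative function names:

```typescript
// Sketch of the analytics aggregation (Map as a stand-in for a Redis
// sorted set; names are illustrative).
const improvementCounts = new Map<string, number>();

// Processing: consume a recipe.improvement event and increment the counter.
function onRecipeImproved(recipeId: string): void {
  improvementCounts.set(recipeId, (improvementCounts.get(recipeId) ?? 0) + 1);
}

// Consumption: return the n most improved recipe ids with their counts.
function topRecipes(n: number): Array<{ id: string; count: number }> {
  return [...improvementCounts.entries()]
    .map(([id, count]) => ({ id, count }))
    .sort((a, b) => b.count - a.count)
    .slice(0, n);
}
```

Because the ranking is pre-aggregated at write time, serving the top-N list is cheap and never touches the primary database.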
For a better understanding, the detailed process is shown in the image below:
GET /analytics/top-recipes
- Retrieves the top ten most improved recipes.
Sample response:
```json
[
  { "id": "uuid-101", "name": "Classic Lasagna", "count": 245 },
  { "id": "uuid-202", "name": "Spicy Ramen", "count": 189 }
]
```

When a user is created in the system, two things happen:
- A new record is created in the users table.
- A welcome email is dispatched to the user's inbox.

On its own, this could lead to data inconsistencies: the user might be created successfully in our database, but if the message broker goes down at precisely that moment, the message is never delivered.
How do we mitigate this? By adding a transactional outbox pattern. This is how it works:
- There's a table holding the outbox events:
```prisma
model OutboxEvent {
  id        String       @id @default(uuid())
  topic     String
  payload   Json
  status    OutboxStatus @default(PENDING)
  error     String?
  attempts  Int          @default(0)
  createdAt DateTime     @default(now()) @map("created_at")
  updatedAt DateTime     @default(now()) @map("updated_at")

  @@index([status, createdAt])
  @@map("outbox_event")
}
```

Creating the user is done within a single transaction that updates both the user table and the outbox_event table.
- A cron job checks the outbox_event table every 5 seconds.
- If there's a `PENDING` event, it tries to send it to the message broker so it can be processed.
- If the processing fails and the third attempt hasn't been reached, the status remains `PENDING` and the attempts count is bumped.
- If the processing fails on the third attempt, the status is updated to `FAILED`.
You can check the entire flow in the image below.
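The dispatcher's retry logic can be sketched as a pure function over an outbox row. The types and function names below are illustrative, not the project's actual code:

```typescript
// Sketch of the outbox dispatcher's retry logic (illustrative names).
const MAX_ATTEMPTS = 3;

interface OutboxRow {
  id: string;
  status: 'PENDING' | 'SENT' | 'FAILED';
  attempts: number;
  error?: string;
}

// Called by the cron job for each PENDING row: try to publish the event.
function dispatch(row: OutboxRow, publish: () => void): OutboxRow {
  try {
    publish();
    return { ...row, status: 'SENT' };
  } catch (err) {
    const attempts = row.attempts + 1;
    return {
      ...row,
      attempts,
      error: String(err),
      // Keep retrying until the third attempt, then give up.
      status: attempts >= MAX_ATTEMPTS ? 'FAILED' : 'PENDING',
    };
  }
}
```

Keeping the state transition pure makes the retry policy easy to test independently of the broker and the database.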
As a first measure to horizontally scale the persistence layer, two PostgreSQL instances are present: the primary node and a replica, separating read connections from write connections.
This was easy to implement with Prisma through the `extension-read-replicas` extension.
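The routing rule itself is simple: reads go to the replica, writes go to the primary. A minimal sketch with stub clients (the project delegates this to the Prisma extension, so the names below are illustrative only):

```typescript
// Sketch of primary/replica routing with stub clients (illustrative names).
interface SqlClient {
  query(sql: string): string;
}

const primary: SqlClient = { query: (sql) => `primary:${sql}` };
const replica: SqlClient = { query: (sql) => `replica:${sql}` };

// Route SELECTs to the replica and everything else to the primary node.
function route(sql: string): string {
  const isRead = /^\s*select/i.test(sql);
  return (isRead ? replica : primary).query(sql);
}

console.log(route('SELECT * FROM recipes')); // replica:SELECT * FROM recipes
console.log(route('INSERT INTO recipes ...')); // primary:INSERT INTO recipes ...
```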
The project implements robust automated E2E tests using Jest and Prisma.
To avoid affecting the real database, the CI/CD pipeline in GitHub orchestrates a dedicated testing environment using Docker service containers:
- Every PR triggers a workflow that spins up a dedicated Postgres 15 container.
- The pipeline automatically handles Prisma migrations and database seeding before executing tests.
- Implements pg_isready checks to ensure database availability before the test suite starts, preventing race conditions.
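The service-container setup described above could look roughly like this in a GitHub Actions workflow. This is a hedged sketch: the job names, scripts, and credentials are assumptions, not the repository's actual workflow file.

```yaml
# Illustrative workflow fragment (names and scripts are assumptions).
jobs:
  e2e:
    runs-on: ubuntu-latest
    env:
      DATABASE_URL: postgresql://postgres:postgres@localhost:5432/postgres
    services:
      postgres:
        image: postgres:15          # dedicated Postgres 15 container per run
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-                 # pg_isready health check avoids race conditions
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npx prisma migrate deploy   # migrations before the suite starts
      - run: npx prisma db seed          # seeding before the suite starts
      - run: npm run test:e2e
```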
Superchef integrates with Stripe for subscription management, allowing users to subscribe to a basic plan and access enhanced features.
- Checkout Session Creation: Create Stripe Checkout sessions for subscription purchases
- Webhook Processing: Handle Stripe webhook events (checkout.session.completed)
- Subscription Management: Automatically update subscription status and billing periods
- Customer Management: Create and manage Stripe customers linked to user accounts
| Event | Description | Action |
|---|---|---|
| `checkout.session.completed` | Checkout completed successfully | Updates subscription status, billing period, and user access |
| `customer.subscription.deleted` | Customer unsubscribed | Updates subscription status, billing period, and user access |
| `invoice.paid` | Invoice paid successfully | Updates subscription status, billing period, and user access |
| `invoice.payment_failed` | Invoice payment failed | Updates subscription status and notifies the user |
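Dispatching verified webhook events to their handlers can be sketched as a simple lookup table. The handler bodies and return values below are illustrative placeholders, not the project's actual implementation:

```typescript
// Sketch of routing verified Stripe webhook events (illustrative handlers).
type WebhookHandler = (data: Record<string, unknown>) => string;

const handlers: Record<string, WebhookHandler> = {
  'checkout.session.completed': () => 'subscription activated',
  'customer.subscription.deleted': () => 'subscription cancelled',
  'invoice.paid': () => 'billing period extended',
  'invoice.payment_failed': () => 'user notified of failed payment',
};

function handleWebhook(eventType: string, data: Record<string, unknown>): string {
  const handler = handlers[eventType];
  if (!handler) return 'ignored'; // unrecognized events are acknowledged but skipped
  return handler(data);
}
```

Returning quickly for unrecognized event types matters in practice, since Stripe expects a 2xx acknowledgement for every delivery.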
```bash
$ npm install
```

Create a `.env` file in the root of your project and add the following env vars:
```
DATABASE_URL=
OPENAI_API_KEY=
RESEND_API_KEY=
JWT_SECRET=
LOG_LEVEL=["log", "error", "warn", "debug", "verbose"]
REDIS_HOST=
REDIS_PORT=
REDIS_USERNAME=
REDIS_PASSWORD=
STRIPE_API_KEY=
HONEYCOMB_API_KEY=
```

```bash
# development
$ npm run start

# watch mode
$ npm run start:dev

# production mode
$ npm run start:prod
```

```bash
# unit tests
$ npm run test
```

When you're ready to deploy your NestJS application to production, there are some key steps you can take to ensure it runs as efficiently as possible. Check out the deployment documentation for more information.
If you are looking for a cloud-based platform to deploy your NestJS application, check out Mau, our official platform for deploying NestJS applications on AWS. Mau makes deployment straightforward and fast, requiring just a few simple steps:
```bash
$ npm install -g @nestjs/mau
$ mau deploy
```

With Mau, you can deploy your application in just a few clicks, allowing you to focus on building features rather than managing infrastructure.
This project is licensed under the MIT License - see the LICENSE file for details.



