
Fix blockchain integrity, optimize deduplication, and add issue caching #396

Open
RohanExploit wants to merge 4 commits into main from fix-blockchain-and-cache-optimization-8147028805788840111

Conversation


RohanExploit (Owner) commented Feb 16, 2026

This PR addresses several critical issues and optimizations:

  1. Blockchain Integrity: Previously, verification relied on dynamically querying the "previous" issue, which was fragile. Now, previous_integrity_hash is stored on each record, enabling robust verification; legacy data is supported via a fallback mechanism (a verification sketch follows this list).
  2. Duplicate Handling: Instead of discarding duplicate reports, they are now saved with status='duplicate' and linked via parent_issue_id. This preserves data for analytics while maintaining the existing frontend behavior (returning id=None for duplicates).
  3. Performance: Added a new issue_details_cache to backend/cache.py and implemented caching for the new GET /api/issues/{issue_id} endpoint.
  4. Missing Endpoint: Added GET /api/issues/{issue_id} which was missing but expected by the frontend/memory.
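For illustration, a minimal sketch of how verification can use the stored hash with a legacy fallback. The SHA-256 chaining format (description|category|previous hash) matches the create path quoted later in this review; the verify_issue_integrity helper itself is illustrative and not part of the diff:

import hashlib

from backend.models import Issue  # SQLAlchemy model extended by this PR


def compute_integrity_hash(description: str, category: str, prev_hash: str) -> str:
    # Same chaining scheme used in create_issue: description|category|previous hash
    return hashlib.sha256(f"{description}|{category}|{prev_hash}".encode()).hexdigest()


def verify_issue_integrity(issue: Issue, db) -> bool:
    # Preferred path: the previous hash is stored on the record itself (O(1), no extra query)
    prev_hash = issue.previous_integrity_hash
    if prev_hash is None:
        # Legacy fallback: older rows predate the new column, so fall back to
        # looking up the chronologically previous issue's hash
        prev_row = (
            db.query(Issue.integrity_hash)
            .filter(Issue.id < issue.id)
            .order_by(Issue.id.desc())
            .first()
        )
        prev_hash = prev_row[0] if prev_row and prev_row[0] else ""
    expected = compute_integrity_hash(issue.description, issue.category, prev_hash)
    return expected == issue.integrity_hash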

PR created automatically by Jules for task 8147028805788840111 started by @RohanExploit


Summary by cubic

Strengthens issue blockchain verification, speeds up reads with caching, fixes Postgres migrations, and stabilizes frontend builds.

  • Data Integrity

    • Store previous_integrity_hash on new issues; verification now uses the stored hash with a legacy fallback.
    • Save duplicates as status='duplicate' with parent_issue_id (indexed). Frontend still returns id=None on duplicates.
  • Performance, Migrations & Build

    • Add issue_details_cache (60s TTL, max 1000) and GET /api/issues/{issue_id}. Invalidate on upvote/verify/status changes; clear nearby/recent caches on create.
    • Use safe_migrate for PostgreSQL (per-step commit/rollback) to avoid aborted transactions; move vite/tailwindcss/postcss/autoprefixer to dependencies and regenerate lockfile to fix "vite: command not found" in production builds.

Written for commit 0af9212. Summary will update on new commits.

Summary by CodeRabbit

  • New Features

    • New endpoint to fetch issue details with optimized caching.
    • Duplicate detection now links reports to existing issues instead of creating separate ones.
    • Blockchain integrity checks now use prior-hash tracking for faster, more reliable tamper detection.
    • Background grievance processing for new issues and improved AI action gating.
  • Refactor

    • Centralized migration flow and schema updates to add provenance (previous hash) and parent-issue linkage.
    • Improved cache invalidation across create/update/verify flows.

- Added `previous_integrity_hash` and `parent_issue_id` to `Issue` model and migration.
- Updated `create_issue` to store duplicate issues with `status='duplicate'` and `parent_issue_id` for better tracking.
- Implemented `GET /api/issues/{issue_id}` with thread-safe caching (`issue_details_cache`).
- Fixed blockchain verification to use stored `previous_integrity_hash` (O(1) verification) with fallback for legacy data.
- Added cache invalidation for issue updates (`upvote`, `verify`, `status_update`); a minimal sketch of the pattern follows below.

Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>
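A minimal sketch of that invalidation pattern after a write, assuming issue_details_cache exposes invalidate(key) and uses the issue_{id} key format shown in the review comments further down; apply_upvote is an illustrative helper, not the PR's actual endpoint:

from fastapi.concurrency import run_in_threadpool

from backend.cache import issue_details_cache, nearby_issues_cache, recent_issues_cache


async def apply_upvote(issue_id: int, db):
    # ... increment the upvote count on the row (elided) ...
    await run_in_threadpool(db.commit)

    # Drop the cached detail entry so the next GET /api/issues/{issue_id}
    # reflects the new count instead of a stale value
    issue_details_cache.invalidate(f"issue_{issue_id}")

    # List-style caches are cleared wholesale because many keys may be affected
    recent_issues_cache.clear()
    nearby_issues_cache.clear()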
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

Copilot AI review requested due to automatic review settings February 16, 2026 10:58

netlify bot commented Feb 16, 2026

Deploy Preview for fixmybharat failed. Why did it fail? →

🔨 Latest commit: 0af9212
🔍 Latest deploy log: https://app.netlify.com/projects/fixmybharat/deploys/6992fd773f5f8600084ec0f2

@github-actions

🙏 Thank you for your contribution, @RohanExploit!

PR Details:

Quality Checklist:
Please ensure your PR meets the following criteria:

  • Code follows the project's style guidelines
  • Self-review of code completed
  • Code is commented where necessary
  • Documentation updated (if applicable)
  • No new warnings generated
  • Tests added/updated (if applicable)
  • All tests passing locally
  • No breaking changes to existing functionality

Review Process:

  1. Automated checks will run on your code
  2. A maintainer will review your changes
  3. Address any requested changes promptly
  4. Once approved, your PR will be merged! 🎉

Note: The maintainers will monitor code quality and ensure the overall project flow isn't broken.

@coderabbitai

coderabbitai bot commented Feb 16, 2026

📝 Walkthrough


Adds a global thread-safe issue_details_cache, extends the Issue model with previous_integrity_hash and parent_issue_id, applies corresponding DB migrations, refactors create_issue to always compute integrity and handle duplicates with parent linking, adds a cached get_issue endpoint, and ensures cache invalidation across write paths.

Changes

  • Cache (backend/cache.py): Adds a new module-level issue_details_cache (ThreadSafeCache, ttl=60s, max_size=1000).
  • Models & Migrations (backend/models.py, backend/init_db.py): Adds previous_integrity_hash (String, nullable) and parent_issue_id (Integer FK, indexed) to Issue; migrates the DB schema via consolidated safe_migrate calls and adds an index on parent_issue_id.
  • API / Router (backend/routers/issues.py): Refactors create_issue to always compute integrity using the previous hash, create duplicates with status="duplicate" and parent linkage, move the DB save to a threadpool, add a get_issue endpoint with caching, and invalidate issue_details_cache on relevant writes (upvote, verify, status updates).
  • Tests (tests/test_blockchain.py): Updates tests to set previous_integrity_hash on Issue instances to reflect chained integrity in test scenarios.
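For illustration, a hedged sketch of the cache addition and its read path; the ThreadSafeCache constructor keywords and the get/set accessors are inferred from this summary rather than shown in the diff, and the response payload is truncated:

# backend/cache.py (sketch): ThreadSafeCache is the class already defined in this module
issue_details_cache = ThreadSafeCache(ttl=60, max_size=1000)

# backend/routers/issues.py (illustrative read path for the new endpoint)
def get_issue_cached(issue_id: int, db):
    cache_key = f"issue_{issue_id}"
    cached = issue_details_cache.get(cache_key)
    if cached is not None:
        return cached  # served from cache within the 60s TTL
    issue = db.query(Issue).filter(Issue.id == issue_id).first()
    payload = {"id": issue.id, "status": issue.status, "upvotes": issue.upvotes}
    issue_details_cache.set(cache_key, payload)
    return payload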

Sequence Diagram

sequenceDiagram
    participant Client
    participant Router as issues.Router
    participant Blockchain as IntegrityCalc
    participant DB as Database
    participant Cache as CacheSystem

    Client->>Router: POST /create_issue(payload)
    Router->>Blockchain: fetch_previous_hash() / compute(prev_hash, description, category)
    Blockchain-->>Router: prev_hash, integrity_hash
    Router->>DB: query nearby issues
    DB-->>Router: nearby list
    alt no nearby
        Router->>DB: save Issue(integrity_hash, previous_integrity_hash, status="new")
        DB-->>Router: new_issue
        Router->>Cache: invalidate recent_issues_cache, nearby_issues_cache
        Router-->>Client: {id: issue_id, message: "new issue created"}
    else nearby found
        Router->>DB: save Issue(status="duplicate", parent_issue_id, previous_integrity_hash)
        DB-->>Router: duplicate_issue
        Router->>Cache: invalidate recent_issues_cache, nearby_issues_cache
        Router-->>Client: {id: null, message: "linked to existing issue"}
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Poem

🐇 I hop on keys and stamp my paw,

prev_hash linked, a clever law.
I cache the tale for sixty ticks,
New parents join the issue mix.
Hooray — the chain grows, snug and tight!

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: Docstring coverage is 60.00%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (3 passed)
  • Title check ✅ Passed: The title accurately captures the three main changes: blockchain integrity fixes, deduplication optimization, and issue caching. It directly corresponds to the substantial changes across cache.py, models.py, init_db.py, and routers/issues.py.
  • Merge Conflict Detection ✅ Passed: No merge conflicts detected when merging into main.
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
backend/routers/issues.py (2)

146-161: ⚠️ Potential issue | 🟠 Major

Stale cache: linked issue's detail cache not invalidated after duplicate-triggered upvote.

When a duplicate is detected, the closest existing issue is upvoted (lines 152–156), but issue_details_cache for that linked_issue_id is never invalidated. Subsequent GET /api/issues/{linked_issue_id} calls will return a stale upvote count until the TTL expires.

Add invalidation after the duplicate upvote:

🐛 Proposed fix
                 # Commit the upvote
                 await run_in_threadpool(db.commit)
 
+                # Invalidate cache for the upvoted issue
+                issue_details_cache.invalidate(f"issue_{linked_issue_id}")
+
                 logger.info(f"Spatial deduplication: Linked new report to existing issue {linked_issue_id}")

Also applies to: 228-240


228-240: ⚠️ Potential issue | 🟡 Minor

Cache not invalidated for duplicate issues.

When a duplicate is created, the recent_issues_cache and nearby_issues_cache are not cleared (the invalidation block at lines 236–238 is gated by new_issue.status != "duplicate"). While duplicates have status='duplicate' and probably won't appear in "open" filtered queries, the recent_issues endpoint has no status filter — so duplicates could appear there once the cache expires, but wouldn't show up while the old cache is still warm. This inconsistency is minor but could confuse debugging.

Consider moving cache invalidation outside the status != "duplicate" guard, or documenting why duplicates shouldn't invalidate list caches.

🤖 Fix all issues with AI agents
In `@backend/routers/issues.py`:
- Around line 168-177: Add a short comment above the block that computes
prev_hash/integrity_hash (the run_in_threadpool call that queries
Issue.integrity_hash and the subsequent prev_hash and integrity_hash
computation) noting the race condition: concurrent create_issue requests can
read the same latest integrity_hash before commits and thus produce identical
previous_integrity_hash values (a forked chain); mention this is a known
limitation unless writes are serialized (e.g., DB row/table locks, serializable
transactions, or a monotonic sequence enforced), so auditors are aware and you
can consider adding locking or transaction isolation in future.
- Around line 270-315: The get_issue route (function get_issue handling GET
"/api/issues/{issue_id}") is declared before static routes and will preemptively
match requests like "/api/issues/nearby" causing 422s; move the get_issue route
declaration so it appears after the static routes "/api/issues/nearby",
"/api/issues/user", "/api/issues/status", and "/api/issues/recent" in the file
so the static paths are registered first and will be matched before the
parameterized "/api/issues/{issue_id}" handler.
🧹 Nitpick comments (1)
backend/init_db.py (1)

127-147: Migration steps follow the existing pattern.

The new migration steps are consistent with the rest of the file. One minor note: lines 130 and 137 use print() while the index creation on line 144 uses logger.info(). This inconsistency exists elsewhere in the file too, but worth standardizing if you do a cleanup pass.

Comment on lines 168 to 177
# Fetch previous issue's integrity hash for blockchain chaining
# Optimization: Fetch only the last hash to maintain the chain with minimal overhead
prev_issue = await run_in_threadpool(
    lambda: db.query(Issue.integrity_hash).order_by(Issue.id.desc()).first()
)
prev_hash = prev_issue[0] if prev_issue and prev_issue[0] else ""

# Simple but effective SHA-256 chaining
hash_content = f"{description}|{category}|{prev_hash}"
integrity_hash = hashlib.sha256(hash_content.encode()).hexdigest()

⚠️ Potential issue | 🟡 Minor

Race condition: concurrent create_issue calls can fetch the same prev_hash.

Two concurrent requests can both read the same latest integrity_hash before either commits, resulting in two issues with identical previous_integrity_hash values — a forked chain. This is somewhat inherent to the design without serialized writes, but worth documenting as a known limitation, especially if blockchain integrity audits are performed on the full chain.

At minimum, consider adding a comment acknowledging this concurrency limitation.

🤖 Prompt for AI Agents
In `@backend/routers/issues.py` around lines 168 - 177, Add a short comment above
the block that computes prev_hash/integrity_hash (the run_in_threadpool call
that queries Issue.integrity_hash and the subsequent prev_hash and
integrity_hash computation) noting the race condition: concurrent create_issue
requests can read the same latest integrity_hash before commits and thus produce
identical previous_integrity_hash values (a forked chain); mention this is a
known limitation unless writes are serialized (e.g., DB row/table locks,
serializable transactions, or a monotonic sequence enforced), so auditors are
aware and you can consider adding locking or transaction isolation in future.
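One hedged way to close that gap on PostgreSQL is to serialize reads of the chain tip with a transaction-scoped advisory lock; this sketch is an option for a follow-up, not part of the PR (the lock id is arbitrary, and SQLite deployments would need a different mechanism):

from sqlalchemy import text

from backend.models import Issue


def fetch_prev_hash_serialized(db):
    # pg_advisory_xact_lock blocks a second transaction until the first commits,
    # so two concurrent create_issue calls cannot read the same chain tip.
    # The lock is released automatically at commit or rollback.
    db.execute(text("SELECT pg_advisory_xact_lock(:lock_id)"), {"lock_id": 814702})
    row = db.query(Issue.integrity_hash).order_by(Issue.id.desc()).first()
    return row[0] if row and row[0] else ""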

Contributor

Copilot AI left a comment


Pull request overview

This PR implements blockchain integrity improvements, duplicate issue tracking, and performance optimizations for the issue reporting system. The changes enhance data reliability by storing the previous hash directly on each issue record for O(1) blockchain verification, while preserving all duplicate reports for analytics purposes. A new caching layer for individual issue details is introduced along with the missing GET endpoint that the frontend expects.

Changes:

  • Blockchain integrity: Store previous_integrity_hash on each issue for efficient verification with fallback to legacy query method
  • Duplicate tracking: Save duplicate reports with status='duplicate' and parent_issue_id while maintaining existing API contract (returning id=None)
  • Performance: Add issue_details_cache and implement caching for the new issue details endpoint

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.

Summary per file:
  • backend/models.py: Add previous_integrity_hash and parent_issue_id columns to Issue model with a ForeignKey constraint
  • backend/init_db.py: Add migration scripts for the new columns and an index on parent_issue_id
  • backend/cache.py: Add new issue_details_cache instance with 60s TTL and 1000 max entries
  • backend/routers/issues.py: Implement blockchain hash storage, duplicate issue tracking, the new GET endpoint, and cache invalidation
  • tests/test_blockchain.py: Update tests to include previous_integrity_hash in test data


action_plan = Column(JSONEncodedDict, nullable=True)
integrity_hash = Column(String, nullable=True) # Blockchain integrity seal
previous_integrity_hash = Column(String, nullable=True)
parent_issue_id = Column(Integer, ForeignKey("issues.id"), nullable=True, index=True)

Copilot AI Feb 16, 2026


The ForeignKey constraint ForeignKey("issues.id") on parent_issue_id should specify ondelete behavior. Without an explicit ondelete parameter, if the parent issue is deleted, the database behavior depends on the backend configuration and could cause constraint violations or orphaned records. Consider adding ondelete="SET NULL" or ondelete="CASCADE" based on the desired behavior for duplicate tracking when parent issues are deleted.

Suggested change
-parent_issue_id = Column(Integer, ForeignKey("issues.id"), nullable=True, index=True)
+parent_issue_id = Column(Integer, ForeignKey("issues.id", ondelete="SET NULL"), nullable=True, index=True)

# Invalidate cache so new issue appears
try:
    recent_issues_cache.clear()
    nearby_issues_cache.clear()

Copilot AI Feb 16, 2026


The nearby_issues_cache is being cleared when a new non-duplicate issue is created, but this cache should also be invalidated when issues are updated (e.g., status changes, upvotes) since those changes affect the nearby issues query results. Consider whether this cache should have a shorter TTL or be invalidated more granularly based on geographic proximity to changed issues.

category="Road",
integrity_hash="invalidhash"
integrity_hash="invalidhash",
previous_integrity_hash=""

Copilot AI Feb 16, 2026


Tests should be updated to verify that duplicate issues are properly saved with status='duplicate' and parent_issue_id set correctly. The existing test_blockchain.py only tests the blockchain integrity fields but doesn't cover the new duplicate tracking functionality introduced in this PR. Consider adding tests that create a duplicate report and verify the database record is created with the correct status and parent_issue_id.
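A hedged sketch of such a test; the client/db_session fixtures, the POST path, and the payload shape are assumptions rather than code from this repository:

from backend.models import Issue

SAMPLE_REPORT = {  # hypothetical payload shape
    "description": "Pothole near the market",
    "category": "Road",
    "latitude": 18.5204,
    "longitude": 73.8567,
}


def test_duplicate_report_is_linked(client, db_session):
    # First report creates a brand-new issue
    first = client.post("/api/issues", json=SAMPLE_REPORT).json()
    assert first["id"] is not None

    # A second report at the same spot is deduplicated; the API contract
    # still returns id=None to the frontend
    second = client.post("/api/issues", json=SAMPLE_REPORT).json()
    assert second["id"] is None

    # ...but the duplicate row is persisted and linked to the original
    dup = db_session.query(Issue).filter(Issue.status == "duplicate").one()
    assert dup.parent_issue_id == first["id"]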

Comment on lines 136 to 137
conn.execute(text("ALTER TABLE issues ADD COLUMN parent_issue_id INTEGER"))
print("Migrated database: Added parent_issue_id column.")

Copilot AI Feb 16, 2026


The migration adds the parent_issue_id column but doesn't create the foreign key constraint defined in the SQLAlchemy model. For existing databases, the ForeignKey constraint won't be enforced until explicitly added via migration. Consider adding a migration statement to create the constraint: ALTER TABLE issues ADD CONSTRAINT fk_parent_issue_id FOREIGN KEY (parent_issue_id) REFERENCES issues(id) with appropriate ON DELETE behavior.
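If that constraint were added for existing databases, a hedged sketch using the PR's safe_migrate helper could look like the following; the constraint name and ON DELETE choice are illustrative, and SQLite would reject ADD CONSTRAINT, which safe_migrate's rollback would absorb:

# Inside migrate_db(), after the parent_issue_id column and index steps (sketch)
safe_migrate(
    """
    ALTER TABLE issues
        ADD CONSTRAINT fk_issues_parent_issue_id
        FOREIGN KEY (parent_issue_id) REFERENCES issues(id)
        ON DELETE SET NULL
    """,
    "Migrated database: Added foreign key constraint on parent_issue_id.",
)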

Comment on lines 282 to 293
issue = db.query(
    Issue.id,
    Issue.category,
    Issue.description,
    Issue.created_at,
    Issue.image_path,
    Issue.status,
    Issue.upvotes,
    Issue.location,
    Issue.latitude,
    Issue.longitude
).filter(Issue.id == issue_id).first()

Copilot AI Feb 16, 2026


The GET endpoint returns issues regardless of their status, including issues with status='duplicate'. Consider whether duplicate issues should be exposed via this public API endpoint, or if they should be filtered out since they're primarily for internal analytics. If duplicates should be accessible, document this behavior in the docstring.

action_plan=None,
integrity_hash=integrity_hash,
previous_integrity_hash=prev_hash,
status="duplicate",

Copilot AI Feb 16, 2026


The status value "duplicate" should be added to the IssueStatus enum in backend/schemas.py. Currently, the Issue model uses this string literal status value, but it's not defined in the enum. This could cause validation issues or unexpected behavior when the status field is validated against the enum in API responses or status transitions. Add DUPLICATE = "duplicate" to the IssueStatus enum.
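A sketch of the suggested addition; the existing members listed here are assumptions, since backend/schemas.py is not part of this diff:

# backend/schemas.py (sketch)
from enum import Enum


class IssueStatus(str, Enum):
    NEW = "new"                  # assumed existing member
    IN_PROGRESS = "in_progress"  # assumed existing member
    RESOLVED = "resolved"        # assumed existing member
    DUPLICATE = "duplicate"      # value create_issue now writes for deduplicated reports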


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 5 files

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="backend/routers/issues.py">

<violation number="1" location="backend/routers/issues.py:211">
P2: Missing `issue_details_cache` invalidation for the parent issue after the deduplication upvote. The `upvote_issue` endpoint correctly invalidates the cache, but this deduplication code path does not, so `GET /api/issues/{linked_issue_id}` will return a stale upvote count. Add `issue_details_cache.invalidate(f"issue_{linked_issue_id}")` after the upvote commit.</violation>

<violation number="2" location="backend/routers/issues.py:270">
P0: Route declaration order bug: This parameterized route `/api/issues/{issue_id}` must be declared after all static `/api/issues/*` sub-routes (like `/api/issues/nearby`, `/api/issues/user`, `/api/issues/status`, `/api/issues/recent`). FastAPI/Starlette matches routes in declaration order, so requests to static paths will attempt int validation on path segments like 'nearby' and return 422 errors instead of reaching the correct handlers.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

linked_issue_id=linked_issue_id
)

@router.get("/api/issues/{issue_id}", response_model=IssueSummaryResponse)

@cubic-dev-ai cubic-dev-ai bot Feb 16, 2026


P0: Route declaration order bug: This parameterized route /api/issues/{issue_id} must be declared after all static /api/issues/* sub-routes (like /api/issues/nearby, /api/issues/user, /api/issues/status, /api/issues/recent). FastAPI/Starlette matches routes in declaration order, so requests to static paths will attempt int validation on path segments like 'nearby' and return 422 errors instead of reaching the correct handlers.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/routers/issues.py, line 270:

<comment>Route declaration order bug: This parameterized route `/api/issues/{issue_id}` must be declared after all static `/api/issues/*` sub-routes (like `/api/issues/nearby`, `/api/issues/user`, `/api/issues/status`, `/api/issues/recent`). FastAPI/Starlette matches routes in declaration order, so requests to static paths will attempt int validation on path segments like 'nearby' and return 422 errors instead of reaching the correct handlers.</comment>

<file context>
@@ -247,6 +267,53 @@ async def create_issue(
             linked_issue_id=linked_issue_id
         )
 
+@router.get("/api/issues/{issue_id}", response_model=IssueSummaryResponse)
+def get_issue(issue_id: int, db: Session = Depends(get_db)):
+    """
</file context>
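A sketch of the ordering fix; handler bodies are elided and only routes named in the review are shown:

from fastapi import APIRouter

router = APIRouter()

# Static sub-paths are registered first, otherwise GET /api/issues/nearby would
# match the parameterized route below and fail int validation with a 422.
@router.get("/api/issues/nearby")
def get_nearby_issues():
    ...

@router.get("/api/issues/recent")
def get_recent_issues():
    ...

# The parameterized route is declared last so the static paths above are matched
# first; remaining non-numeric segments still return a 422 here, as expected.
@router.get("/api/issues/{issue_id}")
def get_issue(issue_id: int):
    ...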

- Refactored `backend/init_db.py` to use `safe_migrate` helper function.
- Each migration step is now executed in an independent transaction (commit on success, rollback on failure).
- This prevents `current transaction is aborted` errors on PostgreSQL when checking for existing columns, ensuring all migrations (including `previous_integrity_hash` and `parent_issue_id`) run successfully.
- This fixes the deployment failure where new columns were missing due to aborted transactions.

Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="backend/init_db.py">

<violation number="1" location="backend/init_db.py:40">
P2: Silent exception swallowing hides real migration failures. The `except Exception` block discards the error without any logging, making it impossible to diagnose unexpected failures (permissions, disk issues, DB corruption, etc.). At minimum, log the exception at `debug` or `warning` level so operators can troubleshoot.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment on lines 40 to 41
except Exception:
    conn.rollback()

@cubic-dev-ai cubic-dev-ai bot Feb 16, 2026


P2: Silent exception swallowing hides real migration failures. The except Exception block discards the error without any logging, making it impossible to diagnose unexpected failures (permissions, disk issues, DB corruption, etc.). At minimum, log the exception at debug or warning level so operators can troubleshoot.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At backend/init_db.py, line 40:

<comment>Silent exception swallowing hides real migration failures. The `except Exception` block discards the error without any logging, making it impossible to diagnose unexpected failures (permissions, disk issues, DB corruption, etc.). At minimum, log the exception at `debug` or `warning` level so operators can troubleshoot.</comment>

<file context>
@@ -31,206 +31,98 @@ def migrate_db():
+                    conn.commit()
+                    if success_msg:
+                        logger.info(success_msg)
+                except Exception:
+                    conn.rollback()
+
</file context>
Suggested change
-except Exception:
-    conn.rollback()
+except Exception as e:
+    conn.rollback()
+    logger.debug(f"Migration skipped (likely already applied): {e}")


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@backend/init_db.py`:
- Around line 34-41: The safe_migrate function is swallowing all
exceptions—capture and log the exception before rolling back so real failures
are visible; inside safe_migrate (which calls conn.execute(text(sql)) and
conn.commit()) update the except Exception block to log the exception (e.g.,
logger.debug or logger.error with the exception object and include context like
the SQL or success_msg) and then perform conn.rollback() so you still revert on
error but retain diagnostic information.

Comment on lines 34 to 41
def safe_migrate(sql, success_msg):
    try:
        conn.execute(text(sql))
        conn.commit()
        if success_msg:
            logger.info(success_msg)
    except Exception:
        conn.rollback()

⚠️ Potential issue | 🟠 Major

safe_migrate silently swallows all migration failures — at minimum log the exception.

The except Exception block rolls back but discards the error entirely. This is fine for expected "column/index already exists" errors, but genuine failures (permission denied, syntax errors, disk full, connection issues) will be invisible, making production debugging very difficult.

Add at least a logger.debug so failures are traceable when needed:

Proposed fix
             def safe_migrate(sql, success_msg):
                 try:
                     conn.execute(text(sql))
                     conn.commit()
                     if success_msg:
                         logger.info(success_msg)
-                except Exception:
+                except Exception as e:
                     conn.rollback()
+                    logger.debug("Migration skipped or failed: %s — %s", sql.strip()[:60], e)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 def safe_migrate(sql, success_msg):
     try:
         conn.execute(text(sql))
         conn.commit()
         if success_msg:
             logger.info(success_msg)
-    except Exception:
+    except Exception as e:
         conn.rollback()
+        logger.debug("Migration skipped or failed: %s — %s", sql.strip()[:60], e)
🧰 Tools
🪛 Ruff (0.15.0)

[warning] 40-40: Do not catch blind exception: Exception

(BLE001)

🤖 Prompt for AI Agents
In `@backend/init_db.py` around lines 34 - 41, The safe_migrate function is
swallowing all exceptions—capture and log the exception before rolling back so
real failures are visible; inside safe_migrate (which calls
conn.execute(text(sql)) and conn.commit()) update the except Exception block to
log the exception (e.g., logger.debug or logger.error with the exception object
and include context like the SQL or success_msg) and then perform
conn.rollback() so you still revert on error but retain diagnostic information.

- Moved `vite`, `tailwindcss`, `postcss`, and `autoprefixer` from `devDependencies` to `dependencies`.
- This ensures these packages are installed and available during the build process on platforms like Render/Netlify, which often set `NODE_ENV=production` and prune dev dependencies before building.
- Fixes the "vite: command not found" error during deployment.

Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>
@github-actions

🔍 Quality Reminder

Thanks for the updates! Please ensure:
- Your changes don't break existing functionality
- All tests still pass
- Code quality standards are maintained

*The maintainers will verify that the overall project flow remains intact.*

- Moved `vite`, `tailwindcss`, `postcss`, and `autoprefixer` from `devDependencies` to `dependencies`.
- This ensures these packages are installed and available during the build process on platforms like Render/Netlify, which often set `NODE_ENV=production` and prune dev dependencies before building.
- Fixes the "vite: command not found" error during deployment.
- Regenerated `package-lock.json` to reflect changes.

Co-authored-by: RohanExploit <178623867+RohanExploit@users.noreply.github.com>