Copilot AI commented Jan 8, 2026

Benchmark Implementation Plan

  • Create benchmark test suite
    • Set up benchmark framework (using benchmark.js)
    • Create benchmark tests for core functions (enforce, enforceSync, getRolesForUser, etc.)
    • Test different models (RBAC, ABAC, ACL)
    • Create test data and fixtures for benchmarks
  • Create GitHub Actions workflow for benchmarks
    • Add benchmark workflow that runs on PRs
    • Configure to run benchmarks on base branch and PR branch
    • Store benchmark results
    • Compare results between base and PR
  • Implement PR comment posting
    • Format benchmark results for display
    • Post results as PR comment
    • Show performance comparison (faster/slower)
  • Test and validate
    • Test benchmark suite locally (✓ All benchmarks run successfully)
    • Fix linting issues (✓ All ESLint checks pass)
    • Verify existing tests pass (✓ 192/192 tests pass)
    • Add documentation for benchmark usage
    • Extract scripts for better maintainability
    • Improve error handling in workflow
    • Address code review feedback
    • Fix CI workflow issue with missing scripts on base branch
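The core measurement behind the plan above is ops/sec per operation. As a simplified, self-contained stand-in for what benchmark.js computes (benchmark.js adds adaptive sampling and statistical analysis on top), the sketch below times a fixed number of calls; `checkPermission` is a hypothetical placeholder for a call like `enforcer.enforceSync(sub, obj, act)`:

```javascript
// Simplified sketch of what each benchmark case measures: ops/sec for a
// function under test. benchmark.js does this with adaptive sampling and
// error margins; this stand-in just times a fixed number of calls.
function measureOpsPerSec(fn, iterations = 100000) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const elapsedNs = Number(process.hrtime.bigint() - start);
  return iterations / (elapsedNs / 1e9);
}

// Hypothetical stand-in for enforceSync: a trivial policy lookup.
const policy = new Set(['alice:data1:read', 'bob:data2:write']);
function checkPermission(sub, obj, act) {
  return policy.has(`${sub}:${obj}:${act}`);
}

const hz = measureOpsPerSec(() => checkPermission('alice', 'data1', 'read'));
console.log(`ops/sec: ${Math.round(hz).toLocaleString('en-US')}`);
```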

Summary

Implementation Complete!

This PR adds a comprehensive benchmark suite and automated CI workflow to node-casbin:

What's Included

  • Benchmark Suite: Tests for RBAC, ABAC, and Basic models covering enforce, enforceSync, and policy management APIs
  • CI Workflow: Automated benchmarks on every PR with comparison against base branch
  • PR Comments: Clear, formatted results showing performance changes with visual indicators
  • Robust Error Handling: Gracefully handles failures while still providing useful feedback
  • Comprehensive Documentation: Complete guide for running and understanding benchmarks

Benefits

  • 🎯 Catch performance regressions before merging
  • 📊 Make data-driven decisions about code changes
  • 🚀 Maintain high performance standards
  • 📈 Track performance trends over time

Recent Fix

  • Fixed CI workflow to preserve benchmark scripts when checking out base branch for comparison
Original prompt

This section details the original issue you should resolve

<issue_title>ci: implement benchmarks and automated CI workflow</issue_title>
<issue_description>We need to implement benchmark tests to measure the performance of different models and functions:

  • set up a CI workflow to run these benchmarks automatically on every Pull Request;

  • then post benchmark results in PR comments to compare performance changes;

  • this helps maintainers catch performance regressions early and keep the library fast.</issue_description>

Comments on the Issue (you are @copilot in this section)



@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@hsluoyz hsluoyz marked this pull request as ready for review January 8, 2026 06:04
@coveralls

coveralls commented Jan 8, 2026

Pull Request Test Coverage Report for Build 20820648323

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 78.257%

Totals
  • Change from base Build 20679550322: 0.0%
  • Covered Lines: 1645
  • Relevant Lines: 2001

💛 - Coveralls

Copilot AI and others added 4 commits January 8, 2026 06:12
- Add benchmark.ts with tests for RBAC, ABAC, and Basic models
- Add benchmark script to package.json
- Install benchmark.js and @types/benchmark dependencies
- Create GitHub Actions workflow for automated benchmarking on PRs
- Update .gitignore to exclude benchmark artifacts

Co-authored-by: hsluoyz <3787410+hsluoyz@users.noreply.github.com>
- Add comprehensive README for benchmark directory
- Fix ESLint errors in benchmark.ts
- Verify all tests still pass

Co-authored-by: hsluoyz <3787410+hsluoyz@users.noreply.github.com>
- Extract JSON parsing logic to separate extract-results.js script
- Extract comparison logic to separate compare-results.js script
- Improve error handling in benchmark workflow
- Add benchmark JS files to .eslintignore
- Fix misleading comment in benchmark.ts

Co-authored-by: hsluoyz <3787410+hsluoyz@users.noreply.github.com>
- Fix usage message in extract-results.js and compare-results.js
- Add file existence checks in benchmark.ts to validate example files
- Improve error handling in workflow using continue-on-error
- Make benchmark failures more visible while still allowing comparison

Co-authored-by: hsluoyz <3787410+hsluoyz@users.noreply.github.com>
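The `extract-results.js` script introduced in the commits above is not shown in this thread; as a hedged sketch of the kind of parsing it might do, benchmark.js prints one line per completed case in the form `<name> x <ops/sec> ops/sec ±<rme>% (<n> runs sampled)`, which can be converted into `{ name, hz }` records for comparison:

```javascript
// Parse benchmark.js "cycle" output lines into { name, hz } records.
// Example input line:
//   RBAC - enforce (allow) x 202,351 ops/sec ±1.23% (85 runs sampled)
const CYCLE_LINE = /^(.+?) x ([\d,]+(?:\.\d+)?) ops\/sec/;

function extractResults(text) {
  const results = [];
  for (const line of text.split('\n')) {
    const m = line.trim().match(CYCLE_LINE);
    if (m) {
      // Strip thousands separators before converting to a number.
      results.push({ name: m[1], hz: parseFloat(m[2].replace(/,/g, '')) });
    }
  }
  return results;
}
```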
Copilot AI changed the title [WIP] Implement benchmarks and automated CI workflow ci: implement benchmarks and automated CI workflow Jan 8, 2026
Copilot AI requested a review from hsluoyz January 8, 2026 06:26
@hsluoyz

hsluoyz commented Jan 8, 2026

@copilot fix:

Run node benchmark/extract-results.js base-benchmark.txt base-results.json
  node benchmark/extract-results.js base-benchmark.txt base-results.json
  shell: /usr/bin/bash -e {0}
node:internal/modules/cjs/loader:1210
  throw err;
  ^

Error: Cannot find module '/home/runner/work/node-casbin/node-casbin/benchmark/extract-results.js'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1207:15)
    at Module._load (node:internal/modules/cjs/loader:1038:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:164:12)
    at node:internal/main/run_main_module:28:49 {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}

Node.js v20.19.6
Error: Process completed with exit code 1.

The workflow was failing because after checking out the base branch,
the benchmark scripts (extract-results.js and compare-results.js)
no longer exist. Now we copy them to /tmp before checkout and use
them from there for processing base branch results.

Co-authored-by: hsluoyz <3787410+hsluoyz@users.noreply.github.com>

Copilot AI commented Jan 8, 2026

@copilot fix:

...

Fixed in commit 2ec74ba. The issue was that the workflow checked out the base branch to run benchmarks on it, but the benchmark scripts (extract-results.js and compare-results.js) only exist in the PR branch. Now the scripts are copied to /tmp before checking out the base branch, so they're available for processing results from both branches.

@github-actions

github-actions bot commented Jan 8, 2026

📊 Benchmark Results

Performance comparison between base branch and PR:

| Benchmark | Base (ops/sec) | PR (ops/sec) | Change |
| --- | --- | --- | --- |
| RBAC - enforce (allow) | 0 | 202,350.92 | N/A |
| RBAC - enforce (deny) | 0 | 131,031.02 | N/A |
| RBAC - enforceSync (allow) | 0 | 457,877.02 | N/A |
| RBAC - enforceSync (deny) | 0 | 300,335.07 | N/A |
| RBAC - getRolesForUser | 0 | 3,616,150.89 | N/A |
| RBAC - hasRoleForUser | 0 | 2,782,171.5 | N/A |
| ABAC - enforce (allow) | 0 | 396,923.64 | N/A |
| ABAC - enforce (deny) | 0 | 395,806.34 | N/A |
| ABAC - enforceSync (allow) | 0 | 734,113.99 | N/A |
| ABAC - enforceSync (deny) | 0 | 732,359.05 | N/A |
| Basic - enforce (allow) | 0 | 257,166.52 | N/A |
| Basic - enforce (deny) | 0 | 180,374.56 | N/A |
| Basic - enforceSync (allow) | 0 | 569,687.94 | N/A |
| Basic - enforceSync (deny) | 0 | 411,682.18 | N/A |
| getPolicy | 0 | 3,905,176 | N/A |
| hasPolicy | 0 | 4,710,110.67 | N/A |
| getFilteredPolicy | 0 | 3,995,205.72 | N/A |

Legend:

  • 🚀 Significant improvement (>5%)
  • ✅ Improvement (0-5%)
  • ➖ No significant change
  • ⬇️ Minor regression (0-5%)
  • ⚠️ Regression (>5%)
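The classification the legend describes could be sketched as follows; the exact thresholds and the handling of a zero base are assumptions, since the actual `compare-results.js` is not shown here, but a zero base (as in this PR, where the base branch had no benchmark suite yet) must render as N/A to avoid dividing by zero:

```javascript
// Classify a base-vs-PR ops/sec pair per the legend's thresholds.
function formatChange(baseHz, prHz) {
  if (!baseHz) return 'N/A'; // no base results to compare against
  const pct = ((prHz - baseHz) / baseHz) * 100;
  let icon;
  if (pct > 5) icon = '🚀';         // significant improvement (>5%)
  else if (pct > 0) icon = '✅';    // improvement (0-5%)
  else if (pct === 0) icon = '➖';  // no significant change
  else if (pct >= -5) icon = '⬇️'; // minor regression (0-5%)
  else icon = '⚠️';                 // regression (>5%)
  return `${pct.toFixed(2)}% ${icon}`;
}
```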

@hsluoyz hsluoyz closed this Jan 11, 2026