
Conversation

@benalleng
Collaborator

@benalleng benalleng commented Dec 30, 2025

This PR tests whether we can reduce the number of blocks mined in receiver_consolidates_utxos to lower the CPU load, since it was suggested that the test might be CPU bound, leading to a timeout when running nix flake check, as referenced in #1228 (comment).

It's also worth considering whether this test is still effective at what it originally set out to test once the number of blocks is dramatically reduced. I tend to think it is still an effective test, but I'm curious what others think.

This allowed nix flake check to succeed on top of 238d849, where it otherwise failed without this PR's commit.
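
For illustration, here is a minimal sketch (not the actual harness in this repo) of the general idea: fund many receiver UTXOs while mining only a handful of blocks, batching confirmations instead of mining per UTXO. It assumes the bitcoincore-rpc crate (a recent version that returns unchecked addresses) and two regtest wallets; the function name, amounts, and batch size are placeholders.

```rust
// A minimal sketch (not this repo's actual test code) of funding `count`
// receiver UTXOs with few mined blocks: one maturity run up front, then a
// confirmation block every 20 sends (to stay under Core's default 25-ancestor
// mempool chain limit) instead of a block per UTXO.
use bitcoincore_rpc::bitcoin::Amount;
use bitcoincore_rpc::{Client, RpcApi};

fn fund_receiver_utxos(sender: &Client, receiver: &Client, count: usize) -> bitcoincore_rpc::Result<()> {
    let mine_to = sender.get_new_address(None, None)?.assume_checked();
    // 101 blocks: the first coinbase matures and becomes spendable by the sender.
    sender.generate_to_address(101, &mine_to)?;

    for i in 0..count {
        let addr = receiver.get_new_address(None, None)?.assume_checked();
        sender.send_to_address(&addr, Amount::from_sat(100_000), None, None, None, None, None, None)?;
        // Confirm in batches rather than mining a block after every send.
        if (i + 1) % 20 == 0 {
            sender.generate_to_address(1, &mine_to)?;
        }
    }
    // Final block to confirm any remaining unconfirmed sends.
    sender.generate_to_address(1, &mine_to)?;
    Ok(())
}
```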

Pull Request Checklist

Please confirm the following before requesting review:

@coveralls
Collaborator

coveralls commented Dec 30, 2025

Pull Request Test Coverage Report for Build 20603228178

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 82.902%

Totals Coverage Status
  • Change from base Build 20575392805: 0.0%
  • Covered Lines: 9668
  • Relevant Lines: 11662

💛 - Coveralls

@benalleng
Collaborator Author

I think this is actually unnecessary. Testing on #1228, I found that the latest commit, 238d849, is sufficient to prevent the timeout.

@benalleng benalleng closed this Dec 30, 2025
@benalleng benalleng reopened this Dec 30, 2025
@benalleng
Collaborator Author

It still seems that others are hitting timeouts.

@spacebear21
Collaborator

> It's also worth considering whether this test is still effective at what it originally set out to test once the number of blocks is dramatically reduced. I tend to think it is still an effective test, but I'm curious what others think.

I had picked 100 UTXOs to more closely simulate a consolidation tx that e.g. an exchange or other service provider might make (e.g. https://mempool.space/tx/40942260a61b0c51d2ccfe22fbe2ab0c474feab74fac3b7d99e663a1001199c6). Such a large number isn't strictly necessary for test coverage, but I think it's nice to sanity-check the fee estimation and performance of our API under those conditions.
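
For a rough sense of the fee numbers such a sweep exercises, here is a back-of-the-envelope sketch. The figures (~68 vbytes per P2WPKH input, ~31 per output, ~11 of transaction overhead) are my own approximations, not taken from this repo's fee logic; it only shows the order of magnitude a 100-input consolidation lands at.

```rust
// Rough consolidation fee estimate (illustrative figures, not from the repo):
// ~68 vbytes per P2WPKH input, ~31 vbytes per P2WPKH output, ~11 vbytes overhead.
fn consolidation_fee_sats(num_inputs: u64, fee_rate_sat_per_vb: u64) -> u64 {
    let vbytes = num_inputs * 68 + 31 + 11;
    vbytes * fee_rate_sat_per_vb
}

fn main() {
    // 100 inputs swept to one output at 2 sat/vB ≈ 6,842 vbytes ≈ 13,684 sats.
    println!("{} sats", consolidation_fee_sats(100, 2));
}
```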

@benalleng
Collaborator Author

benalleng commented Dec 31, 2025

I think we can avoid this. Through caching, splitting the jobs up to reduce the load on individual runs, and transitioning to the dev profile, it seems we can't get it to time out reliably anymore.

@benalleng benalleng closed this Dec 31, 2025
