
Conversation

@danieljvickers
Member

@danieljvickers commented Jan 9, 2026

User description


Description

I added code to handle two-way coupling for moving IBMs with viscous forces, along with an ellipsoidal IB patch for sediment-based cases that we may want to run. I also added a new example case, a 2D shock-cylinder problem, which was analyzed and compared against a published example.

The code already works with MPI and GPU capability, and both features were adequately tested on Phoenix.

Of note, one thing missing from this PR is higher-order derivatives, which will come in a more in-depth validation PR next.

Type of change

  • New feature (non-breaking change which adds functionality)

Scope

  • This PR comprises a set of related changes with a common goal

If you cannot check the above box, please split your PR into multiple PRs that each have a common goal.

How Has This Been Tested?

Please describe the tests that you ran to verify your changes.
Provide instructions so we can reproduce.
Please also list any relevant details for your test configuration

  • I ran a comparison against a published shock-cylinder case, as well as verification of buoyant forces and of correct behavior under gravity.
  • I tested locally with GNU as well as on Phoenix with NVHPC on GPUs and obtained identical results on both.

Test Configuration:

  • What computers and compilers did you use to test this:

Checklist

  • I have added comments for the new code
  • I added Doxygen docstrings to the new code
  • I have made corresponding changes to the documentation (docs/)
  • I have added regression tests to the test suite so that people can verify in the future that the feature is behaving as expected
  • I have added example cases in examples/ that demonstrate my new feature performing as expected.
    They run to completion and demonstrate "interesting physics"
  • I ran ./mfc.sh format before committing my code
  • New and existing tests pass locally with my changes, including with GPU capability enabled (both NVIDIA hardware with NVHPC compilers and AMD hardware with CRAY compilers) and disabled
  • This PR does not introduce any repeated code (it follows the DRY principle)
  • I cannot think of a way to condense this code and reduce any introduced additional line count

If your code changes any source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

  • Checked that the code compiles using NVHPC compilers
  • Checked that the code compiles using CRAY compilers
  • Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • Ran the code on MI200+ GPUs and ensured the new features performed as expected (the GPU results match the CPU results)
  • Enclosed the new feature via nvtx ranges so that they can be identified in profiles
  • Ran a Nsight Systems profile using ./mfc.sh run XXXX --gpu -t simulation --nsys, and have attached the output file (.nsys-rep) and plain text results to this PR
  • Ran a Rocprof Systems profile using ./mfc.sh run XXXX --gpu -t simulation --rsys --hip-trace, and have attached the output file and plain text results to this PR.
  • Ran my code using various numbers of different GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to what happens if you run without the new code/feature

PR Type

Enhancement, Tests


Description

  • Added viscous stress tensor computation for moving immersed boundaries with two-way coupling

  • Implemented ellipsoidal immersed boundary patch geometry for 2D simulations

  • Extended force and torque calculations to include viscous stress contributions

  • Added body force support (gravity) to immersed boundary dynamics

  • Included shock-cylinder and ellipse example cases with validation tests


Diagram Walkthrough

flowchart LR
  A["Viscous Stress Tensor"] --> B["Force/Torque Calculation"]
  C["Ellipse IB Patch"] --> D["Levelset Generation"]
  B --> E["Two-Way Coupling"]
  D --> E
  F["Body Forces"] --> E
  E --> G["Moving IBM Dynamics"]

File Walkthrough

Relevant files
Enhancement (6 files)
  • m_ibm.fpp: Integrated viscous forces and body forces into IBM (+118/-46)
  • m_viscous.fpp: Added viscous stress tensor computation routine (+69/-6)
  • m_compute_levelset.fpp: Implemented ellipse levelset generation method (+58/-1)
  • m_ib_patches.fpp: Added ellipse immersed boundary patch implementation (+44/-0)
  • m_time_steppers.fpp: Updated force computation call with viscosity parameter (+1/-1)
  • m_check_ib_patches.fpp: Added validation check for ellipse patch geometry (+2/-0)
Miscellaneous (1 file)
  • m_rhs.fpp: Renamed viscous stress function for clarity (+12/-12)
Tests (4 files)
  • case.py: New shock-cylinder test case with moving cylinder (+138/-0)
  • case.py: New ellipse immersed boundary example case (+97/-0)
  • golden-metadata.txt: Golden test metadata for shock-cylinder case (+193/-0)
  • golden-metadata.txt: Golden test metadata for ellipse case (+193/-0)
Additional files (2 files)
  • golden.txt (+11/-0)
  • golden.txt (+11/-0)


CodeAnt-AI Description

Add viscous two-way coupling for moving immersed boundaries and ellipse IB support

What Changed

  • Two-way coupling now includes viscous stresses: moving immersed boundaries receive forces and torques from both pressure and viscous stress contributions, and those summed forces update IB linear and angular motion (a schematic of this balance is sketched after this list).
  • Elliptical immersed boundary geometry added: domain markers, levelset normals, and levelset values are computed for ellipse patches so ellipsoidal bodies can be used in simulations.
  • Ghost-cell treatment and IB initialization updated: interior ghost cells for moving IBs set momentum consistent with the IB velocity so moving bodies start with correct local momentum; moment-of-inertia computation corrected for ellipse geometry.
  • Body forces (gravity/acceleration) are applied to IBs after global reduction to avoid double counting.
  • Two new example case files added: a 2D shock-cylinder case and a 2D IB ellipse case demonstrating viscous and moving-IB behavior.
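
For readers skimming the coupling change, the discrete balance this list describes can be written schematically as follows. This is a sketch only; the cell-sum form and the symbols are inferred from the description above, not lifted from the code:

\mathbf{F}_{ib} \approx \sum_{c \,\in\, ib} \left( -\nabla p + \nabla \cdot \boldsymbol{\tau} \right)_c \Delta V_c + m_{ib}\,\mathbf{g},
\qquad
\mathbf{T}_{ib} \approx \sum_{c \,\in\, ib} \mathbf{r}_c \times \left( -\nabla p + \nabla \cdot \boldsymbol{\tau} \right)_c \Delta V_c,

where \boldsymbol{\tau} is the viscous stress tensor, \Delta V_c the cell volume, \mathbf{r}_c the vector from the IB centroid to the cell center, and the body-force term m_{ib}\,\mathbf{g} is added once per body after the global reduction, as noted above.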

Impact

✅ Two-way viscous coupling for moving bodies
✅ Simulations can include ellipsoidal immersed boundaries
✅ Examples available to reproduce viscous moving-IB scenarios

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again, by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

Summary by CodeRabbit

  • New Features

    • Elliptical immersed-boundary support
    • Viscous coupling for immersed-boundary forces and improved moving-body inertia handling
    • New example case generators that print JSON configs for 2D ellipse and 2D shock-cylinder
  • Bug Fixes

    • Corrected viscous-stress handling for cylindrical-boundary cases and fixed related argument usage
  • Tests

    • Added extensive golden-data files and run metadata for new scenarios

✏️ Tip: You can customize this high-level summary in your review settings.

@codeant-ai
Contributor

codeant-ai bot commented Jan 9, 2026

CodeAnt AI is reviewing your PR.


Thanks for using CodeAnt! 🎉

We're free for open-source projects. If you're enjoying it, help us grow by sharing.


@coderabbitai
Contributor

coderabbitai bot commented Jan 9, 2026

📝 Walkthrough

Walkthrough

Adds 2D ellipse IB support, two example JSON case generators, viscous coupling into IBM force computations with signature and call-site updates, viscous-stress API variants and fixes, updated inertia handling, and new golden-file test data. A duplicate s_ellipse_levelset implementation was introduced.

Changes

  • Example Cases (examples/2D_ibm_ellipse/case.py, examples/2D_mibm_shock_cylinder/case.py): New scripts that build and print JSON configuration dictionaries for an IBM ellipse case and a MIBM shock-cylinder case.
  • Ellipse Levelset & Patch (src/common/m_compute_levelset.fpp, src/common/m_ib_patches.fpp, src/pre_process/m_check_ib_patches.fpp): Adds s_ellipse_levelset (ellipse implicit levelset), s_ib_ellipse (2D ellipse patch marker), acceptance of geometry code 6, and ellipse-parameter validation. Note: s_ellipse_levelset appears duplicated in the file.
  • IBM Force, Inertia & Setup (src/simulation/m_ibm.fpp): Changes s_compute_ib_forces signature to (q_prim_vf, dynamic_viscosity), incorporates viscous contributions into force/torque accumulation, requires axis input for moment-of-inertia, computes inertia during setup, and uses normal_axis internally.
  • Viscous Module API & Impl (src/simulation/m_viscous.fpp): Public exposure renamed to s_compute_viscous_stress_cylindrical_boundary; adds a new public s_compute_viscous_stress_tensor(...) implementation (Cartesian variant) and updates end-subroutine labels.
  • Call-sites & RHS Fixes (src/simulation/m_time_steppers.fpp, src/simulation/m_rhs.fpp): Updates IBM force call-sites to pass full q_prim_vf plus explicit Re-like value; replaces viscous-stress calls in cylindrical-BC branch and adjusts viscous-stress call arguments.
  • Tests / Golden Data (tests/.../golden-metadata.txt, tests/.../golden.txt): Adds golden-file metadata snapshots and large numeric datasets (D/cons.*, D/ib_markers.*) used by regression tests.

Sequence Diagram(s)

sequenceDiagram
    participant PreProc as Pre-processor
    participant IBCheck as IB Patch Checker
    participant IBPatch as IB Patch Module
    participant Domain as Grid/Cells
    participant LevelSet as Levelset Module

    PreProc->>IBCheck: s_check_ib_patches(..., geometry=6)
    IBCheck->>IBPatch: s_check_ellipse_ib_patch_geometry(patch_id)
    IBPatch->>Domain: s_ib_ellipse(patch_id, ib_markers_sf)
    Domain->>Domain: evaluate (x/a)^2+(y/b)^2 ≤ 1 with inverse rotation
    Domain-->>IBPatch: mark covered cells
    IBPatch->>LevelSet: s_ellipse_levelset(ib_patch_id, levelset, levelset_norm)
    LevelSet->>LevelSet: compute rotated coords, quadratic form, normal
    LevelSet-->>Domain: return levelset values and normals
sequenceDiagram
    participant TimeStepper as Time-stepper
    participant IBForce as IBM Force Module
    participant Viscous as Viscous Module
    participant IBData as IBM Data Store

    TimeStepper->>IBForce: s_compute_ib_forces(q_prim_vf, dynamic_viscosity)
    IBForce->>IBData: iterate IB markers / cells
    IBForce->>Viscous: s_compute_viscous_stress_tensor(viscous_stress, q_prim_vf, dynamic_viscosity, i,j,k)
    Viscous-->>IBForce: viscous stress tensor
    IBForce->>IBForce: accumulate pressure + viscous forces & torques
    IBForce-->>TimeStepper: return forces & torques
    TimeStepper->>TimeStepper: update IB linear/angular velocities

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested reviewers

  • sbryngelson
  • wilfonba

Poem

🐇
I hopped through grids where ellipses bend,
Rotated normals guide each friend,
Viscous whispers join the flow,
Forces tally, torques now show,
A tiny hop — simulations mend.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 inconclusive)
  • Description check: ❓ Inconclusive. The PR description covers the main changes, motivation (sediment-based cases), testing details, and lists example cases. However, several template checklist items remain unchecked, including Doxygen docstrings, documentation updates, GPU testing, and profiling, raising concerns about completeness against stated requirements.
    Resolution: Complete the remaining checklist items or clarify which items are intentionally deferred. Document testing on NVHPC/CRAY compilers and GPUs, and confirm whether Doxygen docstrings and documentation updates have been addressed or are planned for follow-up PRs.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title 'Viscous stress and ellipse ib' is concise and directly reflects the two main features added in the PR (viscous stress handling and ellipse immersed boundary support), making it clear and specific to the changeset.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@codeant-ai bot added the size:XL label (This PR changes 500-999 lines, ignoring generated files) on Jan 9, 2026
@qodo-code-review
Contributor

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
🧪 PR contains tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Possible Issue

The new call sites compute cylindrical-boundary viscous stress using s_compute_viscous_stress_cylindrical_boundary, but the 2D branch appears to pass dq_prim_dy_vf for both the y- and z-derivative arguments. If this routine expects a true z-derivative field (even in 2D), this may silently produce incorrect stresses at the boundary.

    call s_compute_viscous_stress_cylindrical_boundary(q_prim_vf, &
                                                       dq_prim_dx_vf(mom_idx%beg:mom_idx%end), &
                                                       dq_prim_dy_vf(mom_idx%beg:mom_idx%end), &
                                                       dq_prim_dz_vf(mom_idx%beg:mom_idx%end), &
                                                       tau_Re_vf, &
                                                       idwbuff(1), idwbuff(2), idwbuff(3))
else
    call s_compute_viscous_stress_cylindrical_boundary(q_prim_vf, &
                                                       dq_prim_dx_vf(mom_idx%beg:mom_idx%end), &
                                                       dq_prim_dy_vf(mom_idx%beg:mom_idx%end), &
                                                       dq_prim_dy_vf(mom_idx%beg:mom_idx%end), &
                                                       tau_Re_vf, &
                                                       idwbuff(1), idwbuff(2), idwbuff(3))
Possible Issue

s_compute_moment_of_inertia normalizes the rotation axis using axis/sqrt(sum(axis)), which is not the vector magnitude and can lead to incorrect normalization (and potential NaNs for negative components). Additionally, the projection step uses dot_product(axis, position) while multiplying by normal_axis, which is inconsistent if axis is not normalized. This can skew the computed moment of inertia and downstream rigid-body dynamics.

real(wp), dimension(3), intent(in) :: axis !< the axis about which we compute the moment. Only required in 3D.
integer, intent(in) :: ib_marker

real(wp) :: moment, distance_to_axis, cell_volume
real(wp), dimension(3) :: position, closest_point_along_axis, vector_to_axis, normal_axis
integer :: i, j, k, count

if (p == 0) then
    normal_axis = [0, 1, 0]
else if (sqrt(sum(axis**2)) == 0) then
    ! if the object is not actually rotating at this time, return a dummy value and exit
    patch_ib(ib_marker)%moment = 1._wp
    return
else
    normal_axis = axis/sqrt(sum(axis))
end if

! if the IB is in 2D or a 3D sphere, we can compute this exactly
if (patch_ib(ib_marker)%geometry == 2) then ! circle
    patch_ib(ib_marker)%moment = 0.5_wp*patch_ib(ib_marker)%mass*(patch_ib(ib_marker)%radius)**2
elseif (patch_ib(ib_marker)%geometry == 3) then ! rectangle
    patch_ib(ib_marker)%moment = patch_ib(ib_marker)%mass*(patch_ib(ib_marker)%length_x**2 + patch_ib(ib_marker)%length_y**2)/6._wp
elseif (patch_ib(ib_marker)%geometry == 6) then ! ellipse
    patch_ib(ib_marker)%moment = 0.0625_wp*patch_ib(ib_marker)%mass*(patch_ib(ib_marker)%length_x**2 + patch_ib(ib_marker)%length_y**2)
elseif (patch_ib(ib_marker)%geometry == 8) then ! sphere
    patch_ib(ib_marker)%moment = 0.4*patch_ib(ib_marker)%mass*(patch_ib(ib_marker)%radius)**2

else ! we do not have an analytic moment of inertia calculation and need to approximate it directly
    count = 0
    moment = 0._wp
    cell_volume = (x_cc(1) - x_cc(0))*(y_cc(1) - y_cc(0)) ! computed without grid stretching. Update in the loop to perform with stretching
    if (p /= 0) then
        cell_volume = cell_volume*(z_cc(1) - z_cc(0))
    end if

    $:GPU_PARALLEL_LOOP(private='[position,closest_point_along_axis,vector_to_axis,distance_to_axis]', copy='[moment,count]', copyin='[ib_marker,cell_volume,normal_axis]', collapse=3)
    do i = 0, m
        do j = 0, n
            do k = 0, p
                if (ib_markers%sf(i, j, k) == ib_marker) then
                    $:GPU_ATOMIC(atomic='update')
                    count = count + 1 ! increment the count of total cells in the boundary

                    ! get the position in local coordinates so that the axis passes through 0, 0, 0
                    if (p == 0) then
                        position = [x_cc(i), y_cc(j), 0._wp] - [patch_ib(ib_marker)%x_centroid, patch_ib(ib_marker)%y_centroid, 0._wp]
                    else
                        position = [x_cc(i), y_cc(j), z_cc(k)] - [patch_ib(ib_marker)%x_centroid, patch_ib(ib_marker)%y_centroid, patch_ib(ib_marker)%z_centroid]
                    end if

                    ! project the position along the axis to find the closest distance to the rotation axis
                    closest_point_along_axis = normal_axis*dot_product(axis, position)
                    vector_to_axis = position - closest_point_along_axis
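
For reference, a minimal standalone sketch of the normalization and projection the reviewer is pointing at (plain Fortran, not MFC code; the variable names mirror the excerpt above, and the inputs are made up for illustration):

program axis_projection_sketch
  implicit none
  integer, parameter :: wp = kind(1.d0)
  real(wp), dimension(3) :: axis, position, normal_axis, closest_point_along_axis, vector_to_axis
  real(wp) :: axis_norm, distance_to_axis

  axis = [0._wp, 0._wp, 2._wp]          ! arbitrary (unnormalized) rotation axis
  position = [0.3_wp, -0.4_wp, 1.0_wp]  ! cell position relative to the IB centroid

  axis_norm = sqrt(sum(axis**2))        ! Euclidean magnitude, not sqrt(sum(axis))
  if (axis_norm > 0._wp) then
    normal_axis = axis/axis_norm
    ! project onto the *normalized* axis so the scaling and the dot product agree
    closest_point_along_axis = normal_axis*dot_product(normal_axis, position)
    vector_to_axis = position - closest_point_along_axis
    distance_to_axis = sqrt(sum(vector_to_axis**2))
    print *, 'distance to rotation axis =', distance_to_axis   ! 0.5 for these inputs
  end if
end program axis_projection_sketch
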
Possible Issue

s_compute_ib_forces uses central differences with i±1, j±1, k±1 and also queries x_cc(i+1)/etc while looping from 0..m/n/p. Unless ghost layers are guaranteed for these arrays and for q_prim_vf, boundary iterations will read out-of-bounds. Also, the pressure gradient is computed from q_prim_vf(E_idx), which (by naming) looks like total energy rather than pressure; if E_idx is not actually pressure, force/torque will be wrong.

subroutine s_compute_ib_forces(q_prim_vf, dynamic_viscosity)

    ! real(wp), dimension(idwbuff(1)%beg:idwbuff(1)%end, &
    !             idwbuff(2)%beg:idwbuff(2)%end, &
    !             idwbuff(3)%beg:idwbuff(3)%end), intent(in) :: pressure
    type(scalar_field), dimension(1:sys_size), intent(in) :: q_prim_vf
    real(wp), intent(in) :: dynamic_viscosity

    integer :: gp_id, i, j, k, l, q, ib_idx
    real(wp), dimension(num_ibs, 3) :: forces, torques
    real(wp), dimension(1:3, 1:3) :: viscous_stress_div, viscous_stress_div_1, viscous_stress_div_2 ! viscous stress tensor with temp vectors to hold divergence calculations
    real(wp), dimension(1:3) :: local_force_contribution, radial_vector, local_torque_contribution, vel
    real(wp) :: cell_volume, dx, dy, dz

    forces = 0._wp
    torques = 0._wp

    $:GPU_PARALLEL_LOOP(private='[ib_idx,radial_vector,local_force_contribution,cell_volume,local_torque_contribution, viscous_stress_div, viscous_stress_div_1, viscous_stress_div_2, dx, dy, dz]', copy='[forces,torques]', copyin='[ib_markers,patch_ib,dynamic_viscosity]', collapse=3)
    do i = 0, m
        do j = 0, n
            do k = 0, p
                ib_idx = ib_markers%sf(i, j, k)
                if (ib_idx /= 0) then
                    ! get the vector pointing to the grid cell from the IB centroid
                    if (num_dims == 3) then
                        radial_vector = [x_cc(i), y_cc(j), z_cc(k)] - [patch_ib(ib_idx)%x_centroid, patch_ib(ib_idx)%y_centroid, patch_ib(ib_idx)%z_centroid]
                    else
                        radial_vector = [x_cc(i), y_cc(j), 0._wp] - [patch_ib(ib_idx)%x_centroid, patch_ib(ib_idx)%y_centroid, 0._wp]
                    end if
                    dx = x_cc(i + 1) - x_cc(i)
                    dy = y_cc(j + 1) - y_cc(j)

                    ! Get the pressure contribution to force via a finite difference to compute the 2D components of the gradient of the pressure and cell volume
                    local_force_contribution(1) = -1._wp*(q_prim_vf(E_idx)%sf(i + 1, j, k) - q_prim_vf(E_idx)%sf(i - 1, j, k))/(2._wp*dx) ! force is the negative pressure gradient
                    local_force_contribution(2) = -1._wp*(q_prim_vf(E_idx)%sf(i, j + 1, k) - q_prim_vf(E_idx)%sf(i, j - 1, k))/(2._wp*dy)
                    cell_volume = abs(dx*dy)
                    ! add the 3D component of the pressure gradient, if we are working in 3 dimensions
                    if (num_dims == 3) then
                        dz = z_cc(k + 1) - z_cc(k)
                        local_force_contribution(3) = -1._wp*(q_prim_vf(E_idx)%sf(i, j, k + 1) - q_prim_vf(E_idx)%sf(i, j, k - 1))/(2._wp*dz)
                        cell_volume = abs(cell_volume*dz)
                    else
                        local_force_contribution(3) = 0._wp
                    end if

                    ! Update the force values atomically to prevent race conditions
                    call s_cross_product(radial_vector, local_force_contribution, local_torque_contribution)

                    ! get the viscous stress and add its contribution if that is considered
                    ! TODO :: This is really bad code
                    ! if (.false.) then
                    if (viscous) then
                        ! get the linear force component first
                        call s_compute_viscous_stress_tensor(viscous_stress_div_1, q_prim_vf, dynamic_viscosity, i - 1, j, k)
                        call s_compute_viscous_stress_tensor(viscous_stress_div_2, q_prim_vf, dynamic_viscosity, i + 1, j, k)
                        viscous_stress_div = (viscous_stress_div_2 - viscous_stress_div_1)/(2._wp*dx) ! get the x derivative of the viscous stress tensor
                        local_force_contribution(1:3) = local_force_contribution(1:3) + viscous_stress_div(1, 1:3) ! add te x components of the derivative to the force
                        do l = 1, 3
                            ! take the cross products for the torque component
                            call s_cross_product(radial_vector, viscous_stress_div_1(l, 1:3), viscous_stress_div_1(l, 1:3))
                            call s_cross_product(radial_vector, viscous_stress_div_2(l, 1:3), viscous_stress_div_2(l, 1:3))
                        end do

                        viscous_stress_div = (viscous_stress_div_2 - viscous_stress_div_1)/(2._wp*dx) ! get the x derivative of the cross product
                        local_torque_contribution(1:3) = local_torque_contribution(1:3) + viscous_stress_div(1, 1:3) ! apply the cross product derivative to the torque

                        call s_compute_viscous_stress_tensor(viscous_stress_div_1, q_prim_vf, dynamic_viscosity, i, j - 1, k)
                        call s_compute_viscous_stress_tensor(viscous_stress_div_2, q_prim_vf, dynamic_viscosity, i, j + 1, k)
                        viscous_stress_div = (viscous_stress_div_2 - viscous_stress_div_1)/(2._wp*dy)
                        local_force_contribution(1:3) = local_force_contribution(1:3) + viscous_stress_div(2, 1:3)
                        do l = 1, 3
                            call s_cross_product(radial_vector, viscous_stress_div_1(l, 1:3), viscous_stress_div_1(l, 1:3))
                            call s_cross_product(radial_vector, viscous_stress_div_2(l, 1:3), viscous_stress_div_2(l, 1:3))
                        end do

                        viscous_stress_div = (viscous_stress_div_2 - viscous_stress_div_1)/(2._wp*dy)
                        local_torque_contribution(1:3) = local_torque_contribution(1:3) + viscous_stress_div(2, 1:3)

                        if (num_dims == 3) then
                            call s_compute_viscous_stress_tensor(viscous_stress_div_1, q_prim_vf, dynamic_viscosity, i, j, k - 1)
                            call s_compute_viscous_stress_tensor(viscous_stress_div_2, q_prim_vf, dynamic_viscosity, i, j, k + 1)
                            viscous_stress_div = (viscous_stress_div_2 - viscous_stress_div_1)/(2._wp*dz)
                            local_force_contribution(1:3) = local_force_contribution(1:3) + viscous_stress_div(3, 1:3)
                            do l = 1, 3
                                call s_cross_product(radial_vector, viscous_stress_div_1(l, 1:3), viscous_stress_div_1(l, 1:3))
                                call s_cross_product(radial_vector, viscous_stress_div_2(l, 1:3), viscous_stress_div_2(l, 1:3))
                            end do
                            viscous_stress_div = (viscous_stress_div_2 - viscous_stress_div_1)/(2._wp*dz)
                            local_torque_contribution(1:3) = local_torque_contribution(1:3) + viscous_stress_div(3, 1:3)
                        end if
                    end if

                    do l = 1, 3
                        $:GPU_ATOMIC(atomic='update')
                        forces(ib_idx, l) = forces(ib_idx, l) + (local_force_contribution(l)*cell_volume)
                        $:GPU_ATOMIC(atomic='update')
                        torques(ib_idx, l) = torques(ib_idx, l) + local_torque_contribution(l)*cell_volume
                    end do
                end if

@codeant-ai
Contributor

codeant-ai bot commented Jan 9, 2026

Nitpicks 🔍

🔒 No security issues identified
⚡ Recommended areas for review

  • Viscous tensor corruption
    In s_compute_ib_forces the code calls s_cross_product with the same array used for an input and for the output (e.g. passing viscous_stress_div_1(l,1:3) as both input b and output c). That will overwrite viscous stress data and corrupt subsequent calculations. The block also includes repeated assignments and unclear intent for torque computation.

  • Possible OOB access
    s_compute_viscous_stress_tensor indexes neighbors with i±1, j±1, k±1 and directly computes (x_cc(i+1)-x_cc(i-1)) and accesses q_prim_vf(...%sf(i±1,...)). If this routine is ever called for boundary indices (or called without guaranteeing interior indices) it will read out-of-bounds memory. Callers must guarantee safe indices or the routine should guard against boundaries.

  • Moment normalization
    The new moment-of-inertia routine creates and uses both axis and normal_axis but normalizes axis incorrectly (uses sqrt(sum(axis)) instead of sqrt(sum(axis**2))) and then mixes axis and normal_axis when projecting. When p == 0 the code sets normal_axis but still calls dot_product(axis, position) which can use an uninitialized or non-normalized axis. This will produce incorrect projections and wrong moments.

  • Possible Bug
    In s_get_viscous there is a call that passes the same array for both the z-derivative and the y-derivative slots to s_compute_fd_gradient (i.e., passing dq_prim_dy_qp where dq_prim_dz_qp likely intended). This will produce incorrect z-gradient values when p>0 code path is expected. Validate and correct the arguments.

  • Numerical robustness
    The quadratic formula used to compute the levelset evaluates sqrt(B^2 - 4AC) (discriminant). Small negative values due to round-off can cause NaNs; also A could be extremely small leading to large/unstable division. Add guards to clamp small negative discriminant to zero and handle near-zero A.
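
On that last point, a generic guard of the kind being requested, shown as a standalone sketch rather than the PR's levelset code (the tolerance and fallback are arbitrary choices):

program discriminant_guard_sketch
  implicit none
  integer, parameter :: wp = kind(1.d0)
  real(wp), parameter :: tiny_a = 1.e-14_wp
  real(wp) :: a, b, c, disc, root

  a = 1.e-16_wp; b = 2._wp; c = -1._wp  ! nearly degenerate quadratic a*t**2 + b*t + c = 0

  disc = max(b**2 - 4._wp*a*c, 0._wp)   ! clamp round-off negatives to zero before sqrt
  if (abs(a) > tiny_a) then
    root = (-b + sqrt(disc))/(2._wp*a)
  else
    root = -c/b                         ! near-zero leading coefficient: use the linear root
  end if
  print *, 'root =', root               ! 0.5 for these inputs
end program discriminant_guard_sketch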

@codeant-ai
Contributor

codeant-ai bot commented Jan 9, 2026

CodeAnt AI finished reviewing your PR.

Contributor

@cubic-dev-ai bot left a comment


12 issues found across 13 files

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="src/simulation/m_viscous.fpp">

<violation number="1" location="src/simulation/m_viscous.fpp:1519">
P3: `viscous_stress_tensor` is fully overwritten; declare it as `intent(out)` instead of `intent(inout)` for clearer semantics and better compiler diagnostics/optimization.</violation>

<violation number="2" location="src/simulation/m_viscous.fpp:1536">
P2: Guard y-/z-derivative work by dimension (e.g., only compute `dx(2)` and `velocity_gradient_tensor(:,2)` when `num_dims >= 2`). This avoids unnecessary memory reads and reduces the chance of invalid index access in 1D configurations.</violation>

<violation number="3" location="src/simulation/m_viscous.fpp:1543">
P1: `s_compute_viscous_stress_tensor` does unchecked `i±1/j±1/k±1` accesses; when called near boundaries (or with insufficient `buff_size`), it can read out-of-bounds. Add an explicit precondition (and enforce it at call sites) or implement one-sided stencils / index clamping for boundary-adjacent cells.</violation>
</file>

<file name="src/common/m_ib_patches.fpp">

<violation number="1" location="src/common/m_ib_patches.fpp:779">
P3: Comment refers to "rectangle" but this is the ellipse subroutine. Should say "ellipse's centroid and length information".</violation>

<violation number="2" location="src/common/m_ib_patches.fpp:786">
P3: Comment refers to "rectangle" but this is the ellipse subroutine. Should describe ellipse geometry check instead.</violation>
</file>

<file name="src/simulation/m_ibm.fpp">

<violation number="1" location="src/simulation/m_ibm.fpp:209">
P2: Type inconsistency: `moving_ibm` is an integer but is being compared to `0._wp` (a real literal). All other comparisons in the codebase use integer values. This should be `> 0` for consistency.</violation>

<violation number="2" location="src/simulation/m_ibm.fpp:1074">
P1: Don’t call s_cross_product in-place (same actual for INTENT(IN) and INTENT(OUT)); this is undefined behavior and can corrupt the viscous torque computation (and is repeated in multiple blocks). Use a temporary vector for the result before assigning back.</violation>

<violation number="3" location="src/simulation/m_ibm.fpp:1169">
P1: Axis normalization is incorrect: sqrt(sum(axis)) is not the vector magnitude. This will compute a wrong rotation axis (and can become invalid if sum(axis) < 0).</violation>

<violation number="4" location="src/simulation/m_ibm.fpp:1206">
P1: Bug: The projection uses the unnormalized `axis` in `dot_product(axis, position)` but uses the normalized `normal_axis` for scaling. This should be `dot_product(normal_axis, position)` for the projection formula to be mathematically correct.</violation>
</file>
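
On violation 2 in the block above (the in-place s_cross_product call), a minimal standalone illustration of routing the result through a temporary instead of aliasing an intent(in) argument with the intent(out) result (plain Fortran; s_cross_product here is a local stand-in, not the MFC routine):

program cross_product_temp_sketch
  implicit none
  integer, parameter :: wp = kind(1.d0)
  real(wp), dimension(3) :: radial_vector, stress_row, tmp_cross

  radial_vector = [1._wp, 0._wp, 0._wp]
  stress_row = [0._wp, 1._wp, 0._wp]

  ! write the result into a temporary rather than passing stress_row as both the
  ! intent(in) operand and the intent(out) result, then copy it back
  call s_cross_product(radial_vector, stress_row, tmp_cross)
  stress_row = tmp_cross
  print *, 'r x row =', stress_row      ! (0, 0, 1) for these inputs

contains

  pure subroutine s_cross_product(a, b, c)
    real(wp), dimension(3), intent(in) :: a, b
    real(wp), dimension(3), intent(out) :: c
    c(1) = a(2)*b(3) - a(3)*b(2)
    c(2) = a(3)*b(1) - a(1)*b(3)
    c(3) = a(1)*b(2) - a(2)*b(1)
  end subroutine s_cross_product

end program cross_product_temp_sketch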

<file name="src/common/m_compute_levelset.fpp">

<violation number="1" location="src/common/m_compute_levelset.fpp:379">
P1: Division by zero when computing `normal_vector` at the ellipse center. When `xy_local` is `[0, 0, 0]`, `dot_product(normal_vector, normal_vector)` is zero, causing division by zero. Add a guard similar to `s_circle_levelset` and `s_sphere_levelset` which check for this case using `f_approx_equal`.</violation>
</file>
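
On this violation (division by zero at the ellipse center), a generic sketch of the zero-norm guard being requested (standalone Fortran; the tolerance and fallback direction are arbitrary, and MFC's f_approx_equal helper is not reproduced here):

program normal_guard_sketch
  implicit none
  integer, parameter :: wp = kind(1.d0)
  real(wp), parameter :: eps = 1.e-12_wp
  real(wp), dimension(3) :: normal_vector
  real(wp) :: norm_sq

  normal_vector = [0._wp, 0._wp, 0._wp]   ! degenerate case: the cell sits at the ellipse center

  norm_sq = dot_product(normal_vector, normal_vector)
  if (norm_sq < eps) then
    normal_vector = [1._wp, 0._wp, 0._wp] ! arbitrary but well-defined fallback direction
  else
    normal_vector = normal_vector/sqrt(norm_sq)
  end if
  print *, 'levelset normal =', normal_vector
end program normal_guard_sketch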

<file name="examples/2D_ibm_ellipse/case.py">

<violation number="1" location="examples/2D_ibm_ellipse/case.py:4">
P2: Unused variable: `Mu` is defined but never used. Either remove it if unnecessary, or incorporate it into the configuration if it was intended to be used (e.g., for viscosity calculations).</violation>
</file>

<file name="src/pre_process/m_check_ib_patches.fpp">

<violation number="1" location="src/pre_process/m_check_ib_patches.fpp:66">
P1: Missing validation for ellipse IB patch parameters. Unlike all other geometry types that call validation subroutines, this branch only prints a debug message. The ellipse patch requires `x_centroid`, `y_centroid`, `length_x`, and `length_y` to be properly set (per `s_ib_ellipse` in m_ib_patches.fpp). Consider creating an `s_check_ellipse_ib_patch_geometry(i)` subroutine similar to other geometry validators.</violation>
</file>
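
A hypothetical sketch of the kind of validator being suggested here, reduced to a standalone example (the derived type, function name, and messages are made up; the real routine would follow the PROHIBIT/s_mpi_abort pattern and also check the centroids, as described above):

program ellipse_patch_check_sketch
  implicit none
  integer, parameter :: wp = kind(1.d0)

  type :: t_patch                       ! hypothetical stand-in for the ellipse IB patch parameters
    real(wp) :: x_centroid, y_centroid, length_x, length_y
  end type t_patch

  type(t_patch) :: patch

  patch = t_patch(0._wp, 0._wp, 0.4_wp, 0.2_wp)
  if (f_ellipse_patch_is_valid(patch)) then
    print *, 'ellipse patch parameters accepted'
  else
    ! the real validator would call s_mpi_abort with a message rather than printing
    print *, 'invalid ellipse patch: length_x and length_y must be positive'
  end if

contains

  pure logical function f_ellipse_patch_is_valid(patch) result(ok)
    type(t_patch), intent(in) :: patch
    ok = patch%length_x > 0._wp .and. patch%length_y > 0._wp
  end function f_ellipse_patch_is_valid

end program ellipse_patch_check_sketch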

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Contributor

@coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/common/m_ib_patches.fpp (1)

109-126: Fix copy-paste comment and add validation for ellipse axis lengths.

Cross-tooling consistency for geometry 6 is correct (intentionally deprecated for ICPP patches but active for IB). However, two issues need addressing:

  1. Comment error (line 786): "Checking whether the rectangle covers..." should say "ellipse" in the s_ib_ellipse subroutine.

  2. Missing input validation: ellipse_coeffs (lines 782–783) is computed from length_x and length_y, then used in divisions (line 799 in s_ib_ellipse and line 363 in s_ellipse_levelset) without validating the axis lengths are non-zero. Add @:ASSERT macros to ensure patch_ib(patch_id)%length_x > 0 and patch_ib(patch_id)%length_y > 0 before computing ellipse_coeffs.

src/simulation/m_time_steppers.fpp (1)

628-638: Fix semantic mismatch: s_compute_ib_forces expects dynamic viscosity, not inverse Reynolds number.

At line 630, 1._wp/fluid_pp(1)%Re(1) passes inverse Reynolds number but the subroutine expects actual dynamic viscosity (parameter named dynamic_viscosity and used directly in viscous stress tensor computation). Additionally, this creates a divide-by-zero when Re==0 (inviscid flows). The correct approach requires either (1) converting Re to actual viscosity using appropriate reference scales before the call, or (2) restructuring the subroutine to accept inverse Reynolds and handle the inviscid case explicitly.
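
For context on the unit question, the standard nondimensional relation is restated below; which quantity the routine should actually receive depends on the reference scales the solver assumes, which are not stated in this excerpt:

\mathrm{Re} = \frac{\rho_{\mathrm{ref}}\, U_{\mathrm{ref}}\, L_{\mathrm{ref}}}{\mu}
\quad \Longrightarrow \quad
\mu = \frac{\rho_{\mathrm{ref}}\, U_{\mathrm{ref}}\, L_{\mathrm{ref}}}{\mathrm{Re}},
\qquad
\mu^{*} \equiv \frac{\mu}{\rho_{\mathrm{ref}}\, U_{\mathrm{ref}}\, L_{\mathrm{ref}}} = \frac{1}{\mathrm{Re}},

so 1/Re coincides with the dimensionless dynamic viscosity only under that particular scaling.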

🤖 Fix all issues with AI agents
In @src/common/m_ib_patches.fpp:
- Around line 768-807: s_ib_ellipse divides by ellipse_coeffs(1:2) (derived from
patch_ib(patch_id)%length_x/length_y) which can be zero and cause
divide-by-zero/NaNs; validate these axes once before the parallel loop (using a
small tolerance, e.g. eps = 0._wp or machine epsilon) and fast-fail: if either
ellipse_coeffs(1) or ellipse_coeffs(2) is <= eps then skip marking (return from
s_ib_ellipse or skip the loop) so no division occurs; ensure the check
references s_ib_ellipse, ellipse_coeffs, patch_ib(patch_id)%length_x/length_y
and ib_markers_sf so the guard is co-located with the existing setup code.

In @src/pre_process/m_check_ib_patches.fpp:
- Around line 65-66: Add a validation routine for ellipse IB patches: create an
impure subroutine named s_check_ellipse_ib_patch_geometry(patch_id) that uses
s_int_to_str to build the id string and the existing PROHIBIT-style check (or
equivalent) to enforce n > 0, p == 0, length_x > 0, length_y > 0, and
non-default x_centroid and y_centroid (use f_is_default for centroid checks);
then call s_check_ellipse_ib_patch_geometry(patch_id) in the branch handling
patch_ib(i)%geometry == 6 (the "Ellipse Patch" branch) so ellipse patches follow
the same validation pattern as other geometries.

In @src/simulation/m_ibm.fpp:
- Line 1169: The normalization uses sum(axis) incorrectly; replace the divisor
with the vector's Euclidean norm (sqrt of sum of squares) to compute normal_axis
= axis / sqrt(sum(axis*axis)) (or use an existing norm function if available)
and also guard against division by zero by checking the norm (e.g., handle
zero-length axis before dividing).

In @src/simulation/m_rhs.fpp:
- Around line 1635-1651: The else branch for the cylindrical boundary
viscous-stress call mistakenly passes dq_prim_dy_vf twice; update the third
derivative argument in the call to s_compute_viscous_stress_cylindrical_boundary
(the dq_prim_* argument currently duplicated) to
dq_prim_dz_vf(mom_idx%beg:mom_idx%end) so it matches the p>0 branch and provides
the correct z-derivative placeholder for 2D cylindrical handling.

In @tests/5600D63B/golden.txt:
- Around line 1-11: The file tests/5600D63B/golden.txt is missing a trailing
newline; update the file (golden.txt) so the final line is terminated with a
newline character by adding a newline at the end of the file and commit the
change.
🧹 Nitpick comments (4)
src/simulation/m_viscous.fpp (1)

1515-1575: Fix typos in comments.

The implementation is correct, but there are several typographical errors in the comments:

  • Line 1515: "particule" → "particle"
  • Line 1529: "diriviatives" → "derivatives", "spacial" → "spatial"
  • Line 1534: "equaiont" → "equation"
📝 Proposed fix for typos
-    ! computes the viscous stress tensor at a particule i, j, k element
+    ! computes the viscous stress tensor at a particle (i, j, k) element
-        ! zero the viscous stress, collection of velocity diriviatives, and spacial finite differences
+        ! zero the viscous stress, collection of velocity derivatives, and spatial finite differences
-        ! get the change in x used in the finite difference equaiont
+        ! get the change in x used in the finite difference equation
src/simulation/m_ibm.fpp (3)

1023-1027: Remove unused variable vel.

The variable vel is declared at line 1026 but never used in this subroutine.

♻️ Proposed fix
-        real(wp), dimension(1:3) :: local_force_contribution, radial_vector, local_torque_contribution, vel
+        real(wp), dimension(1:3) :: local_force_contribution, radial_vector, local_torque_contribution

1064-1066: Clean up TODO comment and commented-out code.

The TODO comment and commented-out conditional ! if (.false.) then suggest incomplete cleanup from development. These should be removed or addressed before merging.

♻️ Proposed fix
                         ! get the viscous stress and add its contribution if that is considered
-                        ! TODO :: This is really bad code
-                        ! if (.false.) then
                         if (viscous) then

1180-1180: Use working precision literal for consistency.

For consistency with the codebase style and to ensure proper precision, 0.4 should be 0.4_wp.

♻️ Proposed fix
         elseif (patch_ib(ib_marker)%geometry == 8) then ! sphere
-            patch_ib(ib_marker)%moment = 0.4*patch_ib(ib_marker)%mass*(patch_ib(ib_marker)%radius)**2
+            patch_ib(ib_marker)%moment = 0.4_wp*patch_ib(ib_marker)%mass*(patch_ib(ib_marker)%radius)**2
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 31c938f and e909026.

📒 Files selected for processing (13)
  • examples/2D_ibm_ellipse/case.py
  • examples/2D_mibm_shock_cylinder/case.py
  • src/common/m_compute_levelset.fpp
  • src/common/m_ib_patches.fpp
  • src/pre_process/m_check_ib_patches.fpp
  • src/simulation/m_ibm.fpp
  • src/simulation/m_rhs.fpp
  • src/simulation/m_time_steppers.fpp
  • src/simulation/m_viscous.fpp
  • tests/5600D63B/golden-metadata.txt
  • tests/5600D63B/golden.txt
  • tests/7FA04E95/golden-metadata.txt
  • tests/7FA04E95/golden.txt
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop

**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_<feature> prefix (e.g., m_transport)
Name public subroutines as s_<verb>_<noun> (e.g., s_compute_flux) and functions as f_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid goto statements (except unavoidable legacy); avoid global state (COMMON, save)
Every variable must have intent(in|out|inout) specification and appropriate dimension / allocatable / pointer
Use s_mpi_abort(<msg>) for error termination instead of stop
Use !> style documentation for header comments; follow Doxygen Fortran format with !! @param and !! @return tags
Use implicit none statement in all modules
Use private declaration followed by explicit public exports in modules
Use derived types with pointers for encapsulation (e.g., pointer, dimension(:,:,:) => null())
Use pure and elemental attributes for side-effect-free functions; combine them for array ...

Files:

  • src/common/m_ib_patches.fpp
  • src/simulation/m_rhs.fpp
  • src/simulation/m_time_steppers.fpp
  • src/pre_process/m_check_ib_patches.fpp
  • src/common/m_compute_levelset.fpp
  • src/simulation/m_viscous.fpp
  • src/simulation/m_ibm.fpp
src/**/*.fpp

📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)

src/**/*.fpp: Use .fpp file extension for Fypp preprocessed files; CMake transpiles them to .f90
Start module files with Fypp include for macros: #:include 'macros.fpp'
Use the fypp ASSERT macro for validating conditions: @:ASSERT(predicate, message)
Use fypp macro @:ALLOCATE(var1, var2) for device-aware allocation instead of standard Fortran allocate
Use fypp macro @:DEALLOCATE(var1, var2) for device-aware deallocation instead of standard Fortran deallocate

Files:

  • src/common/m_ib_patches.fpp
  • src/simulation/m_rhs.fpp
  • src/simulation/m_time_steppers.fpp
  • src/pre_process/m_check_ib_patches.fpp
  • src/common/m_compute_levelset.fpp
  • src/simulation/m_viscous.fpp
  • src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with $:GPU_ROUTINE(function_name='...', parallelism='[seq]') immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from src/common/include/parallel_macros.fpp instead
Wrap tight loops with $:GPU_PARALLEL_FOR(private='[...]', copy='[...]') macro; add collapse=n for safe nested loop merging
Declare loop-local variables with private='[...]' in GPU parallel loop macros
Allocate large arrays with managed or move them into a persistent $:GPU_ENTER_DATA(...) region at start-up
Do not place stop or error stop inside device code

Files:

  • src/simulation/m_rhs.fpp
  • src/simulation/m_time_steppers.fpp
  • src/simulation/m_viscous.fpp
  • src/simulation/m_ibm.fpp
🧠 Learnings (13)
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Add and update focused tests for changes; ensure no regressions in golden test outputs without clear justification

Applied to files:

  • tests/5600D63B/golden-metadata.txt
  • tests/7FA04E95/golden-metadata.txt
  • tests/7FA04E95/golden.txt
  • tests/5600D63B/golden.txt
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: After each successful build step, run focused tests using `./mfc.sh test -j $(nproc) -f EA8FA07E -t 9E2CA336` instead of running all tests

Applied to files:

  • tests/5600D63B/golden-metadata.txt
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Review Fypp macros in src/<subprogram>/include/ before reviewing generated code

Applied to files:

  • tests/5600D63B/golden-metadata.txt
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Fypp macros are defined in `src/<subprogram>/include/` directories where `<subprogram>` ∈ {`simulation`,`common`,`pre_process`,`post_process`}; scan these first when understanding macro usage

Applied to files:

  • tests/5600D63B/golden-metadata.txt
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/**/*.fpp : Use `.fpp` file extension for Fypp preprocessed files; CMake transpiles them to `.f90`

Applied to files:

  • tests/5600D63B/golden-metadata.txt
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

Applied to files:

  • tests/5600D63B/golden-metadata.txt
  • tests/7FA04E95/golden-metadata.txt
  • src/simulation/m_time_steppers.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Draft a step-by-step plan before making changes; build after each step using `./mfc.sh build -t pre_process simulation -j $(nproc)`

Applied to files:

  • tests/5600D63B/golden-metadata.txt
  • tests/7FA04E95/golden-metadata.txt
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Compile with Cray `ftn` or NVIDIA `nvfortran` for GPU offloading; also build CPU-only with GNU `gfortran` and Intel `ifx`/`ifort` for portability

Applied to files:

  • tests/5600D63B/golden-metadata.txt
  • tests/7FA04E95/golden-metadata.txt
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead

Applied to files:

  • tests/5600D63B/golden-metadata.txt
  • src/simulation/m_time_steppers.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)

Applied to files:

  • src/simulation/m_time_steppers.fpp
  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging

Applied to files:

  • src/simulation/m_time_steppers.fpp
  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Mark GPU-callable helpers with `$:GPU_ROUTINE(function_name='...', parallelism='[seq]')` immediately after declaration

Applied to files:

  • src/common/m_compute_levelset.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Use `private` declaration followed by explicit `public` exports in modules

Applied to files:

  • src/simulation/m_viscous.fpp
🧬 Code graph analysis (1)
src/common/m_ib_patches.fpp (1)
toolchain/mfc/test/cases.py (1)
  • alter_ib (377-422)
🪛 LanguageTool
tests/7FA04E95/golden.txt

[duplication] ~4-~4: Possible typo: repeated word
Context: ....0 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.0 1e-14 1e-14 1e-14 1e-14 1e-14 1e-14 0.0 0.0 0.0 0.0...

(ITALIAN_WORD_REPEAT_RULE)


[duplication] ~4-~4: Possible typo: repeated word
Context: ....0 0.0 -0.0 0.0 0.0 0.0 0.0 1e-14 1e-14 1e-14 1e-14 1e-14 1e-14 0.0 0.0 0.0 0.0 -0.0 0.0 0....

(ITALIAN_WORD_REPEAT_RULE)


[duplication] ~4-~4: Possible typo: repeated word
Context: ...0.0 0.0 0.0 0.0 1e-14 1e-14 1e-14 1e-14 1e-14 1e-14 0.0 0.0 0.0 0.0 -0.0 0.0 0.0 0.0 0.0 0....

(ITALIAN_WORD_REPEAT_RULE)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Oak Ridge | Frontier (gpu-acc)
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Georgia Tech | Phoenix (gpu-omp)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Github (macos, mpi, debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: Github (macos, mpi, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Georgia Tech | Phoenix (gpu-acc)
  • GitHub Check: Code Cleanliness Check
  • GitHub Check: Build & Publish
🔇 Additional comments (16)
tests/7FA04E95/golden-metadata.txt (1)

1-193: LGTM! Test metadata correctly captures build and environment configuration.

This golden metadata file appropriately documents the test execution environment, including compiler versions, build flags, and system information. The file serves as a reference for reproducibility and regression testing.

examples/2D_ibm_ellipse/case.py (3)

4-5: Verify unused viscosity constant.

The variable Mu is defined but never used in the configuration. If this represents dynamic viscosity, verify whether it should be incorporated into the Reynolds number calculation at Line 94, or if it's a leftover from development.


84-90: LGTM! Ellipse IB patch configuration is correct.

The ellipse geometry (code 6) is properly specified with centroid, semi-axes (length_x, length_y), and boundary condition. This configuration appropriately exercises the new ellipse patch support introduced in this PR.


91-95: Fluid parameters are correctly specified.

The gamma and Reynolds number settings are appropriate for this viscous ellipse IBM validation case.

examples/2D_mibm_shock_cylinder/case.py (1)

115-132: LGTM! Moving IBM configuration is complete and correct.

The circular IB patch is properly configured with:

  • Geometry 2 (circle) at (-0.5, 0.0)
  • Radius 0.5
  • Slip condition enabled
  • moving_ibm = 2 (dynamic/two-way coupling)
  • Initial velocities and angular velocities set to zero
  • Mass = 0.5 specified for dynamics

This configuration appropriately tests the new viscous stress contributions to IBM forces introduced in this PR.

tests/7FA04E95/golden.txt (1)

1-11: LGTM! Golden test outputs correctly capture simulation results.

The file contains expected numeric outputs for conservative variables (cons.1-5) at initial and final timesteps, plus IB marker data. This provides regression test coverage for the new ellipse IBM functionality.

Based on learnings: Add and update focused tests for changes; ensure no regressions in golden test outputs.

tests/5600D63B/golden-metadata.txt (1)

1-193: Golden metadata snapshot addition is fine; watch for host-specific churn.
Given this is generated on 2026-01-09 (Line 1) and includes machine/toolchain details, please ensure the project expects these fields to vary across CI runners. Based on learnings, keep golden updates narrowly-scoped and justified.

src/common/m_compute_levelset.fpp (1)

21-27: Public export update for s_ellipse_levelset is fine.
The module interface extension (Line 26-27) is clear.

src/simulation/m_viscous.fpp (1)

23-28: LGTM!

Public interface updated correctly with the renamed subroutine and new s_compute_viscous_stress_tensor export. Naming follows the s_<verb>_<noun> pattern as per coding guidelines.

src/simulation/m_ibm.fpp (7)

29-30: LGTM!

Correctly imports the m_viscous module to enable viscous stress tensor computation for IBM force calculations.


101-101: LGTM!

Correctly passes the angular velocity vector to compute the moment of inertia about the appropriate rotation axis.


202-225: LGTM!

Ghost-cell momentum initialization for moving IBs correctly accumulates density from all fluids and sets momentum consistent with the immersed boundary velocity.


1123-1134: LGTM!

Body forces are correctly applied after the MPI reduction to avoid double-counting across ranks. The implementation properly checks bf_x, bf_y, and bf_z flags before adding the respective acceleration components scaled by mass.


1163-1163: Verify 2D rotation axis convention.

In 2D simulations (p == 0), the rotation axis is set to [0, 1, 0] (y-axis). Typically, 2D rotations in the x-y plane occur about the z-axis [0, 0, 1]. Please verify this is the intended convention for this codebase.


1177-1178: Verify ellipse moment of inertia formula.

The formula 0.0625_wp * mass * (length_x² + length_y²) corresponds to (1/16) * m * (a² + b²). This is correct for an ellipse rotating about its center if length_x and length_y represent the full axis lengths (diameters), since I = (1/4) * m * ((length_x/2)² + (length_y/2)²) = (1/16) * m * (length_x² + length_y²).

Please confirm that length_x and length_y are full lengths rather than semi-axes in the patch definition.
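
For reference, the standard result for a uniform elliptical lamina about the perpendicular axis through its centroid, with semi-axes a and b and full axis lengths L_x = 2a, L_y = 2b, is

I_z = \tfrac{1}{4} m \left(a^{2} + b^{2}\right)
    = \tfrac{1}{16} m \left(L_x^{2} + L_y^{2}\right),

which matches the 0.0625 coefficient only if length_x and length_y are full lengths, as the comment above notes.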


1139-1139: LGTM!

Torques are correctly converted from the global coordinate frame to the IB's local coordinates using the inverse rotation matrix before assignment.

Contributor

@coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/simulation/m_ibm.fpp (1)

202-225: Refactor variable usage and add missing GPU loop decorators

Issues identified:

  1. Variable i reused for multiple purposes: Line 211 uses i for fluid iteration, and line 216 reuses it for dimension iteration within the same scope. This violates the guideline: "Each variable should have one clear purpose; do not use the same variable for multiple purposes."

  2. Missing GPU loop decorators: The inner sequential loops at lines 211-213 and 216-219 should use $:GPU_LOOP(parallelism='[seq]') to be consistent with similar patterns elsewhere in the file (e.g., lines 265-269, 333-338).

♻️ Proposed fix
-        $:GPU_PARALLEL_LOOP(private='[i,j,k,patch_id,rho]', copyin='[E_idx,momxb]', collapse=3)
+        $:GPU_PARALLEL_LOOP(private='[i_fluid,i_dim,j,k,patch_id,rho]', copyin='[E_idx,momxb]', collapse=3)
         do l = 0, p
             do k = 0, n
                 do j = 0, m
                     patch_id = ib_markers%sf(j, k, l)
                     if (patch_id /= 0) then
                         q_prim_vf(E_idx)%sf(j, k, l) = 1._wp
                         if (patch_ib(patch_id)%moving_ibm > 0._wp) then
                             rho = 0._wp
-                            do i = 1, num_fluids
-                                rho = rho + q_prim_vf(contxb + i - 1)%sf(j, k, l)
+                            $:GPU_LOOP(parallelism='[seq]')
+                            do i_fluid = 1, num_fluids
+                                rho = rho + q_prim_vf(contxb + i_fluid - 1)%sf(j, k, l)
                             end do
 
                             ! Sets the momentum
-                            do i = 1, num_dims
-                                q_cons_vf(momxb + i - 1)%sf(j, k, l) = patch_ib(patch_id)%vel(i)*rho
-                                q_prim_vf(momxb + i - 1)%sf(j, k, l) = patch_ib(patch_id)%vel(i)
+                            $:GPU_LOOP(parallelism='[seq]')
+                            do i_dim = 1, num_dims
+                                q_cons_vf(momxb + i_dim - 1)%sf(j, k, l) = patch_ib(patch_id)%vel(i_dim)*rho
+                                q_prim_vf(momxb + i_dim - 1)%sf(j, k, l) = patch_ib(patch_id)%vel(i_dim)
                             end do
                         end if
                     end if

Based on coding guidelines requiring single-purpose variables and consistent GPU loop decoration.

🤖 Fix all issues with AI agents
In @src/simulation/m_ibm.fpp:
- Around line 1068-1105: The cross-product calls (s_cross_product) currently
pass viscous_stress_div_1/2 as both input and output, corrupting the stress
tensors before the finite-difference; allocate temporary 3-element arrays (e.g.,
tmp_cross_1, tmp_cross_2) and for each l call s_cross_product(radial_vector,
viscous_stress_div_1(l,1:3), tmp_cross_1) and s_cross_product(radial_vector,
viscous_stress_div_2(l,1:3), tmp_cross_2), then compute the derivative using
(tmp_cross_2 - tmp_cross_1)/(2._wp*dx or dy or dz) and add that result to
local_torque_contribution(1:3); apply this change for the x-, y-, and
z-direction blocks referencing s_cross_product, viscous_stress_div_1,
viscous_stress_div_2, viscous_stress_div, and local_torque_contribution.
- Around line 1164-1169: The normalization is inconsistent: the zero check uses
sqrt(sum(axis**2)) but normal_axis is computed with axis/sqrt(sum(axis)) which
omits squaring; update the normalization in the else branch to use axis /
sqrt(sum(axis**2)) so it matches the magnitude check (adjust the expression
where normal_axis is assigned in the same block that sets
patch_ib(ib_marker)%moment and returns).
- Line 1176: The rectangle moment-of-inertia uses the wrong divisor; update the
assignment to patch_ib(ib_marker)%moment inside the rectangle (geometry == 3)
branch to use /12._wp instead of /6._wp so it computes I = m*(length_x**2 +
length_y**2)/12 for full-width dimensions (refer to the existing
patch_ib(ib_marker)%moment assignment to locate the line).
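
To illustrate the first fix in the list above, a small standalone sketch of the intended pattern (this is not MFC code; the s_cross_product below is a stand-in with an assumed (a, b, result) interface, and all values are dummies):

program cross_product_temp_sketch
    implicit none
    integer, parameter :: wp = selected_real_kind(15)
    real(wp), dimension(1:3) :: radial_vector, row_minus, row_plus
    real(wp), dimension(1:3) :: tmp_cross_1, tmp_cross_2, local_torque_contribution
    real(wp) :: dx

    radial_vector = [0.5_wp, 0.25_wp, 0._wp]   ! dummy lever arm
    row_minus = [1._wp, 0._wp, 0._wp]          ! stress-divergence row at x - dx (dummy)
    row_plus = [2._wp, 0._wp, 0._wp]           ! stress-divergence row at x + dx (dummy)
    local_torque_contribution = 0._wp
    dx = 0.1_wp

    ! Write each cross product into its own temporary so the inputs are
    ! not overwritten before the central difference is taken
    call s_cross_product(radial_vector, row_minus, tmp_cross_1)
    call s_cross_product(radial_vector, row_plus, tmp_cross_2)

    local_torque_contribution = local_torque_contribution &
                                + (tmp_cross_2 - tmp_cross_1)/(2._wp*dx)

    print *, local_torque_contribution
contains
    ! Stand-in cross product (assumed interface)
    subroutine s_cross_product(a, b, c)
        real(wp), dimension(1:3), intent(in) :: a, b
        real(wp), dimension(1:3), intent(out) :: c
        c(1) = a(2)*b(3) - a(3)*b(2)
        c(2) = a(3)*b(1) - a(1)*b(3)
        c(3) = a(1)*b(2) - a(2)*b(1)
    end subroutine s_cross_product
end program cross_product_temp_sketch
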
🧹 Nitpick comments (3)
src/simulation/m_ibm.fpp (3)

1064-1066: Remove TODO comment and dead code

Lines 1064-1065 contain a TODO comment stating "This is really bad code" and a commented-out if (.false.) statement. After addressing the critical cross-product bug, please remove these developer notes and dead code.


1026-1026: Potentially unused variable vel

The variable vel is declared on line 1026 but does not appear to be used in the s_compute_ib_forces subroutine. Consider removing it unless it's planned for future use.


1-1236: File exceeds size limit

This module is 1,236 lines, exceeding the 1,000-line limit specified in the coding guidelines. Consider splitting the module into smaller, focused units—for example, separating ghost-point logic, force/torque computation, and moment-of-inertia calculation into helper modules.

Based on coding guidelines requiring files ≤ 1,000 lines.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e909026 and e5cd354.

📒 Files selected for processing (1)
  • src/simulation/m_ibm.fpp
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop

**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_<feature> prefix (e.g., m_transport)
Name public subroutines as s_<verb>_<noun> (e.g., s_compute_flux) and functions as f_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid goto statements (except unavoidable legacy); avoid global state (COMMON, save)
Every variable must have intent(in|out|inout) specification and appropriate dimension / allocatable / pointer
Use s_mpi_abort(<msg>) for error termination instead of stop
Use !> style documentation for header comments; follow Doxygen Fortran format with !! @param and !! @return tags
Use implicit none statement in all modules
Use private declaration followed by explicit public exports in modules
Use derived types with pointers for encapsulation (e.g., pointer, dimension(:,:,:) => null())
Use pure and elemental attributes for side-effect-free functions; combine them for array ...

Files:

  • src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with $:GPU_ROUTINE(function_name='...', parallelism='[seq]') immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from src/common/include/parallel_macros.fpp instead
Wrap tight loops with $:GPU_PARALLEL_FOR(private='[...]', copy='[...]') macro; add collapse=n for safe nested loop merging
Declare loop-local variables with private='[...]' in GPU parallel loop macros
Allocate large arrays with managed or move them into a persistent $:GPU_ENTER_DATA(...) region at start-up
Do not place stop or error stop inside device code

Files:

  • src/simulation/m_ibm.fpp
src/**/*.fpp

📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)

src/**/*.fpp: Use .fpp file extension for Fypp preprocessed files; CMake transpiles them to .f90
Start module files with Fypp include for macros: #:include 'macros.fpp'
Use the fypp ASSERT macro for validating conditions: @:ASSERT(predicate, message)
Use fypp macro @:ALLOCATE(var1, var2) for device-aware allocation instead of standard Fortran allocate
Use fypp macro @:DEALLOCATE(var1, var2) for device-aware deallocation instead of standard Fortran deallocate

Files:

  • src/simulation/m_ibm.fpp
🧠 Learnings (5)
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)

Applied to files:

  • src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: Code Cleanliness Check
🔇 Additional comments (4)
src/simulation/m_ibm.fpp (4)

29-29: LGTM: Viscous module import

The addition of use m_viscous appropriately supports the new viscous stress coupling functionality.


100-103: LGTM: Moment of inertia initialization

The call to s_compute_moment_of_inertia with patch_ib(i)%angular_vel correctly matches the updated signature requiring the axis parameter.


1123-1134: LGTM: Body force application

Body forces are correctly applied after MPI reduction to avoid double counting across ranks. The comment clearly documents this design choice.


1173-1180: LGTM: Analytic moment of inertia formulas

The analytic formulas for circle (0.5·m·r²) and ellipse (0.0625·m·(lx² + ly²)) are correct for 2D geometries, assuming standard conventions (circle: disk, ellipse: lx/ly are diameters).

Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (all issues)

Check if these issues are valid — if so, understand the root cause of each and fix them.


<file name="src/pre_process/m_check_ib_patches.fpp">

<violation number="1" location="src/pre_process/m_check_ib_patches.fpp:97">
P0: Subroutine name mismatch: opening declaration is `s_check_ellipse_ib_patch_geometry` but closing statement says `s_check_circle_ib_patch_geometry`. This will cause a Fortran compilation error.</violation>
</file>


@codeant-ai
Contributor

codeant-ai bot commented Jan 10, 2026

CodeAnt AI is running Incremental review



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/pre_process/m_check_ib_patches.fpp (1)

62-71: Update the invalid-geometry error message to include ellipse (and model) geometry IDs.

On Lines 68-70, the abort message still says 2-4, 8-10, 11 or 12, but you now accept 6 (ellipse) on Lines 65-66 (and already accept 5 on Lines 62-64). This will mislead users when they mistype a geometry ID.

Proposed fix
-                    call s_prohibit_abort("Invalid IB patch", &
-                                          "patch_ib("//trim(iStr)//")%geometry must be "// &
-                                          "2-4, 8-10, 11 or 12.")
+                    call s_prohibit_abort("Invalid IB patch", &
+                                          "patch_ib("//trim(iStr)//")%geometry must be "// &
+                                          "2-6, 8-12.")
🤖 Fix all issues with AI agents
In @src/pre_process/m_check_ib_patches.fpp:
- Around line 99-116: Update the Doxygen header for subroutine
s_check_ellipse_ib_patch_geometry to refer to an "ellipse patch" (not "circle
patch") and reflow the comment lines so the leading comment markers and
indentation remain formatter-friendly; ensure the @param tag still documents
patch_id and preserve existing alignment/spacing of the comment block to avoid
triggering the Pretty comment-alignment check.
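
A sketch of what the corrected header could look like, following the !> / !! @param convention quoted in the path-based instructions below (the descriptive wording and the argument list are assumptions; only the format and the matching closing name are the point):

    !> Checks that the ellipse IB patch parameters are geometrically consistent.
    !! @param patch_id Patch identifier
    subroutine s_check_ellipse_ib_patch_geometry(patch_id)
        integer, intent(in) :: patch_id
        ! ... existing checks on patch_ib(patch_id) unchanged ...
    end subroutine s_check_ellipse_ib_patch_geometry   ! closing name must match the declaration
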
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e5cd354 and 3c41c90.

📒 Files selected for processing (1)
  • src/pre_process/m_check_ib_patches.fpp
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop

**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_<feature> prefix (e.g., m_transport)
Name public subroutines as s_<verb>_<noun> (e.g., s_compute_flux) and functions as f_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid goto statements (except unavoidable legacy); avoid global state (COMMON, save)
Every variable must have intent(in|out|inout) specification and appropriate dimension / allocatable / pointer
Use s_mpi_abort(<msg>) for error termination instead of stop
Use !> style documentation for header comments; follow Doxygen Fortran format with !! @param and !! @return tags
Use implicit none statement in all modules
Use private declaration followed by explicit public exports in modules
Use derived types with pointers for encapsulation (e.g., pointer, dimension(:,:,:) => null())
Use pure and elemental attributes for side-effect-free functions; combine them for array ...

Files:

  • src/pre_process/m_check_ib_patches.fpp
src/**/*.fpp

📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)

src/**/*.fpp: Use .fpp file extension for Fypp preprocessed files; CMake transpiles them to .f90
Start module files with Fypp include for macros: #:include 'macros.fpp'
Use the fypp ASSERT macro for validating conditions: @:ASSERT(predicate, message)
Use fypp macro @:ALLOCATE(var1, var2) for device-aware allocation instead of standard Fortran allocate
Use fypp macro @:DEALLOCATE(var1, var2) for device-aware deallocation instead of standard Fortran deallocate

Files:

  • src/pre_process/m_check_ib_patches.fpp
🪛 GitHub Actions: Pretty
src/pre_process/m_check_ib_patches.fpp

[error] 96-96: CI formatting failed: detected change in line 96 during MFC formatting (Fortran comment alignment). Process exited with code 1.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: Github (macos, mpi, no-debug, false)
  • GitHub Check: Oak Ridge | Frontier (gpu-omp)
  • GitHub Check: Oak Ridge | Frontier (cpu)
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: Github (macos, mpi, debug, false)
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Oak Ridge | Frontier (gpu-acc)
  • GitHub Check: Georgia Tech | Phoenix (cpu)
  • GitHub Check: Georgia Tech | Phoenix (gpu-acc)
  • GitHub Check: Georgia Tech | Phoenix (gpu-omp)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Code Cleanliness Check
  • GitHub Check: Build & Publish
🔇 Additional comments (1)
src/pre_process/m_check_ib_patches.fpp (1)

81-116: Pretty CI failure: rerun formatter and ensure comment alignment is stable around the new ellipse block.

CI reports: “CI formatting failed: detected change in line 96 during MFC formatting (Fortran comment alignment)”. Given the only new comments are around Line 99-103, please rerun the project’s formatting/pretty target locally and adjust comment indentation until it’s stable in CI.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In @src/simulation/m_ibm.fpp:
- Line 1169: The normalization of the rotation axis is wrong: in the assignment
to normal_axis you must compute the vector length using the sum of squares, i.e.
replace the current sqrt(sum(axis)) with sqrt(sum(axis**2)) so normalization
uses the Euclidean norm of axis (update the expression in the normal_axis = ...
statement that references axis and normal_axis).
- Around line 1025-1105: viscous_cross_1 and viscous_cross_2 are declared as 1D
(dimension(1:3)) but are used as 3x3 arrays and subtracted into the 3x3 tensor
viscous_stress_div (e.g., in the loop around s_cross_product and the assignment
viscous_stress_div = (viscous_cross_2 - viscous_cross_1)/(2._wp*dx)); fix by
changing viscous_cross_1 and viscous_cross_2 declarations to real(wp),
dimension(1:3,1:3) so their shapes match the per-row cross-product writes and
the subsequent subtraction, and verify s_cross_product calls write into
viscous_cross_*(l,1:3) and local_torque_contribution(1:3) uses the correct slice
of viscous_stress_div. (A sketch of this declaration change follows this list.)
- Around line 1176-1178: The moment-of-inertia formulas for
patch_ib(ib_marker)%moment are wrong for rectangle (geometry==3) and ellipse
(geometry==6); update the rectangle case to use I = m*(length_x**2 +
length_y**2)/12 (replace the /6 with /12) and update the ellipse case to use I =
0.25_wp*mass*(length_x**2 + length_y**2) (replace 0.0625_wp with 0.25_wp) so the
computed angular dynamics for patch_ib are correct.
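
As referenced in the second item, a sketch of the proposed shape change (names are taken from the comment; the surrounding subroutine is assumed and not shown):

    real(wp), dimension(1:3, 1:3) :: viscous_cross_1, viscous_cross_2   ! was dimension(1:3)
    real(wp), dimension(1:3, 1:3) :: viscous_stress_div
    ! ...
    viscous_stress_div = (viscous_cross_2 - viscous_cross_1)/(2._wp*dx)  ! shapes now conform
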
🧹 Nitpick comments (1)
src/simulation/m_ibm.fpp (1)

1163-1163: Use real constants with working precision suffix.

For consistency with Fortran coding guidelines, use real constants with the _wp suffix instead of integer literals when initializing real arrays.

♻️ Proposed change
-            normal_axis = [0, 0, 1]
+            normal_axis = [0._wp, 0._wp, 1._wp]

Based on coding guidelines.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 137376d and e262ef9.

📒 Files selected for processing (1)
  • src/simulation/m_ibm.fpp
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop

**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_<feature> prefix (e.g., m_transport)
Name public subroutines as s_<verb>_<noun> (e.g., s_compute_flux) and functions as f_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid goto statements (except unavoidable legacy); avoid global state (COMMON, save)
Every variable must have intent(in|out|inout) specification and appropriate dimension / allocatable / pointer
Use s_mpi_abort(<msg>) for error termination instead of stop
Use !> style documentation for header comments; follow Doxygen Fortran format with !! @param and !! @return tags
Use implicit none statement in all modules
Use private declaration followed by explicit public exports in modules
Use derived types with pointers for encapsulation (e.g., pointer, dimension(:,:,:) => null())
Use pure and elemental attributes for side-effect-free functions; combine them for array ...

Files:

  • src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with $:GPU_ROUTINE(function_name='...', parallelism='[seq]') immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from src/common/include/parallel_macros.fpp instead
Wrap tight loops with $:GPU_PARALLEL_FOR(private='[...]', copy='[...]') macro; add collapse=n for safe nested loop merging
Declare loop-local variables with private='[...]' in GPU parallel loop macros
Allocate large arrays with managed or move them into a persistent $:GPU_ENTER_DATA(...) region at start-up
Do not place stop or error stop inside device code

Files:

  • src/simulation/m_ibm.fpp
src/**/*.fpp

📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)

src/**/*.fpp: Use .fpp file extension for Fypp preprocessed files; CMake transpiles them to .f90
Start module files with Fypp include for macros: #:include 'macros.fpp'
Use the fypp ASSERT macro for validating conditions: @:ASSERT(predicate, message)
Use fypp macro @:ALLOCATE(var1, var2) for device-aware allocation instead of standard Fortran allocate
Use fypp macro @:DEALLOCATE(var1, var2) for device-aware deallocation instead of standard Fortran deallocate

Files:

  • src/simulation/m_ibm.fpp
🧠 Learnings (8)
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Each variable should have one clear purpose; do not use the same variable for multiple purposes

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Every variable must have `intent(in|out|inout)` specification and appropriate `dimension` / `allocatable` / `pointer`

Applied to files:

  • src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Github (macos, mpi, debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Code Cleanliness Check
  • GitHub Check: cubic · AI code reviewer
  • GitHub Check: Build & Publish
🔇 Additional comments (4)
src/simulation/m_ibm.fpp (4)

29-29: LGTM: Module import for viscous coupling.

The addition of use m_viscous is necessary to support the new viscous stress tensor computations in the updated s_compute_ib_forces subroutine.


101-101: LGTM: Updated moment of inertia computation.

Passing patch_ib(i)%angular_vel to s_compute_moment_of_inertia correctly provides the rotation axis for computing the moment of inertia about that axis.


202-225: LGTM: Interior ghost cell momentum initialization.

The logic correctly initializes momentum in interior ghost cells for moving immersed boundaries by computing the total density and setting momentum consistent with the IB velocity. This aligns with the PR objectives for updated ghost-cell treatment.


1123-1135: LGTM: Body forces applied after reduction.

Applying body forces (gravity/acceleration) to immersed boundaries after the MPI reduction correctly prevents double counting across ranks. The logic properly adds accel_bf * mass to the force components when the corresponding body force flags are enabled.

@codecov

codecov bot commented Jan 10, 2026

Codecov Report

❌ Patch coverage is 50.00000% with 61 lines in your changes missing coverage. Please review.
✅ Project coverage is 44.06%. Comparing base (31c938f) to head (26d8207).

Files with missing lines Patch % Lines
src/simulation/m_ibm.fpp 43.66% 31 Missing and 9 partials ⚠️
src/common/m_compute_levelset.fpp 52.17% 10 Missing and 1 partial ⚠️
src/common/m_ib_patches.fpp 73.33% 3 Missing and 1 partial ⚠️
src/simulation/m_viscous.fpp 0.00% 3 Missing ⚠️
src/simulation/m_rhs.fpp 50.00% 2 Missing ⚠️
src/pre_process/m_check_ib_patches.fpp 80.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1102      +/-   ##
==========================================
+ Coverage   44.00%   44.06%   +0.06%     
==========================================
  Files          71       71              
  Lines       20293    20379      +86     
  Branches     1982     1989       +7     
==========================================
+ Hits         8929     8980      +51     
- Misses      10227    10254      +27     
- Partials     1137     1145       +8     


@codeant-ai
Contributor

codeant-ai bot commented Jan 11, 2026

CodeAnt AI is running Incremental review



Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In @src/simulation/m_ibm.fpp:
- Line 1169: The normalization of the vector is incorrect: change the
computation of normal_axis so it divides axis by the vector magnitude, i.e., use
sqrt(sum(axis**2)) instead of sqrt(sum(axis)); locate the assignment to
normal_axis (symbol: normal_axis) that currently uses axis/sqrt(sum(axis)) and
replace the denominator with sqrt(sum(axis**2)) so the vector is normalized by
its Euclidean norm.
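
A minimal standalone sketch of the corrected normalization (variable names follow the comment; the zero-magnitude guard in the actual code is omitted here):

program axis_normalization_sketch
    implicit none
    integer, parameter :: wp = selected_real_kind(15)
    real(wp), dimension(1:3) :: axis, normal_axis

    axis = [0._wp, 0._wp, 2._wp]
    ! Divide by the Euclidean norm, matching the sqrt(sum(axis**2)) magnitude check
    normal_axis = axis/sqrt(sum(axis**2))
    ! Equivalent, using the Fortran 2008 intrinsic: normal_axis = axis/norm2(axis)
    print *, normal_axis
end program axis_normalization_sketch
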
🧹 Nitpick comments (1)
src/simulation/m_ibm.fpp (1)

1044-1050: Local variable shadowing: dx, dy, dz shadow module-level arrays.

The local scalars dx, dy, dz (declared on line 1027) shadow the module-level grid spacing arrays from m_global_parameters. While functionally correct here since you're computing local cell spacing, this can cause confusion during maintenance.

Consider renaming to cell_dx, cell_dy, cell_dz for clarity.

Suggested rename
-        real(wp) :: cell_volume, dx, dy, dz
+        real(wp) :: cell_volume, cell_dx, cell_dy, cell_dz

And update all usages accordingly.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e262ef9 and 26d8207.

📒 Files selected for processing (5)
  • src/common/m_ib_patches.fpp
  • src/simulation/m_ibm.fpp
  • src/simulation/m_rhs.fpp
  • tests/5600D63B/golden-metadata.txt
  • tests/7FA04E95/golden-metadata.txt
🚧 Files skipped from review as they are similar to previous changes (3)
  • tests/7FA04E95/golden-metadata.txt
  • tests/5600D63B/golden-metadata.txt
  • src/simulation/m_rhs.fpp
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

**/*.{fpp,f90}: Use 2-space indentation; continuation lines align beneath &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_ pattern (e.g., m_transport)
Name public subroutines with s_ pattern (e.g., s_compute_flux)
Name public functions with f_ pattern
Keep subroutine size ≤ 500 lines, helper subroutines ≤ 150 lines, functions ≤ 100 lines, files ≤ 1000 lines
Limit routine arguments to ≤ 6; use derived-type params struct if more are needed
Forbid goto statements (except in legacy code), COMMON blocks, and save globals
Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate
Call s_mpi_abort() for errors, never use stop or error stop

**/*.{fpp,f90}: Indent 2 spaces; continuation lines align under &
Use lower-case keywords and intrinsics (do, end subroutine, etc.)
Name modules with m_<feature> prefix (e.g., m_transport)
Name public subroutines as s_<verb>_<noun> (e.g., s_compute_flux) and functions as f_<verb>_<noun>
Keep private helpers in the module; avoid nested procedures
Enforce size limits: subroutine ≤ 500 lines, helper ≤ 150, function ≤ 100, module/file ≤ 1000
Limit subroutines to ≤ 6 arguments; otherwise pass a derived-type 'params' struct
Avoid goto statements (except unavoidable legacy); avoid global state (COMMON, save)
Every variable must have intent(in|out|inout) specification and appropriate dimension / allocatable / pointer
Use s_mpi_abort(<msg>) for error termination instead of stop
Use !> style documentation for header comments; follow Doxygen Fortran format with !! @param and !! @return tags
Use implicit none statement in all modules
Use private declaration followed by explicit public exports in modules
Use derived types with pointers for encapsulation (e.g., pointer, dimension(:,:,:) => null())
Use pure and elemental attributes for side-effect-free functions; combine them for array ...

Files:

  • src/common/m_ib_patches.fpp
  • src/simulation/m_ibm.fpp
src/**/*.fpp

📄 CodeRabbit inference engine (.cursor/rules/mfc-agent-rules.mdc)

src/**/*.fpp: Use .fpp file extension for Fypp preprocessed files; CMake transpiles them to .f90
Start module files with Fypp include for macros: #:include 'macros.fpp'
Use the fypp ASSERT macro for validating conditions: @:ASSERT(predicate, message)
Use fypp macro @:ALLOCATE(var1, var2) for device-aware allocation instead of standard Fortran allocate
Use fypp macro @:DEALLOCATE(var1, var2) for device-aware deallocation instead of standard Fortran deallocate

Files:

  • src/common/m_ib_patches.fpp
  • src/simulation/m_ibm.fpp
src/simulation/**/*.{fpp,f90}

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

src/simulation/**/*.{fpp,f90}: Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)
Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up
Avoid stop/error stop inside GPU device code
Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

src/simulation/**/*.{fpp,f90}: Mark GPU-callable helpers with $:GPU_ROUTINE(function_name='...', parallelism='[seq]') immediately after declaration
Do not use OpenACC or OpenMP directives directly; use Fypp macros from src/common/include/parallel_macros.fpp instead
Wrap tight loops with $:GPU_PARALLEL_FOR(private='[...]', copy='[...]') macro; add collapse=n for safe nested loop merging
Declare loop-local variables with private='[...]' in GPU parallel loop macros
Allocate large arrays with managed or move them into a persistent $:GPU_ENTER_DATA(...) region at start-up
Do not place stop or error stop inside device code

Files:

  • src/simulation/m_ibm.fpp
🧠 Learnings (10)
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Ensure GPU code compiles with Cray ftn, NVIDIA nvfortran, GNU gfortran, and Intel ifx/ifort compilers

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight GPU loops with !$acc parallel loop gang vector default(present) reduction(...); add collapse(n) when safe; declare loop-local variables with private(...)

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Do not use OpenACC or OpenMP directives directly; use Fypp macros from `src/common/include/parallel_macros.fpp` instead

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Avoid stop/error stop inside GPU device code

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to **/*.{fpp,f90} : Every argument must have explicit intent; use dimension/allocatable/pointer as appropriate

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Each variable should have one clear purpose; do not use the same variable for multiple purposes

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to **/*.{fpp,f90} : Every variable must have `intent(in|out|inout)` specification and appropriate `dimension` / `allocatable` / `pointer`

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Wrap tight loops with `$:GPU_PARALLEL_FOR(private='[...]', copy='[...]')` macro; add `collapse=n` for safe nested loop merging

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:46.909Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .cursor/rules/mfc-agent-rules.mdc:0-0
Timestamp: 2025-11-24T21:50:46.909Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large arrays with `managed` or move them into a persistent `$:GPU_ENTER_DATA(...)` region at start-up

Applied to files:

  • src/simulation/m_ibm.fpp
📚 Learning: 2025-11-24T21:50:16.713Z
Learnt from: CR
Repo: MFlowCode/MFC PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2025-11-24T21:50:16.713Z
Learning: Applies to src/simulation/**/*.{fpp,f90} : Allocate large GPU arrays with managed memory or move them into persistent !$acc enter data regions at start-up

Applied to files:

  • src/simulation/m_ibm.fpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: Coverage Test on CodeCov
  • GitHub Check: Code Cleanliness Check
  • GitHub Check: Github (ubuntu, mpi, debug, true)
  • GitHub Check: Github (ubuntu, mpi, debug, false)
  • GitHub Check: Github (macos, mpi, debug, false)
  • GitHub Check: Georgia Tech | Phoenix (cpu)
  • GitHub Check: Oak Ridge | Frontier (cpu)
  • GitHub Check: Github (ubuntu, mpi, no-debug, true)
  • GitHub Check: Georgia Tech | Phoenix (gpu-omp)
  • GitHub Check: Github (ubuntu, no-mpi, single, no-debug, false)
  • GitHub Check: Georgia Tech | Phoenix (gpu-acc)
  • GitHub Check: Github (macos, mpi, no-debug, false)
  • GitHub Check: Github (ubuntu, mpi, no-debug, false)
  • GitHub Check: Oak Ridge | Frontier (gpu-acc)
  • GitHub Check: Oak Ridge | Frontier (gpu-omp)
  • GitHub Check: Build & Publish
🔇 Additional comments (9)
src/common/m_ib_patches.fpp (2)

768-805: LGTM! Ellipse patch implementation is mathematically correct and follows existing patterns.

The ellipse coverage logic correctly:

  1. Transforms coordinates to the local IB frame via inverse rotation
  2. Applies the standard ellipse equation (x/a)² + (y/b)² ≤ 1
  3. Uses half-lengths for a and b coefficients

Minor observation: variable k (line 773) is declared but unused, though this matches the pattern of other 2D routines in this file.
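
For illustration, a standalone sketch of this coverage test (this is not the actual s_ib_ellipse; the centroid, angle, and lengths are dummy values, and the rotation handling is only indicative):

program ellipse_coverage_sketch
    implicit none
    integer, parameter :: wp = selected_real_kind(15)
    real(wp) :: xc, yc, a, b, theta, x, y, xl, yl

    xc = 0._wp; yc = 0._wp        ! patch centroid
    a = 0.5_wp*1.0_wp             ! half of length_x
    b = 0.5_wp*0.4_wp             ! half of length_y
    theta = 0.3_wp                ! patch rotation angle (rad)
    x = 0.2_wp; y = 0.1_wp        ! cell-centre coordinates

    ! Inverse rotation into the patch-local frame
    xl = cos(theta)*(x - xc) + sin(theta)*(y - yc)
    yl = -sin(theta)*(x - xc) + cos(theta)*(y - yc)

    ! Standard ellipse test with half-lengths
    if ((xl/a)**2 + (yl/b)**2 <= 1._wp) then
        print *, 'cell centre lies inside the ellipse patch'
    end if
end program ellipse_coverage_sketch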


122-124: Ellipse dispatch follows established pattern correctly.

The geometry code 6 dispatch for ellipse patches is consistent with existing 2D geometries (circle, rectangle, airfoil). The s_ellipse_levelset subroutine is properly exported as public from m_compute_levelset, and the s_ib_ellipse implementation follows all coding guidelines: proper naming convention, correct argument count with explicit intents, efficient implementation under 150 lines, proper GPU parallelization, and correct ellipse equation handling with rotation matrix transformation.

src/simulation/m_ibm.fpp (7)

29-30: LGTM! Viscous module import added.

The m_viscous module is correctly imported to support the new viscous stress computations in s_compute_ib_forces.


101-101: LGTM! Moment of inertia initialization with rotation axis.

Computing moment of inertia about the angular velocity axis is physically appropriate for rotational dynamics calculations.


202-224: LGTM! Interior ghost cell momentum initialization for moving IBs.

The loop correctly sets momentum inside moving immersed boundaries to match the IB velocity, ensuring consistency between the IB motion and the fluid state in ghost cells.


1123-1134: LGTM! Body force application after MPI reduction.

Correctly applies body forces (F = m·a) after the global reduction to prevent double-counting across MPI ranks.


1177-1178: LGTM! Ellipse moment of inertia formula is correct.

The formula 0.0625 * m * (length_x² + length_y²) correctly computes m*(a² + b²)/4 for a solid ellipse when length_x and length_y are full axis lengths (consistent with how s_ib_ellipse uses them as 2a and 2b).


1173-1180: LGTM! Analytic moment of inertia formulas.

The analytic formulas for 2D/3D primitives are correct:

  • Circle (solid disk): I = ½mr²
  • Rectangle: I = m(a² + b²)/6 (about center, perpendicular to plane)
  • Ellipse: I = m(a² + b²)/4 (correctly scaled for full-length parameters)
  • Sphere: I = ⅖mr²

1064-1104: Viscous stress contribution logic is correct.

The implementation properly computes:

  • Force contribution: ∇·τ (divergence of viscous stress tensor)
  • Torque contribution: r × (∇·τ)

The s_compute_viscous_stress_tensor function is correctly defined in src/simulation/m_viscous.fpp (line 1516) with signature:

subroutine s_compute_viscous_stress_tensor(viscous_stress_tensor, q_prim_vf, dynamic_viscosity, i, j, k)
    real(wp), dimension(1:3, 1:3), intent(inout) :: viscous_stress_tensor
    type(scalar_field), dimension(1:sys_size), intent(in) :: q_prim_vf
    real(wp), intent(in) :: dynamic_viscosity
    integer, intent(in) :: i, j, k

All function calls in lines 1068–1095 match this signature correctly. Parameters use proper intent declarations, and the routine follows naming conventions.
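
For context, the standard relations behind these two contributions, assuming a symmetric viscous stress tensor (the exact discretization used in the PR may differ):

$$
\mathbf{F}_{\mathrm{visc}} = \oint_{\partial\Omega} \boldsymbol{\tau}\cdot\mathbf{n}\,\mathrm{d}S
= \int_{\Omega} \nabla\cdot\boldsymbol{\tau}\,\mathrm{d}V
\approx \sum_{\mathrm{cells}} \left(\nabla\cdot\boldsymbol{\tau}\right)\Delta V,
\qquad
\mathbf{T}_{\mathrm{visc}} \approx \sum_{\mathrm{cells}} \mathbf{r}\times\left(\nabla\cdot\boldsymbol{\tau}\right)\Delta V
$$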

Contributor

@wilfonba wilfonba left a comment


benchmark

@danieljvickers
Member Author

Spencer, the Phoenix benchmark is failing because pip is not installing the Python dependencies correctly. It seems like an issue outside the scope of the files changed here and more a problem with master. What do you think?

@sbryngelson
Member

It just needed to be restarted. @danieljvickers can you close the AI reviewer comments that are irrelevant or false positives?

@danieljvickers
Member Author

I resolved them all


Labels

Review effort 4/5 · size:XL (this PR changes 500-999 lines, ignoring generated files)


3 participants