Description
Describe the Bug
The reduce-snapshot.sh script in the Conforma CLI has a critical bug where it reports successful execution (exit code 0) and logs "COMPONENT_COUNT: 1" to stdout, but actually produces an empty output file (0 bytes) instead of writing the reduced snapshot JSON. This results in data loss and downstream pipeline failures.
Steps to Reproduce
- Create a Kubernetes Snapshot resource with 2 components (e.g., "scott" and "tom")
- Set the component label: appstudio.openshift.io/component: tom
- Create a snapshot JSON file at a path (e.g., /var/workdir/release/snapshot.json)
- Set environment variables:
```shell
export SNAPSHOT="/var/workdir/release/snapshot.json"
export SINGLE_COMPONENT="true"
export CUSTOM_RESOURCE="snapshot/snapshot-sample"
export CUSTOM_RESOURCE_NAMESPACE="default"
export SNAPSHOT_PATH="/var/workdir/release/snapshot.json"  # Same as SNAPSHOT
```
- Execute: `/usr/local/bin/reduce-snapshot.sh`
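For reference, the input snapshot described in the steps above can be sketched as a concrete file. The "tom" entry matches the expected output below; the "scott" image reference is an illustrative placeholder, and the path should be adjusted to match `$SNAPSHOT`:

```shell
# Hypothetical snapshot.json for the reproduction steps.
# ("newimage1" is a placeholder; "newimage2"/"tom" come from the expected output.)
cat > /tmp/snapshot.json <<'EOF'
{
  "application": "foo",
  "components": [
    { "containerImage": "newimage1", "name": "scott" },
    { "containerImage": "newimage2", "name": "tom" }
  ]
}
EOF
grep -c '"name"' /tmp/snapshot.json   # prints 2: one line per component
```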
Expected Behavior
- Script exits with code 0
- Logs: "COMPONENT_COUNT: 1"
- SNAPSHOT_PATH file contains reduced snapshot JSON with 1 component:
```json
{
  "application": "foo",
  "components": [
    {
      "containerImage": "newimage2",
      "name": "tom"
    }
  ]
}
```
Actual Behavior
- Script exits with code 0 ✓
- Logs: "COMPONENT_COUNT: 1" ✓
- SNAPSHOT_PATH file is EMPTY (0 bytes) ✗
- Original data is lost/corrupted
Screenshots or Terminal Output
Test Output Example:
```
[run-task : reduce] Single Component mode? true
[run-task : reduce] SNAPSHOT_CREATION_TYPE: component
[run-task : reduce] SNAPSHOT_CREATION_COMPONENT: tom
[run-task : reduce] Single Component mode is true and Snapshot type is component
[run-task : reduce] COMPONENT_COUNT: 1
```

But the file is actually empty:

```
[check-result : check-result] + cat snapshot.json
[check-result : check-result] ++ jq '.components | length'
[check-result : check-result] + '[' '' -ne 1 ']'
[check-result : check-result] /tekton/scripts/script-1-txfcb: line 5: [: : integer expression expected
```
Environment Details
Platform: Tekton Pipeline on Kubernetes
User: 1001 (runAsUser security context)
File permissions: Input file has 666 permissions
Impact
Severity: Critical
Data Loss: When SNAPSHOT and SNAPSHOT_PATH point to the same file (common pattern for in-place modification), the original data is destroyed
Downstream Failures: Subsequent steps fail with JSON parse errors when trying to read the empty file
Silent Failure: The script reports success, making the issue difficult to detect
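Because the failure is silent, downstream steps could protect themselves with a pre-parse guard. A minimal sketch (the `check_nonempty` helper is hypothetical, not part of the Conforma CLI):

```shell
# Hypothetical guard: fail loudly if the snapshot file is empty
# instead of letting jq produce a confusing parse error later.
check_nonempty() {
  [ -s "$1" ] || { echo "ERROR: $1 is empty" >&2; return 1; }
}

: > /tmp/empty-snapshot.json            # simulate the 0-byte output file
check_nonempty /tmp/empty-snapshot.json || echo "detected empty snapshot"
```

This turns the confusing `[: : integer expression expected` error in the check step into an explicit, greppable failure message.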
Additional Information
- The bug occurs even in the "happy path" with all correct parameters
- File permissions are correct (666, writable by user 1001)
- The script successfully queries Kubernetes and identifies the correct component
- The issue appears to be in the final file write operation
Context
This bug was discovered during migration from Enterprise Contract CLI (quay.io/enterprise-contract/ec-cli@sha256:913c7dac...) which worked correctly with the same parameters and workflow. The Enterprise Contract CLI was deprecated, necessitating the migration to Conforma CLI.
Possible Solution
Likely Root Causes:
- Most probable: shell redirection timing. This pattern breaks when the input and output are the same file:

  ```shell
  process_data < "$INPUT_FILE" > "$OUTPUT_FILE"
  ```

  When `INPUT_FILE` == `OUTPUT_FILE`:
  - the shell truncates `OUTPUT_FILE` first (it becomes 0 bytes)
  - the command then reads from `INPUT_FILE`, which is now empty
  - it processes nothing and writes nothing
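The truncation hazard is easy to demonstrate with any command, not just the Conforma script:

```shell
# Demonstrate the truncate-before-read hazard with plain cat:
printf '{"components":["scott","tom"]}' > /tmp/demo.json
cat < /tmp/demo.json > /tmp/demo.json   # shell truncates the file before cat reads it
wc -c < /tmp/demo.json                  # prints 0: the data is gone
```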
- Secondary possibility: missing fsync/flush. If buffered output were not flushed before the step exits:

  ```shell
  echo "$REDUCED_SNAPSHOT" > "$SNAPSHOT_PATH"
  ```

  Fix: force a flush:

  ```shell
  echo "$REDUCED_SNAPSHOT" > "$SNAPSHOT_PATH"
  sync  # or use 'tee', which flushes on close
  ```
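If in-place redirection is the root cause, a common remedy is to write to a temporary file and rename it over the destination; on the same filesystem `mv` is atomic, so the original is never left half-written even if the write fails. A sketch under that assumption (the `write_snapshot` helper is hypothetical, not part of the Conforma CLI):

```shell
# Hypothetical safe-write helper: buffer output in a temp file, then
# atomically rename it over the destination. Safe even when the logical
# input path and output path are the same file.
write_snapshot() {
  out="$1"
  tmp="$(mktemp "${out}.XXXXXX")"
  cat > "$tmp" && mv "$tmp" "$out"
}

printf '{"components":[]}' > /tmp/snap.json
# The pipe reads the original contents before the rename replaces them:
cat < /tmp/snap.json | write_snapshot /tmp/snap.json
cat /tmp/snap.json                      # prints {"components":[]}
```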