Symbol report verification report #2544
pawelrutkaq wants to merge 5 commits into eclipse-score:main from
Conversation
The created documentation from the pull request is available at: docu-html
| - no
| - yes
| - no
| - high
You did not cover bugs introduced by instrumentation; the standard mitigation here would require running the test twice, once with and once without instrumentation.
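A minimal sketch of how such a double run could be compared, assuming both invocations export their verdicts as simple `test_name,status` CSV files (the file names and format are illustrative, not part of the actual tooling):

```python
# Sketch: compare test verdicts from an instrumented and a non-instrumented run.
# A test that passes without instrumentation but fails with it (or vice versa)
# hints at a bug introduced by the coverage instrumentation itself.
import csv
from pathlib import Path

def load_verdicts(path: Path) -> dict[str, str]:
    """Read 'test_name,status' rows into a dict (illustrative file format)."""
    with path.open(newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f) if len(row) == 2}

def diff_runs(plain: dict[str, str], instrumented: dict[str, str]) -> list[str]:
    findings = []
    for test, status in plain.items():
        other = instrumented.get(test)
        if other is None:
            findings.append(f"{test}: missing in instrumented run")
        elif other != status:
            findings.append(f"{test}: {status} (plain) vs {other} (instrumented)")
    return findings

if __name__ == "__main__":
    # Hypothetical result files exported by the two test invocations.
    plain = load_verdicts(Path("results_plain.csv"))
    instrumented = load_verdicts(Path("results_instrumented.csv"))
    for finding in diff_runs(plain, instrumented):
        print(finding)
```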
There is a general problem: this is a tool verification report and code coverage is a process. So how shall I describe it if I am specifically interested in whether the post-processing tooling requires qualification? (here: symbol report and blanket)
The code coverage part may belong to another report, or you must also explain it here as part of your use cases. And if the tools help to mitigate the rustc coverage issues, e.g. detect failures introduced by instrumentation, then it should be added here.
OK, now you removed that part of the process; it may then be a good idea to already create another report for the missing part of the process.
see inline comments, proposal: take over the failure modes and mitigations from https://public-docs.ferrocene.dev/main/certification/core/safety-plan/tools.html#code-coverage
| - | The reported coverage number is reported higher than the real coverage.
| | This may lead to false assumption that code is sufficiently covered by tests, although it is not.
| - no
| - Function will not be present in the report, so it will be visible that it is not covered.
Please describe better how this is visible, to be able to judge whether it can be easily observed. E.g. if the coverage data is displayed in a view together with the source, I would believe that this is a high detection likelihood. If it is only a separate file logging all functions and their coverage, I would consider a low detection value appropriate.
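One way such a gap could be surfaced automatically, sketched under the assumption that the compiler's symbol list and the report's function list are available as plain text files (the file names and one-name-per-line format are hypothetical):

```python
# Sketch: flag functions that appear in the compiler's symbol list but are
# absent from the coverage report, so a silently dropped function is not
# mistaken for a covered one.
from pathlib import Path

def read_names(path: Path) -> set[str]:
    """One symbol/function name per line (illustrative format)."""
    return {line.strip() for line in path.read_text().splitlines() if line.strip()}

symbols = read_names(Path("certified_subset_symbols.txt"))    # hypothetical input
reported = read_names(Path("coverage_report_functions.txt"))  # hypothetical input

missing = sorted(symbols - reported)
if missing:
    print("Functions missing from the coverage report:")
    for name in missing:
        print(f"  {name}")
else:
    print("Every symbol of the certified subset appears in the report.")
```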
| | This may lead to false assumption that code is sufficiently covered by tests, although it is not.
|
| - no
| - Likelihood of this malfunction is low
No reason given for this statement. Where is the measure?
| - | The reported coverage number is reported higher than the real coverage.
| | This may lead to false assumption that code is sufficiently covered by tests, although it is not.
|
| - no
safety impact must be yes - because testing is not complete.
| * - 4
| - Calculations are wrong, so the reported coverage number is not correct.
| - | The reported coverage is different than the coverage in `profraw` files.
| - no
A "real" coverage below 100% but calculated/displayed as 100% would be a safety problem.
Rounding can also become an issue.
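A small worked illustration of the rounding concern, assuming the displayed percentage is derived from covered/total region counts (the numbers are made up):

```python
# Sketch: naive rounding can display 100% although one region is uncovered.
covered, total = 1999, 2000                 # one region not hit
naive = round(covered / total * 100)        # -> 100, hides the gap
truncated = int(covered / total * 1000) / 10  # -> 99.9, keeps the gap visible
print(f"naive rounding:    {naive}%")
print(f"truncated to 0.1%: {truncated}%")
```

Truncating (or reporting raw counts alongside the percentage) keeps a single uncovered region visible in the displayed number.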
| - | The reported coverage is different than the coverage in `profraw` files.
| - no
| - no
| - yes
What detection does this refer to?
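One possible detection measure for this malfunction would be a cross-check of the tool's reported percentage against a reference computed independently from the same `.profraw` data (e.g. via `llvm-cov export -summary-only`); a sketch, where the file names, JSON keys and tolerance are assumptions:

```python
# Sketch: cross-check the coverage percentage reported by the post-processing
# tool against a reference value computed independently from the same .profraw
# data; a mismatch beyond a small tolerance points to a post-processing defect.
import json
from pathlib import Path

TOLERANCE = 0.1  # percentage points, chosen arbitrarily for this sketch

tool_report = json.loads(Path("symbol_report_summary.json").read_text())  # hypothetical file
reference = json.loads(Path("llvm_cov_summary.json").read_text())         # hypothetical file

tool_pct = tool_report["line_coverage_percent"]               # hypothetical key
ref_pct = reference["data"][0]["totals"]["lines"]["percent"]  # expected llvm-cov export layout

if abs(tool_pct - ref_pct) > TOLERANCE:
    raise SystemExit(f"Coverage mismatch: tool {tool_pct:.2f}% vs reference {ref_pct:.2f}%")
print(f"Reported coverage matches the reference within {TOLERANCE} percentage points.")
```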
| Result
| ~~~~~~
| Symbol report and blanket do not require qualification for use in safety-related software development according to ISO 26262.
as stated by masc2023 - without additional measures I cannot agree to this result
| Safety evaluation
| -----------------
| This section outlines the safety evaluation of symbol report and blanket for its use within the S-CORE project. This evaluation assumes that the Rust compiler is
| qualified and output of coverage data in `.profraw` format is correct. Due to that, we solely focus on post processing that is done by symbol report and blanket only.
Will the coverage based on the .profraw be used and measured somewhere else? This can have an impact on the tool classification, which is the reason I am asking.
No, these tools are the only measurement for coverage of Rust.
| :widths: 1 2 8 2 6 4 2 2
|
| * - Malfunction identification
| - Use case description
Your use cases are actually malfunctions and not use cases. See https://eclipse-score.github.io/score/pr-2544/score_tools/doc_as_code.html#doc_tool__doc_as_code as an example of the use case and the resulting malfunction.
I reworded but kept the same layout. I see the layout of the table is different across tools, so...
Layout should be overall the same, using https://eclipse-score.github.io/process_description//main/folder_templates/tools/tool_verification_report_template.html
| :header-rows: 1
| :widths: 1 2 8 2 6 4 2 2
|
| * - Malfunction identification
Can there be anything as a case that:
- The coverage is reported for another function.
- There is a version mismatch, so the tool takes the data from a previous or other run.
- Could there be an issue with dynamic configuration, like calling the unit test with configuration A but having configuration B mapped in code for the report?
I assume the use cases are limited to what you want to do with the report, but the 4 malfunctions which you call use cases are incomplete.
- added
- The tool takes what is given by Bazel, as in the picture, so it is not the tool's responsibility to handle that. But in any such case it would not cause reporting more coverage than there really is (relatively).
- maybe, but that would make the report one big scrambled eggs ;)
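Regarding the version-mismatch point above, a guard along these lines could rule out stale data, assuming the build records the hash of the binary under test next to the coverage data (all paths, file names and keys are hypothetical):

```python
# Sketch: refuse to process coverage data whose recorded build id does not
# match the binary the current test run was executed against, to rule out
# picking up .profraw data from a previous or unrelated run.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

binary_hash = sha256_of(Path("bazel-bin/my_crate_test"))        # hypothetical path
meta = json.loads(Path("coverage_metadata.json").read_text())   # hypothetical file
recorded_hash = meta["binary_sha256"]                           # hypothetical key

if recorded_hash != binary_hash:
    raise SystemExit("Stale coverage data: recorded binary hash does not match current build.")
print("Coverage data matches the binary under test.")
```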
@pahmann @aschemmel-tech @masc2023 tried to resolve the comments, please recheck
| - Overreporting, could result in testing gap.
| - yes
| - | Likelihood of such an error low due to wide usage of the tool (many S-CORE modules and other projects like ferrocene)
| | Additionally, every new tool release is tested by running tests in prepared integration testsuite to detect such errors. (PROPOSAL POINT)
I would also add the expectation of the tester/user, so a failure can be detected.
| - Underreporting, will not result in testing gap.
| - yes
| - Since we want to achieve 90%+ branch coverage this would stand out and be manually investigated.
| - yes
| - yes
| - no
When I understand it correctly this will not have an impact on safety.
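The detection argument quoted above ("this would stand out and be manually investigated") could be backed by an explicit gate; a minimal sketch, assuming the aggregated branch coverage is available in a JSON summary (the file name and key are hypothetical, the 90% figure is the target mentioned in the report):

```python
# Sketch: fail the pipeline when reported branch coverage drops below the
# 90% target, so a suspiciously low (possibly underreported) number is
# investigated manually instead of being silently accepted.
import json
import sys
from pathlib import Path

TARGET = 90.0  # branch coverage target mentioned in the report

def check(summary_path: Path) -> int:
    data = json.loads(summary_path.read_text())
    branch = data["branch_coverage_percent"]  # hypothetical key
    if branch < TARGET:
        print(f"Branch coverage {branch:.1f}% is below the {TARGET}% target, investigate.")
        return 1
    print(f"Branch coverage {branch:.1f}% meets the {TARGET}% target.")
    return 0

if __name__ == "__main__":
    sys.exit(check(Path("coverage_summary.json")))  # hypothetical file
```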
| - A function is not being considered, although it is part of the certified subset
| - yes
| - | `symbol-report` is developed to use exactly the same information as the compiler
| | Additionally, every new tool release is tested by running tests in prepared integration testsuite to detect such errors. (PROPOSAL POINT)
Also in this case the user expectation will lead in some cases to a failure detection.
| :widths: 1 2 8 2 6 4 2 2
|
| * - Malfunction identification
| - Use case description
Layout should be overall the same, using https://eclipse-score.github.io/process_description//main/folder_templates/tools/tool_verification_report_template.html
| :version: 1.90.0 (see [1])
| :tcl: HIGH
| :safety_affected: YES
| :security_affected: NO
Why NO? Security also requires coverage.
| ----------------------------
| Installation
| ~~~~~~~~~~~~
| | To add the Code coverage to your project or module follow guidelines in WIP
Is there a link to WIP available, or nothing at the moment? If nothing, then add an issue to follow up with that and link it here.
| ~~~~~~~~~~~
| Requires Rust toolchain and Bazel build environment.
|
| Safety evaluation
Compare the template report and other examples and add the missing Security Evaluation; you can keep it blank for now.
| - Overreporting, could result in testing gap.
| - yes
| - | Likelihood of such an error low due to wide usage of the tool (many S-CORE modules and other projects like ferrocene)
| | Additionally, every new tool release is tested by running tests in prepared integration testsuite to detect such errors. (PROPOSAL POINT)
What does Proposal Point mean? An additional measure to reduce the risk? Then add an issue and link it here so that we can follow up, as it then supports the argumentation why the rating is high.
If Proposal Point is an additional measure, update the table with "additional measure needed: yes".
| - Underreporting, will not result in testing gap.
| - yes
| - Since we want to achieve 90%+ branch coverage this would stand out and be manually investigated.
| - yes
Are the 90% documented somewhere? Then link to it.