Feedback from @Falidoost - thank you!
This continues on from #134
Ch5 - Output analysis:
Initialisation bias:
- In adding a warm-up period to the model section code snippet, two parameters have been added but number_of_doctors=3 is also bolded. See image below: --> The highlight may correspond to a change anywhere on that line, not necessarily to the entire line. Afraid I can't modify that behaviour to allow me to partially highlight lines, so I have instead added a statement to this effect by the first instance of code highlighting on each page, to help make this a bit clearer. (A rough sketch of the warm-up parameters follows this list.)
- typo, see image below:
- When I run the first block for the first time in a new Jupyter Book I encounter the following error, though running it a second time is error-free: --> Glad it worked the second time! Kept a log of the error in this issue (Environment bug identified by Fatemeh #142).
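(For orientation, here is a minimal sketch of what adding a warm-up period to a parameter set can look like; apart from number_of_doctors, the parameter names are illustrative assumptions rather than the book's own identifiers.)

```python
from dataclasses import dataclass


@dataclass
class Param:
    """Illustrative parameter class (names are assumptions, not the book's own).

    Results gathered during the warm-up period would be discarded, so that the
    reported measures reflect the system once it has settled down.
    """
    number_of_doctors: int = 3
    warm_up_period: float = 500.0            # time units to run before collecting results
    data_collection_period: float = 5000.0   # time units over which results are recorded

    @property
    def total_run_length(self) -> float:
        # The model runs for warm-up + data collection; only the latter is analysed.
        return self.warm_up_period + self.data_collection_period
```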
Performance measures:
- I think it would be better to introduce the queueing theory definition when we refer to it later in the mean wait time section. Here it seems to come from nowhere straight into queueing theory. Further, a hint/reference to a short YouTube video might be helpful if anyone requires more explanation (e.g. https://youtu.be/SqSUJ0UYWMQ?si=O4XdIs49jjI4zQOa). --> Thanks for the video link! I've added this in. I decided to keep this intro at the start of the page, as each output is in a collapse box, so if mean wait time isn't opened first, people will see the term queueing theory before seeing its definition. However, you make a very good point that it feels a bit weird opening the page and jumping straight into queueing theory! So I've added more of an introduction to the page, and hidden the explanation of queueing theory in a collapse box.
- Typo:
- Might seem obvious, but do we need any explanation here for why np.nan is used? --> Have added an explanation. (See also the short sketch after this list.)
- Since we have been incrementally adding performance measures throughout this section, I suggest including the performance measures from previously discussed sections (e.g. mean wait time) in the code snippet. This way we can build the complete KPI table, mirroring the procedure used in a real simulation project, and we can compare them at the end of the section. --> This is currently available on the following page - therefore, I've added two paragraphs at the start of the output measures page to explain that it will add them individually here, but to see all at once, check out the following page.
- In the utilisation measure, what is the curve? Any visualisation/explanation would be helpful. --> I've removed the formula and changed it to a simpler, descriptive explanation, which then refers to the "Time-weighted averages" section at the end of the page, which provides a clear visual explanation with the curve.
- Also for mean queue length, see my previous comment. --> As above.
- In the "mean time in system" --> run the model --> np.float64? --> I've add explanation of what
np.float64()is.
- In the "backlogged patient count" --> run the model --> np.int64? --> I've add explanation of what
np.fint64()is.
- In the "backlogged patient mean wait time" --> run the model --> np.float64? --> I've add explanation of what
np.float64()is.
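(To illustrate the points above, here is a small hedged sketch, not the book's actual code, showing why np.nan might be recorded for unfinished waits, how a time-weighted average is calculated, and why results come back as NumPy scalar types such as np.float64 and np.int64.)

```python
import numpy as np

# Wait times recorded per patient; np.nan marks patients still queueing at the
# end of the run, so they don't silently distort the mean.
wait_times = np.array([4.2, 0.0, 7.5, np.nan, 2.1])
mean_wait = np.nanmean(wait_times)  # ignores the unfinished patient

# Time-weighted average resource use: each observed level is weighted by how
# long it persisted, rather than averaging the raw observations.
times = np.array([0.0, 10.0, 25.0, 40.0, 60.0])  # times at which the level changed
in_use = np.array([1, 3, 2, 3])                  # doctors busy during each interval
durations = np.diff(times)
utilisation = (in_use * durations).sum() / (durations.sum() * 3)  # 3 doctors total

# NumPy aggregations return NumPy scalar types (np.float64, np.int64) rather
# than plain Python numbers; they behave like floats/ints and can be converted
# explicitly with float() or int() if preferred.
print(type(mean_wait))                  # <class 'numpy.float64'>
print(float(mean_wait), round(utilisation, 3))
```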
Replications:
- I would suggest adding a brief explanation/comparison for the image below, e.g. variability in PM1 vs PM2: --> Have added a paragraph explaining this.
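(A rough sketch of the replication idea behind that comparison; run_model and its outputs here are placeholders rather than the book's API.)

```python
import numpy as np


def run_model(seed: int) -> dict:
    """Placeholder for a single simulation replication, returning one value per
    performance measure. In the real model this would run the simulation."""
    rng = np.random.default_rng(seed)
    return {"mean_wait": rng.normal(5, 2), "utilisation": rng.normal(0.8, 0.02)}


# Each replication uses a different random seed, so the spread across
# replications reflects the stochastic variability in each measure.
results = [run_model(seed) for seed in range(30)]
for measure in ["mean_wait", "utilisation"]:
    values = np.array([r[measure] for r in results])
    print(f"{measure}: mean={values.mean():.3f}, std={values.std(ddof=1):.3f}")
```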
Length of warm-up
- I would suggest bringing in the code snippets from the replication section, mentioning those that don't require changes, and then providing the required code for the length of warm-up, for the sake of consistency, as has been done in other sections.
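(In case it helps, a rough sketch of the time-series inspection idea that a length-of-warm-up analysis typically rests on; the data and names here are illustrative only, not the book's code.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative audit of a performance measure recorded at regular intervals
# across several replications (in practice this would come from the model).
rng = np.random.default_rng(42)
n_reps, n_points = 10, 100
series = 5 + 3 * np.exp(-np.arange(n_points) / 15) + rng.normal(0, 0.5, (n_reps, n_points))

# Average across replications, then take the cumulative mean over time; the
# point where the cumulative mean flattens out suggests a warm-up length.
mean_over_reps = series.mean(axis=0)
cumulative_mean = np.cumsum(mean_over_reps) / np.arange(1, n_points + 1)

plt.plot(cumulative_mean)
plt.xlabel("Observation interval")
plt.ylabel("Cumulative mean of performance measure")
plt.show()
```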
Number of replications
- In the "manual inspection" part, I think the light blue area is for both the upper and lower limits, but the legend does not read like that. Ignore this comment if you don't feel it's needed. --> I agree with your comment - but this is a function imported from sim-tools (plotly_confidence_interval_method), so it's a little harder for me to tweak. I have decided this is fairly minor so will not address it, as it's more complicated with it being from sim-tools, if that's alright. (A rough sketch of the underlying confidence-interval logic is included after this list.)
- I have not run the model as I encountered a persistent error, but it would be good to know whether the classes mentioned in the automated option (see image below) are going to be added to, or replace, a class in the replication model (like what you did in the "Parallel processing" section). I think the reader might get lost a bit here. However, it might be due to me not being able to run the model myself in this section, so feel free to ignore this comment :)
Also, a brief note on runtime_checkable would be helpful, if possible --> I've edited the automated section to try and make this a bit clearer. (A short sketch of runtime_checkable follows at the end of this list.)
- I dropped in the last couple of code snippets (the ones after "Choosing an appropriate number of cores") after the runner class, and I encountered the following error: --> I've created a separate issue to investigate this: sim-tools bug identified by Fatemeh #147.
- It might be good to document some of these errors and provide readers with some possible solutions... This could be applied to all sections. --> If I manage to work out why Environment bug identified by Fatemeh #142 and sim-tools bug identified by Fatemeh #147 have occurred, I will do this! As of yet, I am still unsure.
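(Since a brief note on runtime_checkable was requested, here is a minimal sketch of what it does; the protocol name and method are illustrative, not the book's actual classes.)

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class ModelLike(Protocol):
    """Structural type: anything with a matching run() method "counts" as ModelLike.

    The @runtime_checkable decorator allows isinstance() checks against the
    protocol, which would otherwise raise a TypeError for Protocol classes.
    """
    def run(self) -> dict: ...


class MyModel:
    def run(self) -> dict:
        return {"mean_wait": 4.2}


print(isinstance(MyModel(), ModelLike))  # True - it has a run() method
```

(And, relating to the automated choice of the number of replications, a rough sketch of the confidence-interval logic that plotly_confidence_interval_method visualises; this is not the sim-tools implementation, just the general idea.)

```python
import numpy as np
from scipy import stats


def reps_needed(results: np.ndarray, desired_precision: float = 0.05) -> int:
    """Smallest number of replications after which the 95% confidence interval
    half-width is within desired_precision of the cumulative mean."""
    for n in range(3, len(results) + 1):
        sample = results[:n]
        mean = sample.mean()
        half_width = stats.t.ppf(0.975, n - 1) * sample.std(ddof=1) / np.sqrt(n)
        if half_width / mean < desired_precision:
            return n
    return len(results)


rng = np.random.default_rng(0)
waits = rng.normal(5, 1, size=200)  # illustrative replication results
print(reps_needed(waits))
```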
Ch6 - Experimentation:
Scenario and sensitivity analysis
- The highlighted sentence reads a bit odd:
Full run
- I got lost at the following example script. Where should I write this? In PowerShell? --> Have added an explanation of this!
- I would suggest using images to guide the reader in this section. --> I've fixed the illustrative full run image at the start of the page. Elsewhere, I considered that I could add images showing the bash console where a script is run, but that is no different to the commands provided within the page as is. I hope the extra sections explaining what bash is help clarify? I otherwise wasn't sure where else to add images.
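(A tiny hedged sketch of where such a script sits; the filename and function below are placeholders, not the book's actual script.)

```python
# run_full.py - placeholder name for a full-run script.
# From a bash (or PowerShell) terminal this would be launched with:
#     python run_full.py
# rather than being typed into a Python/Jupyter cell.

def main() -> None:
    # In the real script this would build the parameters, run all scenarios
    # and replications, and save the results to file.
    print("Running full set of experiments...")


if __name__ == "__main__":
    main()
```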
Ch6 - Verification and validation:
Verification and validation
I liked the what does this mean in practice sections :) --> Thanks!
Tests
- I would suggest bringing in the code snippets that the run is continuing from (either from parallel processing or scenario analysis) and then adding the snippets from this section. This would be consistent with other sections where the book continues the model run from earlier sections, and would improve readability.
Or, it might be good to have the .ipynb files for each section available at the end of the sections, with the same naming as in the book.
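(For readers new to the tests section, a minimal hedged sketch of what one test might look like; the stand-in run_model function is a placeholder for the book's actual model.)

```python
# test_model.py - illustrative only; real tests would import the book's
# actual model classes rather than this stand-in.
import numpy as np


def run_model(number_of_doctors: int, seed: int = 0) -> dict:
    """Stand-in for the simulation; returns performance measures."""
    rng = np.random.default_rng(seed)
    return {"mean_wait": max(0.0, rng.normal(10 / number_of_doctors, 1))}


def test_more_doctors_does_not_increase_waits():
    # Extreme-value / sense check: adding capacity should not increase waits.
    few = run_model(number_of_doctors=2, seed=42)
    many = run_model(number_of_doctors=10, seed=42)
    assert many["mean_wait"] <= few["mean_wait"]
```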
Quality assurance
- Typo:
