
Application Lifecycle Management


Current Task Priority List (Updated 5/10/24):

1st Priority:

  • M4 R:
    • N/A
  • Services
    1. RFC Forecast Ingest
      • by default, get this data with every forecast run & display it as a member --> don't automatically start it as enabled, but adding it automatically will allow them to view it --> so the user won't need to click the plus button on the bottom and manually add it every time
    2. G: SNOTEL monthly data is being constructed from the SNOTEL daily data files instead of the monthly endpoint
      1. the back-estimate (backEST) data only exists in the monthly data call
      2. https://nwcc-apps.sc.egov.usda.gov/awdb/sitedata/MONTHLY/WTEQ/
      3. how to know in Persephone if they want the predictor to be monthly
        1. for SNOTEL stations, with WTEQ selected --> if date is relative and unit is 'months' then specifically go to the Monthly endpoint (see the sketch after this Services list)
          1. *Will require passing the 'unit' value to the service call for the predictor --> Mikka will discuss with Olaf and determine if this is possible and/or how big of a lift
    3. Create service to handle and return correlation coefficients
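
    A minimal sketch of the endpoint choice in item 2 above, assuming a Python-style helper; the function name, parameters, and the assumption that the DAILY/WTEQ folder sits under the same awdb/sitedata base are illustrative, not the actual service code:

    ```python
    # Assumption: DAILY/WTEQ and MONTHLY/WTEQ folders share the same awdb/sitedata base URL.
    BASE = "https://nwcc-apps.sc.egov.usda.gov/awdb/sitedata"

    def wteq_folder(date_is_relative: bool, unit: str) -> str:
        """Pick the SNOTEL WTEQ sitedata folder for a predictor.

        Per the note above: a relative date with unit 'months' means the MONTHLY
        endpoint (where the backEST data lives); otherwise keep using DAILY.
        """
        if date_is_relative and unit == "months":
            return f"{BASE}/MONTHLY/WTEQ/"
        return f"{BASE}/DAILY/WTEQ/"

    print(wteq_folder(date_is_relative=True, unit="months"))
    ```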
  • Forecast Publishing:
    • N/A
  • Persephone
    • Bugs/Features NWCC shared:
      1. Kevin: building with dates ranging from 9-1 --> 4-1 should work --> this is returning an error though…
      2. J/G: If routed predictor(s) in step 2, do one of the following:
        1. display routed icon either at top of list or over element dropdown
      3. G: If a child scenario has a shortened target period, but there's no observed flow data that can be added to this, don't display the forecast values in evolution plot at all, or display them, but alert user that these are values for shortened target period
      4. G: Alert about old build is still appearing after first build of scenario (after scenario was added to a batch)
        1. Gus finds if he rebuilds then it goes away
      5. P: it appears that the partial observed flow (the green line) won't plot on the chart until you're at least 2 months into the target period, because currently it doesn't actually plot a point on the line like in the regular part of the graph --> see if we can get a point to display for each month of observed flow data, so a single month will still display without needing to hover.
      6. All: Weird stickiness issues with changing years in evolution plot --> changing years shows the newly selected year in the dropdown; however the evolution plot doesn't update accordingly --> have to switch years and come back to force the evolution plot to update
      7. L: When headless forecasts run, change the year selected for that scenario, the siblings and parent to be the year that the forecast was just run for
      8. L: When you run forecast via cross-project view, change the year selected for that scenario, the siblings and parent to be the year that the forecast was just run for
      9. L: Add ability in cross-project view to stop currently running forecasts (when you use the 'generate forecasts' button)
      10. G: Batch duplication target period change:
        1. Make this automatic logic optional --> give user choice if they want to do this or not
          1. the default should be the automatic change logic
        2. This same automatic logic, and the option to skip it, should also be applied if you are adding an individual scenario to an existing batch
      11. G: inappropriate disabled logic in scenario overflow menu --> says you can't train, even though in step 4 it allows you to train (scenario has been built, trained and forecasted, so you should be able to rerun all)
        1. Gus's example (however this appears to always happen): Sprague R nr Chiloquin > Operational > 03-01
      12. L: Change alert message in parent scenario if you run all forecasts and one or multiple are missing wcis data --> currently says you need to forecast children still
      13. G: Slow load time when selecting bast file in cross project view - can we alter how we're getting the data so it doesn't get the bulk of the data per scenario until a pub date and/or target period is selected, and user clicks 'Search' --> then the additional filters would just narrow down the forecasts displayed…?
      14. G: Lower priority --> option to edit all of the forecast wcis values
        1. would require prompting when moving forward and forecasting --> if the user added an edited value, and the wcis returns data at a later point --> in this case, prompt the user and have them select whether they want to use their edited value or the wcis data value
          1. currently, the wcis data value overwrites anything that currently exists
      15. G: Evolution plot
        1. constrain total flow decimal points to 3 significant digits
        2. 10-90%
          1. The values represented after this should be flipped (the bigger number is the 10% value)
          2. **Same for the 30-70% values
        3. The values displayed should respect the individual forecast's 'rounding' selection
          1. **when we have to combine forecast with observed flow, the 'rounding' application should ideally be applied after these values are combined
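
        A minimal sketch of the exceedance labeling and rounding order described in this item; the function names and the rounding callback are illustrative assumptions, not the actual plot code:

        ```python
        def label_exceedance_pair(low_pct: int, high_pct: int, values: tuple) -> dict:
            """Label a pair of exceedance values so the larger flow gets the lower
            exceedance probability (i.e. the bigger number is the 10% value)."""
            v_small, v_large = sorted(values)
            return {f"{low_pct}%": v_large, f"{high_pct}%": v_small}

        def displayed_total(forecast_part: float, observed_part: float, rounding) -> float:
            """Combine the forecast with observed flow first, then apply the
            forecast's 'rounding' selection to the combined total."""
            return rounding(forecast_part + observed_part)

        print(label_exceedance_pair(10, 90, (120.0, 480.0)))   # {'10%': 480.0, '90%': 120.0}
        print(displayed_total(350.127, 42.568, lambda x: round(x, 1)))   # 392.7
        ```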
      16. G: Add ability in template overflow to promote all children's status
        1. Biggest reason for this request is for daily models - promoting to 'Guidance'
      17. G: in FMT report and maybe in scenario list, highlight scenario if target period start is before pub date
      18. G: Add in check when creating bulk scenario --> alert user and don't let them move forward if pub date end is after target period end
      19. G: Predictor absolute start date --> if you delete the predictor start date, it says there needs to be a date, but it still allows you to build, and just builds and updates the predictor start with the default '10-01' start date
        1. Gus has some cases where the P predictor start was actually empty and didn't have any alerts --> check to make sure that if the predictor start is '' or null, both are flagged as needing a start date, and the build is not allowed….
      20. Olaf: Create option to sanity check validity of scenarios within an individual project - we can add validity checks as we want; but first one we could add is if the pub date is after the target period start
        1. maybe put this by the project rename button in the header --> use similar pop-up and process to log file
    • DB/Other/To-Ponder
      1. Low priority --> RARE issue they ran into --> one of Lexi's projects/forecast points triplet changed…
        • **mostly is an issue with streamflow stations; happens because of lack of protocol when station is entered into db
          • Not only can a station's triplet change, but there is also a corner case where a newly added station can take the old triplet
        • Need to think through how we'll handle these cases on the rare occasions they arise, couple ideas thrown out in meeting:
          • Do we want to change it in our db so that it's not the unique identifier, and just the field id is?
          • Or, do we want to put an option in the UI to allow the user (maybe just admin) to specify if a station's triplet changed, and what it changed to?
    • Tasks with blockers:
      1. G: add in station overflow menu (step 2) correlation coefficient (correlation from predictor to target)
        1. summary of all the predictors would be preferable, but per predictor would be nice
      2. G/L: Intermittent issues creating FMT report; usually just one proj > one scenario that's causing issue
        1. One issue Mikka found was if a forecast failed, then the FMT report wouldn't create --> now the values in the table and FMT report display as 'FAIL' if the forecast failed; however Gus said that he knew the scenarios he was trying to create an FMT report for had all succeeded…. he couldn't remember exactly which project it was --> they said they'd keep an eye on this and let us know when they experience it again
        2. Gus will look into and find a project he knows was causing this
    • Intermittent issues to keep in back of our mind, but not highest priority:
      1. trying to change name in master scenario not working
      2. when clicking between scenarios, it would jump her back to the last scenario in her list; after refresh this stopped happening…. making this low priority, and just adding to 'intermittent bugs' category as it's not repeatable, but we at least want it on our radar for if we run into it again
      3. Build CSIP-related failures: **believe these were related to our intermittent server restarting issues…
        • "No station data for [predictor]"
        • "Error [link to SRVO site data json file] error 403"
      4. Forecasts fail when run via one method, but then succeed via running it another way --> not sure if it's actually the way you run it that matters or just happened to be the timing of the runs, or something else… they will send us ex projects when they run into this again
        • if you run all the forecasts via the parent, one of the forecasts fail; but when you run the scenario via the individual scenario overflow menu, 'run forecast' it worked…
        • Lexi saw this also --> it failed when she ran it from cross-project view; but when she went to the individual scenario and re-ran it, it worked
      5. G: Lower Priority --> Occasionally he will see that the table is pushed down below the screen, so he can't see the part of the table where you can go to the next page of results… can't determine exact steps to reproduce
      6. L: When she trained via the list, but then went into the individual scenario, she couldn't see the training results --> said the settings didn't match; however they did --> when she refreshed the page, she could see the results
        1. Gus has also seen a slight variation of this as well
          1. sometimes it shows the results and shows the message, other times it doesn't show results
          2. related to training that just completed


2nd Priority:

  • M4 R:
    • N/A
  • Services:
    • WCIS:
      1. Review and start using the WCIS REST API: https://wcc.sc.egov.usda.gov/awdbRestApi/swagger-ui.html
        1. * Once we've reviewed this, set up a meeting with Beau to discuss/clarify any questions we may have
      2. ability to use the following elements for predictor: SMS, SRDOO & SRDOX
      3. for WTEQ element, if user chooses 'months' as relative unit --> Gus is hoping that we can let the service know that 'months' is chosen, and if so --> he would like it to look at the 'MONTHLY/WTEQ' files instead of 'DAILY/WTEQ', because there is backEST data available ONLY in the monthly folder
      4. Ability to select 'P' for a 'COOP' predictor; the ex. Lexi showed in the 4/27 demo was 'White Sulphur Springs 2'; selecting 'P' here means 'PRCP' (not 'PREC') and monthly data, so we'll have to handle this differently
        1. *As of 11/16/23 they didn't have 'PRCP' data in their sitedata files --> so, have to wait for them to get this before we can do this task
  • Forecast Publishing:
    1. if multiple users have same forecast station - the api will pull ALL the operational forecasts for specific pub date/water year --> currently no safeguard to prevent multiple operational forecasts for same forecast station/pub date
      1. for now, they will just be diligent about making sure they don't have the same target stations across users, with same pub dates and a status of 'OPERATIONAL' --> but maybe later at some point we will try to implement some safeguard, not sure what that would be
  • Persephone:
    • Project Related:
      • add ability to transfer projects
    • Admin:
      • Figma:
        • figure out how an admin user can go in and change the role of other users
        • figure out how an admin can retrieve individual forecast inputs/results from another user's account
        • ability for admin to manually call for the stations or watershed boundaries to be updated in the database
    • Map:
      • can we add option to expand the bottom to cover the whole map…. or at least make the bottom part a little bigger?
    • Cross-Projects View:
      • Add ability to view skill metrics via cross-project view (across forecasts/projects)
      • Add ability to view a map of the forecasts/projects selected via the cross-project view --> they will talk through what all they might want to see on the map
    • Step 2:
      • ability to choose the following elements for predictor: SMS, SRDOO & SRDOX
      • add option when 'Applying start/date to all' to allow user to specify if they only want it to be applied to predictors with certain element (ex. only apply this date to all predictors using 'WTEQ')
    • Training:
      • ability to manually edit the ensemble members excluded from M4, instead of just accepting what M4 says
        • Steps:
          • Train scenario
          • Review visuals of this training
          • Allow user to manually change the exclusions - would need to be able to explicitly outline all 10
    • Forecasts:
      • Storing forecast values per status --> so, store the forecast values when the status was preliminary, then when it's changed to final, store the forecast values for this as well; so in case it changed after being turned to final, they would have an audit of this
      • have an audit of who changed the forecast publish status and when
    • Routed Models:
      • Can we provide option to download/upload routed models, and have the association maintained?
      • If you delete an 'upstream' model --> should we delete all downstream models?
        • Currently, the downstream model shows as having no predictors; but still shows as having forecast results…

Later:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • N/A
  • Persephone:
    • Step 2:
      • ability for start date to have relative year specification (like '10-01 -1')
    • DB/Other/To-Ponder:
      • for an organization association per user --> when will this be set for the user? are there any limitations for app-use based on the organization?
      • Allow user-generated training file to be pushed from Persephone to M4 (essentially skipping steps 1 - 3, and uploading own build file to step 4)
    • Step 4:
      • changing the pdf visualization for training results --> view them instead in a chart-like fashion, to more easily digest and maneuver between
    • Include external forecasts:
      • Ability to weight members



Deployment V 0.3.39 on 4/26/24

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • If building with a routed predictor, build fails --> because in exclusions csv file, the member name is 'PCMCQRN', whereas the actual file names for this member are 'PCMCQRNN'
    • Climate data doesn't produce a climate data graph for snow stations (step 2 overflow menu --> climate data)
  • Forecast Publishing:
    • G/B: if user requests verbose for forecast publish api, add option to specify verbosity --> ex: only get verbose training settings, and/or build settings, and/or skill metrics
  • Persephone
    • Bugs NWCC shared
      1. L: ability to view 'climate data' for SNOW courses
      2. G: Differing target periods for children if pub date is after target period start:
        1. Add duplication logic --> if pub date is >= target period start, then pub date becomes new target period start (see the sketch after this item)
          1. Jan 1 - Jun 1 pub date
          2. Apr - Jul Target Period
            1. May 1 pub date --> target period: May - Jul
            2. Jun 1 pub date --> target period: Jun - Jul
          3. **Limitation:
            1. Can't do this for daily models, or pub dates that aren't 1st/16th (16th only works if data exists)…
        2. Evolution plot:
          1. to normalize the months we altered the target period for --> add back the observed data that is missing:
            1. May 1 pub date --> target period: May - Jul
              1. Add April exceedance data to essentially make this the original Apr - Jul target period
                1. *If April obs isn't available - what should we do? NWCC team will discuss
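
        A minimal sketch of the duplication rule above, using Python dates; the function name and signature are illustrative, not the actual duplication code:

        ```python
        from datetime import date

        def adjust_target_period(pub_date: date, tp_start: date, tp_end: date) -> tuple:
            """If the pub date is on or after the target period start, the pub date
            becomes the new target period start; otherwise leave the period alone."""
            if pub_date >= tp_start:
                return pub_date, tp_end
            return tp_start, tp_end

        # Apr - Jul target period:
        print(adjust_target_period(date(2024, 5, 1), date(2024, 4, 1), date(2024, 7, 31)))  # May - Jul
        print(adjust_target_period(date(2024, 6, 1), date(2024, 4, 1), date(2024, 7, 31)))  # Jun - Jul
        ```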
      3. G: Pred list duplication bug
        1. Gus sent an email to Mikka on 4/22 about this, subject: 'Predictor List bug report'
          1. I’ve come across a bug that I’m not sure I fully understand but will try to describe as best I can. It happens when I have created a predictor list with a model form, then remove that model form and apply a new one or just manually rearrange the predictor list. When I build the data file, it returns a message that the list has changed and needs to be rebuilt, but if I click away and back it updates, but now with the duplicates. It’s as if the build causes the original predictor list to return and be merged with the new selections.
      4. G: Changing predictor list via parent --> need to make sure the abs/rel date selections for an SRVO are the same as what we have them when editing in an individual scenario (shouldn't have a 'days' selection)
        1. Also make it so user can add a routed predictor via the parent (however, this would be selecting an 'upstream' parent scenario --> then when applied to the children, the children would route to the appropriate pub date child underneath 'upstream' parent selected)
      5. G: Timing update issue - Step 2 alert if needs to be rebuilt because predictors changed --> if you build, then delete model form, apply new model form, then re-build --> after this rebuild, the alert that it needs to be rebuilt is still there…. however if you click away and come back it's gone
      6. G/B: if user requests verbose for forecast publish api, add option to specify verbosity --> ex: only get verbose training settings, and/or build settings, and/or skill metrics
      7. G: ability to create a model form entry for SRVO element
        1. Can't prevent user from selecting certain dates (semi-monthly or not) since those are decided based on station, not element….
          1. make sure this is maintained; it will then just fail --> if you apply the SRVO model form with semi-monthly dates, and the station doesn't have semi-monthly data, still apply the dates, and the user will be alerted on-run about the failure



Deployment V 0.3.38 on 4/18/24

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • if user requests verbose for forecast publish api, add training results skill metrics to this (table displayed in step 4)
    • Add 'GUIDANCE' as 'scenario_status' option
  • Persephone:
    • MVP Tasks
      1. G: Introduce scenario status flag 'Guidance'
        • supports automated generation and publishing of forecasts --> daily or weekly cadence of forecasts
        • This flag is more similar to 'DEV' than 'OPER' --> so, if it's 'Guidance', still can't change forecast status
    • Bugs NWCC shared
      1. G: Allow child scenario target periods to be changed
      2. K: Make it apparent that the scenario name in the cross-project view is a hyperlink
      3. G: Lower priority --> evolution plot edits
        1. If we're not yet in target period, 'Total flow', should be 'null' or '0'
          1. currently this is displaying as '-#'
          2. once you're in target period, then it has a value
        2. Ability to Download this plot as an image doesn't work --> the y-axis appears to be collapsed
        3. convert values displayed onHover to KAF - and only display with 1 or 2 decimals
      4. Kevin: make it so custom values are sticky if this same predictor (dates, triplet, element and forecast year) is used in another scenario (essentially putting back in the project-stickiness association)
      5. L: if a member has been pruned by M4, or has 'inf' or 'na' values; and is a box-cox member --> this is then permanently disabled; so in this case, enable the non-box-cox version of this member
      6. L: if a member has a negative 50% value, start this out as disabled --> usually if one is negative, both are --> so, can't switch this out for the non-box-cox
      7. G: Lower priority --> Can we maintain selected scenario position in list on left?
        1. If Gus has a batch of scenarios with daily scenarios across 4 months - he scrolls down to select a scenario - when the entire page reloads (because it was just selected) he is taken back to the top of the list…. just annoying
      8. G: FMT report addition: if >1 pub date is selected (or no pub dates selected), display pub date in rows of FMT report
      9. L: Add option to ALSO get 'csv' format for the FMT report
      10. G: Time of build/train/forecast displayed onHover (in step 1), says that it's Pacific Daylight Time, but it's actually an hour ahead
      11. L: If you add '0.0' for a custom forecast wcis value --> it still lets you forecast, however, it displays as 'Add Value' still
      12. G: When a user retrains a scenario, we should delete the local storage of the old forecasts
        1. When Gus downloaded the forecast results (he had previously forecasted this scenario, then re-trained, and forecasted again, then downloaded the forecast results), he looked into the build_file that was downloaded, and inside forecasts > 2024, there were all the forecasts in there….
        2. I believe we are deleting the old forecasts from the db, but it appears we aren't deleting the local storage



Deployment V 0.3.37 on 4/11/24

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • G: Routed predictor step 2 build
      1. Take into account the members outlined as needing to be excluded in the: "AutomatedEnsembleMemberExclusions.csv"
      2. When taking the average, instead of using all 6, just use included ones
        1. use max of 6 members
          1. Possible M4 code change - be explicit in AutomatedEnsembleMemberExclusions file to explicitly outline ALL 10 members and say if they're excluded or not
            1. In the meantime, honor the exclusions that are here --> don't replace a BC exclusion with a non_box_cox
              1. If there is a 'Yes' in a column, we don't include that member at all (neither BC or non_box_cox)
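
      A minimal sketch of the exclusion handling above, assuming a simple CSV with one row per member and a yes/no 'Excluded' column; the actual AutomatedEnsembleMemberExclusions.csv layout and column names may differ:

      ```python
      import csv

      def routed_average(member_values: dict, exclusions_csv: str) -> float:
          """member_values: member name -> value from the upstream scenario.
          Average only the included members, using at most 6 of them."""
          excluded = set()
          with open(exclusions_csv, newline="") as f:
              for row in csv.DictReader(f):
                  if row.get("Excluded", "").strip().lower() == "yes":
                      name = row["Member"]
                      base = name[:-3] if name.endswith("_BC") else name
                      # a 'Yes' excludes the member entirely: both BC and non-box-cox variants
                      excluded.update({base, base + "_BC"})
          included = [v for m, v in member_values.items() if m not in excluded][:6]
          if not included:
              raise ValueError("no included members to average")
          return sum(included) / len(included)
      ```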
  • Forecast Publishing:
    • N/A
  • Persephone:
    • MVP Tasks
      1. L: Batch routed predictor creation --> when creating batch, should take whatever the user entered into the prompt; appears to be honoring only the initial scenario's target period, not what's entered into the prompt
      2. G: when duplicating a routed model with 2 predictors, the first predictor seems to be handled correctly but the second predictor ends up just being converted to observed streamflow predictor rather than the routed predictor.
      3. G: Batch duplicating routed model desired functionality:
        • they will create a batch from a scenario that points at a child in the upstream scenario
          • don't have any restrictions on 'OPER' status
          • don't NOT create the batch if ALL the pub dates don't match, go ahead and create association for ones that do
            • So, if upstream only has 01-01, 03-01, 04-01; and the newly created downstream batch has pub dates 01-01 - 04-01, Monthly, only create 01-01, 03-01, & 04-01 as children in the newly created downstream batch
              • maybe alert user that the individual child scenarios weren't created…. but not required
          • If it points at an individual scenario - display alert and tell user they can't move forward
    • Bugs NWCC shared
      1. G: There is now a significant lag time between choosing the project for routing and the ability to choose the desired scenario. I assume this is where the dynamic search is happening, but still seems longer than I would have expected.
      2. G: When project(s) are selected in cross-project view, can we display 'loading' message while forecasts are being queried for and loaded in
        1. **ex: if you have forecasts in list, filter down to only see scenario status of 'OPER', go to models, come back --> if you add the scenario status of 'DEV', it takes a while for the 'DEV' forecasts to show up in table, it does eventually show, but for user, they don't see anything happening… even just alerting them it's loading would be nice
      3. Kevin: FMT report creation --> need to add some ordering logic --> Apr-Jul should always be above Apr-Sep
        1. Kevin sent an email with ex. FMT report and details on 2/29 - subject line: FMT reports
        2. G: This same ordering logic should be applied to both the table and the FMT report --> as long as the forecasts in each project are sorted with the same logic, that's the biggest desire --> so, Gus said we can maybe just sort the forecasts alphabetically by target period
      4. G: When you forecast via cross-project view, and the wcis part of the call has missing data, the forecast doesn’t show up in this table; the only way to know this is the issue is going to the individual scenario… is there a way we can alert user about this so they know to go add custom values?
      5. L: when she switches project, she is getting the site crash --> she thinks it may be associated with if the project has routed models or not --> Patrick doesn't get this, and he doesn't have any routed models…..
        1. **As of 3/27, Dave thinks he may have fixed this --> but since we can't recreate the issue, we're not sure --> definitely made a change to fix a bug, just hoping it is the same issue Lexi is experiencing
      6. G: Cross Project now retains Bast/Project Group but not Projects
      7. Kevin: If you want to investigate an individual scenario from the cross-project view, allow them to just click on the scenario name and go to the models view and already be on that project and scenario
      8. G: The streamflow obs in the evolution plots are now off by a month (example April-July target is actually March-June obs)
      9. G: 3rd level of sorting logic for cross-project table sort --> If the pub date, and target period are the same, then sort the scenarios by forecast status
      10. L: SRVO Pred 'Date' value not always setting appropriately
        1. When Lexi added an SRVO predictor, then checked the 'Abs Start' and 'Abs End' checkbox, then selected the desired start/end --> she had selected an end date of 'Apr L' --> however this got stored in the db as '05-01' instead of the appropriate '04-16'
        2. interim workaround - change date selection, then change it back --> this appears to correctly set it as '04-16'
        3. the ex. proj she shared via email on 3/29, subject: 'Error adding streamflow as a predictor' --> proj: Kuskokwim R at Crooked Creek --> scenario: May-SRVO
      11. Kevin: Applying a model form if you have a SNOW predictor --> snow gets deleted (even if there is snow in the model form)
        1. Decision in 4/4 mtg for how to handle this: **If a model form is applied, and the predictors can't have that model form applied - just disable, don't delete
      12. Kevin: If you apply a model form with a relative start date --> this is calculating a relative date to start of water year, not to pub date
        1. This should be relative to the pubDate always - regardless of which element is selected (all relative dates should be relative to pubDate) - whether it's the model form applying start for a 'P' element, or when a USGS station first gets added
      13. L: Safeguard relative date selections for an SRVO predictor with only monthly data (currently both start/date calculate from pub date, so if you select relative monthly values for both, they will both be the start of the month, which signals a semi-monthly data fetch….)
      14. G: Allow same rel dates for an SRVO predictor (currently has warning color/note)
      15. G: Confirm how the publish status works across forecasts AND let them all know --> result of this investigation:
        1. If the forecast is rerun, and the results:
          1. change
            1. change the publish status from 'Preliminary' to 'Original'
          2. If no change in forecast results
            1. keep the status the forecast currently has
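
        A minimal sketch of the rerun rule above; names are illustrative and only the Preliminary --> Original transition described here is handled:

        ```python
        def publish_status_after_rerun(current_status: str, old_results, new_results) -> str:
            """If the rerun changed the results, move 'Preliminary' to 'Original';
            if nothing changed, keep whatever status the forecast already has."""
            if new_results != old_results and current_status == "Preliminary":
                return "Original"
            return current_status
        ```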
      16. G: Bast file upload intermittently not working correctly
      17. L: Project group creation not working - resulting in site crashing and/or table showing as completely empty when she selected the project group with this bug
        1. One of the project groups she had an issue with was uploaded via a bast file, the other was created manually
        2. For one of the project groups she remembers having deleted either the project(s) or all the scenarios within, and recreating (because there were forecasting errors that forced her to start fresh)... so maybe an issue when deleting a project within a project group
      18. P/Karl: Update forecasterName in Persephone **Wait for email from Gus saying to move forward with this
        1. Patrick: pkormos *he uses the usda login
        2. Karl: kwetlaufer



Deployment V 0.3.36 on 3/22/24 (Mini-Deployment on 3/26 - these highlighted in yellow)

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • L/P: The % of med isn't returning values for single month forecasts
  • Forecast Publishing:
    • N/A
  • Persephone:
    • MVP Tasks
      1. L/G: Forecasting call(s) timeout if there are a lot of predictors
      2. G: Ensemble values are greyed out - always, regardless of ensemble members enabled
        1. The actual ensemble values should NEVER be greyed out, only the metrics, and these should only be greyed out if the default selections aren't what's chosen
      3. G: decrease cross-projects load time
      4. Assign correct predictor scenario when duplicating routed models - *first attempt at this included in this deployment - anticipate change requests/bugs
        1. Match pub date and target period and scenario status used in base scenario
        2. If more than one option:
          1. if there's more than one, and one of the scenario statuses is 'OPERATIONAL' - select this one
          2. otherwise: prompt user to select correct predictor scenario or maybe just duplicate without association and user would manually re-establish this
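
        A minimal sketch of the predictor-scenario matching above; the scenario fields and the fallback behavior (returning None to mean 'prompt the user, or duplicate without the association') are illustrative assumptions:

        ```python
        def pick_predictor_scenario(candidates: list, base: dict):
            """Match upstream scenarios on the base scenario's pub date and target
            period; if several match, prefer the single 'OPERATIONAL' one; otherwise
            return None (prompt the user, or duplicate without the association)."""
            matches = [s for s in candidates
                       if s["pub_date"] == base["pub_date"]
                       and s["target_period"] == base["target_period"]]
            if len(matches) == 1:
                return matches[0]
            operational = [s for s in matches if s["scenario_status"] == "OPERATIONAL"]
            if len(operational) == 1:
                return operational[0]
            return None
        ```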
      5. G: Site crashed when you tried to add a routed predictor
      6. G: Site crashed when you built a whole bunch of children at the same time
    • Bugs NWCC shared
      1. L: Project Group - Lower Colorado Bast 15 --> was able to run these forecasts via cross-project view, she could see it ran when she went to individual scenario, but it never showed up in cross-project table
        1. Lexi shared the projects that create this bast file on 3/6, subject: Projects for lower Colorado mid-month
        2. **Figured out this was an issue with deleting projects that were associated with project groups --> we found a workaround for this specific issue for her project group - just deleting and recreating; and created a whole new task to actually address the issue when you delete a project that's in a project group
      2. L: Cross project table breakage:
        • Steps to recreate:
          1. select a bast file
          2. narrow it down to this months pub date
          3. narrow it down to one water year
          4. narrow it down to scenario status
          5. Go to remove a project
            1. breaks and says, "cannot read properties of undefined (reading 'numRows')"
      3. G: Add target period to cross-projects table
      4. G: Add ability to filter the cross-projects table by: scenario status & target period
      5. G: decrease routed predictor addition load time
      6. G: Bast file dropdown selector sizing is weird… words run over edit/delete buttons
      7. If you want to go to a routed predictor's scenario, you can click the overflow menu (three dots) next to the routed predictor and 'Go to this routed predictor scenario' --> this will change the project and scenario you're on
      8. We hope the issue Lexi was experiencing with the site crashing sometimes when she would switch projects is fixed (we haven't been able to recreate it, so aren't sure, but did make a few changes)
      9. G: the project groups selection should be maintained along with all the other selections if you go from cross-projects, to models, back to cross-projects



Deployment V 0.3.35 on 3/12/24 (Mini-Deployment for bug fix on 3/14/24)

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • L/P: The % of med isn't returning values for single month forecasts
  • Forecast Publishing:
    • N/A
  • Persephone:
    • MVP Tasks
      • G: Forecast ensemble member disable stickiness is not staying sticky…
        • Ex. Gus had (worked on our call, but this is one where he saw it fail repeatedly): Purgatoire R at Trinidad --> Primary Target | Mar 1 - Jul 1 --> 01-01; when he first forecasts, the enabled ones are correct ('BCs & then for the 2 without BC, just the non-BC ones), he switches to all non-BC versions, goes to '02-01', reruns forecast, expects defaults to be what he had just previously changed to --> doesn't always work this way
        • **Possible short-term fix:
          • don't have the project-associated stickiness --> default first time it's run as a forecast, select the base 6 (the deferred); if you've already run the forecast, start scenario selections off with what the user had selected
      • L: Forecast fails if all members are disabled --> could happen if user adds a custom ensemble member, disables all other members then tries re-running the forecast
      • L: Maintain custom ensemble members across forecasts, within that specific year only
      • G: After you build a scenario, if you delete a predictor the whole site crashes. Even after refresh, whenever going back to that scenario's step 2, the site immediately crashes
    • Bugs NWCC shared
      • L: Headless forecasting runs, may return missing wcis data --> however the forecasters know that later in the day that data should exist --> they need to be able to rerun the process, without adding in custom data
      • L: if there's any disabled station, then the training fails
        • She shared an example project - Two Medicine R bl South Fork on 2/29, subject line: FMT reports
      • G/J: Predictor list glitchiness --> badges not appearing in timely fashion, being wrong, and stations without variability (even disabled) cause training to fail…
        1. Julie shared an example project for this - where if a station has no variability then the predictor list gets REALLY weird - additional stations are added to the list, the disabled status gets all wonky, the badges are weird and wrong; Emailed on 1/11/24, subject line: Bug to report, proj: Libby Reservoir Inflow
        2. Lexi shared an additional example proj with Mikka for this on 2/29, with subject line: FMT reports, proj: Two Medicine R bl South Fork
      • G: Add 'ensemble' metrics from training tab --> to the forecast table
        1. If any of the following members are disabled, display the ensemble metrics in a light grey (so user can still see them, but user knows they are 'invalid' based on chosen selections)
          1. PCANN, PCANN_BC, PCMCQRNN, PCQR, PCR, PCR_BC, PCRF, PCRF_BC, PCSVM, PCSVM_BC
      • G: ability to change the scenario status to 'Operational' for all children via the parent --> even more ideal is change it via the project --> most ideal is changing it across projects, but only for specific scenarios
        • You can change the individual scenario status by clicking here, in the individual forecast
        • You can change all the scenarios status listed under the project, in the table (ONLY applies change to scenarios displayed in table, not all scenarios in this project), by clicking here, in the project header
        • You can change all the scenarios status listed under the project group, in the table (ONLY applies change to scenarios displayed in table, not all scenarios in all the projects from group), by clicking here, in the project group header

      • Kevin: Start out project selection list box with the currently selected project at the top --> so they don't have to scroll to find their last place
      • L: if you disable an ensemble member, the plots for the remaining members should re-scale --> scale of the plots should only be based on the enabled members --> the ensemble plot appears to be rescaling, but all need to rescale



Deployment V 0.3.34 on 2/25/24

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • N/A
  • Persephone:
    • MVP Tasks
      • Run scripts to fix local file storage in place for existing scenarios flagged as having invalid local files
        • Used the training and forecast archives as our source of truth
        • Moving forward, if one of your forecasts used the wrong build_files, this alert will be updated - so you will know.
        • **There were only 9 forecasts (2 scenarios for Lexi, and 5 for Gus) that we were unable to fix the local files for. These can be found by clicking the alert bell icon in the top right.
          • For these scenarios, you should re-build, re-train and re-forecast; but then they should be good moving forward. This alert and pop-up will only show you your own invalid forecasts --> it only displays them for all users when Dave or I are logged in.

      • Update production and dev to be pointing to their own local file storage, not the same place
      • Visual feedback if there is a possible false-positive forecast result --> this alert for the user if they encounter this bug again, along with the details, will hopefully help in tracking down and addressing the root issue
        • Add ability to download what's stored locally for a scenario when you go to download forecast archive
        • After every future forecast, a script will run to ensure the right build_files are used. If a scenario uses the wrong build_files, the forecast will be marked as failed, this will be added to the list as mentioned above, and there will be an alert for the individual scenario, like below:

      • G: Stickiness of missing forecast wcis values --> a custom forecast wcis value from 01-01 should NOT automatically be used for any other date; and if the data file returns data that it didn't previously, defer to this value always
        • Routed models with different forecast year selected --> somehow we need to keep these in-sync across the routed models; otherwise the upstream model could have 2021 selected, when the downstream model has 2023 selected --> which will cause the downstream model to pull and use the 2021 results of the upstream… how do we want to try to handle/prevent this?
        • When you run a routed model, if it has 'upstream' predictors --> grab the appropriate forecast year from those upstream (the year we grab NEEDS TO MATCH the one selected)
      • L: There appears to be a new bug when adding a custom forecast value for a scenario. After a custom forecast value is entered, Persephone provides the “This wasn’t in the forecast” screen and we can no longer access the forecast tab for that scenario until it is reforecast from the cross-project view. It seems that we can no longer use the add custom forecast functionality at all.
    • Bugs NWCC shared
      • L: Ability to edit custom ensemble members after you've added them
      • L: Ability to delete custom ensemble members after you've added them
      • G: ability to add a routed predictor from the same project (scenario would have same pub date, but might have diff target period)
      • G: is it possible to run forecasts via cross-project view, without changing the 'forecastYearSelected'? (otherwise individual child can have diff yr selection from parent… makes things weird)
      • L: if you disable an ensemble member, the plots for the remaining members should re-scale --> scale of the plots should only be based on the enabled members
      • L: Unable to select a previous year and run the forecast for a routed model
      • Ability to edit a project name
      • G: computation run dates are UTC dates, not PST dates - so, if Gus runs forecast late at night, it shows as having been run the following day
      • G: can we display the time next to each run date in build > step 1
      • Snow network data retrieval performance --> build is slow, but the forecast is fast…
    • DB/Other/To Ponder
      • review on_delete rules in db, to ensure it's all set up the way we would want
        • if station group/model form is edited or deleted, what do we want to happen with scenarios referencing those?



Deployment V 0.3.31 on 12/18/23

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • N/A
  • Persephone:
    • Bugs NWCC shared
      • Gus:
        • forecast failure, possibly intermittent - originally only an issue for child pub date 12-21; but now an issue for all children:
          • "more columns than column names"
        • Re-forecast a scenario after project upload --> if we don't already have them, get the appropriate files (build_files) from the archive
      • Lexi:
        • FMT report exceedance is mixed up -->
          • change 'max' to be '10%' value, and make sure this is actually the 10% value (currently is the 90% value)
          • change 'min' to be '90%' value, and make sure this is actually the 90% value (currently is the 10% value)



Deployment V 0.3.30 on 12/13/23

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • Ability to get semi-monthly WCIS data for adjusted streamflow stations
      • *The start/date associated with the SRVO predictor should be respected (currently appears to just be using the scenario's target period)
        • Data gathered should be done like the target period --> accounting for semi-monthly values, if necessary (so, user could use just 1 month, if they select a start date of 04-01, and an end date of 04-16)
      • Using all semi-monthly data and just aggregating it works fine if there's a semi-monthly value in the target period; make sure that if the semi-monthly data is not COMPLETE that it doesn't cause it to error or break
        • if start/end date is in the middle of a month, all of the data requested for this could be semi-monthly calls
          • could use monthly data, then tack on semi-monthly as appropriate - whichever is easiest
        • if start and end are NOT in the middle of a month, can just fetch for all monthly data
      • *Can set up a meeting with Beau if we want to discuss the semi-monthly data being at a separate endpoint for us to access
    • Create service that gets and returns the 30-yr median of streamflow for a forecast point (to be used in FMT report) --> awdb/data/SRVO/MONTHLY/PERIODMED
    • Target period From/To selection should be inclusive; the options for both 'From' and 'To' will be in the format of: Jan F, Jan L, Feb F, Feb L, Mar F, Mar L --> however, when Mikka makes service calls these will go to the service as: 01-01, 01-16, 02-01, 02-16, 03-01, 03-16
      • The Jan F value is the first two weeks of Jan values --> It's not the value accumulated up to Jan 1st.
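
      A minimal sketch of the selection-to-service mapping described above ('F' = first half --> day 01, 'L' = last half --> day 16); the function name is illustrative:

      ```python
      MONTHS = {"Jan": "01", "Feb": "02", "Mar": "03", "Apr": "04", "May": "05", "Jun": "06",
                "Jul": "07", "Aug": "08", "Sep": "09", "Oct": "10", "Nov": "11", "Dec": "12"}

      def selection_to_service_date(selection: str) -> str:
          """Convert a UI selection like 'Jan F' or 'Jan L' to the MM-DD string sent
          to the service ('F' = first half -> day 01, 'L' = last half -> day 16)."""
          month, half = selection.split()
          return f"{MONTHS[month]}-{'01' if half == 'F' else '16'}"

      print(selection_to_service_date("Jan F"))   # 01-01
      print(selection_to_service_date("Mar L"))   # 03-16
      ```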
    • Forecasting:
      • Ability to query for external forecast centers data
  • Forecast Publishing:
    • By default this now automatically rounds all the ensemble values using the methodology Gus shared
      • You can pass the key/value "userounding: false" if you want to turn this off
  • Persephone:
    • Bugs NWCC shared
      • Gus:
        • make target period From/To selection be inclusive:
          • change the verbiage of the selections to be Jan F, Jan L (F for first half, L for second half)… when I send this to the service, I will send it as 01-01, and 01-16
            • The Jan F value is the first two weeks of Jan values --> It's not the value accumulated up to Jan 1st.
        • Ability to upload projects with all service inputs/results (so the user they are sharing projects with doesn't have to rerun everything; they can just see the inputs/results)
        • Ability to forecast a routed model
        • SRVO as predictor - make it possible to use a relative date for the start date
        • Absolute date checkbox for 'date' of WTEQ predictors is now visible again
        • Ability to export multiple projects at once
          • You can do this via the 'Models' side of things (where you downloaded projects before), except now you can multi-select multiple projects at once
          • You can also download groups of projects, via the 'Cross-Projects' view, the down arrow to the right of the project groups selection will download the currently selected project group
        • Ability to upload multiple projects at once


          • You can do this via the 'Models' side of things (where you uploaded projects before)
        • FMT report now has the '30-yr median' value, if the station has that value available
        • Now that we have semi-monthly date selections available for forecast stations, we have safety checks in place, so if the forecast station doesn't have semi-monthly data available, you aren't able to select semi-monthly selections - and there are tags next to the forecast station name (if in step 2, and an SRVO pred), and next to the Target Period (if in step 1) saying if it has just Monthly or both Monthly and Semi-monthly data


Deployment V 0.3.29 on 12/1/23

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • Increase time CSIP waits for M4
      • currently timing out if M4 takes more than 30 mins
    • Ability to create a routed model
      • For mean - use 'all' members --> except, don't use the ones with a box-cox version
        • should be 6 methods that get averaged
      • if one routed predictor or predictand has different years, or missing years --> that's okay, just return null (as we are currently with regular predictors)
    • ability to use the following elements for predictor: SRVO
  • Forecast Publishing:
    • N/A
  • Persephone:
    • Bugs NWCC shared:
      • Lexi:
        • Add routed predictor prompt:
          • Only display models that have the SAME publication date
          • Add target period & parent name (if any) to model dropdown --> so user can narrow it down
      • Gus:
        • Cross-Project view:
          • ability to 'generate an FMT report' from the forecasts filtered down to in table
            • order of projects in group, and order of groups in bast file matters!!
          • ability to run all forecasts in table via cross-project view
          • Ability to download multiple projects at once (can download multiple via the regular project download button, or can download entire project groups via newly added button by project group selection on cross projects side of things)
        • SRVO as predictor:
          • the start/end selections should be available, with option to do absolute or relative; however, SRVO doesn't have daily data, so only provide monthly relative selections
        • Forecast failure if there are custom wcis values
        • Inappropriately labeled forecast failure ("Error when calculating the ensemble of enabled members for prediction of forecast…")



Deployment V 0.3.26 on 10/5/23

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • N/A
  • Persephone:
    • Bugs NWCC shared:
      • Lexi:
        • Build fake 'failures' - builds should no longer be falsely reporting failure
        • Disabled years should now remain sticky
          • We have some thoughts to try and tweak how this is implemented (right now, after disabling a year you have to wait a beat before you can disable/enable any additional year), but we believe we've fixed the bug, so wanted to get that fix out
      • Gus:
        • You can upload a bast file and have the project groups automatically created for you
          • **Currently this is not necessarily taking order into account, and we aren't tracking the groups of project groups yet… so there will be additional changes/features as discussed with Gus on 10/4


Deployment V 0.3.25 on 9/15/23

New Features/Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • there is now a query param, 'scenario_status'
      • if you don't pass this, you'll get all forecasts, regardless of the forecast's scenario_status
        • you can pass 'DEV' or 'OPERATIONAL' to get just the forecasts with those scenario_status
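
      A minimal, hedged example of calling the forecast publish endpoint (URL taken from the 0.3.23 notes further down) with the new 'scenario_status' parameter; authentication and any other required parameters are omitted here:

      ```python
      import requests

      PUBLISH_URL = "https://persephone.erams.com/er2_m4/api/v1/forecast/publish"

      # Omit 'scenario_status' to get all forecasts regardless of status;
      # pass 'DEV' or 'OPERATIONAL' to filter.
      resp = requests.get(PUBLISH_URL, params={"scenario_status": "OPERATIONAL"})
      resp.raise_for_status()
      forecasts = resp.json()
      ```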
  • Persephone:
    • Bugs NWCC shared:
      • Lexi:
        • set default water year in cross project view to be current water year
        • sort project list by huc, for upstream station list
      • Gus:
        • Once we have the flag for the model (DEV, OPERATIONAL, RETIRED)… make this queryable in the forecast publish API --> and limit in the UI, so that a user can only have 1 model (with same forecast point, target period & pub date) with 'OPERATIONAL' status
          • Can have lots of 'DEV' models; but only one 'OPERATIONAL' at a time (per same forecast point, target period & pub date)
            • if user already has a model with that same forecast point, target period & pub date that is 'OPERATIONAL' alert them that continuing to promote this other one will result in demoting the first model back down to 'DEV'
            • *Only have forecast publish API query for and return 'OPERATIONAL' models
            • Only 'OPERATIONAL' models will even have option to set a forecast publish status in UI
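
            A minimal sketch of the promotion/demotion rule above; the model fields and the confirm_demote callback are illustrative assumptions, not the actual UI code:

            ```python
            def promote_to_operational(models: list, model_id, confirm_demote) -> bool:
                """Promote one model to 'OPERATIONAL'; if another model with the same
                (forecast point, target period, pub date) is already 'OPERATIONAL',
                ask the user to confirm demoting it back to 'DEV' first."""
                target = next(m for m in models if m["id"] == model_id)
                key = (target["forecast_point"], target["target_period"], target["pub_date"])
                existing = [m for m in models
                            if m["id"] != model_id
                            and m["scenario_status"] == "OPERATIONAL"
                            and (m["forecast_point"], m["target_period"], m["pub_date"]) == key]
                if existing and not confirm_demote(existing[0]):
                    return False
                for m in existing:
                    m["scenario_status"] = "DEV"          # demote the previous OPERATIONAL model
                target["scenario_status"] = "OPERATIONAL"
                return True
            ```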
        • change 'upstream'/'downstream' terminology, as this isn't always accurate (could be a boundary as a neighbor) --> make this terminology: From/To
        • Cross-Project view:
          • ability to save the filter selections and be able to go back to them quickly (ex. a group of 50 stations - they wouldn't want to have to re-select these every time)
          • add ability to filter by:
            • huc
            • state
        • Only allow scenario status change with build, step 1 (not in cross projects view), and just have each forecast grab the scenario status at that point in time and track this with the forecast (so forecast across different years, yet within same scenario can have different scenario statuses)
        • ability to create/view groups of projects


Deployment V 0.3.24 on 8/17/23

Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • N/A
  • Persephone:
    • Bugs NWCC shared:
      • Lexi:
        • option to download the evolution plot as a CSV
        • Automatic, nightly stations update script now updates the station name in the db, so it will appear in the create a project list
      • Gus:
        • settings panel:
          • initially displayed 's' & 'p' & 'sntl' --> don't display 's' & 'p'
          • SNOTEL --> no more extra text in '()'
        • missing forecast point: SF Humboldt ab 10 mile ck has been added (name was missing)
        • forecast - all ensemble members should now maintain stickiness
        • step 2 alert that there's an inactive predictor in the list should no longer inappropriately appear
        • added ex of expected format for pub date entry, with validation so the user can't proceed if they've entered the wrong format
        • hid Mikka and Dave's model forms from displaying in the dropdown list
        • the button 'click to show pinned stations/click to show all stations' should now be working right
      • Julie:
        • fixed: already built a model, tried to add a site, when she tried to add it, said 'no feature for popup'
    • Intermittent Bugs:
      • changing year in template scenario on forecast side of things --> changing year doesn't update the graph…. Gus ex: Rio Grande Nr Del norte > semimonthlytest; turns out this was an issue when you added an individual scenario to a group of scenarios
      • Did many, various things to improve speed that should help the below bugs:
        • Login/creating a new project
          • lots of variability with how long it takes to load, can be minutes before any projects or models appear --> it's a blank state - can't see or select any projects
        • upon creating a new project --> occasionally will get the 'not in the forecast' error immediately after creating a new project


New Features:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Forecast Publishing:
    • Removed from JSON: final, published, forecastSource
    • Added to JSON:
      • status: 'PRELIMINARY', 'FINAL', or 'ORIGINAL'
      • unique model id and configuration vars (including list of station triplets used) used for the model that resulted in this forecast ensemble --> so there's an audit trail about what created this forecast result
        • **Make these optionally returned, with the argument of 'verbose'; if no 'verbose' argument passed, we'll assume 'verbose: false'
    • make forecastPeriod format follow: "forecastPeriod": ["04-01", "07-31"]
    • add year to pub_date that they pass in as an input to publish forecasting
      • **they will send calendar year, NOT water year-specific --> we'll need to account for this
    • Change 'forecast_point' to 'fcst_id', passed as a comma-delimited list (rather than a repeated/appended method arg); you'd split the whole string on commas to create the list
    • Change so nothing is wildcarded or pattern matched; we will only use explicit parameter values
    • There are different limits on URL lengths with query strings and it's a configuration thing. We can be safe and implement a POST with a JSON content type, containing the same parameter names as keys. The forecast point ids would be stored as string arrays.
      • Beau response: would love to see a GET method that is mapped directly from the POST method if at all possible, just to make testing via the browser super easy - but if that's not simple to do no worries, that's why someone invented tools like postman.
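
      A minimal sketch of what such a POST body could look like, using the parameter names discussed above; the ids, the date value, and the exact pub_date format are illustrative assumptions:

      ```python
      import json

      payload = {
          "fcst_id": ["06620000", "09085000"],      # forecast point ids as a string array (example ids)
          "forecastPeriod": ["04-01", "07-31"],
          "pub_date": "2024-01-01",                 # calendar year included, per the note above (format assumed)
          "verbose": False,                         # omit or set False for the non-verbose response
      }
      print(json.dumps(payload, indent=2))
      ```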
  • Persephone
    • Forecast:
      • PARTIALLY DONE: CSIP service that gets SRVO element data for a predictor currently returns all 'N/A's; once the CSIP service is set up to handle forecasting using upstream stations, we'll make sure we pass the right information and get it all working start to finish (as of the 0.3.24 release, the UI portion of it works, but since the upstream stations return all 'N/A's they're excluded from predictors, so it's as if there isn't an upstream station)
        • Ability to create a routed model *NWCC is going to decide what approach we should take
          • includes 2 things:
            • be able to include streamflow station as a predictor
              • need access to training best estimates; might be using training forecast timeseries as input
            • in operational execution of a forecast, order matters
      • Map:
        • add ability to see the 'upstream' station and its watershed boundary on the map


Deployment V 0.3.23 on 7/7/23 - (highlighted in green were bug/features deployed in mini-deployment on 7/18/23)

Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone:
    • Bugs NWCC shared:
      • Lexi:
        • via icon in list - when it's in the process of build/train/forecasting it says you can click to cancel, but it doesn't always cancel when you click this
        • Add notes field to forecasts side of things - add as additional tab in StepPredict, 'Notes' after 'Forecast Stats' tab --> essentially just copy and paste the notes section from build step 1, and put it here
        • Project Settings:
          • The settings are associated with the individual project (so one project could have different map settings than another)
          • When you create a new project, it will inherit the current project's settings as its initial, default settings
        • Complete Years Count:
          • when you enable/disable a year in step 3, the complete years count should recalculate appropriately
        • Log File:
          • The project log file should now be displaying captured actions
        • Intermittent:
          • Change in pub date should now update all relative dates in step 2, even the last predictor
      • Gus:
        • via icon in list - it says 'click to train', when done building, however, when you click this, it doesn't actually do the service call
        • Inactive Predictor:
          • If there's an inactive predictor, alert user in many places; however, let them build/train/forecast still
        • Bulk Scenario Creation:
          • If you enter multiple target periods when creating bulk scenarios, the parent names should have the target period they're associated with, not 'undefined-undefined'
      • Julie:
        • N/A


New Features:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone
    • Forecast:
      • Ability to view ensemble values and publish status of all children scenarios, and change the individual status of some or all, via the parent scenario on the 'forecast' side of things
      • REST endpoint HTTP GET with query parameters for forecast publishing - https://persephone.erams.com/er2_m4/api/v1/forecast/publish --> an example query-string form follows this list
      • Ability to upload final forecast from Persephone to NWCC db
        • Ability to set 'Ready for Preliminary' or 'Ready for Final' status for a forecast
        • Ability to download a JSON (matching outline Beau shared) of the forecast for an individual scenario
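
  For quick browser testing of the publish GET above, the query-string form might look like the following; the 'fcst_id' name is taken from the current task list and the values are placeholders, so this is illustrative rather than the endpoint's documented interface:

      from urllib.parse import urlencode

      url = "https://persephone.erams.com/er2_m4/api/v1/forecast/publish"
      query = urlencode({"fcst_id": "ABC123,DEF456"})
      print(f"{url}?{query}")  # paste the printed URL into a browser for a quick test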



Deployment V 0.3.22 on 6/8/23 - (highlighted in green were bugs found/changes, & blue were during bug demo mtg, requested specifically by NWCC team)

Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone:
    • Bugs NWCC shared:
      • Lexi:
        • Forecast failure with message: Error during Forecast Prediction service call: ERROR - scenario Lexi dupe, id 2588: Failed to download returned data files after predicting the forecast. Exception: Service failed: For input string: "Inf"
          • Solution discussed in 6/1 meeting:
            • If forecast returns '0' or 'inf' for ensemble member values:
              • display this in the ui still (with these 'inappropriate' values):
                • display as disabled, prevent from enabling
        • project: Gulkana R at Sourdough - predictor: Fielding Lake
          • 2 stations at same spot on map, one sntl, one snow - not showing as two selectable sites via map
            • when there's a cluster of stations on map - you can now toggle between station pop-ups by clicking forward arrow icon in map pop-up
        • calculation of a partial-month relative date (a worked sketch follows this Fixed Bugs list):
          • if the predictor date value is a month relativity, the final calculated relative date should be the 1st or 16th of a month:
            • if the pub date is not the 1st or 16th of a month but the user entered a partial-month relativity:
              • then go to the most recent, previous semi-month date:
                • so July 10th would essentially become July 1st
              • then apply the relative month calculation to this date:
                • so -.5 months prior to this would be June 16th
        • after importing a project, immediately display this in the project list (Lexi had to refresh the page before she could see the import in the project list)
        • Evolution plot is not showing if you just forecast the batch - have to click into all children, to be able to see all the plots on the evolution
      • Gus:
        • Add single date to batch:
          • not retaining genetic algorithm settings with the newly created scenario
        • Parent scenario:
          • should be able to turn on/off the genetic algorithm in this parent-level
        • Project creation - should only see active forecast stations as choices when creating a new project
        • Allow user to jump to a specific step and not have to 'Next' all the way through them - IF all steps before the one they're trying to go to are valid and complete
        • If you name scenarios the same - it does actually allow you to save this duplicated name --> change warning message so it accurately says "Warning: Another model already has this name" instead of "Model can't have duplicate name."
        • in grey box on hover of evolution plot:
          • add full seasonal volume to tooltip box
        • have control of default networks displayed on refresh/project change, in settings panel
          • the defaults selected here would be default stations displayed in map for that project
      • Julie:
        • N/A
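
  A worked sketch (Python) of the partial-month relative date calculation described under Lexi's items above, assuming relativities come in half-month steps on the 1st/16th grid (the notes only give the -0.5 example), so this is illustrative rather than the implemented logic:

      from datetime import date

      def snap_to_semi_month(d: date) -> date:
          # Most recent previous semi-month date: the 1st or the 16th
          return d.replace(day=16) if d.day >= 16 else d.replace(day=1)

      def apply_month_relativity(d: date, months: float) -> date:
          # Assumption: relativities are multiples of 0.5 months, each step moving
          # one semi-month (1st <-> 16th) on the snapped grid
          steps = round(months * 2)  # e.g. -0.5 months -> one semi-month step back
          d = snap_to_semi_month(d)
          for _ in range(abs(steps)):
              if steps < 0:  # move back one semi-month
                  if d.day == 16:
                      d = d.replace(day=1)
                  else:
                      d = date(d.year - 1, 12, 16) if d.month == 1 else date(d.year, d.month - 1, 16)
              else:  # move forward one semi-month
                  if d.day == 1:
                      d = d.replace(day=16)
                  else:
                      d = date(d.year + 1, 1, 1) if d.month == 12 else date(d.year, d.month + 1, 1)
          return d

      # Example from the note: a July 10th pub date with a -0.5 month relativity
      print(apply_month_relativity(date(2023, 7, 10), -0.5))  # 2023-06-16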


New Features:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone
    • Project:
      • Sort project list dropdowns (when adding/selecting from) by HUC 8; then within that, alphabetically
    • Stations & Watershed Boundary updates:
      • We have manually applied the scripts to update the stations and watershed boundaries on PROD
    • Forecast:
      • Ability to add additional, custom member values & name:
        • ability to see these in the table, and enable/disable them from being included in the ensemble
    • DB/Other/To-Ponder:
      • db backup - we keep the last 7 days of db backups in case we need to revert back
      • stations update - automatically happens via a script overnight
      • watershed boundary update - automatically happens via a script overnight


Deployment V 0.3.21 on 6/2/23 - (highlighted in green were bugs found/changes, & blue were during bug demo mtg, requested specifically by NWCC team)

Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone:
    • New User:
      • ensure site works when a new user logs in, with no project or scenarios
    • Forecasting:
      • not all ensemble members should be enabled upon initial return of a forecast
        • only the members with a box cox should be automatically enabled (see the sketch after this Fixed Bugs list):
          • PCANN-BC, PCMCQRNN, PCQR, PCR-BC, PCRF-BC & PCSVM-BC
      • There is no longer a confirmation prompt when you disable an ensemble member *Unless, it was pruned by M4, then there is still a confirmation prompt
    • Bugs NWCC shared:
      • Lexi:
        • If you copy a scenario with a snow course station as a predictor, make the pub date one where the snow course wouldn't have data (June), then run the build, it assumes all predictors have no variability even though it's ONLY the snow station with no variability (and re-building doesn't help)
          • Ex project: Badger Ck Nr Browning
        • Intermittently, after building all predictors show as having no variability (no SNOW stations included); but if you re-enable all then build again, it’s fine…
        • If you build a scenario, then delete or add a predictor the alert to ‘Re-build’ doesn’t appear and you’re allowed to go to step 3; which you shouldn’t be able to do
        • Intermittently, after building in step 2, the alert that ‘something in the predictor list has changed so you need to rebuild’ appears, but after trying to rebuild multiple times it never goes away…. Except at one point it finally did --> intermittently stuck in an endless loop
      • Gus:
        • increase training timeout to be 2 hours
        • Log file is displaying information from inappropriate project --> pulling the wrong project id
          • same no matter what project you're looking at
      • Julie:
        • copying a scenario where the forecast failed made it so that when she went to step 2 of the copied scenario, it just said 'sorry :(' and didn't display anything else it should have for step 2
        • If scenario has 50 predictors, and you build, the edit of the txt file after the build does unwanted things --> which can cause the txt file to be empty, which means it can't train
          • Ex. Yakima R Nr Parker
      • General:
        • If a scenario trained, but after the training failed or finished there was an error in our python code, then the UI still displays it as training, and you can't 'Cancel' the training (likely because the suid no longer exists); however, we still want the user to be able to set the status to 'Cancel' to try again
        • if there's an error to be caught in the python code during training, catch the error, change status to 'FAILED' and log what the error was --> so the user knows that it failed in some form, and can let us know the error that caused it
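
  A small sketch (Python) of the default-enable rule above; the member list comes from the note, while the data shape and function name are assumptions:

      # Members that start enabled on the initial forecast return, per the note above;
      # everything else starts disabled (the user can still enable members manually).
      DEFAULT_ENABLED = {"PCANN-BC", "PCMCQRNN", "PCQR", "PCR-BC", "PCRF-BC", "PCSVM-BC"}

      def initial_enabled_flags(member_names):
          # member_names: ensemble member names returned by the forecast
          return {name: name in DEFAULT_ENABLED for name in member_names}

      print(initial_enabled_flags(["PCR-BC", "PCR", "PCQR", "PCANN"]))
      # {'PCR-BC': True, 'PCR': False, 'PCQR': True, 'PCANN': False}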


New Features:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone
    • Scenario List:
      • Add some sort of flag/badge/label to the checkmarks in the scenario list to differentiate the checkmark between Build/Train/Forecast - so you can easily tell which part of the process each scenario is at



Deployment V 0.3.20 on 5/18/23 - (highlighted in green were bugs found/changes, & blue were during bug demo mtg, requested specifically by NWCC team)

Fixed Bugs:

  • M4 R:
    • Build (aka 'Training', step 4 in Persephone):
      • there is still a consistent, although minimal, failure rate for trainings
  • Services:
    • Forecast:
      • Failure if training was run as non-ga
  • Persephone:
    • Global Log:
      • There is now a 'Global' log option if you open the log menu --> this is where you will be able to see what changes were made to the db at-large --> so, if the stations update script runs, and the stations are updated overnight, you will see which stations were added/changed, etc.
        • *We haven't yet tried running these scripts on production, as we wanted to wait until the 2 production sites (sandbox and live) were up and running - we plan to work on this next week
    • Project:
      • project creation --> the station selection list is showing all stations, including SNOW stations; it should just display forecast points (which include element A)
    • Step 1:
      • Pub date change:
        • after duplicating a scenario (single duplication) and trying to edit the pub date, sometimes you can't click 'OK'
        • Sometimes after editing the pub date, Gus would be able to click 'OK' but when he went to step 2, the last couple of stations in the list wouldn't have the updated pub date applied
    • Step 3:
      • Historical years excluded: when you batch create, if there are years excluded, apply these to all children; however if you edit a child's years excluded, don't apply this to all children
      • When years are removed, re-check each station's remaining data for variability --> if the one year you removed is what gave a station variability, then we need to flag that station as having no variability (a sketch of this check follows this Fixed Bugs list)
    • Training:
      • extended the timeout period for trainings to be 60 mins (was at 30 mins, so if a training took longer than that then it appeared as failed, with no error message in UI)
    • Forecast:
      • If the forecast or wcis fail - still display option to download the archive file(s) - of the failed run(s)
      • if you re-forecast after a failed forecast, it gets stuck in an endless loop and never finishes forecasting and it shouldn't
      • a '0' WCIS value when forecasting is a valid value --> however forecasting doesn’t run when one is returned
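
  A sketch (Python) of the variability re-check described under Step 3 above, assuming "variability" means more than one distinct non-missing value across the remaining (non-excluded) years, which the notes imply but don't spell out:

      def has_variability(values_by_year, excluded_years):
          # values_by_year: {year: value or None} for one predictor station
          remaining = [v for y, v in values_by_year.items()
                       if y not in excluded_years and v is not None]
          return len(set(remaining)) > 1

      # Removing the one year that differed leaves the station with no variability
      station = {2019: 10.0, 2020: 10.0, 2021: 12.5, 2022: 10.0}
      print(has_variability(station, excluded_years=set()))   # True
      print(has_variability(station, excluded_years={2021}))  # False -> flag as no variability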


New Features:

  • M4 R:
    • Build (aka 'Training', step 4 in Persephone):
      • add the box-cox outputs to the return --> so that then Olaf can send these back to the UI to have the training display all the results
    • Forecasting:
      • add the non-box-cox outputs to the forecast return --> so that then Olaf can send these back to the UI to have the forecast display all the members
  • Services:
    • N/A
  • Persephone:
    • DB/Other/To-Ponder:
      • set up separate db for 'sandbox' prod site
      • URL Lexi shared for being able to update the watershed boundaries using a script worked - this is implemented



Deployment V 0.3.19 on 4/20/23 - (highlighted in green were bugs found/changes requested specifically by NWCC team)

Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone:
    • Step 2/3:
      • Have data variability checks when years get removed --> if the one year that gave a station variability gets removed, then that station no longer has variability, so make it disabled

New Features:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone:
    • Step 1:
      • If you change the pub date, and the predictor list already had dates added, you will be prompted to confirm your change, and then the dates will be re-calculated according to the newly selected pub date
    • Step 3:
      • You can now remove individual years from the historical data
    • Forecasting:
      • Forecasts now run 'headless' --> so, if you have a scenario with a pub date of 04/28, you build/train it now --> then when you login on 04/28, the scenario will already have been forecasted, so you can view the results
      • There are now box plots showing the forecast ensemble member results compared to each other
      • If the WCIS part of the forecasting call returns no data, you can now add your own custom value
        • If you then click the 'play' button the forecast will run with the values you see in the Wcis Data tab (including the custom value you entered)
        • if you add a custom value, copy that scenario, and go through to forecasting, you will see that the custom value you previously set for that predictor is what's set as the default custom value moving forward (if that predictor continues to return a null value)



Deployment V 0.3.18 on 4/7/23 - (highlighted in green were bugs found/changes requested specifically by NWCC team)

Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • Build:
      • 16th of the month should be the cutover when looking up semi-monthly data for a station (so a 04-15 pub date should fall back to 04-01 data; and 04-16 is when it would start to get the mid-month value) - see the sketch after this Fixed Bugs list
  • Persephone:
    • UI Edits
      • Paper Cut Fixes
      • Sizing of south panel contents - now, there should always be a scrollbar and way to see all contents
    • Step 1:
      • Target period 'From' selection is working again
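
  A sketch (Python) of the semi-monthly cutover described under Build above; the function name is a placeholder, and only the 16th-of-month rule and the 04-15/04-16 examples come from the note:

      from datetime import date

      def semi_monthly_lookup_date(pub_date: date) -> date:
          # The 16th is the cutover: the 1st-15th fall back to the 01 value,
          # the 16th onward picks up the mid-month (16) value
          return pub_date.replace(day=16) if pub_date.day >= 16 else pub_date.replace(day=1)

      print(semi_monthly_lookup_date(date(2023, 4, 15)))  # 2023-04-01
      print(semi_monthly_lookup_date(date(2023, 4, 16)))  # 2023-04-16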

New Features:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone:
    • Project Related:
      • Ability to add project without it having a watershed boundary
      • Ability to download entire project (and its models) as a JSON file **only downloading the settings, not any of the run results
      • Ability to upload a previously downloaded project
      • ability to submit bugs/changes via a button that links to our ALM tracker
      • ability to view current deployment details & current task priorities via a button that links to ALM page
    • Additional UI edits



Deployment V 0.3.17 on 3/28/23 - (highlighted in green were bugs found/changes requested specifically by NWCC team)

Fixed Bugs:

  • M4 R:
    • N/A
  • Services:
    • Build:
      • Allow the addition of SNOW stations as predictors
        • Regardless of the network, we fall back to a semimonthly lookup if there is no daily data. If there is semimonthly data, we map the latest available value (1st or 16th) to the daily value.
  • Persephone:
    • Step 1:
      • Prevent user from making name of scenario be a duplicate of another scenario name
    • Step 2:
      • if no variability in data --> instead of hardcoding as disabled; start it as disabled; display tooltip saying why it started as disabled; if user tries to enable - display confirmation prompt that they want to continue
      • if PREC element, make the start value default to an absolute date of '10-01'
      • if there are missing-data badges, then you change years (so there isn't missing data) and re-run, the badges still appear… need to fix this
      • you can zoom to multiple different stations (via the list’s overflow menu) in a row now and have the map centered on that station

New Features:

  • M4 R:
    • N/A
  • Services:
    • N/A
  • Persephone:
    • Parent scenario:
      • display final accumulated value across entire evolution plot
      • have this year selection be for the water year & not calendar year
      • take out year selection from individual children scenarios (can only change years for a batch via parent)
    • Step 2:
      • UpSet plot of missing data
    • Step 4:
      • Training:
      • Ensure appropriate information is being passed in when ga vs. non-ga
      • If you changed training settings since you last trained, an alert now pops up telling you to retrain
    • Forecasting:
      • If a member was pruned by M4, it should now appropriately display that; yet still allow user to enable it, if desired
    • Project Related:
      • Ability to change map settings context
      • /settings/ url --> isn't redirecting to the login page (like /models or /docs does)… fix this
      • You are now started on the project that you left off on when you last logged out or before you refreshed the page
    • Some of the paper cut UI edits


Deployment V 0.3.16 on 2/21/23 - (highlighted in green were bugs found/changes requested specifically by NWCC team)

  • Service-related:
    • Build service-related:
      • Forecast station data range:
        • Only display the historical year selection with year range the forecast station has data for
        • If a forecast station has missing data for a year, still return it & display value as ‘null’
      • If it is an Oct/Nov/Dec pub date, the data is being pulled from the appropriate year
      • PREC start date is now being accounted for in the build results.
    • Training service-related:
      • The training now doesn’t fail if non-ga
    • Forecast service-related:
      • Service appropriately handles if max modes is > 1
  • Persephone:
    • Model Scenario List:
      • Filtering by model form works again
    • Building:
      • If after building, a predictor has no variability of data across all the years; then the station is hardcoded as disabled
    • Training:
      • Non-GA – related (these are not yet visible since the ga selection is currently disabled for the interim):
        • If non-ga selected, other settings won’t display (max modes, min vars, pop, num gens)
        • If non-ga selected, you can’t select more manual pc selections than you have enabled predictors (so if you only have 3 predictor stations, you can’t select manual pc selection 4)
      • The training download now just downloads the training archive file
      • GA – related:
        • As the training runs, the ‘Output’ tab now displays the log in a more helpful table format
        • The ‘Chromosomes’ tab now displays not only which method is to be retained for each station, but also has a column outlining the ‘PCA Modes to Retain’
    • Forecasting:
      • The forecasting download now just downloads the forecast’s archive files (one for the wcis call, and one for the forecast) – for that year, for the most recent forecast
      • If the wcis call of a forecast run returns a missing data value, there is now an alert explaining what happened, and the cell in the table with missing data is highlighted red
      • Evolution plot in parent scenario now is displaying the appropriate target period


Deployment V 0.3.15 on 2/7/23 - (highlighted in green were bugs found/changes requested specifically by NWCC team)

  • Project Selection:
    • The project selection dropdown now has a scrollbar
  • Model Scenario List:
    • If a scenario has NO complete years of data, make sure this is appropriately displaying as 0, and not ‘n/a’
  • Parent Scenario:
    • When on the parent scenario, on the forecast side, if you change the year you’re forecasting the children scenarios for, then go to an individual child scenario, the year will be the newly selected year (before it was inappropriately showing as 2023 always)
    • If you are on the parent scenario, on the ‘Build’ side of things, in step 2 or 3, and switch to the ‘Forecast’ side of things, it doesn’t break
    • If you change the min num vars at parent-level, and save this change, it now appropriately applies to all the children
  • Training:
    • We implemented the changes Sean outlined in his email, as it relates to the Min # of Vars:
      • Automatically starts out as 2
      • Deter user from increasing this past 2, by making them confirm change via a confirmation prompt
      • If user has a selection > 2, display text below selection alerting them that this is not best practice
      • The smallest this selection can be is 2, and the most is the number of enabled predictors – 1 (min vars equal to the number of predictors experienced the intermittent failure more frequently); see the sketch at the end of this deployment's list
    • The archive file is now included in the downloadable zip file after a successful training
  • Forecasting:
    • Deter user from disabling an ensemble member after forecasting by making them confirm this via a confirmation prompt
    • Occasionally a forecast would never stop ‘Loading’, even though you could see the forecast had returned results; or only 1 member’s results would display --> this happened for stations that experienced more water accumulation, so we just had to adjust our db format for these values, so it could store the larger values
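
  A sketch (Python) of the Min # of Vars bounds described under Training above; the function names are placeholders, while the default of 2, the floor of 2, the ceiling of (enabled predictors - 1), and the confirmation above 2 come from the notes:

      def min_vars_bounds(num_enabled_predictors: int):
          # Default 2; floor 2; ceiling is enabled predictors minus 1, since min vars
          # equal to the number of predictors hit the intermittent failure more often
          default = 2
          lo, hi = 2, max(2, num_enabled_predictors - 1)
          return default, lo, hi

      def needs_confirmation(selection: int) -> bool:
          # Selections above 2 get a confirmation prompt and a "not best practice" note
          return selection > 2

      print(min_vars_bounds(6))     # (2, 2, 5)
      print(needs_confirmation(3))  # True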


Deployment V 0.3.14 on 1/27/23

  • Model Bulk Creation:
    • Selecting 12-31 as the end date for pub date works.
  • Forecasting:
    • Forecasting for a future date is prevented using the action buttons via the list (overflow menus, toolbar, or service-call icons)
    • Removed year selection from bulk duplication.
      • Coded in assumption that if the start date is bigger than the end date, then those pub dates will be for previous year (current water year)
      • These are sorted under the parent in water year order (so Oct/Nov/Dec at top of list); see the sketch at the end of this deployment's list
    • Forecast year is now only being displayed in scenario info IF on ‘Forecast’ side of things.
  • Step 2:
    • Interim change – prevent user from adding a ‘SNOW’ station, so they don’t run into above known service-related bug (until this service-edit is made)
    • Bug fix for: Race condition when applying model form to predictor list --> not all the entries appear in list (if it should be 6 total, sometimes only 4 or 5 will be in list)
    • GUS-FOUND BUG FIX FOR: Complete years not appropriately updating after build (if there are missing years)
    • Build data file & step 3 table is being sorted alphabetically, like in step 2 predictor list.
  • Log File:
    • We’ve updated/added to the log file – to better help the other forecasters share information with us when they encounter issues.
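
  A sketch (Python) of the water-year assumption and ordering described under Forecasting above, assuming the standard Oct-Sep water year; names and the exact data shape are placeholders:

      def calendar_year_for(month: int, water_year: int) -> int:
          # Water year N runs Oct (N-1) through Sep (N), so Oct/Nov/Dec pub dates
          # belong to the previous calendar year
          return water_year - 1 if month >= 10 else water_year

      def water_year_sort_key(month: int, day: int):
          # Sort so Oct/Nov/Dec sit at the top of the list under the parent
          return ((month - 10) % 12, day)

      # Example from the 0.3.13 notes: pub dates 11/1 - 2/1 for water year 2023
      dates = [(1, 1), (2, 1), (11, 1), (12, 1)]
      for m, d in sorted(dates, key=lambda md: water_year_sort_key(*md)):
          print(f"{m:02d}/{d:02d}/{calendar_year_for(m, 2023)}")
      # 11/01/2022, 12/01/2022, 01/01/2023, 02/01/2023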


Deployment V 0.3.13 on 1/23/23

  • General UI Edits:
    • Map:
      • Forecast point in map is now just the target icon, not a green dot.
      • When you delete the selected scenario, the map now extends across the entire display.
    • Step 2:
      • Predictor list now has an on-hover tooltip telling you the station’s triplet.
      • Predictor list is now sorted alphabetically.
    • Other:
      • Run buttons displaying appropriate message on-hover.
  • Bigger Bug Edits:
    • Model Scenario List:
      • Train all button (in toolbar at top) now appropriately queues models to train
      • Forecast all button (in parent scenario overflow menu) --> now, after using this, the selected scenario’s forecast results automatically appear (as opposed to a table with ‘Nan’s), without need for refreshing.
      • When you add to an existing batch, if user adds scenario for an existing pub date, it will have a unique, distinguishing number at the end of the name (to differentiate)
      • You can’t create 2 scenarios with the same name.
    • Building:
      • If you re-build a model that already has training/forecasting results; you will be alerted that if you continue, that the training/forecasting results will be reset.
    • Training:
      • Now, if the training fails, the message appropriately says failed, and not that the user cancelled the training.
      • The ‘Chromosomes’ tab is now displaying results again (this broke at some point)
    • Forecast:
      • We haven’t done much testing on this (so I’m sure we’ll find some more bugs), but we have started adding the ability to associate a year with a forecast; done so in the following ways:
        • You can now include a year when batch duplicating --> so, if you want pub-dates from 11/1/22 – 2/1/23; you will do pub dates from 11/1 – 12/31 in 2022; and then pub dates from 1/1 – 2/1 in 2023.
          • BUG NOTE: for some reason using 12-31 isn’t working right now (I believe using the 31st of any month isn’t working)… we have a ticket logged to fix this.
          • BUG NOTE: currently the year when you batch create is being stored as part of the model’s name, however we will make this a dynamic variable; so, if you change the forecast year selected for a model this value changes as well.
        • You can forecast for multiple years for an individual scenario – and go between these to see the results.
        • You can view the evolution plot for a batch, for multiple different years – and go between these to see the results.
        • BUG NOTE: Forecasting for a future date is prevented in the individual scenario, however it is not yet prevented via the list action buttons --> which will cause an error.
      • Min num of vars has been capped at the number of enabled predictors.
      • We’ve handled edge case error that occurred if you have 2 predictor stations with same element but different dates.
      • The evolution plot is looking at the actual ensemble values (reflecting changes if user disables an individual model’s member(s))
      • The ‘stickiness’ of the disabled members is now associated with the batch, if model is a child.
  • Interim Changes to Prevent Users from Experiencing Known Bugs:
    • Can’t run a training as Non-GA (until service-edit is made)
    • Can’t increase max modes past 1 (until service-edit is made)


Deployment V 0.3.12 on 1/6/23

  • General:
    • Deletion Confirmations --> Now, there is a confirmation prompt when you delete a project, model(s), and station group/model form association
    • Can now view appropriate log file for your project
    • Can delete all projects without error
    • Stepper messages reflect more appropriately
  • Model Scenario List:
    • You can now duplicate a batch of scenarios (this functionality broke at some point)
    • There are now consistent (across all 3 service calls), progressive (the icon tooltip reflects which service call the user is currently on) icons associated with each scenario; they:
      • Alert user:
        • If they need to build/train/forecast
        • If one of these is in-progress
        • If one of these failed
    • Allow user to do the following by clicking on the icon:
      • Start the build/train/forecast
      • Stop a currently running training
      • Re-run a previously failed build/train/forecast
    • You can now successfully batch create from Feb. to March (we've just gone with never including the 2/29 date)
    • When you add a scenario to an existing batch, you now see that pub date reflected in the template's 'Publication Date Ranges' section
    • When you batch train, the icons showing if scenarios are queued/in-progress/finish now auto-update, no need for refresh
  • Step 1:
    • Now displays the who/when associated with the 3 service calls (built, trained, forecasted)
  • Step 4:
    • Now, if you select non-GA, the selections you make will persist, and you can select any combo of 2 - 4 for manual pc selection (because 1 is always selected)
    • Min # of vars starts at 1/2 of the enabled predictors
    • The edge case of a predictor station name having a unique character in it and causing the training to fail has now been addressed (we just took the station name out of the column header in the .txt file passed to the M4 service)
    • Build PDF's tab is working again (this functionality broke at some point)
  • Forecast:
    • If you're on the forecast side of things (for individual scenario or master), and you haven't built/trained/forecasted yet, there is now a message telling you to do that first
    • Master scenario's evolution plot:
      • Now also displays the observed flow values for previous years


Deployment V 0.3.11 on 12/13/22

  • Logging File (at top, by project creation button):
    • This is logging all service request inputs/outputs; and there is an easy ‘Copy All’ button so if we ever ask you to send this file, you can click this and send it via email
  • Model Scenario List:
    • Master Scenario Overflow Menu:
      • Batch build/train/forecast is working; however, you must do these in order --> so you can’t train til you’ve built, and can’t forecast til you’ve trained
    • Toolbar at top of scenario list:
      • You can now filter the list based on who/when a scenario was last built/trained/forecasted
  • Training:
    • Our Amazing Dave implemented a throttle of training-specific service requests! So, now you can batch train multiple scenarios, and the number of scenarios hitting the service will be limited to 5 at a time (we will adjust this accordingly as we add more cores); see the sketch at the end of this deployment's list
  • Downloading files:
    • You can now download the following:
      • Build > Step 4 > Training:
        • Build_files.zip
        • MMPE_RunControlFile.txt
      • Forecasting:
        • MMPE_RunControlFile.txt
      • *If either of these service calls fail, you can’t download any of the above (or, it will download the files associated with the most recently successful run); we have a task item to address this soon
  • Bug Fixes (*Just including these so that in case you guys have noticed any of these while testing, you know if they’ve been addressed or not):
    • When you add a scenario to a batch of scenarios the name automatically turns green now
    • Model scenario list filtering by model form isn’t buggy
    • When you select a child scenario of a master, the list doesn’t automatically close
    • Run button for all service calls, text on hover should be accurate now
    • When you add a station to the predictor list via the list ‘+’ row, that station name doesn’t stay selected in that row after it’s been added to list
    • Step 2 -> Build, the forecast stepper message is now up to date with if the build has been run or not
    • If you haven’t ‘Built’ the scenario in step 2, or if the build fails, you can not proceed to step 3
    • Master scenarios’ checkboxes to edit settings, hover text is no longer backwards to the action being performed
    • When creating a new project, you will now see a loading icon, and the ‘Ok’ button will be disabled, until the project is built --> to prevent users from accidentally creating multiple of the same projects
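
  A sketch (Python) of the kind of 5-at-a-time throttle described under Training above; the queueing mechanism and names aren't in the notes, so this is an illustrative pattern rather than the implemented code:

      from concurrent.futures import ThreadPoolExecutor

      MAX_CONCURRENT_TRAININGS = 5  # per the note; adjustable as more cores are added

      def train_scenario(scenario_id):
          # Placeholder for the real training service call
          print(f"training {scenario_id}")

      def batch_train(scenario_ids):
          # Every scenario is accepted immediately, but only MAX_CONCURRENT_TRAININGS
          # training calls run against the service at a time
          with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_TRAININGS) as pool:
              list(pool.map(train_scenario, scenario_ids))

      batch_train([f"scenario-{i}" for i in range(12)])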


Deployment V 0.3.10 on 11/30/22

  • Model Scenario List:
    • Master Scenario Overflow Menu:
      • You can now build/train/forecast all children scenarios underneath a master ** See ‘Known bug that we’re in the process of addressing’
      • You can now add a single scenario, or scenarios across a date range to an existing batch of scenarios
        • *When you do this, you must click on the scenario before you can build it… this is a bug we’re working on
    • Toolbar at top of scenario list:
      • You can now build/train/forecast all scenarios in the list ** See ‘Known bug that we’re in the process of addressing’
        • If you filter the list, then click one of these buttons, you will only be build/train/forecast-ing the filtered scenarios
  • Step 1:
    • The name/status/notes info can now be found/changed here
  • Step 4:
    • You can now cancel a currently running training
    • After training a scenario, there is now a button (next to the play button) that will allow you to download the entire ‘build’ file returned from the training
  • Forecasting:
    • Master Scenario:
      • Now, if the children of a master scenario have forecast results, you can view an evolution plot in the Master scenario (by clicking on the Master scenario, and being on the ‘forecast’ side)
    • Non-Master Scenario:
      • You can now enable/disable members of an ensemble
        • The ensemble mean value will update to only be the mean of the enabled members (a sketch follows below)
        • When you change a disabled status of a member, all future trainings’ members will assume the same disabled statuses
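
  A sketch (Python) of the enabled-members-only ensemble mean mentioned above; the data shapes are assumptions:

      def ensemble_mean(member_values, enabled):
          # member_values: {member name: forecast value}; enabled: {member name: bool}
          vals = [v for name, v in member_values.items() if enabled.get(name)]
          return sum(vals) / len(vals) if vals else None

      members = {"PCR-BC": 120.0, "PCQR": 110.0, "PCANN-BC": 130.0}
      print(ensemble_mean(members, {"PCR-BC": True, "PCQR": True, "PCANN-BC": False}))  # 115.0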