By

Raleigh, Mark S. 1; Deems, Jeffrey S. 2

1 CU / CIRES / NSIDC
2 CU / CIRES / NSIDC

In the mountainous western United States, operational runoff forecasting centers typically use streamflow observations to calibrate snow models over multi-decadal periods to capture the mean snow accumulation and melt dynamics of watersheds. When seasonal or annual snow conditions deviate too far from mean historic conditions, such as during extreme events (e.g., snow drought) or hydrologic disturbances (e.g., dust-on-snow), the accuracy of operational snow models can degrade. Recently, streamflow forecasting errors in the Upper Colorado River Basin (UCRB) have been linked to interannual variations in dust radiative forcing retrieved from remote sensing. Here we build on previous work in the UCRB to quantify spatial and temporal variations in optimal operational snow model parameters and evaluate linkages with interannual variations in snow accumulation (e.g., low vs. high snow years), spring snowstorm regimes, and dust radiative forcing. We leverage 30 years of snow and streamflow data, and 15 years of remotely sensed dust radiative forcing data from MODIS, to investigate the conditions influencing optimal calibrations. Using snow data and corrected temperature data at over 100 stations in the Colorado-Utah-Wyoming domain, we derive optimal snow model parameters with independent calibrations at each site and year. Initial results show coherent spatial and temporal patterns in the model melt factors that cannot be explained by geographic predictors, but that appear linked to snow accumulation regime and dust-on-snow. We demonstrate how streamflow, meteorological, and remote sensing data provide clues about the effects of mass and energy balance deviations on operational model skill. Finally, we discuss implications for forecasting operations and how this work can guide modifications to model procedures, such as using ensembles of model parameters.
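To make the per-site, per-year calibration of melt factors concrete, the sketch below shows a minimal temperature-index (degree-day) snow model whose melt factor is fit independently for one site-year by a brute-force search. This is an illustrative assumption, not the operational model used in the study: the function names, parameter values, and synthetic data are all hypothetical.

```python
# Minimal temperature-index (degree-day) snow model sketch.
# Hypothetical illustration only: the melt factor (ddf, mm per degC per day)
# is calibrated for a single site-year, mirroring the independent per-site,
# per-year calibrations described in the abstract.

def simulate_swe(precip_mm, temp_c, ddf, t_base=0.0, t_snow=0.0):
    """Run a daily degree-day snow model; return the SWE series (mm)."""
    swe, series = 0.0, []
    for p, t in zip(precip_mm, temp_c):
        if t <= t_snow:                        # precipitation falls as snow
            swe += p
        melt = max(0.0, ddf * (t - t_base))    # degree-day melt
        swe = max(0.0, swe - melt)
        series.append(swe)
    return series

def calibrate_ddf(precip_mm, temp_c, obs_swe, candidates):
    """Pick the melt factor minimizing RMSE against observed SWE."""
    def rmse(ddf):
        sim = simulate_swe(precip_mm, temp_c, ddf)
        return (sum((s - o) ** 2 for s, o in zip(sim, obs_swe))
                / len(obs_swe)) ** 0.5
    return min(candidates, key=rmse)

# Synthetic one-year record: cold accumulation season, then a warm spring.
precip = [10.0] * 30 + [0.0] * 30
temp = [-5.0] * 30 + [3.0] * 30
obs = simulate_swe(precip, temp, ddf=4.0)      # "observations" from a known ddf
best = calibrate_ddf(precip, temp, obs, [2.0, 3.0, 4.0, 5.0, 6.0])
print(best)  # recovers the melt factor used to generate the data: 4.0
```

Repeating this fit across stations and years yields the kind of melt-factor fields whose spatial and temporal coherence the abstract examines; a dusty spring, for instance, would tend to inflate the optimal melt factor relative to a clean-snow year.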