Quantifying the impact of hospital catchment area definitions on hospital admissions forecasts: COVID-19 in England – BMC Medicine

Comparison of catchment area definitions

Catchment area definition descriptive statistics and overlap-similarity

A visual inspection reveals some clear differences between the hospital catchment area definitions (Fig. 1); these differences were more evident from the descriptive summary statistics (Additional file 1: Fig. S3–S5). The marginal distribution definition is clearly very different from all other definitions, since it includes all local authorities in the catchment area for every Trust; we do not discuss it further here. The other major differences were between the nearest and nearby heuristic definitions, and between the heuristic definitions and the three data-derived definitions.

Comparison of six LTLA-level hospital catchment area definitions for University Hospitals Bristol And Weston NHS Foundation Trust. The hospital catchment areas are defined as follows: by the marginal distribution of hospital admissions to Trusts, June 2020–May 2021 (marginal); the nearest Trust for each LTLA (nearest); any Trust within a 40-km radius (shown by the red dashed circle) of the LTLA (nearby); by the distribution of emergency, or elective, hospital admissions in 2019 (emergency and elective, respectively); and by the distribution of COVID-19 hospital admissions, June 2020–May 2021 (covid). In each panel, the colour denotes the proportion of all patients from that LTLA that are admitted to University Hospitals Bristol And Weston: darker colours indicate a higher proportion, and white indicates zero admissions. The Trust's main site is marked by a red cross

First, we compared the nearest and nearby heuristics. The characteristics of the nearby heuristic were very different for Trusts inside versus outside the London NHS region. In London, the distribution of weights was very homogeneous (median 0.037, IQR [0.037, 0.04]), each Trust had on average 38 UTLAs in its catchment area (IQR [38, 38]), and the median distance between Trusts and UTLAs was 14.4 km (IQR [13.4, 17.9]). This is because local authorities in London (boroughs) are small (all less than 10 km in diameter), so the 40-km radius includes most other boroughs as well as other nearby local authorities. Outside of London, the distribution of weights was more heterogeneous, as was the number of UTLAs per catchment area (median 7, IQR [3, 11]), and the median distance was higher (23.1 km, IQR [18.8, 29.5]). In comparison to the nearby heuristic, the nearest heuristic was very homogeneous and, by construction, constrained to a much smaller geographic area. The majority of Trusts (98/138) had a single UTLA in their catchment area (median 1, IQR [1, 2]). The median distance between Trusts and their nearest UTLA was very small (5.4 km, IQR [2.8, 14.7]; Additional file 1: Fig. S5A).
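As an illustration of how these two heuristics can be constructed (a sketch, not the study's code), the following Python snippet builds nearest and nearby weight matrices from a hypothetical Trust-to-local-authority distance matrix; the 40-km radius follows the definition above, while the data structure and the equal-weight normalisation are assumptions.

```python
# Sketch only: "nearest" and "nearby" heuristic catchment weights.
# `distances_km` is a hypothetical pandas DataFrame (rows = Trusts,
# columns = local authorities) of distances in kilometres.
import pandas as pd

def nearest_weights(distances_km: pd.DataFrame) -> pd.DataFrame:
    """Assign each local authority entirely to its nearest Trust."""
    weights = pd.DataFrame(0.0, index=distances_km.index, columns=distances_km.columns)
    for la in distances_km.columns:
        weights.loc[distances_km[la].idxmin(), la] = 1.0
    return weights

def nearby_weights(distances_km: pd.DataFrame, radius_km: float = 40.0) -> pd.DataFrame:
    """Split each local authority equally between all Trusts within `radius_km`."""
    within = (distances_km <= radius_km).astype(float)
    # Normalise each column so the weights for a local authority sum to 1
    # (assumes every local authority has at least one Trust within the radius).
    return within / within.sum(axis=0)
```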

In contrast to the simple heuristics, the distributions of weights for the three definitions derived from admissions data were more heterogeneous (Fig. 1 and Additional file 1: Fig. S3). More specifically, we found that the weight assigned to local authorities for a given Trust decreased as the distance between the local authority and the Trust increased. Compared to the heuristic definitions, the emergency, elective, and COVID-19 admissions definitions shared many similarities. The number of local authorities in a Trust's catchment area was comparable across the three definitions (median of 4, 5, and 3 UTLAs for the emergency, elective, and COVID-19 admissions data definitions, respectively, for a threshold of x = 1%; Additional file 1: Fig. S4, second column), as was the average distance (median 17.0, 18.2, and 14.3 km for the emergency, elective, and COVID-19 admissions data definitions, respectively, using a weight threshold of x = 1%; Additional file 1: Fig. S5A, second column). A large proportion of Trusts' emergency, elective, and COVID-19 admissions catchment areas came from the nearest local authority (median 76.5%, 72.4%, and 83.1%, respectively; Additional file 1: Fig. S5B). Furthermore, virtually all of Trusts' emergency, elective, and COVID-19 catchment areas came from nearby local authorities (within 40 km) (median 99.3%, 97.8%, and 100%, respectively; Additional file 1: Fig. S5C). By contrast, only 15.3% of the nearby heuristic definition was from the nearest local authority, since all local authorities within a Trust's 40-km radius were assigned the same weight, irrespective of distance.
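A data-derived definition can be sketched in the same way (again illustrative, not the study's code): weights are the proportion of each local authority's admissions that go to each Trust, with weights below the threshold x set to zero; the input format and the renormalisation step are assumptions.

```python
# Sketch only: catchment weights derived from an admissions line list.
# `admissions` is a hypothetical DataFrame with one row per admission and
# columns "trust" and "local_authority".
import pandas as pd

def admission_weights(admissions: pd.DataFrame, threshold: float = 0.01) -> pd.DataFrame:
    counts = admissions.groupby(["trust", "local_authority"]).size().unstack(fill_value=0)
    weights = counts / counts.sum(axis=0)               # proportion of each LA's admissions per Trust
    weights = weights.where(weights >= threshold, 0.0)  # apply the x = 1% threshold
    return weights / weights.sum(axis=0)                # renormalise each local authority's weights
```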

According to the overlap-similarity metric, the emergency and elective catchment area definitions were the most similar (median overlap-similarity 0.84; Additional file 1: Fig. S6A), and both were also similar to the COVID-19 definition (median overlap-similarity of 0.74 and 0.77 with the emergency and elective definitions, respectively; Additional file 1: Fig. S6A). Moreover, the median asymmetric overlap-similarity relative to the COVID-19 definition was 1 and 0.99 for the emergency and elective admissions definitions, respectively (Additional file 1: Fig. S6B); that is, for more than half of the Trusts, all local authority weights assigned by the COVID-19 definition were less than or equal to the weights assigned by either the emergency or elective definitions.
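To make the metrics concrete, the sketch below shows one plausible form of the asymmetric overlap-similarity, consistent with the interpretation above (it equals 1 exactly when every weight under the reference definition is no larger than the corresponding weight under the other definition); the symmetric version shown here, and the exact formulas, are assumptions rather than the paper's definitions.

```python
# Sketch only: plausible overlap-similarity metrics for two weight vectors
# (per-local-authority weights for one Trust under two definitions).
import numpy as np
import pandas as pd

def asymmetric_overlap(w_other: pd.Series, w_ref: pd.Series) -> float:
    """Fraction of the reference definition's weight covered by the other definition."""
    w_other, w_ref = w_other.align(w_ref, fill_value=0.0)
    return float(np.minimum(w_other, w_ref).sum() / w_ref.sum())

def overlap_similarity(w_a: pd.Series, w_b: pd.Series) -> float:
    """Symmetric summary, taken here as the smaller of the two asymmetric overlaps."""
    return min(asymmetric_overlap(w_a, w_b), asymmetric_overlap(w_b, w_a))
```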

As expected, the marginal distribution definition was very dissimilar to all other definitions, with a median overlap-similarity of at most 0.05 with every other definition (Additional file 1: Fig. S6A), and the majority of asymmetric overlap-similarity values were < 20% (Additional file 1: Fig. S6C). The nearby heuristic was also dissimilar to the other definitions on average (median overlap-similarity < 0.3; Additional file 1: Fig. S6A), although individual asymmetric overlap-similarity values varied considerably from one Trust to another for all definitions except the marginal distribution (Additional file 1: Fig. S6C).

Although trends in local COVID-19 cases often varied across England, cases in neighbouring local authorities were generally strongly correlated with each other (Additional file 1: Fig. S7). The median pairwise correlation by LTLA (averaged across the correlations with all other LTLAs in England) varied substantially, with some notable dates and locations where the median value was negative (Additional file 1: Fig. S7A). For example, the median correlation in Liverpool in mid-October and early November 2020 was negative: while cases in most local authorities were rising, they were decreasing in Liverpool and nearby local authorities due to local restrictions on social distancing [27]. As another example, the median correlation in Medway (a mainly rural local authority in South East England) in the second half of November and early December 2020 was negative: cases in Medway were rising as the Alpha variant emerged, while cases were stable or declining in most local authorities following the second national lockdown (05 November–02 December 2020) and additional earlier restrictions on social distancing.

In contrast, the numbers of cases reported by local authorities within the same catchment area were usually strongly correlated with each other (median correlation coefficient > 0.5; Additional file 1: Fig. S7B), especially from October 2020 through February 2021. For example, for Liverpool NHS Foundation Trust, the median correlation between the main LTLAs in its catchment area (Liverpool, Knowsley, Sefton, and West Lancashire) was above 0.7 throughout October and November 2020, despite the negative correlation nationally. Similarly, the correlation between the main LTLAs in the catchment area of Medway NHS Foundation Trust (Medway and Swale) was above 0.5.
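Within-catchment correlations of this kind can be computed as rolling pairwise correlations; the sketch below (not the study's code) assumes a DataFrame of daily case counts by LTLA and an arbitrary 21-day window.

```python
# Sketch only: median pairwise rolling correlation of cases between the LTLAs
# in one catchment area. `cases` has dates as the index and LTLAs as columns.
import numpy as np
import pandas as pd

def median_pairwise_correlation(cases: pd.DataFrame, ltlas: list[str], window: int = 21) -> pd.Series:
    rolling_corr = cases[ltlas].rolling(window).corr()  # MultiIndex (date, LTLA) rows x LTLA columns
    medians = {}
    for date, mat in rolling_corr.groupby(level=0):
        mat = mat.droplevel(0)
        pairs = mat.values[np.triu_indices_from(mat.values, k=1)]  # distinct LTLA pairs only
        medians[date] = np.nanmedian(pairs) if pairs.size else np.nan
    return pd.Series(medians)
```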

Despite some clear differences between the six catchment area definitions, the median forecasts under each definition were, on average, strongly positively correlated with each other. This was likely a result of the high correlation between reported COVID-19 cases within the majority of catchment areas during the evaluation period. The median correlation coefficient (across all locations and dates) was above 0.8 for every pair of definitions (Additional file 1: Fig. S8A) and was especially high during October 2020 (when national admissions were increasing) and from December 2020 through February 2021 (when national admissions increased quickly, then decreased after the national lockdown was implemented) (Additional file 1: Fig. S8B). However, forecasts made under different catchment area definitions were less strongly correlated during other time periods. Notably, the median correlation coefficient (across all locations) between forecasts made on 29 November 2020 using the marginal distribution definition and any other definition was less than 25% (Additional file 1: Fig. S8B). This is likely due to the emergence of the Alpha variant in London and Kent in South East England and the subsequent rise in cases, following a period of varying local restrictions (the tier system) in the North of England and a month-long national lockdown. Since the marginal distribution definition is not a local definition (the catchment area is the same for all Trusts), it is unsurprising that, in this very localised context, it led to very different median forecasts from the other definitions. A visual inspection of the forecasts shows example forecast dates and locations for which the forecasts are meaningfully different (for example, Mid And South Essex NHS Foundation Trust on 13 December 2020; Fig. 2).

Example of retrospective forecasts made on 13 December 2020 for Mid And South Essex NHS Foundation Trust. These forecasts are based on UTLA-level catchment area definitions and use future observed cases. Shown are median forecasts (line) and 50% and 90% quantile forecasts (dark and light ribbons, respectively). The black solid line shows admissions observed up to the forecast date (13 December, marked by a vertical dotted line), while the black dashed line and points show realised admissions, for reference

All catchment area definitions resulted in forecasts that overestimated the uncertainty (Additional file 1: Fig. S9): for a nominal coverage of 50%, the empirical coverage of all definitions was 60–70%, and for a nominal coverage of 90%, the empirical coverage was 90–95%. The difference between nominal and empirical coverage decreased, albeit not substantially, at longer forecast horizons. For example, for a nominal coverage of 50%, the empirical coverage of all definitions was in the range 68–70% at a 1-day forecast horizon and 61–65% at a 14-day forecast horizon (Additional file 1: Fig. S9, second row).
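Empirical coverage of this kind can be computed by checking how often the realised admissions fall inside the corresponding central prediction interval; the sketch below (not the study's code) assumes a simple table of interval bounds and observations for one nominal level.

```python
# Sketch only: empirical coverage of a central prediction interval.
# `forecasts` is a hypothetical DataFrame with columns "lower", "upper" (the
# interval bounds for one nominal level) and "observed" (realised admissions).
import pandas as pd

def empirical_coverage(forecasts: pd.DataFrame) -> float:
    inside = (forecasts["observed"] >= forecasts["lower"]) & (forecasts["observed"] <= forecasts["upper"])
    # Coverage above the nominal level (e.g. 60-70% for a 50% interval)
    # indicates that the forecasts overestimate uncertainty.
    return float(inside.mean())
```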

There was little difference in calibration between forecasts using the different catchment area definitions, especially compared to the difference between nominal and empirical coverage. The COVID-19 definition showed the smallest overestimate of uncertainty for a nominal coverage of 20% (Additional file 1: Fig. S9, first row), but this difference disappeared at higher nominal coverage values. There was also no difference in calibration between spatial scales (upper- vs. lower-tier local authority; Additional file 1: Fig. S9, first and second columns) or between future observed and future forecast cases (Additional file 1: Fig. S9, first and third columns).

When evaluated by forecast horizon, the forecasts made using the marginal distribution definition were consistently the least accurate (highest rWIS values); the nearby heuristic and COVID-19 data definitions were generally the most accurate, with the other definitions (the nearest heuristic and the emergency and elective data definitions) falling in the middle ranks (Fig. 3A). There was only a small difference between the definitions' absolute rWIS values, which could indicate either only small differences in probabilistic forecast accuracy or no consistent trend in performance across forecast dates and/or locations. Finally, the average accuracy of forecasts made with the emergency and elective data definitions was very similar at all forecast horizons.
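For reference, the weighted interval score underlying these rankings can be written down directly from its standard definition (Bracher et al.); the relative score in the sketch below, a simple ratio to a baseline's mean WIS, is an assumption and may differ from the exact rescaling behind the paper's rWIS.

```python
# Sketch only: weighted interval score (WIS) for a single forecast, and a
# simple relative score against a baseline.
import numpy as np

def interval_score(y: float, lower: float, upper: float, alpha: float) -> float:
    """Interval score for the central (1 - alpha) prediction interval."""
    return (upper - lower) + (2 / alpha) * max(lower - y, 0.0) + (2 / alpha) * max(y - upper, 0.0)

def weighted_interval_score(y: float, median: float, intervals: dict[float, tuple[float, float]]) -> float:
    """WIS over central intervals keyed by alpha, e.g. {0.2: (lower, upper), ...}."""
    k = len(intervals)
    penalties = sum((alpha / 2) * interval_score(y, lo, up, alpha) for alpha, (lo, up) in intervals.items())
    return (0.5 * abs(y - median) + penalties) / (k + 0.5)

def relative_wis(wis: np.ndarray, baseline_wis: np.ndarray) -> float:
    """Illustrative relative score: mean WIS divided by the baseline's mean WIS."""
    return float(np.mean(wis) / np.mean(baseline_wis))
```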

Forecasting performance under different hospital catchment area definitions (by UTLA) using future observed cases. A Median interval score (taken over all forecast dates and locations) for each forecast horizon, with the values for a 7-day forecast horizon highlighted in the grey-shaded region. B Median interval score for each forecast date; 7-day forecast horizon. C Median interval score for the 40 acute NHS Trusts with the most total COVID-19 admissions (descending from top to bottom); 7-day forecast horizon. Trusts are labelled by their three-letter organisational code; see [15] for a full list of Trust codes and names

There was no clear best catchment area definition when we evaluated forecasts by forecast date (Fig. 3B), with almost all definitions being both first- and last-ranked by rWIS for at least one forecast date (only the nearest heuristic was never first-ranked, and the elective admissions data definition was never last-ranked). The COVID-19 admissions data definition was first- or second-ranked by rWIS for the majority (9/15) of forecast dates at both a 7- and 14-day forecast horizon (Additional file 1: Fig. S10). At the same time, this definition was last-ranked only once at a 7-day horizon and was never last-ranked at a 14-day horizon. The nearby heuristic performed comparably to the COVID-19 definition, and at a 7-day forecast horizon in particular there was little to differentiate them. Forecasts made with the marginal distribution definition had the most variable accuracy and were especially poor during December 2020. As noted previously, it was during this period that local COVID-19 case trends were more heterogeneous (Additional file 1: Fig. S2) as a result of the emergence of Alpha and local social distancing regulations. In general, there was more variation between different catchment area definitions before February 2021, when there was more heterogeneity in subnational case and admission trends (Additional file 1: Fig. S1 and S2), than after, when both cases and admissions were consistently falling across England. These results therefore suggest that it is more important to use a local catchment area definition when there is heterogeneity in local case trends. Finally, we saw again that the emergency and elective admissions data definitions performed similarly across all forecast dates.

Again, there was no clear best definition when we evaluated forecasts by location, but there was more variation between definitions (Fig. 3C). The COVID-19 definition was first- or second-ranked more frequently than the other definitions (approximately 45% and 50% of locations for a 7- and 14-day horizon, respectively; Additional file 1: Fig. S10). Again, the nearby heuristic also performed well (first- or second-ranked for approximately 35% and 40% of Trusts for a 7- and 14-day forecast horizon, respectively). However, the COVID-19 definition was more consistent than the nearby heuristic: while the COVID-19 definition was last-ranked for only 5% of Trusts, the nearby heuristic was last-ranked for 20%. Interestingly, the marginal distribution baseline definition was first-ranked, that is, it resulted in more accurate forecasts than the other definitions, for approximately 30% of locations, yet it also resulted in the least accurate forecasts for approximately 40% of locations (Additional file 1: Fig. S10B).

We found no change in relative forecast accuracy for any of the catchment area definitions when using LTLA-level rather than UTLA-level catchment area definitions (Additional file 1: Fig. S11). Forecasts either performed comparably, or there was no clear pattern to the differences, when evaluated by forecast horizon (Fig. S11A), forecast date (Fig. S11B), or location (Fig. S11C).

When using forecasts of future cases instead of the retrospectively known case trajectories to make forecasts, the choice of catchment area definition had the biggest effect on probabilistic forecast accuracy at a 14-day forecast horizon (Fig. 4). The marginal distribution definition had the largest rWIS value (rWIS = 1.53; Fig. 4A). This poor performance was linked to a few forecast dates (4 and 18 October, 29 November, and 13 December 2020; Fig. 4B); from January 2021 onwards, the relative accuracy of all definitions was comparable. The nearest hospital heuristic resulted in the most accurate forecasts at a 14-day horizon (rWIS = 0.84; Fig. 4A), largely due to particularly good relative forecast performance on 13 and 27 December 2020 (Fig. 4B). All other definitions had rWIS values in the range 0.92–0.96 (Fig. 4A). At shorter forecast horizons, the relative performance of all definitions was comparable, although the marginal distribution definition was consistently one of the worst-performing definitions.

Forecasting performance under different hospital catchment area definitions using future forecast cases. A Median interval score (taken over all forecast dates and locations) for each forecast horizon, with the values for a 14-day forecast horizon highlighted in the grey-shaded region. B Median interval score for each forecast date; 14-day forecast horizon. C Median interval score for the top 40 acute NHS Trusts (by total COVID-19 admissions); 14-day forecast horizon

When considering the rWIS rankings by forecast date and location, the COVID-19 data definition stood out (Additional file 1: Fig. S12): it was first- or second-ranked for 7/14 forecast dates at both a 7- and 14-day forecast horizon, and for approximately 35% and 50% of locations at a 7- and 14-day forecast horizon, respectively. Although the nearby hospitals heuristic performed comparably to the COVID-19 definition as measured by top rWIS rankings, it was less consistent when evaluated by location: it was ranked in the bottom two for 40% of locations, compared to only 21% and 12% for the COVID-19 definition at 7- and 14-day horizons, respectively.
