{"title":"用于气象应用的非常规观测","authors":"Joanne Waller, Tess O' Hara","doi":"10.1002/met.70034","DOIUrl":null,"url":null,"abstract":"<p>Conventional observations, such as those from satellites, radiosondes, weather balloons, ships, aircraft, traditional surface weather stations and rain gauges are commonly used in meteorological applications. Unconventional observations are becoming an increasingly valuable source of information for meteorological applications, often providing information at much higher spatial and temporal resolution than conventional observing networks and typically at a fraction of the cost (e.g., Nipen et al., <span>2020</span>; O'Hara et al., <span>2023</span>; Waller, <span>2020</span>). They are also able to provide information more representative of local situations, such as individual urban streets, where conventional observing sites are not situated (e.g., Brousse et al., <span>2022</span>; Feichtinger et al., <span>2020</span>). As a result, the usefulness of these observations is being investigated for a variety of different meteorological uses (Hahn et al., <span>2022</span>; Muller et al., <span>2015</span>). There are also coordinated efforts to improve data access, processing and application, for example, the EU OpenSense project on the opportunistic sensing of rainfall (https://opensenseaction.eu/). However, a key issue identified with unconventional observations is the need for a good understanding of their quality, and the development of appropriate quality control methods (e.g., Beele et al., <span>2022</span>; Fenner et al., <span>2021</span>; Napoly et al., <span>2018</span>) to ensure their usefulness in various meteorological applications.</p><p>Unconventional observations for meteorological applications can be obtained in a variety of ways. 
Data may be obtained opportunistically with meteorological information derived from non-meteorological sensors, or via the deployment of a network of low-cost sensors (e.g., Chapman et al., <span>2015</span>; Vetra-Carvalho et al., <span>2020</span>). Alternatively, data can be ‘crowdsourced’ and obtained from a group of people either with or without their explicit involvement in the data collection process, for example, via private automatic weather stations or a smartphone ‘app’, or collected via citizen-science projects where information is obtained from a group of people who are invited to participate in the data collection process (Hintz, Vedel, et al., <span>2019</span>; Kirk et al., <span>2021</span>). Such citizen science projects can be particularly valuable as they permit interaction between experts and the public, providing educational opportunities and experiential learning to aid in the appreciation of risks, for example, extreme weather impacts (Batchelder et al., <span>2023</span>; Paul et al., <span>2018</span>).</p><p>Within Numerical Weather Prediction (NWP), unconventional observations have been used to supplement conventional data for nowcasting, data assimilation, forecast post-processing and forecast verification (Hintz et al., <span>2019</span>). For example, private weather stations have been used to aid the forecasting of convection initiation (Gasperoni et al., <span>2018</span>). They have also been used for severe weather detection and to provide early warnings in order to help reduce the impacts of events such as tornadoes, hailstorms or flash floods; furthermore, they can be used in an educational context to increase public understanding of high-impact weather events and improve public preparedness and response to such hazardous situations (WMO, <span>2024</span>).
Urban meteorology, both in terms of forecasting and climate services, has also benefited from the high-density information provided by unconventional observations (e.g., Meier et al., <span>2017</span>; Mitchell & Fry, <span>2024</span>).</p><p>This special issue brings together articles that discuss some of the recent progress in using such unconventional observations, particularly in terms of development of quality control (QC) procedures and potential use in NWP. We now introduce each of the articles beginning with those that focus on the development of quality control procedures for various types of unconventional observations, moving through various stages of cross-validation and proof of concept applications.</p><p>The article by Coney et al. (<span>2022</span>) assesses the reliability of Netatmo brand weather station external temperature sensors under laboratory conditions and ‘in the wild’, using the UK Met Office weather observation network for reference. Controlled testing of Netatmo sensors allows accuracy assessment for the absolute observation and lag time, showing the sensors meet the manufacturer's specification and are reasonably accurate (±0.3°C) as compared with laboratory grade equipment (±0.1°C); however, a lag of several minutes in registering temperature change was recorded. Following the testing, temperature observations from Netatmo sensors are subject to three quality control algorithms before being gridded and compared with observations from the Met Office ground-based monitoring network. The inter-comparison between gridded Met Office data and the Netatmo observations showed a reduction, but not an elimination, of erroneous temperatures and positive bias following quality control. The researchers conclude that the Netatmo temperature sensor casing may be contributing to lag time and that anomalies in temperature are most likely due to incorrect placement of sensors in direct sunlight, or indoors. 
They recommend that more work be done to develop an appropriate QC methodology, and that Netatmo station owners be provided with more support to ensure their sensors are appropriately located. If this can be achieved, it is likely that, due to the popularity of the units, especially in cities, Netatmo temperature sensor data could aid investigations into urban heat islands and observations could ultimately be used for initiating and/or validating weather forecasts.</p><p>The article by Siems-Anderson (<span>2024</span>) reviews the use of vehicle-based observations in weather prediction and decision support. The paper discusses the historical use of observations from vehicles, which has typically been confined to using limited sets of observations from research and field campaigns, and describes how the emergence of connected and autonomous vehicles has the potential to vastly increase the number of vehicle-based observations available for meteorological applications. Like any opportunistic observation dataset, vehicle-based observations are subject to a number of data quality concerns and challenges; the paper states that data latency and the availability of metadata can inhibit the basic use of such observations. Furthermore, traditional quality control is not always applicable to observations from moving platforms, so new methods must also be adopted, and a number of these are described. Once the data has been quality controlled, there is a wide variety of weather applications that can take advantage of this type of observation. Typically, they have been used to provide information on winter weather, for example, to improve winter weather road maintenance or to provide guidance for safer travel in the presence of hazardous weather. More recent research has focused on the information these observations can provide in urban heat islands or for dust detection and has considered whether they are suitable for use in data assimilation systems.
Finally, the article discusses the public policy considerations of this unique source of weather data. One of the key concerns with vehicle-based observations is privacy; this is a major issue since many vehicles are privately owned, and metadata associated with the observations can give away information such as a person's location or the speed at which they are travelling. Suitable policy should therefore be developed to deal with this. Additionally, policy may also need to be developed to handle, and store, the very large quantity of observations that could be collected by connected and autonomous vehicles, and cross-agency/sector collaboration will be invaluable for this. The author concludes that, whilst they have potential, vehicle-based observations remain under-utilised; to exploit these observations it will be necessary to establish quality standards for this dataset, balance quality control with data privacy issues and determine how best to make the observations available to the meteorological community to increase their use.</p><p>Furthermore, it is shown that these uncertainties are likely to be weather- and vehicle-dependent.</p><p>The authors conclude that despite the challenging nature of this data, with appropriate quality control it could be a useful source of spatially dense and temporally frequent near-surface temperature observations for NWP.</p><p>Particulate matter (everything in the air that is not a gas) less than 2.5 micrometres in diameter (PM<sub>2.5</sub>) has been in the news due to emissions from wood-burning stoves, particularly in urban areas, but vehicles can be another source of this contaminant (from tyres and brakes) that can cause serious health impacts. The research presented in the paper by Cowell et al.
(<span>2024</span>) uses data from two low-cost sensor types at a total of 28 locations, from 2 sensors in the DEFRA Automatic Urban Rural Network (AURN) and from the Birmingham Air Quality Supersite (BAQS), collected via the Birmingham Urban Observatory cloud platform. A QC methodology is devised comprising a four-step process to assess completeness, meteorological operating ranges (for temperature and relative humidity), faulty zeros and outliers; this process is applied to the available data. The resulting data are then compared with a high-resolution (10 m × 10 m) air quality map derived from a previous year's data, limiting the ability to perform absolute comparisons but providing an opportunity to assess the spatial pattern of PM<sub>2.5</sub> concentrations. The study finds data loss across the sensor locations (ranging from minor to extreme), in more severe cases due to units requiring repair or failing after loss of power (from solar panels) during winter. Other units experienced environmental damage and/or issues with calibration or battery power which took them out of service. Internet of Things (IoT; a network of devices that can connect and exchange data with other devices and systems over the internet) reliability was highlighted as an issue that could be improved with more robust systems; however, despite data gaps, the number of sensors meant highly localised variation in PM<sub>2.5</sub> concentrations could be identified. The IoT results compared favourably with modelled air quality, offering an alternative and close to real-time representation of PM<sub>2.5</sub> concentrations. This research demonstrates the applicability of IoT PM<sub>2.5</sub> sensor data both for highly localised analysis of air quality and for delineating wider spatial patterns, identifying variability more readily than data from a limited number of official monitoring points.
IoT sensors can be used to identify ‘hotspots’ for air pollution and the increased spatial resolution can support identifying and addressing sources/problem areas. Where longer term and high-reliability data are required, the IoT sensors could be replaced by more robust systems. The IoT sensors may contribute to assessing the impact of Clean Air Zones (CAZ), allowing detailed analysis along major routes that could further contribute to traffic management in the interests of reducing air pollution. Increasingly, meteorological forecast providers are aiming to produce nowcasts: the shortest-range forecasts, covering the coming minutes to hours, which have the highest reliability owing to their short lead times.</p><p>Precipitation forecasting of all kinds can be particularly difficult, due to naturally high spatial variation, and there are various data quality issues with all the commonly used data sources (including ground-based gauges, weather radar and satellite). In the paper by Pasierb et al. (<span>2024</span>), the researchers consider the applicability of commercial microwave link (CML) attenuation for the delineation of precipitation as an opportunistic data source that could improve the spatial resolution of precipitation data. For the uninitiated, the paper describes how CML data can be processed to obtain precipitation values, extensively referencing prior research, making it an informative piece for those wanting to know more of the principles. CMLs are used in mobile phone networks, operating at multiple frequencies; precipitation attenuates the electromagnetic waves propagating between cell towers. The degree of attenuation can be converted to a precipitation rate, offering a data set of potentially high temporal and spatial resolution, particularly in urban areas where towers are concentrated.
For data from a summer month of 2022, which saw several precipitation events including intense convective rainfall, the researchers compare two methods of CML processing for calculating precipitation. This is followed by testing a series of approaches for assigning precipitation values to specific points along the link and validating the resulting precipitation fields against daily rain gauge observations and blended data derived from rain gauges, radar and satellite. The researchers state that the data they have access to are limited, which prevents them from drawing generalised conclusions regarding CML use, but, in their analysis, they find large errors where the length of the CML is <1 km, which is consistent with previous research. They indicate this is likely due to wet antenna attenuation (WAA), which occurs when the antenna is wet from rain and results in overestimation, particularly when rainfall is light. When comparing CML-derived precipitation with data from a rain gauge close to a tower, the results show that the precipitation estimation exhibits both positive and negative errors. The study concludes that whilst CML-derived precipitation is less reliable than that from radar or rain gauges, it is more accurate than that from satellites. The researchers recommend continuing research on larger datasets covering a wider area to further assess the methods for rainfall calculation from attenuation, allowing different types of precipitation to be considered. Further investigation of WAA is suggested, particularly for short CMLs; this would be especially beneficial for urban areas, where improved spatial rainfall estimation is likely to have the most value in providing high spatial and temporal resolution data for rainfall-runoff modelling to mitigate flash flooding.</p><p>The article by de Bruijn et al. (<span>2023</span>) assesses the quality of a wind observation dataset derived from information collected along hot-air balloon (HAB) tracks.
The authors then consider whether such observations can detect and help solve model deficiencies; they then carry out a proof-of-concept single-observation data assimilation experiment to investigate whether a HAB wind observation can help improve an NWP model in a rather complex wind case study. The authors show that HAB flights can provide valuable high-resolution wind observations, particularly in the atmospheric boundary layer, which is not well observed. The observations are shown to be of good quality, and they only differ slightly from high-quality wind mast observations. To assess the observation error standard deviations, the HAB observations were compared with the background state of the HARMONIE_AROME model and the resulting observation error standard deviations were found to be in the same range as those for radiosonde and aircraft observations; again, this suggests that the HAB observations are of comparable quality to other observations typically used for atmospheric data assimilation. Using a single-observation data assimilation experiment, the authors showed that it is feasible to assimilate HAB wind observations and that the observations have the potential to push the NWP model in the right direction even in complex baroclinic conditions. The authors conclude that crowdsourced HAB observations would be a valuable addition to the existing observation network, though further work is required to extend the size of the dataset and expand the proof-of-concept assimilation experiments presented in this study.</p><p>The article by Alerskans et al. (<span>2022</span>) evaluates the performance of a new machine learning method for post-processing deterministic forecasts of 2-m temperature. The authors use a method based on the Transformer model to ingest observational data from a network of private weather stations alongside forecasts from the Global Forecast System NWP model to produce site-specific 48-h 2-m temperature forecasts.
The performance of the method is compared with the performance of the raw GFS forecast and two standard post-processing models: a linear regression and a neural network. The results in the article show that, when using test data from the same period as the training data, the use of the Transformer model, which provides a more complex non-linear post-processing, is beneficial as its forecasts outperform the raw forecast and the two standard post-processing models, particularly at short lead times. The method is then tested with surface synoptic observations, rather than those from private weather stations, for a period different from that of the training data. In this case, all three post-processing methods outperform the raw forecast model in spring and summer, with the Transformer model showing the largest improvement. However, in winter months the improvement is restricted to short lead times only. The performance of the Transformer model was found to have a dependence on temperature, with poorer performance at colder temperatures. It was highlighted that additional training data containing observations in a colder range may benefit the Transformer model. Overall, the authors suggest that the Transformer model is a promising method to provide post-processed site-specific 48-h 2-m temperature forecasts.</p><p>The papers presented in this special issue highlight the potential of unconventional observations for meteorological applications. The applications for such data are wide-ranging and, in particular, the papers suggest that such observations can provide spatially dense and temporally frequent observations, covering currently unobserved areas, for data assimilation, NWP, flood forecasting and forecast validation and verification. Furthermore, they can provide valuable information on urban heat islands and air pollution.
However, as acknowledged in all the papers, unconventional observations face challenges in terms of data quality and consistency in reporting, and to date, this has limited their uptake in operational centres. There is often an expectation for unconventional observations to achieve the same data quality standards as those obtained by conventional means, but the ‘reasonableness’ of that position can be questioned, particularly if it is preventing the uptake of these valuable data sources. Statistical data quality algorithms can be devised and have been effectively applied to a variety of unconventional observations; however, they run the risk of removing the extremes, which are precisely the data that would be most enlightening. But, without the quality assurance procedures implemented during conventional observation collection, and with the uncontrollable risk of temporally inconsistent data provision from unconventional sources, perhaps the answer is to develop more of an appetite for uncertainty and identify ways in which that uncertainty can be communicated and accommodated. Once the reliability and quality of the data can be trusted, it is likely that the use of these observations in the applications suggested in these papers will become more commonplace. Alternatively, a more pragmatic approach can be adopted, whereby researchers find the most appropriate application for the data, for example, a big data approach to cross-reference fast-moving high-impact weather systems in nowcasting, or casting a wide net before homing in on target areas for official monitoring networks.</p><p>Research like that showcased in this special issue identifies progress to date on unconventional observations; the next logical step is further research into applying these alternative data sets to a range of scenarios and demands to determine the most appropriate applications.
For now, it is encouraging to see research exploring these unconventional observations and the range of possible data sources and applications. It is hoped that by persevering in addressing the challenges presented by unconventional observations the opportunities they offer can be realised more widely.</p><p><b>Joanne Waller:</b> Writing – original draft; writing – review and editing. <b>Tess O' Hara:</b> Writing – original draft; writing – review and editing.</p>","PeriodicalId":49825,"journal":{"name":"Meteorological Applications","volume":"32 2","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2025-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.70034","citationCount":"0"}
The observations are shown to be of good quality, and they only differ slightly from high-quality wind mast observations. To assess the observation error standard deviations the HAB observations were compared with the background state of the HARMONIE_AROME model and the resulting observation error standard deviations were found to be in the same range as those for radiosonde and aircraft observations; again, this suggests that the HAB observations are of comparative quality to other observations typically used for atmospheric data assimilation. Using a single-observation data assimilation experiment, the authors showed that it is feasible to assimilate HAB wind observations and that the observations have the potential to push the NWP model in the right direction even in complex baroclinic conditions. The authors conclude that crowdsourced HAB observations would be a valuable addition to the existing observation network, though further work is required to extend the size of the dataset and expand the proof-of-concept assimilation experiments presented in this study.</p><p>The article of Alerskans et al. (<span>2022</span>) evaluates the performance of a new Machine Learning method for post-processing deterministic forecasts of 2 m temperature. The authors use a method based on the Transformer model to ingest observational data from a network of private weather stations alongside forecasts from the Global Forecast System NWP model to produce site-specific 48-h 2-m temperature forecasts. The performance of the method is compared with the performance of the raw GFS forecast and two standard post-processing models: a linear regression and a neural network. 
The results in the article show that, when using test data in the same period as the training data, the use of the Transformer model, which provides a more complex non-linear post-processing, is beneficial as its forecasts outperform the raw forecast and two standard post-processing models, particularly at short lead times. The method is then tested with surface synoptic observations, rather than those from private weather stations, for a different period than the training data. In this case, all three post-processing methods outperform the raw forecast model in spring and summer, with the Transformer model showing the largest improvement. However, in winter months the improvement is restricted to short lead times only. The performance of the Transformer model was found to have a dependence on temperature, with poorer performance at colder temperatures. It was highlighted that additional training data containing observations in a colder range may benefit the Transformer model. Overall, the authors suggest that the Transformer model is a promising method to provide post-processed forecasts of site-specific 48-h 2-m temperature forecasts.</p><p>The papers presented in this special issue highlight the potential for unconventional observations for meteorological applications. The applications for such data are wide-ranging and, in particular, the papers suggest that such observations can provide spatially dense and temporally frequent observations, covering currently unobserved areas, for data assimilation, NWP, flood forecasting and forecast validation and verification. Furthermore, they can provide valuable information on urban heat islands and air pollution. However, as acknowledged in all the papers, unconventional observations face challenges in terms of data quality and consistency in reporting, and to date, this has limited their uptake in operational centres. 
There is often an expectation for unconventional observations to achieve the same data quality standards as those obtained by conventional means, but the ‘reasonableness’ of that position can be questioned, particularly if it is preventing the uptake of these valuable data sources. Statistical data quality algorithms can be devised and have been effectively applied to a variety of unconventional observations; however, they run the risk of removing the extremes which are the precise data that would be most enlightening. But, without the quality assurance procedures implemented during conventional observation collection, and with the uncontrollable risk of temporally inconsistent data provision from unconventional sources, perhaps the answer is to develop more of an appetite for uncertainty and identify ways in which that uncertainty can be communicated and accommodated. Once the reliability and quality of the data can be trusted, it is likely that the use of these observations in the applications suggested in these papers will become more commonplace. Alternatively, a more pragmatic approach can be adopted, whereby researchers find the most appropriate application for the data, for example, a big data approach to cross-reference fast moving high-impact weather systems in nowcasting, or for casting a wide net before honing in on-target areas for official monitoring networks.</p><p>Research like that showcased in this special issue identifies progress to date on unconventional observations, the next logical step is further research into applying these alternative data sets to a range of scenarios and demands to determine the most appropriate applications. For now, it is encouraging to see research exploring these unconventional observations and the range of possible data sources and applications. 
It is hoped that by persevering in addressing the challenges presented by unconventional observations the opportunities they offer can be realised more widely.</p><p><b>Joanne Waller:</b> Writing – original draft; writing – review and editing. <b>Tess O' Hara:</b> Writing – original draft; writing – review and editing.</p>\",\"PeriodicalId\":49825,\"journal\":{\"name\":\"Meteorological Applications\",\"volume\":\"32 2\",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2025-04-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/met.70034\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Meteorological Applications\",\"FirstCategoryId\":\"89\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/met.70034\",\"RegionNum\":4,\"RegionCategory\":\"地球科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"METEOROLOGY & ATMOSPHERIC SCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Meteorological Applications","FirstCategoryId":"89","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/met.70034","RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"METEOROLOGY & ATMOSPHERIC SCIENCES","Score":null,"Total":0}
Unconventional observations for meteorological applications
Conventional observations, such as those from satellites, radiosondes, weather balloons, ships, aircraft, traditional surface weather stations and rain gauges are commonly used in meteorological applications. Unconventional observations are becoming an increasingly valuable source of information for meteorological applications, often providing information at much higher spatial and temporal resolution than conventional observing networks and typically at a fraction of the cost (e.g., Nipen et al., 2020; O'Hara et al., 2023; Waller, 2020). They are also able to provide information more representative of local situations, such as individual urban streets, where conventional observing sites are not situated (e.g., Brousse et al., 2022; Feichtinger et al., 2020). As a result, the usefulness of these observations is being investigated for a variety of different meteorological uses (Hahn et al., 2022; Muller et al., 2015). There are also coordinated efforts to improve data access, processing and application, for example, the EU OpenSense project on the opportunistic sensing of rainfall (https://opensenseaction.eu/). However, a key issue identified with unconventional observations is the need for a good understanding of their quality, and the development of appropriate quality control methods (e.g., Beele et al., 2022; Fenner et al., 2021; Napoly et al., 2018) to ensure their usefulness in various meteorological applications.
Unconventional observations for meteorological applications can be obtained in a variety of ways. Data may be obtained opportunistically, with meteorological information derived from non-meteorological sensors, or via the deployment of a network of low-cost sensors (e.g., Chapman et al., 2015; Vetra-Carvalho et al., 2020). Alternatively, data can be ‘crowdsourced’: obtained from a group of people either with or without their explicit involvement in the data collection process, for example, via private automatic weather stations or a smartphone ‘app’, or collected via citizen-science projects, where information is obtained from a group of people who are invited to participate in the data collection process (Hintz, Vedel, et al., 2019; Kirk et al., 2021). Such citizen science projects can be particularly valuable as they permit interaction between experts and the public, providing educational opportunities and experiential learning to aid in the appreciation of risks, for example, extreme weather impacts (Batchelder et al., 2023; Paul et al., 2018).
Within Numerical Weather Prediction (NWP), unconventional observations have been used to supplement conventional data for nowcasting, data assimilation, forecast post-processing and forecast verification (Hintz et al., 2019). For example, private weather stations have been used to aid the forecasting of convection initiation (Gasperoni et al., 2018). They have also been used for severe weather detection and to provide early warnings in order to help reduce the impacts of events such as tornadoes, hailstorms or flash floods; furthermore, they can be used in an educational context to increase public understanding of high-impact weather events and improve public preparedness and response to such hazardous situations (WMO, 2024). Urban meteorology, both in terms of forecasting and climate services, has also benefited from the high-density information provided by unconventional observations (e.g., Meier et al., 2017; Mitchell & Fry, 2024).
This special issue brings together articles that discuss some of the recent progress in using such unconventional observations, particularly in terms of the development of quality control (QC) procedures and potential use in NWP. We now introduce each of the articles, beginning with those that focus on the development of quality control procedures for various types of unconventional observations, and moving through various stages of cross-validation and proof-of-concept applications.
The article by Coney et al. (2022) assesses the reliability of Netatmo brand weather station external temperature sensors under laboratory conditions and ‘in the wild’, using the UK Met Office weather observation network for reference. Controlled testing of Netatmo sensors allows accuracy assessment for the absolute observation and lag time, showing the sensors meet the manufacturer's specification and are reasonably accurate (±0.3°C) as compared with laboratory grade equipment (±0.1°C); however, a lag of several minutes in registering temperature change was recorded. Following the testing, temperature observations from Netatmo sensors are subject to three quality control algorithms before being gridded and compared with observations from the Met Office ground-based monitoring network. The inter-comparison between gridded Met Office data and the Netatmo observations showed a reduction, but not an elimination, of erroneous temperatures and positive bias following quality control. The researchers conclude that the Netatmo temperature sensor casing may be contributing to lag time and that anomalies in temperature are most likely due to incorrect placement of sensors in direct sunlight, or indoors. They recommend that more work is done to develop an appropriate QC methodology, and that Netatmo station owners are provided with more support to ensure their sensors are appropriately located. If this can be achieved, it is likely that, due to the popularity of the units, especially in cities, Netatmo temperature sensor data could aid investigations into urban heat islands and observations could ultimately be used for initiating and/or validating weather forecasts.
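Quality control of crowdsourced temperature data of this kind commonly includes a spatial-consistency (‘buddy’) check against neighbouring stations. The sketch below is illustrative only and is not one of the three algorithms used by Coney et al.; the search radius, deviation threshold, minimum buddy count and flat-Earth distance approximation are all assumptions:

```python
import numpy as np

def buddy_check(lats, lons, temps, radius_km=10.0, max_dev=3.0, min_buddies=3):
    """Flag observations that deviate strongly from nearby stations.

    Returns a boolean array; True means the observation passes QC.
    Thresholds and the small-domain distance approximation are
    illustrative assumptions, not those of any published scheme.
    """
    lats, lons, temps = map(np.asarray, (lats, lons, temps))
    ok = np.ones(temps.size, dtype=bool)
    for i in range(temps.size):
        # Approximate separation on a small domain (flat-Earth, km).
        dlat = (lats - lats[i]) * 111.0
        dlon = (lons - lons[i]) * 111.0 * np.cos(np.radians(lats[i]))
        dist = np.hypot(dlat, dlon)
        buddies = (dist < radius_km) & (dist > 0)
        if buddies.sum() < min_buddies:
            continue  # too few neighbours to judge; leave unflagged
        spread = np.std(temps[buddies])
        # Compare against the neighbourhood mean, with a floor on the
        # spread so calm, uniform conditions do not over-flag.
        if abs(temps[i] - np.mean(temps[buddies])) > max_dev * max(spread, 0.5):
            ok[i] = False
    return ok
```

A station reporting, say, 30°C while its near neighbours report around 15°C (a sensor in direct sunlight or indoors, as the paper suggests) would be flagged by such a check.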
The article by Siems-Anderson (2024) reviews the use of vehicle-based observations in weather prediction and decision support. The paper discusses the historical use of observations from vehicles, which has typically been confined to limited sets of observations from research and field campaigns, and describes how the emergence of connected and autonomous vehicles has the potential to vastly increase the number of vehicle-based observations available for meteorological applications. Like any opportunistic observation dataset, vehicle-based observations are subject to a number of data quality concerns and challenges; the paper states that data latency and the availability of metadata can inhibit the basic use of such observations. Furthermore, traditional quality control is not always applicable to observations from moving platforms, so new methods must be adopted, and a number of these are described. Once the data have been quality controlled, a wide variety of weather applications can take advantage of this type of observation. Typically, they have been used to provide information on winter weather, for example, to improve winter road maintenance or to provide guidance for safer travel in the presence of hazardous weather. More recent research has focused on the information these observations can provide on urban heat islands or for dust detection, and has considered whether they are suitable for use in data assimilation systems. Finally, the article discusses the public policy considerations of this unique source of weather data. One of the key concerns with vehicle-based observations is privacy; this is a major issue since many vehicles are privately owned, and metadata associated with the observations can give away information such as a person's location, or the speed at which they are travelling, and therefore suitable policy should be developed to deal with this. 
Policy may also need to be developed to handle and store the very large quantity of observations that could be collected by connected and autonomous vehicles, and cross-agency/sector collaboration will be invaluable for this. The author concludes that, whilst they have potential, vehicle-based observations remain under-utilised; to exploit these observations it will be necessary to establish quality standards for this dataset, balance quality control with data privacy issues and determine how best to make the observations available to the meteorological community to increase their use.
Furthermore, it is shown that these uncertainties are likely to be weather- and vehicle-dependent.
The authors conclude that despite the challenging nature of this data, with appropriate quality control it could be a useful source of spatially dense and temporally frequent near-surface temperature observations for NWP.
Particulate matter (everything in the air that is not a gas) less than 2.5 micrometres in diameter (PM2.5) has been in the news due to emissions from wood-burning stoves, particularly in urban areas, but vehicles can be another source of this contaminant (from tyres and brakes), which can cause serious health impacts. The research presented in the paper by Cowell et al. (2024) uses data from two low-cost sensor types at a total of 28 locations, from 2 sensors in the DEFRA Automatic Urban Rural Network (AURN) and from the Birmingham Air Quality Supersite (BAQS), collected via the Birmingham Urban Observatory cloud platform. A QC methodology is devised, comprising a four-step process, applied to the available data, that assesses completeness, meteorological operating ranges (for temperature and relative humidity), faulty zeros and outliers. The resulting data are then compared with a high-resolution (10 m × 10 m) air quality map derived from a previous year's data, limiting the ability to perform absolute comparisons but providing an opportunity to assess the spatial pattern of PM2.5 concentrations. The study finds data loss across the sensor locations (ranging from minor to extreme), in the more severe cases due to units requiring repair or failing due to loss of power (from solar panels) during winter. Other units experienced environmental damage and/or issues with calibration or battery power which took them out of service. Internet of Things (IoT; a network of devices that can connect and exchange data with other devices and systems over the internet) reliability was highlighted as an issue that could be improved with more robust systems; however, despite data gaps, the number of sensors meant highly localised variation in PM2.5 concentrations could be identified. The IoT results compared favourably with modelled air quality, offering an alternative and close to real-time representation of PM2.5 concentrations. 
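A four-step QC chain of the kind described can be sketched as follows; the ordering mirrors the paper's steps, but every threshold and implementation detail below is an illustrative assumption, not Cowell et al.'s actual methodology:

```python
import numpy as np

def qc_pm25(pm25, temp, rh, completeness_min=0.75,
            temp_range=(-20.0, 50.0), rh_max=95.0,
            zero_run=6, outlier_z=4.0):
    """Sketch of a four-step QC chain for a PM2.5 time series:
    1. completeness, 2. meteorological operating range,
    3. faulty zeros, 4. outliers.  All thresholds are assumptions.
    Returns the series with rejected values set to NaN, or None if
    the record is too incomplete to use.
    """
    out = np.asarray(pm25, dtype=float).copy()
    temp = np.asarray(temp, dtype=float)
    rh = np.asarray(rh, dtype=float)
    # 1. Completeness: is enough of the record present at all?
    if np.isfinite(out).mean() < completeness_min:
        return None
    # 2. Operating range: mask readings taken outside sensor limits.
    out[(temp < temp_range[0]) | (temp > temp_range[1]) | (rh > rh_max)] = np.nan
    # 3. Faulty zeros: a long run of exact zeros suggests a stuck sensor.
    run = 0
    for i, v in enumerate(out):
        run = run + 1 if v == 0.0 else 0
        if run >= zero_run:
            out[i - zero_run + 1:i + 1] = np.nan
    # 4. Outliers: robust z-score against the median of what survives.
    idx = np.where(np.isfinite(out))[0]
    med = np.median(out[idx])
    mad = np.median(np.abs(out[idx] - med))
    scale = 1.4826 * mad if mad > 0 else 1.0
    out[idx[np.abs(out[idx] - med) / scale > outlier_z]] = np.nan
    return out
```

Returning a masked series rather than deleting points keeps the time axis intact, which matters when the cleaned data are later gridded or compared against a map.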
This research demonstrates the applicability of IoT PM2.5 sensor data both for highly localised analysis of air quality and for delineating wider spatial patterns, identifying variability more readily than data from a limited number of official monitoring points. IoT sensors can be used to identify ‘hotspots’ for air pollution, and the increased spatial resolution can support identifying and addressing sources and problem areas. Where longer-term and high-reliability data are required, the IoT sensors could be replaced by more robust systems. The IoT sensors may also contribute to assessing the impact of Clean Air Zones (CAZ), allowing detailed analysis along major routes that could further contribute to traffic management in the interests of reducing air pollution. Increasingly, meteorological forecast providers are aiming to produce nowcasts: the shortest-range forecasts, covering the coming minutes to hours, which have the highest reliability because of their proximity in time.
Precipitation forecasting of all kinds can be particularly difficult, due to naturally high spatial variation, and there are various data quality issues with all the commonly used data sources (including ground-based gauges, weather radar and satellite). In the paper by Pasierb et al. (2024), the researchers consider the applicability of commercial microwave link (CML) attenuation for the delineation of precipitation as an opportunistic data source that could improve the spatial resolution of precipitation data. For the uninitiated, the paper describes how CML data can be processed to obtain precipitation values, extensively referencing prior research, making it an informative piece for those wanting to know more of the principles. CMLs form part of the mobile phone network and operate at multiple frequencies; precipitation along a link attenuates the electromagnetic waves propagating between cell towers. The degree of attenuation can be converted to a precipitation rate, offering a data set of potentially high temporal and spatial resolution, particularly in urban areas where towers are concentrated. For data from a one-month period in summer 2022, which saw several precipitation events including intense convective rainfall, the researchers compare two methods of CML processing for calculating precipitation. This is followed by testing a series of approaches for assigning precipitation values to specific points along the link and validating the resulting precipitation fields against daily rain gauge observations and blended data derived from rain gauges, radar and satellite. The researchers state that the data they have access to are limited, which prevents them from drawing generalised conclusions regarding CML use, but, in their analysis, they find large errors where the length of the CML is <1 km, which is consistent with previous research. 
They attribute this largely to wet antenna attenuation (WAA), which occurs when the antenna is wet from rain and leads to overestimation, particularly when rainfall is light. When comparing CML-derived precipitation with data from a rain gauge close to a tower, the results show that the precipitation estimation exhibits both positive and negative errors. The study concludes that whilst CML-derived precipitation is less reliable than that from radar or rain gauges, it is more accurate than that from satellites. The researchers recommend continuing research on larger datasets covering a wider area to further assess the methods for rainfall calculation from attenuation, allowing different types of precipitation to be considered. Further investigation of WAA is suggested, especially for short CMLs; this would be particularly beneficial for urban areas, where improved spatial rainfall estimation is likely to have the most value in providing high spatial and temporal resolution data for rainfall-runoff modelling to mitigate flash flooding.
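The attenuation-to-rain-rate conversion underlying such retrievals commonly rests on the power-law relation k = aR^b between rain-induced specific attenuation k (dB/km) and rain rate R (mm/h), with coefficients that depend on link frequency and polarisation (tabulated in ITU-R recommendations). A minimal sketch, with placeholder coefficients and a crude fixed WAA correction rather than anything used by Pasierb et al.:

```python
def cml_rain_rate(total_loss_db, baseline_db, length_km,
                  a=0.335, b=1.05, waa_db=1.0):
    """Estimate path-averaged rain rate (mm/h) from CML attenuation.

    Inverts the power law k = a * R**b, where k is rain-induced
    specific attenuation in dB/km.  The coefficients a, b and the
    fixed wet-antenna correction waa_db are illustrative placeholders;
    real values depend on link frequency and polarisation.
    """
    # Rain-induced attenuation: excess loss over the dry baseline,
    # minus the wet-antenna contribution, floored at zero.
    rain_atten_db = max(total_loss_db - baseline_db - waa_db, 0.0)
    k = rain_atten_db / length_km       # specific attenuation, dB/km
    return (k / a) ** (1.0 / b)         # invert k = a * R**b
```

The division by link length is one reason very short links are problematic: for a sub-kilometre link, a fixed WAA error of a decibel or so translates into a large error in k, and hence in R.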
The article by de Bruijn et al. (2023) assesses the quality of a wind observation dataset derived from information collected along hot-air balloon (HAB) tracks. The authors consider whether such observations can detect and help solve model deficiencies, and carry out a proof-of-concept single-observation data assimilation experiment to investigate whether a HAB wind observation can help improve an NWP model in a rather complex wind case study. The authors show that HAB flights can provide valuable high-resolution wind observations, particularly in the atmospheric boundary layer, which is not well observed. The observations are shown to be of good quality, differing only slightly from high-quality wind mast observations. To assess the observation error standard deviations, the HAB observations were compared with the background state of the HARMONIE-AROME model, and the resulting observation error standard deviations were found to be in the same range as those for radiosonde and aircraft observations; again, this suggests that the HAB observations are of comparable quality to other observations typically used for atmospheric data assimilation. Using a single-observation data assimilation experiment, the authors showed that it is feasible to assimilate HAB wind observations and that the observations have the potential to push the NWP model in the right direction even in complex baroclinic conditions. The authors conclude that crowdsourced HAB observations would be a valuable addition to the existing observation network, though further work is required to extend the size of the dataset and expand the proof-of-concept assimilation experiments presented in this study.
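The essence of a single-observation assimilation experiment can be conveyed by the scalar optimal-interpolation (Kalman) update, in which the weight given to the observation depends on the background and observation error variances. This toy form is for intuition only and is not de Bruijn et al.'s assimilation system, which spreads the increment in three dimensions via background-error covariances:

```python
def scalar_analysis(background, obs, var_b, var_o):
    """Scalar optimal-interpolation / Kalman analysis update.

    background, obs : background (model) value and observed value
    var_b, var_o    : background and observation error variances
    Returns the analysis value and its (reduced) error variance.
    """
    gain = var_b / (var_b + var_o)            # Kalman gain, in [0, 1]
    analysis = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b              # analysis error variance
    return analysis, var_a
```

The finding that HAB observation error standard deviations are comparable to those of radiosondes implies, in this framework, that a HAB wind observation earns a similar gain, and hence a similar pull on the analysis, to a conventional observation.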
The article by Alerskans et al. (2022) evaluates the performance of a new machine learning method for post-processing deterministic forecasts of 2-m temperature. The authors use a method based on the Transformer model to ingest observational data from a network of private weather stations alongside forecasts from the Global Forecast System (GFS) NWP model to produce site-specific 48-h 2-m temperature forecasts. The performance of the method is compared with that of the raw GFS forecast and two standard post-processing models: a linear regression and a neural network. The results in the article show that, when using test data from the same period as the training data, the Transformer model, which provides a more complex non-linear post-processing, is beneficial, as its forecasts outperform the raw forecast and the two standard post-processing models, particularly at short lead times. The method is then tested with surface synoptic observations, rather than those from private weather stations, for a different period from the training data. In this case, all three post-processing methods outperform the raw forecast model in spring and summer, with the Transformer model showing the largest improvement; however, in winter months the improvement is restricted to short lead times only. The performance of the Transformer model was found to depend on temperature, with poorer performance at colder temperatures, and it was highlighted that additional training data containing observations in a colder range may benefit the model. Overall, the authors suggest that the Transformer model is a promising method for providing post-processed site-specific 48-h 2-m temperature forecasts.
The papers presented in this special issue highlight the potential of unconventional observations for meteorological applications. The applications for such data are wide-ranging and, in particular, the papers suggest that such observations can provide spatially dense and temporally frequent observations, covering currently unobserved areas, for data assimilation, NWP, flood forecasting and forecast validation and verification. Furthermore, they can provide valuable information on urban heat islands and air pollution. However, as acknowledged in all the papers, unconventional observations face challenges in terms of data quality and consistency in reporting, and to date, this has limited their uptake in operational centres. There is often an expectation for unconventional observations to achieve the same data quality standards as those obtained by conventional means, but the ‘reasonableness’ of that position can be questioned, particularly if it is preventing the uptake of these valuable data sources. Statistical data quality algorithms can be devised and have been effectively applied to a variety of unconventional observations; however, they run the risk of removing the extremes, which are precisely the data that would be most enlightening. But, without the quality assurance procedures implemented during conventional observation collection, and with the uncontrollable risk of temporally inconsistent data provision from unconventional sources, perhaps the answer is to develop more of an appetite for uncertainty and identify ways in which that uncertainty can be communicated and accommodated. Once the reliability and quality of the data can be trusted, it is likely that the use of these observations in the applications suggested in these papers will become more commonplace. 
Alternatively, a more pragmatic approach can be adopted, whereby researchers find the most appropriate application for the data, for example, a big data approach to cross-referencing fast-moving high-impact weather systems in nowcasting, or casting a wide net before homing in on target areas for official monitoring networks.
Research like that showcased in this special issue identifies progress to date on unconventional observations; the next logical step is further research into applying these alternative data sets to a range of scenarios and demands to determine the most appropriate applications. For now, it is encouraging to see research exploring these unconventional observations and the range of possible data sources and applications. It is hoped that by persevering in addressing the challenges presented by unconventional observations, the opportunities they offer can be realised more widely.
Joanne Waller: Writing – original draft; writing – review and editing. Tess O' Hara: Writing – original draft; writing – review and editing.
Journal introduction:
The aim of Meteorological Applications is to serve the needs of applied meteorologists, forecasters and users of meteorological services by publishing papers on all aspects of meteorological science, including:
applications of meteorological, climatological, analytical and forecasting data, and their socio-economic benefits;
forecasting, warning and service delivery techniques and methods;
weather hazards, their analysis and prediction;
performance, verification and value of numerical models and forecasting services;
practical applications of ocean and climate models;
education and training.