Journal articles on the topic 'Clouds Stochastic processes'

Consult the top 50 journal articles for your research on the topic 'Clouds Stochastic processes.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Mueller, Eli A., and Samuel N. Stechmann. "Shallow-cloud impact on climate and uncertainty: A simple stochastic model." Mathematics of Climate and Weather Forecasting 6, no. 1 (March 20, 2020): 16–37. http://dx.doi.org/10.1515/mcwf-2020-0002.

Abstract:
Shallow clouds are a major source of uncertainty in climate predictions. Several different sources of the uncertainty are possible—e.g., from different models of shallow cloud behavior, which could produce differing predictions and ensemble spread within an ensemble of models, or from inherent, natural variability of shallow clouds. Here, the latter (inherent variability) is investigated, using a simple model of radiative statistical equilibrium, with oceanic and atmospheric boundary layer temperatures, T_o and T_a, and with moisture q and basic cloud processes. Stochastic variability is used to generate a statistical equilibrium with climate variability. The results show that the intrinsic variability of the climate is enhanced due to the presence of shallow clouds. In particular, the on-and-off switching of cloud formation and decay is a source of additional climate variability and uncertainty, beyond the variability of a cloud-free climate. Furthermore, a sharp transition in the mean climate occurs as environmental parameters are changed, and the sharp transition in the mean is also accompanied by a substantial enhancement of climate sensitivity and uncertainty. Two viewpoints of this behavior are described, based on bifurcations and phase transitions/statistical physics. The sharp regime transitions are associated with changes in several parameters, including cloud albedo and longwave absorptivity/carbon dioxide concentration, and the climate state transitions between a partially cloudy state and a state of full cloud cover like closed-cell stratocumulus clouds. Ideas of statistical physics can provide a conceptual perspective to link the climate state transitions, increased climate uncertainty, and other related behavior.
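The flavor of such a model — a noisy relaxation toward a radiative equilibrium whose value jumps when clouds switch on or off — can be sketched as follows. The equations, thresholds, and parameter values here are illustrative toy choices, not those of the paper:

```python
import numpy as np

def simulate_climate(n_steps, dt=0.01, noise=2.0, seed=0):
    """Toy stochastic energy balance: temperature T relaxes toward a
    radiative equilibrium that depends on whether clouds are 'on'
    (higher albedo), with clouds forming above a moisture threshold.
    All numerical values are hypothetical."""
    rng = np.random.default_rng(seed)
    T, q = 290.0, 0.5
    temps = np.empty(n_steps)
    for i in range(n_steps):
        cloudy = q > 0.6                   # crude cloud-formation switch
        T_eq = 285.0 if cloudy else 295.0  # cloud albedo cools the equilibrium
        T += (T_eq - T) * dt + noise * np.sqrt(dt) * rng.standard_normal()
        q += (0.6 - q) * dt + 0.2 * np.sqrt(dt) * rng.standard_normal()
        temps[i] = T
    return temps

temps = simulate_climate(50000)
```

Because the moisture noise keeps flipping the system between the two equilibria, the temperature variance exceeds that of a cloud-free (single-equilibrium) run, which is the qualitative effect the abstract describes.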
2

Tacchella, Sandro, John C. Forbes, and Neven Caplar. "Stochastic modelling of star-formation histories II: star-formation variability from molecular clouds and gas inflow." Monthly Notices of the Royal Astronomical Society 497, no. 1 (June 26, 2020): 698–725. http://dx.doi.org/10.1093/mnras/staa1838.

Abstract:
A key uncertainty in galaxy evolution is the physics regulating star formation, ranging from small-scale processes related to the life-cycle of molecular clouds within galaxies to large-scale processes such as gas accretion on to galaxies. We study the imprint of such processes on the time-variability of star formation with an analytical approach tracking the gas mass of galaxies (‘regulator model’). Specifically, we quantify the strength of the fluctuation in the star-formation rate (SFR) on different time-scales, i.e. the power spectral density (PSD) of the star-formation history, and connect it to gas inflow and the life-cycle of molecular clouds. We show that in the general case the PSD of the SFR has three breaks, corresponding to the correlation time of the inflow rate, the equilibrium time-scale of the gas reservoir of the galaxy, and the average lifetime of individual molecular clouds. On long and intermediate time-scales (relative to the dynamical time-scale of the galaxy), the PSD is typically set by the variability of the inflow rate and the interplay between outflows and gas depletion. On short time-scales, the PSD shows an additional component related to the life-cycle of molecular clouds, which can be described by a damped random walk with a power-law slope of β ≈ 2 at high frequencies with a break near the average cloud lifetime. We discuss star-formation ‘burstiness’ in a wide range of galaxy regimes, study the evolution of galaxies about the main sequence ridgeline, and explore the applicability of our method for understanding the star-formation process on cloud-scale from galaxy-integrated measurements.
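A damped random walk of the kind invoked above is an Ornstein–Uhlenbeck process: its power spectrum is flat below the break frequency set by the damping time and falls off as f^−2 (β ≈ 2) above it. A minimal simulation sketch, with entirely hypothetical parameter values (e.g. a 10-unit "cloud lifetime" damping time):

```python
import numpy as np

def simulate_ou(n_steps, dt, tau, sigma, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck (damped random
    walk) process dx = -(x/tau) dt + sigma dW.  Its PSD is Lorentzian:
    flat below 1/tau and ~f^-2 above it."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = (x[i - 1] - (x[i - 1] / tau) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

# Hypothetical parameters: damping time tau = 10, time step 0.1
sfr_fluct = simulate_ou(n_steps=10000, dt=0.1, tau=10.0, sigma=0.3)

# Periodogram estimate of the PSD of the fluctuation history
freqs = np.fft.rfftfreq(len(sfr_fluct), d=0.1)
psd = np.abs(np.fft.rfft(sfr_fluct)) ** 2
```

Fitting a power law to `psd` well above the break frequency 1/(2πτ) recovers a slope near −2, the β ≈ 2 behaviour the abstract attributes to the molecular-cloud life-cycle.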
3

Feitzinger, J. V., E. Harfst, and J. Spicker. "Stochastic Star Formation and Magnetic Fields." Symposium - International Astronomical Union 140 (1990): 257–58. http://dx.doi.org/10.1017/s0074180900190175.

Abstract:
The model of self-propagating star formation uses local processes (200 pc cell size) in the interstellar medium to simulate the large-scale cooperative behaviour of spiral structure in galaxies. The dynamics of the model galaxies are taken into account via the mass distribution and the resulting rotation curve; flat rotation curves are used. The interstellar medium is treated as a multiphase medium with appropriate cooling times and density history. The phases are: molecular gas, cool HI gas, warm intercloud and HII gas, and hot coronal fountain gas. A detailed gas reshuffling between the star-forming cells in the plane and outside the galactic plane controls the cell content. Two processes working stochastically are incorporated: the building and the decay of molecular clouds and the star-forming events in the molecular clouds.
4

Shutts, Glenn, Thomas Allen, and Judith Berner. "Stochastic parametrization of multiscale processes using a dual-grid approach." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, no. 1875 (April 29, 2008): 2623–39. http://dx.doi.org/10.1098/rsta.2008.0035.

Abstract:
Some speculative proposals are made for extending current stochastic sub-gridscale parametrization methods using the techniques adopted from the field of computer graphics and flow visualization. The idea is to emulate sub-filter-scale physical process organization and time evolution on a fine grid and couple the implied coarse-grained tendencies with a forecast model. A two-way interaction is envisaged so that fine-grid physics (e.g. deep convective clouds) responds to forecast model fields. The fine-grid model may be as simple as a two-dimensional cellular automaton or as computationally demanding as a cloud-resolving model similar to the coupling strategy envisaged in ‘super-parametrization’. Computer codes used in computer games and visualization software illustrate the potential for cheap but realistic simulation where emphasis is placed on algorithmic stability and visual realism rather than pointwise accuracy in a predictive sense. In an ensemble prediction context, a computationally cheap technique would be essential and some possibilities are outlined. An idealized proof-of-concept simulation is described, which highlights technical problems such as the nature of the coupling.
5

Arakawa, Akio, and Chien-Ming Wu. "A Unified Representation of Deep Moist Convection in Numerical Modeling of the Atmosphere. Part I." Journal of the Atmospheric Sciences 70, no. 7 (July 1, 2013): 1977–92. http://dx.doi.org/10.1175/jas-d-12-0330.1.

Abstract:
A generalized framework for cumulus parameterization applicable to any horizontal resolution between those typically used in general circulation and cloud-resolving models is presented. It is pointed out that the key parameter in the generalization is σ, which is the fractional area covered by convective updrafts in the grid cell. Practically all conventional cumulus parameterizations assume σ ≪ 1, at least implicitly, using the gridpoint values of the thermodynamic variables to define the thermal structure of the cloud environment. The proposed framework, called “unified parameterization,” eliminates this assumption from the beginning, allowing a smooth transition to an explicit simulation of cloud-scale processes as the resolution increases. If clouds and the environment are horizontally homogeneous with a top-hat profile, as is widely assumed in the conventional parameterizations, it is shown that the σ dependence of the eddy transport is through a simple quadratic function. Together with a properly chosen closure, the unified parameterization determines σ for each realization of grid-scale processes. The parameterization can also provide a framework for including stochastic parameterization. The remaining issues include parameterization of the in-cloud eddy transport because of the inhomogeneous structure of clouds.
6

Palmer, T. N., and P. D. Williams. "Introduction. Stochastic physics and climate modelling." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, no. 1875 (April 29, 2008): 2419–25. http://dx.doi.org/10.1098/rsta.2008.0059.

Abstract:
Finite computing resources limit the spatial resolution of state-of-the-art global climate simulations to hundreds of kilometres. In neither the atmosphere nor the ocean are small-scale processes such as convection, clouds and ocean eddies properly represented. Climate simulations are known to depend, sometimes quite strongly, on the resulting bulk-formula representation of unresolved processes. Stochastic physics schemes within weather and climate models have the potential to represent the dynamical effects of unresolved scales in ways which conventional bulk-formula representations are incapable of so doing. The application of stochastic physics to climate modelling is a rapidly advancing, important and innovative topic. The latest research findings are gathered together in the Theme Issue for which this paper serves as the introduction.
7

Prat, O. P., and A. P. Barros. "Exploring the use of a column model for the characterization of microphysical processes in warm rain: results from a homogeneous rainshaft model." Advances in Geosciences 10 (April 26, 2007): 145–52. http://dx.doi.org/10.5194/adgeo-10-145-2007.

Abstract:
Abstract. A study of the evolution of raindrop spectra (raindrop size distribution, DSD) between cloud base and the ground surface was conducted using a column model of stochastic coalescence-breakup dynamics. Numerical results show that, under steady-state boundary conditions (i.e. constant rainfall rate and DSD at the top of the rainshaft), the equilibrium DSD is achieved only for high rain rates produced by midlevel or higher clouds and after long simulation times (~30 min or greater). Because these conditions are not typical of most rainfall, the results suggest that the theoretical equilibrium DSD might not be attainable for the duration of individual rain events, and thus DSD observations from field experiments should be analyzed conditional on the specific storm environment under which they were obtained.
8

Mohamadi, Sara, and David Lattanzi. "Life-Cycle Modeling of Structural Defects via Computational Geometry and Time-Series Forecasting." Sensors 19, no. 20 (October 21, 2019): 4571. http://dx.doi.org/10.3390/s19204571.

Abstract:
The evaluation of geometric defects is necessary in order to maintain the integrity of structures over time. These assessments are designed to detect damages of structures and ideally help inspectors to estimate the remaining life of structures. Current methodologies for monitoring structural systems, while providing useful information about the current state of a structure, are limited in the monitoring of defects over time and in linking them to predictive simulation. This paper presents a new approach to the predictive modeling of geometric defects. A combination of segments from point clouds are parametrized using the convex hull algorithm to extract features from detected defects, and a stochastic dynamic model is then adapted to these features to model the evolution of the hull over time. Describing a defect in terms of its parameterized hull enables consistent temporal tracking for predictive purposes, while implicitly reducing data dimensionality and complexity as well. In this study, two-dimensional (2D) point clouds analogous to information derived from point clouds were first generated over simulated life cycles. The evolutions of point cloud hull parameterizations were modeled as stochastic dynamical processes via autoregressive integrated moving average (ARIMA) and vectorized autoregression (VAR) and compared against ground truth. The results indicate that this convex hull approach provides consistent and accurate representations of defect evolution across a range of defect topologies and is reasonably robust to noisy measurements; however, assumptions regarding the underlying dynamical process play a significant role in predictive accuracy. The results were then validated on experimental data from fatigue testing with high accuracy. Longer term, the results of this work will support finite element model updating for predictive analysis of structural capacity.
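The hull-parametrization idea can be sketched in a few lines: reduce each point-cloud snapshot to scalar hull features, then fit a time-series model to their evolution. This sketch uses scipy's `ConvexHull` and a plain least-squares AR(1) fit as a simplified stand-in for the paper's ARIMA/VAR models; the synthetic "defect" data and all parameter values are invented for illustration:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_features(points):
    """Reduce a 2D point cloud to low-dimensional hull features.
    For 2D input, ConvexHull.volume is the enclosed area and
    ConvexHull.area is the perimeter."""
    hull = ConvexHull(points)
    return hull.volume, hull.area

def fit_ar1(series):
    """Least-squares AR(1) fit x[t] = c + phi * x[t-1]; a simplified
    stand-in for the ARIMA models used for defect forecasting."""
    x, y = series[:-1], series[1:]
    A = np.column_stack([np.ones_like(x), x])
    (c, phi), *_ = np.linalg.lstsq(A, y, rcond=None)
    return c, phi

# Synthetic life cycle: a defect region that grows at each inspection
rng = np.random.default_rng(1)
areas = []
for t in range(20):
    half_width = 1.0 + 0.05 * t
    pts = rng.uniform(-half_width, half_width, size=(200, 2))
    areas.append(hull_features(pts)[0])

c, phi = fit_ar1(np.array(areas))
next_area = c + phi * areas[-1]  # one-step-ahead forecast of hull area
```

Tracking the hull parameters instead of the raw point cloud is what gives the consistent, low-dimensional temporal signal the abstract describes.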
9

Jameson, A. R., and A. J. Heymsfield. "A Bayesian Approach to Upscaling and Downscaling of Aircraft Measurements of Ice Particle Counts and Size Distributions." Journal of Applied Meteorology and Climatology 52, no. 9 (September 2013): 2075–88. http://dx.doi.org/10.1175/jamc-d-12-0301.1.

Abstract:
This study addresses the issue of how to upscale cloud-sized in situ measurements of ice to yield realistic simulations of ice clouds for a variety of modeling studies. Aircraft measurements of ice particle counts along a 79-km zigzag path were collected in a Costa Rican cloud formed in the upper-level outflow from convection. These are then used to explore the applicability of Bayesian statistics to the problems of upscaling and downscaling. Using the 10-m particle counts, the analyses using Bayesian statistics provide estimates of the probability distribution function of all possible mean values corresponding to these counts. The statistical method of copulas is used to produce an extensive ensemble of estimates of these mean values, which are then combined to derive the probability density function (pdf) of mean values at 1-km resolution. These are found to compare very well to the observed 1-km particle counts when spatial correlation is included. The profiles of the observed and simulated mean counts along the flight path show similar features and have very similar statistical characteristics. However, because the observed and the simulated counts are both the results of stochastic processes, there is no way to upscale exactly to the observed profile. Each simulation is a unique realization of the stochastic processes, as are the observations themselves. These different realizations over all the different sizes can then be used to upscale particle size distributions over large areas.
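The Bayesian step — turning small-scale counts into a probability distribution over the underlying mean — can be illustrated with a conjugate Poisson-Gamma model. This is a deliberate simplification of the paper's approach (it omits the copula-based spatial correlation), and the counts and prior below are made up:

```python
import numpy as np

def poisson_gamma_posterior(counts, prior_shape=1.0, prior_rate=0.1):
    """Conjugate update: with a Gamma(shape, rate) prior on the Poisson
    mean, observing n counts gives a Gamma(shape + sum, rate + n)
    posterior over all possible mean values."""
    counts = np.asarray(counts)
    return prior_shape + counts.sum(), prior_rate + counts.size

# Hypothetical 10-m ice particle counts along a short flight segment
counts = np.array([3, 0, 5, 2, 4, 1, 0, 6, 2, 3])

shape, rate = poisson_gamma_posterior(counts)
posterior_mean = shape / rate  # point estimate of the true mean count

# Ensemble of plausible mean values, analogous to the ensemble the
# paper combines (with spatial correlation) to upscale to 1 km
rng = np.random.default_rng(0)
ensemble = rng.gamma(shape, 1.0 / rate, size=1000)
```

Each draw from `ensemble` is one realization of the unknown mean, which mirrors the abstract's point that any upscaled profile is a realization of a stochastic process rather than an exact reconstruction.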
10

Cirisan, A., B. P. Luo, I. Engel, F. G. Wienhold, U. K. Krieger, U. Weers, G. Romanens, et al. "Balloon-borne match measurements of mid-latitude cirrus clouds." Atmospheric Chemistry and Physics Discussions 13, no. 10 (October 2, 2013): 25417–79. http://dx.doi.org/10.5194/acpd-13-25417-2013.

Abstract:
Abstract. Observations of persistent high supersaturations with respect to ice inside cirrus clouds are challenging our understanding of cloud microphysics and of climate feedback processes in the upper troposphere. Single measurements of a cloudy air mass provide only a snapshot from which the persistence of ice supersaturation cannot be judged. We introduce here the "cirrus match technique" to obtain information of the evolution of clouds and their saturation ratio. The aim of these coordinated balloon soundings is to analyze the same air mass twice. To this end the standard radiosonde equipment is complemented by a frost point hygrometer "SnowWhite" and a particle backscatter detector "COBALD" (Compact Optical Backscatter Aerosol Detector). Extensive trajectory calculations based on regional weather model COSMO forecasts are performed for flight planning and COSMO analyses are used as basis for comprehensive microphysical box modeling (with grid scale 2 km and 7 km, respectively). Here we present the results of matching a cirrus cloud to within 2–15 km, realized on 8 June 2010 over Payerne, Switzerland, and a location 120 km downstream close to Zurich. A thick cirrus was detected over both measurement sites. We show that in order to quantitatively reproduce the measured particle backscatter ratios, the small-scale temperature fluctuations not resolved by COSMO must be superimposed on the trajectories. The stochastic nature of the fluctuations is captured by ensemble calculations. Possibilities for further improvements in the agreement with the measured backscatter data are investigated by assuming a very slow mass accommodation of water on ice, the presence of heterogeneous ice nuclei, or a wide span of (spheroidal) particle shapes. 
However, the resulting improvements from microphysical refinements are moderate and comparable in magnitude with changes caused by assuming different regimes of temperature fluctuations for clear sky or cloudy sky conditions, highlighting the importance of a proper treatment of subscale fluctuations. The model yields good agreement with the measured backscatter over both sites and reproduces the measured saturation ratios with respect to ice over Payerne. Conversely, the 30% in-cloud supersaturation measured in a massive, 4-km thick cloud layer over Zurich cannot be reproduced, irrespective of the choice of meteorological or microphysical model parameters. The measured supersaturation can only be explained by either resorting to an unknown physical process, which prevents the ice particles from consuming the excess humidity, or – much more likely – by a measurement error, such as a contamination of the sensor housing of the SnowWhite hygrometer by a precipitation drop from a mixed phase cloud just below the cirrus layer or from some very slight rain in the boundary layer. This uncertainty calls for in-flight checks or calibrations of hygrometers under the extreme humidity conditions in the upper troposphere.
11

Cirisan, A., B. P. Luo, I. Engel, F. G. Wienhold, M. Sprenger, U. K. Krieger, U. Weers, et al. "Balloon-borne match measurements of midlatitude cirrus clouds." Atmospheric Chemistry and Physics 14, no. 14 (July 18, 2014): 7341–65. http://dx.doi.org/10.5194/acp-14-7341-2014.

Abstract:
Abstract. Observations of high supersaturations with respect to ice inside cirrus clouds with high ice water content (> 0.01 g kg−1) and high crystal number densities (> 1 cm−3) are challenging our understanding of cloud microphysics and of climate feedback processes in the upper troposphere. However, single measurements of a cloudy air mass provide only a snapshot from which the persistence of ice supersaturation cannot be judged. We introduce here the "cirrus match technique" to obtain information about the evolution of clouds and their saturation ratio. The aim of these coordinated balloon soundings is to analyze the same air mass twice. To this end the standard radiosonde equipment is complemented by a frost point hygrometer, "SnowWhite", and a particle backscatter detector, "COBALD" (Compact Optical Backscatter AerosoL Detector). Extensive trajectory calculations based on regional weather model COSMO (Consortium for Small-Scale Modeling) forecasts are performed for flight planning, and COSMO analyses are used as a basis for comprehensive microphysical box modeling (with grid scale of 2 and 7 km, respectively). Here we present the results of matching a cirrus cloud to within 2–15 km, realized on 8 June 2010 over Payerne, Switzerland, and a location 120 km downstream close to Zurich. A thick cirrus cloud was detected over both measurement sites. We show that in order to quantitatively reproduce the measured particle backscatter ratios, the small-scale temperature fluctuations not resolved by COSMO must be superimposed on the trajectories. The stochastic nature of the fluctuations is captured by ensemble calculations. Possibilities for further improvements in the agreement with the measured backscatter data are investigated by assuming a very slow mass accommodation of water on ice, the presence of heterogeneous ice nuclei, or a wide span of (spheroidal) particle shapes. 
However, the resulting improvements from these microphysical refinements are moderate and comparable in magnitude with changes caused by assuming different regimes of temperature fluctuations for clear-sky or cloudy-sky conditions, highlighting the importance of proper treatment of subscale fluctuations. The model yields good agreement with the measured backscatter over both sites and reproduces the measured saturation ratios with respect to ice over Payerne. Conversely, the 30% in-cloud supersaturation measured in a massive 4 km thick cloud layer over Zurich cannot be reproduced, irrespective of the choice of meteorological or microphysical model parameters. The measured supersaturation can only be explained by either resorting to an unknown physical process, which prevents the ice particles from consuming the excess humidity, or – much more likely – by a measurement error, such as a contamination of the sensor housing of the SnowWhite hygrometer by a precipitation drop from a mixed-phase cloud just below the cirrus layer or from some very slight rain in the boundary layer. This uncertainty calls for in-flight checks or calibrations of hygrometers under the special humidity conditions in the upper troposphere.
12

Staveley-Smith, L. "Michigan 160: a precursor to the LMC?" Symposium - International Astronomical Union 148 (1991): 376–77. http://dx.doi.org/10.1017/s0074180900200910.

Abstract:
The tidal interaction between the Magellanic Clouds and the Galaxy is an important factor in influencing the physical and dynamical evolution of the Clouds (e.g. the Magellanic Stream) as well as the genesis and evolution of their respective stellar populations. However, how important is the influence of the Galaxy? This is a key question since we know that relatively isolated, magellanic-type galaxies do exist (e.g. NGC 3109 and NGC 4449) and have been just as efficient at star-formation as the LMC. It is possible in fact that the star formation in the clouds is primarily stochastic in nature and is relatively insensitive to the global forces which seem to have shaped stellar formation processes in massive spiral and elliptical galaxies. Unsupported by a massive bulge or halo component, cold gas disks are inherently susceptible to radial and bar-like instabilities (Efstathiou et al. 1982) which are very efficient at creating the dynamical pressures required for rapid star-formation. With this in mind, a detailed comparison of 'field' magellanic-type galaxies with the LMC and SMC is of some importance.
13

Suselj, Kay, Derek Posselt, Mark Smalley, Matthew D. Lebsock, and Joao Teixeira. "A New Methodology for Observation-Based Parameterization Development." Monthly Weather Review 148, no. 10 (October 1, 2020): 4159–84. http://dx.doi.org/10.1175/mwr-d-20-0114.1.

Abstract:
We develop a methodology for identification of candidate observables that best constrain the parameterization of physical processes in numerical models. This methodology consists of three steps: (i) identifying processes that significantly impact model results, (ii) identifying observables that best constrain the influential processes, and (iii) investigating the sensitivity of the model results to the measurement error and vertical resolution of the constraining observables. This new methodology is applied to the Jet Propulsion Laboratory stochastic multiplume Eddy-Diffusivity/Mass-Flux (JPL-EDMF) model for two case studies representing nonprecipitating marine stratocumulus and marine shallow convection. The uncertainty of physical processes is characterized with uncertainty of model parameters. We find that the most uncertain processes in the JPL-EDMF model are related to the representation of lateral entrainment for convective plumes and parameterization of mixing length scale for the eddy-diffusivity part of the model. The results show a strong interaction between these uncertain processes. Measurements of the water vapor profile for shallow convection and of the cloud fraction profile for the stratocumulus case are among those measurements that best constrain the uncertain JPL-EDMF processes. The interdependence of the required vertical resolution and error characteristics of the observational system is shown. If the observations are associated with larger error, their vertical resolution has to be finer and vice versa. We suggest that the methodology and results presented here provide an objective basis for defining requirements for future observing systems such as future satellite missions to observe clouds and the planetary boundary layer.
14

Harris, Daniel, and Efi Foufoula-Georgiou. "Subgrid variability and stochastic downscaling of modeled clouds: Effects on radiative transfer computations for rainfall retrieval." Journal of Geophysical Research: Atmospheres 106, no. D10 (May 1, 2001): 10349–62. http://dx.doi.org/10.1029/2000jd900797.

15

Sušelj, Kay, João Teixeira, and Daniel Chung. "A Unified Model for Moist Convective Boundary Layers Based on a Stochastic Eddy-Diffusivity/Mass-Flux Parameterization." Journal of the Atmospheric Sciences 70, no. 7 (July 1, 2013): 1929–53. http://dx.doi.org/10.1175/jas-d-12-0106.1.

Abstract:
A single-column model (SCM) is developed for representing moist convective boundary layers. The key component of the SCM is the parameterization of subgrid-scale vertical mixing, which is based on a stochastic eddy-diffusivity/mass-flux (EDMF) approach. In the EDMF framework, turbulent fluxes are calculated as a sum of the turbulent kinetic energy–based eddy-diffusivity component and a mass-flux component. The mass flux is modeled as a fixed number of steady-state plumes. The main challenge of the mass-flux model is to properly represent cumulus clouds, which are modeled as moist plumes. The solutions have to account for a realistic representation of condensation within the plumes and of lateral entrainment into the plumes. At the level of mean condensation within the updraft, the joint pdf of moist conserved variables and vertical velocity is used to estimate the proportion of dry and moist plumes and is sampled in a Monte Carlo way creating a predefined number of plumes. The lateral entrainment rate is modeled as a stochastic process resulting in a realistic decrease of the convective cloudiness with height above cloud base. In addition to the EDMF scheme, the following processes are included in the SCM: a pdf-based parameterization of subgrid-scale condensation, a simple longwave radiation, and one-dimensional dynamics. Note that in this approach there are two distinct pdfs, one representing the variability of updraft properties and the other one the variability of thermodynamic properties of the surrounding environment. The authors show that the model is able to capture the essential features of moist boundary layers, ranging from stratocumulus to shallow-cumulus regimes.
Detailed comparisons, which include pdfs, profiles, and integrated budgets with the Barbados Oceanographic and Meteorological Experiment (BOMEX), Dynamics and Chemistry of Marine Stratocumulus (DYCOMS), and steady-state large-eddy simulation (LES) cases, are discussed to confirm the quality of the present approach.
16

Wilson, Anna M., and Ana P. Barros. "An Investigation of Warm Rainfall Microphysics in the Southern Appalachians: Orographic Enhancement via Low-Level Seeder–Feeder Interactions." Journal of the Atmospheric Sciences 71, no. 5 (April 28, 2014): 1783–805. http://dx.doi.org/10.1175/jas-d-13-0228.1.

Abstract:
Observations of the vertical structure of rainfall, surface rain rates, and drop size distributions (DSDs) in the southern Appalachians were analyzed with a focus on the diurnal cycle of rainfall. In the inner mountain region, a 5-yr high-elevation rain gauge dataset shows that light rainfall, described here as rainfall intensity less than 3 mm h−1 over a time scale of 5 min, accounts for 30%–50% of annual accumulations. The data also reveal warm-season events characterized by heavy surface rainfall in valleys and along ridgelines inconsistent with radar observations of the vertical structure of precipitation. Next, a stochastic column model of advection–coalescence–breakup of warm rain DSDs was used to investigate three illustrative events. The integrated analysis of observations and model simulations suggests that seeder–feeder interactions (i.e., Bergeron processes) between incoming rainfall systems and local fog and/or low-level clouds with very high number concentrations of small drops (<0.2 mm) govern surface rainfall intensity through driving significant increases in coalescence rates and efficiency. Specifically, the model shows how accelerated growth of small- and moderate-size raindrops (<2 mm) via Bergeron processes can enhance surface rainfall rates by one order of magnitude for durations up to 1 h as in the observations. An examination of the fingerprints of seeder–feeder processes on DSD statistics conducted by tracking the temporal evolution of mass spectrum parameters points to the critical need for improved characterization of hydrometeor microstructure evolution, from mist formation to fog and from drizzle development to rainfall.
17

Bodifee, G., and C. de Loore. "On the Occurrence and Appearance of Galactic Life Forms: A Thermodynamic Approach." Symposium - International Astronomical Union 112 (1985): 255–59. http://dx.doi.org/10.1017/s0074180900146595.

Abstract:
Biological life is an evolving complex dissipative structure, in the sense introduced by Prigogine. The question is considered how life in an advanced stage of evolution manifests itself to an outside observer. Fundamental uncertainties are inherent in the process of self-organization as manifested by dissipative structures. Their evolution is determined by stochastic fluctuations in far-from-equilibrium conditions, where instabilities arise that cause bifurcations in the behavior of the process system. Since no specific physical or chemical properties of extraterrestrial life can be predicted, general characteristics based on fundamental thermodynamic principles should guide us in our SETI programs. A biological system necessarily must be macroscopic, open with regard to the rest of the universe, and far from thermodynamic equilibrium. The processes within the system must be strongly nonlinear. We should therefore look for life in environments that are in a stationary far-from-equilibrium state, such as planetary surfaces and cool interstellar molecular clouds, where an extreme nonequilibrium exists between radiation and kinetic temperature. In any case an advanced form of life should manifest itself by a large entropy emission, possibly carried by low-grade energy radiation, and probably not by information-rich communication channels intended for internal process regulation.
APA, Harvard, Vancouver, ISO, and other styles
18

Kuell, Volker, and Andreas Bott. "Stochastic parameterization of cloud processes." Atmospheric Research 143 (June 2014): 176–97. http://dx.doi.org/10.1016/j.atmosres.2014.01.027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Schertzer, D., and S. Lovejoy. "EGS Richardson AGU Chapman NVAG3 Conference: Nonlinear Variability in Geophysics: scaling and multifractal processes." Nonlinear Processes in Geophysics 1, no. 2/3 (September 30, 1994): 77–79. http://dx.doi.org/10.5194/npg-1-77-1994.

Full text
Abstract:
Abstract. 1. The conference The third conference on "Nonlinear VAriability in Geophysics: scaling and multifractal processes" (NVAG 3) was held in Cargese, Corsica, Sept. 10-17, 1993. NVAG3 was a joint American Geophysical Union Chapman and European Geophysical Society Richardson Memorial conference, the first specialist conference jointly sponsored by the two organizations. It followed NVAG1 (Montreal, Aug. 1986), NVAG2 (Paris, June 1988; Schertzer and Lovejoy, 1991), five consecutive annual sessions at EGS general assemblies and two consecutive spring AGU meeting sessions. As with the other conferences and workshops mentioned above, the aim was to bring theories and experiments on the scaling/multifractal behaviour of geophysical fields into confrontation. Subjects covered included climate, clouds, earthquakes, atmospheric and ocean dynamics, tectonics, precipitation, hydrology, the solar cycle and volcanoes. Areas of focus included new methods of data analysis (especially those used for the reliable estimation of multifractal and scaling exponents), as well as their application to rapidly growing databases from in situ networks and remote sensing. The corresponding modelling, prediction and estimation techniques were also emphasized, as were the current debates about stochastic and deterministic dynamics, fractal geometry and multifractals, self-organized criticality and multifractal fields, each of which was the subject of a specific general discussion. The conference started with a one-day short course on multifractals featuring four lectures on a) Fundamentals of multifractals: dimension, codimensions, codimension formalism, b) Multifractal estimation techniques (PDMS, DTM), c) Numerical simulations, Generalized Scale Invariance analysis, d) Advanced multifractals, singular statistics, phase transitions, self-organized criticality and Lie cascades (given by D. Schertzer and S.
Lovejoy; detailed course notes were sent to participants shortly after the conference). This was followed by five days with 8 oral sessions and one poster session. Overall, there were 65 papers involving 74 authors. In general, the main topics covered are reflected in this special issue: geophysical turbulence, clouds and climate, hydrology and solid earth geophysics. In addition to AGU and EGS, the conference was supported by the International Science Foundation, the Centre National de la Recherche Scientifique, Meteo-France, the Department of Energy (US), the Commission of European Communities (DG XII), the Comite National Francais pour le Programme Hydrologique International, and the Ministere de l'Enseignement Superieur et de la Recherche (France). We thank P. Hubert, Y. Kagan, Ph. Ladoy, A. Lazarev, S.S. Moiseev, R. Pierrehumbert, F. Schmitt and Y. Tessier for help with the organization of the conference. However, special thanks go to A. Richter and the EGS office, and to B. Weaver and the AGU, without whom this would have been impossible. We also thank the Institut d'Etudes Scientifiques de Cargese, whose beautiful site was much appreciated, as well as the Bar des Amis, whose ambiance stimulated so many discussions. 2. Tribute to L.F. Richardson With NVAG3, the European geophysical community paid tribute to Lewis Fry Richardson (1881-1953) on the 40th anniversary of his death. Richardson was one of the founding fathers of the idea of scaling and fractality, and his life reflects the European geophysical community and its history in many ways. Although many of Richardson's numerous, outstanding scientific contributions to geophysics have been recognized, perhaps his main contribution, concerning the importance of scaling and cascades, has still not received the attention it deserves. Richardson was the first not only to suggest numerical integration of the equations of motion of the atmosphere, but also to attempt to do so by hand, during the First World War.
This work, as well as a presentation of a broad vision of future developments in the field, appeared in his famous, pioneering book "Weather Prediction by Numerical Process" (1922). As a consequence of his atmospheric studies, the nondimensional number associated with fluid convective stability has been called the "Richardson number". In addition, his book presents a study of the limitations of numerical integration of these equations; it was in this book that, through a celebrated poem, the suggestion that turbulent cascades were the fundamental driving mechanism of the atmosphere was first made. In these cascades, large eddies break up into smaller eddies in a manner which involves no characteristic scales, all the way from the planetary scale down to the viscous scale. This led to the Richardson law of turbulent diffusion (1926) and to the suggestion that particle trajectories might not be describable by smooth curves, but that such trajectories might instead require highly convoluted curves such as the Peano or Weierstrass (fractal) curves for their description. As a founder of the cascade and scaling theories of atmospheric dynamics, he more or less anticipated the Kolmogorov law (1941). He also used scaling ideas to invent the "Richardson dividers method" of successively increasing the resolution of fractal curves, and tested out the method on geographical boundaries (as part of his wartime studies). In the latter work he anticipated recent efforts to study scale invariance in rivers and topography. His complex life typifies some of the hardships that the European scientific community has had to face. His educational career is unusual: he received a B.A. degree in physics, mathematics, chemistry, biology and zoology at Cambridge University, and he finally obtained his Ph.D. in mathematical psychology at the age of 47 from the University of London.
As a conscientious objector he was compelled to quit the United Kingdom Meteorological Office in 1920 when the latter was militarized by integration into the Air Ministry. He subsequently became the head of a physics department and the principal of a college. In 1940, he retired to do research on war, which was published posthumously in book form (Richardson, 1963). This latter work is testimony to the trauma caused by the two World Wars, which led some scientists, including Richardson, to use their skills in rational attempts to eradicate the sources of conflict. Unfortunately, this remains an open field of research. 3. The contributions in this special issue Perhaps the area of geophysics where scaling ideas have the longest history, and where they have made the largest impact in the last few years, is turbulence. The paper by Tsinober is an example where geometric fractal ideas are used to deduce corrections to standard dimensional analysis results for turbulence. Based on local spontaneous breaking of isotropy of turbulent flows, the fractal notion is used in order to deduce diffusion laws (anomalous with respect to the Richardson law). It is argued that this law is ubiquitous from the atmospheric boundary layer to the stratosphere. The asymptotic intermittency exponent is hypothesized to be not only finite but to be determined by the angular momentum flux. Schmitt et al., Chigirinskaya et al. and Lazarev et al. apply statistical multifractal notions to atmospheric turbulence. In the first of these, the formal analogy between multifractals and thermodynamics is exploited, in particular to confirm theoretical predictions that sample-size-dependent multifractal phase transitions occur.
While this quantitatively explains the behavior of the most extreme turbulent events, it suggests that - contrary to the type of multifractals most commonly discussed in the literature, which are bounded - more violent (unbounded) multifractals are indeed present in the atmospheric wind field. Chigirinskaya et al. use a tropical rather than mid-latitude data set to study the extreme fluctuations from yet another angle: that of coherent structures, which, in the multifractal framework, are identified with singularities of various orders. The existence of a critical order of singularity which distinguishes violent "self-organized critical structures" was theoretically predicted ten years ago; here it is directly estimated. The second of this two-part series (Lazarev et al.) investigates yet another aspect of tropical atmospheric dynamics: the strong multiscaling anisotropy. Beyond the determination of universal multifractal indices and critical singularities in the vertical, this enables a comparison to be made with Chigirinskaya et al.'s horizontal results, requiring an extension of the unified scaling model of atmospheric dynamics. Other approaches to the problem of geophysical turbulence are followed in the papers by Pavlos et al., Vassiliadis et al., and Voros et al. All of them share a common assumption that a very small number of degrees of freedom (deterministic chaos) might be sufficient for characterizing/modelling the systems under consideration. Pavlos et al. consider the magnetospheric response to solar wind, showing that scaling occurs both in real space (using spectra) and in phase space, the latter being characterized by a correlation dimension. The paper by Vassiliadis et al. follows on directly by investigating the phase space properties of power-law filtered and rectified Gaussian noise; the results further quantify how low phase space correlation dimensions can occur even in stochastic processes with a very large number of degrees of freedom. Voros et al.
analyze time series of geomagnetic storms and magnetosphere pulsations, also estimating their correlation dimensions and Lyapunov exponents, taking special care over the stability of the estimates. They discriminate low-dimensional events from others, which are for instance attributed to incoherent waves. While clouds and climate were the subject of several talks at the conference (including several contributions on multifractal clouds), Cahalan's contribution is the only one in this special issue. Addressing the fundamental problem of the relationship between horizontal cloud heterogeneity and the related radiation fields, he first summarizes some recent numerical results showing that even for comparatively thin clouds, fractal heterogeneity will significantly reduce the albedo. The model used for the distribution of cloud liquid water is the monofractal "bounded cascade" model, whose properties are also outlined. The paper by Falkovich addresses another problem concerning the general circulation: the nonlinear interaction of waves. By assuming the existence of a peak (i.e. scale break) at the inertial oscillation frequency, it is argued that, due to remarkable cancellations, the interactions between long inertio-gravity waves and Rossby waves are anomalously weak, producing a "wave condensate" of large amplitude so that wave breaking with front creation can occur. Kagan et al., Eneva and Hooge et al. consider fractal and multifractal behaviour in seismic events. Eneva estimates multifractal exponents of the density of micro-earthquakes induced by mining activity. The effects of sample limitations are discussed, especially in order to distinguish genuine from spurious multifractal behaviour. With the help of an analysis of the CALNET catalogue, Hooge et al.
point out that the origin of the celebrated Gutenberg-Richter law could be related to a non-classical Self-Organized Criticality generated by a first-order phase transition in a multifractal earthquake process. They also analyze multifractal seismic fields which are obtained by raising earthquake amplitudes to various powers and summing them on a grid. In contrast, Kagan, analyzing several earthquake catalogues, discusses the various laws associated with earthquakes. Giving theoretical and empirical arguments, he proposes an additive (monofractal) model of earthquake stress, emphasizing the relevance of (asymmetric) stable Cauchy probability distributions to describe earthquake stress distributions. This would yield a linear model for self-organized critical earthquakes. References: Kolmogorov, A.N.: Local structure of turbulence in an incompressible liquid for very large Reynolds number, Proc. Acad. Sci. URSS Geochem. Sect., 30, 299-303, 1941. Perrin, J.: Les Atomes, NRF-Gallimard, Paris, 1913. Richardson, L.F.: Weather Prediction by Numerical Process. Cambridge Univ. Press, 1922 (republished by Dover, 1965). Richardson, L.F.: Atmospheric diffusion shown on a distance-neighbour graph. Proc. Roy. Soc. London, A110, 709-737, 1926. Richardson, L.F.: The problem of contiguity: an appendix to Statistics of Deadly Quarrels. General Systems Yearbook, 6, 139-187, 1963. Schertzer, D., Lovejoy, S.: Nonlinear Variability in Geophysics, Kluwer, 252 pp, 1991.
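The "Richardson dividers method" mentioned in the tribute above can be illustrated with a short sketch (an illustrative toy, not part of the conference material): walk a curve with dividers of decreasing opening r; for a fractal curve the measured length grows as L(r) ∝ r^(1−D), so D can be read off a log-log fit.

```python
import numpy as np

def koch_curve(n_iter: int) -> np.ndarray:
    """Build a Koch curve (fractal dimension log 4 / log 3 ≈ 1.26) as complex points."""
    pts = np.array([0.0 + 0j, 1.0 + 0j])
    for _ in range(n_iter):
        new = [pts[0]]
        for a, b in zip(pts[:-1], pts[1:]):
            d = b - a
            new += [a + d/3, a + d/3 + (d/3)*np.exp(1j*np.pi/3), a + 2*d/3, b]
        pts = np.array(new)
    return pts

def divider_length(pts: np.ndarray, r: float) -> float:
    """Walk the polyline with a divider of opening r; return the total measured length."""
    total, pos, i = 0.0, pts[0], 0
    while i < len(pts) - 1:
        j = i + 1
        # advance until the next vertex lies at least a distance r from the divider tip
        while j < len(pts) and abs(pts[j] - pos) < r:
            j += 1
        if j == len(pts):
            break
        total += r
        pos, i = pts[j], j
    return total

curve = koch_curve(6)
rulers = [0.3, 0.1, 0.03, 0.01]
lengths = [divider_length(curve, r) for r in rulers]
# L(r) ~ r**(1 - D)  =>  D = 1 - slope of log L versus log r
slope = np.polyfit(np.log(rulers), np.log(lengths), 1)[0]
print("estimated fractal dimension:", 1 - slope)
```

For a smooth curve the measured length converges and the estimated D is close to 1; for the Koch curve the length keeps growing as the divider shrinks, which is exactly Richardson's coastline observation.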
APA, Harvard, Vancouver, ISO, and other styles
20

Alpert, P. A., J. Y. Aller, and D. A. Knopf. "Ice nucleation from aqueous NaCl droplets with and without marine diatoms." Atmospheric Chemistry and Physics Discussions 11, no. 3 (March 11, 2011): 8291–336. http://dx.doi.org/10.5194/acpd-11-8291-2011.

Full text
Abstract:
Abstract. Ice formation in the atmosphere by homogeneous and heterogeneous nucleation is one of the least understood processes in cloud microphysics and climate. Here we describe our investigation of the marine environment as a potential source of atmospheric ice nuclei (IN) by experimentally observing homogeneous ice nucleation from aqueous NaCl droplets and comparing against heterogeneous ice nucleation from aqueous NaCl droplets containing intact and fragmented diatoms. Homogeneous and heterogeneous ice nucleation are studied as a function of temperature and water activity, aw. Additional analyses are presented on the dependence of diatom surface area and aqueous volume on heterogeneous freezing temperatures, ice nucleation rates, ωhet, ice nucleation rate coefficients, Jhet, and differential and cumulative ice nuclei spectra, k(T) and K(T), respectively. Homogeneous freezing temperatures and corresponding nucleation rate coefficients are in agreement with the water-activity-based homogeneous ice nucleation theory within experimental and predictive uncertainties. Our results confirm, as predicted by classical nucleation theory, that a stochastic interpretation can be used to describe this nucleation process. Heterogeneous ice nucleation initiated by intact and fragmented diatoms can be adequately represented by a modified water-activity-based ice nucleation theory. A horizontal shift in water activity, Δaw,het = 0.2303, of the ice melting curve can describe median heterogeneous freezing temperatures. Individual freezing temperatures showed no dependence on available diatom surface area and aqueous volume. Determined at median diatom freezing temperatures for aw from 0.8 to 0.99, ωhet ≈ 0.11 (+0.06/−0.05) s−1, Jhet ≈ 1.0 (+1.16/−0.61) × 10^4 cm−2 s−1, and K ≈ 6.2 (+3.5/−4.1) × 10^4 cm−2.
The experimentally derived ice nucleation rates and nuclei spectra allow us to estimate ice particle production which we subsequently use for a comparison with observed ice crystal concentrations typically found in cirrus and polar marine mixed-phase clouds. Differences in application of time-dependent and time-independent analyses to predict ice particle production are discussed.
APA, Harvard, Vancouver, ISO, and other styles
21

Alpert, P. A., J. Y. Aller, and D. A. Knopf. "Ice nucleation from aqueous NaCl droplets with and without marine diatoms." Atmospheric Chemistry and Physics 11, no. 12 (June 16, 2011): 5539–55. http://dx.doi.org/10.5194/acp-11-5539-2011.

Full text
Abstract:
Abstract. Ice formation in the atmosphere by homogeneous and heterogeneous nucleation is one of the least understood processes in cloud microphysics and climate. Here we describe our investigation of the marine environment as a potential source of atmospheric ice nuclei (IN) by experimentally observing homogeneous ice nucleation from aqueous NaCl droplets and comparing against heterogeneous ice nucleation from aqueous NaCl droplets containing intact and fragmented diatoms. Homogeneous and heterogeneous ice nucleation are studied as a function of temperature and water activity, aw. Additional analyses are presented on the dependence of diatom surface area and aqueous volume on heterogeneous freezing temperatures, ice nucleation rates, ωhet, ice nucleation rate coefficients, Jhet, and differential and cumulative ice nuclei spectra, k(T) and K(T), respectively. Homogeneous freezing temperatures and corresponding nucleation rate coefficients are in agreement with the water-activity-based homogeneous ice nucleation theory within experimental and predictive uncertainties. Our results confirm, as predicted by classical nucleation theory, that a stochastic interpretation can be used to describe the homogeneous ice nucleation process. Heterogeneous ice nucleation initiated by intact and fragmented diatoms can be adequately represented by a modified water-activity-based ice nucleation theory. A horizontal shift in water activity, Δaw,het = 0.2303, of the ice melting curve can describe median heterogeneous freezing temperatures. Individual freezing temperatures showed no dependence on available diatom surface area and aqueous volume. Determined at median diatom freezing temperatures for aw from 0.8 to 0.99, ωhet ≈ 0.11 (+0.06/−0.05) s−1, Jhet ≈ 1.0 (+1.16/−0.61) × 10^4 cm−2 s−1, and K ≈ 6.2 (+3.5/−4.1) × 10^4 cm−2.
The experimentally derived ice nucleation rates and nuclei spectra allow us to estimate ice particle production which we subsequently use for a comparison with observed ice crystal concentrations typically found in cirrus and polar marine mixed-phase clouds. Differences in application of time-dependent and time-independent analyses to predict ice particle production are discussed.
APA, Harvard, Vancouver, ISO, and other styles
22

Ni, C. F., C. P. Lin, S. G. Li, and J. S. Chen. "Quantifying flow and remediation zone uncertainties for partially opened wells in heterogeneous aquifers." Hydrology and Earth System Sciences Discussions 8, no. 2 (March 30, 2011): 3133–66. http://dx.doi.org/10.5194/hessd-8-3133-2011.

Full text
Abstract:
Abstract. This study presents a numerical first-order spectral model to quantify flow and remediation zone uncertainties for partially opened wells in heterogeneous aquifers. Taking advantage of spectral theory to account for unmodeled small-scale variability in hydraulic conductivity (K), the presented nonstationary spectral method (NSM) can efficiently estimate flow uncertainties, including hydraulic heads and Darcy velocities in the r and z directions of a cylindrical coordinate system. The velocity uncertainties, together with a particle backward tracking algorithm, are then used to estimate stochastic remediation zones for scenarios with partially opened well screens. In this study the flow and remediation zone uncertainties obtained by NSM were first compared with those obtained by Monte Carlo simulations (MCS). A layered aquifer with different geometric means of K and screen locations was then illustrated with the developed NSM. To compare NSM flow and remediation zone uncertainties with those of MCS, three different small-scale K variances and correlation lengths were considered for illustration purposes. The MCS remediation zones for different degrees of heterogeneity were presented with the uncertainty clouds obtained from 200 equally likely MCS realizations. Results of the simulations reveal that the first-order NSM solutions agree well with those of MCS for partially opened wells. The flow uncertainties obtained using NSM and MCS are nearly identical for aquifers with small ln K variances and correlation lengths. Based on the test examples, the remediation zone uncertainties are not sensitive to changes in the small-scale ln K correlation lengths. However, the remediation zone uncertainties increase significantly with the small-scale ln K variances. The largest displacement uncertainties may differ by several meters when the ln K variance increases from 0.1 to 1.0.
Such results are also valid for the estimations of remediation zones in layered aquifers.
APA, Harvard, Vancouver, ISO, and other styles
23

Kobayashi, Atsuko, Masamoto Horikawa, Joseph L. Kirschvink, and Harry N. Golash. "Magnetic control of heterogeneous ice nucleation with nanophase magnetite: Biophysical and agricultural implications." Proceedings of the National Academy of Sciences 115, no. 21 (May 7, 2018): 5383–88. http://dx.doi.org/10.1073/pnas.1800294115.

Full text
Abstract:
In supercooled water, ice nucleation is a stochastic process that requires ∼250–300 molecules to transiently achieve structural ordering before an embryonic seed crystal can nucleate. This happens most easily on crystalline surfaces, in a process termed heterogeneous nucleation; without such surfaces, water droplets will supercool to below −30 °C before eventually freezing homogeneously. A variety of fundamental processes depends on heterogeneous ice nucleation, ranging from desert-blown dust inducing precipitation in clouds to frost resistance in plants. Recent experiments have shown that crystals of nanophase magnetite (Fe3O4) are powerful nucleation sites for this heterogeneous crystallization of ice, comparable to other materials like silver iodide and some cryobacterial peptides. In natural materials containing magnetite, its ferromagnetism offers the possibility that magneto-mechanical motion induced by external oscillating magnetic fields could act to disrupt the water–crystal interface, inhibiting the heterogeneous nucleation process in subfreezing water and promoting supercooling. For this to work, the magneto-mechanical rotation of the particles must exceed the magnitude of Brownian motion. We report here that 10-Hz precessing magnetic fields, at strengths of 1 mT and above, on ∼50-nm magnetite crystals dispersed in ultrapure water, meet these criteria and do indeed produce highly significant supercooling. Using these rotating magnetic fields, we were able to elicit supercooling in two representative plant and animal tissues (celery and bovine muscle), both of which have detectable, natural levels of ferromagnetic material. Tailoring magnetic oscillations for the magnetite particle size distribution in different tissues could maximize this supercooling effect.
APA, Harvard, Vancouver, ISO, and other styles
24

Griffin, Sarah M., Jason A. Otkin, Gregory Thompson, Maria Frediani, Judith Berner, and Fanyou Kong. "Assessing the Impact of Stochastic Perturbations in Cloud Microphysics using GOES-16 Infrared Brightness Temperatures." Monthly Weather Review 148, no. 8 (July 8, 2020): 3111–37. http://dx.doi.org/10.1175/mwr-d-20-0078.1.

Full text
Abstract:
Abstract In this study, infrared brightness temperatures (BTs) are used to examine how applying stochastic perturbed parameter (SPP) methodology to the widely used Thompson–Eidhammer cloud microphysics scheme impacts the cloud field in high-resolution forecasts. Modifications are made to add stochastic perturbations to three parameters controlling cloud generation and dissipation processes. Two five-member ensembles are generated, one using the microphysics parameter perturbations (SPP-MP) and another where white noise perturbations were added to potential temperature fields at initialization time (Control). The impact of the SPP method was assessed using simulated and observed GOES-16 BTs. This analysis uses pixel-based and object-based methods to assess the impact on the cloud field. Pixel-based methods revealed that the SPP-MP BTs are slightly more accurate than the Control BTs. However, too few pixels with a BT lower than 270 K result in a positive bias compared to the observations. A negative bias compared to the observations is observed when only analyzing lower BTs. The spread of the ensemble BTs was analyzed using continuous ranked probability score differences, with the SPP-MP ensemble BTs having less (more) spread during May (January) compared to the Control. Object-based analysis using the Method for Object-Based Diagnostic Evaluation revealed that the upper-level cloud objects are smaller in the SPP-MP ensemble than in the Control, but a lower bias exists in the SPP-MP BTs compared to the Control BTs for overlapping matched objects. However, there is no clear distinction between the SPP-MP and Control ensemble members during the evolution of objects. Overall, the SPP-MP perturbations result in lower BTs compared to the Control ensemble and more cloudy pixels.
APA, Harvard, Vancouver, ISO, and other styles
25

Tonttila, J., H. Järvinen, and P. Räisänen. "Explicit representation of subgrid variability in cloud microphysics yields weaker aerosol indirect effect in the ECHAM5-HAM2 climate model." Atmospheric Chemistry and Physics Discussions 14, no. 10 (June 12, 2014): 15523–43. http://dx.doi.org/10.5194/acpd-14-15523-2014.

Full text
Abstract:
Abstract. Impacts of representing cloud microphysical processes in a stochastic subcolumn framework are investigated, with emphasis on estimating the aerosol indirect effect. It is shown that subgrid treatment of cloud activation and autoconversion of cloud water to rain reduce the impact of anthropogenic aerosols on cloud properties and thus reduce the global mean aerosol indirect effect by 18%, from 1.59 to 1.30 W m−2. The results show the importance of considering subgrid variability in the treatment of autoconversion; representing several processes in a self-consistent subgrid framework is also emphasized. This paper provides direct evidence that omitting subgrid variability in cloud microphysics significantly contributes to the apparently chronic overestimation of the aerosol indirect effect by climate models, as compared to satellite-based estimates.
APA, Harvard, Vancouver, ISO, and other styles
26

Remy, F., and F. Mignard. "Stellar perturbations on comets." International Astronomical Union Colloquium 83 (1985): 97–104. http://dx.doi.org/10.1017/s0252921100083822.

Full text
Abstract:
Abstract. The stochastic processes involved in the evolution of a hypothetical cloud of comets are investigated. The cloud is assumed to be randomly perturbed by passing stars approaching the Sun at distances of less than 1 pc. Within the framework of the impulse approximation we derive analytical expressions for the probability distribution of the impulse imparted to comets for both close and distant approaches. An application to the rate of ejection of comets is presented.
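The impulse approximation invoked in this abstract can be sketched numerically (an illustrative order-of-magnitude calculation, not the paper's analytic derivation; the stellar mass, speed, and comet distance below are assumptions): each body receives a velocity kick Δv = 2GM/(bV) from a star of mass M passing at impact parameter b with speed V, and the comet's net perturbation is the difference between its kick and the Sun's.

```python
# Impulse-approximation estimate of a stellar kick on an Oort-cloud comet.
# Scenario values (1 solar mass, 30 km/s, 1 pc, comet at 50,000 AU) are
# illustrative assumptions, not taken from Remy & Mignard.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m
AU = 1.496e11          # m

def impulse_kick(mass_kg: float, b_m: float, v_ms: float) -> float:
    """Velocity impulse (m/s) from a star of given mass at impact parameter b, speed V."""
    return 2.0 * G * mass_kg / (b_m * v_ms)

# Kick on the Sun from a one-solar-mass star passing at 1 pc with 30 km/s
dv_sun = impulse_kick(M_SUN, 1.0 * PC, 30e3)
# The comet sits 50,000 AU closer to the star's track, so its kick is larger
b_comet = 1.0 * PC - 50_000 * AU
dv_comet = impulse_kick(M_SUN, b_comet, 30e3)
# The differential (tidal) impulse is what perturbs the comet's orbit about the Sun
print(f"kick on Sun:   {100*dv_sun:.1f} cm/s")
print(f"kick on comet: {100*dv_comet:.1f} cm/s")
print(f"differential:  {100*(dv_comet - dv_sun):.1f} cm/s")
```

Kicks of tens of cm/s per encounter, accumulated stochastically over many passages, are what drive the slow diffusion and occasional ejection of comets discussed in the abstract.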
APA, Harvard, Vancouver, ISO, and other styles
27

Feitzinger, J. V. "Star Formation in the Large Magellanic Cloud." Symposium - International Astronomical Union 115 (1987): 521–33. http://dx.doi.org/10.1017/s0074180900096315.

Full text
Abstract:
Methods used in pattern recognition and cluster analysis are applied to investigate the spatial distribution of the star-forming regions. The fractal dimension of these structures is deduced. The new 21 cm, radio continuum (1.4 GHz) and IRAS surveys reveal structures on scales of 700 pc to 1500 pc that coincide with the optically identified star-forming sites. The morphological structures delineated by young stars reflect physical parameters which determine the star formation in this galaxy. The formation of spiral arm filaments is understandable in terms of stochastic self-propagating star formation processes.
APA, Harvard, Vancouver, ISO, and other styles
28

Hottovy, Scott, and Samuel N. Stechmann. "A Spatiotemporal Stochastic Model for Tropical Precipitation and Water Vapor Dynamics." Journal of the Atmospheric Sciences 72, no. 12 (November 24, 2015): 4721–38. http://dx.doi.org/10.1175/jas-d-15-0119.1.

Full text
Abstract:
Abstract A linear stochastic model is presented for the dynamics of water vapor and tropical convection. Despite its linear formulation, the model reproduces a wide variety of observational statistics from disparate perspectives, including (i) a cloud cluster area distribution with an approximate power law; (ii) a power spectrum of spatiotemporal red noise, as in the “background spectrum” of tropical convection; and (iii) a suite of statistics that resemble the statistical physics concepts of critical phenomena and phase transitions. The physical processes of the model are precipitation, evaporation, and turbulent advection–diffusion of water vapor, and they are represented in idealized form as eddy diffusion, damping, and stochastic forcing. Consequently, the form of the model is a damped version of the two-dimensional stochastic heat equation. Exact analytical solutions are available for many statistics, and numerical realizations can be generated for minimal computational cost and for any desired time step. Given the simple form of the model, the results suggest that tropical convection may behave in a relatively simple, random way. Finally, relationships are also drawn with the Ising model, the Edwards–Wilkinson model, the Gaussian free field, and the Schramm–Loewner evolution and its possible connection with cloud cluster statistics. Potential applications of the model include several situations where realistic cloud fields must be generated for minimal cost, such as cloud parameterizations for climate models or radiative transfer models.
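The model form named in this abstract, a damped two-dimensional stochastic heat equation, admits a very short numerical realization. The sketch below integrates dq/dt = b ∇²q − q/τ + F + σ Ẇ with Euler–Maruyama on a doubly periodic grid; the grid size, damping time τ, eddy diffusivity b, and noise amplitude σ are arbitrary illustrative choices, not the paper's calibrated parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Damped 2-D stochastic heat equation: dq/dt = b*Laplacian(q) - q/tau + F + sigma*noise.
# All parameter values here are illustrative assumptions.
n, dt, steps = 64, 0.01, 2000
b, tau, F, sigma = 1.0, 1.0, 0.0, 0.5
dx = 1.0

q = np.zeros((n, n))
for _ in range(steps):
    # five-point Laplacian on a periodic grid
    lap = (np.roll(q, 1, 0) + np.roll(q, -1, 0) +
           np.roll(q, 1, 1) + np.roll(q, -1, 1) - 4 * q) / dx**2
    noise = rng.standard_normal((n, n)) * np.sqrt(dt)
    q += dt * (b * lap - q / tau + F) + sigma * noise

# Damping keeps the field in statistical equilibrium with finite variance
print("field mean:", q.mean(), "field std:", q.std())
```

Because the damping bounds the variance, the field settles into a statistical equilibrium; thresholding such a field into "precipitating" and "dry" regions is one way to generate the cloud-cluster statistics the abstract describes, at minimal cost.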
APA, Harvard, Vancouver, ISO, and other styles
29

Seiden, Philip E. "Stochastic star formation and spiral structure." Symposium - International Astronomical Union 106 (1985): 551–58. http://dx.doi.org/10.1017/s0074180900243143.

Full text
Abstract:
Most approaches to explaining the long-range order of the spiral arms in galaxies assume that it is induced by the long-range gravitational interaction. However, it is well known in many fields of physics that long-range order may be induced by short-range interactions. A typical example is magnetism, where the exchange interaction between magnetic spins has a range of only 10 ångströms, yet a bar magnet can be made as large as one likes. Stochastic self-propagating star formation (SSPSF) starts from the point of view of a short-range interaction and examines the spiral structure arising from it (Seiden and Gerola 1982). We assume that the energetic processes of massive stars (stellar winds, ionization-front shocks and supernova shocks) in an OB association or open cluster can induce the creation of a new molecular cloud from cold interstellar atomic hydrogen. In turn, this new molecular cloud will begin to form stars, allowing the process to repeat and creating a chain reaction. The differential rotation existing in a spiral galaxy will stretch the aggregation of recently created stars into spiral features.
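The short-range propagation rule described above lends itself to a compact cellular-automaton sketch, in the spirit of the Gerola–Seiden scheme the abstract cites. This is a toy version: the grid, probabilities, and rotation law below are illustrative assumptions, not Seiden's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stochastic self-propagating star formation on concentric rings.
# Parameters are illustrative, not from Seiden and Gerola (1982).
n_rings, cells_per_ring = 20, 60
p_induced, p_spont = 0.18, 0.001
omega = 1.0 / np.arange(1, n_rings + 1)   # flat rotation curve: angular speed ~ 1/r
phase = np.zeros(n_rings)                 # accumulated rotation per ring, in cells
active = rng.random((n_rings, cells_per_ring)) < p_spont

for step in range(100):
    new = rng.random((n_rings, cells_per_ring)) < p_spont  # rare spontaneous bursts
    for r in range(n_rings):
        for c in np.flatnonzero(active[r]):
            # each active cell can trigger its azimuthal and radial neighbours
            for nr, nc in ((r, c - 1), (r, c + 1), (r - 1, c), (r + 1, c)):
                if 0 <= nr < n_rings and rng.random() < p_induced:
                    new[nr, nc % cells_per_ring] = True
    active = new
    # differential rotation: shear each ring by its (integer-accumulated) angular speed
    phase += omega
    shift = phase.astype(int) - (phase - omega).astype(int)
    for r in range(n_rings):
        active[r] = np.roll(active[r], shift[r])

print("active star-forming cells:", int(active.sum()))
```

With propagation probabilities near the critical value, triggered chains persist, and the differential shear stretches the clumps of recent activity into trailing spiral segments, which is the qualitative point of the SSPSF picture.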
APA, Harvard, Vancouver, ISO, and other styles
30

Tonttila, J., H. Järvinen, and P. Räisänen. "Explicit representation of subgrid variability in cloud microphysics yields weaker aerosol indirect effect in the ECHAM5-HAM2 climate model." Atmospheric Chemistry and Physics 15, no. 2 (January 19, 2015): 703–14. http://dx.doi.org/10.5194/acp-15-703-2015.

Full text
Abstract:
Abstract. The impacts of representing cloud microphysical processes in a stochastic subcolumn framework are investigated, with emphasis on estimating the aerosol indirect effect. It is shown that subgrid treatment of cloud activation and autoconversion of cloud water to rain reduce the impact of anthropogenic aerosols on cloud properties and thus reduce the global mean aerosol indirect effect by 19%, from −1.59 to −1.28 W m−2. This difference is partly related to differences in the model basic state; in particular, the liquid water path (LWP) is smaller and the shortwave cloud radiative forcing weaker when autoconversion is computed separately for each subcolumn. However, when the model is retuned so that the differences in the basic state LWP and radiation balance are largely eliminated, the global-mean aerosol indirect effect is still 14% smaller (i.e. −1.37 W m−2) than for the model version without subgrid treatment of cloud activation and autoconversion. The results show the importance of considering subgrid variability in the treatment of autoconversion. Representation of several processes in a self-consistent subgrid framework is emphasized. This paper provides evidence that omitting subgrid variability in cloud microphysics contributes to the apparently chronic overestimation of the aerosol indirect effect by climate models, as compared to satellite-based estimates.
31

Brito, Carlos, Leonardo Silva, Gustavo Callou, Tuan Anh Nguyen, Dugki Min, Jae-Woo Lee, and Francisco Airton Silva. "Offloading Data through Unmanned Aerial Vehicles: A Dependability Evaluation." Electronics 10, no. 16 (August 10, 2021): 1916. http://dx.doi.org/10.3390/electronics10161916.

Abstract:
Applications in the Internet of Things (IoT) context continuously generate large amounts of data. The data must be processed and monitored to allow rapid decision making. However, the wireless connection that links such devices to remote servers can lead to data loss. Thus, new forms of connection must be explored to ensure the availability and reliability of the system as a whole. Unmanned aerial vehicles (UAVs) are becoming increasingly empowered in terms of processing power and autonomy. UAVs can be used as a bridge between IoT devices and remote servers, such as edge or cloud computing. UAVs can collect data from mobile devices and process them, if possible. If there is no processing power in the UAV, the data are sent and processed on servers at the edge or in the cloud. Data offloading through UAVs is a reality today, but one with many challenges, mainly due to unavailability constraints. This work proposes stochastic Petri net (SPN) models and reliability block diagrams (RBDs) to evaluate a distributed architecture, with UAVs focusing on the system’s availability and reliability. Among the various existing methodologies, stochastic Petri nets provide models that represent complex systems with different characteristics. UAVs are used to route data from IoT devices to the edge or the cloud through a base station. The base station receives data from UAVs and retransmits them to the cloud. The data are processed in the cloud, and the responses are returned to the IoT devices. A sensitivity analysis through Design of Experiments (DoE) showed key points of improvement for the base model, which was enhanced. A numerical analysis indicated the components with the most significant impact on availability. For example, the cloud proved to be a very relevant component for the availability of the architecture. The final results confirmed the effectiveness of the improvements to the base model.
The present work can help system architects develop distributed architectures with more optimized UAVs and low evaluation costs.
32

Lu, Yingdong, Mayank Sharma, Mark S. Squillante, and Bo Zhang. "STOCHASTIC OPTIMAL DYNAMIC CONTROL OF GI_m/GI_m/1_n QUEUES WITH TIME-VARYING WORKLOADS." Probability in the Engineering and Informational Sciences 30, no. 3 (May 19, 2016): 470–91. http://dx.doi.org/10.1017/s0269964816000103.

Abstract:
Motivated by applications in areas such as cloud computing and information technology services, we consider GI/GI/1 queueing systems under workloads (arrival and service processes) that vary according to one discrete time scale and under controls (server capacity) that vary according to another discrete time scale. We take a stochastic optimal control approach and formulate the corresponding optimal dynamic control problem as a stochastic dynamic program. Under general assumptions for the queueing system, we derive structural properties for the optimal dynamic control policy, establishing that the optimal policy can be obtained through a sequence of convex programs. We also derive fluid and diffusion approximations for the problem and propose analytical and computational approaches in these settings. Computational experiments demonstrate the benefits of our theoretical results over standard heuristics.
33

Kermarrec, Paffenholz, and Alkhatib. "How Significant Are Differences Obtained by Neglecting Correlations When Testing for Deformation: A Real Case Study Using Bootstrapping with Terrestrial Laser Scanner Observations Approximated by B-Spline Surfaces." Sensors 19, no. 17 (August 21, 2019): 3640. http://dx.doi.org/10.3390/s19173640.

Abstract:
B-spline surfaces possess attractive properties such as a high degree of continuity or the local support of their basis functions. One of the major applications of B-spline surfaces in engineering geodesy is the least-squares (LS) fitting of surfaces from, e.g., 3D point clouds obtained from terrestrial laser scanners (TLS). Such mathematical approximations allow one to test rigorously, with a given significance level, the deformation magnitude between point clouds taken at different epochs. Indeed, statistical tests cannot be applied when point clouds are processed in commonly used software such as CloudCompare, which restricts the analysis of deformation to simple deformation maps based on distance computation. For a trustworthy test decision and the resulting risk management, the stochastic model of the underlying observations needs, however, to be optimally specified. Since B-spline surface approximations necessitate Cartesian coordinates of the TLS observations, the diagonal variance-covariance matrix (VCM) of the raw TLS measurements has to be transformed by means of the error propagation law. Unfortunately, this procedure induces mathematical correlations, which, if neglected, can strongly affect the test statistics chosen to analyse deformation. This may potentially lead to wrongly rejecting the null hypothesis of no deformation, with risky and expensive consequences. In this contribution, we propose to investigate the impact of mathematical correlations on test statistics, using real TLS observations from a bridge under load. Since, besides the TLS, a highly precise laser tracker (LT) was used, the significance of the difference of the test statistics when the stochastic model is misspecified can be assessed. However, the underlying test distribution is hardly tractable, so that only an adapted bootstrapping allows the computation of trustworthy p-values.
Subsequently, the extent to which heteroscedasticity and mathematical correlations can be neglected or simplified without impacting the test decision is shown in a rigorous way, paving the way for a simplification based on the intensity model.
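When the null distribution of a test statistic is intractable, as for the deformation tests above, bootstrapping approximates the p-value by resampling. A generic two-sample sketch using an absolute difference-of-means statistic under a pooled null; this is an illustration of the idea, not the authors' adapted bootstrap for correlated TLS observations:

```python
import random

def bootstrap_p_value(sample_a, sample_b, n_boot, rng):
    """Two-sample bootstrap p-value for an absolute difference-of-means
    statistic; the null distribution is approximated by resampling the
    pooled data with replacement."""
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(sample_a) - mean(sample_b))
    pooled = list(sample_a) + list(sample_b)
    exceed = 0
    for _ in range(n_boot):
        boot_a = [rng.choice(pooled) for _ in sample_a]
        boot_b = [rng.choice(pooled) for _ in sample_b]
        if abs(mean(boot_a) - mean(boot_b)) >= observed:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)  # add-one to avoid a zero p-value
```

A small p-value indicates that the observed difference is unlikely under the resampled null, which is the test decision the abstract refers to.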
34

Thual, Sulian, Andrew Majda, and Nan Chen. "Seasonal Synchronization of a Simple Stochastic Dynamical Model Capturing El Niño Diversity." Journal of Climate 30, no. 24 (December 2017): 10047–66. http://dx.doi.org/10.1175/jcli-d-17-0174.1.

Abstract:
Recently, a simple stochastic dynamical model was developed that automatically captures the diversity and intermittency of El Niño–Southern Oscillation (ENSO) in nature, where state-dependent stochastic wind bursts and nonlinear advection of sea surface temperature (SST) are coupled to simple ocean–atmosphere processes that are otherwise deterministic, linear, and stable. In the present article, it is further shown that the model can reproduce qualitatively the ENSO synchronization (or phase locking) to the seasonal cycle in nature. This goal is achieved by incorporating a cloud radiative feedback that is derived naturally from the model’s atmosphere dynamics with no ad hoc assumptions and accounts in simple fashion for the marked seasonal variations of convective activity and cloud cover in the eastern Pacific. In particular, the weak convective response to SSTs in boreal fall favors the eastern Pacific warming that triggers El Niño events while the increased convective activity and cloud cover during the following spring contributes to the shutdown of those events by blocking incoming shortwave solar radiation. In addition to simulating the ENSO diversity with realistic non-Gaussian statistics in different Niño regions, the eastern Pacific moderate and super El Niño and the central Pacific El Niño and La Niña show a realistic chronology with a tendency to peak in boreal winter as well as decreased predictability in spring consistent with the persistence barrier in nature. The incorporation of other possible seasonal feedbacks in the model is also documented for completeness.
35

WANG, CHUNTIAN, YUAN ZHANG, ANDREA L. BERTOZZI, and MARTIN B. SHORT. "A stochastic-statistical residential burglary model with independent Poisson clocks." European Journal of Applied Mathematics 32, no. 1 (February 5, 2020): 32–58. http://dx.doi.org/10.1017/s0956792520000029.

Abstract:
Residential burglary is a social problem in every major urban area. As such, progress has been made in developing quantitative, informative and applicable models for this type of crime: (1) the Deterministic-time-step (DTS) model [Short, D’Orsogna, Pasour, Tita, Brantingham, Bertozzi & Chayes (2008) Math. Models Methods Appl. Sci. 18, 1249–1267], a pioneering agent-based statistical model of residential burglary criminal behaviour, with deterministic time steps assumed for arrivals of events, in which the residential burglary aggregate pattern formation is quantitatively studied for the first time; (2) the SSRB model (agent-based stochastic-statistical model of residential burglary crime) [Wang, Zhang, Bertozzi & Short (2019) Active Particles, Vol. 2, Springer Nature Switzerland AG, in press], in which the stochastic component of the model is theoretically analysed by introduction of a Poisson clock, with time steps turned into exponentially distributed random variables. To incorporate independence of agents, in this work, five types of Poisson clocks are taken into consideration. Poisson clocks (I), (II) and (III) govern independent agent actions of burglary behaviour, and Poisson clocks (IV) and (V) govern interactions of agents with the environment. All the Poisson clocks are independent. The time increments are independently exponentially distributed, which is more suitable for modelling individual actions of agents. Applying the method of merging and splitting of Poisson processes, the independent Poisson clocks can be treated as one, making the analysis and simulation similar to the SSRB model. A martingale formula is derived, which consists of a deterministic and a stochastic component. A scaling property of the martingale formulation with varying burglar population is found, which provides a theory for the finite-size effects. The theory is supported by quantitative numerical simulations using pattern-formation quantifying statistics. Results presented here will be transformative for both the application and the analysis of agent-based models, for residential burglary or in other domains.
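The merging and splitting of Poisson processes used above rests on the superposition property: independent Poisson clocks with rates λᵢ are equivalent to one merged clock with rate Σλᵢ whose events are attributed to clock i with probability λᵢ/Σλᵢ. A small simulation sketch with hypothetical rates:

```python
import random

def merged_poisson_events(rates, t_end, rng):
    """Simulate independent Poisson clocks via merging/splitting:
    superpose them into one clock of rate sum(rates), then attribute
    each event to clock i with probability rates[i] / sum(rates)."""
    total = sum(rates)
    counts = [0] * len(rates)
    t = rng.expovariate(total)  # exponential waiting time of merged clock
    while t <= t_end:
        # split: pick the clock responsible, in proportion to its rate
        u = rng.random() * total
        acc = 0.0
        for i, lam in enumerate(rates):
            acc += lam
            if u < acc:
                counts[i] += 1
                break
        t += rng.expovariate(total)
    return counts
```

Each per-clock event stream recovered this way is itself a Poisson process at its own rate, which is why the merged simulation is statistically equivalent to running the clocks separately.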
36

van Lier-Walqui, Marcus, Tomislava Vukicevic, and Derek J. Posselt. "Quantification of Cloud Microphysical Parameterization Uncertainty Using Radar Reflectivity." Monthly Weather Review 140, no. 11 (November 1, 2012): 3442–66. http://dx.doi.org/10.1175/mwr-d-11-00216.1.

Abstract:
Abstract Uncertainty in cloud microphysical parameterization—a leading order contribution to numerical weather prediction error—is estimated using a Markov chain Monte Carlo (MCMC) algorithm. An inversion is performed on 10 microphysical parameters using radar reflectivity observations with vertically covarying error as the likelihood constraint. An idealized 1D atmospheric column model with prescribed forcing is used to simulate the microphysical behavior of a midlatitude squall line. Novel diagnostics are employed for the probabilistic investigation of individual microphysical process behavior vis-à-vis parameter uncertainty. Uncertainty in the microphysical parameterization is presented via posterior probability density functions (PDFs) of parameters, observations, and microphysical processes. The results of this study show that radar reflectivity observations, as expected, provide a much stronger constraint on microphysical parameters than column-integral observations, in most cases reducing both the variance and bias in the maximum likelihood estimate of parameter values. This highlights the enhanced potential of radar reflectivity observations to provide information about microphysical processes within convective storm systems despite the presence of strongly nonlinear relationships within the microphysics model. The probabilistic analysis of parameterization uncertainty in terms of both parameter and process activity PDFs suggests the prospect of a stochastic representation of microphysical parameterization uncertainty—specifically the results indicate that error may be more easily represented and estimated by microphysical process uncertainty rather than microphysical parameter uncertainty. In addition, these new methods of analysis allow for a detailed investigation of the full nonlinear and multivariate relationships between microphysical parameters, microphysical processes, and radar observations.
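A hedged sketch of the core MCMC machinery, random-walk Metropolis sampling of a posterior, applied to a toy inverse problem of recovering a single parameter from noisy observations. This is an illustration of the sampler family, not the authors' 10-parameter, radar-constrained setup; the observations and noise level below are invented:

```python
import math
import random

def metropolis(log_post, theta0, n_steps, step, rng):
    """Random-walk Metropolis: propose theta' = theta + N(0, step^2) and
    accept with probability min(1, exp(log_post(theta') - log_post(theta)))."""
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_steps):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_post(proposal)
        if math.log(rng.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain.append(theta)
    return chain

# toy inverse problem: infer the mean of noisy observations
# (flat prior, Gaussian likelihood with known error sigma = 0.1)
observations = [1.9, 2.1, 2.0, 1.8, 2.2]
def log_posterior(theta):
    return -0.5 * sum((y - theta) ** 2 for y in observations) / 0.1 ** 2
```

The retained chain, after discarding burn-in, approximates the posterior PDF of the parameter, which is the kind of object the study reports for each microphysical parameter.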
37

Krishna, Gudur Vamsi, and Dr K. F. Bharati. "Improvising Dynamic Cloud Resource Allocation to Optimise QoS and Cost Effectiveness." International Journal of Engineering and Advanced Technology 10, no. 3 (February 28, 2021): 206–9. http://dx.doi.org/10.35940/ijeat.d2289.0210321.

Abstract:
Cloud computing offers streamlined tools for efficient business processes. Cloud providers typically offer two distinct forms of usage plans: reserved and on-demand. Reserved plans provide inexpensive long-term contracts, while on-demand plans are more expensive and suited to short rather than long periods. In order to satisfy current customer demands at reasonable rates, cloud resources must be provisioned wisely. Many current works depend mainly on low-cost reserved-resource strategies, which may lead to under-provisioning or over-provisioning, rather than on costly on-demand solutions. Since misallocation can cause enormous availability costs, and cloud demand is highly variable, resource allocation has become an extremely challenging issue. A hybrid approach to allocating cloud resources according to complex customer demands is suggested in this article. The strategy is constructed as a two-step mechanism consisting of a reservation stage followed by a flexible stage. In this way, by constructing each step primarily as an optimization problem, we minimize the total cost of implementation, thereby preserving service quality. By modeling client requirements as probability distributions, owing to the uncertain nature of cloud requests, we set up a stochastic optimization-based approach. Our technique is applied using various approaches, and the results demonstrate its effectiveness in assigning individual cloud resources.
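The reserved versus on-demand trade-off above is essentially a newsvendor problem: pay a low unit price for reserved capacity up front and a higher on-demand price for overflow. A sketch under assumed prices and a sampled demand distribution; the function names and constants are hypothetical, not the paper's formulation:

```python
import statistics

def expected_cost(reserved, demand_samples, c_reserved, c_ondemand):
    """Sample-average cost: reserved capacity is always paid for; demand
    above it is served on-demand at the higher unit price."""
    return statistics.mean(
        c_reserved * reserved + c_ondemand * max(0, d - reserved)
        for d in demand_samples)

def best_reservation(demand_samples, c_reserved, c_ondemand):
    """Reservation level minimising the sample-average cost; for this
    newsvendor-style problem the optimum sits near the
    (1 - c_reserved / c_ondemand) demand quantile."""
    return min(sorted(set(demand_samples)),
               key=lambda r: expected_cost(r, demand_samples,
                                           c_reserved, c_ondemand))
```

With reserved capacity four times cheaper than on-demand, the critical ratio is 0.75, so the cost-minimising reservation covers roughly the 75th percentile of demand.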
38

Jeffery, Christopher A., Jon M. Reisner, and Miroslaw Andrejczuk. "Another Look at Stochastic Condensation for Subgrid Cloud Modeling: Adiabatic Evolution and Effects." Journal of the Atmospheric Sciences 64, no. 11 (November 1, 2007): 3949–69. http://dx.doi.org/10.1175/2006jas2147.1.

Abstract:
Abstract The theory of stochastic condensation, which models the impact of an ensemble of unresolved supersaturation fluctuations S′ on the volume-averaged droplet-size distribution f (r), is revisited in the modern context of subgrid cloud parameterization. The exact transition probability density for droplet radius driven by independent, Gaussian S′ fluctuations that are periodically renewed is derived and shown to be continuous but not smooth. The Fokker–Planck model follows naturally as the smooth-in-time approximation to this discrete-in-time process. Evolution equations for the moments of f (r) that include a contribution from subgrid S′ fluctuations are presented; these new terms are easily implemented in moment-based cloud schemes that resolve supersaturation. New, self-consistent expressions for the evolution of f (r) and mean supersaturation S in a closed, adiabatic volume are derived without approximation; quite appropriately, these coupled equations exactly conserve total water mass. The behavior of this adiabatic system, which serves as a surrogate for a closed model grid column, is analyzed in detail. In particular, a new nondimensional number is derived that determines the relative impact of S′ fluctuations on droplet spectral evolution, and the contribution of fluctuations to S is shown to be negative definite and maximal near the accommodation length and has a direct correspondence to the analysis of Cooper. Observational support for the theory of stochastic condensation is found in cloud droplet spectra from cumulus cloud fields measured during the Rain in the Cumulus over the Ocean (RICO) and Small Cumulus Microphysics Study (SCMS) campaigns. Increasing spectral broadening with increasing spatial scale is discovered and compares well with theoretical predictions. However, the observed spectra show evidence of non-Gaussian S′ fluctuations and inhomogeneous mixing, processes neglected in the current theory.
39

Deng, Qiang, Boualem Khouider, and Andrew J. Majda. "The MJO in a Coarse-Resolution GCM with a Stochastic Multicloud Parameterization." Journal of the Atmospheric Sciences 72, no. 1 (January 1, 2015): 55–74. http://dx.doi.org/10.1175/jas-d-14-0120.1.

Abstract:
Abstract The representation of the Madden–Julian oscillation (MJO) is still a challenge for numerical weather prediction and general circulation models (GCMs) because of the inadequate treatment of convection and the associated interactions across scales by the underlying cumulus parameterizations. One new promising direction is the use of the stochastic multicloud model (SMCM) that has been designed specifically to capture the missing variability due to unresolved processes of convection and their impact on the large-scale flow. The SMCM specifically models the area fractions of the three cloud types (congestus, deep, and stratiform) that characterize organized convective systems on all scales. The SMCM captures the stochastic behavior of these three cloud types via a judiciously constructed Markov birth–death process using a particle interacting lattice model. The SMCM has been successfully applied for convectively coupled waves in a simplified primitive equation model and validated against radar data of tropical precipitation. In this work, the authors use for the first time the SMCM in a GCM. The authors build on previous work of coupling the High-Order Methods Modeling Environment (HOMME) NCAR GCM to a simple multicloud model. The authors tested the new SMCM-HOMME model in the parameter regime considered previously and found that the stochastic model drastically improves the results of the deterministic model. Clear MJO-like structures with many realistic features from nature are reproduced by SMCM-HOMME in the physically relevant parameter regime including wave trains of MJOs that organize intermittently in time. Also one of the caveats of the deterministic simulation of requiring a doubling of the moisture background is not required anymore.
40

Ren, Yongtai, Jiping Yao, Dongyang Xu, and Jing Wang. "A comprehensive evaluation of regional water safety systems based on a similarity cloud model." Water Science and Technology 76, no. 3 (April 21, 2017): 594–604. http://dx.doi.org/10.2166/wst.2017.235.

Abstract:
Regional water safety systems are affected by social, economic, ecological, hydrological and other factors, and their effects are complicated and variable. Studying water safety systems is crucial to promoting the coordinated development of regional water safety systems and anthropogenic processes. Thus, a similarity cloud model is developed to simulate the evolution mechanisms of fuzzy and complex regional systems of water security and overcome the uncertainty that is associated with the indices that are used in water safety index systems. The cloud generator is used to transform reciprocally between a qualitative cloud image and quantitative cloud characteristic values, and the stochastic weight assignment method is used to determine the weights of the evaluation indices. The results of case studies show that Jiansanjiang's water safety systems were in a safe state in 2002–2011, but the water safety systems in the arid area of Yinchuan City were in a dangerous state in 2006–2007 because of climate factors and a lack of effective water and soil resource protection. The experimental results are consistent with the research subjects' actual situations, and the proposed model provides a tool for decision makers to better understand the security issues that are associated with regional water safety systems.
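In this literature a cloud model is usually specified by three characteristic values, expectation Ex, entropy En, and hyper-entropy He, and a forward normal cloud generator turns them into "cloud drops" with membership degrees. A hedged sketch of the generic generator (not the paper's specific similarity cloud model; parameters below are invented):

```python
import math
import random

def forward_cloud(ex, en, he, n_drops, rng):
    """Forward normal cloud generator: for each drop, perturb the entropy
    (En' ~ N(En, He^2)), draw a value x ~ N(Ex, En'^2), and assign the
    membership degree exp(-(x - Ex)^2 / (2 En'^2))."""
    drops = []
    for _ in range(n_drops):
        en_p = rng.gauss(en, he)
        if en_p == 0.0:  # guard against the degenerate zero-entropy draw
            en_p = 1e-12
        x = rng.gauss(ex, abs(en_p))
        mu = math.exp(-(x - ex) ** 2 / (2.0 * en_p ** 2))
        drops.append((x, mu))
    return drops
```

The hyper-entropy He controls how "fuzzy" the cloud of drops is around the bell curve, which is how the model encodes the uncertainty of the evaluation indices.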
41

Rémillard, J., A. M. Fridlind, A. S. Ackerman, G. Tselioudis, P. Kollias, D. B. Mechem, H. E. Chandler, et al. "Use of Cloud Radar Doppler Spectra to Evaluate Stratocumulus Drizzle Size Distributions in Large-Eddy Simulations with Size-Resolved Microphysics." Journal of Applied Meteorology and Climatology 56, no. 12 (December 2017): 3263–83. http://dx.doi.org/10.1175/jamc-d-17-0100.1.

Abstract:
AbstractA case study of persistent stratocumulus over the Azores is simulated using two independent large-eddy simulation (LES) models with bin microphysics, and forward-simulated cloud radar Doppler moments and spectra are compared with observations. Neither model is able to reproduce the monotonic increase of downward mean Doppler velocity with increasing reflectivity that is observed under a variety of conditions, but for differing reasons. To a varying degree, both models also exhibit a tendency to produce too many of the largest droplets, leading to excessive skewness in Doppler velocity distributions, especially below cloud base. Excessive skewness appears to be associated with an insufficiently sharp reduction in droplet number concentration at diameters larger than ~200 μm, where a pronounced shoulder is found for in situ observations and a sharp reduction in reflectivity size distribution is associated with relatively narrow observed Doppler spectra. Effectively using LES with bin microphysics to study drizzle formation and evolution in cloud Doppler radar data evidently requires reducing numerical diffusivity in the treatment of the stochastic collection equation; if that is accomplished sufficiently to reproduce typical spectra, progress toward understanding drizzle processes is likely.
42

Krasnopolsky, Vladimir M., Michael S. Fox-Rabinovitz, and Alexei A. Belochitski. "Using Ensemble of Neural Networks to Learn Stochastic Convection Parameterizations for Climate and Numerical Weather Prediction Models from Data Simulated by a Cloud Resolving Model." Advances in Artificial Neural Systems 2013 (May 7, 2013): 1–13. http://dx.doi.org/10.1155/2013/485913.

Abstract:
A novel approach based on the neural network (NN) ensemble technique is formulated and used for development of a NN stochastic convection parameterization for climate and numerical weather prediction (NWP) models. This fast parameterization is built based on learning from data simulated by a cloud-resolving model (CRM) initialized with and forced by the observed meteorological data available for 4-month boreal winter from November 1992 to February 1993. CRM-simulated data were averaged and processed to implicitly define a stochastic convection parameterization. This parameterization is learned from the data using an ensemble of NNs. The NN ensemble members are trained and tested. The inherent uncertainty of the stochastic convection parameterization derived following this approach is estimated. The newly developed NN convection parameterization has been tested in the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM). It produced reasonable and promising decadal climate simulations for a large tropical Pacific region. The extent of the adaptive ability of the developed NN parameterization to the changes in the model environment is briefly discussed. This paper is devoted to a proof of concept and discusses methodology, initial results, and the major challenges of using the NN technique for developing convection parameterizations for climate and NWP models.
43

Alfonso, L., G. B. Raga, and D. Baumgardner. "The validity of the kinetic collection equation revisited." Atmospheric Chemistry and Physics 8, no. 4 (February 25, 2008): 969–82. http://dx.doi.org/10.5194/acp-8-969-2008.

Abstract:
Abstract. The kinetic collection equation (KCE) describes the evolution of the average droplet spectrum due to successive events of collision and coalescence. Fluctuations and non-zero correlations present in the stochastic coalescence process would imply that the size distributions may not be correctly modeled by the KCE. In this study we expand the known analytical studies of the coalescence equation with some numerical tools such as Monte Carlo simulations of the coalescence process. The validity time of the KCE was estimated by calculating the maximum of the ratio of the standard deviation for the largest droplet mass over all the realizations to the averaged value. A good correspondence between the analytical and the numerical approaches was found for all the kernels. The expected values from analytical solutions of the KCE, were compared with true expected values of the stochastic collection equation (SCE) estimated with Gillespie's Monte Carlo algorithm and analytical solutions of the SCE, after and before the breakdown time. The possible implications for cloud physics are discussed, in particular the possibility of application of these results to kernels modified by turbulence and electrical processes.
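Gillespie's Monte Carlo algorithm mentioned above draws an exponential waiting time at the total coalescence rate and then merges a chosen pair of droplets. A minimal sketch for a constant collection kernel (the kernel value is a hypothetical placeholder; mass is conserved by construction, and pair choice is uniform, which is exact only for the constant kernel):

```python
import random

def gillespie_coalescence(masses, kernel, t_end, rng):
    """Gillespie-style Monte Carlo for stochastic coalescence with a
    constant collection kernel: total rate K*n*(n-1)/2, exponential
    waiting times, and a uniformly chosen pair merged at each event."""
    masses = list(masses)
    t = 0.0
    while len(masses) > 1:
        n = len(masses)
        t += rng.expovariate(kernel * n * (n - 1) / 2.0)
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)
        masses[i] += masses[j]  # coalescence conserves mass
        masses.pop(j)
    return masses
```

Averaging the largest-droplet mass over many such realizations against the KCE solution is, in essence, how the validity time discussed in the abstract is probed.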
44

Alfonso, L., G. B. Raga, and D. Baumgardner. "The validity of the kinetic collection equation revisited." Atmospheric Chemistry and Physics Discussions 7, no. 5 (September 20, 2007): 13733–71. http://dx.doi.org/10.5194/acpd-7-13733-2007.

Abstract:
Abstract. The kinetic collection equation (KCE) describes the evolution of the average droplet spectrum due to successive events of collision and coalescence. Fluctuations and non-zero correlations present in the stochastic coalescence process would imply that the size distributions may not be correctly modelled by the KCE. In this study we expand the known analytical studies of the coalescence equation with some numerical tools such as Monte Carlo simulations of the coalescence process. The validity time of the KCE was estimated by calculating the maximum of the ratio of the standard deviation for the largest droplet mass over all the realizations to the averaged value. A good correspondence between the analytical and the numerical approaches was found for all the kernels studied. The expected values from analytical solutions of the KCE, were compared with true expected values of the stochastic collection equation (SCE) estimated with Gillespie's Monte Carlo algorithm and analytical solutions of the SCE, after and before the breakdown time. The possible implications for cloud physics are discussed, in particular the possibility of application of these results to kernels modified by turbulence and electrical processes.
45

Leung, Kimberly, Max Velado, Aneesh Subramanian, Guang J. Zhang, Richard C. J. Somerville, and Samuel S. P. Shen. "Simulation of High-Resolution Precipitable Water Data by a Stochastic Model with a Random Trigger." Advances in Data Science and Adaptive Analysis 08, no. 02 (April 2016): 1650006. http://dx.doi.org/10.1142/s2424922x16500066.

Abstract:
We use a stochastic differential equation (SDE) model with a random precipitation trigger for mass balance to simulate the 20 s temporal resolution column precipitable water vapor (PWV) data during the Tropical Warm Pool International Cloud Experiment (TWP-ICE) period of January 20 to February 15, 2006 at Darwin, Australia. The trigger is determined by an exponential cumulative distribution function, the time step size in the SDE simulation, and a random precipitation indicator uniformly distributed over [0, 1]. Compared with the observed data, the simulations have similar means, extremes, skewness, kurtosis, and overall shapes of probability distribution, and are temporally well synchronized in their increases and decreases, but have about 20% lower standard deviation. Based on a 1000-day run, the correlations between the model data and the observations in the TWP-ICE period were computed in a moving time window of 25 days and show quasi-periodic variations between (−0.675, 0.697). This shows that the results are robust for the stochastic model simulation of the observed PWV data, whose fractal dimension is 1.9, while the dimension of the simulated data is also about 1.9. This agreement and numerous sensitivity experiments form a test of the feasibility of using an SDE model to simulate precipitation processes in more complex climate models.
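The model described above can be caricatured as an Euler-Maruyama integration of a mean-reverting SDE with a random precipitation trigger whose per-step firing probability follows an exponential CDF compared against a uniform draw. All constants below are hypothetical placeholders, not the fitted TWP-ICE values:

```python
import math
import random

def simulate_pwv(w0, dt, n_steps, rng,
                 relax=0.1, w_mean=40.0, sigma=1.5,
                 trigger_rate=0.05, drawdown=0.3):
    """Euler-Maruyama for dW = relax*(w_mean - W) dt + sigma dB, with a
    random precipitation trigger: in each step precipitation fires with
    probability 1 - exp(-trigger_rate * dt) (an exponential CDF compared
    to a uniform indicator) and removes a fraction `drawdown` of W."""
    w = w0
    path = [w]
    p_trigger = 1.0 - math.exp(-trigger_rate * dt)
    for _ in range(n_steps):
        w += relax * (w_mean - w) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < p_trigger:  # uniform indicator vs. exponential CDF
            w -= drawdown * w
        w = max(w, 0.0)  # column water cannot be negative
        path.append(w)
    return path
```

The drawdown events pull the long-run mean below the relaxation target, giving the skewed, intermittently interrupted paths characteristic of triggered precipitation.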
46

Huang, Jiwei, Yihan Lan, and Minfeng Xu. "A Simulation-Based Approach of QoS-Aware Service Selection in Mobile Edge Computing." Wireless Communications and Mobile Computing 2018 (November 1, 2018): 1–10. http://dx.doi.org/10.1155/2018/5485461.

Abstract:
Edge computing is an emerging computational model that enables efficient offloading of service requests to edge servers. By leveraging the well-developed technologies of cloud computing, the computing capabilities of mobile devices can be significantly enhanced in edge computing paradigm. However, upon the arrival of user requests, whether to dispatch them to the edge servers or cloud servers in order to guarantee the quality of service (QoS), i.e., the QoS-aware service selection problem, still remains an open problem. Due to the dynamic mobility of users and the variation of task arrivals and service processes, it is extremely costly to obtain the global optimal solution by both mathematical approaches and simulation-based schemes. To attack this challenge, this paper proposes a simulation-based approach of QoS-aware dynamic service selection for mobile edge computing systems. Stochastic system models are presented and mathematical analyses are provided. Based on the analytical results, the QoS-aware service selection problem is formulated by a dynamic optimization problem. Goal softening is applied to the original problem, and service selection algorithms are designed using ordinal optimization techniques. Simulation experiments are conducted to validate the efficacy of the approach presented in this paper.
47

Hashino, Tempei, and Gregory J. Tripoli. "The Spectral Ice Habit Prediction System (SHIPS). Part IV: Box Model Simulations of the Habit-Dependent Aggregation Process." Journal of the Atmospheric Sciences 68, no. 6 (June 1, 2011): 1142–61. http://dx.doi.org/10.1175/2011jas3667.1.

Abstract:
The purpose of this paper is to assess the prediction of particle properties of aggregates and particle size distributions with the Spectral Ice Habit Prediction System (SHIPS) and to investigate the effects of crystal habits on the aggregation process. Aggregation processes of ice particles are critical to the understanding of precipitation and the radiative signatures of cloud systems. Conventional approaches taken in cloud-resolving models (CRMs) are not ideal for studying the effects of crystal habits on aggregation processes because the properties of aggregates have to be assumed beforehand. As described in Part III, SHIPS solves the stochastic collection equation along with particle property variables that contain information about crystal habits and maximum dimensions of aggregates. This approach makes it possible to simulate the properties of aggregates explicitly and continuously in CRMs according to the crystal habits. The aggregation simulations were implemented in a simple model setup, assuming seven crystal habits and several initial particle size distributions (PSDs). The predicted PSDs showed good agreement with observations after rescaling, except at the large-size end. The ice particle properties predicted by the model, such as the mass–dimensional (m-D) relationship and the relationship between the diameter of aggregates and the number of component crystals in an aggregate, were found to be quantitatively similar to those observed. Furthermore, these predictions were dependent on the initial PSDs and habits. A simple model for the growth of a particle's maximum dimension was able to simulate the typically observed fractal dimension of aggregates when an observed value of the separation ratio of two particles was used.
A detailed analysis of the collection kernel indicates that the m-D relationship unique to each crystal habit has a large impact on the growth rate of aggregates through the cross-sectional area or terminal velocity difference, depending on the initial equivalent particle distribution. A significant decrease in terminal velocity differences was found in the inertial flow regime for all the habits but the constant-density sphere. This led to the formation of a local maximum in the collection kernel and, in turn, to an identifiable mode in the PSDs. Remaining issues that must be addressed in order to improve the aggregation simulation with the quasi-stochastic model are discussed.
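The stochastic collection equation at the heart of SHIPS can be illustrated in a box-model setting like the one the abstract describes. The sketch below is a minimal discrete Smoluchowski solver with a constant kernel and a monodisperse initial PSD; it is not the paper's habit-dependent scheme (habit effects would enter through the kernel K), only an illustration of the gain/loss bookkeeping.

```python
import numpy as np

def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski (stochastic
    collection) equation. n[i] is the number density of aggregates made of
    i+1 monomers; K[i, j] is the collection kernel for sizes i+1 and j+1."""
    nbins = len(n)
    gain = np.zeros(nbins)
    loss = np.zeros(nbins)
    for i in range(nbins):
        for j in range(nbins):
            rate = K[i, j] * n[i] * n[j]
            loss[i] += rate
            k = i + j + 1  # (i+1) + (j+1) monomers merge into bin index k
            if k < nbins:
                gain[k] += 0.5 * rate  # halve to avoid double-counting pairs
    return n + dt * (gain - loss)

nbins = 64
n = np.zeros(nbins)
n[0] = 1.0                      # monodisperse initial PSD
K = np.ones((nbins, nbins))     # constant kernel; habit effects would go here
for _ in range(200):
    n = smoluchowski_step(n, K, 0.01)

total_mass = sum((i + 1) * n[i] for i in range(nbins))
```

With gain and loss terms written this way, total mass is conserved exactly except for the small flux past the largest bin, which is a useful sanity check on any collection scheme.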
APA, Harvard, Vancouver, ISO, and other styles
48

Brito, Carlos, Laécio Rodrigues, Brena Santos, Iure Fé, Tuan-Anh Nguyen, Dugki Min, Jae-Woo Lee, and Francisco Airton Silva. "Stochastic Model Driven Performance and Availability Planning for a Mobile Edge Computing System." Applied Sciences 11, no. 9 (April 29, 2021): 4088. http://dx.doi.org/10.3390/app11094088.

Full text
Abstract:
Mobile Edge Computing (MEC) has emerged as a promising network computing paradigm that places cloud/edge computing resources near mobile devices in local areas to reduce network latency. In that context, MEC solutions are required to dynamically allocate mobile requests as close as possible to their computing resources. Moreover, the computing power and resource capacity of MEC server machines can directly impact the performance and operational availability of mobile apps and services. Systems practitioners must understand the trade-off between performance and availability during the system design stages, and analytical models are well suited to this objective. Therefore, this paper proposes Stochastic Petri Net (SPN) models to evaluate both the performance and availability of MEC environments. Unlike previous work, our proposal includes unique metrics such as discard probability, along with a sensitivity analysis that guides the evaluation decisions. The models are highly flexible, with fourteen transitions in the base model and twenty-five transitions in the extended model. The performance model was validated with a real experiment, whose results indicated agreement between experiment and model with a p-value of 0.684 by t-test. Regarding availability, the results of the extended model, unlike the base model, always remain above 99%, since it adds redundancy to the components that were limiting availability in the base model. A comprehensive numerical analysis is performed, and the results of this study can serve as a practical guide for designing MEC system architectures by making it possible to evaluate the trade-off between Mean Response Time (MRT) and resource utilization.
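The metrics the abstract names (discard probability, MRT, utilization) can be illustrated with a much simpler stand-in for an SPN: closed-form results for an M/M/1/K queue modeling an edge server with a finite admission buffer. This is an assumption-laden sketch, not the paper's actual model; the arrival rate, service rate, and capacity values below are arbitrary.

```python
def mm1k_metrics(lam, mu, K):
    """Discard probability, mean response time, and utilization for an
    M/M/1/K queue: lam = arrival rate, mu = service rate, K = capacity
    (jobs in service plus waiting)."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        probs = [1.0 / (K + 1)] * (K + 1)
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        probs = [rho ** n / norm for n in range(K + 1)]
    p_discard = probs[K]                 # an arrival finds the buffer full
    throughput = lam * (1 - p_discard)   # rate of accepted requests
    mean_jobs = sum(n * p for n, p in enumerate(probs))
    mrt = mean_jobs / throughput         # Little's law on accepted jobs
    utilization = 1 - probs[0]           # server busy whenever queue nonempty
    return p_discard, mrt, utilization

# Hypothetical edge server: 8 req/s offered, 10 req/s capacity, buffer of 5.
p_disc, mrt, util = mm1k_metrics(lam=8.0, mu=10.0, K=5)
```

Sweeping `K` or `mu` in this toy model exposes the same qualitative trade-off the paper studies: a larger buffer lowers the discard probability but raises the MRT.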
APA, Harvard, Vancouver, ISO, and other styles
49

Goudenège, Ludovic, Adam Larat, Julie Llobell, Marc Massot, David Mercier, Olivier Thomine, and Aymeric Vié. "Statistical and probabilistic modeling of a cloud of particles coupled with a turbulent fluid." ESAIM: Proceedings and Surveys 65 (2019): 401–24. http://dx.doi.org/10.1051/proc/201965401.

Full text
Abstract:
This paper presents a novel exploratory formalism, the end goal of which is the numerical simulation of the dynamics of a cloud of particles weakly or strongly coupled with a turbulent fluid. Given the broad range of expertise among the authors, the content of this paper spans a wide range of related notions, from the physics of turbulence to the rigorous definition of stochastic processes. Our approach is to develop reduced-order models for the dynamics of both the carrying and carried phases which remain consistent within this formalism, and to set up a numerical process to validate these models. The novelties of this paper lie in gathering a large panel of mathematical and physical definitions and results within a common framework and an agreed vocabulary (Sections 1 and 2), and in some preliminary results and achievements within this context (Section 3). While the first three sections are simplified to the context of a gas field in which the disperse phase only receives energy through drag, the fourth section extends the study to the more complex situation in which the disperse phase also interacts with the continuous phase in an energy-conservative manner. This allows us to present the perspectives of the project and to conclude.
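A common reduced-order model of the kind the abstract alludes to, in the simplified one-way-coupled regime where the disperse phase only receives energy through drag, is an Ornstein-Uhlenbeck process for the fluid velocity "seen" by a particle plus linear Stokes drag. The sketch below is an illustrative assumption, not the paper's formalism; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fluid velocity seen by the particle: Ornstein-Uhlenbeck surrogate for
# turbulence. Particle responds through linear Stokes drag only.
T_L = 1.0        # Lagrangian integral time scale of the fluid
sigma = 0.5      # turbulent velocity standard deviation
tau_p = 0.2      # particle relaxation (drag) time
dt, nsteps = 1e-3, 200_000

u = 0.0          # fluid velocity seen by the particle
v = 0.0          # particle velocity
v_samples = np.empty(nsteps)
for k in range(nsteps):
    # Euler-Maruyama step for the OU process
    u += -u / T_L * dt + sigma * np.sqrt(2 * dt / T_L) * rng.standard_normal()
    # Stokes drag relaxes the particle toward the local fluid velocity
    v += (u - v) / tau_p * dt
    v_samples[k] = v

# Tchen-Hinze result for exponential autocorrelation:
# Var(v) = sigma^2 * T_L / (T_L + tau_p)
var_theory = sigma ** 2 * T_L / (T_L + tau_p)
var_sim = v_samples[nsteps // 10:].var()   # discard the initial transient
```

The drag acts as a low-pass filter, so the particle velocity variance falls below the fluid's as `tau_p` grows, which is one of the consistency properties a reduced-order model of this type must reproduce.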
APA, Harvard, Vancouver, ISO, and other styles
50

HASTINGS, I. M., and M. J. MACKINNON. "The emergence of drug-resistant malaria." Parasitology 117, no. 5 (November 1998): 411–17. http://dx.doi.org/10.1017/s0031182098003291.

Full text
Abstract:
Stochastic processes play a vital role in the early stages of the evolution of drug-resistant malaria. We present a simple and flexible method for investigating these processes and understanding how they affect the emergence of drug-resistant malaria. Qualitatively different predictions can be made depending on the biological and epidemiological factors which prevail in the field. Intense intra-host competition between co-infecting clones, low numbers of genes required to encode resistance, and high drug usage all encourage the emergence of drug resistance. Drug-resistant forms present at the time drug application starts are less likely to survive than those which arise subsequently; survival of the former largely depends on how rapidly malaria population size stabilizes after drug application. In particular, whether resistance is more likely to emerge in areas of high or low transmission depends on malaria intra-host dynamics, the level of drug usage, the population regulation of malaria, and the number of genes required to encode resistance. These factors are discussed in relation to the practical implementation of drug control programmes.
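The stochastic loss of rare resistant lineages that the abstract describes is classically captured by a branching process. The sketch below uses a Galton-Watson process with a Poisson offspring distribution as an illustrative stand-in (it is not the authors' model): the survival probability of a single resistant founder is 1 - q, where the extinction probability q is the smallest fixed point of q = e^{R(q-1)}.

```python
import math

def survival_probability(R, tol=1e-12):
    """Probability that a lineage founded by one drug-resistant parasite
    escapes stochastic loss, for a Galton-Watson branching process with a
    Poisson(R) offspring distribution (R = effective reproduction number
    under drug pressure)."""
    if R <= 1.0:
        return 0.0          # subcritical or critical: extinction is certain
    q = 0.5                 # iterate the fixed point q = exp(R * (q - 1))
    for _ in range(10_000):
        q_new = math.exp(R * (q - 1.0))
        if abs(q_new - q) < tol:
            break
        q = q_new
    return 1.0 - q

# Even a supercritical mutant (R > 1) often dies out by chance: for R = 1.2
# the founding lineage survives only about a third of the time.
s_low, s_high = survival_probability(1.2), survival_probability(2.0)
```

This matches the abstract's qualitative point: whether resistance emerges depends sharply on how far the resistant form's effective reproduction number sits above one, which in turn reflects drug usage and intra-host dynamics.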
APA, Harvard, Vancouver, ISO, and other styles