
Journal articles on the topic 'Interpolated Markov model'

Consult the top 19 journal articles for your research on the topic 'Interpolated Markov model.'

1

Jones, P. G., and P. K. Thornton. "Fitting a third-order Markov rainfall model to interpolated climate surfaces." Agricultural and Forest Meteorology 97, no. 3 (November 1999): 213–31. http://dx.doi.org/10.1016/s0168-1923(99)00067-2.

2

Aguilar, F. J., M. A. Aguilar, J. L. Blanco, A. Nemmaoui, and A. M. García Lorca. "ANALYSIS AND VALIDATION OF GRID DEM GENERATION BASED ON GAUSSIAN MARKOV RANDOM FIELD." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B2 (June 7, 2016): 277–84. http://dx.doi.org/10.5194/isprs-archives-xli-b2-277-2016.

Abstract:
Digital Elevation Models (DEMs) are considered one of the most relevant types of geospatial data for carrying out land-cover and land-use classification. This work deals with the application of a mathematical framework based on a Gaussian Markov Random Field (GMRF) to interpolate grid DEMs from scattered elevation data. The performance of the GMRF interpolation model was tested on a set of LiDAR data (0.87 points/m²) provided by the Spanish Government (PNOA Programme) over a complex working area mainly covered by greenhouses in Almería, Spain. The original LiDAR data were decimated by randomly removing different fractions of the original points (from 10% up to 99% of points removed). In every case, the remaining (scattered observed) points were used to obtain a 1 m grid spacing GMRF-interpolated Digital Surface Model (DSM) whose accuracy was assessed by means of the set of previously extracted checkpoints. The GMRF accuracy results were compared with those provided by the widely known Triangulation with Linear Interpolation (TLI). Finally, the GMRF method was applied to a real-world case consisting of filling the gaps in the LiDAR-derived DSM after manually filtering out non-ground points to obtain a Digital Terrain Model (DTM). Both GMRF and TLI produced visually pleasing and similar results in terms of vertical accuracy. As an added bonus, the GMRF mathematical framework makes it possible both to retrieve the estimated uncertainty for every interpolated elevation point (the DEM uncertainty) and to include break lines or terrain discontinuities between adjacent cells to produce higher-quality DTMs.
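In rough outline, the GMRF approach models the grid as a Gaussian field with a sparse precision (inverse covariance) matrix, so missing cells can be filled with their conditional mean given the observed cells via one sparse linear solve. A minimal sketch assuming a simple graph-Laplacian precision (an illustration of the idea, not the authors' implementation):

    # Minimal GMRF-style infilling: condition a lattice field with a graph-
    # Laplacian precision matrix on the observed cells. Illustrative only;
    # the paper's framework also yields per-cell uncertainty and break lines.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def gmrf_interpolate(grid, observed_mask):
        """Fill unobserved cells of `grid` with their GMRF conditional mean."""
        n_rows, n_cols = grid.shape
        n = n_rows * n_cols
        idx = np.arange(n).reshape(n_rows, n_cols)
        rows, cols = [], []
        for di, dj in ((0, 1), (1, 0)):          # 4-neighbour lattice edges
            a = idx[:n_rows - di, :n_cols - dj].ravel()
            b = idx[di:, dj:].ravel()
            rows += [a, b]; cols += [b, a]
        rows, cols = np.concatenate(rows), np.concatenate(cols)
        A = sp.coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n, n)).tocsr()
        Q = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # Laplacian precision
        obs = np.flatnonzero(observed_mask.ravel())
        mis = np.flatnonzero(~observed_mask.ravel())
        x_o = grid.ravel()[obs]
        # Conditional mean of the missing block: Q_mm x_m = -Q_mo x_o.
        x_m = spsolve(Q[mis][:, mis].tocsc(), -(Q[mis][:, obs] @ x_o))
        out = grid.ravel().astype(float)
        out[mis] = x_m
        return out.reshape(n_rows, n_cols)

    z = np.array([[1.0, 2.0, 3.0], [2.0, 0.0, 4.0], [3.0, 4.0, 5.0]])
    print(gmrf_interpolate(z, z != 0.0))         # centre cell -> 3.0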
3

Burks, David J., and Rajeev K. Azad. "Higher-order Markov models for metagenomic sequence classification." Bioinformatics 36, no. 14 (June 9, 2020): 4130–36. http://dx.doi.org/10.1093/bioinformatics/btaa562.

Abstract:
Motivation: Alignment-free, stochastic models derived from k-mer distributions representing reference genome sequences have a rich history in the classification of DNA sequences. In particular, the variants of Markov models have previously been used extensively. Higher-order Markov models have been used with caution, perhaps sparingly, primarily because of the lack of enough training data and computational power. Advances in sequencing technology and computation have enabled exploitation of the predictive power of higher-order models. We, therefore, revisited higher-order Markov models and assessed their performance in classifying metagenomic sequences.
Results: Comparative assessment of higher-order models (HOMs, 9th order or higher) with interpolated Markov model, interpolated context model and lower-order models (8th order or lower) was performed on metagenomic datasets constructed using sequenced prokaryotic genomes. Our results show that HOMs outperform other models in classifying metagenomic fragments as short as 100 nt at all taxonomic ranks, and at lower ranks when the fragment size was increased to 250 nt. HOMs were also found to be significantly more accurate than local alignment, which is widely relied upon for taxonomic classification of metagenomic sequences. A novel software implementation written in C++ performs classification faster than the existing Markovian metagenomic classifiers and can therefore be used as a standalone classifier or in conjunction with existing taxonomic classifiers for more robust classification of metagenomic sequences.
Availability and implementation: The software has been made available at https://github.com/djburks/SMM.
Contact: Rajeev.Azad@unt.edu
Supplementary information: Supplementary data are available at Bioinformatics online.
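For orientation, a fixed-order Markov classifier of this kind scores a fragment by summing the log probability of each base given its k preceding bases under each reference model, and assigns the fragment to the best-scoring reference. A toy sketch with add-one smoothing (a generic illustration, not the paper's SMM tool):

    # Order-k Markov scoring for fragment classification (toy version).
    from collections import Counter
    from math import log

    def train_markov(genome, k):
        """Count (k-mer context -> next base) transitions in a reference."""
        ctx, trans = Counter(), Counter()
        for i in range(len(genome) - k):
            c = genome[i:i + k]
            ctx[c] += 1
            trans[(c, genome[i + k])] += 1
        return ctx, trans

    def score(fragment, model, k, alphabet="ACGT"):
        """Log-likelihood under an order-k model, add-one (Laplace) smoothed."""
        ctx, trans = model
        return sum(log((trans[(fragment[i:i + k], fragment[i + k])] + 1)
                       / (ctx[fragment[i:i + k]] + len(alphabet)))
                   for i in range(len(fragment) - k))

    refs = {"genomeA": "ACGTACGGACGTTTACGACGTACGGT",
            "genomeB": "TTGACCGGTTAACCGTTTGGAACCTT"}
    models = {name: train_markov(seq, k=3) for name, seq in refs.items()}
    print(max(models, key=lambda name: score("ACGTACG", models[name], k=3)))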
4

Cunial, Fabio, Jarno Alanko, and Djamal Belazzougui. "A framework for space-efficient variable-order Markov models." Bioinformatics 35, no. 22 (April 20, 2019): 4607–16. http://dx.doi.org/10.1093/bioinformatics/btz268.

Abstract:
Motivation: Markov models with contexts of variable length are widely used in bioinformatics for representing sets of sequences with similar biological properties. When models contain many long contexts, existing implementations are either unable to handle genome-scale training datasets within typical memory budgets, or they are optimized for specific model variants and are thus inflexible.
Results: We provide practical, versatile representations of variable-order Markov models and of interpolated Markov models, that support a large number of context-selection criteria, scoring functions, probability smoothing methods, and interpolations, and that take up to four times less space than previous implementations based on the suffix array, regardless of the number and length of contexts, and up to ten times less space than previous trie-based representations, or more, while matching the size of related, state-of-the-art data structures from Natural Language Processing. We describe how to further compress our indexes to a quantity related to the redundancy of the training data, saving up to 90% of their space on very repetitive datasets, and making them become up to 60 times smaller than previous implementations based on the suffix array. Finally, we show how to exploit constraints on the length and frequency of contexts to further shrink our compressed indexes to half of their size or more, achieving data structures that are a hundred times smaller than previous implementations based on the suffix array, or more. This allows variable-order Markov models to be used with bigger datasets and with longer contexts on the same hardware, thus possibly enabling new applications.
Availability and implementation: https://github.com/jnalanko/VOMM
Supplementary information: Supplementary data are available at Bioinformatics online.
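The interpolation idea itself is compact: the probability of the next symbol blends estimates from contexts of length 0 up to K, leaning on longer contexts only where the training counts support them. A minimal sketch with one simple count-based weighting (GLIMMER-style IMMs use chi-square-informed weights; this is not the paper's compressed index):

    # Minimal interpolated Markov model (IMM) over contexts of length 0..K.
    from collections import Counter

    class IMM:
        def __init__(self, text, max_order=3, alphabet="ACGT"):
            self.K, self.alphabet = max_order, alphabet
            self.ctx, self.trans = Counter(), Counter()
            for k in range(max_order + 1):
                for i in range(k, len(text)):
                    c = text[i - k:i]            # length-k context ending at i
                    self.ctx[c] += 1
                    self.trans[(c, text[i])] += 1

        def prob(self, context, symbol, min_count=40):
            """P(symbol | context), interpolated from short to long contexts."""
            p = 1.0 / len(self.alphabet)         # order "-1" fallback: uniform
            for k in range(min(self.K, len(context)) + 1):
                c = context[len(context) - k:] if k else ""
                n = self.ctx[c]
                if n == 0:
                    break                        # longer contexts are unseen too
                lam = min(1.0, n / min_count)    # trust well-sampled contexts more
                p = (1 - lam) * p + lam * self.trans[(c, symbol)] / n
            return p

    imm = IMM("ACGTACGTTTACGGACGTACGTACGTTACG", max_order=3)
    print(imm.prob("ACG", "T"))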
5

Ardid, Alberto, David Dempsey, Edward Bertrand, Fabian Sepulveda, Pascal Tarits, Flora Solon, and Rosalind Archer. "Bayesian magnetotelluric inversion using methylene blue structural priors for imaging shallow conductors in geothermal fields." GEOPHYSICS 86, no. 3 (April 8, 2021): E171–E183. http://dx.doi.org/10.1190/geo2020-0226.1.

Abstract:
In geothermal exploration, magnetotelluric (MT) data and inversion models are commonly used to image shallow conductors typically associated with the presence of an electrically conductive clay cap that overlies the main reservoir. However, these inversion models suffer from nonuniqueness and uncertainty, and the inclusion of useful geologic information is still limited. We have developed a Bayesian inversion method that integrates the electrical resistivity distribution from MT surveys with borehole methylene blue (MeB) data, an indicator of conductive clay content. The MeB data were used to inform structural priors for the MT Bayesian inversion that focus on inferring, with uncertainty, the shallow conductor boundary in geothermal fields. By incorporating borehole information, our inversion reduced nonuniqueness and then explicitly represented the irreducible uncertainty as estimated depth intervals for the conductor boundary. We used Markov chain Monte Carlo sampling and a 1D three-layer resistivity model to accelerate the Bayesian inversion of the MT signal beneath each station. Then, inferred conductor boundary distributions were interpolated to construct pseudo-2D/3D models of the uncertain conductor geometry. We compare our approach against deterministic MT inversion software on synthetic and field examples; our approach performs well in estimating the depth to the bottom of the conductor, a valuable target in geothermal reservoir exploration.
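The per-station sampling step can be caricatured in a few lines: Metropolis-Hastings over a boundary depth, given a forward model and Gaussian data errors. The forward model below is invented for illustration and merely stands in for the 1D three-layer resistivity response used in the paper:

    # Toy Metropolis-Hastings over the depth z of a conductor boundary.
    import numpy as np

    rng = np.random.default_rng(0)

    def forward(z):                    # hypothetical two-channel MT response
        return np.array([np.exp(-z / 300.0), z / 1000.0])

    data = forward(450.0) + rng.normal(0.0, 0.01, size=2)   # synthetic data
    sigma = 0.01

    def log_post(z):
        if not 50.0 < z < 1500.0:      # uniform prior on plausible depths
            return -np.inf
        r = (data - forward(z)) / sigma
        return -0.5 * r @ r            # Gaussian log-likelihood

    z, samples = 300.0, []
    for _ in range(20000):
        z_new = z + rng.normal(0.0, 25.0)          # random-walk proposal
        if np.log(rng.random()) < log_post(z_new) - log_post(z):
            z = z_new
        samples.append(z)

    post = np.array(samples[5000:])                # drop burn-in
    print(post.mean(), np.percentile(post, [5, 95]))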
6

Zhu, Yuxin, Yanchen Bo, Jinzong Zhang, and Yuexiang Wang. "Fusion of Multisensor SSTs Based on the Spatiotemporal Hierarchical Bayesian Model." Journal of Atmospheric and Oceanic Technology 35, no. 1 (January 2018): 91–109. http://dx.doi.org/10.1175/jtech-d-17-0116.1.

Abstract:
This study focuses on merging MODIS-mapped SSTs with 4-km spatial resolution and AMSR-E optimally interpolated SSTs at 25-km resolution. A new data fusion method was developed—the Spatiotemporal Hierarchical Bayesian Model (STHBM). This method, which is implemented through the Markov chain Monte Carlo technique utilized to extract inferential results, is specified hierarchically by decomposing the SST spatiotemporal process into three subprocesses, that is, the spatial trend process, the seasonal cycle process, and the spatiotemporal random effect process. Spatial-scale transformation and spatiotemporal variation are introduced into the fusion model through the data model and model parameters, respectively, with suitably selected link functions. Compared with two modern spatiotemporal statistical methods—the Bayesian maximum entropy and the robust fixed rank kriging—STHBM has the following strength: it can simultaneously meet the expression of uncertainties from data and model, seamless scale transformation, and SST spatiotemporal process simulation. Utilizing the complementarity of multiple sensors, merged data with complete spatial coverage, high resolution (4 km), and the fine spatial pattern present in MODIS SSTs can be obtained through STHBM. The merged data are assessed for local spatial structure, overall accuracy, and local accuracy. The evaluation results illustrate that STHBM can provide spatially complete SST fields with reasonably good data values and acceptable errors, and that the merged SSTs capture the fine spatial patterns of MODIS SSTs with fine resolution. The accuracy of merged SSTs is between MODIS and AMSR-E SSTs. The contribution to the accuracy and the spatial pattern of the merged SSTs from the original MODIS SSTs is stronger than that of the original AMSR-E SSTs.
7

Liang, Chia-Chun, Wei-Chung Hsu, Yao-Te Tsai, Shao-Jen Weng, Ho-Pang Yang, and Shih-Chia Liu. "Healthy Life Expectancies by the Effects of Hypertension and Diabetes for the Middle Aged and Over in Taiwan." International Journal of Environmental Research and Public Health 17, no. 12 (June 18, 2020): 4390. http://dx.doi.org/10.3390/ijerph17124390.

Abstract:
(1) Introduction: This study aims to investigate the disparity in the healthy life expectancy of the elderly with hypertension and diabetes mellitus. (2) Materials and Methods: This study used survey data collected in five waves (1996, 1999, 2003, 2007, and 2011) of the "Taiwan Longitudinal Study on Aging" (TLSA) to estimate the life expectancy and healthy life expectancy of different age groups. Activities of daily living, hypertension and diabetes status, and survival were analyzed with the IMaCh (Interpolated Markov Chain) software and a logistic regression model. (3) Results: Among the elderly between ages 50 and 60 with hypertension and diabetes, women with hypertension only exhibited the longest life expectancy; their healthy life expectancy and percentage of remaining life with no functional incapacity were 33.74 years and 87.11%, respectively. In contrast, men with diabetes only showed the shortest life expectancy; their healthy life expectancy and percentage of remaining life with no functional incapacity were 22.51 years and 93.16%, respectively. We also found that people with diabetes showed a lower percentage of remaining life with no functional incapacity. (4) Conclusions: We suggest that policymakers pay special attention to publicizing the importance of health control behavior in order to decrease the risk of disease and to improve the elderly's quality of life.
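The reported quantities come from multistate life-table arithmetic. A schematic three-state sketch (transition probabilities invented for illustration; IMaCh estimates age-specific transitions from the panel data) shows how life expectancy and healthy life expectancy fall out of a Markov chain:

    # Expected years healthy vs. total, starting healthy, under an annual
    # transition matrix between healthy (0), disabled (1), and dead (2).
    import numpy as np

    P = np.array([[0.92, 0.05, 0.03],    # placeholder probabilities
                  [0.10, 0.80, 0.10],
                  [0.00, 0.00, 1.00]])

    state = np.array([1.0, 0.0, 0.0])    # cohort starts healthy at age 50
    healthy_years = total_years = 0.0
    for _ in range(60):                  # follow the cohort for 60 years
        healthy_years += state[0]
        total_years += state[0] + state[1]
        state = state @ P

    print(f"LE ~ {total_years:.1f} y, healthy LE ~ {healthy_years:.1f} y, "
          f"share healthy ~ {healthy_years / total_years:.1%}")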
8

Reubelt, T., G. Austen, and E. W. Grafarend. "Space Gravity Spectroscopy - determination of the Earth’s gravitational field by means of Newton interpolated LEO ephemeris. Case studies on dynamic (CHAMP Rapid Science Orbit) and kinematic orbits." Advances in Geosciences 1 (July 11, 2003): 127–35. http://dx.doi.org/10.5194/adgeo-1-127-2003.

Abstract:
An algorithm for the (kinematic) orbit analysis of a Low Earth Orbiting (LEO) GPS-tracked satellite to determine the spherical harmonic coefficients of the terrestrial gravitational field is presented. A contribution to existing long-wavelength gravity field models is expected, since the kinematic orbit of a LEO satellite can nowadays be determined with very high accuracy, in the range of a few centimeters. To demonstrate the applicability of the proposed method, first results from the analysis of real CHAMP Rapid Science (dynamic) Orbits (RSO) and kinematic orbits are illustrated. In particular, we take advantage of Newton’s Law of Motion, which balances the acceleration vector and the gradient of the gravitational potential with respect to an Inertial Frame of Reference (IRF). The satellite’s acceleration vector is determined by means of the second-order functional of Newton’s Interpolation Formula from relative satellite ephemeris (baselines) with respect to the IRF. Therefore the satellite ephemeris, which are normally given in a Body-fixed Frame of Reference (BRF), have to be transformed into the IRF. Subsequently the Newton-interpolated accelerations have to be reduced for disturbing gravitational and non-gravitational accelerations in order to obtain the accelerations caused by the Earth’s gravitational field. For a first insight into real data processing, these reductions have been neglected. The gradient of the gravitational potential, conventionally expressed in vector-valued spherical harmonics and given in a Body-fixed Frame of Reference, must be transformed from the BRF to the IRF by means of the polar motion matrix, the precession-nutation matrices and the Greenwich Sidereal Time Angle (GAST). The resulting linear system of equations is solved by means of a least-squares adjustment in terms of a Gauss-Markov model in order to estimate the spherical harmonic coefficients of the Earth’s gravitational field.
Key words: space gravity spectroscopy, spherical harmonics series expansion, GPS tracked LEO satellites, kinematic
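In caricature, the pipeline is two numerical steps: differentiate the ephemeris twice to get accelerations, then solve a linear least-squares (Gauss-Markov) system for the field coefficients. A toy sketch in which a central second difference stands in for the second-order functional of Newton's interpolation formula, and the design matrix is a placeholder for the real spherical-harmonic partials:

    # (1) accelerations from positions; (2) least-squares coefficient fit.
    import numpy as np

    dt = 30.0                                      # sampling interval [s]
    t = np.arange(0.0, 600.0, dt)
    pos = np.stack([7.0e6 * np.cos(1.1e-3 * t),    # mock 2-D ephemeris [m]
                    7.0e6 * np.sin(1.1e-3 * t)], axis=1)

    # Central second difference: a_i ~ (r[i+1] - 2 r[i] + r[i-1]) / dt^2.
    acc = (pos[2:] - 2.0 * pos[1:-1] + pos[:-2]) / dt**2

    # Gauss-Markov estimate: least squares on a linearized model A x = a.
    # A real implementation builds A from spherical-harmonic gradients.
    A = np.column_stack([np.ones(acc.shape[0]), t[1:-1]])
    x, *_ = np.linalg.lstsq(A, acc[:, 0], rcond=None)
    print(acc[0], x)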
9

Salzberg, Steven L., Mihaela Pertea, Arthur L. Delcher, Malcolm J. Gardner, and Hervé Tettelin. "Interpolated Markov Models for Eukaryotic Gene Finding." Genomics 59, no. 1 (July 1999): 24–31. http://dx.doi.org/10.1006/geno.1999.5854.

10

Salzberg, S. L., A. L. Delcher, S. Kasif, and O. White. "Microbial gene identification using interpolated Markov models." Nucleic Acids Research 26, no. 2 (January 1, 1998): 544–48. http://dx.doi.org/10.1093/nar/26.2.544.

11

Brady, Arthur, and Steven L. Salzberg. "Phymm and PhymmBL: metagenomic phylogenetic classification with interpolated Markov models." Nature Methods 6, no. 9 (August 2, 2009): 673–76. http://dx.doi.org/10.1038/nmeth.1358.

12

Kazemian, Majid, Qiyun Zhu, Marc S. Halfon, and Saurabh Sinha. "Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison." Nucleic Acids Research 39, no. 22 (August 5, 2011): 9463–72. http://dx.doi.org/10.1093/nar/gkr621.

13

Whalen, John, Ipek Stillman, Apoorva Ambavane, Eugene Felber, Dinara Makenbaeva, and Bjorn Bolinder. "Cost-Effectiveness Analysis (CEA) of Sequential Treatment with Tyrosine Kinase Inhibitors (TKIs) for Chronic Myelogenous Leukemia (CML)." Blood 124, no. 21 (December 6, 2014): 2607. http://dx.doi.org/10.1182/blood.v124.21.2607.2607.

Abstract:
Background: Imatinib has been approved for treatment of newly diagnosed CML since 2001. For patients failing imatinib, other treatment options such as dasatinib, nilotinib, bosutinib, and high-dose imatinib are recommended. Previous economic analyses assumed that patients are treated until progression, despite guidelines recommending changing treatments for non-responders. The objective was to perform a CEA of sequential treatment with 2nd line TKIs, from a commercial payer perspective in the United States (US).

Methods: A Markov-cohort model was used to simulate lifetime treatment costs and health outcomes (discounted 3.0% per annum) for patients resistant or intolerant to 1st line imatinib. It compared six treatment sequences, starting from 2nd line (shown in the results table). The model included five health states: chronic phase 2nd line TKI, chronic phase 3rd line TKI, chronic phase no TKI, post-progression, and death. After 12 months of TKI treatment, patients without a major molecular or complete cytogenetic response (MMR or CCyR) moved to the next line of therapy (per NCCN guidelines). Patients could also move to the next treatment line if they lost MMR or CCyR, or discontinued due to drug-related toxicity. Data for response achievement, risks of progression, death, adverse events, and discontinuation were primarily based on data in published trials. Due to the lack of head-to-head studies for dasatinib vs. nilotinib in 2nd line, and a lack of time-to-response data for all three 2nd line treatments, MMR and CCyR rates were interpolated from available data points in 2nd line, while dasatinib and nilotinib rates were assumed to be equal in 3rd line. Dasatinib and imatinib progression, loss of response, and survival rates for 2nd line responders were assumed equal, due to data limitations. In each health state, patients accrued drug costs, resource use (related to monitoring, AE management, and disease management) costs and quality-adjusted life years (QALYs). Resource use, cost, and utility estimates were based on FDA labels, RedBook, Medicare and AHRQ Healthcare Cost Utilization Project data, and published economic analyses. Multi-way uncertainty analyses evaluated key contributors to uncertainty in the results, by testing various assumptions for probabilities of discontinuation, response, loss of response, progression, and survival.

Results: The model predicts that 2nd line dasatinib provides increased survival (ΔLYs = ~0.4-2.6 years) and QALYs (ΔQALYs = ~0.4-2.8 years) in all patient groups when compared with 2nd line high-dose imatinib or nilotinib sequences. Also, 2nd line dasatinib was more costly (ΔLifetime Costs = ~$65,000 - $225,000) than high-dose imatinib and nilotinib, primarily due to longer survival and corresponding longer time on TKI treatment. In ±20% univariate sensitivity analyses, the model was most sensitive to 2nd line progression and survival estimates.

Conclusions: This analysis suggests that dasatinib may be associated with increased life expectancy and quality of life when compared with high-dose imatinib or nilotinib, among patients who are resistant or intolerant to 1st line imatinib, primarily based on higher cytogenetic response rates observed in studies of dasatinib. Other studies have shown improved quality of life for responders, and landmark analyses have shown improved survival for patients achieving cytogenetic response, but head-to-head clinical studies of sequential use of dasatinib and nilotinib are needed to confirm the model result. Based on the threshold of $150,000/QALY, dasatinib can be considered cost-effective in the US.

Results Table (1st line not modeled – assumes 1st line imatinib for all sequences):

Sequence #          1          2          3          4          5          6
2nd line            DAS        NIL        HDI        DAS        NIL        HDI
3rd line            NIL        DAS        NIL        BOS        BOS        DAS

Imatinib-resistant population:
LYs                 7          6.5        4.5        7.3        6.8        4.5
QALYs               5.9        5.4        3.6        6.2        5.7        3.6
Lifetime Cost       $497,391   $431,346   $270,308   $496,897   $431,084   $270,051
ICERs: DAS followed by NIL vs. NIL followed by DAS - $129,139; DAS followed by NIL vs. HDI followed by NIL - $96,356

Imatinib-intolerant population:
LYs                 7.8        7.1        --         8.1        7.4        --
QALYs               6.7        6.0        --         7.0        6.3        --
Lifetime Cost       $599,270   $509,456   --         $598,766   $509,174   --
ICERs: DAS followed by NIL vs. NIL followed by DAS - $125,800

BOS=bosutinib; DAS=dasatinib; HDI=high-dose imatinib; NIL=nilotinib; ICER=incremental cost-effectiveness ratio

Disclosures: Whalen: Evidera, Inc.: Consultancy, Employment; Bristol-Myers Squibb: Research Funding. Stillman: Evidera, Inc.: Consultancy, Employment; Bristol-Myers Squibb: Research Funding. Ambavane: Evidera, Inc.: Consultancy, Employment; Bristol-Myers Squibb: Research Funding. Felber: Bristol-Myers Squibb: Employment, Equity Ownership. Makenbaeva: Bristol-Myers Squibb: Employment, Equity Ownership. Bolinder: Bristol-Myers Squibb: Employment, Equity Ownership.
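The mechanics of such a Markov-cohort model are easy to sketch: push the cohort distribution through a transition matrix each annual cycle while accumulating discounted costs and QALYs. All transition probabilities, costs, and utilities below are placeholders, not the study's calibrated inputs:

    # Generic Markov cohort sketch over the model's five health states.
    import numpy as np

    # States: 2nd-line TKI, 3rd-line TKI, no TKI, post-progression, dead.
    P = np.array([[0.70, 0.15, 0.05, 0.07, 0.03],   # placeholder values
                  [0.00, 0.65, 0.15, 0.15, 0.05],
                  [0.00, 0.00, 0.75, 0.18, 0.07],
                  [0.00, 0.00, 0.00, 0.80, 0.20],
                  [0.00, 0.00, 0.00, 0.00, 1.00]])
    cost = np.array([120e3, 130e3, 20e3, 60e3, 0.0])    # $/year per state
    utility = np.array([0.85, 0.80, 0.75, 0.50, 0.0])   # QALY weight per state

    state = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # cohort starts on 2nd line
    total_cost = qalys = 0.0
    for year in range(40):                       # lifetime horizon
        d = 1.0 / 1.03**year                     # 3% annual discounting
        total_cost += d * (state @ cost)
        qalys += d * (state @ utility)
        state = state @ P

    print(f"~${total_cost:,.0f} lifetime cost, ~{qalys:.2f} QALYs")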
14

Vaccaro, Adam, Julien Emile-Geay, Dominique Guillot, Resherle Verna, Colin Morice, John Kennedy, and Bala Rajaratnam. "Climate Field Completion via Markov Random Fields: Application to the HadCRUT4.6 Temperature Dataset." Journal of Climate 34, no. 10 (May 2021): 4169–88. http://dx.doi.org/10.1175/jcli-d-19-0814.1.

Abstract:
Surface temperature is a vital metric of Earth’s climate state but is incompletely observed in both space and time: over half of monthly values are missing from the widely used HadCRUT4.6 global surface temperature dataset. Here we apply the graphical expectation–maximization algorithm (GraphEM), a recently developed imputation method, to construct a spatially complete estimate of HadCRUT4.6 temperatures. GraphEM leverages Gaussian Markov random fields (also known as Gaussian graphical models) to better estimate covariance relationships within a climate field, detecting anisotropic features such as land–ocean contrasts, orography, ocean currents, and wave-propagation pathways. This detection leads to improved estimates of missing values compared to methods (such as kriging) that assume isotropic covariance relationships, as we show with real and synthetic data. This interpolated analysis of HadCRUT4.6 data is available as a 100-member ensemble, propagating information about sampling variability available from the original HadCRUT4.6 dataset. A comparison of Niño-3.4 and global mean monthly temperature series with published datasets reveals similarities and differences due in part to the spatial interpolation method. Notably, the GraphEM-completed HadCRUT4.6 global temperature displays a stronger early twenty-first-century warming trend than its uninterpolated counterpart, consistent with recent analyses using other datasets. Known events like the 1877/78 El Niño are recovered with greater fidelity than with kriging, and result in different assessments of changes in ENSO variability through time. Gaussian Markov random fields provide a more geophysically motivated way to impute missing values in climate fields, and the associated graph provides a powerful tool to analyze the structure of teleconnection patterns. We close with a discussion of wider applications of Markov random fields in climate science.
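The statistical core can be miniaturized as Gaussian conditional-mean imputation inside an EM-style loop (a bare sketch, not the GraphEM package; GraphEM additionally regularizes the inverse covariance toward a sparse Markov graph):

    # EM-style infilling under a multivariate normal field model.
    import numpy as np

    def em_impute(X, n_iter=50):
        """X: (time, space) array with np.nan marking missing entries."""
        miss = np.isnan(X)
        Xf = np.where(miss, np.nanmean(X, axis=0), X)   # initialize with means
        for _ in range(n_iter):
            mu = Xf.mean(axis=0)
            S = np.cov(Xf, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            for i in range(X.shape[0]):
                m = miss[i]
                if not m.any():
                    continue
                o = ~m
                if not o.any():                 # whole row missing
                    Xf[i] = mu
                    continue
                # Conditional mean of missing given observed under N(mu, S).
                Xf[i, m] = mu[m] + S[np.ix_(m, o)] @ np.linalg.solve(
                    S[np.ix_(o, o)], Xf[i, o] - mu[o])
        return Xf

    rng = np.random.default_rng(1)
    cov = np.array([[1.0, 0.8, 0.3], [0.8, 1.0, 0.5], [0.3, 0.5, 1.0]])
    X = rng.multivariate_normal(np.zeros(3), cov, size=200)
    X[rng.random(X.shape) < 0.2] = np.nan
    print(em_impute(X)[:3].round(2))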
15

Huber, Mark, and Sarah Schott. "Random Construction of Interpolating Sets for High-Dimensional Integration." Journal of Applied Probability 51, no. 1 (March 2014): 92–105. http://dx.doi.org/10.1017/s002190020001010x.

Abstract:
Computing the value of a high-dimensional integral can often be reduced to the problem of finding the ratio between the measures of two sets. Monte Carlo methods are often used to approximate this ratio, but often one set will be exponentially larger than the other, which leads to an exponentially large variance. A standard method of dealing with this problem is to interpolate between the sets with a sequence of nested sets where neighboring sets have relative measures bounded above by a constant. Choosing such a well-balanced sequence can rarely be done without extensive study of a problem. Here a new approach that automatically obtains such sets is presented. These well-balanced sets allow for faster approximation algorithms for integrals and sums using fewer samples, and better tempering and annealing Markov chains for generating random samples. Applications, such as finding the partition function of the Ising model and normalizing constants for posterior distributions in Bayesian methods, are discussed.
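The underlying trick can be shown in a few lines: write the ratio of two measures as a telescoping product over nested sets and estimate each well-balanced factor by Monte Carlo. A toy sketch with nested balls (the paper's contribution is constructing such interpolating sets automatically):

    # Estimate measure(A_3)/measure(A_0) as a product of neighbouring ratios.
    import numpy as np

    rng = np.random.default_rng(2)
    radii = [1.0, 0.8, 0.65, 0.5]              # nested balls A_0 ⊃ ... ⊃ A_3

    def in_ball(x, r):
        return (x * x).sum(axis=1) <= r * r

    ratio = 1.0
    for r_big, r_small in zip(radii, radii[1:]):
        # Uniform samples from the bigger ball, by rejection from a square.
        pts = rng.uniform(-1.0, 1.0, size=(200000, 2))
        pts = pts[in_ball(pts, r_big)]
        ratio *= in_ball(pts, r_small).mean()  # one well-balanced factor

    print(ratio, (radii[-1] / radii[0]) ** 2)  # estimate vs. exact area ratio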
16

Kelley, David R., and Steven L. Salzberg. "Clustering metagenomic sequences with interpolated Markov models." BMC Bioinformatics 11, no. 1 (November 2, 2010). http://dx.doi.org/10.1186/1471-2105-11-544.

17

Ciecka, James E., and Gary R. Skoog. "Worklife Expectancies of Railroad Workers Based on the Twenty-Seventh Actuarial Valuation Using Competing Risks/Multiple Decrement Theory and the Markov Railroad Model." Journal of Forensic Economics, December 30, 2020. http://dx.doi.org/10.5085/jfe-472.

Abstract:
This paper contains worklife expectancies (WLE) of railroad workers based on the Twenty-Seventh Actuarial Valuation (Bureau of the Actuary, 2018), thereby updating the previous study of railroad workers’ WLE based on the Twenty-Fifth Actuarial Valuation (Bureau of the Actuary, 2012). The main results of this paper are shown in a set of tables. [Footnote 1: The tables in this paper provide worklife expectancies and standard deviations for every five years of service and five years of age and are referred to as abridged tables. Readers may interpolate as appropriate—e.g., a 23-year-old railroader would have a 60%/40% weighted average between the age 25 and age 20 entries. In addition, a more accurate calculation is available. The Association of American Railroads has requested that we provide it with complete unabridged tables that may be distributed to its members and posted on its web site. We have done so under a contract with the Association of American Railroads, which provides that those unabridged tables may be posted on the Journal of Forensic Economics web site. They appear there as supplemental materials to this paper, along with other supplemental content which includes Excel worksheets and additional statistical characteristics.]
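The interpolation described in the footnote is ordinary linear interpolation between abridged-table entries. A worked sketch of the paper's own 23-year-old example, with invented table values:

    # 23 lies 3/5 of the way from age 20 to age 25, hence the 60%/40% blend.
    wle = {20: 34.1, 25: 30.6}        # hypothetical abridged-table WLE entries

    age, lo, hi = 23, 20, 25
    w = (age - lo) / (hi - lo)        # 0.6 -> 60% weight on the age-25 entry
    wle_age = (1 - w) * wle[lo] + w * wle[hi]
    print(f"WLE({age}) ~ {wle_age:.2f} years")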
18

Svalova, Aleksandra, Peter Helm, Dennis Prangle, Mohamed Rouainia, Stephanie Glendinning, and Darren J. Wilkinson. "Emulating computer experiments of transport infrastructure slope stability using Gaussian processes and Bayesian inference." Data-Centric Engineering 2 (2021). http://dx.doi.org/10.1017/dce.2021.14.

Abstract:
We propose using fully Bayesian Gaussian process emulation (GPE) as a surrogate for expensive computer experiments of transport infrastructure cut slopes in high-plasticity clay soils that are associated with an increased risk of failure. Our deterioration experiments simulate the dissipation of excess pore water pressure and seasonal pore water pressure cycles to determine slope failure time. It is impractical to perform the number of computer simulations that would be sufficient to make slope stability predictions over a meaningful range of geometries and strength parameters. Therefore, a GPE is used as an interpolator over a set of optimally spaced simulator runs modeling the time to slope failure as a function of geometry, strength, and permeability. Bayesian inference and Markov chain Monte Carlo simulation are used to obtain posterior estimates of the GPE parameters. For the experiments that do not reach failure within model time of 184 years, the time to failure is stochastically imputed by the Bayesian model. The trained GPE has the potential to inform infrastructure slope design, management, and maintenance. The reduction in computational cost compared with the original simulator makes it a highly attractive tool which can be applied to the different spatio-temporal scales of transport networks.
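A minimal Gaussian-process emulator has this shape (a plain-NumPy sketch with a squared-exponential kernel and fixed hyperparameters; the paper instead places priors on the GPE parameters, samples them by MCMC, and imputes censored failure times):

    # GP regression as a cheap surrogate for an expensive simulator.
    import numpy as np

    def k(a, b, ell=0.3, var=1.0):
        """Squared-exponential covariance between two input sets."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return var * np.exp(-0.5 * d2 / ell**2)

    # Invented design points and outputs (e.g., normalized geometry and
    # strength inputs -> log time to slope failure).
    X = np.array([[0.1, 0.2], [0.4, 0.7], [0.6, 0.3], [0.9, 0.8]])
    y = np.array([2.1, 3.4, 2.8, 4.0])

    Xs = np.array([[0.5, 0.5]])                 # new input to emulate
    K = k(X, X) + 1e-6 * np.eye(len(X))         # jitter for stability
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)           # GP posterior mean
    var = k(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)
    print(mean, np.sqrt(np.diag(var)))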
19

"Método de los promedios anuales en el monitoreo de los cambios de cobertura por deforestación usando el sensor MODIS." Revista ECIPeru, December 18, 2018, 44–50. http://dx.doi.org/10.33017/reveciperu2014.0007/.

Abstract:
Método de los promedios anuales en el monitoreo de los cambios de cobertura por deforestación usando el sensor MODIS [The annual-averages method for monitoring deforestation-driven cover change using the MODIS sensor]. Yonatan Tarazona Coronel, Laboratorio de Teledetección, Facultad de Ciencias Físicas, Universidad Nacional Mayor de San Marcos, Ciudad Universitaria, Lima, Perú. DOI: https://doi.org/10.33017/RevECIPeru2014.0007/

The clearing of forests through agricultural expansion, conversion of forests to pasture, infrastructure development, destructive logging, fires, etc. accounts for nearly 20% of greenhouse gas emissions worldwide, more than the entire global transportation sector and second only to the energy sector. Therefore, in order to limit the impacts of climate change to levels that society can reasonably tolerate, global average temperatures must be stabilized to within two degrees Celsius. This objective will be virtually impossible to achieve without reducing emissions from the forest sector, alongside other mitigation actions (REDD+). While methodologies for monitoring deforestation exist, most of these methods, if not all, require the user to specify a threshold for classifying and identifying a change in land use due to deforestation. Determining thresholds adds significant cost to growing change-detection efforts and limits study in the various regions that undergo cover changes that need to be quantified. What is needed, then, is a historical analysis of archived satellite data to model normal and abnormal behavior, i.e., disturbance (Hargrove et al. 2009). There is a critical need for time-series analysis that is independent of thresholds or definitions in order to detect specific disturbances. This paper presents a method that, as a first step, preprocesses MODIS-EVI time series before detecting and quantifying cover changes due to deforestation. The methodology basically consists of completing the time series by interpolating the missing data after filtering out disturbed pixels using the reliability band of the MOD13Q1 product. Once the complete series is available for each pixel, the data are averaged by year, giving 13 values for 2001-2013. Finally, the control limits of classical statistics are used to identify which annual average is out of control and to flag it as a deforestation-driven change of use.

Keywords: Disturbance, Deforestation, EVI, Box-Jenkins
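Per pixel, the rule reduces to a few lines: average the gap-filled EVI series by year and flag annual means outside classical control limits computed from a stable reference period. A schematic sketch with synthetic values:

    # Annual-averages control-limit rule on a synthetic forest pixel.
    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(2001, 2014)
    # 23 MOD13Q1 composites per year; simulate clearing from 2010 onward.
    evi = 0.55 + rng.normal(0.0, 0.03, size=(len(years), 23))
    evi[years >= 2010] -= 0.25

    annual = evi.mean(axis=1)
    base = annual[years < 2010]                # stable reference period
    lo, hi = base.mean() - 3 * base.std(), base.mean() + 3 * base.std()
    print(years[(annual < lo) | (annual > hi)])   # expect [2010 2011 2012 2013]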