Academic literature on the topic 'Probabilities – Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Probabilities – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Probabilities – Data processing"

1. Vaidogas, Egidijus Rytas. "Bayesian Processing of Data on Bursts of Pressure Vessels." Information Technology and Control 50, no. 4 (December 16, 2021): 607–26. http://dx.doi.org/10.5755/j01.itc.50.4.29690.

Abstract:
Two alternative Bayesian approaches are proposed for the prediction of fragmentation of pressure vessels triggered off by accidental explosions (bursts) of these containment structures. It is shown how to carry out this prediction with post-mortem data on fragment numbers counted after past explosion accidents. Results of the prediction are estimates of probabilities of individual fragment numbers. These estimates are expressed by means of Bayesian prior or posterior distributions. It is demonstrated how to elicit the prior distributions from relatively scarce post-mortem data on vessel fragme…

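To make the flavor of such an analysis concrete, below is a minimal sketch of a Dirichlet–multinomial update for fragment-number probabilities from scarce count data. The counts, the category set, and the symmetric prior are illustrative assumptions, not data or modelling choices from the paper.

```python
import numpy as np

# Hypothetical post-mortem data: number of accidents in which a burst
# produced 2, 3, or 4 fragments (invented counts for illustration).
fragment_counts = {2: 7, 3: 4, 4: 2}

categories = sorted(fragment_counts)
observed = np.array([fragment_counts[c] for c in categories], dtype=float)

# A weak, symmetric Dirichlet prior over the category probabilities.
alpha_prior = np.ones_like(observed)

# Posterior is Dirichlet(alpha_prior + counts); its mean gives point
# estimates of the fragment-number probabilities.
alpha_post = alpha_prior + observed
posterior_mean = alpha_post / alpha_post.sum()

for c, p in zip(categories, posterior_mean):
    print(f"P(fragments = {c}) ≈ {p:.3f}")
```
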
2. Ivanov, A. I., E. N. Kuprianov, and S. V. Tureev. "Neural network integration of classical statistical tests for processing small samples of biometrics data." Dependability 19, no. 2 (June 16, 2019): 22–27. http://dx.doi.org/10.21683/1729-2646-2019-19-2-22-27.

Abstract:
The aim of this paper is to increase the power of statistical tests through their joint application, reducing the required size of the test sample. Methods. It is proposed to combine classical statistical tests, i.e. chi-square, Cramér-von Mises and Shapiro-Wilk, by means of equivalent artificial neurons. Each neuron compares the input statistic with a precomputed threshold and has two output states. That allows obtaining a three-bit binary output code from a network of three artificial neurons. Results. It is shown that each of such criteria on small samples of biometric data…

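The idea of thresholding several test statistics into a short binary code can be sketched in a few lines. The thresholds below are illustrative placeholders (a real system would calibrate them on reference data); the Shapiro-Wilk and Cramér-von Mises statistics come from scipy.stats, and the chi-square statistic is computed by hand from histogram bins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(size=21)   # a small "biometric" sample (synthetic)

# Each "neuron" compares one test statistic with a precomputed threshold
# and outputs one bit.
def neuron(statistic, threshold):
    return int(statistic > threshold)

sw_stat, _ = stats.shapiro(sample)                         # Shapiro-Wilk W
cvm_stat = stats.cramervonmises(sample, "norm").statistic  # Cramér-von Mises

# Chi-square against a standard normal, approximate: tails outside the
# histogram range are ignored.
hist, edges = np.histogram(sample, bins=5)
expected = len(sample) * np.diff(stats.norm.cdf(edges))
chi2_stat = np.sum((hist - expected) ** 2 / expected)

bits = (neuron(sw_stat, 0.95),     # W close to 1 suggests normality
        neuron(cvm_stat, 0.46),    # illustrative threshold
        neuron(chi2_stat, 9.49))   # 5% critical point, 4 d.o.f.
print("3-bit output code:", bits)
```
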
3. Romansky, Radi. "Mathematical Model Investigation of a Technological Structure for Personal Data Protection." Axioms 12, no. 2 (January 18, 2023): 102. http://dx.doi.org/10.3390/axioms12020102.

Abstract:
The contemporary digital age is characterized by the massive use of different information technologies and services in the cloud. This raises the following question: “Are personal data processed correctly in global environments?” It is known that there are many requirements that the Data Controller must perform. For this reason, this article presents a point of view for transferring some activities for personal data processing from a traditional system to a cloud environment. The main goal is to investigate the differences between the two versions of data processing. To achieve this goal, a pr…

4. Tkachenko, Kirill. "Providing a Dependable Operation of the Data Processing System with Interval Changes in the Flow Characteristics Based on Analytical Simulations." Automation and Modeling in Design and Management 2021, no. 3-4 (December 30, 2021): 25–30. http://dx.doi.org/10.30987/2658-6436-2021-3-4-25-30.

Abstract:
The article proposes a new approach for adjusting the parameters of computing nodes that are part of a data processing system, based on analytical simulation of a queuing system with subsequent estimation of the probabilities of hypotheses regarding the computing node state. Methods of analytical modeling of queuing systems and mathematical statistics are used. The result of the study is a mathematical model for assessing the information situation for a computing node, which differs from the previously published system model used. Estimation of conditional probabilities of hypotheses concerning adeq…

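As a toy illustration of estimating hypothesis probabilities about a node's state from observed traffic, here is a Bayes update over three hypothetical arrival-rate hypotheses with Poisson likelihoods. The rates, observation window, and counts are invented for the example and are not the article's model.

```python
from scipy import stats

# Hypotheses about a computing node's load state: job arrival rate in
# jobs/s (hypothetical values), with a uniform prior.
rates = {"nominal": 5.0, "elevated": 9.0, "overloaded": 14.0}
prior = {h: 1 / 3 for h in rates}

# Observation: number of jobs arriving in a 10-second window.
window, arrivals = 10.0, 82

# Posterior P(hypothesis | data) via Bayes' rule with Poisson likelihoods.
likelihood = {h: stats.poisson.pmf(arrivals, mu=r * window)
              for h, r in rates.items()}
evidence = sum(prior[h] * likelihood[h] for h in rates)
posterior = {h: prior[h] * likelihood[h] / evidence for h in rates}
print(posterior)   # mass concentrates on the "elevated" hypothesis
```
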
5. Groot, Perry, Christian Gilissen, and Michael Egmont-Petersen. "Error probabilities for local extrema in gene expression data." Pattern Recognition Letters 28, no. 15 (November 2007): 2133–42. http://dx.doi.org/10.1016/j.patrec.2007.06.017.

6. Čajka, Radim, and Martin Krejsa. "Measured Data Processing in Civil Structure Using the DOProC Method." Advanced Materials Research 859 (December 2013): 114–21. http://dx.doi.org/10.4028/www.scientific.net/amr.859.114.

Abstract:
This paper describes the use of measured values in probabilistic tasks by means of a new method that is currently under development: Direct Optimized Probabilistic Calculation (DOProC). This method has been used to solve a number of probabilistic tasks. DOProC has been implemented in ProbCalc; part of this software is a module for entering and assessing the measured data. The software can read values saved in a text file and can create histograms with a non-parametric (empirical) distribution of the probabilities. In the case of a parametric distribution, it is possible to make a selection from among…

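The entry-and-assessment step described above amounts to building an empirical probability distribution from measured values. A minimal numpy sketch, with synthetic data standing in for a text file of measurements:

```python
import numpy as np

# Measured values would normally be read from a text file, e.g.:
# data = np.loadtxt("measurements.txt")   # hypothetical file name
rng = np.random.default_rng(1)
data = rng.normal(loc=240.0, scale=12.0, size=500)   # synthetic stand-in

# Histogram with empirical (non-parametric) probabilities per bin.
counts, edges = np.histogram(data, bins=20)
probabilities = counts / counts.sum()                # sums to 1.0

for lo, hi, p in zip(edges[:-1], edges[1:], probabilities):
    print(f"[{lo:7.2f}, {hi:7.2f})  p = {p:.4f}")
```
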
7. Chervyakov, N. I., P. A. Lyakhov, and A. R. Orazaev. "3D-generalization of impulse noise removal method for video data processing." Computer Optics 44, no. 1 (February 2020): 92–100. http://dx.doi.org/10.18287/2412-6179-co-577.

Abstract:
The paper proposes a generalized method of adaptive median impulse noise filtering for video data processing. The method is based on the combined use of iterative processing and transformation of the result of median filtering based on the Lorentz distribution. Four different combinations of algorithmic blocks of the method are proposed. The experimental part of the paper presents the results of comparing the quality of the proposed method with known analogues. Video distorted by impulse noise with pixel distortion probabilities from 1% to 99% inclusive was used for the simulation. Numerical a…

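The sketch below shows only the core idea of median filtering extended to three dimensions (time plus the two image axes); the paper's method is adaptive and iterative and uses a Lorentz-distribution-based transformation, none of which is reproduced here. The video, noise probability, and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(2)
video = rng.integers(0, 256, size=(16, 64, 64)).astype(np.uint8)  # frames, H, W

# Salt-and-pepper impulse noise with pixel distortion probability p.
p = 0.10
noise = rng.random(video.shape)
noisy = video.copy()
noisy[noise < p / 2] = 0
noisy[noise > 1 - p / 2] = 255

# 3D median filtering: the window spans neighbouring frames as well as
# neighbouring pixels, which is the "3D generalization" idea.
denoised = median_filter(noisy, size=(3, 3, 3))
print(denoised.shape)
```
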
8. Li, Qiude, Qingyu Xiong, Shengfen Ji, Junhao Wen, Min Gao, Yang Yu, and Rui Xu. "Using fine-tuned conditional probabilities for data transformation of nominal attributes." Pattern Recognition Letters 128 (December 2019): 107–14. http://dx.doi.org/10.1016/j.patrec.2019.08.024.

9. Jain, Kirti. "Sentiment Analysis on Twitter Airline Data." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 3767–70. http://dx.doi.org/10.22214/ijraset.2021.35807.

Abstract:
Sentiment analysis, also known as sentiment mining, is a machine learning task where we want to determine the overall sentiment of a particular document. With machine learning and natural language processing (NLP), we can extract the information of a text and try to classify it as positive, neutral, or negative according to its polarity. In this project, we are trying to classify Twitter tweets into positive, negative, and neutral sentiments by building a model based on probabilities. Twitter is a blogging website where people can quickly and spontaneously share their feelings by sending tw…

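The abstract does not name the exact probabilistic model; a multinomial naive Bayes over word counts is the simplest "model based on probabilities" for this task, sketched here on a few invented tweets.

```python
from collections import Counter
import math

# Toy labelled tweets (invented); the model is a multinomial naive Bayes.
train = [("great flight thanks", "positive"),
         ("delayed again terrible service", "negative"),
         ("lost my luggage awful", "negative"),
         ("on time and friendly crew", "positive"),
         ("flight was okay", "neutral")]

classes = {c for _, c in train}
word_counts = {c: Counter() for c in classes}
class_counts = Counter(c for _, c in train)
for text, c in train:
    word_counts[c].update(text.split())
vocab = {w for text, _ in train for w in text.split()}

def predict(text):
    scores = {}
    for c in classes:
        total = sum(word_counts[c].values())
        # log prior + log likelihoods with Laplace smoothing
        score = math.log(class_counts[c] / len(train))
        for w in text.split():
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

print(predict("terrible delayed flight"))   # -> negative
```
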
10. Buhmann, Joachim, and Hans Kühnel. "Complexity Optimized Data Clustering by Competitive Neural Networks." Neural Computation 5, no. 1 (January 1993): 75–88. http://dx.doi.org/10.1162/neco.1993.5.1.75.

Abstract:
Data clustering is a complex optimization problem with applications ranging from vision and speech processing to data transmission and data storage in technical as well as in biological systems. We discuss a clustering strategy that explicitly reflects the tradeoff between simplicity and precision of a data representation. The resulting clustering algorithm jointly optimizes distortion errors and complexity costs. A maximum entropy estimation of the clustering cost function yields an optimal number of clusters, their positions, and their cluster probabilities. Our approach establishes a unifyi…

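A maximum-entropy treatment of clustering leads to soft assignment probabilities of Gibbs form, with a temperature that controls the simplicity-precision tradeoff. The sketch below implements only that soft-assignment iteration on synthetic 2-D data; it is not the authors' full algorithm with explicit complexity costs.

```python
import numpy as np

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
centers = data[rng.choice(len(data), 2, replace=False)].copy()

T = 0.5   # "temperature": higher T -> higher entropy, softer assignments
for _ in range(50):
    # Soft assignment probabilities from squared distortions (Gibbs form).
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / T)
    w /= w.sum(axis=1, keepdims=True)
    # Re-estimate centers as probability-weighted means.
    centers = (w[:, :, None] * data[:, None, :]).sum(0) / w.sum(0)[:, None]

print("centers:\n", centers)
print("cluster probabilities:", w.mean(axis=0))
```
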

Dissertations / Theses on the topic "Probabilities – Data processing"

1. Sun, Liwen (孙理文). "Mining uncertain data with probabilistic guarantees." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B45705392.

2. Navas Portella, Víctor. "Statistical modelling of avalanche observables: criticality and universality." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670764.

Abstract:
Complex systems can be understood as entities composed of a large number of interacting elements whose emergent global behaviour cannot be derived from the local laws characterizing their constituents. The observables characterizing these systems can be observed at different scales, and they often exhibit interesting properties such as a lack of characteristic scales and self-similarity. In this context, power-law type functions take an important role in the description of these observables. The presence of power-law functions resembles the situation of thermodynamic quantities close to a cri…

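For intuition about fitting power laws to avalanche observables, here is the standard continuous-case maximum-likelihood estimator of the exponent (the form popularized by Clauset, Shalizi and Newman), applied to synthetic data with a known exponent; x_min is assumed known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(4)
x_min, alpha_true = 1.0, 2.5

# Sample from a continuous power law p(x) ~ x^(-alpha) by inverse CDF.
u = rng.random(10_000)
x = x_min * (1 - u) ** (-1 / (alpha_true - 1))

# MLE of the exponent: alpha = 1 + n / sum(ln(x / x_min)).
alpha_hat = 1 + len(x) / np.log(x / x_min).sum()
print(f"estimated exponent: {alpha_hat:.3f}")   # close to 2.5
```
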
3. Franco, Samuel. "Searching for long transient gravitational waves in the LIGO-Virgo data." PhD thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01062708.

Abstract:
This thesis presents the results of the STAMPAS all-sky search for long transient gravitational waves in the 2005-2007 LIGO-Virgo data. Gravitational waves are perturbations of the space-time metric. The Virgo and LIGO experiments are designed to detect such waves. They are Michelson interferometers with 3 km and 4 km long arms, whose light output is altered during the passage of a gravitational wave. Until very recently, transient gravitational wave search pipelines were focused on short transients, lasting less than 1 second, and on binary coalescence signals. STAMPAS is one of the very first…

4. Antelo Junior, Ernesto Willams Molina. "Estimação conjunta de atraso de tempo subamostral e eco de referência para sinais de ultrassom." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2616.

Abstract:
In non-destructive ultrasonic testing, the signal obtained from a real data acquisition system may be contaminated by noise, and the echoes may have subsample time delays. In some cases, these aspects can compromise the information obtained from a signal by an acquisition system. To deal with such situations, time delay estimation (TDE) techniques and signal reconstruction techniques can be used to perform approximations and obtain more information about the data set. TDE techniques can be used…

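One standard way to obtain subsample time delays, sketched below, is cross-correlation followed by parabolic interpolation of the peak. The pulse shape, sampling rate, and noise level are invented, and this is not the thesis's joint estimation method.

```python
import numpy as np

fs = 1_000.0                         # sampling rate, Hz (assumed)
t = np.arange(0, 0.05, 1 / fs)
pulse = np.exp(-((t - 0.01) * 400) ** 2)   # reference echo (synthetic)

true_delay = 13.4 / fs               # 13.4 samples: a subsample delay
echo = (np.interp(t - true_delay, t, pulse)
        + 0.01 * np.random.default_rng(5).normal(size=t.size))

# Integer-sample delay from the cross-correlation peak...
xc = np.correlate(echo, pulse, mode="full")
k = int(np.argmax(xc))

# ...refined to subsample precision by fitting a parabola through the
# peak and its two neighbours (a common TDE refinement).
y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
delay = (k - (len(pulse) - 1) + delta) / fs
print(f"estimated delay: {delay * 1e3:.3f} ms (true {true_delay * 1e3:.3f} ms)")
```
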
5. Malmgren, Henrik. "Revision of an artificial neural network enabling industrial sorting." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-392690.

Abstract:
Convolutional artificial neural networks can be applied for image-based object classification to inform automated actions, such as handling of objects on a production line. The present thesis describes the theoretical background for creating a classifier and explores the effects of introducing a set of relatively recent techniques to an existing ensemble of classifiers in use for an industrial sorting system. The findings indicate that it is important to use spatial variety dropout regularization for high resolution image inputs, and to use an optimizer configuration with good convergence properties. T…

6. Jiang, Bin (Computer Science & Engineering, Faculty of Engineering, UNSW). "Probabilistic skylines on uncertain data." 2007. http://handle.unsw.edu.au/1959.4/40712.

Abstract:
Skyline analysis is important for multi-criteria decision making applications. The data in some of these applications are inherently uncertain due to various factors. Although a considerable amount of research has been dedicated separately to efficient skyline computation, as well as modeling uncertain data and answering some types of queries on uncertain data, how to conduct skyline analysis on uncertain data remains an open problem at large. In this thesis, we tackle the problem of skyline analysis on uncertain data. We propose a novel probabilistic skyline model where an uncertain object ma…

7. Murison, Robert. "Problems in density estimation for independent and dependent data." PhD thesis, 1993. http://hdl.handle.net/1885/136654.

Abstract:
The aim of this thesis is to provide two extensions to the theory of nonparametric kernel density estimation that increase the scope of the technique. The basic ideas of kernel density estimation are not new, having been proposed by Rosenblatt [20] and extended by Parzen [17]. The objective is that for a given set of data, estimates of functions of the distribution of the data such as probability densities are derived without recourse to rigid parametric assumptions and allow the data themselves to be more expressive in the statistical outcome. Thus kernel estimation has captured the im…

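The basic kernel density estimator is short enough to write out in full. This sketch uses a Gaussian kernel and Silverman's rule-of-thumb bandwidth on synthetic bimodal data; it illustrates the textbook estimator, not the thesis's extensions.

```python
import numpy as np

def kde(x_grid, data, bandwidth):
    """Gaussian kernel density estimate evaluated at the points in x_grid."""
    u = (x_grid[:, None] - data[None, :]) / bandwidth
    kernels = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / bandwidth

rng = np.random.default_rng(6)
data = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])

grid = np.linspace(-6, 6, 200)
h = 1.06 * data.std() * len(data) ** (-1 / 5)   # Silverman's rule of thumb
density = kde(grid, data, h)

# The estimate integrates to roughly 1 over the grid (Riemann sum check).
print(f"integral ≈ {density.sum() * (grid[1] - grid[0]):.3f}")
```
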
8. Kanetsi, Khahiso. "Annual peak rainfall data augmentation - A Bayesian joint probability approach for catchments in Lesotho." Thesis, 2017. https://hdl.handle.net/10539/25567.

Abstract:
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering, 2017.
The main problem to be investigated is how short-duration data records can be augmented using existing data from nearby catchments with long periods of record. The purpose of the investigation is to establish a method of improving hydrological data, using data from a gauged catchment to improve data from an ungauged catchment. The investigation is undertaken usi…

9. Wang, Haiou. "Logic sampling, likelihood weighting and AIS-BN: an exploration of importance sampling." Thesis, 2001. http://hdl.handle.net/1957/28769.

Abstract:
Logic Sampling, Likelihood Weighting and AIS-BN are three variants of stochastic sampling, one class of approximate inference for Bayesian networks. We summarize the ideas underlying each algorithm and the relationship among them. The results from a set of empirical experiments comparing Logic Sampling, Likelihood Weighting and AIS-BN are presented. We also test the impact of each of the proposed heuristics and learning method separately and in combination in order to give a deeper look into AIS-BN, and see how the heuristics and learning method contribute to the power of the algorithm. Key wo…

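Of the three algorithms, likelihood weighting is the easiest to sketch: evidence variables are fixed rather than sampled, and each sample is weighted by the likelihood of that evidence. Below it runs on the textbook sprinkler network; the CPT values are the usual illustrative ones.

```python
import random

random.seed(7)

# Tiny Bayesian network: Cloudy -> Sprinkler, Cloudy -> Rain,
# (Sprinkler, Rain) -> WetGrass, with the textbook CPTs.
P_cloudy = 0.5
P_sprinkler = {True: 0.1, False: 0.5}                 # given Cloudy
P_rain = {True: 0.8, False: 0.2}                      # given Cloudy
P_wet = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.90, (False, False): 0.00}   # given (Sprinkler, Rain)

def likelihood_weighting(n, wet_evidence=True):
    """Estimate P(Rain | WetGrass = evidence) by likelihood weighting."""
    num = den = 0.0
    for _ in range(n):
        cloudy = random.random() < P_cloudy
        sprinkler = random.random() < P_sprinkler[cloudy]
        rain = random.random() < P_rain[cloudy]
        # The evidence variable is not sampled; it contributes a weight.
        p = P_wet[(sprinkler, rain)]
        w = p if wet_evidence else 1 - p
        num += w * rain
        den += w
    return num / den

print(f"P(Rain | WetGrass) ≈ {likelihood_weighting(100_000):.3f}")  # ~0.708
```
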
10. Fazelnia, Ghazal. "Optimization for Probabilistic Machine Learning." Thesis, 2019. https://doi.org/10.7916/d8-jm7k-2k98.

Abstract:
We have access to a greater variety of datasets than at any time in history. Every day, more data is collected from various natural resources and digital platforms. Great advances in machine learning research in the past few decades have relied strongly on the availability of these datasets. However, analyzing them imposes significant challenges that are mainly due to two factors. First, the datasets have complex structures with hidden interdependencies. Second, most of the valuable datasets are high-dimensional and large-scale. The main goal of a machine learning framework is…


Books on the topic "Probabilities – Data processing"

1. Kelly, Brendan. Data management & probability module. Toronto, ON: Ontario Ministry of Education and Training, 1998.

2. Basic probability using MATLAB. Boston: PWS Pub. Co., 1995.

3. Petrushin, V. N. Informat︠s︡ionnai︠a︡ chuvstvitelʹnostʹ kompʹi︠u︡ternykh algoritmov [Information sensitivity of computer algorithms]. Moskva: FIZMATLIT, 2010.

4. Aizaki, Hideo. Stated preference methods using R. Boca Raton: CRC Press, Taylor & Francis Group, 2015.

5. Callender, J. T., ed. Exploring probability and statistics with spreadsheets. London: Prentice Hall, 1995.

6. Probability and random processes: Using MATLAB with applications to continuous and discrete time systems. Chicago: Irwin, 1997.

7. Rozhkov, V. A. Metody i sredstva statisticheskoĭ obrabotki i analiza informat︠s︡ii ob obstanovke v mirovom okeane na primere gidrometeorologii [Methods and means of statistical processing and analysis of information on conditions in the World Ocean, with hydrometeorology as an example]. Obninsk: VNIIGMI-MT︠S︡D, 2009.

8. Andrews, D. F. Calculations with random variables using Mathematica. Toronto: University of Toronto, Dept. of Statistics, 1990.

9. Petersen, E. R. PROPS+: Probabilistic and optimization spreadsheets plus what-if-solver. Reading, MA: Addison-Wesley, 1994.


Book chapters on the topic "Probabilities – Data processing"

1. Pegoraro, Marco, Bianka Bakullari, Merih Seran Uysal, and Wil M. P. van der Aalst. "Probability Estimation of Uncertain Process Trace Realizations." In Lecture Notes in Business Information Processing, 21–33. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-98581-3_2.

Abstract:
Process mining is a scientific discipline that analyzes event data, often collected in databases called event logs. Recently, uncertain event logs have become of interest, which contain non-deterministic and stochastic event attributes that may represent many possible real-life scenarios. In this paper, we present a method to reliably estimate the probability of each of such scenarios, allowing their analysis. Experiments show that the probabilities calculated with our method closely match the true chances of occurrence of specific outcomes, enabling more trustworthy analyses on uncertain data.

2. Kosheleva, Olga, and Vladik Kreinovich. "Beyond p-Boxes and Interval-Valued Moments: Natural Next Approximations to General Imprecise Probabilities." In Statistical and Fuzzy Approaches to Data Processing, with Applications to Econometrics and Other Areas, 133–43. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45619-1_11.

3. Kukar, Matjaž, Igor Kononenko, and Ciril Grošelj. "Automated Diagnostics of Coronary Artery Disease." In Data Mining, 1043–63. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2455-9.ch053.

Abstract:
The authors present results and the latest advances in their long-term study on using image processing and data mining methods in medical image analysis in general, and in the clinical diagnostics of coronary artery disease in particular. Since the evaluation of modern medical images is often difficult and time-consuming, the authors integrate advanced analytical and decision support tools into the diagnostic process. Partial diagnostic results, frequently obtained from tests with substantial imperfections, can thus be integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. The authors study various topics, such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction, and data mining algorithms that significantly outperform medical practice. During their long-term study (1995-2011) the authors achieved, among other minor results, two significant milestones. The first was achieved by using machine learning to significantly increase post-test diagnostic probabilities with respect to expert physicians. The second, even more significant result utilizes various advanced data analysis techniques, such as automatic multi-resolution image parameterization combined with feature extraction and machine learning methods, to significantly improve all aspects of diagnostic performance. With the proposed approach, clinical results are significantly, and fully automatically, improved throughout the study. Overall, the most significant result of the work is an improvement in the diagnostic power of the whole diagnostic process. The approach supports, but does not replace, physicians' diagnostic process, and can assist in decisions on the cost-effectiveness of diagnostic tests.

4. Kukar, Matjaž, Igor Kononenko, and Ciril Grošelj. "Automated Diagnostics of Coronary Artery Disease." In Medical Applications of Intelligent Data Analysis, 91–112. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-1803-9.ch006.

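The pre-test/post-test idea mentioned in the two preceding entries is a direct Bayes update. A minimal sketch, with an invented pre-test probability and test sensitivity/specificity (not figures from the study):

```python
def post_test_probability(pre_test, sensitivity, specificity, positive=True):
    """Bayes' update of disease probability after a diagnostic test result."""
    if positive:
        tp = pre_test * sensitivity                  # true positives
        fp = (1 - pre_test) * (1 - specificity)      # false positives
        return tp / (tp + fp)
    fn = pre_test * (1 - sensitivity)                # false negatives
    tn = (1 - pre_test) * specificity                # true negatives
    return fn / (fn + tn)

# Illustrative numbers only: 30% pre-test probability of disease and a
# test with 85% sensitivity / 80% specificity.
print(f"after a positive test: {post_test_probability(0.30, 0.85, 0.80):.2f}")
print(f"after a negative test: "
      f"{post_test_probability(0.30, 0.85, 0.80, positive=False):.2f}")
```
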
5. Chiverton, John, and Kevin Wells. "PV Modeling of Medical Imaging Systems." In Benford's Law. Princeton University Press, 2015. http://dx.doi.org/10.23943/princeton/9780691147611.003.0018.

Abstract:
This chapter applies a Bayesian formulation of the Partial Volume (PV) effect, based on the Benford distribution, to the statistical classification of nuclear medicine imaging data: specifically Positron Emission Tomography (PET) acquired as part of a PET-CT phantom imaging procedure. The Benford distribution is a discrete probability distribution of great interest for medical imaging, because it describes the probabilities of occurrence of single digits in many sources of data. The chapter describes the PET-CT imaging and post-processing used to derive a gold standard, which serves as a ground truth for the assessment of a Benford classifier formulation. The use of this gold standard shows that the classification of both the simulated and real phantom imaging data is well described by the Benford distribution.

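The Benford first-digit probabilities that underpin such a classifier follow from a one-line formula, P(d) = log10(1 + 1/d):

```python
import math

# First-digit Benford probabilities: P(d) = log10(1 + 1/d), d = 1..9.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}
for d, p in benford.items():
    print(f"P(first digit = {d}) = {p:.3f}")

# The nine probabilities telescope to log10(10) = 1.
print("sum =", round(sum(benford.values()), 10))
```
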
6. Harff, J. E., and R. A. Olea. "From Multivariate Sampling To Thematic Maps With An Application To Marine Geochemistry." In Computers in Geology - 25 Years of Progress. Oxford University Press, 1994. http://dx.doi.org/10.1093/oso/9780195085938.003.0027.

Abstract:
Integration of mapped data is one of the main problems in geological information processing. Structural, compositional, and genetic features of the Earth's crust may be apparent only if variables that were mapped separately are studied simultaneously. Geologists traditionally solve this problem by the "light table method." Mathematical geologists, in particular, D.F. Merriam, have applied multivariate techniques to data integration (Merriam and Sneath, 1966; Harbaugh and Merriam, 1968; Merriam and Jewett, 1988; Merriam and Sondergard, 1988; Herzfeld and Merriam, 1990; Brower and Merriam, 1990). In this article a regionalization concept based on the interpolation of Bayes' probabilities of class memberships is described using a geostatistical model called "classification probability kriging." The problem of interpolation between data points has not been considered in most of the publications on multivariate techniques mentioned above. An attempt at data integration—including interpolation of multivariate data vectors—was made by Harff and Davis (1990) using the concept of regionalized classification. This concept combines the theory of classification of geological objects (Rodionov, 1981) with the theory of regionalized variables (Matheron, 1970; Journel and Huijbregts, 1978). The method is based on the transformation of the original multivariate space of observed variables into a univariate space of rock types or rock classes. Distances between multivariate class centers and measurement vectors within the feature space are needed for this transformation. Such distances can be interpolated between the data points using kriging. Because of the assumptions of multinormality and the fact that Mahalanobis' distances tend to follow a χ² distribution, the distances must be normalized before kriging (Harff, Davis and Olea, 1991). From the resulting normalized distance vectors at each node of a spatial grid, the Bayes' probability of class membership can be calculated for each class. The corresponding grid nodes will be assigned to the classes with the greatest membership probabilities. The result is a regionalization scheme covering the area under investigation. Let X(r) denote the multivariate field of features, modeled as a regionalized variable (RV).

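The step from class-center distances to Bayes' membership probabilities can be sketched directly. Assuming Gaussian class-conditional densities with a shared covariance (so densities are proportional to exp(-D²/2), with D the Mahalanobis distance), and with all numbers invented:

```python
import numpy as np

# Two hypothetical rock classes with known centers, a shared covariance,
# and equal priors.
centers = np.array([[1.0, 2.0], [4.0, 0.5]])
cov = np.array([[1.0, 0.3], [0.3, 2.0]])
cov_inv = np.linalg.inv(cov)
priors = np.array([0.5, 0.5])

def membership_probabilities(x):
    diffs = x - centers                                   # (classes, dims)
    d2 = np.einsum("ij,jk,ik->i", diffs, cov_inv, diffs)  # squared Mahalanobis
    likes = priors * np.exp(-0.5 * d2)
    return likes / likes.sum()        # Bayes' probabilities, summing to 1

print(membership_probabilities(np.array([2.0, 1.5])))
```
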

Conference papers on the topic "Probabilities – Data processing"

1. Lokse, Sigurd, and Robert Jenssen. "Ranking Using Transition Probabilities Learned from Multi-Attribute Data." In ICASSP 2018 - 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018. http://dx.doi.org/10.1109/icassp.2018.8462132.

2. Reznik, A. L., A. A. Soloviev, and A. V. Torgov. "On the statistics of anomalous clumps in random point images." In Spatial Data Processing for Monitoring of Natural and Anthropogenic Processes 2021. Crossref, 2021. http://dx.doi.org/10.25743/sdm.2021.11.90.030.

Abstract:
New algorithms for calculating exact analytical formulas describing two related probabilities are proposed, substantiated, and implemented in software: 1) the probability of the formation of anomalously large local groups in a random point image; 2) the probability of the absence of significant local groupings in a random point image.

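The paper derives exact analytical formulas; for intuition only, the probability of anomalously large local groups can also be estimated by simulation. The clump notion below (points within a fixed radius of some point) and all parameters are crude stand-ins chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def simulate_max_group_sizes(n_points, radius, trials=20_000):
    """For uniform random points on the unit square, record the largest
    'local group' per trial, where a group is the set of points within
    `radius` of some point (including the point itself)."""
    sizes = np.zeros(trials, dtype=int)
    for t in range(trials):
        pts = rng.random((n_points, 2))
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        sizes[t] = (d < radius).sum(axis=1).max()
    return sizes

sizes = simulate_max_group_sizes(n_points=30, radius=0.08)
for k in range(2, 7):
    print(f"P(max group >= {k}) ≈ {(sizes >= k).mean():.4f}")
```
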
3. Ko, Hsiao-Han, Kuo-Jin Tseng, Li-Min Wei, and Meng-Hsiun Tsai. "Possible Disease-Link Genetic Pathways Constructed by Hierarchical Clustering and Conditional Probabilities of Ovarian Carcinoma Microarray Data." In 2010 Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP). IEEE, 2010. http://dx.doi.org/10.1109/iihmsp.2010.8.

4. Zhang, Xiaodong, Ying Min Low, and Chan Ghee Koh. "Prediction of Low Failure Probabilities With Application to Marine Risers." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-61574.

Abstract:
Offshore riser systems are subjected to wind, wave, and current loadings, which are random in nature. Nevertheless, current deterministic design and analysis practice cannot quantitatively evaluate the safety of structures with random environmental loadings taken into consideration, due to high computational costs. The structural reliability method, as an analysis tool to quantify the probability of failure of components or systems, can account for uncertainties in environmental conditions and system parameters. It is particularly useful in cases where limited experience exists or a risk-based…

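Plain Monte Carlo estimation of a failure probability looks as follows, for a toy capacity-minus-load limit state with invented lognormal parameters (nothing riser-specific). The standard error explains the paper's concern: for very low probabilities the required sample size explodes, which is why variance-reduction schemes such as importance sampling or subset simulation are used.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy limit-state function g(X) = R - S: failure when g < 0.
# Capacity R and load S are lognormal with assumed parameters.
n = 2_000_000
R = rng.lognormal(mean=np.log(8.0), sigma=0.10, size=n)
S = rng.lognormal(mean=np.log(3.0), sigma=0.25, size=n)

pf = ((R - S) < 0).mean()              # crude Monte Carlo estimate
se = np.sqrt(pf * (1 - pf) / n)        # its standard error
print(f"P(failure) ≈ {pf:.2e} ± {se:.1e}")
```
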
5. Jimeno Yepes, Antonio, Jianbin Tang, and Benjamin Scott Mashford. "Improving Classification Accuracy of Feedforward Neural Networks for Spiking Neuromorphic Chips." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/274.

Abstract:
Deep Neural Networks (DNN) achieve human level performance in many image analytics tasks but DNNs are mostly deployed to GPU platforms that consume a considerable amount of power. New hardware platforms using lower precision arithmetic achieve drastic reductions in power consumption. More recently, brain-inspired spiking neuromorphic chips have achieved even lower power consumption, on the order of milliwatts, while still offering real-time processing. However, for deploying DNNs to energy efficient neuromorphic chips the incompatibility between continuous neurons and synaptic weights of tradi…

6. Wang, Yan. "System Resilience Quantification for Probabilistic Design of Internet-of-Things Architecture." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-59426.

Abstract:
The objects in the Internet of Things (IoT) form a virtual space of information gathering and sharing through the networks. Designing IoT-compatible products that have the capabilities of data collection, processing, and communication requires an open and resilient architecture with flexibility and adaptability for dynamically evolving networks. Design for connectivity becomes an important subject in designing such products. To enable a resilience engineering approach for IoT systems design, quantitative measures of resilience are needed for analysis and optimization. In this paper, an approach fo…

7. Kyriazis, A., A. Tsalavoutas, K. Mathioudakis, M. Bauer, and O. Johanssen. "Gas Turbine Fault Identification by Fusing Vibration Trending and Gas Path Analysis." In ASME Turbo Expo 2009: Power for Land, Sea, and Air. ASMEDC, 2009. http://dx.doi.org/10.1115/gt2009-59942.

Abstract:
A fusion method that utilizes performance data and vibration measurements for gas turbine component fault identification is presented. The proposed method operates during the diagnostic processing of available data (process level) and adopts the principles of certainty factors theory. Both performance and vibration measurements are analyzed separately, in a first step, and their results are transformed into a common form of probabilities. These forms are interwoven in order to derive a set of possible faulty components prior to deriving a final diagnostic decision. Then, in the second step,…

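Certainty factors theory combines evidence from separate sources with a simple rule (the MYCIN-style combination). A sketch, with invented certainty factors for illustration:

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Hypothetical evidence for a "compressor fault": gas path analysis
# suggests it with CF 0.6, vibration trending with CF 0.5.
print(f"fused certainty factor: {combine_cf(0.6, 0.5):.2f}")   # 0.80
```
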
8. Galkin, Andrii, Iryna Polchaninova, Olena Galkina, and Iryna Balandina. "Retail trade area analysis using multiple variables modeling at residential zone." In Contemporary Issues in Business, Management and Economics Engineering. Vilnius Gediminas Technical University, 2019. http://dx.doi.org/10.3846/cibmee.2019.041.

Abstract:
Purpose – The purpose of the paper is to set up a method of retail trade area analysis using multiple-variables modeling at a residential zone. Research methodology – system analysis; regression analysis; correlation analysis; simulation; urban characteristics analysis. Findings – retail trade area analysis using multiple-variables modeling at a residential zone based on the proposed method is performed by directly processing and analysing data in a separate zone. Research limitations – the obtained results can be used within the data variation range of the conducted experiment. Practical implications – th…

9. Harlow, D. Gary. "Lower Tail Estimation of Fatigue Life." In ASME 2019 Pressure Vessels & Piping Conference. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/pvp2019-93104.

Abstract:
Uncertainty in the prediction of lower tail fatigue life behavior is a combination of many causes, some of which are aleatoric and some of which are systemic. The error cannot be entirely eliminated or quantified due to microstructural variability, manufacturing processing, approximate scientific modeling, and experimental inconsistencies. The effect of uncertainty is exacerbated in extreme value estimation for fatigue life distributions because, by necessity, those events are rare. In addition, there is typically a sparsity of data in the region of smaller stress levels in stress–lif…

10. Borozdin, Sergey Olegovich, Anatoly Nikolaevich Dmitrievsky, Nikolai Alexandrovich Eremin, Alexey Igorevich Arkhipov, Alexander Georgievich Sboev, Olga Kimovna Chashchina-Semenova, and Leonid Konstantinovich Fitzner. "Drilling Problems Forecast Based on Neural Network." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/30984-ms.

Abstract:
This paper poses and solves the problem of using artificial intelligence methods for processing big volumes of geodata from geological and technological measurement stations in order to identify and predict complications during well drilling. The volumes of geodata from the stations of geological and technological measurements during drilling ranged from units to tens of terabytes. Digital modernization of the life cycle of well construction using machine learning methods contributes to improving the efficiency of drilling oil and gas wells. The clustering of big volumes of geodata fr…
