Academic literature on the topic 'Statistics|Artificial intelligence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Statistics|Artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Statistics|Artificial intelligence"

1

Yu, Bin, and Karl Kumbier. "Artificial intelligence and statistics." Frontiers of Information Technology & Electronic Engineering 19, no. 1 (2018): 6–9. http://dx.doi.org/10.1631/fitee.1700813.

2

Ziegel, Eric. "Artificial Intelligence and Statistics." Technometrics 31, no. 1 (1989): 130. http://dx.doi.org/10.1080/00401706.1989.10488504.

3

Lowe, David. "“Artificial intelligence”, or statistics?" Significance 16, no. 4 (2019): 7. http://dx.doi.org/10.1111/j.1740-9713.2019.01291.x.

4

Wiggins, Lyna L. "Artificial intelligence and statistics." Computers, Environment and Urban Systems 13, no. 3 (1989): 213–15. http://dx.doi.org/10.1016/0198-9715(89)90027-6.

5

Ziegel, Eric R., and D. Hand. "Artificial Intelligence Frontiers in Statistics." Technometrics 37, no. 1 (1995): 127. http://dx.doi.org/10.2307/1269180.

6

Vôhandu, L. "Artificial intelligence frontiers in statistics." Engineering Applications of Artificial Intelligence 7, no. 1 (1994): 87. http://dx.doi.org/10.1016/0952-1976(94)90049-3.

7

Linster, Bruce G., and D. J. Hand. "Artificial Intelligence Frontiers in Statistics: AI and Statistics III." Southern Economic Journal 62, no. 3 (1996): 811. http://dx.doi.org/10.2307/1060915.

8

Thisted, Ronald A., and D. J. Hand. "Artificial Intelligence Frontiers in Statistics: AI and Statistics III." Journal of the American Statistical Association 89, no. 426 (1994): 719. http://dx.doi.org/10.2307/2290889.

9

Hart, Anna, and D. J. Hand. "Artificial Intelligence Frontiers in Statistics: AI and Statistics III." Statistician 43, no. 2 (1994): 333. http://dx.doi.org/10.2307/2348354.

10

Kharin, Yu S. "Artificial intelligence frontiers in statistics: AI and statistics III." Knowledge-Based Systems 7, no. 1 (1994): 57–58. http://dx.doi.org/10.1016/0950-7051(94)90017-5.


Dissertations / Theses on the topic "Statistics|Artificial intelligence"

1

Jutras, Pierre. "Modeling of urban tree growth with artificial intelligence and multivariate statistics." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=21963.

Abstract:
The urban environment induces severe ecological conditions that impair tree growth. This situation is of major concern to municipal administrations that devote large budgets to arboricultural programmes. To adequately preserve arboreal heritage, three main issues must be resolved. First, biotic and abiotic inventory parameters that can express the complexity of street tree growth have to be assessed. Second, for an enhanced understanding of tree health and related environmental conditions, an analytical methodology should be defined to cluster street trees with similar growth patterns. Third, optimized tree-inventory procedures ought to be determined. To fulfill these objectives, multiple variables were measured on 1532 trees and associated sites in Montreal (Quebec, Canada). Seven species representing 75% of the total street tree population were sampled. To identify key inventory parameters, two approaches were used: multivariate statistics (principal coordinate and correspondence analyses) determined biotic variables, and contingency analysis investigated environmental variables. Results from the multivariate analysis revealed that qualitative biotic parameters are of low explanatory importance. Conversely, it was discovered that modeling with the synergistic combination of 11 specific quantitative biotic parameters gave an adequate portrayal of all tree physiological stages. Contingency analysis unveiled links between some environmental factors and tree growth. Overall, nine factors were identified as central inventory parameters for some or all species. To develop the classification methodology, a two-step procedure was chosen. First, intermediate linkage clustering and correspondence analysis were used to ascertain groups with dissimilar growth rates. Second, the clustering knowledge was used to train radial basis function networks to recognize tree growth patterns and predict cluster affiliation. 
Global cluster classification was estimated by computing t
2

Valenzuela, Russell. "Predicting National Basketball Association Game Outcomes Using Ensemble Learning Techniques." Thesis, California State University, Long Beach, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10980443.

Abstract:
There have been a number of studies that try to predict sporting event outcomes. Most previous research has involved results in football and college basketball. Recent years have seen similar approaches carried out in professional basketball. This thesis attempts to build upon existing statistical techniques and apply them to the National Basketball Association, using a synthesis of algorithms as motivation. A number of ensemble learning methods will be utilized and compared in hopes of improving the accuracy of single models. Individual models used in this thesis will be derived from Logistic Regression, Naïve Bayes, Random Forests, Support Vector Machines, and Artificial Neural Networks, while aggregation techniques include Bagging, Boosting, and Stacking. Data from previous seasons and games, from both players and teams, will be used to train models in R.
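The aggregation methods the abstract names (Bagging, Boosting, Stacking) all combine many base models into one prediction. As a minimal, hedged sketch of the bagging variant only, here is bootstrap resampling plus majority voting in Python; the one-feature "point differential" data and the threshold stump are invented for illustration and are not from the thesis:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Draw a resample of the same size, with replacement."""
    return [rng.choice(data) for _ in data]

def train_threshold_stump(sample):
    """A deliberately trivial base learner: predict a win (1) when the
    single feature exceeds the mean of the training sample."""
    mean = sum(x for x, _ in sample) / len(sample)
    return lambda x: 1 if x > mean else 0

def bagged_predict(models, x):
    """Majority vote across the base models (the bagging step)."""
    return Counter(m(x) for m in models).most_common(1)[0][0]

rng = random.Random(0)
# Toy data: (average point differential, won the next game?)
data = [(-8, 0), (-5, 0), (-2, 0), (1, 1), (4, 1), (7, 1)]
models = [train_threshold_stump(bootstrap_sample(data, rng)) for _ in range(25)]
print(bagged_predict(models, 6), bagged_predict(models, -6))
```

Boosting and stacking train and weight the base models differently, but follow the same combine-many-learners pattern.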
3

Tolle, Kristin M. "Domain-independent semantic concept extraction using corpus linguistics, statistics and artificial intelligence techniques." Diss., The University of Arizona, 2003. http://hdl.handle.net/10150/280502.

Abstract:
For this dissertation two software applications were developed and three experiments were conducted to evaluate the viability of a unique approach to medical information extraction. The first system, the AZ Noun Phraser, was designed as a concept extraction tool. The second application, ANNEE, is a neural net-based entity extraction (EE) system. These two systems were combined to perform concept extraction and semantic classification specifically for use in medical document retrieval systems. The goal of this research was to create a system that automatically (without human interaction) enabled semantic type assignment, such as gene name and disease, to concepts extracted from unstructured medical text documents. Improving conceptual analysis of search phrases has been shown to improve the precision of information retrieval systems. Enabling this capability in the field of medicine can aid medical researchers, doctors and librarians in locating information, potentially improving healthcare decision-making. Due to the flexibility and non-domain specificity of the implementation, these applications have also been successfully deployed in other text retrieval experimentation for law enforcement (Atabakhsh et al., 2001; Hauck, Atabakhsh, Ongvasith, Gupta, & Chen, 2002), medicine (Tolle & Chen, 2000), query expansion (Leroy, Tolle, & Chen, 2000), web document categorization (Chen, Fan, Chau, & Zeng, 2001), Internet spiders (Chau, Zeng, & Chen, 2001), collaborative agents (Chau, Zeng, Chen, Huang, & Hendriawan, 2002), competitive intelligence (Chen, Chau, & Zeng, 2002), and Internet chat-room data visualization (Zhu & Chen, 2001).
4

Nyongesa, Denis Barasa. "Various considerations on performance measures for a classification of ordinal data." Thesis, California State University, Long Beach, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10133995.

Abstract:
The technological advancement and the escalating interest in personalized medicine have resulted in increased ordinal classification problems. The most commonly used performance metrics for evaluating the effectiveness of a multi-class ordinal classifier include predictive accuracy, Kendall's tau-b rank correlation, and the average mean absolute error (AMAE). These metrics are beneficial in the quest to classify multi-class ordinal data, but no single performance metric incorporates the misclassification cost. Recently, distance, which finds the optimal trade-off between predictive accuracy and misclassification cost, was proposed as a cost-sensitive performance metric for ordinal data. This thesis proposes criteria for variable selection and methods that account for minimum distance and improved accuracy, thereby providing a platform for a more comprehensive and comparative analysis of multiple ordinal classifiers. The strengths of our methodology are demonstrated through real data analysis of a colon cancer data set.
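As a hedged illustration of two of the metrics the abstract names, the snippet below computes predictive accuracy and a mean absolute error on toy ordinal labels (invented for illustration, not the colon cancer data). For ordinal classes coded 0, 1, 2, ..., the absolute rank difference serves as a simple misclassification cost:

```python
def accuracy(y_true, y_pred):
    """Fraction of exactly correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference in ordinal rank: a one-level miss
    costs 1, a two-level miss costs 2, and so on."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy ordinal outcomes, e.g. disease stage coded 0-3
y_true = [0, 1, 2, 3, 2, 1]
y_pred = [0, 1, 1, 3, 0, 1]
print(accuracy(y_true, y_pred))            # 4 of 6 exactly correct
print(mean_absolute_error(y_true, y_pred)) # (0+0+1+0+2+0)/6 = 0.5
```

Accuracy treats every miss equally, while the rank-based error penalizes a two-level miss twice as much, which is the trade-off the abstract's "distance" metric is designed to balance.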
5

Costa, Thiago. "A Non-Parametric Perspective on the Analysis of Massive Networks." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467341.

Abstract:
This dissertation develops an inferential framework for a highly non-parametric class of network models called graphons, which are the limit objects of converging sequences in the theory of dense graph limits. The theory, introduced by Lovász and co-authors, uses structural properties of very large networks to describe a notion of convergence for sequences of dense graphs. Converging sequences define a limit which can be represented by a class of random graphs that preserve many properties of the networks in the sequence. These random graphs are intuitive and have a relatively simple mathematical representation, but they are very difficult to estimate due to their non-parametric nature. Our work, which develops scalable and consistent methods for estimating graphons, offers an algorithmic framework that can be used to unlock the potential of applications of this powerful theory. To estimate graphons we use a stochastic blockmodel approximation approach that defines a notion of similarity between vertices to cluster vertices and find the blocks. We show how to compute these similarity distances from a given graph and how to properly cluster the vertices of the graph in order to form the blocks. The method is non-parametric, i.e., it uses the data to choose a convenient number of clusters. Our approach requires a careful balance between the number of blocks created, which is associated with stochastic blockmodel approximation of the graphon, and the size of the clusters, which is associated with the estimation of the stochastic blockmodel parameters. We prove insightful properties regarding the clustering mechanism and the similarity distance, and we also work with important variations of the graphon model, including a sparser type of graphon. As an application of our framework, we use the stochastic blockmodel nature of our method to improve identification of treatment response with social interaction. 
We show how the graph structure provided by our algorithm can be explored to design optimal experiments for assessing social effect on treatment response.
Engineering and Applied Sciences - Applied Math
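Since the estimator described above approximates a graphon by a stochastic blockmodel, a small sketch may help make the blockmodel concrete: vertices in blocks i and j are joined independently with probability B[i][j], so the model is a piecewise-constant graphon. The two-block sizes and edge probabilities below are invented for illustration and are not the dissertation's estimator:

```python
import random

def sample_sbm(block_sizes, B, rng):
    """Sample an undirected graph from a stochastic blockmodel with
    block-to-block edge probabilities B."""
    labels = [i for i, size in enumerate(block_sizes) for _ in range(size)]
    edges = set()
    for u in range(len(labels)):
        for v in range(u + 1, len(labels)):
            if rng.random() < B[labels[u]][labels[v]]:
                edges.add((u, v))
    return labels, edges

rng = random.Random(7)
labels, edges = sample_sbm([20, 20], [[0.5, 0.05], [0.05, 0.5]], rng)
within = sum(1 for u, v in edges if labels[u] == labels[v])
print(len(edges), within)  # dense within blocks, sparse between
```

Estimation runs in the other direction: given only the sampled graph, cluster vertices whose connection patterns are similar to recover the blocks, then estimate each B[i][j] from edge frequencies.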
6

Palitawanont, Nanta. "An Investigation into the Effectiveness of Intelligent Tutoring on Learning of College Level Statistics." Thesis, University of North Texas, 1989. https://digital.library.unt.edu/ark:/67531/metadc331166/.

Abstract:
The present research incorporated the content of basic statistics into the Artificial Intelligence Physics Tutor (ARPHY), which was used as the expert system shell, and investigated the effects of the Artificial Intelligent Statistics Tutor (ARSTAT) as a supplement to learning statistics at the college level. Two classes of an introductory educational statistics course in the Department of Educational Foundations, University of North Texas, were used in the study. The daytime class was used as the experimental group and the evening class was used as the control group. The experimental group's lecture/discussion was supplemented with ARSTAT, and the control group received only lecture/discussion. A one-way analysis of covariance was used to compare students' test scores. No significant difference was found; however, the adjusted mean score of the experimental group was slightly higher than that of the control group. A two-way analysis of covariance showed no significant main effect or interaction between gender and study technique. A second two-way analysis of covariance showed no significant interaction between the students' attitude toward statistics and the study technique used. However, the students with a statistics-positive attitude scored significantly higher on the test than students who had a negative attitude toward statistics. This study concluded that the ARSTAT can be used effectively as a tutor for students taking an introductory course in educational statistics. The following recommendations for further study were made: incorporate more advanced topics of statistics into the ARPHY teaching model; incorporate the ARPHY learning theory and statistical content using another version of LISP language or another programming language such as PROLOG; and compare the ARSTAT tutor to some other kind of supplement to lecture/discussion.
7

Navaroli, Nicholas Martin. "Generative Probabilistic Models for Analysis of Communication Event Data with Applications to Email Behavior." Thesis, University of California, Irvine, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3668831.

Abstract:
Our daily lives increasingly involve interactions with others via different communication channels, such as email, text messaging, and social media. In this context, the ability to analyze and understand our communication patterns is becoming increasingly important. This dissertation focuses on generative probabilistic models for describing different characteristics of communication behavior, focusing primarily on email communication.
First, we present a two-parameter kernel density estimator for estimating the probability density over recipients of an email (or, more generally, items which appear in an itemset). A stochastic gradient method is proposed for efficiently inferring the kernel parameters given a continuous stream of data. Next, we apply the kernel model and the Bernoulli mixture model to two important prediction tasks: given a partially completed email recipient list, 1) predict which others will be included in the email, and 2) rank potential recipients based on their likelihood to be added to the email. Such predictions are useful in suggesting future actions to the user (i.e., which person to add to an email) based on their previous actions. We then investigate a piecewise-constant Poisson process model for describing the time-varying communication rate between an individual and several groups of their contacts, where changes in the Poisson rate are modeled as latent state changes within a hidden Markov model.
We next focus on the time it takes for an individual to respond to an event, such as receiving an email. We show that this response time depends heavily on the individual's typical daily and weekly patterns, which are not adequately captured in standard models of response time (e.g., the Gamma distribution or Hawkes processes). A time-warping mechanism is introduced where the absolute response time is modeled as a transformation of the effective response time, relative to the daily and weekly patterns of the individual. The usefulness of applying the time-warping mechanism to standard models of response time, both in terms of log-likelihood and accuracy in predicting which events will be quickly responded to, is illustrated over several individual email histories.
8

Wu, Tao. "Higher-order Random Walk Methods for Data Analysis." Thesis, Purdue University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10790747.

Abstract:
Markov random walk models are powerful analytical tools for multiple areas in machine learning, numerical optimization and data mining tasks. The key assumption of a first-order Markov chain is memorylessness, which restricts the dependence of the transition distribution to the current state only. However, in many applications this assumption is not appropriate. We propose a set of higher-order random walk techniques and discuss their applications to tensor co-clustering, user trails modeling, and solving linear systems. First, we develop a new random walk model that we call the super-spacey random surfer, which simultaneously clusters the rows, columns, and slices of a nonnegative three-mode tensor. This algorithm generalizes to tensors with any number of modes. We partition the tensor by minimizing the exit probability between clusters when the super-spacey random walk is at stationarity. The second application is user trails modeling, where user trails record sequences of activities as individuals interact with the Internet and the world. We propose the retrospective higher-order Markov process as a two-step process: first choose a state from the history, then transition as a first-order chain conditional on that state. This way the total number of parameters is restricted and the model is protected from overfitting. Lastly, we propose to use a time-inhomogeneous Markov chain to approximate the solution of a linear system. Multiple simulations of the random walk are conducted to approximate the solution. By allowing the random walk to transition based on multiple matrices, we decrease the variance of the simulations and thus increase the speed of the solver.
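The retrospective higher-order process described in the abstract admits a very small sketch: estimate first-order transition probabilities from a trail, then transition by first picking an anchor state from the history. The trail below is invented for illustration and is not from the dissertation:

```python
import random

def first_order_counts(trail):
    """Estimate first-order transition probabilities from a user trail."""
    counts = {}
    for a, b in zip(trail, trail[1:]):
        counts.setdefault(a, {}).setdefault(b, 0)
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

def retrospective_step(history, P, rng):
    """Two-step transition: pick a past state uniformly from the history,
    then move as a first-order chain conditional on that anchor state."""
    anchor = rng.choice(history)
    nxt = list(P[anchor])
    return rng.choices(nxt, weights=[P[anchor][s] for s in nxt], k=1)[0]

trail = "AABABBAAB"
P = first_order_counts(list(trail))
print(P["A"])  # transition distribution out of state 'A': {'A': 0.4, 'B': 0.6}
print(retrospective_step(list("AAB"), P, random.Random(1)))
```

Because only one first-order transition matrix is stored, the parameter count stays linear in the number of states, which is the overfitting protection the abstract describes.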
9

Ross, Stephane. "Interactive Learning for Sequential Decisions and Predictions." Thesis, Carnegie Mellon University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3575525.

Abstract:
Sequential prediction problems arise commonly in many areas of robotics and information processing: e.g., predicting a sequence of actions over time to achieve a goal in a control task, interpreting an image through a sequence of local image patch classifications, or translating speech to text through an iterative decoding procedure.
Learning predictors that can reliably perform such sequential tasks is challenging. Specifically, as predictions influence future inputs in the sequence, the data-generation process and executed predictor are inextricably intertwined. This can often lead to a significant mismatch between the distribution of examples observed during training (induced by the predictor used to generate training instances) and test executions (induced by the learned predictor). As a result, naively applying standard supervised learning methods, which assume independently and identically distributed training and test examples, often leads to poor test performance and compounding errors: inaccurate predictions lead to untrained situations where more errors are inevitable.
This thesis proposes general iterative learning procedures that leverage interactions between the learner and teacher to provably learn good predictors for sequential prediction tasks. Through repeated interactions, our approaches can efficiently learn predictors that are robust to their own errors and predict accurately during test executions. Our main approach uses existing no-regret online learning methods to provide strong generalization guarantees on test performance.
We demonstrate how to apply our main approach in various sequential prediction settings: imitation learning, model-free reinforcement learning, system identification, structured prediction and submodular list predictions. Its efficiency and wide applicability are exhibited over a large variety of challenging learning tasks, ranging from learning video game playing agents from human players and accurate dynamic models of a simulated helicopter for controller synthesis, to learning predictors for scene understanding in computer vision, news recommendation and document summarization. We also demonstrate the applicability of our technique on a real robot, using pilot demonstrations to train an autonomous quadrotor to avoid trees seen through its onboard camera (monocular vision) when flying at low altitude in natural forest environments.
Our results throughout show that unlike typical supervised learning tasks, where examples of good behavior are sufficient to learn good predictors, interaction is a fundamental part of learning in sequential tasks. We show formally that some level of interaction is necessary, as without interaction no learning algorithm can guarantee good performance in general.
10

Ramsahai, Roland Ryan. "Causal inference with instruments and other supplementary variables." Thesis, University of Oxford, 2008. http://ora.ox.ac.uk/objects/uuid:df2961da-0843-421f-8be4-66a92e6b0d13.

Abstract:
Instrumental variables have been used for a long time in the econometrics literature for the identification of the causal effect of one random variable, B, on another, C, in the presence of unobserved confounders. In the classical continuous linear model, the causal effect can be point identified by studying the regression of C on A and B on A, where A is the instrument. An instrument is an instance of a supplementary variable which is not of interest in itself but aids identification of causal effects. The method of instrumental variables is extended here to generalised linear models, for which only bounds on the causal effect can be computed. For the discrete instrumental variable model, bounds have been derived in the literature for the causal effect of B on C in terms of the joint distribution of (A,B,C). Using an approach based on convex polytopes, bounds are computed here in terms of the pairwise (A,B) and (A,C) distributions, in direct analogy to the classic use but without the linearity assumption. The bounding technique is also adapted to instrumental models with stronger and weaker assumptions. The computation produces constraints which can be used to invalidate the model. In the literature, constraints of this type are usually tested by checking whether the relative frequencies satisfy them. This is unsatisfactory from a statistical point of view as it ignores the sampling uncertainty of the data. Given the constraints for a model, a proper likelihood analysis is conducted to develop a significance test for the validity of the instrumental model and a bootstrap algorithm for computing confidence intervals for the causal effect. Applications are presented to illustrate the methods and the advantage of a rigorous statistical approach. The use of covariates and intermediate variables for improving the efficiency of causal estimators is also discussed.
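In the classical continuous linear model the abstract mentions, the causal effect of B on C is identified by the ratio of the regressions of C on A and of B on A, equivalently Cov(A, C) / Cov(A, B). A simulation sketch with synthetic data; the coefficients and noise levels are invented for illustration and are not from the thesis:

```python
import random

def cov(x, y):
    """Sample covariance (population normalization is fine for this sketch)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def iv_estimate(a, b, c):
    """Classical linear instrumental-variable estimate of the effect of B on C."""
    return cov(a, c) / cov(a, b)

rng = random.Random(42)
n = 5000
a = [rng.gauss(0, 1) for _ in range(n)]               # instrument
u = [rng.gauss(0, 1) for _ in range(n)]               # unobserved confounder
b = [ai + ui + rng.gauss(0, 0.5) for ai, ui in zip(a, u)]
c = [2 * bi + 3 * ui + rng.gauss(0, 0.5) for bi, ui in zip(b, u)]
print(round(iv_estimate(a, b, c), 2))  # close to the true effect of 2
```

A naive regression of C on B would be biased upward here because U drives both; the instrument A is correlated with B but independent of U, which is what restores identification in the linear case. The thesis's contribution concerns the discrete and generalized settings, where only bounds on the effect are available.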

Books on the topic "Statistics|Artificial intelligence"

1

Hand, D. J., ed. Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nicholson, Ann E., ed. Bayesian artificial intelligence. 2nd ed. CRC Press, 2011.

3

Kharin, Yurij. Robustness in Statistical Pattern Recognition. Springer Netherlands, 1996.

4

Marwala, Tshilidzi. Economic Modeling Using Artificial Intelligence Methods. Springer London, 2013.

5

Kruse, Rudolf. Synergies of Soft Computing and Statistics for Intelligent Data Analysis. Springer Berlin Heidelberg, 2013.

6

Rzempoluck, Edward J. Neural Network Data Analysis Using SimulnetTM. Springer New York, 1998.

7

Berthold, M. Intelligent data analysis: An introduction. Springer, 1999.

8

Goodman, I. R. Mathematics of Data Fusion. Springer Netherlands, 1997.

9

Berthold, M. Guide to intelligent data analysis: How to intelligently make sense of real data. Springer, 2010.

10

Vapnik, Vladimir Naumovich. The Nature of Statistical Learning Theory. Springer New York, 1995.


Book chapters on the topic "Statistics|Artificial intelligence"

1

Morik, Katharina. "A Note on Artificial Intelligence and Statistics." In Studies in Classification, Data Analysis, and Knowledge Organization. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-25147-5_8.

2

Cox, L. A. "Combining the probability judgements of experts: statistical and artificial intelligence approaches." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_26.

3

Lorenzen, T. J., L. T. Truss, W. S. Spangler, W. T. Corpus, and A. B. Parker. "DEXPERT: an expert system for the design of experiments." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_1.

4

Ranjbar, A., and M. McLeish. "Intelligent arc addition, belief propagation and utilization of parallel processors by probabilistic inference engines." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_10.

5

Shenoy, P. P. "A new method for representing and solving Bayesian decision problems." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_11.

6

Glymour, C., P. Spirtes, and R. Scheines. "Inferring causal structure in mixed populations." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_12.

7

Tsujino, K., and S. Nishida. "A knowledge acquisition inductive system driven by empirical interpretation of derived results." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_13.

8

Esposito, F., D. Malerba, and G. Semeraro. "Incorporating statistical techniques into empirical symbolic learning systems." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_14.

9

Buntine, W. "Learning classification trees." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_15.

10

Crawford, S. L., and R. M. Fung. "An analysis of two probabilistic model induction techniques." In Artificial Intelligence Frontiers in Statistics. Springer US, 1993. http://dx.doi.org/10.1007/978-1-4899-4537-2_16.


Conference papers on the topic "Statistics|Artificial intelligence"

1

Shuxun, Yan, Wang Ying, Li Huan, and Li Yun. "Classification and Statistics of Endocrine Diseases and Diagnoses Based on Artificial Intelligence." In 2013 Fourth International Conference on Intelligent Systems Design and Engineering Applications (ISDEA). IEEE, 2013. http://dx.doi.org/10.1109/isdea.2013.450.

2

Nieto-Chaupis, Huber. "Identification of the Social Duality: Street Criminality and High Vehicle Traffic in Lima City by Using Artificial Intelligence Through the Fisher-Snedecor Statistics and Shannon’s Entropy." In 2018 IEEE International Smart Cities Conference (ISC2). IEEE, 2018. http://dx.doi.org/10.1109/isc2.2018.8656935.

3

Khan, Mohammad Rasheed, Shams Kalam, and Rizwan Ahmed Khan. "Development of a Computationally Intelligent Model to Estimate Oil Formation Volume Factor." In Offshore Technology Conference. OTC, 2021. http://dx.doi.org/10.4043/31312-ms.

Abstract:
Abstract This investigation presents a powerful predictive model to determine crude oil formation volume factor (FVF) using state-of-the-art computational intelligence (CI) techniques. FVF is a vital pressure-volume-temperature (PVT) parameter used to characterize hydrocarbon systems and is pivotal to reserve evaluation studies and reservoir engineering calculations. Ideally, FVF is measured at the laboratory scale; however, prognostic tools to evaluate this parameter can aid in optimizing time and cost estimates. The database utilized in this study is obtained from open literature and covers statistics of crude oils of Pakistan, Iran, UAE, and Malaysia. Resultantly, this allows to move step forward towards the creation of a generalized model. Multiple CI algorithms are considered, including Artificial Neural Networks (ANN) and Artificial Neural Fuzzy Inference Systems (ANFIS). Models for CI are developed utilizing an optimization strategy for various parameters/hyper-parameters of the respective algorithms. Unique permutations and combinations for the number of perceptron and their resident layers is investigated to reach a solution that provides the most optimum output. These intelligent models are produced as a function of the parameters intrinsically affecting FVF; reservoir temperature, solution GOR, gas specific gravity, and crude oil API gravity. Comparative analysis of various CI models is performed using visualization/statistical analysis and the best model pointed out. Finally, the mathematical equation extraction to determine FVF is accomplished with the respective weights and bias for the model presented. Graphical analysis using scatter plots with a coefficient of determination (R2) illustrates that ANN equation produces the most accurate predictions for oil FVF with R2 in excess of 0.96. 
Moreover, during this study an error metric is developed comprising multiple analysis parameters: Average Absolute Error (AAE), Root Mean Squared Error (RMSE), and the correlation coefficient (R). All models investigated are tested on an unseen dataset to prevent the development of a biased model. Performance of the established CI models is gauged with this error metric, which demonstrates that ANN outperforms the other models, with errors within 2% of the measured PVT values. The computationally derived intelligent model provides the strongest predictive capabilities, as it maps the complex non-linear interactions between the various input parameters and FVF.
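The three error-metric components named in the abstract (AAE, RMSE, R) can be sketched as below. This is a minimal illustration, not the paper's code: AAE is assumed here to mean the mean absolute error, and the FVF values are made-up placeholders.

```python
import math

def error_metrics(measured, predicted):
    """Compute Average Absolute Error (AAE), Root Mean Squared Error (RMSE),
    and the Pearson correlation coefficient (R) between two series."""
    n = len(measured)
    aae = sum(abs(m - p) for m, p in zip(measured, predicted)) / n
    rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
    mean_m = sum(measured) / n
    mean_p = sum(predicted) / n
    cov = sum((m - mean_m) * (p - mean_p) for m, p in zip(measured, predicted))
    var_m = sum((m - mean_m) ** 2 for m in measured)
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    r = cov / math.sqrt(var_m * var_p)
    return aae, rmse, r

# Illustrative measured vs. predicted oil FVF values (rb/STB), not from the paper
measured = [1.10, 1.25, 1.32, 1.48, 1.60]
predicted = [1.12, 1.24, 1.30, 1.50, 1.58]
aae, rmse, r = error_metrics(measured, predicted)
```

Note that RMSE is always at least as large as AAE, since the quadratic mean dominates the arithmetic mean of the absolute errors.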
APA, Harvard, Vancouver, ISO, and other styles
4

Roemer, Michael J., Rolf F. Orsagh, Gregory J. Kacprzynski, James Scheid, Richard Friend, and William Sotomayer. "Upgrading Engine Test Cells for Improved Troubleshooting and Diagnostics." In ASME Turbo Expo 2002: Power for Land, Sea, and Air. ASMEDC, 2002. http://dx.doi.org/10.1115/gt2002-30034.

Full text
Abstract:
Upgrading military engine test cells with advanced diagnostic and troubleshooting capabilities will play a critical role in increasing aircraft availability and test cell effectiveness while simultaneously reducing engine operating and maintenance costs. Sophisticated performance and mechanical anomaly detection and fault classification algorithms utilizing thermodynamic, statistical, and empirical engine models are now being implemented as part of a United States Air Force Advanced Test Cell Upgrade Initiative. Under this program, a comprehensive set of real-time and post-test diagnostic software modules, including sensor validation algorithms, performance fault classification techniques and vibration feature analysis are being developed. An automated troubleshooting guide is also being implemented to streamline the troubleshooting process for both inexperienced and experienced technicians. This artificial intelligence based tool enhances the conventional troubleshooting tree architecture by incorporating probability of occurrence statistics to optimize the troubleshooting path. This paper describes the development and implementation of the F404 engine test cell upgrade at the Jacksonville Naval Air Station.
APA, Harvard, Vancouver, ISO, and other styles
5

Smith, David, Sara Rouhani, and Vibhav Gogate. "Order Statistics for Probabilistic Graphical Models." In Twenty-Sixth International Joint Conference on Artificial Intelligence. International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/645.

Full text
Abstract:
We consider the problem of computing r-th order statistics, namely finding an assignment having rank r in a probabilistic graphical model. We show that the problem is NP-hard even when the graphical model has no edges (zero-treewidth models), via a reduction from the partition problem. Building on this reduction, we use a pseudo-polynomial time algorithm for number partitioning to yield a pseudo-polynomial time approximation algorithm for solving the r-th order statistics problem in zero-treewidth models. We then extend this algorithm to arbitrary graphical models by generalizing it to tree decompositions, and demonstrate via experimental evaluation on various datasets that our proposed algorithm is more accurate than sampling algorithms.
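The pseudo-polynomial number-partitioning routine that the reduction relies on can be sketched as a textbook subset-sum dynamic program (an illustration of the technique, not the authors' rank-r algorithm):

```python
def can_partition(nums):
    """Pseudo-polynomial DP for the partition problem: can `nums` be split
    into two subsets with equal sums?  Runs in O(n * sum(nums)) time."""
    total = sum(nums)
    if total % 2:
        return False          # an odd total can never be split evenly
    target = total // 2
    reachable = {0}           # subset sums achievable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable
```

For example, `can_partition([3, 1, 1, 2, 2, 1])` is true (e.g., {3, 2} vs. {1, 1, 2, 1}), while `can_partition([1, 2, 5])` is false.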
APA, Harvard, Vancouver, ISO, and other styles
6

Li, Hang, Chen Ma, Wei Xu, and Xue Liu. "Feature Statistics Guided Efficient Filter Pruning." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/363.

Full text
Abstract:
Building compact convolutional neural networks (CNNs) with reliable performance is a critical but challenging task, especially when deploying them in real-world applications. As a common approach to reduce the size of CNNs, pruning methods delete part of the CNN filters according to some metrics such as l1-norm. However, previous methods hardly leverage the information variance in a single feature map and the similarity characteristics among feature maps. In this paper, we propose a novel filter pruning method, which incorporates two kinds of feature map selections: diversity-aware selection (DFS) and similarity-aware selection (SFS). DFS aims to discover features with low information diversity while SFS removes features that have high similarities with others. We conduct extensive empirical experiments with various CNN architectures on publicly available datasets. The experimental results demonstrate that our model obtains up to 91.6% parameter decrease and 83.7% FLOPs reduction with almost no accuracy loss.
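The l1-norm criterion the abstract cites as the common pruning baseline can be sketched as follows. This is an assumed minimal baseline for contrast, not the paper's DFS/SFS method (which ranks feature maps by diversity and similarity instead); the filter weights are made up.

```python
def prune_filters_l1(filters, keep_ratio=0.5):
    """Baseline l1-norm filter pruning: rank each 2-D filter by the sum of
    absolute weights and keep the top fraction.  Returns kept indices."""
    def l1(f):
        return sum(abs(w) for row in f for w in row)
    ranked = sorted(range(len(filters)), key=lambda i: l1(filters[i]), reverse=True)
    k = max(1, int(len(filters) * keep_ratio))
    return sorted(ranked[:k])

# Four toy 2x2 filters with l1 norms 0.4, 3.0, 0.1, and 1.0
filters = [
    [[0.1, -0.2], [0.0, 0.1]],
    [[0.9, 0.8], [-0.7, 0.6]],
    [[0.05, 0.0], [0.0, -0.05]],
    [[0.4, -0.3], [0.2, 0.1]],
]
kept = prune_filters_l1(filters, keep_ratio=0.5)  # indices of retained filters
```

With a keep ratio of 0.5, the two filters with the largest l1 norms (indices 1 and 3) survive.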
APA, Harvard, Vancouver, ISO, and other styles
7

Jia, Huaiqin, and Zhenqian Wu. "The interdiscipline of electronic commerce statistics and international trade statistics." In 2011 2nd International Conference on Artificial Intelligence, Management Science and Electronic Commerce (AIMSEC). IEEE, 2011. http://dx.doi.org/10.1109/aimsec.2011.6010933.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Naufal, Ahmad Naufal, Samy Abdelhamid Samy, Nenisurya Hashim Nenisurya, et al. "Machine Learning as Accelerating Tool in Remote Operation Realisation through Monitoring Oil and Gas Equipments and Identifying its Failure Mode." In International Petroleum Technology Conference. IPTC, 2021. http://dx.doi.org/10.2523/iptc-21493-ms.

Full text
Abstract:
Equipment failure, unplanned downtime, and environmental damage costs represent critical challenges across the oil and gas business, from reservoir identification and drilling strategy to production and processing. Identifying and managing the risks around assets that could fail and cause redundant and expensive downtime is at the core of plant reliability in the oil and gas industry. In the current digital era, there is an essential need for innovative data-driven solutions to address these challenges, in particular for monitoring and diagnosing plant equipment operations, recognizing equipment failures, avoiding unplanned downtime, repair costs, and potential environmental damage, and maintaining reliable production. Machine learning is applied to develop predictive maintenance (PdM) models as an innovative analytics solution based on real-time data streaming, reaching an elevated level of situational intelligence that guides actions and provides early warnings of impending asset failures that previously went undetected. This paper proposes novel machine learning predictive models based on extreme learning machines and support vector machines (ELM-SVM) to predict the time to failure (TTF) of plant equipment, so that maintenance can be planned well ahead of time to minimize disruption. Visualization and deep-insight (training and validation) processing of the available historian and real-time data are carried out. Comparative studies of the ELM-SVM techniques against the most common physical-statistical regression techniques are performed using available rotating-equipment (compressor) and time-to-failure data. The results show that the new machine learning (ELM-SVM) techniques outperform the physical-statistical techniques, delivering reliable and highly accurate predictions with a high impact on the future ROI of the oil and gas industry.
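As a concrete illustration of the physical-statistical regression baseline that the ELM-SVM models are compared against, a minimal ordinary-least-squares fit of time to failure against a single condition indicator might look like this (the vibration and TTF values are hypothetical; the actual ELM-SVM models add a random-projection hidden layer and kernel machinery on top of such baselines):

```python
def fit_linear_ttf(x, y):
    """Ordinary least-squares fit y = a*x + b: the kind of simple statistical
    regression baseline that data-driven TTF models are benchmarked against."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical compressor data: vibration level vs. remaining time to failure
vibration = [1.0, 2.0, 3.0, 4.0, 5.0]
ttf_hours = [900.0, 700.0, 510.0, 300.0, 100.0]
a, b = fit_linear_ttf(vibration, ttf_hours)
predicted = a * 3.5 + b  # estimated remaining hours at vibration level 3.5
```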
APA, Harvard, Vancouver, ISO, and other styles
9

Vigueras, Flavio, Arturo Hernández, and Iván Maldonado. "Iterative Linear Solution of the Perspective n-Point Problem Using Unbiased Statistics." In Mexican International Conference on Artificial Intelligence (MICAI). IEEE, 2009. http://dx.doi.org/10.1109/micai.2009.39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ishihata, Masakazu, and Takanori Maehara. "Exact Bernoulli Scan Statistics using Binary Decision Diagrams." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/795.

Full text
Abstract:
In combinatorial statistics, we are interested in a statistical test of combinatorial correlation, i.e., the existence of a subset of an underlying combinatorial structure on which the observation is large. The combinatorial scan statistic has been proposed for such a statistical test; however, it is not commonly used in practice because of its high computational cost. In this study, we restrict our attention to the case where the number of data points is moderately small (e.g., 50), the outcome is binary, and the underlying combinatorial structure is represented by a zero-suppressed binary decision diagram (ZDD), and consider the problem of computing the p-value of the combinatorial scan statistic exactly. First, we prove that this problem is #P-hard. Then, we propose a practical algorithm that solves it. The algorithm constructs a binary decision diagram (BDD) for the set of realizations of the random variables by dynamic programming on the ZDD, and computes the p-value by dynamic programming on the BDD. We conducted experiments to evaluate the performance of the proposed algorithm using real-world datasets.
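For intuition, the exact p-value of a simple one-dimensional Bernoulli scan statistic can be computed by brute-force enumeration of all 2^n outcomes, which is feasible only for tiny n; the BDD machinery described above exists precisely to avoid this blow-up. Fixed-length windows over a line are an assumed simplification of the paper's general combinatorial structure.

```python
from itertools import product

def scan_statistic(xs, w):
    """Maximum number of successes in any contiguous window of length w."""
    return max(sum(xs[i:i + w]) for i in range(len(xs) - w + 1))

def scan_pvalue(observed, w, p):
    """Exact p-value of the Bernoulli scan statistic: the probability, under
    i.i.d. Bernoulli(p) outcomes, of a scan value at least as extreme as the
    observed one.  Enumerates all 2^n outcomes, so only usable for small n."""
    n = len(observed)
    s_obs = scan_statistic(observed, w)
    pval = 0.0
    for outcome in product([0, 1], repeat=n):
        if scan_statistic(outcome, w) >= s_obs:
            k = sum(outcome)
            pval += (p ** k) * ((1 - p) ** (n - k))
    return pval
```

For example, with n = 3, window length 2, and p = 0.5, observing two adjacent successes gives a p-value of 3/8, since exactly three of the eight equally likely outcomes contain an adjacent pair of successes.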
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Statistics|Artificial intelligence"

1

Hofer, Martin, Tomas Sako, Arturo Martinez Jr., et al. Applying Artificial Intelligence on Satellite Imagery to Compile Granular Poverty Statistics. Asian Development Bank, 2020. http://dx.doi.org/10.22617/wps200432-2.

Full text
Abstract:
This study outlines a computational framework to enhance the spatial granularity of government-published poverty estimates, using data from the Philippines and Thailand. Computer vision techniques were applied to publicly available medium-resolution satellite imagery, household surveys, and census data from the two countries. The results suggest that even with publicly accessible satellite imagery, predictions generally aligned with the distributional structure of government-published poverty estimates after calibration. The study further examines the robustness of the resulting estimates to user-specified algorithmic parameters and model specifications.
APA, Harvard, Vancouver, ISO, and other styles
2

Brynjolfsson, Erik, Daniel Rock, and Chad Syverson. Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics. National Bureau of Economic Research, 2017. http://dx.doi.org/10.3386/w24001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mapping the Spatial Distribution of Poverty Using Satellite Imagery in Thailand. Asian Development Bank, 2021. http://dx.doi.org/10.22617/tcs210112-2.

Full text
Abstract:
The “leave no one behind” principle of the 2030 Agenda for Sustainable Development requires appropriate indicators for different segments of a country’s population. This entails detailed, granular data on population groups that extend beyond national trends and averages. The Asian Development Bank (ADB), in collaboration with the National Statistical Office of Thailand and the World Data Lab, conducted a feasibility study to enhance the granularity, cost-effectiveness, and compilation of high-quality poverty statistics in Thailand. This report documents the results of the study, providing insights on data collection requirements, advanced algorithmic techniques, and validation of poverty estimates using artificial intelligence to complement traditional data sources and conventional survey methods.
APA, Harvard, Vancouver, ISO, and other styles
4

A Guidebook on Mapping Poverty through Data Integration and Artificial Intelligence. Asian Development Bank, 2021. http://dx.doi.org/10.22617/spr210131-2.

Full text
Abstract:
The “leave no one behind” principle of the 2030 Agenda for Sustainable Development requires appropriate indicators to be estimated for different segments of a country’s population. The Asian Development Bank, in collaboration with the Philippine Statistics Authority, the National Statistical Office of Thailand, and the World Data Lab, conducted a feasibility study that aimed to enhance the granularity, cost-effectiveness, and compilation of high-quality poverty statistics in the Philippines and Thailand. This accompanying guide to the Key Indicators for Asia and the Pacific 2020 special supplement is based on the study, capitalizing on satellite imagery, geospatial data, and powerful machine-learning algorithms to augment conventional data collection and sample survey techniques.
APA, Harvard, Vancouver, ISO, and other styles
5

Payment Systems Report - June of 2020. Banco de la República de Colombia, 2021. http://dx.doi.org/10.32468/rept-sist-pag.eng.2020.

Full text
Abstract:
With its annual Payment Systems Report, Banco de la República offers a complete overview of the infrastructure of Colombia’s financial market. Each edition of the report has four objectives: 1) to publicize a consolidated account of how the figures for payment infrastructures have evolved with respect to both financial assets and goods and services; 2) to summarize the issues that are being debated internationally and are of interest to the industry that provides payment clearing and settlement services; 3) to offer the public an explanation of the ideas and concepts behind retail-value payment processes and the trends in retail payments within the circuit of individuals and companies; and 4) to familiarize the public, the industry, and all other financial authorities with the methodological progress that has been achieved through applied research to analyze the stability of payment systems. This edition introduces changes that have been made in the structure of the report, which are intended to make it easier and more enjoyable to read. The initial sections in this edition, which is the eleventh, contain an analysis of the statistics on the evolution and performance of financial market infrastructures. These are understood as multilateral systems wherein the participating entities clear, settle and register payments, securities, derivatives and other financial assets. The large-value payment system (CUD) saw less momentum in 2019 than it did the year before, mainly because of a decline in the amount of secondary market operations for government bonds, both in cash and sell/buy-backs, which was offset by an increase in operations with collective investment funds (CIFs) and Banco de la República’s operations to increase the money supply (repos). Consequently, the Central Securities Depository (DCV) registered less activity, due to fewer negotiations on the secondary market for public debt. 
This trend was also observed in the private debt market, as evidenced by the decline in the average amounts cleared and settled through the Central Securities Depository of Colombia (Deceval) and in the value of operations with financial derivatives cleared and settled through the Central Counterparty of Colombia (CRCC). Section three offers a comprehensive look at the market for retail-value payments; that is, transactions made by individuals and companies. During 2019, electronic transfers increased, and payments made with debit and credit cards continued to trend upward. In contrast, payments by check continued to decline, although the average daily value was almost four times the value of debit and credit card purchases. The same section contains the results of the fourth survey on how the use of retail-value payment instruments (for usual payments) is perceived. Conducted at the end of 2019, the main purpose of the survey was to identify the availability of these payment instruments, the public’s preferences for them, and their acceptance by merchants. It is worth noting that cash continues to be the instrument most used by the population for usual monthly payments (88.1% with respect to the number of payments and 87.4% in value). However, its use in terms of value has declined, having registered 89.6% in the 2017 survey. In turn, the level of acceptance by merchants of payment instruments other than cash is 14.1% for debit cards, 13.4% for credit cards, 8.2% for electronic transfers of funds and 1.8% for checks. The main reason for the use of cash is the absence of point-of-sale terminals at commercial establishments. Considering that the retail-payment market worldwide is influenced by constant innovation in payment services, by the modernization of clearing and settlement systems, and by the efforts of regulators to redefine the payment industry for the future, these trends are addressed in the fourth section of the report. 
There is an account of how innovations in technology-based financial payment services have developed, and it shows that while this topic is not new, it has evolved, particularly in terms of origin and vocation. One of the boxes that accompanies the fourth section deals with certain payment aspects of open banking and international experience in that regard, which has given the customers of a financial entity sovereignty over their data, allowing them, under transparent and secure conditions, to authorize a third party, other than their financial entity, to request information on their accounts with financial entities, thus enabling the third party to offer various financial services or initiate payments. Innovation also has sparked interest among international organizations, central banks, and research groups concerning the creation of digital currencies. Accordingly, the last box deals with the recent international debate on issuance of central bank digital currencies. In terms of the methodological progress that has been made, it is important to underscore the work that has been done on the role of central counterparties (CCPs) in mitigating liquidity and counterparty risk. The fifth section of the report offers an explanation of a document in which the work of CCPs in financial markets is analyzed and corroborated through an exercise that was built around the Central Counterparty of Colombia (CRCC) in the Colombian market for non-delivery peso-dollar forward exchange transactions, using the methodology of network topology. The results provide empirical support for the different theoretical models developed to study the effect of CCPs on financial markets. Finally, the results of research using artificial intelligence with information from the large-value payment system are presented. 
Based on the payments made among financial institutions in the large-value payment system, a methodology is used to compare different payment networks, as well as to determine which ones can be considered abnormal. The methodology shows signs that indicate when a network moves away from its historical trend, so it can be studied and monitored. A methodology similar to the one applied to classify images is used to make this comparison, the idea being to extract the main characteristics of the networks and use them as a parameter for comparison.
Juan José Echavarría, Governor
APA, Harvard, Vancouver, ISO, and other styles