Academic literature on the topic 'Theory, Machine learning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Theory, Machine learning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Theory, Machine learning"

1

Chase, Hunter, and James Freitag. "Model Theory and Machine Learning." Bulletin of Symbolic Logic 25, no. 3 (2019): 319–32. http://dx.doi.org/10.1017/bsl.2018.71.

Full text
Abstract:
About 25 years ago, it came to light that a single combinatorial property determines both an important dividing line in model theory (NIP) and machine learning (PAC-learnability). The following years saw a fruitful exchange of ideas between PAC-learning and the model theory of NIP structures. In this article, we point out a new and similar connection between model theory and machine learning, this time developing a correspondence between stability and learnability in various settings of online learning. In particular, this gives many new examples of mathematically interesting classes which are learnable in the online setting.
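For readers new to this area, the dividing lines named in the abstract can be stated compactly. The following summary is standard in the learning-theory literature and is paraphrased from memory rather than quoted from the article, whose precise formulations may differ.

```latex
% Hedged summary of the correspondences discussed in the abstract.
\begin{itemize}
  \item A binary hypothesis class $\mathcal{H} \subseteq \{0,1\}^{X}$ is
        PAC-learnable iff its VC dimension is finite:
        $\mathrm{VC}(\mathcal{H}) < \infty$.
  \item $\mathcal{H}$ is learnable in the online (mistake-bound) setting
        iff its Littlestone dimension is finite:
        $\mathrm{Ldim}(\mathcal{H}) < \infty$.
  \item For a first-order formula $\varphi(x;y)$: NIP of $\varphi$
        corresponds to finite VC dimension of the family it defines,
        while stability of $\varphi$ corresponds to finite Littlestone
        dimension of that family.
\end{itemize}
```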
2

Petrova, O., and K. Bobriekhova. "Developing a Distance Course 'Theory of Systems in Machine Learning Problems'." Transactions of Kremenchuk Mykhailo Ostrohradskyi National University 6 (December 27, 2019): 54–59. http://dx.doi.org/10.30929/1995-0519.2019.6.54-59.

Full text
3

Huang, Guang-Bin, Qin-Yu Zhu, and Chee-Kheong Siew. "Extreme learning machine: Theory and applications." Neurocomputing 70, no. 1-3 (2006): 489–501. http://dx.doi.org/10.1016/j.neucom.2005.12.126.

Full text
4

Lai, Tze-Leung, and S. Yakowitz. "Machine learning and nonparametric bandit theory." IEEE Transactions on Automatic Control 40, no. 7 (1995): 1199–209. http://dx.doi.org/10.1109/9.400491.

Full text
5

Vanchurin, Vitaly. "Toward a theory of machine learning." Machine Learning: Science and Technology 2, no. 3 (2021): 035012. http://dx.doi.org/10.1088/2632-2153/abe6d7.

Full text
6

Jackson, A. H. "Machine learning." Expert Systems 5, no. 2 (1988): 132–50. http://dx.doi.org/10.1111/j.1468-0394.1988.tb00341.x.

Full text
7

Khare, Ashish, Moongu Jeon, Ishwar K. Sethi, and Benlian Xu. "Machine Learning Theory and Applications for Healthcare." Journal of Healthcare Engineering 2017 (2017): 1–2. http://dx.doi.org/10.1155/2017/5263570.

Full text
8

Tanaka, Toshiyuki. "Mean-field theory of Boltzmann machine learning." Physical Review E 58, no. 2 (1998): 2302–10. http://dx.doi.org/10.1103/physreve.58.2302.

Full text
9

E, Weinan. "Machine Learning: Mathematical Theory and Scientific Applications." Notices of the American Mathematical Society 66, no. 11 (2019): 1. http://dx.doi.org/10.1090/noti1994.

Full text
10

Bianco, Michael J., Peter Gerstoft, James Traer, et al. "Machine learning in acoustics: Theory and applications." Journal of the Acoustical Society of America 146, no. 5 (2019): 3590–628. http://dx.doi.org/10.1121/1.5133944.

Full text

Dissertations / Theses on the topic "Theory, Machine learning"

1

Hussain, Z. "Sparsity in machine learning: theory and practice." Thesis, University College London (University of London), 2008. http://discovery.ucl.ac.uk/1444276/.

Full text
Abstract:
The thesis explores sparse machine learning algorithms for supervised (classification and regression) and unsupervised (subspace methods) learning. For classification, we review the set covering machine (SCM) and propose new algorithms that directly minimise the SCM's sample compression generalisation error bounds during the training phase. Two of the resulting algorithms are proved to produce optimal or near-optimal solutions with respect to the loss bounds they minimise. One of the SCM loss bounds is shown to be incorrect, and a corrected derivation of the sample compression bound is given along with a framework for allowing asymmetrical loss in sample compression risk bounds. In regression, we analyse the kernel matching pursuit (KMP) algorithm and derive a loss bound that takes into account the dual sparse basis vectors. We make connections to a sparse kernel principal components analysis (sparse KPCA) algorithm and bound its future loss using a sample compression argument. This investigation suggests a similar argument for kernel canonical correlation analysis (KCCA), and so the application of a similar sparsity algorithm gives rise to the sparse KCCA algorithm. We also propose a loss bound for sparse KCCA using the novel technique developed for KMP. All of the algorithms and bounds proposed in the thesis are elucidated with experiments.
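For context, the sample compression bounds minimised in this thesis refine a classic consistent-case bound. One standard form, in the style of Littlestone and Warmuth and of Floyd and Warmuth, is sketched below from memory; the thesis's exact bounds differ in their details.

```latex
% Classic consistent-case sample compression bound (sketch from memory).
% Given m i.i.d. examples and a compression scheme of size k, with
% probability at least 1-\delta, every hypothesis h reconstructed from a
% size-k compression set and consistent with the remaining m-k examples
% satisfies
\[
  \operatorname{err}(h) \;\le\; \frac{1}{m-k}
  \left( \ln\binom{m}{k} + \ln\frac{1}{\delta} \right).
\]
```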
2

Menke, Joshua E. "Improving machine learning through oracle learning." Diss., Brigham Young University, 2007. http://contentdm.lib.byu.edu/ETD/image/etd1726.pdf.

Full text
3

Cardamone, Dario. "Support Vector Machine: A Machine Learning Algorithm." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
This thesis examines the Support Vector Machine classification algorithm. In particular, it considers its formulation as a Mixed Integer Program optimization problem for the supervised binary classification of a data set.
4

Carlucci, Lorenzo. "Some cognitively-motivated learning paradigms in Algorithmic Learning Theory." PhD diss., 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220797.

Full text
5

Li, Xiao. "Regularized adaptation: theory, algorithms, and applications." Thesis, University of Washington, 2007. http://hdl.handle.net/1773/5928.

Full text
6

Blankenship, Jessica. "Machine Learning and Achievement Games." University of Akron / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=akron1590713726030926.

Full text
7

Foreman, Samuel Alfred. "Learning better physics: a machine learning approach to lattice gauge theory." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6944.

Full text
Abstract:
In this work we explore how lattice gauge theory stands to benefit from new developments in machine learning, and look at two specific examples that illustrate this point. We begin with a brief overview of selected topics in machine learning for those who may be unfamiliar, and provide a simple example that helps to show how these ideas are carried out in practice. After providing the relevant background information, we then introduce an example of renormalization group (RG) transformations, inspired by the tensor RG, that can be used for arbitrary image sets, and look at applying this idea to equilibrium configurations of the two-dimensional Ising model. The second main idea presented in this thesis involves using machine learning to improve the efficiency of Markov Chain Monte Carlo (MCMC) methods. Explicitly, we describe a new technique for performing Hamiltonian Monte Carlo (HMC) simulations using an alternative leapfrog integrator that is parameterized by weights in a neural network. This work is based on the L2HMC ('Learning to Hamiltonian Monte Carlo') algorithm introduced in [1].
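For readers unfamiliar with HMC, the leapfrog integrator that L2HMC generalizes is easy to state. Below is a minimal Python sketch of the standard, non-learned leapfrog update; it is background material with illustrative names, not code from the thesis, which replaces these fixed updates with neural-network-parameterized ones.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    """Standard leapfrog integration of Hamiltonian dynamics.

    q, p     : position and momentum arrays
    grad_U   : callable returning the gradient of the potential U(q)
    eps      : step size
    n_steps  : number of leapfrog steps
    """
    q, p = q.copy(), p.copy()
    p -= 0.5 * eps * grad_U(q)            # initial half-step for momentum
    for _ in range(n_steps - 1):
        q += eps * p                      # full step for position
        p -= eps * grad_U(q)              # full step for momentum
    q += eps * p                          # final full step for position
    p -= 0.5 * eps * grad_U(q)            # final half-step for momentum
    return q, -p                          # negate momentum for reversibility
```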
8

Sandberg, Martina. "Credit Risk Evaluation using Machine Learning." Thesis, Linköpings universitet, Statistik och maskininlärning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138968.

Full text
Abstract:
In this thesis, we examine the machine learning models logistic regression, multilayer perceptron, and random forests for the purpose of discriminating between good and bad credit applicants. In addition to these models, we address the problem of imbalanced data with the Synthetic Minority Over-sampling Technique (SMOTE). The available data have 273,286 entries and contain information about the applicant's invoice and the credit decision process, as well as information about the applicant. The data were collected during the period 2015-2017. With AUC values of about 73%, some patterns are found that can discriminate between customers who are likely to pay their invoice and customers who are not. However, the more advanced models only performed slightly better than the logistic regression.
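As an illustration of the workflow this abstract describes, here is a minimal sketch using scikit-learn and imbalanced-learn. The thesis data are proprietary, so the synthetic dataset, parameters, and model settings below are stand-ins.

```python
# Sketch of the abstract's workflow: SMOTE oversampling, then comparing
# logistic regression against a more flexible model by AUC.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Stand-in for the credit data (95% "good" applicants).
X, y = make_classification(n_samples=10_000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class on the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    model.fit(X_res, y_res)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, f"AUC = {auc:.3f}")
```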
9

Shi, Bin. "A Mathematical Framework on Machine Learning: Theory and Application." FIU Digital Commons, 2018. https://digitalcommons.fiu.edu/etd/3876.

Full text
Abstract:
The dissertation addresses the research topics of machine learning outlined below. We developed the theory of traditional first-order algorithms from convex optimization and provide new insights into nonconvex objective functions from machine learning. Based on the theoretical analysis, we designed and developed new algorithms to overcome the difficulty of nonconvex objectives and to accelerate the speed of obtaining the desired result. In this thesis, we answer two questions: (1) How to design a step size for gradient descent with random initialization? (2) Can we accelerate the current convex optimization algorithms and extend them to nonconvex objectives? For application, we apply the optimization algorithms to sparse subspace clustering. A new algorithm, CoCoSSC, is proposed to improve the current sample complexity in the presence of noise and missing entries. Gradient-based optimization methods have been increasingly modeled and interpreted by ordinary differential equations (ODEs). Existing ODEs in the literature are, however, inadequate to distinguish between two fundamentally different methods, Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method. In this work, we derive high-resolution ODEs as more accurate surrogates for the two methods, in addition to Nesterov's accelerated gradient method for general convex functions (NAG-C). These novel ODEs can be integrated into a general framework that allows for a fine-grained analysis of the discrete optimization algorithms through translating properties of the amenable ODEs into those of their discrete counterparts. As a first application of this framework, we identify the effect of a term referred to as gradient correction in NAG-SC but not in the heavy-ball method, shedding deep insight into why the former achieves acceleration while the latter does not. Moreover, in this high-resolution ODE framework, NAG-C is shown to boost the squared gradient norm minimization at the inverse cubic rate, which is the sharpest known rate concerning NAG-C itself. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the accelerated convergence rates of NAG-C for minimizing convex functions.
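The ODEs at the heart of this abstract can be written compactly. The sketch below follows the high-resolution ODE papers of Shi, Du, Jordan, and Su as I recall them; constants and signs may differ slightly from the dissertation's exact statements. Here s > 0 is the step size and mu > 0 the strong-convexity parameter.

```latex
% Low-resolution ODE shared by NAG-SC and the heavy-ball method:
\[
  \ddot{X}(t) + 2\sqrt{\mu}\,\dot{X}(t) + \nabla f(X(t)) = 0 .
\]
% High-resolution ODEs distinguish the two methods via the gradient
% correction \sqrt{s}\,\nabla^2 f(X)\dot{X}, present only for NAG-SC:
\[
  \ddot{X} + 2\sqrt{\mu}\,\dot{X} + \sqrt{s}\,\nabla^2 f(X)\dot{X}
  + (1 + \sqrt{\mu s})\,\nabla f(X) = 0 \qquad \text{(NAG-SC)},
\]
\[
  \ddot{X} + 2\sqrt{\mu}\,\dot{X}
  + (1 + \sqrt{\mu s})\,\nabla f(X) = 0 \qquad \text{(heavy-ball)}.
\]
```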
10

Mauricio Palacio, Sebastián. "Machine-Learning Applied Methods." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/669286.

Full text
Abstract:
The presented discourse followed several topics, where every new chapter introduced an economic prediction problem and showed how traditional approaches can be complemented with new techniques like machine learning and deep learning. These powerful tools, combined with principles of economic theory, are greatly increasing the scope for empiricists. Chapter 3 addressed this discussion by progressively moving from Ordinary Least Squares, Penalized Linear Regressions, and Binary Trees to advanced ensemble trees. Results showed that ML algorithms significantly outperform statistical models in terms of predictive accuracy. Specifically, ML models perform 49-100% better than unbiased methods. However, we cannot rely on parameter estimations. For example, Chapter 4 introduced a net prediction problem regarding fraudulent property claims in insurance. Despite the fact that we got extraordinary results in terms of predictive power, the complexity of the problem prevented us from getting behavioral insight. Statistical models, by contrast, are easily interpretable: coefficients give us the sign, the magnitude, and the statistical significance, and we can learn behavior from marginal impacts and elasticities. Chapter 5 analyzed another prediction problem in the insurance market, particularly how the combination of self-reported data and risk categorization could improve the detection of risky potential customers in insurance markets. Results were also quite impressive in terms of prediction but, again, we did not know anything about the direction or the magnitude of the features. However, by using a Probit model, we showed the benefits of combining statistical models with ML-DL models. The Probit model let us get generalizable insights on what type of customers are likely to misreport, enhancing our results. Likewise, Chapter 2 is a clear example of how causal inference can benefit from ML and DL methods. These techniques allowed us to capture abnormal behaviors in daily prices during the 70 days before each auction. By doing so, we could apply a solid statistical model and estimate precisely the net effect of the mandated auctions in Spain. This thesis aims at combining the advantages of both methodologies, machine learning and econometrics, boosting their strengths and attenuating their weaknesses. Thus, we used ML and statistical methods side by side, exploring predictive performance and interpretability. Several conditions can be inferred from the nature of both approaches. First, as we have observed throughout the chapters, ML and traditional econometric approaches solve fundamentally different problems. We use ML and DL techniques to predict, not in terms of traditional forecasting, but making our models generalizable to unseen data. On the other hand, traditional econometrics has been focused on causal inference and parameter estimation. Therefore, ML is not replacing traditional techniques, but rather complementing them. Second, ML methods focus on out-of-sample data instead of in-sample data, while statistical models typically focus on goodness of fit. It is then not surprising that ML techniques consistently outperformed traditional techniques in terms of predictive accuracy. The cost is biased estimators. Third, the tradition in economics has been to choose a unique model based on theoretical principles, to fit the full dataset to it and, in consequence, to obtain unbiased estimators and their respective confidence intervals. On the other hand, ML relies on data-driven model selection and does not consider causal inference. Instead of manually choosing the covariates, the functional form is determined by the data. This also translates into the main weakness of ML, which is the lack of inference about the underlying data-generating process; that is, we cannot derive economically meaningful conclusions from the coefficients. Focusing on out-of-sample performance comes at the expense of the ability to infer causal effects, due to the lack of standard errors on the coefficients. Therefore, predictors are typically biased, and estimators may not be normally distributed. Thus, we can conclude that in terms of out-of-sample performance it is hard to compete against ML models. However, ML cannot contend with the powerful insights that causal inference analysis gives us, which allow us not only to identify the most important variables and their magnitude but also to understand economic behaviors.

Books on the topic "Theory, Machine learning"

1

Hassanien, Aboul Ella, ed. Machine Learning Paradigms: Theory and Application. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-02357-7.

Full text
2

Hanson, Stephen José, Werner Remmele, and Ronald L. Rivest, eds. Machine Learning: From Theory to Applications. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56483-7.

Full text
3

Oliva, Diego, Essam H. Houssein, and Salvador Hinojosa, eds. Metaheuristics in Machine Learning: Theory and Applications. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-70542-8.

Full text
4

MacKay, David J. C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.

Find full text
5

Okun, Oleg. Ensembles in Machine Learning Applications. Springer Berlin Heidelberg, 2011.

Find full text
6

Najim, K. Learning automata: Theory and applications. Pergamon, 1994.

Find full text
7

Vinciarelli, Alessandro, ed. Machine learning for audio, image and video analysis: Theory and applications. Springer, 2008.

Find full text
8

Barber, David. Bayesian reasoning and machine learning. Cambridge University Press, 2011.

Find full text
9

Shi, Bin, and S. S. Iyengar. Mathematical Theories of Machine Learning - Theory and Applications. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-17076-9.

Full text
10

Auer, Peter, and Ron Meir, eds. Learning Theory: 18th Annual Conference on Learning Theory, COLT 2005, Bertinoro, Italy, June 27-30, 2005: Proceedings. Springer, 2005.

Find full text

Book chapters on the topic "Theory, Machine learning"

1

Fernandes de Mello, Rodrigo, and Moacir Antonelli Ponti. "Statistical Learning Theory." In Machine Learning. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-94989-5_2.

Full text
2

Zhou, Zhi-Hua. "Computational Learning Theory." In Machine Learning. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-1967-3_12.

Full text
3

Maria, Italia Joseph, and T. Devi. "Machine Learning." In Artificial Intelligence Theory, Models, and Applications. Auerbach Publications, 2021. http://dx.doi.org/10.1201/9781003175865-14.

Full text
4

Kakas, Antonis C., David Cohn, Sanjoy Dasgupta, et al. "Active Learning Theory." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_7.

Full text
5

Hutter, Marcus. "Universal Learning Theory." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_861.

Full text
6

Forsyth, David. "A Little Learning Theory." In Applied Machine Learning. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-18114-7_3.

Full text
7

Shultz, Thomas R., Scott E. Fahlman, Susan Craw, et al. "Confirmation Theory." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_156.

Full text
8

Utgoff, Paul E., James Cussens, Stefan Kramer, et al. "Information Theory." In Encyclopedia of Machine Learning. Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_404.

Full text
9

Golden, Richard M. "Set Theory for Concept Modeling." In Statistical Machine Learning. Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9781351051507-2.

Full text
10

Hegedüs, Tibor. "Can complexity theory benefit from Learning Theory?" In Machine Learning: ECML-93. Springer Berlin Heidelberg, 1993. http://dx.doi.org/10.1007/3-540-56602-3_150.

Full text

Conference papers on the topic "Theory, Machine learning"

1

Alexandre, Frédéric. "Beyond Machine Learning: Autonomous Learning." In 8th International Conference on Neural Computation Theory and Applications. SCITEPRESS - Science and Technology Publications, 2016. http://dx.doi.org/10.5220/0006090300970101.

Full text
2

Tian, Jing, Ming-hu Ha, Jun-hua Li, and Da-zeng Tian. "The Fuzzy-Number Based Key Theorem of Statistical Learning Theory." In 2006 International Conference on Machine Learning and Cybernetics. IEEE, 2006. http://dx.doi.org/10.1109/icmlc.2006.258536.

Full text
3

Gonzalez-Diaz, Humberto. "PTML: Perturbation-Theory Machine Learning notes." In MOL2NET 2018, International Conference on Multidisciplinary Sciences, 4th edition. MDPI, 2018. http://dx.doi.org/10.3390/mol2net-04-05463.

Full text
4

Muller, Michael, Shion Guha, Eric P. S. Baumer, David Mimno, and N. Sadat Shami. "Machine Learning and Grounded Theory Method." In GROUP '16: 2016 ACM Conference on Supporting Groupwork. ACM, 2016. http://dx.doi.org/10.1145/2957276.2957280.

Full text
5

Larsen, Kai R., Dirk Hovorka, Jevin West, et al. "Theory Identity: A Machine-Learning Approach." In 2014 47th Hawaii International Conference on System Sciences (HICSS). IEEE, 2014. http://dx.doi.org/10.1109/hicss.2014.564.

Full text
6

Ha, Ming-Hu, Li-Fang Zheng, and Ji-Qiang Chen. "The Key Theorem of Learning Theory Based on Random Sets Samples." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370629.

Full text
7

Chen, Ji-Qiang, Ming-Hu Ha, and Li-Fang Zheng. "The Key Theorem of Learning Theory on Set-Valued Probability Space." In 2007 International Conference on Machine Learning and Cybernetics. IEEE, 2007. http://dx.doi.org/10.1109/icmlc.2007.4370620.

Full text
8

Varshney, Kush R. "Engineering safety in machine learning." In 2016 Information Theory and Applications (ITA). IEEE, 2016. http://dx.doi.org/10.1109/ita.2016.7888195.

Full text
9

Sun, Xiao-Jing, Chao Wang, Ming-Hu Ha, and Da-Zeng Tian. "The key theorem of learning theory based on hybrid variable." In 2011 International Conference on Machine Learning and Cybernetics (ICMLC). IEEE, 2011. http://dx.doi.org/10.1109/icmlc.2011.6016929.

Full text
10

Bai, Yun-Chao, and Ming-Hu Ha. "The key theorem of statistical learning theory on possibility spaces." In Proceedings of 2005 International Conference on Machine Learning and Cybernetics. IEEE, 2005. http://dx.doi.org/10.1109/icmlc.2005.1527708.

Full text

Reports on the topic "Theory, Machine learning"

1

Goldman, Jeffery A. Machine Learning: A Comparative Study of Pattern Theory and C4.5. Defense Technical Information Center, 1994. http://dx.doi.org/10.21236/ada285582.

Full text
2

Xu, Haowen, Melissa Allen-Dumas, Anne Berres, et al. A HPC Theory-Guided Machine Learning Cyberinfrastructure for Communicating Hydrometeorological Data Across Scales. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1769644.

Full text
3

Duersch, Jed, Thomas Catanach, and Ming Gu. CIS-LDRD Project 218313 Final Technical Report: Parsimonious Inference, Information-Theoretic Foundations for a Complete Theory of Machine Learning. Office of Scientific and Technical Information (OSTI), 2020. http://dx.doi.org/10.2172/1668936.

Full text
4

Lohn, Andrew. Hacking AI: A Primer for Policymakers on Machine Learning Cybersecurity. Center for Security and Emerging Technology, 2020. http://dx.doi.org/10.51593/2020ca006.

Full text
Abstract:
Machine learning systems’ vulnerabilities are pervasive. Hackers and adversaries can easily exploit them. As such, managing the risks is too large a task for the technology community to handle alone. In this primer, Andrew Lohn writes that policymakers must understand the threats well enough to assess the dangers that the United States, its military and intelligence services, and its civilians face when they use machine learning.
5

Buchanan, Ben, John Bansemer, Dakota Cary, Jack Lucas, and Micah Musser. Automating Cyber Attacks: Hype and Reality. Center for Security and Emerging Technology, 2020. http://dx.doi.org/10.51593/2020ca002.

Full text
Abstract:
Based on an in-depth analysis of artificial intelligence and machine learning systems, the authors consider the future of applying such systems to cyber attacks, and what strategies attackers are likely or less likely to use. As nuanced, complex, and overhyped as machine learning is, they argue, it remains too important to ignore.
6

Cilliers, Jacobus, Eric Dunford, and James Habyarimana. What Do Local Government Education Managers Do to Boost Learning Outcomes? Research on Improving Systems of Education (RISE), 2021. http://dx.doi.org/10.35489/bsg-rise-wp_2021/064.

Full text
Abstract:
Decentralization reforms have shifted responsibility for public service delivery to local government, yet little is known about how their management practices or behavior shape performance. We conducted a comprehensive management survey of mid-level education bureaucrats and their staff in every district in Tanzania, and employed flexible machine learning techniques to identify important management practices associated with learning outcomes. We find that management practices explain 10 percent of the variation in a district's exam performance. The three management practices most predictive of performance are: i) the frequency of school visits; ii) school and teacher incentives administered by the district manager; and iii) performance review of staff. Although the model is not causal, these findings suggest the importance of robust systems to motivate district staff, schools, and teachers, including frequent monitoring of schools. They also show the importance of surveying subordinates of managers in order to produce richer information on management practices.
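The "flexible machine learning techniques" are described only at a high level here. One common way to rank practices by predictive importance, sketched below purely as an illustration, is a random forest with permutation importance; the file name and column names are hypothetical, not from the paper.

```python
# Illustrative sketch, not the authors' pipeline: rank survey-measured
# management practices by how much they help predict exam performance.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("district_survey.csv")      # hypothetical data file
features = ["school_visits", "teacher_incentives", "staff_reviews"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["exam_performance"], random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, n_repeats=30, random_state=0)
for name, score in sorted(zip(features, imp.importances_mean),
                          key=lambda item: -item[1]):
    print(f"{name}: {score:.3f}")
```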
7

Daniels, Matthew, Autumn Toney, Melissa Flagg, and Charles Yang. Machine Intelligence for Scientific Discovery and Engineering Invention. Center for Security and Emerging Technology, 2021. http://dx.doi.org/10.51593/20200099.

Full text
Abstract:
The advantages of nations depend in part on their access to new inventions—and modern applications of artificial intelligence can help accelerate the creation of new inventions in the years ahead. This data brief is a first step toward understanding how modern AI and machine learning have begun accelerating growth across a wide array of science and engineering disciplines in recent years.
8

Douglas, Thomas, and Caiyun Zhang. Machine learning analyses of remote sensing measurements establish strong relationships between vegetation and snow depth in the boreal forest of Interior Alaska. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41222.

Full text
Abstract:
The seasonal snowpack plays a critical role in Arctic and boreal hydrologic and ecologic processes. Though snow depth can differ from one season to another, there are repeated relationships between ecotype and snowpack depth. Alterations to the seasonal snowpack, which plays a critical role in regulating wintertime soil thermal conditions, have major ramifications for near-surface permafrost. Therefore, relationships between vegetation and snowpack depth are critical for identifying how present and projected future changes in winter season processes or land cover will affect permafrost. Vegetation and snow cover areal extent can be assessed rapidly over large spatial scales with remote sensing methods; however, measuring snow depth remotely has proven difficult. This makes snow depth–vegetation relationships a potential means of assessing snowpack characteristics. In this study, we combined airborne hyperspectral and LiDAR data with machine learning methods to characterize relationships between ecotype and end-of-winter snowpack depth. Our results show hyperspectral measurements account for two-thirds or more of the variance in the relationship between ecotype and snow depth. An ensemble analysis of model outputs using hyperspectral and LiDAR measurements yields the strongest relationships between ecotype and snow depth. Our results can be applied across the boreal biome to model the coupling effects between vegetation and snowpack depth.
9

Rodriguez, Simon, Tim Hwang, and Rebecca Gelles. Comparing Corporate and University Publication Activity in AI/ML. Center for Security and Emerging Technology, 2021. http://dx.doi.org/10.51593/20200067.

Full text
Abstract:
Based on news coverage alone, it can seem as if corporations dominate the research on artificial intelligence and machine learning when compared to the work of universities and academia. Authors Simon Rodriguez, Tim Hwang, and Rebecca Gelles analyze the data over the past decade of research publications and find that, in fact, universities are the more dominant producers of AI papers. They also find that while corporations do tend to generate more citations to the work they publish in the field, these "high performing" papers are most frequently cross-collaborations with university labs.
10

Cordeiro de Amorim, Renato. A survey on feature weighting based K-Means algorithms. Web of Open Science, 2020. http://dx.doi.org/10.37686/ser.v1i2.79.

Full text
Abstract:
In a real-world data set there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analysing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.
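To make the surveyed idea concrete, here is a hedged Python sketch of one classic feature-weighting scheme that such surveys cover, in the spirit of Huang et al.'s W-k-means; it is an illustrative implementation written for this listing, not code from the paper.

```python
import numpy as np

def weighted_kmeans(X, k, beta=2.0, n_iter=20, seed=0):
    """Toy W-k-means-style clustering: alternate cluster assignment,
    center updates, and feature-weight updates (requires beta > 1)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    centers = X[rng.choice(n, size=k, replace=False)]
    w = np.full(m, 1.0 / m)                   # feature weights, sum to 1
    for _ in range(n_iter):
        # Assign points by feature-weighted squared Euclidean distance.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2) * w**beta
        labels = d.sum(axis=2).argmin(axis=1)
        for j in range(k):                    # update cluster centers
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
        # Features with small within-cluster dispersion get larger weights.
        D = np.array([((X[:, v] - centers[labels, v]) ** 2).sum()
                      for v in range(m)]) + 1e-12
        w = D ** (-1.0 / (beta - 1.0))
        w /= w.sum()
    return labels, centers, w
```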