To see the other types of publications on this topic, follow the link: Discrete data models.

Books on the topic 'Discrete data models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 42 books for your research on the topic 'Discrete data models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse books in a wide variety of disciplines and organise your bibliography correctly.

1

Models for discrete data. Oxford: Clarendon Press, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Coca, D. A direct approach to identification of nonlinear differential models from discrete data. Sheffield: University of Sheffield, Dept. of Automatic Control and Systems Engineering, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chambers, Marcus J. On forecasting discrete data from continuous time models with an application to consumption. [Colchester]: University of Essex, Department of Economics, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tsang, K. M. Reconstruction of linear and nonlinear continuous time models from discrete time sampled-data systems. Sheffield: University of Sheffield, Dept. of Control Engineering, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Aït-Sahalia, Yacine. Telling from discrete data whether the underlying continuous-time model is a diffusion. Cambridge, MA: National Bureau of Economic Research, 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Orme, Chris. A note on adjusting the bias of maximum likelihood estimators in discrete panel data models with unobserved random effects. Loughborough: Loughborough University of Technology, Department of Economics, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Byun, Jae-Woong. Estimation of discrete dynamic models from endogenously-sampled company panel data: An analysis of direct investment by Korean firms in the European Union. Leicester: University of Leicester, Department of Economics, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lenda, Grzegorz. Rozwinięcie metod tworzenia funkcji sklejanych w aspekcie budowy modeli na podstawie danych dyskretnych: Development of the methods to spline functions in terms of building models based on discrete data. Kraków: Wydawnictwa AGH, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Correa, R., Inês Dutra, Mario Fiallos, and Fernando Gomes. Models for parallel and distributed computation: Theory, algorithmic techniques and applications. Boston, MA: Springer US, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Janssen, Jeannette, and SpringerLink (Online service), eds. Algorithms and Models for the Web Graph: 9th International Workshop, WAW 2012, Halifax, NS, Canada, June 22-23, 2012. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
11

Babeshko, Lyudmila, and Irina Orlova. Econometrics and econometric modeling in Excel and R. INFRA-M Academic Publishing LLC, 2020. http://dx.doi.org/10.12737/1079837.

Full text
Abstract:
The textbook includes topics of modern econometrics that are often used in economic research. Some aspects of multiple regression models related to the problem of multicollinearity, as well as models with a discrete dependent variable, are considered, including methods for their estimation, analysis, and application. A significant place is given to the analysis of univariate and multivariate time series models. Modern ideas about the deterministic and stochastic nature of the trend are considered, and methods of statistical identification of the trend type are studied. Attention is paid to the evaluation, analysis, and practical implementation of Box-Jenkins stationary time series models, as well as multivariate time series models: vector autoregressive models and vector error correction models. The book also includes basic econometric models for panel data that have been widely used in recent decades, together with formal tests for selecting models based on their hierarchical structure. Each section provides examples of evaluating, analyzing, and testing models in the R software environment. The textbook meets the requirements of the latest generation of the Federal state educational standards of higher education. It is addressed to master's students in the field of Economics whose curriculum includes the disciplines "Econometrics (advanced course)", "Econometric modeling", and "Econometric research", as well as to graduate students.
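
As a rough illustration of the kind of discrete-dependent-variable model covered in this textbook, the sketch below estimates a binary logit by maximum likelihood. The book works in Excel and R; this is only a hypothetical Python/statsmodels sketch on simulated data, not material from the book itself.

```python
# Hypothetical sketch: binary logit (discrete dependent variable) estimated
# by maximum likelihood on simulated data with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))                   # two explanatory variables
eta = 0.5 + 1.0 * x[:, 0] - 0.8 * x[:, 1]     # linear index
p = 1.0 / (1.0 + np.exp(-eta))                # logistic link
y = rng.binomial(1, p)                        # binary outcome

X = sm.add_constant(x)                        # add an intercept column
logit_fit = sm.Logit(y, X).fit(disp=False)    # ML estimation of the logit model
print(logit_fit.summary())
```
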
APA, Harvard, Vancouver, ISO, and other styles
12

Galassi, Francesco L. Econometrics and the renaissance: A discrete random-effects panel data model of farm tenures in fifteenth century Florence. Leicester: University of Leicester, Department of Economics, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
13

Didimo, Walter. Graph Drawing: 20th International Symposium, GD 2012, Redmond, WA, USA, September 19-21, 2012, Revised Selected Papers. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
14

Harris, Jeffrey E. Impact of "seguro popular" on prenatal visits in Mexico, 2002-2005: Latent class model of count data with a discrete endogenous variable. Cambridge, MA: National Bureau of Economic Research, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
15

Models for Discrete Data. Oxford University Press, USA, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
16

Models for Discrete Longitudinal Data. New York: Springer-Verlag, 2005. http://dx.doi.org/10.1007/0-387-28980-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Models for Discrete Longitudinal Data (Springer Series in Statistics). Springer, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
18

Dryden Flight Research Facility, ed. Analysis of structural response data using discrete modal filters. Edwards, Calif.: The Facility, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
19

Feng, Zhilan, Ulf Dieckmann, and Simon A. Levin, eds. Disease Evolution: Models, Concepts, and Data Analyses (DIMACS Series in Discrete Mathematics and Theoretical Computer Science). American Mathematical Society, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
20

Kumar, P. R., and P. P. Varaiya, eds. Discrete event systems, manufacturing systems, and communication networks. New York: Springer Verlag, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
21

Cheng, Russell. Change-Point Models. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198505044.003.0011.

Full text
Abstract:
This chapter investigates change-point (hazard rate) probability models for the random survival time in some population of interest. A parametric probability distribution is assumed, with parameters to be estimated from a sample of observed survival times. If a change-point parameter, denoted by τ, is included to represent the time at which there is a discrete change in hazard rate, then the model is non-standard. The profile log-likelihood, with τ as profiling parameter, has a discontinuous jump at every τ equal to a sampled value, becoming unbounded as τ tends to the largest observation. It is known that maximum likelihood estimation can still be used provided the range of τ is restricted. It is shown that the alternative maximum product of spacings method is consistent without restriction on τ. Censored observations, which commonly occur in survival-time data, can be accounted for using Kaplan-Meier estimation. A real-data numerical example is given.
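
For orientation only, one common parameterization consistent with this description (our assumption, not necessarily the exact model used in the chapter) is a piecewise-constant hazard with a single change point τ:

```latex
% Illustrative piecewise-constant hazard with one change point \tau
h(t) =
\begin{cases}
  \lambda_1, & 0 \le t < \tau, \\
  \lambda_2, & t \ge \tau,
\end{cases}
\qquad
S(t) =
\begin{cases}
  e^{-\lambda_1 t},                           & 0 \le t < \tau, \\
  e^{-\lambda_1 \tau - \lambda_2 (t - \tau)}, & t \ge \tau.
\end{cases}
```

Under a form like this, the likelihood is discontinuous in τ at the observed survival times, which is why the restricted-range maximum likelihood and maximum product of spacings approaches discussed in the chapter are needed.
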
APA, Harvard, Vancouver, ISO, and other styles
22

Orme, Chris. A note on adjusting the bias of maximum likelihood estimators in discrete panel data models with unobserved random effects. Dept. of Economics, Loughborough University of Technology, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
23

Mas, André, and Besnik Pumo. Linear Processes for Functional Data. Edited by Frédéric Ferraty and Yves Romain. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199568444.013.3.

Full text
Abstract:
This article provides an overview of the basic theory and applications of linear processes for functional data, with particular emphasis on results published from 2000 to 2008. It first considers centered processes with values in a Hilbert space of functions before proposing some statistical models that mimic or adapt the scalar or finite-dimensional approaches for time series. It then discusses general linear processes, focusing on the invertibility and convergence of the estimated moments and a general method for proving asymptotic results for linear processes. It also describes autoregressive processes as well as two issues related to the general estimation problem, namely: identifiability and the inverse problem. Finally, it examines convergence results for the autocorrelation operator and the predictor, extensions for the autoregressive Hilbertian (ARH) model, and some numerical aspects of prediction when the data are curves observed at discrete points.
APA, Harvard, Vancouver, ISO, and other styles
24

Gelfand, Alan, and Sujit K. Sahu. Models for demography of plant populations. Edited by Anthony O'Hagan and Mike West. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198703174.013.17.

Full text
Abstract:
This article discusses the use of Bayesian analysis and methods to analyse the demography of plant populations, and more specifically to estimate the demographic rates of trees and how they respond to environmental variation. It examines data from individual (tree) measurements over an eighteen-year period, including diameter, crown area, maturation status, and survival, and from seed traps, which provide indirect information on fecundity. The multiple data sets are synthesized with a process model where each individual is represented by a multivariate state-space submodel for both continuous (fecundity potential, growth rate, mortality risk, maturation probability) and discrete states (maturation status). The results from plant population demography analysis demonstrate the utility of hierarchical modelling as a mechanism for the synthesis of complex information and interactions.
APA, Harvard, Vancouver, ISO, and other styles
25

Wing, Ian Sue, and Edward J. Balistreri. Computable General Equilibrium Models for Policy Evaluation and Economic Consequence Analysis. Edited by Shu-Heng Chen, Mak Kaboudan, and Ye-Rong Du. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780199844371.013.7.

Full text
Abstract:
This chapter reviews recent applications of computable general equilibrium (CGE) modeling in the analysis and evaluation of policies that affect interactions among multiple markets. At the core of this research is a particular approach to the data and structural representations of the economy, elaborated through the device of a canonical static multiregional model. This template is adapted and extended to shed light on the structural and methodological foundations of simulating dynamic economies, incorporating “bottom-up” representations of discrete production activities, and modeling contemporary theories of international trade with monopolistic competition and heterogeneous firms. These techniques are motivated by policy applications including trade liberalization, development, energy policy and greenhouse gas mitigation, the impacts of climate change and natural disasters, and economic integration and liberalization of trade in services.
APA, Harvard, Vancouver, ISO, and other styles
26

Bonato, Anthony, Paweł Prałat, and Andrei Raigorodskii. Algorithms and Models for the Web Graph: 15th International Workshop, WAW 2018, Moscow, Russia, May 17-18, 2018, Proceedings. Springer, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
27

Mitzenmacher, Michael, Anthony Bonato, and Pawel Pralat. Algorithms and Models for the Web Graph: 10th International Workshop, WAW 2013, Cambridge, MA, USA, December 14-15, 2013, Proceedings. Springer, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
28

Graham, Fan Chung, Anthony Bonato, and Paweł Prałat. Algorithms and Models for the Web Graph: 13th International Workshop, WAW 2016, Montreal, QC, Canada, December 14–15, 2016, Proceedings. Springer, 2016.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Graham, Fan Chung, Anthony Bonato, and Paweł Prałat. Algorithms and Models for the Web Graph: 14th International Workshop, WAW 2017, Toronto, ON, Canada, June 15–16, 2017, Revised Selected Papers. Springer, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
30

Discrete Event Simulation and Modeling (Model-Based Design). CRC, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
31

Brandes, Ulrik, and Thomas Erlebach, eds. Network analysis: Methodological foundations. Berlin: Springer, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
32

Brandes, Ulrik, and Thomas Erlebach, eds. Network Analysis: Methodological Foundations (Lecture Notes in Computer Science). Springer, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
33

Elwood, Mark. Chance variation. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199682898.003.0008.

Full text
Abstract:
This chapter explains chance variation and statistical tests, including discrete and continuous measures, the concept of significance, one- and two-sided tests, exact tests, precision, and confidence limits. It shows tests of differences in proportions and chi-square tests, the Mantel-Haenszel test, and calculation of confidence limits, for simple tables and for stratified data. It covers heterogeneity tests, multiplicative and additive models, ordered exposure variables, and tests of trend. It explains statistical tests for matched studies and in multivariate models. Multiple testing, the Bonferroni correction, issues of hypothesis testing and hypothesis generation, and subgroup analyses are discussed. Stopping rules and repeated testing in trials are covered. It explains how to calculate study power and the necessary size of the study. The chapter describes time-to-event analysis, including survival curves, product-limit and actuarial or life-table methods, the calculation of confidence limits, relative survival ratios, the log-rank test with control for confounding, and multivariate analysis.
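
As a small, hedged illustration of one of the tests listed above, a chi-square test of a difference in proportions for a 2x2 table can be run as follows; the counts are made up and are not taken from the chapter.

```python
# Hypothetical sketch: chi-square test for a 2x2 table of exposure vs. outcome.
from scipy.stats import chi2_contingency

table = [[30, 70],   # exposed: cases, non-cases (made-up counts)
         [15, 85]]   # unexposed: cases, non-cases (made-up counts)
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
```
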
APA, Harvard, Vancouver, ISO, and other styles
34

Brazier, John, Julie Ratcliffe, Joshua A. Salomon, and Aki Tsuchiya. Modelling health state valuation data. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780198725923.003.0005.

Full text
Abstract:
This chapter examines the technical issues in modelling health state valuation data. Most measures of health define too many states to directly value all of them (e.g. SF-6D defines 18,000 health states). The solution has been to value a subset and to use modelling to predict the values of all states. This chapter reviews two approaches to modelling: one using multiattribute utility theory to determine health values given an assumed functional form, and the other using statistical modelling of SF-6D preference data that are skewed, bimodal, and clustered by respondents. This chapter examines the selection of health states for valuation, data preparation, model specification, and techniques for modelling the data, starting with ordinary least squares (OLS) and moving on to more complex techniques including Bayesian non-parametric and semi-parametric approaches, and a hybrid approach that combines cardinal preference data with the results of paired data from a discrete choice experiment.
APA, Harvard, Vancouver, ISO, and other styles
35

Graph Drawing: 19th International Symposium, GD 2011, Eindhoven, The Netherlands, September 21-23, 2011. Springer, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
36

Didimo, Walter, and Maurizio Patrignani. Graph Drawing: 20th International Symposium, GD 2012, Redmond, WA, USA, September 19-21, 2012, Revised Selected Papers. Springer, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
37

Hong, Sun-ha. Technologies of Speculation. NYU Press, 2020. http://dx.doi.org/10.18574/nyu/9781479860234.001.0001.

Full text
Abstract:
What counts as knowledge in the age of big data and smart machines? Technologies of datafication renew the long modern promise of turning bodies into facts. They seek to take human intentions, emotions, and behavior and to turn these messy realities into discrete and stable truths. But in pursuing better knowledge, technology is reshaping in its image what counts as knowledge. The push for algorithmic certainty sets loose an expansive array of incomplete archives, speculative judgments, and simulated futures. Too often, data generates speculation as much as it does information. Technologies of Speculation traces this twisted symbiosis of knowledge and uncertainty in emerging state and self-surveillance technologies. It tells the story of vast dragnet systems constructed to predict the next terrorist and of how familiar forms of prejudice seep into the data by the back door. In software placeholders, such as “Mohammed Badguy,” the fantasy of pure data collides with the old specter of national purity. It shows how smart machines for ubiquitous, automated self-tracking manufacture knowledge that paradoxically lies beyond the human senses. This data is increasingly being taken up by employers, insurers, and courts of law, creating imperfect proxies through which my truth can be overruled. This book argues that as datafication transforms what counts as knowledge, it is dismantling the long-standing link between knowledge and human reason, rational publics, and free individuals. If data promises objective knowledge, then we must ask in return, Knowledge by and for whom; enabling what forms of life for the human subject?
APA, Harvard, Vancouver, ISO, and other styles
38

Wikle, Christopher K. Spatial Statistics. Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190228620.013.710.

Full text
Abstract:
The climate system consists of interactions between physical, biological, chemical, and human processes across a wide range of spatial and temporal scales. Characterizing the behavior of components of this system is crucial for scientists and decision makers. There is substantial uncertainty associated with observations of this system as well as our understanding of various system components and their interaction. Thus, inference and prediction in climate science should accommodate uncertainty in order to facilitate the decision-making process. Statistical science is designed to provide the tools to perform inference and prediction in the presence of uncertainty. In particular, the field of spatial statistics considers inference and prediction for uncertain processes that exhibit dependence in space and/or time. Traditionally, this is done descriptively through the characterization of the first two moments of the process, one expressing the mean structure and one accounting for dependence through covariability. Historically, there are three primary areas of methodological development in spatial statistics: geostatistics, which considers processes that vary continuously over space; areal or lattice processes, which consider processes that are defined on a countable discrete domain (e.g., political units); and spatial point patterns (or point processes), which consider the locations of events in space to be a random process. All of these methods have been used in the climate sciences, but the most prominent has been the geostatistical methodology. This methodology was simultaneously discovered in geology and in meteorology and provides a way to do optimal prediction (interpolation) in space and can facilitate parameter inference for spatial data. These methods rely strongly on Gaussian process theory, which is increasingly of interest in machine learning. These methods are common in the spatial statistics literature, but much development is still being done in the area to accommodate more complex processes and “big data” applications. Newer approaches are based on restricting models to neighbor-based representations or reformulating the random spatial process in terms of a basis expansion. There are many computational and flexibility advantages to these approaches, depending on the specific implementation. Complexity is also increasingly being accommodated through the use of the hierarchical modeling paradigm, which provides a probabilistically consistent way to decompose the data, process, and parameters corresponding to the spatial or spatio-temporal process. Perhaps the biggest challenge in modern applications of spatial and spatio-temporal statistics is to develop methods that are flexible yet can account for the complex dependencies between and across processes, account for uncertainty in all aspects of the problem, and still be computationally tractable. These are daunting challenges, yet it is a very active area of research, and new solutions are constantly being developed. New methods are also being rapidly developed in the machine learning community, and these methods are increasingly more applicable to dependent processes. The interaction and cross-fertilization between the machine learning and spatial statistics community is growing, which will likely lead to a new generation of spatial statistical methods that are applicable to climate science.
APA, Harvard, Vancouver, ISO, and other styles
39

Oliveira, Ricardo Puziol de, Isabela Zara Cremonezi, Daniele Peralta, Marcos Vinícius de Oliveira Peres, and Juliano Katayama Groff. ANÁLISE DA DISTRIBUIÇÃO DE DIAS CHUVOSOS NA REGIÃO SUL DO BRASIL EM TERMOS DE PROBABILIDADE. Bookerfield Editora, 2021. http://dx.doi.org/10.53268/bkf21060901.

Full text
Abstract:
The analysis of the distribution of rainy periods based on daily precipitation is increasingly important for the population of a given region, and such analyses can be used for decision-making and/or forecasting purposes. In this sense, it is important to evaluate the performance of several probability distributions in order to better describe the behavior of the length of rainy spells. In the literature, traditional probability models such as the geometric (G), zero-truncated Poisson (ZTP), logarithmic (LG), and zero-truncated negative binomial (ZTNB) distributions are the most common in this type of analysis. This work identified two univariate discrete distributions obtained by the method of Nakagawa and Osaki (1975), the zero-truncated Lindley (ZTDL) and zero-truncated quasi-Lindley (ZTQLD) distributions, as good possible alternatives to the ZTP, ZTNB, and G distributions. The results obtained show that the ZTDL and ZTQLD distributions are very promising for modelling the lengths of rainy periods, with fits equivalent or superior to those of the ZTP, ZTNB, and G models.
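
As a hedged illustration of the kind of model compared in this abstract, the sketch below fits a zero-truncated Poisson (ZTP) distribution to positive counts by maximum likelihood; the data are simulated here, not the authors' rainfall records.

```python
# Hypothetical sketch: maximum-likelihood fit of a zero-truncated Poisson (ZTP)
# distribution to positive count data (e.g. lengths of rainy spells).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

rng = np.random.default_rng(1)
counts = rng.poisson(2.5, size=1000)
counts = counts[counts > 0]                  # keep strictly positive counts

def ztp_neg_loglik(lam, x):
    # log pmf of the ZTP: x*log(lam) - lam - log(x!) - log(1 - exp(-lam))
    ll = x * np.log(lam) - lam - gammaln(x + 1) - np.log1p(-np.exp(-lam))
    return -np.sum(ll)

res = minimize_scalar(ztp_neg_loglik, bounds=(1e-6, 50.0),
                      args=(counts,), method="bounded")
print("estimated lambda:", res.x)
```
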
APA, Harvard, Vancouver, ISO, and other styles
40

Hooghe, Liesbet, Gary Marks, Tobias Lenz, Jeanine Bezuijen, Besir Ceka, and Svet Derderyan. Measuring International Authority. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198724490.001.0001.

Full text
Abstract:
This book sets out a measure of authority for seventy-six major international organizations (IOs) from 1950 to 2010 in an effort to provide systematic comparative information on international governance. On the premise that transparency is key in the production of data, the authors chart a path in laying out the assumptions that underpin the measure. Successive chapters detail the authors’ theoretical, conceptual, and coding decisions. In order to assess their authority, the authors model the composition of IO bodies, their roles in decision making, the bindingness of IO decisions, and the mechanisms through which they seek to settle disputes. Profiles of regional, cross-regional, and global IOs explain how they are composed and how they make decisions. A distinctive feature of the measure is that it breaks down the concept of international authority into discrete dimensions. The Measure of International Authority (MIA) is built up from coherent ingredients—the composition and role of individual IO bodies at each stage in policy making, constitutional reform, the budget, financial compliance, membership accession, and the suspension of members. These observations can be assembled—like Lego blocks—in diverse ways for diverse purposes. This produces a flexible tool for investigating international governance and testing theory.
APA, Harvard, Vancouver, ISO, and other styles
41

Leidwanger, Justin. Roman Seas. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190083656.001.0001.

Full text
Abstract:
This book offers an archaeological analysis of maritime economy and connectivity in the Roman east. That seafaring was fundamental to prosperity under Rome is beyond doubt, but a tendency to view the grandest long-distance movements among major cities against a background noise of small-scale, short-haul activity has tended to flatten the finer and varied contours of maritime interaction and coastal life into a featureless blue Mediterranean. Drawing together maritime landscape studies and network analysis, this work takes a bottom-up view of the diverse socioeconomic conditions and seafaring logistics that generated multiple structures and scales of interaction. The material record of shipwrecks and ports along a vital corridor from the southeast Aegean across the northeast Mediterranean provides a case study of regional exchange and communication based on routine sails between simple coastal facilities. Rather than a single well-integrated and persistent Mediterranean network, multiple discrete and evolving regional and interregional systems emerge. This analysis sheds light on the cadence of economic life along the coast, the development of market institutions, and the regional continuities that underpinned integration—despite certain interregional disintegration—into Late Antiquity. Through this model of seaborne interaction, the study advances a new approach to the synthesis of shipwreck and other maritime archaeological and historical economic data, as well as a path through the stark dichotomies that inform most paradigms of Roman connectivity and trade.
APA, Harvard, Vancouver, ISO, and other styles
42

Ellis, Graham. An Invitation to Computational Homotopy. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198832973.001.0001.

Full text
Abstract:
This book is an introduction to elementary algebraic topology for students with an interest in computers and computer programming. Its aim is to illustrate how the basics of the subject can be implemented on a computer. The transition from basic theory to practical computation raises a range of non-trivial algorithmic issues and it is hoped that the treatment of these will also appeal to readers already familiar with basic theory who are interested in developing computational aspects. The book covers a subset of standard introductory material on fundamental groups, covering spaces, homology, cohomology and classifying spaces as well as some less standard material on crossed modules, homotopy 2-types and explicit resolutions for an eclectic selection of discrete groups. It attempts to cover these topics in a way that hints at potential applications of topology in areas of computer science and engineering outside the usual territory of pure mathematics, and also in a way that demonstrates how computers can be used to perform explicit calculations within the domain of pure algebraic topology itself. The initial chapters include examples from data mining, biology and digital image analysis, while the later chapters cover a range of computational examples on the cohomology of classifying spaces that are likely beyond the reach of a purely paper-and-pen approach to the subject. The applied examples in the initial chapters use only low-dimensional and mainly abelian topological tools. Our applications of higher dimensional and less abelian computational methods are currently confined to pure mathematical calculations. The approach taken to computational homotopy is very much based on J.H.C. Whitehead’s theory of combinatorial homotopy in which he introduced the fundamental notions of CW-space, simple homotopy equivalence and crossed module. The book should serve as a self-contained informal introduction to these topics and their computer implementation. It is written in a style that tries to lead as quickly as possible to a range of potentially useful machine computations.
APA, Harvard, Vancouver, ISO, and other styles