Books on the topic 'Time correlation functions'

Consult the top 23 books for your research on the topic 'Time correlation functions.'

1

Rubinstein, Robert. Effects of helicity on Lagrangian and Eulerian time correlations in turbulence. Hampton, VA: Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, 1998.

2

Cevelev, Aleksandr. Strategic development of railway transport logistics. ru: INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1194747.

Abstract:
The monograph is devoted to the methodology of material and technical support of railway transport. In terms of the types of activities, the nature of the material and technical resources used, and the technologies, means and management systems involved, Russian railways belong to the category of high-tech industries that must maintain a high quality and technical level, reliability and technological efficiency in operation. For this reason, the logistics system itself, both in its structure and in the algorithm of the functions performed as a whole, needs a serious improvement in the quality of its work. The economic situation in Russia requires a revision of the principles and mechanisms of management based on the corporate model of supply chain management, focused on logistics knowledge. In the difficult economic conditions of the current decade, it is necessary to improve the quality of the supply organization of enterprises and structural divisions of railway transport, which is directly related to the implementation of the process approach, the advantage of which is a more detailed regulation of management actions and their mutual coordination. In order to increase the efficiency of its activities and develop its management system, Russian Railways is developing a lean production system aimed at further expanding the implementation of the principles of customer orientation, ideology and corporate culture. At present, the solution of many issues is impossible without a cybernetic approach to the formulation of problems of material and technical support, logistics analysis of information technologies, and the implementation of the developed algorithms and models of development strategies and concepts for improving the business processes of the production system. The management strategy, or the general plan for the implementation of activities for the management of material resources, is based on a fundamental assessment of the alignment and correlation of the forces and factors operating in the economic and political field, taking into account their impact on the specific form of the management strategy. The materials will be useful to the heads and specialists of the MTO directorates and CDZs, and can be used in the scientific research of bachelor's, master's and postgraduate students interested in the economics of railway transport and supply logistics.
3

Allen, Michael P., and Dominic J. Tildesley. How to analyse the results. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198803195.003.0008.

Abstract:
In this chapter, practical guidance is given on the calculation of thermodynamic, structural, and dynamical quantities from simulation trajectories. Program examples are provided to illustrate the calculation of the radial distribution function and a time correlation function using the direct and fast Fourier transform methods. There is a detailed discussion of the calculation of statistical errors through the statistical inefficiency. The estimation of the error in equilibrium averages, fluctuations and in time correlation functions is discussed. The correction of thermodynamic averages to neighbouring state points is described along with the extension and extrapolation of the radial distribution function. The calculation of transport coefficients by the integration of the time correlation function and through the Einstein relation is discussed.
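For readers who want a feel for the kind of analysis this chapter covers, here is a minimal Python sketch. It is not one of the book's own program examples; the toy trajectory, function names and tolerances are assumptions. It estimates a velocity time correlation function both by the direct method and by the fast Fourier transform route, and integrates it for a diffusion coefficient in the Green-Kubo spirit.

```python
import numpy as np

def acf_direct(v, n_max):
    """<v(0).v(t)> by the direct method: average over time origins (O(n*n_max))."""
    n = len(v)
    return np.array([np.mean(np.sum(v[:n - k] * v[k:], axis=1)) for k in range(n_max)])

def acf_fft(v, n_max):
    """Same correlation via FFT (Wiener-Khinchin), zero-padded to avoid wrap-around."""
    n = len(v)
    f = np.fft.rfft(v, n=2 * n, axis=0)
    raw = np.fft.irfft(f * f.conj(), n=2 * n, axis=0)[:n_max]
    counts = (n - np.arange(n_max))[:, None]          # number of time origins per lag
    return np.sum(raw / counts, axis=1)

# Toy velocity "trajectory": an Ornstein-Uhlenbeck-like process in 3D (assumed test data).
rng = np.random.default_rng(0)
dt, gamma, nstep = 0.01, 1.0, 20000
v = np.zeros((nstep, 3))
for i in range(1, nstep):
    v[i] = v[i - 1] * (1.0 - gamma * dt) + np.sqrt(2.0 * gamma * dt) * rng.normal(size=3)

c = acf_fft(v, 1000)
assert np.allclose(c, acf_direct(v, 1000), atol=1e-6)   # both routes agree
D = dt * (0.5 * c[0] + c[1:].sum()) / 3.0               # Green-Kubo: D = (1/3) * integral of <v(0).v(t)> dt
print("velocity ACF at t=0:", c[0], " estimated D:", D)
```

The FFT route scales as O(n log n) and is the practical choice for long trajectories, which is the point the chapter makes when comparing the two methods.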
4

Morawetz, Klaus. Spectral Properties. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198797241.003.0008.

Abstract:
The spectral properties of the nonequilibrium Green's functions are explored. Causality and sum rules are shown to be completed by the extended quasiparticle picture. The off-shell motion becomes visible in satellite structures of the spectral function. Different forms of ansatz for reducing the two-time Green's function to a one-time reduced density matrix are discussed with respect to their consistency with other approximations. The information contained in the correlation function shows that the statistical weight of the excitations with which the distributions are populated is given by the spectral function. This momentum-resolved density of states can be obtained from the retarded and advanced functions.
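As a hedged reminder of the standard relations behind this statement (conventions vary between texts and may differ from the book's): the spectral function follows from the retarded and advanced Green's functions, obeys a frequency sum rule, and in equilibrium weights the Fermi occupation,

```latex
A(\mathbf{k},\omega) = i\left[G^{R}(\mathbf{k},\omega) - G^{A}(\mathbf{k},\omega)\right]
                     = -2\,\mathrm{Im}\,G^{R}(\mathbf{k},\omega),
\qquad
\int \frac{d\omega}{2\pi}\, A(\mathbf{k},\omega) = 1,
\qquad
n(\mathbf{k}) = \int \frac{d\omega}{2\pi}\, f(\omega)\, A(\mathbf{k},\omega).
```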
5

Morawetz, Klaus. Interacting Systems far from Equilibrium. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198797241.001.0001.

Abstract:
In quantum statistics based on many-body Green's functions, the effective medium is represented by the self-energy. This book aims to discuss the self-energy from this point of view. Knowledge of the exact self-energy is equivalent to knowledge of the exact correlation function, from which one can evaluate any single-particle observable. Complete interpretations of the self-energy are as rich as the properties of many-body systems. It is shown that classical features are helpful for understanding the self-energy, but in many cases additional aspects describing the internal dynamics of the interaction have to be included. The inductive presentation introduces Ludwig Boltzmann's concept of describing correlations by the scattering of many particles, from elementary principles up to refined approximations of many-body quantum systems. The ultimate goal is to contribute to the understanding of the time-dependent formation of correlations. Within this book an up-to-date and accessible formalism of nonequilibrium Green's functions is presented, covering applications ranging from solid-state physics (impurity scattering, semiconductors, superconductivity, Bose–Einstein condensation, spin-orbit coupled systems) and plasma physics (screening, transport in magnetic fields) to cold atoms in optical lattices and nuclear reactions (heavy-ion collisions). Both possibilities are provided: newcomers can learn quantum kinetic theory in terms of Green's functions from the basics, guided by experience with phenomena, and experienced researchers will find a framework for developing and applying quantum many-body theory directly to versatile phenomena.
6

Ferrari, Patrik L., and Herbert Spohn. Random matrices and Laplacian growth. Edited by Gernot Akemann, Jinho Baik, and Philippe Di Francesco. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198744191.013.39.

Abstract:
This article reviews the theory of random matrices with eigenvalues distributed in the complex plane and more general ‘beta ensembles’ (logarithmic gases in 2D). It first considers two ensembles of random matrices with complex eigenvalues: ensemble C of general complex matrices and ensemble N of normal matrices. In particular, it describes the Dyson gas picture for ensembles of matrices with general complex eigenvalues distributed on the plane. It then presents some general exact relations for correlation functions valid for any values of N and β before analysing the distribution and correlations of the eigenvalues in the large N limit. Using the technique of boundary value problems in two dimensions and elements of the potential theory, the article demonstrates that the finite-time blow-up (a cusp–like singularity) of the Laplacian growth with zero surface tension is a critical point of the normal and complex matrix models.
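For orientation, the "logarithmic gas in 2D" referred to here is usually defined (in one common convention; the normalization and the form of the confining potential W vary between authors) by a joint eigenvalue density of the form

```latex
P(z_1,\dots,z_N) \;\propto\; \prod_{i<j} |z_i - z_j|^{\beta}\,
\exp\!\Big(-N \sum_{i=1}^{N} W(z_i)\Big), \qquad z_i \in \mathbb{C},
```

with beta = 2 and W(z) = |z|^2 recovering the complex Ginibre case; the Dyson gas picture reads the product over |z_i - z_j| as a two-dimensional Coulomb repulsion at inverse temperature beta.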
7

Morawetz, Klaus. Transient Time Period. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198797241.003.0019.

Abstract:
The formation of correlations on short time scales is considered. A universal response function is found which describes the formation of collective modes in plasmas created by femtosecond lasers as well as the formation of occupations in cold-atom optical lattices. Quantum quenches and the sudden switching of interactions can be described by such Levinson-type kinetic equations in the transient time regime. On larger time scales it is shown that non-Markovian Levinson equations double-count correlations, and the extended quasiparticle picture, which distinguishes between the reduced density matrix and the quasiparticle distribution, resolves this shortcoming. The problem of initial correlations, and how they can be incorporated into the Green's function technique to yield modified kinetic equations, is solved, and a systematic expansion is suggested.
8

Bradbury, Elizabeth J., and Nicholas D. James. Mapping of neurotrophin receptors on adult sensory neurons. Edited by Paul Farquhar-Smith, Pierre Beaulieu, and Sian Jagger. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780198834359.003.0022.

Abstract:
The paper discussed in this chapter describes the first mapping of neurotrophin receptors in adult sensory neurons. Neurotrophins and their receptors were a particularly hot topic at the time, but the primary focus of interest had been in their role in development. In this paper, McMahon and colleagues characterized both mRNA and protein expression of the recently discovered trk receptors on defined populations of adult sensory neurons, correlating trk expression with other primary afferent projection neuron properties such as cell size and neuronal function. Furthermore, by showing clear correlations between the expression of different trk receptors and the physical and functional properties of defined primary afferent projections, the authors provided key evidence suggesting that nerve growth factor and neurotrophin-3 acted on functionally distinct populations of adult sensory neurons. This paper provided the basis for subsequent research on neurotrophin signalling and function in both the healthy and the diseased nervous system.
9

Berber, Stevan. Discrete Communication Systems. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198860792.001.0001.

Abstract:
The book presents the essential theory and practice of discrete communication systems design, based on the theory of discrete-time stochastic processes and their relation to the existing theory of digital communication systems. Using the notion of stochastic linear time-invariant systems, in addition to the orthogonality principles, a general structure of the discrete communication system is constructed in terms of mathematical operators. Based on this structure, the MPSK, MFSK, QAM, OFDM and CDMA systems, using discrete modulation methods, are deduced as special cases. The signals are processed in the time and frequency domains, which requires precise derivations of their amplitude spectral density functions, correlation functions and related energy and power spectral densities. The book is self-sufficient, because it uses unified notation both in the main ten chapters explaining communication systems theory and in nine supplementary chapters dealing with continuous- and discrete-time signal processing for both deterministic and stochastic signals. In this context, the indexing of vital signals and functions makes an obvious distinction between them. Having in mind the controversial nature of the continuous-time white Gaussian noise process, a separate chapter is dedicated to noise discretisation, introducing the notions of noise entropy and the truncated Gaussian density function to avoid limitations in applying the Nyquist criterion. The text of the book is accompanied by solutions of the problems for all chapters and a set of design projects with defined project topics and tasks and offered solutions.
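The correlation/power-spectral-density pair mentioned here is, for a wide-sense stationary discrete-time process, the discrete-time Wiener-Khinchin relation, written below in one common convention (the book's notation and indexing may differ):

```latex
R_x[k] = \mathrm{E}\big\{ x[n+k]\, x^{*}[n] \big\},
\qquad
S_x\!\left(e^{j\omega}\right) = \sum_{k=-\infty}^{\infty} R_x[k]\, e^{-j\omega k}.
```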
10

Ross, John, Igor Schreiber, and Marcel O. Vlad. Determination of Complex Reaction Mechanisms. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780195178685.001.0001.

Abstract:
In a chemical system with many chemical species, several questions can be asked: which species react with which other species, in what temporal order, and with what results? These questions have been asked for over one hundred years about simple and complex chemical systems, and the answers constitute the macroscopic reaction mechanism. In Determination of Complex Reaction Mechanisms, authors John Ross, Igor Schreiber, and Marcel Vlad present several systematic approaches for obtaining information on the causal connectivity of chemical species, on correlations of chemical species, on the reaction pathway, and on the reaction mechanism. Basic pulse theory is demonstrated and tested in an experiment on glycolysis. In a second approach, measurements of time series of concentrations are used to construct correlation functions, and a theory is developed which shows that from these functions information may be inferred on the reaction pathway, the reaction mechanism, and the centers of control in that mechanism. A third approach is based on the application of genetic algorithm methods to the study of the evolutionary development of a reaction mechanism, to the attainment of given goals in a mechanism, and to the determination of a reaction mechanism and rate coefficients by comparison with experiment. Responses of nonlinear systems to pulses or other perturbations are analyzed, and mechanisms of oscillatory reactions are presented in detail. The concluding chapters give an introduction to bioinformatics and statistical methods for determining reaction mechanisms.
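As a toy illustration of the second approach (correlation functions built from concentration time series), the following Python sketch is an assumption-laden stand-in rather than the authors' method: it estimates a lagged cross-correlation between two species' concentration series and reads the lag of the peak as a hint of temporal ordering.

```python
import numpy as np

def lagged_cross_correlation(x, y, max_lag):
    """Estimate r_xy(k) ~ <x(t) y(t+k)> for lags -max_lag..max_lag (in samples)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.array([
        np.mean(x[:n - k] * y[k:]) if k >= 0 else np.mean(x[-k:] * y[:n + k])
        for k in lags
    ])
    return lags, r

# Synthetic "concentrations": species B follows species A with a 5-sample delay (assumed).
rng = np.random.default_rng(1)
a = np.abs(np.cumsum(rng.normal(size=400))) + 1.0
b = np.roll(a, 5) + 0.1 * rng.normal(size=400)
lags, r = lagged_cross_correlation(a, b, 40)
print("lag of peak correlation (samples):", lags[np.argmax(r)])  # expected near +5
```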
11

Akemann, Gernot. Random matrix theory and quantum chromodynamics. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198797319.003.0005.

Abstract:
This chapter was originally presented to a mixed audience of physicists and mathematicians with some basic working knowledge of random matrix theory. The first part is devoted to the solution of the chiral Gaussian unitary ensemble in the presence of characteristic polynomials, using orthogonal polynomial techniques. This includes all eigenvalue density correlation functions, smallest eigenvalue distributions, and their microscopic limit at the origin. These quantities are relevant for the description of the Dirac operator spectrum in quantum chromodynamics with three colors in four Euclidean space-time dimensions. In the second part these two theories are related based on symmetries, and the random matrix approximation is explained. In the last part recent developments are covered, including the effect of finite chemical potential and finite space-time lattice spacing, and their corresponding orthogonal polynomials. This chapter also provides some open random matrix problems.
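For reference, the microscopic limit at the origin mentioned here is governed (in one standard convention, using squared variables and up to normalization; the index alpha is set by the number of massless flavours and the topological charge) by the hard-edge Bessel kernel:

```latex
K_\alpha(x,y) \;=\;
\frac{J_\alpha(\sqrt{x})\,\sqrt{y}\,J_\alpha'(\sqrt{y}) \;-\; \sqrt{x}\,J_\alpha'(\sqrt{x})\,J_\alpha(\sqrt{y})}{2\,(x-y)} .
```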
12

Boothroyd, Andrew T. Principles of Neutron Scattering from Condensed Matter. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780198862314.001.0001.

Abstract:
The book contains a comprehensive account of the theory and application of neutron scattering for the study of the structure and dynamics of condensed matter. All the principal experimental techniques available at national and international neutron scattering facilities are covered. The formal theory is presented, and used to show how neutron scattering measurements give direct access to a variety of correlation and response functions which characterize the equilibrium properties of bulk matter. The determination of atomic arrangements and magnetic structures by neutron diffraction and neutron optical methods is described, including single-crystal and powder diffraction, diffuse scattering from disordered structures, total scattering, small-angle scattering, reflectometry, and imaging. The principles behind the main neutron spectroscopic techniques are explained, including continuous and time-of-flight inelastic scattering, quasielastic scattering, spin-echo spectroscopy, and Compton scattering. The scattering cross-sections for atomic vibrations in solids, diffusive motion in atomic and molecular fluids, and single-atom and cooperative magnetic excitations are calculated. A detailed account of neutron polarization analysis is given, together with examples of how polarized neutrons can be exploited to obtain information about structural and magnetic correlations which cannot be obtained by other methods. Alongside the theoretical aspects, the book also describes the essential practical information needed to perform experiments and to analyse and interpret the data. Exercises are included at the end of each chapter to consolidate and enhance understanding of the material, and a summary of relevant results from mathematics, quantum mechanics, and linear response theory, is given in the appendices.
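A hedged reminder of the central relation alluded to here (prefactor conventions differ between texts): for a monatomic system with coherent scattering length b, the measured partial differential cross-section is proportional to the scattering function S(Q, omega), which is the time Fourier transform of a two-particle (van Hove) correlation function,

```latex
\frac{d^2\sigma}{d\Omega\, dE'} \;=\; \frac{k'}{k}\, b^2\, N\, S(\mathbf{Q},\omega),
\qquad
S(\mathbf{Q},\omega) \;=\; \frac{1}{2\pi\hbar N} \sum_{j,j'} \int_{-\infty}^{\infty} dt\; e^{-i\omega t}\,
\big\langle e^{-i\mathbf{Q}\cdot\mathbf{R}_{j}(0)}\, e^{i\mathbf{Q}\cdot\mathbf{R}_{j'}(t)} \big\rangle .
```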
13

Allen, Michael P., and Dominic J. Tildesley. Quantum simulations. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198803195.003.0013.

Abstract:
This chapter covers the introduction of quantum mechanics into computer simulation methods. The chapter begins by explaining how electronic degrees of freedom may be handled in an ab initio fashion and how the resulting forces are included in the classical dynamics of the nuclei. The technique for combining the ab initio molecular dynamics of a small region, with classical dynamics or molecular mechanics applied to the surrounding environment, is explained. There is a section on handling quantum degrees of freedom, such as low-mass nuclei, by discretized path integral methods, complete with practical code examples. The problem of calculating quantum time correlation functions is addressed. Ground-state quantum Monte Carlo methods are explained, and the chapter concludes with a forward look to the future development of such techniques particularly to systems that include excited electronic states.
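As a hedged sketch of the discretized path-integral idea this chapter builds on (the primitive approximation for a single particle in one dimension, written in one common convention; the book's own code examples will differ), the quantum partition function is mapped onto a classical ring polymer of P beads:

```latex
Z \;\approx\; \left(\frac{mP}{2\pi\beta\hbar^2}\right)^{P/2} \int dr_1 \cdots dr_P\;
\exp\!\left\{ -\beta \sum_{k=1}^{P} \left[ \frac{mP}{2\beta^2\hbar^2}\,(r_{k+1}-r_k)^2
+ \frac{V(r_k)}{P} \right] \right\}, \qquad r_{P+1} \equiv r_1 .
```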
14

National Aeronautics and Space Administration (NASA) Staff. Separating Direct and Indirect Turbofan Engine Combustion Noise While Estimating Post-Combustion (Post-Flame) Residence Time Using the Correlation Function. Independently Published, 2019.

15

Henriksen, Niels Engholm, and Flemming Yssing Hansen. Rate Constants, Reactive Flux. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198805014.003.0005.

Abstract:
This chapter discusses a direct approach to the calculation of the rate constant k(T) that bypasses the detailed state-to-state reaction cross-sections. The method is based on the calculation of the reactive flux across a dividing surface on the potential energy surface. Versions based on classical as well as quantum mechanics are described. The classical version and its relation to Wigner's variational theorem and to recrossings of the dividing surface are discussed. Neglecting recrossings, an approximate result based on the calculation of the classical one-way flux from reactants to products is considered. Recrossings can subsequently be included via a transmission coefficient. An alternative exact expression is formulated based on a canonical average of the flux time-correlation function. The chapter concludes with the quantum mechanical definition of the flux operator and the derivation of a relation between the rate constant and a flux correlation function.
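The classical reactive-flux expression referred to here can be written (in one standard form, with s the reaction coordinate, s-double-dagger the dividing surface, theta the step function, and the average taken over the reactant equilibrium ensemble; normalization conventions differ between texts) as

```latex
k(T) \;=\; \lim_{t\to\infty}\;
\frac{\big\langle \dot{s}(0)\,\delta\!\left[s(0)-s^{\ddagger}\right]\,
      \theta\!\left[s(t)-s^{\ddagger}\right] \big\rangle}
     {\big\langle \theta\!\left[s^{\ddagger}-s\right] \big\rangle},
```

whose short-time limit recovers transition state theory (no recrossings).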
16

Wendling, Fabrice, Marco Congedo, and Fernando H. Lopes da Silva. EEG Analysis. Edited by Donald L. Schomer and Fernando H. Lopes da Silva. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780190228484.003.0044.

Abstract:
This chapter addresses the analysis and quantification of electroencephalographic (EEG) and magnetoencephalographic (MEG) signals. Topics include the characteristics of these signals and practical issues such as sampling, filtering, and artifact rejection. Basic concepts of analysis in the time and frequency domains are presented, with attention to non-stationary signals, focusing on time-frequency signal decomposition, the analytic signal and Hilbert transform, the wavelet transform, matching pursuit, blind source separation and independent component analysis, canonical correlation analysis, and empirical mode decomposition. The behavior of these methods in denoising EEG signals is illustrated. Concepts of functional and effective connectivity are developed, with emphasis on methods to estimate causality and phase and time delays using linear and nonlinear methods. Attention is given to Granger causality and methods inspired by this concept. A concrete example is provided to show how information-processing methods can be combined in the detection and classification of transient events in EEG/MEG signals.
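To make the analytic-signal/Hilbert-transform step concrete, here is a small Python sketch (an illustration only; the sampling rate, band edges and the synthetic signal are assumptions, not taken from the chapter) that band-passes a toy EEG trace and extracts instantaneous amplitude and phase:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
# Toy "EEG": a 10 Hz rhythm with slowly drifting amplitude plus noise.
eeg = (1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)) * np.sin(2 * np.pi * 10 * t)
eeg += 0.5 * rng.normal(size=t.size)

# Band-pass around the alpha band before forming the analytic signal.
b, a = butter(4, [8.0, 12.0], btype="band", fs=fs)
alpha = filtfilt(b, a, eeg)

analytic = hilbert(alpha)                     # x(t) + i * H[x](t)
envelope = np.abs(analytic)                   # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))         # instantaneous phase
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print("mean instantaneous frequency (Hz):", inst_freq.mean())  # close to 10
```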
17

Paus, Tomáš. Combining brain imaging with brain stimulation: causality and connectivity. Edited by Charles M. Epstein, Eric M. Wassermann, and Ulf Ziemann. Oxford University Press, 2012. http://dx.doi.org/10.1093/oxfordhb/9780198568926.013.0034.

Abstract:
This article establishes the concept of a methodological approach that combines brain imaging with brain stimulation. Transcranial magnetic stimulation (TMS) is a tool that allows neural activity to be perturbed, in time and space, in a noninvasive manner. This approach allows the study of the brain-behaviour relationship. Under certain circumstances, the influence of one region on another, called effective connectivity, can be measured. Functional connectivity is the extent of correlation in brain activity measured across a number of spatially distinct brain regions. This measure of connectivity can be applied to any dataset acquired with brain-mapping tools; however, its interpretation is complex, and the technical complexity of the combined studies needs to be resolved. Future studies may benefit from focusing on neurochemical transmission in specific neural circuits and on the temporal dynamics of cortico-cortical interactions.
18

Aarts, D. G. A. L. Soft interfaces: the case of colloid–polymer mixtures. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198789352.003.0013.

Abstract:
In this chapter we discuss the interface of a phase separated colloid-polymer mixture. We start by highlighting a number of experimental studies, illustrating the richness of colloidal interface phenomena. This is followed by a derivation of the bulk phase behaviour within free volume theory. We subsequently calculate the interfacial tension using a squared gradient approach. The interfacial tension turns out to be ultralow, easily a million times smaller than a molecular interfacial tension. From the bulk and interface calculations we obtain the capillary length and compare to experiments, where good overall agreement is found. Finally, we focus on the thermal capillary waves of the interface and derive the static and dynamic height–height correlation functions, which describe the experimental data very well. We end with an outlook, where we address some outstanding questions concerning the behaviour of interfaces, to which colloids may provide unique insights.
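In standard capillary wave theory (written here in one common convention; prefactors depend on the Fourier normalization), the static height-height correlation spectrum of the thermally rough interface has the form

```latex
\big\langle |\hat{h}(\mathbf{q})|^{2} \big\rangle \;=\;
\frac{k_{B}T}{A\left(\gamma q^{2} + \Delta\rho\, g\right)},
\qquad
\ell_{\mathrm{cap}} \;=\; \sqrt{\frac{\gamma}{\Delta\rho\, g}},
```

with gamma the (ultralow) interfacial tension, Delta-rho the density difference between the phases, A the interfacial area and l_cap the capillary length; the ultralow tension is what makes these waves directly observable in colloid-polymer mixtures.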
19

Nitzan, Abraham. Chemical Dynamics in Condensed Phases. Oxford University Press, 2006. http://dx.doi.org/10.1093/oso/9780198529798.001.0001.

Abstract:
This text provides a uniform and consistent approach to the diverse problems encountered in the study of dynamical processes in condensed-phase molecular systems. Given the broad interdisciplinary aspect of this subject, the book focuses on three themes: coverage of needed background material, in-depth introduction of methodologies, and analysis of several key applications. The uniform approach and common language used in all discussions help to develop general understanding and insight into condensed-phase chemical dynamics. The applications discussed are among the most fundamental processes that underlie physical, chemical and biological phenomena in complex systems. The first part of the book starts with a general review of basic mathematical and physical methods (Chapter 1) and a few introductory chapters on quantum dynamics (Chapter 2), the interaction of radiation and matter (Chapter 3), and basic properties of solids (Chapter 4) and liquids (Chapter 5). In the second part the text embarks on a broad coverage of the main methodological approaches. The central role of classical and quantum time correlation functions is emphasized in Chapter 6. The presentation of dynamical phenomena in complex systems as stochastic processes is discussed in Chapters 7 and 8. The basic theory of quantum relaxation phenomena is developed in Chapter 9 and carried on in Chapter 10, which introduces the density operator, its quantum evolution in Liouville space, and the concept of reduced equations of motion. The methodological part concludes with a discussion of linear response theory in Chapter 11 and of the spin-boson model in Chapter 12. The third part of the book applies the methodologies introduced earlier to several fundamental processes that underlie much of the dynamical behaviour of condensed-phase molecular systems. Vibrational relaxation and vibrational energy transfer (Chapter 13), barrier crossing and diffusion-controlled reactions (Chapter 14), solvation dynamics (Chapter 15), electron transfer in bulk solvents (Chapter 16) and at electrode/electrolyte and metal/molecule/metal junctions (Chapter 17), and several processes pertaining to molecular spectroscopy in condensed phases (Chapter 18) are the main subjects discussed in this part.
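For readers new to the central objects of Chapter 6, the classical and quantum equilibrium time correlation functions referred to throughout are, in the usual notation (which may differ in detail from the book's),

```latex
C_{AB}(t) \;=\; \big\langle A(t)\,B(0) \big\rangle \;=\;
\begin{cases}
\displaystyle \int d\Gamma\; \rho_{\mathrm{eq}}(\Gamma)\, A(\Gamma_t)\, B(\Gamma) & \text{(classical)},\\[2ex]
\displaystyle \mathrm{Tr}\!\left[ \hat{\rho}_{\mathrm{eq}}\, e^{i\hat{H}t/\hbar}\, \hat{A}\,
e^{-i\hat{H}t/\hbar}\, \hat{B} \right] & \text{(quantum)} .
\end{cases}
```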
20

Dyall, Kenneth G., and Knut Faegri. Introduction to Relativistic Quantum Chemistry. Oxford University Press, 2007. http://dx.doi.org/10.1093/oso/9780195140866.001.0001.

Abstract:
This book provides an introduction to the essentials of relativistic effects in quantum chemistry, and a reference work that collects all the major developments in this field. It is designed for the graduate student and the computational chemist with a good background in nonrelativistic theory. In addition to explaining the necessary theory in detail, at a level that the non-expert and the student should readily be able to follow, the book discusses the implementation of the theory and practicalities of its use in calculations. After a brief introduction to classical relativity and electromagnetism, the Dirac equation is presented, and its symmetry, atomic solutions, and interpretation are explored. Four-component molecular methods are then developed: self-consistent field theory and the use of basis sets, double-group and time-reversal symmetry, correlation methods, molecular properties, and an overview of relativistic density functional theory. The emphases in this section are on the basics of relativistic theory and how relativistic theory differs from nonrelativistic theory. Approximate methods are treated next, starting with spin separation in the Dirac equation, and proceeding to the Foldy-Wouthuysen, Douglas-Kroll, and related transformations, Breit-Pauli and direct perturbation theory, regular approximations, matrix approximations, and pseudopotential and model potential methods. For each of these approximations, one-electron operators and many-electron methods are developed, spin-free and spin-orbit operators are presented, and the calculation of electric and magnetic properties is discussed. The treatment of spin-orbit effects with correlation rounds off the presentation of approximate methods. The book concludes with a discussion of the qualitative changes in the picture of structure and bonding that arise from the inclusion of relativity.
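For context, the Dirac equation whose symmetry, atomic solutions, and interpretation the book explores can be written in Hamiltonian form (one of several equivalent conventions, here with an external potential V) as

```latex
i\hbar\,\frac{\partial \psi}{\partial t} \;=\;
\big( c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} \;+\; \beta\, m c^{2} \;+\; V \big)\,\psi ,
```

with alpha and beta the 4x4 Dirac matrices and psi a four-component spinor; the Foldy-Wouthuysen and Douglas-Kroll transformations mentioned later act on this operator to approximately decouple its large and small components.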
21

Mills, Caitlin, Arianne Herrera-Bennett, Myrthe Faber, and Kalina Christoff. Why the Mind Wanders. Edited by Kalina Christoff and Kieran C. R. Fox. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780190464745.013.42.

Abstract:
This chapter offers a functional account of why the mind—when free from the demands of a task or the constraints of heightened emotions—tends to wander from one topic to another, in a ceaseless and seemingly random fashion. We propose the default variability hypothesis, which builds on William James’s phenomenological account of thought as a form of mental locomotion, as well as on recent advances in cognitive neuroscience and computational modeling. Specifically, the default variability hypothesis proposes that the default mode of mental content production yields the frequent arising of new mental states that have heightened variability of content over time. This heightened variability in the default mode of mental content production may be an adaptive mechanism that (1) enhances episodic memory efficiency through de-correlating individual episodic memories from one another via temporally spaced reactivations, and (2) facilitates semantic knowledge optimization by providing optimal conditions for interleaved learning.
22

van der Wal, Jenneke. A Featural Typology of Bantu Agreement. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780198844280.001.0001.

Abstract:
The Bantu languages are in some sense remarkably uniform (subject-verb-object (SVO) basic word order, noun classes, verbal morphology), but this extensive language family also shows a wealth of morphosyntactic variation. Two core areas in which such variation is attested are subject and object agreement. The book explores the variation in Bantu subject and object marking on the basis of data from 75 Bantu languages, discovering striking patterns (the Relation between Asymmetry and Non-Doubling Object Marking (RANDOM) and the Asymmetry Wants Single Object Marking (AWSOM) correlation), and providing a novel syntactic analysis. This analysis takes into account not just phi agreement, but also nominal licensing and information structure. A Person feature, associated with animacy, definiteness, or givenness, is shown to be responsible for differential object agreement, while at the same time accounting for doubling vs. non-doubling object marking, a hybrid solution to an age-old debate in Bantu comparative morphosyntax. It is furthermore proposed that low functional heads can Case-license flexibly downwards or upwards, depending on the relative topicality of the two arguments involved. This accounts for the properties of symmetric object marking in ditransitives (for Appl) and subject inversion constructions (for v). By keeping Agree constant and systematically determining which featural parameters are responsible for the attested variation, the proposed analysis argues for an emergentist view of features and parameters (following Biberauer 2018, 2019), and against both Strong Uniformity and Strong Modularity.
23

Ślusarski, Marek. Metody i modele oceny jakości danych przestrzennych [Methods and models for assessing the quality of spatial data]. Publishing House of the University of Agriculture in Krakow, 2017. http://dx.doi.org/10.15576/978-83-66602-30-4.

Abstract:
The quality of data collected in official spatial databases is crucial for making strategic decisions as well as for the implementation of planning and design works. Awareness of the quality level of these data is also important for individual users of official spatial data. The author presents methods and models for describing and evaluating the quality of spatial data collected in public registers. Data describing space in the highest degree of detail, collected in three databases: the land and buildings registry (EGiB), the geodetic registry of the land infrastructure network (GESUT) and the database of topographic objects (BDOT500), were analyzed. The results of the research concerned selected aspects of work on spatial data quality: the assessment of the accuracy of data collected in official spatial databases; the determination of the uncertainty of the area of registry parcels; the analysis of the risk of damage to the underground infrastructure network due to the quality of spatial data; the construction of a quality model for data collected in official databases; and the visualization of the phenomenon of uncertainty in spatial data. The evaluation of the accuracy of data collected in official, large-scale spatial databases was based on a representative sample of data. The test sample was a set of coordinate deviations with three variables, dX, dY and Dl: the deviations of the X and Y coordinates and the length of the point offset vector of the test sample in relation to its position recognized as faultless. The compatibility of the empirical accuracy distributions with models (theoretical distributions of random variables) was investigated, and the accuracy of the spatial data was also assessed by means of methods resistant to outliers. In determining the accuracy of spatial data collected in public registers, the author's own solution was used: a resistant method of relative frequency. Weight functions were proposed which modify, to a varying degree, the sizes of the vectors Dl, the lengths of the point offset vectors of the test sample in relation to their positions recognized as faultless. Within the scope of the uncertainty of the estimation of registry parcel areas, the impact of the errors of geodetic network points (reference points and points of higher-class networks) was determined, as well as the effect of the correlation between the coordinates of the same point on the accuracy of the determined plot area. The scope of the correction of plot areas in the EGiB database, calculated on the basis of re-measurements performed using techniques of equivalent accuracy, was determined. The analysis of the risk of damage to the underground infrastructure network due to the low quality of spatial data is another research topic presented in the paper. The main factors influencing the value of this risk have been identified: the incompleteness of spatial data sets and the insufficient accuracy of the determination of the horizontal and vertical position of the underground infrastructure. A method for estimating the project risk has been developed (quantitative and qualitative), and the author's own risk estimation technique, based on the idea of fuzzy logic, was proposed. Maps (2D and 3D) of the risk of damage to the underground infrastructure network were developed in the form of large-scale thematic maps, presenting the design risk in qualitative and quantitative form.
The data quality model is a set of rules used to describe the quality of these data sets. The proposed model defines a standardized approach for assessing and reporting the quality of the EGiB, GESUT and BDOT500 spatial databases. Quantitative and qualitative rules (automatic, office and field) for the control of data sets were defined. The minimum sample size and the number of eligible nonconformities in random samples were determined. The data quality elements were described using the following descriptors: range, measure, result, and type and unit of value. Data quality studies were performed according to the users' needs. The values of the impact weights were determined by the analytic hierarchy process (AHP) method. The harmonization of the conceptual models of the EGiB, GESUT and BDOT500 databases with the BDOT10k database was also analysed. It was found that the downloading and supplying of information from the analyzed registers in the BDOT10k creation and update processes are limited. Cartographic visualization techniques are an effective approach to providing users of spatial data sets with information concerning data uncertainty. Based on the author's own experience and research on examining the quality of official spatial data, a set of methods for visualizing the uncertainty of the EGiB, GESUT and BDOT500 databases was defined. This set includes visualization techniques designed to present three types of uncertainty: location, attribute values and time. Uncertainty of position was defined (for surface, line, and point objects) using several (three to five) visual variables. Uncertainty of attribute values and time uncertainty, describing for example the completeness or timeliness of sets, are presented by means of three graphical variables. The research problems presented in the paper are of cognitive and applied importance. They indicate the possibility of effectively evaluating the quality of spatial data collected in public registers and may be an important element of an expert system.
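As a purely illustrative sketch of the accuracy-assessment step (this is not the author's resistant method of relative frequency; the field names, units and the choice of robust estimator are assumptions), coordinate deviations dX, dY and the offset length Dl can be summarized with outlier-resistant statistics in Python:

```python
import numpy as np

def robust_accuracy_summary(dx, dy):
    """Summarize coordinate deviations with outlier-resistant statistics."""
    dl = np.hypot(dx, dy)                     # length of the point offset vector
    mad = lambda v: 1.4826 * np.median(np.abs(v - np.median(v)))  # ~sigma for normal data
    return {
        "median_dX": np.median(dx), "MAD_dX": mad(dx),
        "median_dY": np.median(dy), "MAD_dY": mad(dy),
        "median_Dl": np.median(dl), "RMS_Dl": np.sqrt(np.mean(dl**2)),
    }

# Synthetic test sample: mostly small deviations plus a few gross outliers (metres, assumed).
rng = np.random.default_rng(3)
dx = rng.normal(0.0, 0.05, size=1000); dx[:10] += 2.0
dy = rng.normal(0.0, 0.05, size=1000); dy[:10] -= 1.5
print(robust_accuracy_summary(dx, dy))
```

The median/MAD pair stays close to the accuracy of the bulk of the sample even when gross outliers are present, which is the motivation for using resistant estimators in this kind of assessment.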