
Journal articles on the topic 'Superposition (Local Interpretation)'


Consult the top 28 journal articles for your research on the topic 'Superposition (Local Interpretation).'


1

Dong, Hui-Ning, Hui-Ping Du, Shao-Yi Wu, and Peng Li. "Theoretical Interpretation of the EPR Parameters for Dy3+ Ion in LuPO4 Crystal." Zeitschrift für Naturforschung A 59, no. 11 (2004): 765–68. http://dx.doi.org/10.1515/zna-2004-1105.

Abstract:
Based on the superposition model, the EPR parameters g∥ and g⊥ of Dy3+, and the hyperfine structure constants A∥ and A⊥ of 161Dy3+ and 163Dy3+, in the LuPO4 crystal are calculated by perturbation formulas from crystal-field theory. In the calculations, the contributions of various admixtures and interactions, such as J-mixing, mixtures among states with the same J-value, second-order perturbation, covalency, and local lattice relaxation, are considered. The calculated results agree reasonably with the observed values.
2

Fragkos, Vasileios, Michael Kopp, and Igor Pikovski. "On inference of quantization from gravitationally induced entanglement." AVS Quantum Science 4, no. 4 (2022): 045601. http://dx.doi.org/10.1116/5.0101334.

Abstract:
Observable signatures of the quantum nature of gravity at low energies have recently emerged as a promising new research field. One prominent avenue is to test for gravitationally induced entanglement between two mesoscopic masses prepared in spatial superposition. Here, we analyze such proposals and what one can infer from them about the quantum nature of gravity as well as the electromagnetic analogues of such tests. We show that it is not possible to draw conclusions about mediators: even within relativistic physics, entanglement generation can equally be described in terms of mediators or in terms of non-local processes—relativity does not dictate a local channel. Such indirect tests, therefore, have limited ability to probe the nature of the process establishing the entanglement as their interpretation is inherently ambiguous. We also show that cosmological observations already demonstrate some aspects of quantization that these proposals aim to test. Nevertheless, the proposed experiments would probe how gravity is sourced by spatial superpositions of matter, an untested new regime of quantum physics.
3

Wibig, Tadeusz. "Testing the superposition model in small CORSIKA shower simulations." Journal of Physics G: Nuclear and Particle Physics 49, no. 3 (2022): 035201. http://dx.doi.org/10.1088/1361-6471/ac4da7.

Abstract:
The idea of superposition in the high-energy interactions of cosmic ray nuclei, and in the development of the extensive air showers they initiate, has been known for more than half a century. It has been thoroughly and successfully tested in a number of simulations for primary energies around 10¹⁵ eV and above. In this work, we investigate its applicability to lower energies. At the lowest energies, when the shower contains on average about one charged particle (or even fewer), deviations from the superposition model can be seen in the simulation results. Fluctuations of the higher moments of the main shower parameters are systematically broader than expected. Further studies to confirm superposition, in particular in the shower longitudinal profile, are in progress. A correct description of the longitudinal development of small showers, together with a precise description of their fluctuations at the observational level and a correct implementation of the superposition principle, will make it possible to construct a simple and fast phenomenological algorithm for generating small showers, indispensable for interpreting measurements made with a small local shower array and for determining the flux of single, incoherent, secondary cosmic ray particles.
4

Fedotova, Julia A., Uladzislaw E. Gumiennik, Svetlana A. Vorobyova, et al. "Phase composition and local environment of iron ions in gadolinium-doped iron oxide nanoparticles." Journal of the Belarusian State University. Chemistry, no. 2 (September 19, 2022): 30–37. http://dx.doi.org/10.33581/2520-257x-2022-2-30-37.

Abstract:
FeO ⋅ Fe2O3 ⋅ nH2O, Fe2.95Gd0.05O4 and Fe2.9Gd0.1O4 powders were obtained by chemical precipitation from aqueous solutions. The phase composition and local environment of iron ions in the gadolinium-doped iron oxide nanoparticles were studied by X-ray diffraction analysis and nuclear gamma resonance (NGR) spectroscopy. Interpretation of the X-ray diffraction patterns and NGR spectra of the synthesised samples indicates the presence of a superposition of maghemite γ-Fe2O3 and iron hydroxide α-FeOOH in the samples. It was found that when the powders are precipitated in the presence of gadolinium nitrate, an increased content of iron hydroxide α-FeOOH is observed; this phase disappears after annealing at 200 °C.
5

Chiarelli, P., and S. Chiarelli. "Stability of Quantum Eigenstates and Collapse of Superposition of States in a Fluctuating Vacuum: The Madelung Hydrodynamic Approach." European Journal of Applied Physics 3, no. 5 (2021): 11–28. http://dx.doi.org/10.24018/ejphysics.2021.3.5.97.

Abstract:
The paper investigates quantum fluctuating dynamics by using the stochastic generalization of the Madelung quantum-hydrodynamic approach. Using a discrete approach, the path-integral solution is derived in order to investigate how the final stationary configuration is obtained from the initial quantum superposition of states. The model shows that the quantum eigenstates remain stationary configurations with a very small perturbation of their mass density distribution, and that any eigenstate contributing to a quantum superposition of states can be reached in the final stationary configuration. When the non-local quantum potential acquires a finite range of interaction, the work shows that the macroscopic coarse-grained description of the theory can lead to a truly classical system. The minimum uncertainty attainable in the stochastic Madelung model is shown to be compatible with the maximum speed of transmission of information and interactions. The theory shows that, in the quantum deterministic limit, the uncertainty relations of quantum mechanics are recovered. The connections with decoherence theory and the Copenhagen interpretation of quantum mechanics are also discussed.
6

Morales-Bayuelo, Alejandro, and Ricardo Vivas-Reyes. "Theoretical Calculations and Modeling for the Molecular Polarization of Furan and Thiophene under the Action of an Electric Field Using Quantum Similarity." Journal of Quantum Chemistry 2014 (March 17, 2014): 1–10. http://dx.doi.org/10.1155/2014/585394.

Abstract:
A theoretical study on the molecular polarization of thiophene and furan under the action of an electric field, using Local Quantum Similarity Indexes (LQSI), was performed. This model is based on Hirshfeld partitioning of the electron density within the framework of Density Functional Theory (DFT). Six local similarity indexes were used: overlap, overlap-interaction, Coulomb, Coulomb-interaction, Euclidean distance of overlap, and Euclidean distance of Coulomb. In addition, the Topo-Geometrical Superposition Algorithm (TGSA) was used as the alignment method. This method provides a straightforward procedure for solving the problem of molecular relative orientation. It provides a tool to evaluate molecular quantum similarity, enabling the study of structural systems that differ in only one atom, such as thiophene and furan (point group C2v), and the cyclopentadienyl molecule (point group D5h). Additionally, this model can contribute to the interpretation of chemical bonds and molecular interactions in the framework of solvent effect theory.
7

POPOVA, A. D. "NONLINEAR QUANTUM MECHANICS WITH NONCLASSICAL GRAVITATIONAL SELF-INTERACTION." International Journal of Modern Physics A 04, no. 13 (1989): 3229–67. http://dx.doi.org/10.1142/s0217751x89001321.

Abstract:
An original approach for the self-consistent inclusion of gravity in the quantum mechanics of a particle is developed. (There is no connection with second quantization.) A nonstandard action principle is constructed for the stationary situation: a quantum particle in a stationary state creates some nonclassical stationary gravitational field and interacts with it. The accompanying problem of the covariantization of quantum operators is considered. The general theory is illustrated by the Newtonian-Schrödingerian and quasiclassical limiting cases. The levels of applicability of ordinary quantum mechanics and the problems of measurement and of the interpretation of nonclassical gravity are discussed. The “uncertainty relations” connecting uncertainties of some “local” parts of the curvature with those of the particle’s position and momentum are derived. The superposition principle is generalized on the basis of an approximate action.
8

Karlin, Ilya. "Derivation of regularized Grad's moment system from kinetic equations: modes, ghosts and non-Markov fluxes." Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, no. 2118 (2018): 20170230. http://dx.doi.org/10.1098/rsta.2017.0230.

Abstract:
Derivation of the dynamic correction to Grad’s moment system from kinetic equations (regularized Grad’s 13 moment system, or R13) is revisited. The R13 distribution function is found as a superposition of eight modes. Three primary modes, known from the previous derivation (Karlin et al. 1998 Phys. Rev. E 57 , 1668–1672. ( doi:10.1103/PhysRevE.57.1668 )), are extended into the nonlinear parameter domain. Three essentially nonlinear modes are identified, and two ghost modes which do not contribute to the R13 fluxes are revealed. The eight-mode structure of the R13 distribution function implies partition of R13 fluxes into two types of contributions: dissipative fluxes (both linear and nonlinear) and nonlinear streamline convective fluxes. Physical interpretation of the latter non-dissipative and non-local in time effect is discussed. A non-perturbative R13-type solution is demonstrated for a simple Lorentz scattering kinetic model. The results of this study clarify the intrinsic structure of the R13 system. This article is part of the theme issue ‘Hilbert’s sixth problem’.
9

Medina, Ana, Josep Triviño, Rafael J. Borges, Claudia Millán, Isabel Usón, and Massimo D. Sammito. "ALEPH: a network-oriented approach for the generation of fragment-based libraries and for structure interpretation." Acta Crystallographica Section D Structural Biology 76, no. 3 (2020): 193–208. http://dx.doi.org/10.1107/s2059798320001679.

Abstract:
The analysis of large structural databases reveals general features and relationships among proteins, providing useful insight. A different approach is required to characterize ubiquitous secondary-structure elements, where flexibility is essential in order to capture small local differences. The ALEPH software is optimized for the analysis and extraction of small protein folds by relying on their geometry rather than on their sequence. The annotation of the structural variability of a given fold provides valuable information for fragment-based molecular-replacement methods, in which testing alternative model hypotheses can solve difficult structures when homology models are unavailable or unsuccessful. ARCIMBOLDO_BORGES combines the use of composite secondary-structure elements as a search model with density modification and tracing to reveal the rest of the structure when both steps are successful. This phasing method relies on general fold libraries describing variations around a given pattern of β-sheets and helices extracted using ALEPH. The program introduces characteristic vectors defined from the main-chain atoms as a way to describe the geometrical properties of the structure. ALEPH encodes structural properties in a graph network, the exploration of which allows secondary-structure annotation, decomposition of a structure into small compact folds, generation of libraries of models representing variations of a given fold and, finally, superposition of these folds onto a target structure. These functions are available through a graphical interface designed to interactively show the results of structure manipulation, annotation, fold decomposition, clustering and library generation. ALEPH can produce pictures of the graphs, structures and folds for publication purposes.
10

Jamet, Quentin, William K. Dewar, Nicolas Wienders, Bruno Deremble, Sally Close, and Thierry Penduff. "Locally and Remotely Forced Subtropical AMOC Variability: A Matter of Time Scales." Journal of Climate 33, no. 12 (2020): 5155–72. http://dx.doi.org/10.1175/jcli-d-19-0844.1.

Abstract:
Mechanisms driving the North Atlantic meridional overturning circulation (AMOC) variability at low frequency are of central interest for accurate climate predictions. Although the subpolar gyre region has been identified as a preferred place for generating climate time-scale signals, their southward propagation remains under consideration, complicating the interpretation of the observed time series provided by the Rapid Climate Change–Meridional Overturning Circulation and Heatflux Array–Western Boundary Time Series (RAPID–MOCHA–WBTS) program. In this study, we aim at disentangling the respective contribution of the local atmospheric forcing from signals of remote origin for the subtropical low-frequency AMOC variability. We analyze for this a set of four ensembles of a regional (20°S–55°N), eddy-resolving (1/12°) North Atlantic oceanic configuration, where surface forcing and open boundary conditions are alternatively permuted from fully varying (realistic) to yearly repeating signals. Their analysis reveals the predominance of local, atmospherically forced signal at interannual time scales (2–10 years), whereas signals imposed by the boundaries are responsible for the decadal (10–30 years) part of the spectrum. Due to this marked time-scale separation, we show that, although the intergyre region exhibits peculiarities, most of the subtropical AMOC variability can be understood as a linear superposition of these two signals. Finally, we find that the decadal-scale, boundary-forced AMOC variability has both northern and southern origins, although the former dominates over the latter, including at the site of the RAPID array (26.5°N).
11

Bao, Pengfei. "Reinterpreting Benjamin’s Translational Constellation: A Quantum Field Perspective on Interpretive Entanglement in Classical Text Translation Networks." International Journal of Translation and Interpretation Studies 5, no. 2 (2025): 45–58. https://doi.org/10.32996/ijtis.2025.5.2.5.

Abstract:
This study offers a quantum field-theoretical reinterpretation of Walter Benjamin’s theory (Young-Jin 2005) of the “translational constellation,” positing that classical texts exist in a state of “interpretive entanglement” across linguistic and historical contexts. By developing a Translation Entanglement Metric (TEM) grounded in complex network analysis, the research uncovers non-local correlations among translations of the Daodejing (《道德经》), demonstrating that meaning emerges not from isolated interpretive acts but from the dynamic interplay of translational practices across time and space. Drawing on quantum hermeneutics, the study challenges the linear model of translation as a sequence of discrete events, reframing it instead as a quantum field where each translation functions simultaneously as a nodal entity and a wave of meaning, entangled with others in a non-hierarchical constellation. Case studies of Arthur Waley, James Legge, and Roger T. Ames’ translations of the Daodejing illustrate how TEM captures both semantic resonance and ideological dissonance, revealing that Benjamin’s “afterlife of the original” is best conceptualized as a quantum system of overlapping possibilities, where fidelity and creativity coexist in a state of superposition. This interdisciplinary approach establishes a quantum cognitive framework for translation studies, expanding the spatiotemporal dimensions and ontological implications of classical text interpretation.
12

Cordell, Lindrith, and A. E. McCafferty. "A terracing operator for physical property mapping with potential field data." GEOPHYSICS 54, no. 5 (1989): 621–34. http://dx.doi.org/10.1190/1.1442689.

Abstract:
The terracing operator works iteratively on gravity or magnetic data, using the sense of the measured field’s local curvature, to produce a field comprised of uniform domains separated by abrupt domain boundaries. The result is crudely proportional to a physical‐property function defined in one (profile case) or two (map case) horizontal dimensions. This result can be extended to a physical‐property model if its behavior in the third (vertical) dimension is defined, either arbitrarily or on the basis of the local geologic situation. The terracing algorithm is computationally fast and appropriate to use with very large digital data sets. Where gravity and magnetic data are both available, terracing provides an effective means by which the two data sets can be compared directly. Results of the terracing operation somewhat resemble those of conventional susceptibility (or density) mapping. In contrast with conventional susceptibility mapping, however, the terraced function is a true step function, which cannot be depicted by means of contour lines. Magnetic or gravity fields calculated from the physical‐property model do not, in general, produce an exact fit to the observed data. By intent, the terraced map is more closely analogous to a geologic map in that domains are separated by hard‐edged domain boundaries and minor within‐domain variation is neglected. The terracing operator was applied separately to aeromagnetic and gravity data from a 136 km × 123 km area in eastern Kansas. Results provide a reasonably good physical representation of both the gravity and the aeromagnetic data. Superposition of the results from the two data sets shows many areas of agreement that can be referenced to geologic features within the buried Precambrian crystalline basement. The emerging picture of basement geology is much better resolved than that obtained either from the scanty available drill data or from interpretation of the geophysical data by inspection.
13

Biondi, Biondo L., and Clement Kostov. "High‐resolution velocity spectra using eigenstructure methods." GEOPHYSICS 54, no. 7 (1989): 832–42. http://dx.doi.org/10.1190/1.1442712.

Abstract:
Stacking spectra provide maximum‐likelihood estimates for the stacking velocity, or for the ray parameter, of well separated reflections in additive white noise. However, the resolution of stacking spectra is limited by the aperture of the array and the frequency of the data. Despite these limitations, parametric spectral estimation methods achieve better resolution than does stacking. To improve resolution, the parametric methods introduce a parsimonious model for the spectrum of the data. In particular, when the data are modeled as the superposition of wavefronts, the properties of the eigenstructure of the data covariance matrix can be used to obtain high‐resolution spectra. The traditional stacking spectra can also be expressed as a function of the data covariance matrix and directly compared to the eigenstructure spectra. The superiority of the latter in separating closely interfering reflections is then apparent from a simple geometric interpretation. Eigenstructure methods were originally developed for use with narrow‐band signals, while seismic reflections are wide‐band and transient in time. Taking advantage of the full bandwidth of seismic data, we average spectra from several frequency bands. We choose each frequency band wide enough, so that we can average over time estimates of the covariance matrix. Thus, we obtain a robust estimate of the covariance matrix from short data sequences. A field‐data example shows that the high‐resolution estimators are particularly attractive for use in the estimation of local spectra in which short arrays are considered. Several realistic synthetic examples of stacking‐velocity spectra illustrate the improved performance of the new methods in comparison with conventional processing.
14

Hazlett, R. D., and D. K. Babu. "Transient-Inflow-Performance Modeling From Analytic Line-Source Solutions for Arbitrary-Trajectory Wells." SPE Journal 23, no. 03 (2018): 906–18. http://dx.doi.org/10.2118/189463-pa.

Abstract:
We present two easily computable, equally valid, semianalytic, single-phase, constant-rate solutions to the diffusivity equation for an arbitrarily oriented uniform-flux line source in a 3D, anisotropic, bounded system in Cartesian coordinates. With the addition of superposition, these become inflow solutions for wells of arbitrary trajectory. In addition, we produce analytic time derivatives for pressure-transient analyses (PTAs) of complex wells. If we extract solution components for 2D systems from the general solution, we can construct discrete complex-fracture-inflow and PTA capability for vertical, fully penetrating fractures, suitable for use as the basis solution in modeling complex phenomena, such as pressure-constrained production or development of fracture interference. For a 3D slanted well, the full characterization of dimensionless pressure over 10 decades of dimensionless time behavior can be produced in 1.5 seconds. With a fast-computing analytic solution for pressure anywhere in the system, we can also produce dense pressure maps at scalable resolution where any point could represent an observation well for convolution and enhanced interpretation. Likewise, the pressure derivative and the slope of the logarithmic temporal derivative of pressure can be mapped throughout to indicate local flow regime in a complex system. In particular, we compare and contrast the PTA signatures from symmetrical and asymmetrical horizontal, slanted, and diagonal line sources and examine when the behavior of a thin 3D reservoir collapses to the equivalent of a 2D fully penetrating fracture. Once the reservoir-thickness/length ratio reaches 1:100, all wells with the same projection onto the x–y plane are indistinguishable except for very early time, probably masked by wellbore/fracture-storage effects.
15

Campillo, M., J. C. Gariel, K. Aki, and F. J. Sánchez-Sesma. "Destructive strong ground motion in Mexico city: Source, path, and site effects during great 1985 Michoacán earthquake." Bulletin of the Seismological Society of America 79, no. 6 (1989): 1718–35. http://dx.doi.org/10.1785/bssa0790061718.

Abstract:
Simultaneous consideration of source, path, and site effects on ground motion during the 1985 Michoacán earthquake allows us to draw coherent conclusions regarding the roles played in the Mexico City disaster by the rupture process, the mode of propagation of the waves between the epicentral zone and Mexico City, and the local amplification. In contrast to the horizontal component, which showed dramatic amplification of the 2 to 3 sec motion at lake sediment sites, we observe almost identical vertical displacement seismograms containing ripples with 2 to 3 sec period throughout the Mexico City valley, whether the recording site is on the lake sediments or on hard rock. We therefore conclude that the 2 to 3 sec motion responsible for the destruction in Mexico City was present in the incident field. After performing a phase analysis, we interpret the signal as the superposition of long-period Rayleigh waves and short-period Lg with a dominant period of about 3 sec. The analysis of the teleseismic records indicates that the radiation of this event is enhanced for waves around the 3 sec period. Except in the case of stations for which an anomalous path effect is suspected, the records present ripples appearing a few seconds after the beginning of the signal. The characteristics of near-fault records show that the rupture process consists of the growth of a smooth crack. The numerical simulation indicates that the 3 sec period ripples can be explained by a series of changes in the rupture front velocity. We examine two alternative source models associated with different crustal models to explain the characteristics of the vertical displacements recorded in Mexico City. Our preferred model attributes the cause of the enhanced 3 sec motion to the irregularity in the rupture propagation in addition to the effect of the local conditions in Mexico City.
This interpretation leads to a very coherent scenario of what happened from the start of the failure on the fault up to the destruction in Mexico City. This example illustrates the need to consider simultaneously source, path, and site effects in order to understand strong ground motions.
16

Margrave, Gary F., and Robert J. Ferguson. "Wavefield extrapolation by nonstationary phase shift." GEOPHYSICS 64, no. 4 (1999): 1067–78. http://dx.doi.org/10.1190/1.1444614.

Full text
Abstract:
The phase‐shift method of wavefield extrapolation applies a phase shift in the Fourier domain to deduce a scalar wavefield at one depth level given its value at another. The phase‐shift operator varies with frequency and wavenumber, and assumes constant velocity across the extrapolation step. We use nonstationary filter theory to generalize this method to nonstationary phase shift (NSPS), which allows the phase shift to vary laterally depending upon the local propagation velocity. For comparison, we derive an analytic form for the popular phase shift plus interpolation (PSPI) method in the limit of an exhaustive set of reference velocities. NSPS and this limiting form of PSPI can be written as generalized Fourier integrals which reduce to ordinary phase shift in the constant velocity limit. In the (x, ω) domain, these processes are the transpose of each other; however, only NSPS has the physical interpretation of forming the scaled, linear superposition of laterally variable impulse responses (i.e., Huygens’ wavelets). The difference between NSPS and PSPI is clear when they are compared in the case of a piecewise constant velocity variation. Define a set of windows such that the jth window is unity when the propagation velocity is the jth distinct velocity and is zero otherwise. NSPS can be computed by applying the window set to the input data to create a set of windowed wavefields, which are individually phase‐shift extrapolated with the corresponding constant velocity, and the extrapolated set is superimposed. PSPI proceeds by phase‐shift extrapolating the input data for each distinct velocity, applying the jth window to the jth extrapolation, and superimposing.
Though neither process is fully correct, PSPI has the unphysical limit that discontinuities in the lateral velocity variation cause discontinuities in the wavefield, whereas NSPS shows the expected wavefront “healing.” We then formulate a finite aperture compensation for NSPS which has the practical result of absorbing lateral boundaries for all incidence angles. Wavefield extrapolation can be regarded as the crosscorrelation of the wavefield with the expected response of a point diffractor at the new depth level. Aperture compensation simply applies a laterally varying window to the infinite, theoretical diffraction response. The crosscorrelation becomes spatially variant, even for constant velocity, and hence is a nonstationary filter. The nonstationary effects of aperture compensation can be simultaneously applied with the NSPS extrapolation through a laterally variable velocity field.
17

Sussulini, Alessandra. "Chemical Imaging – Is an Image Always Worth a Thousand Spectra?" Brazilian Journal of Analytical Chemistry 10, no. 38 (2022): 11–12. http://dx.doi.org/10.30744/brjac.2179-3425.point-of-view-asussulini.n38.

Abstract:
Chemical images can be described as distribution maps that correlate the chemical information of an element or molecule, such as mass-to-charge ratio (m/z) or wavelength, with its intensity and/or concentration in a given sample. These images are usually obtained by mass spectrometry (MS) or optical spectroscopy techniques, where hundreds or thousands of spectra are initially acquired and dedicated image processing software is employed to construct and edit the final pictures, as well as selecting and annotating regions of interest in a sample, performing calibration procedures, etc. Mass spectrometry imaging (preferably abbreviated as MSI, to distinguish it from ion mobility spectrometry – IMS) is currently the most employed chemical imaging strategy, as can be noticed in the most recently published papers. Depending on the selected ionization technique, molecular or elemental images can be acquired. For molecular MSI, the classical matrix-assisted laser desorption/ionization (MALDI) is generally applied for imaging lipids, peptides and proteins, and the ambient ionization technique desorption electrospray ionization (DESI) is commonly applied for visualizing lipid distribution. In terms of elemental MSI, laser ablation inductively coupled plasma (LA-ICP) is undoubtedly the technique of choice, although nano-secondary ion mass spectrometry (nanoSIMS) can also be applied. Considering optical spectroscopy, the main techniques used nowadays are Raman and near-infrared radiation – NIR – spectroscopy for molecular imaging, and Synchrotron radiation X-ray fluorescence – SRXRF – and laser-induced breakdown spectroscopy – LIBS – for elemental imaging. Amongst these techniques, the best spatial resolutions are generally achieved by SRXRF (elemental imaging) and Raman spectroscopy (molecular imaging). 
Analytical chemistry advances in chemical imaging allow the acquisition of images with high spatial resolution, which is particularly interesting when studying specific regions or cell structures in a biological sample. For instance, in a Parkinson’s disease model, LA-ICP-MS images with good spatial resolution make the distinction of specific mouse brain regions possible and, consequently, the association of metal ion concentrations to each region,1 which is a relevant result considering micro-local metal speciation in neurodegenerative diseases. Nevertheless, there are some drawbacks in chemical imaging that demand further analytical development, such as the long analysis time and the lack of certified reference materials for quantitative analysis and method validation, as well as open-source software with advanced multivariate statistical analysis tools. Another obstacle to overcome concerns the integration of elemental and molecular imaging results. Since 2009, when one of the first review articles regarding the combination of these imaging approaches in a synergistic way was proposed by Becker and Jakubowski,2 until more recently described in reviews from 20203 and 2021,4 it has been possible to realize that there is still much work to be done in this field. This is mostly due to the fact that each imaging technique provides different spatial resolutions, making image superposition difficult, and also the absence of software and algorithms that allow the integration of different data sets in order to obtain trustworthy results and produce relevant study hypotheses. Besides that, the instrumentation for chemical imaging is rather costly and usually research groups are specialized in either molecular or elemental imaging. 
With these considerations in mind, it is important to emphasize that the community involved in chemical imaging research should focus not only on the quality of the generated images in terms of resolution but also, if they are indeed worth a thousand spectra, on interpreting the initial questions in a deep and holistic manner. After all, the main objective of chemical imaging is for the images to represent how the process in question (disease, treatment, contamination, genetic modification, etc.) locally affects the system under study (biological, environmental, pharmaceutical sample) and, in turn, to provide solutions to problems in different areas, such as the forensic, environmental and life sciences.
APA, Harvard, Vancouver, ISO, and other styles
18

Tetzlaff, Tom, Stefan Rotter, Eran Stark, Moshe Abeles, Ad Aertsen, and Markus Diesmann. "Dependence of Neuronal Correlations on Filter Characteristics and Marginal Spike Train Statistics." Neural Computation 20, no. 9 (2008): 2133–84. http://dx.doi.org/10.1162/neco.2008.05-07-525.

Full text
Abstract:
Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. It is largely unknown how the spike train correlation structure is altered by this filtering and what the consequences for the dynamics of the system and for the interpretation of measured correlations are. In this study, we focus on linearly filtered spike trains and particularly consider correlations caused by overlapping presynaptic neuron populations. We demonstrate that correlation functions and statistical second-order measures like the variance, the covariance, and the correlation coefficient generally exhibit a complex dependence on the filter properties and the statistics of the presynaptic spike trains. We point out that both contributions can play a significant role in modulating the interaction strength between neurons or neuron populations. In many applications, the coherence allows a filter-independent quantification of correlated activity. In different network models, we discuss the estimation of network connectivity from the high-frequency coherence of simultaneous intracellular recordings of pairs of neurons.
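The abstract's point that the coherence gives a filter-independent quantification can be checked numerically: filtering both trains with the same linear (here exponential, synapse-like) kernel changes the covariance strongly but leaves the magnitude-squared coherence essentially untouched, since the transfer function cancels in |Pxy|^2/(Pxx·Pyy). A minimal sketch with hypothetical Bernoulli spike trains sharing a common presynaptic component:

```python
import numpy as np
from scipy.signal import coherence, lfilter

rng = np.random.default_rng(0)
n = 2**14
# Two spike trains correlated through a shared (overlapping) source.
common = rng.random(n) < 0.05
s1 = (common | (rng.random(n) < 0.05)).astype(float)
s2 = (common | (rng.random(n) < 0.05)).astype(float)

# Exponential "synaptic" filter: y[k] = x[k] + a * y[k-1].
a = 0.9
f1 = lfilter([1.0], [1.0, -a], s1)
f2 = lfilter([1.0], [1.0, -a], s2)

# Second-order moments depend strongly on the filter...
print(np.cov(s1, s2)[0, 1], np.cov(f1, f2)[0, 1])

# ...while the magnitude-squared coherence is (up to estimation error)
# unchanged, because |H(f)|^2 cancels between numerator and denominator.
freqs, c_raw = coherence(s1, s2, nperseg=1024)
_, c_filt = coherence(f1, f2, nperseg=1024)
print(np.mean(np.abs(c_raw - c_filt)))  # small
```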
19

Taylor, Jordan K., and Ian P. McCulloch. "Wavefunction branching: when you can't tell pure states from mixed states." Quantum 9 (March 25, 2025): 1670. https://doi.org/10.22331/q-2025-03-25-1670.

Full text
Abstract:
We propose a definition of wavefunction "branchings": quantum superpositions which can't be feasibly distinguished from the corresponding mixed state, even under time evolution. Our definition is largely independent of interpretations, requiring only that it takes many more local gates to swap branches than to distinguish them. We give several examples of states admitting such branch decompositions. Under our definition, we argue that attempts to get relative-phase information between branches will fail without frequent active error correction, that branches are effectively the opposite of good error-correcting codes, that branches effectively only grow further apart in time under natural evolution, that branches tend to absorb spatial entanglement, that branching is stronger in the presence of conserved quantities, and that branching implies effective irreversibility. Identifying these branch decompositions in many-body quantum states could shed light on the emergence of classicality, provide a metric for experimental tests at the quantum/classical boundary, and allow for longer numerical time evolution simulations. We see this work as a generalization of the basic ideas of environmentally-induced decoherence to situations with no clear system/environment split.
20

Mousavi, Naeim, and Javier Fullea. "3-D thermochemical structure of lithospheric mantle beneath the Iranian plateau and surrounding areas from geophysical–petrological modelling." Geophysical Journal International 222, no. 2 (2020): 1295–315. http://dx.doi.org/10.1093/gji/ggaa262.

Full text
Abstract:
SUMMARY While the crustal structure across the Iranian plateau is fairly well constrained from controlled source and passive seismic data, the lithospheric mantle structure remains relatively poorly known, in particular in terms of lithology. Geodynamic interpretations of the area rely on a robust image of its present-day thermochemical structure. In this study, the 3-D crustal and upper mantle structure of the Iranian plateau is investigated, for the first time, through integrated geophysical–petrological modelling combining elevation, gravity and gravity gradient fields, seismic and petrological data. Our modelling approach allows us to simultaneously match complementary data sets with key mantle physical parameters (density and seismic velocities) being determined within a self-consistent thermodynamic framework. We first elaborate a new 3-D isostatically balanced crustal model constrained by available controlled source and passive seismic data, as well as by complementary gravity data. Next, we follow a progressively complex modelling strategy, starting from a laterally quasi chemically homogeneous model and then including structural, petrological and seismic tomography constraints. Distinct mantle compositions are tested in each of the tectonothermal terranes in our study region based on available local xenolith suites and global petrological data sets. Our preferred model matches the input geophysical observables (gravity field and elevation), includes local xenolith data, and qualitatively matches velocity anomalies from state-of-the-art seismic tomography models. Beneath the Caspian and Oman seas (offshore areas) our model is defined by an average Phanerozoic fertile composition. The Arabian Plate and the Turan platform are characterized by a Proterozoic composition based on xenolith samples from eastern Arabia.
In agreement with previous studies, our results also suggest a moderately refractory Proterozoic type composition in the Zagros-Makran belt, extending to the Alborz, Turan and Kopeh-Dagh terranes. In contrast, the mantle in our preferred model in Central Iran is defined by a fertile composition derived from a xenolith suite in northeast Iran. Our results indicate that the deepest Moho boundary is located beneath the high Zagros Mountains (∼65 km). The thinnest crust is found in the Oman Sea, Central Iran (Lut Block) and Talesh Mountains. A relatively deep Moho boundary is modelled in the Kopeh-Dagh Mountains, where Moho depth reaches ∼55 km. The lithosphere is ∼280 km thick beneath the Persian Gulf (Arabian–Eurasian Plate boundary) and the Caspian Sea, thinning towards the Turan platform and the high Zagros. Beneath the Oman Sea, the base of the lithosphere is at ∼150 km depth, rising to ∼120 km beneath Central Iran, with the thinnest lithosphere (<100 km) being located beneath the northwest part of the Iranian plateau. We propose that the present-day lithosphere–asthenosphere topography is the result of the superposition of different geodynamic processes: (i) Arabia–Eurasia convergence lasting from the mid Jurassic to recent times and closure of the Neo-Tethys ocean, (ii) reunification of Gondwanian fragments to form the Central Iran block and Iranian microcontinent, (iii) impingement of small-scale convection and slab break-off beneath Central Iran commencing in the mid Eocene and (iv) refertilization of the lithospheric mantle beneath the Iranian microcontinent.
21

Warunek, Ronald Michael. "Quantum Computing with Bipolar Charge States: A Theoretical Framework Based on the Solid State Atom Model." June 1, 2025. https://doi.org/10.5281/zenodo.15579829.

Full text
Abstract:
This paper introduces a novel theoretical framework for quantum computing, grounded in the principles of the Solid State Atom (SSA) model. It proposes leveraging the SSA's fundamental concept of bipolar charge states, specifically the bound positive (e+) and negative (e−) electron pair, to represent and manipulate quantum bits (qubits). The framework further expands on this by identifying a natural ternary (three-state) system inherent in AC voltage generation within the SSA model (positive, zero, and negative voltage), suggesting the potential for qutrits and new interpretations of digital bits. Magnetic fields are posited as the primary mechanism for qubit control and readout. This approach offers a physically intuitive foundation for quantum computation, providing a local and realistic interpretation of superposition and entanglement as inherent properties of matter. While significant theoretical and technological challenges remain, this work presents a unique and potentially more robust pathway toward realizing quantum computation based on the fundamental structure of the SSA model.
22

Wilson-Gerow, Jordan, Annika Dugad, and Yanbei Chen. "Decoherence by warm horizons." Physical Review D 110, no. 4 (2024). http://dx.doi.org/10.1103/physrevd.110.045002.

Full text
Abstract:
Recently Danielson, Satishchandran, and Wald (DSW) have shown that quantum superpositions held outside of Killing horizons will decohere at a steady rate. This occurs because of the inevitable radiation of soft photons (gravitons), which imprint an electromagnetic (gravitational) “which-path” memory onto the horizon. Rather than appealing to this global description, an experimenter ought to also have a local description for the cause of decoherence. One might intuitively guess that this is just the bombardment of Hawking/Unruh radiation on the system; however, simple calculations challenge this idea: the same superposition held in a finite temperature inertial laboratory does not decohere at the DSW rate. In this work we provide a local description of the decoherence by mapping the DSW setup onto a worldline-localized model resembling an Unruh-DeWitt particle detector. We present an interpretation in terms of random local forces which do not sufficiently self-average over long times. Using the Rindler horizon as a concrete example we clarify the crucial role of temperature, and show that the Unruh effect is the only quantum mechanical effect underlying these random forces. A general lesson is that for an environment which induces Ohmic friction on the central system (as one gets from the classical Abraham-Lorentz-Dirac force, in an accelerating frame) the fluctuation-dissipation theorem implies that when this environment is at finite temperature it will cause steady decoherence on the central system. Our results agree with DSW and provide the complementary local perspective. Published by the American Physical Society 2024
23

Waegell, Mordecai. "Madelung Mechanics and Superoscillations." New Journal of Physics, July 29, 2024. http://dx.doi.org/10.1088/1367-2630/ad689b.

Full text
Abstract:
Abstract In single-particle Madelung mechanics, the single-particle quantum state $\Psi(\vec{x},t) = R(\vec{x},t) e^{iS(\vec{x},t)/\hbar}$ is interpreted as comprising an entire conserved fluid of classical point particles, with local density $R(\vec{x},t)^2$ and local momentum $\vec{\nabla}S(\vec{x},t)$ (where $R$ and $S$ are real). The Schrödinger equation gives rise to the continuity equation for the fluid, and the Hamilton-Jacobi equation for particles of the fluid, which includes an additional density-dependent quantum potential energy term $Q(\vec{x},t) = -\frac{\hbar^2}{2m}\frac{\nabla^2 R(\vec{x},t)}{R(\vec{x},t)}$, which is all that makes the fluid behavior nonclassical. In particular, the quantum potential can become negative and create a nonclassical boost in the kinetic energy. This boost is related to superoscillations in the wavefunction, where the local frequency of $\Psi$ exceeds its global band limit. Berry showed that for states of definite energy $E$, the regions of superoscillation are exactly the regions where $Q(\vec{x},t)<0$. For energy superposition states with band-limit $E_+$, the situation is slightly more complicated, and the bound is no longer $Q(\vec{x},t)<0$. However, the fluid model provides a definite local energy for each fluid particle which allows us to define a local band limit for superoscillation, and with this definition, all regions of superoscillation are again regions where $Q(\vec{x},t)<0$ for general superpositions. An alternative interpretation of these quantities involving a "reduced quantum potential" is reviewed and advanced, and a parallel discussion of superoscillation in this picture is given. Detailed examples are given which illustrate the role of the quantum potential and superoscillations in a range of scenarios.
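For reference, the decomposition summarized above reads, in its standard textbook form (real $R$, $S$; $\rho = R^2$ the fluid density):

```latex
\Psi = R\,e^{iS/\hbar},\qquad
\partial_t (R^2) + \vec{\nabla}\!\cdot\!\Big(R^2\,\frac{\vec{\nabla}S}{m}\Big) = 0,\qquad
\partial_t S + \frac{|\vec{\nabla}S|^2}{2m} + V + Q = 0,\qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}.
```

The first equation is mass conservation for the fluid; the second is the classical Hamilton-Jacobi equation plus the quantum potential $Q$, the only nonclassical term.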
24

Tang, Jau. "A new paradigm for double-slit interference: cavity-induced nonlocal quantized momentum transfer with no need of Schrödinger's wave theory, self-interference, and wavefunction collapse." March 23, 2025. https://doi.org/10.5281/zenodo.15073467.

Full text
Abstract:
We propose a quantum framework where cavity-induced nonlocal stochastic quantized momentum transfer governs double-slit interference of single electrons, replacing self-interference, wavefunction collapse, and Schrödinger’s wavefunction description. Using Heisenberg’s operator formalism, we model the electron’s interaction with the double slit as a quantized field potential. This approach explains interference via discrete momentum transfer through stochastic cavity modes. We explore its role as a non-local hidden-variable mechanism, predicting deviations in Bell violation and finer discrete interference fringes in short cavity-mode wavelength regimes, opening new avenues for experimental verification. Our new interpretation of quantum dynamics sheds light on long-standing debates about quantum reality, hidden variables, wavefunction superposition and Schrödinger’s cat, the illusion of self-interference of a single electron, and the misconception of instantaneous wavefunction collapse in measurements. Our theory provides a deterministic description of an electron interacting with the stochastic yet nonlocal hidden variables characteristic of the quantized cavity modes. It meets Einstein’s desire for a more complete theory and bridges the gap between physical reality and the incomplete conventional quantum theory that requires the confusing Copenhagen or many-worlds interpretations.
25

Jiang, Yikun, Manki Kim, and Gabriel Wong. "Entanglement entropy and edge modes in topological string theory. Part II. The dual gauge theory story." Journal of High Energy Physics 2021, no. 10 (2021). http://dx.doi.org/10.1007/jhep10(2021)202.

Full text
Abstract:
Abstract This is the second in a two-part paper devoted to studying entanglement entropy and edge modes in the A model topological string theory. This theory enjoys a gauge-string (Gopakumar-Vafa) duality which is a topological analogue of AdS/CFT. In part 1, we defined a notion of generalized entropy for the topological closed string theory on the resolved conifold. We provided a canonical interpretation of the generalized entropy in terms of the q-deformed entanglement entropy of the Hartle-Hawking state. We found string edge modes transforming under a quantum group symmetry and interpreted them as entanglement branes. In this work, we provide the dual Chern-Simons gauge theory description. Using Gopakumar-Vafa duality, we map the closed string theory Hartle-Hawking state to a Chern-Simons theory state containing a superposition of Wilson loops. These Wilson loops are dual to closed string worldsheets that determine the partition function of the resolved conifold. We show that the undeformed entanglement entropy due to cutting these Wilson loops reproduces the bulk generalized entropy and therefore captures the entanglement underlying the bulk spacetime. Finally, we show that under the Gopakumar-Vafa duality, the bulk entanglement branes are mapped to a configuration of topological D-branes, and the non-local entanglement boundary condition in the bulk is mapped to a local boundary condition in the gauge theory dual. This suggests that the geometric transition underlying the gauge-string duality may also be responsible for the emergence of entanglement branes.
26

Hoffmann, Alexandre, Romain Brossier, Ludovic Métivier, and Alizia Tarayoun. "Local uncertainty quantification for 3D time-domain full waveform inversion with ensemble Kalman filters: application to a North sea OBC dataset." Geophysical Journal International, March 21, 2024. http://dx.doi.org/10.1093/gji/ggae114.

Full text
Abstract:
Summary Full waveform inversion has emerged as the state-of-the-art high-resolution seismic imaging technique, both in seismology for global and regional scale imaging and in industry for exploration purposes. While gaining in popularity, full waveform inversion, at an operational level, remains a heavy computational process involving the repeated solution of large-scale 3D wave propagation problems. For this reason it is common practice to focus the interpretation of the results on the final estimated model. This overlooks that full waveform inversion is an ill-posed inverse problem in a high-dimensional space, for which the solution is intrinsically non-unique. This is why being able to qualify and quantify the uncertainty attached to a model estimated by full waveform inversion is key. To this end, we propose to extend at an operational level the concepts introduced in a previous study related to the coupling between ensemble Kalman filters and full waveform inversion. These concepts had been developed for 2D frequency-domain full waveform inversion. We extend them here to the case of 3D time-domain full waveform inversion, relying on a source subsampling strategy to assimilate the data progressively within the Kalman filter. We apply our strategy to an ocean bottom cable field dataset from the North Sea to illustrate its feasibility. We explore the convergence of the filter in terms of number of elements, and extract variance and covariance information showing which parts of the model are well constrained and which are not. Analyzing the variance helps to gain insight on how well the final estimated model is constrained by the whole full waveform inversion workflow. The variance maps appear as the superposition of a smooth trend related to the geometrical spreading and a high-resolution trend related to reflectors. Mapping lines of the covariance (or correlation matrix) to the model space helps to gain insight on the local resolution.
Through a wave propagation analysis, we are also able to relate variance peaks in the model space to variance peaks in the data space. Compared to other posterior-covariance approximation schemes, our combination of ensemble Kalman filtering and full waveform inversion is intrinsically scalable, making it a good candidate for exploiting recent exascale high-performance computing machines.
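The analysis step underlying the ensemble Kalman filter/full waveform inversion coupling can be sketched with a toy linear forward operator standing in for the wave-propagation modelling; all sizes and variable names below are hypothetical, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_model, n_data, n_ens = 50, 20, 100

# Toy linear forward operator standing in for 3D wave propagation.
G = rng.standard_normal((n_data, n_model)) / np.sqrt(n_model)
m_true = rng.standard_normal(n_model)
sigma_d = 0.1
d_obs = G @ m_true + sigma_d * rng.standard_normal(n_data)

# Prior ensemble of model realizations.
ens = rng.standard_normal((n_model, n_ens))

# Stochastic EnKF analysis: update every member with perturbed observations.
pred = G @ ens                                  # predicted data, one column per member
dm = ens - ens.mean(axis=1, keepdims=True)
dp = pred - pred.mean(axis=1, keepdims=True)
C_md = dm @ dp.T / (n_ens - 1)                  # model-data covariance
C_dd = dp @ dp.T / (n_ens - 1) + sigma_d**2 * np.eye(n_data)
K = C_md @ np.linalg.inv(C_dd)                  # Kalman gain
perturbed = d_obs[:, None] + sigma_d * rng.standard_normal((n_data, n_ens))
ens_post = ens + K @ (perturbed - pred)

# Posterior spread shrinks where the data constrain the model, mirroring
# the variance maps discussed in the abstract.
print(ens.var(axis=1).mean(), ens_post.var(axis=1).mean())
```

The posterior ensemble variance and covariance play the role of the uncertainty maps extracted in the paper; in the real workflow the matrix products are ensemble-based and never form the full covariance, which is what makes the approach scalable.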
27

Liu, Jingshou, Haimeng Yang, Ke Xu, et al. "Genetic mechanism of transfer zones in rift basins: Insights from geomechanical models." GSA Bulletin, February 8, 2022. http://dx.doi.org/10.1130/b36151.1.

Full text
Abstract:
A transfer zone is a structure produced to conserve deformation between the fault structures on either side of it. Increasing numbers of transfer zones are being identified in rift basins, which are areas of petroleum accumulation and potential exploration targets. This paper provides a numerical simulation method for the genesis and development of transfer zones based on geomechanical modeling. On the basis of three-dimensional (3-D) seismic interpretation, using the Tongcheng fault as an example, the fault activity parameter and fault activity intensity index were established to quantitatively characterize the difference in fault activity on the two sides of a transfer zone. A geomechanical model was developed for a transfer zone in a rift basin, and the structural characteristics and genetic mechanism of a convergent fault were studied using paleostress and strain numerical simulations. Affected by different movements of boundary faults and basement faults, the evolution of the Tongcheng fault can be divided into three stages: (1) during the Funing period, which was the main development period of compound transfer faults, the activity, stress, and strain of the fault blocks on either side of the Tongcheng fault were obviously different; (2) during the Dainan period, which was the development stage of inherited compound transfer faults, the northern part of the Tongcheng area underwent local compression, and the T3 anticline began to form; and (3) during the Sanduo period, the Tongcheng fault experienced right-lateral strike-slip activity, where the activity showed two stages of change, first increasing and then decreasing, and the Tongcheng fault anticline developed. The superposition of multiple complex tectonic movements produced a transfer zone that has both strike-slip and extensional fault properties. The geomechanical model in this paper provides important insights for analyzing the evolution of transfer zones in rift basins.
28

Adedokun, Omonike, Olagoke Oladejo, Kehinde Alao, et al. "Delineation of structural lineaments of Shaki West Southwestern Nigeria using high resolution aeromagnetic data." Journal of the Nigerian Society of Physical Sciences, May 1, 2025, 2493. https://doi.org/10.46481/jnsps.2025.2493.

Full text
Abstract:
Minor earthquakes, known as earth tremors, often occur in areas prone to seismic activity. However, there is a notable gap in knowledge about earth tremors in Nigeria, with little documentation before 1987; a series of notable events between 1990 and 2000 prompted researchers to delve deeper into their study. Therefore, this study is aimed at delineating the structural lineaments of Shaki West, southwestern Nigeria, using High Resolution Aeromagnetic Data (HRAD) to identify the underlying basement geology and define the structural framework of the study area. The study area’s aeromagnetic data of Shaki (Sheet 199) underwent processing and interpretation using Oasis Montaj software to assess basement configuration and structural integrity. The data were further enhanced using the Total Horizontal Derivative (THDR) in order to determine the orientations of the lineaments in the study area. The orientations obtained from the THDR map revealed that the Pan-African orogeny accounts for 52% of the lineaments in the study area, the Kibaran orogeny for 31%, and the Liberian orogeny for 17%. The upward continuation maps suggest the presence of faults in the depth range of 2.0–2.25 km. The overall depth to magnetic sources in the area is relatively shallow compared to sedimentary basin areas. Based on the orientation of faults on the magnetic fault map, obtained by superposing the lineaments extracted from the THDR map on the geological map of the study area, three distinct sets of sinistral/dextral faults were recognized in the Shaki West local government area: E-W, NE-SW and NW-SE. This suggests that the NE-SW and NW-SE fault sets could be responsible for the tremors experienced in Shaki West, southwestern Nigeria. It is concluded that the study area is not immune from experiencing tremors from time to time.
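The Total Horizontal Derivative used above for lineament enhancement is simply the horizontal gradient magnitude of the anomaly grid, THDR = sqrt((∂T/∂x)² + (∂T/∂y)²), whose maxima trace source edges. A minimal sketch on a synthetic contact (hypothetical grid, not the Shaki data):

```python
import numpy as np

# Synthetic magnetic anomaly: a step-like contact along x = 0 (assumed model).
x = np.linspace(-5.0, 5.0, 200)
y = np.linspace(-5.0, 5.0, 200)
X, Y = np.meshgrid(x, y)
T = np.tanh(3.0 * X)

# Total horizontal derivative: gradient magnitude of the gridded field.
dT_dy, dT_dx = np.gradient(T, y, x)   # axis 0 is y, axis 1 is x
thdr = np.hypot(dT_dx, dT_dy)

# The THDR ridge sits over the contact (columns nearest x = 0).
row, col = np.unravel_index(np.argmax(thdr), thdr.shape)
print(col)  # near the centre of the grid
```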