Journal articles on the topic 'Inverse codes'

Listed below are the top 50 journal articles on the topic 'Inverse codes.'

1. Usha, K. "Generation of Walsh codes in two different orderings using 4-bit Gray and Inverse Gray codes." Indian Journal of Science and Technology 5, no. 3 (2012): 1–5. http://dx.doi.org/10.17485/ijst/2012/v5i3.29.

2. Rabin, A. V., S. V. Michurin, and V. A. Lipatnikov. "DEVELOPMENT OF A CLASS OF SYSTEM AND RETURN SYSTEM MATRIXES PROVIDING INCREASE IN NOISE IMMUNITY OF SPECTRALLY EFFECTIVE MODULATION SCHEMES ON BASIS OF ORTHOGONAL CODING." Issues of radio electronics, no. 10 (October 20, 2018): 75–79. http://dx.doi.org/10.21778/2218-5453-2018-10-75-79.

Abstract:
This paper proposes an additional coding scheme, called orthogonal coding by the authors, for increasing the noise immunity of digital message-transmission systems at a fixed code rate. A way of defining orthogonal codes is presented, a synthesis algorithm for the system and inverse system matrices of orthogonal codes is developed, and the main parameters of several matrices constructed by the proposed algorithm are given. Orthogonal coding, a special case of convolutional coding, is defined by matrices whose elements are polynomials in the delay variable with integer coefficients. Code words are obtained by multiplying an information polynomial by the system matrix, and decoding is performed by multiplying by the inverse system matrix. The article gives the basic relations of orthogonal coding and the properties of the system and inverse matrices. The parameters of the system and inverse system matrices provide an additional gain in signal-to-noise ratio, obtained through more effective use of the energy of the transmitted signals: for the transmission of one symbol, the energy of several symbols is accumulated.
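
The multiply-by-system-matrix encoding and multiply-by-inverse decoding described above can be sketched with a toy unimodular matrix over GF(2)[D] (binary coefficients for simplicity; the paper's matrices have integer coefficients and are not reproduced here):

```python
import numpy as np

def gf2_polymul(a, b):
    """Multiply two GF(2) polynomials (coefficient lists, lowest degree first)."""
    return np.mod(np.convolve(a, b), 2)

def gf2_polyadd(a, b):
    n = max(len(a), len(b))
    a = np.pad(a, (0, n - len(a)))
    b = np.pad(b, (0, n - len(b)))
    return np.mod(a + b, 2)

def matvec(M, u):
    """Apply a 2x2 polynomial matrix to a length-2 vector of polynomials."""
    return [gf2_polyadd(gf2_polymul(M[i][0], u[0]), gf2_polymul(M[i][1], u[1]))
            for i in range(2)]

# A unimodular system matrix G(D) = [[1, D], [0, 1]] over GF(2)[D] is its own
# inverse, since D + D = 0 (mod 2): encoding then decoding recovers the input.
G = [[[1], [0, 1]], [[0], [1]]]
u = [np.array([1, 0, 1]), np.array([0, 1, 1])]   # two information polynomials
c = matvec(G, u)                                  # encode: multiply by G
u_hat = matvec(G, c)                              # decode: multiply by G^-1 (= G)
# u_hat[i] equals u[i] up to trailing zero coefficients
```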
3. Al-Sammak, A. J., and R. S. K. Al-Alawi. "An Encoder for Differential Manchester and Inverse Differential Manchester Line Codes." IEEJ Transactions on Electronics, Information and Systems 125, no. 2 (2005): 379–80. http://dx.doi.org/10.1541/ieejeiss.125.379.

4. Négadi, Tidjani. "The Genetic Codes: Mathematical Formulae and an Inverse Symmetry-Information Relationship." Information 8, no. 1 (2016): 6. http://dx.doi.org/10.3390/info8010006.

5. Birget, Jean-Camille, and Stuart W. Margolis. "Two-letter group codes that preserve aperiodicity of inverse finite automata." Semigroup Forum 76, no. 1 (2007): 159–68. http://dx.doi.org/10.1007/s00233-007-9024-6.

6. Ibáñez, Javier, Jorge Sastre, Pedro Ruiz, José M. Alonso, and Emilio Defez. "An Improved Taylor Algorithm for Computing the Matrix Logarithm." Mathematics 9, no. 17 (2021): 2018. http://dx.doi.org/10.3390/math9172018.

Abstract:
The most popular method for computing the matrix logarithm is the inverse scaling and squaring method in conjunction with a Padé approximation, sometimes accompanied by the Schur decomposition. In this work, we present a Taylor series algorithm, based on the transformation-free approach to the inverse scaling and squaring technique, that uses recent matrix polynomial formulas for evaluating the Taylor approximation of the matrix logarithm more efficiently than the Paterson–Stockmeyer method. Two MATLAB implementations of this algorithm, based on relative forward and backward error analysis respectively, were developed and compared with different state-of-the-art MATLAB functions. Numerical tests showed that the new implementations are generally more accurate than the previously available codes, with an intermediate execution time among all the codes compared.
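
A minimal sketch of the inverse scaling and squaring idea (square roots to shrink the matrix toward the identity, a truncated Taylor series, then rescaling), assuming a matrix with no eigenvalues on the closed negative real axis; this is not the paper's tuned algorithm:

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def logm_iss(A, taylor_terms=16, tol=0.25):
    """Matrix logarithm by inverse scaling and squaring with a Taylor series."""
    n = A.shape[0]
    I = np.eye(n)
    k = 0
    while np.linalg.norm(A - I, 1) > tol:     # scale: repeated square roots
        A = sqrtm(A)
        k += 1
    X = A - I                                 # now log A = 2^k * log(I + X)
    L, P = np.zeros_like(X), I.copy()
    for j in range(1, taylor_terms + 1):      # log(I+X) = sum_j (-1)^(j+1) X^j / j
        P = P @ X
        L = L + ((-1) ** (j + 1) / j) * P
    return (2 ** k) * L

A = np.array([[2.0, 1.0], [0.0, 3.0]])
print(np.allclose(logm_iss(A), logm(A)))      # True: matches SciPy's reference
```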
7. Duc-Minh Pham, A. B. Premkumar, and A. S. Madhukumar. "Error Detection and Correction in Communication Channels Using Inverse Gray RSNS Codes." IEEE Transactions on Communications 59, no. 4 (2011): 975–86. http://dx.doi.org/10.1109/tcomm.2011.022811.100092.

8. Alabiad, Sami, and Yousef Alkhamees. "Constacyclic Codes over Finite Chain Rings of Characteristic p." Axioms 10, no. 4 (2021): 303. http://dx.doi.org/10.3390/axioms10040303.

Abstract:
Let R be a finite commutative chain ring of characteristic p with invariants p, r, and k. In this paper, we study λ-constacyclic codes of an arbitrary length N over R, where λ is a unit of R. We first reduce this to the investigation of constacyclic codes of length p^s (N = n_1 p^s, p ∤ n_1) over a certain finite chain ring CR(u^k, r_b) of characteristic p, which is an extension of R. Then we use the discrete Fourier transform (DFT) to construct an isomorphism γ between R[x]/⟨x^N − λ⟩ and a direct sum ⊕_{b∈I} S(r_b) of certain local rings, where I is a complete set of representatives of the p-cyclotomic cosets modulo n_1. By this isomorphism, all codes over R and their dual codes are obtained from the ideals of S(r_b). In addition, we determine the inverse of γ explicitly, so that the unique polynomial representations of λ-constacyclic codes may be calculated. Finally, for k = 2 the exact number of such codes is provided.
9. Delfosse, Nicolas, and Naomi H. Nickerson. "Almost-linear time decoding algorithm for topological codes." Quantum 5 (December 2, 2021): 595. http://dx.doi.org/10.22331/q-2021-12-02-595.

Abstract:
In order to build a large-scale quantum computer, one must be able to correct errors extremely fast. We design a fast decoding algorithm for topological codes that corrects Pauli errors, erasures, and combinations of the two. Our algorithm has a worst-case complexity of O(nα(n)), where n is the number of physical qubits and α is the inverse of Ackermann's function, which grows very slowly; for all practical purposes, α(n) ≤ 3. We prove that our algorithm performs optimally for errors of weight up to (d−1)/2 and for loss of up to d−1 qubits, where d is the minimum distance of the code. Numerically, we obtain a threshold of 9.9% for the 2d toric code with perfect syndrome measurements and 2.6% with faulty measurements.
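
The O(nα(n)) bound is characteristic of the disjoint-set (union-find) data structure used to grow and merge error clusters; a minimal sketch of that structure (the decoder itself, with its cluster growth and peeling steps, is not shown):

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression.

    A sequence of n operations costs O(n * alpha(n)), where alpha is the
    inverse Ackermann function -- the source of the almost-linear bound.
    """
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx                              # union by rank
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

# e.g. merging error clusters on a toy lattice of 6 qubits:
uf = UnionFind(6)
uf.union(0, 1); uf.union(1, 2)
assert uf.find(0) == uf.find(2)
```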
10. Giusti, Chad, and Vladimir Itskov. "A No-Go Theorem for One-Layer Feedforward Networks." Neural Computation 26, no. 11 (2014): 2527–40. http://dx.doi.org/10.1162/neco_a_00657.

Abstract:
It is often hypothesized that a crucial role for recurrent connections in the brain is to constrain the set of possible response patterns, thereby shaping the neural code. This implies the existence of neural codes that cannot arise solely from feedforward processing. We set out to find such codes in the context of one-layer feedforward networks and identified a large class of combinatorial codes that indeed cannot be shaped by the feedforward architecture alone. However, these codes are difficult to distinguish from codes that share the same sets of maximal activity patterns in the presence of subtractive noise. When we coarsened the notion of combinatorial neural code to keep track of only maximal patterns, we found the surprising result that all such codes can in fact be realized by one-layer feedforward networks. This suggests that recurrent or many-layer feedforward architectures are not necessary for shaping the (coarse) combinatorial features of neural codes. In particular, it is not possible to infer a computational role for recurrent connections from the combinatorics of neural response patterns alone. Our proofs use mathematical tools from classical combinatorial topology, such as the nerve lemma and the existence of an inverse nerve. An unexpected corollary of our main result is that any prescribed (finite) homotopy type can be realized by a subset of the form [Formula: see text], where [Formula: see text] is a polyhedron.
11. Higueras, Manuel, Pedro Puig, Elizabeth A. Ainsbury, and Kai Rothkamm. "A new inverse regression model applied to radiation biodosimetry." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 471, no. 2174 (2015): 20140588. http://dx.doi.org/10.1098/rspa.2014.0588.

Abstract:
Biological dosimetry based on chromosome aberration scoring in peripheral blood lymphocytes enables timely assessment of the ionizing radiation dose absorbed by an individual. Here, new Bayesian-type count data inverse regression methods are introduced for situations where responses are Poisson or two-parameter compound Poisson distributed. Our Poisson models are calculated in a closed form, by means of Hermite and negative binomial (NB) distributions. For compound Poisson responses, complete and simplified models are provided. The simplified models are also expressible in a closed form and involve the use of compound Hermite and compound NB distributions. Three examples of applications are given that demonstrate the usefulness of these methodologies in cytogenetic radiation biodosimetry and in radiotherapy. We provide R and SAS codes which reproduce these examples.
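
As a schematic of count-data inverse regression (a classical maximum-likelihood stand-in, not the paper's Bayesian Hermite/negative-binomial machinery; all calibration coefficients below are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dose_estimate(count, cells, c0, c1, c2, d_max=10.0):
    """Infer dose d from an observed aberration count, assuming Poisson counts
    with mean cells * (c0 + c1*d + c2*d^2) -- a hypothetical calibration curve."""
    def nll(d):                      # negative log-likelihood up to a constant
        lam = cells * (c0 + c1 * d + c2 * d * d)
        return lam - count * np.log(lam)
    return minimize_scalar(nll, bounds=(0.0, d_max), method="bounded").x

print(dose_estimate(count=25, cells=500, c0=0.001, c1=0.02, c2=0.06))  # ~0.75
```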
12. Kalidindi, Surya R., J. Houskamp, G. Proust, and H. Duvvuru. "Microstructure Sensitive Design with First Order Homogenization Theories and Finite Element Codes." Materials Science Forum 495-497 (September 2005): 23–30. http://dx.doi.org/10.4028/www.scientific.net/msf.495-497.23.

Abstract:
A mathematical framework called Microstructure Sensitive Design (MSD) has been developed recently to solve inverse problems of materials design, where the goal is to identify the class of microstructures that are predicted to satisfy a set of designer specified objectives and constraints [1]. This paper demonstrates the application of the MSD framework to a specific case study involving mechanical design. Processing solutions to obtain one of the elements of the desired class of textures are also explored within the same framework.
13. AL-ALAWI, RAIDA, and A. J. AL-SAMMAK. "DESIGN OF A MULTICODE BI-PHASE ENCODER FOR DATA TRANSMISSION." Journal of Circuits, Systems and Computers 15, no. 01 (2006): 1–12. http://dx.doi.org/10.1142/s0218126606002897.

Abstract:
In this paper, we present a versatile Multicode Bi-Phase Encoder (MBPE) circuit capable of encoding five different bi-phase line codes, namely Bi-Phase-Level (Bi-Φ-L), Bi-Phase-Mark (Bi-Φ-M), Bi-Phase-Space (Bi-Φ-S), Differential Manchester (DM), and Inverse Differential Manchester (IDM) codes. The design methodology is based on a new definition of these codes in terms of encoding rules and state diagrams, instead of the traditional way of representing them in terms of their bit transitions. The operation mode of the MBPE is set by three selection lines, which can be either hardware or software controlled. This facilitates altering the data transmission protocol without changing the encoder hardware. The functionality and design of the MBPE are outlined. VHDL has been used to describe the behavior of the MBPE, whose operation was verified using the ModelSim XE II simulation tools. Implementation and testing of the MBPE on a XILINX Spartan-II FPGA showed that the MBPE circuit is capable of encoding NRZ data into any of the five codes.
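
As a software analogue of one of those state-diagram encoders, a differential Manchester encoder fits in a few lines; the polarity convention below is one of several in use, and the paper's VHDL state machines are not reproduced here:

```python
def differential_manchester(bits, level=1):
    """Differential Manchester: a mid-bit transition always occurs; a 0 adds a
    transition at the start of the bit period (one common convention -- inverse
    differential Manchester swaps the roles of 0 and 1)."""
    halves = []
    for b in bits:
        if b == 0:
            level = -level        # start-of-bit transition encodes a 0
        halves.append(level)      # first half-bit
        level = -level            # mandatory mid-bit transition
        halves.append(level)      # second half-bit
    return halves

print(differential_manchester([0, 1, 1, 0]))   # [-1, 1, 1, -1, -1, 1, -1, 1]
```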
14. Samanta, Supriti, Goutam K. Maity, and Subhadipta Mukhopadhyay. "Implementation of Orthogonal Codes Using MZI." Micro and Nanosystems 12, no. 3 (2020): 159–67. http://dx.doi.org/10.2174/1876402912666200211121624.

Abstract:
Background: In Code Division Multiple Access (CDMA)/Multi-Carrier CDMA (MC-CDMA), Walsh-Hadamard codes are widely used for their orthogonality, which leads to good correlation properties. These orthogonal codes are important because of their many significant applications. Objective: To implement all-optical Walsh-Hadamard codes using the Mach–Zehnder Interferometer (MZI). Method: The MZI is considered in a tree architecture of Semiconductor Optical Amplifiers (SOA). The second-order Hadamard matrix and the inverse Hadamard matrix are constructed using SOA-MZIs. A higher-order Hadamard matrix (H4), formed by the Kronecker product of lower-order Hadamard matrices (H2), is also analyzed and constructed. Results: To realize these schemes experimentally, design issues such as time delay, nonlinear phase modulation, extinction ratio, and synchronization of signals are important. Lasers of wavelength 1552 nm and 1534 nm can be used as input and control signals, respectively. As the whole system is digital, intensity losses due to couplers in the interconnecting stage should not create many problems in producing the desired optical bits at the output. The simulation results were obtained with MATLAB 9; the Hadamard H2 (2×2) matrix yields an output beam intensity of I ≈ 10^8 W·m⁻² for different input values. Conclusion: The implementation of Walsh-Hadamard codes using MZIs is explored in this paper, and experimental results show better performance of the proposed scheme, compared to recently reported methods using electronic circuits, regarding versatility, reconfigurability, and compactness. The design can be used and extended for the diverse applications in which Walsh-Hadamard codes are required.
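
The Kronecker-product construction of higher-order Walsh-Hadamard matrices mentioned above is a two-liner in software (the optical SOA-MZI realization is hardware and not modelled here):

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])               # second-order Hadamard matrix
H4 = np.kron(H2, H2)                           # higher order via Kronecker product
H2_inv = H2.T / 2                              # inverse Hadamard (H2 @ H2.T = 2 I)
assert np.allclose(H2 @ H2_inv, np.eye(2))
assert np.allclose(H4 @ H4.T, 4 * np.eye(4))   # rows are mutually orthogonal codes
```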
15. Peng, Chen, and Dong. "Experimental Data Assessment and Fatigue Design Recommendation for Stainless-Steel Welded Joints." Metals 9, no. 7 (2019): 723. http://dx.doi.org/10.3390/met9070723.

Abstract:
Stainless steel possesses outstanding advantages such as good corrosion resistance and long service life; it is one of the primary materials for sustainable structures, and welding is one of the main connection modes of stainless-steel bridges and other structures. Fatigue damage at welded joints therefore deserves attention. The existing fatigue design codes for stainless-steel structures mainly adopt the design philosophy of structural steel. To comprehensively review the published fatigue test data of welded joints in stainless steel, this paper summarizes the fatigue test data of representative stainless-steel welded joints and obtains the S–N curves of six representative joint types by statistical evaluation. A comparison of the fatigue strengths from existing design codes and from the fatigue test data showed that the fatigue strength of stainless-steel welded joints is higher than that of structural-steel welded joints. The flexibility of regression analysis with and without a fixed negative inverse slope is discussed based on the scatter index; the fatigue test data of stainless-steel welded joints are more consistent with the S–N curve regressed with a free negative inverse slope. A design proposal for the fatigue strength of representative stainless-steel welded joints is presented based on the S–N curve regressed with the free negative inverse slope.
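
The fixed-versus-free slope comparison amounts to fitting log10(N) = log10(C) − m·log10(S) with the negative inverse slope m either pinned (design codes commonly fix values such as m = 3) or estimated from the data; the numbers below are made-up placeholders:

```python
import numpy as np

S = np.array([120.0, 100.0, 80.0, 60.0])   # stress ranges in MPa (illustrative)
N = np.array([2e5, 5e5, 1.5e6, 6e6])       # cycles to failure (illustrative)

# Free slope: fit both m and C in log10(N) = log10(C) - m * log10(S).
slope, intercept = np.polyfit(np.log10(S), np.log10(N), 1)   # slope equals -m
m_free = -slope

# Fixed slope: pin m and fit only the intercept by least squares.
m_fixed = 3.0
intercept_fixed = np.mean(np.log10(N) + m_fixed * np.log10(S))
print(m_free, intercept, intercept_fixed)
```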
16. Xie, Chunli, Xia Wang, Cheng Qian, and Mengqi Wang. "A Source Code Similarity Based on Siamese Neural Network." Applied Sciences 10, no. 21 (2020): 7519. http://dx.doi.org/10.3390/app10217519.

Abstract:
Finding similar code snippets is a fundamental task in software engineering. Several approaches have been proposed for this task using statistical language models, which focus on the syntax and structure of code rather than the deep semantic information underlying it. In this paper, a Siamese neural network is proposed that maps code snippets into continuous space vectors and tries to capture their semantic meaning. First, an unsupervised pre-training method models code snippets as a weighted series of word vectors, with the weights fitted by Term Frequency-Inverse Document Frequency (TF-IDF). Then, a Siamese neural network is trained to learn semantic vector representations of code snippets. Finally, cosine similarity is used to measure the similarity score between pairs of code snippets. We have evaluated our approach on a dataset of functionally similar code. The experimental results show that our method improves performance over a single word-embedding method.
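
A sketch of the TF-IDF-weighted word-vector baseline (the Siamese network itself is omitted, and the token embeddings here are random stand-ins):

```python
import numpy as np
from collections import Counter
from math import log

def tfidf_weights(snippets):
    """Per-snippet TF-IDF weight for every token of each tokenized snippet."""
    n = len(snippets)
    df = Counter(tok for s in snippets for tok in set(s))
    return [{t: (c / len(s)) * log(n / df[t]) for t, c in Counter(s).items()}
            for s in snippets]

def snippet_vector(snippet, weights, emb):
    """Weighted sum of per-token embedding vectors."""
    return np.sum([weights[t] * emb[t] for t in snippet], axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

snippets = [["for", "i", "in", "range"], ["while", "i", "<", "n"]]
rng = np.random.default_rng(0)
emb = {t: rng.normal(size=8) for s in snippets for t in s}   # random stand-ins
w = tfidf_weights(snippets)
v0, v1 = (snippet_vector(s, wi, emb) for s, wi in zip(snippets, w))
print(cosine(v0, v1))
```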
17. Błasiok, Jarosław, Venkatesan Guruswami, Preetum Nakkiran, Atri Rudra, and Madhu Sudan. "General Strong Polarization." Journal of the ACM 69, no. 2 (2022): 1–67. http://dx.doi.org/10.1145/3491390.

Abstract:
Arıkan's exciting discovery of polar codes has provided an altogether new way to efficiently achieve Shannon capacity. Given a (constant-sized) invertible matrix M, a family of polar codes can be associated with this matrix, and its ability to approach capacity follows from the polarization of an associated [0, 1]-bounded martingale, namely its convergence in the limit to either 0 or 1 with probability 1. Arıkan showed appropriate polarization of the martingale associated with the matrix G2 = (1 1; 0 1) to get capacity-achieving codes. His analysis was later extended to all matrices M that satisfy an obvious necessary condition for polarization. While Arıkan's theorem does not guarantee that the codes achieve capacity at small blocklengths (specifically at lengths polynomial in 1/ε, where ε is the difference between the capacity of a channel and the rate of the code), it turns out that a "strong" analysis of the polarization of the underlying martingale would lead to such constructions. Indeed, for the martingale associated with G2 such strong polarization was shown in two independent works (Guruswami and Xia (IEEE IT'15) and Hassani et al. (IEEE IT'14)), thereby resolving a major theoretical challenge associated with the efficient attainment of Shannon capacity. In this work we extend the result above to cover martingales associated with all matrices that satisfy the necessary condition for (weak) polarization. In addition to being vastly more general, our proofs of strong polarization are (in our view) also much simpler and more modular. Key to our proof is a notion of local polarization that depends only on the evolution of the martingale in a single time step. We show that local polarization always implies strong polarization. We then apply relatively simple reasoning about conditional entropies to prove local polarization in very general settings. Specifically, our result shows strong polarization over all prime fields and leads to efficient capacity-achieving source codes for compressing arbitrary i.i.d. sources, and capacity-achieving channel codes for arbitrary symmetric memoryless channels. We show how to use our analyses to achieve exponentially small error probabilities at lengths inverse polynomial in the gap to capacity; indeed, we can essentially match any error probability while maintaining lengths that are only inverse polynomial in the gap to capacity.
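
For the kernel G2 = (1 1; 0 1) as reconstructed above, the n-fold Kronecker power can be applied with a standard butterfly recursion over GF(2); a hedged sketch (conventions for the transform's orientation vary across the literature):

```python
import numpy as np

G2 = np.array([[1, 1], [0, 1]])      # the 2x2 kernel as reconstructed above

def polar_transform(u):
    """Apply the n-fold Kronecker power of G2 to u over GF(2); len(u) = 2^n."""
    u = np.asarray(u) % 2
    if len(u) == 1:
        return u
    half = len(u) // 2
    top = polar_transform((u[:half] + u[half:]) % 2)   # the (1 1) rows
    bottom = polar_transform(u[half:])                 # the (0 1) rows
    return np.concatenate([top, bottom])

u = np.array([1, 0, 1, 1])
assert np.array_equal(polar_transform(u), (np.kron(G2, G2) @ u) % 2)
```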
18. Dixon, J. R., B. A. Lindley, T. Taylor, and G. T. Parks. "DATA ASSIMILATION APPLIED TO PRESSURISED WATER REACTORS." EPJ Web of Conferences 247 (2021): 09020. http://dx.doi.org/10.1051/epjconf/202124709020.

Abstract:
Best estimate plus uncertainty is the leading methodology to validate existing safety margins. It remains a challenge to develop and license these approaches, in part due to the high dimensionality of system codes. Uncertainty quantification is an active area of research to develop appropriate methods for propagating uncertainties, offering greater scientific reason, dimensionality reduction and minimising reliance on expert judgement. Inverse uncertainty quantification is required to infer a best estimate back on the input parameters and reduce the uncertainties, but it is challenging to capture the full covariance and sensitivity matrices. Bayesian inverse strategies remain attractive due to their predictive modelling and reduced uncertainty capabilities, leading to dramatic model improvements and validation of experiments. This paper uses state-of-the-art data assimilation techniques to obtain a best estimate of parameters critical to plant safety. Data assimilation can combine computational, benchmark and experimental measurements, propagate sparse covariance and sensitivity matrices, treat non-linear applications and accommodate discrepancies. The methodology is further demonstrated through application to hot zero power tests in a pressurised water reactor (PWR) performed using the BEAVRS benchmark with Latin hypercube sampling of reactor parameters to determine responses. WIMS 11 (dv23) and PANTHER (V.5:6:4) are used as the coupled neutronics and thermal-hydraulics codes; both are used extensively to model PWRs. Results demonstrate updated best estimate parameters and reduced uncertainties, with comparisons between posterior distributions generated using maximum entropy principle and cost functional minimisation techniques illustrated in recent conferences. Future work will improve the Bayesian inverse framework with the introduction of higher-order sensitivities.
19. Wagenpfeil, Stefan, Paul Mc Kevitt, Abbas Cheddad, and Matthias Hemmje. "Explainable Multimedia Feature Fusion for Medical Applications." Journal of Imaging 8, no. 4 (2022): 104. http://dx.doi.org/10.3390/jimaging8040104.

Abstract:
Due to the exponential growth of medical information in the form of, e.g., text, images, Electrocardiograms (ECGs), X-rays, and multimedia, the management of a patient’s data has become a huge challenge. In particular, the extraction of features from various different formats and their representation in a homogeneous way are areas of interest in medical applications. Multimedia Information Retrieval (MMIR) frameworks, like the Generic Multimedia Analysis Framework (GMAF), can contribute to solving this problem, when adapted to special requirements and modalities of medical applications. In this paper, we demonstrate how typical multimedia processing techniques can be extended and adapted to medical applications and how these applications benefit from employing a Multimedia Feature Graph (MMFG) and specialized, efficient indexing structures in the form of Graph Codes. These Graph Codes are transformed to feature relevant Graph Codes by employing a modified Term Frequency Inverse Document Frequency (TFIDF) algorithm, which further supports value ranges and Boolean operations required in the medical context. On this basis, various metrics for the calculation of similarity, recommendations, and automated inferencing and reasoning can be applied supporting the field of diagnostics. Finally, the presentation of these new facilities in the form of explainability is introduced and demonstrated. Thus, in this paper, we show how Graph Codes contribute new querying options for diagnosis and how Explainable Graph Codes can help to readily understand medical multimedia formats.
20. Lin, Chin-Feng. "UFMC-Based Underwater Voice Transmission Scheme with LDPC Codes." Applied Sciences 11, no. 4 (2021): 1818. http://dx.doi.org/10.3390/app11041818.

Abstract:
An underwater universal filtered multicarrier (UFMC)-based voice transmission scheme is proposed using a 512-point inverse discrete Fourier transform and 10 sub-bands, each with 20 subcarriers. The proposed UFMC method integrates adaptive modulation with 4 quadrature amplitude modulation (QAM) and 16-QAM, together with low-density parity-check (LDPC) channel coding. The bit error rate (BER), transmission power weighting, power-saving ratios, and underwater voice transmission performance with perfect channel estimation (PCE) and with 5% and 10% channel estimation errors (CEEs) were investigated. The underwater voice transmission had a BER quality of service of 10^−3. Simulation results showed that PCE outperformed 5% and 10% CEEs under 4-QAM, with gains of 0.5 and 0.9 dB, respectively, at a BER of 4×10^−4, and outperformed 5% and 10% CEEs under 16-QAM, with gains of 0.5 and 2.4 dB, respectively, at a BER of 8.5×10^−4. The proposed UFMC scheme can be applied to underwater voice transmission with a BER below 10^−3. The proposed system showed a superior capability to contend with additive white Gaussian noise, underwater multipath channel fading, and CEEs.
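
A sketch of the inverse-DFT core of such a multicarrier transmitter (the per-sub-band filtering that gives UFMC its name and the LDPC coding are omitted; the subcarrier placement is illustrative):

```python
import numpy as np

N = 512                                     # IDFT size, as in the proposed scheme
n_subbands, sc_per_band = 10, 20
rng = np.random.default_rng(7)

bits = rng.integers(0, 2, size=(n_subbands * sc_per_band, 2))
qam4 = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # 4-QAM

X = np.zeros(N, dtype=complex)
X[: n_subbands * sc_per_band] = qam4        # load the 200 active subcarriers
x_time = np.fft.ifft(X)                     # time-domain multicarrier waveform
```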
21. Yousef and Hamdy. "Three-Stage Estimation of the Mean and Variance of the Normal Distribution with Application to an Inverse Coefficient of Variation with Computer Simulation." Mathematics 7, no. 9 (2019): 831. http://dx.doi.org/10.3390/math7090831.

Abstract:
This paper considers two main problems sequentially. First, we estimate both the mean and the variance of the normal distribution under a unified decision framework using Hall's three-stage procedure. We consider a minimum-risk point estimation problem for the variance under a squared-error loss function with linear sampling cost, then construct a confidence interval for the mean with a preassigned width and coverage probability. Second, as an application, we develop Fortran codes that tackle both the point estimation and the confidence interval problems for the inverse coefficient of variation using Monte Carlo simulation. The simulation results show negative regret in the estimation of the inverse coefficient of variation, which indicates that the three-stage procedure provides better estimation than the optimal.
22. HU, YI-QIANG, BING-FEI WU, and CHORNG-YANN SU. "A DISCRETE WAVELET TRANSFORM CODEC DESIGN." Journal of Circuits, Systems and Computers 13, no. 06 (2004): 1347–78. http://dx.doi.org/10.1142/s021812660400201x.

Abstract:
This manuscript presents a VLSI architecture and its design rule, called embedded instruction code (EIC), to realize a discrete wavelet transform (DWT) codec in a single chip. Since the essential computation of the DWT is convolution, we build a multiplication instruction, MUL, and an addition instruction, ADD, to complete the work. We segment the computation paths of the DWT according to the multiplications and additions, and apply the instruction codes to execute the operators. We also offer a parallel arithmetic logic unit (PALU) organization composed of two multipliers and four adders (2M4A), so that the instruction codes programmed by EIC control the PALU efficiently. We establish the few necessary registers in the PALU; the number of registers depends on the wavelet filters' length and the decomposition level, while the numbers of multipliers and adders do not increase as we execute the DWT or the inverse DWT (IDWT) in multilevel decomposition. Exploiting the similarity between DWT and IDWT, both functions are integrated in the same architecture, and by scheduling the instructions, multilevel processing is achieved without superfluous PALUs in a single chip. The boundary problem of the DWT is solved using symmetric extension, so the perfect reconstruction (PR) condition required for the DWT can be met. Through EIC, we can systematically generate flexible instruction codes when adopting different filters. Our chip supports up to six levels of decomposition and versatile image specifications, e.g., VGA, MPEG-1, MPEG-2, and 1024×1024 image sizes. The processing speed is 7.78 Mpixel/s at the normal operating frequency of 100 MHz.
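
The perfect-reconstruction requirement can be illustrated in software with the simplest filter pair (Haar), independently of the chip architecture described above:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_idwt(a, d):
    """Inverse transform; the Haar pair satisfies perfect reconstruction."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_dwt(x)
assert np.allclose(haar_idwt(a, d), x)   # the PR condition holds exactly
```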
23. Biba, Yuri, and Peter Menegay. "Inverse Design of Centrifugal Compressor Stages Using a Meanline Approach." International Journal of Rotating Machinery 10, no. 1 (2004): 75–84. http://dx.doi.org/10.1155/s1023621x04000089.

Abstract:
This article discusses an approach for determining meanline geometric parameters of centrifugal compressor stages given specified performance requirements. This is commonly known as the inverse design approach. The opposite process, that of calculating performance parameters based on geometry input is usually called analysis, or direct calculation. An algorithm and computer code implementing the inverse approach is described. As an alternative to commercially available inverse design codes, this tool is intended for exclusive OEM use and calls a trusted database of loss models for individual stage components, such as impellers, guide vanes, diffusers, etc. The algorithm extends applicability of the inverse design code by ensuring energy conservation for any working medium, like imperfect gases. The concept of loss coefficient for rotating impellers is introduced for improved loss modelling. The governing conservation equations for each component of a stage are presented, and then described in terms of an iterative procedure which calculates the required one-dimensional geometry. A graphical user interface which facilitates user input and presentation of results is discussed briefly. The object-oriented nature of the code is highlighted as a platform which easily provides for maintainability and future extensions.
24. KIM, HYUN KEOL, and ANDREAS H. HIELSCHER. "A DIFFUSION–TRANSPORT HYBRID METHOD FOR ACCELERATING OPTICAL TOMOGRAPHY." Journal of Innovative Optical Health Sciences 03, no. 04 (2010): 293–305. http://dx.doi.org/10.1142/s1793545810001143.

Abstract:
It is well acknowledged that the equation of radiative transfer (ERT) provides an accurate prediction of light propagation in biological tissues, while the diffusion approximation (DA) is of limited accuracy for the transport regime. However, ERT-based reconstruction codes require much longer computation times as compared to DA-based reconstruction codes. We introduce here a computationally efficient algorithm, called a diffusion–transport hybrid solver, that makes use of the DA- or low-order ERT-based inverse solution as an initial guess for the full ERT-based reconstruction solution. To evaluate the performance of this hybrid method, we present extensive studies involving numerical tissue phantoms and experimental data. As a result, we show that the hybrid method reduces the reconstruction time by a factor of up to 23, depending on the physical character of the problem.
25. HARVEY, NATE, and YUVAL PERES. "An invariant of finitary codes with finite expected square root coding length." Ergodic Theory and Dynamical Systems 31, no. 1 (2010): 77–90. http://dx.doi.org/10.1017/s014338570900090x.

Abstract:
Let p and q be probability vectors with the same entropy h. Denote by B(p) the Bernoulli shift indexed by ℤ with marginal distribution p. Suppose that φ is a measure-preserving homomorphism from B(p) to B(q). We prove that if the coding length of φ has a finite 1/2 moment, then σ²_p = σ²_q, where σ²_p = ∑_i p_i(−log p_i − h)² is the informational variance of p. In this result, the 1/2 moment cannot be replaced by a lower moment. On the other hand, for any θ < 1, we exhibit probability vectors p and q that are not permutations of each other, such that there exists a finitary isomorphism Φ from B(p) to B(q) where the coding lengths of Φ and of its inverse have a finite θ moment. We also present an extension to ergodic Markov chains.
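
The informational variance σ²_p = ∑_i p_i(−log p_i − h)² is immediate to compute numerically; a small sketch:

```python
import numpy as np

def informational_variance(p):
    """sigma^2_p = sum_i p_i * (-log p_i - h)^2, where h is the entropy of p."""
    p = np.asarray(p, dtype=float)
    h = -np.sum(p * np.log(p))
    return float(np.sum(p * (-np.log(p) - h) ** 2))

print(informational_variance([0.5, 0.25, 0.25]))          # positive: not uniform
print(informational_variance([0.25, 0.25, 0.25, 0.25]))   # 0 for a uniform vector
```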
26. Kruger, Paul A. "Symbolic Inversion in Death: Some Examples from the Old Testament and the Ancient Near Eastern world." Verbum et Ecclesia 26, no. 2 (2005): 398–411. http://dx.doi.org/10.4102/ve.v26i2.232.

Abstract:
Symbolic inversion is a widespread cultural phenomenon, the earliest examples of which can be traced back to the cultures of the ancient Near East. Symbolic inversion (mundus inversus) relates to those forms of expressive behaviour which invert commonly accepted social codes. One such area in the ancient Near Eastern and Old Testament world where this phenomenon manifested itself prominently is in the conception of life after death: life after death is often conceived as the direct inverse of what is customary in ordinary life.
27. Nguyen, Hai, Jonathan Wittmer, and Tan Bui-Thanh. "DIAS: A Data-Informed Active Subspace Regularization Framework for Inverse Problems." Computation 10, no. 3 (2022): 38. http://dx.doi.org/10.3390/computation10030038.

Abstract:
This paper presents a regularization framework that aims to improve the fidelity of Tikhonov inverse solutions. At the heart of the framework is the data-informed regularization idea that only data-uninformed parameters need to be regularized, while the data-informed parameters, on which data and forward model are integrated, should remain untouched. We propose to employ the active subspace method to determine the data-informativeness of a parameter. The resulting framework is thus called data-informed (DI) active subspace (DIAS) regularization. Four proposed DIAS variants are rigorously analyzed, shown to be robust with respect to the regularization parameter, and capable of avoiding polluting solution features informed by the data. They are thus well suited for problems with small or reasonably small noise corruptions in the data. Furthermore, the DIAS approaches can effectively reuse any Tikhonov regularization codes/libraries. Though they are readily applicable to nonlinear inverse problems, we focus on linear problems in this paper in order to gain insight into the framework. Various numerical results for linear inverse problems are presented to verify the theoretical findings and to demonstrate the advantages of the DIAS framework over the Tikhonov, truncated SVD, and TSVD-based DI approaches.
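
For reference, a minimal classical Tikhonov solver of the kind the DIAS variants can wrap (the data-informed splitting itself is not shown):

```python
import numpy as np

def tikhonov(A, b, lam):
    """argmin_x ||A x - b||^2 + lam ||x||^2 via regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 40)) @ np.diag(10.0 ** -np.arange(40))  # ill-conditioned
x_true = rng.normal(size=40)
b = A @ x_true + 1e-6 * rng.normal(size=50)
x_reg = tikhonov(A, b, lam=1e-8)     # lam trades data fit against stability
```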
28. Long, Kevin, Paul T. Boggs, and Bart G. van Bloemen Waanders. "Sundance: High-Level Software for PDE-Constrained Optimization." Scientific Programming 20, no. 3 (2012): 293–310. http://dx.doi.org/10.1155/2012/380908.

Abstract:
Sundance is a package in the Trilinos suite designed to provide high-level components for the development of high-performance PDE simulators with built-in capabilities for PDE-constrained optimization. We review the implications of PDE-constrained optimization on simulator design requirements, then survey the architecture of the Sundance problem specification components. These components allow immediate extension of a forward simulator for use in an optimization context. We show examples of the use of these components to develop full-space and reduced-space codes for linear and nonlinear PDE-constrained inverse problems.
29. Scruby, T. R., and D. E. Browne. "A Hierarchy of Anyon Models Realised by Twists in Stacked Surface Codes." Quantum 4 (April 6, 2020): 251. http://dx.doi.org/10.22331/q-2020-04-06-251.

Abstract:
Braiding defects in topological stabiliser codes can be used to fault-tolerantly implement logical operations. Twists are defects corresponding to the end-points of domain walls and are associated with symmetries of the anyon model of the code. We consider twists in multiple copies of the 2d surface code and identify necessary and sufficient conditions for considering these twists as anyons: namely that they must be self-inverse and that all charges which can be localised by the twist must be invariant under its associated symmetry. If both of these conditions are satisfied the twist and its set of localisable anyonic charges reproduce the behaviour of an anyonic model belonging to a hierarchy which generalises the Ising anyons. We show that the braiding of these twists results in either (tensor products of) the S gate or (tensor products of) the CZ gate. We also show that for any number of copies of the 2d surface code the application of H gates within a copy and CNOT gates between copies is sufficient to generate all possible twists.
30. Wu, Xu, Ziyu Xie, Farah Alsafadi, and Tomasz Kozlowski. "A comprehensive survey of inverse uncertainty quantification of physical model parameters in nuclear system thermal–hydraulics codes." Nuclear Engineering and Design 384 (December 2021): 111460. http://dx.doi.org/10.1016/j.nucengdes.2021.111460.

31. Yadav, V., and A. M. Michalak. "Improving computational efficiency in large linear inverse problems: an example from carbon dioxide flux estimation." Geoscientific Model Development 6, no. 3 (2013): 583–90. http://dx.doi.org/10.5194/gmd-6-583-2013.

Abstract:
Addressing a variety of questions within Earth science disciplines entails the inference of the spatiotemporal distribution of parameters of interest based on observations of related quantities. Such estimation problems often represent inverse problems that are formulated as linear optimization problems. Computational limitations arise when the number of observations and/or the size of the discretized state space becomes large, especially if the inverse problem is formulated in a probabilistic framework and therefore aims to assess the uncertainty associated with the estimates. This work proposes two approaches to lower the computational costs and memory requirements for large linear space–time inverse problems, taking the Bayesian approach for estimating carbon dioxide (CO2) emissions and uptake (a.k.a. fluxes) as a prototypical example. The first algorithm can be used to efficiently multiply two matrices, as long as one can be expressed as a Kronecker product of two smaller matrices, a condition that is typical when multiplying a sensitivity matrix by a covariance matrix in the solution of inverse problems. The second algorithm can be used to compute a posteriori uncertainties directly at aggregated spatiotemporal scales, which are the scales of most interest in many inverse problems. Both algorithms have significantly lower memory requirements and computational complexity relative to direct computation of the same quantities (O(n^2.5) vs. O(n^3)). For an examined benchmark problem, the two algorithms yielded massive savings in floating point operations relative to direct computation of the same quantities. Sample computer codes are provided for assessing the computational and memory efficiency of the proposed algorithms for matrices of different dimensions.
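
The classical identity behind such Kronecker savings, applied here to a matrix-vector product: (A ⊗ B) vec(X) = vec(B X A^T) with column-major vec, so the Kronecker product never has to be formed. A small sketch (the paper's matrix-matrix and aggregated-uncertainty algorithms are not reproduced):

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A kron B) @ x without forming the Kronecker product.

    Uses vec(B X A^T) = (A kron B) vec(X) with column-major (Fortran) vec,
    avoiding the (m*p) x (n*q) product matrix entirely.
    """
    m, n = A.shape
    p, q = B.shape
    X = x.reshape((q, n), order="F")
    return (B @ X @ A.T).reshape(-1, order="F")

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(5, 6))
x = rng.normal(size=4 * 6)
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```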
32. Ozelim, Luan C. S. M., Ugo S. Dias, and Pushpa N. Rathie. "Revisiting the Lognormal Modelling of Shadowing Effects during Wireless Communications by Means of the α-μ/α-μ Composite Distribution." Modelling 2, no. 2 (2021): 197–209. http://dx.doi.org/10.3390/modelling2020010.

Abstract:
Properly modeling shadowing effects during wireless transmission is crucial for network quality assessment. From a mathematical point of view, using composite distributions allows one to combine both fast-fading and slow-fading stochastic phenomena. Numerous statistical distributions have been used to account for fast-fading effects. On the other hand, even though several studies indicate the adequacy of the Lognormal distribution (LNd) as a shadowing model, they also reveal that this distribution raises some analytic tractability issues. Past works include the combination of Rayleigh and Weibull distributions with the LNd. Due to the difficulty of obtaining closed-form expressions for the probability density functions involved, other authors approximated the LNd by a Gamma distribution, creating Nakagami-m/Gamma and Rayleigh/Gamma composite distributions. In order to better mimic the LNd, approximations using the inverse Gamma and inverse Nakagami-m distributions have also been considered. Although all these alternatives have been discussed, it remains an open question how to effectively use the LNd in compound models and still obtain closed-form results. We present a novel understanding of how the α-μ distribution can be reduced to an LNd by a limiting procedure, overcoming the analytic intractability inherent in Lognormal fading processes. Interestingly, new closed-form and series representations for the PDF and CDF of the composite distributions are derived. We build computational codes to evaluate all the expressions hereby derived, and model real field-trial results with the equations developed. The accuracy of both the codes and the model is remarkable.
33. Joutsijoki, Henry, Markus Haponen, Jyrki Rasku, Katriina Aalto-Setälä, and Martti Juhola. "Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images." BioMed Research International 2016 (2016): 1–13. http://dx.doi.org/10.1155/2016/3025057.

Abstract:
The purpose of this paper is to examine how well human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good), which makes our classification task a multiclass problem. ECOC is a general framework for modelling multiclass classification problems. We focus on four different coding designs of ECOC and apply to each of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variant classifiers. We use Scale-Invariant Feature Transform (SIFT) based features in classification. The best accuracy (62.4%) is obtained with the ternary complete ECOC coding design and a k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice on a large scale. The ECOC methods examined are promising techniques for solving this challenging problem.
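
ECOC in a nutshell: each class receives a codeword, one binary learner predicts each bit, and decoding assigns the class whose codeword is nearest in Hamming distance; the codebook below is illustrative, not one of the paper's four designs:

```python
import numpy as np

codebook = np.array([[0, 0, 1],     # class "bad"      (illustrative codewords)
                     [0, 1, 0],     # class "semigood"
                     [1, 0, 0]])    # class "good"

def ecoc_decode(bit_predictions):
    """bit_predictions: (n_samples, n_bits) outputs of the binary classifiers.
    Returns the index of the nearest codeword for each sample."""
    dists = (bit_predictions[:, None, :] != codebook[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

print(ecoc_decode(np.array([[0, 1, 1], [1, 0, 0]])))  # [0, 2]; ties -> first class
```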
34. Jain, P., and M. D. Flannigan. "Comparison of methods for spatial interpolation of fire weather in Alberta, Canada." Canadian Journal of Forest Research 47, no. 12 (2017): 1646–58. http://dx.doi.org/10.1139/cjfr-2017-0101.

Abstract:
Spatial interpolation of fire weather variables from station data allows fire danger indices to be mapped continuously across the landscape. This information is crucial to fire management agencies, particularly in areas where weather data are sparse. We compare the performance of several standard interpolation methods (inverse distance weighting, splines, and geostatistical interpolation methods) for estimating output from the Canadian Fire Weather Index (FWI) system at unmonitored locations. We find that geostatistical methods (kriging) generally outperform the other methods, particularly when elevation is used as a covariate. We also find that interpolating the input meteorological variables and the previous day's moisture codes to unmonitored locations and then calculating the FWI output variables is preferable to first calculating the FWI output variables and then interpolating, in contrast to previous studies. Alternatively, when the previous day's moisture codes are estimated from interpolated weather rather than directly interpolated, errors can accumulate and become large. This effect is particularly evident for the duff moisture code and the drought code due to their significant autocorrelation.
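
Inverse distance weighting, the simplest of the compared interpolators, fits in a few lines; the station coordinates and values below are made up:

```python
import numpy as np

def idw(stations, values, targets, power=2.0):
    """Inverse distance weighting: predict at targets from station values.

    stations: (n, 2) coordinates, values: (n,), targets: (m, 2).
    The weight of station i at target x is 1 / d(x, station_i)^power.
    """
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)             # avoid division by zero at a station
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
code = np.array([5.0, 9.0, 7.0])         # e.g. a moisture code at three stations
print(idw(stations, code, np.array([[0.5, 0.5]])))
```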
35. Haney, Matthew M., and Victor C. Tsai. "Perturbational and nonperturbational inversion of Rayleigh-wave velocities." GEOPHYSICS 82, no. 3 (2017): F15–F28. http://dx.doi.org/10.1190/geo2016-0397.1.

Abstract:
The inversion of Rayleigh-wave dispersion curves is a classic geophysical inverse problem. We have developed a set of MATLAB codes that performs forward modeling and inversion of Rayleigh-wave phase or group velocity measurements. We describe two different methods of inversion: a perturbational method based on finite elements and a nonperturbational method based on the recently developed Dix-type relation for Rayleigh waves. In practice, the nonperturbational method can be used to provide a good starting model that can be iteratively improved with the perturbational method. Although the perturbational method is well-known, we solve the forward problem using an eigenvalue/eigenvector solver instead of the conventional approach of root finding. Features of the codes include the ability to handle any mix of phase or group velocity measurements, combinations of modes of any order, the presence of a surface water layer, computation of partial derivatives due to changes in material properties and layer boundaries, and the implementation of an automatic grid of layers that is optimally suited for the depth sensitivity of Rayleigh waves.
36. Hocevar, Erwin, and Walter G. Kropatsch. "Inventing the Formula of the Trees: A Solution of the Representation of Self Similar Objects." Fractals 05, supp01 (1997): 51–64. http://dx.doi.org/10.1142/s0218348x97000632.

Abstract:
Iterated Function Systems (IFS) are well suited to representing objects in nature, because many of them are self-similar. An IFS is a set of affine and contractive transformations. The union (the so-called collage) of the subimages generated by transforming the whole image reproduces the image: the self-similar attractor of these transformations, which can be described by a binary image. For a fast and compact representation of such images, it is desirable to calculate the transformations (the IFS codes) directly from the image, i.e., to solve the inverse IFS problem. The solution presented in this paper directly uses the features of the self-similar image. Subsets of the entire image and of the subimage to be calculated are identified by computing the set difference between the pixels of the original and a rotated copy. The rotation and the scale factor of a transformation can be computed by mapping these two subsets onto each other, provided the translation part (the fixed point) is predefined; the calculation is repeated for each subimage. It is proved that with this method the IFS codes can be calculated for non-convex, undistorted, self-similar images as long as the fixed point is known, and an efficient algorithm for identifying these fixed points within the image is introduced. Different ways to achieve these solutions are presented. In conclusion, the class of images that can be coded by this method is defined, the results are pointed out, the advantages and disadvantages of the method are evaluated, and possible extensions of the method are discussed.
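
A forward IFS sketch via the chaos game, using the classic Sierpinski maps as stand-ins (the paper addresses the inverse direction, recovering such maps from the image):

```python
import numpy as np

# Each map is w_i(x) = A_i x + t_i; the attractor is the fixed point of their
# collage. These three contractions generate the Sierpinski triangle.
maps = [(np.eye(2) * 0.5, np.array([0.0, 0.0])),
        (np.eye(2) * 0.5, np.array([0.5, 0.0])),
        (np.eye(2) * 0.5, np.array([0.25, 0.5]))]

rng = np.random.default_rng(4)
x = np.zeros(2)
points = []
for _ in range(5000):                  # chaos game: apply a random map each step
    A, t = maps[rng.integers(len(maps))]
    x = A @ x + t
    points.append(x.copy())
points = np.array(points)              # samples densely covering the attractor
```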
37. Landkammer, Philipp, and Paul Steinmann. "Application of a Non-Invasive Form Finding Algorithm to the Ring Compression Test with Varying Friction Coefficients." Key Engineering Materials 651-653 (July 2015): 1381–86. http://dx.doi.org/10.4028/www.scientific.net/kem.651-653.1381.

Abstract:
It is a great challenge in the development of functional components to determine the optimal blank design (material configuration) of a workpiece for a specific forming process when the desired target geometry (spatial configuration) is known. A new iterative non-invasive algorithm, based purely on geometrical considerations, is developed to solve such inverse form-finding problems. The update step maps the nodal spatial difference vector, between the computed spatial coordinates and the desired spatial target coordinates, back to the discretized material configuration with a smoothed deformation gradient. The iterative optimization approach can be coupled non-invasively, via subroutines, to arbitrary finite element codes, so that pre-processing, solving, and post-processing can be performed with the usual simulation software. This is demonstrated by way of example through an interaction between Matlab (the update procedure for inverse form finding) and MSC.MarcMentat (the metal forming simulation). The algorithm succeeds on a parameter study of a ring compression test with nearly linear convergence rates, despite highly deformed elements and tangential contact with varying friction parameters.
38. Andrade-Campos, A. "Development of an Optimization Framework for Parameter Identification and Shape Optimization Problems in Engineering." International Journal of Manufacturing, Materials, and Mechanical Engineering 1, no. 1 (2011): 57–79. http://dx.doi.org/10.4018/ijmmme.2011010105.

Abstract:
The use of optimization methods in engineering is increasing. Process and product optimization, inverse problems, shape optimization, and topology optimization are frequent problems in both industry and science communities. In this paper, an optimization framework for engineering inverse problems such as parameter identification and shape optimization is presented. It inherits the extensive experience gained with the SiDoLo code on such problems and adds the latest developments in direct search optimization algorithms. User subroutines in Sdl allow the program to be customized for particular applications. Several applications of Sdl Lab to parameter identification and shape optimization are presented. Both commercial and non-commercial (in-house) Finite Element Method codes can be used to evaluate the objective function through the interfaces pre-developed in Sdl Lab. The shape optimization problem of determining the initial geometry of a blank in a deep-drawing square cup problem is analysed and discussed; the main goal is to determine the optimum shape of the initial blank in order to avoid later trimming operations and costs.
39. Xiong, Shengjun, Lisheng Zhou, Zhengduo Wang, Chao Wang, and Yuyan Zhang. "Method of MAI cancellation in asynchronous underwater acoustic CDMA system." MATEC Web of Conferences 283 (2019): 07007. http://dx.doi.org/10.1051/matecconf/201928307007.

Abstract:
In underwater passive positioning systems (UPPS), a key technology is multi-user communication, in which beacons synchronously send positioning messages to the targets. Interference from other users, referred to as multi-access interference (MAI), can induce significant bit errors for the desired user. In this paper, we design an underwater acoustic Code Division Multiple Access (CDMA) system using coded cyclic shift keying (CCSK) modulation, and propose a hybrid MAI cancellation scheme based on a cyclic shift inverse matrix transform (CSIMT) and recursive least squares (RLS). The hybrid scheme suppresses MAI better than RLS or CSIMT alone and meets the strong stability requirements for reliable transmission in UPPS. The CSIMT interference cancellation performs well under near-far effects and does not depend on the accuracy of the estimated transmitted data; strong MAI can be reduced by transforming the zero-forcing correlation values using the m-sequence cyclic shift inverse matrix. The RLS interference cancellation mitigates MAI from detected users of comparable power by using the estimated channel impulse response between the receiver and the detected users, together with the regenerated signals of the detected users from the decoder output; it supports strong error-correcting codes and is robust in an asynchronous environment. Theoretical analysis and experimental results show that the hybrid MAI cancellation scheme suppresses MAI effectively. A shallow-water sea trial was carried out near Dalian Tiger Beach Ocean Park in China at a distance of about 5 km and a data rate of 100 bps, where three users could communicate asynchronously with a low failure ratio, matching the UPPS application.
40. Forney, D. C., and D. H. Rothman. "Inverse method for estimating respiration rates from decay time series." Biogeosciences Discussions 9, no. 3 (2012): 3795–828. http://dx.doi.org/10.5194/bgd-9-3795-2012.

Abstract:
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best-fitting "multi-pool" model, without prior assumption of the number of pools. However, we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution that best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
41. Forney, D. C., and D. H. Rothman. "Inverse method for estimating respiration rates from decay time series." Biogeosciences 9, no. 9 (2012): 3601–12. http://dx.doi.org/10.5194/bg-9-3601-2012.

Abstract:
Long-term organic matter decomposition experiments typically measure the mass lost from decaying organic matter as a function of time. These experiments can provide information about the dynamics of carbon dioxide input to the atmosphere and controls on natural respiration processes. Decay slows down with time, suggesting that organic matter is composed of components (pools) with varied lability. Yet it is unclear how the appropriate rates, sizes, and number of pools vary with organic matter type, climate, and ecosystem. To better understand these relations, it is necessary to properly extract the decay rates from decomposition data. Here we present a regularized inverse method to identify an optimally fitting distribution of decay rates associated with a decay time series. We motivate our study by first evaluating a standard, direct inversion of the data. The direct inversion identifies a discrete distribution of decay rates, where mass is concentrated in just a small number of discrete pools. It is consistent with identifying the best-fitting "multi-pool" model, without prior assumption of the number of pools. However, we find these multi-pool solutions are not robust to noise and are over-parametrized. We therefore introduce a method of regularized inversion, which identifies the solution that best fits the data but not the noise. This method shows that the data are described by a continuous distribution of rates, which we find is well approximated by a lognormal distribution, consistent with the idea that decomposition results from a continuum of processes at different rates. The ubiquity of the lognormal distribution suggests that decay may be simply described by just two parameters: a mean and a variance of log rates. We conclude by describing a procedure that estimates these two lognormal parameters from decay data. Matlab codes for all numerical methods and procedures are provided.
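
A hedged sketch of the regularized-inversion idea: discretize m(t) = Σ_j g_j exp(−k_j t) on a grid of candidate rates and solve a nonnegative least-squares problem with a Tikhonov-style penalty (the identity regularizer below is an assumption; the paper's choice may differ):

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0.0, 10.0, 40)                 # observation times
k = np.logspace(-2, 1, 60)                     # grid of candidate decay rates
A = np.exp(-np.outer(t, k))                    # A[i, j] = exp(-k_j * t_i)

g_true = np.exp(-0.5 * (np.log(k) - np.log(0.3)) ** 2)  # lognormal-ish rates
g_true /= g_true.sum()
m = A @ g_true + 1e-3 * np.random.default_rng(2).normal(size=t.size)

lam = 1e-2                                     # regularization strength
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(k.size)])   # fit data, not noise
b_aug = np.concatenate([m, np.zeros(k.size)])
g_hat, _ = nnls(A_aug, b_aug)                  # nonnegative rate distribution
```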
APA, Harvard, Vancouver, ISO, and other styles
42

Harrach, Bastian. "An Introduction to Finite Element Methods for Inverse Coefficient Problems in Elliptic PDEs." Jahresbericht der Deutschen Mathematiker-Vereinigung 123, no. 3 (2021): 183–210. http://dx.doi.org/10.1365/s13291-021-00236-2.

Full text
Abstract:
Several novel imaging and non-destructive testing technologies are based on reconstructing the spatially dependent coefficient in an elliptic partial differential equation from measurements of its solution(s). In practical applications, the unknown coefficient is often assumed to be piecewise constant on a given pixel partition (corresponding to the desired resolution), and only finitely many measurements can be made. This leads to the problem of inverting a finite-dimensional non-linear forward operator $\mathcal{F}\colon \mathcal{D}(\mathcal{F})\subseteq \mathbb{R}^{n}\to \mathbb{R}^{m}$, where evaluating $\mathcal{F}$ requires one or several PDE solutions. Numerical inversion methods require the implementation of this forward operator and its Jacobian. We show how to efficiently implement both using a standard FEM package and prove convergence of the FEM approximations against their true-solution counterparts. We present simple example codes for Comsol with the Matlab Livelink package, and numerically demonstrate the challenges that arise from non-uniqueness, non-linearity and instability issues. We also discuss monotonicity and convexity properties of the forward operator that arise for symmetric measurement settings. This text assumes the reader has a basic knowledge of finite element methods, including the variational formulation of elliptic PDEs, the Lax–Milgram theorem, and the Céa lemma. Section 3 also assumes that the reader is familiar with the concept of Fréchet differentiability.
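
The pairing of a forward operator with its Jacobian that this abstract describes can be caricatured in one dimension. The paper's example codes target Comsol with the Matlab Livelink; the Python sketch below, with its 1-D Poisson-type problem, lumped load, measurement points and finite-difference Jacobian, is entirely our own stand-in for that setting.

import numpy as np

def forward(a, n_nodes=101, m_meas=5):
    # Forward operator F: coefficient vector a -> measurement vector.
    # Solves -(a(x) u'(x))' = 1 on (0, 1) with u(0) = u(1) = 0 by linear
    # finite elements, where a is piecewise constant on len(a) equal
    # subintervals ("pixels"), and samples u at m_meas interior nodes.
    h = 1.0 / (n_nodes - 1)
    x_mid = (np.arange(n_nodes - 1) + 0.5) * h        # element midpoints
    a_elem = a[np.minimum((x_mid * len(a)).astype(int), len(a) - 1)]
    K = np.zeros((n_nodes, n_nodes))                  # stiffness matrix
    f = np.full(n_nodes, h)                           # lumped load for f = 1
    for e in range(n_nodes - 1):
        K[e:e + 2, e:e + 2] += (a_elem[e] / h) * np.array([[1.0, -1.0],
                                                           [-1.0, 1.0]])
    K[0, :] = 0.0                                     # Dirichlet conditions
    K[0, 0] = 1.0
    f[0] = 0.0
    K[-1, :] = 0.0
    K[-1, -1] = 1.0
    f[-1] = 0.0
    u = np.linalg.solve(K, f)
    idx = np.linspace(0, n_nodes - 1, m_meas + 2).astype(int)[1:-1]
    return u[idx]

def jacobian_fd(a, eps=1e-6):
    # Finite-difference Jacobian of F: one extra PDE solve per coefficient.
    F0 = forward(a)
    J = np.zeros((F0.size, a.size))
    for j in range(a.size):
        a_pert = a.copy()
        a_pert[j] += eps
        J[:, j] = (forward(a_pert) - F0) / eps
    return J

a = np.array([1.0, 2.0, 0.5, 1.5])   # piecewise constant "pixel" coefficients
print("F(a) =", forward(a))
print("Jacobian shape:", jacobian_fd(a).shape)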
APA, Harvard, Vancouver, ISO, and other styles
43

Brücher, Gertrud. "Zum Formgebrauch des religiösen Fundamentalismus." Soziale Systeme 21, no. 1 (2016): 159–84. http://dx.doi.org/10.1515/sosys-2016-0007.

Full text
Abstract:
The article takes as its starting point the difficulty of demonstrating, with empirical methods, a causal relation between fundamentalism and terrorism, and tests the explanatory power of Luhmann's analyses of the autopoietic conflict system for current problem constellations. This is shown by the use of forms in religious, violence-prone fundamentalism, in its theological and secular-civil variants. Such a problematic handling of distinctions can be read off at particular functional sites: conflict-amplifying media of communication become recognizable in how the paradoxical unity of faith and knowledge is dealt with. Conflict-amplifying codes reveal themselves in the modes by which morality and religion are de-paradoxified. And contingency formulas can become triggers of spirals of violence as soon as their function as a 'stopping rule' – legitimacy, justice, scarcity, limitationality, God – is inverted into its opposite, a sense of boundlessness. This inverse sense is capable of legitimizing extra-legal killings.
APA, Harvard, Vancouver, ISO, and other styles
44

Klein, Christian, Ken McLaughlin, and Nikola Stoilov. "High precision numerical approach for Davey–Stewartson II type equations for Schwartz class initial data." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 476, no. 2239 (2020): 20190864. http://dx.doi.org/10.1098/rspa.2019.0864.

Full text
Abstract:
We present an efficient high-precision numerical approach for Davey–Stewartson (DS) II type equations, treating initial data from the Schwartz class of smooth, rapidly decreasing functions. As with previous approaches, the presented code uses discrete Fourier transforms for the spatial dependence and Driscoll's composite Runge–Kutta method for the time dependence. Since DS equations are non-local, nonlinear Schrödinger equations with a singular symbol for the non-locality, standard Fourier methods in practice only reach accuracy of the order of 10⁻⁶ or less for typical examples. This was previously demonstrated for the defocusing integrable case by comparison with a numerical approach for DS II via inverse scattering. By applying a regularization to the singular symbol, originally developed for D-bar problems, the presented code is shown to reach machine precision. The code can treat integrable and non-integrable DS II equations. Moreover, it has the same numerical complexity as existing codes for DS II. Several examples for the integrable defocusing DS II equation are discussed as test cases. In an appendix by C. Kalla, a doubly periodic solution to the defocusing DS II equation is presented, providing a test for direct DS codes based on Fourier methods.
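
To make the phrase "singular symbol for the non-locality" concrete, here is a minimal Python sketch of applying such a Fourier multiplier to Schwartz-class data. The crude treatment of the zero mode below (it is simply set to zero) is our own simplification; the paper's regularization of the symbol is far more refined and is what makes machine precision attainable.

import numpy as np

# Apply a non-local operator with a singular symbol of the kind arising in
# the DS II system, sigma(k) = k1^2 / (k1^2 + k2^2), which has no limit at
# k = 0, to a smooth, rapidly decreasing test function via the 2-D FFT.
N = 256
L = 10.0
x = np.linspace(-L, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
k = 2 * np.pi * np.fft.fftfreq(N, d=2 * L / N)
K1, K2 = np.meshgrid(k, k, indexing="ij")

u = np.exp(-(X ** 2 + Y ** 2))            # Schwartz-class test function

# Guard against division by zero at k = 0, then zero out the singular mode.
denom = K1 ** 2 + K2 ** 2
sigma = np.where(denom > 0, K1 ** 2 / np.where(denom > 0, denom, 1.0), 0.0)

phi = np.real(np.fft.ifft2(sigma * np.fft.fft2(u)))
print("max |Phi(u)| =", float(np.abs(phi).max()))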
APA, Harvard, Vancouver, ISO, and other styles
45

Rochlitz, Raphael, Marc Seidel, and Ralph-Uwe Börner. "Evaluation of three approaches for simulating 3-D time-domain electromagnetic data." Geophysical Journal International 227, no. 3 (2021): 1980–95. http://dx.doi.org/10.1093/gji/ggab302.

Full text
Abstract:
We implemented and compared the implicit Euler time-stepping approach, the inverse Fourier transform-based approach and the Rational Arnoldi method for simulating 3-D transient electromagnetic data. We utilize the finite-element method with unstructured tetrahedral meshes for the spatial discretization, supporting irregular survey geometries and anisotropic material parameters. Both switch-on and switch-off current waveforms can be used, in combination with direct current solutions of Poisson problems as initial conditions. Moreover, we address important topics such as the incorporation of source currents and opportunities to simulate both impulse and step response magnetic field data with all approaches, supporting a great variety of applications. Three examples ranging from simple to complex real-world geometries, and validations against external codes, provide insight into the numerical accuracy, computational performance and unique characteristics of the three applied methods. We further present an application of logarithmic Fourier transforms to convert transient data into the frequency domain. We made all approaches available in the open-source Python toolbox custEM, which previously supported only frequency-domain electromagnetic data. The object-oriented software implementation is suited for further elaboration on distinct modelling topics, and the presented examples can serve for benchmarking other codes.
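
The first of the three approaches is easy to demonstrate on a toy problem. The sketch below is our own 1-D stand-in (custEM itself uses finite elements on unstructured tetrahedral meshes, not this surrogate); it shows the property that makes implicit Euler attractive for transient electromagnetics: unconditional stability under the logarithmically growing time steps used to cover a decay curve.

import numpy as np

# After spatial discretization, transient EM induction leads to a stiff
# linear system C du/dt + K u = 0 with a switch-off initial field u0.
# Implicit Euler advances it by solving (C + dt K) u_{n+1} = C u_n.
n = 200
h = 1.0 / n
K = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2     # 1-D diffusion stand-in
C = np.eye(n)                                    # mass/conductivity matrix
u = np.sin(np.pi * h * np.arange(1, n + 1))      # initial (switch-off) field

times = np.logspace(-6, -2, 40)                  # log-spaced time gates
t_prev = 0.0
for t in times:
    dt = t - t_prev
    u = np.linalg.solve(C + dt * K, C @ u)       # one implicit Euler step
    t_prev = t
print("field norm at final gate:", float(np.linalg.norm(u)))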
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Jen-Yin, Yao-Tsung Lin, Li-Kai Wang, et al. "Hypovitaminosis D in Postherpetic Neuralgia—High Prevalence and Inverse Association with Pain: A Retrospective Study." Nutrients 11, no. 11 (2019): 2787. http://dx.doi.org/10.3390/nu11112787.

Full text
Abstract:
Hypovitaminosis D (25-hydroxyvitamin D (25(OH)D) <75 nmol/L) is associated with neuropathic pain and varicella-zoster virus (VZV) immunity. A two-part retrospective hospital-based study was conducted. Part I (a case-control study): To investigate the prevalence and risk of hypovitaminosis D in postherpetic neuralgia (PHN) patients compared to those in gender/index-month/age auto-matched controls who underwent health examinations. Patients aged ≥50 years were automatically selected by ICD-9 codes for shingles/PHN. Charts were reviewed. Part II (a cross-sectional study): To determine associations between 25(OH)D, VZV IgG/M, pain and items in the DN4 questionnaire at the first pain clinic visit of patients. Independent predictors of PHN were presented as adjusted odds ratios (AOR) and 95% confidence intervals (CI). The prevalence (73.9%) of hypovitaminosis D in 88 patients was high. In conditional logistic regressions, independent predictors for PHN were hypovitaminosis D (AOR 3.12, 95% CI 1.73–5.61), malignancy (AOR 3.21, 95% CI 1.38–7.48) and Helicobacter pylori-related peptic ulcer disease (AOR 3.47, 95% CI 1.71–7.03). 25(OH)D was inversely correlated to spontaneous/brush-evoked pain. Spontaneous pain was positively correlated to VZV IgM. Based on the receiver operating characteristic curve, cutoffs for 25(OH)D to predict spontaneous and brush-evoked pain were 67.0 and 169.0 nmol/L, respectively. A prospective, longitudinal study is needed to elucidate these findings.
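
The cutoff values quoted at the end of this abstract come from a receiver operating characteristic analysis. The sketch below reproduces only the generic procedure on synthetic numbers; choosing the cutoff by the Youden index is our assumption, since the abstract does not state the selection criterion.

import numpy as np
from sklearn.metrics import roc_curve

# Synthetic 25(OH)D levels for patients with and without spontaneous pain;
# lower vitamin D is made to predict pain, mirroring the inverse association.
rng = np.random.default_rng(0)
vit_d = np.concatenate([rng.normal(55, 15, 60),    # with pain
                        rng.normal(80, 15, 60)])   # without pain
pain = np.concatenate([np.ones(60), np.zeros(60)])

# Negate the level so that "higher score" means "more likely in pain".
fpr, tpr, thresholds = roc_curve(pain, -vit_d)
youden = tpr - fpr                                 # Youden index J = TPR - FPR
cutoff = -thresholds[np.argmax(youden)]
print(f"25(OH)D cutoff for predicting pain: {cutoff:.1f} nmol/L")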
APA, Harvard, Vancouver, ISO, and other styles
47

Buljak, Vladimir, Severine Bavier-Romero, and Achraf Kallel. "Calibration of Drucker–Prager Cap Constitutive Model for Ceramic Powder Compaction through Inverse Analysis." Materials 14, no. 14 (2021): 4044. http://dx.doi.org/10.3390/ma14144044.

Full text
Abstract:
Phenomenological plasticity models that relate relative density to plastic strain are frequently used to simulate ceramic powder compaction. With respect to the form implemented in finite element codes, they need to be modified in order to define the governing parameters as functions of relative density. Such a modification increases the number of constitutive parameters and makes their calibration a demanding task that involves a large number of experiments. The novel calibration procedure investigated in this paper is based on inverse analysis methodology, centered on the minimization of a discrepancy function that quantifies the difference between experimentally measured and numerically computed quantities. In order to capture the influence of the sought parameters on the measured quantities, three different geometries of die and punches are proposed, resulting from a sensitivity analysis performed using numerical simulations of the test. The formulated calibration protocol requires only data that can be collected during the compaction test and thus involves a relatively small number of experiments. The developed procedure is tested on an alumina powder mixture, used for refractory products, with reference to the modified Drucker–Prager Cap model. The assessed parameters are compared to reference values, obtained through more laborious destructive tests performed on green bodies, and are further used to simulate the compaction test with arbitrary geometries. Both comparisons evidenced excellent agreement.
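
The heart of such a calibration, minimizing a discrepancy function between measured and computed quantities, can be sketched generically in a few lines. The "simulator" below is a stand-in for a finite element compaction run; its closed form and parameter names are our own illustration, not the modified Drucker–Prager Cap model.

import numpy as np
from scipy.optimize import least_squares

def simulate(params, strain):
    # Toy compaction model: relative density as a function of compaction
    # strain, governed by an initial density d0 and a hardening rate hr.
    d0, hr = params
    return 1.0 - (1.0 - d0) * np.exp(-hr * strain)

# "Measured" densification curve from a compaction test (synthetic here).
rng = np.random.default_rng(3)
strain = np.linspace(0.0, 1.2, 30)
measured = simulate([0.45, 2.0], strain) + 0.004 * rng.normal(size=strain.size)

def discrepancy(params):
    # Residual vector whose squared norm is the discrepancy function.
    return simulate(params, strain) - measured

result = least_squares(discrepancy, x0=[0.3, 1.0],
                       bounds=([0.1, 0.1], [0.9, 10.0]))
print("calibrated parameters:", result.x)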
APA, Harvard, Vancouver, ISO, and other styles
48

Waqas, Ghulam Jilani, Ishtiaq Ahmad, Muhammad Kashif Samee, Muhammad Nasir Khan, and Ali Raza. "A hybrid OFDM–CDMA-based robust image watermarking technique." International Journal of Wavelets, Multiresolution and Information Processing 18, no. 06 (2020): 2050043. http://dx.doi.org/10.1142/s0219691320500435.

Full text
Abstract:
Digital watermarking is a process of embedding hidden information, called a watermark, into different kinds of media objects. It uses basic modulation, multiplexing and transform techniques from communications for hiding information. Traditional techniques include least significant bit (LSB) modification, the discrete cosine transform (DCT), the discrete wavelet transform (DWT), the discrete Fourier transform (DFT), code division multiple access (CDMA), or a combination of these. Among these, CDMA is the most robust against attacks other than geometric attacks. This paper proposes a blind and highly robust watermarking technique built on the foundations of orthogonal frequency division multiplexing (OFDM) and CDMA communication systems. In this scheme, the insertion process starts by taking the DFT of the host image and permuting the watermark bits in a randomized manner, recording the permutation in a seed that serves as a key. PSK modulation is then applied, followed by an inverse DFT (IDFT) that yields the watermark information as OFDM symbols. These symbols are spread using spreading codes and then arithmetically added to the host image. Finally, the scheme applies an inverse DCT (IDCT) to obtain the watermarked host image. The simulation results of the proposed scheme are compared with a CDMA-based scheme in the DCT domain. The results show that the robustness of the proposed scheme is higher than that of the existing scheme for non-geometric attacks.
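
The embedding chain in this abstract can be sketched in a stripped-down 1-D form. Everything below is an illustrative stand-in rather than the paper's notation: complex noise replaces the DFT-domain coefficients of a real host image, the payload is tiny, and the embedding strength is chosen so that the toy detector succeeds.

import numpy as np

# Keyed embedding: permute the watermark bits, BPSK-modulate, form an OFDM
# symbol with an IDFT, spread with a pseudo-random CDMA code, and add the
# chips to the (transform-domain) host coefficients.
key = np.random.default_rng(seed=42)          # the seed acts as the secret key
bits = np.random.default_rng(7).integers(0, 2, 16)   # watermark payload
perm = key.permutation(bits.size)             # keyed bit permutation
symbols = 2.0 * bits[perm] - 1.0              # BPSK: {0, 1} -> {-1, +1}
ofdm = np.fft.ifft(symbols)                   # OFDM symbol via IDFT

sf = 512                                      # spreading factor
code = key.choice([-1.0, 1.0], size=sf)       # keyed spreading code
chips = (ofdm[:, None] * code[None, :]).ravel()

# Complex noise stands in for the DFT coefficients of the host image.
host = key.normal(size=chips.size) + 1j * key.normal(size=chips.size)
alpha = 1.0                                   # embedding strength
marked = host + alpha * chips

# Blind extraction: despread with the known code, demodulate with a DFT,
# then undo the keyed permutation.
rx = marked.reshape(-1, sf) @ code / sf
recovered = (np.fft.fft(rx).real > 0).astype(int)
extracted = np.empty_like(recovered)
extracted[perm] = recovered
print("bit errors:", int(np.sum(extracted != bits)))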
APA, Harvard, Vancouver, ISO, and other styles
49

Mohitpour, M., C. L. Pierce, and R. Hooper. "The Design and Engineering of Cross-Country Hydrogen Pipelines." Journal of Energy Resources Technology 110, no. 4 (1988): 203–7. http://dx.doi.org/10.1115/1.3231383.

Full text
Abstract:
Design of cross-country hydrogen pipelines is still uncommon. No industry-accepted codes and standards have been developed to guide the engineering and design of such facilities. A hydrogen pipeline was required to connect a hydrogen purification plant to an anhydrous ammonia production facility. Since no pipeline standards were available, special considerations were required during the design and engineering stages. The small molecular size and reactivity of hydrogen presented unique problems, as did the hydrogen embrittlement/attack/delayed failure phenomena, inverse Joule-Thomson effects and special concerns for commissioning, operation and maintenance. This paper will review world experience, the research, economics, design and safety considerations for hydrogen gas pipelines, as well as the engineering, design and construction methods which were considered necessary by Novacorp International Consulting Inc. for successful completion of the project.
APA, Harvard, Vancouver, ISO, and other styles
50

Sinha, Pankaj, and Naina Grover. "Interrelationship Among Competition, Diversification and Liquidity Creation: Evidence from Indian Banks." Margin: The Journal of Applied Economic Research 15, no. 2 (2021): 183–204. http://dx.doi.org/10.1177/0973801021990398.

Full text
Abstract:
This study analyses the impact of competition on liquidity creation by banks and investigates the dynamics between diversification, liquidity creation and competition for banks operating in India during the period from 2005 to 2018. Using the broad and narrow measures of liquidity creation, an inverse relationship is found between liquidity creation and competition. The study also indicates a trade-off between pro-competitive policies to improve consumer welfare and the liquidity-destroying effects of competition, and it highlights how diversification affects liquidity creation. Highly diversified banks in India create less liquidity compared with less-diversified banks, both public and private. The liquidity-destroying effects of competition are intensified among highly diversified private banks, which suggests that diversification has not moderated the adverse impact of competition. JEL Codes: G01, G18, G21, G28
APA, Harvard, Vancouver, ISO, and other styles