To see the other types of publications on this topic, follow the link: Extended Euclidean algorithm.

Journal articles on the topic 'Extended Euclidean algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Extended Euclidean algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Okazaki, Hiroyuki, Yosiki Aoki, and Yasunari Shidama. "Extended Euclidean Algorithm and CRT Algorithm." Formalized Mathematics 20, no. 2 (December 1, 2012): 175–79. http://dx.doi.org/10.2478/v10037-012-0020-2.

Full text
Abstract:
Summary: In this article we formalize two number-theoretical algorithms, the Euclidean Algorithm and the Extended Euclidean Algorithm [9]. Besides gcd(a, b), the Extended Euclidean Algorithm can calculate a pair of integers (x, y) satisfying ax + by = gcd(a, b). In addition, we formalize an algorithm that can compute a solution of the Chinese remainder theorem by using the Extended Euclidean Algorithm. Our aim is to support the implementation of number-theoretic tools. Our formalization of these algorithms is based on the source code of NZMATH, a number-theory-oriented calculation system developed by Tokyo Metropolitan University [8].
APA, Harvard, Vancouver, ISO, and other styles
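For readers who want to experiment with the two routines this abstract describes, a minimal Python sketch follows. The function names and the two-modulus restriction are illustrative choices, not part of the Mizar formalization or the NZMATH source.

```python
def xgcd(a: int, b: int) -> tuple[int, int, int]:
    """Extended Euclidean algorithm: return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b != 0:
        q, r = divmod(a, b)
        a, b = b, r
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def crt(r1: int, m1: int, r2: int, m2: int) -> int:
    """Solve x = r1 (mod m1), x = r2 (mod m2) for coprime moduli via xgcd."""
    g, x, _ = xgcd(m1, m2)
    assert g == 1, "moduli must be coprime"
    # x is the inverse of m1 modulo m2, so the correction term below
    # shifts r1 by a multiple of m1 until the second congruence holds.
    return (r1 + m1 * x * (r2 - r1)) % (m1 * m2)

assert xgcd(240, 46) == (2, -9, 47)     # 240*(-9) + 46*47 == 2
assert crt(2, 3, 3, 5) == 8             # 8 % 3 == 2 and 8 % 5 == 3
```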
2

Levrie, Paul, and Rudi Penne. "The extended Euclidean Algorithm made easy." Mathematical Gazette 100, no. 547 (March 2016): 147–49. http://dx.doi.org/10.1017/mag.2016.25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kim, Daehak, and Kwang Sik Oh. "Computer intensive method for extended Euclidean algorithm." Journal of the Korean Data and Information Science Society 25, no. 6 (November 30, 2014): 1467–74. http://dx.doi.org/10.7465/jkdi.2014.25.6.1467.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Havas, George. "On the Complexity of the Extended Euclidean Algorithm (Extended Abstract)." Electronic Notes in Theoretical Computer Science 78 (April 2003): 1–4. http://dx.doi.org/10.1016/s1571-0661(04)81002-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Aldaya, Alejandro Cabrera, Alejandro J. Cabrera Sarmiento, and Santiago Sánchez-Solano. "SPA vulnerabilities of the binary extended Euclidean algorithm." Journal of Cryptographic Engineering 7, no. 4 (July 8, 2016): 273–85. http://dx.doi.org/10.1007/s13389-016-0135-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Luo, Zhikun, Huafei Sun, and Xiaomin Duan. "The Extended Hamiltonian Algorithm for the Solution of the Algebraic Riccati Equation." Journal of Applied Mathematics 2014 (2014): 1–8. http://dx.doi.org/10.1155/2014/693659.

Full text
Abstract:
We use a second-order learning algorithm for numerically solving a class of algebraic Riccati equations. Specifically, an extended Hamiltonian algorithm based on the manifold of positive-definite symmetric matrices is provided. Furthermore, this algorithm is compared with the Euclidean gradient algorithm, the Riemannian gradient algorithm, and the new subspace iteration method. Simulation examples show that the convergence speed of the extended Hamiltonian algorithm is the fastest among these algorithms.
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Weihua, and Andrew Klapper. "AFSRs synthesis with the extended Euclidean rational approximation algorithm." Advances in Mathematics of Communications 11, no. 1 (2017): 139–50. http://dx.doi.org/10.3934/amc.2017008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

KAIHARA, M. E. "A Hardware Algorithm for Modular Multiplication/Division Based on the Extended Euclidean Algorithm." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E88-A, no. 12 (December 1, 2005): 3610–17. http://dx.doi.org/10.1093/ietfec/e88-a.12.3610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Yi, Jin, Shiqiang Zhang, Yueqi Cao, Erchuan Zhang, and Huafei Sun. "Rigid Shape Registration Based on Extended Hamiltonian Learning." Entropy 22, no. 5 (May 12, 2020): 539. http://dx.doi.org/10.3390/e22050539.

Full text
Abstract:
Shape registration, finding the correct alignment of two sets of data, plays a significant role in computer vision tasks such as object recognition and image analysis. The iterative closest point (ICP) algorithm is one of the best-known and most widely used algorithms in this area. The main purpose of this paper is to combine ICP with fast-convergent extended Hamiltonian learning (EHL), yielding the so-called EHL-ICP algorithm, to perform planar and spatial rigid shape registration. By treating the registration error as the potential for the extended Hamiltonian system, rigid shape registration is modelled as an optimization problem on the special Euclidean group SE(n) (n = 2, 3). Our method is robust to initial values and parameters. Compared with some state-of-the-art methods, our approach shows better efficiency and accuracy in simulation experiments.
APA, Harvard, Vancouver, ISO, and other styles
10

Al-Hiaja, Qasem Abu, Abdullah AlShuaibi, and Ahmad Al Badawi. "Frequency Analysis of 32-bit Modular Divider Based on Extended GCD Algorithm for Different FPGA chips." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 17, no. 1 (January 16, 2018): 7133–39. http://dx.doi.org/10.24297/ijct.v17i1.6992.

Full text
Abstract:
Modular inversion with large integers and modulus is a fundamental operation in many public-key cryptosystems. The extended Euclidean algorithm (XGCD) is an extension of the Euclidean algorithm (GCD) used to compute the modular multiplicative inverse of two coprime numbers. In this paper, we propose a frequency analysis study of a 32-bit modular divider based on the extended-GCD algorithm, targeting different field-programmable gate array (FPGA) chips. The experimental results showed that the design recorded the best performance when implemented using the Kintex7 (xc7k70t-2-fbg676) FPGA kit, with a minimum delay period of 50.63 ns and a maximum operating frequency of 19.5 MHz. Therefore, the proposed work can be embedded in many FPGA-based cryptographic applications.
APA, Harvard, Vancouver, ISO, and other styles
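The modular multiplicative inverse mentioned here is exactly what XGCD delivers; a short software sketch of that relationship may be helpful. This is a plain Python model only, not the paper's FPGA design.

```python
def xgcd(a: int, b: int) -> tuple[int, int, int]:
    """Iterative extended Euclidean algorithm: (g, x, y) with a*x + b*y == g."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, r = divmod(a, b)
        a, b, x0, y0, x1, y1 = b, r, x1, y1, x0 - q * x1, y0 - q * y1
    return a, x0, y0

def mod_inverse(a: int, n: int) -> int:
    """Inverse of a modulo n; the Bezout coefficient x gives a*x == 1 (mod n)."""
    g, x, _ = xgcd(a % n, n)
    if g != 1:
        raise ValueError("a and n are not coprime, so no inverse exists")
    return x % n

assert (7 * mod_inverse(7, 40)) % 40 == 1    # mod_inverse(7, 40) == 23
```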
11

Achuthan, P., and S. Sundar. "A new application of the extended Euclidean algorithm for matrix Padé approximants." Computers & Mathematics with Applications 16, no. 4 (1988): 287–96. http://dx.doi.org/10.1016/0898-1221(88)90145-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Mielke, Paul W., and Kenneth J. Berry. "Multivariate Multiple Regression Prediction Models: A Euclidean Distance Approach." Psychological Reports 92, no. 3 (June 2003): 763–69. http://dx.doi.org/10.2466/pr0.2003.92.3.763.

Full text
Abstract:
An extension of a multiple regression prediction model to multiple response variables is presented. An algorithm using least sum of Euclidean distances between the multivariate observed and model-predicted response values provides regression coefficients, a measure of effect size, and inferential procedures for evaluating the extended multivariate multiple regression prediction model.
APA, Harvard, Vancouver, ISO, and other styles
13

Zhou, Qiang, Chengliang Tian, Hanlin Zhang, Jia Yu, and Fengjun Li. "How to securely outsource the extended euclidean algorithm for large-scale polynomials over finite fields." Information Sciences 512 (February 2020): 641–60. http://dx.doi.org/10.1016/j.ins.2019.10.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Hou, Jie, and William W. Symes. "Accelerating extended least-squares migration with weighted conjugate gradient iteration." GEOPHYSICS 81, no. 4 (July 2016): S165–S179. http://dx.doi.org/10.1190/geo2015-0499.1.

Full text
Abstract:
Least-squares migration (LSM) iteratively achieves a mean-square best fit to seismic reflection data, provided that a kinematically accurate velocity model is available. The subsurface offset extension adds extra degrees of freedom to the model, thereby allowing LSM to fit the data even in the event of significant velocity error. This type of extension also implies additional computational expense per iteration from crosscorrelating source and receiver wavefields over the subsurface offset, and therefore places a premium on rapid convergence. We have accelerated the convergence of extended least-squares migration by combining the conjugate gradient algorithm with weighted norms in range (data) and domain (model) spaces that render the extended Born modeling operator approximately unitary. We have developed numerical examples that demonstrate that the proposed algorithm dramatically reduces the number of iterations required to achieve a given level of fit or gradient reduction compared with conjugate gradient iteration with Euclidean (unweighted) norms.
APA, Harvard, Vancouver, ISO, and other styles
15

Kaye, P. R. "Optimized quantum implementation of elliptic curve arithmetic over binary fields." Quantum Information and Computation 5, no. 6 (September 2005): 474–91. http://dx.doi.org/10.26421/qic5.6-6.

Full text
Abstract:
Shor's quantum algorithm for discrete logarithms applied to elliptic curve groups forms the basis of a "quantum attack" on elliptic curve cryptosystems. To implement this algorithm on a quantum computer requires the efficient implementation of the elliptic curve group operation. Such an implementation requires that we be able to compute inverses in the underlying field. In [PZ03], Proos and Zalka show how to implement the extended Euclidean algorithm to compute inverses in the prime field GF(p). They employ a number of optimizations to achieve a running time of O(n^2) and a space requirement of O(n) qubits, where n is the number of bits in the binary representation of p (there are some trade-offs that they make, sacrificing a few extra qubits to reduce running time). In practice, elliptic curve cryptosystems often use curves over the binary field GF(2^m). In this paper, I show how to implement the extended Euclidean algorithm for polynomials to compute inverses in GF(2^m). Working under the assumption that qubits will be an 'expensive' resource in realistic implementations, I optimize specifically to reduce the qubit space requirement while keeping the running time polynomial. The implementation here differs from that in [PZ03] for GF(p), and we are able to take advantage of some properties of the binary field GF(2^m). I also optimize the overall qubit space requirement for computing the group operation for elliptic curves over GF(2^m) by decomposing the group operation to make it "piecewise reversible" (similar to what is done in [PZ03] for curves over GF(p)).
APA, Harvard, Vancouver, ISO, and other styles
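A classical (non-quantum) reference model of the polynomial inversion this abstract discusses fits in a few lines if GF(2) polynomials are stored as integer bitmasks. The sketch below only illustrates the underlying arithmetic, not the paper's reversible circuit construction.

```python
def pdeg(p: int) -> int:
    """Degree of a GF(2) polynomial stored as a bitmask (pdeg(0) == -1)."""
    return p.bit_length() - 1

def pmod(a: int, b: int) -> int:
    """a mod b for GF(2) polynomials, by XOR long division."""
    while pdeg(a) >= pdeg(b):
        a ^= b << (pdeg(a) - pdeg(b))
    return a

def pinv(a: int, mod: int) -> int:
    """Inverse of nonzero a in GF(2^m) = GF(2)[x]/(mod), mod irreducible,
    via the extended Euclidean algorithm for polynomials."""
    r0, r1, s0, s1 = a, mod, 1, 0      # invariant: r_i == s_i * a (mod mod)
    while pdeg(r0) > 0:
        if pdeg(r0) < pdeg(r1):
            r0, r1, s0, s1 = r1, r0, s1, s0
        shift = pdeg(r0) - pdeg(r1)    # cancel the leading term of r0
        r0 ^= r1 << shift
        s0 ^= s1 << shift
    return pmod(s0, mod)               # r0 == 1 here, so s0 * a == 1

# In the AES field GF(2^8) with modulus x^8+x^4+x^3+x+1 (0x11B),
# the inverse of 0x53 is 0xCA -- the textbook S-box example.
assert pinv(0x53, 0x11B) == 0xCA
```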
16

Bufalo, Michele, Daniele Bufalo, and Giuseppe Orlando. "A Note on the Computation of the Modular Inverse for Cryptography." Axioms 10, no. 2 (June 9, 2021): 116. http://dx.doi.org/10.3390/axioms10020116.

Full text
Abstract:
In the literature, there are a number of cryptographic algorithms (RSA, ElGamal, NTRU, etc.) that require multiple computations of modular multiplicative inverses. In this paper, we describe the modulo operation and we recollect the main approaches to computing the modulus. Then, given positive integers a and n with a < n and gcd(a, n) = 1, we present the sequence (z_j)_{j≥0}, where z_j = z_{j−1} + a·β_j − n. Regarding the above sequence, we show that it is bounded and admits a simple explicit, periodic solution. The main result is that the inverse of a modulo n is given by a^{−1} = ⌊i·m⌋ + 1 with m = n/a. The computational cost of such an index i is O(a), which is less than the O(n ln n) of Euler's phi function. Furthermore, we suggest an algorithm for the computation of a^{−1} using plain multiplications instead of modular multiplications. The latter, still, has complexity O(a) versus complexity O(n) (naive algorithm) or complexity O(ln n) (extended Euclidean algorithm). Therefore, the above procedure is more convenient when a ≪ n (e.g., a < ln n).
APA, Harvard, Vancouver, ISO, and other styles
17

GISBRECHT, ANDREJ, BASSAM MOKBEL, FRANK-MICHAEL SCHLEIF, XIBIN ZHU, and BARBARA HAMMER. "LINEAR TIME RELATIONAL PROTOTYPE BASED LEARNING." International Journal of Neural Systems 22, no. 05 (September 26, 2012): 1250021. http://dx.doi.org/10.1142/s0129065712500219.

Full text
Abstract:
Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
APA, Harvard, Vancouver, ISO, and other styles
18

Fournaris, Apostolos P., and O. Koufopavlou. "Applying systolic multiplication–inversion architectures based on modified extended Euclidean algorithm for GF(2k) in elliptic curve cryptography." Computers & Electrical Engineering 33, no. 5-6 (September 2007): 333–48. http://dx.doi.org/10.1016/j.compeleceng.2007.05.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Pilarczyk, Rafał, and Władysław Skarbek. "On Intra-Class Variance for Deep Learning of Classifiers." Foundations of Computing and Decision Sciences 44, no. 3 (September 1, 2019): 285–301. http://dx.doi.org/10.2478/fcds-2019-0015.

Full text
Abstract:
Abstract: A novel technique for deep learning of image classifiers is presented. The learned CNN models offer better separation of deep features (also known as embedded vectors) measured by Euclidean proximity, and also no deterioration of the classification results by class membership probability. The latter feature can be used for enhancing image classifiers when the classes at the model's exploitation stage differ from the classes used during the training stage. While the Shannon information of the SoftMax probability for the target class is extended, for each mini-batch, by the intra-class variance, the trained network itself is extended by a Hadamard layer whose parameters represent the class centers. Contrary to existing solutions, this extra neural layer enables interfacing the training algorithm to standard stochastic gradient optimizers, e.g. the AdaM algorithm. Moreover, this approach makes the computed centroids adapt immediately to the updated embedded vectors, finally reaching comparable accuracy in fewer epochs.
APA, Harvard, Vancouver, ISO, and other styles
20

Grošek, Otokar, and Tomáš Fabšič. "Computing multiplicative inverses in finite fields by long division." Journal of Electrical Engineering 69, no. 5 (September 1, 2018): 400–402. http://dx.doi.org/10.2478/jee-2018-0059.

Full text
Abstract:
Abstract: We study a method of computing multiplicative inverses in finite fields using long division. In the case of fields of prime order p, we construct one fixed integer d(p) with the property that for any nonzero field element a, we can compute its inverse by dividing d(p) by a and reducing the result modulo p. We show how to construct the smallest d(p) with this property. We demonstrate that a similar approach works in finite fields of non-prime order as well. However, we demonstrate that the studied method (in both cases) has worse asymptotic complexity than the extended Euclidean algorithm.
APA, Harvard, Vancouver, ISO, and other styles
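Under the reading that the nonzero field elements are the integers 1..p-1, the defining property of d(p) can be brute-forced for small primes. The sketch below is my interpretation of that property, not the paper's closed-form construction of the smallest d(p); it needs Python 3.8+ for pow(a, -1, p).

```python
def smallest_d(p: int) -> int:
    """Brute-force the least d with (d // a) % p == a^{-1} (mod p)
    for every nonzero field element a in {1, ..., p-1}."""
    inv = {a: pow(a, -1, p) for a in range(1, p)}   # reference inverses
    d = 1
    while not all((d // a) % p == inv[a] for a in range(1, p)):
        d += 1
    return d

def inv_by_division(a: int, p: int, d: int) -> int:
    """The long-division inversion the abstract describes: one integer
    division by a, then one reduction modulo p."""
    return (d // a) % p

d5 = smallest_d(5)          # this search yields 36 for p = 5
assert all(inv_by_division(a, 5, d5) == pow(a, -1, 5) for a in range(1, 5))
```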
21

NASRAOUI, OLFA, HICHEM FRIGUI, RAGHU KRISHNAPURAM, and ANUPAM JOSHI. "EXTRACTING WEB USER PROFILES USING RELATIONAL COMPETITIVE FUZZY CLUSTERING." International Journal on Artificial Intelligence Tools 09, no. 04 (December 2000): 509–26. http://dx.doi.org/10.1142/s021821300000032x.

Full text
Abstract:
The proliferation of information on the World Wide Web has made the personalization of this information space a necessity. An important component of Web personalization is to mine typical user profiles from the vast amount of historical data stored in access logs. In the absence of any a priori knowledge, unsupervised classification or clustering methods seem to be ideally suited to analyze the semi-structured log data of user accesses. In this paper, we define the notion of a "user session" as being a temporally compact sequence of Web accesses by a user. We also define a new distance measure between two Web sessions that captures the organization of a Web site. The Competitive Agglomeration clustering algorithm which can automatically cluster data into the optimal number of components is extended so that it can work on relational data. The resulting Competitive Agglomeration for Relational Data (CARD) algorithm can deal with complex, non-Euclidean, distance/similarity measures. This algorithm was used to analyze Web server access logs successfully and obtain typical session profiles of users.
APA, Harvard, Vancouver, ISO, and other styles
22

Yokoyama, Kazuhiro, Masaya Yasuda, Yasushi Takahashi, and Jun Kogure. "Complexity bounds on Semaev’s naive index calculus method for ECDLP." Journal of Mathematical Cryptology 14, no. 1 (October 30, 2020): 460–85. http://dx.doi.org/10.1515/jmc-2019-0029.

Full text
Abstract:
Abstract: Since Semaev introduced summation polynomials in 2004, a number of studies have been devoted to improving the index calculus method for solving the elliptic curve discrete logarithm problem (ECDLP) with better complexity than generic methods such as Pollard's rho method and the baby-step giant-step method (BSGS). In this paper, we provide a deep analysis of Gröbner basis computation for solving polynomial systems appearing in the point decomposition problem (PDP) in Semaev's naive index calculus method. Our analysis relies on linear algebra under simple statistical assumptions on summation polynomials. We show that the ideal derived from PDP has a special structure and that Gröbner basis computation for the ideal can be regarded as an extension of the extended Euclidean algorithm. This enables us to obtain a lower bound on the cost of Gröbner basis computation. With the lower bound, we prove that the naive index calculus method cannot be more efficient than generic methods.
APA, Harvard, Vancouver, ISO, and other styles
23

Lin, Sheng-Kai, Rong-Chin Lo, and Ren-Guey Lee. "MAGNETOENCEPHALOGRAPHY–ELECTROENCEPHALOGRAPHY CO-REGISTRATION USING 3D GENERALIZED HOUGH TRANSFORM." Biomedical Engineering: Applications, Basis and Communications 32, no. 03 (June 2020): 2050024. http://dx.doi.org/10.4015/s1016237220500246.

Full text
Abstract:
This study proposes an advanced co-registration method for integrated high-temporal-resolution electroencephalography (EEG) and magnetoencephalography (MEG) data. MEG has a higher accuracy for source localization techniques and spatial resolution, by sensing magnetic fields generated by the entire brain using multichannel superconducting quantum interference devices, whereas EEG can record electrical activities from a larger cortical surface to detect epilepsy. By integrating the two modalities, we can localize epileptic activity more accurately than with other non-invasive modalities; integrating the two modality tools is therefore challenging and important. This study proposes a new algorithm using an extended three-dimensional generalized Hough transform (3D GHT) to co-register the two modality data. The preprocessing steps require the locations of EEG electrodes, MEG sensors, head-shape points of subjects and fiducial landmarks. The conventional GHT algorithm is a well-known method used for identifying or locating objects in 2D images. This study proposes a new co-registration method that extends the 2D GHT algorithm to a 3D GHT algorithm that can automatically co-register 3D image data. It is important to study prospective brain source activity in bio-signal analysis. Furthermore, the study examines the registration accuracy by calculating the root mean square of the Euclidean distance of the MEG-EEG co-registration data. Several experimental results are used to show that the proposed method for co-registering the two modality data is accurate and efficient. The results demonstrate that the proposed method is feasible, sufficiently automatic, and fast for investigating brain source images.
APA, Harvard, Vancouver, ISO, and other styles
24

LI, XIANG-YANG, and YU WANG. "EFFICIENT CONSTRUCTION OF LOW WEIGHTED BOUNDED DEGREE PLANAR SPANNER." International Journal of Computational Geometry & Applications 14, no. 01n02 (April 2004): 69–84. http://dx.doi.org/10.1142/s0218195904001366.

Full text
Abstract:
Given a set V of n points in a two-dimensional plane, we give an O(n log n)-time centralized algorithm that constructs a planar t-spanner for V, for [Formula: see text], such that the degree of each node is bounded from above by [Formula: see text], where 0 < α < π/2 is an adjustable parameter. Here C_del is the spanning ratio of the Delaunay triangulation, which is at most [Formula: see text]. We also show, by applying the greedy method in Ref. [14], how to construct a low-weighted bounded-degree planar spanner with spanning ratio ρ(α)²(1+ε) and the same degree bound, where ε is any positive real constant. Here, a structure is called low weighted if its total edge length is proportional to the total edge length of the Euclidean minimum spanning tree of V. Moreover, we show that our method can be extended to construct a planar bounded-degree spanner for unit disk graphs with the adjustable parameter α satisfying 0 < α < π/3. Previously, only a centralized method [6] of constructing a bounded-degree planar spanner was known, with degree bound 27 and spanning ratio t ≃ 10.02. The distributed implementation of this centralized method takes O(n²) communications in the worst case. Our method can be converted to a localized algorithm where the total number of messages sent by all nodes is at most O(n).
APA, Harvard, Vancouver, ISO, and other styles
25

Tamilmani, Rajesh, and Emmanuel Stefanakis. "Semantically Enriched Simplification of Trajectories." Proceedings of the ICA 2 (July 10, 2019): 1–8. http://dx.doi.org/10.5194/ica-proc-2-128-2019.

Full text
Abstract:
Abstract. Moving objects that are equipped with GPS devices generate huge volumes of spatio-temporal data. This spatial and temporal information is used in tracing the path travelled by the object, the so-called trajectory. It is often difficult to handle this massive data, as it contains millions of raw data points. The number of points in a trajectory is reduced by trajectory simplification techniques. While most simplification algorithms use the distance offset as a criterion to eliminate redundant points, the temporal dimension of trajectories should also be considered when retaining the points that convey both the spatial and temporal characteristics of the trajectory. In addition, the simplification process may result in losing the semantics associated with the intermediate points on the original trajectories. These intermediate points can contain attributes or characteristics depending on the application domain. For example, a trajectory of a moving vessel can contain information about distance travelled, bearing, and current speed. This paper involves implementing Synchronized Euclidean Distance (SED) based simplification to consider the temporal dimension, and building the Semantically Enriched Line simpliFication (SELF) data structure to preserve the semantic attributes associated with individual points on actual trajectories. The SED-based simplification technique and the SELF data structure have been implemented in PostgreSQL 9.4 with the PostGIS extension, using PL/pgSQL to support dynamic lines. Extended experimental work has been carried out to better understand the impact of SED-based simplification over the conventional Douglas-Peucker algorithm on both synthetic and real trajectories. The efficiency of the SELF structure with regard to semantic preservation has been tested at different levels of simplification.
APA, Harvard, Vancouver, ISO, and other styles
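The Synchronized Euclidean Distance at the heart of this approach compares a point against where the moving object should be at that point's timestamp, rather than against the geometrically nearest point on the simplified segment. A small sketch follows; the (x, y, t) tuple layout is an assumption for illustration.

```python
import math

def sed(p, a, b):
    """Synchronized Euclidean Distance of point p from the simplified
    segment a-b; every point is an (x, y, t) tuple."""
    (xp, yp, tp), (xa, ya, ta), (xb, yb, tb) = p, a, b
    w = (tp - ta) / (tb - ta)                 # temporal interpolation weight
    xs, ys = xa + w * (xb - xa), ya + w * (yb - ya)
    return math.hypot(xp - xs, yp - ys)       # offset from the synchronized point

# The object should be at (5, 0) at t = 5, so the SED is the vertical offset.
assert sed((5.0, 3.0, 5.0), (0.0, 0.0, 0.0), (10.0, 0.0, 10.0)) == 3.0
```

Swapping this measure in for the perpendicular offset inside a Douglas-Peucker-style recursion gives an SED-based simplifier of the kind the abstract compares against.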
26

Tawfeeq, Firas Ghanim, and Alaa M. Abdul-Hadi. "Improved throughput of Elliptic Curve Digital Signature Algorithm (ECDSA) processor implementation over Koblitz curve k-163 on Field Programmable Gate Array (FPGA)." Baghdad Science Journal 17, no. 3(Suppl.) (September 8, 2020): 1029. http://dx.doi.org/10.21123/bsj.2020.17.3(suppl.).1029.

Full text
Abstract:
The widespread use of the Internet of Things (IoT) in different aspects of an individual's life, like banking, wireless intelligent devices and smartphones, has led to new security and performance challenges under restricted resources. The Elliptic Curve Digital Signature Algorithm (ECDSA) is the most suitable choice for such environments due to the smaller size of the encryption key and changeable security-related parameters. However, major performance metrics such as area, power, latency and throughput are still customisable and based on the design requirements of the device. The present paper puts forward an enhancement for the throughput performance metric by proposing a more efficient design for the hardware implementation of ECDSA. The design raised the throughput to 0.08207 Mbit/s, an increase of 6.95% over the existing design. It also includes the design and implementation of the Universal Asynchronous Receiver Transmitter (UART) module. The present work is based on a 163-bit key size over the Koblitz curve k-163 and the secure hash function SHA-1. A serial module for the underlying modular layer and a high-speed architecture for Koblitz point addition and Koblitz point multiplication have been considered in this work, in addition to utilising the carry-save multiplier, modular adder-subtractor and Extended Euclidean module for the ECDSA protocols. All modules are designed using VHDL and implemented on the platform Virtex5 xc5vlx155t-3ff1738. Signature generation requires 0.55360 ms, while its validation consumes 1.10947288 ms. Thus, the total time required to complete both processes is equal to 1.66 ms and the maximum frequency is approximately 83.477 MHz, consuming a power of 99 mW with an efficiency approaching 3.39 × 10⁻⁶.
APA, Harvard, Vancouver, ISO, and other styles
27

Rashidi, Bahram, and Mohammad Abedini. "Efficient Lightweight Hardware Structures of Point Multiplication on Binary Edwards Curves for Elliptic Curve Cryptosystems." Journal of Circuits, Systems and Computers 28, no. 09 (August 2019): 1950149. http://dx.doi.org/10.1142/s0218126619501494.

Full text
Abstract:
This paper presents efficient lightweight hardware implementations of the complete point multiplication on binary Edwards curves (BECs). The implementations are based on general and special cases of binary Edwards curves. The complete differential addition formulas have the cost of [Formula: see text] and [Formula: see text] for general and special cases of BECs, respectively, where [Formula: see text] and [Formula: see text] denote the costs of a field multiplication, a field squaring and a field multiplication by a constant, respectively. In the general case of BECs, the structure is implemented based on 3 concurrent multipliers. Also in the special case of BECs, two structures by employing 3 and 2 field multipliers are proposed for achieving the highest degree of parallelization and utilization of resources, respectively. The field multipliers are implemented based on the proposed efficient digit–digit polynomial basis multiplier. Two input operands of the multiplier proceed in digit level. This property leads to reduce hardware consumption and critical path delay. Also, in the structure, based on the change of input digit size from low digit size to high digit size the number of clock cycles and input words are different. Therefore, the multiplier can be flexible for different cryptographic considerations such as low-area and high-speed implementations. The point multiplication computation requires field inversion, therefore, we use a low-cost Extended Euclidean Algorithm (EEA) based inversion for implementation of this field operation. Implementation results of the proposed architectures based on Virtex-5 XC5VLX110 FPGA for two fields [Formula: see text] and [Formula: see text] are achieved. The results show improvements in terms of area and efficiency for the proposed structures compared to previous works.
APA, Harvard, Vancouver, ISO, and other styles
28

Kamen, Edward W. "The VIT Transform Approach to Discrete-Time Signals and Linear Time-Varying Systems." Eng 2, no. 1 (March 10, 2021): 99–125. http://dx.doi.org/10.3390/eng2010008.

Full text
Abstract:
A transform approach based on a variable initial time (VIT) formulation is developed for discrete-time signals and linear time-varying discrete-time systems or digital filters. The VIT transform is a formal power series in z−1, which converts functions given by linear time-varying difference equations into left polynomial fractions with variable coefficients, and with initial conditions incorporated into the framework. It is shown that the transform satisfies a number of properties that are analogous to those of the ordinary z-transform, and that it is possible to do scaling of z−i by time functions, which results in left-fraction forms for the transform of a large class of functions including sinusoids with general time-varying amplitudes and frequencies. Using the extended right Euclidean algorithm in a skew polynomial ring with time-varying coefficients, it is shown that a sum of left polynomial fractions can be written as a single fraction, which results in linear time-varying recursions for the inverse transform of the combined fraction. The extraction of a first-order term from a given polynomial fraction is carried out in terms of the evaluation of zi at time functions. In the application to linear time-varying systems, it is proved that the VIT transform of the system output is equal to the product of the VIT transform of the input and the VIT transform of the unit-pulse response function. For systems given by a time-varying moving average or an autoregressive model, the transform framework is used to determine the steady-state output response resulting from various signal inputs such as the step and cosine functions.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhu, Zhen, Long Chen, Changgao Xia, and Chaochun Yuan. "A History-Driven Differential Evolution Algorithm for Optimization in Dynamic Environments." International Journal on Artificial Intelligence Tools 27, no. 06 (September 2018): 1850028. http://dx.doi.org/10.1142/s0218213018500288.

Full text
Abstract:
This paper presents a novel differential evolution algorithm to solve dynamic optimization problems. In the proposed algorithm, the entire population is composed of several subpopulations, which evolve independently and exclude each other by a predefined Euclidean distance. In each subpopulation, the "DE/best/1" mutation operator is employed to generate a mutant individual. In order to fully exploit the newly generated individual, the selection operator is extended: the newly generated trial vector competes with the worst individual if the trial vector is worse than the target vector in terms of fitness; meanwhile, the trial vector is stored as historical information if it is better than that worst individual. When an environmental change is detected, some of the stored solutions are retrieved and expected to guide the reinitialized solutions to track the new location of the global optimum as soon as possible. The proposed algorithm was compared with several state-of-the-art dynamic evolutionary algorithms over representative benchmark instances. The experimental results show that the proposed algorithm outperforms the competitors.
APA, Harvard, Vancouver, ISO, and other styles
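A compact sketch of the two core operators described here, DE/best/1 mutation plus the extended selection that lets a losing trial vector replace (and be archived against) the worst individual, is below. The multi-subpopulation exclusion mechanism and change detection are omitted, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective for the demo (minimum 0 at the origin)."""
    return float(np.sum(x * x))

def de_best_1_step(pop, fit, func, f=0.5, cr=0.9, archive=None):
    """One generation of DE/best/1 with the extended selection operator."""
    n, dim = pop.shape
    best = pop[np.argmin(fit)]
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], 2, replace=False)
        mutant = best + f * (pop[r1] - pop[r2])        # DE/best/1 mutation
        mask = rng.random(dim) < cr
        mask[rng.integers(dim)] = True                 # inherit >= 1 mutant gene
        trial = np.where(mask, mutant, pop[i])
        ft = func(trial)
        if ft <= fit[i]:                               # classic replacement
            pop[i], fit[i] = trial, ft
        else:
            w = int(np.argmax(fit))                    # extended selection:
            if ft < fit[w]:                            # beat the worst instead
                if archive is not None:
                    archive.append(trial.copy())       # keep as history
                pop[w], fit[w] = trial, ft
    return pop, fit

pop = rng.uniform(-5.0, 5.0, (20, 3))
fit = np.array([sphere(x) for x in pop])
history = []
for _ in range(60):
    pop, fit = de_best_1_step(pop, fit, sphere, archive=history)
print(round(fit.min(), 6))
```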
30

SCHLEIF, F. M., THOMAS VILLMANN, BARBARA HAMMER, and PETRA SCHNEIDER. "EFFICIENT KERNELIZED PROTOTYPE BASED CLASSIFICATION." International Journal of Neural Systems 21, no. 06 (December 2011): 443–57. http://dx.doi.org/10.1142/s012906571100295x.

Full text
Abstract:
Prototype based classifiers are effective algorithms in modeling classification problems and have been applied in multiple domains. While many supervised learning algorithms have been successfully extended to kernels to improve the discrimination power by means of the kernel concept, prototype based classifiers are typically still used with Euclidean distance measures. Kernelized variants of prototype based classifiers are currently too complex to be applied for larger data sets. Here we propose an extension of Kernelized Generalized Learning Vector Quantization (KGLVQ) employing a sparsity and approximation technique to reduce the learning complexity. We provide generalization error bounds and experimental results on real world data, showing that the extended approach is comparable to SVM on different public data.
APA, Harvard, Vancouver, ISO, and other styles
31

Pierre, C., and E. H. Dowell. "A Study of Dynamic Instability of Plates by an Extended Incremental Harmonic Balance Method." Journal of Applied Mechanics 52, no. 3 (September 1, 1985): 693–97. http://dx.doi.org/10.1115/1.3169123.

Full text
Abstract:
The dynamic instability of plates is investigated with geometric nonlinearities being included in the model, which allows one to determine the amplitude of the parametric vibrations. A modal analysis allowing one spatial mode is performed on the nonlinear equations of motion, and the resulting nonlinear Mathieu equation is solved by the incremental harmonic balance method, which takes several temporal harmonics into account. When viscous damping is included, a new algorithm is proposed to solve the equation system obtained by the incremental method. For this purpose, a new characterization of the parametric vibration by its total amplitude, or Euclidean norm, is introduced. This algorithm is particularly simple and convenient for computer implementation. The instability regions are obtained with a high degree of accuracy.
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Yanwei, Yuqing Shan, and Peide Liu. "An Extended TODIM Method for Group Decision Making with the Interval Intuitionistic Fuzzy Sets." Mathematical Problems in Engineering 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/672140.

Full text
Abstract:
For a multiple-attribute group decision-making problem with interval intuitionistic fuzzy sets, a method based on extended TODIM is proposed. First, the concepts of interval intuitionistic fuzzy set and its algorithms are defined, and then the entropy method to determine the weights is put forward. Then, based on the Hamming distance and the Euclidean distance of the interval intuitionistic fuzzy set, both of which have been defined, function mapping is given for the attribute. Finally, to solve multiple-attribute group decision-making problems using interval intuitionistic fuzzy sets, a method based on extended TODIM is put forward, and a case that deals with the site selection of airport terminals is given to prove the method.
APA, Harvard, Vancouver, ISO, and other styles
33

Fišer, Karel, Tomáš Sieger, and Josef H. Vormoor. "Identifying Candidate Normal and Leukemic B Cell Progenitor Populations with Hierarchical Clustering of 6-Color Flow Cytometry Data - A Better View." Blood 110, no. 11 (November 16, 2007): 1428. http://dx.doi.org/10.1182/blood.v110.11.1428.1428.

Full text
Abstract:
Abstract: 6-color flow cytometry allows multiparameter analysis of high numbers of single cells. It is an excellent tool for the characterization of a wide range of hematopoietic populations and for monitoring minimal residual disease. However, analysis of complex flow data is challenging. Gating populations on 28 two-parameter plots is extremely tedious and does not reflect the multidimensionality of the data. Here, we describe a novel approach employing hierarchical clustering (HCA) and support vector machine (SVM) learning in analyzing flow data. This approach provides a new perspective for looking at flow data and promises better identification of rare and novel subpopulations that escape classic analysis. Our aim was to identify normal and leukemic B cell progenitor/stem cell populations in normal (n=6) and ALL (n=10) bone marrow. Samples were labelled with fluorochrome-conjugated antibodies to 6 CD markers (CD 10, 19, 22, 34, 38, 117) and 10⁴ to 10⁶ events were acquired (FACSCanto, BD Biosciences). To analyze flow data with HCA we developed a new algorithm, better suited for the ellipsoid nature of cell populations than other current HCA metrics. Data exported from DiVa software were externally compensated and Hyperlog transformed to achieve a logarithmic-like scale that displayed zero and negative values. Normalized data were then subjected to HCA employing a scale-invariant Mahalanobis distance measurement for merging clusters. This reflects the extended ellipsoid shape of the populations (here: 8-dimensional ellipsoids). We developed a new adaptive linkage algorithm that smoothly shifts from the Euclidean distance (when clusters are too small to compute the Mahalanobis distance) to the Mahalanobis distance measurement. This allowed us to build the hierarchy from single events, yet to retain the advantage of the Mahalanobis measurement for larger clusters. To build classifiers we used SVM employing a polynomial kernel. All work was carried out in MATLAB (MathWorks, Inc.). The resulting hierarchical tree combined with the heatmap of CD marker expression allows visualization of hierarchically clustered data with all 8 parameters displayed in a single plot (!) as compared to 28 traditional two-parameter plots. HCA has the big advantage of providing populations homogeneous in their expression pattern across all parameters (without the need for complex sub- or back-gating). We were able to identify populations corresponding to the different stages of B-cell development. In a normal control bone marrow we could detect the following candidate B-lineage progenitor populations: CD34+117+38+10−22−19− (0.94% of total) progenitor/stem cells, CD34+117−38+10+22+19med (0.26% of total) pro-B cells, CD34−117−38+10+22+19+ (2.77% of total) small pre-B cells (lower FSC values), CD34−117−38+10+22+19+ (1.09% of total) large pre-B cells (higher FSC values) and CD34−117−38lo10−22+19+ (5.94% of total) (immature) B cells. In 10 diagnostic or relapse samples HCA clearly identified the main leukemic population. HCA is able to visualize otherwise "hidden" populations. This was exemplified by a distinct CD38+B-lin− population that overlapped with other populations in all 28 two-parameter plots (most likely T cells). We have built a classifier able to find established populations across samples and in large datasets (10⁶ events) for which HCA would be computationally too demanding. In summary, we show the advantages of using hierarchical clustering analysis for large complex multiparameter flow cytometry datasets.
APA, Harvard, Vancouver, ISO, and other styles
34

Prajapat, Gopal Krishan, and Rakesh Kumar. "A Hybrid Approach for Facial Expression Recognition Using Extended Local Binary Patterns and Principal Component Analysis." International Journal of Electronics, Communications, and Measurement Engineering 8, no. 2 (July 2019): 1–25. http://dx.doi.org/10.4018/ijecme.2019070101.

Full text
Abstract:
Facial feature extraction and recognition play a prominent role in human non-verbal interaction and are among the crucial factors (pose, speech, facial expression, behaviour and actions) used in conveying information about the intentions and emotions of a human being. In this article an extended local binary pattern is used for the feature extraction process and principal component analysis (PCA) is used for dimensionality reduction. The projections of the sample and model images are calculated and compared by the Euclidean distance method. The combination of extended local binary patterns and PCA (ELBP+PCA) improves the accuracy of the recognition rate and also diminishes the evaluation complexity. The evaluation of the proposed facial expression recognition approach focuses on the performance of the recognition rate. A series of tests is performed for the validation of the algorithms and to compare their accuracy on the JAFFE and Extended Cohn-Kanade image databases.
APA, Harvard, Vancouver, ISO, and other styles
35

BAUSCHKE, HEINZ H., JONATHAN M. BORWEIN, and PATRICK L. COMBETTES. "ESSENTIAL SMOOTHNESS, ESSENTIAL STRICT CONVEXITY, AND LEGENDRE FUNCTIONS IN BANACH SPACES." Communications in Contemporary Mathematics 03, no. 04 (November 2001): 615–47. http://dx.doi.org/10.1142/s0219199701000524.

Full text
Abstract:
The classical notions of essential smoothness, essential strict convexity, and Legendreness for convex functions are extended from Euclidean to Banach spaces. A pertinent duality theory is developed and several useful characterizations are given. The proofs rely on new results on the more subtle behavior of subdifferentials and directional derivatives at boundary points of the domain. In weak Asplund spaces, a new formula allows the recovery of the subdifferential from nearby gradients. Finally, it is shown that every Legendre function on a reflexive Banach space is zone consistent, a fundamental property in the analysis of optimization algorithms based on Bregman distances. Numerous illustrating examples are provided.
APA, Harvard, Vancouver, ISO, and other styles
36

ANIGBOGU, J. C., and A. BELAÏD. "HIDDEN MARKOV MODELS IN TEXT RECOGNITION." International Journal of Pattern Recognition and Artificial Intelligence 09, no. 06 (December 1995): 925–58. http://dx.doi.org/10.1142/s0218001495000389.

Full text
Abstract:
A multi-level multifont character recognition system is presented. The system proceeds by first delimiting the context of the characters. As a way of enhancing system performance, typographical information is extracted and used for font identification before actual character recognition is performed. This has the advantage of reliable character identification as well as text reproduction in its original form. The font identification is based on decision trees, in which the characters are automatically arranged into different confusion classes according to the physical characteristics of fonts. The character recognizers are built around first- and second-order hidden Markov models (HMM) as well as Euclidean distance measures. The HMMs use the Viterbi and the Extended Viterbi algorithms, to which enhancements were made. Also present is a majority-vote system that polls the other systems for "advice" before deciding on the identity of a character. Among other things, this last system is shown to give better results than each of the other systems applied individually. The system finally uses combinations of stochastic and dictionary verification methods for word recognition and error correction.
APA, Harvard, Vancouver, ISO, and other styles
37

HE, GUANGHUI, LINGFENG ZHANG, and ZHAOWEI SHANG. "CORRELATION-BASED MULTIDIMENSIONAL SCALING FOR UNSUPERVISED SUBSPACE LEARNING." International Journal of Wavelets, Multiresolution and Information Processing 10, no. 03 (May 2012): 1250030. http://dx.doi.org/10.1142/s0219691312500300.

Full text
Abstract:
Multidimensional scaling (MDS) has been applied in many applications such as dimensionality reduction and data mining. However, one of the drawbacks of MDS is that it is only defined on "training" data, without a clear extension to out-of-sample points. Furthermore, since MDS is based on the Euclidean distance (which is a dissimilarity measure), it is not suitable for detecting the nonlinear manifold structure embedded in the similarities between data points. In this paper, we extend MDS to the correlation measure space, naming it correlation MDS (CMDS). CMDS employs an explicit nonlinear mapping between the input and reduced spaces, while MDS uses an implicit mapping. As a result, CMDS can directly provide predictions for new samples. In addition, since correlation is a similarity measure, the CMDS method can effectively capture the nonlinear manifold structure of data embedded in the similarities between the data points. Theoretical analysis also shows that CMDS has some properties similar to kernel methods and can be extended to feature space. The effectiveness of the approach provided in this paper is demonstrated by extensive experiments on various datasets, in comparison with several existing algorithms.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Kunqi, Zhen Wei, Hui Liu, João Pedro de Magalhães, Rong Rong, Zhiliang Lu, and Jia Meng. "Enhancing Epitranscriptome Module Detection from m6A-Seq Data Using Threshold-Based Measurement Weighting Strategy." BioMed Research International 2018 (June 14, 2018): 1–15. http://dx.doi.org/10.1155/2018/2075173.

Full text
Abstract:
To date, with well over 100 different types of RNA modifications associated with various molecular functions identified on diverse types of RNA molecules, the epitranscriptome has emerged to be an important layer for gene expression regulation. It is of crucial importance and increasing interest to understand how the epitranscriptome is regulated to facilitate different biological functions from a global perspective, which may be carried forward by finding biologically meaningful epitranscriptome modules that respond to upstream epitranscriptome regulators and lead to downstream biological functions; however, due to the intrinsic properties of RNA molecules, RNA modifications, and relevant sequencing technique, the epitranscriptome profiled from high-throughput sequencing approaches often suffers from various artifacts, jeopardizing the effectiveness of epitranscriptome modules identification when using conventional approaches. To solve this problem, we developed a convenient measurement weighting strategy, which can largely tolerate the artifacts of high-throughput sequencing data. We demonstrated on real data that the proposed measurement weighting strategy indeed brings improved performance in epitranscriptome module discovery in terms of both module accuracy and biological significance. Although the new approach is integrated with Euclidean distance measurement in a hierarchical clustering scenario, it has great potential to be extended to other distance measurements and algorithms as well for addressing various tasks in epitranscriptome analysis. Additionally, we show for the first time with rigorous statistical analysis that the epitranscriptome modules are biologically meaningful with different GO functions enriched, which established the functional basis of epitranscriptome modules, fulfilled a key prerequisite for functional characterization, and deciphered the epitranscriptome and its regulation.
APA, Harvard, Vancouver, ISO, and other styles
39

A. A., Ibrahim. "GCD of Aunu Binary Polynomials of Cardinality Seven Using Extended Euclidean Algorithm." International Journal of Mathematics and Computer Research 09, no. 09 (September 24, 2021). http://dx.doi.org/10.47191/ijmcr/v9i9.02.

Full text
Abstract:
Finite fields are considered to be among the most widely used algebraic structures today due to their applications in cryptography, coding theory, and error-correcting codes, among others. This paper reports the use of the extended Euclidean algorithm in computing the greatest common divisor (gcd) of Aunu binary polynomials of cardinality seven. Each class of the polynomials is permuted into pairs until all the succeeding classes are exhausted. The findings of this research reveal that the gcds of most of the pairs of the permuted classes are relatively prime. These results can be used further in constructing some cryptographic architectures that could be used in the design of strong encryption schemes.
APA, Harvard, Vancouver, ISO, and other styles
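Encoding a binary polynomial as a bitmask (a degree-6 polynomial fits in seven bits) makes the Euclidean gcd a few lines of Python. This sketch only illustrates the arithmetic, not the Aunu permutation classes themselves.

```python
def pdeg(p: int) -> int:
    return p.bit_length() - 1          # degree of a GF(2) polynomial; pdeg(0) == -1

def pgcd(a: int, b: int) -> int:
    """gcd of two GF(2) polynomials encoded as bitmasks, by Euclid's algorithm."""
    while b:
        while pdeg(a) >= pdeg(b):      # a mod b, one XOR cancellation at a time
            a ^= b << (pdeg(a) - pdeg(b))
        a, b = b, a
    return a

# x^6+x^5+x+1 = (x+1)^2 (x^4+x^3+x^2+x+1) and x^2+1 = (x+1)^2,
# so the two polynomials share the factor (x+1)^2 = x^2+1.
assert pgcd(0b1100011, 0b101) == 0b101
```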
40

"Privacy Preserving Using Extended Euclidean Algorithm Applied To RSA-Homomorphic Encryption Technique." VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE 8, no. 10 (August 10, 2019): 3175–79. http://dx.doi.org/10.35940/ijitee.j1236.0881019.

Full text
Abstract:
Communication of confidential information over the Internet is the key aspect of security applications, and providing protection for sensitive information is of major concern. Many cryptographic algorithms have been in use for securing confidential information, and providing security for data has become a major challenge in this era. Classical cryptography plays a major role in providing security for applications; in modern days, securing confidential information in the cloud is considered an important challenge. The homomorphic encryption technique is one of the best solutions that provide security in the cloud [1]. In this paper, the Extended Euclidean Algorithm is used for generating keys, following the RSA homomorphic encryption technique. RSA homomorphic encryption using the Extended Euclidean Algorithm (RSA-HEEEA) is more secure than plain RSA, as it is based on a private-key generation step that makes the algorithm complex. This technique of using the Extended Euclidean Algorithm (EEA) is fast and secure when compared to the RSA homomorphic encryption technique. The encryption process utilizes the modulo operator, which provides security as well. The beauty of this algorithm lies in the generation of the private key, which uses the Extended Euclidean Algorithm (EEA) and thereby helps in avoiding brute-force attacks. Also, this technique uses homomorphic operations, which give enhanced security to confidential information in the cloud.
APA, Harvard, Vancouver, ISO, and other styles
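As a concrete illustration of the role the EEA plays here, a toy RSA key generation with textbook parameters follows; it shows the inverse computation and the multiplicative homomorphism the abstract builds on, not the paper's RSA-HEEEA scheme itself.

```python
def xgcd(a: int, b: int) -> tuple[int, int, int]:
    """Extended Euclidean algorithm: (g, x, y) with a*x + b*y == g."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, r = divmod(a, b)
        a, b, x0, y0, x1, y1 = b, r, x1, y1, x0 - q * x1, y0 - q * y1
    return a, x0, y0

p, q, e = 61, 53, 17                      # demonstration-sized primes only
n, phi = p * q, (p - 1) * (q - 1)
g, d, _ = xgcd(e, phi)
assert g == 1                              # e must be coprime to phi(n)
d %= phi                                   # private exponent: d == 2753 here

cipher = pow(65, e, n)
assert pow(cipher, d, n) == 65             # decryption recovers the message

# RSA is multiplicatively homomorphic: Enc(a) * Enc(b) == Enc(a*b) mod n.
assert (pow(6, e, n) * pow(7, e, n)) % n == pow(42, e, n)
```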
41

"Secure Convertible Undeniable Signature Scheme Using Extended Euclidean Algorithm without Random Oracles." KSII Transactions on Internet and Information Systems 7, no. 6 (June 26, 2013): 1512–32. http://dx.doi.org/10.3837/tiis.2013.06.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

"A Soft-Input Soft-Output Decoding Algorithm for LDPC Codes Based on Euclidean Distance." Applied Mechanics and Materials 63-64 (June 2011): 999–1004. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.999.

Full text
Abstract:
This paper has been removed due to plagiarism. The original paper appeared in extended form as: P. G. Farrell, L. J. Arnone, and J. Castineira Moreira, "Euclidean distance soft-input soft-output decoding algorithm for low-density parity-check codes," IET Communications, vol. 5, no. 16, pp. 2364–2370, 2011 (received 23 November 2010; revised 21 April 2011). doi: 10.1049/iet-com.2010.1040.
APA, Harvard, Vancouver, ISO, and other styles
43

Гнатюк, Сергей Александрович, Владислав Юрьевич Ковтун, Оксана Михайловна Бердник, and Мария Григорьевна Ковтун. "Approaches to performance increasing of extended Euclidean algorithm for double precision division on single precision large integers." Ukrainian Scientific Journal of Information Security 21, no. 1 (April 20, 2015). http://dx.doi.org/10.18372/2225-5036.21.8308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

B, Anjanadevi, P. S. Sitharama Raju, Jyothi V, and V. Valli Kumari. "A Novel approach for Privacy Preserving in Video using Extended Euclidean algorithm Based on Chinese remainder theorem." International Journal of Communication Networks and Security, October 2011, 45–49. http://dx.doi.org/10.47893/ijcns.2011.1019.

Full text
Abstract:
The development of modern technology paved the way for the utilization of surveillance cameras in streets, offices and other areas, but this leads to a significant threat to the privacy of visitors, passengers or employees, leakage of information, etc. To overcome this threat, privacy and security need to be incorporated into practical surveillance systems. This secures the video information, which resides in various video file types. In this process we used an efficient framework to preserve privacy while distributing a secret among N parties. In this paper we analyzed various techniques based on the Chinese Remainder Theorem.
APA, Harvard, Vancouver, ISO, and other styles
45

Moudafi, Abdellatif. "Difference of two norms-regularizations for Q-Lasso." Applied Computing and Informatics ahead-of-print, ahead-of-print (August 5, 2020). http://dx.doi.org/10.1016/j.aci.2018.07.002.

Full text
Abstract:
The focus of this paper is on Q-Lasso, introduced in Alghamdi et al. (2013), which extended the Lasso by Tibshirani (1996). The closed convex subset Q, belonging to a Euclidean m-space for m ∈ ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on a recent work by Wang (2013), we are interested in two new penalty methods for Q-Lasso relying on two types of difference-of-convex-functions (DC for short) programming, where the DC objective functions are the difference of the l1 and l_{σq} norms and the difference of the l1 and l_r norms with r > 1. By means of a generalized q-term shrinkage operator upon the special structure of the l_{σq} norm, we design a proximal gradient algorithm for handling the DC l1 − l_{σq} model. Then, based on the majorization scheme, we develop a majorized penalty algorithm for the DC l1 − l_r model. The convergence results of our new algorithms are presented as well. We would like to emphasize that extensive simulation results in the case Q = {b} show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art l1 and l_p (p ∈ (0, 1)) models, see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated and which also used the largest-q norm l_{σq}, and we present numerical results that show the efficiency of our DC algorithm in comparison with other methods using other penalty terms in the context of quadratic programming, see Jun-ya et al. (2017).
APA, Harvard, Vancouver, ISO, and other styles
46

Saeed, Muhammad, Asad Mehmood, and Amna Anwar. "An extension of TOPSIS based on linguistic terms in triangular intuitionistic fuzzy structure." Punjab University Journal of Mathematics, June 25, 2021, 409–24. http://dx.doi.org/10.52280/pujm.2021.530604.

Full text
Abstract:
Chen [24] introduced the extension of TOPSIS to the fuzzy structure, while this article stretches the modern approach of TOPSIS to the intuitionistic fuzzy framework. Linguistic terms are used in this study to evaluate the weight of each criterion and the rating of alternatives in the context of triangular intuitionistic fuzzy numbers. A new intuitionistic fuzzy positive ideal solution (IFPIS) and intuitionistic fuzzy negative ideal solution (IFNIS) are proposed in this model of extended TOPSIS. A Euclidean distance between two triangular intuitionistic fuzzy numbers is introduced to calculate the separation of each alternative from both the IFPIS and the IFNIS. The proposed model's mechanism is presented with the help of an algorithm, and then it is applied to a personnel selection problem. Finally, a comparative study is given between this model and other TOPSIS techniques.
APA, Harvard, Vancouver, ISO, and other styles
47

Joey, Hawra’a Lateef, Ahlam Hanoon Al-sudani, and Maher Faik Esmaile. "Ultrasound Images Registration Based on Optimal Feature Descriptor Using Speeded Up Robust Feature." Iraqi Journal of Science, September 29, 2020, 2395–407. http://dx.doi.org/10.24996/ijs.2020.61.9.26.

Full text
Abstract:
Image registration plays a significant role in the medical image processing field. This paper proposes an improvement in the accuracy and performance of the Speeded-Up Robust Features (SURF) algorithm to create Extended Field of View (EFoV) ultrasound (US) images by applying different matching measures. These measures include the Euclidean distance, cityblock distance, variation, and correlation in the matching stage built into the SURF algorithm. The US image registration (fusion) was implemented based on the control points obtained from the matching measures used. Selecting the matched points with the highest frequency was proposed in this work to perform and enhance the EFoV of the US images, since the most accurate matching points would then be selected. The resulting fused images of these applied methods were evaluated subjectively and objectively. The objective assessment was conducted by calculating the execution time, peak signal-to-noise ratio (PSNR), and signal-to-noise ratio (SNR) of the registered images against a reference image fused manually by a physician. The results showed that the cityblock distance gave the best result, since it had the highest PSNR and SNR in addition to the lowest execution time.
APA, Harvard, Vancouver, ISO, and other styles
48

Cabrera Aldaya, Alejandro, Cesar Pereida García, and Billy Bob Brumley. "From A to Z: Projective coordinates leakage in the wild." IACR Transactions on Cryptographic Hardware and Embedded Systems, June 19, 2020, 428–53. http://dx.doi.org/10.46586/tches.v2020.i3.428-453.

Full text
Abstract:
At EUROCRYPT 2004, Naccache et al. showed that the projective coordinates representation of the resulting point of an elliptic curve scalar multiplication potentially allows recovery of some bits of the scalar. However, this attack has received little attention from the scientific community, and the status of deployed mitigations against it in widely adopted cryptography libraries is unknown. In this paper, we aim to fill this gap by analyzing several cryptography libraries in this context. To demonstrate the applicability of the attack, we use a side-channel attack to exploit this vulnerability within libgcrypt in the context of ECDSA. To the best of our knowledge, this is the first practical attack instance. It targets the insecure binary extended Euclidean algorithm implementation, using a microarchitectural side-channel attack that allows recovering the projective representation of the output point of scalar multiplication during ECDSA signature generation. We captured 100k traces to estimate the number of traces an attacker would need to compromise the libgcrypt ECDSA implementation, resulting in fewer than 2k traces for the commonly used elliptic curve secp256r1 and demonstrating the attack's feasibility. During exploitation, we found two additional vulnerabilities. However, we remark that the purpose of this paper is not merely to exploit a library but to provide an analysis of the status of the projective coordinates vulnerability in widely deployed open-source libraries, filling a gap between its original description in the academic literature and the adoption of countermeasures to thwart it in real-world applications.
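Since the vulnerable routine named above is the binary extended Euclidean algorithm, a textbook sketch of its non-hardened form (computing a⁻¹ mod m for odd m) shows where the leakage comes from; this is a generic illustration, not libgcrypt's actual code.

```python
def beea_inverse(a, m):
    """Textbook binary extended Euclidean algorithm for a^-1 mod m, m odd.
    The parity-dependent inner loops below execute a secret-dependent number
    of times, which is exactly the kind of branch behavior a
    microarchitectural side channel can observe."""
    u, v = a, m
    x1, x2 = 1, 0
    while u != 0:
        while u % 2 == 0:                      # secret-dependent branching
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + m) // 2
        while v % 2 == 0:
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + m) // 2
        if u >= v:
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x2 % m  # valid inverse only when gcd(a, m) == 1 (v holds the gcd)
```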
APA, Harvard, Vancouver, ISO, and other styles
49

Danvy, Olivier, and Mayer Goldberg. "Partial Evaluation of the Euclidian Algorithm (Extended Version)." BRICS Report Series 4, no. 1 (January 1, 1997). http://dx.doi.org/10.7146/brics.v4i1.18780.

Full text
Abstract:
Some programs are easily amenable to partial evaluation because their control flow clearly depends on one of their parameters. Specializing such programs with respect to this parameter eliminates the associated interpretive overhead. Some other programs, however, do not exhibit this interpreter-like behavior. Each of them presents a challenge for partial evaluation. The Euclidian algorithm is one of them, and in this article, we make it amenable to partial evaluation. We observe that the number of iterations in the Euclidian algorithm is bounded by a number that can be computed given either of the two arguments. We thus rephrase this algorithm using bounded recursion. The resulting program is better suited for automatic unfolding and thus for partial evaluation. Its specialization is efficient. Keywords: partial evaluation, scientific computation.
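The key observation lends itself to a very small sketch: once either argument is known, a trip count can be fixed statically, here using Lamé's classical bound (our choice of bound for illustration; the paper derives and uses its own).

```python
def gcd_bounded(a, b):
    """Euclid's algorithm with a statically computable trip count: by Lame's
    theorem the number of division steps is at most about 5 times the number
    of decimal digits of the smaller argument, so the loop below has a bound
    depending on only one argument -- which is what makes the program
    unfoldable, and hence amenable to partial evaluation."""
    bound = 5 * len(str(min(a, b))) + 1
    for _ in range(bound):       # fixed, data-independent iteration count
        if b == 0:
            break
        a, b = b, a % b
    return a
```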
APA, Harvard, Vancouver, ISO, and other styles
50

Mandava, Pitchaiah, Michael E. Brooks, Chase S. Krumpelman, and Thoma A. Kent. "Abstract 2374: A New More Sensitive Method to Assess Balance Among Stroke Trial Populations." Stroke 43, suppl_1 (February 2012). http://dx.doi.org/10.1161/str.43.suppl_1.a2374.

Full text
Abstract:
Background: Stroke outcome is dependent on baseline factors such as NIHSS and age. Relationships between these variables and outcomes are often non-linear, and imbalances can influence outcomes, particularly in subgroup analyses with smaller numbers of subjects. Balance in baseline factors is typically compared by Wilcoxon rank-sum, t-test, or ANOVA. Because of non-linearity, these tests may be insensitive to important differences in the distributions of these factors, especially when multiple factors are considered simultaneously. We adapted a multi-dimensional extension of the Kolmogorov-Smirnov (KS) test proposed by Fasano and Franceschini (FF) to compare population distributions. The FF algorithm provides a method to calculate the KS distance (KSD) between two distributions in multiple dimensions, from which a probability value can be obtained. We hypothesized that the FF algorithm would be more sensitive than traditional statistical tests in determining whether baseline factors differ between two trial arms. We further show that matching for baseline variables (nearest-neighbor Euclidean matching, pPAIRS©; Mandava Kent Stroke 2010) improves the KSD, indicating more closely matched populations. Methods: The NINDS database was used for this study (ntis.gov). The subgroup of rt-PA and placebo treated normoglycemic subjects with large artery stroke was analyzed. Median and mean NIHSS and age were compared, and the KSD and a p value were calculated using a custom Matlab® program, pPOPULATION©. rt-PA and placebo subjects were then matched using pPAIRS© and outliers eliminated. The KSD and p value for the post-matched groups were calculated. Results: The left half of the table shows the pre-match comparisons. Baseline variables were not different using the usual tests. A KSD value of 0.283, however, yielded p = 0.008, suggesting that the population distributions are indeed different when two variables are considered simultaneously. The right half of the table shows the post-match comparisons of baseline variables. The KSD value, 0.217, is lower and is associated with a p value of 0.175, indicating that the post-matched distributions are similar. (Table footnotes: a, Wilcoxon rank-sum; b, Student t-test.) Conclusion: We demonstrate here a new application of a 2D version of the KS distance to verify the similarity of stroke populations and show that it is more sensitive than traditional difference testing. This finding is important because baseline imbalances are critical to accurate assessment of outcome. The algorithm can be further extended to additional dimensions (e.g., glucose). Its relative advantages over other methods will be discussed.
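For orientation only, a naive sketch of the two-sample Fasano-Franceschini statistic the authors adapt: at every data point of either sample, compare the fractions of each sample falling in the four surrounding quadrants and keep the largest discrepancy. This quadratic-time sketch omits tie handling and the p-value approximation, and is not the authors' pPOPULATION© code.

```python
import numpy as np

def ks2d_distance(x1, y1, x2, y2):
    """Largest quadrant-wise difference of empirical fractions between two
    bivariate samples (x1, y1) and (x2, y2)."""
    x1, y1, x2, y2 = map(np.asarray, (x1, y1, x2, y2))
    d = 0.0
    for xs, ys in ((x1, y1), (x2, y2)):          # anchor at points of both samples
        for xo, yo in zip(xs, ys):
            for sx in (np.less, np.greater):     # left/right half-planes
                for sy in (np.less, np.greater): # lower/upper half-planes
                    f1 = np.mean(sx(x1, xo) & sy(y1, yo))
                    f2 = np.mean(sx(x2, xo) & sy(y2, yo))
                    d = max(d, abs(f1 - f2))
    return d
```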
APA, Harvard, Vancouver, ISO, and other styles