
Journal articles on the topic 'Constrained optimization. Electronic data processing'



Consult the top 50 journal articles for your research on the topic 'Constrained optimization. Electronic data processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Gosain, Anjana, and Kavita Sachdeva. "Materialized View Selection for Query Performance Enhancement Using Stochastic Ranking Based Cuckoo Search Algorithm." International Journal of Reliability, Quality and Safety Engineering 27, no. 03 (September 18, 2019): 2050008. http://dx.doi.org/10.1142/s0218539320500084.

Abstract:
Materialized view selection (MVS) improves query processing efficiency and supports effective decision making in a data warehouse. It is an NP-hard constrained optimization problem involving space and cost constraints. Various optimization algorithms have been proposed in the literature for the optimal selection of materialized views, but few works handle the constraints in MVS. In this study, the authors propose the Cuckoo Search Algorithm (CSA) for optimization and Stochastic Ranking (SR) for constraint handling in solving the MVS problem. The motivation for integrating CS with SR is that the CS algorithm has fewer parameters to fine-tune than genetic and Particle Swarm Optimization (PSO) algorithms, and the ranking method of SR handles constraints effectively. To demonstrate its efficiency and performance, the proposed algorithm, the Stochastic Ranking based Cuckoo Search Algorithm for Materialized View Selection (SRCSAMVS), was compared with PSO, a genetic algorithm, and the constrained evolutionary optimization algorithm proposed by Yu et al. SRCSAMVS outperforms these in terms of query processing cost and scalability.
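The stochastic ranking step this abstract relies on is a bubble-sort-like procedure in the style of Runarsson and Yao. Purely as a hedged illustration (a generic sketch, not the authors' SRCSAMVS code), the ranking rule might look like this:

```python
import random

def stochastic_rank(objs, viols, pf=0.45, seed=0):
    """Stochastic ranking: bubble-sort indices, comparing by objective when
    both individuals are feasible (or with probability pf otherwise), and by
    constraint violation the rest of the time. Smaller is better for both."""
    rng = random.Random(seed)
    n = len(objs)
    idx = list(range(n))
    for _ in range(n):
        swapped = False
        for j in range(n - 1):
            a, b = idx[j], idx[j + 1]
            if (viols[a] == 0 and viols[b] == 0) or rng.random() < pf:
                if objs[a] > objs[b]:           # rank by objective value
                    idx[j], idx[j + 1] = b, a
                    swapped = True
            elif viols[a] > viols[b]:           # rank by constraint violation
                idx[j], idx[j + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```

A balance parameter pf below 0.5 biases the ranking toward feasible solutions; candidate view sets would then be drawn from the top of the ranked population.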
2

Li, Guangshun, Jiping Wang, Junhua Wu, and Jianrong Song. "Data Processing Delay Optimization in Mobile Edge Computing." Wireless Communications and Mobile Computing 2018 (2018): 1–9. http://dx.doi.org/10.1155/2018/6897523.

Abstract:
With the development of the Internet of Things (IoT), the number of mobile terminal devices is increasing rapidly. Because cloud-only processing suffers from high transmission delay and limited bandwidth, in this paper we propose a novel three-layer network architecture model that combines cloud computing and edge computing (abbreviated as CENAM). In the edge computing layer, we propose a computational scheme of mutual cooperation between edge devices and use the Kruskal algorithm to compute the minimum spanning tree of the weighted undirected graph consisting of edge nodes, so as to reduce the communication delay between them. We then divide and assign the tasks based on a constrained optimization problem and solve for the computation delay of edge nodes using the Lagrange multiplier method. In the cloud computing layer, we focus on a balanced transmission method to reduce the data transmission delay from edge devices to cloud servers and obtain an optimal allocation matrix, which reduces the data communication delay. Finally, according to the characteristics of cloud servers, we solve for the computation delay of the cloud computing layer. Simulations show that CENAM achieves better data processing delay than traditional cloud computing.
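The Kruskal minimum-spanning-tree step is standard; a self-contained union-find sketch on a toy edge list (illustrative only, not the paper's implementation):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: edges are (weight, u, v) tuples over n nodes;
    returns (total_weight, tree_edges). Union-find with path compression
    keeps the scan near O(E log E)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    total, tree = 0, []
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # accept only if no cycle
            parent[ru] = rv
            total += w
            tree.append((u, v))
    return total, tree
```

On a graph of edge nodes weighted by pairwise communication delay, the resulting tree gives a minimum-total-delay backbone for cooperation.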
3

Wang, Gaojian, Gerd Ascheid, Yanlu Wang, Oner Hanay, Renato Negra, Matthias Herrmann, and Norbert Wehn. "Optimization of Wireless Transceivers under Processing Energy Constraints." Frequenz 71, no. 9-10 (September 26, 2017): 379–88. http://dx.doi.org/10.1515/freq-2017-0150.

Abstract:
The focus of the article is on achieving maximum data rates under a processing energy constraint. For a given amount of processing energy per information bit, the overall power consumption increases with the data rate. When targeting data rates beyond 100 Gb/s, the system's overall power consumption soon exceeds the power that can be dissipated without forced cooling. To achieve a maximum data rate under this power constraint, the processing energy per information bit must be minimized. Therefore, in this article, suitable processing-efficient transmission schemes together with energy-efficient architectures and their implementations are investigated in a true cross-layer approach. Target use cases are short-range wireless transmitters working at carrier frequencies around 60 GHz and bandwidths between 1 GHz and 10 GHz.
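The underlying constraint is simply that the achievable rate equals the dissipation budget divided by the processing energy per bit. A quick illustration with assumed numbers (a 1 W budget and 10 pJ/bit, values chosen here for illustration, not taken from the article):

```python
def max_rate_bps(power_budget_w, energy_per_bit_j):
    """Data rate achievable when every information bit costs a fixed
    amount of processing energy: rate = P / E_bit."""
    return power_budget_w / energy_per_bit_j

# Assumed figures: a 1 W dissipation cap and 10 pJ per bit give 100 Gb/s.
rate = max_rate_bps(1.0, 10e-12)
```

Halving the energy per bit doubles the rate achievable under the same cooling limit, which is why the article targets energy-per-bit minimization.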
4

Cao, Xuanyu, Junshan Zhang, and H. Vincent Poor. "A Virtual-Queue-Based Algorithm for Constrained Online Convex Optimization With Applications to Data Center Resource Allocation." IEEE Journal of Selected Topics in Signal Processing 12, no. 4 (August 2018): 703–16. http://dx.doi.org/10.1109/jstsp.2018.2827302.

5

Ren, Zhimin. "Data Processing Platform of Cloud Computing and Its Performance Analysis Based on Photoelectric Hybrid Interconnection Architecture." Journal of Nanoelectronics and Optoelectronics 15, no. 6 (June 1, 2020): 743–52. http://dx.doi.org/10.1166/jno.2020.2805.

Abstract:
The data processing platform is the core support platform of cloud computing. Electric interconnection architectures increase the complexity of the network topology, whereas optical interconnection architectures avoid this, so cloud computing platforms based on optical interconnection have become a research hotspot. This paper focuses on the distributed optical interconnection architecture of the cloud computing data processing platform. Combining the hybrid mechanism of optical circuit switching and electric packet switching, it can meet a variety of traffic requirements while improving the switching mechanism, communication strategy, and router structure. Although the hybrid optoelectronic interconnection architecture improves network delay and throughput, network consumption remains a problem. Combined with the network characteristics of the cloud computing data processing platform (a wireless mesh structure), the network topology algorithm is studied, and the relationship between the topology and the maximum number of allocable channels is analyzed. Furthermore, an equation for calculating topological reliability is defined and an optimization model for topology design is proposed, according to which the cloud computing data processing platform is further optimized under the photoelectric hybrid interconnection architecture. In the experiments, before topology optimization, varying the message length shows that adding optical circuit switching helps achieve large-capacity transmission and effectively reduces delay. After adopting the topology-optimized structure, it is compared with the photoelectric hybrid data processing platform without topology optimization. Under different reliability constraints, the throughput and end-to-end delay of the network are significantly improved, which shows that a cloud computing data processing platform based on the photoelectric hybrid interconnection architecture is feasible.
6

Zhong, Jianying, Jibin Zhu, Yonghao Guo, Yunxin Chang, and Chaofeng Zhu. "A Customer Clustering Algorithm for Power Logistics Distribution Network Structure and Distribution Volume Constraints." International Journal of Circuits, Systems and Signal Processing 15 (August 25, 2021): 1051–56. http://dx.doi.org/10.46300/9106.2021.15.113.

Abstract:
Customer clustering technology for the distribution process is widely used in location selection, distribution route optimization, and vehicle scheduling optimization for power logistics distribution centers. Aiming at the problem of customer clustering when the distribution center location is unknown, this paper proposes a clustering algorithm that considers the distribution network structure and distribution volume constraints, which makes up for the defect that the classical Euclidean distance does not consider distribution road information. The proposed logistics distribution customer clustering algorithm improves the CLARANS algorithm so that the clustering results meet the customer distribution volume constraints. Using the single-vehicle load rate, sufficient conditions are given for the logistics distribution customer clustering problem with sum constraints to be solvable, which effectively solves this class of problems. The results show that the clustering algorithm can effectively deal with large-scale spatial data sets and that the clustering process is not affected by isolated customers. The clustering results can be effectively applied to distribution center location, distribution cost optimization, distribution route optimization, and the distribution area division of vehicle scheduling optimization.
7

Salim, Ibrahim, and A. Hamza. "Fast Feature-Preserving Approach to Carpal Bone Surface Denoising." Sensors 18, no. 7 (July 21, 2018): 2379. http://dx.doi.org/10.3390/s18072379.

Abstract:
We present a geometric framework for surface denoising using graph signal processing, an emerging field that aims to develop new tools for processing and analyzing graph-structured data. The proposed approach is formulated as a constrained optimization problem whose objective function consists of a fidelity term specified by a noise model and a regularization term associated with prior data. Both terms are weighted by a normalized mesh Laplacian, which is defined in terms of a data-adaptive kernel similarity matrix in conjunction with matrix balancing. Minimizing the objective function reduces to iteratively solving a sparse system of linear equations via the conjugate gradient method. Extensive experiments on noisy carpal bone surfaces demonstrate the effectiveness of our approach in comparison with existing methods, using both qualitative and quantitative comparisons across various evaluation metrics.
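The conjugate gradient solve mentioned above is standard; a textbook dense-matrix sketch for a symmetric positive-definite system (the paper's actual systems are sparse, so this is illustrative only):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system
    A x = b, with A given as a dense list of rows."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x, with x = 0
    p = r[:]                       # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:           # converged
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations, which is why it suits the repeated sparse solves in an iterative denoising loop.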
8

Gupta, Saurav, Sachin N. Kapgate, and Ajit Kumar Sahoo. "In-Network Distributed Least-Mean-Square Identification of Nonlinear Systems Using Volterra–Laguerre Model." Journal of Circuits, Systems and Computers 29, no. 02 (May 24, 2019): 2050030. http://dx.doi.org/10.1142/s0218126620500309.

Abstract:
It is of great importance to model the behavior of nonlinear systems in a distributed fashion using wireless sensor networks (WSNs) because of their computation- and energy-efficient data processing. Least squares methods have previously been employed to estimate the parameters of the Volterra model for nonlinear systems, but it is more convenient and advantageous to use an in-network distributed identification strategy for real-time modeling and control. In this context, a black-box model with a generalized structure and remarkable modeling ability, the Volterra–Laguerre model, is considered, and distributed signal processing is employed to identify nonlinear systems in a distributed manner. The model cost function is expressed as a separable constrained minimization problem, which is decomposed into augmented Lagrangian form to facilitate distributed optimization. The alternating direction method of multipliers is then employed to estimate the optimal parameters of the model, and convergence of the algorithm is guaranteed by a mean stability analysis. Simulation results for a nonlinear system are obtained under a noisy environment and plotted against the results of noncooperative and centralized methods, demonstrating the effectiveness and superior performance of the proposed algorithm.
9

Akhatov, A. R., and F. M. Nazarov. "METHODS OF IMPLEMENTATION OF BLOCKCHAIN TECHNOLOGIES ON THE BASIS OF CRYPTOGRAPHIC PROTECTION FOR THE DATA PROCESSING SYSTEM WITH CONSTRAINT AND LAGGING INTO ELECTRONIC DOCUMENT MANAGEMEN." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 184 (October 2019): 3–12. http://dx.doi.org/10.14489/vkit.2019.10.pp.003-012.

Abstract:
The problem of designing applications with constraint and lagging in ED (Electronic Document) management based on blockchain technologies is considered, with the aim of ensuring a new level of security, reliability, and transparency of data processing. Increasing the reliability of information in constrained and lagging ED management systems of enterprises and organizations, during the collection, transmission, storage, and processing of EDs, on the basis of new and little-studied blockchain-type data processing technologies is a relevant and promising research topic. Important potential advantages of using transaction blocks built according to certain rules in such systems are: security ensured by encrypting transactions for subsequent confirmation; the impossibility of unauthorized changes, since the current blockchain state depends on previous transactions; transparency and reliability of procedures due to public and distributed storage; and interaction of a large number of users without "trusted intermediaries". Studies show that existing algorithms for adding blocks can satisfy the requirements of decentralization, openness of the entered data, and immutability of data once entered into the system. However, mathematical-cryptographic information protection must be developed separately for each designed system. The task of providing and formulating data reliability control rules for constrained and lagging ED circulation, based on cryptographic methods of encrypting the transaction blocks constituting the blockchain, is formulated. The adopted approaches form a methodology for supporting constrained and lagging electronic document systems based on a new database architecture.
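The tamper-evidence property emphasized here (the current blockchain state depends on all previous transactions) can be illustrated with a minimal hash chain. This is a generic sketch, not the authors' system:

```python
import hashlib
import json

def make_block(index, payload, prev_hash):
    """Each block commits to the previous block's hash, so altering any
    earlier record invalidates every hash that follows it."""
    body = {"index": index, "payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["hash"] = digest
    return body

def verify_chain(chain):
    """Recompute every digest and check each block's back-link."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("index", "payload", "prev")}
        expect = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expect:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Modifying any stored document payload breaks verification for the whole chain, which is the integrity guarantee the abstract describes for ED records.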
10

RECINE, GREG, and DWIGHT L. WOOLARD. "PREDICTING THE PATH OF ELECTRONIC TRANSPORT THROUGH A MOLECULAR DEVICE VIA A MOUNTAIN-PASS ALGORITHM." International Journal of High Speed Electronics and Systems 18, no. 01 (March 2008): 223–28. http://dx.doi.org/10.1142/s0129156408005291.

Abstract:
The so-called "mountain-pass" theorem allows for finding a critical point on the path between two points on a multidimensional contour where the maximal elevation is minimal. By implementing the "elastic string algorithm", it is possible not only to find the critical point but to compute the mountain pass itself on a finite-dimensional contour. For a given molecule that sits between the two probes making up a nanostructure device, we propose that the mountain pass is a likely path of electron transport through the molecule, the contour being the electronic potential of the molecule. The potential along this path is used as the input potential for SETraNS, a 1D Wigner-Poisson electron transport solver, in order to explore the current-bias characteristics of the molecule in such a device. To calculate the mountain pass, the elastic string algorithm is used to set up a constrained non-linear optimization problem, which is in turn solved via a Monte Carlo method. We compute the mountain pass for a well-known test contour in order to show the validity of this approach. The procedure developed here is to be combined with conformational analysis via the molecular modeling program AMBER and the quantum transport program SETraNS in order to predict molecular function. When achieved, this combined procedure will allow for the better design and implementation of nanoscale molecular devices for applications such as sensing, switching, and data processing.
11

Al mahdawi, Raghda Salam, and Huda M. Salih. "Optimization of open flow controller placement in software defined networks." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 4 (August 1, 2021): 3145. http://dx.doi.org/10.11591/ijece.v11i4.pp3145-3153.

Abstract:
The world is entering the era of Big Data, in which computer networks are an essential part. However, the current network architecture is not well suited to such a leap. Software defined networking (SDN) is a new network architecture that advocates separating the control and data planes of network devices by centralizing the former in high-level, efficient supervisory devices called controllers. This paper proposes a mathematical model that helps optimize the locations of the controllers within the network while minimizing the overall cost under realistic constraints. Our method finds the minimum cost of placing the controllers, where the costs are network latency, controller processing power, and link bandwidth. Different types of network topologies are adopted to consider the data profile of the controllers, the controller links, and the locations of switches. The results showed that as the size of the input data increased, the time to find the optimal solution also increased non-polynomially, while the cost of the solution increased linearly with the input size. Furthermore, when more possible controller locations were allowed for the same number of switches, the cost was found to be lower.
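For small instances, this kind of placement problem can be checked by brute force. A sketch with a hypothetical latency matrix, covering only the latency term of the cost (the paper's model also includes processing power and bandwidth):

```python
from itertools import combinations

def place_controllers(latency, k):
    """Brute-force k-median-style placement: choose the k candidate sites
    minimising total switch-to-nearest-controller latency.
    latency[s][c] is the latency from switch s to candidate site c."""
    n_sites = len(latency[0])
    best = None
    for sites in combinations(range(n_sites), k):
        # each switch attaches to its nearest selected controller
        cost = sum(min(row[c] for c in sites) for row in latency)
        if best is None or cost < best[0]:
            best = (cost, sites)
    return best
```

Exhaustive search is exponential in the number of candidate sites, which matches the abstract's observation that solution time grows non-polynomially with input size.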
12

Ren, Xiaozhen, and Yuying Jiang. "Spatial Domain Terahertz Image Reconstruction Based on Dual Sparsity Constraints." Sensors 21, no. 12 (June 15, 2021): 4116. http://dx.doi.org/10.3390/s21124116.

Abstract:
Terahertz time domain spectroscopy imaging systems suffer from long image acquisition times and massive data processing, while reducing the sampling rate degrades the reconstruction quality. To solve this issue, a novel terahertz imaging model, the dual sparsity constraints terahertz image reconstruction model (DSC-THz), is proposed in this paper. DSC-THz fuses sparsity constraints on the terahertz image in the wavelet and gradient domains into the reconstruction model. Differing from the conventional wavelet transform, we introduce a non-linear exponentiation transform into the shift-invariant wavelet coefficients, which amplifies the significant coefficients and suppresses the small ones. Simultaneously, the sparsity of the terahertz image in the gradient domain is used to enhance the sparsity of the image, which has the advantage of preserving edges. The split Bregman iteration scheme is utilized to tackle the optimization problem: using the idea of separation of variables, the problem is decomposed into subproblems that are solved individually. Compared with the conventional single sparsity constraint model, experiments verified that the proposed approach achieves higher terahertz image reconstruction quality at low sampling rates.
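Split Bregman iterations repeatedly apply a closed-form shrinkage (soft-thresholding) step to the auxiliary sparsity variables. A one-function sketch of that operator (generic, not the DSC-THz code):

```python
def soft_threshold(x, lam):
    """Shrinkage operator used inside split Bregman iterations: the
    closed-form minimiser of lam*|d| + 0.5*(d - x)**2 over scalar d."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0
```

Applied componentwise to wavelet or gradient coefficients, it zeroes out small entries and shrinks large ones toward zero, which is exactly how the sparsity constraints act during reconstruction.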
13

Xu, Zhanyang, Renhao Gu, Tao Huang, Haolong Xiang, Xuyun Zhang, Lianyong Qi, and Xiaolong Xu. "An IoT-Oriented Offloading Method with Privacy Preservation for Cloudlet-Enabled Wireless Metropolitan Area Networks." Sensors 18, no. 9 (September 10, 2018): 3030. http://dx.doi.org/10.3390/s18093030.

Abstract:
With the development of Internet of Things (IoT) technology, a vast amount of IoT data is generated by mobile applications on mobile devices. Cloudlets provide a paradigm that allows mobile applications and the generated IoT data to be offloaded from mobile devices to cloudlets for processing and storage through the access points (APs) of Wireless Metropolitan Area Networks (WMANs). Since most IoT data relates to personal privacy, it is necessary to pay attention to data transmission security. However, it remains a challenge to optimize data transmission time, energy consumption, and resource utilization while preserving privacy in a cloudlet-enabled WMAN. In this paper, an IoT-oriented offloading method with privacy preservation, named IOM, is proposed to solve this problem. The task-offloading strategy with privacy preservation in WMANs is analyzed and modeled as a constrained multi-objective optimization problem. The Dijkstra algorithm is then employed to evaluate the shortest paths between APs in the WMAN, and the nondominated sorting differential evolution algorithm (NSDE) is adopted to optimize the proposed multi-objective problem. Finally, the experimental results demonstrate that the proposed method is both effective and efficient.
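The Dijkstra step used to evaluate shortest paths between APs is standard; a compact sketch over an adjacency-dict graph (toy weights, not WMAN data):

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over a dict-of-dicts weighted graph,
    e.g. graph = {"a": {"b": 1}, ...} with non-negative edge weights."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

In the paper's setting the nodes would be APs and the weights transmission costs; the resulting distances feed the multi-objective offloading optimization.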
14

Tucci, Mauro, Sami Barmada, Alessandro Formisano, and Dimitri Thomopulos. "A Regularized Procedure to Generate a Deep Learning Model for Topology Optimization of Electromagnetic Devices." Electronics 10, no. 18 (September 7, 2021): 2185. http://dx.doi.org/10.3390/electronics10182185.

Abstract:
The use of behavioral models based on deep learning (DL) to accelerate electromagnetic field computations has recently been proposed for solving complex electromagnetic problems. Such problems usually require time-consuming numerical analysis, while DL allows achieving the topologically optimized design of electromagnetic devices using desktop-class computers and reasonable computation times. An unparametrized bitmap representation of the geometries to be optimized, a highly desirable feature for discovering completely new solutions, is handled naturally by DL models. On the other hand, optimization algorithms do not easily cope with high-dimensional input data, particularly because it is difficult to enforce feasibility of the searched solutions and make them belong to the expected manifolds. In this work, we propose the use of a variational autoencoder as a data regularization/augmentation tool in the context of topology optimization. The optimization is carried out using a gradient descent algorithm, with the DL neural network used as a surrogate model to accelerate the resolution of single trial cases in the course of optimization. The variational autoencoder and the surrogate model are trained simultaneously in a multi-model custom training loop that minimizes the total loss, i.e., the combination of the two models' losses. Using the TEAM 25 problem (a benchmark for assessing electromagnetic numerical field analysis) as a test bench, we provide a comparison of computational times and design quality between a "classical" approach and the DL-based approach. Preliminary results show that the variational autoencoder manages to regularize the resolution process and transforms a constrained optimization into an unconstrained one, improving both the quality of the final solution and the performance of the resolution process.
15

Jabłoński, Bartłomiej, Dariusz Makowski, and Piotr Perek. "Implementation of Thermal Event Image Processing Algorithms on NVIDIA Tegra Jetson TX2 Embedded System-on-a-Chip." Energies 14, no. 15 (July 22, 2021): 4416. http://dx.doi.org/10.3390/en14154416.

Abstract:
Advances in Infrared (IR) cameras, as well as hardware computational capabilities, have contributed towards qualifying vision systems as reliable plasma diagnostics for nuclear fusion experiments. Robust autonomous machine protection and plasma control during operation require real-time processing that can be facilitated by Graphics Processing Units (GPUs). One of the current aims of image plasma diagnostics is thermal event detection and analysis with thermal imaging. This paper investigates the suitability of the NVIDIA Jetson TX2 Tegra-based embedded platform for real-time thermal event detection. Developing real-time processing algorithms on an embedded System-on-a-Chip (SoC) requires additional effort due to the constrained resources, yet low power consumption enables embedded GPUs to be applied in the MicroTCA.4 computing architecture that is prevalent in nuclear fusion projects. For this purpose, the authors have proposed, developed, and optimised GPU-accelerated algorithms using the software tools available for NVIDIA Tegra systems. Furthermore, the implemented algorithms are evaluated and benchmarked on Wendelstein 7-X (W7-X) stellarator experimental data against corresponding Central Processing Unit (CPU) implementations. Considerable improvement is observed for the accelerated algorithms, which enable real-time detection on the embedded SoC platform, and some limitations encountered when developing parallel image processing routines are described.
16

Kamaraj, Michael, and Balakrishnan. "Global Energy Minimization and Optimization of Multi-Target Tracking." Journal of Computational and Theoretical Nanoscience 14, no. 1 (January 1, 2017): 704–14. http://dx.doi.org/10.1166/jctn.2017.6261.

Abstract:
Recent multiple-target tracking methods aim to obtain the best possible set of trajectories within the time frame, and few constraints have been set to handle the wide area of trajectories by discrete mapping. In this novel approach to multi-target tracking, energy terms are formulated to attain a global optimization that covers the entire representation of the problem, including target tracking, operational representation, collision handling, and trajectory processing. Furthermore, two optimization strategies are used: gradient descent, performed on multiple feature spaces to obtain local minima of a density function from the given data sample, and gradient ascent, carried out to achieve a likelihood matching of the target and to handle partial evidence in the image; the uncertainty of the various targets is thereby minimized. Experiments are performed on an openly available dataset, and the mean target tracking accuracy and precision are studied to validate the proposed tracker.
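Plain gradient descent, the first of the two strategies mentioned, can be sketched generically (illustrative only; the paper applies it to a density over feature space, and ascent is the same loop with the step sign flipped):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Minimise a differentiable scalar field given its gradient function
    grad(x) -> list of partial derivatives. Gradient ascent would add the
    step instead of subtracting it."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

For a convex quadratic the iterates contract geometrically toward the minimiser; on the multimodal densities used in tracking, only a local minimum near the start point is found.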
17

Chinda, Betty, Katayoun Sepehri, Macy Zou, Mckenzie Braley, Antonina Garm, Grace Park, Kenneth Rockwood, and Xiaowei Song. "THE ELECTRONIC FRAILTY INDEX BASED ON THE COMPREHENSIVE GERIATRIC ASSESSMENT: DEVELOPMENT AND TESTING." Innovation in Aging 3, Supplement_1 (November 2019): S685—S686. http://dx.doi.org/10.1093/geroni/igz038.2529.

Abstract:
Frailty is characterized by loss of biological reserves across multiple systems and is associated with increased risks of adverse outcomes. A Frailty Index (FI) constructed from items of the Comprehensive Geriatric Assessment (CGA) has been validated in geriatric medicine settings to estimate the level of frailty. Traditionally, the CGA used a paper form and the CGA-based FI calculation was a manual process. Here, we report the building of an electronic version of the assessment for personal computers (PCs), i.e., a standalone eFI-CGA, to support frailty assessment at points of care. The eFI-CGA was implemented as a software tool on the WinForms platform. It automated the FI calculation by counting deficit accumulation across multiple domains assessing medical conditions, cognition, balance, and dependency in activities of daily living. Debugging, testing, and optimization were performed to enhance the software's automation accuracy (processing algorithm), user interface (user manual and feedback), and data quality control (missing data and value constraints). A systematically designed simulation dataset and anonymous real-world cases were both applied. The optimized assessment tool resulted in fast and convenient administration of the CGA and a 100% accuracy rate of the eFI-CGA automation to four decimal places. The standalone eFI-CGA implementation provides a PC-based software tool for geriatricians and primary and acute care providers, supporting early detection and management of frailty at points of care for older adults.
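The FI arithmetic itself is simple deficit accumulation: the sum of deficit scores divided by the number of items assessed. A hedged sketch of the rule (not the eFI-CGA source code; the None-for-missing convention is an assumption made here):

```python
def frailty_index(deficits):
    """Deficit-accumulation frailty index: each item is scored in [0, 1]
    (0 = deficit absent, 1 = fully present); items coded None (missing)
    are excluded from both the numerator and the denominator."""
    scored = [d for d in deficits if d is not None]
    if not scored:
        raise ValueError("no scored items")
    return sum(scored) / len(scored)
```

A higher index indicates a higher proportion of accumulated deficits and hence a higher estimated level of frailty.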
18

Zhu, Xianglin, Khalil Ur Rehman, Bo Wang, and Muhammad Shahzad. "Modern Soft-Sensing Modeling Methods for Fermentation Processes." Sensors 20, no. 6 (March 23, 2020): 1771. http://dx.doi.org/10.3390/s20061771.

Abstract:
For effective monitoring and control of the fermentation process, accurate real-time measurement of important variables is necessary. These variables are very hard to measure in real time due to constraints such as the time-varying behavior, nonlinearity, strong coupling, and complex mechanism of the fermentation process. Constructing soft sensors with outstanding performance and robustness has become a core issue in industrial procedures. In this paper, a comprehensive review of existing data pre-processing approaches, variable selection methods, data-driven (black-box) soft-sensing modeling methods, and optimization techniques is carried out. The data-driven methods used for soft-sensing modeling, such as support vector machines, multiple least squares support vector machines, neural networks, deep learning, fuzzy logic, and probabilistic latent variable models, are reviewed in detail. The optimization techniques used for the estimation of model parameters, such as the particle swarm optimization algorithm, ant colony optimization, artificial bee colony, the cuckoo search algorithm, and genetic algorithms, are also discussed. A comprehensive analysis of various soft-sensing models is presented in tabular form, highlighting the important methods used in the field of fermentation. More than 70 research publications on soft-sensing modeling methods for the estimation of variables have been examined and listed for quick reference. This review may serve as a useful reference point for researchers exploring opportunities for further enhancement in the field of soft-sensing modeling.
19

Ye, Miao, Ruoyu Wei, Wei Guo, Qiuxiang Jiang, Hongbing Qiu, and Yong Wang. "A New Method for Reconstructing Data on a Single Failure Node in the Distributed Storage System Based on the MSR Code." Wireless Communications and Mobile Computing 2021 (March 31, 2021): 1–14. http://dx.doi.org/10.1155/2021/5574255.

Abstract:
As a storage method for distributed storage systems, erasure codes save storage space and can repair the data of failed nodes. However, most studies of failed-node repair in the erasure-code setting consider only how heterogeneous link bandwidth restricts the repair rate, ignoring heterogeneous storage nodes, the cost of repair traffic during the repair process, and the influence of secondary node failures. An optimal repair strategy based on the minimum-storage regenerating (MSR) code and a hybrid genetic algorithm is proposed for single-node fault scenarios to solve the above problems. In this work, the single-node data repair problem is modeled as a constrained optimal Steiner tree problem that considers heterogeneous link bandwidth and heterogeneous node processing capacity, with repair traffic and repair delay as the optimization objectives. A hybrid genetic algorithm is then designed to solve the problem. The experimental results show that, at the same scales used in the MSR code cases, our approach is robust and its repair delay decreases by 10% and 55% compared with the conventional tree and star repair topologies, respectively; its repair traffic increases by 10% compared with the star topology and decreases by 40% compared with the conventional tree repair topology.
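A hybrid genetic algorithm pairs a standard GA loop with problem-specific operators. Purely as an illustration of the generic loop (a bit-string GA with tournament selection, not the authors' Steiner-tree encoding):

```python
import random

def genetic_minimise(cost, n_bits, pop_size=30, gens=60, p_mut=0.02, seed=1):
    """Generic GA skeleton: tournament selection, one-point crossover,
    and per-bit flip mutation, minimising cost over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if cost(a) <= cost(b) else b

    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [b ^ (rng.random() < p_mut)  # bit-flip mutation
                     for b in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)
```

A "hybrid" variant would decode each bit string into a repair tree and apply a local repair or improvement step before evaluating the cost function.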
APA, Harvard, Vancouver, ISO, and other styles
20

Hackenberg, Annika, Karl Worthmann, Torben Pätz, Dörthe Keiner, Joachim Oertel, and Kathrin Flaßkamp. "Neurosurgery planning based on automated image recognition and optimal path design." at - Automatisierungstechnik 69, no. 8 (August 1, 2021): 708–21. http://dx.doi.org/10.1515/auto-2021-0044.

Full text
Abstract:
Stereotactic neurosurgery requires careful planning of cannula paths to spare eloquent areas of the brain that, if damaged, would result in loss of essential neurological functions such as sensory processing, linguistic ability, vision, or motor function. We present an approach based on modeling, simulation, and optimization to set up a computational assistant tool. We focus on modeling the brain topology, constructing ellipsoidal approximations of voxel clouds based on processed MRI data. The outcome is integrated into a path-planning problem either via constraints or via penalization terms in the objective function. The surgical planning problem with obstacle avoidance is solved for different types of stereotactic cannulae using numerical simulations. We illustrate our method with a case study using real MRI data.
APA, Harvard, Vancouver, ISO, and other styles
21

Lee, Taekgyu, and Yeonsik Kang. "Performance Analysis of Deep Neural Network Controller for Autonomous Driving Learning from a Nonlinear Model Predictive Control Method." Electronics 10, no. 7 (March 24, 2021): 767. http://dx.doi.org/10.3390/electronics10070767.

Full text
Abstract:
Nonlinear model predictive control (NMPC) is based on a numerical optimization method that treats the target system dynamics as constraints. This optimization process requires a large amount of computational power, and the computation time is often unpredictable, which may cause the control update rate to overrun. Therefore, performance must be carefully balanced against computation time. To solve this computation problem, we propose a data-based control technique built on a deep neural network (DNN). The DNN is trained with closed-loop driving data from an NMPC. The proposed DNN control technique based on NMPC driving data achieves control characteristics comparable to those of a well-tuned NMPC within a reasonable computation period, which is verified with an experimental scaled-car platform and realistic numerical simulations.
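The core idea above, training a network on an expert controller's closed-loop data so inference replaces online optimization, can be sketched with plain numpy. Here the "expert" is a simple linear feedback law standing in for the NMPC, and the one-hidden-layer network and its training hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "closed-loop driving data": states X and expert controls U.
# A linear feedback law stands in for the NMPC expert.
X = rng.uniform(-1, 1, (500, 2))
U = X @ np.array([-1.2, 0.8])              # expert control for each state

# One-hidden-layer network trained by full-batch gradient descent.
W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, 16), 0.0
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)               # hidden activations
    pred = H @ W2 + b2                     # scalar control output
    err = pred - U
    # backpropagation of the mean-squared-error loss
    gW2 = H.T @ err / len(X)
    gb2 = err.mean()
    dH = np.outer(err, W2) * (1 - H ** 2)  # tanh derivative
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - U) ** 2))
```

At deployment, evaluating the trained network is a handful of matrix multiplications with a fixed, predictable cost, which is precisely the advantage over solving the NMPC optimization at every control step.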
APA, Harvard, Vancouver, ISO, and other styles
22

Ramadurgam, Srikanth, and Darshika G. Perera. "An Efficient FPGA-Based Hardware Accelerator for Convex Optimization-Based SVM Classifier for Machine Learning on Embedded Platforms." Electronics 10, no. 11 (May 31, 2021): 1323. http://dx.doi.org/10.3390/electronics10111323.

Full text
Abstract:
Machine learning is becoming a cornerstone of smart and autonomous systems. Machine learning algorithms can be categorized into supervised learning (classification) and unsupervised learning (clustering). Among the many classification algorithms, the Support Vector Machine (SVM) classifier is one of the most commonly used. By incorporating convex optimization techniques into the SVM classifier, we can further enhance its accuracy and classification process by finding the optimal solution. Many machine learning algorithms, including SVM classification, are compute- and data-intensive, requiring significant processing power. Furthermore, many machine learning algorithms have found their way into portable and embedded devices, which have stringent resource requirements. In this research work, we introduce a novel and efficient Field Programmable Gate Array (FPGA)-based hardware accelerator for a convex optimization-based SVM classifier for embedded platforms, considering the constraints associated with these platforms and the requirements of the applications running on them. We incorporate suitable mathematical kernels and decomposition methods to systematically solve the convex optimization for machine learning applications with large volumes of data. Our proposed architectures are generic, parameterized, and scalable; hence, without changing internal architectures, our designs can process different datasets of varying sizes, execute on different platforms, and serve various machine learning applications. We also introduce system-level architectures and techniques to facilitate real-time processing. Experiments are performed using two different benchmark datasets to evaluate the feasibility and efficiency of our hardware architecture in terms of timing, speedup, area, and accuracy. Our embedded hardware design achieves up to 79 times speedup compared with its embedded software counterpart and can achieve up to 100% classification accuracy.
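The convex-optimization view of SVM training mentioned above can be illustrated on a toy problem. The sketch below solves a simplified SVM dual (no bias term, linear kernel) by projected gradient ascent on synthetic separable data; the dataset, step size, and the simplification itself are assumptions, not the paper's decomposition method or hardware formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two linearly separable clusters (toy stand-in for a benchmark dataset).
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

# Simplified dual SVM (bias omitted): maximize
#   sum(a) - 0.5 * a^T Q a   subject to 0 <= a <= C,
# solved by projected gradient ascent onto the box constraint.
C = 1.0
Q = (y[:, None] * y[None, :]) * (X @ X.T)  # Q_ij = y_i y_j <x_i, x_j>
a = np.zeros(40)
for _ in range(2000):
    grad = 1.0 - Q @ a                     # gradient of the dual objective
    a = np.clip(a + 0.001 * grad, 0.0, C)  # ascent step + box projection

w = (a * y) @ X                            # recovered primal weight vector
acc = float(np.mean(np.sign(X @ w) == y))
```

The box projection is the only constraint handling needed in this simplified form, which is part of what makes such solvers amenable to fixed-function hardware.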
APA, Harvard, Vancouver, ISO, and other styles
23

Chugh, Neeraj, Geetam Singh Tomar, Robin Singh Bhadoria, and Neetesh Saxena. "A Novel Anomaly Behavior Detection Scheme for Mobile Ad Hoc Networks." Electronics 10, no. 14 (July 9, 2021): 1635. http://dx.doi.org/10.3390/electronics10141635.

Full text
Abstract:
To sustain security services in a Mobile Ad Hoc Network (MANET), capabilities in terms of confidentiality, authentication, integrity, authorization, key management, and abnormal behavior detection/anomaly detection are significant. The implementation of a sophisticated security mechanism requires a large number of network resources, which degrades network performance. In addition, routing protocols designed for MANETs should be energy efficient in order to maximize network performance. In line with this view, this work proposes a new hybrid method called the data-driven zone-based routing protocol (DD-ZRP) for resource-constrained MANETs, which incorporates anomaly detection schemes for security and energy awareness, evaluated using Network Simulator 3. Most existing schemes use constant threshold values, which leads to false positives in the network. DD-ZRP uses a dynamic threshold to detect anomalies in MANETs. The simulation results show an improved detection ratio and performance for DD-ZRP over existing schemes; the method is substantially better than the prevailing protocols with respect to anomaly detection for security enhancement, energy efficiency, and optimization of available resources.
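The constant-threshold vs. dynamic-threshold contrast in the abstract above is easy to make concrete. The sketch below flags traffic samples that exceed a sliding-window mean plus a multiple of the window standard deviation; the synthetic traffic trace, window size, and multiplier are illustrative assumptions, not DD-ZRP's actual features or rule.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic per-interval traffic counts with two injected anomalous bursts.
traffic = rng.normal(50, 5, 200)
traffic[[60, 140]] += 60

def dynamic_flags(series, window=30, k=3.0):
    """Flag points exceeding mean + k*std of a sliding window: the threshold
    adapts to recent traffic instead of being one constant cutoff."""
    flags = []
    for i in range(len(series)):
        if i < 10:                         # not enough history yet
            flags.append(False)
            continue
        hist = series[max(0, i - window):i]
        mu, sd = hist.mean(), hist.std()
        flags.append(series[i] > mu + k * sd)
    return np.array(flags)

flags = dynamic_flags(traffic)
```

A constant threshold tuned for quiet periods would fire continuously during legitimate load changes; the adaptive cutoff is what reduces the false-positive issue the abstract mentions.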
APA, Harvard, Vancouver, ISO, and other styles
24

Smith, Kristofer R., Hang Liu, Li-Tse Hsieh, Xavier de Foy, and Robert Gazda. "Wireless Adaptive Video Streaming with Edge Cloud." Wireless Communications and Mobile Computing 2018 (December 5, 2018): 1–13. http://dx.doi.org/10.1155/2018/1061807.

Full text
Abstract:
Wireless data traffic, especially video traffic, continues to increase at a rapid rate. Innovative network architectures and protocols are needed to improve the efficiency of data delivery and the quality of experience (QoE) of mobile users. Mobile edge computing (MEC) is a new paradigm that integrates computing capabilities at the edge of the wireless network. This paper presents a computation-capable and programmable wireless access network architecture to enable more efficient and robust video content delivery based on the MEC concept. It incorporates in-network data processing and communications under a unified software-defined networking platform. To address the multiple resource management challenges that arise in exploiting such integration, we propose a framework to optimize the QoE for multiple video streams, subject to wireless transmission capacity and in-network computation constraints. We then propose two simplified algorithms for resource allocation. The evaluation results demonstrate the benefits of the proposed algorithms for the optimization of video content delivery.
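The abstract above describes optimizing QoE for multiple video streams subject to a transmission-capacity constraint. A minimal sketch of that allocation problem is greedy marginal-utility allocation under a concave QoE model; the logarithmic QoE function, stream weights, and granularity are all illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

# Allocate wireless capacity across video streams by greedy marginal QoE.
capacity = 10.0                            # total transmission capacity (e.g., Mbps)
step = 0.1                                 # allocation granularity
weights = np.array([1.0, 2.0, 1.5])        # per-stream priority (assumed)
rates = np.zeros(3)

def qoe(r, w):
    return w * np.log1p(r)                 # concave: diminishing returns in rate

for _ in range(int(capacity / step)):
    # give the next slice of capacity to the stream with the largest QoE gain
    gains = [qoe(rates[i] + step, weights[i]) - qoe(rates[i], weights[i])
             for i in range(3)]
    rates[int(np.argmax(gains))] += step

total_qoe = float(sum(qoe(rates[i], weights[i]) for i in range(3)))
```

Because the per-stream utilities are concave, this greedy procedure approximates the water-filling optimum: at the end, the marginal QoE per unit of capacity is nearly equal across streams, so higher-weight streams receive proportionally more rate.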
APA, Harvard, Vancouver, ISO, and other styles
25

Xu, Rongxu, Wenquan Jin, Yonggeun Hong, and Do-Hyeun Kim. "Intelligent Optimization Mechanism Based on an Objective Function for Efficient Home Appliances Control in an Embedded Edge Platform." Electronics 10, no. 12 (June 18, 2021): 1460. http://dx.doi.org/10.3390/electronics10121460.

Full text
Abstract:
In recent years, the ever-expanding Internet of Things (IoT) is becoming more empowered to revolutionize our world with the advent of cutting-edge features and intelligence in the IoT ecosystem. Thanks to the development of the IoT, researchers have devoted themselves to technologies that convert a conventional home into an intelligent, occupant-aware place that manages electric resources with autonomous devices, addressing excess energy consumption while providing a comfortable living environment. There are studies that supplement the innate shortcomings of the IoT and improve intelligence by using cloud computing and machine learning. However, machine learning-based autonomous control devices lack flexibility, and cloud computing struggles with latency and security. In this paper, we propose a rule-based optimization mechanism on an embedded edge platform to provide dynamic home appliance control and advanced intelligence in a smart home. To provide actionable control, we design and develop a rule-based objective function in the EdgeX edge computing platform to control the temperature states of the smart home. Compared with cloud computing, edge computing can provide faster responses and higher quality of service. The edge computing paradigm provides better analysis, processing, and storage abilities for the data generated by IoT sensors, enhancing the capability of IoT devices with respect to computing, storage, and network resources. To satisfy the paradigm of distributed edge computing, all the services are implemented as microservices. The microservices are connected to each other through REST APIs on the constrained IoT devices and provide the functionality needed to accomplish a trade-off between energy consumption and the occupant-desired environment settings for the smart home appliances. We simulated the proposed system to control the temperature of a smart home; through experimental findings, we investigated the application's delay time and the overall memory consumption of the embedded EdgeX edge system. The results suggest that the implemented services operate efficiently on Raspberry Pi 3 IoT hardware.
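The energy-vs-comfort trade-off described above can be expressed as a tiny rule-based objective. The sketch below scores candidate temperature setpoints by a weighted sum of an energy proxy and a discomfort term; the proxy, the weight, and the candidate grid are illustrative assumptions, not the paper's EdgeX objective function.

```python
# Rule-based objective: weighted trade-off between an energy proxy and
# occupant discomfort (all coefficients are assumed, for illustration only).
def objective(setpoint, outdoor, desired=22.0, alpha=0.6):
    energy = abs(setpoint - outdoor)       # proxy: larger gap costs more energy
    discomfort = abs(setpoint - desired)   # distance from occupant preference
    return alpha * energy + (1 - alpha) * discomfort

candidates = [t / 2 for t in range(30, 61)]        # setpoints 15.0 .. 30.0 degrees C
best = min(candidates, key=lambda s: objective(s, outdoor=10.0))
```

With `alpha = 0.6` the energy term dominates, so on a 10-degree day the minimizer sits at the cool end of the allowed range rather than at the desired 22 degrees; raising the comfort weight moves it back toward the preference.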
APA, Harvard, Vancouver, ISO, and other styles
26

Ouali, Saad, and Abdeljebbar Cherkaoui. "Optimal Allocation of Combined Renewable Distributed Generation and Capacitor Units for Interconnection Cost Reduction." Journal of Electrical and Computer Engineering 2020 (July 23, 2020): 1–11. http://dx.doi.org/10.1155/2020/5101387.

Full text
Abstract:
In this paper, a new methodology for optimal investment in distributed generation is presented, based on the optimal allocation of combined DG and capacitor units to alleviate network voltage constraints and reduce the interconnection cost of integrating renewable generation into public medium-voltage distribution networks. An analytical optimization method is developed that includes practical considerations typically neglected in previous work: network topology reconfiguration and geographical data on generation land use and network infrastructure. Results from a sensitivity analysis of the parts of the network most affected by variations in active and reactive power injection under network topology reconfiguration are used as a basis for capacitor unit placement. A case study with two meshed IEEE 15-bus feeders and a new, geographically dispersed DG to connect is used to evaluate the performance of the proposed approach. A cost evaluation of the obtained results demonstrates the effectiveness of the proposed approach in reducing the charges required for connecting new renewable generation units to a medium-voltage distribution system.
APA, Harvard, Vancouver, ISO, and other styles
27

Tsagkaropoulos, Andreas, Yiannis Verginadis, Maxime Compastié, Dimitris Apostolou, and Gregoris Mentzas. "Extending TOSCA for Edge and Fog Deployment Support." Electronics 10, no. 6 (March 20, 2021): 737. http://dx.doi.org/10.3390/electronics10060737.

Full text
Abstract:
The emergence of fog and edge computing has complemented cloud computing in the design of pervasive, computing-intensive applications. The proximity of fog resources to data sources has contributed to minimizing network operating expenditure and has permitted latency-aware processing. Furthermore, novel approaches such as serverless computing change the structure of applications and challenge the monopoly of traditional Virtual Machine (VM)-based applications. However, the efforts directed to the modeling of cloud applications have not yet evolved to exploit these breakthroughs and handle the whole application lifecycle efficiently. In this work, we present a set of Topology and Orchestration Specification for Cloud Applications (TOSCA) extensions to model applications relying on any combination of the aforementioned technologies. Our approach features a design-time “type-level” flavor and a run time “instance-level” flavor. The introduction of semantic enhancements and the use of two TOSCA flavors enables the optimization of a candidate topology before its deployment. The optimization modeling is achieved using a set of constraints, requirements, and criteria independent from the underlying hosting infrastructure (i.e., clouds, multi-clouds, edge devices). Furthermore, we discuss the advantages of such an approach in comparison to other notable cloud application deployment approaches and provide directions for future research.
APA, Harvard, Vancouver, ISO, and other styles
28

Peralta, Federico, Daniel Gutierrez Reina, Sergio Toral, Mario Arzamendia, and Derlis Gregor. "A Bayesian Optimization Approach for Multi-Function Estimation for Environmental Monitoring Using an Autonomous Surface Vehicle: Ypacarai Lake Case Study." Electronics 10, no. 8 (April 18, 2021): 963. http://dx.doi.org/10.3390/electronics10080963.

Full text
Abstract:
Bayesian optimization is a sequential method that can optimize a single, costly objective function based on a surrogate model. In this work, we propose a Bayesian optimization system dedicated to monitoring and estimating multiple water quality parameters simultaneously using a single autonomous surface vehicle (ASV). The proposed work combines different strategies and methods for this monitoring task, evaluating two approaches for acquisition function fusion: the coupled and the decoupled techniques. We also consider dynamic parametrization of the maximum measurement distance traveled by the ASV, so that the monitoring system balances the total number of measurements against the total distance, which is related to the energy required. To evaluate the proposed approach, Ypacarai Lake (Paraguay) serves as the test scenario, where multiple maps of water quality parameters, such as pH and dissolved oxygen, need to be obtained efficiently. The proposed system is compared with the predictive entropy search for multi-objective optimization with constraints (PESMOC) algorithm and genetic algorithm (GA) path planning for the Ypacarai Lake scenario. The obtained results show that the proposed approach is 10.82% better than the other optimization methods in terms of R2 score with noiseless measurements and up to 17.23% better when the data are noisy. Additionally, the proposed approach achieves a good average computational time for the whole mission compared with the other methods: 3% better than the GA technique and 46.5% better than the PESMOC approach.
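The surrogate-plus-acquisition loop described above can be sketched in one dimension. The toy below maintains a Gaussian-process surrogate of an unknown field and repeatedly measures where an upper-confidence-bound acquisition is largest; the field, RBF kernel, lengthscale, and UCB coefficient are illustrative assumptions, not the paper's multi-parameter fusion scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(x):                                  # unknown 1-D field being estimated
    return np.exp(-(x - 0.7) ** 2 / 0.01)  # peak at x = 0.7

def rbf(a, b, ls=0.1):                     # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = list(rng.uniform(0, 1, 3))             # initial measurement locations
Y = [f(x) for x in X]
grid = np.linspace(0, 1, 200)

for _ in range(10):
    Xa, Ya = np.array(X), np.array(Y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))   # jitter for stability
    Ks = rbf(grid, Xa)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ Ya                        # GP posterior mean
    var = 1.0 - np.sum(Ks @ Kinv * Ks, axis=1) # GP posterior variance
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))
    x_next = grid[int(np.argmax(ucb))]         # next measurement location
    X.append(x_next); Y.append(f(x_next))

best_x = X[int(np.argmax(Y))]
```

Early iterations explore where posterior variance is high; once a sample lands near the peak, the mean term takes over and the loop exploits. Extending this to several water-quality parameters is where the coupled/decoupled acquisition-fusion question in the abstract arises.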
APA, Harvard, Vancouver, ISO, and other styles
29

Gao, Yun Feng, and Ning Xu. "Data Processing with Combined Homotopy Methods for a Class of Nonconvex Optimization Problems." Advanced Materials Research 1046 (October 2014): 403–6. http://dx.doi.org/10.4028/www.scientific.net/amr.1046.403.

Full text
Abstract:
Building on existing theoretical results, this paper studies the realization of combined homotopy methods for optimization problems over a specific class of nonconvex constrained regions. For this nonconvex constrained region, we give a construction method for the quasi-normal, prove that the chosen mappings on the constraint gradients are positively independent, and show that the feasible region of the SLM satisfies the quasi-normal cone condition. We then construct the combined homotopy equation under the quasi-normal cone condition, illustrate it with numerical examples, and obtain favorable results through data processing.
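The basic mechanism behind homotopy methods, deforming an easy problem into the target problem while tracking the solution, can be shown on a small root-finding example. The system, the convex (linear) start homotopy, and the step schedule below are illustrative assumptions, not the paper's combined homotopy for nonconvex constrained regions.

```python
import numpy as np

def F(x):
    # Target nonlinear system whose root we seek (illustrative).
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] + x[1] ** 2 - 5.0])

def J(x):                                  # Jacobian of F
    return np.array([[2 * x[0], 1.0],
                     [1.0, 2 * x[1]]])

x0 = np.array([1.0, 1.0])                  # trivial starting point
x = x0.copy()
for t in np.linspace(0.0, 1.0, 101):
    # Homotopy H(x, t) = t*F(x) + (1 - t)*(x - x0): at t=0 the solution is
    # x0; at t=1 it solves F(x) = 0. Newton-correct at each step along t.
    for _ in range(5):
        H = t * F(x) + (1 - t) * (x - x0)
        dH = t * J(x) + (1 - t) * np.eye(2)
        x = x - np.linalg.solve(dH, H)

res = float(np.linalg.norm(F(x)))          # residual at t = 1
```

The combined homotopy in the paper plays the same role for KKT systems: the quasi-normal cone condition is what guarantees the tracked path stays inside the (nonconvex) feasible region all the way to t = 1.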
APA, Harvard, Vancouver, ISO, and other styles
30

Choi, K. H., and J. N. Hwang. "Constrained Optimization for Audio-to-Visual Conversion." IEEE Transactions on Signal Processing 52, no. 6 (June 2004): 1783–90. http://dx.doi.org/10.1109/tsp.2004.827153.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Rizvi, S. A., and N. M. Nasrabadi. "Predictive vector quantizer using constrained optimization." IEEE Signal Processing Letters 1, no. 1 (January 1994): 15–18. http://dx.doi.org/10.1109/97.295315.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yakin, Syamsul, Tasrif Hasanuddin, and Nia Kurniati. "Application of content based image retrieval in digital image search system." Bulletin of Electrical Engineering and Informatics 10, no. 2 (April 1, 2021): 1122–28. http://dx.doi.org/10.11591/eei.v10i2.2713.

Full text
Abstract:
Multimedia data are growing rapidly in the current digital era; digital image data are one example. The increasing need for large digital image datasets makes manual image description time-consuming and inconsistent. Therefore, a method is needed for processing such data, especially for searching large image datasets to find the images most relevant to a query image. One proposed method for searching information based on image content is content-based image retrieval (CBIR). The main advantage of the CBIR method is its automatic retrieval process, compared with traditional keyword-based search. This research combines HSV color histograms and the discrete wavelet transform to extract color and texture features, while the chi-square distance is used to compare the test images with the images in a database. The results show that the digital image search system with color and texture features achieves a precision of 37.5%-100%, with an average precision of 80.71%, while the accuracy is 93.7%-100%, with an average accuracy of 98.03%.
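The histogram-plus-chi-square pipeline above is simple to sketch. The code below builds concatenated per-channel histograms for synthetic "HSV" arrays and ranks them by chi-square distance; the random images, bin count, and the omission of the wavelet texture features are simplifying assumptions relative to the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def hsv_hist(img_hsv, bins=8):
    # Concatenated per-channel histograms of an HSV image, normalized to sum
    # to 1 (a simplification of the paper's color + wavelet feature vector).
    h = [np.histogram(img_hsv[..., c], bins=bins, range=(0, 1))[0]
         for c in range(3)]
    feat = np.concatenate(h).astype(float)
    return feat / feat.sum()

def chi_square(p, q, eps=1e-10):
    # Chi-square distance between two normalized histograms.
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

# Three synthetic "images": b is a near-duplicate of a; c is very different.
a = rng.uniform(0.0, 0.5, (32, 32, 3))
b = np.clip(a + rng.normal(0, 0.02, a.shape), 0, 1)
c = rng.uniform(0.5, 1.0, (32, 32, 3))

fa, fb, fc = hsv_hist(a), hsv_hist(b), hsv_hist(c)
```

Retrieval then reduces to sorting the database by `chi_square(query_feat, db_feat)`; the near-duplicate scores far closer to the query than the dissimilar image.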
APA, Harvard, Vancouver, ISO, and other styles
33

Liang, Y. C., R. Zhang, and J. M. Cioffi. "Transmit Optimization for MIMO-OFDM With Delay-Constrained and No-Delay-Constrained Traffic." IEEE Transactions on Signal Processing 54, no. 8 (August 2006): 3190–99. http://dx.doi.org/10.1109/tsp.2006.874769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Nongpiur, Rajeev C., Dale J. Shpak, and Andreas Antoniou. "Design of IIR Digital Differentiators Using Constrained Optimization." IEEE Transactions on Signal Processing 62, no. 7 (April 2014): 1729–39. http://dx.doi.org/10.1109/tsp.2014.2302733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Xie, Li, Yi-qun Zhang, and Jun-yan Xu. "Hohmann transfer via constrained optimization." Frontiers of Information Technology & Electronic Engineering 19, no. 11 (November 2018): 1444–58. http://dx.doi.org/10.1631/fitee.1800295.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Vucic, N., and H. Boche. "Robust QoS-Constrained Optimization of Downlink Multiuser MISO Systems." IEEE Transactions on Signal Processing 57, no. 2 (February 2009): 714–25. http://dx.doi.org/10.1109/tsp.2008.2008553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Chengcheng, Xianchang Wang, Huiling Chen, Chengwen Wu, Majdi Mafarja, and Hamza Turabieh. "Towards Precision Fertilization: Multi-Strategy Grey Wolf Optimizer Based Model Evaluation and Yield Estimation." Electronics 10, no. 18 (September 7, 2021): 2183. http://dx.doi.org/10.3390/electronics10182183.

Full text
Abstract:
Precision fertilization is a major means of consistently balancing the contradictions among land resources, the ecological environment, and population growth, and it is a widely used technology for sustainable development. Nitrogen (N), phosphorus (P), and potassium (K) are the main sources of nutrient input on farmland. Because the soil is influenced by various factors and by statistical errors in harvest and yield, the traditional fertilizer effect function cannot satisfy the extremum conditions of agrochemical theory. To find more accurate scientific ratios, this paper proposes a multi-strategy grey wolf optimization algorithm (SLEGWO) to solve the fertilizer effect function, using the "3414" experimental field design scheme, with an experimental field in Nongan County, Jilin Province as the experimental site, and with the residuals of the ternary fertilizer effect function of nitrogen, phosphorus, and potassium as the objective function. The experimental results showed that the SLEGWO algorithm improves the fit of the fertilizer effect equation and can thus reasonably predict accurate fertilizer application ratios and improve yield. It is a more accurate precision fertilization modeling method and provides a new means of addressing precision fertilization and soil testing.
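The idea of fitting a fertilizer-effect curve by minimizing residuals with a grey wolf optimizer can be sketched on a single-factor toy problem. The quadratic dose-response curve, its coefficients, and the plain (single-strategy) GWO below are illustrative assumptions, not the paper's ternary N-P-K model or SLEGWO enhancements.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic single-factor "fertilizer effect" curve (dose normalized to [0,1]):
# yield = b0 + b1*n + b2*n^2, with coefficients to be recovered.
n_dose = np.linspace(0.0, 1.0, 15)
true_b = np.array([2.0, 6.0, -4.0])
yield_obs = true_b[0] + true_b[1] * n_dose + true_b[2] * n_dose ** 2

def residual(b):
    pred = b[0] + b[1] * n_dose + b[2] * n_dose ** 2
    return np.sum((pred - yield_obs) ** 2)

# Basic grey wolf optimizer: wolves follow the three best (alpha, beta, delta).
wolves = rng.uniform(-10, 10, (20, 3))
iters = 300
for it in range(iters):
    scores = np.array([residual(w) for w in wolves])
    alpha, beta, delta = wolves[scores.argsort()[:3]]
    a = 2.0 - 2.0 * it / iters             # exploration factor shrinks to 0
    for i in range(20):
        new = np.zeros(3)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(3), rng.random(3)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - wolves[i])
        wolves[i] = new / 3.0              # average of the three pulls

best = min(wolves, key=residual)
```

The recovered coefficients approximate `true_b`, and the fitted curve's maximum then gives the predicted optimal dose, which is the role the fertilizer-effect equation plays in the paper.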
APA, Harvard, Vancouver, ISO, and other styles
38

Farokhi, Farhad. "Privacy-Preserving Constrained Quadratic Optimization With Fisher Information." IEEE Signal Processing Letters 27 (2020): 545–49. http://dx.doi.org/10.1109/lsp.2020.2983320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Miaohui, Jian Xiong, Long Xu, Wuyuan Xie, King Ngi Ngan, and Jing Qin. "Rate Constrained Multiple-QP Optimization for HEVC." IEEE Transactions on Multimedia 22, no. 6 (June 2020): 1395–406. http://dx.doi.org/10.1109/tmm.2019.2947351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, An, Vincent K. N. Lau, and Borna Kananian. "Stochastic Successive Convex Approximation for Non-Convex Constrained Stochastic Optimization." IEEE Transactions on Signal Processing 67, no. 16 (August 15, 2019): 4189–203. http://dx.doi.org/10.1109/tsp.2019.2925601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Liu, Michelle, Rajiv Kumar, Eldad Haber, and Aleksandr Aravkin. "Simultaneous-shot inversion for PDE-constrained optimization problems with missing data." Inverse Problems 35, no. 2 (December 27, 2018): 025003. http://dx.doi.org/10.1088/1361-6420/aaf317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Scutari, Gesualdo, Francisco Facchinei, and Lorenzo Lampariello. "Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory." IEEE Transactions on Signal Processing 65, no. 8 (April 15, 2017): 1929–44. http://dx.doi.org/10.1109/tsp.2016.2637317.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chierchia, G., N. Pustelnik, J. C. Pesquet, and B. Pesquet-Popescu. "Epigraphical projection and proximal tools for solving constrained convex optimization problems." Signal, Image and Video Processing 9, no. 8 (July 22, 2014): 1737–49. http://dx.doi.org/10.1007/s11760-014-0664-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tihanyi, Viktor, András Rövid, Viktor Remeli, Zsolt Vincze, Mihály Csonthó, Zsombor Pethő, Mátyás Szalai, Balázs Varga, Aws Khalil, and Zsolt Szalay. "Towards Cooperative Perception Services for ITS: Digital Twin in the Automotive Edge Cloud." Energies 14, no. 18 (September 18, 2021): 5930. http://dx.doi.org/10.3390/en14185930.

Full text
Abstract:
We demonstrate a working functional prototype of a cooperative perception system that maintains a real-time digital twin of the traffic environment, providing a more accurate and more reliable model than any of the participant subsystems—in this case, smart vehicles and infrastructure stations—would manage individually. The importance of such technology is that it can facilitate a spectrum of new derivative services, including cloud-assisted and cloud-controlled ADAS functions, dynamic map generation with analytics for traffic control and road infrastructure monitoring, a digital framework for operating vehicle testing grounds, logistics facilities, etc. In this paper, we constrain our discussion to the viability of the core concept and implement a system that provides a single service: the live visualization of our digital twin in a 3D simulation, which instantly and reliably matches the state of the real-world environment and showcases the advantages of real-time fusion of sensory data from various traffic participants. We envision this prototype system as part of a larger network of local information processing and integration nodes, i.e., the logically centralized digital twin is maintained in a physically distributed edge cloud.
APA, Harvard, Vancouver, ISO, and other styles
45

Sabir, Dilshad, Muhammmad Abdullah Hanif, Ali Hassan, Saad Rehman, and Muhammad Shafique. "Weight Quantization Retraining for Sparse and Compressed Spatial Domain Correlation Filters." Electronics 10, no. 3 (February 2, 2021): 351. http://dx.doi.org/10.3390/electronics10030351.

Full text
Abstract:
Using Spatial Domain Correlation Pattern Recognition (CPR) in Internet-of-Things (IoT)-based applications often faces constraints such as inadequate computational resources and limited memory. To reduce the inference workload caused by large spatial-domain CPR filters and to convert filter weights into hardware-friendly data types, this paper introduces the power-of-two (Po2) and dynamic fixed-point (DFP) quantization techniques for weight compression and sparsity induction in filters. Weight quantization retraining (WQR) and the log-polar and inverse log-polar geometric transformations are introduced to reduce quantization error. WQR is a method of retraining the CPR filter, presented to recover the accuracy loss. It enforces the given quantization scheme by adding the quantization error to the training sample and then re-quantizes the filter to the desired quantization levels, reducing quantization noise. Further, Particle Swarm Optimization (PSO) is used to fine-tune parameters during WQR. Both geometric transforms are applied as pre-processing steps. The Po2 quantization scheme showed performance close to that of full precision, while DFP quantization came even closer to the Receiver Operating Characteristic of full precision at the same bit length. Overall, spatially trained filters showed a better compression ratio for Po2 quantization after retraining of the CPR filter. The direct quantization approach achieved a compression ratio of 8 at 4.37× speedup with no accuracy degradation. In contrast, quantization with the log-polar transform achieved a compression ratio of 4 at 1.12× speedup, but with 16% accuracy degradation. The inverse log-polar transform showed a compression ratio of 16 at 8.90× speedup and 6% accuracy degradation. All the reported accuracies use a common database.
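The power-of-two quantization named above can be sketched directly: each weight is snapped to the nearest signed power of two (nearest in the log domain), so hardware can replace multiplications with bit shifts, and tiny weights underflow to zero, inducing sparsity. The bit width, clipping range, and underflow rule below are illustrative assumptions, not the paper's exact WQR scheme.

```python
import numpy as np

rng = np.random.default_rng(8)

def po2_quantize(w, n_bits=4):
    """Quantize weights to signed powers of two: sign(w) * 2**e, with the
    exponent e rounded in the log domain and clipped to an n_bits range."""
    sign = np.sign(w)
    mag = np.abs(w)
    e = np.round(np.log2(np.maximum(mag, 1e-12)))     # nearest exponent
    e = np.clip(e, -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1)
    q = sign * 2.0 ** e
    # magnitudes below the smallest representable power underflow to zero,
    # which is the sparsity-induction effect mentioned in the abstract
    q[mag < 2.0 ** (-(2 ** (n_bits - 1)))] = 0.0
    return q

w = rng.normal(0, 0.3, 1000)               # stand-in for CPR filter weights
wq = po2_quantize(w)
err = float(np.mean((w - wq) ** 2))        # quantization MSE
```

Because rounding happens in the log domain, the worst-case relative error per weight is bounded (a factor of at most sqrt(2) off), which is why the Po2 scheme can stay close to full-precision performance; WQR-style retraining then recovers most of the remaining loss.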
APA, Harvard, Vancouver, ISO, and other styles
46

Fang, Jun, and Hongbin Li. "Power Constrained Distributed Estimation With Correlated Sensor Data." IEEE Transactions on Signal Processing 57, no. 8 (August 2009): 3292–97. http://dx.doi.org/10.1109/tsp.2009.2020033.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ciftci, Okan, Mahdi Mehrtash, and Amin Kargarian. "Data-Driven Nonparametric Chance-Constrained Optimization for Microgrid Energy Management." IEEE Transactions on Industrial Informatics 16, no. 4 (April 2020): 2447–57. http://dx.doi.org/10.1109/tii.2019.2932078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Fantacci, R., M. Forti, M. Marini, D. Tarchi, and G. Vannuccini. "A neural network for constrained optimization with application to CDMA communication systems." IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing 50, no. 8 (August 2003): 484–87. http://dx.doi.org/10.1109/tcsii.2003.814805.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Pei, Jihong, Wenying Mo, Xuhui Shao, and Lixia Wang. "A constrained optimization method for blocking artifact compensation in JPEG infrared image." Signal, Image and Video Processing 15, no. 7 (August 6, 2021): 1361–68. http://dx.doi.org/10.1007/s11760-020-01840-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Siu, Man-Hung, Brian Mak, and Wing-Hei Au. "Minimization of Utterance Verification Error Rate as a Constrained Optimization Problem." IEEE Signal Processing Letters 13, no. 12 (December 2006): 760–63. http://dx.doi.org/10.1109/lsp.2006.879818.

Full text
APA, Harvard, Vancouver, ISO, and other styles