
Dissertations / Theses on the topic 'Constrained optimization. Electronic data processing'



Consult the top 27 dissertations / theses for your research on the topic 'Constrained optimization. Electronic data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Atlas, James. "Efficient coordination techniques for non-deterministic multi-agent systems using distributed constraint optimization." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 168 p, 2009. http://proquest.umi.com/pqdweb?did=1885755811&sid=3&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Jeon, Woojay. "Pitch detection of polyphonic music using constrained optimization." Thesis, Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15802.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mailhe, Maxime. "Batch processing task optimization." Thesis, Georgia Institute of Technology, 1994. http://hdl.handle.net/1853/11893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhang, Yue. "Detection copy number variants profile by multiple constrained optimization." HKBU Institutional Repository, 2017. https://repository.hkbu.edu.hk/etd_oa/439.

Full text
Abstract:
Copy number variation, caused by genome rearrangement, generally refers to increases or decreases in the copy number of large genome segments whose lengths exceed 1 kb. Such copy number variations mainly appear as sub-microscopic deletions and duplications. Copy number variation is an important component of genome structural variation and one of the pathogenic factors of human diseases. Next-generation sequencing (NGS) technology is a popular CNV detection method and has been widely used in many fields of life science research, offering high throughput at low cost. By tailoring NGS technology, it is possible to sequence individual cells. Such single-cell sequencing can reveal the gene expression status and genomic variation profile of a single cell, and it is promising for the study of tumors, developmental biology, neuroscience and other fields. However, two challenging problems are encountered in CNV detection for NGS data. The first is that single-cell sequencing requires a special genome amplification step to accumulate enough material, which introduces a large amount of bias and makes the calling of copy number variants rather challenging; the performance of many popular copy number calling methods designed for bulk sequencing is inconsistent, and they cannot be applied to single-cell sequencing data directly. The second is to analyze genome data for multiple samples simultaneously, so that similar cells can be assembled and subgrouped accurately and efficiently; the high level of noise in single-cell sequencing data reduces the reliability of sequence reads and leads to inaccurate variation patterns. To reliably find CNVs in NGS data, this thesis first establishes a workflow for analyzing NGS and single-cell sequencing data. CNV identification is formulated as a quadratic optimization problem with both sparsity and smoothness constraints, and an efficient numerical solution tailored from the alternating direction minimization (ADM) framework is designed accordingly. The proposed model was tested extensively to demonstrate its performance: it can successfully reconstruct CNVs, especially somatic copy number alteration patterns, from raw data, and it achieved superior or comparable performance in CNV detection compared with existing counterparts. To recover the hidden blocks within multiple single-cell DNA-sequencing samples, we present a permutation-based model that rearranges the samples so that similar ones are positioned adjacently. The permutation is guided by the total variation (TV) norm of the recovered copy number profiles and continues until the TV norm is minimized, at which point similar samples are stacked together and reveal block patterns. An efficient numerical scheme for finding this permutation, tailored from the alternating direction method of multipliers, is designed accordingly. Application of this method to both simulated and real data demonstrates its ability to recover the hidden structures of single-cell DNA sequences.
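The quadratic-optimization formulation described above lends itself to a compact illustration. The sketch below is not the thesis's tailored ADM solver: it uses the general-purpose cvxpy modeling package, and the toy signal, penalty weights and breakpoint threshold are assumptions. It recovers a piecewise-constant copy-number profile from a noisy read-depth signal by combining a quadratic fidelity term with sparsity (L1) and smoothness (total-variation) penalties; the TV term is what forces detected CNVs into contiguous blocks rather than isolated spikes.

```python
# Minimal sketch (not the thesis's ADM solver): sparse + smooth CNV recovery.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
true_cnv = np.concatenate([np.zeros(40), np.ones(20), np.zeros(40)])   # one gained segment
observed = true_cnv + 0.3 * rng.standard_normal(true_cnv.size)         # noisy log-ratio signal

x = cp.Variable(observed.size)
lam_sparse, lam_smooth = 0.5, 2.0                                      # assumed penalty weights
objective = cp.Minimize(
    cp.sum_squares(x - observed)      # fidelity to the observed read depths
    + lam_sparse * cp.norm1(x)        # sparsity: most positions carry no CNV
    + lam_smooth * cp.tv(x)           # smoothness: CNVs occur in contiguous blocks
)
cp.Problem(objective).solve()
print("estimated breakpoints:", np.where(np.abs(np.diff(x.value)) > 0.3)[0])
```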
APA, Harvard, Vancouver, ISO, and other styles
5

D'Souza, Sammy Raymond. "Parallelizing a nondeterministic optimization algorithm." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3084.

Full text
Abstract:
This research explores the idea that, for certain optimization problems, there is a way to parallelize the algorithm such that the parallel efficiency can exceed one hundred percent. Specifically, a parallel compiler, PC, is used to apply shortcutting techniques to the metaheuristic Ant Colony Optimization (ACO) to solve the well-known Traveling Salesman Problem (TSP) on a cluster running the Message Passing Interface (MPI). The results of serial and parallel execution are compared using test datasets from TSPLIB.
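As a rough illustration of spreading ACO search effort over MPI ranks (this is independent colonies per rank, not the thesis's compiler-generated shortcutting; the instance size and ACO parameters are assumptions), the sketch below runs one colony per rank on the same random TSP instance and reduces the best tour length to all ranks. It would be launched with something like `mpiexec -n 4 python aco_tsp.py` (filename assumed).

```python
# Minimal sketch: one independent ACO colony per MPI rank on a shared TSP instance.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_cities, n_ants, n_iter = 30, 20, 100
rng = np.random.default_rng(42)                       # same instance on every rank
coords = rng.random((n_cities, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2) + np.eye(n_cities)

rng = np.random.default_rng(1000 + rank)              # different ant behaviour per rank
pheromone = np.ones((n_cities, n_cities))
alpha, beta, rho = 1.0, 3.0, 0.5
best_len = np.inf

for _ in range(n_iter):
    tours = []
    for _ in range(n_ants):
        tour = [rng.integers(n_cities)]
        while len(tour) < n_cities:
            i = tour[-1]
            mask = np.ones(n_cities, bool)
            mask[tour] = False                         # forbid already-visited cities
            weights = (pheromone[i] ** alpha) * ((1.0 / dist[i]) ** beta) * mask
            tour.append(rng.choice(n_cities, p=weights / weights.sum()))
        length = sum(dist[tour[k], tour[(k + 1) % n_cities]] for k in range(n_cities))
        tours.append((length, tour))
        best_len = min(best_len, length)
    pheromone *= (1.0 - rho)                           # evaporation
    for length, tour in tours:                         # deposit pheromone on used edges
        for k in range(n_cities):
            pheromone[tour[k], tour[(k + 1) % n_cities]] += 1.0 / length

global_best = comm.allreduce(best_len, op=MPI.MIN)
if rank == 0:
    print("best tour length over all colonies:", global_best)
```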
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Bin. "Optimization strategies for data warehouse maintenance in distributed environments." Link to electronic thesis, 2002. http://www.wpi.edu/Pubs/ETD/Available/etd-0430102-133814.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Wang, Fei, and 王緋. "Complex stock trading strategy based on parallel particle swarm optimization." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B49858889.

Full text
Abstract:
Trading rules have been utilized in the stock market to make profit for more than a century. However, using a single trading rule may not be sufficient to predict the stock price trend accurately. Although some complex trading strategies combining various classes of trading rules have been proposed in the literature, they often pick only one rule for each class, which may lose valuable information from other rules in the same class. In this thesis, a complex stock trading strategy, namely the Performance-based Reward Strategy (PRS), is proposed. PRS combines the seven most popular classes of trading rules in financial markets, and for each class of trading rule, PRS includes various combinations of the rule parameters to produce a universe of 1059 component trading rules in all. Each component rule is assigned a starting weight, and a reward/penalty mechanism based on profit is proposed to update these rules' weights over time. To determine the best parameter values of PRS, we employ an improved time-variant Particle Swarm Optimization (PSO) algorithm with the objective of maximizing the annual net profit generated by PRS. Due to the large number of component rules and the swarm size, the optimization time is significant. A parallel PSO based on Hadoop, an open-source implementation of the MapReduce parallel programming model, is employed to optimize PRS more efficiently. By omitting the traditional reduce phase of MapReduce, the proposed parallel PSO avoids the I/O cost of intermediate data and achieves a higher speedup ratio than previous parallel PSO implementations based on MapReduce. After being optimized over an eight-year training period, PRS is tested on an out-of-sample data set. The experimental results show that PRS outperforms all of the component rules in the testing period.
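A minimal sketch of the reward/penalty idea described above (not the thesis's exact PRS update rule; the rule signals, returns and learning rate are assumptions): each component rule issues a buy/sell signal, the aggregate decision is a weighted vote, and rules that would have been profitable on the realised return are rewarded while the others are penalised.

```python
# Minimal sketch: profit-based reward/penalty update of component-rule weights.
import numpy as np

rng = np.random.default_rng(0)
n_rules, n_days = 10, 250
signals = rng.choice([-1.0, 1.0], size=(n_days, n_rules))   # buy/sell signal of each rule
returns = 0.01 * rng.standard_normal(n_days)                # daily asset returns (assumed)

weights = np.full(n_rules, 1.0 / n_rules)                   # equal starting weights
eta = 0.1                                                   # reward/penalty step size (assumed)
pnl = 0.0
for t in range(n_days):
    position = np.sign(weights @ signals[t])                # aggregate trading decision
    pnl += position * returns[t]
    rule_profit = signals[t] * returns[t]                   # profit each rule would have made
    weights *= np.exp(eta * np.sign(rule_profit))           # reward winners, penalise losers
    weights /= weights.sum()                                # keep weights normalised

print(f"cumulative return of the weighted strategy: {pnl:.4f}")
```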
published_or_final_version
Computer Science
Master
Master of Philosophy
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Mianyu Kam Moshe Kandasamy Nagarajan. "A decentralized control and optimization framework for autonomic performance management of web-server systems /." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/2643.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Alemany, Kristina. "Design space pruning heuristics and global optimization method for conceptual design of low-thrust asteroid tour missions." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31821.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Braun, Robert; Committee Member: Clarke, John-Paul; Committee Member: Russell, Ryan; Committee Member: Sims, Jon; Committee Member: Tsiotras, Panagiotis. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
10

Jung, Gueyoung. "Multi-dimensional optimization for cloud based multi-tier applications." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/37267.

Full text
Abstract:
Emerging trends toward cloud computing and virtualization have been opening new avenues to meet the enormous demands of space, resource utilization, and energy efficiency in modern data centers. By hosting many multi-tier applications in consolidated environments, cloud infrastructure providers enable resources to be shared among these applications at a very fine granularity. Meanwhile, resource virtualization has recently gained considerable attention in the design of computer systems and has become a key ingredient of cloud computing. It provides significant improvement in aggregate power efficiency and high resource utilization by enabling resource consolidation, and it allows infrastructure providers to manage their resources in an agile way under highly dynamic conditions. However, these trends also raise significant challenges for researchers and practitioners seeking to achieve agile resource management in consolidated environments. First, they must deal with the very different responsiveness of different applications, while handling dynamic changes in resource demands as applications' workloads change over time. Second, when provisioning resources, they must consider management costs such as power consumption and adaptation overheads (i.e., overheads incurred by dynamically reconfiguring resources). Dynamic provisioning of virtual resources entails an inherent performance-power tradeoff, and indiscriminate adaptations can result in significant overheads in power consumption and end-to-end performance. Hence, to achieve agile resource management, it is important to thoroughly investigate the performance characteristics of the deployed applications, precisely account for the costs caused by adaptations, and then balance benefits and costs. Fundamentally, the research question is how to dynamically provision the available resources for all deployed applications to maximize overall utility under time-varying workloads, while considering such management costs. Given the scope of the problem space, this dissertation aims to develop an optimization system that not only meets the performance requirements of the deployed applications, but also addresses the tradeoffs between performance, power consumption, and adaptation overheads. To this end, this dissertation makes two distinct contributions. First, I show that adaptations applied to cloud infrastructures can cause significant overheads not only in end-to-end response time, but also in server power consumption; moreover, such costs can vary in intensity and time scale with the workload, the adaptation types, and the performance characteristics of the hosted applications. Second, I address multi-dimensional optimization between server power consumption, performance benefit, and the transient costs incurred by various adaptations. Additionally, I incorporate the overhead of the optimization procedure itself into the problem formulation: system optimization approaches typically entail intensive computation and potentially long delays to deal with the huge search space of cloud computing infrastructures, so this type of cost cannot be ignored when adaptation plans are designed. In this multi-dimensional optimization work, a scalable optimization algorithm and a hierarchical adaptation architecture are developed to handle many applications, hosting servers, and various adaptations, and to support adaptation decisions at various time scales.
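To make the utility-versus-cost tradeoff concrete, the sketch below is a toy enumeration, not the dissertation's optimizer: all cost models, configurations and numbers are assumptions. It picks the resource configuration that maximizes application utility minus power cost minus the transient cost of adapting from the current configuration.

```python
# Minimal sketch: choose a configuration balancing utility, power and adaptation cost.
def utility(cpu_share):                      # assumed diminishing returns in allocated CPU
    return 100 * (1 - 0.5 ** (cpu_share / 0.25))

def power_cost(cpu_share):                   # assumed power drawn grows with allocation
    return 40 * cpu_share

def adaptation_cost(current, candidate):     # assumed reconfiguration overhead
    return 30 * abs(candidate - current)

current = 0.25
candidates = [0.25, 0.5, 0.75, 1.0]
best = max(candidates,
           key=lambda c: utility(c) - power_cost(c) - adaptation_cost(current, c))
print("chosen CPU share:", best)
```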
APA, Harvard, Vancouver, ISO, and other styles
11

Deivakkannu, Ganesan. "Data acquisition and data transfer methods for real-time power system optimisation problems solution." Thesis, Cape Peninsula University of Technology, 2014. http://hdl.handle.net/20.500.11838/1178.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology, 2014
The electric power utilities play a vital role in the generation, transmission and distribution of electrical power to end users. The power utilities face two major issues: i) power grids are expected to operate close to maximum capacity, and ii) there is a need for accurate and better monitoring and control of the power system network using modern technology and the available tools. These two issues are interconnected, as better monitoring allows for better control of the power system. Development of new standards-based power system technologies has contributed to the idea of building a Smart Grid. The challenge is that this process requires the development of new control and operation architectures and methods for data acquisition, data transfer, and control computation. These methods require data on the full dynamic state of the power system in real time, which leads to the introduction of synchrophasor-based monitoring and control of the power system. The thesis describes the research work and investigations into the integration of existing new power system technologies to build fully automated systems for the real-time solution of power system energy management problems, incorporating data measurement and acquisition, data transfer and distribution through a communication network, and data storage and retrieval in one whole system.
APA, Harvard, Vancouver, ISO, and other styles
12

Lenharth, Andrew D. "Algorithms for stable allocations in distributed real-time resource management systems." Ohio : Ohio University, 2004. http://www.ohiolink.edu/etd/view.cgi?ohiou1102697777.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Ali, Shirook M. Nikolova Natalia K. "Efficient sensitivity analysis and optimization with full-wave EM solvers." *McMaster only, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
14

Wong, Cheok Meng. "A distributed particle swarm optimization for fuzzy c-means algorithm based on an apache spark platform." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Vanden, Berghen Frank. "Constrained, non-linear, derivative-free, parallel optimization of continuous, high computing load, noisy objective functions." Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211177.

Full text
Abstract:
The main result is a new original algorithm: CONDOR ("COnstrained, Non-linear, Direct, parallel Optimization using trust Region method for high-computing load, noisy functions"). The aim of this algorithm is to find the minimum x* of an objective function F(x) (x is a vector whose dimension is between 1 and 150) using the least number of function evaluations of F(x). It is assumed that the dominant computing cost of the optimization process is the time needed to evaluate the objective function F(x) (one evaluation can range from 2 minutes to 2 days). The algorithm tries to minimize the number of evaluations of F(x), at the cost of a huge amount of routine work. CONDOR is a derivative-free optimization tool, i.e. the derivatives of F(x) are not required. The only information needed about the objective function is a simple method (written in Fortran, C++, etc.) or a program (a Unix, Windows or Solaris executable) which can evaluate the objective function F(x) at a given point x. The algorithm has been specially developed to be very robust against noise inside the evaluation of the objective function F(x). These hypotheses are very general, so the algorithm can be applied to a vast number of situations. CONDOR is able to use several CPUs in a cluster of computers. Different computer architectures can be mixed together and used simultaneously to deliver a huge computing power. The optimizer makes simultaneous evaluations of the objective function F(x) on the available CPUs to speed up the optimization process. The experimental results are very encouraging and validate the quality of the approach: CONDOR outperforms many commercial, high-end optimizers and it might be the fastest optimizer in its category (fastest in terms of number of function evaluations). When several CPUs are used, the performance of CONDOR is currently unmatched (May 2004). CONDOR has been used during the METHOD project to optimize the shape of the blades inside a centrifugal compressor (METHOD stands for Achievement Of Maximum Efficiency For Process Centrifugal Compressors THrough New Techniques Of Design). In this project, the objective function is based on a 3D-CFD (computational fluid dynamics) code which simulates the flow of the gas inside the compressor.
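For readers who want to experiment with the same class of problem, the sketch below uses SciPy's COBYLA, a derivative-free, constrained optimizer in the same spirit as CONDOR; it is not CONDOR itself, and the noisy quadratic objective is a stand-in assumption for an expensive simulation such as a CFD run.

```python
# Minimal sketch: derivative-free, constrained optimization of a noisy objective.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def expensive_noisy_objective(x):
    # stand-in for an expensive simulation; note the added evaluation noise
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + 1e-3 * rng.standard_normal()

constraints = [{"type": "ineq", "fun": lambda x: 4.0 - (x[0] ** 2 + x[1] ** 2)}]  # stay inside a disc
result = minimize(expensive_noisy_objective, x0=np.zeros(2), method="COBYLA",
                  constraints=constraints, options={"maxiter": 200, "rhobeg": 0.5})
print("approximate minimizer:", result.x, "objective:", result.fun)
```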
Doctorate in Applied Sciences (Doctorat en sciences appliquées)
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
16

Yap, Han Lun. "Constrained measurement systems of low-dimensional signals." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47716.

Full text
Abstract:
The object of this thesis is the study of constrained measurement systems of signals having low-dimensional structure using analytic tools from Compressed Sensing (CS). Realistic measurement systems usually have architectural constraints that make them differ from their idealized, well-studied counterparts. Nonetheless, these measurement systems can exploit structure in the signals that they measure. Signals considered in this research have low-dimensional structure and can be broken down into two types: static or dynamic. Static signals are either sparse in a specified basis or lying on a low-dimensional manifold (called manifold-modeled signals). Dynamic signals, exemplified as states of a dynamical system, either lie on a low-dimensional manifold or have converged onto a low-dimensional attractor. In CS, the Restricted Isometry Property (RIP) of a measurement system ensures that distances between all signals of a certain sparsity are preserved. This stable embedding ensures that sparse signals can be distinguished one from another by their measurements and therefore be robustly recovered. Moreover, signal-processing and data-inference algorithms can be performed directly on the measurements instead of requiring a prior signal recovery step. Taking inspiration from the RIP, this research analyzes conditions on realistic, constrained measurement systems (of the signals described above) such that they are stable embeddings of the signals that they measure. Specifically, this thesis focuses on four different types of measurement systems. First, we study the concentration of measure and the RIP of random block diagonal matrices that represent measurement systems constrained to make local measurements. Second, we study the stable embedding of manifold-modeled signals by existing CS matrices. The third part of this thesis deals with measurement systems of dynamical systems that produce time series observations. While Takens' embedding result ensures that this time series output can be an embedding of the dynamical systems' states, our research establishes that a stronger stable embedding result is possible under certain conditions. The final part of this thesis is the application of CS ideas to the study of the short-term memory of neural networks. In particular, we show that the nodes of a recurrent neural network can be a stable embedding of sparse input sequences.
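The stable-embedding idea can be checked numerically on a toy example. The sketch below is an illustration, not the thesis's analysis: the dimensions, sparsity level and scaling are assumptions. It compares how a dense Gaussian matrix and a random block-diagonal matrix (modeling local measurements) preserve pairwise distances between sparse signals.

```python
# Minimal sketch: empirical distance preservation for dense vs block-diagonal matrices.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
n, m, k, n_signals = 256, 64, 5, 50            # ambient dim, measurements, sparsity, #signals

def sparse_signal():
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    return x

signals = np.array([sparse_signal() for _ in range(n_signals)])

dense = rng.standard_normal((m, n)) / np.sqrt(m)
blocks = [rng.standard_normal((m // 4, n // 4)) / np.sqrt(m) for _ in range(4)]
block = block_diag(*blocks)                    # each block only sees a local part of the signal

def distortion(Phi):
    # a ratio close to 1 means that pairwise distance is preserved by the measurements
    ratios = []
    for i in range(n_signals):
        for j in range(i + 1, n_signals):
            d = signals[i] - signals[j]
            ratios.append(np.linalg.norm(Phi @ d) / np.linalg.norm(d))
    return min(ratios), max(ratios)

print("dense Gaussian distortion range :", distortion(dense))
print("block-diagonal distortion range :", distortion(block))
```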
APA, Harvard, Vancouver, ISO, and other styles
17

Yaman, Sibel. "A multi-objective programming perspective to statistical learning problems." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26470.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Chin-Hui Lee; Committee Member: Anthony Yezzi; Committee Member: Evans Harrell; Committee Member: Fred Juang; Committee Member: James H. McClellan. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
18

Hohweiller, Tom. "Méthodes de décomposition non-linéaire pour l'imagerie X spectrale." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI097.

Full text
Abstract:
Spectral computed tomography is an emerging X-ray imaging modality. While the dual-energy principle has been known for quite some time, recent developments in photon-counting detectors now allow data to be acquired in several energy bins. This modality reduces a number of classical artifacts, such as beam hardening, but above all it makes it possible to recover the chemical composition of the imaged tissue. Spectral data also enable the use of new contrast agents (gold, for example) that exhibit an energy discontinuity, and allow such markers to be located and quantified in the patient, giving the modality great potential in medical imaging. A classical processing chain for spectral data is a material decomposition in the projection domain followed by a tomographic reconstruction. However, decomposition methods in the projection domain are still in their infancy for a large number of energy bins: the classical calibration technique is numerically unstable when more than two bins are available. This thesis aims to develop new material decomposition methods in the projection domain. After formalizing the spectral forward model, the material decomposition problem is expressed and treated as a non-linear inverse problem. It is solved by minimizing a cost function composed of a term characterizing the fidelity of the decomposition to the data and an a priori (regularization) term on the projected material maps. This work first presents an adaptation of the cost function that takes into account the Poissonian nature of the noise; this formulation yields better decompositions at high noise levels than the classical formulation. Two minimization algorithms with an additional positivity constraint are then proposed. The first, a projected Gauss-Newton algorithm, produces maps quickly and of better quality than unconstrained methods. To improve on it, a second, ADMM-type method adds an equality constraint, which reduces the artifacts present in the image. These methods are evaluated on numerical mouse and human-thorax data. To speed up and simplify the methods, an automatic choice of the hyperparameters is proposed, which greatly reduces computation time while preserving good decompositions. Finally, the methods are tested on experimental data from a spectral scanner prototype.
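A minimal numerical sketch of the positivity-constrained decomposition idea (projected gradient descent on a least-squares fit rather than the thesis's projected Gauss-Newton with a Poisson likelihood; the 3-bin, 2-material system and its coefficients are assumptions): given per-energy-bin measurements y ≈ A x, recover nonnegative projected material thicknesses x.

```python
# Minimal sketch: nonnegative material decomposition by projected gradient descent.
import numpy as np

A = np.array([[0.8, 0.3],        # assumed effective attenuation of (material 1, material 2)
              [0.5, 0.6],        # in three energy bins
              [0.2, 0.9]])
x_true = np.array([1.5, 0.4])    # true projected material thicknesses (assumed)
rng = np.random.default_rng(0)
y = A @ x_true + 0.01 * rng.standard_normal(3)

x = np.zeros(2)
step = 1.0 / np.linalg.norm(A.T @ A, 2)          # safe step size for gradient descent
for _ in range(500):
    grad = A.T @ (A @ x - y)                     # gradient of the data-fidelity term
    x = np.maximum(x - step * grad, 0.0)         # gradient step + projection onto x >= 0

print("recovered thicknesses:", x)
```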
APA, Harvard, Vancouver, ISO, and other styles
19

King, Jonathan B. "Optimization of machine allocation in RingLeader." Thesis, 1996. http://hdl.handle.net/1957/34077.

Full text
Abstract:
Many different types of distributed batch scheduling systems have been developed in the last decade to take advantage of the decentralization of computers and the enormous investments that many companies and educational institutions have made in desktop workstations. Based on the premise that the majority of desktop workstations are significantly underutilized, distributed batch systems allow users to submit and run jobs when these workstations are available. While simpler systems determine machine availability by time of day (e.g., 5:00 p.m. to 8:00 a.m.), more sophisticated systems determine availability dynamically, migrating tasks when the availability changes. RingLeader is a distributed batch system currently under development at Hewlett-Packard. Since meeting the objectives of a distributed system relies on the intelligent use of idle workstations, good resource determination and efficient utilization decisions are a high priority for such a system, and system performance depends heavily on the process of deciding where jobs should be run. This thesis explains the development of RingLeader's history-based resource utilization scheme and compares its performance to simpler algorithms.
Graduation date: 1997
APA, Harvard, Vancouver, ISO, and other styles
20

Cardoso, Mário Diogo Pinto da Silva. "Optimization of a cloud-based biological sample data processing system." Dissertação, 2021. https://hdl.handle.net/10216/135765.

Full text
Abstract:
In the context of personalized medicine, progress has been made towards the integration of Photonics and Artificial Intelligence in the creation of a virtual library of disease biomarkers and biological profiles useful for providing quick and accessible mechanisms for screening and stratification of biological samples. In this context, this work addresses (i) the optimization of the existing prototype device responsible for the acquisition of data from biological samples and its transmission to the cloud for further processing, and (ii) the design of a remote-controlled orbital shaker station for the homogenization of biological samples prior to data acquisition. In the current state of the prototype, the data throughput from acquisition to transmission does not scale favorably with the increasing number of biological samples to be analyzed. The approach taken for the first part of this work consisted of analyzing each data transmission step in order to find throughput optimization opportunities. Starting with the intra-device communication using the SPI protocol, it was possible to conclude, after careful waveform investigation and quality assessment, that the chosen SPI clock frequency was below optimal levels and could be raised by close to an order of magnitude. Regarding the codification, processing and wireless transmission of data, details related to the data encoding and to the use of the MQTT and Wi-Fi protocols were studied, and potential bottleneck points were identified through experiments and statistical analysis. This led to the understanding that both the data encoding scheme and the Wi-Fi protocol in use were sub-optimal and could be improved by almost 30%. As for the MQTT protocol, no possible improvements were identified. The second part of the work focused on the development of an orbital shaker for the homogenization of biological samples prior to data acquisition. For this purpose, a suitable brushless DC electric motor and controller were chosen after a proper market search, and the software required to control the motor, together with a suitable user interface on a mobile device, was developed to control the operation in time, direction and rotation speed. The design was successfully tested in a 3D-printed mechanical mock-up.
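As a back-of-the-envelope companion to the throughput analysis described above (all numbers are illustrative assumptions, not measurements from the prototype), the sketch below compares SPI transfer times at two clock frequencies and the payload reduction obtained from a more compact data encoding.

```python
# Minimal sketch: order-of-magnitude throughput arithmetic for SPI and payload encoding.
samples = 10_000
bits_per_sample = 16

def spi_transfer_time(clock_hz):
    return samples * bits_per_sample / clock_hz   # seconds to clock out the raw bits

print(f"SPI @ 1 MHz : {spi_transfer_time(1e6) * 1e3:.1f} ms")
print(f"SPI @ 8 MHz : {spi_transfer_time(8e6) * 1e3:.1f} ms")   # close to an order of magnitude faster

ascii_bytes  = samples * 6          # e.g. '12345\n' text encoding (assumed)
binary_bytes = samples * 2          # raw 16-bit binary encoding
print(f"payload shrink from binary encoding: {100 * (1 - binary_bytes / ascii_bytes):.0f}%")
```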
APA, Harvard, Vancouver, ISO, and other styles
21

Cardoso, Mário Diogo Pinto da Silva. "Optimization of a cloud-based biological sample data processing system." Master's thesis, 2021. https://hdl.handle.net/10216/135765.

Full text
Abstract:
In the context of personalized medicine, progress has been made towards the integration of Photonics and Artificial Intelligence in the creation of a virtual library of disease biomarkers and biological profiles useful for providing quick and accessible mechanisms for screening and stratification of biological samples. In this context, this work addresses (i) the optimization of the existing prototype device responsible for the acquisition of data from biological samples and its transmission to the cloud for further processing, and (ii) the design of a remote-controlled orbital shaker station for the homogenization of biological samples prior to data acquisition. In the current state of the prototype, the data throughput from acquisition to transmission does not scale favorably with the increasing number of biological samples to be analyzed. The approach taken for the first part of this work consisted of analyzing each data transmission step in order to find throughput optimization opportunities. Starting with the intra-device communication using the SPI protocol, it was possible to conclude, after careful waveform investigation and quality assessment, that the chosen SPI clock frequency was below optimal levels and could be raised by close to an order of magnitude. Regarding the codification, processing and wireless transmission of data, details related to the data encoding and to the use of the MQTT and Wi-Fi protocols were studied, and potential bottleneck points were identified through experiments and statistical analysis. This led to the understanding that both the data encoding scheme and the Wi-Fi protocol in use were sub-optimal and could be improved by almost 30%. As for the MQTT protocol, no possible improvements were identified. The second part of the work focused on the development of an orbital shaker for the homogenization of biological samples prior to data acquisition. For this purpose, a suitable brushless DC electric motor and controller were chosen after a proper market search, and the software required to control the motor, together with a suitable user interface on a mobile device, was developed to control the operation in time, direction and rotation speed. The design was successfully tested in a 3D-printed mechanical mock-up.
APA, Harvard, Vancouver, ISO, and other styles
22

Modungwa, Dithoto. "Application of artificial intelligence techniques in design optimization of a parallel manipulator." Thesis, 2015. http://hdl.handle.net/10210/13328.

Full text
Abstract:
D.Phil. (Electrical and Electronic Engineering)
The complexity of the multi-objective functions and the diverse variables involved in optimizing the design of a parallel manipulator, or parallel kinematic machine, inspired the research conducted in this thesis, which investigates techniques suitable for tackling this problem efficiently. Furthermore, the parallel manipulator dimensional synthesis problem is multimodal and has no explicit analytical expressions, so the process requires optimization techniques that offer a high level of accuracy and robustness. The goal of this work is to present methods based on Artificial Intelligence (AI) that may be applied to the challenge stated above. The performance criteria considered include stiffness, dexterity and workspace. The case studied in this work is a 6-degrees-of-freedom (DOF) parallel manipulator, particularly because it is considered much more complicated than lower-DOF mechanisms, owing to the number of independent parameters or inputs needed to specify its configuration (i.e. the higher the DOF, the more independent variables need to be considered). The first contribution of this thesis is a comparative study of several hybrid multi-objective optimization (MOO) AI algorithms applied to parallel manipulator dimensional synthesis. Artificial neural networks are used to approximate a multi-objective function for the analytical solution of the 6-DOF parallel manipulator's performance indices, followed by the implementation of a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) as search algorithms. Two further hybrid techniques are proposed which implement Simulated Annealing and Random Forest in searching for optimum solutions to the multi-objective optimization problem. The final contribution of this thesis is the use of ensemble machine learning algorithms to approximate a multi-objective function for the 6-DOF parallel manipulator analytical solution. The results from the experiments demonstrate that not only neural networks (NN) but also other machine learning algorithms, namely K-Nearest Neighbour (k-NN), M5 Prime (M5'), Zero R (ZR) and Decision Stump (DS), can effectively be used for function approximation.
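A minimal sketch of the surrogate-assisted search pattern described above: a neural-network regressor approximates an expensive objective from sampled designs, and a basic PSO then searches the surrogate. The objective, bounds and hyper-parameters are illustrative assumptions, not the thesis's manipulator model or indices.

```python
# Minimal sketch: regressor surrogate of an expensive objective, searched by basic PSO.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_objective(X):                       # stand-in for stiffness/dexterity analysis
    return np.sum((X - 0.3) ** 2, axis=1)

X_train = rng.random((200, 3))                    # sampled design parameters in [0, 1]^3
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
surrogate.fit(X_train, expensive_objective(X_train))

# Basic particle swarm over the cheap surrogate instead of the expensive model
n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.random((n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), surrogate.predict(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    val = surrogate.predict(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("surrogate optimum near:", gbest)           # true optimum of the stand-in is [0.3, 0.3, 0.3]
```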
APA, Harvard, Vancouver, ISO, and other styles
23

Steere, Edward. "Massive parallelism for combinatorial problems by hardware acceleration with an application to the label switching problem." Thesis, 2016. http://hdl.handle.net/10539/22673.

Full text
Abstract:
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, in fulfilment of the requirements for the degree of Master of Science in Engineering.
This dissertation proposes an approach to solving hard combinatorial problems on massively parallel architectures using parallel metaheuristics. Combinatorial problems are common in many scientific fields, and scientific progress is constrained by the fact that, even using state-of-the-art algorithms, solving hard combinatorial problems can take days or weeks. This is the case with the Label Switching Problem (LSP) in the field of Bioinformatics. In this field, prior work on the LSP has resulted in the program CLUMPP (CLUster Matching and Permutation Program). CLUMPP focuses solely on the use of a sequential, classical heuristic and has had success on smaller, low-complexity problems. By contrast, this dissertation proposes the Parallel Solvers model for the acceleration of hard combinatorial problems. This model draws on the commonalities evident in metaheuristic algorithms and strategies. After investigating the effectiveness of the mechanisms in the Parallel Solvers model with regard to the LSP, the author developed DePermute, an algorithm which can be used to solve the LSP significantly faster. Results were generated from time-based testing of simulated data, as well as data freely available on the Internet as part of various projects. An investigation into the effectiveness of DePermute was carried out on a CPU (Central Processing Unit) based computer; the time-based testing was carried out on a CPU-based computer and on a Graphics Processing Unit (GPU) attached to a CPU host computer. The dissertation also proposes the design of an FPGA (Field Programmable Gate Array) based implementation of DePermute. Using Parallel Solvers in the DePermute algorithm, the time taken for population group sizes K ranging from 5 to 20 was improved by up to two orders of magnitude with the GPU implementation, compared to CLUMPP with aggressive settings. The CPU implementation, while slower than the GPU implementation, still outperforms CLUMPP marginally when CLUMPP uses aggressive settings, usually with better quality, and outperforms it by at least an order of magnitude when CLUMPP is set to use higher-quality settings. Combinatorial problems can be very difficult. Parallel Solvers has been effective in the field of Bioinformatics in solving the LSP, and this dissertation proposes that it might assist in the reasoning about and design of algorithms in other fields.
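The label switching problem itself has a compact illustration: cluster labels from independent runs are arbitrary permutations of one another, and a matching step aligns them. The sketch below uses the Hungarian algorithm from SciPy as an illustration of the matching idea, not CLUMPP's or DePermute's actual algorithm; the small example data are assumptions.

```python
# Minimal sketch: align arbitrary cluster labels from two runs via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

run_a = np.array([0, 0, 1, 1, 2, 2, 2, 0])      # cluster assignment from run A
run_b = np.array([2, 2, 0, 0, 1, 1, 1, 2])      # same structure, permuted labels, from run B

k = 3
agreement = np.zeros((k, k))
for a, b in zip(run_a, run_b):
    agreement[a, b] += 1                         # how often label a in A co-occurs with label b in B

row, col = linear_sum_assignment(-agreement)     # maximize total agreement
mapping = dict(zip(col, row))                    # relabel run B into run A's label space
relabelled_b = np.array([mapping[b] for b in run_b])
print("labels now match:", np.array_equal(run_a, relabelled_b))
```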
MT2017
APA, Harvard, Vancouver, ISO, and other styles
24

"An agent-assisted board-level functional fault diagnostic framework: design and optimization." 2014. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291511.

Full text
Abstract:
Advances in semiconductor technology and design automation methods have introduced a new era for electronic products. With design sizes in millions of logic gates and operating frequencies in GHz, defects-per-million rates continue to increase, and defects are manifesting themselves in subtle ways.
Diagnosing functional failures in complicated electronic boards is a challenging task, wherein debug technicians try to identify defective components by analyzing syndromes obtained from the application of diagnostic tests. The effectiveness and efficiency of diagnosis rely heavily on the quality of the in-house developed diagnostic tests and on the debug technicians' knowledge and experience, neither of which can be guaranteed nowadays. To tackle this problem, this thesis proposes a novel agent-assisted diagnostic framework for board-level functional failures, namely AgentDiag, which helps evaluate the quality of the diagnostic tests and bridges the knowledge gap between the diagnostic programmers who write the tests and the debug technicians who conduct in-field diagnosis, using a lightweight model of the boards and tests.
Machine learning algorithms have been advocated for automated diagnosis of board-level functional failures due to the extreme complexity of the problem. Such reasoning-based solutions, however, remain ineffective at the early stage of the product cycle, simply because there are insufficient historical data for training a diagnostic system that has a large number of test syndromes. Guided by a proposed metric, isolation capability, AgentDiag is able to leverage the knowledge from the lightweight model to select a reduced test syndrome set for diagnosis in an adaptive manner.
While AgentDiag is effective in improving diagnostic accuracy, this technique, by excluding some test syndromes, may cause information loss for diagnosis. The thesis further presents a novel test syndrome merging methodology to address this problem: by leveraging the domain knowledge of the diagnostic tests and the board structural information, we adaptively reduce the feature size of the diagnostic system by selectively merging test syndromes so that it can effectively utilize the available training cases.
Experimental results on real industrial boards and an OpenRISC design demonstrate the effectiveness of the proposed solutions.
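A minimal sketch of the reasoning-based diagnosis setting described above (not AgentDiag itself; the tiny syndrome data set and component names are assumptions): each historical failure is a binary pass/fail syndrome vector labelled with the component eventually found faulty, and a classifier predicts the faulty component for a new failing board.

```python
# Minimal sketch: predict the faulty component from binary test syndromes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns = fail(1)/pass(0) outcome of 4 diagnostic tests, label = repaired component (assumed data)
syndromes = np.array([[1, 0, 0, 1],
                      [1, 0, 1, 1],
                      [0, 1, 0, 0],
                      [0, 1, 1, 0],
                      [0, 0, 1, 1],
                      [0, 0, 1, 0]])
faulty_component = np.array(["U1", "U1", "U7", "U7", "DIMM", "DIMM"])

clf = DecisionTreeClassifier(random_state=0).fit(syndromes, faulty_component)
new_board = np.array([[1, 0, 0, 0]])           # syndrome of a newly failing board
print("suspected component:", clf.predict(new_board)[0])
```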
Sun, Zelong.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2014.
Includes bibliographical references (leaves 47-51).
Abstracts also in Chinese.
Title from PDF title page (viewed on October 12, 2016).
APA, Harvard, Vancouver, ISO, and other styles
25

Fazelnia, Ghazal. "Optimization for Probabilistic Machine Learning." Thesis, 2019. https://doi.org/10.7916/d8-jm7k-2k98.

Full text
Abstract:
We have access to a greater variety of datasets than at any time in history. Every day, more data are collected from natural resources and digital platforms. Great advances in machine learning research over the past few decades have relied strongly on the availability of these datasets. However, analyzing them imposes significant challenges that are mainly due to two factors: first, the datasets have complex structures with hidden interdependencies; second, most of the valuable datasets are high dimensional and large scale. The main goal of a machine learning framework is to design a model that is a valid representative of the observations and to develop a learning algorithm to make inferences about unobserved or latent data based on the observations. Discovering hidden patterns and inferring latent characteristics in such datasets is one of the greatest challenges in machine learning research. In this dissertation, I investigate some of the challenges in modeling and algorithm design, and present my research results on how to overcome these obstacles. Analyzing data generally involves two main stages. The first stage is designing a model that is flexible enough to capture complex variation and latent structures in the data and robust enough to generalize well to unseen data; designing an expressive and interpretable model is one of the crucial objectives in this stage. The second stage involves training the learning algorithm on the observed data and measuring the accuracy of the model and learning algorithm. This stage usually involves an optimization problem whose objective is to tune the model to the training data and learn the model parameters. Finding a global optimum, or a sufficiently good local optimum, is one of the main challenges in this step. Probabilistic models are among the best-known models for capturing the data generating process and quantifying uncertainties in data using random variables and probability distributions. They are powerful models that have been shown to be adaptive and robust and to scale well to large datasets. However, most probabilistic models have a complex structure, and training them can become challenging, commonly due to the presence of intractable integrals in the calculation. To remedy this, they require approximate inference strategies that often result in non-convex optimization problems. The optimization part ensures that the model is the best representative of the data or the data generating process, but the non-convexity of an optimization problem takes away any general guarantee of finding a globally optimal solution. It is shown later in this dissertation that inference for a significant number of probabilistic models requires solving a non-convex optimization problem. One of the well-known methods for approximate inference in probabilistic modeling is variational inference. In the Bayesian setting, the target is to learn the true posterior distribution of the model parameters given the observations and prior distributions. The main challenge involves marginalization over all the variables in the model except the variable of interest. This high-dimensional integral is generally computationally hard, and for many models there is no known polynomial-time algorithm for calculating it exactly. Variational inference finds an approximate posterior distribution for Bayesian models when finding the true posterior distribution is analytically or numerically impossible. It assumes a family of distributions for the estimation and finds the member of that family closest to the true posterior distribution using a distance measure. For many models, though, this technique requires solving a non-convex optimization problem that has no general guarantee of reaching a globally optimal solution. This dissertation presents a convex relaxation technique for dealing with the hardness of the optimization involved in the inference. The proposed convex relaxation technique is based on semidefinite optimization, which is generally applicable to polynomial optimization problems; theoretical foundations and in-depth details of this relaxation are presented in this work. Linear dynamical systems represent the functionality of many real-world physical systems: they describe the dynamics of a linear time-varying observation controlled by a controller unit with quadratic cost-function objectives. Designing distributed and decentralized controllers is the goal of many such systems, which, computationally, results in a non-convex optimization problem. In this dissertation, I further investigate the issues arising in this area and develop a convex relaxation framework to deal with the optimization challenges. Setting the correct number of model parameters is an important aspect of a good probabilistic model. If there are only a few parameters, the model may fail to capture all the essential relations and components in the observations, while too many parameters may cause significant complications in learning or overfitting to the observations. Non-parametric models are suitable techniques for dealing with this issue: they allow the model to learn the appropriate number of parameters to describe the data and make predictions. In this dissertation, I present my work on designing Bayesian non-parametric models as powerful tools for learning representations of data, and I describe the algorithm we derived to efficiently train the model on the observations and learn the number of model parameters. Later in this dissertation, I present my work on designing probabilistic models in combination with deep learning methods for representing sequential data. Sequential datasets comprise a significant portion of resources in machine learning research, and designing models to capture dependencies in sequential datasets is of great interest, with a wide variety of applications in engineering, medicine and statistics. Recent advances in deep learning research have shown exceptional promise in this area, but such models lack interpretability in their general form. To remedy this, I present my work on mixing probabilistic models with neural network models, which results in better performance and more expressive results.
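A compact illustration of the semidefinite-relaxation idea mentioned above (a generic example, not the dissertation's specific formulation; the random Q is an assumption): minimizing x^T Q x over x in {-1, +1}^n is lifted to X = x x^T and relaxed to a convex SDP with a positive semidefinite X of unit diagonal, which gives a lower bound from which a feasible solution can be rounded.

```python
# Minimal sketch: SDP relaxation of a nonconvex binary quadratic problem.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 6
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                               # symmetric quadratic form (assumed)

X = cp.Variable((n, n), PSD=True)               # relaxation of the rank-1 matrix x x^T
problem = cp.Problem(cp.Minimize(cp.trace(Q @ X)), [cp.diag(X) == 1])
problem.solve()

print("SDP lower bound on the nonconvex problem:", problem.value)
# Recover a feasible +/-1 point by rounding the top eigenvector of the relaxed solution
x_rounded = np.sign(np.linalg.eigh(X.value)[1][:, -1])
print("rounded objective:", x_rounded @ Q @ x_rounded)
```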
APA, Harvard, Vancouver, ISO, and other styles
26

Yan, Jiaxiang. "Modeling, monitoring and optimization of discrete event systems using Petri nets." 2014. http://hdl.handle.net/1805/3874.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Yan, Jiaxiang. M.S.E.C.E., Purdue University, May 2013. Modeling, Monitoring and Optimization of Discrete Event Systems Using Petri Nets. Major Professor: Lingxi Li. In recent decades, research on discrete event systems (DESs) has attracted more and more attention because of the fast development of intelligent control strategies. Such control measures combine conventional control strategies with discrete decision-making processes that simulate human decision making. Due to the scale and complexity of common DESs, dedicated models, monitoring methods and optimal control strategies for them are necessary. Among the various DES models, Petri nets are known for their advantages in dealing with asynchronous processes; they have been widely applied in intelligent transportation systems (ITS) and communication technology in recent years. By encoding the Petri net state, we can also enable fault detection and identification capability in DESs and mitigate potential human errors. This thesis studies various problems in the context of DESs that can be modeled by Petri nets. In particular, we focus on systematic modeling, asynchronous monitoring and the design of optimal control strategies for Petri nets. The thesis starts by looking at the systematic modeling of ITS. A microscopic model of a signalized intersection and its two-layer timed Petri net representation are proposed, where the first layer represents the intersection and the second layer represents the traffic light system. Both deterministic and stochastic transitions are involved in this Petri net representation. The detailed operation of the representation is described, and its improvement over previous models is shown by comparison. We then study the asynchronous monitoring of sensor networks. An event sequence reconstruction algorithm for a given sensor network, based on asynchronous observations of its state changes, is proposed. We assume that the sensor network is modeled as a Petri net and that the asynchronous observations are in the form of state (token) changes at different places in the Petri net. More specifically, the observed sequences of state changes are provided by local sensors and are asynchronous, i.e., they only contain partial information about the ordering of the state changes that occur. We propose an approach that partitions the given net into several subnets and reconstructs the event sequence for each subnet. We then develop an algorithm that reconstructs the event sequences for the entire net that are consistent with: 1) the asynchronous observations of state changes; 2) the event sequences of each subnet; and 3) the structure of the given Petri net. We also discuss the algorithmic complexity. The final problem studied in this thesis is the optimal design of Petri net controllers with fault-tolerant ability. In particular, we consider multiple-fault detection and identification in Petri nets that have state machine structures (i.e., every transition in the net has only one input place and one output place). We develop approximation algorithms to design the fault-tolerant Petri net controller that achieves the minimal number of connections with the original controller. A design example for an automated guided vehicle (AGV) system is also provided to illustrate our approaches.
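A minimal sketch of the Petri-net machinery the abstract refers to (not the thesis's ITS model; the tiny traffic-light-flavoured net is an assumption): places hold tokens, a transition is enabled when all of its input places hold a token, and firing it moves tokens from input places to output places.

```python
# Minimal sketch: token marking, enabling check, and transition firing in a small Petri net.
marking = {"red": 1, "green": 0, "queue": 2, "served": 0}

transitions = {
    "switch_to_green": {"in": ["red"], "out": ["green"]},
    "serve_vehicle":   {"in": ["green", "queue"], "out": ["green", "served"]},
}

def enabled(t):
    return all(marking[p] >= 1 for p in transitions[t]["in"])

def fire(t):
    assert enabled(t), f"{t} is not enabled"
    for p in transitions[t]["in"]:
        marking[p] -= 1
    for p in transitions[t]["out"]:
        marking[p] += 1

for t in ["switch_to_green", "serve_vehicle", "serve_vehicle"]:
    fire(t)
print(marking)   # {'red': 0, 'green': 1, 'queue': 0, 'served': 2}
```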
APA, Harvard, Vancouver, ISO, and other styles
27

Jindal, Prachee. "Compiler Assisted Energy Management For Sensor Network Nodes." Thesis, 2008. http://hdl.handle.net/2005/819.

Full text
Abstract:
Emerging low-power, embedded, wireless sensor devices are useful for a wide range of applications, yet have very limited processing, storage and, especially, energy resources. Sensor networks have a wide variety of applications in medical monitoring, environmental sensing and military surveillance. Due to the large number of sensor nodes that may be deployed and the long system lifetimes required, replacing the battery is not an option. Sensor systems must use the minimal possible energy while operating over a wide range of scenarios. Most of the effort on energy management in sensor networks has concentrated on minimizing energy consumption in the communication subsystem. Some researchers have also dealt with the issue of minimizing the energy used by the computing subsystem of a sensor network node, and some proposals using energy-aware software have been made. Relatively little work has been done on compiler-controlled energy management in sensor networks. In this thesis, we present our investigations of how compiler techniques can be used to minimize CPU energy consumption in sensor network nodes. One energy management technique used effectively in general-purpose processors is dynamic voltage scaling (DVS). In this thesis we implement and evaluate a compiler-assisted DVS algorithm and show its usefulness for a small sensor node processor; we were able to achieve an energy saving of 29% with a small performance slowdown. Scratchpad memories have been widely used for improving performance. We show that if the scratchpad size for the system is chosen carefully, then large energy savings can be achieved by using a compiler-assisted scratchpad allocation policy: with a small 512-byte scratchpad memory we were able to achieve 50% energy savings. We also studied the behavior of dynamic voltage scaling in the presence of scratchpad memory; our results show that, in the presence of scratchpad memory, fewer opportunities are found for applying dynamic voltage scaling techniques. The sensor network community lacks a comprehensive benchmark suite, so for our study we also implemented a set of applications representative of the computational workload on sensor network nodes. The techniques studied in this thesis can easily be integrated with existing energy management techniques in sensor networks, yielding additional energy savings.
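A back-of-the-envelope sketch of why compiler-assisted DVS saves energy: dynamic energy scales roughly with the square of the supply voltage, so if a code region has slack before its deadline, the compiler can lower the frequency and voltage so the region finishes just in time. The voltage/frequency pairs and effective capacitance below are illustrative assumptions, not the thesis's measurements.

```python
# Minimal sketch: quadratic voltage/energy arithmetic behind dynamic voltage scaling.
def dynamic_energy(capacitance, voltage, cycles):
    # E_dynamic ~ C * V^2 per cycle, summed over all executed cycles
    return capacitance * voltage ** 2 * cycles

cycles = 1_000_000
c_eff = 1e-9                      # effective switched capacitance (assumed)

full_speed = dynamic_energy(c_eff, voltage=1.8, cycles=cycles)   # e.g. 200 MHz, finishes early
scaled     = dynamic_energy(c_eff, voltage=1.3, cycles=cycles)   # e.g. 100 MHz, still meets deadline

print(f"energy at full speed: {full_speed * 1e3:.2f} mJ")
print(f"energy when scaled  : {scaled * 1e3:.2f} mJ")
print(f"saving              : {100 * (1 - scaled / full_speed):.0f}%")
```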
APA, Harvard, Vancouver, ISO, and other styles