Dissertations / Theses on the topic 'Intelligent optimization techniques'

Consult the top 46 dissertations / theses for your research on the topic 'Intelligent optimization techniques.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Ngatchou, Patrick. "Intelligent techniques for optimization and estimation /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5827.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mohamed, Abdelhamed. "Optimization Techniques for Reconfigurable Intelligent Surfaces Assisted Wireless Networks." Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST137.

Full text
Abstract:
Recently, the emergence of the reconfigurable intelligent surface (RIS) has attracted heated attention from both industry and academia. An RIS is a planar surface that consists of a large number of low-cost passive reflecting elements. By carefully adjusting the phase shifts of the reflecting elements, an RIS can reshape the wireless environment for better communication. In general, this thesis contributes: (i) an analysis of the performance of RISs based on accurate and realistic electromagnetic reradiation models, and (ii) optimization frameworks for enhancing communication-system performance in two use cases: (a) jointly improving the information rate and the amount of harvested power in an RIS-aided MISO downlink multiuser wireless network, and (b) enhancing the spectral efficiency for a large number of users located at the cell edge or on the other side of the RIS by utilizing intelligent omni-surfaces (IOSs). Chapter 1 introduces the challenges of fulfilling the requirements of 6G networks, the concept of smart radio environments, and the RIS as one of the enabling technologies. In future communications, the RIS is a key technique with potential applications that achieve seamless connectivity and lower energy consumption at the same time. Chapter 2 introduces the state-of-the-art optimization techniques developed for RIS-aided systems. It first presents the system models of RIS-aided MIMO systems and then investigates the reflection principle of RISs; in addition, it discusses the optimization challenges of RIS-assisted systems, and the proposed optimization techniques for designing continuous and discrete phase shifts are presented in detail. Chapter 3 studies the impact of realistic reradiation models for RISs as a function of the sub-wavelength inter-distance between nearby elements of the RIS, the quantization levels of the reflection coefficients, the interplay between the amplitude and phase of the reflection coefficients, and the presence of electromagnetic interference. In conclusion, our study shows that, due to design constraints such as the need to use quantized reflection coefficients or the inherent interplay between the phase and the amplitude of the reflection coefficients, an RIS may reradiate power towards unwanted directions that depend on the intended and interfering electromagnetic waves. Chapter 4 addresses the problem of simultaneously optimizing the information rate and the harvested power in an RIS-aided MISO downlink multiuser wireless network with simultaneous wireless information and power transfer (SWIPT). A practical algorithm is developed through an interplay of alternating optimization, sequential optimization, and pricing-based methods. Chapter 5 proposes an optimization algorithm with a rapid convergence rate, within a few iterations, for maximizing the sum rate in IOS-aided MIMO broadcast channels, which can be exploited to serve cell-edge users and enhance network coverage. This work's distinguishing feature lies in considering that the reflection and transmission coefficients of an IOS are tightly coupled. Finally, Chapter 6 summarizes the main findings of the thesis and discusses possible future directions that are worth investigating to unlock the full potential of the RIS and bring it into practice.
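To make the reflection principle concrete, the sketch below shows the textbook single-user phase-alignment rule and the effect of 1-bit phase quantization, one of the hardware constraints this thesis analyses. It is a minimal illustrative model assuming ideal i.i.d. Rayleigh channels and a lossless RIS, not code from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of RIS reflecting elements (illustrative)

# Assumed i.i.d. Rayleigh channels: transmitter -> RIS (h) and RIS -> user (g)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# Ideal continuous phase shifts: co-phase every cascaded path h_n * g_n
theta = -np.angle(h * g)
power = np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2

# 1-bit quantized phase shifts (nearest of {0, pi} on the unit circle),
# one of the design constraints whose side effects Chapter 3 studies
theta_q = np.where(np.cos(theta) >= 0, 0.0, np.pi)
power_q = np.abs(np.sum(h * np.exp(1j * theta_q) * g)) ** 2

print(f"received power, continuous phases: {power:.2f}")
print(f"received power, 1-bit phases:      {power_q:.2f}")
```

With continuous phases every cascaded path adds coherently, so the received power grows like N^2; quantization keeps most of that gain but loses a fixed fraction of it.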
APA, Harvard, Vancouver, ISO, and other styles
3

NETO, OMAR PARANAIBA VILELA. "DESIGN, OPTIMIZATION, SIMULATION AND PREDICTION OF NANOSTRUCTURES PROPERTIES BY COMPUTATIONAL INTELLIGENCE TECHNIQUES: INTELLIGENT COMPUTATIONAL NANOTECHNOLOGY." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2009. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=15182@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO<br>CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO<br>FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO<br>Esta tese investiga a Nanotecnologia Computacional Inteligente, isto é, o apoio de técnicas de Inteligência Computacional (IC) nos desafios enfrentados pela Nanociência e Nanotecnologia. Por exemplo, utilizam-se as Redes Neurais para construir sistemas de inferência capazes de relacionar um conjunto de parâmetros de entrada com as características finais das nanoestruturas, permitindo aos pesquisadores prever o comportamento de outras nanoestruturas ainda não realizadas experimentalmente. A partir dos sistemas de inferência, Algoritmos Genéticos são então empregados com o intuito de encontrar o conjunto ótimo de parâmetros de entrada para a síntese (projeto) de uma nanoestrutura desejada. Numa outra linha de investigação, os Algoritmos Genéticos são usados para a otimização de parâmetros de funções de base para cálculos ab initio. Neste caso, são otimizados os expoentes das funções gaussianas que compõem as funções de base. Em outra abordagem, os Algoritmos Genéticos são aplicados na otimização de agregados atômicos e moleculares, permitindo aos pesquisadores estudar teoricamente os agregados formados experimentalmente. Por fim, o uso destes algoritmos, aliado ao uso de simuladores, é aplicado na síntese automática de OLEDs e circuitos de Autômatos Celulares com Pontos Quânticos (QCA). Esta pesquisa revelou o potencial da IC em aplicações inovadoras. Os sistemas híbridos de otimização e inferência, por exemplo, concebidos para prever a altura, a densidade e o desvio padrão de pontos quânticos auto-organizáveis, apresentam altos níveis de correlação com os resultados experimentais e baixos erros percentuais (inferior a 10%). O módulo de elasticidade de nanocompósitos também é previsto por um sistema semelhante e apresenta erros percentuais ainda menores, entorno de 4%. Os Algoritmos Genéticos, juntamente com o software de modelagem molecular Gaussian03, otimizam os parâmetros de funções que geram expoentes de primitivas gaussianas de funções de base para cálculos hartree-fock, obtendo energias menores do que aquelas apresentadas nas referencias. Em outra aplicação, os Algoritmos Genéticos também se mostram eficientes na busca pelas geometrias de baixa energia dos agregados atômicos de (LiF)nLi+, (LiF)n e (LiF)nF-, obtendo uma série de novos isômeros ainda não propostos na literatura. Uma metodologia semelhante é aplicada em um sistema inédito para entender a formação de agregados moleculares de H2O iônicos, partindo-se de agregados neutros. Os resultados mostram como os agregados podem ser obtidos a partir de diferentes perspectivas, formando estruturas ainda não investigadas na área científica. Este trabalho também apresenta a síntese automática de circuitos de QCA robustos. Os circuitos obtidos apresentam grau de polarização semelhante àqueles propostos pelos especialistas, mas com uma importante redução na quantidade de células. Por fim, um sistema envolvendo Algoritmos Genéticos e um modelo analítico de OLEDs multicamadas otimizam as concentrações de materiais orgânicos em cada camada com o intuito de obter dispositivos mais eficientes. Os resultados revelam um dispositivo 9,7% melhor que a solução encontrada na literatura, sendo estes resultados comprovados experimentalmente. 
Em resumo, os resultados da pesquisa permitem constatar que a inédita integração das técnicas de Inteligência Computacional com Nanotecnologia Computacional, aqui denominada Nanotecnologia Computacional Inteligente, desponta como uma promissora alternativa para acelerar as pesquisas em Nanociência e o desenvolvimento de aplicações nanotecnológicas.<br>This thesis investigates the Intelligent Computational Nanotechnology, that is, the support of Computational Intelligence (CI) techniques in the challenges faced by the Nanoscience and Nanotechnology. For example, Neural Networks are used for build Inference systems able to relate a set of input parameters with the final characteristics of the nanostructures, allowing the researchers foresees the behavior of other nanostructures not yet realized experimentally. From the inference systems, Genetic Algorithms are then employees with the intention of find the best set of input parameters for the synthesis (project) of a desired nanostructure. In another line of inquiry, the Genetic Algorithms are used for the base functions optimization used in ab initio calculations. In that case, the exponents of the Gaussian functions that compose the base functions are optimized. In another approach, the Genetic Algorithms are applied in the optimization of molecular and atomic clusters, allowing the researchers to theoretically study the experimentally formed clusters. Finally, the use of these algorithms, use together with simulators, is applied in the automatic synthesis of OLEDs and circuits of Quantum Dots Cellular Automata (QCA). This research revealed the potential of the CI in innovative applications. The hybrid systems of optimization and inference, for example, conceived to foresee the height, the density and the height deviation of self-assembled quantum dots, present high levels of correlation with the experimental results and low percentage errors (lower to 10%). The Young’s module of nanocomposites is also predicted by a similar system and presents percentage errors even smaller, around 4%. The Genetic Algorithms, jointly with the package of molecular modeling Gaussian03, optimize the parameters of functions that generate exponents of primitive Gaussian functions of base sets for hartree-fock calculations, obtaining smaller energies than those presented in the literature. In another application, the Genetic Algorithms are also efficient in the search by the low energy geometries of the atomic clusters of (LiF) nLi +, (LiF) n and (LiF) nF-, obtaining a set of new isomers yet not propose in the literature. A similar methodology is applied in an unpublished system for understand the formation of molecular cluster of ionic H2O from neutral clusters. The results show how the clusters can be obtained from different perspectives, forming structures not yet investigate in the scientific area. This work also presents the automatic synthesis of robust QCA circuits. The circuits obtained present high polarization, similar to those proposed by the specialists, but with an important reduction in the quantity of cells. Finally, a system involving Genetic Algorithms and an analytic model of multilayer OLEDs optimize the concentrations of organic material in each layer in order to obtain more efficient devices. The results reveal a device 9.7% better that the solution found in the literature, being these results verified experimentally. 
In summary, the results of the proposed research allow observe that the unpublished integration of the techniques of Computational Intelligence with Computational Nanotechnology, here named Intelligent Computational Nanotechnology, emerges as a promising alternative for accelerate the researches in Nanoscince and the development of application in Nanotechnology.
APA, Harvard, Vancouver, ISO, and other styles
4

Li, Futong. "Global Optimization Techniques Based on Swarm-intelligent and Gradient-free Algorithms." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42307.

Full text
Abstract:
The need for solving nonlinear optimization problems is pervasive in many fields. Particle swarm optimization, advantageous for its simple underlying implementation logic, and simultaneous perturbation stochastic approximation, famous for the computational savings of its gradient-free nature, are two solutions that deserve attention. Many researchers have exploited their merits in widely challenging applications. However, both are known to suffer from a severe drawback: they do not converge effectively to the global best solution, because of the local "traps" spread across the search space. In this thesis, we propose two approaches to remedy this issue by combining their advantages. In the first algorithm, the gradient information helps optimize half of the particles at the initialization stage and then further updates the global best position. If the global best position is located in one of the local optima, an additional gradient estimate of the search surface can help it jump out. The second algorithm extends the use of gradient information to all particles in the swarm to obtain optimized personal best positions. Both obey the rule created for updating the particle(s): the solution found after applying the gradient information must perform better. In this work, the experiments include five cases; three previous methods with a similar theoretical basis and the two basic algorithms participate in all five. The experimental results prove that the two proposed algorithms effectively improve the basic algorithms and even outperform the three previously designed algorithms in some scenarios.
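As an illustration of the hybrid scheme this abstract describes, the sketch below combines a plain PSO loop with an SPSA-style two-evaluation gradient estimate used to nudge the global best, keeping the move only if it improves. All constants (inertia, acceleration, step sizes) are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

def spsa_grad(f, x, c=1e-2, rng=None):
    """Gradient-free (SPSA-style) gradient estimate from two evaluations."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    # dividing by delta_i equals multiplying by delta_i since delta_i = +/-1
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * delta

def hybrid_pso(f, dim=2, n=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))          # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()          # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
        # Gradient-assisted escape: nudge the global best downhill and keep
        # the move only if it improves, the acceptance rule described above.
        trial = g - 0.01 * spsa_grad(f, g, rng=rng)
        if f(trial) < f(g):
            i = np.argmin(pbest_f)
            pbest[i], pbest_f[i] = trial, f(trial)
            g = trial
    return g, f(g)

# Example on the multimodal Rastrigin function
rastrigin = lambda z: 10.0 * z.size + np.sum(z * z - 10.0 * np.cos(2 * np.pi * z))
print(hybrid_pso(rastrigin))
```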
APA, Harvard, Vancouver, ISO, and other styles
5

Brka, Adel. "Optimisation of stand-alone hydrogen-based renewable energy systems using intelligent techniques." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2015. https://ro.ecu.edu.au/theses/1756.

Full text
Abstract:
Wind and solar irradiance are promising renewable alternatives to fossil fuels due to their availability and topological advantages for local power generation. However, their intermittent and unpredictable nature limits their integration into energy markets. Fortunately, these disadvantages can be partially overcome by using them in combination with energy storage and back-up units. However, the increased complexity of such systems relative to single energy systems makes an optimal sizing method and an appropriate Power Management Strategy (PMS) research priorities. This thesis contributes to the design and integration of stand-alone hybrid renewable energy systems by proposing methodologies to optimise the sizing and operation of hydrogen-based systems. These use intelligent techniques such as the Genetic Algorithm (GA), Particle Swarm Optimisation (PSO) and Neural Networks (NNs). Three design aspects have been investigated: component sizing, renewables forecasting, and operation coordination. The thesis includes a series of four journal articles. The first article introduced a multi-objective sizing methodology to optimise stand-alone, hydrogen-based systems using GA. The sizing method was developed to calculate the optimum capacities of system components that underpin an appropriate compromise between investment, renewables penetration and environmental footprint. The system reliability was assessed using the Loss of Power Supply Probability (LPSP), for which a novel modification was introduced to account for load losses during transient start-up times of the back-ups. The second article investigated the factors that may influence the accuracy of NNs when applied to forecasting short-term renewable energy. That study involved two NNs, Feedforward and Radial Basis Function, in an investigation of the effect of the type, span and resolution of training data, and the length of the training pattern, on short-term wind speed prediction accuracy. The impact of forecasting error on estimating the available wind power was also evaluated for a commercially available wind turbine. The third article experimentally validated the concept of a NN-based (predictive) PMS. A lab-scale (stand-alone) hybrid energy system, which consisted of an emulated renewable power source, a battery bank, and a hydrogen fuel cell coupled with metal hydride storage, satisfied the dynamic load demand. The overall power flow of the constructed system was controlled by a NN-based PMS implemented using MATLAB and LabVIEW software. The effects of several control parameters, which are either hardware dependent or affect the predictive algorithm, on system performance were investigated under the predictive PMS, which was benchmarked against a rule-based (non-intelligent) strategy. The fourth article investigated the potential impact of a NN-based PMS on the economic and operational characteristics of such hybrid systems. That study benchmarked a rule-based PMS against its (predictive) counterpart. In addition, the effect of real-time fuel cell optimisation using PSO, when applied in the context of the predictive PMS, was also investigated. The comparative analysis was based on deriving the cost of energy, life-cycle emissions, renewables penetration, and duty cycles of the fuel cell and electrolyser units. The effects of other parameters, such as the LPSP level and prediction accuracy, were also investigated. The developed techniques outperformed traditional approaches by drawing upon complex artificial intelligence models.
The research could underpin cost-effective, reliable power supplies for remote communities, as well as reduce the dependence on fossil fuels and the associated environmental footprint.
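For reference, the reliability index named above has a simple energy-based form; the sketch below uses one common definition with an assumed daily load profile. The thesis's modified LPSP additionally accounts for load lost during back-up start-up transients, which this sketch omits.

```python
import numpy as np

def lpsp(load, supplied):
    """Loss of Power Supply Probability: share of demanded energy not served.
    (Definitions vary; this is the common energy-ratio form.)"""
    deficit = np.maximum(load - supplied, 0.0)
    return deficit.sum() / load.sum()

# Illustrative hourly profiles over one day (kW)
load = np.array([3, 3, 3, 4, 5, 6, 7, 7, 6, 5, 4, 3,
                 3, 3, 4, 5, 6, 7, 8, 8, 7, 5, 4, 3], dtype=float)
supplied = np.minimum(load, 6.0)   # assumed capacity-limited hybrid system

print(f"LPSP = {lpsp(load, supplied):.3f}")
```

In a sizing loop, a GA individual encodes component capacities, the supplied profile is simulated from them, and the LPSP enters the fitness either as a constraint or as an objective.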
APA, Harvard, Vancouver, ISO, and other styles
6

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Full text
Abstract:
As new technologies for energy-intensive industries are continuously developed, existing plants gradually fall behind in efficiency and productivity. Hard market competition and environmental legislation force these traditional plants to cease operation and shut down. Process improvement and retrofit projects are essential to sustain the operational performance of such plants. Current approaches to process improvement are mainly process integration, process optimization and process intensification. In general, these fields rely on mathematical optimization, the solver's experience and operational heuristics, and they serve as the foundation for process improvement; however, their performance can be further enhanced by modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to process improvement in energy-intensive industrial processes. The work approaches this problem by simulating industrial systems and contributes the following: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modeling and optimization of individual units; (ii) application of dimensionality reduction (e.g., Principal Component Analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analyzing and removing problematic parts of a system, together with a proposed extension that allows multi-dimensional problems to be solved with a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface for the conventional Process Graph (P-graph); (vii) highlighting the future of artificial intelligence and process engineering in biosystems through a commercially oriented multi-omics paradigm.
APA, Harvard, Vancouver, ISO, and other styles
7

Hernández, Pibernat Hugo. "Swarm intelligence techniques for optimization and management tasks in sensor networks." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/81861.

Full text
Abstract:
The main contributions of this thesis are located in the domain of wireless sensor networks. More in detail, we introduce energy-aware algorithms and protocols in the context of the following topics: self-synchronized duty-cycling in networks with energy harvesting capabilities, distributed graph coloring, and minimum energy broadcasting with realistic antennas. In the following, we review the research conducted in each case. We propose a self-synchronized duty-cycling mechanism for sensor networks. This mechanism is based on the working and resting phases of natural ant colonies, which show self-synchronized activity phases. The main goal of duty-cycling methods is to save energy by efficiently alternating between different states. In the case at hand, we considered two different states: the sleep state, where communications are not possible and energy consumption is low; and the active state, where communications result in a higher energy consumption. In order to test the model, we conducted an extensive experimentation with synchronous simulations on mobile and static networks, also considering asynchronous networks. Later, we extended this work by assuming a broader point of view and including a comprehensive study of the parameters. In addition, thanks to a collaboration with the Technical University of Braunschweig, we were able to test our algorithm in the realistic sensor network simulator Shawn (http://shawn.sf.net). The second part of this thesis is devoted to the desynchronization of wireless sensor nodes and its application to the distributed graph coloring problem. In particular, our research is inspired by the calling behavior of Japanese tree frogs, whose males use their calls to attract females. Interestingly, as female frogs are only able to correctly localize the male frogs when their calls are not too close in time, groups of males located near each other desynchronize their calls. Based on a model of this behavior from the literature, we propose a novel algorithm with applications to the field of sensor networks. More in detail, we analyzed the ability of the algorithm to desynchronize neighboring nodes. Furthermore, we considered extensions of the original model, thereby improving its desynchronization capabilities. To illustrate the potential benefits of desynchronized networks, we then focused on distributed graph coloring; later, we analyzed the algorithm more extensively and showed its performance on a larger set of benchmark instances. The classical minimum energy broadcast (MEB) problem in wireless ad hoc networks, which is well studied in the scientific literature, considers an antenna model that allows the adjustment of the transmission power to any desired real value from zero up to the maximum transmission power level. However, when specifically considering sensor networks, a look at the currently available hardware shows that this antenna model is not very realistic. In this work we re-formulate the MEB problem for an antenna model that is realistic for sensor networks, in which transmission power levels are chosen from a finite set of possible ones. A further contribution concerns the adaptation of an ant colony optimization algorithm, currently the state of the art for the classical MEB problem, to the more realistic problem version, the so-called minimum energy broadcast problem with realistic antennas (MEBRA). The obtained results show that the advantage of ant colony optimization over classical heuristics even grows when the number of possible transmission power levels decreases. Finally, we build a distributed version of the algorithm, which also compares quite favorably against centralized heuristics from the literature.
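A minimal, centralized simulation can convey the frog-inspired desynchronization idea: each node shifts its firing phase toward the midpoint of its two phase-neighbours, so firings spread evenly around the cycle. The update rule, constants and fully connected topology are illustrative assumptions, not the thesis's model.

```python
import numpy as np

def desync(n=5, rounds=200, alpha=0.3, seed=1):
    """Each node moves its firing phase toward the midpoint of its two
    phase-neighbours on the unit cycle, spreading firings evenly."""
    rng = np.random.default_rng(seed)
    phase = rng.random(n)                     # firing times in [0, 1)
    for _ in range(rounds):
        order = np.argsort(phase)
        for k, i in enumerate(order):
            # wrap the previous/next firing across the cycle boundary
            prev = phase[order[k - 1]] - (1.0 if k == 0 else 0.0)
            nxt = phase[order[(k + 1) % n]] + (1.0 if k == n - 1 else 0.0)
            mid = 0.5 * (prev + nxt)          # midpoint of the two neighbours
            phase[i] = ((1.0 - alpha) * phase[i] + alpha * mid) % 1.0
    return np.sort(phase)

print(np.diff(desync()))   # gaps approach the even spacing 1/5 = 0.2
```

Once phases are evenly spread, each node can claim the slot (or color) matching its rank, which is the bridge to distributed graph coloring exploited in the thesis.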
APA, Harvard, Vancouver, ISO, and other styles
8

Turan, Kamil Hakan. "Reliability-based Optimization Of River Bridges Using Artificial Intelligence Techniques." PhD thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613062/index.pdf.

Full text
Abstract:
Proper bridge design is based on consideration of structural, hydraulic, and geotechnical conformities at an optimum level. The objective of this study is to develop an optimization-based methodology to select appropriate dimensions for the components of a river bridge such that the aforementioned design aspects can be satisfied jointly. The structural and geotechnical design parts use a statistically based technique, artificial neural network (ANN) models. Therefore, relevant data from many bridge projects were collected and analyzed from different aspects to put them into matrix form. ANN architectures are used in the objective function of the optimization problem, which is modeled using Genetic Algorithms with penalty functions as the constraint-handling method. Bridge scouring reliability comprises one of the constraints and is evaluated using the Monte Carlo simulation technique. All these mechanisms are assembled in a software framework named AIROB. Finally, an application built on AIROB is presented to assess the outputs of the software, focusing on the evaluation of hydraulic-structure interactions.
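The scour-reliability constraint can be illustrated with a generic Monte Carlo estimate of failure probability: sample the random variables, evaluate a limit-state function, and count failures. The distributions and limit state below are assumptions for illustration, not the ones used in AIROB.

```python
import numpy as np

def scour_failure_probability(n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(scour depth exceeds foundation depth)."""
    rng = np.random.default_rng(seed)
    # Assumed lognormal scour depth (m) and normal foundation depth (m)
    scour = rng.lognormal(mean=0.5, sigma=0.4, size=n_samples)
    foundation = rng.normal(loc=3.0, scale=0.3, size=n_samples)
    g = foundation - scour          # limit state: failure when g < 0
    return np.mean(g < 0.0)

print(f"P_f = {scour_failure_probability():.4f}")
```

Inside the GA, an estimate like this can be turned into a penalty term whenever the failure probability exceeds the target reliability level.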
APA, Harvard, Vancouver, ISO, and other styles
9

Wilke, Daniel N. "Analysis of the particle swarm optimization algorithm." Pretoria : [s.n.], 2005. http://upetd.up.ac.za/thesis/available/etd-01312006-125743.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ibn, Khedher Hatem. "Optimization and virtualization techniques adapted to networking." Thesis, Evry, Institut national des télécommunications, 2018. http://www.theses.fr/2018TELE0007/document.

Full text
Abstract:
In this thesis, we designed and implemented a tool which performs optimizations that reduce the number of migrations necessary for a delivery task. We present our work on virtualization in the context of the replication of video content servers. The work covers the design of a virtualization architecture along with several algorithms that can reduce overall long-term costs and improve system performance. The thesis is divided into several parts: optimal solutions; greedy (heuristic) solutions for reasons of scalability; orchestration of services; multi-objective optimization; service planning in complex active networks; and the integration of the algorithms into a real platform. The thesis is supported by models, implementations and simulations which provide results that showcase our work, quantify the importance of evaluating optimization techniques, and analyze the trade-off between reducing operator cost and enhancing the end-user satisfaction index.
APA, Harvard, Vancouver, ISO, and other styles
11

Pratap, Rana Jitendra. "Design and Optimization of Microwave Circuits and Systems Using Artificial Intelligence Techniques." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7225.

Full text
Abstract:
In this thesis, a new approach combining neural networks and genetic algorithms is presented for microwave design. In this method, an accurate neural network model is developed from experimental data. This neural network model is used to perform sensitivity analysis and derive response surfaces. An innovative technique is then applied in which genetic algorithms are coupled with the neural network model to assist in synthesis and optimization. The proposed method is used for modeling and analysis of circuit parameters for flip-chip interconnects up to 35 GHz, as well as for the design of multilayer inductors and capacitors at 1.9 GHz and 2.4 GHz. The method was also used to synthesize mm-wave low-pass filters in the range of 40-60 GHz. The devices obtained from layout parameters predicted by the neuro-genetic design method yielded an electrical response close to the desired value (95% accuracy). The proposed method also implements a weighted priority scheme to account for tradeoffs in microwave design. This scheme was used to synthesize bandpass filters for 802.11a and HIPERLAN wireless LAN applications in the range of 5-6 GHz. This research also develops a novel neuro-genetic design-centering methodology for yield enhancement and design for manufacturability of microwave devices and circuits. A neural network model is used to calculate yield using Monte Carlo methods, and a genetic algorithm is then used for yield optimization. The proposed method has been applied to yield enhancement of a SiGe heterojunction bipolar transistor and a mm-wave voltage-controlled oscillator, resulting in significant yield enhancement of the SiGe HBTs (from 25% to 75%) and VCOs (from 8% to 85%). The proposed method can be extended to device, circuit, package, and system-level integrated co-design, since it can handle a large number of design variables without any assumptions about component behavior. The proposed algorithm could be used by the microwave community for the design and optimization of microwave circuits and systems with greater accuracy while consuming less computational time.
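A compact sketch of the neuro-genetic design-centering loop described above: a stand-in `surrogate` function replaces the trained neural network, Monte Carlo sampling of process variations estimates yield, and a tiny evolutionary loop (truncation selection plus Gaussian mutation) searches for the design point that maximizes it. The surrogate, specification window and variances are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    """Stand-in for the trained neural-network model of the circuit
    response as a function of two layout parameters (purely illustrative)."""
    return 2.0 + 0.8 * x[..., 0] - 0.3 * x[..., 1] ** 2

def mc_yield(center, spec=(1.8, 2.2), sigma=0.05, n=2000):
    """Monte Carlo yield: fraction of process-perturbed designs within spec."""
    samples = center + sigma * rng.standard_normal((n, center.size))
    r = surrogate(samples)
    return np.mean((r >= spec[0]) & (r <= spec[1]))

# Evolutionary search for the design center that maximizes yield
pop = rng.uniform(-1.0, 1.0, (30, 2))
for _ in range(40):
    fitness = np.array([mc_yield(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-10:]]            # truncation selection
    pop = parents[rng.integers(0, 10, 30)] \
          + 0.1 * rng.standard_normal((30, 2))          # Gaussian mutation
best = max(pop, key=mc_yield)
print(best, mc_yield(best))
```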
APA, Harvard, Vancouver, ISO, and other styles
12

Johnson, Clayton Matthew. "A grammar-based technique for genetic search and optimization." W&M ScholarWorks, 1996. https://scholarworks.wm.edu/etd/1539623893.

Full text
Abstract:
The genetic algorithm (GA) is a robust search technique which has been theoretically and empirically proven to provide efficient search for a variety of problems. Due largely to the semantic and expressive limitations of adopting a bitstring representation, however, the traditional GA has not found wide acceptance in the Artificial Intelligence community. In addition, binary chromosomes can unevenly weight genetic search, reduce the effectiveness of recombination operators, make it difficult to solve problems whose solution schemata are of high order and defining length, and hinder new schema discovery in cases where chromosome-wide changes are required. The research presented in this dissertation describes a grammar-based approach to genetic algorithms. Under this new paradigm, all members of the population are strings produced by a problem-specific grammar. Since any structure which can be expressed in Backus-Naur Form can thus be manipulated by genetic operators, a grammar-based GA strategy provides a consistent methodology for handling any population structure expressible in terms of a context-free grammar. In order to lend theoretical support to the development of the syntactic GA, the concept of a trace schema, a similarity template for matching the derivation traces of grammar-defined rules, was introduced. An analysis of the manner in which a grammar-based GA operates yielded a Trace Schema Theorem for rule processing, which states that above-average trace schemata containing relatively few non-terminal productions are sampled with increasing frequency by syntactic genetic search. Schemata thus serve as the "building blocks" in the construction of the complex rule structures manipulated by syntactic GAs. As part of the research presented in this dissertation, the GEnetic Rule Discovery System (GERDS) implementation of the grammar-based GA was developed. A comparison between the performance of GERDS and the traditional GA showed that the class of problems solvable by a syntactic GA is a superset of the class solvable by its binary counterpart, and that the added expressiveness greatly facilitates the representation of GA problems. To strengthen that conclusion, several experiments encompassing diverse domains were performed with favorable results.
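The central idea, that every individual is a string derived from a problem-specific grammar, can be sketched with a small recursive generator. The toy grammar below is an illustrative assumption, not the one used in GERDS.

```python
import random

# A toy context-free grammar in BNF-like form: each non-terminal maps
# to a list of alternative productions (tuples of symbols).
GRAMMAR = {
    "<expr>": [("<expr>", "+", "<expr>"), ("<expr>", "*", "<expr>"), ("<var>",)],
    "<var>":  [("x",), ("y",), ("z",)],
}

def derive(symbol="<expr>", depth=0, max_depth=4):
    """Randomly expand a non-terminal; every string produced is, by
    construction, a syntactically valid member of the population."""
    if symbol not in GRAMMAR:
        return symbol                          # terminal symbol
    options = GRAMMAR[symbol]
    if depth >= max_depth:                     # curb recursion near the cap
        options = [o for o in options if symbol not in o] or options
    production = random.choice(options)
    return "".join(derive(s, depth + 1, max_depth) for s in production)

random.seed(3)
population = [derive() for _ in range(5)]
print(population)   # five grammatically valid expression strings
```

In a full grammar-based GA, crossover and mutation typically act on derivation (sub)trees rather than raw bits, which is what keeps every offspring syntactically valid.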
APA, Harvard, Vancouver, ISO, and other styles
13

Storer, Jeremy J. "Computational Intelligence and Data Mining Techniques Using the Fire Data Set." Bowling Green State University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1460129796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sayers, William Keith Paul. "Artificial intelligence techniques for flood risk management in urban environments." Thesis, University of Exeter, 2015. http://hdl.handle.net/10871/21030.

Full text
Abstract:
Flooding is an important concern for the UK, as evidenced by the many extreme flooding events in the last decade. Improved flood risk intervention strategies are therefore highly desirable. The application of hydroinformatics tools, and optimisation algorithms in particular, which could provide guidance towards improved intervention strategies, is hindered by the necessity of performing flood modelling in the process of evaluating solutions. Flood modelling is a computationally demanding task; reducing its impact upon the optimisation process would therefore be a significant achievement and of considerable benefit to this research area. In this thesis, sophisticated multi-objective optimisation algorithms have been utilised in combination with cutting-edge flood-risk assessment models to identify least-cost and most-benefit flood risk interventions that can be made on a drainage network. Software analysis and optimisation have improved the flood risk model performance. Additionally, artificial neural networks used as feature detectors have been employed as part of a novel development of an optimisation algorithm. This has alleviated the computational time demands caused by using extremely complex models. The results from testing indicate that the developed algorithm with feature detectors outperforms (given the limited computational resources available) a base multi-objective genetic algorithm. It does so in terms of both dominated hypervolume and a modified convergence metric, at each iteration. This indicates both that a shorter run of the algorithm produces a more optimal result than a similar-length run of a chosen base algorithm, and also that a full run to complete convergence takes fewer iterations (and therefore less time) with the new algorithm.
APA, Harvard, Vancouver, ISO, and other styles
15

Al-Olimat, Hussein S. "Optimizing Cloudlet Scheduling and Wireless Sensor Localization using Computational Intelligence Techniques." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1403922600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

GRIMALDI, MATTEO. "Hardware-Aware Compression Techniques for Embedded Deep Neural Networks." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2933756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Zouidi, Naïma. "Complexity reduction of VVC encoder using machine learning techniques : intra-prediction." Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0016.

Full text
Abstract:
In July 2020, the new video coding standard, Versatile Video Coding (VVC), was released by the Joint Video Experts Team (JVET). This standard enables a higher level of versatility with better compression performance compared to its predecessor, High Efficiency Video Coding (HEVC). Indeed, it introduces several new coding tools, such as finer-granularity Intra Prediction Modes (IPMs) and the nested QuadTree plus Multi-type Tree (QTMT) partitioning. Because finding the best encoding decisions is usually preceded by optimizing the Rate-Distortion (RD) cost, introducing new coding tools or enhancing existing ones requires additional computation; in fact, the VVC is 31 times more complex than the HEVC. The aim of this thesis is therefore to reduce the computational complexity of the VVC, focusing on the intra-prediction tools. It first studies the upper bound of complexity reduction in the intra mode decision of the VVC, and then proposes two fast intra-mode decision algorithms based on machine learning models such as multi-task convolutional neural networks and the Light Gradient Boosting Machine (Light-GBM) decision-tree method.
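The Light-GBM side of the approach can be sketched as a gating classifier that predicts, from cheap block-level features, whether an expensive RD evaluation can be skipped. The features and labels below are synthetic stand-ins, the split/no-split gating target is one plausible formulation rather than the thesis's exact one, and the lightgbm package is an assumed dependency; in the thesis, such features would come from the VVC encoder itself.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)

# Synthetic stand-ins for block-level features (variance, gradients, QP, ...)
# and for labels saying whether the full RD search chose a further split.
X = rng.random((5000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(5000) > 0.8).astype(int)

clf = lgb.LGBMClassifier(n_estimators=100, num_leaves=31)
clf.fit(X[:4000], y[:4000])

# Gating rule: run the expensive RD search only when the model is not
# confident that the block will be left unsplit.
p_split = clf.predict_proba(X[4000:])[:, 1]
skip_rd_search = p_split < 0.1          # confident "no split" -> prune search
print(f"RD searches pruned: {skip_rd_search.mean():.1%}")
```

The confidence threshold directly trades complexity reduction against the risk of a small RD loss, which is the trade-off such fast-decision algorithms must tune.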
APA, Harvard, Vancouver, ISO, and other styles
18

Khan, Salman A. "Design and analysis of evolutionary and swarm intelligence techniques for topology design of distributed local area networks." Pretoria: [s.n.], 2009. http://upetd.up.ac.za/thesis/available/etd-09272009-153908/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Proença, Diogo Alexandre Breites de Campos. "Adaptive complex system modeling for realistic modern ground warfare simulation analysis based on evolutionary multi-objective meta-heuristic techniques." Master's thesis, Instituto Politécnico de Leiria, 2011. http://hdl.handle.net/10400.8/1332.

Full text
Abstract:
Dissertation presented to the Escola Superior de Tecnologia e Gestão of the IPL for the degree of Master in Informatics Engineering - Mobile Computing, supervised by Professor Silvio Priem Mendes. The battlefield is a harsh and inhuman environment, where death and destruction take the lead role. Through many millennia, blood was shed all over the world by people who often died in battles they sometimes did not even care about. Today's battlefield is very different: machines take most of the damage and there are fewer casualties, thanks to advances in aeronautics, weaponry, nautical engineering, vehicles, armor, and psychology. There is also another important factor that, throughout the last decades, has given a special and decisive advantage to the more technologically advanced side: intelligence and simulation. Intelligence today gives an enormous advantage to a country, as it lets you "see and feel" the battlefield from hundreds or thousands of kilometers away. Then, with the data provided by intelligence, countries can simulate the battle in order to deploy the most efficient units. In this thesis we propose a warfare simulation analysis tool using a multi-objective approach and artificial intelligence. Further on, the 1991 Gulf War scenario is used for simulation, and the results are presented and analyzed. The approach used in this thesis is difficult to use in games due to its processing complexity and computing demands.
APA, Harvard, Vancouver, ISO, and other styles
20

Green, Robert C. II. "Novel Computational Methods for the Reliability Evaluation of Composite Power Systems using Computational Intelligence and High Performance Computing Techniques." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1338894641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Joulin, Armand. "Convex optimization for cosegmentation." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00826236.

Full text
Abstract:
The apparent ease with which a human perceives its surroundings suggests that the process involved is partly mechanical and therefore does not require a high degree of thought. This observation suggests that our visual perception of the world can be simulated on a computer. Computer vision is the research field devoted to the problem of creating a form of visual perception for computers. The computing power of the computers of the 1950s did not allow the processing and analysis of the visual data necessary to develop a virtual visual perception. Only recently have computing power and storage capacity allowed this field to truly emerge. In two decades, computer vision has made it possible to answer practical and industrial problems such as detecting faces, people behaving suspiciously in a crowd, or manufacturing defects on production lines. On the other hand, regarding the emergence of a virtual visual perception that is not specific to a given task, little progress has been made, and the community still faces fundamental problems. One of these problems is to segment an optical stimulus or an image into meaningful regions, objects or actions. Scene segmentation is natural for humans, and essential to fully understand one's environment, but it is extremely difficult to reproduce on a computer because there is no clear definition of a "meaningful" region. Indeed, depending on the scene or situation, a region may have different interpretations: in a street scene, distinguishing a pedestrian is important, while his clothes do not necessarily seem so; in a fashion-show scene, a garment becomes an important element and hence a meaningful region. Here, we focus on this segmentation problem and approach it from a particular angle to avoid this fundamental difficulty. We consider segmentation as a weakly supervised learning problem: instead of segmenting images according to some predefined definition of "meaningful" regions, we develop methods that simultaneously segment a set of images into regions that appear regularly. We thus define a "meaningful" region from a statistical point of view: the regions that appear regularly across the given set of images. To this end, we design models whose scope goes beyond vision applications. Our approach is rooted in statistical machine learning, whose goal is to design efficient methods for extracting and/or learning recurring patterns in datasets; this field has recently gained great popularity due to the increasing number and size of available databases. We focus here on methods designed to discover the "hidden" information in a database from incomplete or nonexistent annotations. Finally, our work is rooted in numerical optimization, in order to design efficient algorithms adapted to our problems. In particular, we use and adapt recently developed tools to relax complex combinatorial problems into convex problems for which finding the optimal solution is guaranteed. We also illustrate the quality of our formulations and algorithms on problems from fields other than computer vision; in particular, we show that our work can be used in text classification and in cell biology.
APA, Harvard, Vancouver, ISO, and other styles
22

Rodriguez, Cancio Marcelino. "Contributions on approximate computing techniques and how to measure them." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S071/document.

Full text
Abstract:
Approximate Computing is based on the idea that significant improvements in CPU, energy and memory usage can be achieved when small levels of inaccuracy can be tolerated. This is an attractive concept, since the lack of resources is a constant problem in almost all computer science domains. From large supercomputers processing today's social media big data, to small, energy-constrained embedded systems, there is always the need to optimize the consumption of some scarce resource. Approximate Computing proposes an alternative to this scarcity, introducing accuracy as yet another resource that can in turn be traded for performance, energy consumption or storage space. The first part of this thesis proposes the following two contributions to the field of Approximate Computing. Approximate Loop Unrolling: a compiler optimization that exploits the approximate nature of signal and time-series data to decrease the execution times and energy consumption of the loops processing it. Our experiments showed that the optimization considerably increases the performance and energy efficiency of the optimized loops (150% - 200%) while preserving accuracy at acceptable levels. Primer: the first lossy compression algorithm for assembler instructions, which profits from programs' forgiving zones to obtain a compression ratio that outperforms the current state of the art by up to 10%. The main goal of Approximate Computing is to improve the usage of resources such as performance or energy. Therefore, a fair deal of effort is dedicated to observing the actual benefit obtained by exploiting a given technique under study. One of the resources that has historically been challenging to measure accurately is execution time. Hence, the second part of this thesis proposes the following tool. AutoJMH: a tool to automatically create performance microbenchmarks in Java. Microbenchmarks provide the finest-grained performance assessment. Yet, requiring a great deal of expertise, they remain a craft of a few performance engineers. The tool allows (thanks to automation) the adoption of microbenchmarks by non-experts. Our results show that the generated microbenchmarks match the quality of payloads handwritten by performance experts and outperform those written by professional Java developers without experience in microbenchmarking.
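To make the approximation idea concrete, here is a minimal sketch of loop perforation, a classic approximate-computing transformation that deliberately skips loop iterations; the function names and stride are illustrative, and the thesis's Approximate Loop Unrolling is a compiler-level optimization rather than this source-level rewrite.

```python
import numpy as np

def mean_exact(signal):
    # Baseline loop: visits every sample of the time series.
    total = 0.0
    for x in signal:
        total += x
    return total / len(signal)

def mean_perforated(signal, stride=4):
    # Approximate variant: processes one sample in every `stride`,
    # trading a small accuracy loss for far fewer iterations.
    subset = signal[::stride]
    return float(np.mean(subset))

signal = np.sin(np.linspace(0.0, 20.0, 100_000))
print(mean_exact(signal), mean_perforated(signal))
```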
APA, Harvard, Vancouver, ISO, and other styles
24

Apatean, Anca Ioana. "Contributions à la fusion des informations : application à la reconnaissance des obstacles dans les images visible et infrarouge." Phd thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00621202.

Full text
Abstract:
In order to continue and improve the obstacle-detection work underway at INSA, we focused on the fusion of visible and infrared information for obstacle recognition, that is, distinguishing between vehicles, pedestrians, cyclists and background obstacles. Bimodal systems were proposed to fuse the information at different levels: features, SVM kernels, or SVM scores. They were weighted according to the relative importance of the modality sensors, to ensure the (fixed or dynamic) adaptation of the system to environmental conditions. To assess the relevance of the features, different selection methods were tested with a nearest-neighbor classifier, later replaced by an SVM. A model-search step, performed by 10-fold cross-validation, provides the optimized kernel for the SVM. The results proved that all the bimodal VIS-IR systems are better than their monomodal counterparts.
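As a rough illustration of the score-level fusion described above, the sketch below combines the decision scores of two hypothetical per-modality SVMs with a modality weight; the weight value and function names are assumptions for the example, not taken from the thesis.

```python
import numpy as np

def fuse_scores(score_vis, score_ir, w_vis=0.6):
    # Late (score-level) fusion: weighted sum of per-modality SVM scores.
    # w_vis could be fixed, or adapted online to environmental conditions.
    return w_vis * score_vis + (1.0 - w_vis) * score_ir

# Hypothetical signed decision scores for three candidate obstacles.
s_vis = np.array([0.8, -0.2, 0.1])
s_ir = np.array([0.4, 0.3, -0.5])
print(np.sign(fuse_scores(s_vis, s_ir)))  # fused class decisions
```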
APA, Harvard, Vancouver, ISO, and other styles
25

Pascal, Barbara. "Estimation régularisée d'attributs fractals par minimisation convexe pour la segmentation de textures : formulations variationnelles conjointes, algorithmes proximaux rapides et sélection non supervisée des paramètres de régularisation; Applications à l'étude du frottement solide et de la microfluidique des écoulements multiphasiques." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN042.

Full text
Abstract:
In this doctoral thesis, several scale-free texture segmentation procedures based on two fractal attributes, the Hölder exponent, measuring the local regularity of a texture, and the local variance, are proposed. A piecewise homogeneous fractal texture model is built, along with a synthesis procedure, providing images composed of an aggregation of fractal texture patches with known attributes and segmentation. This synthesis procedure is used to evaluate the performance of the proposed methods. A first method, based on the Total Variation regularization of a noisy estimate of the local regularity, is illustrated and refined thanks to a post-processing step consisting of an iterative thresholding, resulting in a segmentation. After evidencing the limitations of this first approach, two segmentation methods, with either "free" or "co-located" contours, are built, jointly taking into account the local regularity and the local variance. These two procedures are formulated as convex nonsmooth functional minimization problems. We show that the two functionals, with "free" and "co-located" penalizations, are both strongly convex, and compute their respective strong-convexity moduli. Several minimization schemes are derived, and their convergence speeds are compared. The segmentation performance of the different methods is evaluated over a large amount of synthetic data in configurations of increasing difficulty, as well as on real-world images, and compared to state-of-the-art procedures, including convolutional neural networks. An application to the segmentation of images from experiments on multiphasic flows through porous media is presented. Finally, a strategy for the automated selection of the hyperparameters of the "free" and "co-located" functionals is built, inspired by the SURE estimator of the quadratic risk.
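For readers unfamiliar with the first method, a Total Variation regularized estimate of the local regularity has the following generic form (this is a standard textbook formulation, not the thesis's exact functional):

$$ \hat{h}_{\lambda} \;=\; \arg\min_{h} \; \frac{1}{2}\, \big\| h - \hat{h}^{\,\mathrm{noisy}} \big\|_2^2 \;+\; \lambda\, \mathrm{TV}(h), \qquad \mathrm{TV}(h) = \| \nabla h \|_1 , $$

where $\hat{h}^{\,\mathrm{noisy}}$ is the raw local-regularity estimate and the regularization weight $\lambda$ is the kind of hyperparameter that the SURE-based strategy selects automatically.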
APA, Harvard, Vancouver, ISO, and other styles
26

Jaber, Sara. "Resilience as a Service : a multimodal transport strategy combining simulation, optimization, and data-driven resource allocation." Electronic Thesis or Diss., Université Gustave Eiffel, 2025. http://www.theses.fr/2025UEFL2004.

Full text
Abstract:
The objective of this thesis is the design and development of a cooperative operating system, combining traditional public transport modes and existing resources, in order to offer a resilient transport network that can cope with large- or small-scale disruptions. The vehicles will be used as a feeder service for existing traditional public transport and as a regulating element in the event of disruption. This system must be able to help operators manage the networks efficiently in the event of disruptions or breakdowns (line operation problems, major events, zone closures, etc.). The system will make it possible to cope with fluctuations in capacity and to guarantee operation with minimal loss of performance and cost. In order to guarantee the level of service and the resilience of the network, particular attention will be paid to: the design of the replacement-vehicle network as a feeder service to existing traditional transport networks; the synchronization of the schedules of the replacement-vehicle services and public transport; the development of vehicle re-routing and re-allocation strategies in different disruption scenarios, to reduce the loss of system functionality leading to monetary loss and loss of passenger loyalty; and the assessment of network efficiency from a resilience perspective. This work complements, on the one hand, the work carried out in the COSYS/GRETTIA and COSYS/LICIT laboratories on the development of resilience monitoring and regulation platforms and, on the other hand, research and development at the VEDECOM institute on the impact of replacement-vehicle services on disruption management, particularly through system resilience and reliability.
APA, Harvard, Vancouver, ISO, and other styles
27

Karásek, Jan. "Vysokoúrovňové objektově orientované genetické programování pro optimalizaci logistických skladů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2014. http://www.nusl.cz/ntk/nusl-233624.

Full text
Abstract:
The dissertation focuses on the optimization of work operations in logistics warehouses and distribution centers. The main goal is to optimize the planning, scheduling and dispatching processes. Since the problem belongs to the NP-hard complexity class, finding an optimal solution is computationally very demanding. The motivation for this work is to fill the imaginary gap between the methods investigated in scientific and academic settings and the methods used in commercial production environments. The core of the optimization algorithm is based on genetic programming driven by a context-free grammar. The main contributions of this work are: a) to propose a new optimization algorithm that respects the following optimization criteria: total processing time, resource utilization, and the congestion of warehouse aisles that may occur during task processing; b) to analyze historical data from warehouse operations and to develop a set of test instances that can serve as reference results for further research; and c) to attempt to surpass the established reference results achieved by a qualified and trained operations manager of one of the largest warehouses in Central Europe.
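As a toy illustration of grammar-driven genetic programming, the sketch below derives a random dispatching rule from a context-free grammar; the grammar, symbols and rule vocabulary are invented for the example and do not come from the dissertation.

```python
import random

# A toy context-free grammar for warehouse task-dispatching rules.
# Non-terminals map to lists of possible expansions (all hypothetical).
GRAMMAR = {
    "<rule>": [["<priority>", "then", "<priority>"]],
    "<priority>": [["shortest-queue"], ["earliest-deadline"], ["nearest-aisle"]],
}

def derive(symbol, rng):
    # Expand non-terminals recursively; terminals are returned as-is.
    if symbol not in GRAMMAR:
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    return [tok for part in production for tok in derive(part, rng)]

rng = random.Random(0)
print(" ".join(derive("<rule>", rng)))  # e.g. "nearest-aisle then shortest-queue"
```

In grammar-guided GP, such derivations form the initial population, and crossover/mutation operate on derivation trees so that every offspring remains a valid sentence of the grammar.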
APA, Harvard, Vancouver, ISO, and other styles
28

Liu, Bing. "Contrôle et optimisation des systèmes de transport intelligents dans le voisinage des intersections." Thesis, Ecole centrale de Lille, 2016. http://www.theses.fr/2016ECLI0008/document.

Full text
Abstract:
This thesis is devoted to studying the potential applications of autonomous vehicles and V2X communications to build intelligent transportation systems. Firstly, the behavior of platoons in a connected-vehicle environment is studied. A platoon control algorithm is designed to obtain safe spacing, as well as agreement of velocity and acceleration, for vehicles in the same lane. Secondly, on a larger scale, the platoons around an intersection are considered. The throughput during a traffic-signal period can be improved by taking advantage of the redundant road capacity. Within diverse constraints, vehicles can choose to accelerate to join the preceding platoon or to decelerate to depart from the current one. Thirdly, an unsignalized intersection in a VANET is considered. In light traffic conditions, vehicles can regulate their velocities before arriving at the intersection according to the conflict zone occupancy time (CZOT) stored at the manager, so that they can get through the intersection without collision or stop. The delay can be reduced accordingly. Finally, a universal autonomous intersection management algorithm, which can work even with heavy traffic, is developed. The vehicle searches for safe entering windows in the CZOT. Then, based on the windows found and the motion of the preceding vehicle, the trajectories of the vehicles can be planned using a segmented dynamic programming method. All the designed algorithms are successfully tested and verified by simulations in various scenarios.
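A minimal sketch of the platoon-control idea, assuming a textbook leader-follower spacing policy (the gains, safe distance and Euler integration are illustrative, not the thesis's controller):

```python
def platoon_accel(gap, v_leader, v_self, d_safe=10.0, kp=0.5, kv=0.8):
    # Accelerate when the gap exceeds the safe spacing and/or the leader
    # is faster; decelerate otherwise. Gains kp, kv are illustrative.
    return kp * (gap - d_safe) + kv * (v_leader - v_self)

# One follower behind a leader, simulated with a simple Euler step.
gap, v_leader, v_self, dt = 25.0, 20.0, 15.0, 0.1
for _ in range(5):
    a = platoon_accel(gap, v_leader, v_self)
    v_self += a * dt
    gap += (v_leader - v_self) * dt
print(round(gap, 2), round(v_self, 2))
```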
APA, Harvard, Vancouver, ISO, and other styles
29

Zadeh, Saman Akbar. "Application of advanced algorithms and statistical techniques for weed-plant discrimination." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2020. https://ro.ecu.edu.au/theses/2352.

Full text
Abstract:
Precision agriculture requires automated systems for weed detection, as weeds compete with the crop for water, nutrients, and light. The purpose of this study is to investigate the use of machine learning methods to classify weeds/crops in agriculture. Statistical methods, support vector machines, and convolutional neural networks (CNNs) are introduced, investigated and optimized as classifiers to provide high accuracy at high vehicular speed for weed detection. Initially, Support Vector Machine (SVM) algorithms are developed for weed-crop discrimination and their accuracies are compared with a conventional data-aggregation method based on the evaluation of discrete Normalised Difference Vegetation Indices (NDVIs) at two different wavelengths. The results of this work show that the discrimination performance of the Gaussian kernel SVM algorithm, with either raw reflected intensities or NDVI values used as inputs, provides better discrimination accuracy than the conventional discrete NDVI-based aggregation algorithm. We then investigate a fast statistical method for CNN parameter optimization, which can be applied in many CNN applications and provides more explainable results. This study specifically applies Taguchi-based experimental designs for network optimization in a basic network, a simplified Inception network and a simplified ResNet network, and conducts a comparative analysis to assess their respective performance and then to select the hyperparameters and networks that facilitate faster training and provide better accuracy. Results show that, for all investigated CNN architectures, there is a measurable improvement in accuracy in comparison with un-optimized CNNs, and that the Inception network yields the highest improvement (~ 6%) in accuracy compared to its simple CNN (~ 5%) and ResNet CNN (~ 2%) counterparts. Aimed at achieving weed-crop classification in real time at high speeds, while maintaining high accuracy, the algorithms are uploaded on both a small embedded NVIDIA Jetson TX1 board for real-time precision agricultural applications, and a larger high-throughput GeForce GTX 1080Ti board for aerial crop analysis applications. Experimental results show that for a simplified CNN algorithm implemented on a Jetson TX1 board, an improvement in detection speed of thirty times (60 km/hr) can be achieved by using spectral reflectance data rather than imaging data. Furthermore, with an Inception algorithm implemented on a GeForce GTX 1080Ti board for aerial weed detection, an improvement in detection speed of 11 times (~2300 km/hr) can be achieved, while maintaining an adequate detection accuracy above 80%. These high speeds are attained by reducing the data size, choosing spectral components with high information content at lower resolution, pre-processing efficiently, optimizing the deep learning networks through the use of simplified, faster networks for feature detection and classification, and matching the computational load to the available power and embedded resources, to identify the best-fit hardware platforms.
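The NDVI values mentioned above follow the standard two-band definition, a minimal sketch of which is shown below (the reflectance readings are hypothetical; NDVI values or the raw intensities would then feed the SVM as input features):

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index from two reflectance bands.
    return (nir - red) / (nir + red)

nir = np.array([0.62, 0.35])  # hypothetical near-infrared reflectances
red = np.array([0.10, 0.25])  # hypothetical red-band reflectances
print(ndvi(nir, red))
```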
APA, Harvard, Vancouver, ISO, and other styles
30

Rahmani, Younes. "The Multi-product Location-Routing Problem with Pickup and Delivery." Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0331.

Full text
Abstract:
In the framework of the Location-Routing Problem (LRP), the main idea is to combine strategic decisions related to the choice of processing centers with tactical and operational decisions related to the allocation of customers to the selected processing centers and the computation of the associated routes. This thesis proposes new location-routing models to solve problems arising from logistics networks, which have nowadays become increasingly complex due to the need for resource sharing, in order to integrate the constraints of sustainable development and fuel prices, which are increasing irreversibly. More precisely, three aspects have been integrated to generalize the classical LRP models already existing in the literature: 1) the pickup-and-delivery aspect, 2) the multi-product aspect, and 3) the possibility of using the processing centers as intermediate facilities in routes. We studied two logistics schemes, which gave rise to two new location-routing models: (i) the MPLRP-PD (multi-product LRP with pickup and delivery), which can be viewed as an extension of the vehicle routing problem with pickup and delivery, including a tactical decision related to the location of processing centers (nodes with pickup and delivery) in a single-echelon distribution network, and (ii) the 2E-MPLRP-PD (two-echelon multi-product LRP with pickup and delivery), which is a generalization of the two-echelon LRP with the constraints cited above. Both models were formalized as mixed-integer linear programs (MIP). Solving techniques based on heuristic methods, a clustering approach and metaheuristic techniques have been proposed to solve the MPLRP-PD and the 2E-MPLRP-PD. The benchmarks from the literature were generalized to test and validate the proposed algorithms.
APA, Harvard, Vancouver, ISO, and other styles
32

Perronnet, Florent. "Régulation coopérative des intersections : protocoles et politiques." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0259/document.

Full text
Abstract:
The objective of this work is to use the potential offered by wireless communication and cooperative autonomous vehicles to improve traffic flow at an isolated intersection and in a network of intersections. We define a protocol, called CVAS (Cooperative Vehicle Actuator System), for managing an isolated intersection. CVAS distributes the right of way separately to each vehicle, according to a specific order determined by a computed passage sequence. In order to optimize the sequence, we define a DCP (Distributed Clearing Policy) to improve the total evacuation time of the intersection. The control strategy is investigated through two modeling approaches: first, graph theory is used for calculating the optimal solution according to the arrival times of all vehicles, and then a timed Petri net model is used to propose a real-time control algorithm. Tests with real equipped vehicles were carried out to study the feasibility of CVAS. Simulations of realistic traffic flows are performed to assess our algorithm and to compare it against conventional traffic lights. Managing a network of intersections raises the issue of gridlock. We propose the CVAS-NI protocol (Cooperative Vehicle Actuator System for Networks of Intersections), which is an extension of the CVAS protocol. This protocol prevents deadlock in the network through occupancy and reservation constraints. With a deadlock-free network, we extend the study to the traffic routing policy. Finally, we generalize the proposed control system to the synchronization of vehicle velocities at intersections.
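Reservation-based intersection management of this kind rests on a simple interval test over the shared conflict zone; a minimal sketch under that assumption (the data layout and function name are invented for the example, not the protocol's actual messages):

```python
def is_slot_free(reservations, t_in, t_out):
    # Each reservation is an (enter, exit) time interval on the conflict
    # zone. The requested window is free when it overlaps no interval.
    return all(t_out <= s or t_in >= e for (s, e) in reservations)

reservations = [(0.0, 3.2), (4.0, 6.5)]
print(is_slot_free(reservations, 3.3, 3.9))  # True: fits between the two
```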
APA, Harvard, Vancouver, ISO, and other styles
33

Dinh, Van Binh. "Méthodes et outils pour le dimensionnement des bâtiments et des systèmes énergétiques en phase d'esquisse intégrant la gestion optimale." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAT092/document.

Full text
Abstract:
In order to reduce energy consumption and to increase the share of renewable energy, the optimal design of future buildings (smart buildings) appears as an important factor. This thesis therefore aims to develop models and innovative methods to aid decision-making during the design of such buildings. Our design approach is a global and simultaneous optimization of the envelope, the energy systems and their management strategies from the sketch phase onwards, which takes into account several criteria of cost (investment and operation) and comfort (thermal, visual, aeraulic). The multi-objective optimization problem is thus a strongly coupled, large-scale problem with many variables and constraints, which makes it difficult to solve. After analyses on test cases, a first-order optimization method is chosen, in combination with analytical models whose derivatives are obtained formally and automatically. Our methodology is applied to the design of individual houses, especially positive-energy houses. The results of this global approach provide important information to designers, helping them make choices from the preliminary phase of the design process.
APA, Harvard, Vancouver, ISO, and other styles
34

Mhedhbi, Imen. "Ordonnancement d'ateliers de traitements de surfaces pour une production mono-robot/multi-produits : Résolution et étude de la robustesse." Thesis, Ecole centrale de Lille, 2011. http://www.theses.fr/2011ECLI0004/document.

Full text
Abstract:
In this thesis, we study automated electroplating lines. A surface-treatment line consists of a succession of tanks in which the products are immersed so that chemical operations can be performed; the processing times are bounded, the lower bound representing the minimum time needed to treat the product, while the upper bound depends on the treatment. The line is further constrained by a hoist, moving on a rail above the tanks and transporting the products to be treated. A classical objective is to find the hoist moves that minimize the cycle time; this is called the hoist scheduling problem (HSP), known to be NP-hard even with a single product and a single transport resource. In this thesis, we study in particular the single-hoist/multi-products (SHMP) case. Three approaches are presented to solve this problem while taking hoist travel times into account: a constraint satisfaction algorithm based on a non-standard criterion, the hoist waiting time; its hybridization with classical heuristics, which improves the obtained solutions; and finally a genetic algorithm to optimize the cycle time, which leads to even more significant results. Robustness notions are finally exploited to study the influence of disturbances on the schedule, distinguishing several scenarios of disturbance on the critical resource of the workshop, the hoist. The systematic determination of a robust schedule is then successfully carried out by introducing new performance indicators and applying a multicriteria evaluation method.
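The defining constraint in hoist scheduling is the bounded soak time: a product removed from a tank too early or too late violates its treatment window. A minimal feasibility check under that assumption (names and numbers invented for the example):

```python
def soak_ok(t_in, t_out, t_min, t_max):
    # The chemical operation in a tank must last between t_min and t_max.
    return t_min <= (t_out - t_in) <= t_max

# Hypothetical schedule fragment: entry/removal times for one tank.
print(soak_ok(t_in=12.0, t_out=23.0, t_min=8.0, t_max=10.0))  # False: 11.0 > t_max
```

A complete schedule must satisfy this check for every product in every tank while also keeping the single hoist's moves non-overlapping, which is what makes the problem combinatorially hard.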
APA, Harvard, Vancouver, ISO, and other styles
35

Kroetz, Marcel Giovani. "Sistema de apoio na inspeção radiográfica computadorizada de juntas soldadas de tubulações de petróleo." Universidade Tecnológica Federal do Paraná, 2012. http://repositorio.utfpr.edu.br/jspui/handle/1/509.

Full text
Abstract:
Petrobras
Radiographic inspection of welded pipe joints is the activity of meticulously and carefully observing radiographic images of welded joints in search of small defects and discontinuities that could compromise the mechanical resistance of those joints. As with any activity requiring constant attention, radiographic inspection is error-prone, mainly due to visual fatigue and the natural distractions caused by the repetitiveness and monotony inherent to the activity. In this work, two methodologies aimed at assisting and automating the inspection activity are presented: the automatic detection of weld beads in the radiographs, and the highlighting of discontinuities. Among other functionalities, these compose a complete application for assisting radiographic inspection, which also offers the automation of image processing through the construction of routines and their subsequent application to batches of similar images. The results obtained in the automatic detection of the weld bead are promising, the proposed methodology being able to detect beads produced by the different usual radiographic techniques. As for the results of discontinuity highlighting, although they do not yet allow a completely autonomous, unsupervised inspection, they are better than those currently found in the literature, particularly regarding the correlation between the visual contrast of the highlighted result and the probability of occurrence of discontinuities in the marked regions. Finally, discontinuity highlighting, together with a complete interactive application, contributes to a lighter inspection activity, from which a significant reduction of error rates due to visual fatigue is expected, along with a considerable productivity increase through the automation of the most repetitive digital-processing routines applied to radiographic images during inspection.
APA, Harvard, Vancouver, ISO, and other styles
36

Miranda, Rafael Arthur Rocha. "Otimização por enxame de partículas para segmentação de cordões de solda em imagens radiográficas de tubulações de petróleo." Universidade Tecnológica Federal do Paraná, 2015. http://repositorio.utfpr.edu.br/jspui/handle/1/2048.

Full text
Abstract:
CAPES
Radiographic inspection of welded joints is important to ensure the quality and safety of pipe networks. Despite all their training and knowledge, specialists are subject to errors caused by different factors: visual fatigue, distractions and the sheer quantity of radiographs to be analyzed can be listed as the main ones. This work proposes a system to assist the inspection of defects in weld beads of oil pipelines. To this end, it presents an approach for the automatic segmentation of weld beads in radiographic images of the double wall double image (DWDI) type, combining two methods well known in the literature: Particle Swarm Optimization (PSO) and Dynamic Time Warping (DTW). A vertical profile is extracted from the window coordinates encoded in each PSO particle and compared, via DTW, with a model profile. The similarity measure between the model and the extracted profile is the basis for computing the fitness of each particle, which is of great importance to the performance of the approach; studies were therefore carried out to choose a suitable fitness function for the PSO. The tests were performed in two stages: first, the height of the extracted profile was fixed; in a second set of experiments, the height was a variable component incorporated into the particle encoding and evolved during the PSO iterations. The results obtained in the automatic segmentation of the weld bead showed that the PSO, in most cases (a performance between 85.17% and 93.11% in the first set of experiments, and between 79.83% and 81.36% in the second), converged to the window that allows the segmentation of the weld bead, indicating promising results.
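A minimal sketch of the PSO-fitness idea described above: a particle encodes a window, the window's vertical profile is compared to a model profile via DTW, and greater similarity means higher fitness. The window width, the profile definition (row means) and the sign convention are assumptions for illustration, not the dissertation's exact choices.

```python
import numpy as np

def dtw_distance(a, b):
    # Classic O(len(a) * len(b)) dynamic-time-warping distance.
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

def particle_fitness(image, model_profile, x, y, h):
    # A particle encodes a window; its vertical profile (mean of each
    # row here, one plausible choice) is compared to the model profile.
    window = image[y:y + h, x:x + 20]  # the window width is illustrative
    profile = window.mean(axis=1)
    return -dtw_distance(profile, model_profile)  # higher = more similar
```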
APA, Harvard, Vancouver, ISO, and other styles
37

Sahoo, Biswajit. "Study and Analysis of Mobile and Humanoid Robots using Intelligent Optimization Techniques." Thesis, 2018. http://ethesis.nitrkl.ac.in/9661/1/2018_MT_216ME1358_BSahoo_Study.pdf.

Full text
Abstract:
In the current research work, different intelligent algorithms have been analyzed for the navigation of robots in cluttered environments containing various obstacles. Several papers based on different artificial intelligence techniques have been reviewed and analyzed. Based on the research gaps in the existing literature, the objectives of the current research are set as the design and implementation of several intelligent algorithms as navigational models in both mobile and humanoid robots. Algorithms such as cell decomposition, neural networks, invasive weed optimization and a hybrid technique named Neuro-IWO are applied to the robots. The behavior of the navigational controllers is tested on simulation platforms, and the navigational patterns obtained from the simulations are validated through real-time set-ups developed under laboratory conditions. Finally, the results obtained from both the simulation and experimental platforms are compared against each other and are found to be in good agreement, with a minimal percentage of error.
APA, Harvard, Vancouver, ISO, and other styles
38

Ben, Sghaier Oussama. "Towards using intelligent techniques to assist software specialists in their tasks." Thesis, 2020. http://hdl.handle.net/1866/25094.

Full text
Abstract:
Automation and intelligence constitute a major preoccupation in the field of software engineering. With the great evolution of Artificial Intelligence, researchers and industry have been steered towards the use of Machine Learning and Deep Learning models to optimize tasks, automate pipelines, and build intelligent systems. The big capabilities of Artificial Intelligence make it possible to imitate, and in some cases even outperform, human intelligence, as well as to automate manual tasks while raising accuracy, quality, and efficiency. In fact, accomplishing software-related tasks requires specific knowledge and skills. Thanks to the powerful capabilities of Artificial Intelligence, we could infer that expertise from historical experience using machine learning techniques. This would alleviate the burden on software specialists and allow them to focus on valuable tasks. In particular, Model-Driven Engineering is an evolving field that aims to raise the abstraction level of languages and to focus more on domain specificities. This allows shifting the effort put on implementation and low-level programming to a higher point of view focused on design, architecture, and decision making, thereby increasing the efficiency and productivity of creating applications. For its part, the design of metamodels is a substantial task in Model-Driven Engineering. Accordingly, it is important to maintain a high quality of metamodels, because they constitute a primary and fundamental artifact. However, bad design choices, as well as repetitive design modifications due to the evolution of requirements, can deteriorate the quality of the metamodel. The accumulation of bad design choices and quality degradation can imply negative outcomes in the long term. Thus, refactoring metamodels is a very important task: it aims to improve and maintain good quality characteristics of metamodels, such as maintainability, reusability, extendibility, etc. Moreover, the refactoring of metamodels is complex, especially when dealing with large designs. Therefore, automating this task and assisting architects in it is advantageous, since they can then focus on more valuable tasks that require human intuition. In this thesis, we propose a cartography of the potential tasks that could be either automated or improved using Artificial Intelligence techniques. Then, we select the metamodeling task and tackle the problem of metamodel refactoring. We suggest two different approaches: a first approach that consists of using a genetic algorithm to optimize a set of quality attributes and recommend candidate metamodel refactoring solutions, and a second approach, based on mathematical logic, that consists of defining the specification of an input metamodel, encoding the quality attributes and the absence of design smells as a set of constraints, and finally satisfying these constraints using Alloy.
APA, Harvard, Vancouver, ISO, and other styles
39

Khan, S. A. (Salman Ahmad). "Design and analysis of evolutionary and swarm intelligence techniques for topology design of distributed local area networks." Thesis, 2009. http://hdl.handle.net/2263/28233.

Full text
Abstract:
Topology design of distributed local area networks (DLANs) can be classified as an NP-hard problem. Intelligent algorithms, such as evolutionary and swarm intelligence techniques, are candidate approaches to address this problem and to produce desirable solutions. DLAN topology design consists of several conflicting objectives such as minimization of cost, minimization of network delay, minimization of the number of hops between two nodes, and maximization of reliability. It is possible to combine these objectives in a single-objective function, provided that the trade-offs among these objectives are adhered to. This thesis proposes a strategy and a new aggregation operator based on fuzzy logic to combine the four objectives in a single-objective function. The thesis also investigates the use of a number of evolutionary algorithms such as stochastic evolution, simulated evolution, and simulated annealing. A number of hybrid variants of the above algorithms are also proposed. Furthermore, the applicability of swarm intelligence techniques such as ant colony optimization and particle swarm optimization to topology design has been investigated. All proposed techniques have been evaluated empirically with respect to their algorithm parameters. Results suggest that simulated annealing produced the best results among all proposed algorithms. In addition, the hybrid variants of simulated annealing, simulated evolution, and stochastic evolution generated better results than their respective basic algorithms. Moreover, a comparison of ant colony optimization and particle swarm optimization shows that the latter generated better results than the former.
Thesis (PhD)--University of Pretoria, 2009.
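To give a flavor of combining conflicting objectives through fuzzy logic, the sketch below normalizes each objective onto [0, 1] as a membership value and aggregates the four memberships with a weighted average; this is a generic scheme for illustration, with invented bounds and weights, not the specific operator proposed in the thesis.

```python
def membership(value, worst, best):
    # Map a raw objective value onto [0, 1], where 1 is fully satisfactory.
    # Works whether "best" is the smaller value (cost) or the larger one.
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

def fuzzy_fitness(cost, delay, hops, reliability, weights=(0.3, 0.3, 0.2, 0.2)):
    # Bounds below are invented for the example.
    mus = (membership(cost, worst=1000.0, best=100.0),
           membership(delay, worst=50.0, best=1.0),
           membership(hops, worst=10.0, best=1.0),
           membership(reliability, worst=0.5, best=1.0))
    return sum(w * mu for w, mu in zip(weights, mus))

print(fuzzy_fitness(cost=400.0, delay=12.0, hops=4, reliability=0.93))
```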
APA, Harvard, Vancouver, ISO, and other styles
40

Modungwa, Dithoto. "Application of artificial intelligence techniques in design optimization of a parallel manipulator." Thesis, 2015. http://hdl.handle.net/10210/13328.

Full text
Abstract:
D.Phil. (Electrical and Electronic Engineering)
The complexity of the multi-objective functions and the diverse variables involved in the design optimization of parallel manipulators, or parallel kinematic machines, has inspired the research conducted in this thesis to investigate techniques suitable for tackling this problem efficiently. Furthermore, the parallel manipulator dimensional-synthesis problem is multimodal and has no explicit analytical expressions, so the process requires optimization techniques offering a high level of accuracy and robustness. The goal of this work is to present methods based on Artificial Intelligence (AI) that may be applied to address this challenge. The performance criteria considered include stiffness, dexterity and workspace. The case studied in this work is a 6-degrees-of-freedom (DOF) parallel manipulator, particularly because it is considered much more complicated than lesser-DOF mechanisms, owing to the number of independent parameters or inputs needed to specify its configuration (i.e. the higher the DOFs, the more independent variables to be considered). The first contribution in this thesis is a comparative study of several hybrid multi-objective optimization (MOO) AI algorithms applied to the dimensional synthesis of a parallel manipulator. Artificial neural networks are utilized to approximate a multiple-objective function for the analytical solution of the 6-DOF parallel manipulator's performance indices, followed by the implementation of a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) as search algorithms. Further, two hybrid techniques are proposed which implement Simulated Annealing and Random Forest in searching for optimum solutions in the multi-objective optimization problem. The final contribution in this thesis is ensemble machine learning algorithms for the approximation of a multiple-objective function for the 6-DOF parallel manipulator analytical solution. The results from the experiments demonstrated that not only neural networks (NN) but also other machine learning algorithms, namely K-Nearest Neighbour (k-NN), M5 Prime (M5'), Zero R (ZR) and Decision Stump (DS), can effectively be implemented for function approximation.
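The surrogate-plus-search pattern described above can be sketched in a few lines: fit a cheap regressor to expensive performance evaluations, then let a search procedure query the surrogate instead of the true model. The toy objective, the model choice and the random search (standing in for the GA or PSO) are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_index(x):
    # Stand-in for a costly kinematic performance index (toy function).
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

X = rng.uniform(-1, 1, size=(200, 2))  # sampled design parameters
surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
surrogate.fit(X, expensive_index(X))

# Cheap search over the surrogate; random search stands in for GA/PSO.
candidates = rng.uniform(-1, 1, size=(10_000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("approx. optimum:", best)
```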
APA, Harvard, Vancouver, ISO, and other styles
41

Sharma, Meghansh. "Optimization of Test Data for Basis Path Testing using Artificial Intelligence Techniques." Thesis, 2013. http://ethesis.nitrkl.ac.in/5440/1/211CS3300.pdf.

Full text
Abstract:
Software testing is a process carried out with the intent of finding errors. It helps in analyzing the stability and quality of software. Stability and quality can be assessed with suitable test data. Test data can be generated either manually or by an automated process. Manual generation of test data is a difficult task: it involves a lot of effort due to the presence of a huge number of predicate nodes in a module. In this report, an automated process is proposed for test data generation in the traditional methodology, for the automatically constructed control flow graph. Code coverage is a measure used in the software testing process and is one of the key indicators of software quality. It helps the tester in evaluating the effectiveness of testing, and is achieved by automatically generating test data for various functions. Code coverage is not a method or a test; it is a measure which helps in improving software reliability. Effort has been made to gather code coverage information either from source code or from the requirements specified by the customer, but less attention has been paid to achieving better coverage. This report also emphasizes code coverage achieved through the generated test data, using soft computing techniques. Here, three soft computing techniques, namely the Genetic algorithm, Particle swarm optimization and the Clonal selection algorithm, have been deployed for automatic test data generation. This test data was in turn used for code coverage analysis. Experimental results show that the test data generated using the Clonal selection algorithm was much more effective in achieving better code coverage than the Genetic algorithm and Particle swarm optimization.
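The clonal selection idea the abstract credits with the best coverage can be sketched briefly: candidates are test inputs, affinity is the new branch coverage a candidate adds, and the best candidates are cloned and hypermutated. The toy program under test and all constants below are assumptions; a real harness would instrument the actual code.

```python
import random

def branches_covered(a, b):
    """Toy program under test: returns the ids of branches exercised.
    A real harness would instrument the module's predicate nodes."""
    covered = {0}
    if a > b:
        covered.add(1)
        if a - b > 10:
            covered.add(2)
    else:
        covered.add(3)
        if b % 2 == 0:
            covered.add(4)
    return covered

def clonal_selection(pop_size=20, generations=50, clones=5):
    pop = [(random.randint(-50, 50), random.randint(-50, 50))
           for _ in range(pop_size)]
    suite, covered = [], set()
    for _ in range(generations):
        # Affinity: how much new branch coverage a candidate adds.
        pop.sort(key=lambda t: len(branches_covered(*t) - covered),
                 reverse=True)
        for cand in pop[:5]:
            gain = branches_covered(*cand) - covered
            if gain:
                suite.append(cand)
                covered |= gain
        # Clone the best candidates; weaker ranks hypermutate more.
        next_pop = []
        for rank, cand in enumerate(pop[:pop_size // clones]):
            step = 1 + rank * 5
            for _ in range(clones):
                next_pop.append((cand[0] + random.randint(-step, step),
                                 cand[1] + random.randint(-step, step)))
        pop = next_pop
    return suite, covered

suite, covered = clonal_selection()
print("test suite:", suite, "branches covered:", covered)
```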
APA, Harvard, Vancouver, ISO, and other styles
42

Cartright, Marc-Allen. "Query-time optimization techniques for structured queries in information retrieval." 2013. https://scholarworks.umass.edu/dissertations/AAI3603062.

Full text
Abstract:
The use of information retrieval (IR) systems is evolving towards larger, more complicated queries. Both the IR industrial and research communities have generated significant evidence indicating that in order to continue improving retrieval effectiveness, increases in retrieval model complexity may be unavoidable. From an operational perspective, this translates into an increasing computational cost to generate the final ranked list in response to a query. Therefore we encounter an increasing tension in the trade-off between retrieval effectiveness (quality of result list) and efficiency (the speed at which the list is generated). This tension creates a strong need for optimization techniques to improve the efficiency of ranking with respect to these more complex retrieval models. This thesis presents three new optimization techniques designed to deal with different aspects of structured queries. The first technique involves manipulation of interpolated subqueries, a common structure found across a large number of retrieval models today. We then develop an alternative scoring formulation to make retrieval models more responsive to dynamic pruning techniques. The last technique is delayed execution, which focuses on the class of queries that utilize term dependencies and term conjunction operations. In each case, we empirically show that these optimizations can significantly improve query processing efficiency without negatively impacting retrieval effectiveness. Additionally, we implement these optimizations in the context of a new retrieval system known as Julien. As opposed to implementing these techniques as one-off solutions hard-wired to specific retrieval models, we treat each technique as a "behavioral" extension to the original system. This allows us to flexibly stack the modifications to use the optimizations in conjunction, increasing efficiency even further. By focusing on the behaviors of the objects involved in the retrieval process instead of on the details of the retrieval algorithm itself, we can recast these techniques to be applied only when the conditions are appropriate. Finally, the modular design of these components illustrates a system design that allows improvements to be implemented without disturbing the existing retrieval infrastructure.
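For intuition, the sketch below shows one generic form of dynamic pruning over an interpolated (weighted-sum) score of the kind the abstract manipulates: each subquery exposes an upper bound on its score, and a document is abandoned as soon as its partial score plus the remaining bound cannot enter the top-k. The scorer interface is a simplifying assumption and is not Julien's API.

```python
import heapq

def ranked_retrieval(doc_ids, scorers, weights, k=10):
    """scorers: list of (score_fn, max_score) pairs -- an assumed interface.
    score(d) = sum_i w_i * s_i(d), as in an interpolated subquery."""
    heap = []  # min-heap of (score, doc)
    total_bound = sum(w * mx for w, (_, mx) in zip(weights, scorers))
    for d in doc_ids:
        threshold = heap[0][0] if len(heap) == k else float("-inf")
        score, remaining = 0.0, total_bound
        for w, (fn, mx) in zip(weights, scorers):
            remaining -= w * mx          # bound on the yet-unscored parts
            score += w * fn(d)
            if score + remaining <= threshold:
                break                    # cannot reach the top-k: prune
        else:
            if len(heap) < k:
                heapq.heappush(heap, (score, d))
            elif score > threshold:
                heapq.heapreplace(heap, (score, d))
    return sorted(heap, reverse=True)

# Example with two hypothetical subquery scorers interpolated 70/30:
s1 = (lambda d: (d % 97) / 97.0, 1.0)
s2 = (lambda d: (d % 13) / 13.0, 1.0)
print(ranked_retrieval(range(1000), [s1, s2], weights=[0.7, 0.3], k=5))
```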
APA, Harvard, Vancouver, ISO, and other styles
43

Chou, Hung-Mu, and 周宏穆. "Application of Hybrid Intelligent Computational Technique to Low Noise Amplifier Integrated Circuit Design Optimization." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/78075014139928660188.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electrophysics, 2005.
The low noise amplifier (LNA) plays an important role in radio frequency (RF) circuit design. In the modern integrated circuit (IC) design flow and chip implementation, designers must perform a series of functional examinations and analyses of the characteristics with several circuit simulation tools to match the specification. In order to achieve the specification, designers must continuously tune the design coefficients and perform circuit simulations to obtain optimized active device model parameters, passive device parameters, circuit layout, and wire widths. This complicated task usually requires experienced designers to accomplish. In this work we propose a hybrid intelligent circuit optimization technique for LNA circuits. The method combines the genetic algorithm (GA), the Levenberg-Marquardt (LM) method, and a circuit simulator to perform automatic LNA circuit optimization. For a given LNA circuit, the optimization method considers electrical specifications such as S11, S12, S21, S22, the K factor, the noise figure, and the input third-order intercept point. The optimization procedure starts by loading the necessary parameters for circuit simulation, and then calls the circuit simulator for circuit simulation and evaluation. Once the specification is achieved, the optimized parameters are output; otherwise the GA is activated for global optimization, while the LM method rapidly refines the local optima found by the GA, and the circuit simulator is called to obtain and evaluate results until the specification is matched. During the optimization process, the fitness function of the GA and the result optimized by the LM method are generated by applying the circuit simulator to the designed LNA circuit. Based on this concept, we successfully developed a prototype of the hybrid intelligent IC optimization computer aided design (CAD) system. In the experiment, sixteen optimized parameters of an LNA circuit composed of 0.18 μm metal-oxide-semiconductor field-effect transistors (MOSFETs) were acquired by the developed system, and all seven specifications were matched. Through this examination, the proposed circuit optimization method shows its robustness and practicability for RF circuit and wireless system-on-chip (SoC) design.
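The GA-plus-LM loop the abstract describes can be miniaturized as follows: a mutation-only GA explores globally against a stand-in "simulator" that returns specification residuals, and the best candidate is handed to a Levenberg-Marquardt least-squares refinement. The simulate function, targets, and constants are hypothetical placeholders for the real circuit simulator and LNA specifications.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate(params):
    """Stand-in for the circuit simulator: returns residuals between
    achieved and target specs (hypothetical S-parameter/NF targets)."""
    target = np.array([-15.0, 8.0, 1.5])
    achieved = np.array([params[0] - 2.0,
                         10.0 * np.tanh(params[1]),
                         1.0 + params[2] ** 2])
    return achieved - target

def fitness(params):
    return -np.sum(simulate(params) ** 2)    # GA maximizes this

rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, size=(40, 3))
for _ in range(30):                          # GA: global exploration
    f = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(f)[-20:]]
    kids = elite + rng.normal(0.0, 0.5, elite.shape)  # mutation-only step
    pop = np.vstack([elite, kids])

best = pop[np.argmax([fitness(p) for p in pop])]
refined = least_squares(simulate, best, method="lm")  # LM: local refinement
print("optimized parameters:", refined.x, "residual cost:", refined.cost)
```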
APA, Harvard, Vancouver, ISO, and other styles
44

(5929916), Sudhir B. Kylasa. "HIGHER ORDER OPTIMIZATION TECHNIQUES FOR MACHINE LEARNING." Thesis, 2019.

Find full text
Abstract:
First-order methods such as Stochastic Gradient Descent are methods of choice for solving non-convex optimization problems in machine learning. These methods primarily rely on the gradient of the loss function to estimate the descent direction. However, they have a number of drawbacks, including convergence to saddle points (as opposed to minima), slow convergence, and sensitivity to parameter tuning. In contrast, second order methods, which use curvature information in addition to the gradient, have been shown theoretically to achieve faster convergence rates. When used in the context of machine learning applications, they offer faster (quadratic) convergence, stability with respect to parameter tuning, and robustness to problem conditioning. In spite of these advantages, first order methods are commonly used because of their simplicity of implementation and low per-iteration cost. The need to generate and use curvature information in the form of a dense Hessian matrix makes each iteration of second order methods more expensive.

In this work, we address three key problems associated with second order methods: (i) what is the best way to incorporate curvature information into the optimization procedure; (ii) how do we reduce the operation count of each iteration of a second order method while maintaining its superior convergence property; and (iii) how do we leverage high-performance computing platforms to significantly accelerate second order methods. To answer the first question, we propose and validate the use of Fisher information matrices in second order methods to significantly accelerate convergence. The second question is answered through the use of statistical sampling techniques that suitably sample matrices to reduce per-iteration cost without impacting convergence. The third question is addressed through the use of graphics processing units (GPUs) in distributed platforms to deliver state of the art solvers.

Through our work, we show that our solvers are capable of significant improvement over state of the art optimization techniques for training machine learning models. We demonstrate improvements in terms of training time (over an order of magnitude in wall-clock time), generalization properties of learned models, and robustness to problem conditioning.
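A minimal sketch of the subsampled second-order idea, assuming an l2-regularized logistic-regression loss: the gradient is computed on the full data, but the Hessian-vector products inside a conjugate-gradient solve use only a random subsample, which is the per-iteration cost reduction the abstract refers to. Sample sizes and iteration counts are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y, lam=1e-3):
    """Full gradient of the l2-regularized logistic loss."""
    return X.T @ (sigmoid(X @ w) - y) / len(y) + lam * w

def hess_vec(w, v, Xs, lam=1e-3):
    """Hessian-vector product using only the subsample Xs; this sampling
    step is what cuts the per-iteration cost."""
    p = sigmoid(Xs @ w)
    d = p * (1.0 - p)
    return Xs.T @ (d * (Xs @ v)) / len(Xs) + lam * v

def conjugate_gradient(hv, b, iters=20, tol=1e-10):
    """Approximately solve H s = b using only Hessian-vector products."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = hv(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X @ rng.normal(size=20) > 0).astype(float)
w = np.zeros(20)
for _ in range(15):                        # subsampled Newton iterations
    idx = rng.choice(len(X), size=500, replace=False)
    step = conjugate_gradient(lambda v: hess_vec(w, v, X[idx]),
                              -grad(w, X, y))
    w += step
```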
APA, Harvard, Vancouver, ISO, and other styles
45

Leke, Collins Achepsah. "Empirical evaluation of optimization techniques for classification and prediction tasks." Thesis, 2014. http://hdl.handle.net/10210/9858.

Full text
Abstract:
M.Ing. (Electrical and Electronic Engineering)
Missing data is an issue which leads to a variety of problems in the analysis and processing of datasets in almost every aspect of day-to-day life. For this reason, missing data and ways of handling it have been an area of research in a variety of disciplines in recent times. This thesis presents a method aimed at finding approximations to missing values in a dataset by making use of Genetic Algorithm (GA), Simulated Annealing (SA), Particle Swarm Optimization (PSO), Random Forest (RF), and Negative Selection (NS) in combination with auto-associative neural networks, and also provides a comparative analysis of these algorithms. The suggested methods use the optimization algorithms to minimize an error function derived from training an auto-associative neural network, during which the interrelationships between the inputs and the outputs are obtained and stored in the weights connecting the different layers of the network. The error function is expressed as the square of the difference between the actual observations and the values predicted by the auto-associative neural network. In the event of missing data, not all values of the actual observations are known; hence the error function is decomposed to depend on the known and unknown variable values. A Multi-Layer Perceptron (MLP) architecture is employed for the neural networks, trained using the Scaled Conjugate Gradient (SCG) method. The research primarily focuses on predicting missing data entries from two datasets, the Manufacturing dataset and the Forest Fire dataset. Prediction is a representation of how things will occur in the future based on past occurrences and experiences. The research also investigates the use of the proposed technique in approximating and classifying missing data with great accuracy from five classification datasets: the Australian Credit, German Credit, Japanese Credit, Heart Disease and Car Evaluation datasets. It further examines the impact of using different neural network architectures in training the network and finding approximations for the missing values, and of using the best possible architecture for evaluation purposes. This research reveals that the values approximated by the proposed models are accurate, with the correlation between the actual missing values and the corresponding approximated values on the Manufacturing dataset ranging between 94.7% and 95.2%, with the exception of the Negative Selection algorithm, which resulted in a correlation coefficient of 49.6%. On the Forest Fire dataset, a low correlation between the actual missing values and the corresponding approximated values was observed, in the range 0.95% to 4.49%, due to the nature of the values of the variables in the dataset. On this dataset the Negative Selection algorithm revealed a negative correlation between the actual and approximated values, with a magnitude of 100%. The approximations found for missing data are also observed to depend on the particular neural network architecture employed in training the dataset. Further analysis revealed that the Random Forest algorithm on average performed better than the GA, SA, PSO, and NS algorithms, yielding the lowest Mean Square Error, Root Mean Square Error, and Mean Absolute Error values.
At the other end of the scale was the NS algorithm, which produced the highest values for the three error metrics, bearing in mind that lower values mean better performance, and vice versa. The evaluation of the algorithms on the classification datasets revealed that the most accurate at classifying and identifying to which of a set of categories a new observation belongs, on the basis of the training set, is the Random Forest algorithm, which yielded the highest AUC percentage values on all five classification datasets. The differences between its AUC values and those of the GA, SA, PSO, and NS algorithms were statistically significant, with the most statistically significant differences observed when the AUC values of the Random Forest algorithm were compared to those of the Negative Selection algorithm on all five classification datasets. The GA, SA, and PSO algorithms produced AUC values which, when compared against each other on all five classification datasets, were not very different. Overall analysis of the datasets considered revealed that the algorithm which performed best on both the prediction and classification problems was the Random Forest algorithm, as seen from the results obtained. At the other end of the scale, after comparison of the results, was the Negative Selection algorithm, which produced the highest error metric values on the prediction problems and the lowest AUC values on the classification problems.
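The imputation scheme reduces to minimizing a reconstruction error over the unknown entries only. The sketch below, assuming a toy stand-in for the trained auto-associative network, uses a generic gradient-based minimizer for brevity where the thesis employs GA, SA, PSO, RF, or NS; the weight matrix and record are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for a trained auto-associative network; in the thesis an MLP
# trained with SCG stores the inter-variable relationships in its weights.
W = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])   # hypothetical learned mapping

def autoencode(x):
    return np.tanh(x @ W)

def imputation_error(unknown, record, missing_idx):
    """Squared reconstruction error as a function of the unknown entries;
    the known entries stay fixed (the known/unknown decomposition)."""
    x = record.copy()
    x[missing_idx] = unknown
    return np.sum((x - autoencode(x)) ** 2)

record = np.array([0.4, np.nan, 0.7])       # one incomplete observation
missing_idx = np.where(np.isnan(record))[0]
res = minimize(imputation_error, x0=np.zeros(missing_idx.size),
               args=(record, missing_idx))
print("imputed value(s):", res.x)
```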
APA, Harvard, Vancouver, ISO, and other styles
46

Agbugba, Emmanuel Emenike. "Hybridization of particle Swarm Optimization with Bat Algorithm for optimal reactive power dispatch." Diss., 2017. http://hdl.handle.net/10500/23630.

Full text
Abstract:
This research presents a Hybrid Particle Swarm Optimization with Bat Algorithm (HPSOBA) based approach to solve the Optimal Reactive Power Dispatch (ORPD) problem. The primary objective of this project is the minimization of active power transmission losses by optimally setting the control variables within their limits while making sure that the equality and inequality constraints are not violated. Particle Swarm Optimization (PSO) and the Bat Algorithm (BA), both nature-inspired algorithms, have become potential options for solving very difficult optimization problems like ORPD. Although PSO requires high computational time, it converges quickly, while BA requires less computational time and has the ability to switch automatically from exploration to exploitation when optimality is imminent. This research integrated the respective advantages of the PSO and BA algorithms to form a hybrid tool denoted the HPSOBA algorithm. HPSOBA combines the fast convergence ability of PSO with the low computation time of BA to reach a better optimal solution by incorporating the BA's frequency into the PSO velocity equation in order to control the pace. The HPSOBA, PSO and BA algorithms were implemented in the MATLAB programming language and tested on three benchmark test functions (Griewank, Rastrigin and Schwefel) and on the IEEE 30- and 118-bus test systems to solve ORPD without a DG unit. A modified IEEE 30-bus test system was further used to validate the proposed hybrid algorithm for optimal placement of a DG unit for active power transmission line loss minimization. By comparison, the HPSOBA algorithm's results proved superior to those of the PSO and BA methods. To check whether the performance of HPSOBA could be improved further, it was modified by embedding three new modifications to form a modified hybrid approach denoted MHPSOBA. The MHPSOBA was validated on the IEEE 30-bus test system for the ORPD problem, and the results show that the HPSOBA algorithm outperforms the modified version (MHPSOBA).
Electrical and Mining Engineering. M. Tech. (Electrical Engineering)
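A minimal sketch of the hybrid velocity update the abstract describes: a BA-style random frequency term scales the cognitive and social components of the PSO update, controlling the pace. All constants (frequency range, inertia, acceleration coefficients) are conventional guesses rather than the thesis' tuned values, and the Rastrigin call simply echoes one of the benchmarks mentioned.

```python
import numpy as np

def hpsoba(obj, dim, n=30, iters=200, fmin=0.0, fmax=2.0,
           w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal HPSOBA-style sketch: PSO paced by a bat-style frequency."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.12, 5.12, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pbest_f = np.array([obj(p) for p in x])
    g = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        freq = fmin + (fmax - fmin) * rng.random((n, 1))  # BA frequency draw
        r1 = rng.random((n, dim))
        r2 = rng.random((n, dim))
        # PSO velocity update with the frequency term controlling the pace
        v = w * v + freq * (c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = x + v
        f = np.array([obj(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# e.g. on the Rastrigin benchmark mentioned in the abstract:
rastrigin = lambda z: 10 * z.size + np.sum(z**2 - 10 * np.cos(2 * np.pi * z))
print(hpsoba(rastrigin, dim=5))
```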
APA, Harvard, Vancouver, ISO, and other styles