
Theses on the topic "Entropy algorithms"

Create an accurate reference in APA, MLA, Chicago, Harvard, and several other citation styles.

Consult the top 50 theses for your research on the topic "Entropy algorithms".

Next to each source in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen source in your preferred citation style: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever this information is included in the metadata.

Browse theses from a wide variety of disciplines and organize your bibliography correctly.

1

Höns, Robin. « Estimation of distribution algorithms and minimum relative entropy ». [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980407877.

2

Luo, Shen. « Interior-Point Algorithms Based on Primal-Dual Entropy ». Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1181.

Abstract:
We propose a family of search directions based on primal-dual entropy in the context of interior point methods for linear programming. This new family contains previously proposed search directions in the context of primal-dual entropy. We analyze the new family of search directions by studying their primal-dual affine-scaling and constant-gap centering components. We then design primal-dual interior-point algorithms by utilizing our search directions in a homogeneous and self-dual framework. We present iteration complexity analysis of our algorithms and provide the results of computational experiments on NETLIB problems.
3

Fellman, Laura Suzanne. « The Genetic Algorithm and Maximum Entropy Dice ». PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/5247.

Abstract:
The Brandeis dice problem, originally introduced in 1962 by Jaynes as an illustration of the principle of maximum entropy, was solved using the genetic algorithm, and the resulting solution was compared with that obtained analytically. The effect of varying the genetic algorithm parameters was observed, and the optimum values for population size, mutation rate, and mutation interval were determined for this problem. The optimum genetic algorithm program was then compared to a completely random method of search and optimization. Finally, the genetic algorithm approach was extended to several variations of the original problem for which an analytical approach would be impractical.
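The Brandeis dice problem also has a closed-form maximum-entropy solution of exponential form, which a genetic-algorithm result such as the one in this thesis can be checked against. The sketch below (not taken from the thesis) computes that analytical solution for Jaynes's original constraint of a mean of 4.5, finding the Lagrange multiplier by bisection; the target mean and tolerance are the usual textbook choices.

```python
import math

def maxent_dice(target_mean=4.5, tol=1e-12):
    """Maximum-entropy distribution over faces 1..6 with a prescribed mean.

    The solution has the exponential form p_i proportional to exp(-beta * i);
    beta is found by bisection so that the mean matches the constraint."""
    faces = range(1, 7)

    def mean_for(beta):
        weights = [math.exp(-beta * i) for i in faces]
        z = sum(weights)
        return sum(i * w for i, w in zip(faces, weights)) / z

    lo, hi = -50.0, 50.0              # mean_for is decreasing in beta
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    beta = (lo + hi) / 2
    weights = [math.exp(-beta * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

p = maxent_dice(4.5)
entropy = -sum(pi * math.log(pi) for pi in p)
print([round(pi, 4) for pi in p], round(entropy, 4))
```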
4

Meehan, Timothy J. « Joint demodulation of low-entropy narrow band cochannel signals ». Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FMeehan%5FPhD.pdf.

Abstract:
Thesis (Ph.D. in Electrical Engineering)--Naval Postgraduate School, December 2006.
Dissertation supervisor(s): Frank E. Kragh. "December 2006." Includes bibliographical references (p. 167-177). Also available in print.
5

Reimann, Axel. « Evolutionary algorithms and optimization ». Doctoral thesis, [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=969093497.

6

JIMMY, TJEN. « Entropy-Based Sensor Selection Algorithms for Damage Detection in SHM Systems ». Doctoral thesis, Università degli Studi dell'Aquila, 2021. http://hdl.handle.net/11697/173561.

Abstract:
It is often the case that small faults in a structure lead to irreparable damage that causes huge financial losses or even poses safety risks, so early fault detection is necessary to avoid such events. This thesis considers the problem of structural damage detection and makes three main contributions. First, a novel sensor selection algorithm based on the concepts of entropy and information gain from information theory is developed to reduce the number of sensors without degrading, and in some cases even improving, model accuracy. Second, a novel technique based on Kalman filtering and on a combination of regression trees from machine learning and autoregressive (AR) system identification from control theory is derived to build models that can be used to detect structural damage. Finally, a new fault detection algorithm based on Poly-Exponential (PE) models and nonlinear Kalman filtering of the residual is introduced, which enhances the sensitivity of the proposed fault detection scheme and improves the data prediction quality for some accelerometers by a notable margin. The presented techniques are validated on three different experimental datasets, providing evidence that the proposed algorithms outperform some previous approaches, improving prediction accuracy and damage detection sensitivity while reducing the number of sensors.
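As a rough illustration of the entropy and information-gain idea behind the sensor selection contribution, the sketch below ranks sensors by the information gain of their discretized readings with respect to a damage label. The synthetic data, bin count, and ranking rule are illustrative assumptions, not the procedure used in the thesis.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (in bits) of a discrete label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(sensor_readings, labels, bins=8):
    """IG = H(labels) - H(labels | discretized sensor reading)."""
    binned = np.digitize(sensor_readings, np.histogram_bin_edges(sensor_readings, bins))
    h_cond = 0.0
    for b in np.unique(binned):
        mask = binned == b
        h_cond += mask.mean() * entropy(labels[mask])
    return entropy(labels) - h_cond

# Illustrative data: 4 accelerometers, binary damaged/undamaged label.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)
readings = rng.normal(size=(500, 4)) + labels[:, None] * np.array([0.0, 0.5, 1.0, 2.0])

gains = [information_gain(readings[:, j], labels) for j in range(4)]
ranking = np.argsort(gains)[::-1]          # keep the most informative sensors first
print("information gain per sensor:", np.round(gains, 3), "ranking:", ranking)
```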
7

Kirsch, Matthew Robert. « Signal Processing Algorithms for Analysis of Categorical and Numerical Time Series : Application to Sleep Study Data ». Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1278606480.

8

Molari, Marco. « Implementation of network entropy algorithms on hpc machines, with application to high-dimensional experimental data ». Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6160/.

Abstract:
Network Theory is a prolific and lively field, especially when it approaches Biology. New concepts from this theory find application in areas where extensive datasets are already available for analysis, without the need to invest money to collect them. The only tools that are necessary to accomplish an analysis are easily accessible: a computing machine and a good algorithm. As these two tools progress, thanks to technology advancement and human efforts, wider and wider datasets can be analysed. The aim of this paper is twofold. Firstly, to provide an overview of one of these concepts, which originates at the meeting point between Network Theory and Statistical Mechanics: the entropy of a network ensemble. This quantity has been described from different angles in the literature. Our approach tries to be a synthesis of the different points of view. The second part of the work is devoted to presenting a parallel algorithm that can evaluate this quantity over an extensive dataset. Eventually, the algorithm will also be used to analyse high-throughput data coming from biology.
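One standard formulation of the entropy of a network ensemble, for the simple case in which each edge is drawn independently with probability p_ij, is sketched below. The thesis synthesizes several such definitions and evaluates them with a parallel algorithm on HPC machines; this toy serial version only shows the basic quantity.

```python
import numpy as np

def ensemble_entropy(p):
    """Shannon entropy of an ensemble of undirected graphs in which each
    edge (i, j) is drawn independently with probability p[i, j]:

    S = -sum_{i<j} [ p_ij ln p_ij + (1 - p_ij) ln (1 - p_ij) ]
    """
    iu = np.triu_indices_from(p, k=1)        # use each unordered pair once
    q = np.clip(p[iu], 1e-12, 1 - 1e-12)     # avoid log(0)
    return -np.sum(q * np.log(q) + (1 - q) * np.log(1 - q))

# Illustrative ensemble: uniform Erdos-Renyi-like link probabilities for 100 nodes.
n = 100
p = np.full((n, n), 0.05)
print("ensemble entropy (nats):", round(float(ensemble_entropy(p)), 2))
```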
9

Kotha, Aravind Eswar Ravi Raja, et Lakshmi Ratna Hima Rajitha Majety. « Performance Comparison of Image Enhancement Algorithms Evaluated on Poor Quality Images ». Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13880.

Abstract:
Many applications require automatic image analysis for input images of varying quality. In many cases, the quality of the acquired images is suitable for the purpose of the application. However, in some cases the quality of the acquired image has to be modified according to the needs of a specific application. A higher image quality can be achieved by Image Enhancement (IE) algorithms. The choice of IE technique is challenging, as it varies with the application purpose. The goal of this research is to investigate the possibility of applying IE algorithms selectively. The entropy and Peak Signal to Noise Ratio (PSNR) of the acquired image are considered as parameters for this selectivity. Three algorithms, Retinex, the bilateral filter, and bilateral tone adjustment, were chosen as the IE techniques to evaluate in this work, with entropy and PSNR used for their performance evaluation. In this study, we considered images from three fingerprint image databases as input images to investigate the algorithms. The decision to enhance an image in these databases with one of the considered algorithms is based on empirically evaluated entropy and PSNR thresholds. The Automatic Fingerprint Identification System (AFIS) was selected as the application of interest. The evaluation results show that the performance of the investigated IE algorithms significantly affects the performance of the AFIS, and that entropy and PSNR may be considered as indicators of the enhancement required for an input image to the AFIS.
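A minimal sketch of the two selectivity parameters named in the abstract: histogram entropy and PSNR of an 8-bit image, combined into a yes/no enhancement decision. The threshold values are placeholders; the thesis determines them empirically for fingerprint databases.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit grayscale image histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def psnr(original, processed):
    """Peak signal-to-noise ratio between two 8-bit images, in dB."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# Hypothetical thresholds; the thesis evaluates them empirically.
ENTROPY_THRESHOLD = 4.0
PSNR_THRESHOLD = 30.0

def needs_enhancement(img, reference):
    return image_entropy(img) < ENTROPY_THRESHOLD or psnr(reference, img) < PSNR_THRESHOLD

img = np.random.default_rng(7).integers(0, 256, size=(64, 64))
print(image_entropy(img), needs_enhancement(img, reference=img))
```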
10

Saraiva, Gustavo Francisco Rosalin. « Análise temporal da sinalização elétrica em plantas de soja submetidas a diferentes perturbações externas ». Universidade do Oeste Paulista, 2017. http://bdtd.unoeste.br:8080/jspui/handle/jspui/1087.

Abstract:
Plants are complex organisms with dynamic processes that, because of their sessile way of life, are influenced by environmental conditions at all times. Plants can perceive and respond accurately to different environmental stimuli, but this requires a complex and efficient signalling system. Electrical signalling in plants has been known for a long time, but has recently gained prominence as its relation to plant physiological processes has become better understood. The objective of this thesis was to test the following hypotheses: time series obtained from the electrical signalling of plants carry non-random information with a dynamic, oscillatory pattern; this dynamics is affected by environmental stimuli; and there are specific patterns in the responses to stimuli. In a controlled environment, stressful environmental stimuli were applied to soybean plants, and electrical signalling data were collected before and after each stimulus. The time series obtained were analysed with statistical and computational tools to determine the frequency spectrum (FFT), the autocorrelation of the values, and the Approximate Entropy (ApEn). To verify the existence of patterns in the series, classification algorithms from machine learning were used. The analysis showed that the electrical signals collected from plants present oscillatory dynamics with a power-law frequency distribution. The results make it possible to distinguish, with high accuracy, series collected before and after the application of the stimuli. The PSD and autocorrelation analyses showed a large difference in the dynamics of the electrical signals before and after stimulation, and the ApEn analysis showed a decrease in signal complexity after the stimuli. The classification algorithms reached significant accuracy in detecting patterns and classifying the time series, showing that there are mathematical patterns in the different electrical responses of the plants. It is concluded that time series of plant bioelectrical signals contain discriminant information; the signals have oscillatory dynamics whose properties are altered by environmental stimuli; and there are mathematical patterns embedded in plant responses to specific stimuli.
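For reference, a compact implementation of the Approximate Entropy (ApEn) statistic used in this work to quantify signal complexity; the parameter defaults (m = 2, r = 0.2 times the signal SD) follow common practice rather than the thesis's specific settings.

```python
import numpy as np

def approximate_entropy(u, m=2, r=None):
    """Approximate Entropy ApEn(m, r) of a 1-D signal (Pincus, 1991).

    Lower values indicate a more regular (less complex) signal."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    if r is None:
        r = 0.2 * np.std(u)              # common default: 20% of the signal SD

    def phi(m):
        # Embed the signal as overlapping windows of length m.
        x = np.array([u[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of windows.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)      # fraction of windows within tolerance r
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A periodic signal should give lower ApEn than white noise.
t = np.arange(1000)
print(approximate_entropy(np.sin(0.1 * t)),
      approximate_entropy(np.random.default_rng(2).normal(size=1000)))
```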
11

Lo, Johnny Li-Chang. « A framework for cryptography algorithms on mobile devices ». Diss., University of Pretoria, 2007. http://hdl.handle.net/2263/28849.

Abstract:
Mobile communication devices have become a popular tool for gathering and disseminating information and data. With the evidence of the growth of wireless technology and a need for more flexible, customizable and better-optimised security schemes, it is evident that connection-based security such as HTTPS may not be sufficient. In order to provide sufficient security at the application layer, developers need access to a cryptography package. Such packages are available as third party mobile cryptographic toolkits or are supported natively on the mobile device. Typically mobile cryptographic packages have reduced their number of API methods to keep the package lightweight in size, but consequently making it quite complex to use. As a result developers could easily misuse a method which can weaken the entire security of a system without knowing it. Aside from the complexities in the API, mobile cryptography packages often do not apply sound cryptography within the implementation of the algorithms thus causing vulnerabilities in its utilization and initialization. Although FIPS 140-2 and CAPI suggest guidelines on how cryptographic algorithms should be implemented, they do not define the guidelines for implementing and using cryptography in a mobile environment. In our study, we do not define new cryptographic algorithms, instead, we investigate how sound cryptography can be applied practically in a mobile application environment and developed a framework called Linca (which stands for Logical Integration of Cryptographic Architectures) that can be used as a mobile cryptographic package to demonstrate our findings. The benefit that Linca has is that it hides the complexity of making incorrect cryptographic algorithm decisions, cryptographic algorithm initialization and utilization and key management, while maintaining a small size. Linca also applies sound cryptographic fundamentals internally within the framework, which radiates these benefits outwards at the API. Because Linca is a framework, certain architecture and design patterns are applied internally so that the cryptographic mechanisms and algorithms can be easily maintained. Linca showed better results when evaluated against two mobile cryptography API packages namely Bouncy Castle API and Secure and Trust Service API in terms of security and design. We demonstrate the applicability of Linca on using two realistic examples that cover securing network channels and on-device data.
Dissertation (MSc (Computer Science))--University of Pretoria, 2007.
12

Hyla, Bret M. « Sample Entropy and Random Forests a methodology for anomaly-based intrusion detection and classification of low-bandwidth malware attacks / ». Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FHyla.pdf.

Abstract:
Thesis (M.S. in Computer Science)--Naval Postgraduate School, September 2006.
Thesis Advisor(s): Craig Martell, Kevin Squire. "September 2006." Includes bibliographical references (p.59-62). Also available in print.
13

Gaudencio, Andreia. « Study on the texture of biomedical data : contributions from multiscale and multidimensional features based on entropy measures ». Electronic Thesis or Diss., Angers, 2025. http://www.theses.fr/2025ANGE0004.

Abstract:
The PhD aimed to develop texture extraction tools using artificial intelligence and entropy-based algorithms (EBA) for image processing. First, a systematic review investigated the utility of entropy in predicting several pathologies like cancer and lung diseases. Then, Shannon-based and conditional-based entropy algorithms were developed and compared for their computational efficiency and performance in texture analysis of medical images. Shannon-based algorithms were less computationally intensive and were applied to detect pulmonary diseases. Conditional-based algorithms showed superior stability and consistency. Two-dimensional (2D) ensemble fuzzy entropy was the best algorithm among the ensemble techniques to detect healthy lung tissue and two types of emphysema. The proposed three-dimensional multiscale fuzzy entropy led to 89.6% accuracy and 96% sensitivity when detecting COVID-19. Moreover, 2D symbolic dynamic entropy proved to be the most accurate EBA (87.3%) in detecting emphysema patients among healthy subjects. Finally, when using 2D entropy features provided by the EBAs developed, emphysema patients were detected with 89.1% accuracy and 95% area under the curve. Overall, the developed EBAs have proven to be effective in texture evaluation. In the future, they could be applied to various biomedical applications through different medical image sources.
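The entropy-based texture algorithms developed in this thesis (ensemble fuzzy, symbolic dynamic, and multiscale variants) are considerably more elaborate than the sketch below, which only illustrates the basic idea of a two-dimensional entropy feature: the Shannon entropy of the gray-level co-occurrence distribution of neighbouring pixels. The quantisation to 16 levels and the horizontal neighbourhood are arbitrary choices for the example.

```python
import numpy as np

def cooccurrence_entropy(img, levels=16):
    """Shannon entropy (bits) of the gray-level co-occurrence distribution of
    horizontally adjacent pixel pairs, a basic 2-D texture feature."""
    # Quantise the 8-bit image to a small number of gray levels.
    q = np.floor(img.astype(float) / 256.0 * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    # Joint histogram of (left pixel, right pixel) pairs.
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()], axis=1)
    hist = np.zeros((levels, levels))
    np.add.at(hist, (pairs[:, 0], pairs[:, 1]), 1)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
noise = rng.integers(0, 256, size=(64, 64))
smooth = np.tile(np.linspace(0, 255, 64, dtype=int), (64, 1))
print(cooccurrence_entropy(noise), cooccurrence_entropy(smooth))  # noise >> smooth
```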
14

Lökk, Adrian, et Jacob Hallman. « Viability of Sentiment Analysis for Troll Detection on Twitter : A Comparative Study Between the Naive Bayes and Maximum Entropy Algorithms ». Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186443.

Abstract:
In this study, we investigated whether sentiment analysis could prove to be a viable tool for troll detection on Twitter. The reason why sentiment analysis was analyzed as a possible tool was because of earlier work recognizing it as a feature that could be interesting to examine. By performing two different sentiment analysis methods, Naive Bayes and Maximum Entropy, an idea could be gathered of how well these approaches perform and whether they are viable for troll detection. The data set used was a set of 3000 tweets under the hashtag #BlackLivesMatter. Sentiment analysis was performed on the data set with both the Naive Bayes and Maximum Entropy approaches. The data was then clustered and visually presented in graphs. The results showed that sentiment analysis was not viable as a metric alone. However, together with other metrics it could prove to be useful. Ultimately, for k-means clustering, Maximum Entropy seemed to be the preferable sentiment analysis approach when looking at specific users whereas Naive Bayes performed better when researching individual tweets. As for finding trolls, a general conclusion on the viability of the algorithms could not be drawn, however Maximum Entropy was concluded to be preferable in this specific study.
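A minimal sketch of the two classifiers compared in the study, assuming scikit-learn is available and using logistic regression as the usual implementation of a Maximum Entropy classifier; the six toy tweets and their labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpus; the study used ~3000 #BlackLivesMatter tweets.
tweets = ["great peaceful march today", "so proud of this community",
          "this protest is a disgrace", "what a pathetic stunt",
          "love the energy out there", "these people are clowns"]
labels = [1, 1, 0, 0, 1, 0]            # 1 = positive, 0 = negative

X = CountVectorizer().fit_transform(tweets)

nb = MultinomialNB().fit(X, labels)                        # Naive Bayes
maxent = LogisticRegression(max_iter=1000).fit(X, labels)  # Maximum Entropy

print(nb.predict_proba(X[:1]), maxent.predict_proba(X[:1]))
```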
15

SILVA, Israel Batista Freitas da. « Representações cache eficientes para índices baseados em Wavelet trees ». Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/21050.

Abstract:
Today, there is an exponential growth in the volume of information in the world. This increase creates the demand for more efficient indexing and querying techniques, since, to be useful, that data needs to be manageable. Pattern matching means searching for a string (pattern) in a much bigger string (text), reporting the number of occurrences and/or its locations. To do that, we need to build a data structure known as an index. This structure will preprocess the text to allow for efficient queries. The adoption of an index depends heavily on its efficiency, and this is directly related to how well it performs on current machine architectures. The main objective of this work is to analyze the Wavelet Tree data structure as an index, assessing the impact of its internal organization with respect to spatial locality, and to propose ways to organize its data so as to reduce the number of cache misses incurred by its operations. We performed an empirical analysis using both real and simulated textual data to compare the running time and cache behavior of Wavelet Trees using five different proposals of internal data layout. A theoretical analysis of the cache complexity of a query operation is also presented for the most efficient layout. Two experiments suggest good asymptotic behavior for two of the analyzed layouts. A third experiment shows that for four of the five layouts, there was a systematic reduction in the number of cache misses for the lowest-level cache. This reduction, however, was not reflected in the runtime, nor in the performance of the highest-level cache. The results obtained allow us to conclude that the choice of a suitable layout can lead to a significant improvement in cache usage. Unlike the theoretical model, however, the cost of memory access only accounts for a fraction of the operations' computation time on the Wavelet Trees, so the decrease in the number of cache misses did not translate fully into gains in the execution time. However, this factor can still be critical in more extreme memory utilization situations.
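For readers unfamiliar with the index studied here, the sketch below shows a plain pointer-based wavelet tree and its rank query. It says nothing about the cache-conscious memory layouts that are the subject of the dissertation, and the bit vectors are stored naively as Python lists.

```python
class WaveletTree:
    """Minimal pointer-based wavelet tree over a character alphabet,
    supporting rank(c, i): occurrences of c in text[:i]."""

    def __init__(self, text, alphabet=None):
        self.alphabet = sorted(set(text)) if alphabet is None else alphabet
        if len(self.alphabet) == 1:
            self.bits = None                  # leaf: only one symbol left
            return
        mid = len(self.alphabet) // 2
        self.left_set = set(self.alphabet[:mid])
        # One bit per symbol: 0 -> left half of the alphabet, 1 -> right half.
        self.bits = [0 if c in self.left_set else 1 for c in text]
        self.prefix = [0]
        for b in self.bits:                   # prefix sums of the bit vector
            self.prefix.append(self.prefix[-1] + b)
        self.left = WaveletTree([c for c in text if c in self.left_set], self.alphabet[:mid])
        self.right = WaveletTree([c for c in text if c not in self.left_set], self.alphabet[mid:])

    def rank(self, c, i):
        """Number of occurrences of symbol c in the first i symbols."""
        if self.bits is None:
            return i
        ones = self.prefix[i]
        if c in self.left_set:
            return self.left.rank(c, i - ones)
        return self.right.rank(c, ones)

wt = WaveletTree("abracadabra")
print(wt.rank("a", 11), wt.rank("b", 5))      # 5 occurrences of 'a', 1 of 'b'
```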
16

Sinha, Anurag R. « Optimization of a new digital image compression algorithm based on nonlinear dynamical systems / ». Online version of thesis, 2008. http://hdl.handle.net/1850/5544.

17

Pereira, Filipe de Oliveira. « Separação cega de misturas com não-linearidade posterior utilizando estruturas monotônicas e algoritmos bio-inspirados de otimização ». [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259842.

Advisors: Romis Ribeiro de Faissol Attux, Leonardo Tomazeli Duarte
Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação
Abstract: This work aims at the development of Blind Source Separation (BSS) methods for Post-NonLinear (PNL) mixing models. In this particular case, despite the presence of nonlinear elements in the mixing model, it is still possible to recover the sources through Independent Component Analysis (ICA) methods. However, there are two major problems in the application of ICA techniques to PNL models. The first one concerns a restriction on the nonlinear functions present in the PNL model: they must be monotonic functions by construction. The second one is related to the adjustment of the PNL separating system via ICA-based cost functions: there may be sub-optimal local minima. To cope with the first problem, we investigate three types of monotonic nonlinear structures. Moreover, to circumvent the problem related to the presence of sub-optimal minima, we consider bio-inspired algorithms that have a significant global search potential. Finally, we perform a set of experiments in representative scenarios in order to identify, among the considered strategies, the best ones in terms of quality of the retrieved sources and overall complexity
18

Gielniak, Michael Joseph. « Adaptation of task-aware, communicative variance for motion control in social humanoid robotic applications ». Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43591.

Abstract:
An algorithm for generating communicative, human-like motion for social humanoid robots was developed. Anticipation, exaggeration, and secondary motion were demonstrated as examples of communication. Spatiotemporal correspondence was presented as a metric for human-like motion, and the metric was used to both synthesize and evaluate motion. An algorithm for generating an infinite number of variants from a single exemplar was established to avoid repetitive motion. The algorithm was made task-aware by including the functionality of satisfying constraints. User studies were performed with the algorithm using human participants. Results showed that communicative, human-like motion can be harnessed to direct partner attention and communicate state information. Furthermore, communicative, human-like motion for social robots produced by the algorithm allows human partners to feel more engaged in the interaction, recognize motion earlier, label intent sooner, and remember interaction details more accurately.
19

Kobayashi, Jorge Mamoru. « Entropy : algoritmo de substituição de linhas de cache inspirado na entropia da informação ». Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-29112016-102603/.

Abstract:
This work presents a study of the cache line replacement problem in microprocessors. Inspired by the Information Entropy concept stated by Claude E. Shannon in 1948, it proposes a novel heuristic for replacing cache lines. The main goal is to capture the referential locality of programs and to reduce the miss rate of cache accesses during program execution. The proposed algorithm, Entropy, employs this entropy heuristic to estimate the chances of a cache line being referenced again after it has been loaded into the cache; a novel decay function is introduced to optimize its operation. Results show that Entropy reduced the miss rate by up to 50.41% in comparison with LRU. This work also proposes a hardware implementation whose computational cost and complexity are comparable to those of the most widely used algorithm, LRU: for a 2-Mbyte, 8-way set-associative cache, the required additional storage is about 0.61% of the cache size. The Entropy algorithm was simulated in the SimpleScalar ISA simulator and compared with LRU using the SPEC CPU2000 benchmark programs.
20

Cosma, Ioana Ada. « Dimension reduction of streaming data via random projections ». Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:09eafd84-8cb3-4e54-8daf-18db7832bcfc.

Abstract:
A data stream is a transiently observed sequence of data elements that arrive unordered, with repetitions, and at very high rate of transmission. Examples include Internet traffic data, networks of banking and credit transactions, and radar derived meteorological data. Computer science and engineering communities have developed randomised, probabilistic algorithms to estimate statistics of interest over streaming data on the fly, with small computational complexity and storage requirements, by constructing low dimensional representations of the stream known as data sketches. This thesis combines techniques of statistical inference with algorithmic approaches, such as hashing and random projections, to derive efficient estimators for cardinality, l_{alpha} distance and quasi-distance, and entropy over streaming data. I demonstrate an unexpected connection between two approaches to cardinality estimation that involve indirect record keeping: the first using pseudo-random variates and storing selected order statistics, and the second using random projections. I show that l_{alpha} distances and quasi-distances between data streams, and entropy, can be recovered from random projections that exploit properties of alpha-stable distributions with full statistical efficiency. This is achieved by the method of L-estimation in a single-pass algorithm with modest computational requirements. The proposed estimators have good small sample performance, improved by the methods of trimming and winsorising; in other words, the value of these summary statistics can be approximated with high accuracy from data sketches of low dimension. Finally, I consider the problem of convergence assessment of Markov Chain Monte Carlo methods for simulating from complex, high dimensional, discrete distributions. I argue that online, fast, and efficient computation of summary statistics such as cardinality, entropy, and l_{alpha} distances may be a useful qualitative tool for detecting lack of convergence, and illustrate this with simulations of the posterior distribution of a decomposable Gaussian graphical model via the Metropolis-Hastings algorithm.
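A minimal sketch of the random-projection idea for the alpha = 1 case: Cauchy (1-stable) projections whose median absolute difference estimates the l1 distance between two vectors. The thesis uses L-estimation with trimming and winsorising rather than this plain median estimator, and the dimensions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
d, k = 10_000, 400                 # ambient dimension, sketch size

# Cauchy (alpha = 1 stable) projection matrix, shared by all streams.
R = rng.standard_cauchy(size=(k, d))

def sketch(x):
    # In a streaming setting an update x[i] += delta only adds delta * R[:, i].
    return R @ x

def l1_estimate(sa, sb):
    # For 1-stable projections, median(|sa - sb|) estimates the l1 distance.
    return np.median(np.abs(sa - sb))

a = rng.random(d)
b = a + rng.normal(scale=0.01, size=d)
print("true l1:", np.sum(np.abs(a - b)), "estimate:", l1_estimate(sketch(a), sketch(b)))
```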
21

Bouallagui, Sarra. « Techniques d'optimisation déterministe et stochastique pour la résolution de problèmes difficiles en cryptologie ». Phd thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00557912.

Abstract:
This thesis revolves around Boolean functions used in cryptography and the cryptanalysis of certain identification schemes. Boolean functions have algebraic properties that are frequently exploited in cryptography to build S-boxes (substitution tables). We are particularly interested in the construction of two types of functions: bent functions and balanced functions with a high degree of nonlinearity. On the cryptanalysis side, we focus on identification techniques based on the perceptron and permuted perceptron problems, and we carry out a new attack on the scheme in order to assess its feasibility. We develop new methods combining the deterministic DCA approach (Difference of Convex functions Algorithm) with heuristics (simulated annealing, cross-entropy, genetic algorithms, etc.). This hybrid approach, used throughout the thesis, is motivated by the promising results of DC programming.
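Bent and highly nonlinear balanced functions, mentioned in the abstract, are usually assessed through the Walsh-Hadamard spectrum; the sketch below computes the nonlinearity of a Boolean function this way. It only evaluates candidate functions and does not reproduce the DCA/metaheuristic search developed in the thesis.

```python
import numpy as np

def walsh_spectrum(truth_table):
    """Walsh-Hadamard spectrum of a Boolean function given as a 0/1 truth table
    of length 2**n (fast in-place transform, O(n * 2**n))."""
    f = 1 - 2 * np.asarray(truth_table, dtype=int)   # map 0/1 -> +1/-1
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            a, b = f[i:i + h].copy(), f[i + h:i + 2 * h].copy()
            f[i:i + h], f[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return f

def nonlinearity(truth_table):
    n = int(np.log2(len(truth_table)))
    return 2 ** (n - 1) - np.max(np.abs(walsh_spectrum(truth_table))) // 2

# f(x1..x4) = x1 x2 XOR x3 x4 is a classic bent function in 4 variables.
tt = [((x >> 3 & 1) & (x >> 2 & 1)) ^ ((x >> 1 & 1) & (x & 1)) for x in range(16)]
print(nonlinearity(tt))   # bent functions reach the maximum 2**(n-1) - 2**(n/2-1) = 6
```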
22

Robles, Bernard. « Etude de la pertinence des paramètres stochastiques sur des modèles de Markov cachés ». Phd thesis, Université d'Orléans, 2013. http://tel.archives-ouvertes.fr/tel-01058784.

Abstract:
The starting point of this work is Pascal Vrignat's thesis on modelling the degradation levels of a dynamic system with Hidden Markov Models (HMMs), for an industrial maintenance application. Four levels were defined: S1 for a production stoppage and S2 to S4 for gradual degradation. Having collected a number of field observations in various companies in the region, we built a synthetic HMM-based model to simulate the different degradation levels of a real system. First, we assess the relevance of the different observations, or symbols, used in modelling an industrial process, and introduce an entropy filter for this purpose. Then, with a view to improving the model, we address the questions: what is the most relevant sampling, and how many symbols are needed to best evaluate the model? We then study the characteristics of several possible ways of modelling an industrial process in order to derive the best architecture, using test criteria such as Shannon entropy, the Akaike criterion, and statistical tests. Finally, we compare the results of the synthetic model with those from industrial applications, and propose an adjustment of the model to bring it closer to field reality.
23

Sharify, Meisam. « Algorithmes de mise à l'échelle et méthodes tropicales en analyse numérique matricielle ». Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00643836.

Abstract:
Tropical algebra can be considered a relatively new field in mathematics. It appears in several areas such as optimization, synchronization of production and transportation, discrete event systems, optimal control, operations research, etc. The first part of this manuscript is devoted to the applications of tropical algebra to numerical linear algebra. We first consider the classical problem of estimating the roots of a univariate polynomial, and prove several new bounds on the absolute values of the roots by exploiting tropical methods. These results are particularly useful for polynomials whose coefficients have different orders of magnitude. We then examine the problem of computing the eigenvalues of a matrix polynomial. Here we introduce a general scaling technique, based on tropical algebra, which applies in particular to the companion form. The scaling relies on the construction of an auxiliary tropical polynomial function depending only on the norms of the matrices. The roots (the points of non-differentiability) of this tropical polynomial provide a prior estimate of the absolute values of the eigenvalues. This is justified in particular by a new result showing that, under suitable conditioning assumptions, there is a group of eigenvalues bounded in norm, whose order of magnitude is given by the largest root of the auxiliary tropical polynomial; a similar result holds for a group of small eigenvalues. We show experimentally that this scaling improves numerical stability, in particular when the data have different orders of magnitude. We also study the problem of computing the tropical eigenvalues (the points of non-differentiability of the characteristic polynomial) of a tropical matrix polynomial. From the combinatorial point of view, this problem is equivalent to computing, as a function of the parameter, the value of a maximum-weight matching in a bipartite graph whose arcs are weighted by convex piecewise-linear functions. We have developed an algorithm that computes these tropical eigenvalues in polynomial time. In the second part of the thesis, we consider the solution of very large optimal assignment problems, for which classical sequential algorithms are not efficient. We propose a new approach that exploits the connection between the optimal assignment problem and the entropy maximization problem. This leads to a preprocessing algorithm for the optimal assignment problem, based on an iterative method that eliminates the entries that cannot belong to an optimal assignment. We consider two iterative variants of the preprocessing algorithm, one using the Sinkhorn method and the other using Newton's method. This preprocessing reduces the initial problem to a much smaller one in terms of memory requirements. We also introduce a new iterative method based on a modification of the Sinkhorn algorithm, in which a deformation parameter is slowly increased. We prove that this iterative method (the deformed Sinkhorn iteration) converges to a matrix whose nonzero entries are exactly those belonging to the optimal permutations. An estimate of the convergence rate is also given.
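A minimal sketch of the Sinkhorn scaling underlying the preprocessing described above: for a profit matrix C, scaling exp(lam * C) to a doubly stochastic matrix concentrates the mass on optimal assignments as lam grows. The fixed lam and iteration count below are illustrative; the deformed iteration in the thesis increases the deformation parameter gradually.

```python
import numpy as np

def sinkhorn(C, lam=30.0, iters=500):
    """Sinkhorn scaling of K = exp(lam * C) to an (approximately) doubly
    stochastic matrix; for large lam its entries concentrate on the optimal
    assignment(s) of the profit matrix C."""
    K = np.exp(lam * (C - C.max()))          # shift for numerical stability
    u = np.ones(C.shape[0])
    v = np.ones(C.shape[1])
    for _ in range(iters):
        u = 1.0 / (K @ v)                    # row normalisation factors
        v = 1.0 / (K.T @ u)                  # column normalisation factors
    return np.diag(u) @ K @ np.diag(v)

rng = np.random.default_rng(5)
C = rng.random((5, 5))
P = sinkhorn(C)
print(np.round(P, 2))                         # close to a permutation matrix
print("assignment (row -> col):", P.argmax(axis=1))
```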
24

Han, Seungju. « A family of minimum Renyi's error entropy algorithm for information processing ». [Gainesville, Fla.] : University of Florida, 2007. http://purl.fcla.edu/fcla/etd/UFE0021428.

25

Semerád, Lukáš. « Generování kryptografického klíče z biometrických vlastností oka ». Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236038.

Abstract:
The main topic of the thesis is the derivation of formulas for the amount of information entropy in the biometric characteristics of the iris and retina. This area of biometrics has not yet been studied, so the thesis aims to initiate research in this direction. The thesis also discusses the historical context of security and identification based on human biometric characteristics, with an emphasis on potential uses of iris and retina biometrics. Daugman's algorithm for converting an iris image into a binary code that can be used as a cryptographic key is discussed in detail, and an application implementing this conversion is also part of the thesis.
26

CAMPOS, M. C. M. « Development of an Entropy-Based Swarm Algorithm for Continuous Dynamic Constrained Optimization ». Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/9871.

Abstract:
Dynamic constrained optimization problems form a class of problems where the objective function or the constraints can change over time. In static optimization, finding a global optimum is considered the main goal. In dynamic optimization, the goal is not only to find an optimal solution, but also to track its trajectory as closely as possible over time. Changes in the environment must be taken into account during the optimization process, in such a way that these problems have to be solved online. Many real-world problems can be formulated within this framework. This thesis proposes an entropy-based bare bones particle swarm for solving dynamic constrained optimization problems. Shannon's entropy is established as a phenotypic diversity index, and the proposed algorithm uses Shannon's index of diversity to aggregate the global-best and local-best bare bones particle swarm variants. The proposed approach applies the idea of a mixture of search directions by using the index of diversity as a factor to balance the influence of the global-best and local-best search directions. High diversity promotes the search guided by the global-best solution, with a normal distribution for exploitation. Low diversity promotes the search guided by the local-best solution, with a heavy-tailed distribution for exploration. A constraint-handling strategy is also proposed, which uses a ranking method with selection based on the technique for order of preference by similarity to ideal solution to obtain the best solution within a specific population of candidate solutions. Mechanisms to detect changes in the environment and to update the particles' memories are also implemented in the proposed algorithm. These strategies do not act independently; they operate in relation to each other to tackle problems such as diversity loss due to convergence and outdated memories due to changes in the environment. Their combined effect provides an algorithm able to maintain a proper balance between exploration and exploitation at any stage of the search process without losing the ability to track an optimal solution that changes over time. An empirical study was carried out to evaluate the performance of the proposed approach. Experimental results show the suitability of the algorithm in terms of effectiveness at finding good solutions for the benchmark problems investigated. Finally, an application is developed where the proposed algorithm is applied to solve the dynamic economic dispatch problem in power systems.
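The sketch below is a loose illustration, not the thesis's algorithm: it uses the normalised Shannon entropy of binned fitness values as a diversity index and lets that index choose, per particle, between a Gaussian move towards the global best (exploitation) and a heavy-tailed Cauchy move towards the local best (exploration). Constraint handling, change detection, and memory updates are omitted, and all names and parameters are illustrative.

```python
import numpy as np

def normalised_entropy(values, bins=10):
    """Shannon entropy of binned values, normalised to [0, 1]; used here as a
    phenotypic diversity index of the swarm."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log(p)) / np.log(bins))

def bare_bones_step(positions, fitness, personal_best, local_best, global_best, rng):
    """One bare-bones swarm move mixing the two behaviours described above."""
    diversity = normalised_entropy(fitness)
    new_positions = np.empty_like(positions)
    for i in range(len(positions)):
        if rng.random() < diversity:
            # High diversity: exploit around the global best with a Gaussian.
            mu = (personal_best[i] + global_best) / 2.0
            sigma = np.abs(personal_best[i] - global_best) + 1e-12
            new_positions[i] = rng.normal(mu, sigma)
        else:
            # Low diversity: explore around the local best with a Cauchy step.
            mu = (personal_best[i] + local_best[i]) / 2.0
            gamma = np.abs(personal_best[i] - local_best[i]) + 1e-12
            new_positions[i] = mu + gamma * rng.standard_cauchy(size=mu.shape)
    return new_positions

# Toy usage on the sphere function; personal bests double as local bests here.
rng = np.random.default_rng(6)
pos = rng.normal(size=(20, 2))
fit = (pos ** 2).sum(axis=1)
new_pos = bare_bones_step(pos, fit, pos.copy(), pos.copy(), pos[fit.argmin()], rng)
```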
27

Carvalho, André Izecson de. « A design method based in entropy statistics ». Instituto Tecnológico de Aeronáutica, 2008. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=1169.

Abstract:
Since the beginning of aviation history, each new aircraft has been designed to be more economical, faster, lighter, better than those that preceded it. Understanding the technological evolution of aviation is therefore extremely useful when designing a new aircraft. Saviotti (1984) and, later, Frenken (1997) proposed a method for analysing the technological evolution of aircraft. This method, based on information-theoretic concepts developed by Shannon (1948), especially the concept of statistical entropy, proved quite effective. The method, however, is essentially an analysis tool rather than a design tool. Based on an aircraft database, it can determine to what extent each aircraft was influenced by its predecessors (referred to as "convergence") and, in turn, influenced its successors (referred to as "diffusion"). In this work, a tool to support aircraft design is proposed, based on the statistical entropy method. Given the specifications of the aircraft to be designed and a database with information on many aircraft, the entropy of the system is minimised, which leads to an aircraft with a high convergence index, that is, one that has absorbed as much as possible of the technology of existing aircraft. The entropy minimisation is carried out by a genetic algorithm, chosen for its robustness in handling large amounts of information and in minimising several independent variables simultaneously even without a physical model of the system. Several analyses were performed to evaluate the effectiveness of the statistical entropy method. In particular, a design produced by the method was compared with three other designs with the same specifications, carried out by different teams of engineers using conventional methods. The range of specifications over which the method is effective, and its limits, were also evaluated. Finally, to assess the quality of the results more thoroughly, the aircraft obtained were subjected to a performance analysis to check whether they were internally consistent.
28

Armean, Irina Mărioara. « Protein complexes analyzed by affinity purification and maximum entropy algorithm using published annotations ». Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.707940.

29

Danks, Jacob R. « Algorithm Optimizations in Genomic Analysis Using Entropic Dissection ». Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804921/.

Abstract:
In recent years, the collection of genomic data has skyrocketed and databases of genomic data are growing at a faster rate than ever before. Although many computational methods have been developed to interpret these data, they tend to struggle to process the ever-increasing file sizes that are being produced and fail to take advantage of the advances in multi-core processors by using parallel processing. In some instances, loss of accuracy has been a necessary trade off to allow faster computation of the data. This thesis discusses one such algorithm that has been developed and how changes were made to allow larger input file sizes and reduce the time required to achieve a result without sacrificing accuracy. An information entropy based algorithm was used as a basis to demonstrate these techniques. The algorithm dissects the distinctive patterns underlying genomic data efficiently, requiring no a priori knowledge, and thus is applicable in a variety of biological research applications. This research describes how parallel processing and object-oriented programming techniques were used to process larger files in less time and achieve a more accurate result from the algorithm. Through object-oriented techniques, the maximum allowable input file size was significantly increased from 200 MB to 2000 MB. Using parallel processing techniques allowed the program to finish processing data in less than half the time of the sequential version. The accuracy of the algorithm was improved by reducing data loss throughout the algorithm. Finally, adding user-friendly options enabled the program to use requests more effectively and further customize the logic used within the algorithm.
30

Wang, Zhenggang. « Improved algorithm for entropic segmentation of DNA sequence / ». View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202004%20WANG.

Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2004.
Includes bibliographical references (leaves 56-58). Also available in electronic version. Access restricted to campus users.
31

NEGRI, MATTEO. « Is Evolution an Algorithm ? Effects of local entropy in unsupervised learning and protein evolution ». Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2972307.

32

Champion, Julie. « Sur les algorithmes de projections en entropie relative avec contraintes marginales ». Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2036/.

Abstract:
This work is focused on an algorithm for the construction of probability measures with prescribed marginal laws, called Iterative Proportional Fitting (IPF). Arising from statistical problems, this algorithm is based on successive projections onto probability spaces for the Kullback-Leibler relative entropy pseudo-distance. This thesis is a survey of the current results on the subject, together with some extensions and refinements. The first part deals with the study of projections in relative entropy, namely existence and uniqueness criteria, and characterization properties related to the closedness of a sum of subspaces. Under certain assumptions, the problem becomes a maximum-entropy problem under graphical marginal constraints. In the second part, we study the iterative procedure IPF. Introduced initially for an estimation problem on contingency tables, it is, in a more general setting, an analogue of a classical algorithm of alternating projections on Hilbert spaces. After presenting the properties of the IPF, we look at convergence results in the finite discrete case, the Gaussian case, and the more general continuous case with two marginals, for which some extensions are given. The thesis then focuses on the Gaussian case with two prescribed marginals, for which a new formulation of the IPF yields a rate of convergence, shown to be optimal in dimension 2.
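For the discrete two-marginal case discussed above, the IPF iteration reduces to alternately rescaling rows and columns; a short sketch, with an arbitrary 2x2 seed table and marginals, is given below.

```python
import numpy as np

def ipf(table, row_marginals, col_marginals, iters=200, tol=1e-10):
    """Iterative Proportional Fitting: rescale a positive table so that its
    row and column sums match the prescribed marginals.  Each half-step is a
    Kullback-Leibler projection onto one marginal constraint set."""
    q = table.astype(float).copy()
    for _ in range(iters):
        q *= (row_marginals / q.sum(axis=1))[:, None]   # fit the row sums
        q *= (col_marginals / q.sum(axis=0))[None, :]   # fit the column sums
        if np.allclose(q.sum(axis=1), row_marginals, atol=tol):
            break
    return q

seed = np.array([[1.0, 2.0], [3.0, 4.0]])               # starting measure
fitted = ipf(seed, row_marginals=np.array([0.4, 0.6]),
             col_marginals=np.array([0.7, 0.3]))
print(np.round(fitted, 4), fitted.sum(axis=1), fitted.sum(axis=0))
```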
Styles APA, Harvard, Vancouver, ISO, etc.
33

Perche, Paul-Benoît. « Méthodes d'induction par arbres de décision dans le cadre de l'aide au diagnostic ». Lille 1, 1999. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1999/50376-1999-65.pdf.

Texte intégral
Résumé :
The first part of the dissertation recalls the general approach to studying a system for which no analytical behavioural model is available. In the context of diagnostic support, we propose to build a model of a system in order to monitor it, based on the analysis of data collected under different operating modes. Several learning techniques are presented. In a second phase, we recall the main results of information theory applied to the structural analysis of complex systems. This leads us to propose several entropy-based criteria of system modelability, which quantify the quality of the constructed model. Having justified the use of conditional entropy as the basis for these criteria, we define algorithms that build a behavioural model in the form of decision trees. Several viewpoints on tree construction are considered and combined: algorithms associating a top-down or bottom-up approach with an aggregative or disaggregative variable-selection approach are presented. We highlight the performance of bottom-up disaggregative approaches by developing a global (level-by-level) construction of the decision tree. These algorithms are then refined using local (node-by-node) construction approaches. Finally, the methods are applied to examples in order to compare and validate them.
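A minimal sketch of the kind of conditional-entropy criterion the abstract refers to, assuming discrete observations held in NumPy arrays (the variables and data are illustrative, not taken from the thesis): the candidate variable leaving the smallest residual uncertainty H(Y|X) about the operating mode Y would be preferred when growing the tree.

    import numpy as np

    def conditional_entropy(x, y):
        """H(Y|X) in bits for two discrete 1-D arrays of equal length."""
        h = 0.0
        for xv in np.unique(x):
            mask = (x == xv)
            _, counts = np.unique(y[mask], return_counts=True)
            p = counts / counts.sum()
            h += mask.mean() * -(p * np.log2(p)).sum()
        return h

    # Illustrative data: operating mode y observed together with two candidate variables.
    y  = np.array([0, 0, 1, 1, 1, 0, 1, 0])
    x1 = np.array([0, 0, 1, 1, 1, 0, 1, 1])   # informative about y
    x2 = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # uninformative
    print(conditional_entropy(x1, y), conditional_entropy(x2, y))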
Styles APA, Harvard, Vancouver, ISO, etc.
34

Kilpatrick, Alastair Morris. « Novel stochastic and entropy-based Expectation-Maximisation algorithm for transcription factor binding site motif discovery ». Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10489.

Texte intégral
Résumé :
The discovery of transcription factor binding site (TFBS) motifs remains an important and challenging problem in computational biology. This thesis presents MITSU, a novel algorithm for TFBS motif discovery which exploits stochastic methods as a means of both overcoming optimality limitations in current algorithms and as a framework for incorporating relevant prior knowledge in order to improve results. The current state of the TFBS motif discovery field is surveyed, with a focus on probabilistic algorithms that typically take the promoter regions of coregulated genes as input. A case is made for an approach based on the stochastic Expectation-Maximisation (sEM) algorithm; its position amongst existing probabilistic algorithms for motif discovery is shown. The algorithm developed in this thesis is unique amongst existing motif discovery algorithms in that it combines the sEM algorithm with a derived data set which leads to an improved approximation to the likelihood function. This likelihood function is unconstrained with regard to the distribution of motif occurrences within the input dataset. MITSU also incorporates a novel heuristic to automatically determine TFBS motif width. This heuristic, known as MCOIN, is shown to outperform current methods for determining motif width. MITSU is implemented in Java and an executable is available for download. MITSU is evaluated quantitatively using realistic synthetic data and several collections of previously characterised prokaryotic TFBS motifs. The evaluation demonstrates that MITSU improves on a deterministic EM-based motif discovery algorithm and an alternative sEM-based algorithm, in terms of previously established metrics. The ability of the sEM algorithm to escape stable fixed points of the EM algorithm, which trap deterministic motif discovery algorithms and the ability of MITSU to discover multiple motif occurrences within a single input sequence are also demonstrated. MITSU is validated using previously characterised Alphaproteobacterial motifs, before being applied to motif discovery in uncharacterised Alphaproteobacterial data. A number of novel results from this analysis are presented and motivate two extensions of MITSU: a strategy for the discovery of multiple different motifs within a single dataset and a higher order Markov background model. The effects of incorporating these extensions within MITSU are evaluated quantitatively using previously characterised prokaryotic TFBS motifs and demonstrated using Alphaproteobacterial motifs. Finally, an information-theoretic measure of motif palindromicity is presented and its advantages over existing approaches for discovering palindromic motifs discussed.
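The stochastic E-step that distinguishes sEM from deterministic EM can be shown on a toy example: instead of soft-assigning every possible motif start, one start per sequence is sampled from its posterior before the position weight matrix is re-estimated. This is only a schematic illustration of the sEM idea (toy sequences, fixed width, simple background model), not the MITSU implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    ALPH = "ACGT"
    W = 4  # assumed motif width

    def sample_positions(seqs, pwm, bg):
        """Stochastic E-step: draw one motif start per sequence from its posterior."""
        starts = []
        for s in seqs:
            scores = np.array([np.prod([pwm[j, ALPH.index(c)] / bg[ALPH.index(c)]
                                        for j, c in enumerate(s[i:i + W])])
                               for i in range(len(s) - W + 1)])
            starts.append(rng.choice(len(scores), p=scores / scores.sum()))
        return starts

    def update_pwm(seqs, starts):
        """M-step: re-estimate the PWM from the sampled occurrences (with pseudocounts)."""
        counts = np.ones((W, 4))
        for s, i in zip(seqs, starts):
            for j, c in enumerate(s[i:i + W]):
                counts[j, ALPH.index(c)] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    seqs = ["TTACGTAA", "GGACGTCC", "ACGTTTTT", "CCCACGTG"]  # toy input with a planted ACGT motif
    bg = np.full(4, 0.25)
    pwm = np.full((W, 4), 0.25)
    for _ in range(50):
        pwm = update_pwm(seqs, sample_positions(seqs, pwm, bg))
    print(np.round(pwm, 2))  # each row should favour A, C, G, T in turn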
Styles APA, Harvard, Vancouver, ISO, etc.
35

Nagalakshmi, Subramanya. « Study of FPGA implementation of entropy norm computation for IP data streams ». [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002477.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
36

Singh, Anima Ph D. Massachusetts Institute of Technology. « Risk stratification of cardiovascular patients using a novel classification tree induction algorithm with non-symmetric entropy measures ». Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/64601.

Texte intégral
Résumé :
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 95-100).
Risk stratification allows clinicians to choose treatments consistent with a patient's risk profile. Risk stratification models that integrate information from several risk attributes can aid clinical decision making. One of the technical challenges in developing risk stratification models from medical data is the class imbalance problem. Typically the number of patients that experience a serious medical event is a small subset of the entire population. The goal of my thesis work is to develop automated tools to build risk stratification models that can handle unbalanced datasets and improve risk stratification. We propose a novel classification tree induction algorithm that uses non-symmetric entropy measures to construct classification trees. We apply our methods to the application of identifying patients at high risk of cardiovascular mortality. We tested our approach on a set of 4200 patients who had recently suffered from a non-ST-elevation acute coronary syndrome. When compared to classification tree models generated using other measures proposed in the literature, the tree models constructed using non-symmetric entropy had higher recall and precision. Our models significantly outperformed models generated using logistic regression - a standard method of developing multivariate risk stratification models in the literature.
by Anima Singh.
S.M.
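The role of a non-symmetric entropy in the setting described in the abstract above can be illustrated with a small sketch: an impurity function whose maximum sits at the minority-class prior w rather than at 0.5, so that splits are judged relative to the unbalanced baseline. The particular formula below is an illustrative asymmetric impurity (it peaks at p = w); it is not claimed to be the exact measure proposed in the thesis.

    import numpy as np

    def asymmetric_impurity(p, w):
        """Impurity of a node with positive rate p; maximal at p = w instead of p = 0.5."""
        return p * (1 - p) / ((1 - 2 * w) * p + w * w)

    def split_score(y, mask, w):
        """Weighted impurity decrease of a candidate binary split of the labels y."""
        parent = asymmetric_impurity(y.mean(), w)
        left, right = y[mask], y[~mask]
        child = (len(left) * asymmetric_impurity(left.mean(), w)
                 + len(right) * asymmetric_impurity(right.mean(), w)) / len(y)
        return parent - child

    # Unbalanced toy labels (about 5% positives) and one candidate feature threshold.
    rng = np.random.default_rng(1)
    y = (rng.random(1000) < 0.05).astype(float)
    x = rng.random(1000) + y          # positives tend to have larger x
    print(split_score(y, x > 0.9, w=y.mean()))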
Styles APA, Harvard, Vancouver, ISO, etc.
37

GONCALVES, LEONARDO BARROSO. « RÉNYI ENTROPY AND CAUCHY-SCHWARTZ MUTUAL INFORMATION APPLIED TO THE MIFS-U VARIABLES SELECTION ALGORITHM : A COMPARATIVE STUDY ». PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12170@1.

Texte intégral
Résumé :
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
A presente dissertação aborda o algoritmo de Seleção de Variáveis Baseada em Informação Mútua sob Distribuição de Informação Uniforme (MIFS-U) e expõe um método alternativo para estimação da entropia e da informação mútua, medidas que constituem a base deste algoritmo de seleção. Este método tem, por fundamento, a informação mútua quadrática de Cauchy-Schwartz e a entropia quadrática de Rényi, combinada, no caso de variáveis contínuas, ao método de estimação de densidade Janela de Parzen. Foram realizados experimentos com dados reais de domínio público, sendo tal método comparado com outro, largamente utilizado, que adota a definição de entropia de Shannon e faz uso, no caso de variáveis contínuas, do estimador de densidade histograma. Os resultados mostram pequenas variações entre os dois métodos, mas que sugerem uma investigação futura através de um classificador, tal como Redes Neurais, para avaliar qualitativamente tais resultados à luz do objetivo final que consiste na maior exatidão de classificação.
This dissertation addresses the Mutual Information Feature Selection under Uniform Information Distribution (MIFS-U) algorithm and presents an alternative method for estimating entropy and mutual information, the measures on which this selection algorithm is based. The method rests on the Cauchy-Schwarz quadratic mutual information and the quadratic Rényi entropy, combined, in the case of continuous variables, with Parzen window density estimation. Experiments were carried out on real public-domain data, and the method was compared with a widely used alternative that adopts the Shannon definition of entropy and, for continuous variables, the histogram density estimator. The results show small differences between the two methods, suggesting a future investigation with a classifier, such as a neural network, to assess the results qualitatively in the light of the final objective, namely the highest classification accuracy.
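The combination described above has a convenient closed form: with a Gaussian Parzen window, the plug-in estimate of the quadratic Rényi entropy H2 = -log ∫ p(x)² dx reduces to a double sum of pairwise Gaussian kernels (the "information potential"). A minimal one-dimensional sketch, with a hand-picked kernel width sigma as the only assumption:

    import numpy as np

    def renyi_quadratic_entropy(x, sigma):
        """Parzen plug-in estimate of H2 = -log integral p(x)^2 dx for 1-D samples x."""
        diff = x[:, None] - x[None, :]
        # Convolution of two Gaussian windows: a kernel of variance 2*sigma^2 at each pairwise difference.
        kern = np.exp(-diff ** 2 / (4 * sigma ** 2)) / np.sqrt(4 * np.pi * sigma ** 2)
        return -np.log(kern.sum() / len(x) ** 2)   # minus the log of the information potential

    rng = np.random.default_rng(0)
    narrow = rng.normal(0.0, 0.5, 500)
    wide = rng.normal(0.0, 2.0, 500)
    print(renyi_quadratic_entropy(narrow, 0.3), renyi_quadratic_entropy(wide, 0.3))
    # the more spread-out sample yields the larger quadratic Rényi entropy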
Styles APA, Harvard, Vancouver, ISO, etc.
38

Gordan, Mimić. « Nelinearna dinamička analiza fizičkih procesa u žiivotnoj sredini ». Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2016. https://www.cris.uns.ac.rs/record.jsf?recordId=101258&source=NDLTD&language=en.

Texte intégral
Résumé :
Ispitivan je spregnut sistem jednačina za prognozu temperature na površini i u dubljem sloju zemljišta. Računati su Ljapunovljevi eksponenti, bifurkacioni dijagram, atraktor i analiziran je domen rešenja. Uvedene su nove informacione mere bazirane na Kolmogorovljevoj kompleksnosti za kvantifikaciju stepena nasumičnosti u vremenskim serijama. Nove mere su primenjene na razne serije dobijene merenjem fizičkih faktora životne sredine i pomoću klimatskih modela.
A coupled system of prognostic equations for the ground surface temperature and the deeper layer temperature was examined. Lyapunov exponents, bifurcation diagrams, the attractor and the domain of solutions were analyzed. Novel information measures based on Kolmogorov complexity, used for quantifying randomness in time series, were presented. The novel measures were tested on various time series obtained by measuring physical factors of the environment or produced as climate model outputs.
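A common computable stand-in for Kolmogorov complexity in this kind of time-series work is a Lempel-Ziv phrase count of the binarized series. The sketch below uses a simple LZ78-style parse and a median threshold; both choices are assumptions made for illustration rather than the exact measures introduced in the thesis.

    import numpy as np

    def lz_complexity(series):
        """Normalized LZ78-style phrase count of a series binarized at its median."""
        med = np.median(series)
        bits = "".join("1" if v > med else "0" for v in series)
        phrases, current, count = set(), "", 0
        for b in bits:
            current += b
            if current not in phrases:
                phrases.add(current)
                count += 1
                current = ""
        n = len(bits)
        return count * np.log2(n) / n   # near 1 for random series, lower for regular ones

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    print(lz_complexity(np.sin(0.1 * t)))    # regular signal: low complexity
    print(lz_complexity(rng.random(2000)))   # random signal: close to 1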
Styles APA, Harvard, Vancouver, ISO, etc.
39

Hauman, Charlotte. « The application of the cross-entropy method for multi-objective optimisation to combinatorial problems ». Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71636.

Texte intégral
Résumé :
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Society is continually in search of ways to optimise various objectives. When faced with multiple and conflicting objectives, humans are in need of solution techniques to enable optimisation. This research is based on a recent venture in the field of multi-objective optimisation, the use of the cross-entropy method to solve multi-objective problems. The document provides a brief overview of the two fields, multi-objective optimisation and the cross-entropy method, touching on literature, basic concepts and applications or techniques. The application of the method to two problems is then investigated. The first application is to the multi-objective vehicle routing problem with soft time windows, a widely studied problem with many real-world applications. The problem is modelled mathematically with a transition probability matrix that is updated according to cross-entropy principles before converging to an approximation solution set. The highly constrained problem is successfully modelled and the optimisation algorithm is applied to a set of benchmark problems. It was found that the cross-entropy method for multi-objective optimisation is a valid technique in providing feasible and non-dominated solutions. The second application is to a real world case study in blood management done at the Western Province Blood Transfusion Service. The conceptual model is derived from interviews with relevant stakeholders before discrete event simulation is used to model the system. The cross-entropy method is used to optimise the inventory policy of the system by simultaneously maximising the combined service level of the system and minimising the total distance travelled. By integrating the optimisation and simulation model, the study shows that the inventory policy of the service can improve significantly, and the use of the cross-entropy algorithm adequately progresses to a front of solutions. The research proves the remarkable width and simplicity of possible applications of the cross-entropy algorithm for multi-objective optimisation, whilst contributing to literature on the vehicle routing problem and blood management. Results on benchmark problems for the vehicle routing problem with soft time windows are provided and an improved inventory policy is suggested to the Western Province Blood Transfusion Service.
AFRIKAANSE OPSOMMING: Die mensdom is voortdurend op soek na maniere om verskeie doelwitte te optimeer. Wanneer die mens konfrontreer word met meervoudige en botsende doelwitte, is oplossingsmetodes nodig om optimering te bewerkstellig. Hierdie navorsing is baseer op 'n nuwe wending in die veld van multi-doelwit optimering, naamlik die gebruik van die kruisentropie metode om multi-doelwit probleme op te los. Die dokument verskaf 'n breë oorsig oor die twee velde – multi-doelwit optimering en die kruis-entropie-metode – deur kortliks te kyk na die beskikbare literatuur, basiese beginsels, toepassingsareas en metodes. Die toepassing van die metode op twee onafhanklike probleme word dan ondersoek. Die eerste toepassing is dié van die multi-doelwit voertuigroeteringsprobleem met plooibare tydvensters. Die probleem word eers wiskundig modelleer met 'n oorgangswaarskynlikheidsmatriks. Die matriks word dan deur kruis-entropie beginsels opdateer voor dit konvergeer na 'n benaderingsfront van oplossings. Die oplossingsruimte is onderwerp aan heelwat beperkings, maar die probleem is suksesvol modelleer en die optimeringsalgoritme is gevolglik toegepas op 'n stel verwysingsprobleme. Die navorsing het gevind dat die kruis-entropie metode vir multi-doelwit optimering 'n geldige metode is om 'n uitvoerbare front van oplossings te beraam. Die tweede toepassing is op 'n gevallestudie van die bestuur van bloed binne die konteks van die Westelike Provinsie Bloedoortappingsdiens. Na aanleiding van onderhoude met die relevante belanghebbers is 'n konsepmodel geskep voor 'n simulasiemodel van die stelsel gebou is. Die kruis-entropie metode is gebruik om die voorraadbeleid van die stelsel te optimeer deur 'n gesamentlike diensvlak van die stelsel te maksimeer en terselfdetyd die totale reis-afstand te minimeer. Deur die optimerings- en simulasiemodel te integreer, wys die studie dat die voorraadbeleid van die diens aansienlik kan verbeter, en dat die kruis-entropie algoritme in staat is om na 'n front van oplossings te beweeg. Die navorsing bewys die merkwaardige wydte en eenvoud van moontlike toepassings van die kruis-entropie algoritme vir multidoelwit optimering, terwyl dit 'n bydrae lewer tot die afsonderlike velde van voertuigroetering en die bestuur van bloed. Uitslae vir die verwysingsprobleme van die voertuigroeteringsprobleem met plooibare tydvensters word verskaf en 'n verbeterde voorraadbeleid word aan die Westelike Provinsie Bloedoortappingsdiens voorgestel.
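For readers unfamiliar with the method, a single-objective, continuous toy version of the cross-entropy loop is sketched below (the Gaussian sampling distribution, elite fraction and iteration counts are arbitrary choices). The multi-objective variant used in the thesis ranks candidates by Pareto dominance rather than by a single score, but the sample/select/refit structure is the same.

    import numpy as np

    def cross_entropy_minimise(objective, dim, iters=60, pop=200, elite_frac=0.1, seed=0):
        """Generic cross-entropy optimisation loop with a Gaussian sampling distribution."""
        rng = np.random.default_rng(seed)
        mean, std = np.zeros(dim), np.full(dim, 5.0)
        n_elite = int(pop * elite_frac)
        for _ in range(iters):
            samples = rng.normal(mean, std, size=(pop, dim))
            scores = np.apply_along_axis(objective, 1, samples)
            elite = samples[np.argsort(scores)[:n_elite]]             # best candidates of this batch
            mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit the sampling distribution
        return mean

    # Toy objective: a shifted sphere function with minimum at (3, 3, 3, 3).
    print(cross_entropy_minimise(lambda x: np.sum((x - 3.0) ** 2), dim=4))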
Styles APA, Harvard, Vancouver, ISO, etc.
40

Strizzi, Jon D. (Jon David). « An improved algorithm for satellite orbit decay and re-entry prediction ». Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/47332.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
41

Spratlin, Kenneth Milton. « An adaptive numeric predictor-corrector guidance algorithm for atmospheric entry vehicles ». Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/31006.

Texte intégral
Résumé :
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 1987.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND AERONAUTICS.
Bibliography: p. 211-213.
by Kenneth Milton Spratlin.
M.S.
Styles APA, Harvard, Vancouver, ISO, etc.
42

De, bortoli Valentin. « Statistiques non locales dans les images : modélisation, estimation et échantillonnage ». Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN020.

Texte intégral
Résumé :
Dans cette thèse, on étudie d'un point de vue probabiliste deux statistiques non locales dans les images : la redondance spatiale et les moments de certaines couches de réseaux de neurones convolutionnels. Plus particulièrement, on s'intéresse à l'estimation et à la détection de la redondance spatiale dans les images naturelles et à l'échantillonnage de modèles d'images sous contraintes de moments de sorties de réseaux de neurones. On commence par proposer une définition de la redondance spatiale dans les images naturelles. Celle-ci repose sur une analyse gestaltiste de la notion de similarité ainsi que sur un cadre statistique pour le test d'hypothèses via la méthode a contrario. On développe un algorithme pour identifier cette redondance dans les images naturelles. Celui-ci permet d'identifier les patchs similaires dans une image. On utilise cette information pour proposer de nouveaux algorithmes de traitement d'image (débruitage, analyse de périodicité). Le reste de cette thèse est consacré à la modélisation et à l'échantillonnage d'images sous contraintes non locales. Les modèles d'images considérés sont obtenus via le principe de maximum d'entropie. On peut alors déterminer la distribution cible sur les images via une procédure de minimisation. On aborde ce problème en utilisant des outils issus de l'optimisation stochastique. Plus précisément, on propose et analyse un nouvel algorithme pour l'optimisation stochastique : l'algorithme SOUL (Stochastic Optimization with Unadjusted Langevin). Dans cette méthodologie, le gradient est estimé par une méthode de Monte Carlo par chaîne de Markov (ici l'algorithme de Langevin non ajusté). Les performances de cet algorithme reposent sur les propriétés de convergence ergodiques des noyaux de Markov associés aux chaînes de Markov utilisées. On s'intéresse donc aux propriétés de convergence géométrique de certaines classes de modèles fonctionnels autorégressifs. On caractérise précisément la dépendance des taux de convergence de ces modèles vis-à-vis des constantes du modèle (dimension, régularité, convexité...). Enfin, on applique l'algorithme SOUL au problème de synthèse de texture par maximum d'entropie. On étudie les liens qu'entretient cette approche avec d'autres modèles de maximisation d'entropie (modèles macrocanoniques, modèles microcanoniques). En utilisant des statistiques de moments de sorties de réseaux de neurones convolutionnels, on obtient des résultats visuels comparables à ceux de l'état de l'art.
In this thesis we study two non-local statistics in images from a probabilistic point of view: spatial redundancy and convolutional neural network features. More precisely, we are interested in the estimation and detection of spatial redundancy in natural images. We also aim at sampling images under neural network constraints. We start by giving a definition of spatial redundancy in natural images. This definition relies on two concepts: a Gestalt analysis of the notion of similarity in images, and a hypothesis testing framework (the a contrario method). We propose an algorithm to identify this redundancy in natural images. Using this methodology we can detect similar patches in images and, with this information, we propose new algorithms for diverse image processing tasks (denoising, periodicity analysis). The rest of this thesis deals with sampling images under non-local constraints. The image models we consider are obtained via the maximum entropy principle. The target distribution is then obtained by minimizing an energy functional. We use tools from stochastic optimization to tackle this problem. More precisely, we propose and analyze a new algorithm: the SOUL (Stochastic Optimization with Unadjusted Langevin) algorithm. In this methodology, the gradient is estimated using Markov chain Monte Carlo methods; in the case of the SOUL algorithm we use an unadjusted Langevin algorithm. The efficiency of the SOUL algorithm is related to the ergodic properties of the underlying Markov chains. Therefore we are interested in the convergence properties of certain classes of functional autoregressive models. We characterize precisely the dependency of the convergence rates of these models with respect to their parameters (dimension, smoothness, convexity). Finally, we apply the SOUL algorithm to the problem of exemplar-based texture synthesis with a maximum entropy approach. We draw links between our model and other entropy maximization procedures (macrocanonical models, microcanonical models). Using convolutional neural network constraints we obtain visual results comparable to the state of the art.
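A toy version of the SOUL-type scheme described above can make the structure explicit: the outer gradient step needs an expectation under the current model, which is estimated with a few unadjusted Langevin (ULA) iterations warm-started from the previous chain state. The one-dimensional model, step sizes and matched statistic below are assumptions chosen only so that the example runs.

    import numpy as np

    rng = np.random.default_rng(0)

    # Maximum-entropy style model p_theta(x) proportional to exp(theta*x - x**4):
    # choose theta so that the model mean E_theta[x] matches an observed statistic.
    target_mean = 0.5
    grad_U = lambda x, theta: 4 * x ** 3 - theta     # gradient of U(x) = x**4 - theta*x

    def ula_steps(x, theta, n_steps=20, gamma=0.01):
        """Unadjusted Langevin algorithm: x <- x - gamma*grad_U(x) + sqrt(2*gamma)*noise."""
        for _ in range(n_steps):
            x = x - gamma * grad_U(x, theta) + np.sqrt(2 * gamma) * rng.normal(size=x.shape)
        return x

    theta, chain, lr = 0.0, rng.normal(size=500), 0.5
    for k in range(200):
        chain = ula_steps(chain, theta)               # warm-started Markov chain gives the MCMC estimate
        grad_estimate = target_mean - chain.mean()    # stochastic gradient of the log-likelihood in theta
        theta += lr / (1 + 0.05 * k) * grad_estimate  # decreasing step size, as in SOUL-type schemes
    print(theta, chain.mean())                        # the chain mean should end up near target_mean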
Styles APA, Harvard, Vancouver, ISO, etc.
43

Marcelo, Monte da Silva João. « Um novo algoritmo baseado em entropia para filtragem da interferência frente-verso ». Universidade Federal de Pernambuco, 2005. https://repositorio.ufpe.br/handle/123456789/5641.

Texte intégral
Résumé :
Digitising documents originally produced on paper is the most efficient means currently available for preserving their content for future generations and for giving access to the information through computer networks. The nature of the document dictates the digitisation and storage techniques to be used. In general, with future uses in mind, documents are digitised in true colour and at high resolution (today reaching more than 1,000 dots per inch). For network access, such documents are usually made available in a monochromatic version at 200 dpi, compressed in a convenient format, typically TIFF (G4). This reduction in the number of colours, known as binarisation in the case of conversion to monochrome, is difficult to perform automatically when the document was written or printed on both sides of translucent paper, a situation known as back-to-front interference. The binarisation algorithms found in commercial tools produce images in which the ink from the front and from the verso overlap, making the resulting image unreadable. Although the problem was posed more than a decade ago, better solutions are still being sought. For historical documents the problem is even harder, since the darkening caused by the ageing of the paper is an additional complicating factor. This dissertation proposes a new algorithm, based on the entropy of the image histogram, for binarising images of historical documents affected by back-to-front interference. The proposed algorithm is compared with its predecessors described in the literature and produces images of better quality.
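For orientation, the sketch below shows the general idea of histogram-entropy thresholding (in the spirit of Kapur's classical criterion): the threshold is chosen to maximize the sum of the entropies of the two resulting gray-level classes. It is only an illustration of this family of techniques; the dissertation's algorithm additionally has to separate the show-through class of the verso.

    import numpy as np

    def entropy_threshold(gray):
        """Pick the gray level that maximizes the sum of the two class entropies."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        best_t, best_h = 0, -np.inf
        for t in range(1, 255):
            p0, p1 = p[:t].sum(), p[t:].sum()
            if p0 == 0 or p1 == 0:
                continue
            q0, q1 = p[:t] / p0, p[t:] / p1
            h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
            if h > best_h:
                best_t, best_h = t, h
        return best_t

    # Synthetic "document" levels: dark ink (~60), verso show-through (~150), paper (~220).
    rng = np.random.default_rng(0)
    levels = rng.choice([60, 150, 220], p=[0.10, 0.15, 0.75], size=(64, 64))
    img = np.clip(levels + rng.normal(0, 10, (64, 64)), 0, 255).astype(np.uint8)
    t = entropy_threshold(img)
    print(t, (img < t).mean())   # chosen threshold and fraction of pixels mapped to ink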
Styles APA, Harvard, Vancouver, ISO, etc.
44

Perianhes, Roberto Vitoriano. « Utilizando algoritmo de cross-entropy para a modelagem de imagens de núcleos ativos de galáxias obtidas com o VLBA ». Universidade Presbiteriana Mackenzie, 2017. http://tede.mackenzie.br/jspui/handle/tede/3466.

Texte intégral
Résumé :
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Images obtained with interferometers such as the VLBA (Very Long Baseline Array) and VLBI (Very Long Baseline Interferometry) networks remain the most direct evidence of the relativistic jets and outbursts associated with supermassive black holes in active galactic nuclei (AGN). Studying these images is critical for exploiting the information in such observations, since they are one of the main ingredients of synthesis codes for extragalactic objects. This thesis uses both synthetic and observed images. VLBA images are 2-dimensional observations generated by complex 3-dimensional astrophysical processes. One of the main difficulties for models is therefore the definition of the parameters of the functions and equations that reproduce, macroscopically and dynamically, the physical formation events of these objects, so that the images can be studied reliably and on a large scale. One of the goals of this thesis is to describe the observations in a generic way, assuming that these objects are formed by similar astrophysical processes, given the values of certain parameters of the formation events. Defining parameters that reproduce the observations is key to generalising the formation of extragalactic sources and jets. Most observational papers focus on few, or even single, objects. The purpose of this project is to implement an innovative, more robust and efficient method for modelling and reproducing many objects, such as those of the MOJAVE Project, which monitors several quasars simultaneously and thus offers a diverse library for building models (quasars and blazars: OVV and BL Lacertae objects). In this thesis a dynamic way to study these objects was implemented. The thesis presents the adaptation of the Cross-Entropy algorithm to calibrate the parameters of astrophysical events that synthesise the real events seen in the VLBA observations. The code structure was developed so that it can be extended to any image, provided the images are given as intensities (Jy/beam) distributed over Right Ascension (RA) and Declination (DEC) maps. The code is validated by checking self-convergence on synthetic models with the same structure, i.e., realistic simulations of component ejections, in milliarcseconds, similar to the 15.3 GHz observations of the MOJAVE project. Using the parameters semi-major axis, position angle, eccentricity and intensity, applied individually to each observed component, it is possible to recover the structure of the sources and the velocities of the jets, as well as the conversion to flux density needed to obtain light curves. From the light curve, the brightness temperature, the Doppler factor, the Lorentz factor and the viewing angle of the extragalactic objects can be estimated with precision. The objects OJ 287, 4C +15.05, 3C 279 and 4C +29.45 are studied in this thesis because their different and complex morphologies allow a more complete study.
As imagens obtidas por interferômetros, tais como VLBA (Very Long Baseline Array) e VLBI (Very Long Baseline Interferometry), são evidências diretas de jatos relativísticos associados a buracos negros supermassivos em núcleos ativos de galáxias (AGN). O estudo dessas imagens é fundamental para o aproveitamento das informações dessas observações, já que é um dos principais ingredientes para os códigos de síntese1 de objetos extragalácticos. Utiliza-se nesta tese, tanto imagens sintéticas quanto observadas. As imagens de VLBA mostram observações em 2 dimensões de processos astrofísicos complexos ocorrendo em 3 dimensões. Nesse sentido, uma das principais dificuldades dos modelos é a definição dos parâmetros das funções e equações que reproduzam de forma macroscópica e dinâmica os eventos físicos de formação desses objetos, para que as imagens sejam estudadas de forma confiável e em grande escala. Um dos objetivos desta tese é elaborar uma forma genérica2 de observações, supondo que a formação desses objetos é originada por processos astrofísicos similares, com a informação de determinados parâmetros da formação dos eventos. A definição de parâmetros que reproduzam as observações são elementos chave para a generalização da formação de componentes em jatos extragalácticos. Grande parte dos artigos de observação são voltados para poucos ou únicos objetos. Foi realizada nesta tese a implementação um método inovador, robusto e eficiente para a modelagem e reprodução de vários objetos, como por exemplo nas fontes do Projeto MOJAVE, que monitora diversos quasares simultaneamente, oferecendo uma biblioteca diversificada para a criação de modelos (Quasares3 e Blazares4: OVV5 e BL Lacertae6). Com essas fontes implementou-se uma forma dinâmica para o estudo desses objetos. Apresenta-se, nesta tese, a adaptação do algoritmo de Cross-Entropy para a calibração dos parâmetros dos eventos astrofísicos que sintetizem os eventos reais das observações em VLBA. O desenvolvimento da estrutura de adaptação do código incluiu a possibilidade de extensão para qualquer imagem, supondo que as mesmas estão dispostas em intensidades (Jy/beam) distribuídas em mapas de Ascensão Reta (AR) e Declinação (DEC). A validação do código foi feita buscando a auto convergência para modelos sintéticos com as mesmas estruturas, ou seja, de simulações realísticas de ejeção de componentes, em milissegundos de arco, similares às observações do projeto MOJAVE, em 15,3 GHz. Com a utilização dos parâmetros semieixo maior, ângulo de posição, excentricidade e intensidade aplicados individualmente a cada componente observada, é possível calcular a estrutura das fontes, as velocidades dos jatos, bem como a conversão em densidade de fluxo para obtenção de curvas de luz. Através da curva de luz estimou-se com precisão a temperatura de brilhância, o fator Doppler, o fator de Lorentz e o ângulo de observação dos objetos extragalácticos. Os objetos OJ 287, 4C +15.05, 3C 279 e 4C +29.45 são estudados nesta tese pois têm morfologias diferentes e complexas para um estudo mais completo.
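A much reduced sketch of the parameter-calibration idea (assuming NumPy only): a cross-entropy search over the parameters of a single elliptical Gaussian component so that the synthetic map matches an "observed" map in least squares. Real VLBA maps, beam convolution and multi-component fits are all omitted, and the parametrisation below is illustrative rather than the one used in the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:64, 0:64].astype(float)

    def component(params):
        """Synthetic map of one elliptical Gaussian component: x0, y0, width, elongation, flux."""
        x0, y0, w, e, flux = params
        return flux * np.exp(-(((xx - x0) / w) ** 2 + ((yy - y0) / (w * e)) ** 2))

    observed = component([40.0, 25.0, 3.0, 2.0, 1.0])     # stands in for an observed map
    mean = np.array([32.0, 32.0, 5.0, 1.0, 0.5])
    std = np.array([10.0, 10.0, 3.0, 1.0, 0.5])
    for _ in range(40):
        pop = rng.normal(mean, std, size=(200, 5))
        pop[:, 2:] = np.abs(pop[:, 2:]) + 1e-3            # keep width, elongation and flux positive
        residuals = [np.sum((component(p) - observed) ** 2) for p in pop]
        elite = pop[np.argsort(residuals)[:20]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    print(np.round(mean, 2))   # should approach the parameters used to build `observed`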
Styles APA, Harvard, Vancouver, ISO, etc.
45

Morales, Pérez Cristóbal Sebastián. « Algoritmo de detección de eventos epilépticos basado en medidas de energía y entropía enfocado en pacientes críticos ». Tesis, Universidad de Chile, 2017. http://repositorio.uchile.cl/handle/2250/147438.

Texte intégral
Résumé :
Ingeniero Civil Eléctrico
The objective of this work is to implement an epileptic seizure detection algorithm that runs in real time. The work was carried out as a cooperation between the Biomedical Engineering Laboratory of the DIE at the Universidad de Chile and the Department of Neurology and the Pediatric Critical Patient Unit of the Faculty of Medicine of the Pontificia Universidad Católica de Chile. The study builds on the thesis by Eliseo Araya [1], which uses energy measures to detect epileptic seizures, and adds new signal-analysis tools, expert criteria and measures that characterise epileptic seizures. The database consists of 15 recordings with a combined duration of 219.3 hours. The recordings contain 469 epileptic seizures, of which 277 last longer than 10 s and 192 last less than 10 s. Eleven recordings, with 232 marked seizures, are used to train the algorithm and 4 recordings, with 45 marked seizures, to test it. Only one recording contains seizures shorter than 10 s, and it is used for training. The algorithm consists of 5 modules: 1) feature extraction; 2) feature filtering; 3) artifact removal; 4) decision making; 5) combination of algorithms. The first module extracts the features of the recording used by the algorithm, the second applies filters to the extracted features, and the third cleans the features of noise and artifacts. The fourth module is divided into two algorithms that work in parallel and use Gotman's method: one detects epileptic seizures longer than 10 s and the other detects seizures shorter than 10 s. The fifth module combines the algorithms of module 4 to produce a single output. On the test set, 41 seizures are detected and 36 false detections are generated, which translates into a true-positive rate of 91.1% and a false-positive rate of 0.6 per hour. For seizures shorter than 10 s there are no marks in the test set, but 96 false positives are produced, a false-positive rate of 1.61 per hour. In conclusion, the work improves on Araya's: new signal-analysis algorithms and methods for characterising epileptic seizures are implemented, the number of recordings in the database and the number of marked seizures are increased, and an algorithm with better true-positive and false-positive rates is obtained.
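A minimal sketch of the kind of window features such a detector relies on (signal energy and spectral entropy over short EEG windows), with fixed illustrative decision rules in place of the trained, Gotman-style decision logic of the actual algorithm:

    import numpy as np

    def window_features(x, fs, win_s=2.0):
        """Energy and normalized spectral entropy per non-overlapping window of one EEG channel."""
        n = int(win_s * fs)
        feats = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            energy = np.mean(w ** 2)
            psd = np.abs(np.fft.rfft(w - w.mean())) ** 2
            p = psd / psd.sum()
            spec_entropy = -(p[p > 0] * np.log2(p[p > 0])).sum() / np.log2(len(p))
            feats.append((energy, spec_entropy))
        return np.array(feats)

    fs = 256
    t = np.arange(0, 60 * fs) / fs
    rng = np.random.default_rng(0)
    eeg = rng.normal(0, 1, t.size)
    eeg[20 * fs:30 * fs] += 5 * np.sin(2 * np.pi * 4 * t[20 * fs:30 * fs])  # rhythmic "seizure" burst
    f = window_features(eeg, fs)
    flags = (f[:, 0] > 3 * np.median(f[:, 0])) & (f[:, 1] < 0.9)            # illustrative fixed rules
    print(np.where(flags)[0])   # the windows covering seconds 20-30 should be flagged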
Styles APA, Harvard, Vancouver, ISO, etc.
46

Abdalla, Alvaro Martins. « OMPP para projeto conceitual de aeronaves, baseado em heurísticas evolucionárias e de tomadas de decisões ». Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18148/tde-13012011-113940/.

Texte intégral
Résumé :
Este trabalho consiste no desenvolvimento de uma metodologia de otimização multidisciplinar de projeto conceitual de aeronaves. O conceito de aeronave otimizada tem como base o estudo evolutivo de características das categorias imediatas àquela que se propõe. Como estudo de caso, foi otimizada uma aeronave de treinamento militar que faça a correta transição entre as fases de treinamento básico e avançado. Para o estabelecimento dos parâmetros conceituais esse trabalho integra técnicas de entropia estatística, desdobramento da função de qualidade (QFD), aritmética fuzzy e algoritmo genético (GA) à aplicação de otimização multidisciplinar ponderada de projeto (OMPP) como metodologia de projeto conceitual de aeronaves. Essa metodologia reduz o tempo e o custo de projeto quando comparada com as técnicas tradicionais existentes.
This work concerns the development of a methodology for multidisciplinary optimization of aircraft conceptual design. The conceptual design optimization was based on the evolutionary simulation of aircraft characteristics outlined by a QFD/fuzzy-arithmetic approach, in which the candidates on the Pareto front are selected from categories close to the proposed target. As a test case, a military trainer aircraft was designed to perform the proper transition from basic to advanced training. The methodology for conceptual aircraft design optimization implemented in this work consists of the integration of techniques such as statistical entropy, quality function deployment (QFD), fuzzy arithmetic and a genetic algorithm (GA) into weighted multidisciplinary design optimization (WMDO). This methodology proved to be objective and well balanced when compared with traditional design techniques.
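One common way statistical entropy enters this kind of multi-criteria design study is as an objective weighting of the criteria in a decision matrix: criteria whose scores vary little across candidate concepts carry little information and receive small weights. The sketch below illustrates that entropy-weighting idea under that assumption; it is not claimed to be the exact formulation combined with QFD and fuzzy arithmetic in the thesis.

    import numpy as np

    def entropy_weights(decision_matrix):
        """Shannon-entropy weights of the criteria (columns) of an alternatives-by-criteria matrix."""
        p = decision_matrix / decision_matrix.sum(axis=0)
        p_safe = np.where(p > 0, p, 1.0)                   # avoid log(0); zero entries contribute nothing
        k = 1.0 / np.log(decision_matrix.shape[0])
        entropy = -k * np.sum(p * np.log(p_safe), axis=0)
        divergence = 1.0 - entropy
        return divergence / divergence.sum()

    # Rows: candidate trainer-aircraft concepts; columns: normalised scores on four criteria
    # (for instance cost, range, climb rate, handling); the numbers are illustrative only.
    scores = np.array([[0.8, 0.6, 0.7, 0.5],
                       [0.4, 0.6, 0.9, 0.5],
                       [0.6, 0.6, 0.2, 0.5]])
    print(entropy_weights(scores))   # the criterion that discriminates most gets the largest weight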
Styles APA, Harvard, Vancouver, ISO, etc.
47

Toledo, Peña Patricio Antonio. « Algoritmo de detección de ondas P invariante de escala : Caso de réplicas del sismo del 11 de marzo de 2010 ». Tesis, Universidad de Chile, 2014. http://repositorio.uchile.cl/handle/2250/131361.

Texte intégral
Résumé :
Doctor en Ciencias, Mención Geología
Under the pressure of the Maule megathrust earthquake of February 2010, Chilean research centres had to face an additional emergency: processing the terabits of data recorded after the great event. This mass of information comes mainly from the field intervention campaigns. The underlying reason for the amount of data obtained, however, lies in the scaling laws that govern the dynamics of the crust, since these dictate what happens before and after each event. Moreover, these laws impose rather strict bounds on the volume of data that must be recorded to identify the processes themselves. Although a complete theory of seismicity generation is currently unknown, its main features can be understood with a variety of techniques, two of which are similarity and direct observation. These methods allow some of the symmetries present in crustal phenomena to be identified. These symmetries are scale invariances, that is, the possibility of expressing the observables of interest as power laws of space, time and the size of what is studied. This invariance is the reason behind the fractal geometry of faults, the Gutenberg-Richter law for the sizes of seismic events, the Omori law for the times between aftershocks, and others. These elements reveal a combinatorial feature of the seismicity generation process, which makes it possible to introduce the Shannon entropy as one of the relevant observables, something that has not yet been exploited exhaustively by geoscientists. Entropy is linked to the idea of information that can be known and transmitted. Interpreting the seismic source as a stochastic one whose signals travel through a noisy medium (the crust) and are finally recorded at receivers (seismometers) allows an analogy with a telegraph to be drawn, and with it the information coming from earthquakes to be quantified. The notion of entropy is based on probabilities identified with the help of the phenomenon known as the first-digit anomaly, reported to be present in the seismic source, a fact duly established in the first of the attached publications by means of observations and cellular automaton simulations. This anomaly is shown to be associated with a family of dissipative systems, of which the crust is one. With the help of information theory, basic geometric criteria were found that allowed the proposed seismicity recognition algorithms to be developed; these were tested empirically on a series of aftershocks of the Pichilemu earthquake of 11 March 2010, presented in detail in the second attached paper. The algorithms are shown to be competitive with, and complementary to, those already in common use, which increases detection capability and opens possibilities for studying the early-warning problem. Finally, the possibility of interpreting the energy dissipation process through a simple representation linking information, entropy and geometry is discussed.
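The first-digit statistic behind the proposed picker can be sketched compactly: the first-digit histogram of the absolute amplitudes in a sliding window is compared with Benford's law, and a change in the misfit marks candidate arrivals. The window length, step and the heavy-tailed toy "event" below are illustrative assumptions, not the thesis's settings.

    import numpy as np

    BENFORD = np.log10(1 + 1 / np.arange(1, 10))

    def first_digit_misfit(window):
        """L1 misfit between the first-digit histogram of |window| and Benford's law."""
        amplitudes = np.abs(window[window != 0])
        digits = (amplitudes / 10.0 ** np.floor(np.log10(amplitudes))).astype(int)
        freq = np.bincount(digits, minlength=10)[1:10].astype(float)
        return np.abs(freq / freq.sum() - BENFORD).sum()

    # Toy trace: background noise, then an "event" whose amplitudes span several decades.
    rng = np.random.default_rng(0)
    noise = rng.normal(0, 1, 4000)
    event = rng.normal(0, 1, 2000) * np.exp(rng.uniform(0, 4, 2000))   # heavy-tailed, Benford-like
    trace = np.concatenate([noise, event, noise])
    misfits = [first_digit_misfit(trace[i:i + 500]) for i in range(0, len(trace) - 500, 250)]
    print(np.round(misfits, 2))   # the misfit drops over the event windows, flagging the arrival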
Styles APA, Harvard, Vancouver, ISO, etc.
48

Chakik, Fadi El. « Maximum d'entropie et réseaux de neurones pour la classification ». Grenoble INPG, 1998. http://www.theses.fr/1998INPG0091.

Texte intégral
Résumé :
This thesis falls within the field of classification. It focuses in particular on the study of methods based on the maximum entropy principle (maxent). These approaches have been used in the Leibniz laboratory, for example, to teach behaviours to an autonomous robot. The goal of the work was to compare this approach with those based on neural networks. A theoretical analysis of classification shows that there is an equivalence between maxent and Hebbian learning in neural networks: learning the weights of the latter is equivalent to learning the mean values of certain maxent observables. Including new observables makes it possible to learn to learn, with better-performing learning rules in the neural-network framework. Maxent was applied to two specific problems: the classification of Breiman's waveforms (a standard machine learning benchmark) and the recognition of textures in SPOT images. These applications showed that maxent achieves performance comparable to, or better than, neural methods. The robustness of the maxent code developed during this thesis is being studied in the TIMA laboratory. It is planned to upload it to an American satellite (the MPTB project) in order to evaluate it in the presence of ionising radiation, with a view to performing image processing in embedded systems.
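The maxent classifier under expectation constraints takes the familiar exponential (Gibbs) form, and fitting it by gradient ascent on the log-likelihood amounts to matching empirical and model feature expectations, which is where the analogy with Hebbian weight learning appears. A minimal two-class sketch on toy data (the data, learning rate and iteration count are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class data with a constant bias feature appended.
    x = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    x = np.hstack([x, np.ones((200, 1))])
    y = np.repeat([0, 1], 100)

    # Maxent / multinomial logistic model: p(c|x) proportional to exp(w_c . x).
    w = np.zeros((2, 3))
    for _ in range(500):
        scores = x @ w.T
        p = np.exp(scores - scores.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        onehot = np.eye(2)[y]
        # Gradient = empirical feature expectations minus model feature expectations.
        w += 0.5 * (onehot - p).T @ x / len(x)
    print((p.argmax(axis=1) == y).mean())   # high training accuracy on this toy problem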
Styles APA, Harvard, Vancouver, ISO, etc.
49

Kesler, Joseph Michael. « Automated Alignment of Aircraft Wing Radiography Images Using a Modified Rotation, Scale, and Translation Invariant Phase Correlation Algorithm Employing Local Entropy for Peak Detection ». University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1218604857.

Texte intégral
Styles APA, Harvard, Vancouver, ISO, etc.
50

Ben, Atia Okba. « Plateforme de gestion collaborative sécurisée appliquée aux Réseaux IoT ». Electronic Thesis or Diss., Mulhouse, 2024. http://www.theses.fr/2024MULH7114.

Texte intégral
Résumé :
L'apprentissage fédéré (FL) permet aux clients de former collaborativement un modèle tout en préservant la confidentialité des données. Malgré ses avantages, le FL est vulnérable aux attaques de poisoning. Cette thèse aborde la détection des modèles malveillants dans le système FL pour les réseaux IoT. Nous fournissons une revue de littérature des techniques récentes de détection et proposons un cadre d'adaptation et de comportement sécurisé (FLSecLAB) pour renforcer le système FL contre les attaques. FLSecLAB offre une personnalisation pour évaluer les défenses à travers divers jeux de données et métriques. Nous proposons une détection améliorée de modèles malveillants avec sélection dynamique d'un seuil optimal, ciblant les attaques de changement d'étiquettes. Nous présentons une solution évolutive utilisant l'entropie et un seuil adaptatif pour détecter les clients malveillants. Nous explorons des scénarios complexes et proposons une détection novatrice contre les attaques simultanées de changement d'étiquettes et de porte dérobée. De plus, nous proposons un modèle adaptatif pour détecter les clients malveillants, abordant les défis des données Non-IID. Nous évaluons nos approches à travers divers scénarios de simulation avec différents jeux de données, et les comparons aux approches existantes. Les résultats démontrent l'efficacité de nos approches pour améliorer diverses métriques de performance de détection malveillante
Federated Learning (FL) allows clients to collaboratively train a model while preserving data privacy. Despite its benefits, FL is vulnerable to poisoning attacks. This thesis addresses malicious model detection in FL systems for IoT networks. We provide a literature review of recent detection techniques and propose a Secure Layered Adaptation and Behavior framework (FLSecLAB) to fortify the FL system against attacks. FLSecLAB offers customization for evaluating defenses across datasets and metrics. We propose enhanced malicious model detection with dynamic optimal threshold selection, targeting Label-flipping attacks. We present a scalable solution using entropy and an adaptive threshold to detect malicious clients. We explore complex scenarios and propose novel detection against simultaneous Label-flipping and Backdoor attacks. Additionally, we propose an adaptive model for detecting malicious clients, addressing Non-IID data challenges. We evaluate our approaches through various simulation scenarios with different datasets, comparing them to existing approaches. Results demonstrate the effectiveness of our approaches in enhancing various malicious detection performance metrics
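As a loose illustration of the general idea of entropy-based screening with an adaptive threshold (not the FLSecLAB procedure itself, whose statistics and thresholding are more elaborate), a server-side sketch might compute the entropy of each client's flattened update and flag clients that deviate from the cohort median by more than a few median absolute deviations; every name and constant below is an assumption.

    import numpy as np

    def update_entropy(update, bins=50):
        """Shannon entropy of the histogram of a client's flattened model update."""
        hist, _ = np.histogram(update, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return -(p * np.log2(p)).sum()

    def flag_clients(updates, k=3.0):
        """Flag clients whose update entropy deviates from the median by more than k MADs."""
        ent = np.array([update_entropy(u) for u in updates])
        med = np.median(ent)
        mad = np.median(np.abs(ent - med)) + 1e-12
        return np.where(np.abs(ent - med) > k * mad)[0], ent

    rng = np.random.default_rng(0)
    honest = [rng.normal(0, 0.01, 10000) for _ in range(9)]                  # typical small updates
    poisoned = [rng.normal(0, 0.01, 10000) + rng.choice([0.0, 0.5], 10000)]  # label-flip style shift
    flagged, entropies = flag_clients(honest + poisoned)
    print(flagged, np.round(entropies, 2))   # the last client is expected to stand out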
Styles APA, Harvard, Vancouver, ISO, etc.
