Dissertations / Theses on the topic 'Entropy algorithms'
Höns, Robin. "Estimation of distribution algorithms and minimum relative entropy." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=980407877.
Luo, Shen. "Interior-Point Algorithms Based on Primal-Dual Entropy." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/1181.
Fellman, Laura Suzanne. "The Genetic Algorithm and Maximum Entropy Dice." PDXScholar, 1996. https://pdxscholar.library.pdx.edu/open_access_etds/5247.
Meehan, Timothy J. "Joint demodulation of low-entropy narrow band cochannel signals." Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FMeehan%5FPhD.pdf.
Dissertation supervisor(s): Frank E. Kragh. "December 2006." Includes bibliographical references (p. 167-177). Also available in print.
Reimann, Axel. "Evolutionary algorithms and optimization." Doctoral thesis, [S.l. : s.n.], 2002. http://deposit.ddb.de/cgi-bin/dokserv?idn=969093497.
JIMMY, TJEN. "Entropy-Based Sensor Selection Algorithms for Damage Detection in SHM Systems." Doctoral thesis, Università degli Studi dell'Aquila, 2021. http://hdl.handle.net/11697/173561.
Kirsch, Matthew Robert. "Signal Processing Algorithms for Analysis of Categorical and Numerical Time Series: Application to Sleep Study Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1278606480.
Molari, Marco. "Implementation of network entropy algorithms on hpc machines, with application to high-dimensional experimental data." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2013. http://amslaurea.unibo.it/6160/.
Kotha, Aravind Eswar Ravi Raja, and Lakshmi Ratna Hima Rajitha Majety. "Performance Comparison of Image Enhancement Algorithms Evaluated on Poor Quality Images." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-13880.
Saraiva, Gustavo Francisco Rosalin. "Análise temporal da sinalização elétrica em plantas de soja submetidas a diferentes perturbações externas." Universidade do Oeste Paulista, 2017. http://bdtd.unoeste.br:8080/jspui/handle/jspui/1087.
Full textMade available in DSpace on 2018-07-27T17:57:40Z (GMT). No. of bitstreams: 1 Gustavo Francisco Rosalin Saraiva.pdf: 5041218 bytes, checksum: 30127a7816b12d3bd7e57182e6229bc2 (MD5) Previous issue date: 2017-03-31
Plants are complex organisms with dynamic processes that, owing to their sessile way of life, are influenced by environmental conditions at all times. Plants can perceive and respond accurately to different environmental stimuli, but this requires a complex and efficient signaling system. Electrical signaling in plants has been known for a long time, but has recently gained prominence as the physiological processes of plants have become better understood. The objective of this thesis was to test the following hypotheses: that time series obtained from the electrical signaling of plants carry non-random information with a dynamic, oscillatory pattern; that this dynamics is affected by environmental stimuli; and that there are specific patterns in the responses to stimuli. In a controlled environment, stressful environmental stimuli were applied to soybean plants, and electrical signaling data were collected before and after each stimulus. The time series obtained were analyzed with statistical and computational tools: the frequency spectrum (FFT), the autocorrelation of values, and Approximate Entropy (ApEn). To verify the existence of patterns in the series, classification algorithms from machine learning were used. The analysis showed that the electrical signals collected from plants present oscillatory dynamics with a power-law frequency distribution. The results differentiate, with great efficiency, series collected before and after the application of the stimuli. The PSD and autocorrelation analyses showed a marked difference in the dynamics of the electrical signals before and after stimulation, and the ApEn analysis showed a decrease in signal complexity after stimulation. The classification algorithms reached significant accuracy in detecting patterns and classifying the time series, showing that there are mathematical patterns in the different electrical responses of the plants. It is concluded that the time series of bioelectrical signals of plants contain discriminant information: the signals have oscillatory dynamics, their properties are altered by environmental stimuli, and there are mathematical patterns embedded in the plants' responses to specific stimuli.
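Of the measures applied in this study, Approximate Entropy has the most compact algorithmic definition. A minimal sketch of Pincus' formulation follows; the embedding length m and the tolerance factor are conventional defaults, not necessarily the settings used in the thesis.

import numpy as np

def apen(x, m=2, r_factor=0.2):
    # Approximate Entropy (Pincus, 1991): regularity of a 1-D series.
    # Lower values mean a more regular, less complex signal, which is the
    # direction of change reported above after stimulation.
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                    # tolerance scaled to the signal
    def phi(mm):
        n = len(x) - mm + 1
        w = np.array([x[i:i + mm] for i in range(n)])          # all windows
        d = np.abs(w[:, None, :] - w[None, :, :]).max(axis=2)  # Chebyshev distance
        return np.log((d <= r).mean(axis=1)).mean()
    return phi(m) - phi(m + 1)

# A pure sine is far more regular than white noise:
# apen(np.sin(np.linspace(0, 30, 600))) < apen(np.random.randn(600))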
Lo, Johnny Li-Chang. "A framework for cryptography algorithms on mobile devices." Diss., University of Pretoria, 2007. http://hdl.handle.net/2263/28849.
Dissertation (MSc (Computer Science))--University of Pretoria, 2007.
Hyla, Bret M. "Sample Entropy and Random Forests a methodology for anomaly-based intrusion detection and classification of low-bandwidth malware attacks /." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Sep%5FHyla.pdf.
Thesis Advisor(s): Craig Martell, Kevin Squire. "September 2006." Includes bibliographical references (p.59-62). Also available in print.
Gaudencio, Andreia. "Study on the texture of biomedical data : contributions from multiscale and multidimensional features based on entropy measures." Electronic Thesis or Diss., Angers, 2025. http://www.theses.fr/2025ANGE0004.
The PhD aimed to develop texture extraction tools using artificial intelligence and entropy-based algorithms (EBAs) for image processing. First, a systematic review investigated the utility of entropy in predicting several pathologies such as cancer and lung diseases. Then, Shannon-based and conditional-based entropy algorithms were developed and compared for their computational efficiency and performance in texture analysis of medical images. Shannon-based algorithms were less computationally intensive and were applied to detect pulmonary diseases; conditional-based algorithms showed superior stability and consistency. Two-dimensional (2D) ensemble fuzzy entropy was the best algorithm among the ensemble techniques for detecting healthy lung tissue and two types of emphysema. The proposed three-dimensional multiscale fuzzy entropy led to 89.6% accuracy and 96% sensitivity when detecting COVID-19. Moreover, 2D symbolic dynamic entropy proved to be the most accurate EBA (87.3%) in detecting emphysema patients among healthy subjects. Finally, when using the 2D entropy features provided by the EBAs developed, emphysema patients were detected with 89.1% accuracy and 95% area under the curve. Overall, the developed EBAs have proven to be effective in texture evaluation. In the future, they could be applied to various biomedical applications through different medical image sources.
Lökk, Adrian, and Jacob Hallman. "Viability of Sentiment Analysis for Troll Detection on Twitter : A Comparative Study Between the Naive Bayes and Maximum Entropy Algorithms." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186443.
SILVA, Israel Batista Freitas da. "Representações cache eficientes para índices baseados em Wavelet trees." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/21050.
Full textMade available in DSpace on 2017-08-30T19:22:34Z (GMT). No. of bitstreams: 2 license_rdf: 811 bytes, checksum: e39d27027a6cc9cb039ad269a5db8e34 (MD5) Israel Batista Freitas da Silva.pdf: 1433243 bytes, checksum: 5b1ac5501cae385e4811343e1426e6c9 (MD5) Previous issue date: 2016-12-12
Today, there is an exponential growth in the volume of information in the world. This growth creates a demand for more efficient indexing and querying techniques, since, to be useful, the data need to be manageable. Pattern matching means searching for a string (the pattern) in a much bigger string (the text), reporting the number of occurrences and/or their locations. To do that, we build a data structure known as an index, which preprocesses the text to allow efficient queries. The adoption of an index depends heavily on its practical efficiency, and this is directly related to how well it performs on current machine architectures. The main objective of this work is to analyze the Wavelet Tree data structure as an index, assessing the impact of its internal organization with respect to spatial locality, and to propose ways to organize its data so as to reduce the number of cache misses incurred by its operations. We performed an empirical analysis using both real and simulated textual data to compare the running time and cache behavior of Wavelet Trees using five different proposals of internal data layout. A theoretical analysis of the cache complexity of a query operation is also presented for the most efficient layout. Two experiments suggest good asymptotic behavior for two of the analyzed layouts. A third experiment shows that for four of the five layouts there was a systematic reduction in the number of cache misses for the lowest-level cache. This reduction, however, was not fully reflected in the runtime, nor in the performance of the highest-level cache. The results obtained allow us to conclude that the choice of a suitable layout can lead to a significant improvement in cache usage. Unlike in the theoretical model, the cost of memory access accounts for only a fraction of the operations' computation time on Wavelet Trees, so the decrease in the number of cache misses did not translate fully into gains in execution time. This factor can still be critical, however, in more extreme memory utilization situations.
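For readers unfamiliar with the structure under discussion, a toy pointer-based wavelet tree supporting the rank query is sketched below. It is an illustration only: the layout variants compared in the dissertation concern precisely how the per-node bitmaps (plain Python lists here) are arranged in memory for cache efficiency.

class WaveletTree:
    # Toy wavelet tree over a character alphabet. rank(c, i) counts the
    # occurrences of c in text[:i]. Succinct implementations replace the
    # lists below with bitmaps whose memory layout drives cache behavior.
    def __init__(self, text, lo=None, hi=None):
        if lo is None:
            alphabet = sorted(set(text))
            lo, hi = alphabet[0], alphabet[-1]
        self.lo, self.hi = lo, hi
        if lo == hi or not text:
            self.bits = None                                   # leaf node
            return
        mid = (ord(lo) + ord(hi)) // 2
        self.bits = [1 if ord(c) > mid else 0 for c in text]   # routing bitmap
        self.left = WaveletTree(''.join(c for c in text if ord(c) <= mid), lo, chr(mid))
        self.right = WaveletTree(''.join(c for c in text if ord(c) > mid), chr(mid + 1), hi)

    def rank(self, c, i):
        if self.bits is None:
            return i
        mid = (ord(self.lo) + ord(self.hi)) // 2
        ones = sum(self.bits[:i])              # symbols routed right so far
        if ord(c) <= mid:
            return self.left.rank(c, i - ones)
        return self.right.rank(c, ones)

# t = 'abracadabra'
# WaveletTree(t).rank('a', 8) == t[:8].count('a')   # True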
Sinha, Anurag R. "Optimization of a new digital image compression algorithm based on nonlinear dynamical systems." Online version of thesis, 2008. http://hdl.handle.net/1850/5544.
Pereira, Filipe de Oliveira. "Separação cega de misturas com não-linearidade posterior utilizando estruturas monotônicas e algoritmos bio-inspirados de otimização." [s.n.], 2010. http://repositorio.unicamp.br/jspui/handle/REPOSIP/259842.
Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
This work aims at the development of Blind Source Separation (BSS) methods for Post-NonLinear (PNL) mixing models. In this particular case, despite the presence of nonlinear elements in the mixing model, it is still possible to recover the sources through Independent Component Analysis (ICA) methods. However, there are two major problems in the application of ICA techniques to PNL models. The first one concerns a restriction on the nonlinear functions present in the PNL model: they must be monotonic functions by construction. The second one is related to the adjustment of the PNL separating system via ICA-based cost functions: there may be sub-optimal local minima. To cope with the first problem, we investigate three types of monotonic nonlinear structures. Moreover, to circumvent the problem related to the presence of sub-optimal minima, we consider bio-inspired algorithms that have a significant global search potential. Finally, we perform a set of experiments in representative scenarios in order to identify, among the considered strategies, the best ones in terms of quality of the retrieved sources and overall complexity.
Gielniak, Michael Joseph. "Adaptation of task-aware, communicative variance for motion control in social humanoid robotic applications." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43591.
Kobayashi, Jorge Mamoru. "Entropy: algoritmo de substituição de linhas de cache inspirado na entropia da informação." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-29112016-102603/.
This work presents a study of the cache line replacement problem in microprocessors. Inspired by the information entropy concept stated by Claude E. Shannon in 1948, it proposes a novel heuristic for replacing cache lines. The major goal is to capture the referential locality of programs and to reduce the miss rate of cache accesses during program execution. The proposed algorithm, Entropy, uses the new entropy heuristic to estimate the chances of a cache line being referenced after it has been loaded into the cache, and a novel decay function is introduced to optimize its operation. Results show that Entropy reduces the miss rate by up to 50.41% in comparison with LRU. The work also proposes a hardware implementation that keeps computation and complexity costs comparable to the most widely employed algorithm, LRU: for a 2-Mbyte, 8-way associative cache, the required storage area is 0.61% of the cache size. The Entropy algorithm was simulated on the SimpleScalar ISA simulator and compared with LRU on the SPEC CPU2000 benchmark programs.
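The thesis's exact heuristic and decay function are not reproduced in this listing; the sketch below only illustrates, under assumed scoring and decay choices, the general shape of an entropy-guided policy of the kind described: estimate each resident line's re-reference probability and evict the line whose estimate carries the highest surprisal.

import math

class EntropyCache:
    # Toy entropy-guided replacement. Each resident line keeps a decayed
    # access weight; on a miss the victim is the line whose normalized
    # weight has the highest surprisal -log(p), i.e. the line judged least
    # likely to be referenced again. The decay constant is illustrative.
    def __init__(self, capacity, decay=0.95):
        self.capacity, self.decay = capacity, decay
        self.weight = {}                       # line tag -> decayed hit count
    def access(self, tag):
        for t in self.weight:
            self.weight[t] *= self.decay       # age all resident lines
        if tag in self.weight:
            self.weight[tag] += 1.0            # hit
            return True
        if len(self.weight) >= self.capacity:  # miss with a full set
            total = sum(self.weight.values())
            victim = max(self.weight, key=lambda t: -math.log(self.weight[t] / total))
            del self.weight[victim]
        self.weight[tag] = 1.0                 # install the new line
        return False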
Cosma, Ioana Ada. "Dimension reduction of streaming data via random projections." Thesis, University of Oxford, 2009. http://ora.ox.ac.uk/objects/uuid:09eafd84-8cb3-4e54-8daf-18db7832bcfc.
Bouallagui, Sarra. "Techniques d'optimisation déterministe et stochastique pour la résolution de problèmes difficiles en cryptologie." PhD thesis, INSA de Rouen, 2010. http://tel.archives-ouvertes.fr/tel-00557912.
Robles, Bernard. "Etude de la pertinence des paramètres stochastiques sur des modèles de Markov cachés." PhD thesis, Université d'Orléans, 2013. http://tel.archives-ouvertes.fr/tel-01058784.
Sharify, Meisam. "Algorithmes de mise à l'échelle et méthodes tropicales en analyse numérique matricielle." PhD thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00643836.
Han, Seungju. "A family of minimum Renyi's error entropy algorithm for information processing." [Gainesville, Fla.] : University of Florida, 2007. http://purl.fcla.edu/fcla/etd/UFE0021428.
Semerád, Lukáš. "Generování kryptografického klíče z biometrických vlastností oka." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236038.
CAMPOS, M. C. M. "Development of an Entropy-Based Swarm Algorithm for Continuous Dynamic Constrained Optimization." Universidade Federal do Espírito Santo, 2017. http://repositorio.ufes.br/handle/10/9871.
Dynamic constrained optimization problems form a class of problems where the objective function or the constraints can change over time. In static optimization, finding a global optimum is considered the main goal. In dynamic optimization, the goal is not only to find an optimal solution, but also to track its trajectory as closely as possible over time. Changes in the environment must be taken into account during the optimization process, so these problems must be solved online. Many real-world problems can be formulated within this framework. This thesis proposes an entropy-based bare bones particle swarm for solving dynamic constrained optimization problems. The Shannon entropy is established as a phenotypic diversity index, and the proposed algorithm uses this index of diversity to aggregate the global-best and local-best bare bones particle swarm variants. The proposed approach applies the idea of a mixture of search directions by using the index of diversity as a factor to balance the influence of the global-best and local-best search directions. High diversity promotes the search guided by the global-best solution, with a normal distribution for exploitation. Low diversity promotes the search guided by the local-best solution, with a heavy-tailed distribution for exploration. A constraint-handling strategy is also proposed, which uses a ranking method with selection based on the technique for order of preference by similarity to ideal solution (TOPSIS) to obtain the best solution within a specific population of candidate solutions. Mechanisms to detect changes in the environment and to update particles' memories are also implemented in the proposed algorithm. All these strategies do not act independently; they operate in relation to each other to tackle problems such as diversity loss due to convergence and outdated memories due to changes in the environment. The combined effect of these strategies provides an algorithm able to maintain a proper balance between exploration and exploitation at any stage of the search process without losing the ability to track an optimal solution that changes over time. An empirical study was carried out to evaluate the performance of the proposed approach. Experimental results show the suitability of the algorithm in terms of effectiveness at finding good solutions for the benchmark problems investigated. Finally, an application is developed in which the proposed algorithm is applied to solve the dynamic economic dispatch problem in power systems.
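As a concrete reading of the aggregation rule described above, the sketch below uses the normalised Shannon entropy of a fitness histogram as the phenotypic diversity index and mixes the two bare bones sampling rules accordingly. The binning, the Cauchy tail, and the step scaling are illustrative assumptions, not the thesis's exact operators.

import numpy as np

def shannon_diversity(fitness, bins=10):
    # Normalised Shannon entropy of the population's fitness histogram,
    # a phenotypic diversity index in [0, 1].
    counts, _ = np.histogram(fitness, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(bins))

def bare_bones_sample(x_gbest, x_lbest, fitness, rng=np.random.default_rng()):
    # High diversity -> exploit around the global best with a Gaussian;
    # low diversity -> explore around the local best with a heavy tail.
    d = shannon_diversity(fitness)
    scale = np.abs(x_gbest - x_lbest) + 1e-12
    if rng.random() < d:
        return rng.normal(x_gbest, scale)                         # exploitation
    return x_lbest + scale * rng.standard_cauchy(x_lbest.shape)   # exploration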
Carvalho, André Izecson de. "A design method based in entropy statistics." Instituto Tecnológico de Aeronáutica, 2008. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=1169.
Armean, Irina Mărioara. "Protein complexes analyzed by affinity purification and maximum entropy algorithm using published annotations." Thesis, University of Cambridge, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.707940.
Danks, Jacob R. "Algorithm Optimizations in Genomic Analysis Using Entropic Dissection." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804921/.
Wang, Zhenggang. "Improved algorithm for entropic segmentation of DNA sequence." View abstract or full-text, 2004. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202004%20WANG.
Includes bibliographical references (leaves 56-58). Also available in electronic version. Access restricted to campus users.
NEGRI, MATTEO. "Is Evolution an Algorithm? Effects of local entropy in unsupervised learning and protein evolution." Doctoral thesis, Politecnico di Torino, 2022. http://hdl.handle.net/11583/2972307.
Champion, Julie. "Sur les algorithmes de projections en entropie relative avec contraintes marginales." Toulouse 3, 2013. http://thesesups.ups-tlse.fr/2036/.
This work is focused on an algorithm for constructing probability measures with prescribed marginal laws, called Iterative Proportional Fitting (IPF). Originating in statistical problems, this algorithm is based on successive projections onto probability spaces for the Kullback-Leibler relative entropy pseudometric. This thesis consists of a survey of the current results on this subject, together with some extensions and subtleties. The first part deals with the study of projections in relative entropy, namely existence, uniqueness criteria, and characterization properties related to the closedness of sum spaces. Under certain assumptions, the problem becomes a problem of entropy maximisation under graphical marginal constraints. In the second part, we study the iterative procedure IPF. Introduced initially for an estimation problem on contingency tables, in a more general setting it is an analogue of a classic algorithm of alternating projections on Hilbert spaces. After presenting the properties of the IPF, we look for convergence results in the finite discrete case, the Gaussian case, and the more general continuous case with two marginals, for which some extensions are given. The thesis then focuses on the Gaussian case with two prescribed marginals, for which we obtain a rate of convergence using a new formulation of the IPF; moreover, we prove optimality in the 2-dimensional case.
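In the finite discrete case with two marginals, the iteration analysed in this thesis is short enough to state in full: starting from a positive matrix, alternately rescale rows and columns, each half-step being an I-projection in relative entropy onto one marginal constraint set. A minimal sketch, assuming strictly positive entries:

import numpy as np

def ipf(p, row_marginal, col_marginal, n_iter=100):
    # Alternating I-projections onto the sets of matrices having the
    # prescribed row and column marginals (Iterative Proportional Fitting).
    q = np.asarray(p, dtype=float).copy()
    for _ in range(n_iter):
        q *= (row_marginal / q.sum(axis=1))[:, None]   # match row sums
        q *= (col_marginal / q.sum(axis=0))[None, :]   # match column sums
    return q

# q = ipf(np.ones((3, 3)) / 9, np.array([.5, .3, .2]), np.array([.2, .2, .6]))
# q.sum(axis=1) is exact after each row step; q.sum(axis=0) converges.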
Perche, Paul-Benoît. "Méthodes d'induction par arbres de décision dans le cadre de l'aide au diagnostic." Lille 1, 1999. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/1999/50376-1999-65.pdf.
Kilpatrick, Alastair Morris. "Novel stochastic and entropy-based Expectation-Maximisation algorithm for transcription factor binding site motif discovery." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/10489.
Nagalakshmi, Subramanya. "Study of FPGA implementation of entropy norm computation for IP data streams." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002477.
Singh, Anima Ph D. Massachusetts Institute of Technology. "Risk stratification of cardiovascular patients using a novel classification tree induction algorithm with non-symmetric entropy measures." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/64601.
Full textCataloged from PDF version of thesis.
Includes bibliographical references (p. 95-100).
Risk stratification allows clinicians to choose treatments consistent with a patient's risk profile. Risk stratification models that integrate information from several risk attributes can aid clinical decision making. One of the technical challenges in developing risk stratification models from medical data is the class imbalance problem. Typically the number of patients that experience a serious medical event is a small subset of the entire population. The goal of my thesis work is to develop automated tools to build risk stratification models that can handle unbalanced datasets and improve risk stratification. We propose a novel classification tree induction algorithm that uses non-symmetric entropy measures to construct classification trees. We apply our methods to the application of identifying patients at high risk of cardiovascular mortality. We tested our approach on a set of 4200 patients who had recently suffered from a non-ST-elevation acute coronary syndrome. When compared to classification tree models generated using other measures proposed in the literature, the tree models constructed using non-symmetric entropy had higher recall and precision. Our models significantly outperformed models generated using logistic regression - a standard method of developing multivariate risk stratification models in the literature.
GONCALVES, LEONARDO BARROSO. "RÉNYI ENTROPY AND CAUCHY-SCHWARTZ MUTUAL INFORMATION APPLIED TO THE MIFS-U VARIABLES SELECTION ALGORITHM: A COMPARATIVE STUDY." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=12170@1.
This dissertation studies the algorithm for Mutual Information Feature Selection under Uniform information distribution (MIFS-U) and presents an alternative method for estimating entropy and mutual information, the measures that constitute the basis of this selection algorithm. The method is founded on the Cauchy-Schwartz quadratic mutual information and the quadratic Rényi entropy, combined, in the case of continuous variables, with Parzen window density estimation. Experiments were carried out with real public-domain data, and the method was compared with a widely used one that adopts Shannon's definition of entropy and, in the case of continuous variables, the histogram density estimator. The results show small variations between the two methods, which suggests a future investigation using a classifier, such as neural networks, to evaluate these results qualitatively in light of the final objective, namely the highest classification accuracy.
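The estimator pairing used here has a convenient closed form: with a Gaussian Parzen window, the argument of Rényi's quadratic entropy reduces to a double sum of pairwise Gaussian interactions (the information potential), and the Cauchy-Schwartz mutual information is assembled from sums of the same kind over the joint and marginal samples. A minimal sketch of the entropy term for scalar samples, with an illustrative kernel width:

import numpy as np

def renyi_quadratic_entropy(x, sigma=1.0):
    # H2(X) = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2) ):
    # the Parzen-window estimate of Renyi's quadratic entropy.
    x = np.asarray(x, dtype=float)
    d2 = (x[:, None] - x[None, :]) ** 2
    g = np.exp(-d2 / (4 * sigma ** 2)) / np.sqrt(4 * np.pi * sigma ** 2)
    return float(-np.log(g.mean()))

# A wider distribution has higher quadratic entropy:
# renyi_quadratic_entropy(np.random.randn(500)) < renyi_quadratic_entropy(3 * np.random.randn(500))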
Mimić, Gordan. "Nelinearna dinamička analiza fizičkih procesa u životnoj sredini." PhD thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2016. https://www.cris.uns.ac.rs/record.jsf?recordId=101258&source=NDLTD&language=en.
A coupled system of prognostic equations for the ground surface temperature and the deeper layer temperature was examined. Lyapunov exponents, bifurcation diagrams, the attractor, and the domain of solutions were analyzed. Novel information measures based on Kolmogorov complexity, used for quantifying randomness in time series, were presented. The novel measures were tested on various time series obtained by measuring physical factors of the environment or taken from climate model outputs.
Hauman, Charlotte. "The application of the cross-entropy method for multi-objective optimisation to combinatorial problems." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/71636.
Full textENGLISH ABSTRACT: Society is continually in search of ways to optimise various objectives. When faced with multiple and con icting objectives, humans are in need of solution techniques to enable optimisation. This research is based on a recent venture in the eld of multi-objective optimisation, the use of the cross-entropy method to solve multi-objective problems. The document provides a brief overview of the two elds, multi-objective optimisation and the cross-entropy method, touching on literature, basic concepts and applications or techniques. The application of the method to two problems is then investigated. The rst application is to the multi-objective vehicle routing problem with soft time windows, a widely studied problem with many real-world applications. The problem is modelled mathematically with a transition probability matrix that is updated according to cross-entropy principles before converging to an approximation solution set. The highly constrained problem is successfully modelled and the optimisation algorithm is applied to a set of benchmark problems. It was found that the cross-entropy method for multi-objective optimisation is a valid technique in providing feasible and non-dominated solutions. The second application is to a real world case study in blood management done at the Western Province Blood Transfusion Service. The conceptual model is derived from interviews with relevant stakeholders before discrete event simulation is used to model the system. The cross-entropy method is used to optimise the inventory policy of the system by simultaneously maximising the combined service level of the system and minimising the total distance travelled. By integrating the optimisation and simulation model, the study shows that the inventory policy of the service can improve signi cantly, and the use of the cross-entropy algorithm adequately progresses to a front of solutions. The research proves the remarkable width and simplicity of possible applications of the cross-entropy algorithm for multi-objective optimisation, whilst contributing to literature on the vehicle routing problem and blood management. Results on benchmark problems for the vehicle routing problem with soft time windows are provided and an improved inventory policy is suggested to the Western Province Blood Transfusion Service.
Strizzi, Jon D. (Jon David). "An improved algorithm for satellite orbit decay and re-entry prediction." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/47332.
Spratlin, Kenneth Milton. "An adaptive numeric predictor-corrector guidance algorithm for atmospheric entry vehicles." Thesis, Massachusetts Institute of Technology, 1987. http://hdl.handle.net/1721.1/31006.
Full textMICROFICHE COPY AVAILABLE IN ARCHIVES AND AERONAUTICS.
Bibliography: p. 211-213.
by Kenneth Milton Spratlin.
M.S.
De Bortoli, Valentin. "Statistiques non locales dans les images : modélisation, estimation et échantillonnage." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN020.
In this thesis we study two non-local statistics in images from a probabilistic point of view: spatial redundancy and convolutional neural network features. More precisely, we are interested in the estimation and detection of spatial redundancy in natural images. We also aim at sampling images with neural network constraints. We start by giving a definition of spatial redundancy in natural images. This definition relies on two concepts: a Gestalt analysis of the notion of similarity in images, and a hypothesis testing framework (the a contrario method). We propose an algorithm to identify this redundancy in natural images. Using this methodology we can detect similar patches in images and, with this information, we propose new algorithms for diverse image processing tasks (denoising, periodicity analysis). The rest of this thesis deals with sampling images with non-local constraints. The image models we consider are obtained via the maximum entropy principle, and the target distribution is obtained by minimizing an energy functional. We use tools from stochastic optimization to tackle this problem. More precisely, we propose and analyze a new algorithm: the SOUL (Stochastic Optimization with Unadjusted Langevin) algorithm. In this methodology, the gradient is estimated using Markov chain Monte Carlo methods; in the case of the SOUL algorithm, an unadjusted Langevin algorithm. The efficiency of the SOUL algorithm is related to the ergodic properties of the underlying Markov chains, and we are therefore interested in the convergence properties of a certain class of functional autoregressive models. We characterize precisely the dependency of the convergence rates of these models on their parameters (dimension, smoothness, convexity). Finally, we apply the SOUL algorithm to the problem of exemplar-based texture synthesis with a maximum entropy approach. We draw links between our model and other entropy maximization procedures (macrocanonical models, microcanonical models). Using convolutional neural network constraints we obtain state-of-the-art visual results.
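A schematic of the two nested SOUL updates is sketched below under an assumed interface (grad_x_U for the inner Langevin chain, grad_theta_U for the outer parameter step). The actual algorithm uses decreasing step sizes and averages over the chain, which this toy version omits.

import numpy as np

def soul_step(theta, x, grad_x_U, grad_theta_U, gamma=1e-3, delta=1e-2, n_mcmc=10):
    # Inner loop: a short unadjusted Langevin chain targeting the current
    # model (no Metropolis correction, hence 'unadjusted').
    for _ in range(n_mcmc):
        noise = np.sqrt(2 * gamma) * np.random.randn(*x.shape)
        x = x - gamma * grad_x_U(x, theta) + noise
    # Outer step: a stochastic gradient move on theta, the gradient being
    # estimated from the non-equilibrium Langevin sample.
    return theta - delta * grad_theta_U(x, theta), x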
Silva, João Marcelo Monte da. "Um novo algoritmo baseado em entropia para filtragem da interferência frente-verso." Universidade Federal de Pernambuco, 2005. https://repositorio.ufpe.br/handle/123456789/5641.
The digitization of documents originally on paper is the most efficient means available today of preserving their content for future generations, as well as of providing access to and dissemination of the information through computer networks. The nature of a document dictates different techniques for its digitization and storage. In general, with future uses in mind, documents are digitized in true color and at high resolution (today reaching more than 1,000 dots per inch). For access over networks, such documents are generally made available in a monochromatic version, at 200 dpi resolution, compressed in a convenient format, usually TIFF (G4). This reduction of the number of colors in a document, known as binarization in the case of conversion to monochrome, is difficult to perform automatically when the document was written or printed on both sides of translucent paper, a situation known as back-to-front interference (bleed-through). The binarization algorithms found in today's commercial tools generate images in which the portions of ink from the front and the back overlap, making the resulting image unreadable. Although this problem was presented more than a decade ago, better solutions are still being sought. In the case of historical documents, the problem is even more complex, since the darkening caused by the aging of the paper is an additional complicating factor. This dissertation proposes a new algorithm, based on the entropy of the image histogram, for binarizing images of historical documents with back-to-front interference. The proposed algorithm is compared with its predecessors described in the literature and generates images of better quality than they do.
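For context, the classical histogram-entropy criterion that this family of binarization algorithms builds on (a Kapur-style rule) is sketched below; the dissertation's algorithm modifies this idea to cope with bleed-through and aged paper, and those modifications are not reproduced here.

import numpy as np

def entropy_threshold(gray):
    # Pick the global threshold that maximises the sum of the Shannon
    # entropies of the two histogram classes it induces.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        p0, p1 = p[:t][p[:t] > 0] / w0, p[t:][p[t:] > 0] / w1
        h = -(p0 * np.log(p0)).sum() - (p1 * np.log(p1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t            # binarise with: gray < best_t -> ink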
Perianhes, Roberto Vitoriano. "Utilizando algoritmo de cross-entropy para a modelagem de imagens de núcleos ativos de galáxias obtidas com o VLBA." Universidade Presbiteriana Mackenzie, 2017. http://tede.mackenzie.br/jspui/handle/tede/3466.
Full textApproved for entry into archive by Paola Damato (repositorio@mackenzie.br) on 2018-03-08T11:19:18Z (GMT) No. of bitstreams: 2 Roberto Vitoriano Perianhes.pdf: 5483045 bytes, checksum: 54cb8ad49fe9a8dd9da3aaabb8076b2f (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5)
Made available in DSpace on 2018-03-08T11:19:18Z (GMT). No. of bitstreams: 2 Roberto Vitoriano Perianhes.pdf: 5483045 bytes, checksum: 54cb8ad49fe9a8dd9da3aaabb8076b2f (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Previous issue date: 2017-08-09
The images obtained by interferometers such as the VLBA (Very Long Baseline Array) and VLBI (Very Long Baseline Interferometry) are direct evidence of relativistic jets and outbursts associated with supermassive black holes in active galactic nuclei (AGN). The study of these images is critical for exploiting the information in these observations, since they are one of the main ingredients for synthesis codes of extragalactic objects. This thesis uses both synthetic and observed images. The VLBA images show 2-dimensional observations generated by complex 3-dimensional astrophysical processes. In this sense, one of the main difficulties of the models is the definition of the parameters of the functions and equations that reproduce, macroscopically and dynamically, the physical formation events of these objects, so that the images can be studied reliably and on a large scale. One of the goals of this thesis is to elaborate a generic reproduction of the observations, assuming that these objects originate from similar astrophysical processes, given certain parameters of the formation events. Parameters that reproduce the observations are key to generalizing the formation of sources and extragalactic jets. Most observational articles focus on few or even single objects. The purpose of this project is to implement an innovative, more robust and efficient method for modeling and reproducing various objects, such as those of the MOJAVE Project, which monitors several quasars simultaneously and offers a diverse library for creating models (quasars and blazars: OVV and BL Lacertae). This thesis implements a dynamic way to study these objects and presents an adaptation of the Cross-Entropy algorithm for calibrating the parameters of astrophysical events that synthesize the actual events of the VLBA observations. The adaptation structure of the code includes the possibility of extension to any image, assuming the images are given as intensities (Jy/beam) distributed over right ascension (RA) and declination (DEC) maps. The code is validated by searching for self-convergence on synthetic models with the same structure, i.e., realistic simulations of component ejection, at milliarcsecond scales, similar to the 15.3 GHz observations of the MOJAVE project. Using the parameters semi-major axis, position angle, eccentricity, and intensity, applied individually to each observed component, it was possible to calculate the structure of the sources and the velocities of the jets, as well as to convert to flux density to obtain light curves. From the light curve, the brightness temperature, the Doppler factor, the Lorentz factor, and the viewing angle of the extragalactic objects can be estimated with precision. The objects OJ 287, 4C +15.05, 3C 279, and 4C +29.45 are studied in this thesis because their different and complex morphologies allow a more complete study.
Morales Pérez, Cristóbal Sebastián. "Algoritmo de detección de eventos epilépticos basado en medidas de energía y entropía enfocado en pacientes críticos." Tesis, Universidad de Chile, 2017. http://repositorio.uchile.cl/handle/2250/147438.
The goal of this thesis is to implement an epileptic seizure detection algorithm that runs in real time. The work was carried out as a cooperation between the Biomedical Engineering Laboratory of the DIE of the Universidad de Chile and the Department of Neurology and the Pediatric Critical Patient Unit of the Faculty of Medicine of the Pontificia Universidad Católica de Chile. The study builds on the thesis by Eliseo Araya [1], which uses energy measures to detect epileptic seizures, and adds new signal analysis tools, expert criteria, and measures that characterize epileptic seizures. The database consists of 15 recordings with a combined duration of 219.3 hours, containing 469 epileptic seizures, of which 277 last longer than 10 s and 192 last less than 10 s. Eleven recordings, with 232 marked seizures, are used to train the algorithm, and 4 recordings, with 45 marked seizures, to test it. Only one recording contains seizures shorter than 10 s, and it is used for training. The algorithm consists of 5 modules: 1) feature extraction; 2) feature filtering; 3) artifact removal; 4) decision making; 5) combination of algorithms. The first obtains the features of the recording used by the algorithm; the second filters the extracted features; the third cleans the features of noise and artifacts; the fourth is divided into 2 algorithms that work in parallel and use Gotman's method, one in charge of detecting epileptic seizures longer than 10 s and the other of detecting seizures shorter than 10 s; the fifth combines the algorithms of module 4 to produce a single output. On the test set, 41 seizures are detected and 36 false detections are generated, which translates into a true positive rate of 91.1% and a false positive rate of 0.6 per hour. For seizures shorter than 10 s there are no marks in the test set, but 96 false positives are generated, i.e., a false positive rate of 1.61 per hour. In conclusion, this thesis advances the work of Araya: new signal analysis algorithms and methods for characterizing epileptic seizures were programmed, the number of recordings in the database and of marked epileptic seizures was increased, and an algorithm with better true positive and false positive rates was obtained.
Abdalla, Alvaro Martins. "OMPP para projeto conceitual de aeronaves, baseado em heurísticas evolucionárias e de tomadas de decisões." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/18/18148/tde-13012011-113940/.
Full textThis work is concerned with the development of a methodology for multidisciplinary optimization of the aircraft conceptual design. The aircraft conceptual design optimization was based on the evolutionary simulation of the aircraft characteristics outlined by a QFD/Fuzzy arithmetic approach where the candidates in the Pareto front are selected within categories close to the target proposed. As a test case a military trainer aircraft was designed target to perform the proper transition from basic to advanced training. The methodology for conceptual aircraft design optimization implemented in this work consisted on the integration of techniques such statistical entropy, quality function deployment (QFD), arithmetic fuzzy and genetic algorithm (GA) to the weighted multidisciplinary design optimization (WMDO). This methodology proved to be objective and well balanced when compared with traditional design techniques.
Toledo Peña, Patricio Antonio. "Algoritmo de detección de ondas P invariante de escala: Caso de réplicas del sismo del 11 de marzo de 2010." Tesis, Universidad de Chile, 2014. http://repositorio.uchile.cl/handle/2250/131361.
Full textBajo la presión del megaterremoto del Maule en febrero de 2010, los centros de estudio chilenos debieron enfrentar una emergencia adicional, consistente en el procesamiento de terabits de datos tomados con posterioridad al gran evento. Esta masa de información proviene principalmente de las campañas de intervención. Sin embargo, la razón de fondo del número de datos que se logra, son las leyes de escalamiento que dominan la dinámica de la corteza y es que estas dictan qué sucede antes y después de cada evento. Más aún, estas leyes imponen cotas bastante estrictas al volumen de datos que es necesario registrar para identificar los procesos en sí. A pesar de que una teoría completa de la generación de sismicidad es desconocida en la actualidad, es posible comprender sus rasgos principales con una multiplicidad de técnicas, dos de ellas son la similitud y las observaciones directas. El uso de estos métodos, permite identificar algunas simetrías presentes en los fenómenos corticales. Estas simetrías son invarianzas de escala, es decir, la posibilidad de expresar los observables de interés como leyes de potencia del espacio, del tiempo y del tamaño de lo estudiado. Esta invarianza es el motivo tras la geometría fractal de las fallas, la ley de Gutenberg-Richter para los tamaños de los eventos sísmicos, la ley de Omori para los tiempos entre réplicas y otros. Estos elementos, permiten identificar un rasgo combinatorial presente en el proceso de generación de sismicidad, que posibilita la introducción de la entropía de Shannon como uno de los observables relevantes, que en la actualidad, no ha sido explotado exhaustivamente por los geocientistas. La entropía está ligada a la idea de información que es posible conocer y transmitir. La interpretación de la fuente sísmica como una de carácter estocástico cuyas señales viajan a través de un medio ruidoso (la corteza) finalmente registradas en receptores (sismómetros) permite hacer la analogía con un telégrafo y con ello conocer la información que proviene de los terremotos. La noción de entropía se fundamenta sobre unas probabilidades que se han identificado con ayuda del fenómeno conocido como la anomalía del primer dígito, que se reporta presente en la fuente sísmica, hecho debidamente establecido en la primera de las publicaciones que se adjuntan por medio de observaciones y simulación de autómatas celulares. Esta anomalía, se muestra está asociada a una familia de sistemas disipativos de los cuales la corteza es uno. Con ayuda de la teoría de la información se han encontrado criterios básicos de índole geométrico, que han permitido desarrollar los algoritmos de reconocimiento de sismicidad propuestos, que se han probado empíricamente en el caso de una serie de réplicas pertenecientes al sismo de Pichilemu del 11 de marzo del 2010, presentados en detalle en el segundo trabajo adjunto. Se ha probado que estos algoritmos se muestran competitivos y complementarios a los ya usados popularmente, lo que aumenta la capacidad de detección y abre posibilidades de estudio en el problema de alerta temprana. Finalmente se discute la posibilidad de interpretar el proceso de disipación de energía a través de una representación simple, que ligaría información, entropía y geometría.
Chakik, Fadi El. "Maximum d'entropie et réseaux de neurones pour la classification." Grenoble INPG, 1998. http://www.theses.fr/1998INPG0091.
Kesler, Joseph Michael. "Automated Alignment of Aircraft Wing Radiography Images Using a Modified Rotation, Scale, and Translation Invariant Phase Correlation Algorithm Employing Local Entropy for Peak Detection." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1218604857.
Full textBen, Atia Okba. "Plateforme de gestion collaborative sécurisée appliquée aux Réseaux IoT." Electronic Thesis or Diss., Mulhouse, 2024. http://www.theses.fr/2024MULH7114.
Federated Learning (FL) allows clients to collaboratively train a model while preserving data privacy. Despite its benefits, FL is vulnerable to poisoning attacks. This thesis addresses malicious model detection in FL systems for IoT networks. We provide a literature review of recent detection techniques and propose a Secure Layered Adaptation and Behavior framework (FLSecLAB) to fortify the FL system against attacks. FLSecLAB offers customization for evaluating defenses across datasets and metrics. We propose enhanced malicious model detection with dynamic optimal threshold selection, targeting label-flipping attacks. We present a scalable solution using entropy and an adaptive threshold to detect malicious clients. We explore complex scenarios and propose novel detection against simultaneous label-flipping and backdoor attacks. Additionally, we propose an adaptive model for detecting malicious clients, addressing non-IID data challenges. We evaluate our approaches through various simulation scenarios with different datasets, comparing them with existing approaches. The results demonstrate the effectiveness of our approaches in enhancing various malicious-detection performance metrics.
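As an illustration of the entropy-plus-adaptive-threshold idea (the thesis's own features and threshold rule are more elaborate and are not reproduced here), the sketch below scores each client's update by the mean predictive entropy of its model on a small server-side probe set and flags outliers with an assumed median/MAD rule.

import numpy as np

def flag_suspicious(client_probe_logits, cutoff=3.0):
    # client_probe_logits: one (n_probe, n_classes) array per client, from
    # evaluating each client's updated model on a server-side probe set.
    def mean_entropy(logits):
        z = logits - logits.max(axis=1, keepdims=True)       # stable softmax
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
    scores = np.array([mean_entropy(l) for l in client_probe_logits])
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    return np.abs(scores - med) / mad > cutoff    # True -> flagged as suspicious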