
Theses on the topic "M5P Algorithm"



Consult the top 50 theses for your research on the topic "M5P Algorithm".


You can also download the full text of the academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Rodríguez, Elen Yanina Aguirre. "Técnicas de aprendizado de máquina para predição do custo da logística de transporte : uma aplicação em empresa do segmento de autopeças /". Guaratinguetá, 2020. http://hdl.handle.net/11449/192326.

Full text
Abstract
Advisor: Fernando Augusto Silva Marins
Abstract: In different aspects of everyday life, human beings are forced to choose among several options; this process is known as decision making. At the business level, decision making plays a very important role, because the success or failure of organizations depends on these decisions. However, in many cases, wrong decisions can generate large costs. Some of the decision-making problems a manager commonly faces are, for example, pricing decisions, make-or-buy decisions, logistics problems, storage problems, etc. On the other hand, data collection has become a competitive advantage, since data can be used for analysis and for extracting meaningful results through the application of various techniques, such as statistics, simulation, mathematics, econometrics, and current techniques such as machine learning for building predictive models. Moreover, there is evidence in the literature that building models with machine learning techniques has a positive impact on industry and on different research areas. In this context, the present work proposes the development of a predictive model for decision making, using supervised machine learning techniques and combining the generated model with the constraints belonging to the optimization process. The objective of the proposal is to train a mathematical model with historical data from a decision process and obtain the predict... (Full abstract: click the electronic access link below)
Master's
2

Chapala, Usha Kiran and Sridhar Peteti. "Continuous Video Quality of Experience Modelling using Machine Learning Model Trees". Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17814.

Full text
Abstract
Adaptive video streaming is perpetually influenced by unpredictable network conditions, which causes playback interruptions like stalling, rebuffering and video bit rate fluctuations. This leads to potential degradation of end-user Quality of Experience (QoE) and may make users churn from the service. Video QoE modelling that precisely predicts the end user's QoE under these unstable conditions is therefore of immediate interest. The service provider requires root cause analysis for these degradations. These sudden changes in trend are not visible from monitoring the data from the underlying network service. Thus, it is challenging to detect this change and model the instantaneous QoE. For this modelling, continuous-time QoE ratings are considered rather than the overall end QoE rating per video. To reduce the risk of users churning, network providers should give the best quality to the users. In this thesis, we propose QoE modelling to analyze how user reactions change over time using machine learning models. The machine learning models are used to predict the QoE ratings and the change patterns in ratings. We test the model on a publicly available video quality dataset which contains user subjective QoE ratings for network distortions. The M5P model tree algorithm is used for the prediction of user ratings over time. The M5P model yields mathematical equations and leads to more insights through the given equations. Results of the algorithm show that the model tree is a good approach for predicting continuous QoE and detecting change points in ratings. It is shown to which extent these algorithms can be used to estimate changes. The analysis of the model provides valuable insights into the exponential transitions between different levels of predicted ratings. The outcome of the analysis explains user behavior: when quality decreases, user ratings decrease faster than they increase as quality improves over time.
The earlier work on the exponential transitions of instantaneous QoE over time is supported by the model tree, as applied to user reactions to sudden changes such as video freezes.
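As a rough, hypothetical illustration of the model-tree idea behind M5P (a regression tree whose leaves hold linear models, grown by maximizing standard-deviation reduction), here is a minimal depth-1 sketch in plain Python; the thesis itself would have used a full M5P implementation such as Weka's, not this toy:

```python
# Hypothetical depth-1 model tree in the spirit of M5P: pick the split that
# maximizes standard-deviation reduction (SDR), then fit an ordinary
# least-squares line in each leaf.

def linear_fit(xs, ys):
    """OLS fit y = a*x + b for 1-D data; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return a, my - a * mx

def stdev(ys):
    m = sum(ys) / len(ys)
    return (sum((y - m) ** 2 for y in ys) / len(ys)) ** 0.5

def best_split(xs, ys):
    """Split threshold maximizing SDR, as in M5's tree-growing phase."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs, ys = [xs[i] for i in order], [ys[i] for i in order]
    base = stdev(ys)
    split, best_sdr = None, -1.0
    for k in range(1, len(xs)):
        left, right = ys[:k], ys[k:]
        sdr = base - (len(left) * stdev(left) + len(right) * stdev(right)) / len(ys)
        if sdr > best_sdr:
            split, best_sdr = (xs[k - 1] + xs[k]) / 2, sdr
    return split

def fit_model_tree(xs, ys):
    """One SDR split with a linear model in each of the two leaves."""
    s = best_split(xs, ys)
    left = [(x, y) for x, y in zip(xs, ys) if x <= s]
    right = [(x, y) for x, y in zip(xs, ys) if x > s]
    al, bl = linear_fit([p[0] for p in left], [p[1] for p in left])
    ar, br = linear_fit([p[0] for p in right], [p[1] for p in right])
    return lambda x: al * x + bl if x <= s else ar * x + br

# Piecewise-linear toy data: y = x below x = 5, y = x + 10 from x = 5 on.
data_x = list(range(10))
data_y = [x if x < 5 else x + 10 for x in data_x]
predict = fit_model_tree(data_x, data_y)  # predict(2) ~ 2, predict(7) ~ 17
```

The real M5P additionally prunes the tree and smooths leaf models along the path to the root; none of that is shown here.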
3

Sehovic, Mirsad and Markus Carlsson. "Nåbarhetstestning i en baneditor : En undersökning i hur nåbarhetstester kan implementeras i en baneditor samt funktionens potential i att ersätta manuell testning". Thesis, Linnéuniversitetet, Institutionen för datavetenskap (DV), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-36394.

Full text
Abstract
The following study examines whether it is possible to implement reachability testing in a map editor designed for 2D platform games. The purpose of reachability testing is to replace manual testing, that is, the level designer having to play through the map just to see whether the player can reach all supposedly reachable positions in the map. A simple map editor is created to enable the implementation, after which we perform a theoretical study to determine which algorithm would be best suited for implementing the reachability testing. The results comparing algorithms show that A* (A star) worked best for the function. Whether or not manual testing can be replaced by automatic testing is open for debate; however, the results point to an increase in time efficiency when it comes to level design.
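The reachability test described above can be pictured with a small, hypothetical grid level and A* search; the tile symbols, movement rules, and level layout below are assumptions for illustration only:

```python
# Sketch of reachability testing on a 2-D grid level: run A* from the spawn
# point to a target tile ('.' = open, '#' = solid, 4-way movement assumed).
import heapq

def a_star_reachable(grid, start, goal):
    """Return True if `goal` is reachable from `start`."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan distance: an admissible heuristic on a 4-way grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, position)
    seen = {start}
    while frontier:
        _, g, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == '.' and (nr, nc) not in seen:
                seen.add((nr, nc))
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return False

# Toy level: the wall of '#' isolates the right-hand region from the spawn.
level = ["....#..",
         ".##.#..",
         "....#..",
         "######."]
```

An editor would repeat this query (or a single flood fill) for every tile flagged as reachable and report the ones the player can never reach.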
4

Jiang, Minghui. "Map labeling with circles". Diss., Montana State University, 2005. http://etd.lib.montana.edu/etd/2005/jiang/JiangM0505.pdf.

Full text
5

Carrigan, Braxton Bezdek András. "Evading triangles without a map". Auburn, Ala., 2010. http://hdl.handle.net/10415/2032.

Full text
6

Hislop, A. D. "Parallel algorithms for digital map path optimisation". Thesis, University of Southampton, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.315321.

Full text
7

Yuen, Patrick Wingkee. "Applying modified CLEAN algorithm to MAP image super-resolution". Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187279.

Full text
Abstract
In this dissertation, the super-resolution method that we use for image restoration is the Poisson Maximum A-Posteriori (MAP) super-resolution algorithm of Hunt, computed with an iterative form. This algorithm is similar to the Maximum Likelihood of Holmes, which is derived from an Expectation/Maximization (EM) computation. Image restoration of point source data is our focus. This is because most astronomical data can be regarded as multiple point source data with a very dark background. The statistical limits imposed by photon noise on the resolution obtained by our algorithm are investigated. We improve the performance of the super-resolution algorithm by including the additional information of the spatial constraints. This is achieved by applying the well-known CLEAN algorithm, which is widely used in astronomy, to create regions of support for the potential point sources. Real and simulated data are included in this paper. The point spread function (psf) of a diffraction limited optical system is used for the simulated data. The real data is two dimensional optical image data from the Hubble Space Telescope.
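The iterative Poisson restoration family referred to above can be illustrated with the closely related Richardson-Lucy multiplicative update; this one-dimensional sketch is an assumption-laden stand-in, not the dissertation's exact Poisson MAP algorithm:

```python
# 1-D Richardson-Lucy deconvolution: a multiplicative fixed-point iteration
# for Poisson image data, closely related to the Poisson MAP iteration of
# Hunt referenced in the abstract. Circular convolution keeps the toy simple.

def convolve(signal, psf):
    """Circular convolution with a symmetric, normalized PSF."""
    n, k = len(signal), len(psf)
    half = k // 2
    return [sum(psf[j] * signal[(i + j - half) % n] for j in range(k))
            for i in range(n)]

def richardson_lucy(observed, psf, iterations=200):
    psf_flipped = psf[::-1]
    estimate = [1.0] * len(observed)           # flat non-negative start
    for _ in range(iterations):
        blurred = convolve(estimate, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

# A single point source (total flux 10) blurred by a 3-tap PSF is sharpened
# back toward a point, mimicking super-resolution of astronomical point data.
psf = [0.25, 0.5, 0.25]
truth = [0.0] * 9
truth[4] = 10.0
observed = convolve(truth, psf)
restored = richardson_lucy(observed, psf)
```

The dissertation's contribution of CLEAN-derived support regions would correspond to zeroing the estimate outside the detected point-source windows between iterations; that step is omitted here.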
8

Sutton, David William Peter. "Map-making algorithms in future CMB polarisation experiments". Thesis, University of Oxford, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.540274.

Full text
9

Rankenburg, Ivan. "Application of the difference map algorithm to protein structure prediction". Saarbrücken VDM Verlag Dr. Müller, 2007. http://d-nb.info/991338103/04.

Full text
10

Li, Xiaoli. "A map-growing localization algorithm for ad-hoc sensor networks /". free to MU campus, to others for purchase, 2003. http://wwwlib.umi.com/cr/mo/fullcit?p1418044.

Full text
11

Ben, Ammar Oussama. "Planification des réapprovisionnements sous incertitudes pour les systèmes d’assemblage à plusieurs niveaux". Thesis, Saint-Etienne, EMSE, 2014. http://www.theses.fr/2014EMSE0756/document.

Full text
Abstract
In the current industrial context, supply largely exceeds demand, and customers are therefore more and more demanding. To set themselves apart, companies need to offer their customers the best product quality at the best cost, with controlled lead times that are as short as possible. In recent years, the struggle to reduce costs has intensified within companies. Stocks represent an important financial asset, and therefore it is essential to control them. In addition, bad stock management leads either to delivery delays, which generate additional production costs, or to unnecessary inventory. The latter can occur at different levels (from components at the last level to the finished product); it costs money and immobilizes funds. That is why planners have to look for efficient methods of production and supply planning, to know exactly, for each component, when to order and in which quantity. The aim of this doctoral thesis is to investigate supply planning in an uncertain environment. We are interested in replenishment planning for multi-level assembly systems under a fixed demand and uncertain component lead times. We consider that each component has a fixed unit inventory cost; the finished product has an inventory cost and a backlogging cost per unit of time. A general mathematical model for replenishment planning of multi-level assembly systems, a genetic algorithm, and a branch-and-bound method are presented to calculate and optimize the expected value of the total cost, which equals the sum of the inventory holding costs for the components and the backlogging and inventory holding costs for the finished product. The different results show that the convergence of the GA depends not only on the number of components at the last level but also on the number of levels, the type of the BOM, and the backlogging cost for the finished product.
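The cost criterion described above, expected component holding plus finished-product backlogging, can be sketched on a deliberately tiny hypothetical instance with discrete lead-time distributions; every number below is invented for illustration and the model is single-level, far simpler than the thesis's multi-level systems:

```python
# Toy expected-cost computation for one assembly with two components whose
# lead times are independent discrete random variables. Assembly finishes
# when the last component arrives; earlier components accrue holding cost,
# and finishing after the due date accrues backlog cost.
from itertools import product

lead_time_dist = [
    {1: 0.5, 2: 0.5},   # P(lead time of component 1), in days
    {1: 0.2, 3: 0.8},   # P(lead time of component 2)
]
h = [1.0, 2.0]          # per-day holding cost of each component
b = 10.0                # per-day backlog cost of the finished product
due = 2                 # due date, in days

def expected_cost():
    total = 0.0
    for arrivals in product(*(d.items() for d in lead_time_dist)):
        times = [t for t, _ in arrivals]
        prob = 1.0
        for _, p in arrivals:
            prob *= p
        finish = max(times)  # assembly starts once all components are in
        holding = sum(h[i] * (finish - t) for i, t in enumerate(times))
        backlog = b * max(0, finish - due)
        total += prob * (holding + backlog)
    return total

cost = expected_cost()
```

A genetic algorithm or branch-and-bound, as in the thesis, would search over the release dates of component orders to minimize exactly this kind of expectation.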
12

Sun, Chi. "A constrained MDP-based vertical handoff decision algorithm for wireless networks". Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1243.

Full text
Abstract
The 4th generation wireless communication systems aim to provide users with the convenience of seamless roaming among heterogeneous wireless access networks. To achieve this goal, the support of vertical handoff is important in mobility management. This thesis focuses on the vertical handoff decision algorithm, which determines the criteria under which vertical handoff should be performed. The problem is formulated as a constrained Markov decision process. The objective is to maximize the expected total reward of a connection subject to the expected total access cost constraint. In our model, a benefit function is used to assess the quality of the connection, and a penalty function is used to model the signaling incurred and call dropping. The user's velocity and location information are also considered when making the handoff decisions. The policy iteration and Q-learning algorithms are employed to determine the optimal policy. Structural results on the optimal vertical handoff policy are derived by using the concept of supermodularity. We show that the optimal policy is a threshold policy in bandwidth, delay, and velocity. Numerical results show that our proposed vertical handoff decision algorithm outperforms other decision schemes in a wide range of conditions such as variations on connection duration, user's velocity, user's budget, traffic type, signaling cost, and monetary access cost.
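The constrained formulation and structural results are beyond a short note, but the underlying dynamic-programming machinery can be sketched with an unconstrained toy handoff MDP solved by value iteration; all states, rewards, and transition probabilities below are invented for illustration:

```python
# Toy vertical-handoff MDP: two network states (WLAN, CELL), two actions
# (stay, switch). Value iteration finds V*, and the greedy policy follows.
# Numbers are illustrative only; the thesis solves a *constrained* MDP with
# richer state (bandwidth, delay, velocity, location).

states = ["WLAN", "CELL"]
actions = ["stay", "switch"]
reward = {("WLAN", "stay"): 5.0, ("WLAN", "switch"): 1.0,
          ("CELL", "stay"): 2.0, ("CELL", "switch"): -1.0}  # switch pays signaling cost
trans = {("WLAN", "stay"): {"WLAN": 0.7, "CELL": 0.3},      # WLAN coverage may drop
         ("WLAN", "switch"): {"CELL": 1.0},
         ("CELL", "stay"): {"CELL": 1.0},
         ("CELL", "switch"): {"WLAN": 1.0}}
gamma = 0.9

def q_value(s, a, V):
    return reward[s, a] + gamma * sum(p * V[t] for t, p in trans[s, a].items())

def value_iteration(eps=1e-8):
    V = {s: 0.0 for s in states}
    while True:
        newV = {s: max(q_value(s, a, V) for a in actions) for s in states}
        if max(abs(newV[s] - V[s]) for s in states) < eps:
            return newV
        V = newV

def greedy_policy(V):
    return {s: max(actions, key=lambda a: q_value(s, a, V)) for s in states}
```

With these numbers the optimal policy is to stay on WLAN and switch away from cellular, a (trivial) threshold structure of the kind the thesis proves for its full model.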
13

Altaye, Endale Berhane. "Approximate recursive algorithm for finding MAP of binary Markov random fields". Thesis, Norwegian University of Science and Technology, Department of Mathematical Sciences, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-10824.

Full text
Abstract

The purpose of this study was to develop a recursive algorithm for computing a maximum a posteriori (MAP) estimate of a binary Markov random field (MRF) by using the MAP-MRF framework. We also discuss how to include an approximation in the recursive scheme, so that the algorithm becomes computationally feasible also for larger problems. In particular, we discuss how our algorithm can be used in an image analysis setting. We consider a situation where an unobserved latent field is assumed to follow a Markov random field prior model, a Gaussian noise-corrupted version of the latent field is observed, and we estimate the unobserved field by the MAP estimator.
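As a hedged illustration of the MAP-MRF setting (Ising prior on a binary latent field, Gaussian noise on the observations), the sketch below uses iterated conditional modes (ICM), a simple local optimizer, rather than the thesis's recursive algorithm:

```python
# ICM for a toy binary MRF MAP problem: latent pixels x in {-1, +1} with an
# Ising prior, observed as y = x + Gaussian noise. Each sweep greedily
# maximizes the local posterior at every pixel (a local, not global, MAP).

def icm_denoise(y, beta=1.0, sigma=0.5, sweeps=5):
    """Greedy coordinate-wise maximization of
       sum_i -(y_i - x_i)^2 / (2 sigma^2) + beta * sum_{i~j} x_i x_j."""
    rows, cols = len(y), len(y[0])
    x = [[1 if v > 0 else -1 for v in row] for row in y]  # start at sign(y)
    for _ in range(sweeps):
        for i in range(rows):
            for j in range(cols):
                nbrs = sum(x[a][b]
                           for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                           if 0 <= a < rows and 0 <= b < cols)
                def local(cand):  # data term + smoothness term
                    return (-(y[i][j] - cand) ** 2 / (2 * sigma ** 2)
                            + beta * cand * nbrs)
                x[i][j] = max((-1, 1), key=local)
    return x

# A 3x3 all-ones field with one noise-flipped observation is cleaned up.
noisy = [[1.1, 0.9, 1.2],
         [1.0, -0.8, 0.9],
         [0.8, 1.1, 1.0]]
denoised = icm_denoise(noisy)
```

ICM only finds a local optimum; the appeal of the recursive scheme studied in the thesis is that it approximates the global MAP at feasible cost for larger fields.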

14

Parker, Michael Joseph. "Music Perception of Cochlear Implant recipients using a Genetic Algorithm MAP". Thesis, University of Canterbury. Communication Disorders, 2011. http://hdl.handle.net/10092/5235.

Full text
Abstract
Cochlear implant (CI) users have traditionally reported less enjoyment of music and have performed more poorly on tasks of music perception (timbre, melody and pitch) than their normal hearing (NH) counterparts. The enjoyment and perception of music can be affected by the MAP programmed into a user's speech processor, whose parameters can be altered to change the way a CI recipient hears sound. However, finding the optimal MAP can prove challenging to clinicians because altering one parameter will affect others. Until recently, the only way to find the optimal MAP has theoretically been to present each potential combination of parameters systematically; however, this is impractical in a clinical setting due to the thousands of different potential combinations. Thus, in general, clinicians can find a good MAP, but not necessarily the best one. The goal of this study was to assess whether a Genetic Algorithm (GA) would assist clinicians to create a better MAP for music listening than current methods. Seven adult Nucleus Freedom CI users were assessed on tasks of timbre identification, melody identification and pitch-ranking using their original MAP. The participants then used the GA software to create an individualised MAP for music listening (referred to as their "GA MAP"). They then spent four weeks comparing their GA and original MAPs in their everyday life, and recording their listening experiences in a listening diary. At the end of this period, participants were assessed on the same timbre, melody, and pitch tasks using their GA MAP. The results of the study showed that the GA process took an average of 35 minutes (range: 13-72 minutes) to create a MAP for music listening. As a group, participants reported the GA MAP to be slightly better than their original MAP for music listening, and preferred the GA MAP when at the cinema.
Participants, on average, also performed significantly better on the melody identification task with their GA MAP; however, they were significantly better on the half-octave interval pitch-ranking task with their original MAP. The results also showed that participants were significantly more accurate on the single-instrument identification task than on the ensemble instrument identification task, regardless of which MAP they used. Overall, the results show that a GA can be used to successfully create a MAP for music listening, with two participants creating a MAP that they decided to keep at the conclusion of the study.
15

Bookwala, Avinash Turab. "Combined map personalisation algorithm for delivering preferred spatial features in a map to everyday mobile device users". AUT University, 2009. http://hdl.handle.net/10292/920.

Full text
Abstract
In this thesis, we present an innovative and novel approach to personalise maps/geo-spatial services for mobile users. With the proposed map personalisation approach, only relevant data will be extracted from detailed maps/geo-spatial services on the fly, based on a user’s current location, preferences and requirements. This would result in dramatic improvements in the legibility of maps on mobile device screens, as well as significant reductions in the amount of data being transmitted; which, in turn, would reduce the download time and cost of transferring the required geo-spatial data across mobile networks. Furthermore, the proposed map personalisation approach has been implemented into a working system, based on a four-tier client server architecture, wherein fully detailed maps/services are stored on the server, and upon a user’s request personalised maps/services, extracted from the fully detailed maps/services based on the user’s current location, preferences, are sent to the user’s mobile device through mobile networks. By using open and standard system development tools, our system is open to everyday mobile devices rather than smart phones and Personal Digital Assistants (PDA) only, as is prevalent in most current map personalisation systems. The proposed map personalisation approach combines content-based information filtering and collaborative information filtering techniques into an algorithmic solution, wherein content-based information filtering is used for regular users having a user profile stored on the system, and collaborative information filtering is used for new/occasional users having no user profile stored on the system. 
Maps/geo-spatial services are personalised for regular users by analysing the user’s spatial feature preferences automatically collected and stored in their user profile from previous usages, whereas, map personalisation for new/occasional users is achieved through analysing the spatial feature preferences of like-minded users in the system in order to make an inference for the target user. Furthermore, with the use of association rule mining, an advanced inference technique, the spatial features retrieved for new/occasional users through collaborative filtering can be attained. The selection of spatial features through association rule mining is achieved by finding interesting and similar patterns in the spatial features most commonly retrieved by different user groups, based on their past transactions or usage sessions with the system.
16

Badri, Linda. "Mcp : environnement de conception détaillée de logiciels". Lyon, INSA, 1990. http://www.theses.fr/1990ISAL0021.

Full text
Abstract
The importance of software is a fact of which IT project managers are today fully aware. For this reason, users and designers of software products are increasingly demanding and feel an acute and urgent need to develop methodologies for building and validating software products. The approach we propose fits this perspective by implementing strategies that intervene in two phases of the life cycle, detailed design and coding, leading to a program design methodology (MCP). MCP offers the user a guided approach following an established process, in order to rationalize software production and increase its quality by respecting certain production steps. For the design phase, we defined an algorithmic language (LA) adapted to the concepts of modular programming and encapsulation. The chosen approach also relies on top-down analysis by stepwise refinement, for data as well as for programs. In addition, input assistance is provided while a program is being written, relieving the user of a set of tedious and redundant tasks. MCP also provides access to a bank of software entities: abstract types and tools. The transition from the detailed design phase to the coding phase is automatic, thus ensuring continuity between the phases. Beyond these contributions, MCP automatically produces documentation that is more or less detailed according to the user's wishes. In summary, MCP gives the user the means for rigorous, controlled design (through the use of the LA), ensures the automatic production of documentation, and finally allows good communication and a coherent transition between the detailed design and coding phases.
17

Gallo, Melissa A. "Vascular Access: A Navigation Map". Mount St. Joseph University Dept. of Nursing / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=msjdn1619264506925441.

Full text
18

Chowdhury, Souma. "Modified predator-prey (MPP) algorithm for single-and multi-objective optimization problems". FIU Digital Commons, 2008. http://digitalcommons.fiu.edu/etd/2352.

Full text
Abstract
The aim of this work is to develop an algorithm that can solve multidisciplinary design optimization problems. In the predator-prey algorithm, a relatively small number of predators and a much larger number of prey are randomly placed on a two-dimensional lattice with connected ends. The predators are partially or completely biased towards one or more objectives, based on which each predator kills the weakest prey in its neighborhood. A stronger prey created through evolution replaces this prey. In the case of constrained problems, the sum of constraint violations serves as an additional objective. Modifications of the basic predator-prey algorithm have been implemented in this study regarding the selection procedure, apparent movement of the predators, mutation strategy, dynamics of the Pareto convergence, etc. Further modifications have been made to make the algorithm capable of handling equality and inequality constraints. The final modified algorithm is tested on standard constrained/unconstrained, single- and multi-objective optimization problems.
19

Pimenta, Mayra Mercedes Zegarra. "Self-organization map in complex network". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-30102018-111955/.

Full text
Abstract
The Self-Organization Map (SOM) is an artificial neural network that was proposed as a tool for exploratory analysis of high-dimensional data sets, and it is used efficiently for data mining. One of the main research topics in this area is related to data clustering applications. Several algorithms have been developed to perform clustering on data sets; however, the accuracy of these algorithms is data dependent. This thesis is mainly dedicated to the investigation of the SOM from two different approaches: (i) data mining and (ii) complex networks. From the data mining point of view, we analyzed how the performance of the algorithm is related to the distribution of data properties, and we verified the accuracy of the algorithm based on the configuration of its parameters. Likewise, this thesis presents a comparative analysis between the SOM network and other clustering methods. The results revealed that with random parameter configurations the SOM algorithm tends to improve its accuracy when the number of classes is small. It was also observed that, when considering the default configurations of the adopted methods, the spectral approach usually outperformed the other clustering algorithms. Regarding the complex networks approach, we observed that the network structure has a fundamental influence on the algorithm's accuracy. We evaluated the cases at short and middle learning time scales on three different datasets. Furthermore, we show how different topologies also affect the self-organization of the topographic map of the SOM network. The self-organization of the network was studied through the partitioning of the map into groups or communities. Four topological measures were used to quantify the structure of the groups in three network models: modularity, number of elements per group, number of groups per map, and size of the largest group. In small-world (SW) networks, the groups become denser as time increases.
An opposite behavior is found in assortative networks. Finally, we verified that if some perturbation is introduced into the system, such as rewiring in a SW network or the deactivation model, the system cannot organize itself again. Our results enable a better understanding of SOM in terms of parameters and network structure.
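The classic SOM update rule on a regular topology, which the thesis generalizes to other network structures, can be sketched as follows; this is a 1-D toy with illustrative parameter values, not the thesis's experimental setup:

```python
# Minimal SOM on a 1-D chain of neurons: find the best matching unit (BMU)
# for each sample, then pull the BMU and its grid neighbors toward the
# sample, with learning rate and neighborhood radius decaying over time.
import math
import random

def train_som(data, n_units=5, epochs=200, lr0=0.5, radius0=2.0, seed=1):
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]  # 1-D inputs, 1-D map
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)
        radius = max(radius0 * (1 - frac), 0.5)
        x = rng.choice(data)
        bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
        for i in range(n_units):
            # Gaussian neighborhood: updates shrink with grid distance to BMU
            theta = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
            weights[i] += lr * theta * (x - weights[i])
    return weights

# Two clusters near 0.1 and 0.9: different units end up representing them.
data = [0.08, 0.1, 0.12, 0.88, 0.9, 0.92]
w = train_som(data)
```

Replacing the chain distance `(i - bmu)` with a shortest-path distance on, say, a small-world graph is essentially the topology change whose effect on accuracy and map organization the thesis investigates.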
20

Akbari, Masoomeh. "Probabilistic Transitive Closure of Fuzzy Cognitive Maps: Algorithm Enhancement and an Application to Work-Integrated Learning". Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/41401.

Full text
Abstract
A fuzzy cognitive map (FCM) is made up of factors and direct impacts. In graph theory, a bipolar weighted digraph is used to model an FCM; its vertices represent the factors, and the arcs represent the direct impacts. Each direct impact is either positive or negative, and is assigned a weight; in the model considered in this thesis, each weight is interpreted as the probability of the impact. A directed walk from factor F to factor F' is interpreted as an indirect impact of F on F'. The probabilistic transitive closure (PTC) of an FCM (or bipolar weighted digraph) is a bipolar weighted digraph with the same set of factors, but with arcs corresponding to the indirect impacts in the given FCM. Fuzzy cognitive maps can be used to represent structured knowledge in diverse fields, which include science, engineering, and the social sciences. In [P. Niesink, K. Poulin, M. Sajna, Computing transitive closure of bipolar weighted digraphs, Discrete Appl. Math. 161 (2013), 217-243], it was shown that the transitive closure provides valuable new information for its corresponding FCM. In particular, it gives the total impact of each factor on each other factor, which includes both direct and indirect impacts. Furthermore, several algorithms were developed to compute the transitive closure of an FCM. Unfortunately, computing the PTC of an FCM is computationally hard and the implemented algorithms are not successful for large FCMs. Hence, the Reduction-Recovery Algorithm was proposed to make other (direct) algorithms more efficient. However, this algorithm has never been implemented before. In this thesis, we code the Reduction-Recovery Algorithm and compare its running time with the existing software. Also, we propose a new enhancement on the existing PTC algorithms, which we call the Separation-Reduction Algorithm. In particular, we state and prove a new theorem that describes how to reduce the input digraph to smaller components by using a separating vertex. 
In the application part of the thesis, we show how the PTC of an FCM can be used to compare different standpoints on the issue of work-integrated learning.
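The probabilistic transitive closure described above can be illustrated for tiny digraphs by brute force. The sketch below is one plausible reading, not the thesis's algorithm (the exact PTC semantics of the cited paper may differ): each arc is assumed present independently with probability equal to its weight, and we sum, over all arc subsets, the probability that a signed walk of the requested sign exists.

```python
from itertools import product

# Illustrative brute-force sketch (hypothetical helper, not the thesis code):
# each arc (u, v, s, p) points u -> v, carries sign s in {+1, -1}, and is
# present independently with probability p; we enumerate all 2^m subsets.
def ptc_probability(arcs, src, dst, sign):
    total = 0.0
    for present in product([False, True], repeat=len(arcs)):
        weight = 1.0
        sub = []
        for (u, v, s, p), on in zip(arcs, present):
            weight *= p if on else 1.0 - p
            if on:
                sub.append((u, v, s))
        # signed reachability: states are (node, accumulated sign product)
        stack, seen = [(src, +1)], {(src, +1)}
        while stack:
            node, acc = stack.pop()
            for u, v, s in sub:
                if u == node and (v, acc * s) not in seen:
                    seen.add((v, acc * s))
                    stack.append((v, acc * s))
        if (dst, sign) in seen:
            total += weight
    return total

# A -> B (positive, 0.5) and B -> C (positive, 0.5): a positive indirect
# impact of A on C requires both arcs, so its probability is 0.25.
arcs = [("A", "B", +1, 0.5), ("B", "C", +1, 0.5)]
print(ptc_probability(arcs, "A", "C", +1))  # 0.25
```

The exponential enumeration makes the hardness mentioned in the abstract concrete: this only works for a handful of arcs, which is exactly why reduction techniques such as the Reduction-Recovery and Separation-Reduction algorithms matter.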
Los estilos APA, Harvard, Vancouver, ISO, etc.
21

Li, Ka-lun y 李嘉麟. "Newly modified log-map algorithms for turbo codes in mobile environments". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31224775.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
22

Matos, Jody Maick Araujo de. "Graph based algorithms to efficiently map VLSI circuits with simple cells". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2018. http://hdl.handle.net/10183/174523.

Texto completo
Resumen
This thesis introduces a set of graph-based algorithms for efficiently mapping VLSI circuits using simple cells. The proposed algorithms first aim to effectively minimize the number of logic elements implementing the synthesized circuit. Then, we focus significant effort on minimizing the number of inverters in between these logic elements. Finally, this logic representation is mapped into a circuit comprised of only two-input NANDs and NORs, along with the inverters. Two-input XORs and XNORs can also optionally be considered. As we also consider sequential circuits in this work, flip-flops are taken into account as well. Additionally, with high-effort optimization of the number of logic elements, the generated circuits may contain some cells with unfeasible fanout for current technology nodes. In order to fix these occurrences, we propose an area-oriented, level-aware algorithm for fanout limitation. The proposed algorithms were applied over a set of benchmark circuits, and the obtained results show the usefulness of the method. We show that efficient implementations in terms of inverter count, transistor count, area, power and delay can be generated from circuits with a reduced number of both simple cells and inverters, combined with XOR/XNOR-based optimizations. The proposed buffering algorithm can handle all unfeasible fanout occurrences, while (i) optimizing the number of added inverters; and (ii) assigning cells to the inverter tree based on their level criticality. When comparing with academic and commercial approaches, we are able to simultaneously reduce the average number of inverters, transistors, area, power dissipation and delay by up to 48%, 5%, 5%, 5%, and 53%, respectively.
As the adoption of a limited set of simple standard cells has shown benefits for a variety of modern VLSI circuit constraints, such as layout regularity, routability constraints, and/or ultra-low-power constraints, the proposed methods can be of special interest for these applications. Additionally, some More-than-Moore applications, such as printed electronics designs, can also benefit from the proposed approach.
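The fanout-limitation idea can be made concrete with a toy level-by-level buffer-tree count (an assumed simplification that ignores the area and level-criticality objectives the thesis optimizes): sinks are grouped into chunks of at most the fanout limit, each chunk gets an inserted buffer/inverter pair, and the process repeats until the driver's fanout is legal.

```python
def count_buffers(n_sinks, max_fanout):
    """Toy sketch: number of buffers inserted so that no gate drives
    more than max_fanout loads, grouping greedily level by level."""
    buffers = 0
    loads = n_sinks
    while loads > max_fanout:
        groups = -(-loads // max_fanout)  # ceil division
        buffers += groups
        loads = groups  # each group's buffer becomes a load one level up
    return buffers

print(count_buffers(10, 4))  # 3: one level of 3 buffers, root drives 3
```

A real flow, as the abstract notes, would also pick *which* sinks go under which buffer based on their level criticality rather than grouping them arbitrarily.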
Los estilos APA, Harvard, Vancouver, ISO, etc.
23

Kumar, Lalit. "Scalable Map-Reduce Algorithms for Mining Formal Concepts and Graph Substructures". University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1543996580926452.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
24

LITTON, JENNIFER GROMMON. "HEURISTIC DESIGN ALGORITHMS AND EVALUATION METHODS FOR PROPERTY MAPS". University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin981488752.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Hedlin, Daniel. "Evaluating a 3D node graphing algorithm : Developing an algorithm for improving 3D map data and comparing resulting node graphs used for underground mines". Thesis, Luleå tekniska universitet, Datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-75641.

Texto completo
Resumen
Mining companies are rapidly modernizing, and part of this modernization requires the tracking of equipment and personnel within mines. Mobilaris uses 3D maps that represent mine paths using two lines, one representing the left wall and one representing the right. These lines are often discontinuous and mixed with other lines that represent symbols, old lines, erroneous lines, etc. To better track the positions of items within a mine, a node graph that maps the possible cave paths is used. This node graph is partly generated and partly constructed manually, and the manual corrections currently require a great deal of time. The purpose of this thesis is to determine the viability of the current node graphing algorithm by developing a program that improves the map data; the improvement gives the node graphing algorithm the best possible map data, in order to determine where the algorithm is usable and where it is not. Multiple quick, problem-specific algorithms are developed to remove lines that are not part of the two path lines; due to time constraints, these do not cover all of the cases found. An algorithm that merges path lines and an algorithm that connects disconnected path lines are developed; these algorithms greatly improve the quality of the map data. After processing the maps, the improvements to the generated node graphs are small but not insignificant, with a 1% to, in rare cases, 20% reduction in breaks. The greatest improvements are made by postprocessing the generated node graphs; here, the differences between the old node graphs and the new ones are significant, with a 75-80% reduction in breaks. Many of the weaknesses of the current node graphing algorithm are also determined; some of these can be corrected by postprocessing the node graph, while others need either new or additional algorithms to be solved.
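One of the map-cleaning steps described above, connecting disconnected path lines, can be sketched as an endpoint-snapping pass (an assumed simplification, not the thesis's implementation): endpoints of different polyline segments that lie within a tolerance are joined.

```python
import math

def connect_endpoints(segments, tol):
    """Toy sketch: join segments whose endpoints lie within tol of each
    other by snapping the later endpoint onto the earlier one.
    Segments are ((x1, y1), (x2, y2)) tuples; returns a new list."""
    pts = [list(map(list, s)) for s in segments]
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            for a in pts[i]:
                for b in pts[j]:
                    if math.dist(a, b) <= tol and a != b:
                        b[0], b[1] = a[0], a[1]  # snap b onto a
    return [tuple(map(tuple, s)) for s in pts]

# Two wall lines with a 0.05-unit gap become continuous:
segs = [((0, 0), (1, 0)), ((1.05, 0), (2, 0))]
print(connect_endpoints(segs, 0.1))
```

A production version would also have to decide which endpoint is authoritative and avoid snapping across genuinely separate tunnels, which is where the problem-specific filters mentioned in the abstract come in.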
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Dujmović, Vida. "Algorithmic aspects of planar map validation by a mobile agent". Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=33394.

Texto completo
Resumen
In this thesis we present an optimal linear time algorithm for validating the correctness of a map by an active agent (such as a person or a mobile robot). The robot is given a possibly incorrect map (model) of its environment. This model GM is given as an embedding of an undirected planar graph without edge crossings. The correct model GW of the environment, referred to as the underlying real world GW, is unknown to the robot. The underlying real world GW is an embedding of an arbitrary, not necessarily planar, graph. The robot begins at some arbitrary location in GM; this initial location is known to the robot. The robot must determine whether its map GM is consistent with the real world GW with respect to the given initial position.
The robot is assumed to be able to autonomously traverse graph edges, recognize when it has reached a vertex, and locally order edges incident upon the current vertex. The robot cannot measure distances nor does it have a compass, but it is equipped with a single marker that it can leave at a vertex and sense its presence. In addition to the linear plane graph validation algorithm, we present an approach to solve the problem efficiently for some non-planar embeddings. Namely, if the given map GM is a non-planar embedding of a combinatorially planar graph, we can solve this problem with a similar approach such that the complexity remains linear.
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

陳廣輝 y Kwong-fai Chan. "Two results in algorithm design: finding least-weight subsequences with fewer processors and traversing anobstacle-spread terrain without a map". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31209579.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Chan, Kwong-fai. "Two results in algorithm design : finding least-weight subsequences with fewer processors and traversing an obstacle-spread terrain without a map /". [Hong Kong : University of Hong Kong], 1991. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12996592.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Mendel, Thomas [Verfasser] y Stefan [Akademischer Betreuer] Funke. "Improved algorithms for map rendering and route planning / Thomas Mendel ; Betreuer: Stefan Funke". Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2020. http://d-nb.info/1218078774/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Sedláček, Josef. "Algoritmy pro shlukování textových dat". Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-218899.

Texto completo
Resumen
The thesis deals with text mining. It describes the theory of text document clustering as well as the algorithms used for clustering. This theory serves as a basis for developing an application for clustering text data. The application is developed in the Java programming language and contains three clustering methods; the user can choose which method will be used for clustering the collection of documents. The implemented methods are K-medoids, BiSec K-medoids, and SOM (self-organizing maps). The application also includes a validation set, which was specially created for this diploma thesis and is used for testing the algorithms. Finally, the algorithms are compared according to the obtained results.
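The K-medoids method mentioned above can be sketched in a few lines (an illustrative toy, not the thesis's Java implementation): alternate between assigning points to the nearest medoid and re-picking each cluster's medoid as the member minimizing total intra-cluster distance.

```python
def k_medoids(points, k, dist, iters=20):
    """Toy K-medoids: points is a list, dist a pairwise distance
    function. Returns (medoids, labels); initial medoids are the
    first k points."""
    medoids = points[:k]
    for _ in range(iters):
        labels = [min(range(k), key=lambda m: dist(p, medoids[m]))
                  for p in points]
        new = []
        for m in range(k):
            members = [p for p, l in zip(points, labels) if l == m]
            if not members:            # keep medoid of an empty cluster
                new.append(medoids[m])
                continue
            new.append(min(members,
                           key=lambda c: sum(dist(c, p) for p in members)))
        if new == medoids:             # converged
            break
        medoids = new
    return medoids, labels

pts = [1.0, 1.2, 0.9, 10.0, 10.3, 9.8]
meds, labels = k_medoids(pts, 2, lambda a, b: abs(a - b))
print(sorted(meds))  # [1.0, 10.0]
```

Unlike k-means, the medoids are always actual data points, which is why the method works with any distance function on documents (e.g. cosine distance on term vectors) rather than requiring a mean.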
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Bonaciu, M. "Flexible and Scalable Algorithm/Architecture Platform for MP-SoC Design of High Definition Video Compression Algorithms". Phd thesis, 2006. http://tel.archives-ouvertes.fr/tel-00086779.

Texto completo
Resumen
In recent years, chip complexity has grown exponentially. The ability to integrate several processors on the same chip represents an important gain and leads to the concept of the heterogeneous multiprocessor system-on-chip (MP-SoC). This has significantly amplified the computing power delivered by this type of chip. It has even become possible to integrate complex applications on a single chip, applications that require substantial computation, communication, and memory. MPEG4 video processing applications fall into this category. To obtain good implementations in terms of performance, (1) a flexible MPEG4 encoder algorithm was developed that can easily be adapted to different algorithm parameters as well as to different levels of parallelism/pipelining. Then, (2) a flexible modeling approach was used to represent different algorithm and architecture models containing two SMPs. Using these models, (3) an algorithm and architecture exploration at a high level of abstraction was proposed, in order to find the correct algorithm and architecture configurations required for different applications. From these configurations, (4) an automatic flow for implementing RTL architectures was used. With these elements, the MPEG4 encoder was successfully implemented on several specific MP-SoC architectures at the RTL level. The same approach was used to implement the MPEG4 encoder on an existing quad-processor architecture, for different resolutions, frame rates, bitrates, etc.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Ji, Wei-Jhong y 紀韋仲. "Topic map construction using bee colony algorithm". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/xb5u77.

Texto completo
Resumen
Master's thesis
Yuan Ze University
Department of Industrial Engineering and Management
105
With the rapid development of information technology, information overload is becoming a serious problem during the information acquisition process. Information overload leads users to spend more time finding the knowledge they need. To relieve this difficulty, a knowledge map is a systematic approach to revealing the underlying relationships between abundant knowledge sources. However, few studies have focused on optimizing the coordinates of the objects in the map, and too many parameters must be set, which makes those approaches complicated and unintuitive. To solve these problems, this thesis presents a novel knowledge map approach that transforms high-dimensional objects into a 2-dimensional space to help understand the complicated relatedness among high-dimensional important topics. First, papers related to certain domains are collected from a knowledge database; each paper is a knowledge item that contains many keywords. Second, the collected knowledge items are represented in the vector space model (VSM): each keyword is represented as a term vector in an m-dimensional space, where the term frequency-inverse document frequency (TF-IDF) approach is used for term weighting, so that the TF-IDF value increases proportionally with the number of times a keyword appears in the knowledge item. Third, hierarchical clustering is used to find important topics. Additionally, high-dimensional relationships among objects are transformed into a 2-dimensional space using the multi-dimensional scaling method, and the optimal transformation coordinate matrix is determined using the artificial bee colony (ABC) algorithm. This transformation coordinate matrix is then used to construct a two-dimensional knowledge map so that the relationships among all important topics can be visualized easily. According to the experiments, setting an appropriate number of clusters is important for visual perception in the knowledge map.
In addition, the population size and iteration number in the ABC algorithm can affect the results. This thesis also shows an example of using the proposed topic knowledge map for research trend analysis in IoT during the years 2011 to 2016.
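The TF-IDF weighting step in the pipeline above can be sketched as follows (a standard formulation, tf x log(N/df); the thesis may use a different variant):

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists. Returns one dict per document mapping
    term -> tf-idf weight, with tf = raw count and idf = log(N / df)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]

docs = [["topic", "map", "map"], ["topic", "bee", "colony"]]
weights = tfidf(docs)
# "topic" appears in every document, so its idf (and weight) is 0;
# "map" appears twice in doc 0 only, giving weight 2 * log(2) ~ 1.386.
print(weights[0]["map"])
```

These weighted vectors are what the multi-dimensional scaling step would then project into the 2-D map whose coordinates the ABC algorithm optimizes.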
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Cho, Ya-Ting y 卓雅婷. "Iterative MAP algorithm for Gauss-Markov Channel". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/23724818614295416930.

Texto completo
Resumen
Master's thesis
National Chiao Tung University
Department of Communication Engineering
93
In this thesis, we experiment with the idea that the channel-with-memory nature can be nearly weakened to blockwise independence by the insertive transmission of informationless "random bits" (of length no less than the channel memory or channel spread) between two consecutive blocks. We found that these "random bits" can indeed be other parity-check bits generated from interleaved information bits, so that additional coding information can be provided to improve system performance. An exemplified structure that follows this idea is the parallel concatenated convolutional code (PCCC). We thus derived its respective iterative MAP algorithm for a time-varying channel with first-order Gauss-Markov fading, and tested whether the receiver can treat the received vector as blockwise independent, with 2-bit blocks periodically separated by a single parity-check bit from the second component recursive systematic convolutional (RSC) encoder. Simulation results show that the iterative MAP decoder derived under the blockwise independence assumption not only performs close to the CSI (channel state information)-aided decoding scheme but is at most 0.9 dB away from the Shannon limit.
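The MAP/Log-MAP machinery underlying decoders like this one hinges on the Jacobian logarithm, which is small enough to show directly. The exact max* operator is max(a, b) plus a correction term log(1 + e^-|a-b|); the Max-Log-MAP simplification simply drops the correction:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm: log(e^a + e^b), computed stably."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: drops the correction term."""
    return max(a, b)

# The approximation error is largest when the two metrics are equal:
print(max_star(0.0, 0.0) - max_log(0.0, 0.0))  # log(2) ~ 0.693
```

In a Log-MAP decoder the correction term is usually read from a small look-up table, which is exactly the extra cost (and the SNR dependence) that Max-Log-MAP avoids.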
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Blazquez, Carola A. "Decision-rule algorithm for map-matching in transportation". 2005. http://catalog.hathitrust.org/api/volumes/oclc/71242453.html.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Li, Jian-Wen y 李建汶. "Algorithm and Architecture Design for Disparity Map Generation". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/83548893324707976695.

Texto completo
Resumen
Master's thesis
National Chung Hsing University
Department of Electrical Engineering
102
In recent years, automotive electronics has become a popular topic, mainly because we hope driving can be safer and more convenient for drivers. Automotive electronics covers both the inside and the outside of the vehicle: the inside part includes, for example, fatigue detection and road information, while the outside part contains lane departure warning, collision avoidance, and so on. This thesis focuses on the front end of collision avoidance, which estimates the distance between an object and the camera on the car. Our system is designed to avoid collision with objects that have horizontal displacement, so the target of the front end of the system is to find the position and movement of these objects. First, we use Sobel edge detection to find edges as our features, because collisions occur on these vertical edges; another reason is that the disparity of these vertical edges is more reliable when we generate the disparity map. Then we use pyramid-scale images to match points between the left view and the right view, which saves a lot of computation time by reducing the window size and search range simultaneously. Finally, we check whether the disparity maps of the left view and the right view agree; after this step we obtain a more reliable disparity map. Once the disparity map is done, we can directly obtain the distance between object and camera from the disparity value and the camera parameters. However, there are some differences in real situations, so we must correct the formulation by experiment. Because of the long processing time in software, we design a VLSI architecture to perform block matching, which takes the most time when computing the disparity map. The search range and window size of the block matching are 64 and 13x13, respectively, so we design 64 processing units that can be divided into 5 groups; the hardware can therefore perform full-search block matching in parallel.
The block matching circuit is specified for 30 frames per second at 1080p resolution with an operating frequency of 62.21 MHz, and we implemented the chip using a TSMC 90 nm process.
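The core block-matching step (matching a window from the left view against candidates in the right view over a search range) can be sketched in one dimension. This is a toy SAD matcher with assumed window size and search range, nothing like the thesis's 13x13-window, 64-candidate hardware design:

```python
def disparity_1d(left, right, x, win=1, search=8):
    """Toy SAD block matching on one scanline: find the shift d that
    minimizes sum |left[x+i] - right[x-d+i]| over the window."""
    def sad(d):
        return sum(abs(left[x + i] - right[x - d + i])
                   for i in range(-win, win + 1))
    candidates = [d for d in range(search + 1)
                  if x - d - win >= 0 and x + win < len(left)]
    return min(candidates, key=sad)

# The right scanline shows the same scene shifted by 2 pixels, so the
# matcher should recover a disparity of 2 at the feature position.
left  = [0, 0, 0, 10, 20, 30, 0, 0]
right = [0, 10, 20, 30, 0, 0, 0, 0]
print(disparity_1d(left, right, 4, win=1, search=4))  # 2
```

The left-right consistency check mentioned in the abstract would repeat this with the roles of the two views swapped and keep only pixels where the two disparities agree.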
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

HSIEH, CHI-HSUAN y 謝奇軒. "Positioning Algorithm for Feature Matching and Map Building". Thesis, 2017. http://ndltd.ncl.edu.tw/handle/21994126906377550985.

Texto completo
Resumen
Master's thesis
Ming Chuan University
Master's Program, Department of Computer and Communication Engineering
105
In recent years, the evolution of intelligent mobile robots has received more and more attention. Intelligent mobile robots can complete many complicated tasks such as automatic navigation, path planning, indoor positioning, environmental mapping, and so on. In an indoor environment, because smart mobile robots cannot use the global positioning system for self-positioning, other indoor positioning technology is needed to perform simultaneous localization and mapping and establish environmental maps. In this thesis, we propose a feature matching and map overlay positioning algorithm on a robot mobile platform, "Seeker". We use a lidar and an inertial measurement unit (IMU) to obtain environmental information and the robot position. With the proposed feature matching algorithm, the robot uses the movement information to build the map and, after each feature matching, uses it for autonomous positioning and correction, thereby simultaneously building the environmental feature map and achieving self-positioning. Finally, the experimental results show that the feature area is 19.65 mm from the center point and the matching ratio of the characteristic object is 100%. The robot can estimate its location after each move by using the movement estimate obtained through the IMU, effectively merge the maps, and recover its position. The experimental results show that the method of this study is feasible and effective for building environmental maps and self-positioning.
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Harish, D. "Reeb Graphs : Computation, Visualization and Applications". Thesis, 2012. http://hdl.handle.net/2005/3173.

Texto completo
Resumen
Level sets are extensively used for the visualization of scalar fields. The Reeb graph of a scalar function tracks the evolution of the topology of its level sets. It is obtained by mapping each connected component of a level set to a point. The Reeb graph and its loop-free version, called the contour tree, serve as an effective user interface for selecting meaningful level sets and for designing transfer functions for volume rendering. It also finds several other applications in the field of scientific visualization. In this thesis, we focus on designing algorithms for efficiently computing the Reeb graph of scalar functions and using the Reeb graph for effective visualization of scientific data. We have developed three algorithms to compute the Reeb graph of PL functions defined over manifolds and non-manifolds in any dimension. The first algorithm efficiently tracks the connected components of the level set and has the best known theoretical bound on the running time. The second algorithm, which utilizes an alternate definition of Reeb graphs based on cylinder maps, is simple to implement and efficient in practice. The third algorithm aggressively employs the efficient contour tree algorithm and is efficient both theoretically, in terms of the worst-case running time, and practically, in terms of performance on real-world data. This algorithm has the best performance among existing methods and computes the Reeb graph at least an order of magnitude faster than other generic algorithms. We describe a scheme for controlled simplification of the Reeb graph and two different graph layout schemes that help in the effective presentation of Reeb graphs for visual analysis of scalar fields. We also employ the Reeb graph in four different applications – surface segmentation, spatially-aware transfer function design, visualization of interval volumes, and interactive exploration of time-varying data.
Finally, we introduce the notion of topological saliency that captures the relative importance of a topological feature with respect to other features in its local neighborhood. We integrate topological saliency with Reeb graph based methods and demonstrate its application to visual analysis of features.
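The sublevel-set merging that a contour tree (or the Reeb graph of a simple domain) tracks can be sketched with a union-find sweep. This is a standard join-tree fragment, not one of the thesis's algorithms: sweep the vertices in increasing function value and record the values at which two components of the sublevel set merge.

```python
def join_events(values, edges):
    """Sweep vertices of a graph in increasing scalar value; return the
    values at which two sublevel-set components merge (join nodes)."""
    parent = {}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    adj = {v: [] for v in range(len(values))}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    joins = []
    for v in sorted(range(len(values)), key=lambda v: values[v]):
        parent[v] = v
        roots = {find(u) for u in adj[v] if u in parent}
        if len(roots) > 1:          # v merges several components
            joins.append(values[v])
        for r in roots:
            parent[r] = v
    return joins

# Two minima separated by a maximum on a path graph 0 - 1 - 2:
# the components born at the minima merge at the maximum's value.
vals = [0.0, 2.0, 1.0]
edges = [(0, 1), (1, 2)]
print(join_events(vals, edges))  # [2.0]
```

Running the same sweep downward gives the split tree; combining the two yields the contour tree that the third algorithm in the abstract builds on.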
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Huang, Ming-Jen y 黃銘仁. "Study of MAP Selection Algorithm in Hierarchical Mobile IPv6". Thesis, 2005. http://ndltd.ncl.edu.tw/handle/81622359946268733512.

Texto completo
Resumen
Master's thesis
National Central University
Institute of Communication Engineering
93
Mobile IPv4 and Mobile IPv6 were proposed by the IETF as the main protocols for supporting IP mobility. Because of some shortcomings of Mobile IPv6, Hierarchical Mobile IPv6 (HMIPv6) was proposed to minimize the signaling overhead and handover latency by deploying Mobility Anchor Points (MAPs) in the network. In this thesis, we propose a MAP selection algorithm for a network with a tree-based hierarchy. A Mobile Node (MN) can select an appropriate MAP to register with according to its mobility pattern. We also introduce the concept of an abstract MAP, which can effectively reduce the frequency of inter-domain handoffs and thereby minimize the signaling overhead and handover latency. Additionally, we introduce a load balancing mechanism in the abstract MAP to avoid overload in some MAPs. Finally, the performance of the proposed scheme is evaluated through simulation experiments. The simulation results show that our scheme can minimize the handover latency and that the load of each MAP node can be better balanced. They also show that the number of MNs in the network and the mobility pattern of all MNs must be carefully considered to achieve a suitable abstract MAP.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Wu, Ming-Hsien y 吳明憲. "Algorithm Design for Trajectory-based Dynamic Boundary Map Labeling". Thesis, 2019. http://ndltd.ncl.edu.tw/handle/38h2s7.

Texto completo
Resumen
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
107
Traditional map labeling focuses on static maps. Because dynamic maps allow the user to move or rotate the map, traditional map labeling algorithms cannot easily be extended to them. In this thesis, we consider the design of algorithms for trajectory-based dynamic boundary labeling. The goal is to find an automatic method to label the points along a given trajectory of a map in the boundary labeling framework: given a trajectory and a set of points, connect the points to non-overlapping labels on one side of the map boundary. We design an Integer Linear Programming (ILP) formulation that allows sliding ports, as opposed to the fixed ports assumed in much of the previous work. To improve efficiency, heuristic algorithms are also given. Finally, experimental results illustrate the effectiveness of our design.
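A classic observation in one-sided boundary labeling, which formulations like the ILP above generalize, is that matching points to label slots in the same vertical order yields crossing-free leaders. A toy sketch of that rule (an assumed simplification with fixed ports, unlike the sliding ports in the thesis):

```python
def assign_labels(point_ys, slot_ys):
    """Match points to boundary label slots so straight leaders do not
    cross: sort both by y and pair them rank by rank."""
    order = sorted(range(len(point_ys)), key=lambda i: point_ys[i])
    slots = sorted(range(len(slot_ys)), key=lambda j: slot_ys[j])
    assignment = [None] * len(point_ys)
    for p, s in zip(order, slots):
        assignment[p] = s
    return assignment

# Points at y = 5, 1, 3 and slots at y = 0, 2, 4: the lowest point gets
# the lowest slot, and so on, so the leaders never intersect.
print(assign_labels([5, 1, 3], [0, 2, 4]))  # [2, 0, 1]
```

For dynamic, trajectory-based labeling this ordering has to be recomputed (or kept stable) as the view moves, which is what motivates the ILP and heuristics in the thesis.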
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Chiu, Cheng Wei y 邱政維. "Edge based Depth Map Super Resolution Algorithm and Implementation". Thesis, 2015. http://ndltd.ncl.edu.tw/handle/54102968280276306616.

Texto completo
Resumen
Master's thesis
National Chi Nan University
Department of Electrical Engineering
103
In recent years, three-dimensional television (3DTV) has become more popular. The resolution of the depth map can be reduced to save data transmission in 3D video coding; however, the depth quality affects the viewer's 3D experience. To improve the quality of the depth map, high resolution depth map reconstruction is an important issue in 3D synthesis by Depth Image Based Rendering (DIBR) [1]. High resolution (HR) depth map reconstruction techniques can be classified into two categories: single low resolution (LR) depth map upsampling and multiple low resolution depth map upsampling. A single-image reconstruction algorithm needs less execution time to reconstruct a high resolution image than a multiple-image reconstruction algorithm; a multiple-image algorithm, on the other hand, considers the similarity between images and therefore yields higher-quality results. In this study, a depth map super-resolution algorithm based on edge-directed reconstruction is proposed for upsampling the depth map. To reduce execution time, the proposed algorithm uses a single low resolution depth map and edge-enhancement weighting to reconstruct the high resolution depth map. The proposed algorithm is also implemented on an FPGA (DE3-260).
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Liu, Yuan-Cheng y 劉原呈. "A Study of Adaptive Map/Reduce Affinity Propagation Algorithm". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/86737410493918098274.

Texto completo
Resumen
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
102
Affinity Propagation (AP) is a clustering algorithm based on the concept of "message passing" between data points. Unlike most clustering algorithms such as k-means, AP does not require the number of clusters to be determined or estimated before running the algorithm. There is an implementation of AP on Hadoop, a distributed cloud environment, called Map/Reduce Affinity Propagation (MRAP). However, MRAP has a limitation: it is hard to know what value of the "preference" parameter yields an optimal clustering solution. The Adaptive Affinity Propagation Clustering (AAP) algorithm was proposed to overcome this limitation and decide the preference value in AP. In this study, we propose to combine these two methods as Adaptive Map/Reduce Affinity Propagation (AMRAP), which divides the clustering task among multiple mappers and one reducer in Hadoop, and decides suitable preference values individually for each mapper. In the experiments, we compare the clustering results of the proposed AMRAP with the original MRAP method. The experimental results support that the proposed AMRAP method outperforms the original MRAP method in terms of accuracy, Davies-Bouldin index, and Dunn index.
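In affinity propagation the "preference" is the diagonal of the similarity matrix, and a common default starting point, which adaptive schemes such as AAP then tune, is the median of the input similarities. A minimal sketch of that convention (an assumed default, not the AMRAP code):

```python
from statistics import median

def default_preference(similarities):
    """Median of the off-diagonal similarities: a usual starting
    preference for affinity propagation. Higher preferences tend to
    yield more clusters, lower preferences fewer."""
    off_diag = [s for i, row in enumerate(similarities)
                for j, s in enumerate(row) if i != j]
    return median(off_diag)

# Similarities as negative squared distances between 1-D points 0, 1, 5:
pts = [0.0, 1.0, 5.0]
sim = [[-(a - b) ** 2 for b in pts] for a in pts]
print(default_preference(sim))  # -16.0
```

In an AMRAP-style split, each mapper would compute a preference like this from its own data partition rather than from the global similarity matrix.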
42

Cho, Shin Yo y 卓鑫佑. "An Adaptive Maximum-Log-MAP Algorithm for Turbo Decoding". Thesis, 2008. http://ndltd.ncl.edu.tw/handle/63523286777551058801.

Full text
Abstract
Master's thesis
長庚大學
電機工程學研究所
96
Turbo codes were presented by Berrou, Glavieux, and Thitimajshima [1] in 1993, and were shown to achieve performance close to the Shannon limit. Turbo coding has attracted great interest in recent years because of its large coding gains. The soft-in soft-out (SISO) decoder is the most important part of a turbo decoder, and the Max-Log-MAP algorithm is a good compromise between performance and complexity [20]. The Log-MAP algorithm nearly reaches optimal performance but is about half again as complex as the Max-Log-MAP algorithm, owing to the look-up operation required to find the correction factor (a Jacobian term); the Max-Log-MAP algorithm, in turn, typically suffers a performance degradation of about 0.5 dB [14]. Theoretically, the SNR must be estimated when using a Log-MAP constituent decoder, whereas decoding with the Max-Log-MAP decoder has been proven to be SNR independent [22]. In 2000, Pyndiah [12] highlighted that the standard deviation of the extrinsic information is very high in the first decoding steps and decreases as the turbo decoding iterates. To account for this, it was suggested to multiply the extrinsic information at the output of each constituent decoder by a weight that evolves with the decoding iteration number i: the weight takes a small value in the first decoding steps, when the bit error rate (BER) is relatively high, to reduce the effect of the extrinsic information, and it increases as the BER tends to zero. In this thesis, instead of the heuristic procedure used in [12], the extrinsic information at the output of each SISO decoder is adaptively weighted by an efficient scaling scheme that extends the existing sign change ratio stopping criterion [21].
The new adaptive technique counts the number of sign differences in the extrinsic information between two consecutive iterations (i-1) and i of turbo decoding, and then adaptively determines the corresponding weight for each data block. Experiments show that the proposed method not only improves the coding gain of the Max-Log-MAP algorithm toward that of the Log-MAP algorithm but also lowers the decoding delay of the turbo decoding process.
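The sign-change-ratio weighting described above can be sketched as follows; the linear mapping and the weight range (w_min, w_max) are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def adaptive_extrinsic_weight(ext_prev, ext_curr, w_min=0.7, w_max=1.0):
    """Sketch of the sign-change-ratio idea: the fraction of extrinsic
    LLRs whose sign flipped between iterations i-1 and i measures how
    unreliable the extrinsic information still is. A high flip ratio is
    mapped to a small weight, a low ratio to a weight near 1."""
    flips = np.sign(ext_prev) != np.sign(ext_curr)
    ratio = flips.mean()                     # sign change ratio in [0, 1]
    weight = w_max - (w_max - w_min) * ratio
    return weight, weight * ext_curr         # scaled extrinsic information
```

Because the same ratio also drives the stopping criterion in [21], the weight comes essentially for free per data block.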
43

Jheng, Jian-Jhong y 鄭建忠. "Batch Training Algorithm for Mixed-Type Self-Organizing Map with a Fixed-Sized Map". Thesis, 2016. http://ndltd.ncl.edu.tw/handle/8839j2.

Full text
Abstract
Master's thesis
國立雲林科技大學
資訊管理系
104
The Kohonen Self-Organizing Map (SOM) is a well-known unsupervised learning algorithm in the visualization field, and finding useful information or knowledge in real-world data with it is very important. In recent years, many variants of the SOM have improved and extended the original model. For instance, the Generalized Self-Organizing Map (GenSOM) can process categorical as well as mixed-type data. The Batch Generalized Self-Organizing Map (BatchGSOM) is a batch, dynamically growing version of the GenSOM algorithm that runs faster than its predecessors. However, when the real-world data is high-dimensional and large, many neurons are dynamically generated during the training process of BatchGSOM, so the training time increases due to the large amount of computation. Therefore, in this study we propose a fixed-size-map version of the batch extended SOM to improve performance. Experimental results indicate that the proposed approach outperforms the previous approaches in terms of training efficiency.
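The batch update at the heart of such algorithms can be sketched as follows (a minimal fixed-size batch SOM on numeric features only, with illustrative grid size and neighborhood schedule; it is not BatchGSOM itself, which also handles categorical data):

```python
import numpy as np

def batch_som(data, grid_w=4, grid_h=4, epochs=20, sigma0=2.0, sigma1=0.5):
    """Minimal fixed-size batch SOM sketch: each epoch assigns every
    sample to its best-matching unit (BMU), then recomputes all neuron
    weights at once as a neighborhood-weighted mean of the data, the
    batch update that avoids costly per-sample adjustments."""
    rng = np.random.default_rng(0)
    n_units = grid_w * grid_h
    grid = np.array([(i // grid_w, i % grid_w) for i in range(n_units)], float)
    W = data[rng.choice(len(data), n_units)]          # init from samples
    for t in range(epochs):
        sigma = sigma0 * (sigma1 / sigma0) ** (t / max(epochs - 1, 1))
        bmu = np.argmin(((data[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
        # Gaussian neighborhood between every unit and each sample's BMU.
        d2 = ((grid[:, None, :] - grid[bmu][None, :, :]) ** 2).sum(-1)
        h = np.exp(-d2 / (2 * sigma ** 2))            # (n_units, n_samples)
        W = (h @ data) / (h.sum(axis=1, keepdims=True) + 1e-12)
    bmu = np.argmin(((data[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
    return W, bmu
```

Keeping the map size fixed bounds the per-epoch cost at O(samples x units), whereas a dynamically growing map lets the unit count, and hence the cost, climb during training.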
44

Cheng, Wei y 鄭維. "Mental Map Preserving Graph Drawing Using Spring Algorithms". Thesis, 2011. http://ndltd.ncl.edu.tw/handle/53870453632086563405.

Full text
Abstract
Master's thesis
國立臺灣大學
電機工程學研究所
99
Graph drawing is an important topic in information visualization. For interaction with a drawing system or for presenting data transitions, so-called "dynamic graph drawing", a factor called "mental map preservation" should be added to the current algorithms. In a dynamic drawing problem, if we ignore the relations between the new graph and the old one while redrawing, and simply let the algorithm redraw automatically as the data changes, the layout structure of the new drawing may differ from the old one, undermining the user's ease of identification. Users' mental maps can be preserved by balancing the avoidance of structural swaying against the need for layout neatness when redrawing the old graph into the new one. This thesis, building on the spring algorithm for its balance between speed and robustness, proposes a view different from former research and designs a new mental-map-preserving algorithm accordingly.
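One simple way to realize the idea, sketched here under assumed force constants, is to add an anchor force that pulls every node toward its position in the previous drawing on top of the usual spring and repulsion forces; the `anchor` coefficient is an illustrative knob trading layout neatness against stability:

```python
import math

def spring_layout_preserving(edges, old_pos, anchor=0.3,
                             iters=200, k=1.0, step=0.05):
    """Sketch of mental-map-preserving force-directed layout: standard
    spring forces (attraction along edges, pairwise repulsion) plus an
    anchor force toward each node's old position, so the new drawing
    stays recognizable."""
    pos = {v: list(p) for v, p in old_pos.items()}
    nodes = list(pos)
    for _ in range(iters):
        force = {v: [0.0, 0.0] for v in nodes}
        for i, u in enumerate(nodes):              # pairwise repulsion
            for v in nodes[i + 1:]:
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / (d * d)
                force[u][0] += f * dx / d; force[u][1] += f * dy / d
                force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        for u, v in edges:                         # spring attraction
            dx = pos[v][0] - pos[u][0]
            dy = pos[v][1] - pos[u][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d - k                              # rest length k
            force[u][0] += f * dx / d; force[u][1] += f * dy / d
            force[v][0] -= f * dx / d; force[v][1] -= f * dy / d
        for v in nodes:                            # pull toward old drawing
            force[v][0] += anchor * (old_pos[v][0] - pos[v][0])
            force[v][1] += anchor * (old_pos[v][1] - pos[v][1])
        for v in nodes:
            pos[v][0] += step * force[v][0]
            pos[v][1] += step * force[v][1]
    return {v: tuple(p) for v, p in pos.items()}
```

With `anchor=0` this degenerates to an ordinary spring layout; larger values keep nodes closer to where the user last saw them.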
45

Adaixo, Michael Carlos Gonçalves. "Influence map-based pathfinding algorithms in video games". Master's thesis, 2014. http://hdl.handle.net/10400.6/5517.

Full text
Abstract
Path search algorithms, i.e., pathfinding algorithms, are used by intelligent agents to solve shortest path problems, in domains ranging from computer games and applications to robotics. Pathfinding is a particular kind of search in which the objective is to find a path between two nodes, a node being a point in space to which an intelligent agent can travel. Moving agents through physical or virtual worlds is a key part of simulating intelligent behavior: if a game agent cannot navigate its surrounding environment while avoiding obstacles, it does not seem intelligent. Hence pathfinding is among the core tasks of AI in computer games. Pathfinding algorithms work well for single agents navigating an environment. In real-time strategy (RTS) games, potential fields (PF) are used for multi-agent navigation in large and dynamic game environments. Influence maps, by contrast, are not used in pathfinding. Influence maps are a spatial reasoning technique that helps bots and players make decisions about the course of the game. They represent game information, e.g., events and faction power distribution, and are ultimately used to give game agents the knowledge to make strategic or tactical decisions. Strategic decisions are aimed at an overall goal, e.g., capturing an enemy location and winning the game; tactical decisions are based on small, precise actions, e.g., where to install a turret or where to hide from the enemy. This dissertation focuses on a novel path search method that combines state-of-the-art pathfinding algorithms with influence maps in order to achieve better time performance, lower memory consumption, and smoother paths.
Pathfinding algorithms are used by intelligent agents to solve the shortest-path problem, from computer games to robotics. Pathfinding is a particular kind of search algorithm whose objective is to find the shortest path between two nodes, a node being a point in space through which an intelligent agent can navigate. Mobile agents in physical and virtual worlds are a key component of the simulation of intelligent behavior: if an agent cannot navigate its surroundings without colliding with obstacles, it does not appear intelligent. Consequently, pathfinding is among the fundamental tasks of artificial intelligence in video games. Pathfinding algorithms work well with single agents navigating an environment. In real-time strategy (RTS) games, potential fields (PF) are used for multi-agent navigation in large, dynamic environments. Influence maps, by contrast, are not used in pathfinding. Influence maps are a spatial reasoning technique that helps intelligent agents and players make decisions about the course of the game. They represent game information, e.g., events and power distribution, which is used to give agents knowledge for strategic or tactical decisions. Strategic decisions aim at an overall goal, e.g., capturing an enemy zone and winning the game; tactical decisions are based on small, precise actions, e.g., where to install a defense turret or where to hide from the enemy. This dissertation focuses on a new technique that combines pathfinding algorithms with influence maps in order to achieve better search-time performance and lower memory consumption, as well as visually smoother paths.
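One illustrative way such a combination can work (a sketch, not the dissertation's specific method) is to fold the influence value of each cell into the A* step cost, so the search balances path length against crossing high-influence territory:

```python
import heapq
import itertools

def astar_influence(grid_w, grid_h, start, goal, influence, w=1.0):
    """Influence-aware A* on a 4-connected grid: each step costs 1 plus
    w times the destination cell's influence, so paths detour around
    dangerous (high-influence) cells when the detour is cheaper.
    `influence` maps (x, y) -> non-negative float; missing cells cost 0."""
    def h(p):                                # Manhattan distance, admissible
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # since steps >= 1
    tie = itertools.count()                  # tie-breaker for the heap
    open_q = [(h(start), next(tie), 0.0, start, None)]
    came, best_g = {}, {start: 0.0}
    while open_q:
        _, _, gc, cur, parent = heapq.heappop(open_q)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:                      # reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < grid_w and 0 <= nxt[1] < grid_h:
                ng = gc + 1.0 + w * influence.get(nxt, 0.0)
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(open_q, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None
```

Raising `w` makes agents increasingly avoid enemy influence even at the cost of longer routes.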
46

Ciao-SiangSiao y 蕭喬翔. "Depth Map Enhancement and Hole Filling Algorithm for DIBR System". Thesis, 2012. http://ndltd.ncl.edu.tw/handle/76309129974903476512.

Full text
Abstract
Master's thesis
國立成功大學
電機工程學系碩博士班
100
This thesis proposes a depth map enhancement and hole-filling algorithm that uses temporal reference information to improve the visual quality of synthesized virtual views in a DIBR system. The depth map enhancement algorithm maintains scaling-ratio, depth-value, and temporal consistency across different time instants to enhance the three-dimensional (3D) effect and reduce flicker. It estimates the Z-displacement and position in 3D space from the scaling ratio of an object in order to keep the scaling ratio and the depth value consistent, which existing depth map improvement algorithms cannot achieve. Many depth map enhancement algorithms process each frame individually, so the temporal consistency of the resulting depth map is weak. Some enhancement algorithms do include temporal information, but their ability to reduce temporal inconsistency is poor, especially when objects are moving. Therefore, this thesis also proposes a motion-compensated IIR depth filter that smooths the depth value along the moving trajectory of an object while preserving edges, to reduce temporal inconsistency. To fill holes in a synthesized virtual view, the proposed algorithm adds temporal information, namely the uncovered region in a temporal reference frame, to improve visual quality. The results show that the proposed depth map enhancement and hole-filling algorithm enhances the visual quality of the synthesized virtual view.
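The motion-compensated IIR depth filter idea can be sketched as follows; for brevity this sketch assumes one global motion vector per frame and illustrative values for the blending factor and the occlusion/edge threshold:

```python
import numpy as np

def mc_iir_depth_filter(depth_frames, motion_vectors, alpha=0.3, edge_thresh=8.0):
    """Sketch of a motion-compensated IIR depth filter: each pixel is
    blended with the previous *filtered* frame sampled along the motion
    vector, smoothing depth over time along the object trajectory;
    blending is skipped when the temporal depth difference is large
    (a likely occlusion or depth edge), preserving discontinuities."""
    prev = depth_frames[0].astype(float)
    out = [prev]
    for t in range(1, len(depth_frames)):
        cur = depth_frames[t].astype(float)
        h, w = cur.shape
        filt = cur.copy()
        dx, dy = motion_vectors[t]           # one global vector per frame
        for y in range(h):
            for x in range(w):
                sy, sx = y - dy, x - dx      # where this pixel came from
                if 0 <= sy < h and 0 <= sx < w:
                    ref = prev[sy, sx]
                    if abs(cur[y, x] - ref) < edge_thresh:
                        filt[y, x] = alpha * cur[y, x] + (1 - alpha) * ref
        out.append(filt)
        prev = filt                          # IIR: feed the filtered frame back
    return out
```

Feeding the filtered frame back (rather than the raw one) is what makes the filter IIR: temporal noise decays geometrically along the trajectory.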
47

Bo-SyunLi y 李柏勳. "Content-adaptive Depth Map Enhancement Algorithm Based on Motion Distribution". Thesis, 2014. http://ndltd.ncl.edu.tw/handle/kwudd6.

Full text
Abstract
Master's thesis
國立成功大學
電機工程學系
102
This thesis proposes a motion-based, content-adaptive depth map enhancement algorithm to improve the quality of the depth map and reduce artifacts in synthesized virtual views. A depth cue is extracted from the motion distribution in a specific moving-camera scenario: when the camera pans horizontally, the nearer an object is to the camera, the larger its apparent motion, and vice versa, so the relative distances between the camera and objects can be obtained from the motion distribution. Moreover, the distance between a moving object and the camera should remain similar and consistent in both camera-fixed and camera-panning scenarios, so the depth values of a moving object should not vary sharply. This thesis also provides a bi-directional, motion-compensated infinite impulse response (IIR) depth low-pass filter to enhance the temporal consistency of depth maps. The contribution of this thesis is to use these depth cues and the motion distribution to enhance the stability and consistency of depth maps in the spatio-temporal domain. Experimental results show that the synthesized views are better in both objective and subjective measurements than those produced with the original depth maps or with related depth map enhancement algorithms.
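The panning-camera depth cue described above (larger apparent motion means a nearer object) can be sketched as a direct normalization of motion magnitude into a relative depth map; the 8-bit-style depth range is an assumed convention, not the thesis's exact mapping:

```python
import numpy as np

def motion_to_relative_depth(motion_mag, d_near=255.0, d_far=0.0):
    """Sketch of the panning-camera depth cue: under horizontal camera
    panning, closer objects show larger apparent motion (motion
    parallax), so normalized motion magnitude maps to relative nearness.
    Output: a depth map where larger values mean nearer objects."""
    m = motion_mag.astype(float)
    span = m.max() - m.min()
    if span == 0:                            # no parallax: flat scene
        return np.full_like(m, (d_near + d_far) / 2.0)
    norm = (m - m.min()) / span              # 0 = slowest = farthest
    return d_far + norm * (d_near - d_far)
```

This only yields *relative* ordering, which is why the thesis combines it with other cues and temporal filtering.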
48

Chu, Chun-Yen y 朱俊諺. "An improved affinity propagation clustering algorithm for map/reduce environment". Thesis, 2013. http://ndltd.ncl.edu.tw/handle/24211284880547020442.

Full text
Abstract
Master's thesis
國立臺灣科技大學
資訊工程系
101
The Affinity Propagation (AP) algorithm is a clustering algorithm that does not require the number of clusters K to be set in advance. AP works through message passing between the data points and is well suited to data that is not well understood. In this paper, we extend the original AP to Map/Reduce Affinity Propagation (MRAP), implemented on Hadoop, a distributed cloud environment. The MRAP architecture is divided into multiple mappers and one reducer in Hadoop; with multiple processing nodes, data processing becomes faster and more efficient. In the experiments, we compare the clustering results of the proposed MRAP with those of the K-means method. The results show that the proposed MRAP method performs well in terms of accuracy and the Davies–Bouldin index. Moreover, applying the proposed MRAP method reduces the number of iterations the K-means method needs before convergence, irrespective of the data dimensionality.
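The map/reduce split described above can be sketched as follows; the partitioning scheme and the toy `cluster_fn` stand-in are illustrative assumptions, with a real deployment running one Affinity Propagation instance per mapper:

```python
def mrap_sketch(data, n_mappers, cluster_fn):
    """Sketch of the MRAP architecture: each mapper clusters one data
    partition independently (simulated sequentially here), emitting its
    local exemplars; the single reducer then clusters the union of all
    local exemplars to produce the global exemplar set."""
    parts = [data[i::n_mappers] for i in range(n_mappers)]    # map phase
    local = [ex for p in parts for ex in cluster_fn(p)]       # mapper outputs
    return cluster_fn(local)                                  # reduce phase
```

The reducer only ever sees exemplars, not raw points, which is what keeps the single-reducer stage from becoming the bottleneck.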
49

Chen, Heng-Yu y 陳恆裕. "An Efficient Self-Organizing Map Algorithm Based on Reference Point". Thesis, 2006. http://ndltd.ncl.edu.tw/handle/04399477595600384164.

Full text
Abstract
Master's thesis
國立中興大學
資訊科學系所
94
The self-organizing map (SOM) is an excellent mechanism for data mining that has been used to map high-dimensional data onto a two- (or three-) dimensional feature map. Despite its success in practical applications, SOM suffers from drawbacks such as the trial-and-error search for a neighborhood-preserving feature map. In this paper, we present an efficient self-organizing map algorithm, based on a reference point and two threshold values, to improve the performance of SOM. One threshold value defines the search boundary used to find the Best-Matching Unit (BMU) for an input vector; the other defines the search boundary within which the BMU finds its neighbors. Moreover, we propose a new method to lower the number of computations required when the Efficient Initialization Scheme for the Self-Organizing Feature Map Algorithm is applied: it reduces the time complexity from O(n²) to O(n) in the step of finding the initial neurons. We ran our algorithm on data sets from the Yeast database and the UCI KDD Archive to illustrate the performance improvement of the proposed method. In the experiments, the execution time of the original SOM algorithm is cut in half by our scheme, while the sum of squared-error distances in our scheme is also smaller than that of SOM. Having improved the time complexity, this method is well suited as the first-layer algorithm of the two-level SOM.
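The reference-point BMU search can be sketched as follows; the single `radius` threshold and the fallback to a full scan are illustrative simplifications of the thesis's two-threshold scheme:

```python
import numpy as np

def bmu_reference_search(W, x, ref, d_ref, radius):
    """Sketch of the reference-point idea: with each neuron's distance
    to a fixed reference point precomputed in d_ref, only neurons whose
    reference distance lies within `radius` of the input's reference
    distance are examined when searching for the Best-Matching Unit
    (BMU), instead of scanning the whole map."""
    dx = np.linalg.norm(x - ref)
    cand = np.flatnonzero(np.abs(d_ref - dx) <= radius)
    if cand.size == 0:                       # empty band: fall back to full scan
        cand = np.arange(len(W))
    dists = np.linalg.norm(W[cand] - x, axis=1)
    return int(cand[np.argmin(dists)])
```

By the triangle inequality, a neuron with |d_ref - dist(x, ref)| > radius is at least `radius` away from x, so a well-chosen radius prunes most neurons without missing the true BMU.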
50

"Soft self-organizing map". Chinese University of Hong Kong, 1995. http://library.cuhk.edu.hk/record=b5888572.

Full text
Abstract
by John Pui-fai Sum.
Thesis (M.Phil.)--Chinese University of Hong Kong, 1995.
Includes bibliographical references (leaves 99-104).
Chapter 1 --- Introduction --- p.1
Chapter 1.1 --- Motivation --- p.1
Chapter 1.2 --- Idea of SSOM --- p.3
Chapter 1.3 --- Other Approaches --- p.3
Chapter 1.4 --- Contribution of the Thesis --- p.4
Chapter 1.5 --- Outline of Thesis --- p.5
Chapter 2 --- Self-Organizing Map --- p.7
Chapter 2.1 --- Introduction --- p.7
Chapter 2.2 --- Algorithm of SOM --- p.8
Chapter 2.3 --- Illustrative Example --- p.10
Chapter 2.4 --- Property of SOM --- p.14
Chapter 2.4.1 --- Convergence property --- p.14
Chapter 2.4.2 --- Topological Order --- p.15
Chapter 2.4.3 --- Objective Function of SOM --- p.15
Chapter 2.5 --- Conclusion --- p.17
Chapter 3 --- Algorithms for Soft Self-Organizing Map --- p.18
Chapter 3.1 --- Competitive Learning and Soft Competitive Learning --- p.19
Chapter 3.2 --- How does SOM generate ordered map? --- p.21
Chapter 3.3 --- Algorithms of Soft SOM --- p.23
Chapter 3.4 --- Simulation Results --- p.25
Chapter 3.4.1 --- One dimensional map under uniform distribution --- p.25
Chapter 3.4.2 --- One dimensional map under Gaussian distribution --- p.27
Chapter 3.4.3 --- Two dimensional map in a unit square --- p.28
Chapter 3.5 --- Conclusion --- p.30
Chapter 4 --- Application to Uncover Vowel Relationship --- p.31
Chapter 4.1 --- Experiment Set Up --- p.32
Chapter 4.1.1 --- Network structure --- p.32
Chapter 4.1.2 --- Training procedure --- p.32
Chapter 4.1.3 --- Relationship Construction Scheme --- p.34
Chapter 4.2 --- Results --- p.34
Chapter 4.2.1 --- Hidden-unit labeling for SSOM2 --- p.34
Chapter 4.2.2 --- Hidden-unit labeling for SOM --- p.35
Chapter 4.3 --- Conclusion --- p.37
Chapter 5 --- Application to vowel data transmission --- p.42
Chapter 5.1 --- Introduction --- p.42
Chapter 5.2 --- Simulation --- p.45
Chapter 5.2.1 --- Setup --- p.45
Chapter 5.2.2 --- Noise model and demodulation scheme --- p.46
Chapter 5.2.3 --- Performance index --- p.46
Chapter 5.2.4 --- Control experiment: random coding scheme --- p.46
Chapter 5.3 --- Results --- p.47
Chapter 5.3.1 --- Null channel noise (σ = 0) --- p.47
Chapter 5.3.2 --- Small channel noise (0 ≤ σ ≤ 1) --- p.49
Chapter 5.3.3 --- Large channel noise (1 ≤ σ ≤ 7) --- p.49
Chapter 5.3.4 --- Very large channel noise (σ > 7) --- p.49
Chapter 5.4 --- Conclusion --- p.50
Chapter 6 --- Convergence Analysis --- p.53
Chapter 6.1 --- Kushner and Clark Lemma --- p.53
Chapter 6.2 --- Condition for the Convergence of Jou's Algorithm --- p.54
Chapter 6.3 --- Alternative Proof on the Convergence of Competitive Learning --- p.56
Chapter 6.4 --- Convergence of Soft SOM --- p.58
Chapter 6.5 --- Convergence of SOM --- p.60
Chapter 7 --- Conclusion --- p.61
Chapter 7.1 --- Limitations of SSOM --- p.62
Chapter 7.2 --- Further Research --- p.63
Chapter A --- Proof of Corollary 1 --- p.65
Chapter A.1 --- Mean Average Update --- p.66
Chapter A.2 --- Case 1: Uniform Distribution --- p.68
Chapter A.3 --- Case 2: Logconcave Distribution --- p.70
Chapter A.4 --- Case 3: Loglinear Distribution --- p.72
Chapter B --- Different Senses of neighborhood --- p.79
Chapter B.1 --- Static neighborhood: Kohonen's sense --- p.79
Chapter B.2 --- Dynamic neighborhood --- p.80
Chapter B.2.1 --- Mou-Yeung Definition --- p.80
Chapter B.2.2 --- Martinetz et al. Definition --- p.81
Chapter B.2.3 --- Tsao-Bezdek-Pal Definition --- p.81
Chapter B.3 --- Example --- p.82
Chapter B.4 --- Discussion --- p.84
Chapter C --- Supplementary to Chapter 4 --- p.86
Chapter D --- Quadrature Amplitude Modulation --- p.92
Chapter D.1 --- Amplitude Modulation --- p.92
Chapter D.2 --- QAM --- p.93
Bibliography --- p.99
