Dissertations / Theses on the topic 'Port filtering'

Consult the top 21 dissertations / theses for your research on the topic 'Port filtering.'


1

Holmeide, Ø., and J.-F. Gauvin. "Ethernet Packet Filtering for FTI - Part II." International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577404.

Full text
Abstract:
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA
Network loads close to Ethernet wire speed and latency-sensitive data in a Flight Test Instrumentation (FTI) system represent challenging requirements for FTI network equipment. Loss of data due to network congestion, overflow at the end nodes, and packet latency above a few hundred microseconds can be critical during a flight test. To avoid these problems, several advanced packet filtering and network optimization functions are required to achieve the best possible performance and thus avoid loss of data. This paper gives insight into how to properly engineer an Ethernet-based FTI network and how to use advanced Ethernet switch techniques such as Quality of Service (QoS) and rate shaping.
APA, Harvard, Vancouver, ISO, and other styles
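A token-bucket shaper is one standard way to implement the kind of rate shaping the paper discusses at a switch port; the sketch below is purely illustrative (the class name, parameters and values are assumptions, not anything from the paper):

```python
# Hypothetical token-bucket rate shaper: packets are forwarded only while
# enough byte "tokens" remain; tokens refill at the configured rate, capped
# at the burst size. Illustrative only, not code from the paper.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, packet_len, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len
            return True                 # forward the packet
        return False                    # drop (or queue) the packet
```

A shaper like this smooths bursts so that downstream FTI nodes are not overrun even when the offered load momentarily approaches wire speed.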
2

Fortuna, Filipe André Cancela. "Collaborative Filtering para recomendação de produtos on-line." Master's thesis, Faculdade de Economia da Universidade do Porto, 2008. http://hdl.handle.net/10216/53871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fortuna, Filipe André Cancela. "Collaborative Filtering para recomendação de produtos on-line." Dissertation, Faculdade de Economia da Universidade do Porto, 2008. http://hdl.handle.net/10216/53871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Faber, Marc. "On-Board Data Processing and Filtering." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596433.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
One of the requirements resulting from mounting pressure on flight test schedules is the reduction of the time needed for data analysis, in pursuit of shorter test cycles. This requirement has several ramifications: the demand to record and process not just raw measurement data but also data converted to engineering units in real time, optimized use of the bandwidth available for the telemetry downlink, and ultimately shorter procedures for disseminating pre-selected recorded data among the different analysis groups on the ground. A promising way to address these needs is to implement more processing intelligence and CPU power directly on the on-board flight test equipment, which provides the ability to process complex data in real time. For instance, data acquired at different hardware interfaces (which may be compliant with different standards) can be directly converted to easier-to-handle engineering units, leading to faster extraction and analysis of the actual data contents of the on-board signals and buses. Another central goal is the efficient use of the available telemetry bandwidth. Real-time data reduction via intelligent filtering is one approach to this challenging objective. The data filtering process should be performed simultaneously with an all-data-capture recording, and the user should be able to select the interesting data easily, without building PCM formats on board or carrying out decommutation on the ground. This data selection should be as easy as possible for the user, and the on-board FTI devices should provide seamless and transparent data transmission, making quick data analysis viable.
On-board data processing and filtering has the potential to become the main way to handle the challenge of FTI data acquisition and analysis in a more comfortable and effective way.
APA, Harvard, Vancouver, ISO, and other styles
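One plausible form of the "intelligent filtering" for real-time data reduction described above is send-on-delta (deadband) filtering, where a parameter is downlinked only when it moves appreciably from the last transmitted value; the function below is an illustrative sketch, not the paper's implementation:

```python
def deadband_filter(samples, threshold):
    """Send-on-delta data reduction (illustrative): forward a sample only
    when it differs from the last transmitted value by more than
    `threshold`; otherwise suppress it to save telemetry bandwidth."""
    sent = [samples[0]]                 # always transmit the first sample
    for s in samples[1:]:
        if abs(s - sent[-1]) > threshold:
            sent.append(s)
    return sent
```

On a slowly varying channel this can cut the transmitted sample count dramatically while bounding the reconstruction error by the threshold.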
5

Cantarello, Luca. "Use of a Kalman filtering technique for near-surface temperature analysis." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13455/.

Full text
Abstract:
A statistical post-processing of the hourly 2-meter temperature fields from the Nordic convective-scale operational numerical weather prediction model Arome MetCoOp 2.5 km has been developed and tested at the Norwegian Meteorological Institute (MET Norway). The objective of the work is to improve the representation of the temperature close to the surface by combining model data and in-situ observations for climatological and hydrological applications. In particular, a statistical scheme based on a bias-aware Local Ensemble Transform Kalman Filter has been adapted to the spatial interpolation of surface temperature. This scheme starts from an ensemble of 2-meter temperature fields derived from Arome MetCoOp 2.5 km and, taking into account the observations provided by the MET Norway network, produces an ensemble of analysis fields characterised by a grid spacing of 1 km. The model best estimate employed in the interpolation procedure is given by the latest available forecast, subsequently corrected for the model bias. The scheme has been applied off-line, and the final analysis is performed independently at each grid point. The final analysis ensemble has been evaluated, and its mean value has been shown to significantly improve on the best estimate of Arome MetCoOp 2.5 km in representing the 2-meter temperature fields, in terms of both accuracy and precision, with a reduction in the root-mean-square error as well as in the bias, and an improvement in reproducing the cold extremes during wintertime. More generally, the analysis ensemble displays better forecast verification scores, with an overall reduction in the Brier score and its reliability component and an increase in the resolution term for the zero-degrees threshold. However, the final ensemble spread remains too narrow, though not as narrow as the model output.
APA, Harvard, Vancouver, ISO, and other styles
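The analysis step of a Kalman-filter scheme like the one above can be sketched as a single linear update combining a background field with observations; this minimal version omits the thesis's bias correction and ensemble transform, and the matrices in it are generic:

```python
import numpy as np

def kalman_update(xb, B, y, H, R):
    """One analysis step: combine the background (forecast) xb with
    background-error covariance B and observations y, using observation
    operator H and observation-error covariance R."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
    xa = xb + K @ (y - H @ xb)                     # analysis mean
    Pa = (np.eye(len(xb)) - K @ H) @ B             # analysis covariance
    return xa, Pa
```

With equal background and observation error variances, the analysis at an observed grid point lands halfway between forecast and observation, while unobserved points keep their background value and uncertainty.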
6

Garcia, Krystine. "Bioinformatics Pipeline for Improving Identification of Modified Proteins by Neutral Loss Peak Filtering." Ohio University / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1440157843.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

PEIREIRA, Alysson Bispo. "Sistemas de recomendação baseados em contexto físico e social." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/19521.

Full text
Abstract:
The overload of data available on the internet makes recommender systems indispensable tools for helping users find relevant items or content. Recommendation techniques have been applied in many different domains: whether recommending movies, music, friends, places or news, recommender systems exploit the information available to learn users' preferences and produce useful recommendations. Collaborative filtering is one of the most widely used strategies; its quality depends on the number of available ratings and on the quality of the rating-prediction algorithm. Recent studies show that information from social networks can be very useful for increasing the accuracy of recommendations. Just as in the real world, users in the virtual world seek recommendations and advice from friends before buying an item or consuming a service, and this kind of information can help define the social context of a recommendation. Beyond the social dimension, physical and temporal information can be used to define the physical context of each recommendation: company, location and weather conditions are good examples of physical elements that lead a user to prefer certain items. A recommendation process that does not take contextual elements into account can leave the user with a very poor experience of a mistakenly recommended item. This dissertation investigates collaborative filtering techniques that use context, and more specifically post-filtering, in order to produce recommendations that help users find relevant items. In this kind of technique, a base recommender system first provides recommendations for the target user; then only the items considered relevant to contexts previously identified in the target user's preferences are kept. The implemented techniques were applied in two experiments with datasets from different domains: one of events and one of movies. For event recommendation we investigated the use of physical context (i.e., time and place) and social context (i.e., friends in the social network) associated with the items suggested to users; for movie recommendation we again investigated the use of social context. Applying post-filtering on top of three baseline collaborative filtering algorithms made it possible to recommend items more accurately, as demonstrated in the experiments.
APA, Harvard, Vancouver, ISO, and other styles
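The contextual post-filtering strategy the dissertation studies can be sketched as follows, assuming a hypothetical per-item probability of relevance in the current context (the names and the 0.5 threshold are illustrative, not from the dissertation):

```python
def contextual_post_filter(ranked, context_prob, threshold=0.5):
    """Contextual post-filtering sketch: a base recommender produces
    `ranked` = [(item, score), ...]; items whose estimated probability of
    being relevant in the current context clears the threshold are kept
    and re-scored by that probability."""
    kept = [(item, score * context_prob.get(item, 0.0))
            for item, score in ranked
            if context_prob.get(item, 0.0) >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)
```

The base recommender is untouched; context is applied purely as a filter/re-ranking step afterwards, which is what distinguishes post-filtering from contextual pre-filtering or contextual modeling.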
8

Hultén, Peter. "Managing a cross-institutional setting : a case study of a Western firm's subsidiary in the Ukraine." Doctoral thesis, Umeå universitet, Företagsekonomi, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-67937.

Full text
Abstract:
This study explores the development of a Western firm's subsidiary in the Ukraine and sets out to contribute to the theoretical development of managing subsidiaries in the Post-Soviet market. A cross-institutional approach to analysing the subsidiary has been adopted to explore influence from the institutional setting of the parent firm and from local institutions. In the theoretical framework, special attention is directed to studies analysing the challenges that Western firms encounter when operating in the Post-Soviet market; institutional theory therefore serves as a framework for theories on market entries, networks and management transfers. The empirical study is based on a case study conducted in connection with a training project for local employees of a Western firm's subsidiary operating in the Ukraine. Besides being a source of inspiration, the training project provided good access to respondents and insights into the challenges that the subsidiary faced. The analysis shows that the introduction of the Western firm's management in the subsidiary is reflected in the local employees' forming of identities. A clear pattern is that local employees' development of identities in line with the Western firm's norms is supported by socialisation in settings dominated by the Western firm. A setting dominated by conflicts between Western and local norms, in contrast, resulted in the development of conflict identities. The analysis of the subsidiary's managing of influences from the local institutional setting indicates that this concerned filtering. Strikingly, the subsidiary was successful in managing influences when the filtering conditions were characterised by consonance. Looking into aspects making the filtering of external influences difficult, the analysis points to barter trade and local actors' boundary spanning towards authorities in Ukrainian society as aspects creating dissonances and vacuum.
Thus, influences characterised by dissonance and/or vacuum were particularly difficult for the subsidiary to manage. One of the major contributions of this thesis is the cross-institutional approach to analysing developments in a subsidiary in the Post-Soviet market. By applying this approach, the study suggests that the managing of a cross-institutional setting concerns both internal and external boundary spanning. Of vital importance for internal boundary spanning are issues influencing local employees' forming of a 'we' with the Western firm's representatives. The standpoint is that this concerns local employees' identity identification, which is a new perspective on management transfers towards a subsidiary in the Post-Soviet market. Concerning the managing of external boundary spanning, the study points to the importance of observing local actors' ways of dealing with dissonances and vacuum in local networks.
APA, Harvard, Vancouver, ISO, and other styles
9

Roohi, Milad. "Performance-Based Seismic Monitoring of Instrumented Buildings." ScholarWorks @ UVM, 2019. https://scholarworks.uvm.edu/graddis/1140.

Full text
Abstract:
This dissertation develops a new concept for performance-based monitoring (PBM) of instrumented buildings subjected to earthquakes. This concept is achieved by simultaneously combining and advancing existing knowledge from structural mechanics, signal processing, and performance-based earthquake engineering paradigms. The PBM concept consists of 1) optimal sensor placement, 2) dynamic response reconstruction, 3) damage estimation, and 4) loss analysis. Within the proposed concept, the main theoretical contribution is the derivation of a nonlinear model-based observer (NMBO) for state estimation in nonlinear structural systems. The NMBO employs an efficient iterative algorithm to combine a nonlinear model and limited noise-contaminated response measurements to estimate the complete nonlinear dynamic response of the structural system of interest, in the particular case of this research, a building subject to an earthquake. The main advantage of the proposed observer over existing nonlinear recursive state estimators is that it is specifically designed to be physically realizable as a nonlinear structural model. This results in many desirable properties, such as improved stability and efficiency. Additionally, a practical methodology is presented to implement the proposed PBM concept in the case of instrumented steel, wood-frame, and reinforced concrete buildings as the three main types of structural systems used for construction in the United States. The proposed methodology is validated using three case studies of experimental and real-world large-scale instrumented buildings. The first case study is an extensively instrumented six-story wood frame building tested in a series of full-scale seismic tests in the final phase of the NEESWood project at the E-Defense facility in Japan. 
The second case study is a 6-story steel moment resisting frame building located in Burbank, CA, and uses the recorded acceleration data from the 1991 Sierra Madre and 1994 Northridge earthquakes. The third case is a seven-story reinforced concrete structure in Van Nuys, CA, which was severely damaged during the 1994 Northridge earthquake. The results presented in this dissertation constitute the most accurate and the highest resolution seismic response and damage measure estimates obtained for instrumented buildings. The proposed PBM concept will help structural engineers make more informed and swift decisions regarding post-earthquake assessment of critical instrumented building structures, thus improving earthquake resiliency of seismic-prone communities.
APA, Harvard, Vancouver, ISO, and other styles
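A linear Luenberger observer is the simplest instance of the model-plus-measurement-correction idea behind the dissertation's nonlinear model-based observer (NMBO); the sketch below uses made-up system matrices and omits all nonlinearity, so it is only a conceptual illustration:

```python
import numpy as np

def observer_errors(A, C, L, x0, xhat0, steps):
    """Linear observer sketch: the observer runs a copy of the structural
    model and corrects it with the measured output error. Returns the
    norm of the state-estimation error at each step; for a well-chosen
    gain L (A - L*C stable) this error decays to zero."""
    x, xhat, errs = np.array(x0, float), np.array(xhat0, float), []
    for _ in range(steps):
        y = C @ x                              # measured response
        xhat = A @ xhat + L @ (y - C @ xhat)   # model + innovation correction
        x = A @ x                              # true (unmeasured) state
        errs.append(float(np.linalg.norm(x - xhat)))
    return errs
```

The NMBO generalizes this pattern: the "copy of the model" is a full nonlinear structural model, and the correction is designed so the observer itself remains physically realizable as a structural model.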
10

Korogui, Rubens Hideo. "Filtragem robusta, modelos com atraso e certificação de desempenho : aplicação em sistemas eletricos." [s.n.], 2009. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260897.

Full text
Abstract:
Advisor: Jose Claudio Geromel. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
One of the themes considered in this work is the robust H2 filtering design problem for linear time-invariant continuous- and discrete-time systems. We assume the systems are subject to parametric uncertainty, initially of the polytopic type and later of the type admitting a linear fractional representation. We compute lower and upper bounds on the H2 norm of the estimation error by solving minimax problems written as linear matrix inequalities (LMIs). In this way we obtain an optimality gap that certifies the performance of the designed robust filter. We then apply the proposed methodology to estimation problems involving a three-phase induction motor and a transmission line with a stub. This work also introduces what we call a linear comparison system, whose purpose is to serve as an alternative for studying time-delay systems: using the Rekasius substitution, we construct a linear time-invariant system that yields information on the stability and the H∞ norm of this class of systems. Because the formulation is in terms of state-space matrices, the results can be extended without further difficulty to state-feedback controller design and to filter design for time-delay systems.
APA, Harvard, Vancouver, ISO, and other styles
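For a fixed, known system the H2 performance that the thesis bounds over an uncertainty set can be computed exactly from a Gramian; the sketch below does this for a stable discrete-time system by fixed-point iteration (a toy illustration of the quantity being certified, not the thesis's LMI machinery):

```python
import numpy as np

def h2_norm_discrete(A, B, C, iters=500):
    """H2 norm of a stable discrete-time system (A, B, C), computed from
    the controllability Gramian P solving P = A P A' + B B' by simple
    fixed-point iteration: ||G||_2 = sqrt(trace(C P C'))."""
    P = np.zeros_like(A, dtype=float)
    for _ in range(iters):
        P = A @ P @ A.T + B @ B.T
    return float(np.sqrt(np.trace(C @ P @ C.T)))
```

In the robust setting the system matrices are uncertain, so instead of one such number the thesis computes a lower and an upper bound on it over the whole uncertainty set, and the gap between them certifies the filter's performance.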
11

Cárdenas, Rueda Miguel Ángel. "Localização híbrida para um veículo autônomo em escala usando fusão de sensores : Hybrid localization for an scale R/C car using sensor fusion." [s.n.], 2013. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265907.

Full text
Abstract:
Advisor: Janito Vaqueiro Ferreira. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.
Localization is one of the key problems of autonomous navigation: determining position and orientation at any time while tracking a trajectory. One of the most widely used methods for localizing a mobile robot is odometry; although it cannot by itself provide accurate position and orientation, it is the basis of the relative localization methods. There are also absolute localization methods, such as the commonly used GPS, which provide more accurate information about the robot's state with respect to a fixed reference. But when errors of different natures occur and consequently introduce noise into any localization system, redundant information is needed to establish a more accurate estimate. This master's thesis proposes a localization system for a scale vehicle, based on classical and visual odometry, in order to provide a robust estimate of position and orientation together with a simulated global positioning system.
APA, Harvard, Vancouver, ISO, and other styles
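The classical way to fuse relative odometry with an absolute fix such as GPS is a Kalman predict/update cycle; the 1-D sketch below uses illustrative noise variances and is far simpler than the visual-odometry fusion in the dissertation:

```python
def fuse_step(x, P, odo_delta, gps, q=0.1, r=4.0):
    """One 1-D Kalman predict/update cycle fusing relative odometry
    (prediction) with an absolute GPS fix (correction). q and r are
    illustrative odometry and GPS noise variances, not thesis values."""
    x, P = x + odo_delta, P + q        # predict: integrate odometry, grow uncertainty
    K = P / (P + r)                    # gain: how much to trust the GPS fix
    return x + K * (gps - x), (1.0 - K) * P
```

The fused estimate lands between the dead-reckoned prediction and the GPS fix, weighted by their variances, and the posterior uncertainty shrinks after every absolute measurement, which is exactly why redundant sensing improves on either source alone.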
12

Ben, mabrouk Mouna. "PA efficiency enhancement using digital linearization techniques in uplink cognitive radio systems." Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0296/document.

Full text
Abstract:
For a battery-driven terminal, the power amplifier (PA) efficiency must be optimized. As a consequence, non-linearities may appear at the PA output in the transmission chain. To compensate for these distortions, one solution consists in using a digital post-distorter based on a Volterra model of both the PA and the channel, together with a Kalman filter (KF) based algorithm to jointly estimate the Volterra kernels and the transmitted symbols.
Here, we address this issue for an uplink cognitive radio (CR) system, in which additional constraints must be taken into account: since the CR terminal may switch from one sub-band to another, the PA non-linearities may vary over time. We therefore propose a digital post-distorter based on an interacting multiple model combining several KF-based estimators with different model-parameter dynamics. This makes it possible to track the time variations of the Volterra kernels while keeping accurate estimates when those parameters are static. The single-carrier case is addressed and validated by simulation results, and the relevance of the proposed approach is confirmed by measurements carried out on a (300-3000) MHz broadband PA.
APA, Harvard, Vancouver, ISO, and other styles
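Because a Volterra model is linear in its kernels, kernel tracking can be posed as Kalman filtering with the kernels as the state; the sketch below does this for a memoryless, real-valued 2nd-order model (a strong simplification of the paper's joint kernel/symbol estimation, with made-up noise settings):

```python
import numpy as np

def track_kernels(u, y, q=1e-6, r=1e-2):
    """Kalman-filter tracking of the kernels of a memoryless 2nd-order
    Volterra PA model y = h1*u + h2*u**2: the model is linear in the
    kernels, so the filter state is theta = [h1, h2]."""
    theta, P = np.zeros(2), np.eye(2)
    for uk, yk in zip(u, y):
        H = np.array([uk, uk ** 2])            # regressor for this sample
        P = P + q * np.eye(2)                  # random-walk kernel drift
        K = P @ H / (H @ P @ H + r)            # Kalman gain
        theta = theta + K * (yk - H @ theta)   # innovation update
        P = P - np.outer(K, H) @ P
    return theta
```

The small random-walk term `q` is what lets the filter follow kernel drift when the CR terminal hops to a new sub-band, while keeping estimates tight when the kernels are static.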
13

Oré, Huacles Gian Carlos, and García Alexis Vásquez. "Desarrollo de un equipo electrónico/computacional orientado a extraer información de interés para el diagnóstico de Mildiu en plantaciones de quinua de la costa peruana basado en procesamiento digital de imágenes." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2021. http://hdl.handle.net/10757/654958.

Full text
Abstract:
This thesis proposes a portable and ergonomic piece of equipment that captures images of quinoa crops and, through an effective processing method, detects the segments where the plant is affected by mildew disease (indicated by a characteristic yellowing of the leaves), producing a numerical result that quantifies the effect. The project addresses the main shortcoming of the qualitative analysis on which the client currently bases the diagnosis of the disease: it offers a quantitative solution for identifying and measuring crop damage, giving the agronomist the vital data needed to apply the appropriate dose of fungicide to the plantations and obtain a better-quality product. The work is based on two segmentation processes: first, starting from the original captured image, vegetation is segmented from the background using the L*a*b color model, a two-dimensional histogram, filtering and binarization; second, starting from the image resulting from the first process, yellowing is segmented within the vegetation using two-dimensional histograms, filtering, binarization and eccentricity properties. For validation, 50 images of a quinoa crop at the Instituto Nacional de Innovación Agraria (INIA) - Lima headquarters were processed with the developed equipment and verified by the specialist agronomist. Finally, Cohen's kappa index was used to compare the results, yielding a value of 0.789.
APA, Harvard, Vancouver, ISO, and other styles
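The two-stage idea (segment vegetation first, then segment yellowing within the vegetation) can be illustrated with crude RGB thresholds; the thesis itself works in L*a*b* with two-dimensional histograms, and every threshold below is a made-up illustrative value:

```python
import numpy as np

def yellowing_fraction(rgb, green_thresh=60, ratio=0.9):
    """Toy two-stage segmentation: (1) mask vegetation as pixels with a
    strong green channel, (2) within that mask, flag yellowing as pixels
    whose red channel approaches the green one (healthy leaves have
    r << g; yellowed leaves have r close to g). Returns the yellowed
    fraction of the vegetation. Thresholds are illustrative only."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    vegetation = (g > green_thresh) & (g > b)    # crude plant mask
    yellow = vegetation & (r > ratio * g)        # yellowing within the mask
    return yellow.sum() / max(vegetation.sum(), 1)
```

The numeric fraction plays the role of the thesis's quantitative damage indicator, which the agronomist can then map to a fungicide dose.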
14

Singh, Latchman. "Speech enhancement for forensic applications." Thesis, Queensland University of Technology, 1998. https://eprints.qut.edu.au/36080/1/36080_Singh_1998.pdf.

Full text
Abstract:
Forensic audio recordings are usually made with a single covert microphone in non-ideal conditions. In non-ideal conditions the recordings are highly susceptible to various types of noise. The noise is usually broadband noise, co-talker interference, impulsive noise, narrow band noise or convolutional noise. There are existing speech enhancement techniques available to suppress most of the noise types mentioned, but when the noise is of a considerable level the performance of most enhancement techniques tend to decrease significantly. This thesis presents a study of speech enhancement techniques that are applicable to the enhancement of forensic audio recordings or that can be used in a forensic recording environment. It considers both pre-processing and post-processing speech enhancement techniques. This thesis investigates the improvement of some of the existing speech enhancement techniques as well as proposing some new ones. The performance of the improved and proposed speech enhancement techniques were evaluated objectively using the segmental signal-to-noise ratio (SNRseg) and subjectively using the mean opinion score (MOS). A review of the current speech enhancement techniques is presented in the thesis and is also used as a reference in some comparisons. The current speech enhancement techniques considered are those that are applicable to forensic audio recordings. The performance of the existing techniques are assessed in the comparisons with the speech enhancement techniques proposed by this thesis. Two pre-processing speech enhancement techniques are presented in this thesis. The first pre-processing speech enhancement technique is designed to improve existing broadband noise suppression techniques by the use of frequency shift keying (FSK) signals. It is based on a simple concept, which is to insert a known tone of sufficient amplitude into the silent segments of a speech signal prior to transmission. 
At the receiver, detecting the silent or non-speech segments used for estimating the noise becomes a simpler and more accurate task due to the inserted tone. The second pre-processing speech enhancement technique is designed to suppress a wide range of noises and is based on zero padding. Zero padding involves inserting a zero-valued sample between successive speech samples prior to transmission. The inserted zero-valued samples allow accurate characterisation of the noise in the adjacent speech samples. At the receiver the noise is estimated from the sample positions allocated to the zero-valued samples. Several post-processing speech enhancement techniques are presented in this thesis. The first post-processing speech enhancement technique is designed for the suppression of co-talker interference and uses a combination of dynamic time warping (DTW) and dual-channel adaptive filtering. This technique is proposed for the suppression of co-talker interference when the noise reference signal is obtainable at a later instance, as in the case of many covert forensic recordings. The corrupted speech signal and the noise reference signal are aligned using DTW and then the co-talker interference is suppressed using a dual-channel adaptive filter. The second post-processing speech enhancement technique is designed for broadband noise suppression and is based on spectral subtraction, but it incorporates the masking properties of the human auditory system for improved performance. Auditory masking is used to find the masking threshold, below which the noise is no longer perceivable. Only those noise components above the masking threshold are suppressed. This approach is taken to reduce by-products such as musical noise. The third post-processing speech enhancement technique is designed for broadband noise suppression and is based on spectral subtraction, but it exploits the human auditory system's perception of frequency. 
Critical band analysis is used to group frequencies that are similarly perceived, which are then treated as a single entity by the enhancement technique.
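The spectral-subtraction family of broadband noise suppressors discussed in this abstract can be sketched as follows. This is a minimal magnitude spectral subtraction in Python (NumPy assumed), not the thesis's masking-threshold variant; the frame length, over-subtraction factor and spectral floor are illustrative choices.

```python
import numpy as np

def spectral_subtract(noisy, noise_only, frame_len=256, over_sub=1.0, floor=0.02):
    """Basic magnitude spectral subtraction (sketch).

    noisy:      1-D signal corrupted by additive broadband noise.
    noise_only: noise-only samples (e.g. from detected silent segments)
                used to estimate the average noise magnitude spectrum.
    """
    n_frames = len(noise_only) // frame_len
    # Average noise magnitude spectrum over the noise-only frames.
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise_only[i*frame_len:(i+1)*frame_len]))
         for i in range(n_frames)], axis=0)

    out = np.zeros(len(noisy) // frame_len * frame_len)
    for i in range(len(noisy) // frame_len):
        frame = noisy[i*frame_len:(i+1)*frame_len]
        spec = np.fft.rfft(frame)
        mag = np.abs(spec) - over_sub * noise_mag      # subtract the noise estimate
        mag = np.maximum(mag, floor * np.abs(spec))    # spectral floor limits musical noise
        # Keep the noisy phase, as in classic spectral subtraction.
        out[i*frame_len:(i+1)*frame_len] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), frame_len)
    return out
```

A masking-threshold variant, as in the thesis, would suppress only the noise components lying above a psychoacoustic threshold instead of applying a uniform subtraction to every bin.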
APA, Harvard, Vancouver, ISO, and other styles
15

Garcia, Lorca Federico. "Filtres récursifs temps réel pour la détection de contours : optimisations algorithmiques et architecturales." Paris 11, 1996. http://www.theses.fr/1996PA112439.

Full text
Abstract:
This thesis addresses two distinct aspects, conceptual and implementational, covered by the four innovations presented. While these are illustrated through an application to the Deriche edge detector, they generalize readily to other detectors, whether based on computing local maxima of the first derivative or zero crossings of the Laplacian. Symmetric or anti-symmetric infinite impulse response filters can be realized in cascade form. The smoothing filter can be defined by numerical integration of the optimal derivative filter. Any wide-kernel edge-detection filter can be regarded as a wide-kernel two-dimensional smoothing filter followed by a simple Sobel filter. The use of serial block operators offers the best area/speed trade-off for integration in ASICs or FPGAs. We propose a hardwired real-time architecture for the recursive version of the Deriche edge-detection filter that is optimal in compactness and simplicity, and we describe the method that leads to our solution. Through this experience, we wish to pass on to CAD tool designers a number of ideas that, in our view, should be exploited so that tools such as data-flow graphs or synchronous languages can effectively assist the architect with the problems of scheduling, allocation and temporal folding of the graph onto the architecture.
APA, Harvard, Vancouver, ISO, and other styles
16

Vaerenbergh, Steven Van. "Kernel Methods for Nonlinear Identification, Equalization and Separation of Signals." Doctoral thesis, Universidad de Cantabria, 2010. http://hdl.handle.net/10803/10673.

Full text
Abstract:
En la última década, los métodos kernel (métodos núcleo) han demostrado ser técnicas muy eficaces en la resolución de problemas no lineales. Parte de su éxito puede atribuirse a su sólida base matemática dentro de los espacios de Hilbert generados por funciones kernel ("reproducing kernel Hilbert spaces", RKHS); y al hecho de que resultan en problemas convexos de optimización. Además, son aproximadores universales y la complejidad computacional que requieren es moderada. Gracias a estas características, los métodos kernel constituyen una alternativa atractiva a las técnicas tradicionales no lineales, como las series de Volterra, los polinomios y las redes neuronales. Los métodos kernel también presentan ciertos inconvenientes que deben ser abordados adecuadamente en las distintas aplicaciones, por ejemplo, las dificultades asociadas al manejo de grandes conjuntos de datos y los problemas de sobreajuste ocasionados al trabajar en espacios de dimensionalidad infinita. En este trabajo se desarrolla un conjunto de algoritmos basados en métodos kernel para resolver una serie de problemas no lineales, dentro del ámbito del procesado de señal y las comunicaciones. En particular, se tratan problemas de identificación e igualación de sistemas no lineales, y problemas de separación ciega de fuentes no lineal ("blind source separation", BSS). Esta tesis se divide en tres partes. La primera parte consiste en un estudio de la literatura sobre los métodos kernel. En la segunda parte, se proponen una serie de técnicas nuevas basadas en regresión con kernels para resolver problemas de identificación e igualación de sistemas de Wiener y de Hammerstein, en casos supervisados y ciegos. Como contribución adicional se estudia el campo del filtrado adaptativo mediante kernels y se proponen dos algoritmos recursivos de mínimos cuadrados mediante kernels ("kernel recursive least-squares", KRLS). 
En la tercera parte se tratan problemas de decodificación ciega en que las fuentes son dispersas, como es el caso en comunicaciones digitales. La dispersidad de las fuentes se refleja en que las muestras observadas se agrupan, lo cual ha permitido diseñar técnicas de decodificación basadas en agrupamiento espectral. Las técnicas propuestas se han aplicado al problema de la decodificación ciega de canales MIMO rápidamente variantes en el tiempo, y a la separación ciega de fuentes post no lineal.<br>In the last decade, kernel methods have become established techniques to perform nonlinear signal processing. Thanks to their foundation in the solid mathematical framework of reproducing kernel Hilbert spaces (RKHS), kernel methods yield convex optimization problems. In addition, they are universal nonlinear approximators and require only moderate computational complexity. These properties make them an attractive alternative to traditional nonlinear techniques such as Volterra series, polynomial filters and neural networks. This work aims to study the application of kernel methods to resolve nonlinear problems in signal processing and communications. Specifically, the problems treated in this thesis consist of the identification and equalization of nonlinear systems, both in supervised and blind scenarios, kernel adaptive filtering and nonlinear blind source separation. In a first contribution, a framework for identification and equalization of nonlinear Wiener and Hammerstein systems is designed, based on kernel canonical correlation analysis (KCCA). As a result of this study, various other related techniques are proposed, including two kernel recursive least squares (KRLS) algorithms with fixed memory size, and a KCCA-based blind equalization technique for Wiener systems that uses oversampling. 
The second part of this thesis treats two nonlinear blind decoding problems of sparse data, posed under conditions that do not permit the application of traditional clustering techniques. For these problems, which include the blind decoding of fast time-varying MIMO channels, a set of algorithms based on spectral clustering is designed. The effectiveness of the proposed techniques is demonstrated through various simulations.
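The fixed-memory KRLS idea mentioned in this abstract can be sketched with a naive sliding-window variant: keep only the last M samples as the dictionary and re-solve the regularised kernel system at each step. This is a didactic simplification (a true recursive algorithm would update the inverse kernel matrix incrementally instead of re-solving); the Gaussian kernel width, window size and regularisation values below are arbitrary.

```python
import numpy as np

def gauss_kernel(a, b, sigma=1.0):
    """Gaussian kernel matrix between two 1-D sample sets."""
    d = np.asarray(a, dtype=float)[:, None] - np.asarray(b, dtype=float)[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

class SlidingWindowKRLS:
    """Naive fixed-memory kernel RLS sketch: window of M samples."""
    def __init__(self, window=30, lam=1e-2, sigma=1.0):
        self.window, self.lam, self.sigma = window, lam, sigma
        self.x, self.y = [], []
        self.alpha = np.zeros(0)

    def update(self, x_new, y_new):
        self.x.append(x_new); self.y.append(y_new)
        self.x = self.x[-self.window:]      # oldest samples drop out
        self.y = self.y[-self.window:]
        K = gauss_kernel(self.x, self.x, self.sigma)
        # Regularised kernel least squares over the current window.
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(self.x)),
                                     np.asarray(self.y, dtype=float))

    def predict(self, x):
        if not self.x:
            return 0.0
        return float(gauss_kernel([x], self.x, self.sigma) @ self.alpha)
```

The fixed window bounds both memory and per-step cost, which is what makes such schemes usable for online, time-varying identification problems.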
APA, Harvard, Vancouver, ISO, and other styles
17

JI, Jia-Tong, and 紀嘉桐. "A porn sites filtering mechanism with the ability of identifying medical web sites." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/14164112185634537460.

Full text
Abstract:
Master's thesis<br>National Dong Hwa University<br>Master Program in Information Management<br>101<br>On the Internet, pornography has become an enormous problem, and many filtering mechanisms have been proposed for filtering pornographic pages. However, because of the similarities between medical pages and pornographic pages, it is difficult to correctly distinguish medical pages from pornographic ones: in both pornographic and medical sites, suspected pornographic keywords may appear in HTML fields. In this work, we build a keyword library and add a medical class to reduce the false-positive rate. Based on the retrieved web features, the ID3 algorithm is used to build decision trees and find association rules, from which the scores of the different classes are calculated. Finally, we propose a systematic method to accurately identify whether an unknown site is pornographic or medical. According to the experiments, the accuracy, f-measure, and precision of the filtering method proposed in this paper reach 98.11%, 98.25%, and 98.00%, respectively, with a precision value of 98.44% and a recall value of 98.33%. The results show that creating a medical class to identify unknown websites has a positive effect.
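As a rough illustration of the ID3 step this abstract relies on, the following Python sketch builds a decision tree over discrete page features by maximising information gain. The toy features (suspect keyword present, medical terms present) and the labels are invented for illustration and are not the thesis's actual feature set.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain of splitting on attribute index attr."""
    by_value = {}
    for row, lab in zip(rows, labels):
        by_value.setdefault(row[attr], []).append(lab)
    remainder = sum(len(s) / len(labels) * entropy(s) for s in by_value.values())
    return entropy(labels) - remainder

def id3(rows, labels, attrs):
    """Build an ID3 decision tree over discrete attributes."""
    if len(set(labels)) == 1:                      # pure node -> leaf
        return labels[0]
    if not attrs:                                  # no attributes left -> majority
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    tree = {'attr': best, 'branches': {}}
    for value in set(r[best] for r in rows):
        sub = [(r, l) for r, l in zip(rows, labels) if r[best] == value]
        srows, slabels = zip(*sub)
        tree['branches'][value] = id3(list(srows), list(slabels),
                                      [a for a in attrs if a != best])
    return tree

def classify(tree, row, default='porn'):
    while isinstance(tree, dict):
        tree = tree['branches'].get(row[tree['attr']], default)
    return tree
```

In the thesis's setting, an extra "medical" class lets pages that trigger suspect keywords but also carry medical features be routed away from the porn label, reducing false positives.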
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Wei-Ting, and 陳韋廷. "Applying Decision Tree Data Mining Technique to Track the Concept Drift of Porn Web Filtering." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/90241852350617504678.

Full text
Abstract:
Master's thesis<br>National Dong Hwa University<br>Master Program in Information Management<br>99<br>With the development of the Internet, the proliferation of Internet pornography seriously affects the physical and mental development of young people. How to filter porn web pages effectively has become an issue worth exploring. This study proposes a filtering mechanism: a porn web filter that uses a decision tree-based approach to track concept drift, with a weighted sliding window that calculates a concept-drift weight score to help the decision tree identify drifting porn web pages. The filtering mechanism is divided into a training phase and an implementation phase. In the training phase, we extract the features of porn web pages and give each rule a score. In the implementation phase, we extract the features of an unknown web page and score it. Keywords that appear with higher frequency in a particular period are given higher weights in the keyword library. This approach allows the filter to adapt to real-world concept drift in web pages and improves the recognition accuracy for porn web pages. The filter can thus adapt to a dynamic web environment, improving on traditional machine learning classification methods. In the experiments, the combination of the porn keyword database and the decision-tree concept-drift method achieves a classification accuracy of 97.06%. This accuracy is not lower than that reported for other machine learning methods for recognizing porn web pages.
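The weighted-sliding-window scoring idea described above can be sketched as below. The linear 1/(age+1) weight scheme, the window granularity and all names are invented stand-ins for the thesis's actual weighting, shown only to make the mechanism concrete: keywords frequent in recent traffic dominate the score, so the filter follows drift.

```python
from collections import deque, Counter

class DriftingKeywordScorer:
    """Hypothetical weighted sliding window over recently observed keywords.

    Each call to add_window() pushes the keyword counts of one time period;
    the oldest period falls out once n_windows is exceeded.
    """
    def __init__(self, n_windows=4):
        self.windows = deque(maxlen=n_windows)

    def add_window(self, keywords):
        self.windows.append(Counter(keywords))

    def score(self, page_keywords):
        total = 0.0
        # Newest window first: age 0 gets weight 1, age 1 gets 1/2, ...
        for age, counts in enumerate(reversed(self.windows)):
            weight = 1.0 / (age + 1)
            total += weight * sum(counts[k] for k in page_keywords)
        return total
```

A page whose keywords surged in the most recent window thus outscores one matching only older vocabulary, which is the behaviour a drift-aware filter needs.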
APA, Harvard, Vancouver, ISO, and other styles
19

Shen, Shan-Ting, and 沈劭庭. "Filtering Porn Site by Constructing Decision Trees of Multiple Categories Based on Website’s Functional Orientation." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/47872588209682268562.

Full text
Abstract:
Master's thesis<br>National Dong Hwa University<br>Master Program in Information Management<br>99<br>The transparency, interactivity and openness of the Internet have made improper pornographic photos, information and videos easily accessible to users. In order to reduce the negative impact on minors, this thesis proposes an efficient filtering mechanism for porn sites. Our method divides websites into four categories: blog-oriented, selling-oriented, portal-oriented and sex-trade-oriented. The ID3 decision tree data mining algorithm is applied to establish a decision tree for every category, so that association rules between the features of a web page and its type (porn or legitimate) can be found. These rules are then used to filter unknown websites. According to the experimental results, the average accuracy of the filter proposed in this study is up to 97.68%, f-measure up to 97.38% and precision up to 98.26%. The results show that differentiating the types of websites before filtering contributes to the identification of porn sites.
APA, Harvard, Vancouver, ISO, and other styles
20

Silva, Tiago Emanuel Nogueira da. "Extraction and quantification of perivascular spaces based on vesselness filtering: A potential biomarker for Fabry disease." Master's thesis, 2020. http://hdl.handle.net/10451/45535.

Full text
Abstract:
Integrated master's thesis, Biomedical Engineering and Biophysics (Clinical Engineering and Medical Instrumentation), Universidade de Lisboa, Faculdade de Ciências, 2020<br>Perivascular Spaces (PVS) allow interstitial solutes to be cleared from the brain, contributing to the brain's homeostasis. Dysfunction of these pathways can occur if there is deposition of substances causing stagnation of cerebrospinal fluid (CSF). Quantitative analysis of PVS on Magnetic Resonance Images (MRI) is important for understanding their relationship with dementia, stroke and vascular diseases. Manual delineation of PVS is very time-consuming, and clinicians have to check multiple views in order to obtain an accurate segmentation. Therefore, a method providing reliable visual and quantitative information about a patient's PVS would enable continuous monitoring, giving clinical support throughout the development of each disease. Moreover, it would help to understand and characterize PVS, providing useful insights into their role in normal brain physiology and small vessel disease (SVD). This work focused on the segmentation and further quantification of PVS in the brain using a vesselness filter (Frangi filter). The Frangi filter, typically used to detect vessel-like or tube-like structures and fibers in volumetric image data, was capable of delineating, mapping and extracting elongated and dot-like PVS features that were not easily seen in the unfiltered images. However, this method requires careful parameter optimization and further validation, since it presented different output behaviour in each MRI acquisition (T1-weighted, T2-weighted and T2-FLAIR). Also, quantitative analysis of the Frangi segmentation indicates that PVS visual rating scores may have a positive association with PVS volume. 
Statistically significant differences were found when clustering patients diagnosed with Multiple System Atrophy (MSA), Progressive Supranuclear Palsy (PSP) and Fabry disease (FD) and comparing them with control patients.
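A single-scale 2-D version of the Frangi vesselness measure can be sketched as follows (the thesis works on 3-D MRI volumes and sweeps multiple scales; `sigma`, `beta` and the heuristic for `c` below are common default choices, not the thesis's tuned parameters). NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frangi_2d(image, sigma=2.0, beta=0.5, c=None):
    """Single-scale 2-D Frangi vesselness for bright tubular structures."""
    # Hessian entries via Gaussian derivatives at scale sigma.
    Hxx = gaussian_filter(image, sigma, order=(0, 2))  # d2/dx2 (axis 1)
    Hyy = gaussian_filter(image, sigma, order=(2, 0))  # d2/dy2 (axis 0)
    Hxy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian, sorted by magnitude.
    tmp = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy ** 2)
    mu1 = (Hxx + Hyy) / 2.0 + tmp
    mu2 = (Hxx + Hyy) / 2.0 - tmp
    swap = np.abs(mu1) > np.abs(mu2)
    lam1 = np.where(swap, mu2, mu1)   # smaller magnitude (along the vessel)
    lam2 = np.where(swap, mu1, mu2)   # larger magnitude (across the vessel)
    rb2 = (lam1 / (lam2 + 1e-12)) ** 2        # blobness measure R_b^2
    s2 = lam1 ** 2 + lam2 ** 2                # second-order structureness S^2
    if c is None:
        c = 0.5 * np.sqrt(s2.max())           # common heuristic for c
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    v[lam2 > 0] = 0.0   # bright ridges require the large eigenvalue to be negative
    return v
```

A multi-scale version would take the per-pixel maximum of `frangi_2d` over a range of `sigma` values, matching tubes of different calibres, which is what PVS quantification across MRI sequences needs.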
APA, Harvard, Vancouver, ISO, and other styles
21

Guertin-Renaud, Jean-Philippe. "Flou de mouvement réaliste en temps réel." Thèse, 2014. http://hdl.handle.net/1866/11500.

Full text
Abstract:
Le flou de mouvement de haute qualité est un effet de plus en plus important en rendu interactif. Avec l'augmentation constante en qualité des ressources et en fidélité des scènes vient un désir semblable pour des effets lenticulaires plus détaillés et réalistes. Cependant, même dans le contexte du rendu hors-ligne, le flou de mouvement est souvent approximé à l'aide d'un post-traitement. Les algorithmes de post-traitement pour le flou de mouvement ont fait des pas de géant au niveau de la qualité visuelle, générant des résultats plausibles tout en conservant un niveau de performance interactif. Néanmoins, des artefacts persistent en présence, par exemple, de mouvements superposés ou de motifs de mouvement à très large ou très fine échelle, ainsi qu'en présence de mouvement à la fois linéaire et rotationnel. De plus, des mouvements d'amplitude importante ont tendance à causer des artefacts évidents aux bordures d'objets ou d'image. Ce mémoire présente une technique qui résout ces artefacts avec un échantillonnage plus robuste et un système de filtrage qui échantillonne selon deux directions qui sont dynamiquement et automatiquement sélectionnées pour donner l'image la plus précise possible. Ces modifications entraînent un coût en performance somme toute mineur comparativement aux implantations existantes: nous pouvons générer un flou de mouvement plausible et temporellement cohérent pour plusieurs séquences d'animation complexes, le tout en moins de 2ms à une résolution de 1280 x 720. De plus, notre filtre est conçu pour s'intégrer facilement avec des filtres post-traitement d'anticrénelage.<br>High-quality motion blur is an increasingly important effect in interactive graphics. With the continuous increase in asset quality and scene fidelity comes a similar desire for more detailed and realistic lenticular effects. However, even in the context of offline rendering, motion blur is often approximated as a post process. 
Recent motion blur post-processes have made great leaps in visual quality, generating plausible results with interactive performance. Still, distracting artifacts remain in the presence of, for instance, overlapping motion or large- and fine-scale motion features, as well as in the presence of both linear and rotational motion. Furthermore, large scale motion often tends to cause obvious artifacts at boundary points. This thesis addresses these artifacts with a more robust sampling and filtering scheme that samples across two directions which are dynamically and automatically selected to provide the most accurate image possible. These modifications come at a very slight penalty compared to previous motion blur implementations: we render plausible, temporally-coherent motion blur on several complex animation sequences, all in under 2ms at a resolution of 1280 x 720. Moreover, our filter is designed to integrate seamlessly with post-process anti-aliasing.
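The velocity-aligned sampling at the heart of such post-process filters can be sketched in a few lines. This naive single-direction Python version simply averages taps along each pixel's screen-space velocity; the thesis's filter samples along two dynamically selected directions with robust weighting, and the tap count, rounding and border clamping here are illustrative.

```python
import numpy as np

def motion_blur(image, velocity, taps=8):
    """Naive per-pixel post-process motion blur.

    image:    (h, w) grayscale frame.
    velocity: (h, w, 2) per-pixel motion in pixels, (dy, dx).
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    out = np.zeros_like(image, dtype=float)
    for i in range(taps):
        t = (i / (taps - 1)) - 0.5   # taps from -0.5 to +0.5 of the velocity
        sy = np.clip(np.round(ys + t * velocity[..., 0]).astype(int), 0, h - 1)
        sx = np.clip(np.round(xs + t * velocity[..., 1]).astype(int), 0, w - 1)
        out += image[sy, sx]         # gather along the motion direction
    return out / taps
```

Gathering only along the dominant direction is exactly what causes the boundary artifacts mentioned above when nearby pixels move differently; adding a second, dynamically chosen sampling direction is the kind of fix the thesis proposes.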
APA, Harvard, Vancouver, ISO, and other styles