Dissertations / Theses on the topic 'Alpha estimation'
Consult the top 32 dissertations / theses for your research on the topic 'Alpha estimation.'
Hägglund, Kristoffer. "Symmetric alpha-Stable Adapted Demodulation and Parameter Estimation." Thesis, Luleå tekniska universitet, Signaler och system, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70719.
Jaoua, Nouha. "Estimation Bayésienne non Paramétrique de Systèmes Dynamiques en Présence de Bruits Alpha-Stables." PhD thesis, Ecole Centrale de Lille, 2013. http://tel.archives-ouvertes.fr/tel-00929691.
Fries, Sébastien. "Anticipative alpha-stable linear processes for time series analysis : conditional dynamics and estimation." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLG005/document.
In the framework of linear time series analysis, we study a class of so-called anticipative strictly stationary processes potentially depending on all the terms of an independent and identically distributed alpha-stable error sequence. Focusing first on autoregressive (AR) processes, it is shown that conditional moments of higher order than the marginal ones exist provided the characteristic polynomial admits at least one root inside the unit circle. The forms of the first- and second-order moments are obtained in special cases. The least squares method is shown to provide a consistent estimator of an all-pass causal representation of the process, the validity of which can be tested by a portmanteau-type test. A method based on extreme residuals clustering is proposed to determine the original AR representation. The anticipative stable AR(1) is studied in detail in the framework of bivariate alpha-stable random vectors, and the functional forms of its first four conditional moments are obtained under any admissible parameterisation. It is shown that during extreme events, these moments become equivalent to those of a two-point distribution charging two polarly opposite future paths: exponential growth or collapse. Parallel results are obtained for the continuous-time counterpart of the AR(1), the anticipative stable Ornstein-Uhlenbeck process. For infinite alpha-stable moving averages, the conditional distribution of future paths given the observed past trajectory during extreme events is derived on the basis of a new representation of stable random vectors on unit cylinders relative to semi-norms. Contrary to the case of norms, such representations yield a multivariate regularly varying tails property appropriate for prediction purposes, but not all stable vectors admit such a representation. A characterisation is provided, and it is shown that finite-length paths of a stable moving average admit such a representation provided the process is "anticipative enough". Processes resulting from the linear combination of stable moving averages are encompassed, and the conditional distribution has a natural interpretation in terms of pattern identification.
Azzaoui, Nourddine. "Analyse et Estimations Spectrales des Processus alpha-Stables non-Stationnaires." Phd thesis, Université de Bourgogne, 2006. http://tel.archives-ouvertes.fr/tel-00138027.
Poulin, Nicolas. "Estimation de la fonction des quantiles pour des données tronquées." Littoral, 2006. http://www.theses.fr/2006DUNK0159.
In the left-truncation model, the pair of random variables Y and T, with respective distribution functions F and G, is observed only if Y ≥ T. Let (Yi, Ti), 1 ≤ i ≤ n, be an observed sample of this pair of random variables. The quantile function of F is estimated by the quantile function of the Lynden-Bell (1971) estimator. After recalling some results from the literature in the case of independent data, we consider the α-mixing framework. We obtain strong consistency with rates, give a strong representation of the quantile estimator as a mean of random variables with a negligible remainder term, and establish asymptotic normality. As regards the second topic of this thesis, we consider a multidimensional explanatory random variable X, with Y playing the role of the response. We establish strong consistency and asymptotic normality of the conditional distribution function and of the conditional quantile function of Y given X when Y is subject to truncation. Simulations are drawn to illustrate the results for finite samples.
Fourt, Olivier. "Traitement des signaux à phase polynomiale dans des environnements fortement bruités : séparation et estimation des paramètres." Paris 11, 2008. http://www.theses.fr/2008PA112064.
The research work of this thesis deals with the processing of polynomial phase signals in heavily corrupted environments, whether the noise has high levels or is impulsive, the latter modelled by alpha-stable laws. Noise robustness is a common requirement in signal processing: while several algorithms can cope with high Gaussian noise levels, the presence of impulse noise often leads to a great loss in performance or makes algorithms fail altogether. Recently, some algorithms have been built to support impulse-noise environments, but with one limitation: their performance degrades in Gaussian-noise situations, so a first step is needed to select the right method for the kind of noise. One key point of this thesis was therefore to build algorithms that are robust to the kind of noise, meaning that they achieve similar performances under Gaussian noise and alpha-stable noise. The second key point was to build fast algorithms, something difficult to combine with robustness.
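Several of the theses above assume symmetric alpha-stable noise. Such noise can be simulated with the standard Chambers-Mallows-Stuck method; the sketch below is our own illustration (not code from any of the cited works):

```python
import numpy as np

def sym_alpha_stable(alpha, size, rng=None):
    """Draw symmetric alpha-stable samples via the Chambers-Mallows-Stuck method."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)   # uniform angle
    w = rng.exponential(1.0, size)                 # unit-mean exponential
    if alpha == 1.0:
        return np.tan(u)                           # Cauchy special case
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

noise = sym_alpha_stable(1.5, 10_000, rng=0)
```

For alpha = 2 the formula reduces to a Gaussian (up to scale) and for alpha = 1 to a Cauchy; smaller alpha gives heavier tails and more impulsive samples.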
Ferrani, Yacine. "Sur l'estimation non paramétrique de la densité et du mode dans les modèles de données incomplètes et associées." Thesis, Littoral, 2014. http://www.theses.fr/2014DUNK0370/document.
This thesis deals with the study of asymptotic properties of the kernel (Parzen-Rosenblatt) density estimator under an associated and censored model. In this setting, we first recall in detail the existing results, studied in both the i.i.d. and strong mixing (α-mixing) cases. Under mild standard conditions, it is established that the rate of strong uniform almost sure convergence is optimal. In the part dedicated to the results of this thesis, two main original results are presented. The first concerns the strong uniform consistency rate of the studied estimator under the association hypothesis; the main tool for achieving the optimal rate is an adaptation of the theorem of Doukhan and Neumann (2007) to the study of the fluctuation term (random part) of the gap between the considered estimator and the studied parameter (the density). As an application, the almost sure convergence of the kernel mode estimator is established. These results have been accepted for publication in Communications in Statistics - Theory & Methods. The second result establishes the asymptotic normality of the estimator under the same model and thus constitutes an extension to the censored case of the result stated by Roussas (2000). This result has been submitted for publication.
Silva, Francyelle de Lima e. "Estimação de cópulas via ondaletas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-03122014-214943/.
Copulas are important tools for describing the dependence structure between random variables and stochastic processes. Recently, some nonparametric estimation procedures have appeared, using kernels and wavelets. In this context, knowing that a copula function can be expanded in a wavelet basis, we have proposed a nonparametric copula estimation procedure through wavelets for independent data and for time series under an alpha-mixing condition. The main feature of this estimator is that it estimates the copula function without assumptions about the data distribution and without ARMA-GARCH modelling, unlike parametric copula estimation. Convergence rates for the estimator were computed, showing the estimator's consistency. Simulation studies were carried out, as well as analyses of real data sets.
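The wavelet estimator itself is too involved for a short sketch, but its nonparametric starting point can be illustrated with the empirical copula built from rank pseudo-observations. This is a minimal sketch under our own naming, not the thesis implementation:

```python
import numpy as np

def empirical_copula(x, y, u, v):
    """Empirical copula C_n(u, v) from paired samples (x_i, y_i)."""
    n = len(x)
    # Pseudo-observations: normalized ranks taking values 1/n, 2/n, ..., 1
    ux = np.argsort(np.argsort(x)) / n + 1 / n
    vy = np.argsort(np.argsort(y)) / n + 1 / n
    return float(np.mean((ux <= u) & (vy <= v)))
```

For perfectly comonotone data the estimate approaches the upper Fréchet bound min(u, v), a quick sanity check for the implementation.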
Khrifi, Saâd. "Etude de la densité électronique précise du composé "2-amino-5-nitropyridinium-L-monohydrogènetartrate" : estimation des propriétés optiques linéaire [alpha] et non linéaire [bêta] à partir des propriétés électrostatiques." Lille 1, 1996. http://www.theses.fr/1996LIL10005.
Boulanger, Frédéric. "Modelisation et simulation de variables regionalisees par des fonctions aleatoires stables." Paris, ENMP, 1990. http://www.theses.fr/1990ENMP0195.
Full textKarlsson, Fredrik. "Matting of Natural Image Sequences using Bayesian Statistics." Thesis, Linköping University, Department of Science and Technology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2355.
The problem of separating a non-rectangular foreground image from a background image is a classical problem in image processing and analysis, known as matting or keying. A common example is a film frame where an actor is extracted from the background to later be placed on a different background. Compositing of these objects against a new background is one of the most common operations in the creation of visual effects. When the original background is of non-constant color the matting becomes an underdetermined problem, for which a unique solution cannot be found.
This thesis describes a framework for computing mattes from images with backgrounds of non-constant color, using Bayesian statistics. Foreground and background color distributions are modeled as oriented Gaussians and optimal color and opacity values are determined using a maximum a posteriori approach. Together with information from optical flow algorithms, the framework produces mattes for image sequences without needing user input for each frame.
The approach used in this thesis differs from previous research in a few areas. The optimal order of processing is determined in a different way and sampling of color values is changed to work more efficiently on high-resolution images. Finally a gradient-guided local smoothness constraint can optionally be used to improve results for cases where the normal technique produces poor results.
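The matting problem described here rests on the compositing equation C = alpha*F + (1 - alpha)*B. Once foreground and background color samples F and B have been chosen for a pixel, a closed-form opacity follows by projecting C - B onto F - B. A minimal sketch (our own illustration, not the thesis code):

```python
import numpy as np

def alpha_from_samples(c, f, b, eps=1e-8):
    """Closed-form opacity from the compositing equation C = a*F + (1-a)*B.

    c, f, b: RGB color vectors for the observed pixel and the chosen
    foreground/background samples. Returns a clipped to [0, 1].
    """
    c, f, b = (np.asarray(v, float) for v in (c, f, b))
    a = np.dot(c - b, f - b) / (np.dot(f - b, f - b) + eps)
    return float(np.clip(a, 0.0, 1.0))
```

A gray pixel observed between pure white foreground and pure black background, for instance, yields an opacity equal to its brightness.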
Oksar, Yesim. "Target Tracking With Correlated Measurement Noise." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608198/index.pdf.
Yin, Ling. "Automatic Stereoscopic 3D Chroma-Key Matting Using Perceptual Analysis and Prediction." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31851.
Full textAït, Hennani Larbi. "Comportement asymptotique du processus de vraisemblance dans le cas non régulier." Rouen, 1989. http://www.theses.fr/1989ROUES039.
Guerrero, José-Luis. "Robust Water Balance Modeling with Uncertain Discharge and Precipitation Data : Computational Geometry as a New Tool." Doctoral thesis, Uppsala universitet, Luft-, vatten och landskapslära, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-190686.
Models are important tools for understanding the hydrological processes that govern the transport of water through the landscape and for making predictions at times and places for which measurement data are lacking. The degree of trust placed in models should, however, not exceed the quality of the data they are fed with. The overall aim of this thesis was to adapt the modelling process so that it accounts for data uncertainty and identifies robust parameter values using methods from computational geometry. The methods were developed and tested on data from the Choluteca River basin in Honduras. Quality control of precipitation and discharge data led to the rejection of 22% of the daily precipitation observations and of all data from one of the seven discharge stations analysed. The precipitation observation network was found to be insufficient to capture the spatial and temporal variability in the upper part of the Choluteca River basin. The temporal variability of discharge was evaluated with a Monte Carlo estimation of the rating-curve parameter values in a moving time window of discharge measurements. All discharge stations showed large temporal variability of the rating curve, greatest for low flows, though with no common trend. The problem of moderate data quality was addressed by identifying robust model parameter values with computational-geometry methods. The hypothesis that deep parameter sets are robust was tested and verified with two depth functions. Geometrically deep parameter sets appeared to give better hydrological performance than shallow ones, were less sensitive to small changes in parameter values, and were better suited for transfer in time. Methods were developed to visualise multivariate distributions of well-performing parameters based on their ranked values. By projecting along a common dimension, multivariate distributions of well-performing parameters of models of varying complexity could be compared using the proposed visualisation tool, which thus has the potential to assist in choosing an adequate model structure that accounts for data uncertainty. These methods made it possible to quantify observational uncertainties. Geometric methods have only recently begun to be used in hydrology. The study demonstrated that they can be used to identify robust parameter sets and highlighted some of their potential uses.
Chaouch, Mohamed. "Contribution à l'estimation non paramétrique des quantiles géométriques et à l'analyse des données fonctionnelles." Phd thesis, Université de Bourgogne, 2008. http://tel.archives-ouvertes.fr/tel-00364538.
Chainais, Pierre. "Cascades log-infiniment divisibles et analyse multiresolution. Application à l'étude des intermittences en turbulence." PhD thesis, Ecole normale supérieure de Lyon - ENS LYON, 2001. http://tel.archives-ouvertes.fr/tel-00001584.
Laitrakun, Seksan. "Distributed detection and estimation with reliability-based splitting algorithms in random-access networks." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/53008.
Ferdous, Arundhoti. "Comparative Analysis of Tag Estimation Algorithms on RFID EPC Gen-2 Performance." Scholar Commons, 2017. http://scholarcommons.usf.edu/etd/6837.
Full textAdam, Hassan Ali. "A solid phase microextraction/gas chromatography method for estimating the concentrations of chlorpyrifos, endosulphan-alpha, edosulphan-beta and endosulphan sulphate in water." Thesis, Peninsula Technikon, 2003. http://hdl.handle.net/20.500.11838/899.
The monitoring of pesticide contamination in surface and groundwater is an essential aspect of assessing the potential environmental and health impacts of widespread pesticide use. Previous research in three Western Cape farming areas found consistent (37% to 69% of samples) pesticide contamination of rural water sources. However, despite the need, monitoring of pesticides in water is not done in South Africa, owing to a lack of analytical capacity and the cost of analysis. The Solid Phase Microextraction (SPME) sampling method has been developed over the last decade as a replacement for solvent-based analyte extraction procedures. The method utilizes a short, thin, solid rod of fused silica coated with an absorbent polymer. The fibre is exposed to the pesticide-contaminated water sample under vigorous agitation. The pesticide is absorbed into the polymer coating; the mass absorbed depends on the partition coefficient of the pesticide between the sample phase and the polymeric coating, the exposure time, and factors such as the agitation rate, the diffusivity of the analyte in water and in the polymeric coating, and the volume and thickness of the coating. After absorption, the fibre is inserted directly into the gas chromatograph (GC) injection port for analysis. For extraction from a stirred solution, a fibre will have a boundary region where the solution moves slowly near the fibre surface and faster further away, until the analyte is practically perfectly mixed in the bulk solution by convection. The boundary region may be modelled as a layer of stationary solution surrounded by perfectly mixed solution.
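The partition-coefficient dependence described above can be made concrete with the standard SPME equilibrium expression n = K_fs * V_f * V_s * C0 / (K_fs * V_f + V_s). The helper below is our own hedged sketch of that relation; the symbol names and example values are assumptions, not taken from the thesis:

```python
def spme_equilibrium_mass(k_fs, v_f, v_s, c0):
    """Mass of analyte absorbed by the fibre coating at equilibrium.

    k_fs : fibre/sample partition coefficient (dimensionless)
    v_f  : coating volume, v_s : sample volume (same volume unit)
    c0   : initial analyte concentration (mass per volume unit)
    """
    return k_fs * v_f * v_s * c0 / (k_fs * v_f + v_s)
```

When the sample volume is much larger than K_fs * V_f, the absorbed mass approaches K_fs * V_f * C0 and becomes independent of the sample volume, which is what makes SPME attractive for field sampling.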
Choate, Radmila. "ESTIMATING DISEASE SEVERITY, SYMPTOM BURDEN AND HEALTH-RELATED BEHAVIORS IN PATIENTS WITH CHRONIC PULMONARY DISEASES." UKnowledge, 2019. https://uknowledge.uky.edu/epb_etds/22.
Hamonier, Julien. "Analyse par ondelettes du mouvement multifractionnaire stable linéaire." PhD thesis, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00753510.
Full textLucena, Filho Walfredo da Costa. "Mecanismo de controle de potência para estimativa de etiquetas em redes de identificação por rádio frequência." Universidade Federal do Amazonas, 2015. http://tede.ufam.edu.br/handle/tede/4722.
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas
An RFID system is typically composed of a reader and a set of tags. An anti-collision algorithm is necessary to avoid collision between tags that respond simultaneously to a reader. The most widely used anti-collision algorithm is DFSA (Dynamic Framed Slotted ALOHA) due to its simplicity and low computational cost. In DFSA algorithms, the optimal TDMA (Time Division Multiple Access) frame size must be equal to the number of unread tags. If the exact number of tags is unknown, the DFSA algorithm needs a tag estimator to get closer to the optimal performance. Currently, applications have required the identification of large numbers of tags, which causes an increase in collisions and hence the degradation in performance of the traditional algorithms DFSA. This work proposes a power control mechanism to estimate the number of tags for radio frequency identification networks (RFID). The mechanism divides the interrogation zone into subgroups of tags and then RSSI (Received Signal Strength Indicator) measurements estimate the number of tags in a subarea. The mechanism is simulated and evaluated using a simulator developed in C/C++ language. In this study, we compare the number of slots and identification time, with ideal DFSA algorithm and Q algorithm EPCglobal standard. Simulation results shows the proposed mechanism provides 99% performance of ideal DFSA in dense networks, where there are many tags. Regarding the Q algorithm, we can see the improvement in performance of 6.5%. It is also important to highlight the lower energy consumption of the reader comparing to ideal DFSA is 63%.
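The rule that the optimal DFSA frame size equals the number of unread tags can be checked with a small Monte Carlo sketch (our own illustration, not the thesis simulator): with n tags each picking uniformly among L slots, the fraction of singleton slots, i.e. successful reads, peaks near L = n at roughly 1/e:

```python
import numpy as np

def dfsa_throughput(n_tags, frame_size, trials=2000, rng=0):
    """Average fraction of slots holding exactly one tag reply (successful reads)."""
    rng = np.random.default_rng(rng)
    ok = 0
    for _ in range(trials):
        slots = rng.integers(0, frame_size, n_tags)       # each tag picks a slot
        counts = np.bincount(slots, minlength=frame_size)  # replies per slot
        ok += np.count_nonzero(counts == 1)                # singleton slots succeed
    return ok / (trials * frame_size)
```

Comparing frame sizes for 64 tags, throughput with L = 64 sits near 1/e ≈ 0.368, while much smaller frames lose slots to collisions and much larger frames waste them idle.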
Hee, Sonke. "Computational Bayesian techniques applied to cosmology." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273346.
Tardivel, Patrick. "Représentation parcimonieuse et procédures de tests multiples : application à la métabolomique." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30316/document.
Let Y be a Gaussian vector distributed according to N(m, sigma^2 Id_n) and X a matrix of dimension n x p, with Y observed, m unknown, and sigma and X known. In the linear model, m is assumed to be a linear combination of the columns of X. In small dimension, when n ≥ p and ker(X) = {0}, there exists a unique parameter Beta* such that m = X Beta*; we can then write Y = X Beta* + Epsilon. In this small-dimensional Gaussian linear model framework, we construct a new multiple testing procedure controlling the FWER to test the null hypotheses Beta*_i = 0 for i in [[1,p]]. This procedure is applied in metabolomics through the freeware ASICS, available online. ASICS allows metabolites to be identified and quantified via the analysis of NMR spectra. In high dimension, when n < p, we have ker(X) ≠ {0} and the parameter Beta* described above is no longer unique. In the noiseless case, when sigma = 0 and thus Y = m, we show that the solutions of the linear system Y = X Beta having a minimal number of non-zero components are obtained via l^alpha minimisation with alpha small enough.
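The thesis constructs its own FWER-controlling procedure. Purely as a generic illustration of what FWER control in multiple testing looks like, Holm's classical step-down method can be sketched as follows (our example, not the ASICS procedure):

```python
def holm_reject(pvals, alpha=0.05):
    """Holm step-down procedure: returns a reject flag per hypothesis,
    controlling the family-wise error rate at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # ascending p-values
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):   # threshold relaxes step by step
            reject[i] = True
        else:
            break                          # all larger p-values are accepted
    return reject
```

Holm's method dominates the plain Bonferroni correction while guaranteeing the same FWER bound, which is why it is a common baseline for procedures like the one above.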
Gomes, André Filipe Correia. "Nonparametric estimation of Expected Shortfall." Master's thesis, 2017. http://hdl.handle.net/10316/84757.
The Expected Shortfall is an increasingly popular risk measure in financial risk management. This work studies the asymptotic statistical properties of two nonparametric estimators of Expected Shortfall under the assumption of dependence in the time series of interest. The first estimator can be seen as an average of values that satisfy a certain property, whereas the second is a kernel-smoothed version of the first. The dependence assumption considered is one of the weakest (alpha-mixing), for which reason the control of the random variables involved (namely their variances and covariances) receives great emphasis in this work. Thanks to this control, we are able to present a Central Limit Theorem for each estimator, from which we draw relevant conclusions about the efficiency of both.
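A minimal sketch of the two estimator families described in the abstract, under our own simplifying choices (empirical quantile for the VaR threshold, Gaussian kernel with a rule-of-thumb bandwidth), not the thesis' exact definitions:

```python
import numpy as np
from math import erf, sqrt

def es_plain(losses, level=0.975):
    """Plain estimator: average loss at or beyond the empirical VaR quantile."""
    losses = np.asarray(losses, float)
    var = np.quantile(losses, level)
    return float(losses[losses >= var].mean())

def es_kernel(losses, level=0.975, h=None):
    """Kernel-smoothed variant: replace the hard indicator 1{loss >= VaR}
    by a Gaussian-CDF-smoothed weight on every observation."""
    losses = np.asarray(losses, float)
    var = np.quantile(losses, level)
    h = h or 1.06 * losses.std() * len(losses) ** (-1 / 5)  # rule-of-thumb bandwidth
    w = np.array([0.5 * (1 + erf(z / sqrt(2))) for z in (losses - var) / h])
    return float((w * losses).sum() / w.sum())
```

Smoothing trades a little bias (weight leaks to sub-threshold losses) for reduced variance, which is the efficiency comparison the Central Limit Theorems above make precise.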
Lin, Fang-Ju (林芳如). "Image Super Resolution, Image Alpha Estimation, and Metric Learning for Image Classification." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/w6g95t.
Full text國立交通大學
資訊科學與工程研究所
107
Computer vision technology uses digital cameras to simulate human vision, and computer programs and algorithms to simulate people's understanding of and reasoning about what they see. Computer vision algorithms combine a wide range of disciplines such as artificial intelligence, machine learning, image processing, and neurobiology. In this thesis, we discuss three different applications that combine machine learning and pattern recognition algorithms, working at the pixel, patch, and feature levels of the image to achieve better results than traditional image processing algorithms.

In the first part, image super resolution, the process of generating a high-resolution (HR) image from one or more low-resolution (LR) inputs, is addressed. Many SR methods have been proposed, but generating the small-scale structure of an SR image remains a challenging task. We hence propose a single-image SR algorithm that combines the benefits of both internal and external SR methods. First, we estimate the enhancement weights of each LR-HR image patch pair. Next, we multiply each patch by the estimated enhancement weight to generate an initial SR patch. We then employ a method to recover the missing information from the high-resolution patches and use it to generate a final SR image, and apply iterative back-projection to further enhance visual quality. The method is compared qualitatively and quantitatively with several state-of-the-art methods, and the experimental results indicate that the proposed framework provides high contrast and better visual quality, particularly for non-smooth texture areas.

The second part presents a new approach for extracting foreground elements from an image by means of color and opacity (alpha) estimation, which considers available samples in a searching window of variable size for each unknown pixel. Alpha matting is conventionally defined as the task of softly extracting foreground objects from a single input image and plays a central role in image processing. In particular, the challenging case of natural image matting has received considerable research attention, since there are virtually no restrictions on the characteristics of background regions. Many algorithms are available for estimating foreground and background samples for all unknown pixels of an image, along with opacity values. Given a trimap partition of an input image into background, foreground, and unknown regions, a straightforward approach for determining an alpha value is to sample (collect) foreground and background colors for each unknown pixel defined in the trimap. The proposed sampling method is robust in that similar sampling results are generated for input trimaps with different unknown regions. Moreover, after an initial estimation of the alpha matte, a fully connected conditional random field (CRF) can be adopted to correct the predicted matte at the pixel level.

In the third part, we developed effective weather features to solve the problem of weather recognition using a metric learning method. The recognition of weather conditions from a single image in large datasets is a challenging problem in computer vision. Although previous approaches have proposed methods to classify weather conditions into classes such as sunny and cloudy, their performance is still far from satisfactory. Based on observations of outdoor images under different weather conditions, we defined several categories of more robust weather features, and we improve the classification accuracy using metric learning approaches. The results indicate that our method provides much better performance than previous methods. The proposed method is also straightforward to implement and computationally inexpensive, demonstrating the effectiveness of metric learning methods on computer vision problems.
Achim, Alin. "Novel Bayesian multiscale methods for image denoising using alpha-stable distributions." 2003. http://nemertes.lis.upatras.gr/jspui/handle/10889/1265.
The ultimate goal of the research presented in this doctoral thesis is to provide the clinical community with methods that deliver the best possible information for making a correct medical diagnosis. Ultrasound images are inherently affected by noise, which is due to the image formation process using coherent waveforms. Before image analysis, it is important to suppress the noise in a way that preserves image texture, which helps distinguish one tissue from another. The main objective of this thesis was the development of new noise-suppression methods for medical ultrasound images in the wavelet transform domain. First, we showed through extensive modelling experiments that the data resulting from the decomposition of ultrasound images into frequency subbands are accurately described by heavy-tailed non-Gaussian distributions, such as the alpha-stable distributions. We then developed Bayesian estimators that exploit this statistical description. More specifically, we used the alpha-stable model to design minimum absolute error and maximum a posteriori estimators for alpha-stable signals mixed with non-Gaussian noise. The resulting denoising processors act nonlinearly on the data and optimally relate this nonlinearity to the degree to which the data are non-Gaussian. We compared our techniques with classical filters as well as with state-of-the-art hard- and soft-thresholding methods, applying them to real medical ultrasound images and quantifying the performance achieved. Finally, we showed that the proposed processors can also find applications in other areas of interest, choosing synthetic aperture radar images as an illustrative example.
Chang, Yen-Chieh, and 張彥介. "Estimation of Treatment Effects without Monotonicity Assumption in Dose-Finding Studies — The Application of alpha-Splitting Procedure." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/45927947851332165268.
Full textNational Cheng Kung University
Department of Statistics (Master's and Doctoral Program)
96
During the process of drug development, dose-response studies are conducted to evaluate the treatment effects at various doses of the test drug. In such clinical trials, subjects or patients are randomly allocated to separate groups to receive either one of several increasing dose levels of the test drug or a placebo. The primary focus of dose-response studies is usually on identifying the minimal effective dose (MED) and on estimating the treatment effect at each dose level. Following the suggestion of the International Conference on Harmonisation (ICH) E9 guideline that the confidence interval is the preferable way to present the treatment effect, we propose a method that constructs simultaneous confidence intervals to estimate the treatment effects and defines the MED accordingly. Conventionally, the relation between the dose level and the corresponding response is assumed to be monotone. However, the response may sometimes drop when the test dose exceeds a certain level, a phenomenon known as "dose-response with non-monotonicity". In this article, the authors apply the alpha-splitting approach (Tu, 2006), which divides the pre-specified significance level into a testing part and an estimating part, to the method proposed by Stefansson, Kim, and Hsu (1988), with a view to obtaining more precise confidence bounds when the dose-response relation is non-monotone. Through simulations, our extended method demonstrates the ability to construct more informative confidence intervals for the treatment effects whether the dose-response relation is monotone or non-monotone.
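As a toy illustration only (not the thesis's procedure based on Stefansson, Kim, and Hsu), the alpha-splitting idea can be sketched as spending a fraction of the overall level on testing each dose against placebo and the remainder on one-sided lower confidence bounds; the data, the split fraction, and the large-sample normal approximation below are all invented for the example.

```python
import numpy as np
from statistics import NormalDist

def split_alpha_bounds(doses, placebo, alpha=0.05, split=0.5):
    """Toy alpha-splitting: spend split*alpha on a z-test of each dose
    vs placebo, and the rest on one-sided lower confidence bounds for
    the treatment effects (large-sample normal approximation)."""
    a_test = alpha * split                    # portion spent on testing
    a_est = alpha - a_test                    # portion left for estimation
    z_test = NormalDist().inv_cdf(1 - a_test)
    z_est = NormalDist().inv_cdf(1 - a_est)
    m0, v0, n0 = np.mean(placebo), np.var(placebo, ddof=1), len(placebo)
    out = []
    for d in doses:
        diff = np.mean(d) - m0
        se = np.sqrt(np.var(d, ddof=1) / len(d) + v0 / n0)
        significant = bool(diff / se > z_test)   # testing step
        lower = float(diff - z_est * se)         # estimation step
        out.append((significant, lower))
    return out

rng = np.random.default_rng(1)
placebo = rng.normal(0.0, 1.0, 50)
# Hypothetical non-monotone dose-response: effect rises, then drops.
doses = [rng.normal(mu, 1.0, 50) for mu in (0.2, 0.9, 1.1, 0.8)]
results = split_alpha_bounds(doses, placebo)
for sig, lo in results:
    print(sig, round(lo, 3))
```

The point of the split is that no monotonicity is imposed: each dose is tested and bounded on its own, so a drop at the highest dose does not distort the bounds at the lower doses.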
Rauf, Awais. "Estimation of Pile Capacity by Optimizing Dynamic Pile Driving Formulae." Thesis, 2012. http://hdl.handle.net/10012/6651.
Full textChiang, Che-Yuan, and 江哲元. "Performance Analysis of Multi-channel Slotted ALOHA System using Iterative Contending-user Estimation Method." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/75144211451774381194.
Full textNational Taiwan University of Science and Technology
Department of Electronic Engineering
101
Multi-channel slotted ALOHA is the main channel access scheme for the random access channels of next-generation cellular networks. Seo and Leung [1] proposed a novel Markov-chain-based analytical model to analyze the performance of multi-channel slotted ALOHA adopting uniform backoff and exponential backoff policies. The model is developed assuming that each terminal adopts a delay-first transmission (i.e., performs random backoff before transmitting a new packet). However, this assumption is not always enforced in all networks. This paper presents a difference-equation-based analytical model to solve the same problem considered in [1]. The proposed model is developed based on an iterative contending-user estimation method, and it can be applied to general networks with or without delay-first transmission. Computer simulations were conducted to verify the accuracy of the analysis. The results show that the proposed model can accurately estimate the system throughput, average access delay, and packet-dropping probability of such networks.
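As a hedged sketch (not the thesis's difference-equation model), the per-slot throughput of a multi-channel slotted ALOHA system with a fixed population of contending users can be checked against its closed-form expectation by Monte Carlo; the user count, channel count, and transmission probability below are arbitrary.

```python
import numpy as np

def simulate_throughput(n_users, n_channels, p_tx, n_slots, seed=0):
    """Monte Carlo throughput of multi-channel slotted ALOHA: in each
    slot every user transmits with probability p_tx on a uniformly
    chosen channel; a packet succeeds iff it is alone on its channel."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_slots):
        tx = rng.random(n_users) < p_tx               # who transmits
        ch = rng.integers(0, n_channels, n_users)     # channel choices
        counts = np.bincount(ch[tx], minlength=n_channels)
        successes += int(np.sum(counts == 1))         # singleton channels
    return successes / n_slots

# Closed form: a given user lands on a given channel w.p. p/M, so the
# expected successes per slot are M * N*(p/M)*(1 - p/M)^(N-1),
# i.e. N*p*(1 - p/M)^(N-1) in total.
N, M, p = 20, 4, 0.5
analytic = N * p * (1 - p / M) ** (N - 1)
simulated = simulate_throughput(N, M, p, n_slots=20_000)
print(round(analytic, 3), round(simulated, 3))
```

With enough slots the empirical average matches the binomial-thinning formula closely, which is the kind of sanity check the paper's simulations perform against its analytical model.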
Ndongo, Mor. "Les processus à mémoire longue saisonniers avec variance infinie des innovations et leurs applications." Phd thesis, 2011. http://tel.archives-ouvertes.fr/tel-00947321.
Full text