Follow this link to see other types of publications on the topic: Mean error.

Theses on the topic "Mean error"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult the top 50 theses for your research on the topic "Mean error."

Next to every source in the list of references there is an "Add to bibliography" button. Click this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the scholarly publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse theses on a wide variety of disciplines and organize your bibliography correctly.

1

Degtyarena, Anna Semenovna. "The window least mean square error algorithm". CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2385.

Full text
Abstract
In order to improve the performance of the LMS (least mean square) algorithm by decreasing the amount of computation, this research proposes to make an update on each step only for those elements from the input data set that fall within a small window W near the separating hyperplane. This work aims to describe in detail the results that can be achieved by using the proposed LMS-with-window learning algorithm in information systems that employ the neural network methodology for classification.
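The windowed update described in the abstract can be sketched as follows; the margin test, step size `mu`, and window width are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def windowed_lms_step(w, x, d, mu=0.01, window=0.5):
    """One training step of an LMS-with-window rule (a sketch of the idea):
    update the weights only when the sample x lies within a margin
    `window` of the separating hyperplane w.x = 0."""
    y = float(w @ x)
    if abs(y) <= window:           # sample falls inside the window W
        e = d - y                  # prediction error
        w = w + mu * e * x         # standard LMS correction
    return w                       # left unchanged outside the window

# samples far from the hyperplane skip the update entirely,
# which is where the computational saving comes from
w = np.array([1.0, 0.0])
w_after = windowed_lms_step(w, np.array([10.0, 0.0]), d=1.0)  # |w.x| = 10 > 0.5
```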
APA, Harvard, Vancouver, ISO, and other styles
2

Cui, Xiangchen. "Mean-Square Error Bounds and Perfect Sampling for Conditional Coding". DigitalCommons@USU, 2000. https://digitalcommons.usu.edu/etd/7107.

Full text
Abstract
In this dissertation, new theoretical results are obtained for bounding convergence and mean-square error in conditional coding, and new statistical methods for the practical application of conditional coding are developed. Criteria for uniform convergence are examined first. Conditional coding Markov chains are aperiodic, π-irreducible, and Harris recurrent. By applying the general theory of uniform ergodicity of Markov chains on general state spaces, one can conclude that conditional coding Markov chains are uniformly ergodic, and further, theoretical convergence rates based on Doeblin's condition can be found. Conditional coding Markov chains can also be viewed as having finite state space. This allows the use of techniques for bounding the second-largest eigenvalue, which lead to bounds on the convergence rate and the mean-square error of sample averages. The results are applied in two examples showing that these bounds are useful in practice. Next, some algorithms for perfect sampling in conditional coding are studied. An application of exact sampling to the independence sampler is shown to be equivalent to standard rejection sampling. In the case of single-site updating, traditional perfect sampling is not directly applicable when the state space has large cardinality and is not stochastically ordered, so a new procedure is developed that gives perfect samples at a predetermined confidence level. In the last chapter, procedures for and possibilities of applying conditional coding to mixture models are explored. Conditional coding can be used for the analysis of a finite mixture model. This methodology is general and easy to use.
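The second-largest-eigenvalue idea the abstract mentions can be illustrated numerically; the 3-state transition matrix below is a toy example, not a conditional coding chain.

```python
import numpy as np

# For a finite-state Markov chain, the second-largest eigenvalue modulus
# (SLEM) of the transition matrix controls the geometric rate of
# convergence to stationarity -- the quantity the dissertation bounds.
P = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

moduli = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
slem = moduli[1]   # distance to stationarity decays roughly like slem**n
```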
APA, Harvard, Vancouver, ISO, and other styles
3

Kong, Kar-lun and 江嘉倫. "Some mean value theorems for certain error terms in analytic number theory". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2014. http://hdl.handle.net/10722/206432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Strobel, Matthias. "Estimation of minimum mean squared error with variable metric from censored observations". [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-35333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Jakobsson, Sofie. ""How mean can you be?" : A study of teacher trainee and teacher views on error correction". Thesis, Högskolan i Gävle, Akademin för utbildning och ekonomi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-8426.

Full text
Abstract
The present study investigates three teacher trainees' and three teachers' views on error correction during oral communication, and the similarities and differences between them. These six people were interviewed separately and asked six questions; the first five questions were the same for all six, but the last question differed between the teacher trainees and the teachers. My results show that the teacher trainees are insecure when it comes to error correction while the teachers see it as part of their job, and that is the biggest difference between them. The teacher trainees and the teachers focus on the same types of errors: those that can cause problems in communication, which can be pronunciation, grammatical, or vocabulary errors.
APA, Harvard, Vancouver, ISO, and other styles
6

Dear, K. B. G. "A generalisation of mean squared error and its application to variance component estimation". Thesis, University of Reading, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.379691.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Potter, Chris. "Modeling Channel Estimation Error in Continuously Varying MIMO Channels". International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604490.

Full text
Abstract
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
The accuracy of channel estimation plays a crucial role in the demodulation of data symbols sent across an unknown wireless medium. In this work a new analytical expression for the channel estimation error of a multiple input multiple output (MIMO) system is obtained when the wireless medium is continuously changing in the temporal domain. Numerical examples are provided to illustrate our findings.
APA, Harvard, Vancouver, ISO, and other styles
8

Nazar, Gabriel Luca. "Fine-grained error detection techniques for fast repair of FPGAs". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/77746.

Full text
Abstract
Field Programmable Gate Arrays (FPGAs) are reconfigurable hardware components that have found great commercial success over the past years in a wide variety of application niches. High processing throughput, flexibility and reduced design time are among the main assets of such devices, and are essential to their commercial success. These features are also valuable for critical systems that often face stringent performance constraints. Furthermore, the possibility to perform post-deployment reprogramming is relevant, as it allows adding new functionalities or correcting design mistakes, extending the system lifetime. Such devices, however, rely on large memories to store the configuration bitstream, responsible for defining the current FPGA function. Thus, faults affecting this configuration are able to cause functional failures, posing a major dependability threat. The most traditional means to remove such errors, i.e., configuration scrubbing, consists in periodically overwriting the memory with its desired contents. However, due to its significant size and limited access bandwidth, scrubbing suffers from a long mean time to repair, and which is increasing as FPGAs get larger and more complex after each generation. Reconfigurable partitions are useful to reduce this time, as they allow performing a local repair procedure on the affected partition. For that purpose, fast error detection mechanisms are required, in order to quickly trigger this localized scrubbing and reduce error latency. Moreover, precise diagnosis is necessary to identify the error location within the configuration addressing space. Fine-grained redundancy techniques have the potential to provide both, but usually introduce significant costs due to the need of numerous redundancy checkers. In this work we propose a fine-grained error detection technique that makes use of abundant and underused resources found in state-of-the-art FPGAs, namely the carry propagation chains. 
Thereby, the technique provides the main benefits of fine-grained redundancy while minimizing its main drawback. Very significant reductions in error latency are attainable with the proposed approach. A heuristic mechanism to explore the diagnosis provided by techniques of this nature is also proposed. This mechanism aims at identifying the most likely error locations in the configuration memory, based on the fine-grained diagnosis, and to make use of this information in order to minimize the repair time of scrubbing.
APA, Harvard, Vancouver, ISO, and other styles
9

林啓任 and Kai-yam Lam. "Some results on the mean values of certain error terms in analytic number theory". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31214241.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lam, Kai-yam. "Some results on the mean values of certain error terms in analytic number theory /". Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18611977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Anver, Haneef Mohamed. "Mean Hellinger Distance as an Error Criterion in Univariate and Multivariate Kernel Density Estimation". OpenSIUC, 2010. https://opensiuc.lib.siu.edu/dissertations/161.

Full text
Abstract
Ever since the pioneering work of Parzen, the mean square error (MSE) and its integrated form (MISE) have been used as the error criteria in choosing the bandwidth matrix for multivariate kernel density estimation. More recently, other criteria have been advocated as competitors to the MISE, such as the mean absolute error. In this study we define a weighted version of the Hellinger distance for multivariate densities and show that it has an asymptotic form which is one-fourth the asymptotic MISE under weak smoothness conditions on the multivariate density f. In addition, the proposed criterion gives rise to a new data-dependent bandwidth matrix selector. The performance of the new data-dependent bandwidth matrix selector is compared, through simulation, with other well-known bandwidth matrix selectors such as least squares cross-validation (LSCV) and the plug-in (HPI). We derive a closed-form formula for the mean Hellinger distance (MHD) in the univariate case. We also compare via simulation the mean weighted Hellinger distance (MWHD) and the asymptotic MWHD, and the MISE and the asymptotic MISE, for both univariate and bivariate cases, for various densities and sample sizes.
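To fix ideas, a discretized Hellinger distance between two densities on a common grid can be computed as below. Note the dissertation studies a *weighted* variant; this plain, unweighted version is only a sketch.

```python
import numpy as np

def hellinger(p, q, dx):
    """Discretized (unweighted) Hellinger distance between two densities
    sampled on a grid with spacing dx:
    H(p, q)^2 = 0.5 * integral of (sqrt(p) - sqrt(q))^2."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) * dx)

# standard normal density vs. a unit-shifted copy
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)
g = np.exp(-(x - 1.0) ** 2 / 2) / np.sqrt(2 * np.pi)
```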
APA, Harvard, Vancouver, ISO, and other styles
12

Fodor, Balázs [Verfasser]. "Contributions to Statistical Modeling for Minimum Mean Square Error Estimation in Speech Enhancement / Balázs Fodor". Aachen : Shaker, 2015. http://d-nb.info/1070151815/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Xing, Chengwen and 邢成文. "Linear minimum mean-square-error transceiver design for amplify-and-forward multiple antenna relaying systems". Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2010. http://hub.hku.hk/bib/B44769738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Loce, Robert P. "Morphological filter mean-absolute-error representation theorems and their application to optimal morphological filter design /". Online version of thesis, 1993. http://hdl.handle.net/1850/11065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Thompson, Grant. "Effects of DEM resolution on GIS-based solar radiation model output: A comparison with the National Solar Radiation Database". University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1258663688.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Garcia-Alis, Daniel. "On adaptive MMSE receiver strategies for TD-CDMA". Thesis, University of Strathclyde, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.366896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Stephens, Christopher Neil. "An investigation into the psychometric properties of the proportional reduction of mean squared error and augmented scores". Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/3539.

Full text
Abstract
Augmentation procedures are designed to provide better estimates for a given test or subtest through the use of collateral information. The main purpose of this dissertation was to use Haberman's and Wainer's augmentation procedures on a large-scale, standardized achievement test, to understand the relationship between the reliability and correlation values that underlie the proportional reduction of mean squared error (PRMSE) statistic, and to compare the practical effects of Haberman's augmentation procedure with those of Wainer's. In this dissertation, Haberman's and Wainer's augmentation procedures were used on a data set consisting of a large-scale, standardized achievement test with tests in three content areas (reading, language arts, and mathematics) in both 4th and 8th grade. Each test could be broken down into between two and five content-area subtests, depending on the content area. The data sets contained between 2,500 and 3,000 examinees for each test. The PRMSE statistic was used on all of the data sets to evaluate the two augmentation procedures, one proposed by Haberman and one by Wainer. Following the augmentation analysis, the relationship between the reliability of the subtest to be augmented and that subtest's correlation with the rest of the test was investigated using a pseudo-simulated data set, which consisted of different values for those variables. Lastly, the Haberman and Wainer augmentation procedures were used on the data sets, and the augmented data were analyzed to determine the magnitude of the effects of using these procedures.
The main findings based on the data analysis and pseudo-simulated data analysis were as follows: (1) the more questions, the better the estimates and the better the augmentation procedures; (2) there is virtually no difference between the Haberman and Wainer augmentation procedures, except for certain correlational relationships; (3) there is a significant effect of using the Haberman or Wainer augmentation procedures; however, as reliability increases, this effect lessens. The limitations of the study and possible future research are also discussed in the dissertation.
APA, Harvard, Vancouver, ISO, and other styles
18

Septarina, Septarina. "Micro-Simulation of the Roundabout at Idrottsparken Using Aimsun : A Case Study of Idrottsparken Roundabout in Norrköping, Sweden". Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79964.

Full text
Abstract
Microscopic traffic simulation is a useful tool for analysing traffic and estimating the capacity and level of service of road networks. In this thesis, the four-legged Idrottsparken roundabout in the city of Norrkoping in Sweden is analysed using the microscopic traffic simulation package AIMSUN. For this purpose, data on traffic flow counts, travel times, and queue lengths were collected for three consecutive weekdays during both the morning and afternoon peak periods. The data were then used in building a simulation model of the roundabout. The root mean square error (RMSE) method is used to find the optimal parameter values against the queue length and travel time data, and validation on the travel time data is carried out to obtain the base model that represents the existing condition of the system. Afterward, the results of the new models were evaluated and compared to the results of a SUMO model for the same scenario. Based on the calibrated and validated model, three alternative scenarios were simulated and analysed to improve the efficiency of the traffic network in the roundabout. The three scenarios include: (1) adding one free right turn in the north and east sections; (2) adding one free right turn in the east and south sections; and (3) adding one lane in the roundabout. The analysis of these scenarios shows that the first and second scenarios are only able to reduce the queue length and travel time in two or three legs, while the third scenario is not able to improve the performance of the roundabout. In this research, it can be concluded that the first scenario is the best of the three. The comparison between AIMSUN and SUMO for the same scenario shows no significant differences in the results.
In the calibration process, to get the optimal parameter values between the model measurements and the field measurements, both AIMSUN and SUMO use two significantly influential parameters for queue and travel time. The AIMSUN package uses the driver reaction time and the maximum acceleration, while the SUMO package uses driver imperfection and, likewise, the driver reaction time.
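The RMSE objective used for calibration in studies like this one is simple to state; a minimal sketch follows (the queue-length numbers are hypothetical).

```python
import numpy as np

def rmse(observed, simulated):
    """Root mean square error between field measurements and model output,
    the kind of calibration objective minimized when tuning simulation
    parameters such as driver reaction time."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return np.sqrt(np.mean((observed - simulated) ** 2))

# hypothetical queue lengths (vehicles): observed vs. simulated
error = rmse([10, 12, 8], [11, 12, 9])
```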
APA, Harvard, Vancouver, ISO, and other styles
19

Nassr, Husam and Kurt Kosbar. "PERFORMANCE EVALUATION FOR DECISION-FEEDBACK EQUALIZER WITH PARAMETER SELECTION ON UNDERWATER ACOUSTIC COMMUNICATION". International Foundation for Telemetering, 2017. http://hdl.handle.net/10150/626999.

Full text
Abstract
This paper investigates the effect of parameter selection for decision feedback equalization (DFE) on communication performance through a dispersive underwater acoustic wireless channel (UAWC). A DFE based on the minimum mean-square error criterion (MMSE-DFE) has been employed in the implementation for evaluation purposes. The output from the MMSE-DFE is input to the decoder to estimate the transmitted bit sequence. The main goal of this experimental simulation is to determine the best selection, such that a reduction in the computational load is achieved without altering the performance of the system; the computational complexity can be reduced by selecting an equalizer of the proper length. The system performance is tested for BPSK, QPSK, 8PSK, and 16QAM modulation, and a simulation of the system is carried out for Proakis channel A and a real underwater acoustic channel estimated during the SPACE08 measurements to verify the selection.
APA, Harvard, Vancouver, ISO, and other styles
20

Kanyongo, Gibbs Y. "Using Large-Scale Datasets to Teach Abstract Statistical Concepts: Sampling Distribution". Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-82613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ding, Minhua. "Multiple-input multiple-output wireless system designs with imperfect channel knowledge". Thesis, Kingston, Ont. : [s.n.], 2008. http://hdl.handle.net/1974/1335.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Karaer, Arzu. "Optimum bit-by-bit power allocation for minimum distortion transmission". Texas A&M University, 2005. http://hdl.handle.net/1969.1/4760.

Full text
Abstract
In this thesis, bit-by-bit power allocation to minimize the mean-squared error (MSE) distortion of a basic communication system is studied. This communication system consists of a quantizer; there may or may not be a channel encoder and a Binary Phase Shift Keying (BPSK) modulator. In the quantizer, natural binary mapping is used. First, the case with no channel coding is considered. In the uncoded case, hard-decision decoding is done at the receiver. It is seen that errors in the more significant information bits contribute more to the distortion than errors in the less significant bits. For the uncoded case, the optimum power profile for each bit is determined analytically and through computer-based optimization methods such as differential evolution. For low signal-to-noise ratio (SNR), the less significant bits are allocated negligible power compared to the more significant bits. For high SNRs, the optimum bit-by-bit power allocation is seen to give a constant MSE gain in dB over uniform power allocation. Second, the coded case is considered. Linear block codes such as the (3,2), (4,3), and (5,4) single parity check codes and the (7,4) Hamming code are used, and soft-decision decoding is done at the receiver. Approximate expressions for the MSE are considered in order to find a near-optimum power profile for the coded case. The optimization is done through a computer-based optimization method (differential evolution). For a simple code like the (7,4) Hamming code, simulations show that up to 3 dB of MSE gain can be obtained by changing the power allocation between the information and parity bits. A systematic method to find the power profile for linear block codes is also introduced, given knowledge of the input-output weight enumerating function of the code. The information bits share one power level and the parity bits another, and the two power levels can be different.
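The observation that more significant bits contribute more to the distortion can be made concrete: under natural binary mapping, flipping bit i changes the reconstructed value by 2**i, so its squared-error cost scales as 4**i. A toy illustration (not the thesis's channel model):

```python
def bit_flip_distortion(codeword, bit):
    """Squared reconstruction error caused by flipping `bit` (0 = LSB) of a
    natural-binary codeword: the value changes by 2**bit, so the squared
    error is 4**bit -- which is why MSBs deserve more transmit power."""
    flipped = codeword ^ (1 << bit)
    return (codeword - flipped) ** 2

# flipping bit 2 of a 3-bit code costs 16x more than flipping bit 0
msb_cost = bit_flip_distortion(0b101, 2)
lsb_cost = bit_flip_distortion(0b101, 0)
```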
APA, Harvard, Vancouver, ISO, and other styles
23

Leksono, Catur Yudo and Tina Andriyana. "Roundabout Microsimulation using SUMO : A Case Study in Idrottsparken Roundabout, Norrköping, Sweden". Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-79771.

Full text
Abstract
Idrottsparken roundabout in Norrkoping is located in the denser part of the city. Congestion occurs in peak hours, causing queues and extended travel times. This thesis aims to provide alternative models to reduce queue length and travel time. The types of observation data are flow, length of queue, and travel time, observed during peak hours in the morning and afternoon. The calibration process is done by minimising the root mean square error of queue, travel time, and the combination of both between the observations and the calibrated model. SUMO version 0.14.0 is used to perform the microsimulation. There are two proposed alternatives, namely Scenario 1: an additional lane for right turns from the East leg to the North and from the North leg to the West, and Scenario 2: a restriction on heavy goods vehicles passing Kungsgatan, located in the Northern leg of Idrottsparken roundabout, during peak hours. For Scenario 1, the results from SUMO are compared with AIMSUN in terms of queue and travel time. The results of the microsimulation show that the parameters with the biggest influence in the calibration process for SUMO are driver imperfection and driver's reaction time, while for AIMSUN they are driver's reaction time and maximum acceleration. The analysis found that the current situation at Idrottsparken can be represented by a simulation model calibrated and validated using the combined root mean square error of queue and travel time. Moreover, Scenario 2 is the best alternative for SUMO because it decreases queue length and travel time in almost all legs at the morning and afternoon peak hours, without a significant increase in the other legs. The comparison between SUMO and AIMSUN shows that, in general, AIMSUN exhibits larger changes in queue and travel time, due to the limited precision available in SUMO for roundabout modelling.
APA, Harvard, Vancouver, ISO, and other styles
24

Williams, Ian E. "Channel Equalization and Spatial Diversity for Aeronautical Telemetry Applications". International Foundation for Telemetering, 2010. http://hdl.handle.net/10150/605946.

Full text
Abstract
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California
This work explores aeronautical telemetry communication performance with the SOQPSK-TG ARTM waveforms when frequency-selective multipath corrupts received information symbols. A multi-antenna equalization scheme is presented where each antenna's unique multipath channel is equalized using a pilot-aided optimal linear minimum mean-square error filter. Following independent channel equalization, a maximal ratio combining technique is used to generate a single receiver output for detection. This multi-antenna equalization process is shown to improve detection performance over maximal ratio combining alone.
APA, Harvard, Vancouver, ISO, and other styles
25

MARAKBI, ZAKARIA. "Mean-Variance Portfolio Optimization : Challenging the role of traditional covariance estimation". Thesis, KTH, Industriell Marknadsföring och Entreprenörskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199185.

Full text
Abstract
Ever since its introduction in 1952, the Mean-Variance (MV) portfolio selection theory has remained a centerpiece within the realm of efficient asset allocation. However, in scientific circles, the theory has stirred controversy. A strand of criticism has emerged that points to the phenomenon that Mean-Variance Optimization suffers from the severe drawback of estimation errors contained in the expected return vector and the covariance matrix, resulting in portfolios that may significantly deviate from the true optimal portfolio. While a substantial amount of effort has been devoted to estimating the expected return vector in this context, much less is written about the covariance matrix input. In recent times, however, research that points to the importance of the covariance matrix in MV optimization has emerged. As a result, there has been growing interest in whether MV optimization can be enhanced by improving the estimate of the covariance matrix. Hence, this thesis was set forth with the purpose of investigating whether financial practitioners and institutions can allocate portfolios consisting of assets in a more efficient manner by changing the covariance matrix input in mean-variance optimization. In the quest of achieving this purpose, an out-of-sample analysis of MV optimized portfolios was performed, where the performance of five prominent covariance matrix estimators was compared, holding all other things equal in the MV optimization. The optimization was performed under realistic investment constraints, taking incurred transaction costs into account, and for an investment asset universe ranging from equity to bonds. The empirical findings in this study suggest one dominant estimator: the covariance matrix estimator implied by the Gerber Statistic (GS).
Specifically, by using this covariance matrix estimator in lieu of the traditional sample covariance matrix, the MV optimization rendered more efficient portfolios in terms of higher Sharpe ratios, higher risk-adjusted returns, and lower maximum drawdowns. The outperformance was most pronounced during recessionary times. This suggests that an investor who employs traditional MVO in quantitative asset allocation can improve their asset-picking abilities by changing to the, in theory, more robust GS covariance matrix estimator in times of volatile financial markets.
APA, Harvard, Vancouver, ISO, and other styles
26

Mierswa, Alina [Verfasser] and Klaus [Gutachter] Deckelnick. "Error estimates for a finite difference approximation of mean curvature flow for surfaces of torus type / Alina Mierswa ; Gutachter: Klaus Deckelnick". Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2020. http://d-nb.info/1222670747/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Kulkarni, Aditya. "Performance Analysis of Zero Forcing and Minimum Mean Square Error Equalizers on Multiple Input Multiple Output System on a Spinning Vehicle". International Foundation for Telemetering, 2014. http://hdl.handle.net/10150/577482.

Full text
Abstract
ITC/USA 2014 Conference Proceedings / The Fiftieth Annual International Telemetering Conference and Technical Exhibition / October 20-23, 2014 / Town and Country Resort & Convention Center, San Diego, CA
Channel equalizers based on the minimum mean square error (MMSE) and zero forcing (ZF) criteria have been formulated for a general scalable multiple input multiple output (MIMO) system and implemented for a 2x2 MIMO system with spatial multiplexing (SM) over a Rayleigh channel with additive white Gaussian noise. A model to emulate transmitters and receivers on a spinning vehicle has been developed. A transceiver based on the BLAST architecture is also developed in this work. A mathematical framework to explain the behavior of the ZF and MMSE equalizers is formulated. The performance of the equalizers has been validated for a case in which one of the communicating entities is a spinning aero vehicle. A performance analysis with respect to the variation of the angular separation between the antennas and the relative antenna gain for each case is presented. Based on the simulation results, a setup with optimal design parameters for antenna placement, choice of equalizer, and transmit power is proposed.
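The two receivers compared above have standard textbook forms, which can be sketched as follows; the channel coefficients are hypothetical, and the MMSE expression assumes unit-power uncorrelated transmit symbols, which may differ from the paper's exact formulation.

```python
import numpy as np

def zf_equalizer(H):
    """Zero-forcing receiver: pseudo-inverse of the channel, which removes
    inter-stream interference but can amplify noise."""
    return np.linalg.pinv(H)

def mmse_equalizer(H, noise_var):
    """Linear MMSE receiver W = (H^H H + sigma^2 I)^-1 H^H, assuming
    unit-power uncorrelated transmit symbols."""
    n = H.shape[1]
    return np.linalg.inv(H.conj().T @ H + noise_var * np.eye(n)) @ H.conj().T

# toy 2x2 spatial-multiplexing channel (hypothetical coefficients)
H = np.array([[1.0, 0.2],
              [0.1, 0.9]])
W_zf = zf_equalizer(H)
W_mmse = mmse_equalizer(H, noise_var=0.1)
```

At zero noise variance the MMSE receiver reduces to the ZF receiver; as the noise grows, MMSE trades a little residual interference for less noise amplification.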
APA, Harvard, Vancouver, ISO, and other styles
28

Mierswa, Alina [Verfasser] y Klaus [Gutachter] Deckelnick. "Error estimates for a finite difference approximation of mean curvature flow for surfaces of torus type / Alina Mierswa ; Gutachter: Klaus Deckelnick". Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2020. http://d-nb.info/1222670747/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Han, Changan. "Neural Network Based Off-line Handwritten Text Recognition System". FIU Digital Commons, 2011. http://digitalcommons.fiu.edu/etd/363.

Texto completo
Resumen
This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean square error function as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after each update. To evaluate the performance of a training system, three essential factors are considered, in decreasing order of importance: 1) the error rate on the testing set, 2) the processing time needed to recognize a segmented character, and 3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two combinations of training methods are needed for different-case character recognition. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from the conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower case characters and 97% for upper case characters. In the testing phase, the database consists of 20,000 handwritten characters, with 10,000 for each case. Recognizing the 10,000 handwritten test characters required 8.5 seconds of processing time.
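The mean quartic error that this dissertation substitutes for the usual mean square error can be sketched as follows (a generic illustration, not the author's implementation; note that its gradient is cubic in the error, so its third and fourth derivatives are non-zero, unlike the MSE):

```python
import numpy as np

def mean_square_error(y, t):
    """Standard MSE between outputs y and targets t."""
    return np.mean((t - y) ** 2)

def mean_quartic_error(y, t):
    """Mean quartic error: fourth power penalizes large errors far more."""
    return np.mean((t - y) ** 4)

def mean_quartic_grad(y, t):
    # d/dy mean((t - y)^4) = mean(-4 (t - y)^3): cubic in the error
    return np.mean(-4 * (t - y) ** 3)
```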
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Nounagnon, Jeannette Donan. "Using Kullback-Leibler Divergence to Analyze the Performance of Collaborative Positioning". Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/86593.

Texto completo
Resumen
Geolocation accuracy is a crucial, life-or-death factor for rescue teams. Natural and man-made disasters are just a few convincing reasons why fast and accurate position location is necessary. One way to unleash the potential of positioning systems is through the use of collaborative positioning, which consists of simultaneously solving for the positions of two nodes that need to locate themselves. Although the literature has addressed the benefits of collaborative positioning in terms of accuracy, a theoretical foundation on the performance of collaborative positioning has been disproportionately lacking. This dissertation uses information theory to perform a theoretical analysis of the value of collaborative positioning. The main research problem addressed states: 'Is collaboration always beneficial? If not, can we determine theoretically when it is and when it is not?' We show that the immediate advantage of collaborative estimation is the acquisition of another set of information between the collaborating nodes. This acquisition of new information reduces the uncertainty on the localization of both nodes. Under certain conditions, this reduction in uncertainty occurs for both nodes by the same amount; hence collaboration is beneficial in terms of uncertainty. However, reduced uncertainty does not necessarily imply improved accuracy. So, we define a novel theoretical model to analyze the improvement in accuracy due to collaboration. Using this model, we introduce a variational analysis of collaborative positioning to determine factors that affect the improvement in accuracy due to collaboration. We derive range conditions under which collaborative positioning starts to degrade the performance of standalone positioning. We derive and test criteria to determine on-the-fly (ahead of time) whether it is worth collaborating in order to improve accuracy.
The potential applications of this research include, but are not limited to: intelligent positioning systems, collaborating manned and unmanned vehicles, and improvement of GPS applications.
Ph. D.
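The Kullback-Leibler divergence named in the title, used here to quantify the information gained through collaboration, has the standard discrete form D(p‖q) = Σᵢ pᵢ log(pᵢ/qᵢ); a minimal sketch (generic, not the dissertation's positioning model):

```python
import numpy as np

def kl_divergence(p, q):
    """Discrete KL divergence D(p || q) in nats; assumes q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0          # 0 * log 0 is taken as 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```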
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Alexandridis, Roxana Antoanela. "Minimum disparity inference for discrete ranked set sampling data". Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1126033164.

Texto completo
Resumen
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 124 p.; also includes graphics. Includes bibliographical references (p. 121-124). Available online via OhioLINK's ETD Center
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

DeNooyer, Eric-Jan D. "Statistical Idealities and Expected Realities in the Wavelet Techniques Used for Denoising". Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3929.

Texto completo
Resumen
In the field of signal processing, one of the underlying enemies in obtaining a good quality signal is noise. The most common examples of signals that can be corrupted by noise are images and audio signals. Since the early 1980's, a time when wavelet transformations became a modernly defined tool, statistical techniques have been incorporated into processes that use wavelets with the goal of maximizing signal-to-noise ratios. We provide a brief history of wavelet theory, going back to Alfréd Haar's 1909 dissertation on orthogonal functions, as well as its important relationship to the earlier work of Joseph Fourier (circa 1801), which brought about that famous mathematical transformation, the Fourier series. We demonstrate how wavelet theory can be used to reconstruct an analyzed function, ergo, that it can be used to analyze and reconstruct images and audio signals as well. Then, in order to ground the understanding of the application of wavelets to the science of denoising, we discuss some important concepts from statistics. From all of these, we introduce the subject of wavelet shrinkage, a technique that combines wavelets and statistics into a "thresholding" scheme that effectively reduces noise without doing too much damage to the desired signal. Subsequently, we discuss how the effectiveness of these techniques is measured, both in the ideal sense and in the expected sense. We then look at an illustrative example in the application of one technique. Finally, we analyze this example more generally, in accordance with the underlying theory, and make some conclusions as to when wavelets are an effective technique in increasing a signal-to-noise ratio.
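The "thresholding" scheme at the heart of wavelet shrinkage can be illustrated with the classic soft-threshold rule and the Donoho-Johnstone universal threshold (a standard textbook sketch, not code from this thesis):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink wavelet coefficients toward zero by t; those below t vanish."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def universal_threshold(n, sigma):
    """Donoho-Johnstone universal threshold for n coefficients, noise sd sigma."""
    return sigma * np.sqrt(2.0 * np.log(n))
```

Small (noise-dominated) coefficients are zeroed while large (signal-dominated) ones are shrunk slightly, which is how shrinkage suppresses noise without doing too much damage to the desired signal.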
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Gagakuma, Edem Coffie. "Multipath Channel Considerations in Aeronautical Telemetry". BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6529.

Texto completo
Resumen
This thesis describes the use of scattering functions to characterize time-varying multipath radio channels. Channel impulse responses were measured at Edwards Air Force Base (EAFB) and scattering functions generated from the impulse response data. From the scattering functions we compute the corresponding Doppler power spectrum and multipath intensity profile. These functions completely characterize the signal delay and the time-varying nature of the channel in question and are used by systems engineers to design reliable communications links. We observe from our results that flight paths with ample reflectors exhibit significant multipath events. We also examine the bit error rate (BER) performance of a reduced-complexity equalizer for a truncated version of the pulse amplitude modulation (PAM) representation of SOQPSK-TG in a multipath channel. Since this reduced-complexity equalizer is based on the maximum likelihood (ML) principle, we expect it to perform better than any of the filter-based equalizers used in estimating received SOQPSK-TG symbols. As such, we present a comparison between this ML detector and a minimum mean square error (MMSE) equalizer for the same example channel. The example channel used was motivated by the statistical channel characterizations described in this thesis. Our analysis shows that the ML equalizer outperforms the MMSE equalizer in estimating received SOQPSK-TG symbols.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Yapici, Yavuz. "A Bidirectional Lms Algorithm For Estimation Of Fast Time-varying Channels". Phd thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613220/index.pdf.

Texto completo
Resumen
Effort to estimate unknown time-varying channels as a part of high-speed mobile communication systems is of interest especially for next-generation wireless systems. The high computational complexity of the optimal Wiener estimator usually makes its use impractical in fast time-varying channels. As a powerful candidate, the adaptive least mean squares (LMS) algorithm offers a computationally efficient solution with its simple first-order weight-vector update equation. However, the performance of the LMS algorithm deteriorates in time-varying channels as a result of the eigenvalue disparity, i.e., spread, of the input correlation matrix in such channels. In this work, we incorporate the LMS algorithm into the well-known bidirectional processing idea to produce an extension called the bidirectional LMS. This algorithm is shown to be robust to the adverse effects of time-varying channels such as large eigenvalue spread. The associated tracking performance is observed to be very close to that of the optimal Wiener filter in many cases and the bidirectional LMS algorithm is therefore referred to as near-optimal. The computational complexity is observed to increase by the bidirectional employment of the LMS algorithm, but nevertheless is significantly lower than that of the optimal Wiener filter. The tracking behavior of the bidirectional LMS algorithm is also analyzed and eventually a steady-state step-size dependent mean square error (MSE) expression is derived for single antenna flat-fading channels with various correlation properties. The aforementioned analysis is then generalized to include single-antenna frequency-selective channels where the so-called independence assumption is no more applicable due to the channel memory at hand, and then to multi-antenna flat-fading channels. The optimal selection of the step-size values is also presented using the results of the MSE analysis.
The numerical evaluations show a very good match between the theoretical and the experimental results under various scenarios. The tracking analysis of the bidirectional LMS algorithm is believed to be novel in the sense that although there are several works in the literature on the bidirectional estimation, none of them provides a theoretical analysis on the underlying estimators. An iterative channel estimation scheme is also presented as a more realistic application for each of the estimation algorithms and the channel models under consideration. As a result, the bidirectional LMS algorithm is observed to be very successful for this real-life application with its increased but still practical level of complexity, the near-optimal tracking performance and robustness to the imperfect initialization.
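The first-order LMS weight update referred to above, together with a naive bidirectional variant (forward and time-reversed passes averaged — a simplification assumed here for illustration, not the combining rule derived in the thesis), can be sketched as:

```python
import numpy as np

def lms(x, d, mu):
    """Single-tap real LMS: w <- w + mu * e_k * x_k, where e_k = d_k - w * x_k."""
    w, y = 0.0, np.zeros_like(d)
    for k in range(len(d)):
        y[k] = w * x[k]                 # filter output
        w += mu * (d[k] - y[k]) * x[k]  # first-order weight update
    return y, w

def bidirectional_lms(x, d, mu):
    """Average a forward pass with a pass over the time-reversed data."""
    y_f, _ = lms(x, d, mu)
    y_b, _ = lms(x[::-1], d[::-1], mu)
    return 0.5 * (y_f + y_b[::-1])
```

On a noiseless single-tap channel the forward pass alone converges geometrically to the true tap, which the test below checks.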
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Ogorodnikova, Natalia. "Pareto πps sampling design vs. Poisson πps sampling design. : Comparison of performance in terms of mean-squared error and evaluation of factors influencing the performance measures". Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-67978.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Enqvist, Martin. "Linear Models of Nonlinear Systems". Doctoral thesis, Linköping : Linköpings universitet, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5330.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Syntetos, Argyrios. "Forecasting of intermittent demand". Thesis, Online version, 2001. http://bibpurl.oclc.org/web/26215.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Behrle, Charles D. "Computer simulation studies of multiple broadband target localization via frequency domain beamforming for planar arrays". Thesis, Monterey, California. Naval Postgraduate School, 1988. http://hdl.handle.net/10945/22976.

Texto completo
Resumen
Approved for public release; distribution is unlimited
Computer simulation studies of a frequency domain adaptive beamforming algorithm are presented. These simulation studies were conducted to determine the multiple broadband target localization capability and the full angular coverage capability of the algorithm. The algorithm was evaluated at several signal-to-noise ratios with varying sampling rates. The number of iterations that the adaptive algorithm took to reach a minimum estimation error was determined. Results of the simulation studies indicate that the algorithm can localize multiple broadband targets and has full angular coverage capability.
http://archive.org/details/computersimulati00behr
Lieutenant, United States Navy
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Drvoštěp, Tomáš. "Ekonomie vychýleného odhadu". Master's thesis, Vysoká škola ekonomická v Praze, 2014. http://www.nusl.cz/ntk/nusl-193409.

Texto completo
Resumen
This thesis investigates the optimality of heuristic forecasting. According to Goldstein and Gigerenzer (2009), heuristics can be viewed as predictive models whose simplicity exploits the bias-variance trade-off. Economic agents learning in the context of rational expectations (Marcet and Sargent 1989) employ, on the contrary, complex models of the whole economy. Both of these approaches can be perceived as an optimal response to the complexity of the prediction task and the availability of observations. This work introduces a straightforward extension to the standard model of decision making under uncertainty, where agents' utility depends on the accuracy of their predictions and where model complexity is moderated by a regularization parameter. Results of Monte Carlo simulations reveal that in complicated environments, where few observations are at disposal, it is beneficial to construct simple models resembling heuristics. Unbiased models are preferred in more convenient conditions.
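The bias-variance trade-off invoked above rests on the decomposition MSE = bias² + variance; a small numeric sketch (generic, not from the thesis):

```python
import numpy as np

def bias_variance_mse(estimates, truth):
    """Decompose the MSE of repeated scalar estimates into bias^2 + variance."""
    e = np.asarray(estimates, float)
    bias = e.mean() - truth
    var = e.var()
    mse = np.mean((e - truth) ** 2)
    return bias, var, mse   # mse equals bias**2 + var up to rounding
```

A simple heuristic may carry some bias yet win on MSE by having much lower variance when few observations are available, which is the trade-off the thesis studies.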
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Huang, Deng. "Experimental planning and sequential kriging optimization using variable fidelity data". Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1110297243.

Texto completo
Resumen
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 120 p.; also includes graphics (some col.). Includes bibliographical references (p. 114-120). Available online via OhioLINK's ETD Center
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Jun, Shi. "Frequentist Model Averaging For Functional Logistic Regression Model". Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-352519.

Texto completo
Resumen
Frequentist model averaging as a newly emerging approach provides us a way to overcome the uncertainty caused by traditional model selection in estimation. It acknowledges the contribution of multiple models, instead of making inference and prediction purely based on one single model. Functional logistic regression is also a burgeoning method in studying the relationship between functional covariates and a binary response. In this paper, the frequentist model averaging approach is applied to the functional logistic regression model. A simulation study is implemented to compare its performance with model selection. The analysis shows that when conditional probability is taken as the focus parameter, model averaging is superior to model selection based on BIC. When the focus parameter is the intercept and slopes, model selection performs better.
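One common frequentist model-averaging scheme uses smoothed information-criterion weights, so every candidate model contributes in proportion to its support (a generic illustration; the abstract does not specify the paper's actual weighting scheme):

```python
import numpy as np

def ic_weights(ic_values):
    """Smoothed AIC/BIC weights: w_m proportional to exp(-IC_m / 2), normalized."""
    ic = np.asarray(ic_values, float)
    w = np.exp(-0.5 * (ic - ic.min()))   # subtract the minimum for numerical stability
    return w / w.sum()

def model_average(estimates, ic_values):
    """Weighted average of per-model estimates of the focus parameter."""
    return float(np.dot(ic_weights(ic_values), estimates))
```

Model selection corresponds to the degenerate case where the best model gets weight 1; averaging instead hedges against the uncertainty of that choice.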
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Vavruška, Marek. "Realised stochastic volatility in practice". Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165381.

Texto completo
Resumen
The Realised Stochastic Volatility model of Koopman and Scharth (2011) is applied to five stocks listed on the NYSE in this thesis. The aim of this thesis is to investigate the effect of speeding up trade data processing by skipping the cleaning rule that requires the quote data. The framework of the Realised Stochastic Volatility model allows the realised measures to be biased estimates of the integrated volatility, which further supports this approach. The number of errors in recorded trades has decreased significantly during the past years. Different sample lengths were used to construct one-day-ahead forecasts of realised measures to examine the sensitivity of forecast precision to the rolling window length. Using the longest window length does not lead to the lowest mean square error. The dominance of the Realised Stochastic Volatility model in terms of the lowest mean square errors of one-day-ahead out-of-sample forecasts has been confirmed.
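The "realised measures" feeding such a model are built from high-frequency returns; the simplest, realised variance, is the sum of squared intraday log returns (a generic sketch, not the thesis's cleaned-data pipeline):

```python
import numpy as np

def realised_variance(intraday_prices):
    """Sum of squared intraday log returns for one trading day."""
    r = np.diff(np.log(np.asarray(intraday_prices, float)))
    return float(np.sum(r ** 2))
```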
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

MEDEIROS, Rex Antonio da Costa. "Zero-Error capacity of quantum channels". Universidade Federal de Campina Grande, 2008. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1320.

Texto completo
Resumen
In this thesis, the zero-error capacity of discrete memoryless channels is generalized to quantum channels. A new capacity for the transmission of classical information through quantum channels is proposed. The quantum zero-error capacity (QZEC) is defined as the maximum amount of information per channel use that can be sent through a noisy quantum channel with a probability of error equal to zero. The communication protocol restricts codewords to tensor products of input quantum states, while collective measurements across several channel outputs are allowed; the protocol employed is therefore similar to the Holevo-Schumacher-Westmoreland protocol. The problem of finding the QZEC is reformulated using elements of graph theory. This equivalent definition is used to prove properties of the families of quantum states and measurements that attain the QZEC. It is shown that the capacity of a quantum channel in a Hilbert space of dimension d can always be attained using families of at most d pure states. Concerning measurements, collective von Neumann measurements are shown to be necessary and sufficient to attain the capacity. It is discussed whether the QZEC is a non-trivial generalization of the classical zero-error capacity, where non-trivial refers to the existence of quantum channels for which the QZEC can only be attained through families of non-orthogonal quantum states and codes of length two or more. The QZEC of some quantum channels is investigated. It is shown that the problem of computing the QZEC of classical-quantum channels is purely classical. In particular, a quantum channel is exhibited for which it is conjectured that the QZEC can only be attained using a family of non-orthogonal quantum states. If the conjecture is true, the exact value of the capacity can be computed and a quantum block code attaining the capacity can be constructed.
Finally, the QZEC is shown to be upper-bounded by the Holevo-Schumacher-Westmoreland capacity.
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Shikhar. "COMPRESSIVE IMAGING FOR DIFFERENCE IMAGE FORMATION AND WIDE-FIELD-OF-VIEW TARGET TRACKING". Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/194741.

Texto completo
Resumen
Use of imaging systems for performing various situational awareness tasks in military and commercial settings has a long history. There is increasing recognition, however, that a much better job can be done by developing non-traditional optical systems that exploit the task-specific system aspects within the imager itself. In some cases, a direct consequence of this approach can be real-time data compression along with increased measurement fidelity of the task-specific features. In others, compression can potentially allow us to perform high-level tasks such as direct tracking using the compressed measurements without reconstructing the scene of interest. In this dissertation we present novel advancements in feature-specific (FS) imagers for large field-of-view surveillance, and estimation of temporal object-scene changes utilizing the compressive imaging paradigm. We develop these two ideas in parallel. In the first case we show a feature-specific (FS) imager that optically multiplexes multiple, encoded sub-fields of view onto a common focal plane. Sub-field encoding enables target tracking by creating a unique connection between target characteristics in superposition space and the target's true position in real space. This is accomplished without reconstructing a conventional image of the large field of view. System performance is evaluated in terms of two criteria: average decoding time and probability of decoding error. We study these performance criteria as a function of resolution in the encoding scheme and signal-to-noise ratio. We also include simulation and experimental results demonstrating our novel tracking method. In the second case we present a FS imager for estimating temporal changes in the object scene over time by quantifying these changes through a sequence of difference images. The difference images are estimated by taking compressive measurements of the scene. Our goals are twofold. First, to design the optimal sensing matrix for taking compressive measurements.
In scenarios where such sensing matrices are not tractable, we consider plausible candidate sensing matrices that either use the available a priori information or are non-adaptive. Second, we develop closed-form and iterative techniques for estimating the difference images. We present results to show the efficacy of these techniques and discuss the advantages of each.
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Lin, Lizhen. "Nonparametric Inference for Bioassay". Diss., The University of Arizona, 2012. http://hdl.handle.net/10150/222849.

Texto completo
Resumen
This thesis proposes some new model independent or nonparametric methods for estimating the dose-response curve and the effective dosage curve in the context of bioassay. The research problem is also of importance in environmental risk assessment and other areas of health sciences. It is shown in the thesis that our new nonparametric methods while bearing optimal asymptotic properties also exhibit strong finite sample performance. Although our specific emphasis is on bioassay and environmental risk assessment, the methodology developed in this dissertation applies broadly to general order restricted inference.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Chitte, Sree Divya. "Source localization from received signal strength under lognormal shadowing". Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/477.

Texto completo
Resumen
This thesis considers statistical issues in source localization from the received signal strength (RSS) measurements at sensor locations, under the practical assumption of lognormal shadowing. Distance information of the source from sensor locations can be estimated from RSS measurements, and many algorithms directly use powers of distances to localize the source, even though distance measurements are not directly available. The first part of the thesis considers the statistical analysis of distance estimation from RSS measurements. We show that the underlying problem is inefficient and that there is only one unbiased estimator for this problem, whose mean square error (MSE) grows exponentially with noise power. We then provide the linear minimum mean square error (MMSE) estimator, whose bias and MSE are bounded in noise power. The second part of the thesis establishes an isomorphism between estimates of differences between squares of distances and the source location. This is used to completely characterize the class of unbiased estimates of the source location and to show that their MSEs grow exponentially with noise powers. Finally, we propose an estimate based on the linear MMSE estimate of distances that has error variance and bias that are bounded in the noise variance.
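The log-distance path-loss model underlying this kind of analysis inverts RSS to distance; under lognormal (dB-Gaussian) shadowing the naive inversion is biased upward by a known multiplicative factor. A sketch with assumed reference values (P₀ = −40 dBm at d₀ = 1 m, path-loss exponent 3 — illustrative parameters, not the thesis's):

```python
import numpy as np

def rss_to_distance(p_rx_dbm, p0_dbm=-40.0, ple=3.0, d0=1.0):
    """Naive inversion of P = P0 - 10*ple*log10(d/d0); biased under shadowing."""
    return d0 * 10.0 ** ((p0_dbm - p_rx_dbm) / (10.0 * ple))

def shadowing_bias(sigma_db, ple=3.0):
    """E[d_hat]/d for N(0, sigma_db^2) dB noise: exp((c*sigma)^2 / 2), c = ln10/(10*ple)."""
    c = np.log(10.0) / (10.0 * ple)
    return float(np.exp(0.5 * (c * sigma_db) ** 2))
```

The exponential growth of this bias factor in the noise variance is consistent with the thesis's finding that unbiased distance estimation has MSE growing exponentially with noise power.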
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Challakere, Nagaravind. "Carrier Frequency Offset Estimation for Orthogonal Frequency Division Multiplexing". DigitalCommons@USU, 2012. https://digitalcommons.usu.edu/etd/1423.

Texto completo
Resumen
This thesis presents a novel method to solve the problem of estimating the carrier frequency offset in an Orthogonal Frequency Division Multiplexing (OFDM) system. The approach is based on the minimization of the probability of symbol error and is hence called the Minimum Symbol Error Rate (MSER) approach. An existing approach based on Maximum Likelihood (ML) is chosen to benchmark the performance of the MSER-based algorithm. The MSER approach is computationally intensive. The thesis evaluates the approximations that can be made to the MSER-based objective function to make the computation tractable. A modified gradient function based on the MSER objective is developed which provides better performance characteristics than the ML-based estimator. The estimates produced by the MSER approach exhibit lower Mean Squared Error compared to the ML benchmark. The performance of the MSER-based estimator is simulated with Quaternary Phase Shift Keying (QPSK) symbols, but the algorithm presented is applicable to all complex symbol constellations.
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Jones, Haley M. y Haley Jones@anu edu au. "On multipath spatial diversity in wireless multiuser communications". The Australian National University. Research School of Information Sciences and Engineering, 2001. http://thesis.anu.edu.au./public/adt-ANU20050202.152811.

Texto completo
Resumen
The study of the spatial aspects of multipath in wireless communications environments is an increasingly important addition to the study of the temporal aspects in the search for ways to increase the utilization of the available wireless channel capacity. Traditionally, multipath has been viewed as an encumbrance in wireless communications, two of the major impairments being signal fading and intersymbol interference. However, the potential advantages of the diversity offered by multipath-rich environments in multiuser communications have recently been recognised. Space-time coding, for example, is a recent technique which relies on a rich scattering environment to create many practically uncorrelated signal transmission channels. Most often, statistical models have been used to describe the multipath environments in such applications. This approach has met with reasonable success but is limited when the statistical nature of a field is not easily determined or is not readily described by a known distribution.
Our primary aim in this thesis is to probe further into the nature of multipath environments in order to gain a greater understanding of their characteristics and diversity potential. We highlight the shortcomings of beamforming in a multipath multiuser access environment, showing that the ability of a beamformer to resolve two or more signals in angle directly limits its achievable capacity.
We test the viability of multipath as a source of spatial diversity, the limiting case of which is co-located users. We introduce the concept of separability to define the fundamental limits of a receiver to extract the signal of a desired user from interfering users' signals and noise. We consider the separability performances of the minimum mean square error (MMSE), decorrelating (DEC) and matched filter (MF) detectors as we bring the positions of a desired and an interfering user closer together. We show that both the MMSE and DEC detectors are able to achieve acceptable levels of separability with the users as close as λ/10.
In seeking a better understanding of the nature of multipath fields themselves, we take two approaches. In the first, we take a path-oriented approach. The effects on the variation of the field power of the relative values of parameters such as amplitude and propagation direction are considered for a two-path field. The results are applied to a theoretical analysis of the behaviour of linear detectors in multipath fields. This approach is insightful for fields with small numbers of multipaths, but quickly becomes mathematically complex.
In a more general approach, we take a field-oriented view, seeking to quantify the complexity of arbitrary fields. We find that a multipath field has an intrinsic dimensionality of (πe)R/λ ≈ 8.54R/λ for a field in a two-dimensional circular region, increasing only linearly with the radius R of the region. This result implies that there is no such thing as an arbitrarily complicated multipath field: a field generated by any number of nearfield and farfield, specular and diffuse multipath reflections is no more complicated than a field generated by a limited number of plane waves. As such, there are limits on how rich multipath can be. This result has significant implications, including means i) to determine a parsimonious parameterization for arbitrary multipath fields and ii) to synthesize arbitrary multipath fields with arbitrarily located nearfield or farfield, spatially discrete or continuous sources. The theoretical results are corroborated by examples of multipath field analysis and synthesis.
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

He, Jun. "THE APPLICATION OF LAST OBSERVATION CARRIED FORWARD (LOCF) IN THE PERSISTENT BINARY CASE". VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3621.

Texto completo
Resumen
The main purpose of this research was to evaluate use of Last Observation Carried Forward (LOCF) as an imputation method when persistent binary outcomes are missing in a Randomized Controlled Trial. A simulation study was performed to see the effect of dropout rate and type of dropout (random or associated with treatment arm) on Type I error and power. Properties of estimated event rates, treatment effect, and bias were also assessed. LOCF was also compared to two versions of complete case analysis - Complete1 (excluding all observations with missing data), and Complete2 (only carrying forward observations if the event is observed to occur). LOCF was not recommended because of the bias. Type I error was increased, and power was decreased. The other two analyses also had poor properties. LOCF analysis was applied to a mammogram dataset, with results similar to the simulation study.
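LOCF itself is a one-line imputation rule — each missing value is replaced by the most recent observed one; a minimal sketch (generic, not the simulation code of the thesis):

```python
import numpy as np

def locf(series):
    """Carry the last observed value forward over np.nan gaps in a 1-D series."""
    out = np.asarray(series, dtype=float).copy()
    for i in range(1, out.size):
        if np.isnan(out[i]):
            out[i] = out[i - 1]
    return out
```

Freezing each dropout at its last observed value is exactly what biases estimated event rates when dropout is related to treatment arm, as the simulation study above shows.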
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Thiebaut, Nicolene Magrietha. "Statistical properties of forward selection regression estimators". Diss., University of Pretoria, 2011. http://hdl.handle.net/2263/29520.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
Ofrecemos descuentos en todos los planes premium para autores cuyas obras están incluidas en selecciones literarias temáticas. ¡Contáctenos para obtener un código promocional único!

Pasar a la bibliografía