
Dissertations / Theses on the topic 'Electric engineering, estimates'

Listed below are the top 50 dissertations and theses for research on the topic 'Electric engineering, estimates.'


Where the metadata makes them available, the full text of each publication can be downloaded as a PDF and its abstract read online.


1

Ye, Ziwei. "A Label-based Conditional Mutual Information Estimator using Contrastive Loss Functions." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286781.

Abstract:
In the field of machine learning, representation learning is a collection of techniques that transform raw data into a form that can be exploited effectively by learning algorithms. In recent years, representation learning based on deep neural networks has been widely used in image learning, recognition, classification and other fields; one representative example is the mutual information estimator/encoder. In this thesis, a new form of contrastive loss function is proposed that can be applied to existing mutual information encoder networks. Deep-learning-based representation learning differs substantially from traditional machine learning feature extraction. In general, the features obtained by feature extraction are surface-level and can be understood by humans, while representation learning captures the underlying structure of the data, which is easy for machines to exploit but difficult for humans to interpret. Because of this difference, when the scale of the data is small, human prior knowledge about the data can play a large role, so feature extraction algorithms have the advantage; as the scale of the data grows, the contribution of prior knowledge declines sharply, and the strong computational capacity of deep learning is needed to compensate, so representation learning performs better. The research in this thesis targets a more particular situation, in which the training set is small and the test set is large. In this case, two issues must be considered: the distributional representation of the model and the model's tendency to overfit. The LMIE (label-based mutual information estimator) model proposed in this thesis offers advantages on both issues.
The LMIE model contains three main parts: (a) a neural-network-based mutual information encoder; (b) a loss function calculation module; (c) a linear classifier. Among these, the loss function calculation module is the most important, and is the main factor distinguishing this model from others.
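The abstract does not state the LMIE loss in closed form, so as a rough, hypothetical illustration of the contrastive-loss family it builds on, here is a minimal InfoNCE-style loss in plain Python (the function name and score-matrix setup are assumptions for illustration, not the thesis's code):

```python
import math

def info_nce_loss(scores):
    """Minimal InfoNCE-style contrastive loss (illustrative only).

    scores[i][j] is a similarity score between sample i and candidate j;
    diagonal entries are the positive pairs, off-diagonals the negatives.
    Returns the mean negative log-likelihood of picking the positive.
    """
    n = len(scores)
    total = 0.0
    for i in range(n):
        # log-sum-exp over the row is the softmax normalizer
        log_denom = math.log(sum(math.exp(s) for s in scores[i]))
        total += log_denom - scores[i][i]
    return total / n
```

When the positives score far above the negatives the loss approaches zero; with uninformative scores it sits at log(n).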
2

Tao, Zuoyu. "Improved uncertainty estimates for geophysical parameter retrieval." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/61516.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (p. 167-169).
Algorithms that retrieve geophysical parameters from radiances measured by satellite-borne instruments play a large role in helping scientists monitor the state of the planet. Current retrieval algorithms based on neural networks are superior in accuracy and speed to physics-based algorithms such as iterated minimum variance (IMV). Unlike IMV, however, they provide no form of error estimation. This thesis examines the suitability of several approaches to adding confidence intervals and other error estimates to the retrieval algorithm, as well as alternative machine learning methods that can both retrieve the desired parameters and assign error bars. Test datasets included current-generation operational instruments such as AIRS/AMSU, as well as a hypothetical future hyperspectral microwave sounder. Mixture density networks (MDNs) and sparse pseudo-input Gaussian processes (SPGPs) were found to be the most accurate at variance prediction. Both are novel methods in the field of remote sensing. MDNs also had training and testing times similar to neural networks, while SPGPs often took three times as long to train in typical cases. As a baseline, neural networks trained to estimate variance directly were also tested, but were found lacking in accuracy and reliability compared with the other methods.
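As a hedged sketch of the "retrieval with error bars" idea (not the thesis's MDN or SPGP implementations), a network can be trained to emit a mean and a log-variance per parameter under a Gaussian negative log-likelihood; the per-sample loss is:

```python
import math

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of target y under N(mu, exp(log_var)).

    Minimizing this jointly over the (mu, log_var) outputs rewards accurate
    retrievals AND honest variance estimates: overconfident predictions are
    penalized by the quadratic term, underconfident ones by the log_var term.
    """
    var = math.exp(log_var)
    return 0.5 * (math.log(2.0 * math.pi) + log_var + (y - mu) ** 2 / var)
```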
by Zuoyu Tao.
M.Eng.
3

Riggi, Frank Peter. "Robust invariant feature correspondence for scene geometry estimates." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99789.

Abstract:
This thesis presents a novel approach to increasing the accuracy and robustness of 3D scene geometry estimates based on two 2D images of the same scene. Our approach focuses on finding feature correspondences, i.e. matching similar features between two images, and on estimating the fundamental matrix (FM), a key component of understanding scene geometry given two views. The accuracy of the fundamental matrix depends acutely on the number and accuracy of the correspondences. Determining point correspondences is a difficult and open research problem because many features are not robust enough to handle changes in illumination, scale, and viewpoint. As a result, there may be too few correspondences to estimate the fundamental matrix; and even when the FM can be estimated, it may not be accurate enough for the required task. Invariant features have been introduced in the literature to model local geometry in such a way that they can be matched over more disparate changes of illumination, viewpoint, and scale. However, cases still exist in which too few correspondences are present to estimate the fundamental matrix properly. To address this problem, we introduce the Transfer of Invariant Parameters (TIP), a new technique that exploits the informative geometric parameters stored in several popular classes of invariant features in order to generate additional point correspondences. We anticipate that adding a small number of these points for each match will greatly reduce errors in the fundamental matrix estimate and permit its computation in contexts where too few matches are found. In addition, we present modifications to an existing RANSAC robust estimator to better exclude outliers. We demonstrate our approach with two of the most popular scale- and affine-invariant feature detectors.
We test our methods on real images of 3D scenes and show that, with the inclusion of TIP points, it is possible to estimate the FM with fewer than seven corresponding features while still using standard estimation approaches. The resulting estimates also show that with the inclusion of additional TIP point correspondences, the average residual error is lower, fewer estimates are deemed catastrophic failures, and epipole locations are found to be more stable when compared to standard FM estimates.
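For orientation, the baseline the thesis improves on can be sketched with the classical (unnormalized) 8-point algorithm in NumPy; this is a generic textbook method, not the TIP technique or the modified RANSAC estimator proposed here:

```python
import numpy as np

def fundamental_matrix_8pt(x1, x2):
    """Estimate F from >= 8 correspondences via the (unnormalized)
    8-point algorithm: solve min ||A f|| over unit f with an SVD,
    then enforce the rank-2 constraint.
    x1, x2: (N, 2) arrays of matching points; x2_h^T F x1_h = 0.
    """
    A = np.array([[u2*u1, u2*v1, u2, v2*u1, v2*v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)       # null vector of A, reshaped
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0                     # project onto rank-2 matrices
    return U @ np.diag(S) @ Vt
```

A practical pipeline would normalize the point coordinates first (Hartley's normalization) and wrap this solver in a robust estimator such as RANSAC.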
4

Menezes, Karol Fidelis 1966. "Signal delay estimates for design of multichip assemblies." Thesis, The University of Arizona, 1992. http://hdl.handle.net/10150/278171.

Abstract:
Signal delay estimates for high-speed interconnection nets are formulated using analytical methods. The equations are suitable for estimating delay in the interconnects of printed wiring boards and multi-chip modules, where the resistance of wires is small. The effects of drivers, receivers, chip interfaces and wires on delay are captured with simple models: wires are treated as lossless transmission lines with capacitive discontinuities modeling receiver chip interfaces, and drivers are voltage sources with series resistance. Signal delay consists of the line propagation delay plus delay due to the change in rise time and reflections at the discontinuities. Commonly used net topologies are identified, and wiring rules and delay predictors are provided for each of them. It is shown that interconnect delay can be formulated as a nonlinear function of the product of the line characteristic impedance and the load capacitance. SPICE simulations are used to validate the analytical derivations.
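A toy version of such a delay predictor, assuming a lossless line with one lumped capacitive load charged through the characteristic impedance Z0 (a simplification of the thesis's models; the ln 2 rise-to-50% factor is an illustrative assumption):

```python
import math

def interconnect_delay(length_m, l_per_m, c_per_m, c_load=0.0):
    """Crude interconnect delay sketch (illustrative, not the thesis's
    predictor): lossless-line propagation delay plus a first-order RC
    penalty for a capacitive discontinuity charged through Z0.
    l_per_m in H/m, c_per_m in F/m, c_load in F; returns seconds.
    """
    t_prop = length_m * math.sqrt(l_per_m * c_per_m)   # l / v_phase
    z0 = math.sqrt(l_per_m / c_per_m)                  # characteristic impedance
    t_load = math.log(2.0) * z0 * c_load               # 50% point of the RC step
    return t_prop + t_load
```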
5

Phan, Andrew Minh Tri. "Obtaining dense road speed estimates from sparse GPS measurements." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=32609.

Abstract:
A major challenge for traffic management systems is the inference of traffic flow in regions of the network for which there is little data. In this thesis, GPS-based vehicle locator data from a fleet of 40-60 roving ambulances are used to estimate traffic congestion along a network of 20,000 streets in the city of Ottawa, Canada. Essentially, the road network is represented as a directed graph and a belief propagation algorithm is used to interpolate measurements from the fleet. The system incorporates a number of novel features: it makes no distinction between freeways and surface streets, incorporates both historical and live sensor data, handles user inputs such as road closures and manual speed overrides, and is computationally efficient, providing updates every 5 to 6 minutes on commodity hardware. Experimental results are presented that address the key issue of validating the performance and reliability of the system.
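A greatly simplified stand-in for the interpolation step, using plain neighbor-averaging relaxation on the road graph rather than the thesis's belief propagation (the 50 km/h prior and the undirected treatment of adjacency are assumptions for illustration):

```python
def interpolate_speeds(neighbors, observed, iters=200, prior=50.0):
    """Toy interpolation of road-segment speeds on a graph.

    Observed segments are clamped to their measurements; unobserved
    segments repeatedly relax to the mean of their neighbors, so sparse
    measurements diffuse over the network (a crude stand-in for the
    belief-propagation scheme described in the abstract).
    neighbors: {segment: [adjacent segments]}
    observed:  {segment: measured speed}
    """
    speeds = {seg: observed.get(seg, prior) for seg in neighbors}
    for _ in range(iters):
        for seg, nbrs in neighbors.items():
            if seg not in observed and nbrs:
                speeds[seg] = sum(speeds[n] for n in nbrs) / len(nbrs)
    return speeds
```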
6

Farah, Kamal. "ROC- and LTF- based estimates of neural- behavioral and neural-neural correlations." Thesis, McGill University, 2014. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=123145.

Abstract:
Observed correlations between responses in visual cortex and perceptual performance help draw a functional link between neural activity and visually guided behavior. These correlations are commonly derived as ROC-based neural-behavioral covariances (referred to as choice or detect probability) using boxcar analysis windows. Although boxcar windows capture the covariation between neural activity and behavior during steady-state stimulus presentations, they are not optimized to capture these correlations during realistic time-varying visual inputs. In this thesis, we implemented a matched-filter technique, combined with cross-validation, to improve the estimation of ROC-based neural-behavioral covariance under dynamic stimulus conditions. We show that this approach maximizes the area under the ROC curve and converges to the true neural-behavioral covariance using a Poisson spiking model. We also demonstrate that the matched filter, combined with cross-validation, reveals the dynamics of the neural-behavioral covariations of individual MT neurons during the detection of a brief motion stimulus. Temporal correlations among responses in the visual cortex, on the other hand, have a substantial effect on determining the functional connectivity between neurons. In order to study the interactions of correlations in neural processing, we measured the linear transfer function between cortical areas V1 and V2 using data collected with multi-electrode extracellular recordings. Our aim was to study the effect that a single V1 action potential has on V2 neurons using linear systems identification. A linear transfer function (referred to as a kernel) is a useful metric for understanding functional connectivity between two cortical areas because it quantifies the temporal integration by V2 of V1 inputs. We used a multiple-input single-output regression model to estimate the pairwise V1-V2 kernels.
Because of the large number of simultaneous V1 inputs used in our model, this multivariate analysis has the advantage of minimizing the bias in the kernels due to correlated activity between V1 neurons. We estimated 25,470 kernels from the blank and stimulus periods. Kernel quality was evaluated with a signal-to-noise ratio (SNR) of the estimated kernel variance to the shuffled kernel variance. Putative good kernels (SNR > 1) were extracted from the blank (4,665) and stimulus (2,542) conditions, and both were found to be exponential in shape with 18 and 27 ms time constants, respectively. Thus, V2 neurons tended to integrate V1 inputs over an exponentially decaying window, with a combined average time constant of 21 ms, that was independent of the occurrence of a visual stimulus. Although the dynamics of cortical circuitry likely contribute to our measured kernels, the integrative properties of single neurons appear to be a dominant component of the V1-V2 linear transfer function.
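As a generic sketch of the matched-filter ROC idea described above (not the authors' code), each trial's spike-count time series can be projected onto a temporal template, and separability of signal and noise trials scored by the area under the ROC curve:

```python
def matched_filter_scores(trials, template):
    """Project each trial (a spike-count time series) onto a temporal
    template; the matched filter weights the time bins where signal is
    expected, instead of a flat boxcar window."""
    return [sum(x * w for x, w in zip(trial, template)) for trial in trials]

def roc_auc(signal_scores, noise_scores):
    """Area under the ROC curve by pairwise comparison:
    P(signal score > noise score) + 0.5 * P(tie)."""
    wins = ties = 0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                wins += 1
            elif s == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(signal_scores) * len(noise_scores))
```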
7

Chan, Eric Wai Chi. "Novel motion estimators for video compression." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6864.

Abstract:
In this thesis, the problem of motion estimation is addressed from two perspectives, namely hardware architecture and reduced-complexity algorithms in the spatial and transform domains. First, a VLSI architecture that implements the full-search block matching algorithm in real time is presented. Interblock dependency is exploited, so the architecture can meet real-time requirements in various applications. Most importantly, the architecture is simple, modular and cascadable, and hence easily implementable in VLSI as a codec. The spatial-domain algorithm consists of a layered structure and alleviates the local-optimum problem. Most importantly, it employs a simple matching criterion, a modified pixel difference classification (MPDC), and hence has reduced computational complexity. In addition, the algorithm is compatible with the recently proposed MPEG-1 video compression standard. Simulation results indicate that the proposed algorithm provides performance comparable to the algorithms reported in the literature at a significantly reduced computational complexity. Its hardware implementation is also very simple because of the binary operations used in the matching criterion. Finally, we present a wavelet-transform-based fast multiresolution motion estimation (FMRME) scheme. Here, the wavelet transform is used to exploit both spatial and temporal redundancies, resulting in an efficient coder. FMRME exploits the correlations among the orientation subimages of the wavelet pyramid structure, making the motion estimation process efficient and significantly reducing the side information for motion vectors, which corresponds to significant improvements in the coding performance of the FMRME-based wavelet coder for video compression. Simulation results demonstrate the superior coding performance of the FMRME-based wavelet transform coder.
(Abstract shortened by UMI.)
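The full-search block matching baseline that the VLSI architecture implements can be sketched in a few lines (a generic SAD criterion is shown here; the MPDC criterion and the hardware mapping are not):

```python
def full_search_sad(ref, cur, bx, by, bs, rng):
    """Full-search block matching: for the bs-by-bs block of `cur` at
    (by, bx), scan every displacement in [-rng, rng]^2 within `ref` and
    return the motion vector (dx, dy) minimizing the sum of absolute
    differences (SAD), together with that SAD.
    """
    h, w = len(ref), len(ref[0])
    best, best_sad = (0, 0), float('inf')
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y0, x0 = by + dy, bx + dx
            if y0 < 0 or x0 < 0 or y0 + bs > h or x0 + bs > w:
                continue                     # candidate block out of frame
            sad = sum(abs(cur[by + i][bx + j] - ref[y0 + i][x0 + j])
                      for i in range(bs) for j in range(bs))
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best, best_sad
```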
8

Miller, Erik G. (Erik Gundersen). "An analysis of surface area estimates of binary volumes under three tilings." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43929.

9

Molins, Jiménez Antonio. "Multimodal integration of EEG and MEG data using minimum ℓ₂-norm estimates." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40528.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (leaves 69-74).
The aim of this thesis was to study the effects of multimodal integration of electroencephalography (EEG) and magnetoencephalography (MEG) data on the minimum ℓ₂-norm estimates of cortical current densities. We investigated analytically the effect of including EEG recordings in MEG studies versus the addition of new MEG channels. To further confirm these results, clinical datasets comprising concurrent MEG/EEG acquisitions were analyzed. Minimum ℓ₂-norm estimates were computed using MEG alone, EEG alone, and the combination of the two modalities. Localization accuracy of responses to median-nerve stimulation was evaluated to study the utility of combining MEG and EEG.
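The minimum ℓ₂-norm estimate has a simple closed form; a generic sketch with lead field L and measurement vector y follows (per-modality noise whitening and depth weighting, which a real MEG/EEG pipeline applies before stacking the modalities into L, are omitted here):

```python
import numpy as np

def min_norm_estimate(L, y, lam=0.0):
    """Minimum l2-norm source estimate for the underdetermined system
    y = L x:  x = L^T (L L^T + lam I)^(-1) y.
    With lam = 0 this is the pseudoinverse solution; lam > 0 gives the
    Tikhonov-regularized variant. Stacking MEG and EEG rows into L
    (after whitening each modality) yields the multimodal estimate.
    """
    gram = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(gram, y)
```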
by Antonio Molins Jiménez.
S.M.
10

Millet, Floyd W. "Improving Electromagnetic Bias Estimates." Diss., Brigham Young University, 2004. http://contentdm.lib.byu.edu/ETD/image/etd525.pdf.

11

Benner, Elyse. "Digital Signal Processing of Human Skin Videos to Estimate Regions of Significant Blood Perfusion." Case Western Reserve University School of Graduate Studies / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=case1441126290.

12

Wineteer, Alexander Grant. "Towards Improved Estimates of Upper Ocean Energetics." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1566.

Abstract:
The energy exchanged between the atmosphere and the ocean is an important parameter in understanding the Earth's climate. One way of quantifying this energy exchange is through "wind work," the work done on the ocean by the wind. Since wind work is calculated from the interaction between ocean surface currents and surface wind stress, a number of surface current decompositions can be used to decompose wind work calculations. In this research, geostrophic, ageostrophic, Ekman, and total current decompositions are all used to calculate wind work. Geostrophic currents arise from the balance of surface pressure gradients and the Coriolis effect. Ageostrophic currents, on the other hand, are difficult to calculate because they comprise many types of currents; they are generally defined as any current not in geostrophic balance. Their main component, Ekman currents, which arise from the balance of surface wind stress and the Coriolis effect, is used in this work to approximate ageostrophic currents. Finally, total currents are the sum of all currents in the ocean. Using high-resolution global NASA ocean models, the wind work on the global oceans is estimated via a number of decompositions, finding about 3.2 TW, 0.32 TW, and 3.05 TW for total, geostrophic, and Ekman wind work respectively, when taking a 7-day window average of surface currents and a 1-day average of surface stress. The averaging period for currents is found to significantly affect the calculated wind work, with greater than 50 percent difference between 1 and 15 days of averaging. The same total, geostrophic, and Ekman wind work computed with 1-day averages of both wind stress and surface currents comes to 5.5 TW, 0.03 TW, and 6.3 TW respectively, indicating that high-frequency currents are very important to wind work.
Seasonally, wind work is found to be at a maximum during the Northern Hemisphere (NH) summer and at a minimum during the NH winter months. To help motivate the funding of a Doppler scatterometer, simulations are used to show the capabilities of such an instrument in measuring wind work. The DopplerScat simulations find that a satellite capable of measuring coincident surface vector winds and surface vector currents, with 1.1 m/s wind speed error and 0.5 m/s current speed error, could estimate global wind work to within 2 percent accuracy on an 8-day average with daily global snapshots.
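The wind work integrand itself is just the dot product of surface stress and surface current summed over the grid; a toy gridded version (a uniform cell area is an assumption for illustration):

```python
def wind_work_watts(tau_x, tau_y, u, v, cell_area_m2):
    """Wind work over a flat grid: sum of (stress . surface current)
    per cell, times the cell area.  tau_* in N/m^2, u/v in m/s.
    A current flowing against the stress contributes negative work.
    """
    return cell_area_m2 * sum(tx * uu + ty * vv
                              for tx, ty, uu, vv in zip(tau_x, tau_y, u, v))
```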
13

Park, Joon B. "Fault-Tolerant Nonlinear Estimator-Based Direct Torque Control of Sensorless AC Motor Drives." Thesis, Southern Illinois University at Edwardsville, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10982121.

Abstract:

The advancement of sophisticated power electronics technology and rising expectations of sensing reliability have contributed to the rapid deployment of sensorless control of AC motor drives. The purpose of this thesis is to provide comparative studies of the extended Kalman filter (EKF), the fault-tolerant extended Kalman filter (FTEKF), and the unscented Kalman filter (UKF) for sensorless direct torque control of permanent magnet AC motor (PMAC) and induction motor (IM) drives, with the goal of improving Kalman-filter-based state estimation under external disturbances, noise, and measurement failures. The proposed fault-tolerant Kalman filtering control algorithm is robust to modeling uncertainties and sensing failures. Computer simulation studies and hardware implementation results show that the proposed second-order fault-tolerant extended Kalman filter (SOFTEKF) provides superior state-estimation performance compared with the unscented Kalman filter and the traditional extended Kalman filter for sensorless direct torque control of AC motor drives.
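For orientation, a generic EKF predict/update cycle is sketched below; the motor models, the fault-tolerance logic, and the second-order terms of the SOFTEKF are not shown, and f, h and their Jacobians F, H are supplied by the caller (a fault-tolerant variant would, for instance, inflate R or gate the innovation when a sensor is flagged as failed):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    f, h: nonlinear process and measurement models;
    F, H: their Jacobians evaluated at the current estimate;
    Q, R: process and measurement noise covariances.
    """
    # predict
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # update
    innov = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
    return x_new, P_new
```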

14

Townsend, Daphne. "Clinical trial of estimated risk stratification prediction tool." Thesis, University of Ottawa (Canada), 2007. http://hdl.handle.net/10393/27926.

Abstract:
This work presents doctors with a model of the estimated degree of risk of rare and important neonatal outcomes, to aid better decisions and improved allocation of equipment and resources. An extensive list of admission-day parameters is reduced to minimum variable sets to create models for outcomes relevant to decision-making in the neonatal intensive care unit. The models are applied to a special collection of cases and compared to neonatologists' risk estimates. A comparative analysis of the physicians' predictions and the models' discrimination abilities highlights areas of success and areas to improve in future trials; outcomes on which the physicians were stronger indicate where model sensitivity could be increased. Doctors responded positively to the prediction interface concept and to the estimated risk stratification models. A substantial effort was made to conduct the usability and performance evaluations within the ethical standards that are especially important for engineering healthcare management applications.
15

Richmond, Christ D. (Christ David). "Statistical analysis of adaptive maximum-likelihood signal estimator." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36952.

Abstract:
Thesis (Elec. E.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1995.
Includes bibliographical references (leaves 56-57).
by Christ D. Richmond.
Elec.E.
16

Wisniewski, Wit Tadeusz 1962. "An event qualifier for double differentiator time of arrival estimators." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/277993.

Abstract:
A low-variance pulse time-of-arrival (TOA) estimator is considered. It should be insensitive to all non-TOA parameters and operate without a priori information about any pulse parameters, assuming only that they are constrained to a wide range. The estimator is driven by a wide-band input diode detector with a narrow filtered base-band output. The double differentiator TOA estimator is selected; it marks the inflection points of pulses and of noise. An event qualifier is therefore necessary to distinguish the pulse-only TOAs; it is synthesized by exploiting the distinctive polarities of the pulses and their derivative, detecting level crossings on both. In the presence of noise, the qualifier is subject to missed detections and false alarms. Error performance is established at a constant false alarm rate (FAR), set by the choice of threshold level pairs. Detection probability is maximized by simulation over the locus of constant-FAR levels. Design information for operating the qualifier is provided.
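A toy discrete-time version of the qualified double-differentiator idea (the particular thresholding scheme below is an illustrative assumption, not the thesis's qualifier design):

```python
def qualified_toa(s, level, slope_level):
    """Mark the first sample where the second difference crosses zero
    (an inflection point, as the double differentiator does) AND the
    event qualifier holds: the signal exceeds `level` and its first
    difference exceeds `slope_level`, rejecting noise-only inflections.
    Returns the sample index, or None if no qualified event is found.
    """
    for i in range(2, len(s) - 1):
        d2_prev = s[i] - 2 * s[i - 1] + s[i - 2]
        d2 = s[i + 1] - 2 * s[i] + s[i - 1]
        crossing = d2_prev > 0 >= d2          # concave-up -> concave-down
        qualified = s[i] > level and (s[i] - s[i - 1]) > slope_level
        if crossing and qualified:
            return i
    return None
```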
17

Plourde, Eric. "Bayesian short-time spectral amplitude estimators for single-channel speech enhancement." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66864.

Full text
Abstract:
Single-channel speech enhancement algorithms are used to remove background noise in speech. They are present in many common devices such as cell phones and hearing aids. In the Bayesian short-time spectral amplitude (STSA) approach for speech enhancement, an estimate of the clean speech STSA is derived by minimizing the statistical expectation of a chosen cost function. Examples of such estimators are the minimum mean square error (MMSE) STSA, the β-order MMSE STSA (β-SA), which includes a power law parameter, and the weighted Euclidean (WE), which includes a weighting parameter. This thesis analyzes single-channel Bayesian STSA estimators for speech enhancement with the aim of, firstly, gaining a better understanding of their properties and, secondly, proposing new cost functions and statistical models to improve their performance. In addition to a novel analysis of the β-SA estimator for parameter β ≤ 0, three new families of estimators are developed in this thesis: the Weighted β-SA (Wβ-SA), the Generalized Weighted family of STSA estimators (GWSA) and a family of multi-dimensional Bayesian STSA estimators. The Wβ-SA combines the power law of the β-SA and the weighting factor of the WE. Its parameters are chosen based on the characteristics of the human auditory system, which is found to have the advantage of improving the noise reduction at high frequencies while limiting the speech distortions at low frequencies. An analytical generalization of a cost function structure found in many existing Bayesian STSA estimators is proposed through the GWSA family of estimators. This allows a unification of Bayesian STSA estimators and, moreover, provides a better understanding of this general class of estimators. Finally, we propose a multi-dimensional family of estimators that accounts for the correlated frequency components in a digitized speech signal. In fact, the spectral components of the clean
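The common recipe behind all of these Bayesian STSA estimators, pick the amplitude estimate that minimizes the expected cost under the posterior of the clean amplitude, can be illustrated numerically. The sketch below approximates the posterior with samples (a gamma stand-in, not a speech model) and searches a grid; with the squared-error cost it recovers the posterior mean, i.e. the MMSE estimate:

```python
import numpy as np

def bayes_stsa_estimate(posterior_samples, cost, grid):
    """Return the amplitude estimate on the grid that minimizes the Bayes
    risk E[C(A, A_hat)] under a sampled posterior of the clean STSA."""
    risks = [np.mean(cost(posterior_samples, a_hat)) for a_hat in grid]
    return grid[int(np.argmin(risks))]

rng = np.random.default_rng(0)
samples = rng.gamma(shape=4.0, scale=0.5, size=50_000)   # stand-in posterior
grid = np.linspace(0.01, 6.0, 600)
mmse = bayes_stsa_estimate(samples, lambda a, a_hat: (a - a_hat) ** 2, grid)
# squared-error cost -> posterior mean (shape * scale = 2.0 here)
```

Swapping in a different `cost` (e.g. an error between powers of amplitudes, as in β-SA, or a weighted error, as in WE) changes which estimate the same machinery returns.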
APA, Harvard, Vancouver, ISO, and other styles
18

Tsoutsas, Athanasios. "Designing a sensorless torque estimator for direct torque control of an induction motor." Thesis, Monterey, California : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Sep/09Sep%5FTsoutsas.pdf.

Full text
Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, September 2009.
Thesis Advisor(s): Julian, Alexander L. "September 2009." Description based on title screen as viewed on November 5, 2009. Author(s) subject terms: Induction Motor, Electromagnetic Torque Estimator, Field Programmable Gate Array (FPGA), XILINX. Includes bibliographical references (p. 59). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
19

Bhagawat, Pankaj. "Design of a robust parameter estimator for nominally Laplacian noise." Texas A&M University, 2003. http://hdl.handle.net/1969/107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Pelletier, Stéphane. "High-resolution video synthesis from mixed-resolution video based on the estimate-and-correct method." Thesis, McGill University, 2003. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79253.

Full text
Abstract:
A technique to increase the frame rate of digital video cameras at high resolution is presented. The method relies on special video hardware capable of simultaneously generating low-speed, high-resolution frames and high-speed, low-resolution frames. The algorithm follows an estimate-and-correct approach, in which a high-resolution estimate is first produced by translating the pixels of the high-resolution frames produced by the camera with respect to the motion dynamics observed in the low-resolution ones. The estimate is then compared against the current low-resolution frame and corrected locally as necessary for consistency with the latter. This is done by replacing the wrong pixels of the estimate with pixels from a bilinear interpolation of the current low-resolution frame. Because of their longer exposure time, high-resolution frames are more prone to motion blur than low-resolution frames, so a motion blur reduction step is also applied. Simulations demonstrate the ability of our technique to synthesize high-quality, high-resolution frames at modest computational expense.
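The correction step described above can be sketched in NumPy. Here nearest-neighbour upsampling stands in for the bilinear interpolation of the thesis, and a simple block average simulates how the camera would observe the estimate at low resolution; pixels of the estimate that disagree with the current low-resolution frame are replaced:

```python
import numpy as np

def correct_estimate(est_hr, lr_frame, scale, tol):
    """Correction step: keep high-resolution pixels consistent with the
    current low-resolution frame, replace the rest by upsampled values."""
    h, w = lr_frame.shape
    # nearest-neighbour upsampling stands in for bilinear interpolation
    up = np.kron(lr_frame, np.ones((scale, scale)))
    # simulate how the camera would see the estimate at low resolution
    est_lr = est_hr.reshape(h, scale, w, scale).mean(axis=(1, 3))
    bad = np.kron((np.abs(est_lr - lr_frame) > tol).astype(int),
                  np.ones((scale, scale), dtype=int)).astype(bool)
    out = est_hr.copy()
    out[bad] = up[bad]
    return out

hr_truth = np.zeros((8, 8)); hr_truth[:, 4:] = 1.0   # current true scene
est = np.zeros((8, 8)); est[:, 2:] = 1.0             # stale estimate (edge moved)
lr = hr_truth.reshape(4, 2, 4, 2).mean(axis=(1, 3))  # low-resolution observation
corrected = correct_estimate(est, lr, scale=2, tol=0.25)
```

In this toy case every inconsistent block is overwritten, so the corrected frame matches the true scene at the low-resolution accuracy limit.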
APA, Harvard, Vancouver, ISO, and other styles
21

Blodgett, Jeffrey Richard. "Analysis, Validation, and Improvement of High-Resolution Wind Estimates from the Advanced Scatterometer (ASCAT)." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/5614.

Full text
Abstract:
The standard L2B ocean wind product from the Advanced Scatterometer (ASCAT) is retrieved as a 25 km product on a 12.5 km grid. Ultra-high resolution (UHR) processing allows ASCAT wind retrieval on a high-resolution 1.25 km grid. Ideally, such a high-resolution sample grid provides wind information down to a 2.5 km scale, allowing better analysis of winds with high spatial variability such as those in near-coastal regions and storms. Though the wind field is sampled on a finer grid, the actual data resolution needs to be validated. This thesis provides an analysis and validation of ASCAT UHR wind estimates in order to determine the improvement in resolution compared to the L2B product. This is done using analysis tools such as statistics, the power spectrum, and derivative fields, and through comparison to other high-resolution data such as synthetic aperture radar (SAR). The improvement of UHR wind retrieval is also explored by reducing ambiguity selection errors and correcting for contamination of wind vectors near land. Results confirm that ASCAT UHR winds contain high-resolution information that is not present in the L2B product. The resolution improvement is difficult to quantify due to a lack of truth data. Nevertheless, there is evidence to suggest that the resolution is improved by at least a factor of three to 10 km, and perhaps down to 3 or 4 km. It is found through comparison of UHR and SAR winds that (1) both products have common fine-scale features, (2) their comparative statistics are similar to those of L2B and SAR, suggesting that the high resolution content agrees just as well as the low resolution content because the comparison is performed at a finer scale, (3) both products have derivative fields that match well, (4) the UHR product benefits from high-resolution direction information, and (5) the UHR product better matches the expected spectral properties of ocean winds.
For the UHR processing improvement methods, the model-based improvement of UHR ambiguity selection allows obvious ambiguity errors to be found and corrected, increases the self-consistency of the wind field, and causes the spectrum to better follow a power law at high wavenumbers. The removal of land-contamination from near-coastal wind vectors allows accurate wind retrieval much closer to land and greater visibility of high-resolution wind features near the coast.
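One of the analysis tools mentioned, checking whether the wind spectrum follows a power law at high wavenumbers, can be sketched by fitting a line to the log-log wavenumber spectrum of a transect. The synthetic field and the k^-2 target below are illustrative, not ASCAT data:

```python
import numpy as np

def spectral_slope(transect, dx):
    """Estimate a power-law exponent by least-squares fitting
    log10(PSD) against log10(k) for a 1-D transect."""
    n = transect.size
    psd = np.abs(np.fft.rfft(transect)) ** 2 / n
    k = np.fft.rfftfreq(n, d=dx)
    keep = k > 0                                  # drop the DC bin
    slope, _intercept = np.polyfit(np.log10(k[keep]), np.log10(psd[keep]), 1)
    return slope

# synthetic transect with a k^-2 spectrum and random phases
rng = np.random.default_rng(2)
n = 4096
k = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros(k.size)
amp[1:] = k[1:] ** -1.0                           # |F(k)| ~ k^-1  =>  PSD ~ k^-2
phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, k.size))
phase[0] = phase[-1] = 1.0                        # DC and Nyquist must be real
x = np.fft.irfft(amp * phase, n)
slope = spectral_slope(x, dx=1.0)
```

A retrieved wind transect whose fitted slope stays close to the expected power law out to high wavenumbers is evidence that the fine-scale content is signal rather than noise.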
APA, Harvard, Vancouver, ISO, and other styles
22

Williams, Scott Lawrence. "Separation of mixed radiometric land cover temperatures in time-delayed bi-angular views using estimated fractional differential coefficients." Thesis, New Mexico State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3582404.

Full text
Abstract:

A dissertation is presented concerning the separation of radiometric temperatures of sparse land covers from two views of mixed thermal and NDVI samples with a time delay between the views. The research scope is limited to a simple binary land cover of vegetation canopy and bare soil. Previous methods have been developed using simultaneous views but little work has been done on time-delayed sampling, which is the focus of this study.

The dissertation hypothesis is based on the observation that the rate of change of a mixed radiometric temperature with respect to actual fractional vegetation cover, dT_m/df_a, originally constructed using spatially varying vegetation covers, can also be constructed using bi-angular views of the same land parcel but with a different interpretation: that bi-angular samples provide a perceived fractional cover differential, dT_m/df_0. The hypothesis is that dT_m/df_0 can be used for sub-pixel temperature discrimination of binary land covers and, moreover, that the separate soil and vegetation total differential coefficients dT_s/df_0 and dT_v/df_0, required in the algebraic system, can be characterized to sufficiently capture environmental influences between samples in time. To test the hypothesis, this study heuristically derives a first-order estimation of the differential coefficients, required to decompose land cover temperatures from mixed data points, for any time-delayed sampling spanning the day. Applying the estimated values on similar target days gives a high success rate for a local time span of at least a week.

This approach, once scaled up, could be used by platforms with inherent time delays, such as tandem weather satellites, to provide separate land cover temperature estimates from low-resolution sensors.
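Under a linear mixing assumption T_m = f·T_v + (1 − f)·T_s, two views with different perceived cover fractions give a 2 × 2 linear system for the component temperatures. A minimal sketch with made-up numbers (not the dissertation's estimated differential coefficients, which additionally account for the time delay between views):

```python
import numpy as np

def unmix_two_views(tm1, f1, tm2, f2):
    """Recover canopy (T_v) and soil (T_s) temperatures from two mixed
    measurements T_m = f*T_v + (1 - f)*T_s at different cover fractions."""
    a = np.array([[f1, 1.0 - f1],
                  [f2, 1.0 - f2]])
    tv, ts = np.linalg.solve(a, np.array([tm1, tm2]))
    return tv, ts

# synthetic check: canopy at 25 C, soil at 40 C, views with 60% and 30% cover
tv, ts = unmix_two_views(0.6 * 25 + 0.4 * 40, 0.6,
                         0.3 * 25 + 0.7 * 40, 0.3)
```

The system is well conditioned only when the two perceived fractions differ enough, which is why the view geometry matters.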

APA, Harvard, Vancouver, ISO, and other styles
23

Carroll, Brandon T. "Using Motion Fields to Estimate Video Utility and Detect GPS Spoofing." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3291.

Full text
Abstract:
This work explores two areas of research. The first is the development of a video utility metric for use in aerial surveillance and reconnaissance tasks. To our knowledge, metrics that compute how useful aerial video is to a human in the context of performing tasks like detection, recognition, or identification (DRI) do not exist. However, the Targeting Task Performance (TTP) metric was previously developed to estimate the usefulness of still images for DRI tasks. We modify and extend the TTP metric to create a similar metric for video, called Video Targeting Task Performance (VTTP). The VTTP metric accounts for various things like the amount of lighting, motion blur, human vision, and the size of an object in the image. VTTP can also be predictively calculated to estimate the utility that a proposed flight path will yield. This allows it to be used to help automate path planning so that operators are able to devote more of their attention to DRI. We have used the metric to plan and fly actual paths. We also carried out a small user study that verified that VTTP correlates with subjective human assessment of video. The second area of research explores a new method of detecting GPS spoofing on an unmanned aerial system (UAS) equipped with a camera and a terrain elevation map. Spoofing allows an attacker to remotely tamper with the position, time, and velocity readings output by a GPS receiver. This tampering can throw off the UAS's state estimates, but the optical flow through the camera still depends on the actual movement of the UAS. We develop a method of detecting spoofing by calculating the expected optical flow based on the state estimates and comparing it against the actual optical flow. If the UAS is successfully spoofed to a different location, then the detector can also be triggered by differences in the terrain between where the UAS actually is and where it thinks it is. 
We tested the spoofing detector in simulation, and found that it works well in some scenarios.
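The detection idea, comparing the optical flow implied by the (possibly spoofed) state estimates against the flow actually measured by the camera, reduces to thresholding a residual. A toy sketch with a hypothetical threshold, not the thesis's full detector:

```python
import numpy as np

def spoofing_alarm(flow_measured, flow_expected, thresh):
    """Raise an alarm when the camera's measured optical flow disagrees
    with the flow expected from the navigation state estimates."""
    residual = np.linalg.norm(flow_measured - flow_expected, axis=-1)
    return bool(residual.mean() > thresh)

flow_cam = np.tile([1.0, 0.0], (100, 1))        # measured flow field (px/frame)
ok = spoofing_alarm(flow_cam, flow_cam + 0.01, thresh=0.5)         # consistent
spoofed = spoofing_alarm(flow_cam, flow_cam + [3.0, 0.0], thresh=0.5)
```

The threshold trades missed detections against false alarms from flow-estimation noise, which is where the scenario-dependent results of the thesis come in.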
APA, Harvard, Vancouver, ISO, and other styles
24

Pavy, Anne M. "SV-Means: A Fast One-Class Support Vector Machine-Based Level Set Estimator." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1516047120200949.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Hossan, Md Shakawat. "Prediction Model to Estimate the Zero Crossing Point for Faulted Waveforms." UKnowledge, 2014. http://uknowledge.uky.edu/ece_etds/53.

Full text
Abstract:
In any power system, a fault means an abnormal flow of current, caused by insulation breakdown. Different factors can cause the breakdown: wires drifting together in the wind, lightning ionizing air, wires contacting animals and plants, or salt spray and pollution on insulators. The common types of faults on a three-phase system are single line-to-ground (SLG), line-to-line (LL), double line-to-ground (DLG), and balanced three-phase faults; these faults can be symmetrical (balanced) or unsymmetrical (unbalanced). In this study, a technique to predict the zero-crossing point is discussed and simulated. Zero-crossing prediction plays a significant role in reliable transmission and distribution: electrical power control switching is performed at the zero-crossing point when a fault occurs. Measuring the zero-crossing point precisely enough to synchronize power system control and instrumentation requires a thoughtful approach to minimize noise and external signals in the corrupted waveforms. Given a faulted current waveform with the faulted phase(s) estimated, the technique is capable of identifying the time of the zero-crossing point. Simulations were carried out in MATLAB R2012a.
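Locating zero crossings of a sampled waveform is commonly done by finding sign changes and linearly interpolating between the bracketing samples; a minimal sketch on a clean 50 Hz wave (not the thesis's prediction model, which must also handle faulted, noisy waveforms):

```python
import numpy as np

def zero_crossings(t, x):
    """Zero-crossing times of a sampled waveform: detect sign changes and
    linearly interpolate between the two bracketing samples."""
    s = np.sign(x)
    idx = np.where(s[:-1] * s[1:] < 0)[0]            # bracketing sample pairs
    # t_zc = t[i] - x[i] * (t[i+1] - t[i]) / (x[i+1] - x[i])
    return t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

fs = 5000.0                                          # 5 kHz sampling
t = np.arange(0.0, 0.04, 1.0 / fs)                   # two cycles of 50 Hz
x = np.sin(2.0 * np.pi * 50.0 * t)
zc = zero_crossings(t, x)
```

On a faulted waveform the same bracketing-and-interpolation step would be applied after the noise and DC-offset content is dealt with.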
APA, Harvard, Vancouver, ISO, and other styles
26

Sonti, Niharika. "A Unified Method for Detecting and Isolating Process Faults and Sensor Faults in Nonlinear Systems." Wright State University / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=wright1292763603.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Sarvepalli, Pradeep Kiran. "Non-data aided digital feedforward timing estimators for linear and nonlinear modulations." Texas A&M University, 2003. http://hdl.handle.net/1969/360.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Cobb, Richard E. "Confidence bands, measurement noise, and multiple input - multiple output measurements using three-channel frequency response function estimator." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53675.

Full text
Abstract:
A three-channel Frequency Response Function (FRF) estimator is discussed and statistical relations developed. Methods for estimating the variance of the FRF magnitude and levels of uncorrelated content in the test signals are developed. FRF magnitude variance estimates allow ’confidence bands’ to be placed on FRF magnitude estimates, giving an indication of the variability of the result. Uncorrelated content estimates indicate sources and magnitudes of noise in the measurement system. Both Monte Carlo simulations and experimental work are used to verify the statistical and uncorrelated content estimates. Relations to extend the three-channel FRF estimator to multiple input-multiple output measurements are developed and verified through simulations.
Ph. D.
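For reference, a standard two-channel H1 FRF estimate with the coherence-based random-error (confidence band) formula can be sketched as below; the thesis develops the three-channel extension and the uncorrelated-content estimates, which this simple sketch does not reproduce:

```python
import numpy as np

def frf_h1(x_blocks, y_blocks):
    """H1 FRF estimate from averaged block spectra, with the ordinary
    coherence and the standard normalized random error of |H|."""
    n_avg = x_blocks.shape[0]                    # number of averaged blocks
    X = np.fft.rfft(x_blocks, axis=1)
    Y = np.fft.rfft(y_blocks, axis=1)
    sxx = np.mean(np.abs(X) ** 2, axis=0)
    syy = np.mean(np.abs(Y) ** 2, axis=0)
    sxy = np.mean(np.conj(X) * Y, axis=0)
    h1 = sxy / sxx
    coh = np.abs(sxy) ** 2 / (sxx * syy)         # ordinary coherence
    eps = np.sqrt(1.0 - coh) / np.sqrt(2.0 * n_avg * coh)   # random error
    return h1, coh, eps           # band: |h1| * (1 - eps) .. |h1| * (1 + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 256))
y = 2.0 * x + 0.1 * rng.standard_normal((64, 256))   # true gain ~ 2, low noise
h1, coh, eps = frf_h1(x, y)
```

High coherence and many averages shrink `eps`, i.e. tighten the confidence band on the FRF magnitude, which is exactly the quantity the thesis's variance estimates feed.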
APA, Harvard, Vancouver, ISO, and other styles
29

Irwin, Shaun George. "Optimal estimation and sensor selection for autonomous landing of a helicopter on a ship deck." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/95894.

Full text
Abstract:
Thesis (MEng)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: This thesis presents a complete state estimation framework for landing an unmanned helicopter on a ship deck. In order to design and simulate an optimal state estimator, realistic sensor models are required. Selected inertial, absolute and relative sensors are modeled based on extensive data analysis. The short-listed relative sensors include monocular vision, stereo vision and laser-based sensors. A state estimation framework is developed to fuse available helicopter estimates, ship estimates and relative measurements. The estimation structure is shown to be both optimal, as it minimises variance on the estimates, and flexible, as it allows for varying degrees of ship deck instrumentation. Deck instrumentation permitted ranges from a fully instrumented deck, equipped with an inertial measurement unit and differential GPS, to a completely uninstrumented ship deck. Optimal estimates of all helicopter, relative and ship states necessary for the autonomous landing on the ship deck are provided by the estimator. Active gyro bias estimation is incorporated into the helicopter’s attitude estimator. In addition, the process and measurement noise covariance matrices are derived from sensor noise analysis, rather than conventional tuning methods. A full performance analysis of the estimator is then conducted. The optimal relative sensor combination is determined through Monte Carlo simulation. Results show that the choice of sensors is primarily dependent on the desired hover height during the ship motion prediction stage. For a low hover height, monocular vision is sufficient. For greater altitudes, a combination of monocular vision and a scanning laser beam greatly improves relative and ship state estimation. A communication link between helicopter and ship is not required for landing, but is advised for added accuracy. The estimator is implemented on a microprocessor running real-time Linux. 
The successful performance of the system is demonstrated through hardware-in-the-loop and actual flight testing.
APA, Harvard, Vancouver, ISO, and other styles
30

Lei, Jiansheng. "Using graph theory to resolve state estimator issues faced by deregulated power systems." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Keller, Andrew Mark. "Using On-Chip Error Detection to Estimate FPGA Design Sensitivity to Configuration Upsets." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6302.

Full text
Abstract:
SRAM-based FPGAs provide valuable computation resources and reconfigurability; however, ionizing radiation can cause designs operating on these devices to fail. The sensitivity of an FPGA design to configuration upsets, or its SEU sensitivity, is an indication of a design's failure rate. SEU mitigation techniques can reduce the SEU sensitivity of FPGA designs in harsh radiation environments. The reliability benefits of these techniques must be determined before they can be used in mission-critical applications and can be determined by comparing the SEU sensitivity of an FPGA design with and without these techniques applied to it. Many approaches can be taken to evaluate the SEU sensitivity of an FPGA design. This work describes a low-cost easier-to-implement approach for evaluating the SEU sensitivity of an FPGA design. This approach uses additional logic resources on the same FPGA as the design under test to determine when the design has failed, or deviated from its specified behavior. Three SEU mitigation techniques were evaluated using this approach: triple modular redundancy (TMR), configuration scrubbing, and user-memory scrubbing. Significant reduction in SEU sensitivity is demonstrated through fault injection and radiation testing. Two LEON3 processors operating in lockstep are compared against each other using on-chip error detection logic on the same FPGA. The design SEU sensitivity is reduced by 27x when TMR and configuration scrubbing are applied, and by approximately 50x when TMR, configuration scrubbing, and user-memory scrubbing are applied together. Using this approach, an SEU sensitivity comparison is made of designs implemented on both an Altera Stratix V FPGA and a Xilinx Kintex 7 FPGA. Several instances of a finite state machine are compared against each other and a set of golden output vectors, all on the same FPGA. Instances of an AES cryptography core are chained together and the output of two chains are compared using on-chip error detection. 
Fault injection and neutron radiation testing reveal several similarities between the two FPGA architectures. SEU mitigation techniques reduce the SEU sensitivity of the two designs between 4x and 728x. Protecting on-chip functional error detection logic with TMR and duplication with compare (DWC) is compared. Fault injection results suggest that it is more favorable to protect on-chip functional error detection logic with DWC than it is to protect it with TMR for error detection.
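The TMR principle itself reduces to a bitwise majority vote over three redundant copies; a minimal sketch (the thesis applies this at the FPGA netlist level, not in software):

```python
def tmr_vote(a, b, c):
    """Bitwise majority vote over three redundant copies: a bit is set in
    the output iff it is set in at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011_0101
upset = golden ^ 0b0100_0000      # single-event upset flips one bit of one copy
voted = tmr_vote(upset, golden, golden)
```

A single upset in any one copy is masked by the vote, which is why TMR must be paired with scrubbing: without repair, a second upset in another copy of the same bit would defeat the voter.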
APA, Harvard, Vancouver, ISO, and other styles
32

Radan, Damir. "Integrated Control of Marine Electrical Power Systems." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Engineering Science and Technology, 2008. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-1984.

Full text
Abstract:

This doctoral thesis presents new ideas and research results on the control of marine electric power systems.

The main motivation for this work is the development of a control system, the power management system (PMS), capable of improving the system's robustness to blackout, handling major power system faults, minimizing the operational cost, and keeping the power system machinery components under minimal stress in all operational conditions.

Today, the marine electric power system tends to have more system functionality implemented in integrated automation systems. The present state-of-the-art tools and methods for analyzing marine power systems utilize only to a limited extent the increased knowledge available within each of the mechanical and electrical engineering disciplines.

As the propulsion system typically comprises the largest consumers on the vessel, important interactions exist between the PMS and the vessel propulsion system. These interact through the dynamic positioning (DP) controller, the thrust allocation algorithm, the local thruster controllers, and the generators' local frequency and voltage controllers. The PMS interacts with the propulsion system through the following main functions: available power static load control, load rate limiting control, and blackout prevention control (i.e. fast load reduction). These functions serve to prevent blackout and to ensure that the vessel will always have enough power.

The PMS interacts with other control systems in order to prevent a blackout and to minimize operational costs. The possibilities to maximize the performance of the vessel, increase the robustness to faults and decrease the component wear-out rate are mainly addressed locally for the individual control systems. The solutions are mainly implicit (e.g. local thruster control, or DP thrust allocation), and attention has not been given to the interaction between these systems, the power system and the PMS. Some of the questions that arise regarding the system interactions are: how may the PMS functionality affect local thruster control; how may the local thruster control affect power system performance; how may some consumers affect power system performance in normal operations and thus affect other consumers; how may power system operation affect the susceptibility to faults and blackout; how may various operating and weather conditions affect power system performance, and thus propulsion performance, through the PMS power limiting control; how may propulsion performance affect the overall vessel performance; which kinds of faults can be avoided if the control system is re-structured; and how can the operational costs be minimized while dealing with conflicting goals? This PhD thesis aims to provide answers to such questions.

The main contributions of this PhD thesis are:

− A new observer-based fast load reduction system for blackout prevention control has been proposed. Compared to existing fast load reduction systems, the proposed controller gives a much faster blackout detection rate, high reliability in the detection, and faster and more precise load reduction (within 150 milliseconds).

− New advanced energy management control strategies for reductions in the operational costs and improved fuel economy of the vessel.

− Load limiting controllers for the reduction of thruster wear-out rate. These controllers are based on the probability of torque loss, real-time torque loss and the thruster shaft accelerations. The controllers provide means of redistributing thrust from load-fluctuating thrusters to less load-fluctuating ones, and may operate independently of the thrust allocation system. Another solution is also proposed where the load limiting controller based on thrust losses is an integrated part of the DP thrust allocation algorithm.

− A new concept of totally integrated thrust allocation system, local thruster control and power system. These systems are integrated through PMS functionality which is contained within each thruster PLC, thereby distributed among individual controllers, and independent of the communications and dedicated controllers.

− Observer-based inertial controller and direct torque-loss controller (soft anti-spin controller), with particular attention to the control of machine wear-out rate. These controllers contribute to general shaft speed control of electrical thrusters, generators and main propulsion prime movers.

The proposed controllers, estimators and concepts are demonstrated through time-domain simulations performed in MATLAB/SIMULINK. The selected data are typical for the required applications and may differ slightly for the presented cases.
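As an illustration only, blackout-prevention fast load reduction can be caricatured with the swing equation: a rapid frequency drop implies a power deficit ΔP ≈ (2H/f0)·(df/dt)·S_base, which is then shed from low-priority consumers. The thesis's observer-based scheme is considerably more sophisticated; this sketch, with hypothetical parameters, only shows the basic idea:

```python
def fast_load_reduction(dfdt_hz_s, h_sec, s_base_kw, f0_hz=60.0, trip_hz_s=-1.0):
    """Caricature of blackout prevention: when frequency falls faster than
    the trip threshold, estimate the power deficit from the swing equation
    dP ~= (2 H / f0) * (df/dt) * S_base and return that much load to shed."""
    if dfdt_hz_s < trip_hz_s:                     # e.g. a generator just tripped
        deficit_kw = -2.0 * h_sec / f0_hz * dfdt_hz_s * s_base_kw
        return min(deficit_kw, s_base_kw)         # saturate at the full base load
    return 0.0

# inertia constant H = 2 s on a 10 MW base, frequency collapsing at 2 Hz/s
shed_kw = fast_load_reduction(dfdt_hz_s=-2.0, h_sec=2.0, s_base_kw=10000.0)
```

The point of the thesis's observer-based detection is precisely to make this decision faster and more reliably than a plain df/dt threshold.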

APA, Harvard, Vancouver, ISO, and other styles
33

Chitte, Sree Divya. "Source localization from received signal strength under lognormal shadowing." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/477.

Full text
Abstract:
This thesis considers statistical issues in source localization from the received signal strength (RSS) measurements at sensor locations, under the practical assumption of log-normal shadowing. Distance information about the source can be estimated from RSS measurements, and many algorithms directly use powers of distances to localize the source, even though distance measurements are not directly available. The first part of the thesis considers the statistical analysis of distance estimation from RSS measurements. We show that the underlying problem is inefficient and that there is only one unbiased estimator for this problem, whose mean square error (MSE) grows exponentially with noise power. Later, we provide the linear minimum mean square error (MMSE) estimator, whose bias and MSE are bounded in noise power. The second part of the thesis establishes an isomorphism between estimates of differences between squares of distances and the source location. This is used to completely characterize the class of unbiased estimates of the source location and to show that their MSEs grow exponentially with noise powers. Later, we propose an estimate based on the linear MMSE estimate of distances that has error variance and bias that are bounded in the noise variance.
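The bias at issue is easy to reproduce: under the standard log-distance model with lognormal shadowing, the ML distance estimate is biased high by exp(a²σ²/2) with a = ln(10)/(10n), and rescaling by the inverse factor removes the bias. A Monte Carlo sketch with illustrative (not thesis-specific) parameters:

```python
import numpy as np

def dist_ml(p_rx, p0, n, d0=1.0):
    """ML distance from RSS under P = P0 - 10 n log10(d/d0) + X,
    X ~ N(0, sigma^2) (lognormal shadowing)."""
    return d0 * 10.0 ** ((p0 - p_rx) / (10.0 * n))

def dist_unbiased(p_rx, p0, n, sigma, d0=1.0):
    """The ML estimate is biased high by exp(a^2 sigma^2 / 2) with
    a = ln(10)/(10 n); scaling by the inverse factor removes the bias."""
    a = np.log(10.0) / (10.0 * n)
    return dist_ml(p_rx, p0, n, d0) * np.exp(-(a * sigma) ** 2 / 2.0)

rng = np.random.default_rng(1)
d_true, p0, n, sigma = 100.0, -40.0, 3.0, 6.0
p_rx = p0 - 10.0 * n * np.log10(d_true) + sigma * rng.standard_normal(200_000)
ml_mean = dist_ml(p_rx, p0, n).mean()             # biased high
unbiased_mean = dist_unbiased(p_rx, p0, n, sigma).mean()
```

The correction factor grows exponentially in σ², which mirrors the thesis's point that unbiasedness comes at the cost of an MSE that explodes with noise power.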
APA, Harvard, Vancouver, ISO, and other styles
34

Esterhuizen, Gerhard. "Generalised density function estimation using moments and the characteristic function." Thesis, Link to the online version, 2003. http://hdl.handle.net/10019.1/1001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Fallqvist, Marcus. "Automatic Volume Estimation Using Structure-from-Motion Fused with a Cellphone's Inertial Sensors." Thesis, Linköpings universitet, Datorseende, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-144194.

Full text
Abstract:
The thesis work evaluates a method to estimate the volume of stone and gravel piles using only a cellphone to collect video and sensor data from the gyroscopes and accelerometers. The project is commissioned by Escenda Engineering with the motivation to replace more complex and resource-demanding systems with a cheaper and easy-to-use handheld device. The implementation features popular computer vision methods such as KLT tracking, Structure-from-Motion and Space Carving together with some sensor fusion. The results imply that it is possible to estimate volumes up to a certain accuracy, which is limited by the sensor quality, and with a bias.
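The last step of such a pipeline, turning the reconstructed surface of a pile into a volume, amounts to integrating a height map over the ground plane. The sketch below checks that idea on a synthetic cone whose volume is known analytically; the grid resolution and test shape are assumptions, not the thesis's Space Carving implementation.

```python
import math

def volume_from_heightmap(height, xs, ys):
    """Approximate the volume under a reconstructed surface by summing
    height * cell_area over a regular grid (midpoint rule)."""
    dx = xs[1] - xs[0]
    dy = ys[1] - ys[0]
    return sum(height(x, y) * dx * dy for x in xs for y in ys)

# Sanity check on a synthetic "pile": a cone of radius 1 m and height 1 m,
# whose exact volume is pi * R^2 * H / 3.
R, H, n = 1.0, 1.0, 400
xs = [-R + (i + 0.5) * (2 * R / n) for i in range(n)]  # cell midpoints
ys = xs
cone = lambda x, y: max(0.0, H * (1.0 - math.hypot(x, y) / R))
vol = volume_from_heightmap(cone, xs, ys)
print(vol, math.pi * R * R * H / 3.0)
```

On a 400 × 400 grid the numerical volume matches the analytic value to well under one percent.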
APA, Harvard, Vancouver, ISO, and other styles
36

Haycock, Spencer S. "Frequency Estimation of Linear FM Scatterometer Pulses Received by the SeaWinds Calibration Ground Station." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd543.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Krishnan, Rajet. "Problems in distributed signal processing in wireless sensor networks." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Trinkūnaitė, Ingrida. "Asinchroninės bejutiklės pavaros modeliavimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2011. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2011~D_20110621_170245-16691.

Full text
Abstract:
This master's thesis presents a simulation model and the characteristics of a sensorless vector-controlled induction motor drive. The analytic part describes the advantages of induction motor drives and of the speed sensors used in them. The advantages and disadvantages of speed estimators are presented and the purpose of using them is justified. Peculiarities of sensorless motor drives, the principles of vector control and models of speed estimators are analyzed, and two simulation models of the induction motor are proposed. In the research part, the characteristics of the induction motor models are compared and a motor model is chosen. The characteristics of the open-loop induction motor drive are investigated, and a simulation model of a closed-loop induction motor drive with a speed estimator is designed. The characteristics of the closed-loop control system at no load, constant load and harmonic load are analyzed, and the influence of the speed controller gain is considered. The thesis closes with conclusions about applying the designed system in real projects. Structure: introduction, list of symbols, literature review, study aims and objectives, theoretical part, research part, conclusions and proposals, references.
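The principle behind such speed estimators (observers) can be shown with a minimal discrete-time Luenberger-style sketch: a model prediction corrected by output injection reconstructs a state that is never measured directly. This scalar toy is far simpler than an induction-motor observer; the gains and signals are illustrative assumptions.

```python
def observe(y_seq, dt=0.01, l1=0.5, l2=5.0):
    """Minimal Luenberger-style observer: correct a model prediction with
    the measured output to reconstruct an unmeasured speed."""
    pos_hat, vel_hat = 0.0, 0.0
    for y in y_seq:
        err = y - pos_hat               # output-injection term
        pos_hat += dt * vel_hat + l1 * err
        vel_hat += l2 * err
    return vel_hat

# True plant: constant speed 2.0, with only a position-like output measured.
dt, speed = 0.01, 2.0
ys = [speed * dt * k for k in range(1, 2001)]
print(observe(ys, dt))  # converges near 2.0
```

With these gains the error dynamics have eigenvalues of about 0.86 and 0.64 per step, so the speed estimate converges geometrically to the true value without a speed sensor.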
APA, Harvard, Vancouver, ISO, and other styles
39

Le, Ny Mathieu. "Diagnostic non invasif de piles à combustible par mesure du champ magnétique proche." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00844407.

Full text
Abstract:
This thesis proposes an innovative non-invasive diagnostic technique for fuel cell systems. The technique relies on measuring the magnetic signature generated by these systems. From these external magnetic fields, a map of the internal current density can be obtained by solving an inverse problem. This problem is, however, ill-posed: the solution is non-unique and extremely sensitive to noise. Regularization techniques are therefore introduced to filter out measurement errors and obtain a physically acceptable solution. To increase the quality of the current reconstruction, the diagnostic tool is designed to be sensitive only to faults in the stack (a fault sensor), and the reconstruction relies on an extremely small number of measurements. This approach simplifies the instrumentation of the system and increases its precision and speed. The sensitivity of the tool to several faults (drying out, reactant starvation, degradation) is demonstrated.
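The role of regularization in such an ill-posed inverse problem can be shown on a toy linear model with two nearly indistinguishable sources: plain least squares amplifies the measurement noise wildly, while a Tikhonov penalty yields a stable, physically plausible solution. The operator and the noise pattern below are illustrative assumptions, not the thesis's field-to-current model.

```python
def tikhonov_2x2(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 for two unknowns via the
    normal equations (A^T A + lam * I) x = A^T b, using Cramer's rule."""
    m00 = sum(r[0] * r[0] for r in A) + lam
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    g0 = sum(r[0] * bi for r, bi in zip(A, b))
    g1 = sum(r[1] * bi for r, bi in zip(A, b))
    det = m00 * m11 - m01 * m01
    return ((m11 * g0 - m01 * g1) / det, (m00 * g1 - m01 * g0) / det)

# Ill-posed toy problem: two source strengths seen through nearly
# collinear measurement columns, with small deterministic "noise".
eps = 1e-4
A = [(1.0, 1.0 + eps * i) for i in range(10)]
x_true = (1.0, 1.0)
noise = [0.01 * (-1) ** i for i in range(10)]
b = [r[0] * x_true[0] + r[1] * x_true[1] + n for r, n in zip(A, noise)]

x_ls = tikhonov_2x2(A, b, 0.0)    # unregularized: wildly noise-amplified
x_reg = tikhonov_2x2(A, b, 0.1)   # regularized: close to the true sources
print(x_ls, x_reg)
```

Here the unregularized solution lands several units away from the true (1, 1), while the regularized one recovers it to within about one percent.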
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Nan. "Digital control strategies for DC/DC SEPIC converters towards integration." Phd thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00760064.

Full text
Abstract:
The use of SMPS (switched-mode power supplies) in embedded systems is continuously increasing. The technological requirements of these systems include both very good voltage regulation and strong compactness of components. The SEPIC (Single-Ended Primary Inductor Converter) is a DC/DC switching converter with several advantages over the other classical converters. Because it is a 4th-order, non-linear system that is difficult to control, it is still not well exploited. The objective of this work is, on the one hand, the development of successful control strategies for a SEPIC converter and, on the other hand, the effective implementation of the developed control algorithms for embedded applications (FPGA, ASIC), where silicon area and loss reduction are important constraints. To this end, two non-linear controllers and two state-and-load observers have been studied: a controller and an observer based on the sliding-mode principle, a deadbeat predictive controller and an extended Kalman observer. Both control laws and the extended Kalman observer are implemented on an FPGA. An 11-bit digital PWM has been developed by combining a 4-bit Δ-Σ modulation, a 4-bit segmented DCM (Digital Clock Management) phase shift and a 3-bit counter-comparator. All the proposed approaches are experimentally validated and constitute a good basis for the integration of embedded switching-mode converters.
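The resolution-extension trick behind such a hybrid DPWM can be sketched in a few lines: the 7 most significant bits of the duty command go to the hardware (counter plus phase shift), while a first-order Δ-Σ accumulator dithers the 4 least significant bits so that the average duty retains the full 11-bit resolution. The bit split mirrors the 3+4+4 partition described above, but the model is an illustrative sketch, not the thesis's FPGA design.

```python
def dpwm_sequence(duty_11bit, n_periods):
    """First-order delta-sigma truncation: the 7 MSBs are applied by the
    hardware each switching period, the 4 LSBs are pushed into an error
    accumulator so the *average* duty keeps the full 11-bit resolution."""
    acc = 0
    out = []
    for _ in range(n_periods):
        acc += duty_11bit
        out.append(acc >> 4)   # 7-bit duty command actually applied
        acc &= 0xF             # quantization error carried to the next period
    return out

duty = 1000                      # 11-bit code, not a multiple of 16
seq = dpwm_sequence(duty, 4096)
avg = sum(seq) * 16 / len(seq)   # rescale back to 11-bit units
print(avg)                       # the 11-bit value is recovered on average
```

The hardware only ever sees the two adjacent 7-bit codes 62 and 63, yet their time average reproduces the 11-bit command exactly.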
APA, Harvard, Vancouver, ISO, and other styles
41

Hunter, Brandon. "Channel Probing for an Indoor Wireless Communications Channel." BYU ScholarsArchive, 2003. https://scholarsarchive.byu.edu/etd/64.

Full text
Abstract:
The statistics of the amplitude, time and angle of arrival of multipaths in an indoor environment are all necessary components of multipath models used to simulate the performance of spatial diversity in receive antenna configurations. The model presented by Saleh and Valenzuela, extended by Spencer et al., includes all three of these parameters for a 7 GHz channel. A system was built to measure these multipath parameters at 2.4 GHz for multiple locations in an indoor environment. Another system was built to measure the angle of transmission for a 6 GHz channel; this added parameter allows spatial diversity at the transmitter, along with the receiver, to be simulated. The process of going from raw measurement data to discrete arrivals, and then to clustered arrivals, is analyzed. Many possible errors associated with discrete-arrival processing are discussed along with possible solutions. Four clustering methods are compared and their relative strengths and weaknesses are pointed out. The effects that errors in the clustering process have on parameter estimation and model performance are also simulated.
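The Saleh-Valenzuela model named above can be sketched as a two-level Poisson process: cluster arrivals at one rate, ray arrivals inside each cluster at another, and a double-exponential decay of mean power. The rates and decay constants below (in nanoseconds) are illustrative assumptions, not the measured 2.4 GHz parameters.

```python
import math
import random

def saleh_valenzuela(n_clusters=5, rays_per_cluster=8,
                     Lambda=1 / 300.0, lam=1 / 5.0,
                     Gamma=60.0, gamma=20.0, seed=7):
    """Generate (arrival time, mean power) pairs from a simplified
    Saleh-Valenzuela model: Poisson cluster arrivals at rate Lambda,
    Poisson ray arrivals at rate lam within each cluster, and
    double-exponential power decay exp(-T/Gamma) * exp(-tau/gamma)."""
    rng = random.Random(seed)
    arrivals = []
    T = 0.0                                  # first cluster at t = 0
    for _ in range(n_clusters):
        tau = 0.0                            # first ray at the cluster start
        for _ in range(rays_per_cluster):
            power = math.exp(-T / Gamma) * math.exp(-tau / gamma)
            arrivals.append((T + tau, power))
            tau += rng.expovariate(lam)      # next ray inter-arrival
        T += rng.expovariate(Lambda)         # next cluster inter-arrival
    return arrivals

paths = saleh_valenzuela()
print(len(paths), paths[0])
```

Synthetic arrival lists of this shape are what the clustering methods compared in the thesis would be run against.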
APA, Harvard, Vancouver, ISO, and other styles
42

Fernandes, Bruno Filipe Salgado. "Improving Software Project Estimates Based on Historical Data." Master's thesis, 2014. https://repositorio-aberto.up.pt/handle/10216/89041.

Full text
Abstract:
Due to the strong competition in today's markets, it is essential for a company like Altran to keep evolving in order to lead both nationally and internationally. For this to happen, it must produce good estimates in order to contract and control its projects, which in practice is not always easy, owing to several factors. Effort estimation therefore becomes a crucial step, so that the client understands how much a project will cost, both in time and in money. The quality of the estimates is decisive, both to satisfy current customers and to attract new clients to the products developed by Altran. The present work resulted from a proposal made by Altran, which set very specific and ambitious challenges in order to evolve and improve the methodology currently followed and to assist project managers and their teams. Although Altran's current project estimation leads to quite acceptable results, there are still limitations and gaps where intervention is needed so that quality standards at least remain high. After analyzing Altran's current situation, a survey of estimation techniques and methods was carried out, with the goal of applying them in the creation of the model and obtaining more realistic results. The main ambition of this dissertation is to evolve the methodology currently used, meeting the needs of the projects developed by Altran and following the practices of the CMMI model, which lead to process improvement in the development of products and services. One of the steps of the estimation process is to define and apply feedback mechanisms with coefficient adjustment, so that the analysis of each project type becomes more intuitive.
To improve the estimation process, the proposed estimation methodology is based on historical data from past completed projects, provided by Altran, which constitutes one of the most important requirements of this dissertation. The proposal includes two variants: one based on a simpler model that is easy to validate, making it more reliable for its users, and another based on a more complex model built on estimation techniques, which can be very useful for certain project types. Since the time available for this dissertation is limited, validating the implemented models on future projects is not viable, so the cross-validation method was used instead. The results showed great potential for improving the estimates compared with the method currently followed, with a strong likelihood that more ambitious goals can be reached in the future. Regarding the structure of this document, an analysis of the problem is presented first, followed by a study of the state of the art. The proposed methodology and models are then described, together with the validation performed to make them more reliable and accurate. In conclusion, this dissertation approaches the estimation process followed by Altran in a different way and is innovative in its use of estimation techniques. It is hoped that future projects developed by this company will have even higher quality and that its expansion will continue to grow.
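The cross-validation step described above can be sketched with a minimal leave-one-out loop over historical projects: fit a model on all projects but one, predict the held-out project, and report the mean magnitude of relative error (MMRE). The linear size-to-effort model and the data values are hypothetical stand-ins, not Altran's data or methodology.

```python
def fit_linear(points):
    """Least-squares fit of effort = a * size + c over (size, effort) pairs."""
    n = len(points)
    sx = sum(s for s, _ in points)
    sy = sum(e for _, e in points)
    sxx = sum(s * s for s, _ in points)
    sxy = sum(s * e for s, e in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - a * sx) / n
    return a, c

def loocv_mmre(history):
    """Leave-one-out cross-validation: train on all projects but one,
    predict the held-out project, average the relative errors."""
    errors = []
    for i, (size, effort) in enumerate(history):
        train = history[:i] + history[i + 1:]
        a, c = fit_linear(train)
        pred = a * size + c
        errors.append(abs(pred - effort) / effort)
    return sum(errors) / len(errors)

# Hypothetical historical projects: (size in function points, effort in hours).
history = [(120, 950), (200, 1500), (80, 700), (300, 2300),
           (150, 1200), (250, 1900), (100, 820), (180, 1400)]
print(round(loocv_mmre(history), 3))
```

Because every project serves once as the hold-out, this gives an out-of-sample error estimate even when no future projects are yet available for validation.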
APA, Harvard, Vancouver, ISO, and other styles
43

Fernandes, Bruno Filipe Salgado. "Improving Software Project Estimates Based on Historical Data." Dissertação, 2014. https://repositorio-aberto.up.pt/handle/10216/89041.

Full text
Abstract:
Due to the strong competition in today's markets, it is essential for a company like Altran to keep evolving in order to lead both nationally and internationally. For this to happen, it must produce good estimates in order to contract and control its projects, which in practice is not always easy, owing to several factors. Effort estimation therefore becomes a crucial step, so that the client understands how much a project will cost, both in time and in money. The quality of the estimates is decisive, both to satisfy current customers and to attract new clients to the products developed by Altran. The present work resulted from a proposal made by Altran, which set very specific and ambitious challenges in order to evolve and improve the methodology currently followed and to assist project managers and their teams. Although Altran's current project estimation leads to quite acceptable results, there are still limitations and gaps where intervention is needed so that quality standards at least remain high. After analyzing Altran's current situation, a survey of estimation techniques and methods was carried out, with the goal of applying them in the creation of the model and obtaining more realistic results. The main ambition of this dissertation is to evolve the methodology currently used, meeting the needs of the projects developed by Altran and following the practices of the CMMI model, which lead to process improvement in the development of products and services. One of the steps of the estimation process is to define and apply feedback mechanisms with coefficient adjustment, so that the analysis of each project type becomes more intuitive.
To improve the estimation process, the proposed estimation methodology is based on historical data from past completed projects, provided by Altran, which constitutes one of the most important requirements of this dissertation. The proposal includes two variants: one based on a simpler model that is easy to validate, making it more reliable for its users, and another based on a more complex model built on estimation techniques, which can be very useful for certain project types. Since the time available for this dissertation is limited, validating the implemented models on future projects is not viable, so the cross-validation method was used instead. The results showed great potential for improving the estimates compared with the method currently followed, with a strong likelihood that more ambitious goals can be reached in the future. Regarding the structure of this document, an analysis of the problem is presented first, followed by a study of the state of the art. The proposed methodology and models are then described, together with the validation performed to make them more reliable and accurate. In conclusion, this dissertation approaches the estimation process followed by Altran in a different way and is innovative in its use of estimation techniques. It is hoped that future projects developed by this company will have even higher quality and that its expansion will continue to grow.
APA, Harvard, Vancouver, ISO, and other styles
44

Duarte, André Tiago Oliveira da Silva. "Software Repository Mining Analytics to Estimate Software Component Reliability." Master's thesis, 2016. https://repositorio-aberto.up.pt/handle/10216/89450.

Full text
Abstract:
Given the rising need to identify the location of errors in software source code, in order to ease developers' work and speed up the development process, much progress has been made in automating this task. There are three main approaches: program-spectra based (PSB), model-based diagnosis (MBD) and program slicing. Barinel, a solution that integrates both PSB and MBD, is, to our knowledge, the option that currently guarantees the best results. However, its ordering of candidate sets (of faulty components) does not take the real quality of each component into account; instead, owing to the difficulty of determining that quality, it uses the set of values that maximize the set's likelihood (maximum likelihood estimation, MLE). This thesis aims to fix that issue and contribute to a better ordering of candidate sets by classifying the quality and reliability of each component, using machine learning techniques such as decision trees, support vector machines or random forests on information extracted from the version control system (software repository mining), in this case Git: the number of times a component was modified, the number of contributors, the date of the last change and the size of those changes. Our research revealed existing software predictive-analysis solutions, such as BugCache, FixCache and Change Classification, capable of identifying components with a high probability of failure and of classifying changes (commits) as faulty or clean, but none of them solves our problem. This work also aims at integration with Crowbar and at contributing to its possible commercialization.
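The classification step can be illustrated with a deliberately simple stand-in for the classifiers named above (decision trees, SVMs, random forests): a nearest-centroid rule over two mined features. The feature values and labels are hypothetical, chosen only to show the shape of the data flow from repository mining to a reliability label.

```python
def centroid(rows):
    """Mean feature vector of a list of feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def classify(x, c_faulty, c_clean):
    """Nearest-centroid rule on squared Euclidean distance."""
    d = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return "faulty" if d(x, c_faulty) < d(x, c_clean) else "clean"

# Hypothetical training data mined from a repository:
# features = (times modified, number of contributors).
faulty = [(40, 6), (35, 5), (50, 8), (45, 7)]
clean = [(5, 1), (8, 2), (3, 1), (10, 2)]
c_f, c_c = centroid(faulty), centroid(clean)

print(classify((38, 6), c_f, c_c))  # lands near the faulty centroid
print(classify((6, 1), c_f, c_c))   # lands near the clean centroid
```

In the thesis's setting, the resulting per-component reliability label would then reweight Barinel's candidate ranking.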
APA, Harvard, Vancouver, ISO, and other styles
45

Duarte, André Tiago Oliveira da Silva. "Software Repository Mining Analytics to Estimate Software Component Reliability." Dissertação, 2016. https://repositorio-aberto.up.pt/handle/10216/89450.

Full text
Abstract:
Given the rising need to identify the location of errors in software source code, in order to ease developers' work and speed up the development process, much progress has been made in automating this task. There are three main approaches: program-spectra based (PSB), model-based diagnosis (MBD) and program slicing. Barinel, a solution that integrates both PSB and MBD, is, to our knowledge, the option that currently guarantees the best results. However, its ordering of candidate sets (of faulty components) does not take the real quality of each component into account; instead, owing to the difficulty of determining that quality, it uses the set of values that maximize the set's likelihood (maximum likelihood estimation, MLE). This thesis aims to fix that issue and contribute to a better ordering of candidate sets by classifying the quality and reliability of each component, using machine learning techniques such as decision trees, support vector machines or random forests on information extracted from the version control system (software repository mining), in this case Git: the number of times a component was modified, the number of contributors, the date of the last change and the size of those changes. Our research revealed existing software predictive-analysis solutions, such as BugCache, FixCache and Change Classification, capable of identifying components with a high probability of failure and of classifying changes (commits) as faulty or clean, but none of them solves our problem. This work also aims at integration with Crowbar and at contributing to its possible commercialization.
APA, Harvard, Vancouver, ISO, and other styles
46

Machado, João Pedro Rodrigues. "Estimate of energy production in aerial systems of Wind Energy." Master's thesis, 2019. https://hdl.handle.net/10216/121229.

Full text
Abstract:
The environmental impact resulting from the production of electricity from fossil fuels has led to a change in the way this energy is obtained, giving rise to renewable energies. This dissertation discusses a way to produce wind power using an AWES (Airborne Wind Energy System), more specifically a pumping kite generator. In this system a wing is attached to a cable that connects it, through a drum, to an electric generator, producing electric energy from the tension in the cable while the wing moves along a path approximately orthogonal to the wind direction. When the maximum cable length is reached, the wing is controlled so as to minimize the cable tension and is reeled back in to an initial position, from which the production cycle restarts. Based on the dynamic profile of the wing, the reel-out and reel-in phases are analyzed for the purpose of calculating the energy produced during a cycle, and dimensionless parameters are introduced that help describe the efficiency of the cycle. The reel-out and reel-in speeds are calculated and used to compute the power over a cycle. The maximum power obtained during a cycle is calculated in two steps, the first for wind speeds below the nominal speed and the second for speeds above it. Through a selection of several parameters, a power curve of the system is also constructed. Finally, the theoretical annual energy production at a given location is obtained from real wind data.
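The per-cycle energy bookkeeping described above reduces to a short calculation: energy generated during reel-out minus energy spent during reel-in, divided by the total cycle time. The operating point below is a hypothetical example, not a result from the thesis.

```python
def cycle_average_power(F_out, v_out, F_in, v_in, stroke):
    """Net mechanical power of one pumping cycle: energy generated while
    reeling out minus energy spent reeling in, divided by the cycle time.
    Forces in N, speeds in m/s, stroke (tether length change) in m."""
    t_out = stroke / v_out            # reel-out (traction) phase duration, s
    t_in = stroke / v_in              # reel-in (recovery) phase duration, s
    e_out = F_out * v_out * t_out     # energy produced, J
    e_in = F_in * v_in * t_in         # energy consumed, J
    return (e_out - e_in) / (t_out + t_in)

# Hypothetical operating point: high tension while reeling out slowly,
# low tension while reeling in fast, over a 200 m stroke.
p = cycle_average_power(F_out=4000.0, v_out=3.0, F_in=400.0, v_in=6.0, stroke=200.0)
print(p)  # net average power in W
```

Note that each phase's energy is simply force times stroke length, so the cycle efficiency depends on the force ratio and the duty split between the two phases, which is exactly what the dimensionless parameters in the thesis capture.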
APA, Harvard, Vancouver, ISO, and other styles
47

Ahmed, Razi. "Accuracy of biomass and structure estimates from radar and lidar." 2012. https://scholarworks.umass.edu/dissertations/AAI3518203.

Full text
Abstract:
A better understanding of ecosystem processes requires accurate estimates of forest biomass and structure on global scales. Recently, remote sensing instruments such as radar and lidar have demonstrated the ability to estimate forest parameters from spaceborne platforms in a consistent manner. These advances can be exploited for global forest biomass accounting and structure characterization, leading to a better understanding of the global carbon cycle. The most popular techniques for estimating forest parameters from radar instruments use backscatter intensity, interferometry, and polarimetric interferometry. This dissertation analyzes the accuracy of biomass and structure estimates over temperate forests of the northeastern United States. An empirical approach is adopted, relying on ground truth data collected during field campaigns over the Harvard and Howland Forests in 2009. The accuracy of field biomass estimates, including the impact of the diameter-biomass allometry, is characterized for the field sites. Full-waveform lidar data from the two 2009 LVIS field campaigns over the Harvard and Howland forests are analyzed to assess the accuracy of various lidar-biomass relationships. Radar data from NASA JPL's UAVSAR are analyzed to assess the accuracy of backscatter-biomass relationships with a theoretical radar error model. The relationship between field biomass and InSAR heights is explored using SRTM elevation and LVIS-derived ground topography. Temporal decorrelation, a major factor affecting the accuracy of repeat-pass InSAR observations of forests, is analyzed using SIR-C single-day repeat data from 1994. Finally, PolInSAR inversion of heights over the Harvard and Howland forests is explored using UAVSAR repeat-pass data from the 2009 campaign. These heights are compared with LVIS height estimates, and the impact of temporal decorrelation is assessed.
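The diameter-biomass allometry whose impact the abstract examines is typically a power law fitted on log-transformed data, with per-tree estimates summed to a plot-level biomass density. A minimal sketch of that computation, with placeholder coefficients rather than the species- and site-specific ones fitted for the Harvard and Howland field sites:

```python
import math


def tree_biomass_kg(dbh_cm, a=-2.0, b=2.4):
    """Power-law allometry AGB = exp(a) * DBH^b, fitted in log-log space.

    a and b are illustrative placeholders; real coefficients are
    species- and site-specific and carry their own uncertainty, which
    propagates into the field biomass estimate.
    """
    return math.exp(a + b * math.log(dbh_cm))


def plot_biomass_mg_per_ha(dbh_list_cm, plot_area_ha):
    """Plot-level field biomass: sum per-tree allometric biomass,
    then normalize by plot area (kg -> Mg/ha)."""
    total_kg = sum(tree_biomass_kg(d) for d in dbh_list_cm)
    return total_kg / 1000.0 / plot_area_ha
```

Plot-level values computed this way serve as the "ground truth" against which lidar- and radar-derived biomass relationships are then regressed.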
APA, Harvard, Vancouver, ISO, and other styles
48

Tayade, Rajeshwary. "Robustness analysis of linear estimators." 2003. http://hdl.handle.net/1969/500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Silva, João Pedro Vasques Vieira da. "An optimization framework to estimate the active and reactive power flexibility in the TSO-DSO interface." Doctoral thesis, 2021. https://hdl.handle.net/10216/134152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Silva, João Pedro Vasques Vieira da. "An optimization framework to estimate the active and reactive power flexibility in the TSO-DSO interface." Tese, 2021. https://hdl.handle.net/10216/134152.

Full text
APA, Harvard, Vancouver, ISO, and other styles