
Dissertations / Theses on the topic 'Metric estimation'

Consult the top 50 dissertations / theses for your research on the topic 'Metric estimation.'


1

O'Loan, Caleb J. "Topics in estimation of quantum channels." Thesis, University of St Andrews, 2010. http://hdl.handle.net/10023/869.

Full text
Abstract:
A quantum channel is a mapping which sends density matrices to density matrices. The estimation of quantum channels is of great importance to the field of quantum information. In this thesis two topics related to estimation of quantum channels are investigated. The first of these is the upper bound of Sarovar and Milburn (2006) on the Fisher information obtainable by measuring the output of a channel. Two questions raised by Sarovar and Milburn about their bound are answered. A Riemannian metric on the space of quantum states is introduced, related to the construction of the Sarovar and Milburn bound. Its properties are characterized. The second topic investigated is the estimation of unitary channels. The situation is considered in which an experimenter has several non-identical unitary channels that have the same parameter. It is shown that it is possible to improve estimation using the channels together, analogous to the case of identical unitary channels. Also, a new method of phase estimation is given based on a method sketched by Kitaev (1996). Unlike other phase estimation procedures which perform similarly, this procedure requires only very basic experimental resources.
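As background for the first topic, the following is a standard textbook formulation (not quoted from the thesis) of the Fisher information obtainable by measuring the output of a parameter-dependent channel: for a channel Λ_θ, input state ρ and POVM {E_x}, the outcome distribution and its classical Fisher information are

```latex
% Output distribution of a parametrised channel measured with POVM {E_x},
% and the classical Fisher information of that distribution (textbook form).
p(x \mid \theta) = \operatorname{Tr}\!\left[ E_x \, \Lambda_\theta(\rho) \right],
\qquad
F(\theta) = \sum_x p(x \mid \theta)\,
            \bigl( \partial_\theta \ln p(x \mid \theta) \bigr)^2 .
```

The Sarovar and Milburn bound studied in the thesis is an upper bound on this obtainable Fisher information.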
APA, Harvard, Vancouver, ISO, and other styles
2

Kazzazi, Seyedeh Mandan. "Dental metric standards for sex estimation in archaeological populations from Iran." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/31067.

Full text
Abstract:
Sex estimation of skeletal remains is one of the major components of forensic identification of unknown individuals. Teeth are a potential source of information on sex and are often recovered in archaeological or forensic contexts due to their post-mortem longevity. Currently there is limited data on dental sexual dimorphism of archaeological populations from Iran. This dissertation represents the first study to provide a dental sex estimation method for Iron Age populations. The current study was conducted on the skeletal remains of 143 adults from two Iron Age populations in close temporal and geographic proximity in the Solduz Valley (West Azerbaijan Province of Iran). 2D and 3D cervical mesiodistal and buccolingual and root volume measurements of maxillary and mandibular teeth were used to investigate the degree of sexual dimorphism in permanent dentition and to assess their applicability in sex estimation. In total 1327, 457, and 480 anterior and posterior teeth were used to collect 2D cervical, 3D cervical, and root volume measurements respectively. 2D cervical measurements were taken using Hillson-Fitzgerald dental calliper and 3D measurements were collected using CT images provided by Open Research Scan Archive (ORSA) - Penn Museum. 3D models of the teeth were created using manual segmentation in the Amira 6.01 software package. Since tooth density largely differs from crown to apex, root segmentation required two threshold levels: the segmentation of the root from the jaw and the segmentation of the crown from the root. Thresholds used for root segmentation were calculated using the half maximum height protocol of Spoor et al. (1993) for each skull, and thresholds used for crown segmentation were set visually for each tooth separately. Data was analysed using discriminant function analysis and posterior probabilities were calculated for all produced formulae where sex was previously assessed from morphological features of pelvis and skull. Bootstrapping was used to account for small sample sizes in the analysis. Statistical analysis was carried out using SPSS 23. The percentage of sexual dimorphism was also used to quantify the amount of sexual dimorphism in the sample. The results showed that incisors and canines were the most sexually dimorphic teeth, providing percentages of correct sex classification between 80% and 100% depending on the measurement used. Root volume measurement was shown to be the most sexually dimorphic variable providing an accuracy of over 90% in all functions. The present study provided the first dental metric standards for sex estimation using odontometric data in Iranian archaeological populations. Dental measurements, particularly root volume measurements, were found to be of value for sex assessment and the method presented here could be a useful tool for establishing accurate demographic data from skeletal remains of the Iron Age from Iran.
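As an illustration of the statistical machinery named in this abstract (discriminant function analysis combined with bootstrapping), here is a minimal sketch on synthetic data using scikit-learn; the measurement values, group means and sample sizes are invented for illustration and are not the Iron Age data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.utils import resample

rng = np.random.default_rng(0)

# Synthetic cervical mesiodistal / buccolingual measurements (mm) for two sexes.
# Values are illustrative only, not the archaeological sample described above.
n = 60
male   = rng.normal(loc=[7.4, 8.1], scale=0.35, size=(n, 2))
female = rng.normal(loc=[6.9, 7.6], scale=0.35, size=(n, 2))
X = np.vstack([male, female])
y = np.array([1] * n + [0] * n)          # 1 = male, 0 = female

# Bootstrap the discriminant function to account for small sample sizes:
# refit on resampled data and record the classification accuracy each time.
accuracies = []
for _ in range(1000):
    Xb, yb = resample(X, y)
    lda = LinearDiscriminantAnalysis().fit(Xb, yb)
    accuracies.append(lda.score(X, y))   # evaluate on the original sample

print(f"mean correct sex classification: {np.mean(accuracies):.1%}")
```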
APA, Harvard, Vancouver, ISO, and other styles
3

Tabulo, Moti M. "Radio resource management and metric estimation for multi-carrier CDMA systems." Thesis, University of Edinburgh, 2005. http://hdl.handle.net/1842/13065.

Full text
Abstract:
This thesis investigates the management of radio resources in the physical (PHY) and MAC layers of multi-carrier CDMA (MC-CDMA) systems, and how the estimation of metrics in the various layers may be used to perform cross-layer management of resources, providing increased QoS whilst making optimal use of the radio resource. At the PHY layer, the grouping and subcarrier allocation problem for a grouped MC-CDMA system is formulated as an integer linear programming problem. Two algorithms are proposed to solve this problem, namely a Branch and Bound based algorithm and a mixed probabilistic-greedy Local Search algorithm. The Local Search algorithm is found to offer increased QoS (in terms of BER) for more users at a lower complexity than any of the other algorithms. At the MAC layer, a new multi-rate model, multi-group MC-CDMA (MG-MC-CDMA), is introduced and the performance of power-control and multi-group allocation algorithms in the MG-MC-CDMA system is examined. A weighted fair queuing scheduler that takes advantage of the particular features of the MG-MC-CDMA system is proposed. A capacity model, incorporating an interference analysis and taking into account the nature of the traffic types carried in the system, is outlined. In addition to MAC layer metrics characterising the traffic in the system, the capacity model has, as some of its required metrics, PHY layer parameters such as the ratio of inter-cell interference to total received power and information on whether or not a mobile is in a cell's edge region. New techniques are proposed to estimate these metrics. The final contribution of the thesis is the use of the proposed dynamic capacity estimation framework to develop new radio resource management algorithms that work across the PHY and MAC layers to deliver enhanced QoS.
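To illustrate the flavour of the mixed probabilistic-greedy Local Search mentioned above, the following is a toy sketch of assigning users to subcarrier groups; the objective function, capacities and channel gains are invented placeholders and do not reproduce the thesis' integer-programming formulation.

```python
import random

random.seed(1)
N_USERS, N_GROUPS, CAP = 12, 4, 3          # toy sizes, capacity per group

# Hypothetical per-(user, group) channel gains; higher is better.
gain = [[random.random() for _ in range(N_GROUPS)] for _ in range(N_USERS)]

def objective(assign):
    # Total gain of the assignment (a stand-in for a QoS/BER-based objective).
    return sum(gain[u][g] for u, g in enumerate(assign))

# Greedy construction: each user takes its best group that still has capacity.
load = [0] * N_GROUPS
assign = []
for u in range(N_USERS):
    best = max((g for g in range(N_GROUPS) if load[g] < CAP),
               key=lambda g: gain[u][g])
    assign.append(best)
    load[best] += 1

# Probabilistic local search: random pairwise swaps, kept only if they improve.
for _ in range(2000):
    u, v = random.sample(range(N_USERS), 2)
    candidate = assign[:]
    candidate[u], candidate[v] = candidate[v], candidate[u]
    if objective(candidate) > objective(assign):
        assign = candidate

print("assignment:", assign, "objective:", round(objective(assign), 3))
```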
APA, Harvard, Vancouver, ISO, and other styles
4

Strobel, Matthias. "Estimation of minimum mean squared error with variable metric from censored observations." [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:93-opus-35333.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Engström, Isak. "Automated Gait Analysis : Using Deep Metric Learning." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-178139.

Full text
Abstract:
Sectors of security, safety, and defence require methods for identifying people on the individual level. Automation of these tasks has the potential of outperforming manual labor, as well as relieving workloads. The ever-extending surveillance camera networks, advances in human pose estimation from monocular cameras, together with the progress of deep learning techniques, pave the way for automated walking gait analysis as an identification method. This thesis investigates the use of 2D kinematic pose sequences to represent gait, monocularly extracted from a limited dataset containing walking individuals captured from five camera views. The sequential information of the gait is captured using recurrent neural networks. Techniques in deep metric learning are applied to evaluate two network models, with contrasting output dimensionalities, against deep-metric-, and non-deep-metric-based embedding spaces. The results indicate that the gait representation, network designs, and network learning structure show promise when identifying individuals, scaling particularly well to unseen individuals. However, with the limited dataset, the network models performed best when the dataset included the labels from both the individuals and the camera views simultaneously, contrary to when the data only contained the labels from the individuals without the information of the camera views. For further investigations, an extension of the data would be required to evaluate the accuracy and effectiveness of these methods, for the re-identification task of each individual.

The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
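As a sketch of the approach summarised in this abstract (a recurrent encoder over 2D pose sequences trained with deep metric learning), the following PyTorch fragment uses a triplet margin loss; the tensor shapes, keypoint count and random data are placeholders rather than the thesis' actual pipeline.

```python
import torch
import torch.nn as nn

class GaitEncoder(nn.Module):
    """Encodes a sequence of 2D poses into a fixed-size gait embedding."""
    def __init__(self, n_keypoints=17, hidden=128, embed_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * n_keypoints, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, embed_dim)

    def forward(self, x):                       # x: (batch, frames, 2*keypoints)
        _, (h, _) = self.lstm(x)
        z = self.head(h[-1])                    # final hidden state of last layer
        return nn.functional.normalize(z, dim=1)

encoder = GaitEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.3)
optim = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Placeholder batch: anchor/positive from the same person, negative from another.
anchor   = torch.randn(8, 60, 34)
positive = torch.randn(8, 60, 34)
negative = torch.randn(8, 60, 34)

loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optim.step()
print(float(loss))
```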

APA, Harvard, Vancouver, ISO, and other styles
6

Winden, Matthew Wayne. "INTEGRATING STATED PREFERENCE CHOICE ANALYSIS AND MULTI-METRIC INDICATORS IN ENVIRONMENTAL VALUATION." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343325594.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schenkel, Flávio Schramm. "Studies on effects of parental selection on estimation of genetic parameters and breeding values of metric traits." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ35812.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Khodabandeloo, Babak, Dyan Melvin, and Hongki Jo. "Model-Based Heterogeneous Data Fusion for Reliable Force Estimation in Dynamic Structures under Uncertainties." MDPI AG, 2017. http://hdl.handle.net/10150/626477.

Full text
Abstract:
Direct measurements of external forces acting on a structure are infeasible in many cases. The Augmented Kalman Filter (AKF) has several attractive features that can be utilized to solve the inverse problem of identifying applied forces, as it requires only the dynamic model and the measured responses of the structure at a few locations. However, the AKF intrinsically suffers from numerical instabilities when accelerations, which are the most common response measurements in structural dynamics, are the only measured responses. Although displacement measurements can be used to overcome the instability issue, absolute displacement measurements are challenging and expensive for full-scale dynamic structures. In this paper, a reliable model-based data fusion approach is investigated that reconstructs dynamic forces applied to structures using heterogeneous structural measurements (i.e., strains and accelerations) in combination with the AKF. The way of incorporating multi-sensor measurements in the AKF is formulated. The formulation is then implemented and validated through numerical examples considering possible uncertainties in numerical modeling and sensor measurement. A planar truss example was chosen to clearly explain the formulation, while the method and formulation are applicable to other structures as well.
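The following is a minimal numerical sketch of the idea described above: a Kalman filter on a state augmented with the unknown force, updated with heterogeneous (strain plus acceleration) measurements. The single-degree-of-freedom model, discretisation and noise levels are illustrative assumptions, not the paper's truss example.

```python
import numpy as np

# Single-degree-of-freedom oscillator, augmented state x = [disp, vel, force],
# with the unknown force modelled as a random walk (illustrative values only).
m, c, k, dt = 1.0, 4.0, 100.0, 0.01
steps = 2000
rng = np.random.default_rng(0)

A = np.array([[1.0, dt, 0.0],
              [-k / m * dt, 1.0 - c / m * dt, dt / m],
              [0.0, 0.0, 1.0]])
alpha = 50.0                                   # hypothetical strain gauge factor
H = np.array([[alpha, 0.0, 0.0],               # strain ~ displacement
              [-k / m, -c / m, 1.0 / m]])      # acceleration equation
Q = np.diag([1e-10, 1e-10, 1e-2])              # process noise: force walks freely
R = np.diag([1e-6, 1e-3])                      # strain / acceleration noise

x_true = np.zeros(3)
x_est, P = np.zeros(3), np.eye(3)
f_est = []
for t in range(steps):
    x_true[2] = 5.0 * np.sin(2 * np.pi * 1.5 * t * dt)       # true applied force
    z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)  # noisy measurements

    # Standard Kalman predict/update on the augmented state.
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_est = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(3) - K @ H) @ P_pred
    f_est.append(x_est[2])

    x_true = A @ x_true                                        # advance true system

print("last estimated force:", round(f_est[-1], 2))
```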
APA, Harvard, Vancouver, ISO, and other styles
9

Rojas, Christian Andres. "Demand Estimation with Differentiated Products: An Application to Price Competition in the U.S. Brewing Industry." Diss., Virginia Tech, 2005. http://hdl.handle.net/10919/28916.

Full text
Abstract:
A large part of the empirical work on differentiated products markets has focused on demand estimation and the pricing behavior of firms. These two themes are key inputs in important applications such as the merging of two firms or the introduction of new products. The validity of inferences, therefore, depends on accurate demand estimates and sound assumptions about the pricing behavior of firms. This dissertation makes a contribution to this literature in two ways. First, it adds to previous techniques of estimating demand for differentiated products. Second, it extends previous analyses of pricing behavior to models of price leadership that, while important, have received limited attention. The investigation focuses on the U.S. brewing industry, where price leadership appears to be an important type of firm behavior. The analysis is conducted in two stages. In the first stage, the recent Distance Metric (DM) method devised by Pinkse, Slade and Brett is used to estimate the demand for 64 brands of beer in 58 major metropolitan areas of the United States. This study adds to previous applications of the DM method (Pinkse and Slade; Slade 2004) by employing a demand specification that is more flexible and also by estimating advertising substitution coefficients for numerous beer brands. In the second stage, different pricing models are compared and ranked by exploiting the exogenous change in the federal excise tax of 1991. Demand estimates of the first stage are used to compute the implied marginal costs for the different models of pricing behavior prior to the tax increase. Then, the tax increase is added to these pre-tax-increase marginal costs, and equilibrium prices for all brands are simulated for each model of pricing behavior. These "predicted" prices are then compared to actual prices for model assessment. Results indicate that Bertrand-Nash predicts the pricing behavior of firms more closely than other models, although Stackelberg leadership yields results that are not substantially different from the Bertrand-Nash model. Nevertheless, Bertrand-Nash tends to under-predict prices of more price-elastic brands and to over-predict prices of less price-elastic brands. An implication of this result is that Anheuser-Busch could exert more market power by increasing the price of its highly inelastic brands, especially Budweiser. Overall, actual price movements as a result of the tax increase tend to be more similar across brands than predicted by any of the models considered. While this pattern is not inconsistent with leadership behavior, leadership models considered in this dissertation do not conform with this pattern.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
10

Paditz, Ludwig. "On the error-bound in the nonuniform version of Esseen's inequality in the Lp-metric." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-112888.

Full text
Abstract:
The aim of this paper is to investigate the known nonuniform version of Esseen's inequality in the Lp-metric, in order to obtain a numerical bound for the constant L appearing in it. For a long time the results given by several authors suggested the impossibility of a nonuniform estimate in the most interesting case δ=1, because the effect L=L(δ)=O(1/(1-δ)), δ->1-0, was observed, where 2+δ, 0<δ<1, is the order of the assumed moments of the considered independent random variables X_k, k=1,2,...,n. Again making use of the method of conjugated distributions, we improve the well-known technique to show, in the most interesting case δ=1, the finiteness of the absolute constant L and to prove L = L(1) ≤ 127.74 · 7.31^(1/p), p>1. In the case 0<δ<1 we only give the analytical structure of L but omit numerical calculations. Finally, an example on the normal approximation of sums of l_2-valued random elements demonstrates the application of the nonuniform mean central limit bounds obtained here.
Das Anliegen dieses Artikels besteht in der Untersuchung einer bekannten Variante der Esseen'schen Ungleichung in Form einer ungleichmäßigen Fehlerabschätzung in der Lp-Metrik mit dem Ziel, eine numerische Abschätzung für die auftretende absolute Konstante L zu erhalten. Längere Zeit erweckten die Ergebnisse, die von verschiedenen Autoren angegeben wurden, den Eindruck, dass die ungleichmäßige Fehlerabschätzung im interessantesten Fall δ=1 nicht möglich wäre, weil auf Grund der geführten Beweisschritte der Einfluss von δ auf L in der Form L=L(δ)=O(1/(1-δ)), δ->1-0, beobachtet wurde, wobei 2+δ, 0<δ<1, die Ordnung der vorausgesetzten Momente der betrachteten unabhängigen Zufallsgrößen X_k, k=1,2,...,n, angibt. Erneut wird die Methode der konjugierten Verteilungen angewendet und die gut bekannte Beweistechnik verbessert, um im interessantesten Fall δ=1 die Endlichkeit der absoluten Konstanten L nachzuweisen und um zu zeigen, dass L=L(1)=<127,74*7,31^(1/p), p>1, gilt. Im Fall 0<δ<1 wird nur die analytische Struktur von L herausgearbeitet, jedoch ohne numerische Berechnungen. Schließlich wird mit einem Beispiel zur Normalapproximation von Summen l_2-wertigen Zufallselementen die Anwendung der gewichteten Fehlerabschätzung im globalen zentralen Grenzwertsatz demonstriert
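For orientation, the classical pointwise analogue of the bound investigated above can be stated in its standard textbook (Bikelis-type) form; this display is background, not a formula quoted from the paper. For independent, centred X_k with finite moments of order 2+δ and B_n = Σ E X_k², it reads:

```latex
\left| \, \mathbb{P}\!\left( \frac{X_1 + \dots + X_n}{\sqrt{B_n}} < x \right) - \Phi(x) \right|
\;\le\;
\frac{L}{(1 + |x|)^{2+\delta}} \,
\frac{\sum_{k=1}^{n} \mathbb{E}\,|X_k|^{2+\delta}}{B_n^{(2+\delta)/2}},
\qquad x \in \mathbb{R}.
```

The paper studies the Lp-analogue of this estimate and the size of the absolute constant L, in particular for δ = 1.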
APA, Harvard, Vancouver, ISO, and other styles
11

Muller, Jacob. "Higher order differential operators on graphs." Licentiate thesis, Stockholms universitet, Matematiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-178070.

Full text
Abstract:
This thesis consists of two papers, enumerated by Roman numerals. The main focus is on the spectral theory of n-Laplacians. Here, an n-Laplacian, for integer n, refers to a metric graph equipped with a differential operator whose differential expression is the 2n-th derivative. In Paper I, a classification of all vertex conditions corresponding to self-adjoint n-Laplacians is given, and for these operators, a secular equation is derived. Their spectral asymptotics are analysed using the fact that the secular function is close to a trigonometric polynomial, a type of almost periodic function. The notion of the quasispectrum for n-Laplacians is introduced, identified with the positive roots of the associated trigonometric polynomial, and is proved to be unique. New results about almost periodic functions are proved, and using these it is shown that the quasispectrum asymptotically approximates the spectrum, counting multiplicities, and results about asymptotic isospectrality are deduced. The results obtained on almost periodic functions have wider applications outside the theory of differential operators. Paper II deals more specifically with bi-Laplacians (n = 2), and a notion of standard conditions is introduced. Upper and lower estimates for the spectral gap (the difference between the two lowest eigenvalues) for these standard conditions are derived. This is achieved by adapting the methods of graph surgery used for quantum graphs to fourth order differential operators. It is observed that these methods offer stronger estimates for certain classes of metric graphs. A geometric version of the Ambartsumian theorem for these operators is proved.
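To make the objects concrete (an illustrative display assuming the usual sign convention for poly-harmonic operators, not quoted from the thesis), an n-Laplacian acts on each edge e of the metric graph through the eigenvalue problem

```latex
% n = 1 gives the quantum-graph Laplacian -u'' = \lambda u,
% n = 2 gives the bi-Laplacian u'''' = \lambda u studied in Paper II.
(-1)^n \, u_e^{(2n)}(x) = \lambda \, u_e(x), \qquad x \in e,
```

with the self-adjoint vertex conditions classified in Paper I coupling the edges at the vertices.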
APA, Harvard, Vancouver, ISO, and other styles
12

Carbonera, Luvizon Diogo. "Apprentissage automatique pour la reconnaissance d'action humaine et l'estimation de pose à partir de l'information 3D." Thesis, Cergy-Pontoise, 2019. http://www.theses.fr/2019CERG1015.

Full text
Abstract:
La reconnaissance d'actions humaines en 3D est une tâche difficile en raison de la complexité de mouvements humains et de la variété des poses et des actions accomplies par différents sujets. Les technologies récentes basées sur des capteurs de profondeur peuvent fournir les représentations squelettiques à faible coût de calcul, ce qui est une information utile pour la reconnaissance d'actions. Cependant, ce type de capteurs se limite à des environnements contrôlés et génère fréquemment des données bruitées. Parallèlement à ces avancées technologiques, les réseaux de neurones convolutifs (CNN) ont montré des améliorations significatives pour la reconnaissance d'actions et pour l'estimation de la pose humaine en 3D à partir des images couleurs. Même si ces problèmes sont étroitement liés, les deux tâches sont souvent traitées séparément dans la littérature. Dans ce travail, nous analysons le problème de la reconnaissance d'actions humaines dans deux scénarios : premièrement, nous explorons les caractéristiques spatiales et temporelles à partir de représentations de squelettes humains, qui sont agrégées par une méthode d'apprentissage de métrique. Dans le deuxième scénario, nous montrons non seulement l'importance de la précision de la pose en 3D pour la reconnaissance d'actions, mais aussi que les deux tâches peuvent être efficacement effectuées par un seul réseau de neurones profond capable d'obtenir des résultats du niveau de l'état de l'art. De plus, nous démontrons que l'optimisation de bout en bout en utilisant la pose comme contrainte intermédiaire conduit à une précision plus élevée sur la tâche de reconnaissance d'action que l'apprentissage séparé de ces tâches. Enfin, nous proposons une nouvelle architecture adaptable pour l'estimation de la pose en 3D et la reconnaissance de l'action simultanément et en temps réel. Cette architecture offre une gamme de compromis performances vs vitesse avec une seule procédure d'entraînement multitâche et multimodale.
3D human action recognition is a challenging task due to the complexity of human movements and to the variety of poses and actions performed by distinct subjects. Recent technologies based on depth sensors can provide 3D human skeletons with low computational cost, which is useful information for action recognition. However, such low cost sensors are restricted to controlled environments and frequently output noisy data. Meanwhile, convolutional neural networks (CNN) have shown significant improvements on both action recognition and 3D human pose estimation from RGB images. Despite being closely related problems, the two tasks are frequently handled separately in the literature. In this work, we analyze the problem of 3D human action recognition in two scenarios: first, we explore spatial and temporal features from human skeletons, which are aggregated by a shallow metric learning approach. In the second scenario, we not only show that precise 3D poses are beneficial to action recognition, but also that both tasks can be efficiently performed by a single deep neural network and still achieve state-of-the-art results. Additionally, we demonstrate that end-to-end optimization using poses as an intermediate constraint leads to significantly higher accuracy on the action task than separate learning of these tasks. Finally, we propose a new scalable architecture for real-time 3D pose estimation and action recognition simultaneously, which offers a range of performance vs. speed trade-offs with a single multimodal and multitask training procedure.
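A minimal sketch of the multi-task idea summarised above, with one shared backbone feeding a 3D-pose head and an action head so the pose can serve as an intermediate constraint; the layer sizes, joint count and data are placeholders, not the architecture proposed in the thesis.

```python
import torch
import torch.nn as nn

class PoseActionNet(nn.Module):
    """Shared backbone with a 3D-pose head and an action-classification head."""
    def __init__(self, n_joints=17, n_actions=60):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pose_head = nn.Linear(64, n_joints * 3)     # 3D joint coordinates
        self.action_head = nn.Linear(64, n_actions)      # action logits

    def forward(self, frame):
        feat = self.backbone(frame)
        return self.pose_head(feat), self.action_head(feat)

net = PoseActionNet()
frame = torch.randn(4, 3, 128, 128)                      # placeholder RGB batch
pose_target = torch.randn(4, 17 * 3)
action_target = torch.randint(0, 60, (4,))

pose_pred, action_logits = net(frame)
# Joint loss: the pose acts as an intermediate constraint for the action task.
loss = nn.functional.mse_loss(pose_pred, pose_target) \
     + nn.functional.cross_entropy(action_logits, action_target)
loss.backward()
print(float(loss))
```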
APA, Harvard, Vancouver, ISO, and other styles
13

Akcay, Koray. "Performance Metrics For Fundamental Estimation Filters." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606510/index.pdf.

Full text
Abstract:
This thesis analyzes fundamental estimation filters, namely the Alpha-Beta Filter, Alpha-Beta-Gamma Filter, Constant Velocity (CV) Kalman Filter, Constant Acceleration (CA) Kalman Filter, Extended Kalman Filter, 2-model Interacting Multiple Model (IMM) Filter and 3-model IMM, with respect to their resource requirements and performance. In the resource requirement part, the fundamental estimation filters are compared according to their CPU usage, memory needs and complexity. The fundamental estimation filter with the lowest resource needs is the Alpha-Beta Filter. In the performance evaluation part of this thesis, the performance metrics used are Root-Mean-Square Error (RMSE), Average Euclidean Error (AEE), Geometric Average Error (GAE) and normalized forms of these. The normalized form of the performance metrics makes the measure of error independent of range and of the length of the trajectory. The fundamental estimation filters and performance metrics are implemented in MATLAB. The Monte Carlo simulation method and 6 different air trajectories are used for testing. Test results show that the performance of the fundamental estimation filters varies according to the trajectory and the target dynamics used in constructing the filter. Consequently, filter performance is application-dependent. Therefore, before choosing an estimation filter, the most probable target dynamics, hardware resources and acceptable error level should be investigated. An estimation filter which matches these requirements will be 'the best estimation filter'.
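For reference, the error measures named in this abstract can be computed as follows over Monte Carlo runs; the definitions below follow the usual conventions (root-mean-square, arithmetic mean and geometric mean of the Euclidean errors) and the data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder estimates: (Monte Carlo runs, time steps, position dimensions).
true_pos = rng.normal(size=(100, 50, 2))
est_pos = true_pos + rng.normal(scale=0.3, size=true_pos.shape)

err = np.linalg.norm(est_pos - true_pos, axis=-1)     # Euclidean error per step

rmse = np.sqrt(np.mean(err ** 2, axis=0))             # Root-Mean-Square Error
aee  = np.mean(err, axis=0)                           # Average Euclidean Error
gae  = np.exp(np.mean(np.log(err + 1e-12), axis=0))   # Geometric Average Error

print(rmse.mean(), aee.mean(), gae.mean())
```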
APA, Harvard, Vancouver, ISO, and other styles
14

Hwang, Sung Jun. "Communication over Doubly Selective Channels: Efficient Equalization and Max-Diversity Precoding." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1261506237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Andersson, Veronika, and Hanna Sjöstedt. "Improved effort estimation of software projects based on metrics." Thesis, Linköping University, Department of Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-5269.

Full text
Abstract:

Saab Ericsson Space AB develops products for space at a predetermined price. Since the price is fixed, it is crucial to have a reliable prediction model to estimate the effort needed to develop the product. Software effort estimation is difficult in general, and this is a problem at the software department.

By analyzing metrics collected from former projects, different prediction models are developed to estimate the number of person hours a software project will require. Models for predicting the effort before a project begins are developed first; only a few variables are known at this stage of a project. The models developed are compared to a current model used at the company. Linear regression models improve the estimation error by nine percentage points, and nonlinear regression models improve the result even more. The model used today is also calibrated to improve its predictions. A principal component regression model is developed as well. In addition, a model to improve the estimate during an ongoing project is developed. This is a new approach, and comparison with the first estimate is the only evaluation.

The result is an improved prediction model. There are several models that perform better than the one used today. In the discussion, positive and negative aspects of the models are debated, leading to the choice of a model, recommended for future use.
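As an illustration of the kind of metrics-based regression model discussed in this abstract, here is a minimal sketch; the project metrics, effort figures and the choice of plain linear regression are invented for illustration and are not the company data or the calibrated models of the thesis.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical projects: [requirements count, estimated size (kLOC)]
X = np.array([[12, 4.0], [30, 9.5], [22, 7.1], [45, 15.0], [8, 2.5], [38, 12.2]])
effort_hours = np.array([900, 2400, 1800, 4100, 600, 3300])

model = LinearRegression().fit(X, effort_hours)

new_project = np.array([[25, 8.0]])            # metrics known before the project
print("predicted effort (person hours):", model.predict(new_project)[0].round())
```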

APA, Harvard, Vancouver, ISO, and other styles
16

Asif, Sajjad. "Investigating Web Size Metrics for Early Web Cost Estimation." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16036.

Full text
Abstract:
Context: Web engineering is a new research field which applies engineering principles to produce quality Web applications. Web applications have become more complex with the passage of time, and analysing Web metrics for estimation is difficult because of the wide range of Web applications. Correct estimates of Web development effort play a very important role in the success of large-scale Web development projects. Objectives: In this study I investigated the size metrics and cost drivers used by Web companies for early Web cost estimation. I also aim to validate the findings through industrial interviews and a Web quote form. This form is designed based on the most frequently occurring metrics found after analyzing different companies. Secondly, this research revisits previous work done by Mendes (a senior researcher and contributor in this research area) to validate whether early Web cost estimation trends are the same or have changed. The ultimate goal is to help companies in Web cost estimation. Methods: The first research question is answered by conducting an online survey of 212 Web companies and examining their Web predictor forms (quote forms). All companies included in the survey used Web forms to give quotes on Web development projects based on gathered size and cost measures. The second research question is answered by finding the most frequently occurring size metrics from the results of Survey 1. The list of size metrics is validated by two methods: (i) industrial interviews conducted with 15 Web companies to validate the results of the first survey, and (ii) a quote form designed using the validated results from the industrial interviews and sent to Web companies around the world to seek data on real Web projects. Data gathered from the Web projects are analyzed using a CBR tool, and the results are validated against the industrial interview results along with Survey 1. The final results are compared with the earlier research to answer the third research question of whether the size metrics have changed. All research findings are contributed to the Tukutuku research benchmark project. Results: “Number of pages/features” and “responsive implementation” are the top Web size metrics for early Web cost estimation. Conclusions: This research investigated metrics which can be used for early Web cost estimation at the early stage of Web application development. This is the stage where the application is not built yet, but requirements are being collected and an expected cost estimate is being evaluated. A list of new metric variables is compiled, which can be added to the Tukutuku project.
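To illustrate the case-based reasoning (CBR) analysis mentioned above, the following is a minimal sketch of effort estimation by analogy over two early size metrics; the past cases, features and efforts are invented placeholders, not Tukutuku data.

```python
import numpy as np

# Past Web projects: [number of pages/features, responsive implementation (0/1)]
cases = np.array([[10, 0], [25, 1], [40, 1], [15, 1], [60, 1], [8, 0]], float)
effort_hours = np.array([80, 260, 420, 180, 640, 70], float)

def cbr_estimate(new_case, k=2):
    """Estimate effort as the mean effort of the k most similar past projects."""
    scale = cases.std(axis=0)                       # normalise the features
    dist = np.linalg.norm((cases - new_case) / scale, axis=1)
    nearest = np.argsort(dist)[:k]
    return effort_hours[nearest].mean()

print(cbr_estimate(np.array([30, 1], float)))       # e.g. a mid-sized project
```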
APA, Harvard, Vancouver, ISO, and other styles
17

Archibald, Colin J. "A software testing estimation and process control model." Thesis, Durham University, 1998. http://etheses.dur.ac.uk/4735/.

Full text
Abstract:
The control of the testing process and estimation of the resource required to perform testing is key to delivering a software product of target quality on budget. This thesis explores the use of testing to remove errors, the part that metrics and models play in this process, and considers an original method for improving the quality of a software product. The thesis investigates the possibility of using software metrics to estimate the testing resource required to deliver a product of target quality into deployment, and also to determine during the testing phases the correct point in time to proceed to the next testing phase in the life-cycle. Along with the metrics Clear ratio, Churn, Error rate halving, Severity shift, and faults per week, a new metric, 'Earliest Visibility' (EV), is defined and used to control the testing process. EV is constructed upon the link between the point at which an error is made within development and the point at which it is subsequently found during testing. To increase the effectiveness of testing and reduce costs whilst maintaining quality, the model operates by targeting each test phase at the errors linked to that phase and by having each test phase build upon the previous phase. EV also provides a measure of testing effectiveness and fault introduction rate by development phase. The resource estimation model is based on a gradual refinement of an estimate, which is updated following each development phase as more reliable data is available. Used in conjunction with the process control model, which will ensure the correct testing phase is in operation, the estimation model will have accurate data for each testing phase as input. The proposed model and metrics have been developed and tested on a large-scale (4 million LOC) industrial telecommunications product written in C and C++ running within a Unix environment. It should be possible to extend this work to suit other environments and other development life-cycles.
APA, Harvard, Vancouver, ISO, and other styles
18

Nowak, James. "Integrated Population Models and Habitat Metrics for Wildlife Management." Doctoral thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/26023.

Full text
Abstract:
La gestion des espèces est entièrement dépendante de notre capacité à évaluer les décisions de gestion et de les corriger si nécessaire. Dans un monde idéal, les gestionnaires auraient une connaissance extensive et mécanistique des systèmes qu'ils gèrent et ces connaissances seraient mises à jour de façon continue. Dans la réalité, les gestionnaires doivent gérer les populations et développer des objectifs de populations en dépit de leurs connaissances imparfaites et des manques de données chroniques. L'émergence de nouveaux outils statistiques ouvre toutefois la porte à de nouvelles possibilités, ce qui permet une gestion plus proactive de la faune. Dans le Chapitre 1, j'ai évalué l'efficacité de modèles intégrés de populations (MIP) à combler des lacunes dans notre connaissance en présence de données limitées et de modèles de populations mal spécifiés. J'ai démontré que les MIP peuvent maintenir une précision élevée et présenter un biais faible, et ce dans une large gamme de conditions. Dans le chapitre 2, j'ai développé une approche de MIP qui inclut des effets aléatoires entre les différentes populations. J'ai constaté que les effets aléatoires permettent d'améliorer considérablement les performances des algorithmes d'optimisation, produisent des estimations raisonnables et permettent même d'estimer les paramètres pour les populations avec des données très limitées. J'ai par la suite appliqué le modèle à 51 unités de gestion du Wapiti en Idaho, USA afin de démontrer son application. La viabilité des populations à long terme est généralement réalisée grâce à des manipulations d'habitat qui sont identifiées grâce à des méthodes de sélection des ressources. Les méthodes basées sur la sélection des ressources assument cependant que l'utilisation disproportionnée d'une partie du paysage reflète la volonté d'un individu de remplir une partie de son cycle biologique. Toutefois, dans le troisième chapitre j'ai démontré que de simples mesures d'habitat sont mieux à même de décrire la variation dans la survie des Wapitis. Selon mes résultats, la variation individuelle dans la sélection des habitats était le modèle qui expliquait le mieux la corrélation entre les habitats et le succès reproducteur, et les coefficients de sélection des ressources n'étaient pas corrélés à la survie.
Successful management of harvested species critically depends on an ability to predict the consequences of corrective actions. Ideally, managers would have comprehensive, quantitative and continuous knowledge of a managed system upon which to base decisions. In reality, wildlife managers rarely have comprehensive system knowledge. Despite imperfect knowledge and data deficiencies, a desire exists to manipulate populations and achieve objectives. To this end, manipulation of harvest regimes and the habitat upon which species rely have become staples of wildlife management. Contemporary statistical tools have potential to enhance both the estimation of population size and vital rates while making possible more proactive management. In chapter 1 we evaluate the efficacy of integrated population models (IPM) to fill knowledge voids under conditions of limited data and model misspecification. We show that IPMs maintain high accuracy and low bias over a wide range of realistic conditions. In recognition of the fact that many monitoring programs have focal data collection areas we then fit a novel form of the IPM that employs random effects to effectively share information through space and time. We find that random effects dramatically improve performance of optimization algorithms, produce reasonable estimates and make it possible to estimate parameters for populations with very limited data. We applied these random effect models to 51 elk management units in Idaho, USA to demonstrate the abilities of the models and information gains. Many of the estimates are the first of their kind. Short-term forecasting is the focus of population models, but managers assess viability on longer time horizons through habitat. Modern approaches to understanding large ungulate habitat requirements largely depend on resource selection. An implicit assumption of the resource selection approach is that disproportionate use of the landscape directly reflects an individual’s desire to meet life history goals. However, we show that simple metrics of habitat encountered better describe variations in elk survival. Comparing population level variation through time to individual variation we found that individual variation in habitat used was the most supported model relating habitat to a fitness component. Further, resource selection coefficients did not correlate with survival.
APA, Harvard, Vancouver, ISO, and other styles
19

Miller, Jordan Mitchell. "Estimation of individual tree metrics using structure-from-motion photogrammetry." Thesis, University of Canterbury. Geography, 2015. http://hdl.handle.net/10092/11035.

Full text
Abstract:
The deficiencies of traditional dendrometry mean improvements in methods of tree mensuration are necessary in order to obtain accurate tree metrics for applications such as resource appraisal, and biophysical and ecological modelling. This thesis tests the potential of SfM-MVS (Structure-from-Motion with Multi-View Stereo-photogrammetry) using the software package PhotoScan Professional, for accurately determining linear (2D) and volumetric (3D) tree metrics. SfM is a remote sensing technique, in which the 3D position of objects is calculated from a series of photographs, resulting in a 3D point cloud model. Unlike other photogrammetric techniques, SfM requires no control points or camera calibration. The MVS component of model reconstruction generates a mesh surface based on the structure of the SfM point cloud. The study was divided into two research components, for which two different groups of study trees were used: 1) 30 small, potted ‘nursery’ trees (mean height 2.98 m), for which exact measurements could be made and field settings could be modified, and 2) 35 mature ‘landscape’ trees (mean height 8.6 m) located in parks and reserves in urban areas around the South Island, New Zealand, for which field settings could not be modified. The first component of research tested the ability of SfM-MVS to reconstruct spatially-accurate 3D models from which 2D (height, crown spread, crown depth, stem diameter) and 3D (volume) tree metrics could be estimated. Each of the 30 nursery trees was photographed and measured with traditional dendrometry to obtain ground truth values with which to evaluate against SfM-MVS estimates. The trees were destructively sampled by way of xylometry, in order to obtain true volume values. The RMSE for SfM-MVS estimates of linear tree metrics ranged between 2.6% and 20.7%, and between 12.3% and 47.5% for volumetric tree metrics. Tree stems were reconstructed very well though slender stems and branches were reconstructed poorly. The second component of research tested the ability of SfM-MVS to reconstruct spatially-accurate 3D models from which height and DBH could be estimated. Each of the 35 landscape trees, which varied in height and species, were photographed, and ground truth values were obtained to evaluate against SfM-MVS estimates. As well as this, each photoset was thinned to find the minimum number of images required to achieve total image alignment in PhotoScan and produce an SfM point cloud (minimum photoset), from which 2D metrics could be estimated. The height and DBH were estimated by SfM-MVS from the complete photosets with RMSE of 6.2% and 5.6% respectively. The height and DBH were estimated from the minimum photosets with RMSE of 9.3% and 7.4% respectively. The minimum number of images required to achieve total alignment was between 20 and 50. There does not appear to be a correlation between the minimum number of images required for alignment and the error in the estimates of height or DBH (R² = 0.001 and 0.09 respectively). Tree height does not appear to affect the minimum number of images required for image alignment (R² = 0.08).
APA, Harvard, Vancouver, ISO, and other styles
20

Ellis, Kyle Kent Edward Schnell Thomas. "Eye tracking metrics for workload estimation in flight deck operations." Iowa City : University of Iowa, 2009. http://ir.uiowa.edu/etd/288.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ellis, Kyle Kent Edward. "Eye tracking metrics for workload estimation in flight deck operations." Thesis, University of Iowa, 2009. https://ir.uiowa.edu/etd/288.

Full text
Abstract:
Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload; this research generates quantitative algorithms to classify pilot state through eye tracking metrics. Through various metrics within the realm of eye tracking, flight deck operation research is used to determine correlations between a pilot's workload and eye tracking metric patterns. The basic eye tracking metrics, such as saccadic movement, fixations and link analysis, provide clear measurable elements that experimenters analyzed to create a quantitative algorithm that reliably classifies operator workload. The study was conducted in the University of Iowa's Operator Performance Lab 737-800 simulator, which was outfitted with a Smarteye remote eye-tracking system that yielded gaze vector resolution down to 1 degree across the flight deck. Three levels of automation and two levels of outside visual conditions were varied on a KORD ILS approach: the visual conditions ranged between CAT II and CAT III, and the automation varied from full autopilot controlled by the pre-programmed flight management system, through flight director guidance, to a full manual approach with localizer and glide slope guidance. Initial subjective results indicated a successful variation in pilot workload across all 12 IFR pilots that were run through the 7-run testing sequence.
APA, Harvard, Vancouver, ISO, and other styles
22

Shahidi, Parham. "Fuzzy Analysis of Speech Metrics to Estimate Crew Alertness." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/50436.

Full text
Abstract:
A novel approach for estimating alertness levels from speech and tagging them with a reliability component has been developed. The Fatigue Quotient and Believability are both derived from the time series analysis of the speech signal in the communication between the operator and dispatch. Operator attention is the most important human factor element for safe transportation operations. In addition to substance abuse, illness and intoxication fatigue is a major contributing factor to the decrease of attention. The goal of this study was to develop a means to detect and estimate fatigue levels of railroad operating personnel during on-duty hours. This goal continues to gain importance with new efforts from the government to expand rail transportation operations as a tool for high speed mass transportation in urban areas. Previous research has shown that sleeping disorders, reduced hours of rest and disrupted circadian rhythms lead to significantly increased fatigue levels which manifest themselves in alterations of speech patterns as compared to alert states of mind. In this study vocal indicators of fatigue are extracted from the speech signal and Fuzzy Logic is used to generate an estimate of the cognitive state of the train conductor. The output is tagged with a believability metric based on its behavior with respect to previous outputs and a fully alert state. Communication between the conductor and dispatch over radio provides an unobtrusive way of accessing the speech signal through existing speech infrastructure. The speech signal is discretized and processed through a digital signal processing algorithm, which extracts speech metrics from the signal that were determined to be indicative of fatigue levels. Speech metrics include, but are not limited to, speech duration, silence duration, word production rate, phrase gap duration, number of words per phrase and speech intensity. A fuzzy logic minimum inference engine maps the inputs to an output through an empirically determined rule base. The rule base and the associated membership functions were derived from batch mode and real time testing and the subsequent tuning of parameters to refine the detection of changes in patterns. To increase the validity and transparency of the output time series analysis is used to create the believability metric. A moving average filter eliminates the short term fluctuations and determines the long term trend of the output. A moving standard deviation estimation quantifies instantaneous fluctuations and provides a measure of the difference to a nominal alertness state. A real time version of the algorithm was developed and prototyped on a generic, low cost and scalable hardware platform. Rapid Prototyping was realized through the Matlab/Simulink xPC Target toolbox which allowed for instant real time code generation, testing and modification. This testing environment together with batch mode testing was used to extensively test and fine tune parameters to improve the performance of the algorithm. A testing procedure was developed and standardized to collect data and tune the parameters of the algorithm. As a high level goal it was proven that the concept of digital signal processing and Fuzzy Logic can be utilized to detect changes in speech and estimate alertness levels from it. Furthermore, this study has proven that the framework to run such an analysis continuously as a monitoring function in locomotive cabins is feasible and can be realized with relatively inexpensive hardware. 
The development, implementation and testing process conducted for this project is explained and results are presented here.
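As an illustration of min-inference fuzzy logic applied to speech metrics, here is a toy sketch producing a fatigue score from two of the metrics named above; the membership functions, rules and thresholds are invented and do not reproduce the dissertation's empirically tuned rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fatigue_quotient(word_rate, gap_s):
    """word_rate in words/min, gap_s = mean pause between phrases in seconds."""
    slow_speech = tri(word_rate, 60, 90, 120)      # membership in "slow speech"
    long_gaps   = tri(gap_s, 0.8, 1.6, 2.4)        # membership in "long pauses"
    fast_speech = tri(word_rate, 110, 150, 190)
    short_gaps  = tri(gap_s, 0.0, 0.3, 0.9)

    # Minimum (Mamdani-style) inference over two illustrative rules.
    fatigued = min(slow_speech, long_gaps)          # IF slow AND long gaps
    alert    = min(fast_speech, short_gaps)         # IF fast AND short gaps

    # Crisp output as a weighted average of rule consequents (0 = alert, 1 = fatigued).
    if fatigued + alert == 0:
        return 0.5                                  # no rule fires: neutral output
    return (1.0 * fatigued + 0.0 * alert) / (fatigued + alert)

print(fatigue_quotient(word_rate=85, gap_s=1.8))    # high score -> likely fatigued
```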
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
23

Fonseca, Filho José Raimundo dos Santos. "ESTIMAÇÃO DE MÉTRICAS DE DESENVOLVIMENTO AUXILIADA POR REDES NEURAIS ARTIFICIAIS." Universidade Federal do Maranhão, 2003. http://tedebc.ufma.br:8080/jspui/handle/tede/324.

Full text
Abstract:
Several ways of modeling the software engineering development process, capable of supporting decision making in project management, are being researched. Software metrics, process modeling and estimation techniques have been considered independently, taking into account either the intrinsic characteristics of the software or its constructive process. This research proposes a complete, simple and efficient model for representing the whole development process which, based on a set of features of the process and basic attributes of the software, yields good estimates of development metrics (time and effort) while the process is still at its beginning. The model relates constructive characteristics of the process to each type of organization in order to identify classes of homogeneous behavior, based on a Kohonen neural network. From this classification, and according to the basic attributes of each software product being developed, the metrics can then be estimated with the support of feedforward neural networks. A prototype is specified in the Unified Modeling Language (UML) and implemented to estimate development metrics. Comparisons of the obtained results with those available in the literature are presented.
Diversas representações do processo de desenvolvimento na Engenharia de softwares capazes de, eficientemente, subsidiar a tomada de decisões no gerenciamento de projetos, vêm sendo arduamente pesquisadas. Métricas de softwares, modelos de processo e técnicas de estimação têm sido propostos em grande quantidade, tanto devido a características intrínsecas dos softwares quanto a características do próprio processo construtivo. Buscando superar algumas das dificuldades de estimação de métricas relacionadas ao processo de desenvolvimento, este trabalho realiza, inicialmente, um estudo de ferramentas voltadas para tal objetivo e que estão disponíveis no mercado. Em seguida, um conjunto de descritores do processo em questão e também um conjunto de atributos básicos dos softwares será levantado. A partir de então, é proposto um modelo que represente o processo de desenvolvimento de maneira simples e eficiente. O modelo de processo do desenvolvimento na Engenharia de softwares relaciona as características desse processo construtivo a classes de entidades desenvolvedoras, tal que se possa estabelecer um comportamento homogêneo ao processo. Baseado nessa classificação, são relacionados, de maneira direta, métricas (tempo e esforço) de desenvolvimento com os atributos básicos dos softwares, definidos por Albrecht, visando a estimação de métricas. O modelo de processo é baseado no mapa de Kohonen e o estimador de métricas será auxiliado por redes neurais feed forward. Uma ferramenta de software (protótipo) é especificado em Linguagem de modelamento unificada (UML). Esta ferramenta auxiliará a produção de estimativas de tempo e de esforço de desenvolvimento de softwares. Comparações de resultados obtidos serão realizadas com os disponibilizados na literatura consultada.
APA, Harvard, Vancouver, ISO, and other styles
24

Marshall, Ian Mitchell. "Evaluating courseware development effort estimation measures and models." Thesis, University of Abertay Dundee, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.318946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Vieira, Andrws Aires. "Uma abordagem para estimação prévia dos requisitos não funcionais em sistemas embarcados utilizando métricas de software." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/117766.

Full text
Abstract:
O crescente aumento da complexidade dos sistemas embarcados demanda consigo a necessidade do uso de novas abordagens que acelerem o seu desenvolvimento, como por exemplo, o desenvolvimento baseado em modelos. Essas novas abordagens buscam aumentar o nível de abstração, utilizando conceitos de orientação a objetos e UML para modelar um software embarcado. Porém, com o aumento do nível de abstração, o projetista de software embarcado não possui a ideia exata do impacto de suas decisões de modelagem em questões importantes, como desempenho, consumo de energia, entre tantas outras que são de suma importância em um projeto embarcado. Dessa forma, se fazem necessárias técnicas de análise e/ou estimação de projeto que, em um ambiente de desenvolvimento mais abstrato, possam auxiliar o projetista a tomar melhores decisões nas etapas inicias de projeto, garantindo assim, as funcionalidades (requisitos funcionais) e os requisitos não funcionais do sistema embarcado. Neste trabalho, propõe-se estimar os requisitos não funcionais de um sistema embarcado a partir de informações (métricas) extraídas das etapas iniciais do projeto. Pretende-se com isso auxiliar o projetista na exploração do espaço de projeto já nos estágios iniciais do processo de desenvolvimento, através de uma rápida realimentação sobre o impacto de uma decisão de projeto no desempenho da aplicação em uma dada plataforma de execução. Os resultados experimentais mostram a aplicabilidade da abordagem, principalmente para um ambiente de evolução e manutenção de projetos de software, onde se tem um histórico de métricas de aplicações semelhantes para serem usadas como dados de treinamento. Neste cenário, a abordagem proposta possui acurácia de pelo menos 98% para as estimativas apresentadas ao projetista. Em um cenário heterogêneo, assumindo o uso da metodologia em um sistema diferente daquele usado para treinamento, a acurácia do método de estimação cai para pelo menos 80%.
The increasing complexity of embedded systems demands the use of new approaches to accelerate their development, such as model-driven engineering. Such approaches aim at increasing the level of abstraction, using concepts such as object-orientation and UML for modeling the embedded software. However, with the increase of the abstraction level, the embedded software developer loses controllability and predictability over important issues such as performance, power dissipation and memory usage for a specific embedded platform. Thus, new design estimation techniques that can be used in the early development stages become necessary. Such a strategy may help the designer to make better decisions in the early stages of the project, thus ensuring the final system meets both functional and non-functional requirements. In this work, we propose a technique for estimating non-functional requirements of embedded systems, based on data (metrics) extracted from the early stages of the project. The proposed methodology makes it possible to better explore different design options in the early steps of the software development process and can therefore provide fast yet accurate feedback to the developer. Experimental results show the applicability of the approach, particularly for software evolution and maintenance, where a history of metrics from similar applications is available to be used as training data. In this scenario, the accuracy of the estimation is at least 98%. In a heterogeneous scenario, where the estimation is performed for a system that is different from the one used during training, the accuracy drops to 80%.
APA, Harvard, Vancouver, ISO, and other styles
26

Dinh, Ngoc Thach. "Observateur par intervalles et observateur positif." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112335/document.

Full text
Abstract:
Cette thèse est construite autour de deux types d'estimation de l'état d'un système, traités séparément. Le premier problème abordé concerne la construction d'observateurs positifs basés sur la métrique de Hilbert. Le second traite de la synthèse d'observateurs par intervalles pour différentes familles de systèmes dynamiques et la construction de lois de commande robustes qui stabilisent ces systèmes.Un système positif est un système dont les variables d'état sont toujours positives ou nulles lorsque celles-ci ont des conditions initiales qui le sont. Les systèmes positifs apparaissent souvent de façon naturelle dans des applications pratiques où les variables d'état représentent des quantités qui n'ont pas de signification si elles ont des valeurs négatives. Dans ce contexte, il parait naturel de rechercher des observateurs fournissant des estimées elles aussi positives ou nulles. Dans un premier temps, notre contribution réside dans la mise au point d'une nouvelle méthode de construction d'observateurs positifs sur l'orthant positif. L'analyse de convergence est basée sur la métrique de Hilbert. L'avantage concurrentiel de notre méthode est que la vitesse de convergence peut être contrôlée.Notre étude concernant la synthèse d'observateurs par intervalles est basée sur la théorie des systèmes dynamiques positifs. Les observateurs par intervalles constituent un type d'observateurs très particuliers. Ce sont des outils développés depuis moins de 15 ans seulement : ils trouvent leur origine dans les travaux de Gouzé et al. en 2000 et se développent très rapidement dans de nombreuses directions. Un observateur par intervalles consiste en un système dynamique auxiliaire fournissant un intervalle dans lequel se trouve l'état, en considérant que l'on connait des bornes pour la condition initiale et pour les quantités incertaines. Les observateurs par intervalles donnent la possibilité de considérer le cas où des perturbations importantes sont présentes et fournissent certaines informations à tout instant
This thesis presents new results in the field of state estimation based on the theory of positive systems. It is composed of two separate parts. The first one studies the problem of positive observer design for positive systems. The second one, which deals with robust state estimation through the design of interval observers, is at the core of our work. We begin our thesis by proposing the design of a nonlinear positive observer for discrete-time positive time-varying linear systems based on the use of generalized polar coordinates in the positive orthant. For positive systems, a natural requirement is that the observers should provide state estimates that are also non-negative so they can be given a physical meaning at all times. The idea underlying the method is that, first, the direction of the true state is correctly estimated in the projective space thanks to the Hilbert metric, and then very mild assumptions on the output map make it possible to reconstruct the norm of the state. The convergence rate can be controlled. Later, the thesis is continued by studying the so-called interval observers for different families of dynamic systems in continuous time, in discrete time and also in a "continuous-discrete" context (i.e. a class of continuous-time systems with discrete-time measurements). Interval observers are dynamic extensions giving estimates of the solution of a system in the presence of various types of disturbances, through two outputs giving an upper and a lower bound for the solution. Thanks to interval observers, one can construct control laws which stabilize the considered systems.
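A minimal sketch of the interval idea for the simplest possible case, a scalar discrete-time system with a non-negative state coefficient and a bounded disturbance, where upper and lower framers propagate directly; the system and bounds are illustrative and do not correspond to the classes of systems treated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar system x[k+1] = a*x[k] + w[k], with 0 <= a < 1 and w[k] in [w_lo, w_hi].
a, w_lo, w_hi = 0.9, -0.2, 0.2
steps = 50

x = 1.0                       # unknown true initial state, assumed in [0, 2]
lo, hi = 0.0, 2.0             # initial interval enclosing x

for _ in range(steps):
    w = rng.uniform(w_lo, w_hi)
    x = a * x + w             # true (unmeasured) evolution
    # Because a >= 0, the framer dynamics preserve the ordering lo <= x <= hi.
    lo = a * lo + w_lo
    hi = a * hi + w_hi
    assert lo <= x <= hi      # the interval always contains the true state

print(f"final interval [{lo:.3f}, {hi:.3f}] contains x = {x:.3f}")
```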
APA, Harvard, Vancouver, ISO, and other styles
27

González, Rojas Victor Manuel. "Análisis conjunto de múltiples tablas de datos mixtos mediante PLS." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284659.

Full text
Abstract:
The fundamental content of this thesis corresponds to the development of the GNM-NIPALS, GNM-PLS2 and GNM-RGCCA methods, used to quantify qualitative variables starting from the first k components given by the appropriate methods in the analysis of J matrices of mixed data. These methods, called GNM-PLS (General Non Metric Partial Least Squares), are an extension of the NM-PLS methods, which take only the first principal component in the quantification function. The transformation of the qualitative variables is carried out through optimization processes, usually maximizing covariance or correlation functions, taking advantage of the flexibility of the PLS algorithms and preserving group membership and, where it exists, order; the metric variables likewise keep their original state, except for standardization. GNM-NIPALS was created for treating a single (J = 1) mixed data matrix through quantification via PCA-type reconstruction of the qualitative variables starting from an aggregate function of k components. GNM-PLS2 relates two (J = 2) mixed data sets Y~X through PLS regression, quantifying the qualitative variables of one space with the aggregate function of the first H PLS components of the other space, obtained through cross-validation under PLS2 regression. When the endogenous matrix Y contains only one response variable the method is called GNM-PLS1. Finally, in order to analyze more than two blocks (J > 2) of mixed data Y~X1+...+XJ through their latent variables (LV), GNM-RGCCA was created, based on the RGCCA (Regularized Generalized Canonical Correlation Analysis) method, which modifies the PLS-PM algorithm by implementing the new mode A and specifies the covariance or correlation maximization functions associated with the process. The quantification of the qualitative variables in each block Xj is done through the inner function Zj = Σk ejk Yk, of dimension J due to the aggregation of the outer estimates Yk. Both Zj and Yj estimate the component ξj associated with the j-th block.
El contenido fundamental de esta tesis corresponde al desarrollo de los métodos GNM-NIPALS, GNM-PLS2 y GNM-RGCCA para la cuantificación de las variables cualitativas a partir de las primeras k componentes proporcionadas por los métodos apropiados en el análisis de J matrices de datos mixtos. Estos métodos denominados GNM-PLS (General Non Metric Partial Least Squares) son una extensión de los métodos NM-PLS que toman sólo la primera componente principal en la función de cuantificación. La trasformación de las variables cualitativas se lleva a cabo mediante procesos de optimización maximizando generalmente funciones de covarianza o correlación, aprovechando la flexibilidad de los algoritmos PLS y conservando las propiedades de pertenencia grupal y orden si existe; así mismo se conservan las variables métricas en su estado original excepto por estandarización. GNM-NIPALS ha sido creado para el tratamiento de una (J=1) matriz de datos mixtos mediante la cuantificación vía reconstitución tipo ACP de las variables cualitativas a partir de una función agregada de k componentes. GNM-PLS2 relaciona dos (J=2) conjuntos de datos mixtos Y~X mediante regresión PLS, cuantificando las variables cualitativas de un espacio con la función agregada de las primeras H componentes PLS del otro espacio, obtenidas por validación cruzada bajo regresión PLS2. Cuando la matriz endógena Y contiene sólo una variable de respuesta el método se denomina GNM-PLS1. Finalmente para el análisis de más de dos bloques (J>2) de datos mixtos Y~X1+...+XJ a través de sus variables latentes (LV) se implementa el método NM-RGCCA basado en el método RGCCA (Regularized Generalized Canonical Correlation Analysis) que modifica el algoritmo PLS-PM implementando el nuevo modo A y especifica las funciones de maximización de covarianzas o correlaciones asociadas al proceso. La cuantificación de las variables cualitativas en cada bloque Xj se realiza mediante la función inner Zj de dimensión J debido a la agregación de las estimaciones outer Yj. Tanto Zj como Yj estiman la componente ξj asociad al j-ésimo bloque.
APA, Harvard, Vancouver, ISO, and other styles
28

Hill, Terry. "Metrics and Test Procedures for Data Quality Estimation in the Aeronautical Telemetry Channel." International Foundation for Telemetering, 2015. http://hdl.handle.net/10150/596445.

Full text
Abstract:
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV
There is great potential in using Best Source Selectors (BSS) to improve link availability in aeronautical telemetry applications. While the general notion that diverse data sources can be used to construct a consolidated stream of "better" data is well founded, there is no standardized means of determining the quality of the data streams being merged together. Absent this uniform quality data, the BSS has no analytically sound way of knowing which streams are better, or best. This problem is further exacerbated when one imagines that multiple vendors are developing data quality estimation schemes, with no standard definition of how to measure data quality. In this paper, we present measured performance for a specific Data Quality Metric (DQM) implementation, demonstrating that the signals present in the demodulator can be used to quickly and accurately measure the data quality, and we propose test methods for calibrating DQM over a wide variety of channel impairments. We also propose an efficient means of encapsulating this DQM information with the data, to simplify processing by the BSS. This work leads toward a potential standardization that would allow data quality estimators and best source selectors from multiple vendors to interoperate.
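As a toy illustration of best-source selection driven by a data quality metric (this is not the DQM encapsulation or test procedure proposed in the paper), each incoming frame below carries a hypothetical per-frame estimated bit-error rate, and the selector simply forwards the copy with the best current quality.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    source_id: str
    payload: bytes
    est_ber: float   # hypothetical per-frame data-quality estimate (lower is better)

def select_best(frames):
    """Naive best-source selection: keep the frame with the lowest estimated BER."""
    return min(frames, key=lambda f: f.est_ber)

# Toy example: three receivers demodulating the same transmitted frame.
candidates = [
    Frame("antenna_A", b"\x01\x02\x03", est_ber=2e-3),
    Frame("antenna_B", b"\x01\x02\x07", est_ber=5e-2),
    Frame("antenna_C", b"\x01\x02\x03", est_ber=8e-4),
]
best = select_best(candidates)
print(best.source_id)   # -> antenna_C
```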
APA, Harvard, Vancouver, ISO, and other styles
29

Chan, Joanne S. M. Massachusetts Institute of Technology. "Rail transit OD matrix estimation and journey time reliability metrics using automated fare data." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/38955.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2007.
Includes bibliographical references (p. 190-191).
The availability of automatic fare collection (AFC) data greatly enhances a transit planner's ability to understand and characterize passenger travel demands which have traditionally been estimated by manual surveys handed out to passengers at stations or on board vehicles. The AFC data also presents an unprecedentedly consistent source of information on passenger travel times in those transit networks which have both entry and exit fare gates. By taking the difference between entry and exit times, AFC transactions can be used to capture the bulk of a passenger's time spent in the system including walking between gates and platforms, platform wait, in-train time, as well as interchange time for multi-vehicle trips. This research aims at demonstrating the potential value of AFC data in rail transit operations and planning. The applications developed in this thesis provide rail transit operators an easy-to-update management tool that evaluates several dimensions of rail service and demand at near real-time. While the concepts of the applications can be adapted to other transit systems, the detailed configurations and unique characteristics of each transit system require the methodologies to be tailored to solve its needs.
The focus of this research is the London Underground network, which adopted the automatic fare collection system, known as the "Oyster Card", in 2003. The Oyster card is now used as the main form of public transport fare payment for all public transport modes within the Greater London area. The two applications developed for the London Underground using Oyster data are (1) estimation of an origin-destination flow matrix that reflects current demand and (2) rail service reliability metrics that capture both excess journey time and variation in journey times at the origin-destination, line segment or line levels. The Oyster dataset captures travel on more than three times the number of OD pairs in one 4-week AM peak period compared to those OD pairs evident in the RODS database - 57,407 vs. 17,421. The resulting Oyster-based OD matrix shows very similar travel patterns to the RODS matrix at the network and zonal levels. Station level differences are significant at a number of central stations with respect to interchanges. At the OD level, the differences are the greatest and a significant number of OD pairs in the RODS matrix seem to be erroneous or outdated. The proposed Excess Journey Time Metric and Journey Time Reliability Metric utilize large continuous streams of Oyster journey time data to support analyses during short time periods.
The comparison of the Excess Journey Time Metric and the official Underground Journey Time Metric shows significant differences in line level results in terms of both the number of excess minutes and relative performance across lines. The differences are mainly due to differences in the scheduled journey times and OD demand weightings used in the two methodologies. Considerable differences in excess journey time and reliability results exist between directions on the same line due to the large imbalance of directional demand in the AM peak.
by Joanne Chan.
S.M.
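A minimal sketch of the first application described in the abstract above, using invented station names and column labels rather than actual Oyster data: entry/exit taps are grouped into an origin-destination matrix and gate-to-gate journey times.

```python
import pandas as pd

# Hypothetical AFC transactions: one row per completed trip (entry and exit taps).
taps = pd.DataFrame({
    "card_id": [1, 2, 3, 4],
    "entry_station": ["Oxford Circus", "Victoria", "Oxford Circus", "Bank"],
    "exit_station":  ["Bank", "Bank", "Victoria", "Victoria"],
    "entry_time": pd.to_datetime(["2007-03-01 08:01", "2007-03-01 08:03",
                                  "2007-03-01 08:05", "2007-03-01 08:10"]),
    "exit_time":  pd.to_datetime(["2007-03-01 08:21", "2007-03-01 08:25",
                                  "2007-03-01 08:22", "2007-03-01 08:28"]),
})

# Gate-to-gate journey time captures walking, platform wait, in-train and interchange time.
taps["journey_min"] = (taps["exit_time"] - taps["entry_time"]).dt.total_seconds() / 60

# Origin-destination flow matrix (trip counts per OD pair).
od_matrix = taps.pivot_table(index="entry_station", columns="exit_station",
                             values="card_id", aggfunc="count", fill_value=0)

# Journey-time statistics per OD pair, e.g. as input to an excess-journey-time metric.
jt_stats = taps.groupby(["entry_station", "exit_station"])["journey_min"].agg(["mean", "count"])
print(od_matrix)
print(jt_stats)
```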
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Lein-Lein. "Study of the effectiveness of cost-estimation models and complexity metrics on small projects." FIU Digital Commons, 1986. http://digitalcommons.fiu.edu/etd/2134.

Full text
Abstract:
Software cost overruns and time delays are common occurrences in the software development process. To reduce the occurrence of these problems, software cost estimation models and software complexity metrics are two popular approaches used by the industry. Most of the related studies have been conducted for large-scale software projects. In this thesis, we have investigated the effectiveness of three popular cost estimation models and of program complexity metrics insofar as their applicability to small-scale projects is concerned. Experiments conducted on programs collected from FIU and the NCR corporation indicate that none of the cost estimation models precisely estimates the actual development effort. However, the regression results indicate that the actual development effort is some function of the model variables. In addition, they also show that complexity metrics are useful measures for predicting the actual development effort. Additional results related to the lines-of-code metric are also discussed.
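For context, a sketch of the kind of cost estimation model evaluated in such studies: the basic COCOMO effort equation Effort = a * KLOC^b, using the commonly cited coefficient values; the 12-KLOC input is an arbitrary example, not data from the thesis.

```python
# Basic COCOMO (Boehm, 1981): effort in person-months = a * (KLOC) ** b.
# Coefficients below are the commonly cited values for the three project classes.
COCOMO_COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    a, b = COCOMO_COEFFS[mode]
    return a * kloc ** b

# Arbitrary example: a 12-KLOC organic project.
print(round(cocomo_effort(12.0), 1), "person-months")
```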
APA, Harvard, Vancouver, ISO, and other styles
31

Honauer, Katrin [Verfasser], and Bernd [Akademischer Betreuer] Jähne. "Performance Metrics and Test Data Generation for Depth Estimation Algorithms / Katrin Honauer ; Betreuer: Bernd Jähne." Heidelberg : Universitätsbibliothek Heidelberg, 2019. http://d-nb.info/1177045168/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Andersen, Hans-Erik. "Estimation of critical forest structure metrics through the spatial analysis of airborne laser scanner data /." Thesis, Connect to this title online; UW restricted, 2003. http://hdl.handle.net/1773/5579.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Macedo, Marcus Vinicius La Rocca. "Uma proposta de aplicação da metrica de pontos de função em aplicações de dispositivos portateis." [s.n.], 2003. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276368.

Full text
Abstract:
Orientadores: Paulo Licio de Geus, Thelma Chiossi
Dissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Este trabalho apresenta uma proposta de contagem de pontos de função para aplicações de dispositivos portáteis (telefones celulares) tomando como base técnicas de contagem para aplicações de interface gráfica (GUI). A obtenção da contagem de pontos de função a partir de especificações funcionais e de layout fornecidas no início do projeto de desenvolvimento do produto de software abre a possibilidade de se estabelecer estimativas de custo (esforço) e duração mais precisas, beneficiando significativamente o planejamento do projeto. Nesse sentido são apresentadas algumas técnicas de estimativas de esforço. Também são apresentadas e comparadas algumas métricas de software com ênfase na métrica de pontos de função. Finalmente são apresentados os conceitos básicos de aplicações gráficas de telefones celulares e é estabelecido um paralelo na contagem de pontos de função para então se mostrar um exemplo prático de contagem utilizando este paralelo
Abstract: This dissertation presents a proposal for function point counting for portable device (wireless phone) applications, based on counting techniques for conventional graphical user interface (GUI) applications. Counting function points from the functional and layout specifications provided at the beginning of a software development project allows more precise cost (effort) and duration estimates to be established, significantly improving project planning. To that end, some effort estimation techniques are presented. Software metrics are also presented and compared, with emphasis on the function point metric. Finally, the basic concepts of graphical wireless phone applications are presented, a parallel is drawn for function point counting, and this parallel is then applied in a practical counting example.
Mestrado
Engenharia de Computação
Mestre em Computação
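A minimal sketch of the unadjusted function point count that this kind of proposal builds on, assuming the standard IFPUG complexity weights; the element counts for the handset application below are invented for illustration.

```python
# Standard IFPUG weights for (low, average, high) complexity.
FP_WEIGHTS = {
    "EI":  (3, 4, 6),    # external inputs
    "EO":  (4, 5, 7),    # external outputs
    "EQ":  (3, 4, 6),    # external inquiries
    "ILF": (7, 10, 15),  # internal logical files
    "EIF": (5, 7, 10),   # external interface files
}

def unadjusted_fp(counts):
    """counts: {function_type: (n_low, n_avg, n_high)} -> unadjusted function points."""
    return sum(n * w
               for ftype, ns in counts.items()
               for n, w in zip(ns, FP_WEIGHTS[ftype]))

# Invented counts for a small handset screen flow.
example = {"EI": (4, 2, 0), "EO": (3, 1, 0), "EQ": (2, 0, 0), "ILF": (1, 1, 0), "EIF": (0, 1, 0)}
print(unadjusted_fp(example), "unadjusted function points")
```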
APA, Harvard, Vancouver, ISO, and other styles
34

Nyman, Moa. "Estimating the energy consumption of a mobile music streaming application using proxy metrics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-288532.

Full text
Abstract:
An application with high energy consumption may cause users of mobile devices to remove it. For a music streaming service, energy consumption is therefore a competitive factor. However, to reduce energy consumption it must first be measured. In this study, proxy metrics such as CPU usage and the number of bytes written to memory are investigated for their suitability as predictors of the energy consumption of a music streaming application on a mobile device. A literature review is conducted to find which metrics have the greatest impact on energy consumption and to identify potential relationships between metrics and energy consumption. A OnePlus 6T running Android 9 is rooted and its battery modified to collect data. The data is collected from three test cases for the mobile music streaming application. Memory and network statistics are gathered using the software strace, CPU statistics are collected from the proc file system, and energy consumption is measured using a power meter. Based on the results of the literature review, linear models of the energy consumption are constructed. The results show that many of the metrics are highly collinear; from each pair of collinear variables only one is kept. The resulting model had 32.5% better predictive power than a non-optimised model. The best performing model used transmitted network bytes, read and written memory bytes, and user CPU as predictor variables.
Bland mobila musikströmningstjänster är energikonsumtion en konkurrensfaktor. Har tjänsten en för hög energikonsumtion kan det leda till att användarna avinstallerar applikationen. Därför är det viktigt för apputvecklare att minimera energin deras app använder. För att kunna minska energikonsumtionen måste den först mätas. I den här studien är proxymetriker, såsom CPU-användning och antal skrivna bytes till minnet, undersökta för deras lämplighet som prediktorer för energikonsumtion i en musikströmningstjänst på mobila enheter. En litteraturstudie genomfördes för att hitta vilka metriker som har störst påverkan på energikonsumtionen. Vidare användes litteraturstudien för att identifiera potentiella samband mellan metrikerna och energikonsumtion. En OnePlus 6T som kör Android 9 som operativsystem rotas och dess batteri anpassas för att samla in data. Datan samlas in från tre testfall inom ramen för den mobila musikströmningsapplikationen. Minnes- och nätverksstatistik samlas genom att använda mjukvaran strace, CPU-statistik samlas in genom att använda data från proc-filsystemet medan energikonsumtionen mäts genom att använda en energimätare. Baserat på resultaten från litteraturstudien konstrueras modeller för att modellera energikonsumtionen. Resultaten visar att många av metrikerna är högst kollinära. Från varje par av kollinära variabler behålls endast en. Den slutliga modellen visade en 32.5% bättre prediktiv förmåga än en icke-optimerad modell. Den bäst presterande modellen använde skickade nätverksbytes, lästa och skrivna minnesbytes samt användar-CPU som prediktorvariabler.
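A sketch of the modelling step described in the abstract above, on synthetic data: highly collinear predictors are pruned by pairwise correlation before an ordinary least-squares fit, mirroring the rule of keeping only one variable from each collinear pair.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
cpu_user = rng.uniform(0, 100, n)
net_tx   = rng.uniform(0, 1e6, n)
mem_rw   = net_tx * 0.9 + rng.normal(0, 1e4, n)      # deliberately collinear with net_tx
X = np.column_stack([cpu_user, net_tx, mem_rw])
names = ["cpu_user", "net_tx_bytes", "mem_rw_bytes"]
energy = 0.4 * cpu_user + 2e-5 * net_tx + rng.normal(0, 2, n)  # synthetic response

def drop_collinear(X, names, threshold=0.9):
    """Keep only one variable from each pair whose |correlation| exceeds the threshold."""
    corr = np.corrcoef(X, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

X_red, kept = drop_collinear(X, names)
# Ordinary least squares with an intercept column.
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X_red]), energy, rcond=None)
print("kept predictors:", kept)
print("fitted coefficients:", np.round(beta, 4))
```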
APA, Harvard, Vancouver, ISO, and other styles
35

Gonçalves, André Miguel Augusto. "Estimating data divergence in cloud computing storage systems." Master's thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10852.

Full text
Abstract:
Dissertação para obtenção do Grau de Mestre em Engenharia Informática
Many internet services are provided through cloud computing infrastructures that are composed of multiple data centers. To provide high availability and low latency, data is replicated on machines in different data centers, which introduces the complexity of guaranteeing that clients view data consistently. Data stores often opt for a relaxed approach to replication, guaranteeing only eventual consistency, since it improves the latency of operations. However, this may lead to replicas having different values for the same data. One solution to control the divergence of data in eventually consistent systems is to use metrics that measure how stale the data at a replica is. In the past, several algorithms have been proposed to estimate the value of these metrics in a deterministic way. An alternative solution is to rely on probabilistic metrics that estimate divergence with a certain degree of certainty. This relaxes the need to contact all replicas while still providing a relatively accurate measurement. In this work we designed and implemented a solution to estimate the divergence of data in eventually consistent data stores that scale to many replicas by allowing client-side caching. Measuring the divergence when there is a large number of clients calls for the development of new algorithms that provide probabilistic guarantees. Additionally, unlike previous works, we intend to focus on measuring the divergence relative to a state that can lead to the violation of application invariants.
Partially funded by project PTDC/EIA EIA/108963/2008 and by an ERC Starting Grant, Agreement Number 307732
APA, Harvard, Vancouver, ISO, and other styles
36

Koch, Stefan. "Effort Modeling and Programmer Participation in Open Source Software Projects." Department für Informationsverarbeitung und Prozessmanagement, WU Vienna University of Economics and Business, 2005. http://epub.wu.ac.at/1494/1/document.pdf.

Full text
Abstract:
This paper analyses and develops models for programmer participation and effort estimation in open source software projects. This has not yet been a centre of research, although any results would be of high importance for assessing the efficiency of this model and for various decision-makers. In this paper, a case study is used for hypotheses generation regarding manpower function and effort modeling, then a large data set retrieved from a project repository is used to test these hypotheses. The main results are that Norden-Rayleigh-based approaches need to be complemented to account for the addition of new features during the lifecycle to be usable in this context, and that programmer-participation based effort models show significantly less effort than those based on output metrics like lines-of-code. (author's abstract)
Series: Working Papers on Information Systems, Information Business and Operations
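For context (a textbook sketch, not the author's model), the Norden-Rayleigh profile that the paper finds insufficient on its own has staffing m(t) = 2Kat·exp(-at²) and cumulative effort K(1 - exp(-at²)); the parameter values below are arbitrary.

```python
import numpy as np

def rayleigh_staffing(t, K, a):
    """Norden-Rayleigh manpower profile m(t) = 2*K*a*t*exp(-a*t**2)."""
    return 2.0 * K * a * t * np.exp(-a * t ** 2)

def cumulative_effort(t, K, a):
    """Cumulative effort spent up to time t: K * (1 - exp(-a*t**2))."""
    return K * (1.0 - np.exp(-a * t ** 2))

K, a = 500.0, 0.08          # arbitrary total effort (person-months) and shape parameter
t = np.linspace(0, 10, 6)   # project time in months
print(np.round(rayleigh_staffing(t, K, a), 1))
print(np.round(cumulative_effort(t, K, a), 1))
```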
APA, Harvard, Vancouver, ISO, and other styles
37

Haufe, Maria Isabel. "Estimativa da produtividade no desenvolvimento de software." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2001. http://hdl.handle.net/10183/1651.

Full text
Abstract:
Este trabalho apresenta uma ferramenta para gerenciamento de projetos, priorizando as fases de planejamento e o controle do desenvolvimento de software. Ao efetuar o planejamento de um projeto é necessário estimar o prazo, o custo e o esforço necessário, aplicando técnicas já aprovadas, existentes na literatura, tais como: Estimativa do Esforço, Estimativa de Putnam, Modelo COCOMO, Análise de Pontos por Função, Pontos de Particularidade e PSP. É necessária a utilização de uma ferramenta que automatizem o processo de estimativa. Hoje no mercado, encontram-se várias ferramentas de estimativas, tais como: ESTIMACS, SLIM, SPQR/20, ESTIMATE Professional. O controle do desenvolvimento do projeto está relacionado ao acompanhamento do projeto, do profissional e da própria estimativa de custo e esforço de desenvolvimento. Nenhuma das ferramentas estudadas permitiu o controle do projeto por parte da gerência, por isto esta se propondo o desenvolvimento uma nova ferramenta que permita o planejamento e controle do processo de desenvolvimento. Esta ferramenta deve permitir a comparação entre as diversas técnicas de estimativas, desde que baseadas na mesma medida de tamanho: pontos por função. Para exemplificar o uso desta ferramenta, foram aplicados dois estudos de casos desenvolvidos pela empresa Newsoft Consultoria de Informática.
APA, Harvard, Vancouver, ISO, and other styles
38

Drach, Marcos David. "Aplicabilidade de metricas por pontos de função em sistemas baseados em Web." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276360.

Full text
Abstract:
Orientadores: Ariadne Maria Brito Rizzoni Carvalho, Thelma Cecilia dos Santos Chiossi
Dissertação (mestrado profissional) - Universidade Estadual de Campinas, Instituto de Computação
Resumo: Métricas de software são padrões quantitativos de medidas de vários aspectos de um projeto ou produto de software, e se constitui em uma poderosa ferramenta gerencial, contribuindo para a elaboração de estimativas de prazo e custo mais precisas e para o estabelecimento de metas plausíveis, facilitando assim o processo de tomada de decisões e a subsequente obtenção de medidas de produtividade e qualidade. A métrica de Análise por Pontos de Função - FPA, criada no final da década de 70 com o objetivo de medir o tamanho de software a partir de sua especificação funcional, foi considerada um avanço em relação ao método de contagem por Linhas de Código Fonte - SLOC, a única métrica de tamanho empregada na época. Embora vários autores tenham desde então publicado várias extensões e alternativas ao método original no sentido de adequá-lo a sistemas específicos, sua aplicabilidade em sistemas Web ainda carece de um exame mais crítico. Este trabalho tem por objetivo realizar uma análise das características computacionais específicas da plataforma Web que permita a desenvolvedores e gerentes de projeto avaliarem o grau de adequação da FPA a este tipo de ambiente e sua contribuição para extração de requisitos e estimativa de esforço
Abstract: Software metrics are quantitative standards of measurement for many aspects of a software project or product, and they constitute a powerful management tool that contributes to more accurate delivery time and cost estimates and to the establishment of feasible goals, facilitating both the decision-making process itself and the subsequent collection of productivity and quality measurements. The metric Function Point Analysis - FPA, created at the end of the 1970s to measure software size in terms of its functional specification, was considered an advance over the Source Lines Of Code - SLOC counting method, the only method available at that time. Although many authors have published various extensions and alternatives to the original method in order to adapt it to specific systems, its applicability to Web-based systems still requires a deeper and more critical examination. This work presents an analysis of the specific computational characteristics of the Web platform that allows developers and project managers to evaluate the adequacy of the FPA method to this environment and its contribution to requirement extraction and effort estimation.
Mestrado
Engenharia de Computação
Mestre Profissional em Computação
APA, Harvard, Vancouver, ISO, and other styles
39

Alomari, Hakam W. "Supporting Software Engineering Via Lightweight Forward Static Slicing." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1341996135.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Rai, Ajit. "Estimation de la disponibilité par simulation, pour des systèmes incluant des contraintes logistiques." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S105/document.

Full text
Abstract:
L'analyse des FDM (Reliability, Availability and Maintainability en anglais) fait partie intégrante de l'estimation du coût du cycle de vie des systèmes ferroviaires. Ces systèmes sont hautement fiables et présentent une logistique complexe. Les simulations Monte Carlo dans leur forme standard sont inutiles dans l'estimation efficace des paramètres des FDM à cause de la problématique des événements rares. C'est ici que l'échantillonnage préférentiel joue son rôle. C'est une technique de réduction de la variance et d'accélération de simulations. Cependant, l'échantillonnage préférentiel inclut un changement de lois de probabilité (changement de mesure) du modèle mathématique. Le changement de mesure optimal est inconnu même si théoriquement il existe et fournit un estimateur avec une variance zéro. Dans cette thèse, l'objectif principal est d'estimer deux paramètres pour l'analyse des FDM: la fiabilité des réseaux statiques et l'indisponibilité asymptotique pour les systèmes dynamiques. Pour ce faire, la thèse propose des méthodes pour l'estimation et l'approximation du changement de mesure optimal et l'estimateur final. Les contributions se présentent en deux parties: la première partie étend la méthode de l'approximation du changement de mesure de l'estimateur à variance zéro pour l'échantillonnage préférentiel. La méthode estime la fiabilité des réseaux statiques et montre l'application à de réels systèmes ferroviaires. La seconde partie propose un algorithme en plusieurs étapes pour l'estimation de la distance de l'entropie croisée. Cela permet d'estimer l'indisponibilité asymptotique pour les systèmes markoviens hautement fiables avec des contraintes logistiques. Les résultats montrent une importante réduction de la variance et un gain par rapport aux simulations Monte Carlo
RAM (Reliability, Availability and Maintainability) analysis forms an integral part of the estimation of Life Cycle Costs (LCC) of passenger rail systems. These systems are highly reliable and include complex logistics. Standard Monte Carlo simulations are rendered useless for efficient estimation of RAM metrics because of the issue of rare events: failures of these complex passenger rail systems are rare events and thus call for efficient simulation techniques. Importance Sampling (IS) is an advanced class of variance reduction techniques that can overcome the limitations of standard simulations. IS techniques can accelerate simulations, meaning less variance in the estimation of RAM metrics within the same computational budget as a standard simulation. However, IS involves changing the probability laws (change of measure) that drive the mathematical models of the systems during simulation, and the optimal IS change of measure is usually unknown, even though theoretically a perfect one exists (the zero-variance IS change of measure). In this thesis, we focus on IS techniques and their application to the estimation of two RAM metrics: reliability (for static networks) and steady-state availability (for dynamic systems). The thesis focuses on finding and/or approximating the optimal IS change of measure to efficiently estimate RAM metrics in a rare-event context. The contribution of the thesis is broadly divided into two main axes: first, we propose an adaptation of the approximate zero-variance IS method to estimate the reliability of static networks and show its application to real passenger rail systems; second, we propose a multi-level Cross-Entropy optimization scheme that can be used during pre-simulation to obtain CE-optimized IS rates for the transitions of Markovian Stochastic Petri Nets (SPNs) and use them in the main simulations to estimate the steady-state unavailability of highly reliable Markovian systems with complex logistics. Results from these methods show huge variance reduction and gain compared to MC simulations.
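A toy illustration of the rare-event problem and of importance sampling by failure biasing (a simpler scheme than the approximate zero-variance and cross-entropy methods developed in the thesis): link failure probabilities are inflated during sampling and each sample is re-weighted by its likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
q = np.array([1e-3, 1e-3, 1e-3, 1e-3])   # true link failure probabilities (rare)
q_is = np.full(4, 0.3)                   # biased probabilities used for sampling

def system_failed(fail):
    # Two parallel two-link paths between source and terminal:
    # the system fails only if both paths contain a failed link.
    path1_ok = not (fail[0] or fail[1])
    path2_ok = not (fail[2] or fail[3])
    return not (path1_ok or path2_ok)

def is_estimate(n):
    total = 0.0
    for _ in range(n):
        fail = rng.random(4) < q_is
        # Likelihood ratio of this sample under the true vs. biased distributions.
        lr = np.prod(np.where(fail, q / q_is, (1 - q) / (1 - q_is)))
        if system_failed(fail):
            total += lr
    return total / n

print("IS estimate of unreliability:", is_estimate(100_000))
# Exact value for comparison: (1 - (1 - 1e-3)**2) ** 2, roughly 4.0e-6.
```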
APA, Harvard, Vancouver, ISO, and other styles
41

LARIZZATTI, FLAVIO E. "Determinacao de metais pesados e outros elementos de interesse por ativacao neutronica, em amostras de sedimentos da Laguna Mar Chiquita (Cordoba, Argentina)." reponame:Repositório Institucional do IPEN, 2001. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10972.

Full text
Abstract:
Dissertacao (Mestrado)
IPEN/D
Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
APA, Harvard, Vancouver, ISO, and other styles
42

França, Ana Beatriz Coelho. "Assinatura magnética e espectral na estimativa de elementos potencialmente tóxicos em solos tropicais /." Jaboticabal, 2019. http://hdl.handle.net/11449/183569.

Full text
Abstract:
Orientador: José Marques Júnior
Coorientador: Livia Arantes Camargo
Banca: Luis Reynaldo Ferracciú Alleoni
Alan Rodrigo Panosso
Resumo: A contaminação dos solos causada pela ação antrópica representa uma preocupação mundial no escopo da segurança alimentar. Dessa forma, torna-se necessário mapear de maneira rápida e não poluente os teores dos elementos potencialmente tóxicos (EPTs) dos solos, como Ba, Co, Cr, Cu, Ni, Pb e Cd, a fim de mitigar a ação danosa destes elementos no ambiente. A falta de informações sobre estes EPTs e a necessidade de um grande número de amostras dificultam as avaliações de risco em áreas contaminadas e sua espacialização em grandes áreas agrícolas. Nesse sentido, a Suscetibilidade magnética (SM) e a Espectroscopia de reflectância difusa (ERD) podem ser técnicas indiretas promissoras para a predição dos teores dos EPTs por estar relacionada com atributos pedoindicadores do solo, tais como a mineralogia. Por isso, os objetivos com este trabalho foram: (a) avaliar os teores de EPTs em solos sob cultivo de cana-de-açúcar e compreender a influência antrópica na presença destes elementos nos solos, e (b) estimar os teores dos EPTs (Ba, Co, Cr, Cu, Ni, Pb e Cd) nos solos com o auxílio das medidas de SM e ERD. As amostras de solo foram coletadas em uma área de transição de solos originários de basalto, arenito Botucatu e Depósito Colúvio Eluvionar. Foram realizadas análises granulométricas, químicas, mineralógicas, espectrais e medidas de SM. Os dados foram analisados por estatística descritiva, correlação de Pearson, regressão linear múltipla (RLM), geoestatísica e funções de pedotranferên... (Resumo completo, clicar acesso eletrônico abaixo)
Abstract: Soil contamination caused by anthropic action is a worldwide food safety concern. It is therefore necessary to map potentially toxic elements (PTEs) such as Ba, Co, Cr, Cu, Ni, Pb and Cd in a quick and non-polluting way in order to mitigate the damaging action of these elements in the environment. In addition, the lack of information on these elements hampers risk assessment in contaminated areas, and their spatialization over large agricultural areas requires many samples. In this sense, magnetic susceptibility (MS) and diffuse reflectance spectroscopy (DRS) may be promising indirect techniques for the prediction of PTEs because they are related to soil pedoindicator attributes such as mineralogy. Therefore, the objectives of this work were: (a) to evaluate the levels of PTEs in soils under sugar cane cultivation and to understand the anthropic influence on these elements in the soils, and (b) to estimate the content of PTEs (Ba, Co, Cr, Cu, Ni, Pb and Cd) in soils through MS and DRS. Soil samples were collected in a transition area of soils originating from basalt, Botucatu sandstone and a colluvial-eluvial deposit. Particle-size, chemical, mineralogical, spectral and MS measurements were performed. Data were analyzed by descriptive statistics, Pearson's correlation, multiple linear regression (MLR) and geostatistics, and pedotransfer functions were used to estimate the PTE contents using MS and DRS. The prediction models of the PTEs were calibrated using MS and clay f... (Complete abstract click electronic access below)
Mestre
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Tianfang. "Direct optimization of dose-volume histogram metrics in intensity modulated radiation therapy treatment planning." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231548.

Full text
Abstract:
In optimization of intensity-modulated radiation therapy treatment plans, dose-volume histogram (DVH) functions are often used as objective functions to minimize the violation of dose-volume criteria. Neither DVH functions nor dose-volume criteria, however, are ideal for gradient-based optimization, as the former are not continuously differentiable and the latter are discontinuous functions of dose, apart from both being nonconvex. In particular, DVH functions often work poorly when used in constraints due to their being identically zero when feasible and having vanishing gradients on the boundary of feasibility. In this work, we present a general mathematical framework allowing for direct optimization on all DVH-based metrics. By regarding voxel doses as sample realizations of an auxiliary random variable and using kernel density estimation to obtain explicit formulas, one arrives at formulations of volume-at-dose and dose-at-volume which are infinitely differentiable functions of dose. This is extended to DVH functions and so-called volume-based DVH functions, as well as to min/max-dose functions and mean-tail-dose functions. Explicit expressions for evaluation of function values and corresponding gradients are presented. The proposed framework has the advantages of depending on only one smoothness parameter, of approximation errors to conventional counterparts being negligible for practical purposes, and of a general consistency between derived functions. Numerical tests, which were performed for illustrative purposes, show that smooth dose-at-volume works better than quadratic penalties when used in constraints and that smooth DVH functions in certain cases have a significant advantage over conventional ones. The results of this work have been successfully applied to lexicographic optimization in a fluence map optimization setting.
Vid optimering av behandlingsplaner i intensitetsmodulerad strålterapi används dosvolym- histogram-funktioner (DVH-funktioner) ofta som målfunktioner för att minimera avståndet till dos-volymkriterier. Varken DVH-funktioner eller dos-volymkriterier är emellertid idealiska för gradientbaserad optimering då de förstnämnda inte är kontinuerligt deriverbara och de sistnämnda är diskontinuerliga funktioner av dos, samtidigt som båda också är ickekonvexa. Speciellt fungerar DVH-funktioner ofta dåligt i bivillkor då de är identiskt noll i tillåtna områden och har försvinnande gradienter på randen till tillåtenhet. I detta arbete presenteras ett generellt matematiskt ramverk som möjliggör direkt optimering på samtliga DVH-baserade mått. Genom att betrakta voxeldoser som stickprovsutfall från en stokastisk hjälpvariabel och använda ickeparametrisk densitetsskattning för att få explicita formler, kan måtten volume-at-dose och dose-at-volume formuleras som oändligt deriverbara funktioner av dos. Detta utökas till DVH-funktioner och så kallade volymbaserade DVH-funktioner, såväl som till mindos- och maxdosfunktioner och medelsvansdos-funktioner. Explicita uttryck för evaluering av funktionsvärden och tillhörande gradienter presenteras. Det föreslagna ramverket har fördelarna av att bero på endast en mjukhetsparameter, av att approximationsfelen till konventionella motsvarigheter är försumbara i praktiska sammanhang, och av en allmän konsistens mellan härledda funktioner. Numeriska tester genomförda i illustrativt syfte visar att slät dose-at-volume fungerar bättre än kvadratiska straff i bivillkor och att släta DVH-funktioner i vissa fall har betydlig fördel över konventionella sådana. Resultaten av detta arbete har med framgång applicerats på lexikografisk optimering inom fluensoptimering.
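The central construction can be sketched as follows (a simplified reading of the abstract, not the thesis implementation): treating voxel doses as samples and replacing the indicator 1{dose_i ≥ d} by a Gaussian kernel CDF yields a volume-at-dose that is differentiable with respect to every voxel dose; the bandwidth h plays the role of the single smoothness parameter.

```python
import numpy as np
from math import erf, sqrt

def smooth_volume_at_dose(doses, d, h=0.5):
    """Smooth fraction of voxels with dose >= d: each hard indicator 1{dose_i >= d}
    is replaced by a Gaussian kernel CDF with bandwidth h, so the result is
    differentiable with respect to every voxel dose."""
    z = (np.asarray(doses) - d) / h
    return float(np.mean([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z]))

def smooth_volume_gradient(doses, d, h=0.5):
    """Gradient of the smooth volume-at-dose with respect to the voxel doses."""
    z = (np.asarray(doses) - d) / h
    return np.exp(-0.5 * z ** 2) / (sqrt(2.0 * np.pi) * h * len(z))

doses = np.array([18.0, 20.5, 21.0, 23.2, 25.0])   # made-up voxel doses in Gy
print(smooth_volume_at_dose(doses, d=20.0))        # smooth estimate of V(20 Gy)
print(smooth_volume_gradient(doses, d=20.0))
```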
APA, Harvard, Vancouver, ISO, and other styles
44

Schwieder, Marcel. "Landsat derived land surface phenology metrics for the characterization of natural vegetation in the Brazilian savanna." Doctoral thesis, Humboldt-Universität zu Berlin, 2018. http://dx.doi.org/10.18452/19368.

Full text
Abstract:
Die Brasilianische Savanne, auch bekannt als der Cerrado, bedeckt ca. 24% der Landoberfläche Brasiliens. Der Cerrado ist von einer einzigartigen Biodiversität und einem starken Gradienten in der Vegetationsstruktur gekennzeichnet. Großflächige Landnutzungsveränderungen haben dazu geführt, dass annähernd die Hälfte der Cerrado in bewirtschaftetes Land umgewandelt wurde. Die Kartierung ökologischer Prozesse ist nützlich, um naturschutzpolitische Entscheidungen auf räumlich explizite Informationen zu stützen, sowie um das Verständnis der Ökosystemdynamik zu verbessern. Neue Erdbeobachtungssensoren, frei verfügbare Daten, sowie Fortschritte in der Datenverarbeitung ermöglichen erstmalig die großflächige Erfassung saisonaler Vegetationsdynamiken mit hohem räumlichen Detail. In dieser Arbeit wird der Mehrwert von Landsat-basierten Landoberflächenphänologischen (LSP) Metriken, für die Charakterisierung der Cerrado-Vegetation, hinsichtlich ihrer strukturellen und phänologischen Diversität, sowie zur Schätzung des oberirdischen Kohlenstoffgehaltes (AGC), analysiert. Die Ergebnisse zeigen, dass LSP-Metriken die saisonale Vegetatiosdynamik erfassen und für die Kartierung von Vegetationsphysiognomien nützlich sind, wobei hier die Grenzen der Einteilung von Vegetationsgradienten in diskrete Klassen erreicht wurden. Basierend auf Ähnlichkeiten in LSP wurden LSP Archetypen definiert, welche die Erfassung und Darstellung der phänologischen Diversität im gesamten Cerrado ermöglichten und somit zur Optimierung aktueller Kartierungskonzepte beitragen können. LSP-Metriken ermöglichten die räumlich explizite Quantifizierung von AGC in drei Untersuchungsgebieten und sollten bei zukünftigen Kohlenstoffschätzungen berücksichtigt werden. Die Erkenntnisse dieser Dissertation zeigen die Vorteile und Nutzungsmöglichkeiten von LSP Metriken im Bereich der Ökosystemüberwachung und haben demnach direkte Implikationen für die Entwicklung und Bewertung nachhaltiger Landnutzungsstrategien.
The Brazilian savanna, known as the Cerrado, covers around 24% of Brazil. It is characterized by a unique biodiversity and a strong gradient in vegetation structure. Land-use changes have led to almost half of the Cerrado being converted into cultivated land. The mapping of ecological processes is, therefore, an important prerequisite for supporting nature conservation policies based on spatially explicit information and for deepening our understanding of ecosystem dynamics. New sensors, freely available data, and advances in data processing allow the analysis of large data sets and thus, for the first time, the capture of seasonal vegetation dynamics over large extents with high spatial detail. This thesis aimed to analyze the benefits of Landsat-based land surface phenological (LSP) metrics for the characterization of Cerrado vegetation, regarding its structural and phenological diversity, and to assess their relation to above-ground carbon (AGC). The results revealed that LSP metrics make it possible to capture the seasonal dynamics of photosynthetically active vegetation and are beneficial for the mapping of vegetation physiognomies. However, the results also revealed limitations of hard classification approaches for mapping vegetation gradients in complex ecosystems. Based on similarities in LSP metrics, which were for the first time derived for the whole extent of the Cerrado, LSP archetypes were proposed, which revealed the spatial patterns of LSP diversity at a 30 m spatial resolution and offer potential to enhance current mapping concepts. Further, LSP metrics facilitated the spatially explicit quantification of AGC in three study areas in the central Cerrado and should thus be considered as a valuable variable for future carbon estimations. Overall, the insights highlight that Landsat-based LSP metrics are beneficial for ecosystem monitoring approaches, which are crucial to design sustainable land management strategies that maintain key ecosystem functions and services.
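A minimal illustration of how threshold-based LSP metrics can be derived from a smoothed vegetation-index time series; the synthetic index values and the simple 50%-of-amplitude rule below are assumptions for illustration, not the thesis workflow.

```python
import numpy as np

doy = np.arange(0, 365, 16)                                  # acquisition days of year
vi = 0.3 + 0.35 * np.exp(-((doy - 200) / 60.0) ** 2)         # synthetic vegetation index

def lsp_metrics(doy, vi, threshold_frac=0.5):
    """Simple threshold-based land surface phenology metrics."""
    base, peak = vi.min(), vi.max()
    amplitude = peak - base
    thr = base + threshold_frac * amplitude
    above = vi >= thr
    start = doy[above][0]          # first acquisition above the threshold
    end = doy[above][-1]           # last acquisition above the threshold
    return {"peak_doy": int(doy[vi.argmax()]), "amplitude": float(amplitude),
            "season_start": int(start), "season_end": int(end),
            "season_length": int(end - start)}

print(lsp_metrics(doy, vi))
```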
APA, Harvard, Vancouver, ISO, and other styles
45

Görgens, Eric Bastos. "LiDAR technology applied to vegetation quantification and qualification." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/11/11150/tde-10042015-112503/.

Full text
Abstract:
The methodology to quantify vegetation from airborne laser scanning (or LiDAR - Light Detection And Ranging) is somewhat consolidated, but some concerns remain on the scientific community's checklist. This thesis addresses some of those concerns and contributes results and insights. Four aspects were studied in this thesis. In the first study, the effect of threshold heights (minimum height and height break) on the quality of the set of metrics was investigated, aiming at volume estimation for a eucalyptus plantation. The results indicate that higher threshold heights may return a better set of metrics. The impact of threshold height was more evident in young stands and for canopy density metrics. In the second study, the stability of LiDAR metrics between different LiDAR surveys over the same area was analyzed. This study demonstrated how the selection of stable metrics contributed to generating reliable models across different data sets. According to our results, the height metrics provided the greatest stability when used in the models, specifically the higher percentiles (>50%) and the mode. The third study was designed to evaluate the use of machine learning tools to estimate the wood volume of eucalyptus plantations from LiDAR metrics. Rather than being limited to a subset of LiDAR metrics in an attempt to explain as much variability in a dependent variable as possible, artificial intelligence tools explored the complete set of metrics when looking for patterns between LiDAR metrics and stand volume. The fourth and last study focused on several highly important Brazilian forest typologies and showed that it is possible to differentiate the typologies through their vertical profiles as derived from airborne laser surveys. The size of the sampling cell does have an influence on the behavior observed in analyses of spatial dependence. Each typology has its own specific characteristics, which will need to be taken into consideration in projects targeting monitoring, inventory construction, and mapping based upon airborne laser surveys. The determination of a converged vertical profile could be achieved with data representing 10% of the area for all typologies, while for some typologies 2% coverage was sufficient.
A metodologia para quantificar vegetação a partir de dados LiDAR (Light Detection And Ranging) está de certa forma consolidada, porém ainda existem pontos a serem esclarecidos que permanecem na lista da comunidade científica. Quatro aspectos foram estudos nesta tese. No primeiro estudo, foi investigado a influência das alturas de referência (altura mínima e altura de quebra) na qualidade do conjunto de métricas extraído visando estimação do volume de um plantio de eucalipto. Os resultados indicaram que valor mais altos de alturas de referência retornaram um conjunto de métricas melhor. O efeito das alturas de referência foi mais evidente em povoamentos jovens e para as métricas de densidade. No segundo estudo, avaliou-se a estabilidade de métricas LiDAR derivadas para uma mesma área sobrevoada com diferentes configurações de equipamentos e voo. Este estudo apresentou como a seleção de métricas estáveis pode contribuir para a geração de modelos compatíveis com diferentes bases de dados LiDAR. De acordo com os resultados, as métricas de altura foram mais estáveis que as métricas de densidade, com destaque para os percentis acima de 50% e a moda. O terceiro estudo avaliou o uso de máquinas de aprendizado para a estimação do volume em nível de povoamento de plantios de eucalipto a partir de métricas LiDAR. Ao invés de estarem limitados a um pequeno subconjunto de métricas na tentativa de explicar a maior parte possível da variabilidade total dos dados, as técnicas de inteligência artificial permitiram explorar todo o conjunto de dados e detectar padrões que estimaram o volume em nível de povoamento a partir do conjunto de métricas. O quarto e último estudo focou em sete áreas de diferentes tipologias florestais brasileiras, estudando os seus perfis verticais de dossel. O estudo mostrou que é possível diferenciar estas tipologias com base no perfil vertical derivado de levantamentos LiDAR. Foi observado também que o tamanho das parcelas possui diferentes níveis de dependência espacial. Cada tipologia possui características específicas que precisam ser levadas em considerações em projetos de monitoramento, inventário e mapeamento baseado em levantamentos LiDAR. O estudo mostrou que é possível determinar o perfil vertical de dossel a partir da cobertura de 10% da área, chegando a algumas tipologias em apenas 2% da área.
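The role of the height thresholds discussed in the first study can be illustrated with a small sketch over made-up normalized return heights: the minimum height decides which returns enter the height percentiles, and the height break defines a canopy density metric.

```python
import numpy as np

rng = np.random.default_rng(7)
heights = np.concatenate([rng.uniform(0.0, 1.0, 300),        # ground / understorey returns
                          rng.normal(18.0, 4.0, 700)])       # canopy returns (metres)

def lidar_metrics(heights, min_height=1.3, height_break=2.0):
    """Height percentiles and a canopy density metric, using only returns above min_height."""
    kept = heights[heights > min_height]
    metrics = {f"p{p}": float(np.percentile(kept, p)) for p in (25, 50, 75, 90, 95)}
    metrics["canopy_density"] = float(np.mean(kept > height_break))  # share of returns above the break
    return metrics

print(lidar_metrics(heights))
```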
APA, Harvard, Vancouver, ISO, and other styles
46

Ferreira, Marcos Manoel. "Estimativa dos fluxos de Zn, Cd, Pb e Cu no saco do Engenho, Baía de Sepetiba, RJ Niterói." Niterói, 2017. https://app.uff.br/riuff/handle/1/3048.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal Nível Superior
Universidade Federal Fluminense. Instituto de Química. Programa de Pós-Graduação em Geociências- Geoquímica, Niterói, RJ
Este estudo quantificou o aporte via transporte aquático superficial de Zn, Cd, Pb, e Cu para a Baía de Sepetiba, oriundos do Saco do Engenho, buscando estimar a carga da poluição anual, proveniente em grande parte, dos rejeitos industriais armazenados na área da falida Cia. de beneficiamento de Zn e Cd, a Ingá Mercantil. Os resultados mostram o quão preocupante é a carga dos contaminantes metálicos que ainda são exportados para a Baía de Sepetiba através do Saco do Engenho, devido ao processo conjunto de lixiviação e erosão dos rejeitos industriais ricos em metais pesados. Zn e Cd foram caracterizados como os principais metais que são lixiviados do rejeito pela ação das águas da Baía e das chuvas locais, sendo então os metais que apresentam os maiores riscos ao ambiente aquático local, principalamente se a este fato for acrescentado às suas altas concentrações no rejeito. Nas análises realizadas, em 71% das amostras, a concentração de Zn total superou os limites máximos permissíveis na Classe 2 para águas, segundo a legislação ambiental brasileira. Comparando-se os resultados deste estudo com outras regiões, as concentrações de Zn, Cd, Pb, e Cu encontrados nas águas do Saco do Engenho são comparáveis às concentrações encontradas em outros áreas altamante impactadas por atividades industriais e portuárias. Os resultados referentes ao fluxo de contaminantes que aportam à Baía de Sepetiba a partir do Saco do Engenho foram de 33 t.ano-1 de Zn, 0,3 t.ano-1 de Cd, e 0,06 t.ano-1 de Pb. O metal Cu apresentou um fluxo de 0,51 t.ano-1, mas no sentido inverso. Zn e Cd, apresentaram valores de fluxo anuais maiores, ou pelo menos na mesma ordem de grandeza, que os encontrados no Canal de São Francisco e no Rio Guandú, rios estes altamente contaminados por metais pesados, sobretudo em seus trechos finais devido a grande concentração de indústrias com elevado potencial poluidor, e que possuem uma vazão anual cerca de 100 vezes maior, que a o maior valor de vazão medido no Saco do Engenho
This study quantified the surface aquatic input of Zn, Cd, Pb and Cu into Sepetiba Bay coming from the Engenho Inlet, in order to estimate the annual pollution load, which originates largely from a major environmental liability at that site, the tailings of the bankrupt Zn and Cd processing company Cia. Ingá Mercantil. The results show how worrying the load of metal contaminants still exported to Sepetiba Bay through the Engenho Inlet is, due to the combined leaching and erosion of industrial wastes rich in heavy metals. Zn and Cd were characterized as the main metals leached from the tailings by the action of the Bay waters and local rains, and are therefore the metals that pose the greatest risks to the local aquatic environment, especially considering their high concentrations in the tailings. In the analyses performed, the total Zn concentration exceeded the maximum allowed for Class 2 waters under Brazilian environmental legislation in 71% of the samples. Compared with other regions, the concentrations of Zn, Cd, Pb and Cu found in the waters of the Engenho Inlet are comparable to those found in other areas heavily impacted by industrial and port activities. The estimated fluxes of contaminants reaching Sepetiba Bay from the Engenho Inlet were 33 t.year-1 of Zn, 0.3 t.year-1 of Cd and 0.06 t.year-1 of Pb. Cu showed a flux of 0.51 t.year-1, but in the reverse direction. Zn and Cd showed annual flux values larger than, or at least of the same order of magnitude as, those found in the São Francisco Canal and the Guandu River, rivers that are highly contaminated by heavy metals, especially in their final stretches, due to the large concentration of industries with high pollution potential, and whose annual discharge is about 100 times greater than the highest discharge measured in the Engenho Inlet.
APA, Harvard, Vancouver, ISO, and other styles
47

Leandre, Fernet Renand. "Estimating Effects of Poverty on the Survival of HIV Patients on ART and Food Supplementation in Rural Haiti: A Comparative Evaluation of Socio-Economic Indicators." Thesis, Harvard University, 2014. http://nrs.harvard.edu/urn-3:HUL.InstRepos:13041360.

Full text
Abstract:
Background: Because economic conditions are both a risk factor for disease and may themselves be objectives for health delivery interventions, monitoring changes in economic outcomes has become a routine priority for health and development efforts. However, the lack of formal commerce in poor agrarian communities creates challenges for measuring economic status. Data on household finances, such as income, are ideal but are time-consuming, costly, and less reliable, whereas proxy measures of wealth such as indices of durable assets are easier to measure but relatively coarse and are less sensitive to rapid changes in underlying drivers. Methods: We used data from a cohort of 528 people living with HIV/AIDS (PLHA) enrolled in a food intervention study on household demographics, agricultural production, cash income, in-kind income, household durable assets and health status, including CD4 count. We created a household economic index using principal components analysis (PCA) and compared it with three other economic indicators generated from the data (income, expenditures, poverty score). Through multivariate logistic regression analysis we evaluated the effect of the economic metric on probability of survival within the first year of study. Results: Socioeconomic status determined by PCA of durable assets, weighted by the square root of the household size, was the only consistently significant economic predictor of probability of death. It remained significant even after controlling for direct health indicators such as CD4 count. There was no significant correlation between CD4 count and the economic indicators, which may be attributable to uniform access to ART among study participants. Conclusion: Among people who have HIV and are all enrolled in ART and food programs, household socioeconomic status is an important predictor of mortality rates, even after controlling for direct health measurements such as CD4 count and other health-related covariates. The SES indicator from PCA is also a simple metric to estimate. The study underscores that poverty is a social determinant of mortality even in the context of equal access to health services, and is suggestive of the importance of poverty alleviation activities as an important supplement to clinical interventions.
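The asset-based index described in the Methods can be sketched roughly as follows, on synthetic data; the asset list, the standardization choices and the household-size adjustment below are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic 0/1 durable-asset indicators for 8 households (radio, bicycle, metal roof, ...).
assets = rng.integers(0, 2, size=(8, 5)).astype(float)
household_size = rng.integers(2, 9, size=8)

def pca_wealth_index(assets):
    """Score each household on the first principal component of its asset indicators."""
    X = (assets - assets.mean(axis=0)) / (assets.std(axis=0) + 1e-9)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]              # first-component scores

# Crude household-size adjustment, loosely following the abstract's sqrt weighting.
ses = pca_wealth_index(assets) / np.sqrt(household_size)
print(np.round(ses, 2))
```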
APA, Harvard, Vancouver, ISO, and other styles
48

Zemánek, Ondřej. "Počítání vozidel v statickém obraze." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2020. http://www.nusl.cz/ntk/nusl-417211.

Full text
Abstract:
This thesis focuses on the problem of counting vehicles in a static image without knowledge of the geometric properties of the scene. As part of the solution, five convolutional neural network architectures were implemented and trained. A large dataset was also acquired, containing 19,310 images captured from 12 viewpoints and covering 7 different scenes. The convolutional networks used map an input sample to a vehicle density map, from which the count and the localization of the vehicles in the context of the input image can be obtained. The main contribution of this work is the comparison and application of state-of-the-art approaches to object counting in images. Most of these architectures were designed for counting people in images and therefore had to be adapted to the needs of counting vehicles in a static image. The trained models are evaluated with the GAME metric on the TRANCOS dataset and on the large combined dataset. The results achieved by all models are then described and compared.
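Since the models are evaluated with the GAME metric, a short sketch of that metric may help (its standard definition, not the thesis code): the density map is split into 4^L cells and the absolute count error is accumulated per cell, so that localization errors are penalized as L grows; GAME(0) reduces to the absolute count error.

```python
import numpy as np

def game(est_density, gt_density, L=2):
    """GAME(L): split the map into a 2^L x 2^L grid and sum |estimated - true| counts per cell."""
    n = 2 ** L
    h, w = gt_density.shape
    err = 0.0
    for i in range(n):
        for j in range(n):
            cell = (slice(i * h // n, (i + 1) * h // n), slice(j * w // n, (j + 1) * w // n))
            err += abs(est_density[cell].sum() - gt_density[cell].sum())
    return err

# Toy example with random "density maps"; GAME(0) is just the absolute count error.
rng = np.random.default_rng(0)
gt = rng.random((64, 64)) * 0.01
est = gt + rng.normal(0, 0.001, gt.shape)
print(game(est, gt, L=0), game(est, gt, L=2))
```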
APA, Harvard, Vancouver, ISO, and other styles
49

Bellorio, Marcos Bruno. "Revisão sobre critérios de fadiga para cabos condutores de energia e uso de metodologia para estimativa de sua vida remanescente." reponame:Repositório Institucional da UnB, 2009. http://repositorio.unb.br/handle/10482/5951.

Full text
Abstract:
Dissertação (mestrado)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2009.
O presente trabalho tem como objetivo conduzir uma revisao critica sobre as diferentes metodologias existentes para o projeto e manutencao de linhas de transmissao de energia quanto a fadiga sob condicoes de fretting. Entre as metodologias estudadas, o metodo da Cigre para calculo da vida remanescente mostrou ser a mais consistente e util. Uma vez concluida a analise critica aplicou-se estas metodologias para um conjunto de dados medidos para um cabo da Eletronorte em uma linha de transmissao de 230 kV instalada na regiao Norte do Brasil no trecho de travessia Vila do Conde - Guama. Essa analise permite estimar a durabilidade do cabo e constituiu importante ferramenta de analise para o setor de manutencao de empresas da area. Atualmente ha uma forte demanda das empresas do setor de transmissao de energia eletrica para tentar elevar o nivel da carga de pre-esticamento do cabo condutor. Isso alem de reduzir custos amenizaria dificuldades operacionais associadas, por exemplo, a construcao de torres muito altas para a travessia de grandes rios Amazonicos. Nesse sentido, esse trabalho propos uma alternativa para o calculo da curva de resistencia a fadiga do cabo na presenca de maiores cargas de esticamento. Ate onde o autor tenha conhecimento, essa e uma proposta inedita no sentido que, na presenca de cargas de esticamento mais elevadas costuma-se corrigir apenas os valores da solicitacao dinamica do cabo por meio da correcao do fator de rigidez na formula de Poffenberger-Swart. Essa e a primeira tentativa de corrigir-se nao apenas a solicitacao, mas tambem a curva de resistencia a fadiga da montagem cabo/grampo de suspensao. Uma discussao sobre a necessidade dessa medida e construida em detalhes.
The aim of this work is to conduct a critical review of the different existing methodologies for the design and maintenance of overhead conductors under fretting fatigue. Among these methodologies, the Cigre method for remaining life calculation has proved to be the most consistent and useful. Once the critical review was concluded, those methodologies were applied using a set of data obtained from an Eletronorte overhead conductor installed in the North region of Brazil. This analysis allows one to estimate the durability of the overhead conductor, which is a very important tool for the maintenance sector of transmission line companies. Nowadays there is a strong demand from electrical transmission line companies to elevate the stretch factor (Every Day Stress - EDS) of the overhead conductor. This would reduce the costs and the operational difficulties associated with it, for instance the construction of very high towers for the crossing of large Amazon rivers. In this context, this work proposed an alternative way to calculate the conductor fatigue curve for higher pre-loads. To the author's knowledge, this is a proposal without precedent in the sense that, in the presence of higher stretch loads, it is common to correct only the values of the dynamic loads by correcting the stiffness factor of the Poffenberger-Swart formula. This is the first proposal to correct not only the stiffness factor but also the fatigue endurance limit curve for the conductor/clamp set. A discussion of the need for this measure is developed in detail.
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Li. "Quasi transformées de Riesz, espaces de Hardy et estimations sous-gaussiennes du noyau de la chaleur." Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01001868.

Full text
Abstract:
In this thesis we study Riesz transforms and Hardy spaces associated with an operator on a metric measure space. Both topics are related to estimates of the heat kernel associated with this operator. In Chapters 1, 2 and 4, we study quasi Riesz transforms on Riemannian manifolds and on graphs. In Chapter 1, we prove that quasi Riesz transforms are bounded in Lp for 1
APA, Harvard, Vancouver, ISO, and other styles
