
Theses on the topic "Statistic in engineering"

Consult the top 50 dissertations / theses for your research on the topic "Statistic in engineering".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever one is available in the metadata.

Browse theses on a wide variety of disciplines and organise your bibliography correctly.

1

Yap, Tammy, 1976-. "SCAN : a statistic code analyser for JavaScheme". Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80578.

Full text
Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.
Includes bibliographical references (p. 45).
by Tammy Yap.
S.B. and M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
2

Birkestedt, Sara, and Andreas Hansson. "Can web-based statistic services be trusted?" Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5282.

Full text
Abstract:
A large number of statistic services exist today, which shows that there is a great interest in knowing more about the visitors on a web site. But how reliable are the results these services give? The hypothesis examined in the thesis is: web-based statistic services do not show an accurate result. The purpose of the thesis is to find out how accurate the web-based statistic services are regarding unique visitors and number of pages viewed. Our hope is that this thesis will bring more knowledge about the different statistic services that exist today and the problems surrounding them. We also draw attention to the importance of knowing how your statistics software works in order to interpret its results correctly. To investigate this, we chose to run practical tests on a selection of web-based statistic services. The services registered the traffic from the same web site during a test period, while a control program registered the same things and stored the results in a database. In addition to the test, we interviewed a person working with web statistics. The investigation showed that there are big differences between the results from the web-based statistic services in the test, and that none of them showed an accurate result, neither for the total number of page views nor for unique visitors. This led us to the conclusion that web-based statistic services do not show an accurate result, which verifies our hypothesis. The interview also confirmed that measuring web statistics is problematic.
3

Cepel, Raina. "The spatial cross-correlation coefficient as an ultrasonic detection statistic". Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/5054.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 7, 2008). Includes bibliographical references.
4

Mate, Samuel Spicer. "Anthropometric human modeling on the shape manifold". Thesis, University of Iowa, 2016. https://ir.uiowa.edu/etd/3139.

Full text
Abstract:
The accuracy of modern digital human models has led to the development of human simulation engines capable of performing a complex analysis of the biometrics and kinematics/dynamics of a digital model. While the capabilities of these simulations have seen much progress in recent years, they are hindered by a fundamental limitation regarding the diversity of the models compatible with the simulation engine, which in turn reduces the scope of the applications available to the simulation. This is typically due to the necessary implementation of a musculoskeletal structure within the model, as well as the inherent mass and inertial data that accompany it. As a result, a significant amount of time and expertise is required to make a digital human model compatible with the simulation. In this research I present a solution to this limitation by outlining a process to develop a set of mutually compatible human models that spans the range of feasible body shapes and allows for a “free” exploration of body shape within the shape manifold. Additionally, a method is presented to represent human body shapes with reduced dimensionality, via a spectral shape descriptor, which enables a statistical analysis that is both more computationally efficient and anthropometrically accurate than traditional methods. This statistical analysis is then used to develop a set of representative models that succinctly represent the full scope of human body shapes across the population, with applications reaching beyond research-oriented simulations into commercial human-centered product design and digital modeling.
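The statistical body-shape analysis described in this abstract can be illustrated in miniature. Below is a minimal sketch, assuming synthetic shape vectors rather than the thesis's actual mesh data and spectral descriptor, of how PCA-style dimensionality reduction yields a compact per-shape descriptor on which population statistics can be computed:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 200 body shapes, each flattened to 60 coordinates.
shapes = rng.normal(size=(200, 60))

# Centre the data and take the SVD; the leading right singular vectors
# span the low-dimensional "shape space" used for statistics.
mean_shape = shapes.mean(axis=0)
centred = shapes - mean_shape
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

k = 10  # keep the 10 leading modes of variation
coords = centred @ Vt[:k].T               # compact descriptor per shape
reconstructed = coords @ Vt[:k] + mean_shape

# Fraction of total variance captured by the k modes.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(coords.shape, float(explained))
```

Any new body shape can then be represented, compared, and sampled in the k-dimensional coordinate space instead of the raw coordinate space.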
5

Isaksson, Nils, and Helena Lundström. "Dammsäkerhetsutvärdering samt utformning av dammregister och felrapporteringssystem för svenska gruvdammar". Thesis, Uppsala University, Department of Earth Sciences, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-88834.

Full text
Abstract:

A lot of mine waste rock and tailings arise from all mining processes and have to be stored in an appropriate way. Tailings are deposited in impoundments retained by tailings dams. The objective of tailings dams is to retain the slurry from the mining process and in that way prevent spill into the surroundings that might be harmful for the environment. Tailings dams are often constructed as staged embankments so that construction costs and demand of materials are spread more evenly over the period of deposition.

The objective of this thesis has been to compile information about and evaluate events at Swedish tailings dams and also to develop a collective database for all Swedish mining companies for all tailings dams and all events that occur at tailings dams.

Information about 60 events at Swedish tailings dams has been gathered and evaluated. The evaluation has been performed by comparison between and analysis of individual parameters and also by use of a multivariate statistical method called PLS. The statistical analysis shows a decrease in the numbers of events during the last five years, which indicates improved dam safety within the mining industry. The analysis also shows that severe events and the human factor might be related when it comes to the initiating cause of the event. Further relations between the parameters and the severity of the events can be seen from the PLS-analysis, for example that low and short tailings dams to a greater extent are subjected to severe events. To be able to draw more reliable conclusions further studies with a more complete basic data are needed.

This work has shown the need for a collective database within the Swedish mining industry covering tailings dams and the events occurring at them, so that more complete basic data can be obtained for future studies. A structure for such a database has been developed in Microsoft Access 2000. The aim of the database is to facilitate feedback within the mining industry and to gather comprehensive data for future statistical evaluations.


6

Hong, Sui. "Experiments with K-Means, Fuzzy c-Means and Approaches to Choose K and C". Honors in the Major Thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/1224.

Full text
Abstract:
This item is only available in print in the UCF Libraries. If this is your Honors Thesis, you can help us make it available online for use by researchers around the world by following the instructions on the distribution consent form at http://library.ucf
Bachelors
Engineering and Computer Science
Computer Engineering
7

Torres, Ariela da Silva. "Corrosão por cloretos em estruturas de concreto armado : uma meta-análise". Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/29405.

Full text
Abstract:
Concrete is the second most consumed material in the world, second only to water, which explains the importance of studies of its durability aimed at obtaining structures with long service lives. Among the various pathological manifestations that occur in reinforced concrete structures, corrosion of the reinforcement has a major incidence, as shown by several authors. Reinforcement corrosion is caused by aggressive agents, the two main ones being carbonation and chloride penetration. Because of Brazil's long sea coast, the action of chloride ions is the most significant, deteriorating structures and creating a need for periodic, and costly, maintenance. Motivated by this concern with the maintenance costs of reinforced concrete structures, this study set out to validate chloride-induced corrosion tests on reinforced concrete through a meta-analysis of data from the electrochemical techniques used in theses and dissertations produced in Brazil. The aim is thus to map Brazilian research on reinforcement corrosion in reinforced concrete, to guide future studies, and to analyse all of these works jointly in order to assess their quantity and quality from a nationwide perspective. The analysis recorded a lack of research in the North of the country and a concentration of studies in the Southeast and South. Using statistical techniques, it concludes that many combinations of variables (water/cement ratio, cement type, chloride-induction method, material additions, among others) are still missing for the creation of a reliable Brazilian model.
8

Nilsson, Marcus, and Stefan Borgström. "Pokerboten". Thesis, Malmö högskola, Fakulteten för teknik och samhälle (TS), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20655.

Full text
Abstract:
The aim of the following thesis is to explore and develop ideas on how to build a bot that plays poker. An important topic studied is artificial intelligence and how an AI can be implemented for a bot that replaces a human poker player playing in a network. The study provides insight into the rules of Texas Hold'em and the theory of relevant statistics, probability, and odds. The results of this study consist of algorithms that can be used in the development of a bot that plays poker at a table with ten players.
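As an illustration of the kind of odds reasoning this abstract mentions (not the authors' actual algorithms), here is a minimal sketch of the standard "outs" calculation for Texas Hold'em and a pot-odds call decision:

```python
def hand_odds(outs: int, cards_to_come: int) -> float:
    """Exact probability of hitting at least one out, drawing from the
    unseen deck (47 unseen cards after the flop, 46 after the turn)."""
    unseen = 47 if cards_to_come == 2 else 46
    miss = 1.0
    for i in range(cards_to_come):
        miss *= (unseen - outs - i) / (unseen - i)
    return 1.0 - miss

def should_call(outs: int, cards_to_come: int, pot: float, cost: float) -> bool:
    """Call when the win probability exceeds the pot odds."""
    return hand_odds(outs, cards_to_come) > cost / (pot + cost)

# A flush draw on the flop has 9 outs with 2 cards to come (~35%):
print(round(hand_odds(9, 2), 3), should_call(9, 2, pot=100, cost=20))
```

The same probability-versus-pot-odds comparison generalises to any draw once the number of outs is counted.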
9

Raj, Alvin Andrew. "Ambiguous statistics - how a statistical encoding in the periphery affects perception". Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/79214.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 159-163).
Recent understanding of human vision suggests that the periphery compresses visual information to a set of summary statistics. Some visual information is robust to this lossy compression, but some, like spatial location and phase, is not perfectly represented, leading to ambiguous interpretations. Using the statistical encoding, we can visualize the information available in the periphery to gain intuitions about human performance in visual tasks, which has implications for user interface design, or more generally, for whether the periphery encodes sufficient information to perform a task without additional eye movements. The periphery is most of the visual field. If it undergoes these losses of information, then our perception and ability to perform tasks efficiently are affected. We show that the statistical encoding explains human performance in classic visual search experiments. Based on the statistical understanding, we also propose a quantitative model that can estimate the average number of fixations humans would need to find a target in a search display. Further, we show that the ambiguities in the peripheral representation predict many aspects of some illusions. In particular, the model correctly predicts how polarity and width affect the Pinna-Gregory illusion. Visualizing the statistical representation of the illusion shows that many qualitative aspects of the illusion are captured by the statistical ambiguities. We also investigate a phenomenon known as Object Substitution Masking (OSM), where the identification of an object is impaired when a sparse, non-overlapping, and temporally trailing mask surrounds that object. We find that different types of grouping of object and mask produce different levels of impairment. This contradicts a theory of OSM which predicts that grouping should always increase masking strength. We speculate on some reasons why the statistical model of the periphery may explain OSM.
by Alvin Andrew Raj.
Ph.D.
10

Marco Almagro, Lluís. "Statistical methods in Kansei engineering studies". Doctoral thesis, Universitat Politècnica de Catalunya, 2011. http://hdl.handle.net/10803/85059.

Full text
Abstract:
This PhD thesis deals with Kansei Engineering (KE), a technique for translating emotions elicited by products into technical parameters, and with statistical methods that can benefit the discipline. The basic purpose of KE is discovering in which way some properties of a product convey certain emotions to its users. It is a quantitative method, and data are typically collected using questionnaires. Conclusions are reached by analyzing the collected data, normally using some kind of regression analysis. Kansei Engineering can be placed under the more general research area of emotional design. The thesis starts by justifying the importance of emotional design. As the range of techniques used under the name of Kansei Engineering is rather vast and not very clear, the thesis develops a detailed definition of KE that serves the purpose of delimiting its scope. A model for conducting KE studies is then suggested. The model includes spanning the semantic space – the whole range of emotions the product can elicit – and the space of properties – the technical variables that can be modified in the design phase. After the data collection, the synthesis phase links both spaces; that is, it discovers how several properties of the product elicit certain emotions. Each step of the model is explained in detail using a KE study performed specially for this thesis: the fruit juice experiment. The initial model is progressively improved during the thesis, and data from the experiment are reanalyzed using the new proposals. Many practical concerns arise when looking at the above-mentioned model for KE studies (among many others, how many participants are needed and how the data collection session is conducted). An extensive literature review is done with the aim of answering these and other questions. The most common applications of KE are also described, together with comments on particularly interesting ideas from several papers.
The literature review also serves to list the most common tools used in the synthesis phase. The central part of the thesis focuses precisely on tools for the synthesis phase. Statistical tools such as quantification theory type I and ordinal logistic regression are studied in detail, and several improvements are suggested. In particular, a new graphical way to represent results from an ordinal logistic regression is proposed. A machine learning technique, rough sets, is introduced, and a discussion is included on its adequacy for KE studies. Several sets of simulated data are used to assess the behavior of the suggested statistical techniques, leading to some useful recommendations. No matter which analysis tools are used in the synthesis phase, conclusions are likely to be flawed when the design matrix is not appropriate. A method to evaluate the suitability of design matrices used in KE studies is proposed, based on the use of two new indicators: an orthogonality index and a confusion index. The commonly forgotten role of interactions in KE studies is examined, and a method to include an interaction in KE studies is suggested, together with a way to represent it graphically. Finally, the scarcely treated topic of variability in KE studies is tackled in the last part of the thesis. A method (based on cluster analysis) for finding segments among subjects according to their emotional responses and a way to rank subjects based on their coherence when rating products (using an intraclass correlation coefficient) are proposed. As many users of Kansei Engineering are not specialists in interpreting the numerical output of statistical techniques, visual representations for these two new proposals are included to aid understanding.
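As an illustration of the last proposal, here is a minimal sketch of ranking raters by coherence with an intraclass correlation coefficient. The one-way ICC(1,1) form and the synthetic ratings are assumptions for the example; the abstract does not specify which ICC variant the thesis uses:

```python
import numpy as np

def icc_1_1(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1,1).
    ratings: n_targets x k_repeats matrix, e.g. one subject's repeated
    ratings of n products."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    # Between-target and within-target mean squares (one-way ANOVA).
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
true_scores = rng.normal(size=(20, 1))
consistent = true_scores + 0.1 * rng.normal(size=(20, 3))  # coherent rater
noisy = true_scores + 2.0 * rng.normal(size=(20, 3))       # erratic rater
print(icc_1_1(consistent) > icc_1_1(noisy))
```

Sorting subjects by this coefficient gives exactly the kind of coherence ranking the abstract describes.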
11

Keane, A. J. "Statistical energy analysis of engineering structures". Thesis, Brunel University, 1988. http://bura.brunel.ac.uk/handle/2438/5204.

Full text
Abstract:
This thesis examines the fundamental equations of the branch of linear oscillatory dynamics known as Statistical Energy Analysis (SEA). The investigation described is limited to the study of two point-coupled, multi-modal sub-systems, which form the basis for most of the accepted theory in this field. Particular attention is paid to the development of exact classical solutions against which simplified approaches can be compared. These comparisons reveal deficiencies in the usual formulations of SEA in three areas, viz., for heavy damping, strong coupling between sub-systems, and for systems with non-uniform natural frequency distributions. These areas are studied using axially vibrating rod models, which clarify much of the analysis without significant loss of generality. The principal example studied is based on part of the structure of a modern warship. It illustrates the simplifications inherent in the models adopted here but also reveals the improvements that can be made over traditional SEA techniques. The problem of heavy damping is partially overcome by adopting revised equations for the various loss factors used in SEA. These are shown to be valid provided that the damping remains proportional, so that inter-modal coupling is not induced by the damping mechanism. Strong coupling is catered for by the use of a correction factor based on the limiting case of infinite coupling strength, for which classical solutions may be obtained. This correction factor is used in conjunction with a new, theoretically based measure of the transition between weakly and strongly coupled behaviour. Finally, to explore the effects of non-uniform natural frequency distributions, systems with geometrically periodic and near-periodic parameters are studied. This important class of structures is common in engineering design and does not possess the uniform modal statistics commonly assumed in SEA. The theory of periodic structures is used in this area to derive more sophisticated statistical models that overcome some of these limitations.
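The steady-state power balance at the core of SEA can be made concrete with a small sketch. The snippet below solves the standard textbook two-subsystem energy balance (a generic formulation, not the thesis's exact model; all symbols and test values are illustrative):

```python
def sea_two_subsystem(omega, eta1, eta2, eta12, eta21, P1, P2):
    """Solve the steady-state SEA power balance for two coupled subsystems:
        P1 = omega * ((eta1 + eta12) * E1 - eta21 * E2)
        P2 = omega * ((eta2 + eta21) * E2 - eta12 * E1)
    where eta1, eta2 are damping loss factors, eta12, eta21 coupling loss
    factors, and P1, P2 the input powers. Returns the energies (E1, E2),
    obtained by Cramer's rule on the 2x2 linear system."""
    a11 = omega * (eta1 + eta12)
    a12 = -omega * eta21
    a21 = -omega * eta12
    a22 = omega * (eta2 + eta21)
    det = a11 * a22 - a12 * a21
    E1 = (P1 * a22 - P2 * a12) / det
    E2 = (a11 * P2 - a21 * P1) / det
    return E1, E2
```

Exact classical solutions like the ones the thesis develops would then serve as the benchmark against which this simplified balance is compared.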
Gli stili APA, Harvard, Vancouver, ISO e altri
12

Molaro, Mark Christopher. "Computational statistical methods in chemical engineering". Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/111286.

Testo completo
Abstract (sommario):
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-182).
Recent advances in theory and practice have introduced a wide variety of tools from machine learning that can be applied to data-intensive chemical engineering problems. This thesis covers applications of statistical learning spanning a range of relative importance of data versus existing detailed theory. In each application, the quantity and quality of data available from experimental systems are used in conjunction with an understanding of the theoretical physical laws governing system behavior, to the extent they are available. A detailed generative parametric model for optical spectra of multicomponent mixtures is introduced. The application of interest is the quantification of uncertainty associated with estimating the relative abundance of mixtures of carbon nanotubes in solution. This work describes a detailed analysis of sources of uncertainty in estimating the relative abundance of chemical species in solution from optical spectroscopy. In particular, the quantification of uncertainty in mixtures with parametric uncertainty in the pure component spectra is addressed. Markov chain Monte Carlo methods are utilized to quantify uncertainty in these situations, and the inaccuracy and potential for error in simpler methods is demonstrated. Strategies to improve estimation accuracy and reduce uncertainty in practical experimental situations are developed, including when multiple measurements are available and with sequential data. The utilization of computational Bayesian inference in chemometric problems shows great promise in a wide variety of practical experimental applications. A related deconvolution problem is addressed in which a detailed physical model is not available, but the objective of analysis is to map from a measured vector-valued signal to a sum of an unknown number of discrete contributions. The data analyzed in this application are electrical signals generated from a free-surface electrospinning apparatus. In this information-poor system, MAP estimation is used to reduce the variance in estimates of the physical parameters of interest. The formulation of the estimation problem in a probabilistic context allows for the introduction of prior knowledge to compensate for a high-dimensional, ill-conditioned inverse problem. The estimates from this work are used to develop a productivity model, expanding on previous work and showing how the uncertainty from estimation impacts system understanding. A new machine-learning-based method for monitoring for anomalous behavior in production oil wells is reported. The method entails a transformation of the available time series of measurements into a high-dimensional feature-space representation. This transformation yields results which can be treated as static, independent measurements. A new method for feature selection in one-class classification problems is developed based on approximate knowledge of the state of the system. An extension of feature-space transformation methods on time series data is introduced to handle multivariate data in large, computationally burdensome domains by using sparse feature extraction methods. As a whole, these projects demonstrate the application of modern statistical modeling methods to achieve superior results in data-driven chemical engineering challenges.
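The MCMC theme of the first project can be illustrated with a minimal sketch: a random-walk Metropolis sampler for the relative abundance of two species whose pure-component spectra are assumed known. This is a generic toy under a Gaussian noise model and a uniform prior, not the thesis's actual model; all spectra and parameter values are hypothetical.

```python
import math
import random

def metropolis_abundance(measured, s1, s2, n_iter=5000, sigma=0.05, seed=0):
    """Sample the posterior of the abundance fraction f in [0, 1] for a
    mixture spectrum measured = f*s1 + (1-f)*s2 + Gaussian noise, via
    random-walk Metropolis with a uniform prior on f."""
    rng = random.Random(seed)

    def log_like(f):
        # Gaussian log-likelihood (up to an additive constant).
        return -sum((m - (f * a + (1.0 - f) * b)) ** 2
                    for m, a, b in zip(measured, s1, s2)) / (2.0 * sigma ** 2)

    f, ll = 0.5, log_like(0.5)
    samples = []
    for _ in range(n_iter):
        prop = f + rng.gauss(0.0, 0.05)
        # Rejecting out-of-range proposals is consistent with the uniform prior.
        if 0.0 <= prop <= 1.0:
            llp = log_like(prop)
            if llp >= ll or rng.random() < math.exp(llp - ll):
                f, ll = prop, llp
        samples.append(f)
    return samples
```

The spread of the retained samples is exactly the kind of uncertainty estimate that simpler point-estimate methods, as the abstract notes, fail to provide.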
by Mark Christopher Molaro.
Ph. D.
Gli stili APA, Harvard, Vancouver, ISO e altri
13

Su, Hua. "Statistical design and optimization of engineering artifacts /". The Ohio State University, 1995. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487864986609792.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
14

Hutton, Timothy M. "Innovative Forced Response Analysis Method Applied to a Transonic Compressor". Wright State University / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=wright1074801945.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
15

Gustafsson, Erik. "System Dynamics Statistics (SDS) : A Statistical Tool for Stochastic System Dynamics Modeling and Simulation". Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-321472.

Testo completo
Abstract (sommario):
This thesis is about the creation of a tool (SDS) for statistical analysis of stochastic System Dynamics models. System Dynamics is a specific field of simulation models based on a system of ordinary differential equations and algebraic equations. The tool is intended for analyzing stochastic System Dynamics models in various fields including biology, ecology, agriculture, economy, epidemiology, military strategy, physics, chemistry and many other fields. In particular, this project was initiated to fulfill the needs of a joint epidemiological project at Uppsala University (UU) and Karolinska Institute (KI). It is also intended to be used in basic courses in simulation at KI and the Swedish University of Agricultural Sciences (SLU). A stochastic model has to be run a large number of times to reveal its behavior. SDS performs the analysis in the following way. First it connects to the System Dynamics engine containing the model. Then a specified number of simulation runs are ordered. For each run the results of specified quantities are collected. From the collected data, various statistical measures are calculated such as averages, standard deviations and confidence intervals. The statistics can then be presented graphically in the form of distributions, histograms, scatter plots, and box plots. Finally, all features of SDS were thoroughly tested using manual testing. SDS was thoroughly tested for statistical correctness, and then evaluated against some stochastic models.
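The run-collect-summarize workflow the abstract describes can be sketched in a few lines. The toy stochastic model and every numeric value below are illustrative, not taken from SDS:

```python
import random
import statistics

def run_statistics(model, n_runs, t_end, dt, seed=0):
    """Run a stochastic model n_runs times and summarize the end state
    with a mean, standard deviation, and normal-approximation 95% CI."""
    rng = random.Random(seed)
    finals = [model(rng, t_end, dt) for _ in range(n_runs)]
    mean = statistics.fmean(finals)
    sd = statistics.stdev(finals)
    half = 1.96 * sd / len(finals) ** 0.5
    return mean, sd, (mean - half, mean + half)

def logistic_growth(rng, t_end, dt):
    """Toy stochastic System Dynamics model: noisy logistic growth
    (stock x, growth rate 0.5, carrying capacity 100)."""
    x, t = 10.0, 0.0
    while t < t_end:
        x += dt * (0.5 * x * (1.0 - x / 100.0)) + rng.gauss(0.0, 0.5)
        t += dt
    return x

mean, sd, ci = run_statistics(logistic_growth, 200, 20.0, 0.1)
```

A real tool would additionally collect full trajectories per run to build the histograms, scatter plots and box plots mentioned above.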
Gli stili APA, Harvard, Vancouver, ISO e altri
16

Chang, Chia-Jung. "Statistical and engineering methods for model enhancement". Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44766.

Testo completo
Abstract (sommario):
Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating a physics-based model and a statistical model to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies include the following two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Rather than enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission electron microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymer, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated through the Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes. In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and applied to various applications. These research activities deliver engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.
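The core idea of a data-driven discrepancy term can be illustrated with a drastically simplified sketch. The actual Minimal Adjustment procedure involves a GP approximation and simultaneous variable selection; here only a linear discrepancy fitted by ordinary least squares is shown, and the `physics` function and all data are hypothetical.

```python
def minimal_adjustment(x, y, physics):
    """Fit a small linear discrepancy d(x) = b0 + b1*x between observations
    y and a physics-based model by ordinary least squares, and return the
    enhanced model physics(x) + d(x)."""
    r = [yi - physics(xi) for xi, yi in zip(x, y)]  # residuals vs physics
    n = len(x)
    sx, sr = sum(x), sum(r)
    sxx = sum(xi * xi for xi in x)
    sxr = sum(xi * ri for xi, ri in zip(x, r))
    b1 = (n * sxr - sx * sr) / (n * sxx - sx * sx)
    b0 = (sr - b1 * sx) / n
    return lambda xi: physics(xi) + b0 + b1 * xi
```

Because only the low-order discrepancy is fitted, the physics model still dominates extrapolation away from the data, which is the motivation the abstract gives for combining the two model types.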
Gli stili APA, Harvard, Vancouver, ISO e altri
17

Zafirakou, Antigoni Koulouris. "Statistical analysis techniques in water resources engineering /". Thesis, Connect to Dissertations & Theses @ Tufts University, 2000.

Cerca il testo completo
Abstract (sommario):
Thesis (Ph. D.)--Tufts University, 2000.
Adviser: Richard M. Vogel. Submitted to the Dept. of Civil and Environmental Engineering. Includes bibliographical references (leaves 206-214). Access restricted to members of the Tufts University community. Also available via the World Wide Web;
Gli stili APA, Harvard, Vancouver, ISO e altri
18

Fraser, Catherine. "The use of statistics in business process re-engineering". Thesis, University of Newcastle Upon Tyne, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408468.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
19

Pfeifle, Martin. "Spatial Database Support for Virtual Engineering". Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-27018.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
20

Schroeder, Andreas. "Software engineering perspectives on physiological computing". Diss., lmu, 2011. http://nbn-resolving.de/urn:nbn:de:bvb:19-139294.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
21

Stamoulis, Catherine 1968. "Application of statistical fault detection to civil engineering systems". Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12312.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
22

Garcia, Marelys L. "Autonomous Interpretation of Statistical Analysis For Engineering Decision Making". FIU Digital Commons, 1999. https://digitalcommons.fiu.edu/etd/3834.

Testo completo
Abstract (sommario):
Existing statistical software fails to explain the meaning of its output. Practicing engineers, in areas of application where statistics is heavily used, have to deal not only with the nuances of different statistical packages but also with learning what the results from their analyses mean. An architecture and a prototype for a knowledge-based statistical output interpreter have been designed. CLIENS integrates heterogeneous components and uses object-oriented design principles to enable modularity and package independence. Outputs from several statistical packages were analyzed to discover common patterns, and a heuristic was developed to search automatically for these patterns in the output files. An in-depth study of a small set of statistical techniques resulted in the derivation of descriptive and inferential knowledge, which is used by CLIENS to interpret statistical outputs. Experimentation with the prototype indicates that autonomous interpretation is feasible and that package independence is achievable. However, issues pertaining to natural language need to be resolved before a commercial CLIENS exists.
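The pattern-search heuristic could plausibly look something like the sketch below: scan raw package output for recognizable statistics and return them as named values. The pattern set and the sample text are hypothetical, not CLIENS's actual rules.

```python
import re

# Illustrative patterns for statistics that many packages print in
# similar textual forms (case-insensitive, tolerant of ':' or '=').
PATTERNS = [
    ("r_squared", re.compile(r"R-?squared\s*[:=]?\s*([0-9.]+)", re.I)),
    ("p_value", re.compile(r"p-?value\s*[:=]?\s*([0-9.eE-]+)", re.I)),
    ("mean", re.compile(r"\bmean\s*[:=]?\s*([-0-9.]+)", re.I)),
]

def scan_output(text):
    """Return a dict of the first occurrence of each known statistic."""
    found = {}
    for name, pat in PATTERNS:
        m = pat.search(text)
        if m:
            found[name] = float(m.group(1))
    return found
```

An interpreter layer would then attach descriptive and inferential knowledge to each extracted name, which is the step the abstract identifies as the hard part.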
Gli stili APA, Harvard, Vancouver, ISO e altri
23

Kraus, Andreas. "Model Driven Software Engineering for Web Applications". Diss., lmu, 2007. http://nbn-resolving.de/urn:nbn:de:bvb:19-79362.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Fawaz, Bachir Ahmad. "Estimating the area wide effects of engineering measures on road accident frequency". Thesis, University of Liverpool, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316595.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
25

Karlslätt, David. "Improved Statistics Handling". Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18238.

Testo completo
Abstract (sommario):

Ericsson is a global provider of telecommunications systems equipment and related services for mobile and fixed network operators. 

3Gsim is a tool used by Ericsson in tests of the 3G RNC node.

In order to validate the tests, statistics are constantly gathered within 3Gsim and users can use telnet to access the statistics using some system specific 3Gsim commands.

The statistics can be retrieved but are unstructured for the human eye and need parsing and arranging to be readable.

The statistics handler that is implemented during this thesis provides a possibility for users of 3Gsim to present information that favors their personal interest.

The implementation can produce one prototype output document which contains the most common statistics needed by the 3Gsim user. A main focus of this final thesis has been to simplify content and format control for the user as much as possible.

Presenting and structuring information now comes down to simple text editing and rids the user of the time-consuming work of updating and recompiling the entire application.

Earlier, scripts written in Perl, an imperative language, were used for presenting the statistics. These scripts were often difficult to comprehend since there were many different authors with inadequate experience and knowledge.

The new statistics handler has been written in Java, a high-level object-oriented language which should better suit the users and developers of 3Gsim.
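The parse-and-arrange step the abstract describes can be sketched generically: turn raw counter dumps (as a telnet session might return them) into structured data, then render a report from a simple user-editable template. Counter names and the template format are invented for illustration; the thesis's implementation is in Java.

```python
def parse_counters(raw):
    """Parse 'name = value' counter lines into a dict, skipping junk."""
    counters = {}
    for line in raw.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            counters[key.strip()] = int(value.strip())
    return counters

def render_report(counters, template):
    """Render labeled lines from (label, counter-name) pairs chosen by
    the user, so changing the report is plain text editing."""
    return "\n".join(f"{label}: {counters.get(name, 0)}"
                     for label, name in template)
```

Keeping the template as data rather than code is what spares the user from recompiling the application to change the presentation.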

Gli stili APA, Harvard, Vancouver, ISO e altri
26

Hameed, Faysal, e Mohammad Ejaz. "Model for conflict resolution in aspects within Aspect Oriented Requirement engineering". Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5292.

Testo completo
Abstract (sommario):
Requirement engineering is the most important phase within the software development process, since it is used to extract requirements from the customers that are used by the subsequent phases for designing and implementing the system. Because of its importance, this thesis focuses on aspect-oriented requirement engineering, the first phase in aspect-oriented software development, used for the identification and representation of requirements gathered in the form of concerns. Besides an overall explanation of the aspect-oriented requirement engineering phase, detailed attention is given to a specific activity within the AORE phase called conflict resolution. Several techniques proposed for conflict resolution between aspects are discussed, along with an attempt to give a new idea in the form of an extension of an already proposed model for conflict resolution. The need for the extension is justified by the use of a case study which is applied to both models, i.e. the original model and the extended model, to compare the results.
faysal_hameed@hotmail.com, ijazbutt1@hotmail.com
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Meisner, Mark Joseph. "Heterogeneity in engineering materials: Cases of discrete and statistical disorder". Diss., The University of Arizona, 1994. http://hdl.handle.net/10150/186933.

Testo completo
Abstract (sommario):
This paper presents analytical and numerical models for simulating the elastic and fracture properties of heterogeneous materials such as fiber composites and concrete. When the nonhomogeneous material is idealized as an elasticity problem, it is possible to analyze and solve it by Papkovich-Neuber displacement potentials. The paper examines the elastic fields generated by two elliptic and three circular inclusions. The inhomogeneities undergo either eigenstrain expansion or mechanical loading on the matrix in which they are embedded. Perfectly bonded and slipping interfaces are compared and expressed in infinite series. The results are illustrated by two specific geometries. When the heterogeneous material is composed of not only inclusions but also random voids, microcracks, material gradients, etc., the analytic classical elasticity approach is inconvenient. Hence, simulation is performed using a linear elastic-brittle framework. It is then possible to numerically study the elastic, and more importantly, the fracture characteristics of the solid. Possible fractal dimensions representing roughness are found for correlated Gaussian random materials under various loading conditions. Multifractal properties of the crack dissipation energy are illustrated by the f(α) spectrum. Finally, the correlation between the multifractal properties and the p-model is demonstrated.
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Nguyen, Quang-Thang. "Contributions to Statistical Signal Processing with Applications in Biomedical Engineering". Télécom Bretagne, 2012. http://www.telecom-bretagne.eu/publications/publication.php?idpublication=13290.

Testo completo
Abstract (sommario):
This PhD thesis presents some contributions to statistical signal processing with applications in biomedical engineering. The thesis is separated into two parts. In the first part, the detection of protein interface hotspots (the residues that play the most important role in protein interaction) is considered in the machine learning framework. Random Forests is used as the classifier. A new family of protein hotspot descriptors is also introduced. These descriptors are based exclusively on the primary one-dimensional amino acid sequence; no information on the three-dimensional structure of the protein or the complex is required. These descriptors, capturing the protein frequency characteristics, make it possible to get an insight into how the protein primary sequence can determine its higher structure and its function. In the second part, the RDT (Random Distortion Testing) robust hypothesis test is considered. Its application to signal detection is shown to be resilient to model mismatch. We propose an extension of RDT in the sequential decision framework, namely Sequential RDT. Three classical signal deviation/distortion detection problems are reformulated and cast into the RDT framework. Using RDT and Sequential RDT, we investigate the detection of AutoPEEP (auto-Positive End Expiratory Pressure), a common ventilatory abnormality during mechanical ventilation; this is the first study of its kind in the literature. Extension to the detection of other types of asynchrony is also studied and discussed. These early detectors of AutoPEEP and asynchrony are key elements of an automatic and continuous patient-ventilator interface monitoring framework.
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Muller, Cole. "Reliability analysis of the 4.5 roller bearing". Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2003. http://library.nps.navy.mil/uhtbin/hyperion-image/03Jun%5FMuller.pdf.

Testo completo
Abstract (sommario):
Thesis (M.S. in Applied Science (Operations Research))--Naval Postgraduate School, June 2003.
Thesis advisor(s): David H. Olwell, Samuel E. Buttrey. Includes bibliographical references (p. 65). Also available online.
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Milo, Michael William. "Anomaly Detection in Heterogeneous Data Environments with Applications to Mechanical Engineering Signals & Systems". Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23962.

Testo completo
Abstract (sommario):
Anomaly detection is a relevant problem in the field of Mechanical Engineering, because the analysis of mechanical systems often relies on identifying deviations from what is considered "normal". The mechanical sciences are represented by a heterogeneous collection of data types: some systems may be highly dimensional, may contain exclusively spatial or temporal data, may be spatiotemporally linked, or may be non-deterministic and best described probabilistically. Given the broad range of data types in this field, it is not possible to propose a single processing method that will be appropriate, or even usable, for all data types. This has led to human observation remaining a common, albeit costly and inefficient, approach to detecting anomalous signals or patterns in mechanical data. The advantages of automated anomaly detection in mechanical systems include reduced monitoring costs, increased reliability of fault detection, and improved safety for users and operators. This dissertation proposes a hierarchical framework for anomaly detection through machine learning, and applies it to three distinct and heterogeneous data types: state-based data, parameter-driven data, and spatiotemporal sensor network data. In time-series data, anomaly detection results were robust in synthetic data generated using multiple simulation algorithms, as well as experimental data from rolling element bearings, with highly accurate detection rates (>99% detection, <1% false alarm). Significant developments were shown in parameter-driven data by reducing the sample sizes necessary for analysis, as well as reducing the time required for computation. The event-space model extends previous work into a geospatial sensor network and demonstrates applications of this type of event modeling at various timescales, and compares the model to results obtained using other approaches. 
Each data type is processed in a unique way relative to the others, but all are fitted to the same hierarchical structure for system modeling. This hierarchical model is the key development proposed by this dissertation, and makes both novel and significant contributions to the fields of mechanical analysis and data processing. This work demonstrates the effectiveness of the developed approaches, details how they differ from other relevant industry standard methods, and concludes with a proposal for additional research into other data types.
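The feature-space transformation step the abstract applies to time-series data can be sketched generically: slide a window over the series and summarize each window with static features that a one-class classifier can treat as independent observations. The specific features chosen here (mean, standard deviation, range) are illustrative, not the dissertation's actual feature set.

```python
import statistics

def window_features(series, width, step):
    """Summarize each sliding window of a time series with static
    features: (mean, population std dev, range)."""
    feats = []
    for start in range(0, len(series) - width + 1, step):
        w = series[start:start + width]
        feats.append((statistics.fmean(w),
                      statistics.pstdev(w),
                      max(w) - min(w)))
    return feats
```

Anomaly detection then reduces to flagging windows whose feature vectors fall outside the region learned from normal operation.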
Ph. D.
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Gryder, Ryan W. "Design & Analysis of a Computer Experiment for an Aerospace Conformance Simulation Study". VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4208.

Testo completo
Abstract (sommario):
Within NASA's Air Traffic Management Technology Demonstration #1 (ATD-1), Interval Management (IM) is a flight deck tool that enables pilots to achieve or maintain a precise in-trail spacing behind a target aircraft. Previous research has shown that violations of aircraft spacing requirements can occur between an IM aircraft and its surrounding non-IM aircraft when it is following a target on a separate route. This research focused on the experimental design and analysis of a deterministic computer simulation which models our airspace configuration of interest. Using an original space-filling design and Gaussian process modeling, we found that aircraft delay assignments and wind profiles significantly impact the likelihood of spacing violations and the interruption of IM operations. However, we also found that implementing two theoretical advancements in IM technologies can potentially lead to promising results.
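A standard example of the space-filling designs mentioned above is the Latin hypercube: n points in the unit cube with exactly one point in each of n equal bins along every dimension. The sketch below is a generic construction, not the original design used in the study.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Return n points in [0, 1)^dims forming a Latin hypercube sample:
    along every dimension, each of the n equal-width bins contains
    exactly one point."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        levels = list(range(n))
        rng.shuffle(levels)  # random bin order for this dimension
        # Jitter each point uniformly within its bin.
        columns.append([(lv + rng.random()) / n for lv in levels])
    return [tuple(col[i] for col in columns) for i in range(n)]
```

Such designs spread simulation runs evenly over the input space, which is what makes the subsequent Gaussian process model of the deterministic simulator accurate with few runs.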
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Graf, Franz. "Data and knowledge engineering for medical image and sensor data". Diss., lmu, 2012. http://nbn-resolving.de/urn:nbn:de:bvb:19-151051.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
33

Luo, Wuben. "A comparative assessment of Dempster-Shafer and Bayesian belief in civil engineering applications". Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/28500.

Testo completo
Abstract (sommario):
Bayesian theory has long been the predominant method for dealing with uncertainties in civil engineering practice, including water resources engineering. However, it imposes unnecessarily restrictive requirements on inferential problems. Concerns thus arise about the effectiveness of using Bayesian theory for more general inferential problems. The more recently developed Dempster-Shafer theory appears able to surmount the limitations of Bayesian theory. The new theory was originally proposed as a pure mathematical theory. A reasonable amount of work has been done in trying to adopt it in practice; most of this work relates to inexact inference in expert systems, and all of it remains at a fundamental stage. The purpose of this research is first to compare the two theories and second to apply Dempster-Shafer theory to real problems in water resources engineering. In comparing Bayesian and Dempster-Shafer theory, the equivalence of the two theories in a special case is discussed first. The divergence of results from the Dempster-Shafer and Bayesian approaches in more general situations, where Bayesian theory is unsatisfactory, is then examined. Following this, the conceptual difference between the two theories is argued. Also discussed in the first part of this research is the issue of dealing with evidence, including classifying sources of evidence and expressing them through belief functions. In attempting to adopt Dempster-Shafer theory in engineering practice, Dempster-Shafer decision theory, i.e. the application of Dempster-Shafer theory within the framework of conventional decision theory, is introduced. The application of this new decision theory is demonstrated through a water resources engineering design example.
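The mechanics of combining evidence in Dempster-Shafer theory can be shown concretely. The snippet implements the standard textbook form of Dempster's rule of combination for two basic probability assignments over frozenset focal elements (the hypothesis labels in the test are illustrative).

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    focal elements to masses) by Dempster's rule: multiply masses onto
    intersections, discard conflicting (empty) intersections, and
    renormalize by 1 - K where K is the total conflict."""
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {a: w / k for a, w in combined.items()}
```

The ability to assign mass to sets of hypotheses such as {A, B}, rather than to single hypotheses only, is exactly the flexibility over Bayesian priors that the thesis examines.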
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Sloan, Bethany L. "Engineering at Miami". Miami University Honors Theses / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=muhonors1178308538.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Guyader, Andrew C. "A statistical approach to equivalent linearization with application to performance-based engineering /". Pasadena : California Institute of Technology, Earthquake Engineering Research Laboratory, 2004. http://caltecheerl.library.caltech.edu.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
36

Guyader, Andrew Charles Iwan W. D. "A statistical approach to equivalent linearization with application to performance-based engineering /". Diss., Pasadena, Calif. : California Institute of Technology, 2003. http://resolver.caltech.edu/CaltechETD:etd-06012003-123539.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Svenson, Kristin. "A Microdata Analysis Approach to Transport Infrastructure Maintenance". Doctoral thesis, Högskolan Dalarna, Mikrodataanalys, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:du-23576.

Testo completo
Abstract (sommario):
Maintenance of transport infrastructure assets is widely advocated as the key to minimizing current and future costs of the transportation network. While effective maintenance decisions are often a result of engineering skills and practical knowledge, efficient decisions must also account for the net result over an asset's life-cycle. One essential aspect of the long-term perspective on transport infrastructure maintenance is to proactively estimate maintenance needs. For immediate maintenance actions, support tools that can prioritize potential maintenance candidates are important for an efficient maintenance strategy. This dissertation consists of five individual research papers presenting a microdata analysis approach to transport infrastructure maintenance. Microdata analysis is a multidisciplinary field in which large quantities of data are collected, analyzed, and interpreted to improve decision-making. Increased access to transport infrastructure data enables a deeper understanding of causal effects and a possibility to make predictions of future outcomes. The microdata analysis approach covers the complete process from data collection to actual decisions and is therefore well suited to the task of improving efficiency in transport infrastructure maintenance. Statistical modeling was the selected analysis method in this dissertation and provided solutions to the problems presented in each of the five papers. In Paper I, a time-to-event model was used to estimate remaining road pavement lifetimes in Sweden. In Paper II, an extension of the model in Paper I assessed the impact of latent variables on road lifetimes, revealing the sections in a road network that are weaker due to, e.g., subsoil conditions or undetected heavy traffic. The study in Paper III incorporated a probabilistic parametric distribution as a representation of road lifetimes into an equation for the marginal cost of road wear. Differentiated road wear marginal costs for heavy and light vehicles are an important information basis for decisions regarding vehicle miles traveled (VMT) taxation policies. In Paper IV, a distribution-based clustering method was used to distinguish between road segments that are deteriorating and road segments that have a stationary road condition. Within railway networks, temporary speed restrictions are often imposed because of maintenance and must be addressed in order to maintain punctuality. The study in Paper V evaluated the empirical effect of speed restrictions on running time on a Norwegian railway line using a generalized linear mixed model.
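The kind of parametric time-to-event reasoning used in Papers I-III can be illustrated with a minimal sketch, assuming a Weibull lifetime distribution; the scale and shape values below are hypothetical, not the dissertation's estimates.

```python
import math

def weibull_survival(t, scale, shape):
    """S(t) = exp(-(t/scale)**shape): probability a pavement section
    is still serviceable at age t."""
    return math.exp(-((t / scale) ** shape))

def conditional_remaining_life(age, scale, shape, p=0.5):
    """Extra years until a section that has already reached `age` fails
    with conditional probability p, by inverting S(t) analytically."""
    target = weibull_survival(age, scale, shape) * (1.0 - p)
    t = scale * (-math.log(target)) ** (1.0 / shape)
    return t - age

# Hypothetical pavement: characteristic life 20 years, shape 2 (wear-out).
median_life = conditional_remaining_life(0.0, 20.0, 2.0)  # median lifetime, new section
remaining = conditional_remaining_life(10.0, 20.0, 2.0)   # median remaining life at age 10
```

The shape parameter above 1 encodes an increasing hazard, the usual assumption for deteriorating pavements; the conditional quantile is what a maintenance planner would feed into a prioritization rule.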
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Asenov, Plamen. "Accurate statistical circuit simulation in the presence of statistical variability". Thesis, University of Glasgow, 2013. http://theses.gla.ac.uk/4996/.

Testo completo
Abstract (sommario):
Semiconductor device performance variation due to the granular nature of charge and matter has become a key problem in the semiconductor industry. The main sources of this ‘statistical’ variability include random discrete dopants (RDD), line edge roughness (LER) and metal gate granularity (MGG). These variability sources have been studied extensively; however, a methodology has not been developed to accurately represent this variability at a circuit and system level. In order to accurately represent statistical variability in real devices, the GSS simulation toolchain was utilised to simulate 10,000 20/22nm n- and p-channel transistors including the RDD, LER and MGG variability sources. A statistical compact modelling methodology was developed which accurately captured the behaviour of the simulated transistors, and produced compact model parameter distributions suitable for advanced compact model generation strategies like PCA and NPM. The resultant compact model libraries were then utilised to evaluate the impact of statistical variability on SRAM design, and to quantitatively evaluate the difference between accurate compact model generation using NPM and the Gaussian VT methodology. Over 5 million dynamic write simulations were performed, and showed that at advanced technology nodes, statistical variability cannot be accurately represented using a Gaussian VT. The results also show that accurate modelling techniques can help reduce design margins by eliminating some of the pessimism of standard variability modelling approaches.
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Hernandez, J. A. "Statistics of aberrations in polycrystalline materials". Thesis, University of Nottingham, 2007. http://eprints.nottingham.ac.uk/13948/.

Testo completo
Abstract (sommario):
This thesis is concerned with the propagation of elastic waves in polycrystalline materials, in particular with establishing a relationship between the statistical properties of the wavefield and the statistical properties of the material via a correlation function. Here the study of elastic waves has been restricted to surface acoustic waves (SAWs), mainly because they are readily accessible using an optical scanning acoustic microscope (OSAM).
Gli stili APA, Harvard, Vancouver, ISO e altri
40

König, Ralf. "Engineering of IT Management Automation along Task Analysis, Loops, Function Allocation, Machine Capabilities". Diss., lmu, 2010. http://nbn-resolving.de/urn:nbn:de:bvb:19-126492.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
41

Borrotti, Matteo <1981&gt. "An evolutionary approach to the design of experiments for combinatorial optimization with an application to enzyme engineering". Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3422/.

Testo completo
Abstract (sommario):
In a large number of problems, the high dimensionality of the search space, the vast number of variables and the economic constraints limit the ability of classical techniques to reach the optimum of a function, known or unknown. In this thesis we investigate the possibility of combining approaches from advanced statistics and optimization algorithms in such a way as to better explore the combinatorial search space and to increase the performance of the approaches. To this purpose we propose two methods: (i) Model Based Ant Colony Design and (ii) Naïve Bayes Ant Colony Optimization. We test the performance of the two proposed solutions in a simulation study and we apply the novel techniques to an application in the field of Enzyme Engineering and Design.
Gli stili APA, Harvard, Vancouver, ISO e altri
42

Ghoudi, Kilani. "Multivariate non-parametric quality control statistics". Thesis, University of Ottawa (Canada), 1990. http://hdl.handle.net/10393/5658.

Testo completo
Abstract (sommario):
During the startup phase of a production process, while statistics on product quality are being collected, it is useful to establish that the process is under control. Small samples $\{n_i\}_{i=1}^{q}$ are taken periodically for $q$ periods. We shall assume each measurement is bivariate. A process is under control, or on-target, if all the observations are deemed to be independent and identically distributed and, moreover, the distribution of each observation is a product distribution. This would be the case if each coordinate of an observation is a nominal value plus noise. Let $F^i$ represent the empirical distribution function of the $i$-th sample. Let $\overline{F}$ represent the empirical distribution function of all observations. Following Lehmann (1951) we propose statistics of the form$$\sum_{i=1}^{q}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left(F^i(s,t) - \overline{F}(s)\,\overline{F}(t)\right)^2 \, d\overline{F}(s,t)\eqno(1)$$The emphasis there, however, is on the case where $n_i \to \infty$ while $q$ stays fixed. Here we study the following family of statistics$$S_q = \sum_{i=1}^{q}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} k_q\!\left(n, i, F^i(s,t), \overline{F}(s)\,\overline{F}(t)\right) n_i \, dF^i(s,t)\eqno(2)$$in the above quality control situation, where $q \to \infty$ while $n_i$ stays fixed. (Abstract shortened by UMI.)
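The Lehmann-type form (1) can be evaluated by brute force over the pooled observations: for each period, compare that period's bivariate empirical CDF with the product of the pooled marginal empirical CDFs. The bivariate samples below are made up for illustration.

```python
def ecdf2(sample, s, t):
    """Bivariate empirical CDF of `sample` (a list of (x, y) pairs) at (s, t)."""
    return sum(1 for (x, y) in sample if x <= s and y <= t) / len(sample)

def control_statistic(samples):
    """Sum over periods of the squared gap between each period's bivariate
    ECDF F^i and the product of pooled marginal ECDFs, integrated against
    the pooled empirical distribution (the form of statistic (1))."""
    pooled = [pt for smp in samples for pt in smp]
    n = len(pooled)
    fx = lambda s: sum(1 for (x, _) in pooled if x <= s) / n
    fy = lambda t: sum(1 for (_, y) in pooled if y <= t) / n
    total = 0.0
    for smp in samples:
        for (s, t) in pooled:  # integrate d F-bar over the pooled points
            total += (ecdf2(smp, s, t) - fx(s) * fy(t)) ** 2 / n
    return total

# Two small on-target-looking periods (nominal value 10 plus noise)
periods = [[(10.1, 9.9), (9.8, 10.2), (10.0, 10.0)],
           [(9.9, 10.1), (10.2, 9.8), (10.0, 10.1)]]
stat = control_statistic(periods)
```

An on-target process yields small values; period samples that are shifted, or whose coordinates are dependent, inflate the statistic and signal that the process is off-target.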
Gli stili APA, Harvard, Vancouver, ISO e altri
43

vilhu, daniel, e Urban Säfström. "Byggavfall vid nybyggnation : En studie om Projekt Hammarby Sjöstad". Thesis, Högskolan i Gävle, Institutionen för teknik och byggd miljö, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-5891.

Testo completo
Abstract (sommario):
This work was commissioned by the City of Stockholm Development Administration (Exploateringskontoret) and covers six building contractors, all of which are building in the Hammarby Sjöstad district where the study was carried out. Together they are constructing a total of 1,126 apartments, as well as five shops, six commercial premises and a day-care centre. In addition there are two garages whose waste is included in the statistics. The work set out to investigate how much construction waste is generated per newly produced apartment, why it is created, and at what point in the construction phase this occurs. The associated costs were also to be calculated, taking into account the purchase cost of the material that becomes waste as well as the waste contractors' post-handling fees. The question originates in indications from various building contractors that roughly two tonnes of waste are generated per newly produced apartment. The purpose is to provide a basis for eventually creating better routines for the design and production of buildings that enable a reduction of construction waste, and to contribute to greater awareness of waste sorting among construction workers and building contractors. The report is based on personal interviews, telephone interviews and waste statistics from the participating building contractors. Rough assumptions based on the opinions of people with insight into the industry had to be made in several cases. The result is based on an average of all the contractors' waste statistics and shows that about three and a half tonnes of waste are generated per newly produced apartment. Most of the waste begins to arise shortly after the halfway point of the construction phase and consists mostly of the waste fractions Unsorted and Combustible. The task of determining the cause of the waste proved beyond us, since the time available was insufficient. Instead, a waste diagram was created with the constituent waste fractions broken down. This diagram runs parallel to a general timetable developed for the construction phase. It is therefore possible to read off where in the construction phase the work is when a certain type of waste is generated. For more detailed answers, this report is a good basis for further studies of this and related questions.
Hammarby Sjöstad
Gli stili APA, Harvard, Vancouver, ISO e altri
44

Alterovitz, Gil 1975. "A Bayesian framework for statistical signal processing and knowledge discovery in proteomic engineering". Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34479.

Testo completo
Abstract (sommario):
Thesis (Ph. D.)--Harvard-MIT Division of Health Sciences and Technology, February 2006.
Includes bibliographical references (leaves 73-85).
Proteomics has been revolutionized in the last couple of years through the integration of new mass spectrometry technologies such as Surface-Enhanced Laser Desorption/Ionization (SELDI) mass spectrometry. As data is generated in an increasingly rapid and automated manner, novel and application-specific computational methods will be needed to deal with all of this information. This work seeks to develop a Bayesian framework in mass-based proteomics for protein identification. Using the Bayesian framework in a statistical signal processing manner, mass spectrometry data is filtered and analyzed in order to estimate protein identity. This is done by a multi-stage process which compares probabilistic networks generated from mass spectrometry-based data with a mass-based network of protein interactions. In addition, such models can provide insight on features of existing models by identifying relevant proteins. This work finds that the search space of potential proteins can be reduced such that simple antibody-based tests can be used to validate protein identity. This is done with real proteins as a proof of concept. Regarding protein interaction networks, the largest human protein interaction meta-database was created as part of this project, containing over 162,000 interactions. A further contribution is the implementation of the massome network database of mass-based interactions, which is used in the protein identification process. This network is explored in terms of its potential usefulness for protein identification. The framework provides an approach to a number of core issues in proteomics. Besides providing these tools, it yields a novel way to approach statistical signal processing problems in this domain in a way that can be adapted as proteomics-based technologies mature.
by Gil Alterovitz.
Ph.D.
Gli stili APA, Harvard, Vancouver, ISO e altri
45

Kamsani, Noor 'Ain. "Statistical circuit simulations - from ‘atomistic’ compact models to statistical standard cell characterisation". Thesis, University of Glasgow, 2011. http://theses.gla.ac.uk/2720/.

Testo completo
Abstract (sommario):
This thesis describes the development and application of statistical circuit simulation methodologies to analyse digital circuits subject to intrinsic parameter fluctuations. The specific nature of intrinsic parameter fluctuations is discussed, and we explain the crucial importance to the semiconductor industry of developing design tools which accurately account for their effects. Current work in the area is reviewed, and three important factors are made clear: any statistical circuit simulation methodology must be based on physically correct, predictive models of device variability; the statistical compact models describing device operation must be characterised for accurate transient analysis of circuits; and analysis must be carried out on realistic circuit components. Improving on previous efforts in the field, we propose a statistical circuit simulation methodology which accounts for all three of these factors. The established 3-D Glasgow atomistic simulator is employed to predict electrical characteristics for devices aimed at digital circuit applications, with gate lengths from 35 nm to 13 nm. Using these electrical characteristics, extraction of BSIM4 compact models is carried out and their accuracy in performing transient analysis using SPICE is validated against well-characterised mixed-mode TCAD simulation results for 35 nm devices. Static d.c. simulations are performed to test the methodology, and a useful analytic model to predict hard logic fault limitations on CMOS supply voltage scaling is derived as part of this work. Using our toolset, the effect of statistical variability introduced by random discrete dopants on the dynamic behaviour of inverters is studied in detail. As devices scale, the dynamic noise margin variation of an inverter increases, while a higher output load or input slew rate improves the noise margins and their variation. Intrinsic delay variation based on the CV/I delay metric is also compared using the ION and IEFF definitions, where the best estimate is obtained when considering ION and input transition time variations. The critical delay distribution of a path is also investigated and shown to be non-Gaussian. Finally, the impact of the cell input slew rate definition on the accuracy of inverter cell timing characterisation in NLDM format is investigated.
Gli stili APA, Harvard, Vancouver, ISO e altri
46

Svensson, Johanna. "Avfall på byggarbetsplatsen : statistik som hjälper platschefen". Thesis, Linköping University, Department of Science and Technology, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1991.

Testo completo
Abstract (sommario):

The purpose of this report is to find a model of statistics for construction and demolition waste that makes the statistics useful for the site manager at the building site. The intention is that the model will support the site manager in directing the project towards reduced environmental impact. The model will also make it easier to verify that regulatory requirements and environmental targets are met.

To become familiar with the subject of waste handling in the building and construction sector, I carried out a literature search. I have also studied the rules and regulations governing waste handling. The main investigation is based on interviews with site managers. I also spoke to waste contractors to get their view on the subject. To find out how site managers want the statistics presented, I conducted an opinion poll.

The investigations indicate that the statistics are rarely used at the building site at all. Among the site managers, however, there is interest in receiving statistics on the separation of construction and demolition waste, and encouragement to take the matter further.

Financial information was in great demand in the statistics, because finances govern most of the manager's work. The site managers feel it is easy to check how much of the waste goes to landfill. In fact this is a problem, as part of the unsorted waste goes to landfill at the next stage, and most waste contractors' statistics do not specify how much.

Gli stili APA, Harvard, Vancouver, ISO e altri
47

Fernandez, Noemi. "Statistical information processing for data classification". FIU Digital Commons, 1996. http://digitalcommons.fiu.edu/etd/3297.

Testo completo
Abstract (sommario):
This thesis introduces new algorithms for analysis and classification of multivariate data. Statistical approaches are devised for the objectives of data clustering, data classification and object recognition. An initial investigation begins with the application of fundamental pattern recognition principles. Where such fundamental principles meet their limitations, statistical and neural algorithms are integrated to augment the overall approach for an enhanced solution. This thesis provides a new dimension to the problem of classification of data as a result of the following developments: (1) application of algorithms for object classification and recognition; (2) integration of a neural network algorithm which determines the decision functions associated with the task of classification; (3) determination and use of the eigensystem using newly developed methods with the objectives of achieving optimized data clustering and data classification, and dynamic monitoring of time-varying data; and (4) use of the principal component transform to exploit the eigensystem in order to perform the important tasks of orientation-independent object recognition, and dimensionality reduction of the data such as to optimize the processing time without compromising accuracy in the analysis of this data.
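The eigensystem-based dimensionality reduction in points (3) and (4) can be illustrated with a tiny principal-component sketch, using power iteration on the sample covariance matrix; the data points are made up for illustration.

```python
def principal_component(data, iters=200):
    """Dominant eigenvector of the sample covariance matrix via power
    iteration; `data` is a list of equal-length feature vectors."""
    d, n = len(data[0]), len(data)
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - mean[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread mainly along the x = y direction
pts = [[0.0, 0.1], [1.0, 0.9], [2.0, 2.2], [3.0, 2.9], [4.0, 4.1]]
pc = principal_component(pts)
```

Projecting the data onto `pc` keeps the direction of maximum variance while discarding the near-noise orthogonal direction, which is the processing-time optimization the abstract refers to.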
Gli stili APA, Harvard, Vancouver, ISO e altri
48

Nissen, Arne. "Analys av statistik om spårväxlars underhållsbehov". Licentiate thesis, Luleå, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-16799.

Testo completo
Abstract (sommario):
Banverket (the Swedish Rail Administration) needs analyses of the causes of functional failures and delay times in the infrastructure. Switches and crossings (turnouts) are among the assets with many functional failures. The aim of the study has been to develop a working method for assessing the reliability of individual turnouts. The long-term goal is to reduce the number of failures in Banverket's turnouts and the delays they cause. A mathematical model has been used to present the information on turnouts collected in Banverket's data systems. The model is based on the theory of the non-homogeneous Poisson process and is displayed graphically in a spreadsheet. Known factors can be specified for each turnout, making it possible to test how much influence individual factors have. A literature study was carried out to propose factors, which can be divided into: initial conditions, train traffic, age, and climate. The factors are used to determine whether a turnout can be considered normal. If it is expected to have more inspection remarks or functional failures than normal, it is placed in a "risk group". Given the group a turnout belongs to, the type of turnout, the annual tonnage and the age, the model can predict the number of inspection remarks and functional failures. Turnouts that, after the division by factors has been made, are found to lie outside the model's prediction interval can easily be identified. The method has been applied in a few sub-studies and used to explain the number of inspection remarks and functional failures for individual turnouts or groups of turnouts at track-section level. A comprehensive explanation of the number of inspection remarks and functional failures for all turnouts was beyond the scope of this project, and the available information needs to be supplemented with, among other things: the use of diverging tracks; the trains' weight, number of axles and speed; and the infrastructure manager's maintenance strategy. The working method has proved applicable, and in the future it will be developed so that it can be used to assess the life-cycle cost of turnouts.
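The non-homogeneous Poisson process underlying the model can be sketched with a power-law intensity; the parameter values and the simple two-sided prediction check below are illustrative assumptions, not Banverket's actual model.

```python
def expected_failures(a, b, t):
    """Mean failure count by time t under an NHPP with power-law
    intensity lambda(t) = a * b * t**(b - 1); its integral is a * t**b."""
    return a * t ** b

def outside_prediction(observed, a, b, t, width=2.0):
    """Crude prediction-interval check: flag a turnout whose observed count
    is more than `width` standard deviations from the Poisson mean
    (for a Poisson count, variance equals the mean)."""
    mean = expected_failures(a, b, t)
    return abs(observed - mean) > width * mean ** 0.5

# Hypothetical turnout: a = 0.5, b = 1.2 (slowly increasing failure rate)
mean_10y = expected_failures(0.5, 1.2, 10.0)    # expected failures over 10 years
flagged = outside_prediction(20, 0.5, 1.2, 10.0)
```

A turnout flagged this way would, in the study's terms, be placed in a "risk group": its observed failure count is not explained by the known factors feeding the model.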

Approved; 2005; 20061213 (haneit)

Gli stili APA, Harvard, Vancouver, ISO e altri
49

Sharma, Vikas. "A new modeling methodology combining engineering and statistical modeling methods : a semiconductor manufacturing application". Thesis, Massachusetts Institute of Technology, 1996. http://hdl.handle.net/1721.1/10686.

Testo completo
Gli stili APA, Harvard, Vancouver, ISO e altri
50

Ma, Xiao. "Ontology engineering for ICT systems using semantic relationship mining and statistical social network analysis". Thesis, University of Warwick, 2011. http://wrap.warwick.ac.uk/63881/.

Testo completo
Abstract (sommario):
In information science, ontology is a formal representation of knowledge as a set of concepts within a domain, and the relationships between those concepts. It is used to reason about the entities within that domain, and may be used to describe the domain. (Wikipedia, 2011) This research takes two case study ICT applications in engineering and medicine, and evaluates the applications and supporting ontology to identify the main requirements for ontology in ICT systems. A study of existing ontology engineering methodology revealed difficulties in generating sufficient breadth and depth in domain concepts that contain rich internal relationships. These restrictions usually arise because of a heavy dependence on human experts in these methodologies. This research has developed a novel ontology engineering methodology – SEA, which economically, quickly and reliably generates ontology for domains that can provide the breadth and depth of coverage required for automated ICT systems. Normally SEA only requires three pairs of keywords from a domain expert. Through an automated snowballing mechanism that retrieves semantically related terms from the Internet, ontology can be generated relatively quickly. This mechanism also enhances and enriches the binary relationships in the generated ontology to form a network structure, rather than a traditional hierarchy structure. The network structure can then be analysed through a series of statistical network analysis methods. These enable concept investigation to be undertaken from multiple perspectives, with fuzzy matching and enhanced reasoning through directional weight-specified relationships. The SEA methodology was used to derive medical and engineering ontology for two existing ICT applications. The derived ontology was quicker to generate, relied less on expert contribution, and provided richer internal relationships. 
The methodology potentially has the flexibility and utility to be of benefit in a wide range of applications. SEA also exhibits "reliability" and "generalisability" as an ontology engineering methodology. It appears to have application potential in areas such as machine translation, semantic tagging and knowledge discovery. Future work needs to confirm its potential for generating ontology in other domains, and to assess its operation in semantic tagging and knowledge discovery.
Gli stili APA, Harvard, Vancouver, ISO e altri

Vai alla bibliografia