
Dissertations / Theses on the topic 'Mechanics, data processing'

Consult the top 50 dissertations / theses for your research on the topic 'Mechanics, data processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Xu, Lin. "Data modeling and processing in deregulated power system." Online access for everyone, 2005. http://www.dissertations.wsu.edu/Dissertations/Spring2005/l%5Fxu%5F022805.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhu, Tulong. "Meshless methods in computational mechanics." Diss., Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/11795.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wyatt, Timothy Robert. "Development and evaluation of an educational software tool for geotechnical engineering." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/20225.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Leung, Tsui-shan, and 梁翠珊. "A functional analysis of GIS for slope management in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fortier, Hélène. "AFM Indentation Measurements and Viability Tests on Drug Treated Leukemia Cells." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34345.

Full text
Abstract:
A significant body of literature has reported strategies and techniques to assess the mechanical properties of biological samples such as proteins, cellular and tissue systems. Atomic force microscopy has been used to detect elasticity changes of cancer cells. However, only a few studies have provided a detailed and complete protocol of the experimental procedures and data analysis methods for non-adherent blood cancer cells. In this work, the elasticity of NB4 cells derived from acute promyelocytic leukemia (APL) was probed by AFM indentation measurements to investigate the effects of the disease on cellular biomechanics. Understanding how leukemia influences the nanomechanical properties of cells is expected to provide a better understanding of the cellular mechanisms associated with cancer, and promises to become a valuable new tool for cancer detection and staging. In this context, the quantification of the mechanical properties of APL cells requires a systematic and optimized approach to data collection and analysis in order to generate reproducible and comparable data. This thesis elucidates the automated data analysis process that integrates programming, force curve collection and analysis optimization to assess variations of cell elasticity in response to processing criteria. A processing algorithm was developed using the IGOR Pro software to automatically analyze large numbers of AFM data sets in an efficient and accurate manner. Since the analysis involves multiple steps that must be repeated for many individual cells, an automated and unbiased processing approach is essential to precisely determine cell elasticity. Different fitting models for extracting the Young's modulus were systematically applied to validate the process, and the best fitting criteria, such as the contact point location and indentation length, were determined in order to obtain consistent results. The automated processing code described in this thesis was used to correlate alterations in the cellular biomechanics of cancer cells as they undergo drug treatments. In order to fully assess drug effects on NB4 cells, viability assays were first performed using Trypan Blue staining for primary insights, before initiating thorough microplate fluorescence intensity readings using a LIVE/DEAD viability kit involving ethidium and calcein AM labelling components. From 0 to 24 h after treatment using 30 µM arsenic trioxide, relative live cell populations increased until 36 h. From 0 to 12 h post-treatment, relative populations of dead cells increased until 24 h post-treatment. Furthermore, a drastic drop in the dead cell count was observed between 12 and 24 h. Additionally, arsenic trioxide-induced alterations in the elasticity of NB4 cells can be correlated with the cell viability tests. With respect to cell mechanics, trapping the non-adherent NB4 cells within fabricated SU8-10 microwell arrays allowed consistent AFM indentation measurements up to 48 h after treatment. Results revealed an increase in cell elasticity up to 12 h post-treatment and a drastic decrease between 12 and 24 h. In addition to these indentation and viability testing approaches, morphological appearances were monitored in order to track the apoptosis process of the affected cells. Relationships found between viability and elasticity assays, in conjunction with morphology alterations, revealed distinct stages of apoptosis throughout treatment. 24 h after initial treatment, most cells were observed to have burst or displayed obvious blebbing. These relations between different measurement methods may reveal a potential drug screening approach for understanding specific physical and biological effects of drugs on cancer cells.
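The thesis extracts Young's moduli by fitting contact-mechanics models to AFM force-indentation curves in IGOR Pro. Purely as an illustration of that fitting step, the sketch below performs a Hertzian spherical-tip fit in Python; the tip radius, Poisson ratio and synthetic force curve are assumptions made for the example, not values or code from the thesis.

```python
# Minimal sketch: extract an apparent Young's modulus from an AFM
# force-indentation curve, assuming a Hertzian spherical-tip contact model.
import numpy as np
from scipy.optimize import curve_fit

R = 5e-6     # assumed tip radius [m]
NU = 0.5     # assumed Poisson ratio for a soft cell (nearly incompressible)

def hertz_sphere(delta, E):
    """Hertz force [N] for indentation depth delta [m] and Young's modulus E [Pa]."""
    return (4.0 / 3.0) * (E / (1.0 - NU**2)) * np.sqrt(R) * delta**1.5

# Synthetic data standing in for one post-contact segment of a force curve.
delta = np.linspace(0.0, 1.0e-6, 200)                # indentation depth [m]
force = hertz_sphere(delta, 800.0)                   # "true" modulus of 800 Pa
force += np.random.normal(0.0, 2e-11, delta.size)    # measurement noise

(E_fit,), cov = curve_fit(hertz_sphere, delta, force, p0=[500.0])
print(f"fitted Young's modulus: {E_fit:.0f} Pa (+/- {np.sqrt(cov[0, 0]):.0f} Pa)")
```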
APA, Harvard, Vancouver, ISO, and other styles
6

Chiu, Cheng-Jung. "Data processing in nanoscale profilometry." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/36677.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1995.
Includes bibliographical references (p. 176-177).
New developments on the nanoscale are taking place rapidly in many fields. Instrumentation used to measure and understand the geometry and properties of small-scale structures is therefore essential. One of the most promising devices for taking measurement science into the nanoscale is the scanning probe microscope. A prototype of a nanoscale profilometer based on the scanning probe microscope has been built in the Laboratory for Manufacturing and Productivity at MIT. A sample is placed on a precision flip stage and different sides of the sample are scanned under the SPM to acquire the surface topography of each side. To reconstruct the original three-dimensional profile, techniques such as digital filtering, edge identification, and image matching are investigated and implemented in the computer programs that post-process the data, with particular emphasis on the nanoscale application. The important programming issues are also addressed. Finally, the system's error sources are discussed and analyzed.
by Cheng-Jung Chiu.
M.S.
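The image-matching step described above is specific to the MIT prototype; as a generic, greatly simplified illustration of the idea (not the thesis code), the lateral offset between two overlapping height profiles can be estimated by cross-correlation:

```python
# Toy example: estimate the offset between two overlapping 1D height profiles
# by cross-correlation, a basic form of the "image matching" step.
import numpy as np

rng = np.random.default_rng(0)
surface = rng.normal(0.0, 1.0, 600)      # synthetic roughness profile (arbitrary units)

scan_a = surface[0:400]                  # first scan
scan_b = surface[150:550]                # second scan, overlapping scan_a

a = scan_a - scan_a.mean()
b = scan_b - scan_b.mean()
corr = np.correlate(a, b, mode="full")   # cross-correlation over all lags
lag = int(np.argmax(corr)) - (len(b) - 1)

print(f"estimated offset of scan_b within scan_a: {lag} samples (true value: 150)")
```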
APA, Harvard, Vancouver, ISO, and other styles
7

Turel, Mesut. "Soft computing based spatial analysis of earthquake triggered coherent landslides." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45909.

Full text
Abstract:
Earthquake triggered landslides cause loss of life, destroy structures, roads, powerlines, and pipelines and therefore they have a direct impact on the social and economic life of the hazard region. The damage and fatalities directly related to strong ground shaking and fault rupture are sometimes exceeded by the damage and fatalities caused by earthquake triggered landslides. Even though future earthquakes can hardly be predicted, the identification of areas that are highly susceptible to landslide hazards is possible. For geographical information systems (GIS) based deterministic slope stability and earthquake-induced landslide analysis, the grid-cell approach has been commonly used in conjunction with the relatively simple infinite slope model. The infinite slope model together with Newmark's displacement analysis has been widely used to create seismic landslide susceptibility maps. The infinite slope model gives reliable results in the case of surficial landslides with depth-length ratios smaller than 0.1. On the other hand, the infinite slope model cannot satisfactorily analyze deep-seated coherent landslides. In reality, coherent landslides are common and these types of landslides are a major cause of property damage and fatalities. In the case of coherent landslides, two- or three-dimensional models are required to accurately analyze both static and dynamic performance of slopes. These models are rarely used in GIS-based landslide hazard zonation because they are numerically expensive compared to one dimensional infinite slope models. Building metamodels based on data obtained from computer experiments and using computationally inexpensive predictions based on these metamodels has been widely used in several engineering applications. With these soft computing methods, design variables are carefully chosen using a design of experiments (DOE) methodology to cover a predetermined range of values and computer experiments are performed at these chosen points. The design variables and the responses from the computer simulations are then combined to construct functional relationships (metamodels) between the inputs and the outputs. In this study, Support Vector Machines (SVM) and Artificial Neural Networks (ANN) are used to predict the static and seismic responses of slopes. In order to integrate the soft computing methods with GIS for coherent landslide hazard analysis, an automatic slope profile delineation method from Digital Elevation Models is developed. The integrated framework is evaluated using a case study of the 1989 Loma Prieta, CA earthquake (Mw = 6.9). A seismic landslide hazard analysis is also performed for the same region for a future scenario earthquake (Mw = 7.03) on the San Andreas Fault.
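For reference, the infinite slope model and Newmark's rigid-block analysis mentioned above have simple closed forms; the sketch below evaluates them for an illustrative slope (all parameter values are placeholders, not data from the dissertation):

```python
# Infinite-slope factor of safety and Newmark critical acceleration (illustrative values).
import numpy as np

def infinite_slope_fs(c, phi, gamma, z, beta, m=0.0, gamma_w=9.81):
    """Static factor of safety of an infinite slope.

    c       effective cohesion [kPa]
    phi     effective friction angle [rad]
    gamma   unit weight of the soil [kN/m^3]
    z       depth of the failure surface [m]
    beta    slope angle [rad]
    m       fraction of the slide thickness that is saturated (0..1)
    """
    resisting = c + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

beta = np.radians(30.0)
fs = infinite_slope_fs(c=10.0, phi=np.radians(32.0), gamma=19.0, z=2.5, beta=beta, m=0.3)

# Newmark critical (yield) acceleration of the sliding block, expressed in g.
a_c = (fs - 1.0) * np.sin(beta)
print(f"FS = {fs:.2f}, critical acceleration a_c = {a_c:.2f} g")
```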
APA, Harvard, Vancouver, ISO, and other styles
8

Roland, Jérémie. "Adiabatic quantum computation." Doctoral thesis, Universite Libre de Bruxelles, 2004. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211148.

Full text
Abstract:
The development of Quantum Computation Theory stems from the idea that a computer is above all a physical system, so that the laws of Nature themselves constitute an ultimate limit on what can and cannot be computed. Interest in this discipline was stimulated by Peter Shor's discovery of a fast quantum algorithm for factoring a number, whereas no such algorithm is currently known in Classical Computation Theory. Another important result was Lov Grover's construction of an algorithm capable of finding an element in an unstructured database with a quadratic gain in complexity over any classical algorithm. While these quantum algorithms are expressed in the 'standard' model of Quantum Computation, where the register evolves discretely in time under the successive application of quantum gates, a new type of algorithm was recently introduced in which the register evolves continuously in time under the action of a Hamiltonian. The idea underlying Adiabatic Quantum Computation, proposed by Edward Farhi and his collaborators, is thus to use a traditional tool of Quantum Mechanics, namely the Adiabatic Theorem, to design quantum algorithms where the register evolves under the influence of a very slowly varying Hamiltonian, ensuring an adiabatic evolution of the system. In this thesis, we first show how to reproduce the quadratic gain of Grover's algorithm by means of an adiabatic quantum algorithm. We then show that it is possible to translate this new adiabatic algorithm, as well as another Hamiltonian-evolution search algorithm, into the quantum circuit formalism, yielding three quantum search algorithms that are very close in principle. We subsequently use these results to construct an adiabatic algorithm for solving structured problems, using a technique known as 'nesting' previously developed in the context of circuit-based quantum algorithms. Finally, we analyse the noise resistance of these adiabatic algorithms by introducing a noise model based on random matrix theory and studying its effect using perturbation theory.
Doctorate in Applied Sciences
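For readers unfamiliar with the framework summarized above, a commonly quoted heuristic form of the adiabatic condition on the total evolution time T of an interpolating Hamiltonian H(s), with s = t/T, is

\[
T \;\gg\; \max_{s\in[0,1]} \frac{\bigl|\langle e_1(s)\,|\,\tfrac{dH}{ds}\,|\,e_0(s)\rangle\bigr|}{g(s)^{2}},
\qquad g(s) = E_1(s) - E_0(s),
\]

where \(|e_0(s)\rangle\) and \(|e_1(s)\rangle\) are the two lowest instantaneous eigenstates and g(s) the gap between them. For unstructured search over N items the minimum gap scales as \(1/\sqrt{N}\), and adapting the sweep rate locally to the instantaneous gap, rather than using a uniform sweep, is what recovers the Grover-like running time \(T = O(\sqrt{N})\) instead of O(N).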
APA, Harvard, Vancouver, ISO, and other styles
9

Cevikbas, Orcun. "Data Acquisition And Processing Interface Development For 3d Laser Rangefinder." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607514/index.pdf.

Full text
Abstract:
This study aims to improve a previously developed data acquisition program that ran under DOS and a 2D surface reconstruction program that ran under Windows. A new system is set up, and both the data acquisition and the processing software are developed so that data can be collected and processed within a single Windows application. The main goal of the thesis is to acquire and process range data taken from the laser rangefinder in order to construct 3D image maps of simple objects in different positions in indoor environments. The data acquisition program collects data in a helical fashion; to do this, appropriate parameters for the data acquisition interface are determined. In the data processing step, basic triangulation algorithms and threshold conditions are used to calculate resolutions, detect noisy points and segment objects from the environment for line fitting. The data acquisition and processing interfaces developed and implemented in the thesis are capable of creating a 3D image map and providing a view of the scanned environment in a short time and with high accuracy.
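As a rough illustration of turning a helical range scan into a 3D point cloud (the geometry, parameter names and values below are assumptions for the example, not taken from the thesis), each sample can be converted from rotation angle, range and axial advance to Cartesian coordinates:

```python
# Illustrative conversion of a helical laser-rangefinder scan into 3D points.
# Assumed geometry: the scan head rotates about the z-axis while advancing along it.
import numpy as np

n_samples = 1000
pitch = 0.01                                        # axial advance per revolution [m]
theta = np.linspace(0.0, 8 * 2 * np.pi, n_samples)  # 8 full revolutions
r = 1.0 + 0.05 * np.sin(5 * theta)                  # synthetic range readings [m]

x = r * np.cos(theta)
y = r * np.sin(theta)
z = pitch * theta / (2 * np.pi)                     # helical advance along the axis

points = np.column_stack((x, y, z))                 # (n_samples, 3) point cloud
print(points.shape)
```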
APA, Harvard, Vancouver, ISO, and other styles
10

Einstein, Noah. "SmartHub: Manual Wheelchair Data Extraction and Processing Device." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555352793977171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Rodriguez, Wilfredo. "Identifying mechanisms (naming) in distributed systems : goals, implications and overall influence on performance /." Online version of thesis, 1985. http://hdl.handle.net/1850/8820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Garg, Vivek. "Mechanisms for hiding communication latency in data parallel architecture." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15609.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

VASCONCELOS, RAFAEL OLIVEIRA. "A DYNAMIC LOAD BALANCING MECHANISM FOR DATA STREAM PROCESSING ON DDS SYSTEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2013. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=23629@1.

Full text
Abstract:
This thesis presents the Data Processing Slice Load Balancing solution to enable dynamic load balancing of Data Stream Processing on DDS-based systems (Data Distribution Service). A large number of applications require continuous and timely processing of high volumes of data originating from many distributed sources, such as network monitoring, traffic engineering systems, intelligent routing of cars in metropolitan areas, sensor networks, telecommunication systems, financial applications and meteorology. The key concept of the proposed solution is the Data Processing Slice (DPS), which is the basic unit of data processing load of server nodes in a DDS Domain. The Data Processing Slice Load Balancing solution consists of a load balancer, which is responsible for monitoring the current load of a set of homogeneous data processing nodes; when a load imbalance is detected, it coordinates the actions to redistribute some data processing slices among the processing nodes in a secure way. Experiments with large data streams have demonstrated the low overhead, good performance and reliability of the proposed solution.
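The abstract describes a balancer that monitors homogeneous nodes and redistributes Data Processing Slices when an imbalance is detected. The following is a greatly simplified, hypothetical sketch of such a greedy rebalancing step; the node and slice names are invented, and the real DDS-based protocol in the thesis is considerably more involved.

```python
# Hypothetical greedy rebalancing of data-processing slices across nodes.
def rebalance(assignment, load, threshold=1.2):
    """Move slices from the most- to the least-loaded node while the
    max/min load ratio exceeds `threshold`.

    assignment: dict node -> list of slice ids
    load:       dict slice id -> estimated load (e.g., tuples/s)
    """
    def node_load(n):
        return sum(load[s] for s in assignment[n])

    moves = []
    while True:
        hot = max(assignment, key=node_load)
        cold = min(assignment, key=node_load)
        if node_load(hot) <= threshold * max(node_load(cold), 1e-9):
            break                                    # balanced enough
        candidate = min(assignment[hot], key=lambda s: load[s])
        if load[candidate] >= node_load(hot) - node_load(cold):
            break                                    # moving it would not help
        assignment[hot].remove(candidate)
        assignment[cold].append(candidate)
        moves.append((candidate, hot, cold))
    return moves

assignment = {"node-1": ["s1", "s2", "s3", "s4"], "node-2": ["s5"]}
load = {"s1": 40, "s2": 35, "s3": 30, "s4": 25, "s5": 20}
print(rebalance(assignment, load))   # -> [('s4', 'node-1', 'node-2'), ('s3', 'node-1', 'node-2')]
```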
APA, Harvard, Vancouver, ISO, and other styles
14

李仲麟 and Chung-lun Li. "Conceptual design of single and multiple state mechanical devices: an intelligent CAD approach." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31237332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Bao, Suying, and 鲍素莹. "Deciphering the mechanisms of genetic disorders by high throughput genomic data." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2013. http://hdl.handle.net/10722/196471.

Full text
Abstract:
A new generation of non-Sanger-based sequencing technologies, so-called “next-generation” sequencing (NGS), has been changing the landscape of genetics at unprecedented speed. In particular, our capacity for deciphering the genotypes underlying phenotypes, such as diseases, has never been greater. However, before fully applying NGS in medical genetics, researchers have to bridge the widening gap between the generation of massively parallel sequencing output and the capacity to analyze the resulting data. In addition, even when a list of candidate genes with potential causal variants can be obtained from an effective NGS analysis, pinpointing disease genes from the long list remains a challenge. The issue becomes especially difficult when the molecular basis of the disease is not fully elucidated. New NGS users are often bewildered by a plethora of options in mapping, assembly, variant calling and filtering programs and may have no idea how to compare these tools and choose the “right” ones. To get an overview of various bioinformatics attempts at mapping and assembly, a series of performance evaluations was conducted using both real and simulated NGS short reads. For NGS variant detection, the performance of the two most widely used toolkits was assessed, namely SAMtools and GATK. Based on the results of this systematic evaluation, an NGS data processing and analysis pipeline was constructed, and this pipeline proved successful with the identification of a mutation (a frameshift deletion on Hnrnpa1, p.Leu181Valfs*6) related to congenital heart defect (CHD) in procollagen type IIA deficient mice. In order to prioritize risk genes for diseases, especially those with limited prior knowledge, a network-based gene prioritization model was constructed. It consists of two parts: network analysis on known disease genes (seed-based network strategy) and network analysis on differential expression (DE-based network strategy). Case studies of various complex diseases/traits demonstrated that the DE-based network strategy can greatly outperform traditional gene expression analysis in predicting disease-causing genes. A series of simulations indicated that the DE-based strategy is especially meaningful for diseases with limited prior knowledge, and the model’s performance can be further advanced by integrating it with the seed-based network strategy. Moreover, a successful application of the network-based gene prioritization model in an influenza host genetic study further demonstrated the capacity of the model to identify promising candidates and to mine new risk genes and pathways not biased toward our current knowledge. In conclusion, an efficient NGS analysis framework covering the steps from quality control and variant detection to result analysis and gene prioritization has been constructed for medical genetics. The novelty in this framework is an encouraging attempt to prioritize risk genes for not-well-characterized diseases by network analysis on known disease genes and differential expression data. The successful applications in detecting genetic factors associated with CHD and influenza host resistance demonstrate the efficacy of this framework, and this may further stimulate more applications of high throughput genomic data in dissecting the genetic components of human disorders in the near future.
Biochemistry
Doctoral
Doctor of Philosophy
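As a toy illustration of the seed-based network strategy mentioned above (not the author's actual model), candidate genes can be ranked by how strongly they connect to known disease ("seed") genes in an interaction network; the gene names and edge weights below are invented.

```python
# Toy seed-based prioritization: rank candidates by summed interaction weight to seed genes.
network = {                       # undirected, weighted interaction network (invented)
    ("GENE_A", "SEED_1"): 0.9,
    ("GENE_A", "GENE_B"): 0.4,
    ("GENE_B", "SEED_2"): 0.7,
    ("GENE_C", "GENE_B"): 0.8,
}
seeds = {"SEED_1", "SEED_2"}
candidates = ["GENE_A", "GENE_B", "GENE_C"]

def seed_score(gene):
    """Sum of edge weights linking `gene` directly to any seed gene."""
    return sum(w for (u, v), w in network.items()
               if (u == gene and v in seeds) or (v == gene and u in seeds))

for gene in sorted(candidates, key=seed_score, reverse=True):
    print(gene, round(seed_score(gene), 2))
# GENE_A 0.9, GENE_B 0.7, GENE_C 0.0 -> GENE_A is the strongest candidate
```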
APA, Harvard, Vancouver, ISO, and other styles
16

Koenig, Mark A. "A DECENTRALIZED ADAPTIVE CONTROL SCHEME FOR ROBOTIC MANIPULATORS." Thesis, The University of Arizona, 1985. http://hdl.handle.net/10150/275238.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Prascher, Brian P. "Systematic Approaches to Predictive Computational Chemistry using the Correlation Consistent Basis Sets." Thesis, University of North Texas, 2009. https://digital.library.unt.edu/ark:/67531/metadc9920/.

Full text
Abstract:
The development of the correlation consistent basis sets, cc-pVnZ (where n = D, T, Q, etc.), has allowed for the systematic elucidation of the intrinsic accuracy of ab initio quantum chemical methods. In density functional theory (DFT), where the cc-pVnZ basis sets are not necessarily optimal in their current form, the intrinsic accuracy of DFT methods cannot always be elucidated. This dissertation outlines investigations into the basis set requirements for DFT and how the intrinsic accuracy of DFT methods may be determined with a prescription involving recontraction of the cc-pVnZ basis sets for specific density functionals. Next, the development and benchmarks of a set of cc-pVnZ basis sets designed for the s-block atoms lithium, beryllium, sodium, and magnesium are presented. Computed atomic and molecular properties agree well with reliable experimental data, demonstrating the accuracy of these new s-block basis sets. In addition to the development of cc-pVnZ basis sets, a new, efficient formulation of the correlation consistent Composite Approach (ccCA) using the resolution of the identity (RI) approximation is developed. The new formulation, denoted 'RI-ccCA,' has marked efficiency in terms of computational time and storage, compared with the standard ccCA formulation, without introducing significant error. Finally, this dissertation reports three separate investigations of the properties of FOOF-like, germanium arsenide, and silicon hydride/halide molecules using high accuracy ab initio methods and the cc-pVnZ basis sets.
APA, Harvard, Vancouver, ISO, and other styles
18

Chan, Chor-Wai. "The study of the pipe mechanism in OSF's DCE." Virtual Press, 1994. http://liblink.bsu.edu/uhtbin/catkey/917047.

Full text
Abstract:
In this thesis, we explore the pipe mechanism available in Open Software Foundation's Distributed Computing Environment (OSF's DCE). OSF's DCE is one of the emerging technologies for distributed computing. The pipe mechanism in DCE provides an efficient way for transferring large or incrementally produced data that cannot fit in main memory at once. The empirical study of the pipe mechanism adds to the state of knowledge about how the pipe mechanism can be used in the development of client-server systems.
Department of Computer Science
APA, Harvard, Vancouver, ISO, and other styles
19

Adhikari, Sameer. "Programming Idioms and Runtime Mechanisms for Distributed Pervasive Computing." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4820.

Full text
Abstract:
The emergence of pervasive computing power and networking infrastructure is enabling new applications. Still, many milestones need to be reached before pervasive computing becomes an integral part of our lives. An important missing piece is the middleware that allows developers to easily create interesting pervasive computing applications. This dissertation explores the middleware needs of distributed pervasive applications. The main contributions of this thesis are the design, implementation, and evaluation of two systems: D-Stampede and Crest. D-Stampede allows pervasive applications to access live stream data from multiple sources using time as an index. Crest allows applications to organize historical events, and to reason about them using time, location, and identity. Together they meet the important needs of pervasive computing applications. D-Stampede supports a computational model called the thread-channel graph. The threads map to computing devices ranging from small to high-end processing elements. Channels serve as the conduits among the threads, specifically tuned to handle time-sequenced streaming data. D-Stampede allows the dynamic creation of threads and channels, and for the dynamic establishment (and removal) of the plumbing among them. The Crest system assumes a universe that consists of participation servers and event stores, supporting a set of applications. Each application consists of distributed software entities working together. The participation server helps the application entities to discover each other for interaction purposes. Application entities can generate events, store them at an event store, and correlate events. The entities can communicate with one another directly, or indirectly through the event store. We have qualitatively and quantitatively evaluated D-Stampede and Crest. The qualitative aspect refers to the ease of programming afforded by our programming abstractions for pervasive applications. The quantitative aspect measures the cost of the API calls, and the performance of an application pipeline that uses the systems.
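The thread-channel abstraction above is specific to D-Stampede; purely to make the idea of a time-indexed channel concrete (the class and its semantics are an illustration, not the D-Stampede API), a minimal version might look like this:

```python
# Minimal illustration of a channel indexed by time (not the D-Stampede API).
import bisect
import threading

class TimestampChannel:
    """Stores (timestamp, item) pairs and serves the item closest to a query time."""

    def __init__(self):
        self._times = []              # timestamps, kept sorted
        self._items = []              # items aligned with self._times
        self._lock = threading.Lock()

    def put(self, timestamp, item):
        with self._lock:
            i = bisect.bisect(self._times, timestamp)
            self._times.insert(i, timestamp)
            self._items.insert(i, item)

    def get(self, timestamp):
        """Return the stored item whose timestamp is nearest to `timestamp`."""
        with self._lock:
            if not self._times:
                raise LookupError("channel is empty")
            i = bisect.bisect_left(self._times, timestamp)
            if i == 0:
                return self._items[0]
            if i == len(self._times):
                return self._items[-1]
            before, after = self._times[i - 1], self._times[i]
            return self._items[i] if after - timestamp < timestamp - before else self._items[i - 1]

channel = TimestampChannel()
channel.put(0.000, "frame-0")
channel.put(0.033, "frame-1")
print(channel.get(0.030))   # -> frame-1 (nearest in time)
```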
APA, Harvard, Vancouver, ISO, and other styles
20

Ouellette, Mark Paul. "Form verification for the conceptual design of complex mechanical systems." Thesis, Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/18237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Williamson, Lance K. "ROPES : an expert system for condition analysis of winder ropes." Master's thesis, University of Cape Town, 1990. http://hdl.handle.net/11427/15982.

Full text
Abstract:
Includes bibliographical references.
This project was commissioned in order to provide engineers with the necessary knowledge of steel wire winder ropes so that they may make accurate decisions as to when a rope is near the end of its useful life. For this purpose, a knowledge base was compiled from the experience of experts in the field in order to create an expert system to aid the engineer in his task. The EXSYS expert system shell was used to construct a rule-based program which would be run on a personal computer. The program derived in this thesis is named ROPES, and provides information as to the forms of damage that may be present in a rope and the effect of any defects on rope strength and rope life. Advice is given as to the procedures that should be followed when damage is detected as well as the conditions which would necessitate rope discard and the urgency with which the replacement should take place. The expert system program will provide engineers with the necessary expertise and experience to assess, more accurately than at present, the condition of a winder rope. This should lead to longer rope life and improved safety with the associated cost savings. Rope assessment will also be more uniform with changes to policy being able to be implemented quickly and on an ongoing basis as technology and experience improves. The program ROPES, although compiled from expert knowledge, still requires the further input of personal opinions and inferences to some extent. For this reason, the program cannot be assumed infallible and must be used as an aid only.
APA, Harvard, Vancouver, ISO, and other styles
22

BISWAS, RATNABALI. "Query Processing and Link Layer QoS Provisioning Mechanisms for Wireless Sensor Networks." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1163285841.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Brownin, Dominic. "A mechanism to facilitate the accessibility of data within a manufacturing company." Thesis, University of Hertfordshire, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Byrne, Patricia Hiromi. "Development of an advisory system for indoor radon mitigation." PDXScholar, 1991. https://pdxscholar.library.pdx.edu/open_access_etds/4263.

Full text
Abstract:
A prototype hybrid knowledge-based advisory system for indoor radon mitigation has been developed to assist Pacific Northwest mitigators in the selection and design of mitigation systems for existing homes. The advisory system employs a heuristic inferencing strategy to determine which mitigation techniques are applicable, and applies procedural methods to perform the fan selection and cost estimation for particular techniques. The rule base has been developed employing knowledge in existing publications on radon mitigation. Additional knowledge has been provided by field experts. The benefits of such an advisory system include uniform record-keeping and consistent computations for the user, and verification of approved radon mitigation methods.
APA, Harvard, Vancouver, ISO, and other styles
25

Kern, Daniel C. (Daniel Clifton) 1974. "Forecasting manufacturing variation using historical process capability data : applications for random assembly, selective assembly, and serial processing." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/29960.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2003.
Includes bibliographical references (p. 337-340).
In today's competitive marketplace, companies are under increased pressure to produce products that have a low cost and high quality. Product cost and quality are influenced by many factors. One factor that strongly influences both is manufacturing variation. Manufacturing variation is the range of values that a product's dimensions assume. Variation exists because no production process is perfect. Often times, controlling this variation is attempted during production when substantial effort and resources, e.g., time, money, and manpower, are required. The effort and resources could be reduced if the manufacturing variation could be forecast and managed during the design of the product. Traditionally, several barriers have been present that make forecasting and managing variation during the design process very challenging. The first barrier is the effort required of a design engineer to know the company's process capability, which makes it difficult to specify tolerances that can be manufactured reliably. The second barrier is the difficulty associated with understanding how a single manufacturing process or series of processes affects the variation of a product. This barrier impedes the analysis of tradeoffs among processes, the quantifying of the impact incoming stock variation has on final product variation, and the identification of sources of variation within the production system. The third barrier is understanding how selective assembly influences the final variation of a product, which results in selective assembly not being utilized efficiently. In this thesis, tools and methods to overcome the aforementioned barriers are presented. A process capability database is developed to connect engineers to manufacturing data to assist with
detailing a design. A theory is introduced that models a production process with two math functions, which are constructed using process capability data. These two math functions are used to build closed-form equations that calculate the mean and standard deviation of parts exiting a process. The equations are used to analyze tradeoffs among processes, to compute the impact incoming variation has on output, and to identify sources of variation. Finally, closed-form equations are created that compute the variation of a product resulting from a selective assembly operation. Using these tools, forecasting and managing manufacturing variation is possible for a wide variety of products and production systems.
by Daniel C. Kern.
Ph.D.
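In the simplest case of independent, serially added contributions, the closed-form propagation of mean and variation that the abstract alludes to reduces to summing means and adding standard deviations in quadrature (a generic textbook form, not the specific equations derived in the thesis):

\[
\mu_{\text{out}} = \sum_{i=1}^{n} \mu_i, \qquad
\sigma_{\text{out}} = \sqrt{\sum_{i=1}^{n} \sigma_i^{2}} .
\]

For example, stacking three independent contributions with standard deviations of 0.02, 0.03 and 0.06 mm gives \(\sigma_{\text{out}} = \sqrt{0.02^2 + 0.03^2 + 0.06^2} \approx 0.07\) mm.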
APA, Harvard, Vancouver, ISO, and other styles
26

Bruneau, Phillippe Roger Paul, and Backstrom T. W. Von. "The design of a single rotor axial flow fan for a cooling tower application." Thesis, Stellenbosch : University of Stellenbosch, 1994. http://hdl.handle.net/10019.1/15528.

Full text
Abstract:
Thesis (MEng (Mechanical Engineering))--University of Stellenbosch, 1994.
213 leaves printed on single pages, preliminary pages i-xix and numbered pages 1-116. Includes bibliography, list of tables, list of figures and nomenclature.
Digitized at 600 dpi grayscale to pdf format (OCR), using a Bizhub 250 Konica Minolta Scanner.
ENGLISH ABSTRACT: A design methodology for low pressure rise, rotor only, ducted axial flow fans is formulated, implemented and validated using the operating point specifications of a 1/6th scale model fan as a reference. Two experimental fans are designed by means of the design procedure and tested in accordance with British Standards 848, Type A. The design procedure makes use of the simple radial equilibrium equations, embodied in a suite of computer programs. The experimental fans have the same hub-tip ratio and vortex distribution, but differ in the profile section used. The first design utilises the well known Clark-Y aerofoil profile whilst the second takes advantage of the high lift characteristics of the more modern NASA LS series. The characteristics of the two designs are measured over the entire operating envelope and compared to the reference fan from which the utility and accuracy of the design procedure is assessed. The performance of the experimental fans compares well with both the reference fan as well as the design intent.
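The "simple radial equilibrium equations" referred to in the abstract are, in their most common textbook form (quoted here for orientation, not reproduced from the thesis),

\[
\frac{1}{\rho}\frac{dp}{dr} = \frac{c_\theta^{2}}{r},
\]

which balances the radial pressure gradient against the swirl velocity \(c_\theta\) at radius r. Combined with a prescribed vortex distribution, such as the free vortex \(r\,c_\theta = \text{const}\), this fixes the spanwise variation of blade loading used to lay out the rotor sections.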
APA, Harvard, Vancouver, ISO, and other styles
27

Solakoglu, Gokce. "Using DMAIC methodology to optimize data processing cycles with an overall goal of improving root cause analysis procedure." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113768.

Full text
Abstract:
Thesis: M. Eng. in Advanced Manufacturing and Design, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 68-69).
The main objective of this thesis is to use the DMAIC methodology to streamline customer-related procedures in Waters Corporation in order to improve root cause analysis (RCA) capability. First, a software-based approach is proposed to streamline the data collection stage in the field. The proposed system would ensure that field service reports capture essential information, are consistent, and are more easily filled out while at the customer site. Second, a new coding system is proposed to enable global service support engineers to better identify the underlying causes of field calls. By addressing these weaknesses in the current process, this thesis contributes a strategy to improve the content of the data captured during the field applications and to provide better feedback to the quality department for improved product robustness.
by Gokce Solakoglu.
M. Eng. in Advanced Manufacturing and Design
APA, Harvard, Vancouver, ISO, and other styles
28

Da, Silva Veith Alexandre. "Quality of Service Aware Mechanisms for (Re)Configuring Data Stream Processing Applications on Highly Distributed Infrastructure." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEN050/document.

Full text
Abstract:
A large part of this big data is most valuable when analysed quickly, as it is generated. Under several emerging application scenarios, such as smart cities, operational monitoring of large infrastructure, and the Internet of Things (IoT), continuous data streams must be processed under very short delays. In multiple domains, there is a need to process data streams to detect patterns, identify failures, and gain insights. Data is often gathered and analysed by Data Stream Processing Engines (DSPEs). A DSPE commonly structures an application as a directed graph or dataflow. A dataflow has one or multiple sources (i.e., gateways or actuators); operators that perform transformations on the data (e.g., filtering); and sinks (i.e., queries that consume or store the data). Most complex operator transformations store information about previously received data as new data is streamed in, while a dataflow also has stateless operators that consider only the current data. Traditionally, Data Stream Processing (DSP) applications were conceived to run in clusters of homogeneous resources or on the cloud. In a cloud deployment, the whole application is placed on a single cloud provider to benefit from virtually unlimited resources. This approach allows for elastic DSP applications with the ability to allocate additional resources or release idle capacity on demand during runtime to match the application requirements. We introduce a set of strategies to place operators onto cloud and edge while considering the characteristics of resources and meeting the requirements of applications. In particular, we first decompose the application graph by identifying behaviours such as forks and joins, and then dynamically split the dataflow graph across edge and cloud. Comprehensive simulations and a real testbed considering multiple application settings demonstrate that our approach can improve the end-to-end latency by over 50%, as well as other QoS metrics. The solution search space for operator reassignment can be enormous depending on the number of operators, streams, resources and network links. Moreover, it is important to minimise the cost of migration while improving latency. Reinforcement Learning (RL) and Monte-Carlo Tree Search (MCTS) have been used to tackle problems with large search spaces and states, performing at human level or better in games such as Go. We model the application reconfiguration problem as a Markov Decision Process (MDP) and investigate the use of RL and MCTS algorithms to devise reconfiguration plans that improve QoS metrics.
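As a sketch of how the reconfiguration problem can be cast as an MDP (the notation below is generic, not the exact formulation used in the thesis), a state s describes the current operator-to-resource mapping, an action a migrates a subset of operators, and the reward trades latency against migration cost:

\[
\mathcal{M} = (\mathcal{S}, \mathcal{A}, P, R), \qquad
R(s, a, s') = -\bigl(w_1\, L(s') + w_2\, C_{\text{mig}}(s, a)\bigr),
\]

where s' is the mapping reached after applying a, L(s') is the end-to-end latency of the new placement, \(C_{\text{mig}}(s, a)\) is the cost of the migrations in a, and \(w_1, w_2\) weight the two objectives. RL or MCTS then searches for a policy or plan that maximizes the expected discounted reward.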
APA, Harvard, Vancouver, ISO, and other styles
29

Anderson, Karen 1959. "Inverse kinematics of robot manipulators in the presence of singularities and redundancies." Thesis, McGill University, 1987. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=66208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wai, Hon Kee. "Priority feedback mechanism with quality of service control for MPEG video system." HKBU Institutional Repository, 1999. http://repository.hkbu.edu.hk/etd_ra/275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Thatch, Brian R. "A PHIGS based interactive graphical preprocessor for spatial mechanism analysis and synthesis." Thesis, Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/80139.

Full text
Abstract:
This thesis presents the development and use of MECHIN, an interactive graphical preprocessor for data input to spatial mechanism analysis and synthesis codes. A goal in the development of this preprocessor is to produce a graphical data input program that is both graphics device-independent and not structured for the input of data to any particular mechanism processing program. To achieve device-independence, the proposed graphics standard PHIGS (Programmer's Hierarchical Interactive Graphics System) is used for the graphics support software. Program development strategies including screen layout and user interfaces for three-dimensional data input are discussed. The program structure is also described and presented along with a complete listing of the program code to aid in future modifications and additions. Finally, a description of the use of the program is presented along with several examples of mechanism data input for synthesis and analysis.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Shaojie. "A Data Augmentation Methodology for Class-imbalanced Image Processing in Prognostic and Health Management." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin161375046654683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Soroush, Hamed. "A data processing workflow for borehole enlargement identification and characterisation using petrophysical logs." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/771.

Full text
Abstract:
Borehole breakouts provide valuable information with respect to the evaluation of the in-situ stress direction and magnitude, and also verification of any geomechanical models built for a specific field. Identifying the locations along a borehole where the breakouts form is therefore very important. On the other hand, the borehole geometry (defined as width and depth of breakouts), which is a critical factor in completion and production optimisation design, can also be estimated from the back analysis of breakout information. While breakout width has been widely used in obtaining an estimate of the maximum horizontal stress magnitude, few studies have been reported on the estimation of breakout depth and the information it may provide. Caliper and image logs are customarily used to identify and characterise borehole enlargement zones; in particular, the breakouts. However, these methods are limited in their applications in many instances. In addition, good quality image logs are not available in many wells, including old wells. This leads to a need for the development of a new approach to identify the location of borehole enlargements along a wellbore. This research aims to understand the mechanisms under which breakouts form with respect to a rock’s physical and mechanical properties. Petrophysical logs, which are acquired in most drilled wells, show correlations with the mechanical properties of the rock. Therefore, this research attempts to develop an approach to identify the location of borehole enlargement zones using the information gained from petrophysical logs. This research introduces a new multi-variable approach based on various data processing techniques (including wavelets, classifiers, and neural networks) to extract rock properties from different petrophysical logs. This information was combined using a robust data fusion technique which determined the location of the enlarged borehole. The results demonstrated the accuracy of the location of the borehole enlargement identified along a borehole compared to that observed using calipers and image logs. In addition, there were correlations between breakout width and depth measurements when measurements taken from high quality acoustic image logs were used. Elastic and elastoplastic finite element numerical models also showed how breakout width and depth could change due to a change in different rock properties. The models were verified by comparing results of the numerical analysis with real observations from field data.
APA, Harvard, Vancouver, ISO, and other styles
34

Chang, Wei-Chieh. "Transputer-based robot controller /." Online version of thesis, 1990. http://hdl.handle.net/1850/11557.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Paul, Douglas James. "Parallel microcomputer control of a 3DOF robotic arm." Thesis, Georgia Institute of Technology, 1989. http://hdl.handle.net/1853/18371.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Ghalsasi, Omkar. "An Image Processing-based Approach for Additive Manufacturing of Cranial Implants." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623169593210198.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Wildschek, Reto. "Surface capture using near-real-time photogrammetry for a computer numerically controlled milling system." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/18605.

Full text
Abstract:
During the past three years, a research project has been carried out in the Department of Mechanical Engineering at UCT, directed at developing a system to accurately reproduce three-dimensional (3D), sculptured surfaces on a three axis computer numerically controlled (CNC) milling machine. Sculptured surfaces are surfaces that cannot easily be represented mathematically. The project was divided into two parts: the development of an automatic non-contact 3D measuring system, and the development of a milling system capable of machining 3D sculptured surfaces (Back, 1988). The immediate need for such a system exists for the manufacture of medical prostheses. The writer undertook to investigate the measurement system, with the objective of developing a non-contact measuring system that can be used to 'map' a sculptured surface so that it can be represented by a set of XYZ coordinates in the form required by the milling system developed by Back (1988). This thesis describes the development of a PC-based near-real-time photogrammetry system (PHOENICS) for surface capture. The topic is introduced by describing photogrammetric principles as used for non-contact measurements of objects. A number of different algorithms for image target detection, centering and matching are investigated. The approach to image matching adopted was the projection of a regular grid onto the surface with subsequent matching of conjugate grid intersections. A general algorithm which automatically detects crosses on a line and finds their accurate centres was developed. This algorithm was then extended from finding the crosses on a line to finding all the intersection points of a grid. The algorithms were programmed in TRUE BASIC and specifically adapted for use with PHOENICS as an object point matching tool. The non-contact surface measuring technique which was developed was used in conjunction with the milling system developed by Back (1988) to replicate a test object. This test proved that the combined system is suitable for the manufacture of sculptured surfaces. The accuracy requirements for the manufacture of medical prostheses can be achieved with the combined measuring and milling system. At an object-to-camera distance of 0.5 m, points on a surface can be measured with an accuracy of approximately 0.3 mm at an interval of 5 mm. This corresponds to a relative accuracy of 1:1600. Back (1988) reported an average undercutting error of 0.46 mm for the milling system. This combines to an uncertainty of 0.55 mm. Finally, the limitations of PHOENICS at its prototype stage as a surface measuring tool are discussed, in particular the factors influencing the system's accuracy. PHOENICS is an ongoing project and the thesis is concluded with some recommendations for further research work.
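The 0.55 mm figure quoted above follows from combining the two independent error contributions in quadrature:

\[
\sqrt{(0.30\ \text{mm})^{2} + (0.46\ \text{mm})^{2}} \approx 0.55\ \text{mm}.
\]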
APA, Harvard, Vancouver, ISO, and other styles
38

Wodrich, Karsten H. K. "A design programme for dilute phase pneumatic conveyors." Thesis, Link to the online version, 1997. http://hdl.handle.net/10019/1420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Yang. "Distributed parallel processing in networks of workstations." Ohio : Ohio University, 1994. http://www.ohiolink.edu/etd/view.cgi?ohiou1174328416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ragnucci, Beatrice. "Data analysis of collapse mechanisms of a 3D printed groin vault in shaking table testing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22365/.

Full text
Abstract:
The aim of this novel experimental study is to investigate the behaviour of a 2 m x 2 m model of a masonry groin vault, which is built by assembling blocks made of a 3D-printed plastic skin filled with mortar. The groin vault was chosen because this vulnerable roofing system is widespread in the built historical heritage. Experimental tests on the shaking table are carried out to explore the vault response under two support boundary conditions, involving four lateral confinement modes. Processing of the marker displacement data has made it possible to examine the collapse mechanisms of the vault, based on the deformed shapes of the arches. A numerical evaluation then follows, to provide the orders of magnitude of the displacements associated with the previous mechanisms. Given that these displacements are related to the shortening and elongation of the arches, the last objective is the definition of a critical elongation between two diagonal bricks and, consequently, of a diagonal portion. This study aims to continue the previous work and to take another step forward in the research on ground motion effects on masonry structures.
APA, Harvard, Vancouver, ISO, and other styles
41

Anie, Allen Joseph. "The adoption of market-mechanisms by local government IT-units : an empirical study of recent evidence of impacts on IT-services management in English local authorities." Thesis, University of Kent, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.300950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Long, Manda Marie. "Kinematics of the fingers during typing." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-06162009-063244/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Braginton, Pauline. "Taxonomy of synchronization and barrier as a basic mechanism for building other synchronization from it." CSUSB ScholarWorks, 2003. https://scholarworks.lib.csusb.edu/etd-project/2288.

Full text
Abstract:
A Distributed Shared Memory (DSM) system consists of several computers that share a memory area and has no global clock. Therefore, an ordering of events in the system is necessary. Synchronization is a mechanism for coordinating activities between processes, which are program instantiations in a system.
APA, Harvard, Vancouver, ISO, and other styles
44

Posada, Maria. "Comparison of 3-D Friction Stir Welding Viscoplastic Finite Element Model with Weld Data and Physically-Simulated Data." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3494.

Full text
Abstract:
Models (both physical and numerical) of the friction stir (FS) welding process are used to develop a greater understanding of the influence of independent process parameters on dependent process output variables, such as torque, power, specific weld energy, peak temperature, cooling rates and various metallurgical factors (e.g., grain size and precipitates). An understanding of how the independent process parameters influence output variables, and ultimately their effect on resultant properties (e.g., strength, hardness, etc.), is desirable. Most models developed have been validated primarily for aluminum alloys with relatively small amounts of experimental data. Fewer models have been validated for steels or stainless steels, particularly since steels and stainless steels have proven more challenging to friction stir than aluminum alloys. The Gleeble system is also a powerful tool with the capability to perform thermomechanical simulations in a known and controlled environment and to provide a physical representation of the resultant microstructure and hardness values. The coupling of experimental data and physically simulated data can be extremely useful in assessing the capabilities of friction stir numerical process models. The overall approach is to evaluate Isaiah, an existing three-dimensional finite element code developed at Cornell University, by comparing it against experimental and physically-simulated data to determine how well the code output relates to real FS data over a range of nine processing conditions. Physical simulations replicating select thermomechanical streamline histories were conducted to provide a physical representation of the resultant metallurgy and hardness. Isaiah shows promise in predicting qualitative trends over a limited range of parameters and is recommended not as a predictive tool but rather as a complementary one. Once properly calibrated, the Isaiah code can be a powerful tool for gaining insight into the process and into strength evolution during the process, and, coupled with a texture evolution model, it may also provide insight into microstructural and texture evolution over the range for which it is calibrated.
APA, Harvard, Vancouver, ISO, and other styles
45

Kinnaert, Xavier. "Data processing of induced seismicity : estimation of errors and of their impact on geothermal reservoir models." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAH013/document.

Full text
Abstract:
Induced seismicity locations and the associated focal mechanisms are commonly used, among other tasks, to image the structure of a reservoir. This thesis presents a technique to quantify the errors associated with these two parameters; the method distinguishes uncertainties from systematic inaccuracies. It is applied to the geothermal sites of Soultz and Rittershoffen to investigate the effect of several criteria on the location of induced seismicity. A good azimuthal seismic coverage and the use of sensors installed deep in boreholes substantially decrease the location uncertainty. In contrast, velocity-model uncertainties, represented by a Gaussian distribution of models with a 5% spread around the reference model, multiply location uncertainties by a factor of 2 to 3. An incorrect knowledge of the sub-surface, or the simplifications made before locating the earthquakes, can lead to biases of about 10% of the source-station vertical distance, with a non-isotropic spatial distribution, so that sub-surface structures may appear distorted in the interpretations. Applying a calibration shot largely corrects this effect. The study of focal-mechanism errors, however, does not lead to the same conclusions: the angular bias may indeed be increased by omitting the fault from the velocity model, but in several cases it is the same as, or even smaller than, in the ideal case, and it cannot be corrected by a calibration shot in the same way. A better seismic coverage always improves the obtained focal mechanism. It is therefore not advisable to image a reservoir using earthquake locations alone; a combination of several seismic parameters may prove more effective. The method applied in this thesis can serve for other (geothermal) sites and reservoirs, provided good a priori knowledge of the medium is available.
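As a rough, self-contained illustration of how a Gaussian scatter of the velocity model propagates into location scatter (not the thesis's three-step method, and with a purely fictitious station geometry, a homogeneous medium and a known origin time), a minimal Python sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D setting: homogeneous medium, five stations, known origin time (all illustrative).
stations = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0], [2.0, -3.0]])  # km
true_src = np.array([2.0, 2.5])                                                      # km
v_ref = 3.5                                                                           # km/s

def travel_times(points, v):
    """P travel times (s) from each candidate point to every station in a uniform medium."""
    d = np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=2)
    return d / v

t_obs = travel_times(true_src[None, :], v_ref)[0]      # synthetic 'observed' arrival times

# Grid of candidate epicentres
xs = np.linspace(0.0, 4.0, 161)
ys = np.linspace(0.0, 4.0, 161)
XX, YY = np.meshgrid(xs, ys, indexing="ij")
grid = np.stack([XX.ravel(), YY.ravel()], axis=1)

def locate(v):
    """Least-squares grid-search epicentre for one velocity model."""
    misfit = np.sum((travel_times(grid, v) - t_obs) ** 2, axis=1)
    return grid[np.argmin(misfit)]

# Velocity-model uncertainty: Gaussian scatter of 5% around the reference model
perturbed = v_ref * (1.0 + 0.05 * rng.standard_normal(200))
locations = np.array([locate(v) for v in perturbed])
print("location scatter due to velocity uncertainty (km):", locations.std(axis=0))
```

Replacing the homogeneous model with a layered one, adding picking errors and estimating the origin time would bring the sketch closer to the setting analysed in the thesis.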
APA, Harvard, Vancouver, ISO, and other styles
46

余啓明 and Kai-ming Yu. "Dimensioning and tolerancing in geometric modelling." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1990. http://hub.hku.hk/bib/B31232450.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Williams, Robert L. "Synthesis and design of the RSSR spatial mechanism for function generation." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/41573.

Full text
Abstract:

The purpose of this thesis is to provide a complete package for the synthesis and design of the RSSR spatial function generating mechanism.

In addition to the introductory material this thesis is divided into three sections. The section on background kinematic theory includes synthesis, analysis, link rotatability, transmission quality, and branching analysis. The second division details the computer application of the kinematic theory. The program RSSRSD has been developed to incorporate the RSSR synthesis and design theory. An example is included to demonstrate the computer-implemented theory.

The third part of this thesis includes miscellaneous mechanism considerations and recommendations for further research.

The theoretical work in this project is a combination of original derivations and applications of the theory in the mechanism literature.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
48

Navas, Portella Víctor. "Statistical modelling of avalanche observables: criticality and universality." Doctoral thesis, Universitat de Barcelona, 2020. http://hdl.handle.net/10803/670764.

Full text
Abstract:
Complex systems can be understood as entities composed of a large number of interacting elements whose emergent global behaviour cannot be derived from the local laws characterizing their constituents. The observables characterizing these systems can be measured at different scales and often exhibit interesting properties such as a lack of characteristic scales and self-similarity. In this context, power-law functions play an important role in the description of these observables. The presence of power-law functions resembles the behaviour of thermodynamic quantities close to a critical point in equilibrium critical phenomena. Different complex systems can be grouped into the same universality class when the power-law functions characterizing their observables have the same exponents. When externally driven, the response of some complex systems proceeds by the so-called avalanche process: a collective response of the system characterized by intermittent dynamics, with sudden bursts of activity separated by periods of silence. This kind of out-of-equilibrium system is found in disciplines as diverse as seismology, astrophysics, ecology, finance and epidemiology, to mention just a few. Avalanches are characterized by a set of observables such as the size, the duration or the energy. When avalanche observables exhibit a lack of characteristic scales, their probability distributions can be statistically modelled by power-law-type distributions; avalanche criticality occurs when avalanche observables can be characterized by this kind of distribution. In this sense, the concepts of criticality and universality, which are well defined in equilibrium phenomena, can also be extended to the probability distributions describing avalanche observables in out-of-equilibrium systems. The main goal of this PhD thesis is to provide robust statistical methods to characterize avalanche criticality and universality in empirical datasets. Owing to limitations in data acquisition, empirical datasets often cover only a narrow range of observation, making it difficult to establish power-law behaviour unambiguously. To discuss the concepts of avalanche criticality and universality, two different systems are considered: earthquakes and the acoustic emission events generated during compression experiments of porous materials in the laboratory (labquakes). The techniques developed in this PhD thesis focus mainly on the distribution of earthquake and labquake sizes, known as the Gutenberg-Richter law, but the methods are much more general and can be applied to any other avalanche observable. The statistical techniques provided in this work can also be helpful for earthquake forecasting. Coulomb-stress theory has been used for years in seismology to understand how earthquakes trigger each other. Earthquake models that relate earthquake rates and Coulomb stress after a main event, such as the rate-and-state model, assume that the magnitude distribution of earthquakes is not affected by the change in the Coulomb stress. Several statistical analyses are performed to test whether the distribution of magnitudes is sensitive to the sign of the Coulomb-stress increase. The use of advanced statistical techniques for the analysis of complex systems proves necessary and very helpful in lending rigour to the empirical results, particularly for problems regarding hazard analysis.
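As a baseline for what fitting the Gutenberg-Richter law means in practice, the sketch below applies the classical Aki-Utsu maximum-likelihood estimator of the b-value, with the Shi-Bolt standard error, to a synthetic catalogue. This is a standard textbook estimator, not the more elaborate fitting and testing procedures developed in the thesis; the catalogue parameters are invented.

```python
import numpy as np

def b_value_mle(magnitudes, m_c, delta_m=0.0):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= m_c.
    delta_m is the magnitude bin width; use 0.0 for continuous magnitudes
    and e.g. 0.1 for catalogues reported in 0.1-magnitude bins."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    b = np.log10(np.e) / (m.mean() - (m_c - delta_m / 2.0))
    sigma_b = 2.3 * b**2 * m.std(ddof=1) / np.sqrt(len(m))  # Shi & Bolt (1982) estimate
    return b, sigma_b

# Synthetic Gutenberg-Richter catalogue with b = 1 above completeness magnitude m_c = 2.0
rng = np.random.default_rng(1)
b_true, m_c = 1.0, 2.0
mags = m_c + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)
b_hat, sigma = b_value_mle(mags, m_c)       # continuous magnitudes, so delta_m = 0
print(f"b = {b_hat:.2f} +/- {sigma:.2f}")
```

For real, binned catalogues the delta_m correction matters; here the synthetic magnitudes are continuous, so it is switched off.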
APA, Harvard, Vancouver, ISO, and other styles
49

Beyou, Sébastien. "Estimation de la vitesse des courants marins à partir de séquences d'images satellitaires." Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00870722.

Full text
Abstract:
This thesis studies data assimilation methods based on particle filtering for estimating fluid flows observed through image sequences. We rely on a specific particle filter whose proposal distribution is given by an ensemble Kalman filter, called the weighted ensemble Kalman filter. Two variations of it are introduced and studied. The first uses a dynamic noise (modelling the uncertainty of the model and separating the particles from one another) whose spatial form follows a power law, consistent with the phenomenological theory of turbulence. The second variation relies on a multiscale assimilation scheme introducing a mechanism of successive refinements from observations at increasingly fine scales. Both methods were tested on synthetic and experimental sequences of 2D incompressible flows. The results show a substantial reduction in the mean squared error. The methods were then tested on real satellite image sequences, where good temporal coherence is observed, as well as good tracking of vortex structures. The multiscale assimilation shows a visible gain in the number of reconstructed scales. A few additional variations are also presented and tested in order to overcome significant problems encountered in a real satellite context, notably the handling of missing data in sea-surface temperature images. Finally, an experiment with a weighted ensemble Kalman filter coupled with a full ocean model is presented for the assimilation of surface current fields in the Iroise Sea, at the entrance to the English Channel. A few other avenues of improvement are also sketched and tested.
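The weighted ensemble Kalman filter referred to above uses an ensemble Kalman analysis as the proposal distribution of a particle filter. The sketch below shows only a plain stochastic EnKF analysis step for a linear observation operator, with illustrative dimensions; the importance weighting, image-based observation operators, power-law dynamic noise and multiscale refinement of the thesis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_analysis(X, y, H, R):
    """One stochastic (perturbed-observation) EnKF analysis step.
    X : (n, N) ensemble of states, y : (p,) observation,
    H : (p, n) linear observation operator, R : (p, p) observation covariance."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    P = A @ A.T / (N - 1)                            # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T  # perturbed obs
    return X + K @ (Y - H @ X)                       # analysis ensemble

# Toy example: a 2-component state observed only in its first component
N = 50
X = rng.normal(size=(2, N))                          # forecast ensemble
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_analysis(X, np.array([0.8]), H, R)
print("analysis mean:", Xa.mean(axis=1))
```

In the weighted variant, each ensemble member would additionally carry an importance weight correcting for the difference between this Gaussian proposal and the true filtering distribution.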
APA, Harvard, Vancouver, ISO, and other styles
50

Ponelis, S. R. (Shana Rachel). "Data marts as management information delivery mechanisms: utilisation in manufacturing organisations with third party distribution." Thesis, University of Pretoria, 2002. http://hdl.handle.net/2263/27061.

Full text
Abstract:
Customer knowledge plays a vital part in organisations today, particularly in sales and marketing processes, where customers can be either channel partners or final consumers. Managing customer data and/or information across business units, departments, and functions is vital. Frequently, channel partners gather and capture data about downstream customers and consumers that organisations further upstream in the channel require to be incorporated into their information systems in order to allow for management information delivery to their users. In this study, the focus is placed on manufacturing organisations using third-party distribution, since the flow of information between channel partner organisations in a supply chain (in contrast to the flow of products) provides an important link between organisations and increasingly represents a source of competitive advantage in the marketplace. The purpose of this study is to determine whether there is a significant difference in the use of sales and marketing data marts as management information delivery mechanisms in manufacturing organisations in different industries, particularly the pharmaceutical and branded consumer products industries. The case studies presented in this dissertation indicate that there are significant differences between the use of sales and marketing data marts in different manufacturing industries, which can be ascribed to the industry, both directly and indirectly.
Thesis (MIS(Information Science))--University of Pretoria, 2002.
Information Science
MIS
unrestricted
APA, Harvard, Vancouver, ISO, and other styles