To see the other types of publications on this topic, follow the link: Measuring instruments - Data processing.

Dissertations / Theses on the topic 'Measuring instruments - Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 43 dissertations / theses for your research on the topic 'Measuring instruments - Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Derbyshire, John Andrew. "Echo-planar anemometry using conventional magnetic resonance imaging hardware." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364590.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Grando, Flavio Lori. "Arquitetura para o desenvolvimento de unidades de medição fasorial sincronizada no monitoramento a nível de distribuição." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/1762.

Full text
Abstract:
This work presents a low-cost architecture for the development of synchronized phasor measurement units (PMUs). The device is intended to be connected to the low-voltage grid, which allows the monitoring of transmission and distribution networks. The project comprises a complete PMU, with an instrumentation module for use on the low-voltage network, a GPS module that provides the synchronization signal and the time stamp of the measurements, a processing unit with the acquisition system, phasor estimation and data formatting according to the standard, and, finally, a communication module for data transmission. For the development and performance evaluation of this PMU, a set of LabVIEW applications with specific features was developed, which makes it possible to analyze the behavior of the measurements, to identify the PMU's sources of error, and to apply all the tests prescribed by the synchronized phasor measurement standard IEEE C37.118.1. The first application, useful for developing the instrumentation, consists of a function generator integrated with an oscilloscope, which allows signals to be generated and acquired synchronously and the samples to be manipulated. The second and main one is the test platform, capable of generating all the tests provided by the standard and of storing the data or analyzing the measurements in real time. Finally, a third application was developed to evaluate the test results and generate calibration curves to adjust the PMU. The results cover all the tests prescribed by the synchrophasor standard plus an additional test that evaluates the impact of noise. Moreover, two prototypes connected to the electrical installations of consumers on the same distribution circuit produced monitoring records that allowed the identification of consumer loads, power quality analysis, and the detection of events at both the distribution and the transmission level.
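As an illustration of the phasor-estimation stage described above, here is a minimal sketch of a textbook single-cycle DFT phasor estimator; it is not code from the thesis, and the sampling rate and signal values are assumed for the example.

    import numpy as np

    def estimate_phasor(samples, samples_per_cycle):
        """Single-cycle DFT phasor estimate: returns (RMS magnitude, phase in radians)."""
        n = np.arange(samples_per_cycle)
        window = samples[-samples_per_cycle:]   # most recent full cycle
        phasor = np.sqrt(2) / samples_per_cycle * np.sum(
            window * np.exp(-2j * np.pi * n / samples_per_cycle))
        return np.abs(phasor), np.angle(phasor)

    # Assumed example: a 60 Hz signal sampled at 960 Hz (16 samples per cycle)
    fs, f0, n_cycles = 960, 60, 4
    t = np.arange(n_cycles * fs // f0) / fs
    signal = 100 * np.cos(2 * np.pi * f0 * t + 0.3)
    mag, ang = estimate_phasor(signal, fs // f0)   # about 70.7 (RMS) and 0.3 rad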
APA, Harvard, Vancouver, ISO, and other styles
3

Westrate, Michael P. "Data Mining: Instruments of Measuring Landscape in Centralia Pennsylvania." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1337288541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Mukun, Wang, Liu Gozhi, and Li Zhenglian. "HARDWARE PRE-PROCESSING FOR DATA OF UNDERWATER MEASURING SYSTEM." International Foundation for Telemetering, 1991. http://hdl.handle.net/10150/612907.

Full text
Abstract:
International Telemetering Conference Proceedings / November 04-07, 1991 / Riviera Hotel and Convention Center, Las Vegas, Nevada. The synchro double-pulse signal mode is frequently used in Short Base Line (SBL) underwater positioning systems to obtain information on both the distance and the depth of a target simultaneously. However, this signal mode also introduces ranging ambiguity, resulting in an effective positioning distance much shorter than that limited by the period of the synchro signal. This paper presents a hardware distance-gate data acquisition scheme. It puts the original data sent to the computer in the order "direct first pulse - depth information pulse (or first pulse reflected by the water surface)" to guarantee the effective positioning distance of the system. It has the advantage of reducing the processing time of the computer, thus ensuring the real-time functioning of the system.
APA, Harvard, Vancouver, ISO, and other styles
5

Young, Chung-Ping. "Digital power metering manifold /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Cui, Qingguang. "Measuring data abstraction quality in multiresolution visualizations." Worcester, Mass. : Worcester Polytechnic Institute, 2007. http://www.wpi.edu/Pubs/ETD/Available/etd-041107-224152/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pons, Freixes Sergi. "Do-it-yourself instruments and data processing methods for developing marine citizen observatories." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/394078.

Full text
Abstract:
Water is the most important resource for life on planet Earth, covering more than 70% of its surface. The oceans hold more than 97% of the planet's total water and concentrate more than 99.5% of its living beings. A great number of ecosystems depend on the health of these oceans; their study and protection are necessary. Large datasets over long periods of time and wide geographical areas can be required to assess the health of aquatic ecosystems. The funding needed for data collection is considerable and budgets are limited, so it is important to look at new cost-effective ways of obtaining and processing marine environmental data. A feasible solution at present is to develop observational infrastructures that can significantly increase conventional sampling capabilities. In this study we propose to achieve this through the implementation of Citizen Observatories, based on volunteer participation. Citizen observatories are platforms that integrate the latest information technologies to digitally connect citizens, improving observation capabilities and supporting a new type of research known as Citizen Science. Citizen science has the potential to increase knowledge of the environment, and of aquatic ecosystems in particular, by engaging people with no specific scientific training to collect and analyze large data sets. We believe that citizen-science-based tools (open source software coupled with low-cost do-it-yourself hardware) can help to close the gap between science and citizens in the oceanographic field. As the public is actively engaged in the analysis of data, the research also provides a strong avenue for public education. The objective of this thesis is to demonstrate how open source software and low-cost do-it-yourself hardware can be effectively applied to oceanographic research and how they can develop into citizen science. We analyze four different scenarios where this idea is demonstrated: an example of using open source software for video analysis in lobster monitoring; a demonstration of similar video processing techniques running in situ on low-cost do-it-yourself hardware for submarine fauna monitoring; a study using open source machine learning software as a method to improve biological observations; and, last but not least, some preliminary proof-of-concept results on how manual water sampling could be replaced by low-cost do-it-yourself hardware with optical sensors.
APA, Harvard, Vancouver, ISO, and other styles
8

Tansley, Natalie Vanessa. "A methodology for measuring and monitoring IT risk." Thesis, Nelson Mandela Metropolitan University, 2007. http://hdl.handle.net/10948/772.

Full text
Abstract:
The primary objective of the research is to develop a methodology for monitoring and measuring IT risks, strictly focusing on internal controls. The research delivers a methodology whereby an organization can measure its system of internal controls, providing assurance that the risks are at an acceptable level. To achieve the primary objective, a number of secondary objectives were addressed: What are the drivers forcing organizations towards better corporate governance in managing risk? What is IT risk management, with a specific focus on operational risk? What is internal control, with a specific focus on COSO's internal control process? These were followed by an investigation of measurement methods, such as Balanced Scorecards, Critical Success Factors, Maturity Models, Key Performance Indicators and Key Goal Indicators, and an investigation of frameworks such as COBIT, COSO, ISO 17799, ITIL and BS 7799 and how they manage IT risk relating to internal control.
APA, Harvard, Vancouver, ISO, and other styles
9

Chapel, Brian Ernie. "Digital disk recorder for geophysics." Thesis, University of British Columbia, 1985. http://hdl.handle.net/2429/24592.

Full text
Abstract:
This thesis describes the design and testing of a floppy-disk-drive-based digital recorder. The device was originally built for a geomagnetic research project, but is also suitable for other phenomena with time scales from fractions of a second to approximately one day. The system is designed specifically to improve reliability for long-term observing programs and to enhance the efficiency of the subsequent data analysis procedures. Using an STD-Z80 bus microcomputer under the control of a Forth language program, the recorder stores digital data on removable 8-inch floppy disks. This thesis explicitly addresses the issue of cost and provides the necessary detail for reproduction of the device. A procedure is described for preparing the acquired data for analysis using computing facilities equipped with an appropriate disk reader. Also presented is a quantitative and qualitative evaluation of the recorder's performance when applied to both synthetic and natural signals. The latter include geomagnetically induced currents in power transmission lines.
APA, Harvard, Vancouver, ISO, and other styles
10

Michel, Holger [Verfasser]. "Integration of SRAM-FPGAs for Hardware Acceleration of a Data Processing Module for Space Instruments / Holger Michel." München : Verlag Dr. Hut, 2017. http://d-nb.info/1140977784/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Yalla, Veeraganesh. "OPTIMAL PHASE MEASURING PROFILOMETRY TECHNIQUES FOR STATIC AND DYNAMIC 3D DATA ACQUISITION." UKnowledge, 2006. http://uknowledge.uky.edu/gradschool_diss/348.

Full text
Abstract:
Phase Measuring Profilometry (PMP) is an important technique used in 3D data acquisition, and many variations of it exist in the research literature. The technique involves projecting phase-shifted versions of sinusoidal patterns of known frequency; the 3D information is obtained from the amount of phase deviation that the target object introduces in the captured patterns. Using patterns based on a single frequency requires projecting a large number of patterns to achieve minimal reconstruction errors. By using more than one frequency, that is, a multi-frequency approach, the error is reduced for the same total number of projected patterns as in the single-frequency case. The first major goal of our research work is to minimize the error in 3D reconstruction for a given scan time using multiple-frequency sine wave patterns. A mathematical model, based on stochastic analysis, is given to estimate the optimal frequency values and the number of phase-shift patterns. Experiments are conducted by implementing the mathematical model to estimate the optimal frequencies and the number of patterns projected for each frequency level used. The reduction in 3D reconstruction errors and the quality of the 3D data obtained show the validity of the proposed mathematical model. The second major goal of our research work is the implementation of a post-processing algorithm based on stereo correspondence matching adapted to structured light illumination. A composite pattern is created by combining multiple phase-shift patterns using principles from communication theory, and is a novel technique for obtaining real-time 3D depth information. The depth obtained by demodulating the captured composite patterns is generally noisy compared to the multi-pattern approach, so in order to obtain realistic 3D depth information we propose a post-processing algorithm based on dynamic programming. Two different communication theory principles, namely Amplitude Modulation (AM) and Double Side Band Suppressed Carrier (DSBSC), are used to create the composite patterns. As a result of this research work, we developed a series of low-cost structured light scanners based on the multi-frequency PMP technique and tested their accuracy in different 3D applications. Three such scanners with different camera systems have been delivered to Toyota for vehicle assembly line inspection; all the scanners use off-the-shelf components. Two more scanners, namely the single-fingerprint and the palmprint scanner, developed as part of a Department of Homeland Security grant, are in prototype and testing stages.
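For context, the core of the PMP technique summarized above can be written as a short phase-recovery routine. The sketch below is the standard N-step phase-shift formula, with each captured image modelled as I_n = A + B*cos(phi + 2*pi*n/N); it is not code from the dissertation, and the synthetic image sizes are assumptions for the example.

    import numpy as np

    def recover_phase(images):
        """N-step phase-shift recovery; images[n] = A + B*cos(phi + 2*pi*n/N)."""
        N = len(images)
        n = np.arange(N).reshape(-1, 1, 1)          # broadcast over the pixel axes
        num = np.sum(images * np.sin(2 * np.pi * n / N), axis=0)
        den = np.sum(images * np.cos(2 * np.pi * n / N), axis=0)
        return np.arctan2(-num, den)                # wrapped phase in (-pi, pi]

    # Synthetic check: four shifted patterns of a known phase map are recovered exactly
    phi_true = np.linspace(0, np.pi / 2, 64).reshape(8, 8)
    images = np.stack([5 + 2 * np.cos(phi_true + 2 * np.pi * k / 4) for k in range(4)])
    print(np.allclose(recover_phase(images), phi_true))   # True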
APA, Harvard, Vancouver, ISO, and other styles
12

Oller, Moreno Sergio. "Data processing for Life Sciences measurements with hyphenated Gas Chromatography-Ion Mobility Spectrometry." Doctoral thesis, Universitat de Barcelona, 2018. http://hdl.handle.net/10803/523539.

Full text
Abstract:
Recent progress in analytical chemistry instrumentation has increased the amount of data available for analysis. This progress has been accompanied by computational improvements that enable new possibilities for analyzing larger amounts of data. These two factors have made it possible to analyze more complex samples in multiple life science fields, such as biology, medicine, pharmacology, or food science. One of the techniques that has benefited from these improvements is Gas Chromatography - Ion Mobility Spectrometry (GC-IMS). This technique is useful for the detection of Volatile Organic Compounds (VOCs) in complex samples. Ion Mobility Spectrometry is an analytical technique for characterizing chemical substances based on the velocity of gas-phase ions in an electric field. It is able to detect trace levels of volatile chemicals, reaching ppb concentrations for some analytes. While the instrument has moderate selectivity, it is very fast: an ion mobility spectrum can be acquired in tens of milliseconds. As it operates at ambient pressure, it is found not only in laboratories but also on site, for screening applications; for instance, it is often used in airports for the detection of drugs and explosives. To enhance the selectivity of the IMS, especially for the analysis of complex samples, a gas chromatograph can be used for sample pre-separation at the expense of a longer analysis. While better instrumentation and more computational power are available, better algorithms are still needed to exploit and extract all the information present in the samples. In particular, GC-IMS has not received much attention compared to other analytical techniques. In this work we address several data analysis issues for GC-IMS. With respect to pre-processing, we explore several baseline estimation methods and suggest a variation of Asymmetric Least Squares, a popular baseline estimation technique, that is able to cope with signals that present large peaks or a large dynamic range. This baseline estimation method is also used on Gas Chromatography - Mass Spectrometry signals, as it suits both techniques. Furthermore, we characterize spectral misalignments in a study several months long, and propose an alignment method based on monotonic cubic splines for their correction. Based on the misalignment characterization we propose an optimal time span between consecutive calibrant samples. We then explore the use of Multivariate Curve Resolution (MCR) methods for the deconvolution of overlapped peaks and their extraction into pure components. We propose the use of a sliding window along the retention time axis to extract the pure components from smaller windows, tracking them across the windows. This approach is able to extract analytes with a lower response than MCR alone, that is, compounds with a low variance in the overall matrix. Finally, we apply some of these developments to real-world applications: a dataset for fraud prevention and quality control in the classification of olive oils, measured with GC-IMS, and data for biomarker discovery in prostate cancer, obtained by analyzing the headspace of urine samples with a GC-MS instrument.
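For reference, the sketch below is the standard Asymmetric Least Squares baseline of Eilers and Boelens, i.e. the technique on which the thesis builds its variation, not the modified version the thesis proposes; the smoothing and asymmetry parameters are typical defaults, not values taken from the thesis.

    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
        """Asymmetric Least Squares baseline: penalized least squares with
        asymmetric weights so the baseline hugs the lower envelope of the signal."""
        L = len(y)
        D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
        P = lam * D.dot(D.T)                      # second-difference roughness penalty
        w = np.ones(L)
        for _ in range(n_iter):
            W = sparse.spdiags(w, 0, L, L)
            z = spsolve(W + P, w * y)
            w = p * (y > z) + (1 - p) * (y < z)   # points above the baseline get small weight
        return z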
APA, Harvard, Vancouver, ISO, and other styles
13

Siddiqui, Feroz Ahmed. "Understanding and measuring systems flexibility : a case of object-oriented development environment." Thesis, Brunel University, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.268857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Morrison, Ronald. "Evaluating the Effectiveness of Certain Metrics in Measuring the Quality of End User Documentation." PDXScholar, 1993. https://pdxscholar.library.pdx.edu/open_access_etds/1326.

Full text
Abstract:
Traditional methods of evaluating quality in computer end-user documentation have been subjective in nature and have not been widely used in practice. Attempts to quantify quality and to define the essential features of quality more narrowly have been limited, leaving the issue of quality largely up to the writer of the user manual. Quantifiable measures from the literature, especially Velotta (1992) and Brockman (1990), have been assembled into a set of uniformly weighted metrics for the measurement of document quality. This measure has been applied to the end-user documentation of eighty-two personal computer packages. End-user documentation is defined in terms of paper documents only. The research examined only those manuals titled "user guide," "training manual," "tutorial," or similar, across six categories of software: applications, graphics, utilities, spreadsheets, databases, and word processing. Following the recommendation of Duffy (1985), a panel of experts was assembled and asked to evaluate several of the 82 end-user manuals in order to determine what correlation exists between the set of metrics and the subjective opinion of experts. The eighty-two documents in the sample were scored on the metrics using a convenient random sampling technique, selected based on the consistency of the material in commercial software manuals and the methods of Velotta (1992). Data from the metrics suggest that there is little correlation between quality and category, price, page length, version number, or experience. On a scale of 0.0 to 1.0, the minimum total score from the metrics was .2, the maximum score .83, the mean total score .70, and the median .697, with a standard deviation of .093. The distribution is slightly skewed and leptokurtic (steeper than a normal curve). The metrics further suggest a declining score as the integration of sentences into chapters and chapters into the document progresses. Of the metrics, two consistently had lower scores: those relating to the transition between sections of the document, and those relating to the reference tools provided. Though not conclusive, the analysis of data from the panel of experts compared with the model results suggests only a moderate correlation. However, by varying the weighting scheme it is possible to improve model performance, essentially by "tuning" the model to match the sample data from the panelists. Further research would be required to verify whether these weights have more global application.
APA, Harvard, Vancouver, ISO, and other styles
15

Saini, Shivam. "Spark on Kubernetes using HopsFS as a backing store : Measuring performance of Spark with HopsFS for storing and retrieving shuffle files while running on Kubernetes." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-285561.

Full text
Abstract:
Data is a raw list of facts and details, such as numbers, words, measurements or observations, that is not useful by itself. Data processing is a technique that helps to process the data in order to extract useful information from it. Today, the world produces huge amounts of data that cannot be processed using traditional methods. Apache Spark (Spark) is an open-source, distributed, general-purpose cluster computing framework for large-scale data processing. To fulfill its task, Spark uses a cluster of machines to process the data in parallel. The external shuffle service is a distributed component of an Apache Spark cluster that provides resilience in case of a machine failure. A cluster manager helps Spark manage the cluster of machines and provides Spark with the resources required to run the application. Kubernetes is a new cluster manager that enables Spark to run in a containerized environment. However, running the external shuffle service is not possible while running Spark with Kubernetes as the resource manager. This strongly impacts the performance of Spark applications because of the tasks that fail when machines fail. As a solution to this problem, the open source Spark community has developed a plugin that can provide resiliency similar to that of the external shuffle service. When used with Spark applications, the plugin asynchronously backs up the data to external storage. In order not to compromise Spark application performance, it is important that the external storage provides Spark with minimal latency. HopsFS is a next-generation distribution of the Hadoop Distributed File System (HDFS) that provides special support for small files (<64 KB) by storing them in a NewSQL database, enabling lower client latencies. The thesis shows that HopsFS provides 16% higher performance to Spark applications for small files compared to larger ones. The work also shows that using the plugin to back up Spark data on HopsFS can reduce the total execution time of Spark applications by 20%-30% compared to recalculating tasks in case of a node failure.
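For orientation, the sketch below shows how a PySpark session pointed at a Kubernetes master might be configured; the API-server address, container image and shuffle-plugin class are placeholders for illustration, not values taken from the thesis.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("k8s://https://kubernetes.example.com:6443")   # placeholder API server
        .appName("shuffle-backup-demo")
        .config("spark.kubernetes.container.image", "example/spark:3.0.0")  # placeholder image
        .config("spark.executor.instances", "4")
        # Shuffle storage plugin that asynchronously backs shuffle files up to an
        # external store such as HopsFS; the class name below is a placeholder.
        .config("spark.shuffle.sort.io.plugin.class",
                "org.example.spark.shuffle.BackupShuffleDataIO")
        .getOrCreate()
    )

    rdd = spark.sparkContext.parallelize(range(1_000_000)).map(lambda x: (x % 100, 1))
    print(rdd.reduceByKey(lambda a, b: a + b).count())   # forces a shuffle stage
    spark.stop()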
APA, Harvard, Vancouver, ISO, and other styles
16

Faria, Vinicius Tasca. "Measuring the impacts of database processing utilization in innovation processes on companies : exploratory study with focus on service industries." Repositório Institucional da UFABC, 2017.

Find full text
Abstract:
Master's dissertation - Universidade Federal do ABC, Programa de Pós-Graduação em Engenharia e Gestão da Inovação, Santo André, 2017. Every productive activity generates some kind of raw data. It may be well structured or not, and exist on paper or digitally, but some kind of information is always present to some degree in any human endeavor. Companies often use data extracted from their activities for their own benefit, but it is not difficult to find examples of underused information that could be extremely valuable to them. Most companies, including larger ones, are not prepared to handle massive amounts of data; they end up using traditional statistical tools and ignore that this mass of data could contain the information needed to change the company's direction. Many information technology tools could be used to transform raw data into valuable information, such as data mining, big data, and machine learning, among others. This data refinement process can lead to useful innovations, which provide a competitive edge. This was confirmed by case studies of companies that have achieved some sort of advantage with data processing methods. In this work, those cases are presented and compared with a pre-established methodology, taking into account the type of processing and the degree of innovation of each presented case. To this end, a framework was created using two rankings, one for the complexity level of the data processing used by each company and the other for the innovation level, based on three different metrics (traditional measurements, the ten types of innovation, and classification as an innovative enterprise). These rankings led to a better understanding of how data processing has played a crucial role in creating innovation for the studied companies and in their growth during the studied period.
APA, Harvard, Vancouver, ISO, and other styles
17

Igboayaka, Jane-Vivian Chinelo Ezinne. "Using Social Media Networks for Measuring Consumer Confidence: Problems, Issues and Prospects." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/32341.

Full text
Abstract:
This research examines the confluence of consumers' use of social media to share information with the ever-present need for innovative research that yields insight into consumers' economic decisions. Social media networks have become ubiquitous in the new millennium. These networks, including, among others, Facebook, Twitter, blogs, and Reddit, are brimming with conversations on an expansive array of topics between people, private and public organizations, governments and global institutions. Preliminary findings from initial research confirm the existence of online conversations and posts related to matters of personal finance and consumers' economic outlook. Meanwhile, the Consumer Confidence Index (CCI) continues to make headline news. The issue of consumer confidence (or sentiment) in anticipating future economic activity generates significant interest from major players in the news media industry, who scrutinize its every detail and report its implications for key players in the economy. Though the CCI originated in the United States in 1946, variants of the survey are now used to track and measure consumer confidence in nations worldwide. Given that the CCI is a quantified representation of consumer sentiment, the level of confidence consumers have in the economy could be deduced by tracking the sentiments or opinions they express in social media posts. Systematic study of these posts could then be transformed into insights that improve the accuracy of an index like the CCI. Herein lies the focus of the current research: to analyze the attributes of data from social media posts in order to assess their capacity to generate insights that are novel and/or complementary to traditional CCI methods. The link between data gained from social media and the survey-based CCI is perhaps not an obvious one, but our research uses a data extraction tool called NetBase Insight Workbench to mine data from the social media networks and then applies natural language processing to analyze the social media content. KH Coder software is also used to perform a set of statistical analyses on samples of social media posts to examine the co-occurrence and clustering of words. The findings are used to expose the strengths and weaknesses of the data and to assess the validity and cohesion of the NetBase data extraction tool and its suitability for future research. In conclusion, our findings support the analysis of opinions expressed in social media posts as a complement to traditional survey-based CCI approaches. They also identify a key weakness with regard to the noisiness of the data. Although this could be attributed to the modeling error of the data mining tool, there is room for improvement in the area of association: discerning the context and intention of posts in online conversations.
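As a toy illustration of the underlying idea of scoring posts' sentiment and aggregating it into an index, here is a minimal sketch using NLTK's VADER analyzer; the thesis itself relies on NetBase Insight Workbench and KH Coder rather than this code, and the example posts and rescaling are invented.

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)
    sia = SentimentIntensityAnalyzer()

    posts = [
        "Just got a raise, feeling good about spending this year!",
        "Prices keep climbing, I am cutting back on everything.",
        "Thinking about finally buying a new car.",
    ]
    compound = [sia.polarity_scores(p)["compound"] for p in posts]   # each in [-1, 1]
    index = 50 * (1 + sum(compound) / len(compound))                 # naive rescaling to 0-100
    print(round(index, 1))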
APA, Harvard, Vancouver, ISO, and other styles
18

Wollstadt, Patricia [Verfasser], Michael [Akademischer Betreuer] Wibral, Matthias [Gutachter] Kaschube, and Michael [Gutachter] Wibral. "Measuring information processing in neural data: The application of transfer entropy in neuroscience / Patricia Wollstadt ; Gutachter: Matthias Kaschube, Michael Wibral ; Betreuer: Michael Wibral." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2018. http://d-nb.info/1153572370/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Karbaschi, Arash. "Dynamic pattern recognition and data storage using localized holographic recording." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/24753.

Full text
Thesis (Ph.D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2008. Committee Chair: Adibi, Ali; Committee Members: Altunbasak, Yucel; Callen Jr, William R; Gaylord, Thomas K; McLaughlin, Steven W; Trebino, Rick.
APA, Harvard, Vancouver, ISO, and other styles
20

Copley, Charles Judd. "Temperature dependence of the HartRAO pointing model." Thesis, Rhodes University, 2008. http://hdl.handle.net/10962/d1005277.

Full text
Abstract:
This thesis investigates control aspects of the Hartebeesthoek Radio Astronomy Observatory (HartRAO) antenna. The installation of a new 22 GHz receiver has required the pointing accuracy to be improved to less than 4 mdeg. The effect of thermal conditions on the HartRAO antenna pointing offset is investigated using a variety of modelling techniques, including simple geometric modelling, neural networks and Principal Component Analysis (PCA). Convincing results were obtained for the Declination pointing offset, where applying certain model predictions to observations reduced the Declination pointing offset from 5.5 mdeg to 3.2 mdeg (≈50%). The Right Ascension pointing model was considerably less convincing, with an improvement from approximately 5.5 mdeg to 4.5 mdeg (≈20%) in the Right Ascension pointing offset. The Declination pointing offset can be modelled sufficiently well to reduce it to less than 4 mdeg, but further investigation of the underlying causes is required for the Right Ascension pointing offset.
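As a loose illustration of one of the modelling routes mentioned (principal components of temperature readings feeding a linear pointing model), here is a minimal sketch on synthetic data; the sensor count and coefficients are invented and none of it reflects actual HartRAO measurements.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    temps = rng.normal(20, 5, size=(500, 8))          # 8 hypothetical structure temperature sensors
    offset = 0.4 * temps[:, 0] - 0.2 * temps[:, 3] + rng.normal(0, 0.5, 500)

    pcs = PCA(n_components=3).fit_transform(temps)    # dominant thermal modes
    model = LinearRegression().fit(pcs, offset)
    print("R^2 on training data:", round(model.score(pcs, offset), 2))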
APA, Harvard, Vancouver, ISO, and other styles
21

KUAHARA, LILIAN T. "Desenvolvimento de uma metodologia de calibração "in situ" de medidores de atividade." Repositório Institucional do IPEN, 2017. http://repositorio.ipen.br:8080/xmlui/handle/123456789/28048.

Full text
Abstract:
The safe and efficient operation of a nuclear medicine service depends, among other factors, on a complete quality control program, particularly for the instruments that measure radionuclide activity, the activity meters. A complete quality control program must include the calibration of all measuring instruments used in the procedure. In Brazil, however, the current standard establishing radiation protection requirements for nuclear medicine services does not yet include calibration of the activity meter. Considering that these instruments are, for several reasons, difficult to remove and ship to a calibration service, the purpose of this work was to develop an in situ calibration methodology for activity meters, for the main radiopharmaceutical currently in use, 99mTc. The influence parameters that must be taken into account during calibration were defined, as well as the transport logistics for the radiopharmaceuticals. A quality control program was applied to the activity meters of the Instrument Calibration Laboratory (LCI). Three different calibration methodologies were developed, taking into account the available logistics and the origin of the reference source. The first can be applied when the LCI sends a reference source to the nuclear medicine service. In the second, the service sends a previously measured source to the LCI, which determines its true activity. The third methodology was applied to the calibration of the activity meters of IPEN's radiopharmaceutical production sector; in this case the reference source was sent to the LCI after a prior measurement by the production sector. It was possible to apply the methodologies to instruments belonging to clinics and to the production sector, and the calibration coefficients obtained differed from one instrument to another. The largest variation found was 5%, meaning that the reading of that activity meter is 5% lower than the activity that must be administered to the patient. It was also verified that changing containers leaves a residue that has not been taken into account in clinical measurements and can add a difference of up to 3% to the measurements. Master's dissertation in Nuclear Technology - Instituto de Pesquisas Energéticas e Nucleares (IPEN-CNEN/SP).
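As a small worked example of one ingredient of such a comparison, the snippet below decay-corrects a 99mTc reference activity to the measurement time; the activity values, meter reading and elapsed time are assumed for illustration and are not taken from the dissertation.

    import math

    half_life_h = 6.01                 # 99mTc half-life in hours
    a0_mbq = 500.0                     # activity stated at the reference time (assumed)
    elapsed_h = 3.5                    # hours between reference time and measurement (assumed)

    a_now = a0_mbq * math.exp(-math.log(2) * elapsed_h / half_life_h)
    calibration_coefficient = a_now / 480.0   # true activity over an assumed meter reading of 480 MBq
    print(round(a_now, 1), round(calibration_coefficient, 3))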
APA, Harvard, Vancouver, ISO, and other styles
22

Sandrock, Trudie. "Multi-label feature selection with application to musical instrument recognition." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019/11071.

Full text
Abstract:
Thesis (PhD)--Stellenbosch University, 2013. An area of data mining and statistics that is currently receiving considerable attention is the field of multi-label learning. Problems in this field are concerned with scenarios where each data case can be associated with a set of labels instead of only one. In this thesis, we review the field of multi-label learning and discuss the lack of suitable benchmark data available for evaluating multi-label algorithms. We propose a technique for simulating multi-label data, which allows good control over different data characteristics and which could be useful for conducting comparative studies in the multi-label field. We also discuss the explosion in data in recent years, and highlight the need for some form of dimension reduction in order to alleviate some of the challenges presented by working with large datasets. Feature (or variable) selection is one way of achieving dimension reduction, and after a brief discussion of different feature selection techniques, we propose a new technique for feature selection in a multi-label context, based on the concept of independent probes. This technique is empirically evaluated using simulated multi-label data and is shown to achieve, with a reduced set of features, classification accuracy similar to that achieved with the full set of features. The proposed feature selection technique is then applied to the field of music information retrieval (MIR), specifically the problem of musical instrument recognition. An overview of the field of MIR is given, with particular emphasis on the instrument recognition problem. The goal of (polyphonic) musical instrument recognition is to automatically identify the instruments playing simultaneously in an audio clip, which is not a simple task. We specifically consider the case of duets, in other words where two instruments are playing simultaneously, and approach the problem as a multi-label classification one. In our empirical study, we illustrate the complexity of musical instrument data and again show that our proposed feature selection technique is effective in identifying relevant features and thereby reducing the complexity of the dataset without negatively impacting performance.
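As a rough sketch of the general "independent probes" idea, not the exact procedure proposed in the thesis: append pure-noise probe features to simulated multi-label data, rank all features by importance, and keep only the real features that outrank the best probe. The dataset sizes and the random-forest ranking are arbitrary choices for the example.

    import numpy as np
    from sklearn.datasets import make_multilabel_classification
    from sklearn.ensemble import RandomForestClassifier

    X, Y = make_multilabel_classification(n_samples=300, n_features=20,
                                          n_classes=4, random_state=0)
    rng = np.random.default_rng(0)
    probes = rng.normal(size=(X.shape[0], 10))        # pure-noise probe features
    X_aug = np.hstack([X, probes])

    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_aug, Y)
    importances = forest.feature_importances_
    threshold = importances[X.shape[1]:].max()        # best-scoring probe
    selected = np.where(importances[:X.shape[1]] > threshold)[0]
    print("features kept:", selected)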
APA, Harvard, Vancouver, ISO, and other styles
23

Teoh, Pek Loo. "A study of single laser interferometry-based sensing and measuring technique in robot manipulator control and guidance. Volume 1." Monash University, Dept. of Mechanical Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/9565.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Brinkman, Karen L. "Design of a microcomputer-based open heart surgery patient monitor." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/76031.

Full text
Abstract:
A patient monitor device for use during open heart surgery has been designed and constructed. The device uses a VIC 20 microcomputer along with some additional circuitry to monitor 3 separate functions. The first patient variable monitored is the blood flow rate through the extracorporeal blood circuit during surgery. The device also continuously monitors and displays 6 separate temperatures. Finally, 3 individual timers are monitored and displayed with the device. Both the hardware and the software used in the design are fully described.
APA, Harvard, Vancouver, ISO, and other styles
25

MAIA, ANA F. "Padronização de feixes e metodologia dosimétrica em tomografia computadorizada." Repositório Institucional do IPEN, 2005. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11291.

Full text
Doctoral thesis - Instituto de Pesquisas Energéticas e Nucleares (IPEN-CNEN/SP). Supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), grant 01/06837-2.
APA, Harvard, Vancouver, ISO, and other styles
26

Eter, Walid. "Système de suivi des tempêtes de verglas en temps réel = Analysis of real time icing events /." Thèse, Chicoutimi : Université du Québec à Chicoutimi, 2003. http://theses.uqac.ca.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

TAKEDA, MAURO N. "Aplicação do método de Monte Carlo no estudo da padronização de radionuclídeos com esquema de desintegração complexos em sistema de coincidencias 4-pi-beta-gama." Repositório Institucional do IPEN, 2006. http://repositorio.ipen.br:8080/xmlui/handle/123456789/11474.

Full text
Doctoral thesis - Instituto de Pesquisas Energéticas e Nucleares (IPEN-CNEN/SP).
APA, Harvard, Vancouver, ISO, and other styles
28

Hawkins, Kevin Michael. "Development of an automated anesthesia system for the stabilization of physiological parameters in rodents." Link to electronic thesis, 2003. http://www.wpi.edu/Pubs/ETD/Available/etd-0424103-105500/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Waller, Annalu. "Implementing linguistic text anticipation in a writing device for the disabled." Master's thesis, University of Cape Town, 1989. http://hdl.handle.net/11427/26608.

Full text
Abstract:
The advent of the microcomputer has provided the severely handicapped with the means to create text. Instead of using a keyboard, the disabled typist is able to scan and select linguistic items with an appropriate input switch. The resulting communication rate is, however, prohibitively slow for writing and impractical for conversation. A variety of techniques is used to improve this rate and range from static letter matrices to more sophisticated methods in which words and phrases are anticipated. Although many anticipatory methods claim to be linguistically based, most, if not all, depend solely on letter and word frequency statistics. A series of phonological rules can be used to anticipate the letter structure of most English words. This linguistically based system reflects a degree of "intelligence" not present in other anticipatory writing systems. To evaluate and compare the new system with several existing techniques in practice, a programmable evaluation system has been developed on an IBM-compatible personal computer using the Artificial Intelligence language, LISP. Different communication strategies are transcribed into rulebases which serve as input to the software. The core program then executes the particular system under consideration. Input text can be processed in either manual or simulation mode and an evaluation report is generated when the session ends. The characteristics of efficient communication systems are introduced as a basis for this dissertation, after which the development and application of a linguistic anticipatory writing system is described. The design of the evaluation software is documented and the successful implementation of the various communication systems is discussed.
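For contrast with the phonologically based approach described above, here is a minimal sketch of the purely frequency-based letter anticipation that the thesis says most existing systems rely on; the toy corpus is invented, and the thesis's phonological rule base is not reproduced here.

    from collections import Counter, defaultdict

    corpus = "the quick brown fox jumps over the lazy dog the end"
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def anticipate(prefix, k=3):
        """Return up to k most frequent next letters after the last typed character."""
        return [letter for letter, _ in bigrams[prefix[-1]].most_common(k)]

    print(anticipate("th"))   # ['e'] for this toy corpus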
APA, Harvard, Vancouver, ISO, and other styles
30

Baghi, Quentin. "Optimisation de l’analyse de données de la mission spatiale MICROSCOPE pour le test du principe d’équivalence et d’autres applications." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLEO003/document.

Full text
Abstract:
The Equivalence Principle (EP) is a cornerstone of General Relativity, and it is called into question by attempts to build more comprehensive theories in fundamental physics, such as string theories. The MICROSCOPE space mission aims at testing this principle through the universality of free fall, with a target precision of 10^-15, two orders of magnitude better than current on-ground experiments. The satellite carries two electrostatic accelerometers on board, each including two test-masses. The masses of the test accelerometer are made of different materials, whereas the masses of the reference accelerometer have the same composition. The objective is to monitor the free fall of the test-masses in the gravitational field of the Earth by measuring their differential acceleration with an expected precision of 10^-12 m s^-2 Hz^-1/2 in the bandwidth of interest. An EP violation would result in a characteristic periodic difference between the two accelerations. However, various perturbations are also measured because of the high sensitivity of the instrument. Some of them are well defined, e.g. gravitational and inertial gradient disturbances, but others are unmodelled, such as random noise and acceleration peaks due to the satellite environment, which can lead to saturations of the measurement or to data gaps. This experimental context requires suitable tools for the data analysis, applicable in the general framework of linear regression analysis of time series. We first study the statistical detection and estimation of unknown harmonic disturbances in a least-squares framework, in the presence of colored noise of unknown PSD. We show that with this technique the projection of the harmonic disturbances onto the WEP violation signal can be kept at an acceptable level. Secondly, we analyse the impact of data unavailability on the performance of the EP test. We show that under the worst-case before-flight hypothesis (about 300 gaps of 0.5 second per orbit), the uncertainty of ordinary least squares is increased by a factor of 35 to 60. To counterbalance this effect, a linear regression method based on an autoregressive estimation of the noise is developed, which allows a proper decorrelation of the available observations without direct computation and inversion of the covariance matrix. The variance of the resulting estimator is close to the optimal value, allowing the EP test to be performed at the expected level even with very frequent data interruptions. In addition, we implement a method to characterize the noise PSD more accurately when data are missing, with no prior model on the noise. The approach is based on a modified expectation-maximization (EM) algorithm with a smoothness assumption on the PSD, and uses statistical imputation of the missing data. We obtain a PSD estimate with an error below 10^-12 m s^-2 Hz^-1/2. Finally, we widen the applications of the data analysis by studying the feasibility of measuring the Earth's gravitational gradient with MICROSCOPE data. We assess the ability of this set-up to decipher the large-scale geometry of the geopotential. By simulating the signals obtained from different models of the Earth's deep mantle and comparing them to the expected noise level, we show that their features can be distinguished.
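For readers unfamiliar with the technique, the sketch below illustrates the general idea of regressing a periodic signal out of gapped, colored-noise data after an autoregressive pre-whitening step. It is a deliberately simplified stand-in for the thesis method: the AR(1) noise model, the sampling rate, the test frequency, the gap rate and the use of only consecutive available sample pairs are all assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)
n = 20000
t = np.arange(n) * 0.25                       # assumed 4 Hz sampling
f_ep = 1.8e-3                                  # assumed test frequency in Hz
design = np.column_stack([np.cos(2 * np.pi * f_ep * t), np.sin(2 * np.pi * f_ep * t)])
true_amp = np.array([1.0, 0.0])

# AR(1) colored noise and randomly scattered missing samples
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.95 * noise[i - 1] + rng.standard_normal()
y = design @ true_amp + noise
mask = rng.random(n) > 0.02                    # ~2% of samples missing (assumed)

# Ordinary least squares on the available samples only
beta_ols, *_ = np.linalg.lstsq(design[mask], y[mask], rcond=None)

# Crude pre-whitening: estimate an AR(1) coefficient from the available data,
# then difference consecutive available samples (x_k - phi * x_{k-1})
avail = np.flatnonzero(mask)
pairs = avail[np.diff(avail, prepend=avail[0]) == 1]   # samples whose predecessor is available
x0, x1 = y[pairs - 1], y[pairs]
phi = float(np.dot(x0, x1) / np.dot(x0, x0))
Aw = design[pairs] - phi * design[pairs - 1]
bw = y[pairs] - phi * y[pairs - 1]
beta_w, *_ = np.linalg.lstsq(Aw, bw, rcond=None)

print("true:", true_amp, "OLS:", np.round(beta_ols, 3), "whitened:", np.round(beta_w, 3))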
APA, Harvard, Vancouver, ISO, and other styles
31

MOREIRA, GREGORI de A. "Métodos para obtenção da altura da camada limite planetária a partir de dados de Lidar." reponame:Repositório Institucional do IPEN, 2013. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10564.

Full text
Abstract:
Dissertação (Mestrado), IPEN/D, Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/SP). The repository record contains only deposit metadata; no abstract is provided.
APA, Harvard, Vancouver, ISO, and other styles
32

RICHARD, JOEL. "Application de methodes de traitements numeriques de signaux a la detection, compression et reconnaissance d'evenements d'origines sismiques dans une station autonome de type sismographe fond de mer." Rennes 1, 1988. http://www.theses.fr/1988REN10121.

Full text
Abstract:
The operating conditions of ocean-bottom seismographs require the signals to be conditioned in order to limit the information rate during recording or transmission. Three methods, derived from the Fourier transform, the Walsh transform and autoregressive modelling, are examined and tested.
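The autoregressive approach mentioned above lends itself to a brief illustration: predicting each sample from the previous ones and storing only the quantized prediction error reduces the data rate, because the residuals have far less variance than the raw trace. The sketch below is not the thesis algorithm; the synthetic signal, the AR order and the quantization step are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n = 4096
sig = np.cumsum(rng.standard_normal(n)) * 0.1 + np.sin(2 * np.pi * 5 * np.arange(n) / 200.0)

# Fit AR(2) prediction coefficients by least squares on past samples
X = np.column_stack([sig[1:-1], sig[:-2]])
a, *_ = np.linalg.lstsq(X, sig[2:], rcond=None)

# Closed-loop (DPCM-style) coding: store only the quantized prediction error
q = 0.01                                       # assumed quantization step
rec = [sig[0], sig[1]]
resid_q = []
for k in range(2, n):
    pred = a[0] * rec[k - 1] + a[1] * rec[k - 2]
    r = np.round((sig[k] - pred) / q) * q      # what would actually be stored/transmitted
    resid_q.append(r)
    rec.append(pred + r)
rec = np.array(rec)
resid_q = np.array(resid_q)

print("raw sample variance      :", round(float(sig.var()), 4))
print("prediction residual var  :", round(float(resid_q.var()), 6))
print("max reconstruction error :", round(float(np.max(np.abs(rec - sig))), 6))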
APA, Harvard, Vancouver, ISO, and other styles
33

Pajgrt, Michal. "Programové vybavení pro komunikaci a nastavení jednotky pro sběr dat JSD600." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412780.

Full text
Abstract:
This diploma thesis describes the JSD600 unit, a device for mathematical processing, measurement and data recording, intended mainly for industrial metering of water-steam energy delivery. The first part of the work covers the basics of the JSD600 unit, detailing the main structures and the principles used to store the relevant data. The next part summarizes basic facts from the field of industrial liquid-flow and energy measurement and describes the states of water steam; this background is needed to understand the problem. It is followed by a complete list of the communication datagrams used to communicate with the JSD600 unit, including a description of the main packet structure. The second part of the thesis describes the unit setup application, including implementation details of its most interesting parts.
APA, Harvard, Vancouver, ISO, and other styles
34

LOUAHALA, SAM. "Signatures spectrales de roches en milieu tempere : valeurs reelles et valeurs percues par le satellite." Paris 7, 1988. http://www.theses.fr/1988PA077109.

Full text
Abstract:
Automated analysis of multispectral remote-sensing data for mapping in Provence, based on spectroradiometric measurements that give the spectral characteristics of natural rock surfaces. The role of surface roughness in reflectance measurements is demonstrated from measurements made on pebble beaches of different grain sizes and is confirmed by numerical simulation. A comparison between ground and satellite measurements is carried out to evaluate the atmospheric corrections required for small targets. A method for reducing the variations in target illumination is tested. Improvements in spectral resolution cannot be considered without an improvement in spatial resolution.
APA, Harvard, Vancouver, ISO, and other styles
35

Quadros, Thiago de. "Development and evaluation of an elderly fall detection system based on a wearable device located at wrist." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2619.

Full text
Abstract:
Falls among the elderly are a worldwide health problem. Every year, about 30% of people aged 65 or older are victims of fall events. The consequences of a fall may be physiological (e.g. bone fractures, muscular injuries) and psychological, including the loss of self-confidence through fear of falling, which leads to new falls. A solution to this problem combines preventive actions (e.g. adapting furniture) with fall detection systems, which can alert family members and emergency medical services. Since the response time for help is related to the severity of the fall's consequences, such systems must offer high accuracy and real-time fall detection. Although there are many fall detection solutions in the literature (most of them based on wearable devices), few are related to wrist-worn devices, mainly because of the challenges this configuration poses. Considering the wrist a comfortable, discreet and acceptable place for an elderly wearable device (less associated with the stigma of using a medical device), this work proposes the development and evaluation of a fall detection solution based on this configuration. Different sensors (accelerometer, gyroscope and magnetometer) were combined with different algorithms, based on threshold and machine-learning methods, in order to define the best signals and approach for elderly fall detection. These methods considered acceleration, velocity and displacement information, relating them to wrist spatial orientation and allowing the calculation of the vertical components of each movement. For the algorithms' training and evaluation, two different protocols were employed: one involving 2 volunteers (both male, ages 27 and 31) performing a total of 80 fall and 80 non-fall simulated events, and the other involving 22 volunteers (14 male, 8 female, mean age 25.2 ± 4.7) performing a total of 396 fall and 396 non-fall simulated events. An exhaustive evaluation of different signals and configuration parameters was performed for each method. The best threshold-based algorithm employed the vertical acceleration and total velocity signals, achieving 95.8% sensitivity and 86.5% specificity. The best machine-learning algorithm was based on the K-Nearest Neighbors method, employing vertical acceleration, velocity and displacement information combined with spatial orientation angles: 100% sensitivity and 97.9% specificity. These results emphasize the relevance of machine-learning algorithms over traditional threshold-based algorithms for wrist-worn fall detection systems, and suggest the best approach for new developments of similar wearable fall detectors.
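A minimal sketch of the machine-learning branch described above: a K-Nearest Neighbors classifier over wrist-movement features. The feature values are synthetic stand-ins rather than the thesis data; only the feature names (vertical acceleration, vertical velocity, vertical displacement, orientation angle) follow the abstract.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n = 400
# columns: peak vertical accel (g), peak vertical velocity (m/s),
#          vertical displacement (m), wrist orientation change (deg) -- all invented
falls = np.column_stack([rng.normal(3.0, 0.8, n), rng.normal(2.0, 0.5, n),
                         rng.normal(0.8, 0.2, n), rng.normal(70, 15, n)])
non_falls = np.column_stack([rng.normal(1.2, 0.5, n), rng.normal(0.7, 0.3, n),
                             rng.normal(0.2, 0.1, n), rng.normal(20, 10, n)])
X = np.vstack([falls, non_falls])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
pred = clf.predict(X_te)

sensitivity = np.mean(pred[y_te == 1] == 1)    # recall on fall events
specificity = np.mean(pred[y_te == 0] == 0)
print(f"sensitivity={sensitivity:.3f}  specificity={specificity:.3f}")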
APA, Harvard, Vancouver, ISO, and other styles
36

Castano, Antoine. "Methode d'analyse des cotes de fabrication." Paris 6, 1988. http://www.theses.fr/1988PA066123.

Full text
Abstract:
A model defining the distance between two surfaces as a random function. This model makes it possible to analyse the behaviour of the workpiece in its machining fixture, and to exploit data from three-dimensional coordinate measuring machines as well as statistical process control. A general method for determining manufacturing dimensions is derived from it, with applications to production engineering, computer-aided manufacturing, flexible manufacturing systems and industrial computing.
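The idea of treating the distance between two surfaces as a random quantity can be illustrated with a short Monte Carlo propagation through a dimension chain; the distributions and the functional requirement below are invented for the example and are not taken from the thesis.

import numpy as np

rng = np.random.default_rng(7)
n = 100_000
# two manufacturing dimensions in a chain (mm), each a random function of the process
d1 = rng.normal(50.00, 0.02, n)    # surface A to B
d2 = rng.normal(20.00, 0.015, n)   # surface B to C
gap = d1 - d2                       # resulting condition: surface A to C

lo, hi = 29.90, 30.10               # assumed functional requirement on the gap (mm)
print("mean gap:", round(float(gap.mean()), 4), "mm")
print("std dev :", round(float(gap.std()), 4), "mm")
print("fraction outside requirement:", float(np.mean((gap < lo) | (gap > hi))))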
APA, Harvard, Vancouver, ISO, and other styles
37

Zlotnicki, Jacques. "Sur les effets volcanomagnetiques et tectonomagnetiques." Paris 7, 1987. http://www.theses.fr/1987PA077296.

Full text
Abstract:
A study of the Earth's magnetic field on volcanic edifices and in regions subject to seismotectonic activity. Three research themes are developed: the acquisition of field measurements, experimental measurements carried out in the laboratory, and numerical simulations. The field study covers results obtained on the volcanoes of La Soufrière in Guadeloupe, Montagne Pelée in Martinique and Piton de la Fournaise on Réunion.
APA, Harvard, Vancouver, ISO, and other styles
38

Christoforou, Georges. "Conception de préamplificateurs intégrés pour fonctionnement à basse température et sous rayonnement intense." Université Joseph Fourier (Grenoble), 1998. http://www.theses.fr/1998GRE10031.

Full text
Abstract:
The large number of acquisition channels for the signals from the electromagnetic calorimeter of the ATLAS detector (LHC machine) poses a cabling problem, and solutions placing the front-end part of the acquisition electronics in the same environment as the cold detection element have been considered. The front-end electronics must therefore be radiation hard (2×10^14 n/cm², 0.5 Mrad), operate at the temperature of liquid argon (89 K), have a low noise level and a non-linearity below 1%, consume little power and be fast (40 MHz). Within this project we explored the possibilities offered by different technologies. We selected GaAs technologies, which withstand radiation and operate down to cryogenic temperatures. By means of low-temperature characterizations we showed that GaAs technologies are capable of operating in such an environment. The amplifiers designed show improved performance when operated at low temperature (lower noise, lower dissipated power, higher gain), meeting the constraints imposed by calorimetry in ATLAS (low noise level, low dissipated power, large output dynamic range and good integral non-linearity), but they are not yet capable of ensuring a sufficient level of manufacturing reliability. We also address the problem of simulating MESFETs at low temperature; as models were lacking, we used SPICE parameters extracted at liquid-nitrogen temperature for the simulation. Finally, we examined the noise simulation of analogue circuits in more depth and highlighted the existing problems as well as the precautions to be taken to make SPICE noise simulation more reliable.
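One of the motivations stated above, the lower noise obtained by cooling the front-end electronics, can be illustrated with the standard Johnson-noise formula; the sketch below is a side calculation with an assumed source resistance, not a result from the thesis.

import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
R = 1_000.0              # assumed source resistance, ohms

def noise_density_nv_per_rthz(temperature_k):
    # thermal noise voltage density sqrt(4*k_B*T*R), expressed in nV/sqrt(Hz)
    return math.sqrt(4 * k_B * temperature_k * R) * 1e9

print("at 300 K:", round(noise_density_nv_per_rthz(300.0), 2), "nV/sqrt(Hz)")
print("at  89 K:", round(noise_density_nv_per_rthz(89.0), 2), "nV/sqrt(Hz)")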
APA, Harvard, Vancouver, ISO, and other styles
39

Scepanovic, Bogdan. "A programmable datalogger with universal inputs." Thesis, 2012. http://hdl.handle.net/10210/5806.

Full text
Abstract:
M.Ing. This thesis describes the full design of a processor-based, sophisticated measurement instrument: a datalogger. It covers the theoretical approach to the design project, followed by the hardware and software design. Many non-standard hardware and software solutions were used to reach the target, and they are fully described. The world around us is full of products based on science and technology, and these products are part of everyday life. The development of science and technology depends very much on a parallel development of measurement techniques and instruments. Measurement, and the technology of measurement called instrumentation, serves not only science but all branches of engineering, medicine and almost every sphere of human life. Measuring instruments are also used in the monitoring and control of processes and operations. The most specialised instruments, such as the datalogger, are used in experimental, research and development work in science and engineering. This thesis is organised into six chapters. The datalogger's position in the tree of measurement instruments is shown in the first chapter. The electronics design philosophy follows in the next chapter; it covers the most common problems encountered when a new design project starts, followed by the global design strategy with a brief description of all its steps. The second chapter also contains the history of the datalogger project, the reasons for undertaking it, and the requirements of the new device. At the end of this chapter the basic working principle of the datalogger is described, so that the applied solutions can be followed easily. The third chapter covers the design features that distinguish the datalogger from other measurement instruments. It starts with remote-sensor problems and problems commonly connected to the input stage of similar systems. The second half of this chapter analyses the instrument's precision and its error sources. Precision can be increased by several different methods; the two methods applied here, reducing the measurement range and oversampling with noise, are briefly described. The fourth chapter presents the design of the processor board. It starts with a general microcontroller overview, describing the reasons for selecting the Hitachi H8/532 microcontroller, and lists its most important characteristics. The second part of this chapter covers the organisation, connections and contents of the other electronics blocks on the processor board. At the end of this chapter the processor board schematic and full characteristics are given. The datalogger's hardware is described in the fifth chapter. The basic working principles of the various hardware parts are given at the beginning; the hardware is then broken down into power electronics, digital control, the signal-processing part, and interface cards, each covered with a detailed description of the circuit design and the accompanying calculations. The last chapter presents the datalogger's software. It starts with the mathematical calculation principle developed and used in the datalogger. The following part covers the software and hardware relation between the user and the datalogger. One speciality of the datalogger's software is the organisation of RAM space, which gives the datalogger high flexibility as a measurement instrument. Finally, the full program organisation of the datalogger is given at a global level.
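One of the two precision techniques mentioned above, oversampling with noise, can be shown in a few lines: dither noise of about one least-significant bit lets the average of many quantized readings resolve below a single ADC step. The ADC step, the measured value and the sample count below are assumptions for illustration, not values from the thesis.

import numpy as np

rng = np.random.default_rng(3)
true_value = 2.4137            # volts, the quantity being measured (assumed)
lsb = 0.01                     # ADC resolution: 10 mV per count (assumed)

def adc(x):
    return np.round(x / lsb) * lsb     # ideal quantizer

single = float(adc(true_value))                      # one reading: stuck on the nearest step
noisy = true_value + rng.normal(0, lsb, 4096)        # dither noise, ~1 LSB rms
oversampled = float(adc(noisy).mean())               # average of many quantized readings

print("single reading :", single, " error:", round(abs(single - true_value), 5))
print("oversampled    :", round(oversampled, 5), " error:", round(abs(oversampled - true_value), 5))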
APA, Harvard, Vancouver, ISO, and other styles
40

Li, Hsin-Chien, and 李信堅. "Research of Data Collection, Processing and Inquiring System for Internet-Based Remote-end Measuring Instruments." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/70132462464586889425.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Electrical Engineering, academic year 89 (ROC calendar). Measuring instruments are frequently used to monitor and acquire data over long periods. In this thesis, we present an internet-based system that allows researchers to query stored experimental data or results through the internet at any time and from anywhere in the world. To send or retrieve data worldwide, we achieve remote-end delivery over the widely available internet environment. We use internet servers to acquire data and store them in a dedicated database. Further analysis can then be performed on the stored data, and the analysed results are uploaded back to the internet server. Finally, all users can search the related raw data and analysed results with a web browser.
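The store-and-query idea described above can be sketched with any relational database; the snippet below uses SQLite purely as a stand-in, and the table and field names are invented rather than taken from the thesis.

import sqlite3
import time

con = sqlite3.connect("measurements.db")
con.execute("""CREATE TABLE IF NOT EXISTS readings
               (instrument TEXT, timestamp REAL, value REAL)""")

# acquisition side: a remote instrument uploads a reading
con.execute("INSERT INTO readings VALUES (?, ?, ?)", ("thermometer-01", time.time(), 23.7))
con.commit()

# query side: a researcher's browser request translates into a query like this
rows = con.execute("""SELECT timestamp, value FROM readings
                      WHERE instrument = ? ORDER BY timestamp DESC LIMIT 10""",
                   ("thermometer-01",)).fetchall()
print(rows)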
APA, Harvard, Vancouver, ISO, and other styles
41

Galpin, Vashti Christina. "Measuring concurrency in CCS." Thesis, 1993. https://hdl.handle.net/10539/25061.

Full text
Abstract:
A research report submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science. This research report investigates the application of Charron-Bost's measure of concurrency m to Milner's Calculus of Communicating Systems (CCS). The aim is twofold: first, to evaluate the measure m in terms of criteria gathered from the literature; and second, to determine the feasibility of measuring concurrency in CCS and hence provide a new tool for understanding concurrency using CCS. The approach taken is to identify the differences between the message-passing formalism in which the measure m is defined and CCS, and to modify this formalism to enable the mapping of CCS agents to it. A software tool, the Concurrency Measurement Tool, is developed to permit experimentation with chosen CCS agents. These experiments show that the measure m, although intuitively appealing, is defined by an algebraic expression that is ill-behaved. A new measure is defined, and it is shown to match the evaluation criteria better than m, although it is still not ideal. This work demonstrates that it is feasible to measure concurrency in CCS, and a methodology for evaluating concurrency measures is developed.
APA, Harvard, Vancouver, ISO, and other styles
42

Bergeron, Marielle. "Measuring the business value of information technology: the case of financial electronic data interchange (EDI) in Canada." Thesis, 1994. http://hdl.handle.net/2429/8752.

Full text
Abstract:
Why, and how much, should we invest in this information technology (IT)? The difficulty of formulating well-justified, convincing answers to these questions, asked by corporate decision-makers, has been identified as a major impediment to the rapid adoption of IT innovations by the business community. This study investigates the fundamental construct underlying these questions by performing a formal assessment of the business value of financial electronic data interchange (EDI) technology for corporate adopters in Canada. Three major Canadian financial institutions, seven cross-industry financial EDI user organizations (originators and receivers), several reference firms and more than fifty individuals actively participated in this study, which follows a triangulation data collection approach. Within a cohesive financial EDI value measurement framework based on the theory of capital budgeting, a set of realistic and flexible models for measuring the business value of financial EDI was developed from a rigorous, item-by-item analysis of the data. Following a scenario-based approach, the data and models were used to estimate the magnitude of the potential net benefits of financial EDI to corporate adopters. A formal evaluation of the expected and actual costs and benefits of financial EDI to the participating user firms was conducted using the models. Several major conclusions were drawn from this in-depth study of financial EDI investments, including the substantiated observation that, from a payment-process perspective, financial EDI is potentially more beneficial to corporate receivers than to originators. Compared to non-financial EDI applications, the potential economic gains from reduced payment cycles accrue primarily to the supplier community rather than to the initiators of financial EDI systems. Major contributions of this study include, first, the development of a theory-based value measurement framework and, second, the presentation and application of a structured, iterative methodology for the evaluation of financial EDI investments. The proposed financial EDI cost/benefit models also offer a useful, practical set of tools to potential and actual user firms in evaluating the organizational value of future or current financial EDI programs. Finally, the study is also intended to assist Canadian financial institutions in their financial EDI diffusion efforts.
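The capital-budgeting framing referred to above boils down to discounting the costs and benefits of adoption; the toy net-present-value calculation below uses invented cash flows and an assumed discount rate, not figures from the study.

# toy NPV of an EDI adoption decision; all figures are assumptions for illustration
annual_benefit = 80_000      # assumed yearly savings from financial EDI
annual_cost = 25_000         # assumed yearly operating cost
setup_cost = 150_000         # assumed initial investment
rate = 0.10                  # assumed discount rate
years = 5

npv = -setup_cost + sum((annual_benefit - annual_cost) / (1 + rate) ** t
                        for t in range(1, years + 1))
print(f"NPV over {years} years: {npv:,.0f}")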
APA, Harvard, Vancouver, ISO, and other styles
43

Newsom, Eric Tyner. "An exploratory study using the predicate-argument structure to develop methodology for measuring semantic similarity of radiology sentences." Thesis, 2013. http://hdl.handle.net/1805/3666.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI). The amount of information produced as electronic free text in healthcare is increasing to levels that humans cannot process for the advancement of their professional practice. Information extraction (IE) is a sub-field of natural language processing whose goal is the data reduction of unstructured free text. Pertinent to IE is an annotated corpus that frames how IE methods should create the logical expression necessary for processing the meaning of text. Most annotation approaches seek to maximize meaning and knowledge by chunking sentences into phrases and mapping these phrases to a knowledge source to create a logical expression. However, these studies consistently have problems addressing semantics, and none have addressed the issue of semantic similarity (or synonymy) to achieve data reduction. A successful methodology for data reduction depends on a framework that can represent the currently popular phrasal methods of IE while also fully representing the sentence. This study explores and reports on the benefits, problems, and requirements of using the predicate-argument statement (PAS) as that framework. The text from which PAS structures are formed is a convenience sample from a prior study: ten synsets of 100 unique sentences from radiology reports, deemed by domain experts to mean the same thing.
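To make the notion of comparing sentences through predicate-argument structures concrete, the sketch below reduces two paraphrased radiology-style sentences to (predicate, role, argument) triples and scores their overlap; the example sentences, triples, role labels and similarity score are illustrative inventions, not the methodology developed in the thesis.

def similarity(pas_a, pas_b):
    """Jaccard overlap of two sets of (predicate, role, argument) triples."""
    a, b = set(pas_a), set(pas_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# "No acute infiltrate is seen."  vs  "There is no evidence of acute infiltrate."
s1 = [("see", "ARG1", "infiltrate"), ("see", "ARGM-NEG", "no"), ("infiltrate", "MOD", "acute")]
s2 = [("be", "ARG1", "evidence"), ("evidence", "OF", "infiltrate"),
      ("infiltrate", "MOD", "acute"), ("be", "ARGM-NEG", "no")]

print(similarity(s1, s2))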
APA, Harvard, Vancouver, ISO, and other styles