
Dissertations / Theses on the topic 'Data capture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Data capture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Connell, Edward B., William P. Barnes, and William H. Stallings. "The Generic Data Capture Facility." International Foundation for Telemetering, 1987. http://hdl.handle.net/10150/615290.

Full text
Abstract:
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California
The growing complexity of space science missions is causing a dramatic increase in the data rates and volumes from space-based experiments, and the ground operations functions associated with handling data from these missions are growing in complexity consistent with this increase. A key requirement on the systems that provide data handling support is to control operations costs carefully while providing high-quality data capture functions. One approach to meeting this particular objective that has been taken at the Goddard Space Flight Center has been to initiate the development of a Generic Data Capture Facility (GDCF) that can provide data capture support for a variety of different types of spacecraft. The GDCF is emerging through a blend of new system development and evolution of existing systems, and when complete, it will have the capability to support the two major data formatting schemes (packet and Time-Division Multiplexed (TDM)). The specific implementations are designed to support the Gamma Ray Observatory and the Upper Atmosphere Research Satellite, but the GDCF will provide the baseline system to support various new missions as they emerge.
APA, Harvard, Vancouver, ISO, and other styles
2

Miller, Iain. "Finding associations in motion capture data." Thesis, University of the West of Scotland, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.729427.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Turner, Elizabeth L. "Marginal modelling of capture-recapture data." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=103302.

Full text
Abstract:
The central theme of this dissertation is the development of a new approach to conceptualize and quantify dependence structures of capture-recapture data for closed populations, with specific emphasis on epidemiological applications. We introduce a measure of source dependence: the Coefficient of Incremental Dependence (CID). Properties of this and the related Coefficient of Source Dependence (CSD) of Vandal, Walker, and Pearson (2005) are presented, in particular their relationships to the conditional independence structures that can be modelled by hierarchical joint log-linear models (HJLLM). From these measures, we develop a new class of marginal log-linear models (MLLM), which we compare and contrast to HJLLMs.
We demonstrate that MLLMs serve to extend the universe of dependence structures of capture-recapture data that can be modelled and easily interpreted. Furthermore, the CIDs and CSDs enable us to meaningfully interpret the parameters of joint log-linear models previously excluded from the analysis of capture-recapture data for reasons of non-interpretability of model parameters.
In order to explore the challenges and features of MLLMs, we show how to produce inference from them under both a maximum likelihood and a Bayesian paradigm. The proposed modelling approach performs well and provides new insight into the fundamental nature of epidemiological capture-recapture data.
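As background for the log-linear machinery these models extend, a minimal two-list illustration (a textbook sketch, not the thesis's MLLM formulation) can be written as

\[ \log \mu_{ab} = \lambda + \lambda^{A} a + \lambda^{B} b + \lambda^{AB} ab, \qquad a, b \in \{0, 1\}, \]

where \( \mu_{ab} \) is the expected count of units with capture pattern \((a,b)\) over lists \(A\) and \(B\), the cell \(n_{00}\) (missed by both lists) is unobserved, and \(\lambda^{AB}\) encodes dependence between the lists. The population size is then estimated as \( \hat{N} = n_{\mathrm{obs}} + \hat{\mu}_{00} \); hierarchical and marginal models differ in which such interaction terms are constrained and how they are interpreted.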
APA, Harvard, Vancouver, ISO, and other styles
4

Mayo, Timothy Robert. "Intelligent systems for cartographic data capture." Thesis, University of Cambridge, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.357566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Shi, Jiangpeng. "Wearable personal data information capture system." ScholarWorks@UNO, 2004. http://louisdl.louislibraries.org/u?/NOD,172.

Full text
Abstract:
Thesis (M.S.)--University of New Orleans, 2004.
Title from electronic submission form. "A thesis ... in partial fulfillment of the requirements for the degree of Master of Science in the Department of Computer Science."--Thesis t.p. Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
6

Larsson, Albin. "MC.d.o.t : Motion capture data och dess tillgänglighet." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9622.

Full text
Abstract:
Hardware can grow old and programs can cease to be developed, so files created with such hardware and software can become unusable over time. Keeping track of many individual files can also become a burden for users in the long run. With a database-oriented storage solution, different APIs can be used to make the data compatible with several different tools and programs, and it can be used to create a centralised solution for keeping information organised. Databases fall into two primary categories: SQL and NoSQL. This work examines which type is best suited to handling motion capture data. Tests were performed on MySQL (SQL) and Neo4j (NoSQL), where Neo4j is specialised for handling data of the kind found in motion capture. Surprisingly, the test results show that MySQL handles motion capture data better than Neo4j. Further work examining more database variants is proposed, to give a more complete picture.
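As a concrete illustration of the relational side of such a comparison, the sketch below stores per-joint samples in SQLite (Python standard library); the schema and names are illustrative assumptions, not those used in the thesis, whose tests ran against MySQL rather than SQLite.

```python
import sqlite3

# Illustrative relational schema for motion capture data: one row per
# joint sample, keyed by clip, frame index and joint name.
conn = sqlite3.connect("mocap.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS clip (
    clip_id INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    fps     REAL NOT NULL
);
CREATE TABLE IF NOT EXISTS sample (
    clip_id INTEGER REFERENCES clip(clip_id),
    frame   INTEGER NOT NULL,
    joint   TEXT    NOT NULL,
    x REAL, y REAL, z REAL,
    PRIMARY KEY (clip_id, frame, joint)
);
""")

cur = conn.execute("INSERT INTO clip (name, fps) VALUES (?, ?)",
                   ("walk_01", 120.0))
conn.executemany(
    "INSERT INTO sample VALUES (?, ?, ?, ?, ?, ?)",
    [(cur.lastrowid, 0, "hip", 0.0, 0.95, 0.0),
     (cur.lastrowid, 0, "knee", 0.1, 0.50, 0.02)])
conn.commit()
```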
APA, Harvard, Vancouver, ISO, and other styles
7

Rogers, Bennett Lee. "Query-by-example for motion capture data." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/42255.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 57-58).
Motion capture datasets are employed widely in animation research and industry; however, there currently exists no efficient way to index and search this data for diversified use. Motion clips are generally searched by filename or keywords, neither of which incorporates knowledge of actions in the clip aside from those listed in the descriptions. We present a method for indexing and searching a large database of motion capture clips that allows for fast insertion and query-by-example. Over time, more motions can be added to the index, incrementally increasing its value. The result is a tool that reduces the amount of time spent gathering new data for motion applications, and increases the utility of existing motion clips.
by Bennett Lee Rogers.
S.M.
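A minimal sketch of the query-by-example idea described above, assuming clips are reduced to fixed-length feature vectors and matched by nearest neighbour; the descriptor and distance here are illustrative choices, not the thesis's method.

```python
import numpy as np

def clip_descriptor(frames: np.ndarray) -> np.ndarray:
    """Illustrative fixed-length descriptor for a (frames, joints*3)
    motion clip: mean pose concatenated with mean per-frame velocity."""
    vel = np.diff(frames, axis=0)
    return np.concatenate([frames.mean(axis=0), vel.mean(axis=0)])

def query_by_example(index: dict, example: np.ndarray, k: int = 3):
    """Return the k clip names whose descriptors are closest to the
    descriptor of the example clip (Euclidean distance)."""
    q = clip_descriptor(example)
    dists = {name: np.linalg.norm(q - d) for name, d in index.items()}
    return sorted(dists, key=dists.get)[:k]

# Build an index incrementally -- new clips can be added at any time.
rng = np.random.default_rng(0)
index = {f"clip_{i}": clip_descriptor(rng.normal(size=(120, 45)))
         for i in range(10)}
print(query_by_example(index, rng.normal(size=(120, 45))))
```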
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Zhao. "Motion capture data processing, retrieval and recognition." Thesis, Bournemouth University, 2018. http://eprints.bournemouth.ac.uk/31038/.

Full text
Abstract:
Character animation plays an essential role in featured film and computer games. Manually creating character animation is both tedious and inefficient, so motion capture (MoCap) techniques have been developed and have become the most popular method for creating realistic character animation products. Commercial MoCap systems are expensive and the capturing process itself usually requires an indoor studio environment. Procedural animation creation often lacks extensive user control during the generation process. Therefore, efficiently and effectively reusing MoCap data can bring significant benefits, which has motivated wider research in machine learning based MoCap data processing. A typical workflow of MoCap data reuse can be divided into 3 stages: data capture, data management and data reusing. There are still many challenges at each stage. For instance, data capture and management often suffer from data quality problems. Efficient and effective retrieval methods are also demanding due to the large amount of data being used. In addition, classification and understanding of actions are the fundamental basis of data reusing. This thesis proposes to use machine learning on MoCap data for reusing purposes, where a framework of motion capture data processing is designed. The modular design of this framework enables motion data refinement, retrieval and recognition. The first part of this thesis introduces the various methods used in existing motion capture processing approaches in the literature and gives a brief introduction to the relevant machine learning methods used in this framework. In general, the frameworks related to refinement, retrieval and recognition are discussed. A motion refinement algorithm based on dictionary learning is then presented, where kinematic structural and temporal information are exploited. The designed optimization method and data preprocessing technique ensure a smooth property for the recovered result. After that, a motion refinement algorithm based on matrix completion is presented, where the low-rank property and spatio-temporal information are exploited. Such a model does not require preparing data for training. The designed optimization method outperforms existing approaches in regard to both effectiveness and efficiency. A motion retrieval method based on multi-view feature selection is also proposed, where the intrinsic relations between visual words in each motion feature subspace are discovered as a means of improving the retrieval performance. A provisional trace-ratio objective function and an iterative optimization method are also included. A non-negative matrix factorization based motion data clustering method is proposed for recognition purposes, which aims to deal with large scale unsupervised/semi-supervised problems. In addition, deep learning models are used for motion data recognition, e.g. 2D gait recognition and 3D MoCap recognition. To sum up, research on motion data refinement, retrieval and recognition is presented in this thesis with the aim of tackling the major challenges in motion reuse. The proposed motion refinement methods aim to provide high quality clean motion data for downstream applications. The designed multi-view feature selection algorithm aims to improve motion retrieval performance. The proposed motion recognition methods are equally essential for motion understanding. A collection of publications by the author of this thesis is noted in the publications section.
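To make the matrix-completion idea concrete, here is a generic low-rank completion sketch for gappy motion data, alternating a truncated-SVD projection with re-imposing the observed entries; the thesis's optimisation method is more sophisticated than this baseline.

```python
import numpy as np

def complete_motion(M: np.ndarray, observed: np.ndarray,
                    rank: int = 5, iters: int = 200) -> np.ndarray:
    """Fill missing motion-capture entries by alternating between a
    hard low-rank projection (truncated SVD) and re-imposing the
    observed entries. A generic sketch of low-rank completion, not
    the thesis's specific algorithm."""
    X = np.where(observed, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank projection
        X[observed] = M[observed]                  # keep known samples
    return X

# Toy example: a rank-2 "motion" matrix with ~30% of entries dropped.
rng = np.random.default_rng(1)
true = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 30))
mask = rng.random(true.shape) > 0.3
approx = complete_motion(true, mask)
print(np.abs(approx - true)[~mask].mean())  # small reconstruction error
```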
APA, Harvard, Vancouver, ISO, and other styles
9

Segelstad, Johan. "Layering animation principles on motion capture data : Surpass the limitations of motion capture." Thesis, Luleå tekniska universitet, Institutionen för konst, kommunikation och lärande, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-74636.

Full text
Abstract:
This thesis deals with the use of Disney's twelve animation principles in relation to motion capture. The purpose of the work was to investigate whether animation principles can be applied to finished motion capture animations to surpass the limitations of motion capture by using animation layers, where each added layer is a new principle. To investigate this, motion capture data with various movements was retrieved from Mixamo and imported into Maya, where various animation principles were applied with the help of Maya animation layers. The result of this research will answer the following: is it possible to surpass the limitations of motion capture by layering Disney's animation principles on motion-captured animations in Maya with the use of animation layers?
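A minimal sketch of the layering workflow, assuming Maya's Python interface (maya.cmds, so it runs only inside Maya); the layer and attribute names are illustrative, and flag usage should be checked against the Maya command reference.

```python
# Sketch: add an additive animation layer over baked motion capture so
# the captured curves stay untouched while a principle (here, a small
# exaggeration offset) is keyed on top. Assumes Maya's Python API.
import maya.cmds as cmds

def add_exaggeration_layer(node, attr="translateY",
                           times=(1, 12, 24), offsets=(0.0, 0.3, 0.0)):
    """Create an additive animation layer for `node` and key a small
    offset on it; names and values are illustrative placeholders."""
    layer = cmds.animLayer("exaggeration", override=False)
    cmds.select(node)
    cmds.animLayer(layer, edit=True, addSelectedObjects=True)
    for t, off in zip(times, offsets):
        cmds.setKeyframe(node, attribute=attr, time=t,
                         value=off, animLayer=layer)
    return layer
```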
APA, Harvard, Vancouver, ISO, and other styles
10

De, Wet Francois Johan. "Data capture of geometric data for local authorities' geographic information systems." Master's thesis, University of Cape Town, 1995. http://hdl.handle.net/11427/14953.

Full text
Abstract:
Bibliography: leaves 64-65.
This thesis describes research and development work which led to algorithms, procedures and computer programs which facilitate the cost effective and accurate capture of geometric data. The geometric data for a Geographical Information System (GIS) at a local authority or municipality consist of a number of different data sets. These include inter alia: the cadastral information, zoning information, servitudes, building lines, the outlines of improvements and the reticulation networks and the house connection points of the engineering services. The initial capture of the geometric data appears to be deceptively simple and is often not given the required consideration. The initial data capture phase of GIS projects is usually a difficult and time consuming process. This is even more so in the case of GIS for local authorities. The reason for this difficulty is the large volume of data coupled with the high accuracies required for the cadastral base map and the engineering services. Input facilities of most commercial GIS software packages generally do not provide the most efficient means of data capture. This problem warrants the development of techniques and procedures specific to local authority GIS applications which ensure that data capture can be done effectively and efficiently. The major benefit of these procedures is that they can be implemented on personal computers with low random access memory capacity. This eliminates the need for investment in costly equipment at the initial stage of data capture in the development of a GIS. It allows the capture of data on low cost technology and the postponement of the purchase of an expensive system or workstation until the data capture phase has been completed. The lowest personnel skills required are copy typing in contrast to the traditional methods of using CAD operators who command higher salaries and require more expensive training. The system developed by the author is more productive, both in quality and volume of work produced, than the CAD approach. It also permits the delay of purchase and training on expensive GIS software and hardware, which may be obsolete by the time the graphic database is established.
APA, Harvard, Vancouver, ISO, and other styles
11

Tanco, L. Molina. "Human motion synthesis from captured data." Thesis, University of Surrey, 2002. http://epubs.surrey.ac.uk/844411/.

Full text
Abstract:
Animation of human motion is one of the most challenging topics in computer graphics. This is due to the large number of degrees of freedom of the body and to our ability to detect unnatural motion. Keyframing and interpolation remain the form of animation that is preferred by most animators because of the control and flexibility it provides. However this is a labour intensive process that requires skills that take years to acquire. Human motion capture techniques provide accurate measurement of the motion of a performer that can be mapped onto an animated character to provide strikingly natural animation. This raises the problem of how to allow an animator to modify captured movement to produce a desired animation whilst preserving the natural quality. This thesis introduces a new approach to the animation of human motion based on combining the flexibility of keyframing with the visual quality of motion capture data. In particular it addresses the problem of synthesising natural inbetween motion for sparse keyframes. This thesis proposes to obtain this motion by sampling high quality human motion capture data. The problem of keyframe interpolation is formulated as a search problem in a graph. This presents two difficulties: the complexity of the search makes it impractical for the large databases of motion capture required to model human motion, and the global temporal structure in the data may not be preserved in the search. To address these difficulties this thesis introduces a layered framework that both reduces the complexity of the search and preserves the global temporal structure of the data. The first layer is a simplification of the graph obtained by clustering methods. This layer enables efficient planning of the search for a path between start and end keyframes. The second layer directly samples segments of the original motion data to synthesise realistic inbetween motion for the keyframes. A number of additional contributions are made including novel representations for human motion, pose similarity cost functions, dynamic programming algorithms for efficient search and quantitative evaluation methods. Results of realistic inbetween motion are presented with databases of up to 120 sequences (35000 frames). Key words: Human Motion Synthesis, Motion Capture, Character Animation, Graph Search, Clustering, Unsupervised Learning, Markov Models, Dynamic Programming.
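The keyframe-interpolation-as-graph-search formulation can be sketched as a shortest-path query over database frames; the cost function and toy graph below are illustrative stand-ins for the thesis's pose representations and two-layer structure.

```python
import heapq
import numpy as np

def pose_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Illustrative pose-similarity cost: Euclidean distance between
    joint-position vectors (the thesis uses richer representations)."""
    return float(np.linalg.norm(a - b))

def inbetween_path(poses: np.ndarray, edges: dict, start: int, end: int):
    """Dijkstra search for the cheapest chain of database frames
    connecting two keyframes; `edges` maps a frame to frames that may
    follow it (e.g. successors and cluster-level shortcuts)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in edges.get(node, []):
            step = pose_distance(poses[node], poses[nxt])
            heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return None

rng = np.random.default_rng(2)
poses = rng.normal(size=(50, 30))
edges = {i: [i + 1, (i * 7) % 50] for i in range(49)}  # toy motion graph
print(inbetween_path(poses, edges, 0, 49))
```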
APA, Harvard, Vancouver, ISO, and other styles
12

Röder, Tido. "Similarity, retrieval, and classification of motion capture data." [S.l.] : [s.n.], 2006. http://deposit.ddb.de/cgi-bin/dokserv?idn=983632332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Mason, Terry, and Fred Thames. "AN OPEN-ARCHITECTURE APPROACH TO SCALEABLE DATA CAPTURE." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608537.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
The ultra high capacity disk-based data recorders now entering service offer not just a convenient and inexpensive alternative to conventional tape systems for applications like telemetry and flight test, but also a unique opportunity to rethink the classical models for data capture, analysis and storage. Based on 'open architecture' interface standards (typically SCSI), this new generation of products represents an entirely new approach to the way data is handled. But the techniques they employ are equally applicable to any SCSI storage device. This paper discusses a range of practical scenarios illustrating how it is now possible to 'mix-and-match' recording technologies at will (disk-array, DLT, DTF, ExaByte, JAZ, ZIP, DVD, etc.) to produce an almost infinite combination of readily scaleable plug-and-play data capture, analysis and archiving solutions. The cost and reliability benefits arising from the use of standard mass-produced storage sub-systems are also considered.
APA, Harvard, Vancouver, ISO, and other styles
14

Sabia, Steve, and Sarah Hand. "Functional Component Approach to Telemetry Data Capture Systems." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615084.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
To support the telemetry data rates and meet the needs of telemetry data users in the next decade, telemetry data capture systems will have to be radically different from today's systems. At Goddard Space Flight Center, the Mission Operations and Data Systems Directorate is developing the capability to build user specific data capture systems from a library of high performance hardware and software elements that satisfy standard data capture processing requirements. One or more telemetry functions are encapsulated in a single standard open bus system (e.g. VME, Multibus II, NuBus etc.) with supporting software to form a user data capture system. Each subsystem module (card or board) includes a local microprocessor supplying on board intelligence and programmability for changing requirements. Many of these subsystem designs include custom very large scale integration (VLSI) components to increase speed while minimizing cost and size. A standard hardware and software interface to each card subsystem is employed to simplify system integration in the open system environment.
APA, Harvard, Vancouver, ISO, and other styles
15

Ward, Michael James. "The capture and integration of construction site data." Thesis, Loughborough University, 2004. https://dspace.lboro.ac.uk/2134/799.

Full text
Abstract:
The use of mobile computing on the construction site has been a well-researched area since the early 1990s; however, there still remains a lack of computing on the construction site. Where computers are utilised on site, this tends to be by knowledge workers utilising a laptop or PC in the site office, with electronic data collection being the exception rather than the norm. The problems associated with paper-based documentation on the construction site have long been recognised (Baldwin et al., 1994; McCullough, 1993), yet there still seems to be reluctance to replace this with electronic alternatives. Many reasons exist for this, such as: low profit margins; perceived high cost; perceived lack of available hardware; and perceived inability of the workforce. However, the benefits that can be gained from the successful implementation of IT on the construction site and the ability to re-use construction site data to improve company performance, whilst difficult to cost, are clearly visible. This thesis represents the development and implementation of a data capture system for the management of the construction of rotary bored piles (SHERPA). Operated by the site workforce, SHERPA comprises a wireless network, site-based server and web-based data capture using tablet computers. This research intends to show that mobile computing technologies can be implemented on the construction site and substantial benefits can be gained for the company from the re-use and integration of the captured site data.
APA, Harvard, Vancouver, ISO, and other styles
16

Zhao, Meng. "DATA CAPTURE AND REPORT IN EPILEPSY MONITORING UNIT." Case Western Reserve University School of Graduate Studies / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=case1380503113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Kelman, Timothy George Harold. "Techniques for capture and analysis of hyperspectral data." Thesis, University of Strathclyde, 2016. http://oleg.lib.strath.ac.uk:80/R/?func=dbin-jump-full&object_id=26434.

Full text
Abstract:
The work presented in this thesis focusses on new techniques for capture and analysis of hyperspectral data. Due to the three-dimensional nature of hyperspectral data, image acquisition often requires some form of movement of either the object or the detector. This thesis presents a novel technique which utilises a rotational line-scan rather than a linear line-scan. Furthermore, a method for automatically calibrating this system using a calibration object is described. Compared with traditional linear scanning systems, the performance is shown to be high enough that a rotational scanning system is a viable alternative. Classification is an important tool in hyperspectral image analysis. In this thesis, five different classification techniques are explained before they are tested on a classification problem: the classification of five different kinds of Chinese tea leaves. The process from capture to pre-processing to classification and post-processing is described. The effects of altering the parameters of the classifiers and the pre- and post-processing steps are also evaluated. This thesis documents the analysis of baked sponges using hyperspectral imaging. By comparing hyperspectral images of sponges of varying ages with the results of an expert tasting panel, a strong correlation is shown between the hyperspectral data and human-determined taste, texture and appearance scores. This data is then used to show the distribution of moisture content throughout a sponge image. While hyperspectral imaging provides significantly more data than a conventional imaging system, the benefits offered by this extra data are not always clear. A quantitative analysis of hyperspectral imaging versus conventional imaging is performed using a rice grain classification problem where spatial, spectral and colour information is compared.
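As a toy illustration of per-pixel classification on a hyperspectral cube (a deliberately simple nearest-centroid classifier, not one of the five techniques evaluated in the thesis):

```python
import numpy as np

def nearest_centroid_train(cube: np.ndarray, labels: np.ndarray):
    """Fit per-class mean spectra on a hyperspectral cube of shape
    (rows, cols, bands) with integer class labels per pixel."""
    pixels = cube.reshape(-1, cube.shape[-1])
    flat = labels.ravel()
    return {c: pixels[flat == c].mean(axis=0) for c in np.unique(flat)}

def classify(cube: np.ndarray, centroids: dict) -> np.ndarray:
    """Assign each pixel the class whose mean spectrum is closest."""
    pixels = cube.reshape(-1, cube.shape[-1])
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(pixels - centroids[c], axis=1)
                      for c in classes], axis=1)
    return np.array(classes)[dists.argmin(axis=1)].reshape(cube.shape[:2])

rng = np.random.default_rng(3)
cube = rng.normal(size=(8, 8, 64))        # toy 64-band image
labels = rng.integers(0, 5, size=(8, 8))  # five "tea" classes
print(classify(cube, nearest_centroid_train(cube, labels)))
```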
APA, Harvard, Vancouver, ISO, and other styles
18

Brunner, Seth A. "Improved Computer-Generated Simulation Using Motion Capture Data." BYU ScholarsArchive, 2014. https://scholarsarchive.byu.edu/etd/4182.

Full text
Abstract:
Ever since the first use of crowds in films and videogames, there has been an interest in larger, more efficient and more realistic simulations of crowds. Most crowd simulation algorithms are able to satisfy the viewer from a distance, but when inspected from close up, the flaws in the individual agent's movements become noticeable. One of the bigger challenges faced in crowd simulation is finding a solution that models the actual movement of an individual in a crowd. This paper simulates a more realistic crowd by using individual motion capture data as well as traditional crowd control techniques to reach an agent's desired goal. By augmenting traditional crowd control algorithms with the use of motion capture data for individual agents, we can simulate crowds that mimic more realistic crowd motion, while maintaining real-time simulation speed.
APA, Harvard, Vancouver, ISO, and other styles
19

Nar, Selim. "A Virtual Human Animation Tool Using Motion Capture Data." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609683/index.pdf.

Full text
Abstract:
In this study, we developed an animation tool to animate 3D virtual characters. The tool offers facilities to integrate motion capture data with a 3D character mesh and animate the mesh by using Skeleton Subsurface Deformation and Dual Quaternion Skinning methods. It is a compact tool, so it is possible to distribute, install and use it with ease. This tool can be used to illustrate medical kinematic gait data for educational purposes. For validation, we obtained medical motion capture data from two separate sources and animated a 3D mesh model by using this data. The animations were presented to physicians for evaluation. The results show that the tool is sufficient for displaying obvious gait patterns of the patients. The tool provides interactivity for inspecting the movements of the patient from different angles and distances. We animate anonymous virtual characters, which preserves the anonymity of the patient.
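For background on the skinning methods named above, a standard statement of dual quaternion skinning (a textbook formulation, not necessarily the exact variant implemented in the tool) blends the unit dual quaternions \( \hat{q}_i \) of the influencing bones with skinning weights \( w_i \) and renormalises:

\[ \hat{q}(w) = \frac{\sum_i w_i\,\hat{q}_i}{\bigl\lVert \sum_i w_i\,\hat{q}_i \bigr\rVert}, \]

after which each vertex is rigidly transformed by \( \hat{q}(w) \). Compared with linear blend skinning, \( v' = \sum_i w_i M_i v \), the renormalised blend avoids the volume-loss ('candy wrapper') artefacts that come from blending matrices directly.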
APA, Harvard, Vancouver, ISO, and other styles
20

Parkin, Stewart. "Manufacturing down time data capture and communication information system /." Leeds : University of Leeds, School of Computer Studies, 2008. http://www.comp.leeds.ac.uk/fyproj/reports/0708/Parkin.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Dickinson, Keith William. "Traffic data capture and analysis using video image processing." Thesis, University of Sheffield, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306374.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Rentzsch, Walter Herbert Werner. "Data capture and modelling for material processing and design." Thesis, University of Cambridge, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.613904.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Silva, Mário Jorge Marques da. "Mobile devices for electronic data capture in clinical studies." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14698.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering
Mobile devices, including common smartphones and tablets, are being increasingly used for mHealth scenarios, in which the device is used to capture health values directly or acting as a hub for health sensors. Such applications allow a machine-to-machine capture and persistence of data, avoiding problems with manual data entry. The availability of smartphones and tablets, on one side, and wearable sensors/medical devices, on the other, creates an opportunity to use mobile data capture of health values also in clinical studies applications. In this dissertation, we propose a mobile front-end for clinical studies participants, developed in Android, including electronic data capture in ambulatory contexts. Besides the common questionnaire filling support, the front-end relies on the ISO/IEEE 11073 standard to directly obtain values from compliant medical devices. The work has been designed to integrate with the existing clinical studies platform uEDC (developed by iUZ Technologies). Early usage of the system shows that the mobile front-end can successfully support different devices and study protocols, fully integrated with the uEDC backend.
APA, Harvard, Vancouver, ISO, and other styles
24

Gustafsson, Hanna, and Lea Zuna. "Unmanned Aerial Vehicles for Geographic Data Capture: A Review." Thesis, KTH, Geodesi och satellitpositionering, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-210039.

Full text
Abstract:
In GIS projects, data capture is one of the most time-consuming processes. Both how the data is collected and the quality of the collected data are of high importance. Common methods for data capture are GPS, LiDAR, total station and aerial photogrammetry. Unmanned Aerial Vehicles, UAVs, have become more common in recent years and the number of applications continues to increase. As the technique develops, there are more ways that UAV techniques can be used for collection of geographic data. One of these techniques is UAV photogrammetry, which entails using a UAV equipped with a camera, combined with photogrammetric software, in order to create three-dimensional models and orthophotos of the ground surface. This thesis contains a comparison between different geographic data capture methods, such as terrestrial and aerial methods as well as UAV photogrammetry. The aim is to investigate how UAVs are used to collect geographic data today, as well as how the techniques involving UAVs can replace or be used as a complement to traditional methods. This study is based on a literature study and interviews. The literature study aims to give a deeper insight into where and how UAVs are used today for geographic data capture, with focus on three main areas: environmental monitoring, urban environment and infrastructure, and natural resources. For the interviews, companies and other participants using UAVs for geographic data collection in Sweden were interviewed to get an accurate overview of the current status regarding the use of UAVs in Sweden. Advantages, disadvantages, limitations, economical aspects, accuracy and possible future use or development are considered, as well as different areas of application. The study is done in collaboration with the geographic IT company Digpro Solutions AB. The goal is to be able to present suggestions for how UAV data can be applied in Digpro's applications. Information from the literature study and the interviews shows that using a UAV makes it possible to cover a large range between terrestrial and aerial methods, and that it can replace or complement other methods for surveying and data collection. Its use gives the possibility to get close to the object without being bound to the ground, as well as work-environment benefits, since dangerous, difficult areas can be accessed from a distance. The data can be collected faster, cheaper and more frequently. Time savings occur in the measurement stage, but compared to terrestrial methods more time is required for the post-processing of the data. The use in Sweden is limited due to difficulties linked to Swedish legislation regarding camera surveillance, as well as long waiting times for the permissions that are required to fly. However, a change in the camera surveillance law is expected, which means that UAVs will be excluded from the law. That may result in great benefits for everyone within the industry as well as a continued development of the technique and the use of UAVs.
APA, Harvard, Vancouver, ISO, and other styles
25

Arvedson, Tilde, and Anna Lundemo. "Analysing and classifying wheelchair movements from motion capture data." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-167132.

Full text
Abstract:
This project describes part of the development of a tool intended for wheelchair users in their learning process of new and important movements in everyday life and in sports. The process began with an investigation, with the help of people with wheelchair experience, of which movements were considered important and which were considered useful in sports and in everyday life. The project was first focused on improving the technical skills of athletes practicing wheelchair sports. After studies and discussions it changed focus, however, to also deal with obstacles in everyday life and to help people using wheelchairs to learn more everyday movements. After practical workshops, a mechanical analysis of the wheelchair was made. Some types of movement patterns, such as balancing on the rear wheels and turning, were considered essential to study, and the goal was also to get information about, for example, the speed and acceleration of the wheelchair. This was to be analysed in real time and then sonified in later steps. We developed code to recognize and assess these movements, which is the product of the project.
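A sketch of the kind of kinematic quantities mentioned above, estimated from marker positions by finite differences; the marker layout, threshold and detector are illustrative assumptions, not the project's code.

```python
import numpy as np

def kinematics(positions: np.ndarray, fps: float = 100.0):
    """Estimate speed and acceleration of a wheelchair marker from
    motion-capture positions of shape (frames, 3) via finite
    differences."""
    dt = 1.0 / fps
    vel = np.gradient(positions, dt, axis=0)   # m/s per axis
    acc = np.gradient(vel, dt, axis=0)         # m/s^2 per axis
    speed = np.linalg.norm(vel, axis=1)
    return speed, acc

def is_wheelie(rear_z: np.ndarray, front_z: np.ndarray,
               clearance: float = 0.05) -> np.ndarray:
    """Flag frames where the front casters are lifted well above the
    rear-wheel markers: a crude balance-on-rear-wheels detector."""
    return (front_z - rear_z) > clearance

t = np.linspace(0, 2, 200)[:, None]
track = np.hstack([t, np.zeros_like(t), np.zeros_like(t)])  # 1 m/s push
speed, _ = kinematics(track, fps=100.0)
print(speed.mean())  # approximately 1.0
```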
APA, Harvard, Vancouver, ISO, and other styles
26

Amlie, Kristian. "Realtime capture and streaming of gameplay experiences." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-18328.

Full text
Abstract:
Today's games are social on a level that could only be imagined before. With modern games putting a stronger emphasis on social networking than ever before, the identity in the game often becomes on par with the one in real life. Yet many games lack really strong networking tools, especially where networking between players of different games is concerned. Geelix is a project which tries to enhance the social gaming aspect by providing sharing of what has been aptly named gaming experiences. The motivation for this goal is to enable stronger support for letting friends take part in your online, or even offline, experiences. The belief is that sharing gaming experiences is a key element in building strong social networks in games. This master thesis was written in relation to the Geelix project, where the focus was on enhancing the Geelix live sharing experience with advanced methods for video compression and streaming.
APA, Harvard, Vancouver, ISO, and other styles
27

Collins, Michael Christopher. "Multimedia data capture with multicast dissemination for online distance learning." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA401308.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environments and Simulation (MOVES))--Naval Postgraduate School, December 2001.
Thesis Advisor(s): Brutzman, Don. "December 2001." Includes bibliographical references (p. 175-177). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
28

Van, Leeuwen Matthijs. "Stochastic determination of well capture zones conditioned on transmission data." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399155.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ching, Siu-tong, and 程肇堂. "Digital photogrammetry as a means of data capture for GIS." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B30110129.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kaskasamkul, Panicha. "Capture-recapture estimation and modelling for one-inflated count data." Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/424742/.

Full text
Abstract:
Capture-recapture methods are used to estimate the unknown size of a target population that cannot reasonably be enumerated. This thesis proposes estimators and models specifically designed to estimate the size of a population from one-inflated capture-recapture count data, allowing for heterogeneity. These estimators can assist with the overestimation problems arising from one-inflation that can be seen in several areas of research. The estimators are developed under three approaches. The first approach is based on a modification that truncates singletons and applies the conventional Turing and maximum likelihood estimation approaches to the one-truncated geometric data to estimate the parameter p0. These estimates of p0 are applied within the Horvitz-Thompson approach to give the modified Turing estimator (T_OT) and the modified maximum likelihood estimator (MLE_OT). The second approach is model-based. It focuses on developing a statistical model that describes the mechanism generating the excess counts of one. The new estimator MLE_ZTOI is developed from a maximum likelihood approach by using a nested EM algorithm based upon the zero-truncated one-inflated geometric distribution. The last approach focuses on modifying the classical Chao estimator to involve the frequencies of counts of twos and threes instead of the frequencies of counts of ones and twos. The modified Chao estimator (MC) is an asymptotically unbiased estimator for a power series distribution with and without one-inflation, and provides a lower bound estimator under a mixture of power series distributions with and without one-inflation. Three bias-corrected versions of the modified Chao estimator have been developed to reduce the bias when the sample size is small. Variance approximations of MC and MC3 are also constructed by using a conditioning technique. All of the proposed estimators are assessed through simulation studies. Real data sets are provided for understanding the methodologies.
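For context, the classical Chao lower bound uses the singleton and doubleton frequencies \( f_1 \) and \( f_2 \) among the \( n \) observed units:

\[ \hat{N}_{\text{Chao}} = n + \frac{f_1^2}{2 f_2}. \]

A sketch of why a version based on \( f_2 \) and \( f_3 \) is plausible when \( f_1 \) is inflated: under a Poisson(\( \lambda \)) count model, \( f_k \approx N e^{-\lambda} \lambda^k / k! \), so the unseen cell satisfies both \( f_0 = f_1^2/(2 f_2) \) and \( f_0 = 2 f_2^3 / (9 f_3^2) \), the latter involving no counts of one:

\[ \hat{N} = n + \frac{2 f_2^3}{9 f_3^2}. \]

This is an illustrative derivation only; the thesis's MC estimator and its bias corrections may differ in detail.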
APA, Harvard, Vancouver, ISO, and other styles
31

Kendrick, Connah. "Markerless facial motion capture : deep learning approaches on RGBD data." Thesis, Manchester Metropolitan University, 2018. http://e-space.mmu.ac.uk/622357/.

Full text
Abstract:
Facial expressions are a series of fast, complex and interconnected movements that cause an array of deformations, such as stretching, compressing and folding of the skin. Identifying expression is a natural process in human vision, but due to the diversity of faces it presents many challenges for computer vision. Research in markerless facial motion capture using a single Red Green Blue (RGB) camera has gained popularity due to the wide availability of such data, for example from mobile phones. The motivation behind this work is that much of the existing work attempts to infer 3-Dimensional (3D) data from 2-Dimensional (2D) images; in motion capture, for example, multiple 2D cameras are calibrated to allow some depth prediction. The inclusion of Red Green Blue Depth (RGBD) sensors that give ground-truth depth data, by contrast, could give a better understanding of the human face and how expressions are visualised. The aim of this thesis is to investigate and develop novel methods of markerless facial motion capture, where the focus is on the inclusion of RGBD data to provide 3D data. The contributions are: a tool to aid in the annotation of 3D facial landmarks; a novel neural network that demonstrates the ability to predict 2D and 3D landmarks by merging RGBD data; a working application that demonstrates a complex deep learning network on portable handheld devices; a review of existing methods of denoising fine detail in depth maps using neural networks; and a network for the complete analysis of facial landmarks and expressions in 3D. The 3D annotator was developed to overcome the issues of relying on existing 3D modelling software, which made feature identification difficult. The technique of predicting 2D and 3D landmarks with auxiliary information allowed high-accuracy 3D landmarking without the need for full model generation, and outperformed other recent landmarking techniques. The networks running on handheld devices show, as a proof of concept, that even without much optimisation a complex task can be performed in near real-time. Denoising Time of Flight (ToF) depth maps showed much more complexity than traditional RGB denoising, and we reviewed and applied an array of techniques to the task. The full facial analysis showed that having neural networks perform a wide range of related tasks for auxiliary information allows a deep understanding of the overall task. The research field of facial processing is vast, but still has many new problems and challenges to face and improve upon. While RGB cameras are used widely, we now see accurate and cost-effective depth-sensing devices becoming available. These new devices allow a better understanding of facial features and expressions. By using and merging RGBD data, the areas of facial landmarking and expression intensity recognition can be improved.
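A minimal sketch of a network that merges RGB and depth inputs for 3D landmark prediction, in PyTorch; the architecture below is a generic illustration, not the network proposed in the thesis.

```python
import torch
import torch.nn as nn

class RGBDLandmarkNet(nn.Module):
    """Illustrative two-branch network that fuses RGB and depth
    features to regress 3D facial landmarks."""
    def __init__(self, n_landmarks: int = 68):
        super().__init__()
        self.n_landmarks = n_landmarks
        def branch(in_ch: int) -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4))
        self.rgb_branch, self.depth_branch = branch(3), branch(1)
        self.head = nn.Linear(2 * 32 * 4 * 4, n_landmarks * 3)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor):
        # Concatenate pooled features from both modalities, then regress.
        fused = torch.cat([self.rgb_branch(rgb).flatten(1),
                           self.depth_branch(depth).flatten(1)], dim=1)
        return self.head(fused).view(-1, self.n_landmarks, 3)

net = RGBDLandmarkNet()
out = net(torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128))
print(out.shape)  # torch.Size([2, 68, 3])
```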
APA, Harvard, Vancouver, ISO, and other styles
32

Cabral, de Moura Borges Jose Luis. "A data mining model to capture user web navigation patterns." Thesis, University College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.393733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lee, Jaehong. "Improvement of Experimental Data Accuracy for Neutron Capture Cross Section." Kyoto University, 2018. http://hdl.handle.net/2433/232482.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Eccles, Mitchell John. "Pragmatic development of service based real-time change data capture." Thesis, Aston University, 2013. http://publications.aston.ac.uk/19148/.

Full text
Abstract:
This thesis makes a contribution to the Change Data Capture (CDC) field by providing an empirical evaluation of the performance of CDC architectures in the context of real-time data warehousing. CDC is a mechanism for providing data warehouse architectures with fresh data from Online Transaction Processing (OLTP) databases. There are two types of CDC architectures: pull architectures and push architectures. There is exiguous data on the performance of CDC architectures in a real-time environment, yet performance data is required to determine the real-time viability of the two architectures. We propose that push CDC architectures are optimal for real-time CDC. However, push CDC architectures are seldom implemented because they are highly intrusive towards existing systems and arduous to maintain. As part of our contribution, we pragmatically develop a service based push CDC solution, which addresses the issues of intrusiveness and maintainability. Our solution uses Data Access Services (DAS) to decouple CDC logic from the applications. A requirement for the DAS is to place minimal overhead on a transaction in an OLTP environment. We synthesize the DAS literature and pragmatically develop DAS that efficiently execute transactions in an OLTP environment. Essentially, we develop efficient RESTful DAS, which expose Transactions As A Resource (TAAR). We evaluate the TAAR solution and three pull CDC mechanisms in a real-time environment, using the industry-recognised TPC-C benchmark. The optimal CDC mechanism in a real-time environment will capture change data with minimal latency and will have a negligible effect on the database's transactional throughput. Capture latency is the time it takes a CDC mechanism to capture a data change that has been applied to an OLTP database. A standard definition for capture latency and how to measure it does not exist in the field. We create this definition and extend the TPC-C benchmark to make the capture latency measurement. The results from our evaluation show that pull CDC is capable of real-time CDC at low levels of user concurrency. However, as the level of user concurrency scales upwards, pull CDC has a significant impact on the database's transaction rate, which affirms the theory that pull CDC architectures are not viable in a real-time architecture. TAAR CDC, on the other hand, is capable of real-time CDC and places a minimal overhead on the transaction rate, although this performance is at the expense of CPU resources.
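The push-CDC idea behind a data access service can be sketched as a function that applies the OLTP transaction and publishes the change record on commit; the schema, in-process "bus" and function below are illustrative stand-ins, not the thesis's TAAR interface.

```python
import queue
import sqlite3

change_feed: "queue.Queue[dict]" = queue.Queue()  # stand-in for a message bus

def execute_transaction(conn: sqlite3.Connection, order_id: int, qty: int):
    """Illustrative data access service: apply an OLTP transaction and,
    only after a successful commit, push the change record to the feed,
    so the warehouse side never polls the source database."""
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "INSERT INTO orders (order_id, qty) VALUES (?, ?)",
            (order_id, qty))
    change_feed.put({"table": "orders", "op": "INSERT",
                     "row": {"order_id": order_id, "qty": qty}})

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, qty INTEGER)")
execute_transaction(conn, 1, 3)
print(change_feed.get())  # consumed downstream with low capture latency
```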
APA, Harvard, Vancouver, ISO, and other styles
35

Brotherton, Jason Alan. "Enriching everyday activities through the automated capture and access of live experiences : eClass: building, observing and understanding the impact of capture and access in an educational domain." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/8143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Niwe, Moses. "Organizational patterns for knowledge capture in B2B engagements." Doctoral thesis, Stockholm : Department of Computer and Systems Sciences, Stockholm University, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-38631.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Boxall, Guy. "'Diversification of automatic identification and data capture technologies with Omron Corporation'." Thesis, University of Warwick, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Aslan, David. "Lagring av Motion Capture Data i NoSQL-databaser : Undersökning av CouchDB." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-11095.

Full text
Abstract:
Motion capture data needs to be stored one way or another, and storing it in a database would bring many advantages. The data is used in many different ways and in many industries, so this would mean a significant change. There are two categories of databases, SQL and NoSQL; the databases tested here are the relational MySQL and the document-based CouchDB, with a prototype developed to carry out the tests. The tests indicate that CouchDB is the better database solution for storing motion capture data. Further work could run additional tests to show whether motion capture data can be read from the databases in real time. Measurements from the experiment show that CouchDB is the faster at storing motion capture data. In future work, this approach could be introduced in the film industry and made efficient by using less hard-disk space and reducing costs.
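The document-oriented side of such a comparison can be sketched against CouchDB's plain HTTP API, assuming the third-party requests package and a CouchDB server on the default port; the URL, credentials and document layout are placeholders, not the thesis's prototype.

```python
import requests  # assumes a CouchDB server reachable on the default port

BASE = "http://admin:secret@localhost:5984"  # placeholder credentials

def store_frame(db: str, clip: str, frame: int, joints: dict) -> str:
    """Store one motion-capture frame as a CouchDB document via its
    HTTP API; the document layout is illustrative."""
    requests.put(f"{BASE}/{db}")  # create db: 201 first time, 412 after
    doc_id = f"{clip}-{frame:06d}"
    resp = requests.put(f"{BASE}/{db}/{doc_id}",
                        json={"clip": clip, "frame": frame,
                              "joints": joints})
    resp.raise_for_status()
    return resp.json()["rev"]

rev = store_frame("mocap", "walk_01", 0,
                  {"hip": [0.0, 0.95, 0.0], "knee": [0.1, 0.5, 0.02]})
print(rev)
```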
APA, Harvard, Vancouver, ISO, and other styles
39

Karlsson, David. "Electronic Data Capture for Injury and Illness Surveillance : A usability study." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-102737.

Full text
Abstract:
Despite the development of injury surveillance systems for use at large multi-sport events (Junge 2008), their implementation is still methodologically and practically challenging. Edouard (2013) and Engebretsen (2013) have pointed out that the context of athletics championships features unique constraints, such as a limited data-collection window and large amounts of data to be recorded and rapidly validated. To manage these logistical issues, Electronic Data Capture (EDC) methods have been proposed (Bjorneboe 2009, Alonso 2012, Edouard 2013). EDC systems have successfully been used for surveillance during multi-sport events (Derman et al 2013) and their potential for surveillance studies during athletics championships is therefore interesting. The focus for surveillance during athletics championships has thus far been on injury and illness data collected from team medical staff in direct association to the competitions. But the most common injury and illness problems in athletics are overuse syndromes (Alonso 2009, Edouard 2012, Jacobsson 2013) and knowledge of risk factors associated with these problems is also relevant in association to championships. A desirable next step to extend the surveillance routines is therefore to also include pre-participation risk factors. For surveillance of overuse syndromes, online systems for athlete self-report of data on pain and other symptoms have been reported superior to reports from coaches (Shiff 2010). EDC systems have also been applied for athlete self-report of exposure and injury data in athletics and other individual sports, and have been found to be well accepted with a good efficiency (Jacobsson 2013, Clarsen 2013). There are thus reasons for investigating EDC system use by both athletes and team medical staff during athletics championships.

This thesis used a cross-sectional design to collect qualitative data from athletes and team medical staff using interviews and "think-aloud" usability evaluation methods (Ericsson 1993; Kuusela 2000). It was performed over 3 days during the 2013 European Athletics Indoor Championships in Gothenburg, Sweden. Online EDC systems for collection of data from athletes and team medical staff, respectively, were prepared for the study. The system for use by team medical staff was intended to collect data on injuries and illnesses sustained during the championship, and the system for athletes to collect data on risk factors.

This study does not provide a solution for how an EDC effort should be implemented during athletics championships. It does, however, point towards usability factors that need to be taken into consideration when taking such an approach.
APA, Harvard, Vancouver, ISO, and other styles
40

Huakau, John Tupou. "New methods for analysis of epidemiological data using capture-recapture methods." Thesis, University of Auckland, 2002. http://wwwlib.umi.com/dissertations/fullcit/3085723.

Full text
Abstract:
Capture-recapture methods take their origins from animal abundance estimation, where they were used to estimate the unknown size of the animal population under study. In the late 1940s and again in the late 1960s and early 1970s these same capture-recapture methods were modified and applied to epidemiological list data. Since then through their continued use, in particular in the 1990s, these methods have become popular for the estimation of the completeness of disease registries and for the estimation of the unknown total size of human disease populations. In this thesis we investigate new methods for the analysis of epidemiological list data using capture-recapture methods. In particular we compare two standard methods used to estimate the unknown total population size, and examine new methods which incorporate list mismatch errors and model-selection uncertainty into the process for the estimation of the unknown total population size and its associated confidence interval. We study the use of modified tag loss methods from animal abundance estimation to allow for list mismatch errors in the epidemiological list data. We also explore the use of a weighted average method, the use of Bootstrap methods, and the use of a Bayesian model averaging method for incorporating model-selection uncertainty into the estimate of the unknown total population size and its associated confidence interval. In addition we use two previously unanalysed Diabetes studies to illustrate the methods examined and a well-known Spina Bifida Study for simulation purposes. This thesis finds that ignoring list mismatch errors will lead to biased estimates of the unknown total population size and that the list mismatch methods considered here result in a useful adjustment. The adjustment also approximately agrees with the results obtained using a complex matching algorithm. As for the incorporation of model-selection uncertainty, we find that confidence intervals which incorporate model-selection uncertainty are wider and more appropriate than confidence intervals that do not. Hence we recommend the use of tag loss methods to adjust for list mismatch errors and the use of methods that incorporate model-selection uncertainty into both point and interval estimates of the unknown total population size.
Subscription resource available via Digital Dissertations only.
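As orientation for the estimator family discussed in this abstract, the classical two-list estimator and Chapman's bias-corrected form illustrate the basic capture-recapture idea; these are standard textbook formulas added here for context, not results taken from the thesis:

    \hat{N}_{\mathrm{LP}} = \frac{n_1 n_2}{m}, \qquad \hat{N}_{\mathrm{Chapman}} = \frac{(n_1 + 1)(n_2 + 1)}{m + 1} - 1

where n_1 and n_2 are the case counts on the two lists and m is the number of cases matched on both. List mismatch errors of the kind the thesis adjusts for deflate m and therefore inflate the estimate of the total population size.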
APA, Harvard, Vancouver, ISO, and other styles
41

Hargreaves, Steven. "Music metadata capture in the studio from audio and symbolic data." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8816.

Full text
Abstract:
Music Information Retrieval (MIR) tasks, in the main, are concerned with the accurate generation of one of a number of different types of music metadata (beat onsets or melody extraction, for example). Almost always, they operate on fully mixed digital audio recordings. Commonly, this means that a large amount of signal processing effort is directed towards the isolation, and then identification, of certain highly relevant aspects of the audio mix. In some cases, results of one MIR algorithm are useful, if not essential, to the operation of another: a chord detection algorithm, for example, is highly dependent upon accurate pitch detection. Although not clearly defined in all cases, certain rules exist which we may take from music theory in order to assist the task: the particular note intervals which make up a specific chord, for example. On the question of generating accurate, low-level music metadata (e.g. chromatic pitch and score onset time), a potentially huge advantage lies in the use of multitrack, rather than mixed, audio recordings, in which the separate instrument recordings may be analysed in isolation. Additionally, in MIR, as in many other research areas currently, there is an increasing push towards the use of the Semantic Web for publishing metadata using the Resource Description Framework (RDF). Semantic Web technologies, though, also facilitate the querying of data via the SPARQL query language, as well as logical inferencing via the careful creation and use of Web Ontology Language (OWL) ontologies. This, in turn, opens up the intriguing possibility of deferring our decision regarding which particular type of MIR query to ask of our low-level music metadata until some point later down the line, long after all the heavy signal processing has been carried out. In this thesis, we describe an over-arching vision for an alternative MIR paradigm, built around the principles of early, studio-based metadata capture, and exploitation of open, machine-readable Semantic Web data. Using the specific example of structural segmentation, we demonstrate that by analysing multitrack rather than mixed audio, we are able to achieve a significant and quantifiable increase in the accuracy of our segmentation algorithm. We also provide details of a new multitrack audio dataset with structural segmentation annotations, created as part of this research, and available for public use. Furthermore, we show that it is possible to fully implement a pair of pattern discovery algorithms (the SIA and SIATEC algorithms, highly applicable to, but not restricted to, symbolic music data analysis) using only Semantic Web technologies: the SPARQL query language, acting on RDF data, in tandem with a small OWL ontology. We describe the challenges encountered by taking this approach, the particular solution we've arrived at, and we evaluate the implementation both in terms of its execution time, and also within the wider context of our vision for a new MIR paradigm.
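To make the deferred-query idea concrete, the following is a minimal sketch in Python using the rdflib library; the example namespace and the property names (ex:pitch, ex:onset) are hypothetical stand-ins for illustration, not the ontology used in the thesis.

    from rdflib import Graph

    # A few hypothetical note events in Turtle: each note carries a chromatic
    # pitch and a score onset time, mimicking low-level metadata captured
    # early, in the studio.
    TURTLE = """
    @prefix ex: <http://example.org/mir#> .
    ex:n1 ex:pitch 60 ; ex:onset 0.0 .
    ex:n2 ex:pitch 64 ; ex:onset 0.5 .
    ex:n3 ex:pitch 67 ; ex:onset 1.0 .
    """

    g = Graph()
    g.parse(data=TURTLE, format="turtle")

    # The MIR question is posed long after capture, as a SPARQL query:
    # here, every note whose onset falls within the first second.
    QUERY = """
    PREFIX ex: <http://example.org/mir#>
    SELECT ?note ?pitch ?onset
    WHERE {
        ?note ex:pitch ?pitch ;
              ex:onset ?onset .
        FILTER (?onset < 1.0)
    }
    ORDER BY ?onset
    """

    for note, pitch, onset in g.query(QUERY):
        print(note, pitch, onset)

The point mirrored from the abstract: the signal-processing-heavy step (producing pitches and onsets) happens once, while different MIR questions can be expressed later as queries over the published RDF.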
APA, Harvard, Vancouver, ISO, and other styles
42

O’Connell, Richard. "200 MBPS TO 1 GBPS DATA ACQUISITION & CAPTURE USING RACEWAY." International Foundation for Telemetering, 1997. http://hdl.handle.net/10150/607569.

Full text
Abstract:
International Telemetering Conference Proceedings / October 27-30, 1997 / Riviera Hotel and Convention Center, Las Vegas, Nevada
For many years VME has been the platform of choice for high-performance, real-time data acquisition systems. VME’s longevity has been made possible in part by timely enhancements which have expanded system bandwidth and allowed systems to support ever increasing throughput. One of the most recent ANSI-standard extensions of the VME specification defines RACEway, a system of dynamically switched, 160 Mbyte/second board-to-board interconnects. In typical systems RACEway increases the internal bandwidth of a VME system by an order of magnitude. Since this bandwidth is both scalable and deterministic, it is particularly well suited to high-performance, real-time systems. The potential of RACEway for very high-performance (200 Mbps to 1 Gbps) real-time systems has been recognized by both the VME industry and a growing number of system integrators. This recognition has yielded many new RACEway-ready VME products from more than a dozen vendors. In fact, many significant real-time data acquisition systems that consist entirely of commercial-off-the-shelf (COTS) RACEway products are being developed and fielded today. This paper provides an overview of RACEway technology, identifies the types of RACEway equipment currently available, discusses how RACEway can be applied in high-performance data acquisition systems, and briefly describes two systems that acquire and capture real-time data streams at rates from 200 Mbps to 1 Gbps using RACEway.
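A quick sanity check using only the figures quoted in this abstract shows that a single RACEway interconnect already exceeds the top capture rate discussed:

    160\ \mathrm{Mbyte/s} \times 8\ \mathrm{bits/byte} = 1280\ \mathrm{Mbps} \approx 1.28\ \mathrm{Gbps} > 1\ \mathrm{Gbps}

so one switched 160 Mbyte/second path can carry the fastest stream described, and the dynamically switched fabric scales aggregate bandwidth well beyond that.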
APA, Harvard, Vancouver, ISO, and other styles
43

Anan, Orasa. "Capture-recapture modelling for zero-truncated count data allowing for heterogeneity." Thesis, University of Southampton, 2016. https://eprints.soton.ac.uk/402562/.

Full text
Abstract:
Capture-recapture modelling is a powerful tool for estimating the size of an elusive target population. This thesis proposes four new population size estimators allowing for population heterogeneity. The first estimator, called the MLEGP, is developed under the zero-truncated generalised Poisson distribution (ZTGP). The two parameters of the ZTGP are estimated by maximum likelihood using the Expectation-Maximisation (EM) algorithm. The second estimator is the population size estimator under the zero-truncated Conway-Maxwell-Poisson distribution (ZTCMP). The benefits of using the Conway-Maxwell-Poisson (CMP) distribution are that it includes the Bernoulli, Poisson and geometric distributions as special cases, and that it is flexible for over- and under-dispersion relative to the original Poisson model. Moreover, the parameter estimates can be obtained by a simple linear regression approach. The uncertainty in estimating the variance of the unknown population size under the new estimator is studied with analytic and resampling approaches. Since the geometric distribution is one of the nested models under the Conway-Maxwell-Poisson distribution, the Turing and Zelterman estimators are extended for the geometric distribution and its related model, respectively. Variance estimation and confidence intervals are constructed by the normal approximation method. The uncertainty of variance estimation of population size estimators for single-marking capture-recapture data is studied in the final part of the research. Normal approximation and three resampling approaches to variance estimation are compared for the Chapman and Chao estimators. All of the approaches are assessed through simulations, and real data sets are provided as guidance for understanding the methodologies.
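For context, two standard ingredients of this estimator family (textbook results, not findings of the thesis): the zero-truncated Poisson probability mass function, which models counts among observed individuals only, and Chao's lower-bound estimator built from the singleton and doubleton frequencies:

    P(X = k \mid X > 0) = \frac{e^{-\lambda}\,\lambda^{k}}{k!\,(1 - e^{-\lambda})}, \qquad k = 1, 2, \ldots

    \hat{N}_{\mathrm{Chao}} = n + \frac{f_1^{2}}{2 f_2}

where n is the number of distinct individuals ever observed and f_1, f_2 are the numbers observed exactly once and exactly twice. The zero-truncated generalised Poisson and CMP models in the thesis play the analogous role, with extra dispersion parameters to accommodate heterogeneity.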
APA, Harvard, Vancouver, ISO, and other styles
44

Nykvist, Joar. "Sonification in Kinesiophobia Therapy: Presenting Motion Capture Data as Mechanical Quantities." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-211567.

Full text
Abstract:
Kinesiophobia is a severe limitation in its victims' lives. This paper is part of a larger study aiming to use motion capture, combined with a reliable mechanical model, to provide auditory feedback on a kinesiophobia victim's locomotion on a track designed to inspire movements relevant for rehabilitation. This paper focuses on creating a theoretical model with a foundation in the biomechanics field, and uses it to establish programming logic that facilitates sonification methods applied to the recorded motion data. Motion capture is a widely used technique and a lot of research has been done on how to use it; this paper brings up examples of skeleton generation and movement categorization. Nonetheless, there is no precedent for using quantities entirely based on rigid-body coordinates for the sonification of motion data as a possible segment in kinesiophobia therapy. The mechanical quantities analyzed in this study are based solely on travelled distances and their derivatives, to minimize the need for customization between different subjects. Functions calculating the quantities of interest from a motion capture stream were written in Matlab and programmatically run from a script in Java, where the data was stored. A swift tool for analyzing valuable motion quantities in real time was developed and tested successfully; the full procedure is disclosed along with suggested improvements and an evaluation of its strengths and drawbacks.
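A minimal sketch of the kind of quantity described here (travelled distance and its derivatives from a stream of marker positions), written in Python with NumPy rather than the Matlab/Java pipeline the thesis used; the 100 Hz sample rate and the synthetic test data are assumptions for illustration.

    import numpy as np

    def motion_quantities(positions, fs=100.0):
        """Cumulative travelled distance, speed and acceleration from an
        (n_frames, 3) array of marker positions sampled at fs Hz."""
        dt = 1.0 / fs
        steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # per-frame displacement
        distance = np.concatenate([[0.0], np.cumsum(steps)])        # travelled distance
        speed = np.gradient(distance, dt)                           # first derivative
        accel = np.gradient(speed, dt)                              # second derivative
        return distance, speed, accel

    # Illustrative use with synthetic data standing in for a capture stream.
    rng = np.random.default_rng(0)
    pos = np.cumsum(rng.normal(scale=0.01, size=(500, 3)), axis=0)
    distance, speed, accel = motion_quantities(pos)
    print(distance[-1], speed.mean(), accel.std())

Because only travelled distance and its time derivatives are used, the same computation applies to any subject without per-subject calibration, which matches the customization argument made in the abstract.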
APA, Harvard, Vancouver, ISO, and other styles
45

Holm, Malin, Christoffer Roepstorff, and Martin Svedberg. "Validering av Inertial Measurment Units som insamlare av data för drivande av OpenSim-modell." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-175937.

Full text
Abstract:
The purpose of this paper is to investigate the possibility of replacing data from high-speed filming (Qualisys motion capture) with data from Inertial Measurement Units (x-io Technologies) when used to run a model of the torso and pelvis in OpenSim. Qualisys motion capture data is used as the gold standard to validate the result, visually and with Bland-Altman plots. In order to obtain comparable data, experiments are conducted where both methods of collecting data are used simultaneously. Data from the IMUs then needs to be processed in Matlab before it can be used to run the OpenSim model. Several Matlab programs rotate the IMU data to a static reference frame, filter and integrate it, and then create virtual markers that correspond to Qualisys' optical markers. The conclusion is that IMUs as a method for collecting data can replace Qualisys in some applications, but not in ones that require high precision. However, this paper only begins the examination of IMUs, and there are most likely improvements to be made.
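Since Bland-Altman plots carry the validation here, the following is a minimal sketch of the underlying computation in Python/NumPy; the paired input arrays are hypothetical stand-ins for, e.g., a marker coordinate from Qualisys and the corresponding IMU-derived virtual marker.

    import numpy as np

    def bland_altman(a, b):
        """Bland-Altman statistics for two paired measurement series:
        per-sample means, differences, bias, and 95% limits of agreement."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        mean = (a + b) / 2.0
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)   # 95% limits of agreement
        return mean, diff, bias, (bias - half_width, bias + half_width)

    # Hypothetical paired data: optical reference vs. IMU-derived estimate.
    rng = np.random.default_rng(1)
    optical = rng.normal(size=200)
    imu = optical + rng.normal(scale=0.05, size=200) + 0.02  # noise plus offset
    mean, diff, bias, limits = bland_altman(optical, imu)
    print(f"bias={bias:.3f}, limits of agreement={limits}")

Plotting mean against diff, with horizontal lines at the bias and the two limits, gives the Bland-Altman plot used to judge agreement between the two capture methods.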
APA, Harvard, Vancouver, ISO, and other styles
46

Nordahl, Lina. "Key Factors for Successful Development and Implementation of Electronic Data Capture in Clinical Trials." Thesis, Uppsala universitet, Industriell teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-237406.

Full text
Abstract:
Drug development in general, and clinical trials in particular, are expensive and time-consuming processes. One mandatory procedure in clinical trials is data collection; about 15 years ago almost all data were collected with a paper-based approach, but with new digitalised technology for data collection the process was set to become more efficient in terms of time, cost and quality of data. However, the adoption rate of these systems for data collection was much lower than anticipated, and most previous research points toward poorly developed products as the main reason for the adoption failure. Nevertheless, these systems have become more user-friendly and efficient, and today almost all studies use Electronic Data Capture (EDC) as the primary method for data collection. This project aims to investigate whether the reason for the slow diffusion was poorly developed products, or whether external factors, such as social or organisational aspects, caused this delay. Semi-structured interviews were conducted with 15 informants who work with EDC systems daily and are professionals within this industry. The result indicates that the slow diffusion was partly caused by initially bad systems, which in turn may have caused resistance among the end users, and partly by organisations with slow decision processes, such as multinational pharmaceutical companies. The advice given to the project owner who intends to enter this market is to focus on electronic Patient Reported Outcome (ePRO), which is a tool used by individual patients for self-reporting of data in clinical trials. ePRO is an extension of EDC systems and must be user-friendly for the patients and easy to connect to other systems. The company should focus on small Contract Research Organisations (CROs) as main customers rather than Big Pharma. Big Pharma often conducts multinational studies, and decisions regarding the protocol and how data are to be collected are made centrally. Since the project owner is a newly started small firm with limited experience of clinical trials, my advice would be to target CROs that conduct smaller studies.
APA, Harvard, Vancouver, ISO, and other styles
47

Chan, Ming Kit. "Active queue management schemes using a capture-recapture model /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?COMP%202002%20CHAN.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 58-61). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
48

Phillips, Sophie E. C. "Structuring multimedia data to capture design rationale and to support product development." Thesis, University College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.399013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

King, Ruth. "Bayesian model discrimination in the analysis of capture-recapture and related data." Thesis, University of Bristol, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Bhattacharjee, Partha Sarathi S. M. Massachusetts Institute of Technology. "VacSeen : semantically enriched automatic identification and data capture for improved vaccine logistics." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107582.

Full text
Abstract:
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, Technology and Policy Program, 2016.
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, System Design and Management Program, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 79-82).
Vaccines are globally recognized as a critical public health intervention. Routine immunization coverage in large parts of the developing world is around 80%. Technology and policy initiatives are presently underway to improve vaccine access in such countries. Efforts to deploy AIDC technologies, such as barcodes, on vaccine packaging in developing countries are currently ongoing under the aegis of the 'Decade of Vaccines' initiative by key stakeholders. Such a scenario presents an opportunity to evaluate novel approaches for enhancing vaccine access. In this thesis I report the development of VacSeen, a Semantic Web technology-enabled platform for improving vaccine access in developing countries. Furthermore, I report the results of evaluating a suite of constituent software and hardware tools that support equitable vaccine access in resource-constrained settings through data linkage and temperature sensing. I subsequently discuss the value of such linkage and approaches to implementation using concepts from technology, policy, and systems analysis.
by Partha Sarathi Bhattacharjee.
S.M. in Technology and Policy
S.M. in Engineering and Management
APA, Harvard, Vancouver, ISO, and other styles