To see the other types of publications on this topic, follow the link: Event Tracing.

Dissertations / Theses on the topic 'Event Tracing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Event Tracing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Wagner, Michael. "Concepts for In-memory Event Tracing." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-172882.

Full text
Abstract:
This thesis contributes to the field of performance analysis in High Performance Computing with new concepts for in-memory event tracing. Event tracing records runtime events of an application and stores each with a precise time stamp and further relevant metrics. The high resolution and detailed information allow an in-depth analysis of the dynamic program behavior, interactions in parallel applications, and potential performance issues. For long-running and large-scale parallel applications, event-based tracing faces three as yet unsolved challenges: the number of resulting trace files limits scalability, the huge amounts of collected data overwhelm file systems and analysis capabilities, and measurement bias, in particular due to intermediate memory buffer flushes, prevents a correct analysis. This thesis proposes concepts for an in-memory event tracing workflow. These concepts include new enhanced encoding techniques to increase memory efficiency and novel strategies for runtime event reduction to dynamically adapt trace size during runtime. An in-memory event tracing workflow based on these concepts meets all three challenges: First, it not only overcomes the scalability limitations due to the number of resulting trace files but eliminates the overhead of file system interaction altogether. Second, the enhanced encoding techniques and event reduction lead to remarkably smaller trace sizes. Finally, an in-memory event tracing workflow completely avoids intermediate memory buffer flushes, which minimizes measurement bias and allows a meaningful performance analysis. The concepts further include the Hierarchical Memory Buffer data structure, which incorporates a multi-dimensional, hierarchical ordering of events by common metrics, such as time stamp, calling context, event class, and function call duration. This hierarchical ordering allows low-overhead event encoding, event reduction and event filtering, as well as new hierarchy-aided analysis requests. An experimental evaluation based on real-life applications and a detailed case study underline the capabilities of the concepts presented in this thesis. The new enhanced encoding techniques reduce memory allocation during runtime by a factor of 3.3 to 7.2 while introducing no additional overhead. Furthermore, the combined concepts, including the enhanced encoding techniques, event reduction, and a new filter based on function duration within the Hierarchical Memory Buffer, reduce the resulting trace size by up to three orders of magnitude and keep an entire measurement within a single fixed-size memory buffer, while still providing a coarse but meaningful analysis of the application. This thesis includes a discussion of the state of the art and related work, a detailed presentation of the enhanced encoding techniques, the event reduction strategies, and the Hierarchical Memory Buffer data structure, and an extensive experimental evaluation of all concepts.
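To make the Hierarchical Memory Buffer idea concrete, the following is a minimal sketch (all names and fields are hypothetical, not the thesis' actual implementation): events are binned by event class and calling context so that duration-based filtering and reduction can drop whole bins without scanning a flat event stream.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float      # seconds since trace start
    context: str          # calling context, e.g. a call-path id
    event_class: str      # "enter", "leave", "msg_send", ...
    duration: float       # function call duration in seconds

class HierarchicalMemoryBuffer:
    """Toy in-memory buffer ordering events by (event class, calling context).

    A hypothetical stand-in for the thesis' Hierarchical Memory Buffer; the
    real design adds per-bin encoding and fixed-size memory management.
    """
    def __init__(self):
        self.bins = defaultdict(list)   # (event_class, context) -> [Event]

    def record(self, ev: Event) -> None:
        self.bins[(ev.event_class, ev.context)].append(ev)

    def filter_short_calls(self, min_duration: float) -> None:
        # Duration filter: drop events below the threshold, bin by bin.
        for key in list(self.bins):
            kept = [e for e in self.bins[key] if e.duration >= min_duration]
            if kept:
                self.bins[key] = kept
            else:
                del self.bins[key]

buf = HierarchicalMemoryBuffer()
buf.record(Event(0.10, "main/solver", "enter", 0.5))
buf.record(Event(0.12, "main/helper", "enter", 1e-6))
buf.filter_short_calls(1e-4)   # the tiny helper call is reduced away
```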
APA, Harvard, Vancouver, ISO, and other styles
2

Wagner, Michael [Verfasser], Wolfgang E. [Akademischer Betreuer] Nagel, and Felix [Akademischer Betreuer] Wolf. "Concepts for In-memory Event Tracing : Runtime Event Reduction with Hierarchical Memory Buffers / Michael Wagner. Gutachter: Wolfgang E. Nagel ; Felix Wolf. Betreuer: Wolfgang E. Nagel." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://d-nb.info/1074350138/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Knüpfer, Andreas. "Advanced Memory Data Structures for Scalable Event Trace Analysis." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1239979718089-56362.

Full text
Abstract:
The thesis presents a contribution to the analysis and visualization of computational performance based on event traces, with a particular focus on parallel programs and High Performance Computing (HPC). Event traces contain detailed information about specified incidents (events) during run-time of programs and allow minute investigation of dynamic program behavior, various performance metrics, and possible causes of performance flaws. Due to long-running and highly parallel programs and very fine detail resolutions, event traces can accumulate huge amounts of data, which become a challenge for interactive as well as automatic analysis and visualization tools. The thesis proposes a method of exploiting redundancy in the event traces in order to reduce the memory requirements and the computational complexity of event trace analysis. The sources of redundancy are repeated segments of the original program, either through iterative or recursive algorithms or through SPMD-style parallel programs, which produce equal or similar repeated event sequences. The data reduction technique is based on the novel Complete Call Graph (CCG) data structure, which allows domain-specific data compression for event traces by combining lossless and lossy methods. All deviations due to lossy data compression can be controlled by constant bounds. The compression of the CCG data structure is incorporated into the construction process, such that at no point do substantial uncompressed parts have to be stored. Experiments with real-world example traces reveal the potential for very high data compression. The results range from factors of 3 to 15 for small-scale compression with minimum deviation of the data to factors > 100 for large-scale compression with moderate deviation. Based on the CCG data structure, new algorithms for the most common evaluation and analysis methods for event traces are presented, which require no explicit decompression. By avoiding repeated evaluation of formerly redundant event sequences, the computational effort of the new algorithms can be reduced to the same extent as memory consumption. The thesis includes a comprehensive discussion of the state of the art and related work, a detailed presentation of the design of the CCG data structure, an elaborate description of algorithms for construction, compression, and analysis of CCGs, and an extensive experimental validation of all components.
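A toy illustration of the CCG compression idea may help (names and structure are simplifications, not the thesis' implementation): two call subtrees are merged when they have identical call structure and their timings deviate by less than a constant bound, so repeated loop iterations can share a single node.

```python
from dataclasses import dataclass, field

@dataclass
class CallNode:
    name: str
    duration: float                 # inclusive duration of the call
    children: list = field(default_factory=list)

def mergeable(a: CallNode, b: CallNode, bound: float) -> bool:
    """Lossy-compression test: identical call structure, timing within bound."""
    if a.name != b.name or len(a.children) != len(b.children):
        return False
    if abs(a.duration - b.duration) > bound:
        return False
    return all(mergeable(x, y, bound) for x, y in zip(a.children, b.children))

def compress_iterations(iterations: list, bound: float) -> list:
    """Replace runs of mergeable iteration subtrees by one shared node."""
    compressed = []
    for node in iterations:
        if compressed and mergeable(compressed[-1][0], node, bound):
            compressed[-1][1] += 1          # count of merged repetitions
        else:
            compressed.append([node, 1])
    return compressed

loop = [CallNode("step", 1.00), CallNode("step", 1.02), CallNode("step", 3.50)]
print(compress_iterations(loop, bound=0.05))  # first two merge, third stays
```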
APA, Harvard, Vancouver, ISO, and other styles
4

Knüpfer, Andreas. "Advanced Memory Data Structures for Scalable Event Trace Analysis." Doctoral thesis, Technische Universität Dresden, 2008. https://tud.qucosa.de/id/qucosa%3A23611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Búrdalo, Rapa Luis Antonio. "TRAMMAS: Enhancing Communication in Multiagent Systems." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/61765.

Full text
Abstract:
Over recent years, multiagent systems have been proven to be a powerful and versatile paradigm, with great potential for solving complex problems in dynamic and distributed environments thanks to their flexible and adaptive behavior. This potential comes not only from the individual features of agents (such as autonomy, reactivity or reasoning power), but also from their capability to communicate, cooperate and coordinate in order to fulfill their goals. In fact, it is this social behavior that makes multiagent systems so powerful, much more than the individual capabilities of agents. The social behavior of multiagent systems is usually developed by means of high-level abstractions, protocols and languages, which normally rely on (or at least benefit from) agents being able to communicate and interact indirectly. However, in the development process, such high-level concepts are habitually only weakly supported, through mechanisms such as traditional messaging, massive broadcasting, blackboard systems or ad hoc solutions. This lack of an appropriate way to support indirect communication in actual multiagent systems compromises their potential. This PhD thesis proposes the use of event tracing as flexible, effective and efficient support for indirect interaction and communication in multiagent systems. The main contribution of this thesis is TRAMMAS, a generic, abstract model for event tracing support in multiagent systems. The model allows all entities in the system to share their information as trace events, so that any other entity which requires this information is able to receive it. Along with the model, the thesis also presents an abstract architecture, which redefines the model in terms of a set of tracing facilities that can then be easily incorporated into an actual multiagent platform. This architecture follows a service-oriented approach, so that the tracing facilities are provided in the same way as other traditional services offered by the platform. In this way, event tracing can be considered as an additional information provider for entities in the multiagent system and, as such, can be integrated from the earliest stages of the development process.
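A minimal publish/subscribe sketch of this idea follows (the API is hypothetical; TRAMMAS is an abstract model and does not prescribe this interface): any entity publishes trace events, and any entity that needs the information subscribes to the relevant event types through a platform-provided service.

```python
from collections import defaultdict
from typing import Callable

class TraceService:
    """Toy tracing facility offered platform-side, like any other service."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # event type -> callbacks

    def subscribe(self, event_type: str, callback: Callable) -> None:
        self.subscribers[event_type].append(callback)

    def publish(self, source: str, event_type: str, payload: dict) -> None:
        # Indirect communication: the publisher never addresses receivers.
        for cb in self.subscribers[event_type]:
            cb(source, payload)

svc = TraceService()
svc.subscribe("goal_adopted", lambda src, p: print(f"{src} adopted {p['goal']}"))
svc.publish("agent_A", "goal_adopted", {"goal": "deliver_parcel"})
```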
Búrdalo Rapa, LA. (2016). TRAMMAS: Enhancing Communication in Multiagent Systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61765
APA, Harvard, Vancouver, ISO, and other styles
6

Tröger, Ralph, and Rainer Alt. "Design Options for Supply Chain Visibility Services – Learnings from Three EPCIS Implementations." Springer, 2017. https://ul.qucosa.de/id/qucosa%3A32385.

Full text
Abstract:
Supply chains in many industries are experiencing ever-growing complexity. They involve many actors and, similar to intra-organizational processes, visibility is an important enabler for managing supply chains in an inter-organizational setting. It is the backbone of advanced supply chain (event) management solutions, which serve to detect critical incidents in time and to determine alternative actions. Due to the numerous parties involved, distributed supply chains call for a modular system architecture that aims at re-using visibility data from standardized sources. Following the wide variety of supply chain configurations in many industries, there are also many options for designing such services. This paper sheds light on these aspects by conducting a case study on EPCIS, a global service specification for capturing and sharing visibility data. Based on three implementations, it shows the main design options for a supply chain visibility service, generic operator models, as well as major potentials. Contents: 1. Introduction and motivation; 2. Research questions and methodology; 3. Literature analysis; 4. EPCIS case study (4.1. Deutsche Post DHL, 4.2. ThyssenKrupp, 4.3. GS1 Germany); 5. Discussion and findings (5.1. Design options, 5.2. Operator models, 5.3. Potentials); 6. Conclusions
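As a rough illustration of the kind of visibility record EPCIS standardizes, the sketch below models an object event answering the what/when/where/why dimensions; the field names follow the EPCIS vocabulary only loosely, and all values are invented.

```python
from dataclasses import dataclass

@dataclass
class ObjectEvent:
    # The four EPCIS dimensions, loosely: what, when, where, why.
    epc_list: list        # what: identifiers of the observed objects
    event_time: str       # when: ISO 8601 timestamp
    read_point: str       # where: location the event was captured
    biz_step: str         # why: business step, e.g. "shipping"
    disposition: str      # why: object state, e.g. "in_transit"

ev = ObjectEvent(
    epc_list=["urn:epc:id:sgtin:0614141.107346.2018"],
    event_time="2017-03-01T10:15:00Z",
    read_point="urn:epc:id:sgln:0614141.00777.0",
    biz_step="shipping",
    disposition="in_transit",
)
```

A visibility service built on such records lets supply chain partners capture events at their read points and query each other's repositories through a standardized interface.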
APA, Harvard, Vancouver, ISO, and other styles
7

Harrigan, Edward. "Seismic event tracking." Thesis, University of Strathclyde, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.267506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

SILVA, Adson Diego Dionisio da. "Arcabouço para análise de eventos em vídeos." Universidade Federal de Campina Grande, 2015. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/592.

Full text
Abstract:
Automatic recognition of relevant events in videos involving sets of actions or interactions between objects can improve surveillance systems, smart city applications, and the monitoring of people with physical or mental disabilities, among others. However, designing a framework that can be adapted to several situations without requiring an expert in the involved technologies remains a challenge. In this context, this work is based on the creation of a rule-based generic framework for event detection in video. To create the rules, users form logical expressions using first-order logic (FOL) and relate the terms with Allen's interval algebra, adding a temporal context to the rules. Being a framework, it is extensible and may receive additional modules for performing new detections and inferences. Experimental evaluation was performed using test videos available on Youtube involving a traffic scenario with red-light-crossing events, and videos from a live camera on the Camerite website containing parking-car events. The focus of the work was not to create object detectors (e.g. cars or people) better than those existing in the state of the art, but to propose and develop a generic and reusable framework that integrates different computer vision techniques. The accuracy in the detection of the events was within the range of 83.82% to 90.08% with 95% confidence. Maximum accuracy (100%) was obtained in the detection of the events when the object detectors were replaced by manually assigned labels, which indicated the effectiveness of the inference engine developed for this framework.
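A few of Allen's interval relations are easy to sketch; the toy rule below (hypothetical names, not the dissertation's FOL syntax) shows how a temporal relation between two detected intervals can trigger an event such as a red-light violation.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float

# A few of Allen's thirteen interval relations (hypothetical rule helpers).
def before(a, b):   return a.end < b.start
def meets(a, b):    return a.end == b.start
def overlaps(a, b): return a.start < b.start < a.end < b.end
def during(a, b):   return b.start < a.start and a.end < b.end

# Toy rule: "car crosses stop line DURING red-light phase" -> violation event.
red_phase = Interval(10.0, 40.0)
crossing = Interval(22.0, 24.0)
if during(crossing, red_phase):
    print("event: red-light violation")
```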
APA, Harvard, Vancouver, ISO, and other styles
9

Reverter, Valeiras David. "Event-based detection and tracking." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066566/document.

Full text
Abstract:
The main goal of this thesis is the development of event-based algorithms for visual detection and tracking. These algorithms are specifically designed to work on the output of neuromorphic event-based cameras. This type of camera is a new kind of bio-inspired sensor whose principle of operation is based on the functioning of the retina: every pixel is independent and generates events asynchronously when a sufficient amount of change is detected in the luminance at the corresponding position on the focal plane. This new way of encoding visual information calls for new processing methods. First, a part-based shape tracker is presented, which represents an object as a set of simple shapes linked by springs. The resulting virtual mechanical system is simulated with every incoming event. Next, a line and segment detection algorithm is introduced, which can be employed as an event-based low-level feature. Two event-based methods for 3D pose estimation are then presented. The first of these 3D algorithms is based on the assumption that the current estimation is close to the true pose of the object, and it consequently requires a manual initialization step. The second of the 3D methods is designed to overcome this limitation. All the presented methods update the estimated position (2D or 3D) of the tracked object with every incoming event. This results in a series of trackers capable of estimating the position of the tracked object with microsecond resolution. This thesis shows that event-based vision allows a broad set of computer vision problems to be reformulated, often resulting in simpler but accurate algorithms.
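The part-based tracker can be sketched loosely as follows (a strong simplification of the thesis' method; names and gains are invented): each incoming event pulls the nearest part toward it, and virtual springs then relax the parts back toward their rest configuration around the common centroid.

```python
import math

class SpringTracker:
    """Toy per-event tracker: parts held together by springs around an anchor."""
    def __init__(self, parts, stiffness=0.05, event_gain=0.2):
        self.parts = [list(p) for p in parts]          # current part positions
        c0x = sum(p[0] for p in parts) / len(parts)
        c0y = sum(p[1] for p in parts) / len(parts)
        self.rest = [(p[0] - c0x, p[1] - c0y) for p in parts]  # rest offsets
        self.k = stiffness                             # spring stiffness
        self.g = event_gain                            # attraction to events

    def on_event(self, ex, ey):
        # Pull the part nearest to the incoming event toward it...
        i = min(range(len(self.parts)),
                key=lambda j: math.dist(self.parts[j], (ex, ey)))
        px, py = self.parts[i]
        self.parts[i] = [px + self.g * (ex - px), py + self.g * (ey - py)]
        # ...then let the springs relax every part toward its rest offset
        # from the current centroid.
        cx = sum(p[0] for p in self.parts) / len(self.parts)
        cy = sum(p[1] for p in self.parts) / len(self.parts)
        for p, r in zip(self.parts, self.rest):
            p[0] += self.k * (cx + r[0] - p[0])
            p[1] += self.k * (cy + r[1] - p[1])

tracker = SpringTracker([(-1.0, 0.0), (1.0, 0.0)])
tracker.on_event(1.4, 0.3)      # one event triggers one cheap update
print(tracker.parts)
```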
APA, Harvard, Vancouver, ISO, and other styles
10

Ting, Kin-hung. "Fast tracking and analysis of event-related potentials /." View the Table of Contents & Abstract, 2005. http://sunzi.lib.hku.hk/hkuto/record/B30268096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mac, Mahon Noel R. "Special event computerized tracking of officers reporting (SECTOR)." [Denver, Colo.] : Regis University, 2006. http://165.236.235.140/lib/NMacMahon2006.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Ting, Kin-hung, and 丁建鴻. "Fast tracking and analysis of event-related potentials." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B45015016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Mohror, Kathryn Marie. "Scalable event tracking on high-end parallel systems." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/2811.

Full text
Abstract:
Accurate performance analysis of high-end systems requires event-based traces to correctly identify the root cause of a number of the complex performance problems that arise on these highly parallel systems. These high-end architectures contain tens to hundreds of thousands of processors, pushing application scalability challenges to new heights. Unfortunately, the collection of event-based data presents scalability challenges itself: the large volume of collected data increases tool overhead and results in data files that are difficult to store and analyze. Our solution to these problems is a new measurement technique called trace profiling that collects the information needed to diagnose performance problems that traditionally require traces, but at a greatly reduced data volume. The trace profiling technique reduces the amount of data measured and stored by capitalizing on the repeated behavior of programs, and on the similarity of the behavior and performance of parallel processes in an application run. Trace profiling is a hybrid between profiling and tracing, collecting summary information about the event patterns in an application run. Because the data has already been classified into behavior categories, we can present reduced, partially analyzed performance data to the user, highlighting the performance behaviors that comprised most of the execution time.
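A toy sketch of the trace profiling idea (the windowing scheme and names are hypothetical): instead of appending every event to a trace, the measurement keeps one count-and-time summary per recurring event pattern, so a long repetitive run produces a small, partially analyzed profile.

```python
from collections import defaultdict

class TraceProfiler:
    """Toy 'trace profiling': summarize repeated event patterns instead of
    storing every event."""
    def __init__(self, window=4):
        self.window = window
        self.recent = []                              # sliding window of names
        self.summary = defaultdict(lambda: [0, 0.0])  # pattern -> [count, time]

    def record(self, name, duration):
        self.recent.append(name)
        if len(self.recent) > self.window:
            self.recent.pop(0)
        pattern = tuple(self.recent)                  # behavior category key
        self.summary[pattern][0] += 1
        self.summary[pattern][1] += duration

prof = TraceProfiler()
for _ in range(1000):                                 # a repetitive main loop
    for name, dt in [("compute", 1e-3), ("mpi_send", 2e-4), ("mpi_recv", 3e-4)]:
        prof.record(name, dt)
print(len(prof.summary), "patterns instead of 3000 stored events")
```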
APA, Harvard, Vancouver, ISO, and other styles
14

Dola, Lorris. "Biomimetic trajectory tracking by means of event-based control." Thesis, KTH, Reglerteknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142842.

Full text
Abstract:
Flying insects are able to accomplish unprecedented flight by regulating their optic flow, which is the velocity at which the environment scrolls in front of their eyes. This approach can be correlated to an event-based control scheme, where an event is generated according to changes in the environment. Event-based control makes it possible to reach an objective with very few updates of the control signal. In order to mimic the insect behavior, we propose to study and apply an event-based approach. The event-based control is simulated on a miniature direct-current motor linked to a propeller and tested experimentally on a real one. We also study different controllers: an event-based PI controller and a state-feedback controller. Special attention is given to the power consumption of the control in terms of energy and computational resources. We propose to lower the sampling frequency of the direct-current motor during the experimentation to reduce the power consumption, and we estimate the propeller velocity in order to get rid of the velocity sensor.
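The event-based control idea can be sketched as a send-on-delta PI controller (gains, threshold and the toy motor model below are illustrative, not the thesis' experimental setup): the control signal is recomputed only when the error has changed by more than a threshold, and is held constant otherwise.

```python
class EventBasedPI:
    """Send-on-delta PI controller: update the control signal only when the
    measured error has changed by more than a threshold."""
    def __init__(self, kp, ki, delta):
        self.kp, self.ki, self.delta = kp, ki, delta
        self.integral = 0.0
        self.last_error = None
        self.u = 0.0
        self.updates = 0

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        # Event condition: only recompute u when the error moved enough.
        if self.last_error is None or abs(error - self.last_error) > self.delta:
            self.integral += error * dt
            self.u = self.kp * error + self.ki * self.integral
            self.last_error = error
            self.updates += 1
        return self.u                      # held constant between events

ctrl = EventBasedPI(kp=0.8, ki=2.0, delta=5.0)    # delta in rpm, say
speed = 0.0
for _ in range(500):                              # crude DC-motor-like plant
    u = ctrl.step(setpoint=1000.0, measurement=speed, dt=0.01)
    speed += 0.01 * (20.0 * u - speed)            # first-order lag dynamics
print(ctrl.updates, "control updates out of 500 samples")
```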
APA, Harvard, Vancouver, ISO, and other styles
15

Steeds, Lucy. "Tracing threshold events : across art, psychopathology and prehistory." Thesis, Goldsmiths College (University of London), 2012. http://research.gold.ac.uk/7040/.

Full text
Abstract:
The starting point for this thesis is the juxtaposition of two works of art from the 1960s: Study for ‘Skin’ I, a print-drawing from 1962 by Jasper Johns, and the photograph Self-Portrait as a Fountain from 1966 by Bruce Nauman. Viewing these works in conjunction with Palaeolithic hand stencils, the marking of threshold events emerges as a theme. Resonant material is then assembled and studied: Surrealist texts and photography, or the use of photography, by André Breton, Claude Cahun and Man Ray; the medical theses of psychiatrists François Tosquelles and Jean Oury; and works on prehistoric art by Georges Bataille and André Leroi-Gourhan. The marking of threshold events at two nesting scales of analysis – the evolutionary emergence of the human species, and the psychotic onset of hallucination and delusion – is examined. Echoes are found to resound in a third register, in the neurological events that give rise to consciousness and dream experience. Consideration of the Johns drawing and Nauman photograph in these terms is proposed.
APA, Harvard, Vancouver, ISO, and other styles
16

Khaitan, Siddhartha Kumar. "On-line cascading event tracking and avoidance decision support tool." [Ames, Iowa : Iowa State University], 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
17

Hossain, Akdas, and Emma Miléus. "Eye Movement Event Detection for Wearable Eye Trackers." Thesis, Linköpings universitet, Matematik och tillämpad matematik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129616.

Full text
Abstract:
Eye tracking research is a growing area, and the fields in which eye tracking could be used in research are numerous. To understand eye tracking data, different filters are used to classify the measured eye movements. To get accurate classification, this thesis has investigated the possibility of measuring both head movements and eye movements in order to improve the estimated gaze point. The thesis investigates the difference between using head movement compensation with a velocity-based filter, the I-VT filter, and using the same filter without head movement compensation. Further on, different velocity thresholds are tested to find where the performance of the filter is best. The study is made with a mobile eye tracker, where this problem exists since there is no absolute frame of reference, as opposed to when using remote eye trackers. The head movement compensation shows promising results, with higher precision overall.
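The I-VT filter itself is simple to sketch; the version below adds a crude head-movement compensation step by subtracting head velocity from gaze velocity (the threshold and the compensation scheme are illustrative, not the thesis' calibrated method).

```python
def ivt_classify(eye_velocity, head_velocity=None, threshold=30.0):
    """Toy I-VT filter: label each sample fixation or saccade by comparing
    angular velocity (deg/s) against a threshold, optionally subtracting
    head velocity first as a crude head-movement compensation."""
    labels = []
    for i, v in enumerate(eye_velocity):
        if head_velocity is not None:
            v -= head_velocity[i]          # compensate for head motion
        labels.append("saccade" if abs(v) > threshold else "fixation")
    return labels

eye = [5, 8, 120, 140, 9, 6]               # gaze velocities in deg/s
head = [4, 4, 10, 10, 4, 4]                # head velocities from an IMU, say
print(ivt_classify(eye, head))             # a saccade in the middle
```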
APA, Harvard, Vancouver, ISO, and other styles
18

Aycock, Christopher C. "Progressive messages : tracking message progress through events." Thesis, University of Oxford, 2011. http://ora.ox.ac.uk/objects/uuid:28425a0e-cd08-4a0a-b978-74adc4901a58.

Full text
Abstract:
This thesis introduces the Progressive Messages model of communication. It is an event-driven framework for building scalable parallel and distributed computing applications on modern networks. In particular, the paradigm provides notification of message termination. That is, when a message succeeds or fails, the user's application can capture an event (often through a callback) and perform a designated action. The semantics of the Progressive Messages model are defined as an extension to the message-driven model, which is like an asynchronous RPC. Together, these models can be contrasted with the message-passing model (the basis of Sockets and MPI), which has no event notification. Using Progressive Messages allows for a more scalable design than permitted by either the message-passing or message-driven model. In particular, Progressive Messages can handle communication concurrently with computation, which means that one process does not need to wait in order to service a request or response from another process. This overlap leads to more efficiency. As part of the study of Progressive Messages, we create the MATE (Message Alerts Through Events) library, which is a prototype API that supports event notification in communication. This API was implemented in both MPI and InfiniBand verbs (OpenFabrics). "Unit tests" of network metrics show that there is some latency in event-driven message handling, though it is difficult to determine if the source of the latency is hardware- or software-based. The goal of the Progressive Messages model is that parallel and distributed computing applications will be easier to build and will be more scalable.
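A minimal sketch of termination notification follows (the interface is hypothetical and is not the MATE library's actual API): send() returns immediately, the caller keeps computing, and a callback fires when the message succeeds or fails.

```python
import threading

class ProgressiveChannel:
    """Toy model of termination notification: send() returns immediately and
    a callback fires when the message succeeds or fails."""
    def send(self, dest, payload, on_done):
        def deliver():
            try:
                dest.handle(payload)          # pretend network delivery
                on_done(True, None)           # success event
            except Exception as exc:
                on_done(False, exc)           # failure event
        threading.Thread(target=deliver).start()  # overlaps with computation

class Receiver:
    def handle(self, payload):
        print("received:", payload)

chan = ProgressiveChannel()
done = threading.Event()
chan.send(Receiver(), {"op": "update"},
          on_done=lambda ok, err: (print("ok" if ok else err), done.set()))
# ... the caller keeps computing here instead of blocking on the send ...
done.wait()
```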
APA, Harvard, Vancouver, ISO, and other styles
19

Akman, Oytun. "Multi-camera Video Surveillance: Detection, Occlusion Handling, Tracking And Event Recognition." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608620/index.pdf.

Full text
Abstract:
In this thesis, novel methods for background modeling, tracking, occlusion handling and event recognition via multi-camera configurations are presented. As the initial step, the building blocks of typical single-camera surveillance systems, namely moving object detection, tracking and event recognition, are discussed, and various widely accepted methods for these building blocks are tested to assess their performance. Next, for multi-camera surveillance systems, background modeling, occlusion handling, tracking and event recognition for two-camera configurations are examined. Various foreground detection methods are discussed, and a background modeling algorithm, which is based on a multivariate mixture of Gaussians, is proposed. During the occlusion handling studies, a novel method for segmenting occluded objects is proposed, in which a top-view of the scene, free of occlusions, is generated from multi-view data. The experiments indicate that the occlusion handling algorithm operates successfully on various test data. A novel tracking method using multi-camera configurations is also proposed. The main idea of multi-camera employment is fusing the 2D information coming from the cameras to obtain 3D information for better occlusion handling and seamless tracking. The proposed algorithm is tested on different data sets and shows clear improvement over a single-camera tracker. Finally, multi-camera trajectories of objects are classified by the proposed multi-camera event recognition method. In this method, concatenated different-view trajectories are used to train Gaussian Mixture Hidden Markov Models. The experimental results indicate an improvement in event recognition performance over event recognition using a single camera.
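As a rough stand-in for the proposed multivariate mixture-of-Gaussians background model, the sketch below uses OpenCV's standard MOG2 subtractor, which is in the same algorithm family and also flags shadows (the input file name is invented).

```python
import cv2

# Mixture-of-Gaussians background subtraction with shadow detection, using
# OpenCV's MOG2 as a stand-in for the thesis' multivariate-MoG model.
cap = cv2.VideoCapture("camera1.avi")          # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # 255 fg, 127 shadow, 0 bg
    foreground = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    cv2.imshow("foreground silhouettes", foreground)
    if cv2.waitKey(30) & 0xFF == 27:           # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```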
APA, Harvard, Vancouver, ISO, and other styles
20

Joo, Seong-Wook. "Multi-object tracking, event modeling, and activity discovery in video sequences." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/6868.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Computer Science. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
21

Harvey, Nicholas Keller James M. "Estimation and tracking of elder activity levels for health event prediction." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/6657.

Full text
Abstract:
Title from PDF of title page (University of Missouri--Columbia, viewed on March 10, 2010). The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Thesis advisor: Dr. James Keller. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
22

Borovies, Drew A. "Particle filter based tracking in a detection sparse discrete event simulation environment." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FBorovies.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environment, and Simulation (MOVES))--Naval Postgraduate School, March 2007.
Thesis Advisor(s): Christian Darken. "March 2007." Includes bibliographical references (p. 115). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
23

Smith, Christopher Rand. "The Programmatic Generation of Discrete-Event Simulation Models from Production Tracking Data." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5829.

Full text
Abstract:
Discrete-event simulation can be a useful tool in analyzing complex system dynamics in various industries. However, it is difficult for entry-level users of discrete-event simulation software both to collect the appropriate data to create a model and to actually generate the base-case simulation model. These difficulties decrease the usefulness of simulation software and limit its application in areas in which it could be potentially useful. This research proposes and evaluates a data collection and analysis methodology that allows for the programmatic generation of simulation models using production tracking data. It uses data collected from a GPS device that follows products as they move through a system. The data is then analyzed by identifying accelerations in movement as the products travel and then using those accelerations to determine the discrete events of the system. The data is also used to identify flow paths and pseudo-capacities, and to characterize the discrete events. Using the results of this analysis, it is possible to then generate a base-case discrete-event simulation. The research finds that discrete-event simulations can be programmatically generated within certain limitations. It was found that, within these limitations, the data collection and analysis method could be used to build and characterize a representative simulation model. A test scenario found that a model could be generated with 2.1% error on the average total throughput time of a product in the system, and less than 8% error on the average throughput time of a product through any particular process in the system. The research also found that the time to build a model under the proposed method is likely significantly less: programmatically building a simple model based on a real-world scenario took an experienced simulation modeler 0.4% of the time needed to build the same model manually.
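The core of the proposed analysis, finding discrete events at accelerations in the tracking data, can be sketched in a few lines (the 1-D positions and threshold are illustrative; the thesis works with real GPS traces).

```python
def detect_events(positions, dt, accel_threshold):
    """Toy version of the analysis: differentiate tracked positions twice
    and flag samples whose acceleration magnitude spikes as discrete events
    (e.g. the start or stop of a process step)."""
    velocity = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    accel = [(b - a) / dt for a, b in zip(velocity, velocity[1:])]
    return [i + 1 for i, a in enumerate(accel) if abs(a) > accel_threshold]

# 1-D positions (m): the product waits, moves at constant speed, waits again.
pos = [0, 0, 0, 1, 2, 3, 4, 5, 5, 5]
print(detect_events(pos, dt=1.0, accel_threshold=0.5))  # the two event indices
```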
APA, Harvard, Vancouver, ISO, and other styles
24

Mayne, Anna Louise. "A study of ATLAS semiconductor tracker module distortions and event cleaning with tracking." Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.554383.

Full text
Abstract:
The search for new physics with the ATLAS detector at the LHC requires a thorough understanding of Standard Model physics and the performance of the detector. A reliable prediction of the Standard Model backgrounds combined with precise measurements of collision events at a previously unreachable centre of mass energy (√s = 7 TeV) in ATLAS provides excellent opportunities for new physics discoveries.
APA, Harvard, Vancouver, ISO, and other styles
25

Danancher, Mickaël. "A discrete event approach for model-based location tracking of inhabitants in smart homes." Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2013. http://tel.archives-ouvertes.fr/tel-00955543.

Full text
Abstract:
Life expectancy has continuously increased in most countries over the last decades and will probably continue to increase in the future. This leads to new challenges relative to the autonomy and independence of the elderly. The development of Smart Homes is one direction for facing these challenges and enabling people to live longer in a safe and comfortable environment. Making a home smart consists in placing sensors, actuators and a controller in the house in order to take into account the behavior of its inhabitants and to act on their environment to improve their safety, health and comfort. Most of these approaches are based on real-time indoor Location Tracking of the inhabitants. In this thesis, a whole new approach for model-based Location Tracking of an a priori unknown number of inhabitants is proposed. This approach is based on Discrete Event Systems paradigms, theory and tools. Finite Automata (FA) are used to model the detectable motion of the inhabitants, and different methods to create such FA models have been developed. Based on these models, algorithms to perform efficient Location Tracking are defined. Finally, several approaches aiming at evaluating the relevance of the instrumentation of a Smart Home with the objective of Location Tracking are proposed. The approach has also been fully implemented and tested. Throughout the thesis, the different contributions are illustrated with case studies.
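A toy discrete-event observer illustrates the flavor of such model-based tracking (the room topology, sensor mapping and update rule are invented and far simpler than the thesis' FA models): each sensor event narrows the set of locations consistent with the events observed so far.

```python
class LocationTracker:
    """Toy discrete-event observer: maintain the set of rooms consistent
    with the motion-sensor events seen so far."""
    def __init__(self, rooms, sensor_room):
        self.possible = set(rooms)        # unknown initial location
        self.sensor_room = sensor_room    # which room each sensor observes
        self.adjacent = {"hall": {"kitchen", "bedroom"},
                         "kitchen": {"hall"}, "bedroom": {"hall"}}

    def on_sensor_event(self, sensor):
        room = self.sensor_room[sensor]
        # The inhabitant triggered this sensor, so they must be in a room
        # reachable from a previously possible room AND seen by this sensor.
        reachable = set()
        for r in self.possible:
            reachable |= {r} | self.adjacent[r]
        self.possible = reachable & {room}

t = LocationTracker(["hall", "kitchen", "bedroom"],
                    {"pir_hall": "hall", "pir_kitchen": "kitchen"})
t.on_sensor_event("pir_hall")
t.on_sensor_event("pir_kitchen")
print(t.possible)                         # {'kitchen'}
```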
APA, Harvard, Vancouver, ISO, and other styles
26

Adedoyin-Olowe, Mariam. "An association rule dynamics and classification approach to event detection and tracking in Twitter." Thesis, Robert Gordon University, 2015. http://hdl.handle.net/10059/1222.

Full text
Abstract:
Twitter is a microblogging application used for sending and retrieving instant on-line messages of not more than 140 characters. There has been a surge in Twitter activities since its launch in 2006, as well as a steady increase in event detection research on Twitter data (tweets) in recent years. With 284 million monthly active users, Twitter has continued to grow both in size and activity. The network is rapidly changing the way a global audience sources information and is influencing the process of journalism [Newman, 2009]. Twitter is now perceived as an information network in addition to being a social network. This explains why traditional news media follow activities on Twitter to enhance their news reports and news updates. Knowing the significance of the network as an information dissemination platform, news media subscribe to Twitter accounts where they post their news headlines and include the link to their on-line news where the full story may be found. Twitter users, in some cases, post breaking news on the network before such news is published by traditional news media. This can be ascribed to Twitter subscribers' nearness to the location of events. The use of Twitter as a network for information dissemination as well as for opinion expression by different entities is now common. This has also brought with it the computational challenge of extracting newsworthy content from Twitter's noisy data. Considering the enormous volume of data Twitter generates, users append the hashtag (#) symbol as a prefix to keywords in tweets. Hashtag labels describe the content of tweets. The use of hashtags also makes it easy to search for and read tweets of interest. The volume of Twitter streaming data makes it imperative to derive Topic Detection and Tracking methods to extract newsworthy topics from tweets. Since hashtags describe and enhance the readability of tweets, this research is developed to show how the appropriate use of hashtag keywords in tweets can demonstrate the temporal evolvement of related topics in real life and consequently enhance Topic Detection and Tracking on the Twitter network. We chose to apply our method to the Twitter network because of the restricted number of characters per message and because it is a network that allows sharing data publicly. More importantly, our choice was based on the fact that hashtags are an inherent component of Twitter. To this end, the aim of this research is to develop, implement and validate a new approach that extracts newsworthy topics from tweets' hashtags of real-life topics over a specified period using Association Rule Mining. We termed our novel methodology Transaction-based Rule Change Mining (TRCM). TRCM is a system built on top of the Apriori method of Association Rule Mining to extract patterns of Association Rule changes in tweets' hashtag keywords at different periods of time and to map the extracted keywords to related real-life topics or scenarios. To the best of our knowledge, the adoption of the dynamics of Association Rules of hashtag co-occurrences has not been explored as a Topic Detection and Tracking method on Twitter. The application of Apriori to hashtags present in tweets at two consecutive periods t and t + 1 produces two association rulesets, which represent rule evolvement in the context of this research. A change in rules is discovered by matching every rule in the ruleset at time t with those in the ruleset at time t + 1.
The changes are grouped under four identified rules, namely 'New' rules, 'Unexpected Consequent' and 'Unexpected Conditional' rules, 'Emerging' rules and 'Dead' rules. The four rules represent different levels of real-life topic evolvement. For example, the emerging rule represents a very important occurrence such as breaking news, while unexpected rules represent an unexpected twist of events in an ongoing topic. The new rule represents dissimilarity between rules in the rulesets at times t and t + 1. Finally, the dead rule represents a topic that is no longer present on the Twitter network. TRCM reveals the dynamics of Association Rules present in tweets and demonstrates the linkage between the different types of rule dynamics and targeted real-life topics/events. In this research, we conducted experimental studies on tweets from different domains, such as sports and politics, to test the performance effectiveness of our method. We validated our method, TRCM, with carefully chosen ground truth. The outcomes of our research experiments include: identification of four rule dynamics in tweets' hashtags using Association Rule Mining, namely New rules, Emerging rules, Unexpected rules and Dead rules, which signify how news and events evolve in real-life scenarios; identification of rule evolvement on the Twitter network using Rule Trend Analysis and Rule Trace; detection and tracking of topic evolvement on Twitter using Transaction-based Rule Change Mining (TRCM); and identification of how the peculiar features of each TRCM rule affect its performance effectiveness on real datasets.
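A toy version of the ruleset comparison may clarify the four dynamics (the grouping below is a loose approximation and omits TRCM's similarity measures): rules are antecedent/consequent pairs of hashtag sets with a support value, compared across two consecutive periods.

```python
def classify_rule_changes(rules_t, rules_t1):
    """Toy TRCM-style comparison: rules map (antecedent, consequent) pairs of
    frozensets of hashtags to a support value at times t and t + 1."""
    changes = {"new": [], "dead": [], "emerging": [], "unexpected": []}
    for rule, support in rules_t1.items():
        if rule not in rules_t:
            ante, cons = rule
            # Same antecedent seen before -> an unexpected-consequent twist;
            # otherwise the rule is entirely new.
            same_ante = any(r[0] == ante for r in rules_t)
            changes["unexpected" if same_ante else "new"].append(rule)
        elif support > rules_t[rule]:
            changes["emerging"].append(rule)      # e.g. breaking news
    changes["dead"] = [r for r in rules_t if r not in rules_t1]
    return changes

t0 = {(frozenset({"#nigeria"}), frozenset({"#election"})): 0.20}
t1 = {(frozenset({"#nigeria"}), frozenset({"#election"})): 0.45,   # emerging
      (frozenset({"#nigeria"}), frozenset({"#violence"})): 0.10}   # unexpected
print(classify_rule_changes(t0, t1))
```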
APA, Harvard, Vancouver, ISO, and other styles
27

Trudeau, Ashley B. "Tracing the Evolution of Collaborative Virtual Research Environments: A Critical Events-Based Perspective." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc862831/.

Full text
Abstract:
A significant number of scientific projects pursuing large-scale, complex investigations involve dispersed research teams, which conduct a large part of their work virtually. Virtual Research Environments (VREs), cyberinfrastructure that facilitates coordinated activities amongst dispersed scientists, thus provide a rich context in which to study organizational evolution. Due to the constantly evolving nature of technologies, it is important to understand how teams of scientists, system developers, and managers respond to critical incidents. Critical events are organizational situations that trigger strategic decision making to adjust structure or redirect processes in order to maintain balance or improve an already functioning system. This study examines two prominent VREs, the United States Virtual Astronomical Observatory (US-VAO) and the HathiTrust Research Center (HTRC), in order to understand how these environments evolve through critical events and strategic choices. Communication perspectives lend themselves well to a study of VRE development and evolution because of the central role occupied by communication technologies in both the functionality and management of VREs. Using the grounded theory approach, this study uses organizational reports to trace how critical events and their resulting strategic choices shape these organizations over time. The study also explores how disciplinary demands influence critical events.
APA, Harvard, Vancouver, ISO, and other styles
28

Ljungberg, Christian, and Erik Nilsson. "Reduction of surveillance video playback time using event-based playback : based on object tracking metadata." Thesis, Blekinge Tekniska Högskola, Institutionen för kreativa teknologier, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-12854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Prada, Rojas Carlos Hernan. "Une approche à base de composants logiciels pour l'observation de systèmes embarqués." Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00621143.

Full text
Abstract:
Embedded devices today host a wide variety of applications with complex functionality and ever-growing computing power demands. They are currently evolving from multiprocessor systems-on-chip towards many-core architectures, posing new challenges for embedded software development. That development has classically been driven by performance and thus by the specific needs of each platform. This approach is becoming too costly with the new hardware architectures and their rapid succession. At present, there is no consensus on the programming environments to use for the new embedded architectures. To enable faster development of embedded software, the development chain needs tools for tuning applications. Such tuning relies on observation techniques, which consist of collecting information about the behavior of the embedded system during execution. Current observation techniques support only a limited number of processors and are strongly dependent on hardware characteristics. In this thesis, we propose EMBera: a component-based approach for observing multiprocessor systems-on-chip. EMBera aims at genericity, portability, the observation of a large number of elements, and intrusion control. Genericity is obtained by encapsulating specific functionality and exporting generic observation interfaces. Portability is achieved through components that, on the one hand, target processing common to MPSoCs and, on the other, allow adaptation to platform specifics. Scalability is achieved by allowing partial observation of a system, focusing only on the elements of interest: application modules, hardware components, or the different levels of the software stack. Intrusion control is facilitated by the ability to configure the type and level of detail of the data collection mechanisms. The approach is validated through different case studies using several hardware and software configurations. We show that this approach offers real added value in supporting embedded software development.
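The genericity idea can be sketched as a component interface (all names are hypothetical; this is not EMBera's actual API): platform-specific probes hide their collection mechanism behind one observation interface, and composing only the probes of interest keeps intrusion low.

```python
from abc import ABC, abstractmethod

class Probe(ABC):
    """Generic observation interface: platform-specific probes hide their
    collection mechanism behind one API."""
    @abstractmethod
    def collect(self) -> dict: ...

class CpuLoadProbe(Probe):
    def collect(self):
        return {"cpu_load": 0.42}          # would read a hardware counter

class SchedulerProbe(Probe):
    def collect(self):
        return {"ready_tasks": 7}          # would hook the OS scheduler

class Observer:
    """Composition controls intrusiveness: observe only elements of interest."""
    def __init__(self, probes):
        self.probes = probes

    def snapshot(self):
        data = {}
        for p in self.probes:
            data.update(p.collect())
        return data

obs = Observer([CpuLoadProbe()])           # partial observation, low intrusion
print(obs.snapshot())
```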
APA, Harvard, Vancouver, ISO, and other styles
30

Orten, Burkay Birant. "Moving Object Identification And Event Recognition In Video Surveillance Systems." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606294/index.pdf.

Full text
Abstract:
This thesis is devoted to the problems of defining and developing the basic building blocks of an automated surveillance system. As its initial step, a background-modeling algorithm is described for segmenting moving objects from the background, which is capable of adapting to dynamic scene conditions, as well as determining shadows of the moving objects. After obtaining binary silhouettes for targets, object association between consecutive frames is achieved by a hypothesis-based tracking method. Both of these tasks provide basic information for higher-level processing, such as activity analysis and object identification. In order to recognize the nature of an event occurring in a scene, hidden Markov models (HMM) are utilized. For this aim, object trajectories, which are obtained through a successful track, are written as a sequence of flow vectors that capture the details of instantaneous velocity and location information. HMMs are trained with sequences obtained from usual motion patterns and abnormality is detected by measuring the distance to these models. Finally, MPEG-7 visual descriptors are utilized in a regional manner for object identification. Color structure and homogeneous texture parameters of the independently moving objects are extracted and classifiers, such as Support Vector Machine (SVM) and Bayesian plug-in (Mahalanobis distance), are utilized to test the performance of the proposed person identification mechanism. The simulation results with all the above building blocks give promising results, indicating the possibility of constructing a fully automated surveillance system for the future.
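To make the HMM-based abnormality detection step concrete, the sketch below illustrates the general technique rather than the thesis code: a Gaussian HMM is trained on flow-vector sequences from usual motion patterns, and a trajectory is flagged as abnormal when its length-normalised log-likelihood under that model falls below a threshold. The 4-D feature layout, state count, and threshold are illustrative assumptions.

```python
# A minimal sketch (not the author's code) of HMM-based abnormality
# detection on object trajectories, assuming each trajectory is a
# sequence of 4-D flow vectors (x, y, dx, dy) as the abstract describes.
import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_normal_model(trajectories, n_states=4, seed=0):
    """Fit a Gaussian HMM to flow-vector sequences from usual motion patterns."""
    X = np.concatenate(trajectories)          # stack all sequences
    lengths = [len(t) for t in trajectories]  # per-sequence lengths
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=100, random_state=seed)
    model.fit(X, lengths)
    return model

def is_abnormal(model, trajectory, threshold=-12.0):
    """Flag a trajectory whose per-frame log-likelihood under the
    'usual motion' model falls below an assumed threshold."""
    ll = model.score(trajectory) / len(trajectory)  # normalise by length
    return ll < threshold

# Hypothetical usage with synthetic data:
rng = np.random.default_rng(0)
normal = [rng.normal(0, 1, size=(50, 4)) for _ in range(20)]
model = train_normal_model(normal)
print(is_abnormal(model, rng.normal(5, 1, size=(50, 4))))  # expected: True
```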
APA, Harvard, Vancouver, ISO, and other styles
31

Oldham, Kevin M. "Table tennis event detection and classification." Thesis, Loughborough University, 2015. https://dspace.lboro.ac.uk/2134/19626.

Full text
Abstract:
It is well understood that multiple video cameras and computer vision (CV) technology can be used in sport for match officiating, statistics and player performance analysis. A review of the literature reveals a number of existing solutions, both commercial and theoretical, within this domain. However, these solutions are expensive and often complex in their installation. The hypothesis for this research states that by considering only changes in ball motion, automatic event classification is achievable with low-cost monocular video recording devices, without the need for 3-dimensional (3D) positional ball data and representation. The focus of this research is a rigorous empirical study of low cost single consumer-grade video camera solutions applied to table tennis, confirming that monocular CV based detected ball location data contains sufficient information to enable key match-play events to be recognised and measured. In total a library of 276 event-based video sequences, using a range of recording hardware, was produced for this research. The research has four key considerations: i) an investigation into an effective recording environment with minimum configuration and calibration, ii) the selection and optimisation of a CV algorithm to detect the ball from the resulting single source video data, iii) validation of the accuracy of the 2-dimensional (2D) CV data for motion change detection, and iv) the data requirements and processing techniques necessary to automatically detect changes in ball motion and match those to match-play events. Throughout the thesis, table tennis has been chosen as the example sport for observational and experimental analysis since it offers a number of specific CV challenges due to the relatively high ball speed (in excess of 100 kph) and small ball size (40 mm in diameter). Furthermore, the inherent rules of table tennis show potential for a monocular based event classification vision system. As the initial stage, a proposed optimum location and configuration of the single camera is defined. Next, the selection of a CV algorithm is critical in obtaining usable ball motion data. It is shown in this research that segmentation processes vary in their ball detection capabilities and location outputs, which ultimately affects the ability of automated event detection and decision making solutions. Therefore, a comparison of CV algorithms is necessary to establish confidence in the accuracy of the derived location of the ball. As part of the research, a CV software environment has been developed to allow robust, repeatable and direct comparisons between different CV algorithms. An event based method of evaluating the success of a CV algorithm is proposed. Comparison of CV algorithms is made against the novel Efficacy Metric Set (EMS), producing a measurable Relative Efficacy Index (REI). Within the context of this low cost, single camera ball trajectory and event investigation, experimental results provided show that the Horn-Schunck Optical Flow algorithm, with a REI of 163.5, is the most successful method when compared to a discrete selection of CV detection and extraction techniques gathered from the literature review. Furthermore, evidence based data from the REI also suggests switching to the Canny edge detector (a REI of 186.4) for segmentation of the ball when in close proximity to the net.
In addition to and in support of the data generated from the CV software environment, a novel method is presented for producing simultaneous data from 3D marker based recordings, reduced to 2D and compared directly to the CV output to establish comparative time-resolved data for the ball location. It is proposed here that a continuous scale factor, based on the known dimensions of the ball, is incorporated at every frame. Using this method, comparison results show a mean accuracy of 3.01mm when applied to a selection of nineteen video sequences and events. This tolerance is within 10% of the diameter of the ball and accountable by the limits of image resolution. Further experimental results demonstrate the ability to identify a number of match-play events from a monocular image sequence using a combination of the suggested optimum algorithm and ball motion analysis methods. The results show a promising application of 2D based CV processing to match-play event classification with an overall success rate of 95.9%. The majority of failures occur when the ball, during returns and services, is partially occluded by either the player or racket, due to the inherent problem of using a monocular recording device. Finally, the thesis proposes further research and extensions for developing and implementing monocular based CV processing of motion based event analysis and classification in a wider range of applications.
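As an illustration of the kind of low-cost monocular ball detection discussed in this abstract, the following sketch locates a ball candidate in a single frame with OpenCV's Hough circle transform, which applies a Canny edge detector internally (param1 is its upper threshold). It is not the thesis pipeline, and all thresholds and radii are illustrative assumptions.

```python
# A minimal sketch, not the thesis implementation: locating a
# table-tennis-sized ball in a single frame via a Hough circle fit.
import cv2

def detect_ball(frame_bgr, min_r=5, max_r=25):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 1.5)  # suppress sensor noise
    # HOUGH_GRADIENT runs a Canny edge detector internally; param1 is
    # the upper Canny threshold, param2 the accumulator threshold.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=40, param1=150, param2=20,
                               minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return None                              # occluded or not detected
    x, y, r = circles[0, 0]
    return float(x), float(y), float(r)          # 2D centre and radius
```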
APA, Harvard, Vancouver, ISO, and other styles
32

Sipahioglu, Sara M. "Tracking storms through time event deposition and biologic response in Storr's Lake, San Salvador Island, Bahamas /." Akron, OH : University of Akron, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=akron1227031927.

Full text
Abstract:
Thesis (M.S.)--University of Akron, Dept. of Geology, 2008.
"December, 2008." Title from electronic thesis title page (viewed 12/13/2009) Advisor, Lisa E. Park; Faculty Readers, Ira D. Sasowsky, John Peck; Department Chair, John P. Szabo; Dean of the College, Ronald F. Levant; Dean of the Graduate School, George R. Newkome. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
33

Sipahioglu, Sara M. "Tracking Storms through Time: Event Deposition and Biologic Response in Storr’s Lake, San Salvador Island, Bahamas." University of Akron / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=akron1227031927.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Louw, Illka. "From designer through space to spectator : tracking an imaginative exchange between the actants of a scenographic event." Master's thesis, University of Cape Town, 2013. http://hdl.handle.net/11427/11200.

Full text
Abstract:
Includes bibliographical references.
The aim of this enquiry is to deepen the understanding of the author's practice as theatre designer, scenographer and visual dramaturge in a postdramatic milieu. This study creates a theoretical frame for a research-led performance that is especially dependent on the release of 'active energies of imagination' (Lehmann, 2006:16). The performance will take the form of a scenographic event, which does not depend on 'the principles of narration and figuration' (Lehmann, 2006:18). Instead it relies on a 'visual dramaturgy' which, 'just as in front of a painting, activates the dynamic capacity of the gaze to produce processes, combinations and rhythms on the basis of the data provided by the stage' (Lehmann, 2006:157). The study proposes that the release of 'active energies of imagination' (2006:16) extends beyond the space of the live event, tracking its origin to the interaction between the designer and the materials of her art.
APA, Harvard, Vancouver, ISO, and other styles
35

Yin, Munan. "Haptic optical tweezers with 3D high-speed tracking." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066003/document.

Full text
Abstract:
Micromanipulation has a great potential to revolutionize the biological research and medical care. At small scales, microrobots can perform medical tasks with minimally invasive, and explore life at a fundamental level. Optical Tweezers are one of the most popular techniques for biological manipulation. The small-batch production which demands high flexibilities mainly relies on teleoperation process. However, the limited level of intuitiveness makes it more and more difficult to effectively conduct the manipulation and exploration tasks in the complex microworld. Under such circumstances, pioneer researchers have proposed to incorporate haptics into the control loop of OTs system, which aims to handle the micromanipulation tasks in a more flexible and effective way. However, the solution is not yet complete, and there are two main challenges to resolve in this thesis: 3D force detection, which should be accurate, fast, and robust in large enough working space; High-speed up to 1 kHz force feedback, which is indispensable to allow a faithful tactile sensation and to ensure system stability. In optical tweezers micromanipulation, vision is a sound candidate for force estimation since the position-force model is well established. However, the 1 kHz tracking is beyond the speed of the conventional processing methods. The emerging discipline of biomorphic engineering aiming to integrate the behaviors of livings into large-scale computer hardware or software breaks the bottleneck. The Asynchronous Time-Based Image Sensor (ATIS) is the latest generation of neuromorphic silicon retina prototype which records only scene contrast changes in the form of a stream of events. This property excludes the redundant background and allows high-speed motion detection and processing. The event-based vision has thus been applied to address the requirement of 3D high-speed force feedback. The result shows that the first 3D high-speed haptic optical tweezers for biological application have been achieved. The optical realization and event-based tracking algorithms for 3D high-speed force detection have been developed and validated. Reproducible exploration of the 3D biological surface has been demonstrated for the first time. As a powerful 3D high-speed force sensor, the developed optical tweezers system poses significant potential for various applications
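The event-based tracking idea described above can be illustrated with a small sketch: each incoming contrast-change event nudges the tracked position estimate, so no full frames are ever processed. This is a generic illustration under an assumed (t, x, y) event layout, not the ATIS pipeline developed in the thesis.

```python
# A minimal sketch of event-based position tracking: a stream of
# (t, x, y) events from a neuromorphic sensor updates the estimate
# with an exponentially weighted mean. The smoothing constant and the
# data layout are illustrative assumptions.
import numpy as np

def track_events(events, alpha=0.05, start=(0.0, 0.0)):
    """events: iterable of (t, x, y); returns a trajectory of estimates."""
    pos = np.array(start, dtype=float)
    trajectory = []
    for t, x, y in events:
        pos += alpha * (np.array([x, y]) - pos)  # incremental update per event
        trajectory.append((t, float(pos[0]), float(pos[1])))
    return trajectory
```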
APA, Harvard, Vancouver, ISO, and other styles
36

Acunzo, David Jean Pascal. "Interaction between visual attention and the processing of visual emotional stimuli in humans : eye-tracking, behavioural and event-related potential experiments." Thesis, University of Edinburgh, 2013. http://hdl.handle.net/1842/8016.

Full text
Abstract:
Past research has shown that the processing of emotional visual stimuli and visual attention are tightly linked together. In particular, emotional stimuli processing can modulate attention, and, reciprocally, the processing of emotional stimuli can be facilitated or inhibited by attentional processes. However, our understanding of these interactions is still limited, with much work remaining to be done to understand the characteristics of this reciprocal interaction and the different mechanisms that are at play. This thesis presents a series of experiments which use eye-tracking, behavioural and event-related potential (ERP) methods in order to better understand these interactions from a cognitive and neuroscientific point of view. First, the influence of emotional stimuli on eye movements, reflecting overt attention, was investigated. While it is known that the emotional gist of images attracts the eye (Calvo and Lang, 2004), little is known about the influence of emotional content on eye movements in more complex visual environments. Using eye-tracking methods, and by adapting a paradigm originally used to study the influence of semantic inconsistencies in scenes (Loftus and Mackworth, 1978), we found that participants spend more time fixating emotional than neutral targets embedded in visual scenes, but do not fixate them earlier. Emotional targets in scenes were therefore found to hold, but not to attract, the eye. This suggests that due to the complexity of the scenes and the limited processing resources available, the emotional information projected extra-foveally is not processed in such a way that it drives eye movements. Next, in order to better characterise the exogenous deployment of covert attention toward emotional stimuli, a sample of sub-clinically anxious individuals was studied. Anxiety is characterised by a reflexive attentional bias toward threatening stimuli. A dot-probe task (MacLeod et al., 1986) was designed to replicate and extend past findings of this attentional bias. In particular, the experiment was designed to test whether the bias was caused by faster reaction times to fear-congruent probes or slower reaction times to neutral-congruent probes. No attentional bias could be measured. A further analysis of the literature suggests that subliminal cue stimulus presentation, as used in our case, may not generate reliable attentional biases, unlike longer cue presentations. This would suggest that while emotional stimuli can be processed without awareness, further processing may be necessary to trigger reflexive attentional shifts in anxiety. Then the time-course of emotional stimulus processes and its modulation by attention was investigated. Modulations of the very early visual ERP C1 component by emotional stimuli (e.g. Pourtois et al., 2004; Stolarova et al., 2006), but also by visual attention (Kelly et al., 2008), were reported in the literature. A series of three experiments were performed, investigating the interactions between endogenous covert spatial attention and object-based attention with emotional stimuli processing in the C1 time window (50–100 ms). It was found that emotional stimuli modulated the C1 only when they were spatially attended and task-irrelevant. This suggests that whilst spatial attention gates emotional facial processing from the earliest stages, only incidental processing triggers a specific response before 100 ms. 
Additionally, the results suggest a very early modulation by feature-based attention which is independent from spatial attention. Finally, simulated and actual electroencephalographic data were used to show that modulations of early ERP and event-related field (ERF) components are highly dependent on the high-pass filter used in the pre-processing stage. A survey of the literature found that a large part of ERP/ERF reports (about 40%) use high-pass filters that may bias the results. More particularly, a large proportion of papers reporting very early modulations also use such filters. Consequently, a large part of the literature may need to be re-assessed. The work described in this thesis contributes to a better understanding of the links between emotional stimulus processing and attention at different levels. Using various experimental paradigms, this work confirms that emotional stimuli processing is not ‘automated’, but highly dependent on the focus of attention, even at the earlier stages of visual processing. Furthermore, the uncovered potential bias generated by filtering will help to improve the reliability and precision of research in the ERP/ERF field, and more particularly in studies looking at early effects.
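The filtering caveat raised in this abstract can be demonstrated numerically. The sketch below (illustrative, not the thesis analysis) applies zero-phase Butterworth high-pass filters with increasing cutoffs to a synthetic early-component waveform and reports the spurious pre-stimulus deflection the filter introduces; the sampling rate, cutoffs, and component latency are assumptions.

```python
# A minimal sketch of high-pass filter bias on an early ERP-like
# transient: stronger cutoffs produce larger artificial baseline dips.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                  # sampling rate (Hz), assumed
t = np.arange(-0.2, 0.6, 1.0 / fs)          # epoch from -200 to 600 ms
erp = np.exp(-((t - 0.075) / 0.02) ** 2)    # synthetic "C1-like" peak at 75 ms

for cutoff in (0.1, 1.0, 2.0):              # Hz; illustrative cutoffs
    b, a = butter(2, cutoff / (fs / 2), btype="highpass")
    filtered = filtfilt(b, a, erp)          # zero-phase filtering
    # Spurious pre-stimulus deflection introduced by the filter:
    baseline_dip = filtered[t < 0].min()
    print(f"{cutoff:4.1f} Hz high-pass -> pre-stimulus dip {baseline_dip:+.3f}")
```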
APA, Harvard, Vancouver, ISO, and other styles
37

Abdelhaq, Hamed [Verfasser], and Michael [Akademischer Betreuer] Gertz. "Localized Events in Social Media Streams: Detection, Tracking, and Recommendation / Hamed Abdelhaq ; Betreuer: Michael Gertz." Heidelberg : Universitätsbibliothek Heidelberg, 2016. http://d-nb.info/118061075X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Trimble, Michael L., John E. Wells, and Timothy J. Wurth. "TELEMETRY SYSTEMS SUSTAINMENT." International Foundation for Telemetering, 2007. http://hdl.handle.net/10150/604527.

Full text
Abstract:
ITC/USA 2007 Conference Proceedings / The Forty-Third Annual International Telemetering Conference and Technical Exhibition / October 22-25, 2007 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Tactical training ranges provide an opportunity for all of the armed forces to assess operational readiness. To perform this task the various training ranges have deployed numerous telemetry systems. The current design efforts in place to upgrade the capabilities and unify the ranges under one telemetry system do not address the training ranges' need to maintain their training capability with the legacy systems that have been deployed until the new systems are ready. Two systems that have recently undergone sustainment efforts are the Player and Event Tracking System (TAPETS) and the Large Area Tracking Range (LATR). TAPETS is a telemetry system operated by the U.S. Army Operational Test Command. The TAPETS system is comprised of the ground mobile station Standard Range Unit (SRU) and the aircraft Inertial Global Positioning System (GPS) Integration (IGI) Pod. Both systems require a transponder for the wireless communications link. LATR is an over-the-horizon telemetry system operated by the U.S. Navy at various test ranges to track ground based, ship based, and airborne participants in training exercises. The LATR system is comprised of Rotary Wing (RW), Fixed Wing (FW) Pods, Fixed Wing Internal (FWI), Ship, and Ground Participant Instrumentation Packages (PIPs) as well as Ground Interrogation Station (GIS) and relay stations. Like the TAPETS system, each of these packages and stations also requires a transponder for the wireless communications link. Both telemetry systems have developed additional capabilities in order to better support and train the Armed Forces, which consequently requires more transponders. In addition, some areas were experiencing failures in transponders that had been deployed for many years. The available spare components of some systems had been depleted, and the sustainment requirements along with the increased demand for assets were beginning to impact the ability of the systems to successfully monitor the training ranges during exercises. The path to maintaining operational capability chosen for the TAPETS system was a mixed approach that consisted of identifying a depot-level repair facility for their transponders and funding the development of new transponder printed circuit boards (PCBs) where obsolescence prevented a sufficient number of repairable units. In the case of LATR, the decision was made to create new transponders to take advantage of cost-effective state-of-the-art RF design and manufacturing processes. The result of this effort is a new transponder that is operationally indistinguishable from the legacy transponder in all installation environments. The purpose of this paper is to present two successful system sustainment efforts with different approaches to serve as models for preserving the current level of training range capabilities until the next generation of telemetry systems is deployed. While the two programs illustrated here deal primarily with the transponder components of the systems, these same methods can be applied to the other aspects of legacy telemetry system sustainment efforts.
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Ke. "A joint model of an internal time-dependent covariate and bivariate time-to-event data with an application to muscular dystrophy surveillance, tracking and research network data." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/2237.

Full text
Abstract:
Joint modeling of a single event time response with a longitudinal covariate dates back to the 1990s. The three basic types of joint modeling formulations are selection models, pattern mixture models and shared parameter models. The shared parameter models are most widely used. One type of a shared parameter model (Joint Model I) utilizes unobserved random effects to jointly model a longitudinal sub-model and a survival sub-model to assess the impact of an internal time-dependent covariate on the time-to-event response. Motivated by the Muscular Dystrophy Surveillance, Tracking and Research Network (MD STARnet), we constructed a new model (Joint Model II), to jointly analyze correlated bivariate time-to-event responses associated with an internal time-dependent covariate in the Frequentist paradigm. This model exhibits two distinctive features: 1) a correlation between bivariate time-to-event responses and 2) a time-dependent internal covariate in both survival models. Developing a model that sufficiently accommodates both characteristics poses a challenge. To address this challenge, in addition to the random variables that account for the association between the time-to-event responses and the internal time-dependent covariate, a Gamma frailty random variable was used to account for the correlation between the two event time outcomes. To estimate the model parameters, we adopted the Expectation-Maximization (EM) algorithm. We built a complete joint likelihood function with respect to both latent variables and observed responses. The Gauss-Hermite quadrature method was employed to approximate the two-dimensional integrals in the E-step of the EM algorithm, and the maximum profile likelihood type of estimation method was implemented in the M-step. The bootstrap method was then applied to estimate the standard errors of the estimated model parameters. Simulation studies were conducted to examine the finite sample performance of the proposed methodology. Finally, the proposed method was applied to MD STARnet data to assess the impact of shortening fractions and steroid use on the onsets of scoliosis and mental health issues.
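To make the E-step approximation described above concrete, the following sketch shows tensor-product Gauss-Hermite quadrature for a two-dimensional expectation over independent standard-normal random effects, the kind of integral the abstract refers to. The node count and the iid-normal setup are illustrative assumptions, not the dissertation's exact model.

```python
# A minimal sketch (illustrative, not the dissertation's code) of
# approximating E[g(b1, b2)] for b1, b2 ~ iid N(0, 1) with a
# tensor-product Gauss-Hermite rule.
import numpy as np
from numpy.polynomial.hermite import hermgauss

def expect_2d(g, n_nodes=20):
    """Approximate E[g(b1, b2)] for independent standard-normal b1, b2."""
    x, w = hermgauss(n_nodes)          # nodes/weights for weight e^{-x^2}
    b = np.sqrt(2.0) * x               # change of variables to N(0, 1)
    W = np.outer(w, w) / np.pi         # tensor-product weights, normalised
    B1, B2 = np.meshgrid(b, b, indexing="ij")
    return float(np.sum(W * g(B1, B2)))

# Check against a closed form: E[b1^2 + b2^2] = 2.
print(expect_2d(lambda b1, b2: b1**2 + b2**2))  # ~2.0
```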
APA, Harvard, Vancouver, ISO, and other styles
40

Napiralla, Philipp [Verfasser], Norbert [Akademischer Betreuer] Pietralla, and Herbert [Akademischer Betreuer] Egger. "Employing γ-ray Tracking as an Event-discrimination Technique for γ-spectroscopy with AGATA / Philipp Napiralla ; Norbert Pietralla, Herbert Egger." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2019. http://d-nb.info/1201820685/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Napiralla, Philipp [Verfasser], Norbert [Akademischer Betreuer] Pietralla, and Herbert [Akademischer Betreuer] Egger. "Employing γ-ray Tracking as an Event-discrimination Technique for γ-spectroscopy with AGATA / Philipp Napiralla ; Norbert Pietralla, Herbert Egger." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2019. http://d-nb.info/1201820685/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Joerger, Guillaume. "Multiscale modeling and event tracking wireless technologies to improve efficiency and safety of the surgical flow in an OR suite." Thesis, La Rochelle, 2017. http://www.theses.fr/2017LAROS009/document.

Full text
Abstract:
Improving operating room management is a constant issue for modern large hospital systems that have to deal with the reality of day-to-day clinical activity. As opposed to other industrial sectors, such as civil aviation, that have mastered the topic of industry organization and safety, progress in surgical flow management has been slower. The goal of the work presented here is to develop and implement technologies that apply the principles of computational science to OR suite problems. Most of the currently available models of surgical flow are used for planning purposes and are essentially stochastic processes due to uncertainties in the available data. We propose an agent-based model framework that can incorporate all the elements, from the communication skills of the staff to the time it takes for the janitorial team to clean an OR. We believe that the human factor is at the center of the difficulty of OR suite management and should be incorporated in the model. In parallel, we use a numerical model of airflow at the OR suite level to monitor and simulate environment conditions inside the OR. We hypothesize that the following three key ingredients will provide the level of accuracy needed to improve OR management: 1) real-time updates of the model with ad hoc sensors of tasks/stages, 2) construction of a multi-scale model that links all key elements of the complex surgical infrastructure, and 3) careful analysis of patient population factors, staff behavior, and environment conditions. We have developed a robust and non-obtrusive automatic event tracking system to make our model realistic to clinical conditions. Not only do we track traffic through the door and the air quality inside the OR, we can also detect standard events in the surgical process. We propose a computational fluid dynamics model of a part of an OR suite to track dispersion of toxic surgical smoke, and build in parallel a multidomain model of potential nosocomial contaminant particle flow in an OR suite. Combining the three models raises awareness of the OR suite by bringing the surgical staff a cyber-physical system capable of predicting rare events in the workflow and the safety conditions.
APA, Harvard, Vancouver, ISO, and other styles
43

Huh, Seungil. "Toward an Automated System for the Analysis of Cell Behavior: Cellular Event Detection and Cell Tracking in Time-lapse Live Cell Microscopy." Thesis, Carnegie Mellon University, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3538985.

Full text
Abstract:

Time-lapse live cell imaging has been increasingly employed by biological and biomedical researchers to understand the underlying mechanisms in cell physiology and development by investigating behavior of cells. This trend has led to a huge amount of image data, the analysis of which becomes a bottleneck in related research. Consequently, how to efficiently analyze the data is emerging as one of the major challenges in the fields.

Computer vision analysis of non-fluorescent microscopy images, representatively phase-contrast microscopy images, promises to realize long-term monitoring of live cell behavior with minimal perturbation and human intervention. To take a step toward such a system, this thesis proposes computer vision algorithms that monitor cell growth, migration, and differentiation by detecting three cellular events—mitosis (cell division), apoptosis (programmed cell death), and differentiation—and tracking individual cells. Among the cellular events, to the best of our knowledge, apoptosis and a certain type of differentiation, namely into muscle myotubes, have never been detected without fluorescent labeling. We address these challenging problems by developing computer vision algorithms for phase contrast microscopy. We also significantly improve the accuracy of mitosis detection and cell tracking in phase contrast microscopy over previous methods, particularly under non-trivial conditions, such as high cell density or confluence. We demonstrate the usefulness of our methods in biological research by analyzing cell behavior in scratch wound healing assays. The automated system that we are pursuing would lead to a new paradigm of biological research by enabling quantitative and individualized assessment of the behavior of a large population of intact cells.

APA, Harvard, Vancouver, ISO, and other styles
44

Yang, Yu-Fang. "Contribution des caractéristiques diagnostiques dans la reconnaissance des expressions faciales émotionnelles : une approche neurocognitive alliant oculométrie et électroencéphalographie." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS099/document.

Full text
Abstract:
Proficient recognition of facial expression is crucial for social interaction. Behaviour, event-related potentials (ERPs), and eye-tracking techniques can be used to investigate the underlying brain mechanisms supporting this seemingly effortless processing of facial expression. Facial expression recognition involves not only the extraction of expressive information from diagnostic facial features, known as part-based processing, but also the integration of featural information, known as configural processing. Despite the critical role of diagnostic features in emotion recognition and extensive research in this area, it is still not known how the brain decodes configural information in terms of emotion recognition. The complexity of facial information integration becomes evident when comparing performance between healthy subjects and individuals with schizophrenia, because those patients tend to process featural information on emotional faces. The different ways of examining faces possibly impact social-cognitive ability in recognizing emotions. Therefore, this thesis investigates the role of diagnostic features and face configuration in the recognition of facial expression. In addition to behavior, we examined both the spatiotemporal dynamics of fixations using eye-tracking, and early neurocognitive sensitivity to faces as indexed by the P100 and N170 ERP components. In order to address these questions, we built a new set of sketch face stimuli by transforming photographed faces from the Radboud Faces Database through the removal of facial texture, retaining only the diagnostic features (e.g., eyes, nose, mouth) with neutral and four facial expressions - anger, sadness, fear, happiness. Sketch faces supposedly impair configural processing in comparison with photographed faces, resulting in increased sensitivity to diagnostic features through part-based processing. The direct comparison of neurocognitive measures between sketch and photographed faces expressing basic emotions has never been tested. In this thesis, we examined (i) eye fixations as a function of stimulus type, and (ii) neuroelectric response to experimental manipulations such as face inversion and deconfiguration. The use of these methods aimed to reveal which face processing drives emotion recognition and to establish neurocognitive markers of emotional sketch and photographed face processing. Overall, the behavioral results showed that sketch faces convey sufficient expressive information (content of diagnostic features), as photographed faces do, for emotion recognition. There was a clear emotion recognition advantage for happy expressions as compared to other emotions. In contrast, recognizing sad and angry faces was more difficult. Concomitantly, eye-tracking results showed that participants employed more part-based processing on sketch and photographed faces during the second fixation. Extracting information from the eyes is needed when the expression conveys more complex emotional information and when stimuli are impoverished (e.g., sketches). Using electroencephalography (EEG), the P100 and N170 components were used to study the effect of stimulus type (sketch, photographed), orientation (inverted, upright), and deconfiguration, and possible interactions. Results also suggest that sketch faces evoked more part-based processing.
The cues conveyed by diagnostic features might have been subjected to early processing, likely driven by low-level information during the P100 time window, followed by a later decoding of facial structure and its emotional content in the N170 time window. In sum, this thesis helped elucidate elements of the debate about configural and part-based face processing for emotion recognition, and extends our current understanding of the role of diagnostic features and configural information during neurocognitive processing of facial expressions of emotion.
APA, Harvard, Vancouver, ISO, and other styles
45

Limbach, Sebastian [Verfasser]. "Software tools and efficient algorithms for the feature detection, feature tracking, event localization, and visualization of large sets of atmospheric data / Sebastian Limbach." Mainz : Universitätsbibliothek Mainz, 2013. http://d-nb.info/1041309724/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Langner, Jens. "Event-Driven Motion Compensation in Positron Emission Tomography: Development of a Clinically Applicable Method." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-23509.

Full text
Abstract:
Positron emission tomography (PET) is a well-established functional imaging method used in nuclear medicine. It allows for retrieving information about biochemical and physiological processes in vivo. The currently possible spatial resolution of PET is about 5 mm for brain acquisitions and about 8 mm for whole-body acquisitions, while recent improvements in image reconstruction point to a resolution of 2 mm in the near future. Typical acquisition times range from minutes to hours due to the low signal-to-noise ratio of the measuring principle, as well as due to the monitoring of the metabolism of the patient over a certain time. Therefore, patient motion increasingly limits the possible spatial resolution of PET. In addition, patient immobilisations are only of limited benefit in this context. Thus, patient motion leads to a relevant resolution degradation and incorrect quantification of metabolic parameters. The present work describes the utilisation of a novel motion compensation method for clinical brain PET acquisitions. By using an external motion tracking system, information about the head motion of a patient is continuously acquired during a PET acquisition. Based on the motion information, a newly developed event-based motion compensation algorithm performs spatial transformations of all registered coincidence events, thus utilising the raw data of a PET system - the so-called `list-mode´ data. For routine acquisition of this raw data, methods have been developed which allow for the first time to acquire list-mode data from an ECAT Exact HR+ PET scanner within an acceptable time frame. Furthermore, methods for acquiring the patient motion in clinical routine and methods for an automatic analysis of the registered motion have been developed. For the clinical integration of the aforementioned motion compensation approach, the development of additional methods (e.g. graphical user interfaces) was also part of this work. After development, optimisation and integration of the event-based motion compensation in clinical use, analyses with example data sets have been performed. Noticeable changes could be demonstrated by analysis of the qualitative and quantitative effects after the motion compensation. From a qualitative point of view, image artefacts have been eliminated, while quantitatively, the results of a tracer kinetics analysis of a FDOPA acquisition showed relevant changes in the R0k3 rates of an irreversible reference tissue two compartment model. Thus, it could be shown that an integration of a motion compensation method which is based on the utilisation of the raw data of a PET scanner, as well as the use of an external motion tracking system, is not only reasonable and possible for clinical use, but also shows relevant qualitative and quantitative improvement in PET imaging
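The core of the event-based correction described above can be sketched as follows: each registered coincidence event is assigned the rigid head transform recorded by the tracking system at its time stamp, and both endpoints of its line of response are transformed. The data layout and the nearest-sample lookup below are assumptions for illustration, not the ECAT Exact HR+ list-mode format.

```python
# A minimal sketch of event-based motion compensation: per-event rigid
# transformation of LOR endpoints using a time-indexed set of tracked
# correcting transforms (e.g. the inverse of the measured head motion).
import numpy as np

def compensate_events(endpoints, event_times, motion_times, rotations, translations):
    """endpoints: (N, 2, 3) LOR endpoints in mm; event_times: (N,);
    rotations: (M, 3, 3) and translations: (M, 3) sampled at motion_times."""
    # Index of the most recent motion sample for each event time stamp.
    idx = np.searchsorted(motion_times, event_times, side="right") - 1
    idx = np.clip(idx, 0, len(motion_times) - 1)
    R = rotations[idx]                       # (N, 3, 3) per-event rotation
    t = translations[idx]                    # (N, 3) per-event translation
    # Apply the correcting rigid transform to both endpoints of each LOR.
    corrected = np.einsum('nij,nkj->nki', R, endpoints) + t[:, None, :]
    return corrected
```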
APA, Harvard, Vancouver, ISO, and other styles
47

Paduru, Anirudh. "Fast Algorithm for Modeling of Rain Events in Weather Radar Imagery." ScholarWorks@UNO, 2009. http://scholarworks.uno.edu/td/1097.

Full text
Abstract:
Weather radar imagery is important for several remote sensing applications including tracking of storm fronts and radar echo classification. In particular, tracking of precipitation events is useful for both forecasting and classification of rain/non-rain events since non-rain events usually appear to be static compared to rain events. Recent weather radar imaging-based forecasting approaches [3] consider that precipitation events can be modeled as a combination of localized functions using Radial Basis Function Neural Networks (RBFNNs). Tracking of rain events can be performed by tracking the parameters of these localized functions. The RBFNN-based techniques used in forecasting are not only computationally expensive, but also moderately effective in modeling small size precipitation events. In this thesis, an existing RBFNN technique [3] was implemented to verify its computational efficiency and forecasting effectiveness. The feasibility of modeling precipitation events using RBFNN effectively was evaluated, and several modifications to the existing technique have been proposed.
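As a concrete reading of the RBFNN modelling idea, the sketch below fits a combination of localized Gaussian radial basis functions to a synthetic rain field by linear least squares; tracking a precipitation event then amounts to tracking the fitted parameters over time. Centres, widths, and the grid are illustrative assumptions rather than the referenced technique's exact formulation.

```python
# A minimal sketch of modelling a precipitation field as a combination
# of localized Gaussian radial basis functions; all constants are
# illustrative assumptions.
import numpy as np

def rbf_design(points, centres, width):
    """Gaussian RBF design matrix: points (N, 2), centres (M, 2) -> (N, M)."""
    d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical radar grid and a synthetic localized "rain cell".
xs, ys = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
P = np.stack([xs.ravel(), ys.ravel()], axis=1)
field = np.exp(-(((P - [0.3, 0.6]) ** 2).sum(1)) / 0.01)

rng = np.random.default_rng(1)
C = rng.uniform(0, 1, size=(25, 2))              # RBF centres
Phi = rbf_design(P, C, width=0.08)
w, *_ = np.linalg.lstsq(Phi, field, rcond=None)  # fit mixture weights
print(float(np.abs(Phi @ w - field).max()))      # reconstruction error
```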
APA, Harvard, Vancouver, ISO, and other styles
48

Tavakoli, Siamak. "A generic predictive information system for resource planning and optimisation." Thesis, Brunel University, 2010. http://bura.brunel.ac.uk/handle/2438/8116.

Full text
Abstract:
The purpose of this research work is to demonstrate the feasibility of creating a quick-response decision platform for middle management in industry. It utilises the strengths of the current theory and practice of Supervisory Control and Data Acquisition (SCADA) systems and Discrete Event Simulation and Modelling (DESM), but, more importantly, creates a leap forward in both. The proposed research platform uses real-time data and creates an automatic platform for real-time and predictive system analysis, giving current and ahead-of-time information on the performance of the system in an efficient manner. Data acquisition, as the backend connection of a data integration system to the shop floor, faces both hardware and software challenges in coping with large-scale real-time data collection. The limited scope of SCADA systems does not make them suitable candidates for this. The cost, complexity, and efficiency orientation of proprietary solutions leave space for further challenge. A Flexible Data Input Layer Architecture (FDILA) is proposed as a generic data integration platform so that a multitude of data sources can be connected to the data processing unit. The efficiency of the proposed integration architecture lies in decentralising and distributing services between different layers. A novel Sensitivity Analysis (SA) method called EvenTracker is proposed as an effective tool to measure the importance and priority of inputs to the system. The EvenTracker method is introduced to deal with complex systems in real-time. The approach takes advantage of an event-based definition of the data involved in the process flow. The underpinning logic behind the EvenTracker SA method is capturing the cause-effect relationships between triggers (input variables) and events (output variables) over a period of time determined by an expert. The approach does not require estimating data distributions of any kind, nor does the performance model require execution beyond real-time. The proposed EvenTracker sensitivity analysis method has the lowest computational complexity compared with other popular sensitivity analysis methods. For proof of concept, a three-tier data integration system was designed and developed using National Instruments' LabVIEW programming language, Rockwell Automation's Arena simulation and modelling software, and OPC data communication software. A laboratory-based conveyor system with 29 sensors was installed to simulate a typical shop floor production line. In addition, the EvenTracker SA method has been implemented on data extracted from 28 sensors of one manufacturing line in a real factory. The experiment found 14% of the input variables to be unimportant for the evaluation of model outputs. The method demonstrated a time-efficiency gain of 52% in the analysis of the filtered system once unimportant input variables were no longer sampled. The EvenTracker SA method, compared to the entropy-based SA technique (the only other method that can be used for real-time purposes), is quicker, more accurate and less computationally burdensome. Additionally, theoretical estimation of the computational complexity of SA methods, based on both structural complexity and energy-time analysis, favoured the efficiency of the proposed EvenTracker SA method. Both laboratory and factory-based experiments demonstrated the flexibility and efficiency of the proposed solution.
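A plausible minimal reading of the EvenTracker bookkeeping described above is sketched below: trigger and event time stamps are binned into expert-chosen windows, per-window activities are correlated, and inputs with near-zero scores become candidates for exclusion from sampling. This is an interpretation of the abstract, not the published algorithm.

```python
# A minimal sketch of window-based trigger/event cause-effect scoring,
# an illustrative reading of the EvenTracker idea; the correlation
# score and window choice are assumptions.
import numpy as np

def rank_triggers(trigger_times, event_times, window):
    """trigger_times: dict name -> sorted 1-D array of time stamps (s);
    event_times: sorted 1-D array; window: window width in seconds."""
    t_end = max(event_times.max(), max(ts.max() for ts in trigger_times.values()))
    bins = np.arange(0.0, t_end + window, window)
    e_counts = np.histogram(event_times, bins)[0].astype(float)
    scores = {}
    for name, ts in trigger_times.items():
        t_counts = np.histogram(ts, bins)[0].astype(float)
        # Correlate per-window trigger activity with event activity.
        if t_counts.std() == 0 or e_counts.std() == 0:
            scores[name] = 0.0
        else:
            scores[name] = float(np.corrcoef(t_counts, e_counts)[0, 1])
    # Inputs with near-zero score are candidates for exclusion from sampling.
    return sorted(scores.items(), key=lambda kv: -abs(kv[1]))
```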
APA, Harvard, Vancouver, ISO, and other styles
49

Langner, Jens. "Event-Driven Motion Compensation in Positron Emission Tomography: Development of a Clinically Applicable Method." Doctoral thesis, Forschungszentrum Dresden-Rossendorf e.V, 2008. https://tud.qucosa.de/id/qucosa%3A25077.

Full text
Abstract:
Positron emission tomography (PET) is a well-established functional imaging method used in nuclear medicine. It allows for retrieving information about biochemical and physiological processes in vivo. The currently possible spatial resolution of PET is about 5 mm for brain acquisitions and about 8 mm for whole-body acquisitions, while recent improvements in image reconstruction point to a resolution of 2 mm in the near future. Typical acquisition times range from minutes to hours due to the low signal-to-noise ratio of the measuring principle, as well as due to the monitoring of the metabolism of the patient over a certain time. Therefore, patient motion increasingly limits the possible spatial resolution of PET. In addition, patient immobilisations are only of limited benefit in this context. Thus, patient motion leads to a relevant resolution degradation and incorrect quantification of metabolic parameters. The present work describes the utilisation of a novel motion compensation method for clinical brain PET acquisitions. By using an external motion tracking system, information about the head motion of a patient is continuously acquired during a PET acquisition. Based on the motion information, a newly developed event-based motion compensation algorithm performs spatial transformations of all registered coincidence events, thus utilising the raw data of a PET system - the so-called `list-mode´ data. For routine acquisition of this raw data, methods have been developed which allow for the first time to acquire list-mode data from an ECAT Exact HR+ PET scanner within an acceptable time frame. Furthermore, methods for acquiring the patient motion in clinical routine and methods for an automatic analysis of the registered motion have been developed. For the clinical integration of the aforementioned motion compensation approach, the development of additional methods (e.g. graphical user interfaces) was also part of this work. After development, optimisation and integration of the event-based motion compensation in clinical use, analyses with example data sets have been performed. Noticeable changes could be demonstrated by analysis of the qualitative and quantitative effects after the motion compensation. From a qualitative point of view, image artefacts have been eliminated, while quantitatively, the results of a tracer kinetics analysis of a FDOPA acquisition showed relevant changes in the R0k3 rates of an irreversible reference tissue two compartment model. Thus, it could be shown that an integration of a motion compensation method which is based on the utilisation of the raw data of a PET scanner, as well as the use of an external motion tracking system, is not only reasonable and possible for clinical use, but also shows relevant qualitative and quantitative improvement in PET imaging.
APA, Harvard, Vancouver, ISO, and other styles
50

Mičánková, Veronika. "Kognitivní evokované potenciály a fixace očí při vizuální emoční stimulaci." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2016. http://www.nusl.cz/ntk/nusl-220722.

Full text
Abstract:
The aim of this master's thesis is to find and describe the relationship between eye fixations on an emotionally charged stimulus, an image or a video, and the EEG signal. For this study, software tools need to be developed in the Matlab environment to prepare and process the data obtained from the eye tracker and to link them with the EEG signals using newly created markers. Based on the acquired knowledge about fixations, the EEG data are processed in the BrainVision Analyzer environment and subsequently segmented and averaged as evoked potentials for the individual stimuli (ERP and EfRP). This thesis was prepared in cooperation with Gipsa-lab as part of a research project.
APA, Harvard, Vancouver, ISO, and other styles
