
Dissertations / Theses on the topic 'Data Timeliness'


Consult the top 16 dissertations / theses for your research on the topic 'Data Timeliness.'


1

Olupot-Olupot, Peter. "Evaluation of Antiretroviral Therapy Information System In Mbale Regional Referral Hospital, Uganda." Thesis, University of the Western Cape, 2008. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_7320_1272589584.

Abstract:
HIV/AIDS is the largest and most serious global epidemic of recent times. To date, the epidemic has affected approximately 40 million people (range 33–46 million), of whom 67%, an estimated 27 million people, are in Sub-Saharan Africa. Sub-Saharan Africa is also reported to have the highest regional prevalence, at 7.2%, compared to an average of 2% in other regions. A medical cure for HIV/AIDS remains elusive, but the use of antiretroviral therapy (ART) has improved both the quality and length of life, as evidenced by the reduction in mortality and morbidity associated with the infection, giving HIV/AIDS patients on ART longer, better-quality lives.
2

Hu, Xiaoxiang. "Analysis of Time-related Properties in Real-time Data Aggregation Design." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39046.

Abstract:
Data aggregation is extensively used in data management systems nowadays. Based on a data aggregation taxonomy named DAGGTAX, we propose an analytic process to evaluate the run-time platform and time-related parameters of Data Aggregation Processes (DAP) in a real-time system design, which can help designers eliminate infeasible design decisions at an early stage. The process for data aggregation design and analysis comprises the following steps. First, the user specifies the variations of the platform and the DAP by identifying the features of the system and the time-related parameters, respectively. Then, the user chooses one combination of the platform and DAP feature variations, which forms the initial design of the system. Finally, the design is checked for feasibility using schedulability analysis techniques. If no infeasibilities are found, the design can be finalized. Otherwise, the design must be revised, starting from the run-time platform and DAP design stage, and the schedulability analysis is applied again to the revised design until all infeasibilities are fixed. To help designers understand and describe the system and perform feasibility analysis, we propose a new UML (Unified Modeling Language) profile that extends UML with concepts related to real-time data aggregation design. These extensions aim to support the conceptual modeling of real-time data aggregation. In addition, a method for transforming the data aggregation design into a task model, based on the UML profile, is proposed as well. At the end of the thesis, a case study, which applies the analytic process to analyze the architecture design of an environmental monitoring sensor network, is presented as a demonstration of our research.
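As a generic illustration of the schedulability analysis techniques this abstract invokes (not the thesis's own method), the classic Liu-Layland utilization bound gives a sufficient feasibility test for periodic tasks under rate-monotonic scheduling; the task set below is invented:

```python
def rm_utilization_feasible(tasks):
    """Liu-Layland sufficient schedulability test for rate-monotonic
    scheduling. tasks is a list of (wcet, period) pairs. Returns True
    if total utilization stays below the n-task bound n*(2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Three hypothetical periodic aggregation tasks, (WCET, period) in ms.
tasks = [(1, 4), (1, 5), (2, 10)]
print(rm_utilization_feasible(tasks))  # True: utilization 0.65 is under the 3-task bound of about 0.78
```

A design whose aggregation tasks fail such a test would, in the process described above, be sent back to the run-time platform and DAP design stage for revision.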
3

Rula, Anisa. "Time-related quality dimensions in linked data." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/81717.

Abstract:
Over the last few years, there has been an increasing diffusion of Linked Data as a standard way to publish interlinked structured data on the Web, which allows users and public and private organizations to fully exploit a large amount of data from several domains that was not available in the past. Although gathering and publishing such massive amounts of structured data is certainly a step in the right direction, quality still poses a significant obstacle to the uptake of data consumption applications at large scale. A crucial aspect of quality concerns the dynamic nature of Linked Data, where information can change rapidly and fail to reflect changes in the real world, thus becoming out of date. Quality is characterised by different dimensions that capture several aspects of quality such as accuracy, currency, consistency or completeness. In particular, the dynamicity of Linked Data is captured by Time-Related Quality Dimensions such as data currency. The assessment of Time-Related Quality Dimensions, which is the task of measuring the quality, is based on temporal information whose collection poses several challenges regarding its availability, representation and diversity in Linked Data. The assessment of Time-Related Quality Dimensions supports data consumers in deciding whether information is valid or not. The main goal of this thesis is to develop techniques for assessing Time-Related Quality Dimensions in Linked Data, which must overcome several challenges posed by Linked Data, such as third-party applications, variety of data, high volume of data and velocity of data.
The major contributions of this thesis can be summarized as follows: it presents a general set of definitions for quality dimensions and measures adopted in Linked Data; it provides a large-scale analysis of approaches for representing temporal information in Linked Data; it provides a sharable and interoperable conceptual model which integrates vocabularies used to represent the temporal information required for the assessment of Time-Related Quality Dimensions; it proposes two domain-independent techniques to assess data currency that work with incomplete or inaccurate temporal information; and finally it provides an approach that enriches information with time intervals representing its temporal validity.
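Data currency, the time-related dimension this thesis assesses, is often operationalized as a function of a resource's age relative to how quickly it is expected to change. The following is a generic, hypothetical sketch of such an age-based score, not the metric developed in the thesis:

```python
from datetime import datetime, timezone

def currency(last_modified, observation_time, volatility_days):
    """Simple age-based currency score in [0, 1]: 1 means just updated,
    decaying linearly to 0 once the resource is older than its expected
    volatility window. A generic illustration, not a standard metric."""
    age_days = (observation_time - last_modified).total_seconds() / 86400
    return max(0.0, 1.0 - age_days / volatility_days)

# A resource last modified 30 days ago, expected to change every 60 days.
now = datetime(2014, 1, 31, tzinfo=timezone.utc)
modified = datetime(2014, 1, 1, tzinfo=timezone.utc)
print(currency(modified, now, volatility_days=60))  # 0.5
```

In practice, as the abstract notes, the hard part is obtaining the `last_modified` timestamp at all, since temporal metadata in Linked Data is often missing, inaccurate, or expressed in diverse vocabularies.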
4

Fahey, Rebecca Lee. "Evaluation of the System Attributes of Timeliness and Completeness of the West Virginia Electronic Disease Surveillance System's NEDSS-Based System." ScholarWorks, 2015. https://scholarworks.waldenu.edu/dissertations/1341.

Abstract:
Despite technological advances in public health informatics, the evaluation of infectious disease surveillance system data remains incomplete. In this study, a thorough evaluation was performed of the West Virginia Electronic Disease Surveillance System (WVEDSS; 2007-2010) and the West Virginia Electronic Disease Surveillance System NEDSS-Based System (WVEDSS-NBS; March 2012 - March 2014) for Category II infectious diseases in West Virginia. The purpose was to identify key areas in the surveillance system process, from disease diagnosis to disease prevention, that need improvement. Grounded in the diffusion of innovation theory, a quasi-experimental, interrupted time-series design was used to evaluate the two data sets. Research questions examined differences in mean reporting time, the 24-hour standard, and the completeness of fields (DOB, gender, etc.) in the data sets using independent-samples t tests. The study found (a) that mean reporting times were shorter for WVEDSS compared to WVEDSS-NBS (p < .05) for all vaccine-preventable infectious diseases (VPID) in Category II except mumps; (b) that the 24-hour standard was not met for WVEDSS compared to WVEDSS-NBS (p < .05) for all VPID in Category II except mumps; and (c) that most fields were more complete for WVEDSS compared to WVEDSS-NBS (p < .05) for all VPID in Category II except meningococcal disease. Healthcare professionals in the state can use the results of this research to improve the system attributes of timeliness and completeness. Implications for positive social change include improved access to public health data to better understand health disparities, which, in turn, could reduce morbidity and mortality within the population.
5

Nilsson, Robert. "Automated Selective Test Case Generation Methods for Real-Time Systems." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-487.

Abstract:
This work aims to investigate the state of the art in test case generation for real-time systems, to analyze existing methods, and to propose future research directions in this area. We believe that a combination of design for testability, automation, and sensible test case selection is the key to verifying modern real-time systems. Existing methods for system-level test case generation for real-time systems are presented, classified, and evaluated against a real-time system model. A defining property of real-time systems is that timeliness is crucial for their correctness. Our system model of the testing target adopts the event-triggered design paradigm for maximum flexibility. This paradigm results in target systems that are harder to test than their time-triggered counterparts, but the model improves testability by adopting previously proposed constraints on application behavior. This work investigates how time constraints can be tested using current methods and reveals problems relating to test-case generation for verifying such constraints. Further, approaches for automating the test-case generation process are investigated, paying special attention to methods aimed at real-time systems. We also note a need for special test-coverage criteria for concurrent and real-time systems to select test cases that increase confidence in such systems. We analyze some existing criteria from the perspective of our target model. The results of this dissertation are a classification of methods for generating test cases for real-time systems, an identification of contradictory terminology, and an increased body of knowledge about problems and open issues in this area. We conclude that the test-case generation process often neglects the internal behavior of the tested system and the properties of its execution environment, as well as the effects of these on timeliness. Further, we note that most of the surveyed articles on testing methods incorporate automatic test-case generation in some form, but few consider the issues of automated execution of test cases. Four high-level future research directions are proposed that aim to remedy one or more of the identified problems.
6

Barcellos, Leonardo Portugal. "Timeliness no Brasil: um estudo dos determinantes do prazo de divulgação das demonstrações contábeis das companhias não financeiras listadas na BM&FBOVESPA." Universidade do Estado do Rio de Janeiro, 2013. http://www.bdtd.uerj.br/tde_busca/arquivo.php?codArquivo=4764.

Abstract:
The main purpose of this dissertation is to provide empirical evidence on the factors that influence managers' decisions regarding the disclosure timing of the annual financial statements of non-financial companies listed on the BM&FBOVESPA. The reporting lag ("defasagem") was measured as the interval in days between the end of the fiscal year and the date of first presentation of the Standardized Financial Statements (DFPs). The research focused on the influence on this lag of the following unobservable factors: monitoring, accounting complexity, corporate governance, audit report, and performance. Based on the reviewed literature, proxies were formulated to capture the effects of these factors. To achieve the objectives, econometric models were estimated using: (i) Ordinary Least Squares (OLS) with cross-sectional data; (ii) pooled OLS; and (iii) panel data. The tests were applied to a balanced panel of 644 observations from 322 companies, covering fiscal years 2010 and 2011. The estimation results revealed that companies tend to disclose their statements faster when they: (i) have a larger number of shareholders; (ii) have a higher level of indebtedness; (iii) have joined one of BM&FBOVESPA's differentiated corporate governance levels; (iv) have a higher proportion of independent directors on the board; and (v) were audited by one of the Big-4 audit firms. Conversely, companies tend to delay their disclosures when they: (i) are subject to consolidation of financial statements; (ii) had their financial statements qualified by the independent auditors; or (iii) reported negative results (losses).
Additionally, proxies were formulated to capture the effects of earnings surprises, one of them based on the benchmark for market expectations, namely analysts' forecasts; however, no impact of surprises on disclosure timing was found. Nor was any influence on timing found from the proportion of institutional investors, the formation of control blocks, state regulation, profitability, firm size, or the trading of securities on foreign markets. The findings of this research may contribute not only to the literature in this line of research, but also to investors, market analysts, and regulators. The particularities of the years analyzed, which marked the full adoption of the accounting standard aligned with IFRS and the recovery of the Brazilian economy from the impacts of the global financial crisis, allowed relevant findings. Moreover, the relevance of this study is enhanced by the novel application of proxies not previously used in the Brazilian setting to explain disclosure timing.
7

Lee, Amra. "Why do some civilian lives matter more than others? Exploring how the quality, timeliness and consistency of data on civilian harm affects the conduct of hostilities for civilians caught in conflict." Thesis, Uppsala universitet, Institutionen för freds- och konfliktforskning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-387653.

Abstract:
Normatively, protecting civilians from the conduct of hostilities is grounded in the Geneva Conventions and the UN Security Council's protection of civilians agenda, which celebrate their 70th and 20th anniversaries, respectively, in 2019. Previous research focuses heavily on the protection of civilians through peacekeeping, whereas this research focuses on 'non-armed' approaches to enhancing civilian protection in conflict. Prior research and experience reveal a high level of missingness and variation in the available data on civilian harm in conflict. Where civilian harm is considered in the peace and conflict literature, it is predominantly through a securitized lens of understanding insurgent recruitment strategies and, more recently, counter-insurgent strategies aimed at winning 'hearts and minds'. Through a structured, focused comparison of four case studies, the correlation between the quality, timeliness and consistency of data on civilian harm and its effect on the conduct of hostilities is reviewed and potential confounders are identified. The hypothesized causal mechanism is then process-traced through the pathway case of Afghanistan. The findings and analysis from both methods support the theory and its refinement, with important nuances in the factors conducive to quality, timely and consistent data collection on civilian harm in armed conflict.
8

Bashir, Imran. "Visualizing Complex Data Using Timeline." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-17958.

Abstract:
This thesis introduces the idea of visualizing complex data using a timeline for problem solving and for analyzing a huge database. The database contains information about vehicles, which continuously send information about driving behavior, current location, driver activities, etc. Data complexity can be reduced by data visualization, where the user can see this complex data in the abstract form of a timeline visualization. Visualizing complex data using a timeline might help to monitor and track different time-dependent activities. We developed a web application to monitor and track monthly, weekly, and daily activities, which helps in decision making and in understanding complex data.
9

Kräutli, Florian. "Visualising cultural data : exploring digital collections through timeline visualisations." Thesis, Royal College of Art, 2016. http://researchonline.rca.ac.uk/1774/.

Abstract:
This thesis explores the ability of data visualisation to enable knowledge discovery in digital collections. Its emphasis lies on time-based visualisations, such as timelines. Although timelines are among the earliest examples of graphical renderings of data, they are often used merely as devices for linear storytelling and not as tools for visual analysis. Investigating this type of visualisation reveals the particular challenges of digital timelines for scholarly research. In addition, the intersection between the key issues of time-wise visualisation and digital collections acts as a focal point. Departing from authored temporal descriptions in collections data, the research examines how curatorial decisions influence collections data and how these decisions may be made manifest in timeline visualisations. The thesis contributes a new understanding of the knowledge embedded in digital collections and provides practical and conceptual means for making this knowledge accessible and usable. The case is made that digital collections are not simply representations of physical archives. Digital collections record not only what is known about the content of an archive. Collections data contains traces of institutional decisions and curatorial biases, as well as data related to administrative procedures. Such 'hidden data' – information that has not been explicitly recorded, but is nevertheless present in the dataset – is crucial for drawing informed conclusions from digitised cultural collections and can be exposed through appropriately designed visualisation tools. The research takes a practice-led and collaborative approach, working closely with cultural institutions and their curators. Functional prototypes address issues of visualising large cultural datasets and the representation of uncertain and multiple temporal descriptions that are typically found in digital collections.
The prototypes act as means towards an improved understanding of and a critical engagement with the time-wise visualisation of collections data. Two example implementations put the design principles that have emerged into practice and demonstrate how such tools may assist in knowledge discovery in cultural collections. Calls for new visualisation tools that are suitable for the purposes of humanities research are widespread in the scholarly community. However, the present thesis shows that gaining new insights into digital collections does not only require technological advancement, but also an epistemological shift in working with digital collections. This shift is expressed in the kind of questions that curators have started seeking to answer through visualisation. Digitisation requires and affords new ways of interrogating collections that depart from putting the collected artefact and its creator at the centre of humanistic enquiry. Instead, digital collections need to be seen as artefacts themselves. Recognising this leads curators to address self-reflective research questions that seek to study the history of an institution and the influence that individuals have had on the holdings of a collection; questions that so far escaped their areas of research.
10

Erande, Abhijit. "Automatic detection of significant features and event timeline construction from temporally tagged data." Kansas State University, 2009. http://hdl.handle.net/2097/1675.

Abstract:
Master of Science. Department of Computing and Information Sciences. William H. Hsu. The goal of my project is to summarize large volumes of data and help users visualize how events have unfolded over time. I address the problem of extracting overview terms from a time-tagged corpus of data and discuss some previous work conducted in this area. I use a statistical approach to automatically extract key terms, form groupings of related terms, and display the resultant groups on a timeline. I use a static corpus composed of news stories, as opposed to an online setting where continual additions to the corpus are being made. Terms are extracted using a Named Entity Recognizer, and the importance of a term is determined using the χ² (chi-squared) measure. My approach does not address the problem of associating time and date stamps with data, and is restricted to corpora that have been explicitly tagged. The quality of the results is gauged subjectively and objectively by measuring the degree to which events known to exist in the corpus were identified by the system.
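The χ² measure used for term importance is typically computed from a 2×2 contingency table of a term's occurrence inside versus outside a time slice of the corpus; the sketch below is a generic illustration of that statistic with invented counts, not the project's code:

```python
def chi2_2x2(a, b, c, d):
    """Chi-squared statistic for a 2x2 contingency table:
    a = docs in the time slice containing the term,
    b = docs in the slice without the term,
    c = docs outside the slice containing the term,
    d = docs outside the slice without the term."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator if denominator else 0.0

# A term concentrated in one slice (30 of 100 slice docs, but only
# 20 of 900 others) scores high; a uniformly spread term scores 0.
print(chi2_2x2(30, 70, 20, 880))
print(chi2_2x2(10, 90, 90, 810))  # 0.0: same rate inside and outside
```

Terms whose χ² for a slice exceeds some threshold can then be grouped and plotted on the timeline as the overview terms for that period.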
11

Nel, Daniel Hermanus Greyling. "Performative digital asset management: To propose a framework and proof of concept model that effectively enables researchers to document, archive and curate their non-traditional research data." Thesis, Queensland University of Technology, 2015. https://eprints.qut.edu.au/84761/3/Daniel_Nel_Exegesis.pdf.

Abstract:
This cross-disciplinary study was conducted as two research and development projects. The outcome is a multimodal and dynamic chronicle, which incorporates the tracking of spatial, temporal and visual elements of performative practice-led and design-led research journeys. The distilled model provides a strong new approach to demonstrating rigour in non-traditional research outputs, including provenance and an 'augmented web of facticity'.
12

Polowinski, Jan. "Visualisierung großer Datenmengen im Raum." Thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-108506.

Abstract:
Large, strongly connected amounts of data, as collected in knowledge bases or those occurring when describing software, are often read slowly and with difficulty by humans when they are represented as spreadsheets or text. Graphical representations can help people to understand facts more intuitively and offer a quick overview. The electronic representation offers means that are beyond the possibilities of print, such as unlimited zoom and hyperlinks. This paper presents a framework for visualizing connected information in 3D space, taking into account media design techniques to build visualization structures and to map information to graphical properties.
13

Jabour, Abdulrahman M. "Cancer reporting: timeliness analysis and process reengineering." Diss., 2015. http://hdl.handle.net/1805/10481.

Abstract:
Indiana University-Purdue University Indianapolis (IUPUI). Introduction: Cancer registries collect tumor-related data to monitor incidence rates and support population-based research. A common concern with using population-based registry data for research is reporting timeliness. Data timeliness has been recognized as an important data characteristic by both the Centers for Disease Control and Prevention (CDC) and the Institute of Medicine (IOM). Yet few recent studies in the United States (U.S.) have systematically measured timeliness. The goal of this research is to evaluate the quality of cancer data and examine methods by which the reporting process can be improved. The study aims are: (1) evaluate the timeliness of cancer cases at the Indiana State Department of Health (ISDH) Cancer Registry; (2) identify the perceived barriers and facilitators to timely reporting; and (3) reengineer the current reporting process to improve turnaround time. Method: For Aim 1, using the ISDH dataset from 2000 to 2009, we evaluated reporting timeliness and the subtasks within the process cycle. For Aim 2, certified cancer registrars reporting to the ISDH were invited to a semi-structured interview; the interviews were recorded and qualitatively analyzed. For Aim 3, we designed a reengineered workflow to minimize reporting time and tested it using simulation. Result: The results show variation in the mean reporting time, which ranged from 426 days in 2003 to 252 days in 2009. The barriers identified were categorized into six themes; the most common barrier was accessing medical records at external facilities. We also found that cases reside for a few months in the local hospital database while waiting for treatment data to become available. The recommended workflow focused on leveraging a health information exchange for data access and adding a notification system to inform registrars when new treatments are available.
14

Braga, André Filipe Gonçalves Névoa Fernandes. "Pervasive patient timeline." Master's thesis, 2015. http://hdl.handle.net/1822/40094.

Abstract:
Master's dissertation in Engineering and Management of Information Systems. In Intensive Care Medicine, medical information in Intensive Care Units (ICU) is presented in many forms (graphics, tables, text, …), depending on the type of exams performed, the data collected in real time by monitoring systems, and other factors. The way information is presented can make it difficult for health professionals to read the clinical condition of patients, especially when several types of clinical data and information sources must be cross-referenced. The evolution of technologies towards emerging paradigms such as ubiquitous and pervasive computing makes it possible to gather and store various types of information and make them available in real time, anywhere. With the advancement of technology, the representation of timelines on paper has become outdated and sometimes unusable, given the many advantages of representation in digital format. The use of Clinical Decision Support Systems (CDSS) in the ICU is not a novelty; their main function is to facilitate the decision-making process through predictive models, continuous information monitoring, and other mechanisms. However, associating timelines with a CDSS, in order to improve the way information is presented, is an innovative approach, especially in the ICU. This work explores a new way of presenting information about patients, based on the time frame in which events occur. By developing an interactive Pervasive Patient Timeline, health professionals gain access to a real-time environment where they can consult the medical history of patients from the moment they are admitted to the ICU until their discharge, allowing them to analyze data on vital signs, medication, exams, and more. The incorporation of Data Mining (DM) models produced by the INTCare system is also a reality: in this context, DM models were induced for predicting the intake of vasopressors and were incorporated into the Pervasive Patient Timeline. Health professionals thus have a new platform that can help them make decisions more accurately.
15

Mendes, Celso Rafael Clara. "Visão e Análise Temporal do Processo Clínico." Master's thesis, 2016. http://hdl.handle.net/10316/99229.

Abstract:
Final internship report of the Master's in Informatics Engineering, presented to the Faculty of Sciences and Technology of the University of Coimbra. Patients' clinical processes can be considerably extensive, containing information from clinical test results to diagnosed health issues. Given the volume of data that can accumulate, visualising and analysing it becomes a complex task. Beyond how slow the process of reading and interpreting the data is, the relevant information may be spread across distinct documents and locations, sometimes in unstructured report formats. A possible solution to ease, and generally improve, this process is a visualization aimed at reducing cognitive effort and speeding up the identification and extraction of relevant information, as well as the conclusions that might follow. The objective of this project is the research and implementation of the proposed solution, empowering medical personnel with fast, high-quality means of clinical information acquisition and analysis. A direct benefit is giving the medical professional a clear view of the patient's state and helping them better understand the patient's needs. On the timeline, the clinical professional will have at their disposal a myriad of ways to visualize not only the current clinical data but also the past history, as well as possible future needs, such as vaccinations, periodic clinical tests, etc. With the objective of enriching the data available to healthcare professionals, the visualization will also indicate pathologies a patient might contract as a result of their medical history and lifestyle. This document details the efforts undertaken in this scope, as well as the resulting advantages to the healthcare field.
16

Polowinski, Jan. "Visualisierung großer Datenmengen im Raum: Großer Beleg." Thesis, 2006. https://tud.qucosa.de/id/qucosa%3A26757.

Full text
Abstract:
Large, strongly connected bodies of data, such as those collected in knowledge bases or produced when describing software, are often slow and difficult for humans to read when represented as spreadsheets or text. Graphical representations can help people grasp facts more intuitively and offer a quick overview. Electronic representation also offers means beyond the possibilities of print, such as unlimited zoom and hyperlinks. This thesis presents a framework for visualizing connected information in 3D space that applies techniques from media design to build visualization structures and to map information onto graphical properties.<br>Contents: 1 Introduction (summary of the design concept; goal of the thesis; interdisciplinary project). 2 Approach (procedure; concrete example content; example implementation). 3 Data model (ontologies; ontology construction; analysis of the design domain; initial ordering; ontology structure used; design ontologies; difficulties in ontology construction; entering the data with Protégé; facets; filters). 4 Data visualization (visualization of temporal data; Hyperhistory; graphical vocabulary and graphical dimensions; mapping). 5 Framework and design of the medium (technologies and tools; architecture; configuration; DataBackendManager; mapping in the framework; atomicelements; appearance library; TransformationUtils; structures; LOD; clustering of entries; representation of relations; head-up display; navigation; performance; design of the medium). 6 Outlook. 7 Conclusion. 8 Appendix A: installation (prerequisites; program invocation; stereoscopy). 9 Appendix B: example implementation visualizing "The history of design in Germany in the 19th and 20th centuries" (scope; overview of German design history; approach; fuzzy dates; context events; cause-effect relations; multilingualism; source references; image material). Bibliography; glossary; list of figures.
APA, Harvard, Vancouver, ISO, and other styles
