Dissertations / Theses on the topic 'Data-Intensive Systems'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 49 dissertations / theses for your research on the topic 'Data-Intensive Systems.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Xu, Yiqi. "Storage Management of Data-intensive Computing Systems." FIU Digital Commons, 2016. http://digitalcommons.fiu.edu/etd/2474.
Cai, Simin. "Systematic Design of Data Management for Real-Time Data-Intensive Applications." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35369.
Schnell, Felicia. "Multicast Communication for Increased Data Exchange in Data-Intensive Distributed Systems." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232132.
Today's applications must be able to handle and communicate an increasing amount of data. At the same time, distributed systems consisting of many computationally weak devices have become increasingly common, which is problematic. The choice of communication strategy for delivering data between devices in a system is therefore of great importance for achieving efficient use of available resources. Systems in which identical information must be distributed to multiple receivers are common today, yet the underlying communication strategy is often based on direct interaction between sender and receiver, which is inefficient. Multicast is a collective term in computer communication for group-based transmission of information. This technique, which is the focus of this work, was developed to avoid high load on the sender side and to reduce the load on the network. In electronic warfare and self-protection systems, time is a decisive factor in providing relevant information to support decision-making. For the self-protection systems developed by Saab and used in military aircraft, situational awareness is of great importance, as it enables correct decisions to be made at the right time. As more advanced systems are developed and the number of messages that must pass through the network grows, fast communication becomes a strict requirement for maintaining quality. This thesis investigates how introducing multicast in a data-intensive distributed system can prepare the system for increased data exchange. The work has resulted in a communication design that enables the system to distribute messages to groups of receivers with reduced load on the sender side and less redundant traffic on the outgoing links. Comparative measurements were made between the new implementation and the old system. The results show that the multicast solution can considerably reduce both message-handling time and the load on end nodes.
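The sender-side saving that the multicast abstract above describes can be illustrated with Python's standard socket API. A minimal sketch, assuming a hypothetical group address 239.1.1.1 and port 5000 (these values are not from the thesis):

```python
import socket
import struct

# Hypothetical group address in the administratively scoped multicast range.
MCAST_GROUP = "239.1.1.1"
MCAST_PORT = 5000

def make_multicast_sender(ttl: int = 1) -> socket.socket:
    """UDP socket configured to send to a multicast group.

    With multicast the sender transmits each message once; the network
    fans it out to every subscribed receiver, so sender-side load no
    longer grows with the number of receivers.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep datagrams on the local segment; raise TTL to cross routers.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", ttl))
    return sock

def make_multicast_receiver() -> socket.socket:
    """UDP socket subscribed to the multicast group (needs a
    multicast-capable network interface to succeed)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP),
                       socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

def sender_transmissions(messages: int, receivers: int, multicast: bool) -> int:
    """A unicast design sends one copy per receiver; multicast sends one."""
    return messages if multicast else messages * receivers
```

The last helper makes the load argument concrete: for 10 messages and 8 receivers, unicast costs the sender 80 transmissions, multicast only 10.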
Yeom, Jae-seung. "Optimizing Data Accesses for Scaling Data-intensive Scientific Applications." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/64180.
Ph. D.
Khemiri, Wael. "Data-intensive interactive workflows for visual analytics." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00659227.
Vijayakumar, Sruthi. "Hadoop Based Data Intensive Computation on IAAS Cloud Platforms." UNF Digital Commons, 2015. http://digitalcommons.unf.edu/etd/567.
Maheshwari, Ketan. "Data-intensive scientific workflows : representations of parallelism and enactment on distributed systems." Nice, 2011. http://www.theses.fr/2011NICE4007.
Porting data-intensive applications to large-scale distributed computing infrastructures is not trivial. Bridging the gap between an application and its workflow expression poses challenges at different levels. The challenge at the end-user level is the need to express the application's logic and data-flow requirements from a non-technical domain. At the infrastructure level, the challenge is to port the application such that maximum exploitation of the underlying resources can take place. Workflows enable distributed application deployment by recognizing the inter-connections among application components and the flow among them. However, workflow expressions and engines need enhancements to meet the challenges outlined. What is required is the facilitation of a concise expression of parallelism, data combinations and higher-level data structures in a coherent fashion. This work aims to fulfill these requirements. It is driven by use cases from the medical image processing domain. Various strategies are developed to efficiently express asynchronous and maximally parallel execution of complex flows, providing concise expression and enactment interfaced with large-scale distributed computing infrastructures. The main contributions of this research are: a) a rich workflow language with two-way expression, and fruitful results from experiments on the enactment of medical image processing workflows on the European Grid Computing Infrastructure; and b) the extension of an existing workflow environment (Taverna) to interface with Grid Computing Infrastructures.
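The concise expression of a data-parallel workflow stage that this abstract calls for can be sketched in plain Python. This is an illustration of the general idea, not the thesis's workflow language; the image-correction step is a made-up stand-in:

```python
from concurrent.futures import ThreadPoolExecutor

def correct_intensity(image: list[int]) -> list[int]:
    """Stand-in for a per-image correction step: shift pixels so the
    darkest pixel becomes zero. Each input is independent, so the stage
    is a parallel 'map'."""
    lowest = min(image)
    return [p - lowest for p in image]

def combine(results: list[list[int]]) -> int:
    """Stand-in for a 'reduce' stage: here, count processed pixels."""
    return sum(len(r) for r in results)

def enact(inputs: list[list[int]], workers: int = 4) -> int:
    """Enact the two-stage flow: parallel map, then combine."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        corrected = list(pool.map(correct_intensity, inputs))
    return combine(corrected)
```

The point of the sketch is that the parallelism is implicit in the `map` over inputs; a workflow engine can distribute those independent tasks across grid resources without the author spelling out the scheduling.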
Schäler, Martin. "Minimal-invasive provenance integration into data-intensive systems." Supervisor: Gunter Saake. Magdeburg: Universitätsbibliothek, 2014. http://d-nb.info/1066295352/34.
Shang, Pengju. "Research in high performance and low power computer systems for data-intensive environment." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5033.
ID: 030423445; Thesis (Ph.D.)--University of Central Florida, 2011; includes bibliographical references (p. 119-128).
Ph.D., Electrical Engineering and Computer Science
Saito, Yasushi. "Functionally homogeneous clustering: a framework for building scalable data-intensive internet services." Thesis, University of Washington (UW restricted), 2001. http://hdl.handle.net/1773/6936.
Goldhill, David Raymond. "Identifying priorities in intensive care : a description of a system for collecting intensive care data, an analysis of the data collected, a critique of aspects of severity scoring systems used to compare intensive care outcome, identification of priorities in intensive care and proposals to improve outcome for intensive care patients." Thesis, Queen Mary, University of London, 1999. http://qmro.qmul.ac.uk/xmlui/handle/123456789/1405.
Gamatié, Abdoulaye. "Design and Analysis for Multi-Clock and Data-Intensive Applications on Multiprocessor Systems-on-Chip." Habilitation à diriger des recherches, Université des Sciences et Technologie de Lille - Lille I, 2012. http://tel.archives-ouvertes.fr/tel-00756967.
Krishnajith, Anaththa Pathiranage Dhanushka. "Memory management and parallelization of data intensive all-to-all comparison in shared-memory systems." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/79187/1/Anaththa%20Pathiranage%20Dhanushka_Krishnajith_Thesis.pdf.
Bicer, Tekin. "Supporting Fault Tolerance and Dynamic Load Balancing in FREERIDE-G." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1267638588.
Martí Fraiz, Jonathan. "dataClay: next generation object storage." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/405907.
Current data-sharing solutions are not suited to multi-provider contexts. Traditionally, data providers offer their data via closed Data Services with very restricted APIs, so consumers are forced to adapt their applications to the functionality currently offered, while their opportunities to contribute their own know-how remain very limited. At the management level, the database management systems underpinning these Data Services are designed for single-provider scenarios, imposing centralized administration in the hands of the database administrator (DBA). The DBA defines the necessary integrity constraints and specifies the external data model offered to users. The problem is that in a multi-provider environment we cannot assume a single central administrator who takes care of everyone's data. At the processing level, having different representations of the data depending on whether it is processed at the application, service, or database level means that applications must devote between 20 and 50% of their code to the corresponding transformations. This has a negative impact both on programmer productivity and on the overall performance of data-intensive applications. Given these difficulties, this thesis proposes three new mechanisms to enable a data management system to support multi-provider environments that facilitate collaboration with consumers and the development of data-intensive applications.
Specifically, starting from decentralized data administration and an object-oriented data model, this thesis contributes: 1) a mechanism that lets consumers extend the external data model and the offered functionality without compromising the providers' constraints; 2) a mechanism that lets each provider define the integrity constraints it deems appropriate on the data model, in such a way that they are always respected regardless of how the data is used or extended; and 3) the integration of a parallel programming model with the data model to improve application performance and programmer productivity, significantly reducing data transformations and the code needed to access the data. These contributions are validated through the design and implementation of dataClay, an example of a multi-provider data store that meets the stated requirements. In addition, for the first and third contributions, a series of performance studies evaluates and demonstrates their feasibility (the second contribution is purely logical).
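The first contribution of the dataClay work above (consumers extending the shared model without being able to bypass provider integrity constraints) can be sketched generically. This is not the dataClay API; every name below is illustrative:

```python
# A provider publishes a class plus integrity constraints on it.
class TemperatureReading:
    def __init__(self, celsius: float):
        self.celsius = celsius

# Provider-defined constraint: physically plausible air temperatures only.
PROVIDER_CONSTRAINTS = [lambda obj: -90.0 <= obj.celsius <= 60.0]

def check_constraints(obj) -> None:
    for constraint in PROVIDER_CONSTRAINTS:
        if not constraint(obj):
            raise ValueError("provider integrity constraint violated")

def register_enrichment(cls, name, func):
    """Consumer-side extension point: attach a new method to the shared
    model. The wrapper re-checks the provider's constraints on every
    call, so extensions cannot sidestep them."""
    def wrapper(self, *args, **kwargs):
        check_constraints(self)
        return func(self, *args, **kwargs)
    setattr(cls, name, wrapper)

# A consumer contributes its own know-how without provider involvement.
register_enrichment(TemperatureReading, "fahrenheit",
                    lambda self: self.celsius * 9 / 5 + 32)
```

A valid reading can use the consumer-added method, while an object violating the provider's constraint raises an error as soon as the extension touches it.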
Farahanchi, Ali. "The impact of strategic investment on success of capital-intensive ventures." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112623.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 408-417).
Established companies in technology-enabled industries such as software, telecommunications, pharmaceuticals, and semiconductors have used corporate venture capital as a lever to access and screen technological advances, and to drive innovation outside the traditional firm boundaries. Recent years have witnessed the emergence of a new wave of corporate venture capital funds that increasingly interact and compete with traditional venture capital firms in the entrepreneurial ecosystem. The incremental benefits of financing a startup through corporate venture capital have been a subject of study by researchers across the Economics, Finance, Strategy, and Innovation fields. First, this thesis examines entrepreneurs' rationale for raising capital from corporate investors. Through the analysis of an online survey conducted with startups based in the US and founded between 2010-15, we identify that startups that operate in capital-intensive industries, such as life sciences and manufacturing, raise capital from corporate investors in order to establish strategic partnerships with corporates significantly more than do startups in capital-light industries such as enterprise and consumer software. Second, through an empirical analysis of a panel of 8,190 startups founded in the US between 2000-10, this thesis shows that corporate venture capital is more beneficial to startups that operate in capital-intensive industries. Using a bi-variate probit model, this thesis shows that startups backed by corporate venture capital are more likely to be acquired or go public, and that the likelihood of an exit event increases as the capital intensity of the industry magnifies, as measured by the level of fixed assets on companies' balance sheets. In addition, we provide empirical evidence that participation of corporate venture capital in a financing round helps a capital-intensive startup to raise further funding from reputable traditional venture capital firms.
Third, this thesis presents empirical evidence that establishing strategic collaboration between capital-intensive startups and the corporate parents of venture capital firms, in the form of joint research, product development, or commercialization, is a main source of value for startups. Using data gathered on 130 corporate news announcements on strategic collaborations, this thesis shows that capital-intensive startups backed by corporate venture capital are significantly more likely to succeed when they establish strategic collaboration with corporate parents. The final contribution of this thesis is a formal assessment of traditional venture capital firms' investment behavior in the presence of corporate investors. We present a game-theoretic model and identify the circumstances under which traditional venture capital firms benefit financially from corporate investors' participation in financing a capital-intensive startup. By leveraging data gathered on 8,190 startups, we apply the game-theoretic model and the Monte-Carlo method to simulate financial returns for a traditional venture capital firm investing in a capital-intensive startup in the pharmaceutical industry.
by Ali Farahanchi.
Ph. D. in Engineering Systems
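The Monte-Carlo step in the Farahanchi abstract above can be sketched as follows. The portfolio size, exit probabilities, and exit multiple are assumptions for illustration, not figures from the thesis:

```python
import random

def simulate_fund_return(n_startups: int, p_exit: float,
                         exit_multiple: float, trials: int = 10_000,
                         seed: int = 42) -> float:
    """Average portfolio multiple over Monte-Carlo trials: each startup
    either exits (returning `exit_multiple` on its investment) or fails
    and returns nothing."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        exits = sum(rng.random() < p_exit for _ in range(n_startups))
        total += exits * exit_multiple / n_startups
    return total / trials

# Compare a portfolio without and with an (assumed) CVC uplift to exit odds.
base = simulate_fund_return(20, p_exit=0.15, exit_multiple=10)
with_cvc = simulate_fund_return(20, p_exit=0.20, exit_multiple=10)
```

With these assumed inputs the simulated fund multiple rises from about 1.5x to about 2.0x; the thesis's actual model additionally conditions the payoffs on the game-theoretic interaction between the traditional VC and the corporate investor.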
Fumai, Nicola. "A database for an intensive care unit patient data management system." Thesis, McGill University, 1992. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22500.
Computers can help by processing the data and displaying the information in easy-to-understand formats. Also, knowledge-based systems can provide advice in the diagnosis and treatment of patients. If these systems are to be effective, they must be integrated into the total hospital information system, and the separate computer data must be jointly integrated into a new database which will become the primary medical record.
This thesis presents the design and implementation of a computerized database for an intensive care unit patient data management system being developed for the Montreal Children's Hospital. The database integrates data from the various PDMS components into one logical information store. The patient data currently managed includes physiological parameter data, patient administrative data and fluid balance data.
A simulator design is also described, which allows for thorough validation and verification of the Patient Data Management System. This simulator can easily be extended for use as a teaching and training tool for PDMS users.
The database and simulator were developed in C and implemented under the OS/2 operating system environment. The database is based on the OS/2 Extended Edition relational Database Manager.
Baker, Lawrence (S.M., Massachusetts Institute of Technology). "Characterisation of glucose management in intensive care." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/124577.
Thesis: S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, 2019.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 121-130).
Patients in intensive care routinely have their blood glucose monitored and controlled using insulin. Two decades of on-going research has attempted to establish optimal glucose targets and treatment policy for patients with hyperglycemia in the intensive care unit (ICU). These efforts rely on the assumption that health care providers can reliably meet given targets. Significant proportions of the ICU population are either hypoglycemic or hyperglycemic and poor blood glucose control may lead to adverse patient outcomes. This thesis analyses approximately 20,000 ICU stays at the Beth Israel Deaconess Medical Center (BIDMC) which occurred between 2008 and 2018. These data are used to describe the state of clinical practice in the ICU and identify areas where treatment may be suboptimal. Even at a world-renowned teaching hospital, blood sugars are not optimally managed. 41.8% of diabetics and 14.2% of non-diabetics are severely hyperglycemic (>215mg/dL) each day. Insulin boluses are given more frequently than insulin infusions, despite guidelines recommending infusions for most critical care patients. When infusions are given, rates do not follow a consistent set of rules. Blood sugar management faces several challenges, including unreliable readings. Laboratory and fingerstick measurements that were taken at the same time had an R² of only 0.63 and the fingerstick measurements read on average 10mg/dL higher. Overcoming these challenges is an important part of improving care in the ICU. It is hoped that publicly sharing the code used to extract and clean data used for analysis will encourage further research. Code can be found at https://github.com/lawbaker/MIMIC-Glucose-Management
by Lawrence Baker.
S.M. in Technology and Policy, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society
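The paired-measurement comparison reported in the Baker abstract above (an R² of 0.63 and a roughly 10 mg/dL fingerstick bias) can be reproduced on toy numbers. The five sample pairs below are invented for illustration, not BIDMC data:

```python
def r_squared(x: list[float], y: list[float]) -> float:
    """Square of Pearson's correlation between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

def mean_bias(fingerstick: list[float], lab: list[float]) -> float:
    """Average difference of fingerstick over laboratory readings (mg/dL)."""
    return sum(f - l for f, l in zip(fingerstick, lab)) / len(lab)

# Invented simultaneous readings; fingerstick reads high, as in the text.
lab = [90.0, 120.0, 150.0, 200.0, 250.0]
fingerstick = [105.0, 128.0, 157.0, 215.0, 255.0]
bias = mean_bias(fingerstick, lab)
```

On these toy pairs the bias comes out to exactly 10 mg/dL; on real paired data the same two statistics quantify how far point-of-care meters drift from the laboratory reference.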
Suthakar, Uthayanath. "A scalable data store and analytic platform for real-time monitoring of data-intensive scientific infrastructure." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/15788.
Paz, Alvarez Alfonso. "Deviation occurrence analysis in a human intensive production environment by using MES data." Thesis, KTH, Industriell produktion, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230674.
Despite decades of automation initiatives, manual assembly remains one of the most cost-effective approaches in scenarios with high product variety and complex geometry. It represents 50% of total production time and 20% of total production cost. Understanding human performance and its impact on the assembly line is key to improving the overall performance of the line. Accordingly, by studying the deviations that occur on the line, this thesis aims to understand how human workers are affected by certain operating aspects of the assembly line. To do so, three influencing factors were selected and their impact on human performance observed: i. how previous events occurring on the line affect the worker's current actions; ii. how planned stops affect the worker's current actions; iii. how the theoretical cycle time affects the worker's performance. To observe these relationships, shop-floor data collected from SCANIA's Manufacturing Execution System (MES) was used. By applying Knowledge Discovery in Databases (KDD) methods, the data was indexed and analysed, providing the results required for the study. Finally, the results show that variation in the operation of the line has an impact on human performance overall. However, due to the complexity of the manufacturing system, the effect on human performance may not be as regular as initially thought.
Shahzad, Khurram. "Energy Efficient Wireless Sensor Node Architecture for Data and Computation Intensive Applications." Doctoral thesis, Mittuniversitetet, Avdelningen för elektronikkonstruktion, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-21956.
Wang, Yuying. "Type-2 fuzzy probabilistic system for proactive monitoring of uncertain data-intensive seasonal time series." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/11059.
Jiang, Wei. "A Map-Reduce-Like System for Programming and Optimizing Data-Intensive Computations on Emerging Parallel Architectures." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343677821.
Oluwaseun, Ajayi Olabode. "An evaluation of galaxy and ruffus-scripting workflows system for DNA-seq analysis." University of the Western Cape, 2018. http://hdl.handle.net/11394/6765.
Functional genomics determines the biological functions of genes on a global scale by using large volumes of data obtained through techniques including next-generation sequencing (NGS). The application of NGS in biomedical research is gaining in momentum, and with its adoption becoming more widespread, there is an increasing need for access to customizable computational workflows that can simplify, and offer access to, computer-intensive analyses of genomic data. In this study, the Galaxy and Ruffus frameworks were designed and implemented with a view to addressing the challenges faced in biomedical research. Galaxy, a graphical web-based framework, allows researchers to build a graphical NGS data analysis pipeline for accessible, reproducible, and collaborative data-sharing. Ruffus, a UNIX command-line framework used by bioinformaticians as a Python library to write scripts in object-oriented style, allows for building a workflow in terms of task dependencies and execution logic. In this study, a dual data analysis technique was explored which focuses on a comparative evaluation of the Galaxy and Ruffus frameworks as used in composing analysis pipelines. To this end, we developed an analysis pipeline in Galaxy, and in Ruffus, for the analysis of Mycobacterium tuberculosis sequence data. Furthermore, this study aimed to compare the Galaxy framework to Ruffus, with preliminary analysis revealing that the analysis pipeline in Galaxy displayed a higher percentage of load and store instructions. In comparison, pipelines in Ruffus tended to be CPU bound and memory intensive. The CPU usage, memory utilization, and runtime execution are graphically represented in this study. Our evaluation suggests that workflow frameworks have distinctly different features, from ease of use, flexibility, and portability, to architectural designs.
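The task-dependency style of pipeline composition that Ruffus encourages can be sketched in plain, self-contained Python (Ruffus itself expresses this with decorators such as `@transform`; the read-trimming, alignment, and variant-calling tasks below are stand-ins, not real bioinformatics steps):

```python
# Registry mapping task name -> (function, name of upstream dependency).
TASKS = {}

def task(depends_on=None):
    """Decorator registering a pipeline task and its dependency."""
    def register(func):
        TASKS[func.__name__] = (func, depends_on)
        return func
    return register

def run(name, data, _cache=None):
    """Run a task after its dependency, memoizing intermediate results."""
    cache = _cache if _cache is not None else {}
    if name in cache:
        return cache[name]
    func, dep = TASKS[name]
    upstream = run(dep, data, cache) if dep else data
    cache[name] = func(upstream)
    return cache[name]

@task()
def trim_reads(reads):               # stand-in for read trimming
    return [r[:50] for r in reads]

@task(depends_on="trim_reads")
def align(reads):                    # stand-in for alignment
    return {r: "chr1" for r in reads}

@task(depends_on="align")
def call_variants(alignments):       # stand-in for variant calling
    return sorted(alignments.values())
```

Asking for the final task (`run("call_variants", reads)`) pulls the whole chain in dependency order, which is the execution-logic model the abstract attributes to Ruffus.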
Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.
Nilsson, Johanna, and Helena Roos. ""PDMS skapar flera nyanser av patientsäkerhet" : En kvalitativ intervjustudie om intensivvårdssjuksköterskors erfarenheter av att arbeta med ett Patient Data Management System." Thesis, Linnéuniversitetet, Institutionen för hälso- och vårdvetenskap (HV), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-75749.
Background: PDMS, Patient Data Management System, is a clinical information system specially developed for intensive care which generates a large amount of patient data. The system automatically collects patient data from monitoring and medical equipment and presents the information in a clear overall view. Previous research highlights in particular the usability of the system, reduced time spent on documentation and benefits in handling medications, but shows contradictory results in terms of what the freed-up time is used for. Aim: to highlight intensive care nurses' experiences of using PDMS in nursing. Method: qualitative interview study with intensive care nurses, analyzed with qualitative content analysis. Results: five categories emerged in the results: close-to-patient care, evidence-based care, different forms of quality development, safe care and informatics. These categories reflect intensive care nurses' experiences of working with PDMS, and the results clearly demonstrate that PDMS increases patient safety for several reasons. Conclusion: increased quality of care, reduced documentation time, more accessible continuous learning for the staff, opportunity for follow-up and research, and safer handling of medications are considered the biggest gains with PDMS. Overall, all factors contribute to increased patient safety.
Callerström, Emma. "Clinicians' demands on monitoring support in an Intensive Care Unit : A pilot study, at Capio S:t Görans Hospital." Thesis, KTH, Skolan för teknik och hälsa (STH), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-202541.
Patients cared for in intensive care units suffer failure of one or more organs. The patients are monitored in order to provide the care needed to sustain a meaningful life. Today, clinical staff handle a large amount of data generated by monitoring equipment and its associated systems. Monitoring parameters can be noted by hand on a paper chart or saved directly in digital format; they are saved so as to be readily available throughout the intensive care process. Patient data management systems (PDMSs) simplify the retrieval and integration of data in settings such as intensive care units. Before a new configuration of a patient data system is procured, the intensive care unit should analyse which data is to be handled. This thesis contributes knowledge about how monitoring is performed in an intensive care unit at an emergency hospital in Stockholm. The goal was to collect data on what clinicians need and which equipment and systems they use today to perform monitoring. Requirements elicitation is a technique that can be used to gather such needs; in this project, data was collected through active observations and qualitative interviews. Patterns were found in the assistant nurses', nurses' and physicians' needs for technical support from the systems and equipment that assist staff in monitoring a patient. Assistant nurses express a need to be relieved of tasks such as manually writing down vital parameter values, and question the need for automated data capture since they are constantly present at the bedside. Nurses describe a high care burden and do not wish to be assigned further activities that increase it. Physicians describe a need for better support for how an intervention leads to outcomes for individual patients.
The results show that information exists for possible clinical decision support, but no established way to apply it better than current practice. Clinical staff state that there is a need to evaluate clinical work with the help of monitoring parameters. The results identify the areas in which staff needs are not supported by current tools, and show that needs differ depending on the profession and experience of the staff. In the intensive care unit, monitoring consists of visual observation of individual patients as well as monitoring parameters from medical devices, results of medical tests, and physical examinations. There is a need to integrate and present information from these sources, given that staff make decisions based on them that result in treatment, diagnostics and/or care.
Ortscheid, Julius, and Thomas Jensen. "Patient Data Management System (PDMS) : Anestesi- och intensivvårdspersonalens upplevelser av implementering och arbete med PDMS." Thesis, Linnéuniversitetet, Institutionen för hälso- och vårdvetenskap (HV), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-64031.
Title: Patient Data Management System (PDMS) – Anesthesia and intensive care staff experiences of implementation and work with PDMS. Background: Today's and future healthcare means an increasing use of digital systems in nursing care. Patient Data Management System (PDMS) is a clinical information system and clinical decision support which is being implemented in Swedish hospitals. Previous research shows differing experiences of digital systems' impact on nursing care, workload and patient safety. Aim: The purpose was to describe anesthesia and intensive care unit staff experiences of implementation and work with PDMS. Method: The study was conducted through interviews with a qualitative approach. Results: Four themes appear in the results: process of introduction, serviceability, transfer of information and patient safety. The four themes depict the anesthesia and intensive care unit staff's experiences of the implementation of and work with PDMS. Conclusion: PDMS is implemented in an increasing number of Swedish hospitals. The anesthesia and intensive care unit staff consider information and education before implementation of PDMS very important. A comprehensive view of the hospital's computer systems is important because these systems appear not always to be synchronized with each other, which leads to an increased workload and also an increased risk regarding patient safety. More research on the impact of PDMS on nursing and patient safety is needed.
Brossier, David. "Élaboration et validation d'une base de données haute résolution destinée à la calibration d'un patient virtuel utilisable pour l'enseignement et la prise en charge personnalisée des patients en réanimation pédiatrique Perpetual and Virtual Patients for Cardiorespiratory Physiological Studies Creating a High-Frequency Electronic Database in the PICU: The Perpetual Patient Qualitative subjective assessment of a high-resolution database in a paediatric intensive care unit-Elaborating the perpetual patient's ID card Validation Process of a High-Resolution Database in a Pediatric Intensive Care Unit – Describing the Perpetual Patient’s Validation Evaluation of SIMULRESP©: a simulation software of child and teenager cardiorespiratory physiology." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMC428.
The complexity of the patients in the intensive care unit requires the use of clinical decision support systems. These systems bring together automated management protocols that enable adherence to guidelines and virtual physiological or patient simulators that can be used to safely customize management. These devices, operating from algorithms and mathematical equations, can only be developed from a large number of patients' data. The main objective of the work was the elaboration of a high-resolution database automatically collected from critically ill children. This database will be used to develop and validate a physiological simulator called SimulResp©. This manuscript presents the whole process of setting up the database, from concept to use.
Bailly, Sébastien. "Utilisation des antifongiques chez le patient non neutropénique en réanimation." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAS013/document.
Candida species are among the main pathogens isolated from patients in intensive care units (ICUs) and are responsible for a serious systemic infection: invasive candidiasis. A late and unreliable diagnosis of invasive candidiasis aggravates the patient's status and increases the risk of short-term death. The current guidelines recommend early treatment of patients at high risk of invasive candidiasis, even in the absence of documented fungal infection. However, increased antifungal drug consumption is correlated with increased costs and the emergence of drug resistance, whereas there is as yet no consensus about the benefits of probabilistic antifungal treatment. The present work applied modern statistical methods to longitudinal observational data. It investigated the impact of systemic antifungal treatment (SAT) on the distribution of the four Candida species most frequently isolated from ICU patients, their susceptibilities to SATs, the diagnosis of candidemia, and the prognosis of ICU patients. The use of autoregressive integrated moving average (ARIMA) models for time series confirmed the negative impact of SAT use on the susceptibilities of the four Candida species and on their relative distribution over a ten-year period. Hierarchical models for repeated measures showed that SAT has a negative impact on the diagnosis of candidemia: it decreases the rate of positive blood cultures and increases the time to positivity of these cultures. Finally, the use of causal inference models showed that early SAT has no impact on the prognosis of non-neutropenic, non-transplanted patients, and that SAT de-escalation within 5 days after its initiation in critically ill patients is safe and does not influence the prognosis.
Ramraj, Varun. "Exploiting whole-PDB analysis in novel bioinformatics applications." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:6c59c813-2a4c-440c-940b-d334c02dd075.
Full textChiossi, Luca. "High-Performance Persistent Caching in Multi- and Hybrid- Cloud Environments." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/20089/.
Full textHerodotou, Herodotos. "Automatic Tuning of Data-Intensive Analytical Workloads." Diss., 2012. http://hdl.handle.net/10161/5415.
Full textModern industrial, government, and academic organizations are collecting massive amounts of data ("Big Data") at an unprecedented scale and pace. The ability to perform timely and cost-effective analytical processing of such large datasets in order to extract deep insights is now a key ingredient for success. These insights can drive automated processes for advertisement placement, improve customer relationship management, and lead to major scientific breakthroughs.
Existing database systems are adapting to the new status quo while large-scale dataflow systems (like Dryad and MapReduce) are becoming popular for executing analytical workloads on Big Data. Ensuring good and robust performance automatically on such systems poses several challenges. First, workloads often analyze a hybrid mix of structured and unstructured datasets stored in nontraditional data layouts. The structure and properties of the data may not be known upfront, and will evolve over time. Complex analysis techniques and rapid development needs necessitate the use of both declarative and procedural programming languages for workload specification. Finally, the space of workload tuning choices is very large and high-dimensional, spanning configuration parameter settings, cluster resource provisioning (spurred by recent innovations in cloud computing), and data layouts.
We have developed a novel dynamic optimization approach that can form the basis for tuning workload performance automatically across different tuning scenarios and systems. Our solution is based on (i) collecting monitoring information in order to learn the run-time behavior of workloads, (ii) deploying appropriate models to predict the impact of hypothetical tuning choices on workload behavior, and (iii) using efficient search strategies to find tuning choices that give good workload performance. This dynamic nature enables our solution to overcome the new challenges posed by Big Data, and also makes it applicable to both MapReduce and database systems. We have developed the first cost-based optimization framework for MapReduce systems that determines the cluster resources and configuration parameter settings needed to meet desired requirements on execution time and cost for a given analytic workload. We have also developed a novel tuning-based optimizer in database systems that collects targeted run-time information, performs optimization, and repeats as needed to perform fine-grained tuning of SQL queries.
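The three-step loop described above (monitor, predict with a what-if model, search) can be sketched in miniature. The parameter names and the cost formula below are entirely hypothetical; real systems learn such models from monitoring data rather than hard-coding them.

```python
# Toy cost-based tuning: search a small configuration space for the
# setting with the lowest predicted cost. All numbers are made up.

from itertools import product

def predicted_cost(conf, data_gb=100.0):
    """Hypothetical what-if model: more reducers cut shuffle time but add
    scheduling overhead; larger sort buffers reduce spill cost."""
    shuffle = data_gb / conf["reducers"] + 0.5 * conf["reducers"]
    spill = (50.0 / conf["sort_buffer_mb"]) * data_gb / 10.0
    return shuffle + spill

def enumerate_confs():
    """Enumerate the (small) cross-product of candidate settings."""
    space = {"reducers": [4, 8, 16, 32, 64],
             "sort_buffer_mb": [50, 100, 200, 400]}
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

# Exhaustive enumeration stands in for the efficient search strategies
# mentioned above, which matter once the space is high-dimensional.
best = min(enumerate_confs(), key=predicted_cost)
print(best, round(predicted_cost(best), 2))
```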
Dissertation
Yu, Boyang. "On exploiting location flexibility in data-intensive distributed systems." Thesis, 2016. http://hdl.handle.net/1828/7602.
Full textGraduate
Khoshkbar, Foroushha Ali Reza. "Workload Modelling and Elasticity Management of Data-Intensive Systems." Phd thesis, 2018. http://hdl.handle.net/1885/154330.
Full textAlbanese, Ilijc. "Periodic Data Structures for Bandwidth-intensive Applications." Thesis, 2014. http://hdl.handle.net/1828/5851.
Full textGraduate
Borisov, Nedyalko Krasimirov. "Integrated Management of the Persistent-Storage and Data-Processing Layers in Data-Intensive Computing Systems." Diss., 2012. http://hdl.handle.net/10161/5806.
Full textOver the next decade, it is estimated that the number of servers (virtual and physical) in enterprise datacenters will grow by a factor of 10, the amount of data managed by these datacenters will grow by a factor of 50, and the number of files the datacenter has to deal with will grow by a factor of 75. Meanwhile, skilled information technology (IT) staff to manage the growing number of servers and data will increase less than 1.5 times. Thus, a system administrator will face the challenging task of managing larger and larger numbers of production systems. We have developed solutions to make the system administrator more productive by automating some of the hard and time-consuming tasks in system management. In particular, we make new contributions in the Monitoring, Problem Diagnosing, and Testing phases of the system management cycle.
We start by describing our contributions in the Monitoring phase. We have developed a tool called Amulet that can continuously monitor and proactively detect problems on production systems. A notoriously hard problem that Amulet can detect is that of data corruption where bits of data in persistent storage differ from their true values. Once a problem is detected, our DiaDS tool helps in diagnosing the cause of the problem. DiaDS uses a novel combination of machine learning techniques and domain knowledge encoded in a symptoms database to guide the system administrator towards the root cause of the problem.
Before applying any change (e.g., changing a configuration parameter setting) to the production system, the system administrator needs to thoroughly understand the effect that this change can have. Well-meaning changes to production systems have led to performance or availability problems in the past. For this phase, our Flex tool enables administrators to evaluate a change hypothetically, in a manner that is fairly accurate while avoiding overheads on the production system. We have conducted a comprehensive evaluation of Amulet, DiaDS, and Flex in terms of effectiveness, efficiency, and integration in the system management cycle, and of how these tools bring data-intensive computing systems closer to the goal of self-managing systems.
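The kind of data-corruption detection attributed to Amulet above is commonly built on checksums recorded at write time and verified by a periodic scrub. A hypothetical sketch of that idea (this is not Amulet's implementation):

```python
# Checksum-based scrubbing: store a digest with each block when it is
# written, then periodically re-hash the stored bytes and flag mismatches.

import hashlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}      # block_id -> bytearray of stored data
        self.checksums = {}   # block_id -> sha256 hex digest at write time

    def write(self, block_id, data: bytes):
        self.blocks[block_id] = bytearray(data)
        self.checksums[block_id] = hashlib.sha256(data).hexdigest()

    def scrub(self):
        """Return ids of blocks whose stored bytes no longer match
        their write-time checksum, i.e., silently corrupted blocks."""
        return [bid for bid, data in self.blocks.items()
                if hashlib.sha256(bytes(data)).hexdigest() != self.checksums[bid]]

store = ChecksummedStore()
store.write("b1", b"critical record")
store.write("b2", b"another record")
store.blocks["b2"][0] ^= 0x01   # simulate a silent bit flip on disk
print(store.scrub())            # -> ['b2']
```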
Dissertation
"EIS for ICU: information requirements determination." 1997. http://library.cuhk.edu.hk/record=b5889218.
Full textThesis (M.Phil.)--Chinese University of Hong Kong, 1997.
Includes bibliographical references (leaves 82-89).
Abstract --- p.ii
Table of Contents --- p.iv
LIST of Figures --- p.viii
List of Tables --- p.ix
Acknowledgments --- p.xi
Chapter 1. --- Introduction --- p.1
Chapter 1.1 --- Intensive Care Unit --- p.2
Chapter 1.1.1 --- Expensive Costs of Intensive Care --- p.2
Chapter 1.1.2 --- Tremendous Demands with Limited Resources --- p.3
Chapter 1.1.3 --- Conflicting Roles of ICU Physicians --- p.3
Chapter 1.1.4 --- Disorganized Patient Information --- p.4
Chapter 1.2 --- ICU Management Problems --- p.5
Chapter 1.3 --- Executive Information Systems (EIS) for ICU Physician --- p.6
Chapter 1.4 --- Determine Information Requirements of the EIS --- p.7
Chapter 1.5 --- Scope of the Study --- p.8
Chapter 1.6 --- Organization of the Report --- p.8
Chapter 2. --- Literature Review --- p.9
Chapter 2.1 --- Intensive Care Unit --- p.9
Chapter 2.1.1 --- Costs of ICU --- p.10
Chapter 2.2 --- ICU Physicians are Executives --- p.10
Chapter 2.3 --- Computers in ICU --- p.11
Chapter 2.3.1 --- Record Keeping --- p.11
Chapter 2.3.2 --- Data Management --- p.12
Chapter 2.3.3 --- Decision Making --- p.13
Chapter 2.4 --- Problems Facing ICU Physicians --- p.14
Chapter 2.4.1 --- Conflicting Role --- p.14
Chapter 2.4.2 --- Information Overload --- p.14
Chapter 2.4.3 --- Poor Information Quality --- p.15
Chapter 2.4.4 --- Technophobia --- p.16
Chapter 2.5 --- Executive Information Systems --- p.16
Chapter 2.5.1 --- Definition --- p.16
Chapter 2.5.2 --- Characteristics of EIS --- p.17
Chapter 2.5.3 --- EIS in Healthcare Industry --- p.20
Chapter 2.6 --- Determining Information Requirements --- p.20
Chapter 2.6.1 --- Strategies and Methods to Determine Information Requirements --- p.21
Chapter 2.6.2 --- Critical Success Factors Analysis --- p.25
Chapter 2.6.2.1 --- Definition of CSFs --- p.26
Chapter 2.6.2.2 --- Different Executives Have Different CSFs and Different Information Needs --- p.26
Chapter 2.6.2.3 --- Hierarchical Nature of CSFs --- p.26
Chapter 2.6.2.4 --- Steps in the CSFs Approach --- p.28
Chapter 2.6.2.5 --- "Critical Information, Assumptions, and Decisions" --- p.29
Chapter 3. --- Research Methodology --- p.31
Chapter 3.1 --- Literature Review --- p.31
Chapter 3.2 --- Design a Methodology for Information Requirements Determination --- p.32
Chapter 3.3 --- ICU Admission Case Study --- p.34
Chapter 3.4 --- Analysis and Validation --- p.35
Chapter 3.5 --- COPD Survey: The Importance of Medical History --- p.36
Chapter 3.5.1 --- Chronic Obstructive Pulmonary Disease --- p.36
Chapter 3.5.2 --- The Survey --- p.38
Chapter 4. --- A Three-Stage Methodology --- p.41
Chapter 4.1 --- Stage 1 - Understanding ICU Operations --- p.42
Chapter 4.2 --- Stage 2 - Determine CSFs within the ICU --- p.43
Chapter 4.2.1 --- CSFs Analysis Steps in the Study --- p.44
Chapter 4.2.2 --- Step 1: Determine CSFs of ICUs --- p.44
Chapter 4.2.3 --- Step 2: Determine CSFs of the ICU Physicians --- p.45
Chapter 4.2.4 --- Step 3: Determine CSFs of the ICU Admission --- p.45
Chapter 4.3 --- Stage 3 - Determine Information Requirements --- p.45
Chapter 4.4 --- Importance of Medical History: A COPD Survey --- p.46
Chapter 4.4.1 --- COPD Questionnaire --- p.46
Chapter 5. --- Findings --- p.48
Chapter 5.1 --- Findings in Stage 1 --- p.48
Chapter 5.1.1 --- Decision Making in ICU --- p.49
Chapter 5.2 --- Findings in Stage 2 - CSFs --- p.54
Chapter 5.2.1 --- CSFs of the ICU --- p.54
Chapter 5.2.2 --- CSFs of the ICU Physicians --- p.56
Chapter 5.2.3 --- CSFs of the ICU Admission --- p.56
Chapter 5.3 --- Findings in Stage 3 --- p.58
Chapter 5.3.1 --- Types of Information Requirement --- p.58
Chapter 5.3.2 --- Detailed Contents of the Information Requirements --- p.59
Chapter 6. --- Analysis --- p.65
Chapter 6.1 --- A Three-Stage Methodology for Information Requirements Determination --- p.65
Chapter 6.1.1 --- Comparison of the Three-Stage Methodology with CSFs Analysis --- p.66
Chapter 6.1.2 --- A Case Study Using the Three-Stage Methodology --- p.67
Chapter 6.2 --- Roles of Information Types in Admission Decision --- p.68
Chapter 6.2.1 --- Admitting Patients from Different Sources --- p.69
Chapter 6.2.2 --- Admitting Patients with Different Diseases --- p.70
Chapter 6.3 --- The Importance of Medical History --- p.71
Chapter 7 --- Conclusions --- p.78
Bibliography --- p.82
Interviews --- p.90
Appendices --- p.91
"Executive information systems (EIS): its roles in decision making on patients' discharge in intensive care unit." Chinese University of Hong Kong, 1995. http://library.cuhk.edu.hk/record=b5888309.
Full textThesis (M.B.A.)--Chinese University of Hong Kong, 1995.
Includes bibliographical references (leaves 56-57).
ABSTRACT --- p.ii
TABLE OF CONTENTS --- p.iv
LIST OF FIGURES --- p.vi
LIST OF TABLES --- p.vii
ACKNOWLEDGMENT --- p.viii
Chapter I. --- INTRODUCTION --- p.1
Intensive Care Services --- p.1
Clinician as an Information Processor --- p.2
Executive Information System (EIS) for Intensive Care Services --- p.7
Scope of the Study --- p.7
The Organization of the Remaining Report --- p.8
Chapter II. --- LITERATURE REVIEW --- p.9
Sickness Scoring Systems --- p.9
Executive Information Systems (EIS) --- p.15
Information Requirements Determination for EIS --- p.17
Future Direction of EIS in Intensive Care --- p.20
Chapter III. --- RESEARCH METHODOLOGY --- p.22
Survey by Mailed Questionnaire --- p.23
Personal Interview --- p.24
Subjects Selection --- p.26
Analysis --- p.27
Chapter IV. --- RESULTS AND FINDINGS --- p.28
Part 1 - Questionnaires --- p.29
Part 2 - Interviews --- p.31
Chapter V. --- ANALYSIS AND DISCUSSION --- p.44
Analysis of Results and Findings --- p.44
Evaluation on Information Requirements Determination for an EIS --- p.50
Chapter VI. --- CONCLUSION --- p.52
Chapter VII. --- FUTURE DIRECTION OF DECISION SUPPORT IN CRITICAL CARE --- p.54
REFERENCES --- p.56
INTERVIEWS --- p.59
APPENDIX --- p.60
Chapter 1. --- A Sample of Hospital Information System Requirement Survey Questionnaire --- p.61
Chapter 2. --- Samples of Visual Display --- p.67
Chapter 3. --- A Sample of Format of a Structured Report --- p.70
Hübert, Heiko [Verfasser]. "MEMTRACE: a memory, performance and energy profiler targeting RISC-based embedded systems for data intensive applications / von Heiko Hübert." 2009. http://d-nb.info/995210012/34.
Full textBraga, André Filipe Gonçalves Névoa Fernandes. "Pervasive patient timeline." Master's thesis, 2015. http://hdl.handle.net/1822/40094.
Full textIn Intensive Medicine, medical information in Intensive Care Units (ICUs) is presented in many forms (graphs, tables, text, ...), depending on the type of exams performed, the data collected in real time by monitoring systems, and other sources. The way information is presented can make it difficult for health professionals to read a patient's clinical condition, especially when several types of clinical data or information sources have to be cross-referenced. The evolution of technologies toward ubiquitous and pervasive computing makes it possible to gather and store various types of information and make them available in real time, anywhere. With this advancement, representing timelines on paper has become outdated and sometimes unusable, given the many advantages of digital representation. The use of Clinical Decision Support Systems (CDSS) is not a novelty; their main function is to facilitate the decision-making process through predictive models, continuous information monitoring, and other mechanisms. However, associating timelines with CDSS in order to improve the way information is presented is an innovative approach, especially in the ICU. This work explores a new way of presenting information about patients, based on the time frame in which events occur. By developing an interactive Pervasive Patient Timeline, health professionals gain access to a real-time environment where they can consult the medical history of patients, from the moment they are admitted to the ICU until their discharge, and analyze data on vital signs, medication, exams, and more.
The incorporation of Data Mining (DM) models produced by the INTCare system is also a reality: in this context, DM models were induced for predicting the intake of vasopressors and incorporated into the Pervasive Patient Timeline. Health professionals thus have a new platform that can help them make decisions more accurately.
Ribeiro, Ana Catarina Vieira. "Previsão dos fatores de risco e caracterização de doentes internados nos cuidados intensivos." Master's thesis, 2016. http://hdl.handle.net/1822/54545.
Full textIntensive Medicine is one of the most critical areas of medicine. Its multidisciplinary nature makes it a very wide area that gathers all kinds of health professionals, as well as a place with special equipment and conditions known as the Intensive Care Unit (ICU). Given this critical environment, the need to forecast ICU admissions becomes evident: besides imposing additional costs on institutions and occupying resources unnecessarily, unplanned admissions are risky for patients who are already debilitated. Over the years, Information Systems have accompanied the development of medicine and become essential instruments for the treatment of patients, especially Clinical Decision Support Systems, which present relevant information about patients without the need to analyse clinical data manually. The use of such systems is therefore crucial in medicine, particularly in Intensive Medicine, where decisions must very often be taken speedily, always in the best interest of the patient. These Decision Support Systems may employ different techniques, such as Data Mining (DM). This dissertation involves knowledge discovery in databases extracted from the INTCare Clinical Decision Support System in use at the Centro Hospitalar do Porto (CHP). A set of DM techniques was used, including clustering and classification, based on different algorithms and evaluation metrics. Natural patterns were thereby discovered in the data, in particular through the formation of two groups of characteristics (clusters) of patients admitted to the ICU and the identification of the most critical attributes in these clusters.
Moreover, the predictions obtained correctly identified admitted patients about 97% of the time (sensitivity) and, despite producing too many false positives (63% specificity), yielded models that allow doctors to act proactively and preventively, which was one of the main motivations of this dissertation. This dissertation increases the number of studies that apply DM techniques in Intensive Medicine, particularly for predicting ICU admissions. It thus contributes knowledge to the scientific community not only in DM but also in medicine, in order to support the clinical decision-making process and improve the services rendered to patients.
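The sensitivity and specificity figures quoted above come directly from a confusion matrix. As a quick reminder of the arithmetic, a sketch with made-up counts chosen to reproduce roughly those figures:

```python
# Sensitivity = TP / (TP + FN): share of true admissions correctly predicted.
# Specificity = TN / (TN + FP): share of non-admissions correctly predicted.

def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts, not the dissertation's actual confusion matrix.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=63, fp=37)
print(sens, spec)  # 0.97 0.63
```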
(7022108), Gowtham Kaki. "Automatic Reasoning Techniques for Non-Serializable Data-Intensive Applications." Thesis, 2019.
Find full textThe performance bottlenecks in modern data-intensive applications have induced database implementors to forsake high-level abstractions and trade off simplicity and ease of reasoning for performance. Among the first casualties of this trade-off are the well-known ACID guarantees, which simplify reasoning about concurrent database transactions. ACID semantics have become increasingly obsolete in practice because serializable isolation, an integral aspect of ACID, is exorbitantly expensive. Databases, including the popular commercial offerings, default to weaker levels of isolation where the effects of concurrent transactions are visible to each other. Such weak isolation guarantees, however, are extremely hard to reason about, and have led to serious safety violations in real applications. The problem is further complicated in a distributed setting with asynchronous state replication, where high availability and low latency requirements compel large-scale web applications to embrace weaker forms of consistency (e.g., eventual consistency) besides weak isolation. Given the serious practical implications of safety violations in data-intensive applications, there is a pressing need to extend the state of the art in program verification to reach non-serializable data-intensive applications operating in a weakly consistent distributed setting.
This thesis sets out to do just that. It introduces new language abstractions, program logics, reasoning methods, and automated verification and synthesis techniques that collectively allow programmers to reason about non-serializable data-intensive applications in the same way as their serializable counterparts. The contributions
made are broadly threefold. Firstly, the thesis introduces a uniform formal model to reason about weakly isolated (non-serializable) transactions on a sequentially consistent (SC) relational database machine. A reasoning method that relates the semantics of weak isolation to the semantics of the database program is presented, and an automation technique, implemented in a tool called ACIDifier, is also described. The second contribution of this thesis is a relaxation of the machine model from sequential consistency to a specifiable level of weak consistency, and a generalization of the data model from relational to schema-less or key-value. A specification language to express weak consistency semantics at the machine level is described, and a bounded verification technique, implemented in a tool called Q9, is presented that bridges the gap between consistency specifications and program semantics, thus allowing high-level safety properties to be verified under arbitrary consistency levels. The final contribution of the thesis is a programming model inspired by version control systems that guarantees correct-by-construction replicated data types (RDTs) for building complex distributed applications with arbitrarily-structured replicated state. A technique based on decomposing inductively-defined data types into characteristic relations is presented, which is used to reason about the semantics of the data type under state replication, and eventually derive its correct-by-construction replicated variant automatically. An implementation of the programming model, called Quark, on top of a content-addressable storage is described, and the practicality of the programming model is demonstrated with the help of various case studies.
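The safety violations that weak isolation permits, and that the verification techniques above are designed to catch, are easy to demonstrate. A deterministic sketch of the classic lost-update anomaly, with the interleaving written out by hand:

```python
# Two "transactions" each read a balance, add an increment, and write back.
# Because T2's read is not isolated from T1's concurrent update, T2's write
# clobbers T1's, and one increment is silently lost.

def run_interleaved(balance, inc1, inc2):
    r1 = balance          # T1 reads
    r2 = balance          # T2 reads the same value: not serializable
    balance = r1 + inc1   # T1 writes
    balance = r2 + inc2   # T2 writes, losing T1's update
    return balance

final = run_interleaved(100, 10, 5)
print(final)  # 105, whereas any serializable order would yield 115
```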
Huang, Yu-Tang (黃昱棠). "A Memcached-Based Inter-Framework Caching System for Multi-Layer Data-Intensive Computing." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/nf48e2.
Full textNational Cheng Kung University
Institute of Computer and Communication Engineering
102 (ROC academic year, i.e., 2013)
In the age of information explosion, conventional computing platforms cannot deal with the huge amounts of data involved. MapReduce is a parallel distributed framework proposed by Google for data-intensive computing. Hadoop implements the MapReduce framework on top of the Hadoop Distributed File System (HDFS) to process large amounts of data, and nowadays many research organizations and enterprises build their own Hadoop platforms to process large-scale data. Various frameworks have been proposed for different requirements: for example, Storm is used for streaming data and Spark for interactive queries. Fast data access and transfer, both within a single framework and across different frameworks, has therefore become an important topic. In this thesis, we propose a system called "Inter-Framework Caching" that improves the Hadoop 2.0 framework. It provides an inter-framework distributed cache storage system that speeds up data access and transfer, reducing disk access frequency and improving performance.
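The caching idea described above can be sketched with a get-or-compute interface: a result produced by one framework is reused by another instead of being recomputed or re-read from the distributed file system. This is a toy stand-in, with an in-process dict replacing a real memcached cluster and a trivial function standing in for an expensive job.

```python
# Hypothetical inter-framework cache: keys identify intermediate results,
# values are the results themselves. A miss triggers the computation; a
# hit returns the cached value without recomputing.

class InterFrameworkCache:
    def __init__(self):
        self._kv = {}        # stand-in for a shared memcached cluster
        self.misses = 0

    def get_or_compute(self, key, compute):
        """Return the cached value for key, computing and caching it on a miss."""
        if key not in self._kv:
            self.misses += 1
            self._kv[key] = compute()
        return self._kv[key]

def wordcount():
    # Pretend this is an expensive MapReduce job over HDFS data.
    return {"data": 3, "cache": 2}

cache = InterFrameworkCache()
first = cache.get_or_compute("job:wordcount", wordcount)   # framework A computes
second = cache.get_or_compute("job:wordcount", wordcount)  # framework B hits the cache
print(cache.misses)  # 1
```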
Armstrong, Hannah Marie. "Evaluation of an Intensive Data Collection System for Tennessee Surface Water Quality Assessment and Watershed Model Calibration." 2011. http://trace.tennessee.edu/utk_gradthes/948.
Full textPortela, Filipe. "Pervasive intelligent decision support in critical health care." Doctoral thesis, 2013. http://hdl.handle.net/1822/27792.
Full textIntensive Care Units (ICUs) are recognized as critical environments, since patients admitted to these units typically find themselves in situations of organ failure or serious health conditions. ICU professionals (doctors and nurses) dedicate most of their time to caring for patients, relegating documentation tasks to the background. Tasks such as recording vital signs, treatment planning, and the calculation of indicators are only performed when patients are in a stable clinical condition, so these records can occur with a lag of several hours. Since this is a critical environment, the Process of Decision Making (PDM) has to be fast, objective, and effective: any error or delay in the implementation of a particular decision may result in the loss of a human life. Aiming to minimize human effort in bureaucratic processes and improve the PDM, dematerialization of information is required, eliminating paper-based recording and promoting the automatic, electronic, real-time registration of patient data. These data can then be used as a complement to the PDM, e.g. in Decision Support Systems that use Data Mining (DM) models. At the same time, it is important for the PDM to overcome barriers of time and space, making the platforms as universal as possible, accessible anywhere and anytime, regardless of the devices used. In this sense, a proliferation of pervasive systems in healthcare has been observed. These systems focus on providing healthcare to anyone, anytime, and anywhere by removing restrictions of time and place, increasing both the coverage and the quality of health care. This approach is mainly based on information that is stored and available online. With the aim of supporting the PDM, a set of tests was carried out using static DM models and data that had been collected and entered manually in the Euricus database.
Preliminary results of these tests showed that it was possible to predict organ failure and the outcome of a patient using DM techniques with a set of physiological and clinical variables as input. High rates of sensitivity were achieved: cardiovascular, 93.4%; respiratory, 96.2%; renal, 98.1%; liver, 98.3%; hematologic, 97.5%; and outcome, 98.3%. Upon completion of this study a challenge emerged: how to achieve the same results, but dynamically and in real time? A research question was postulated: "To what extent may Intelligent Decision Support Systems (IDSS) be appropriate for critical clinical settings in a pervasive way?". Research work included: 1. To perceive what challenges a universal approach brings to IDSS in the context of critical environments; 2. To understand how pervasive approaches can be adapted to critical environments; 3. To develop and test predictive models for pervasive approaches in health care. The main results achieved in this work made it possible: 1. To prove the adequacy of the pervasive approach in critical environments; 2. To design a new architecture that includes the information requirements for a pervasive approach, able to automate the process of knowledge discovery in databases; 3. To develop models to support pervasive intelligent decision-making, able to act automatically and in real time, and to induce DM ensembles in real time that adapt autonomously in order to achieve predefined quality thresholds (total error <= 40%, sensitivity >= 85%, and accuracy >= 60%). The main contributions of this work include new knowledge to help overcome the requirements of a pervasive approach in critical environments. Some barriers inherent to information systems, such as the acquisition and processing of data in real time and the induction of adaptive ensembles in real time using DM, have been broken. The dissemination of results is done via devices located anywhere, at any time.
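The quality thresholds quoted above (total error <= 40%, sensitivity >= 85%, accuracy >= 60%) amount to a gate that a newly induced model or ensemble must pass before deployment. A minimal sketch of such a gate; the metric values below are hypothetical:

```python
# Accept a model only if every metric satisfies its predefined bound:
# "max" bounds are upper limits, "min" bounds are lower limits.

THRESHOLDS = {"total_error": ("max", 0.40),
              "sensitivity": ("min", 0.85),
              "accuracy":    ("min", 0.60)}

def meets_quality(metrics):
    for name, (kind, bound) in THRESHOLDS.items():
        value = metrics[name]
        if kind == "max" and value > bound:
            return False
        if kind == "min" and value < bound:
            return False
    return True

good = {"total_error": 0.35, "sensitivity": 0.93, "accuracy": 0.71}
bad  = {"total_error": 0.35, "sensitivity": 0.80, "accuracy": 0.71}
print(meets_quality(good), meets_quality(bad))  # True False
```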
Intensive Care Units (ICUs) are recognized as critical environments, since patients admitted to these units typically find themselves in situations of organ failure or serious health conditions. ICU professionals (doctors and nurses) dedicate most of their time to caring for patients, relegating all documentation tasks to the background. Tasks such as recording vital signs, treatment planning and the calculation of indicators are only performed when patients are in a stable clinical condition; as a result, these records can occur with a lag of several hours. Since this is a critical environment, the Process of Decision Making (PDM) has to be fast, objective and effective. Any error or delay in the implementation of a particular decision may result in the loss of a human life. To minimize the human effort spent on bureaucratic processes and to improve the PDM, dematerialization of information is required, eliminating paper-based records and promoting automatic, electronic registration of patient data in real time. These data can then be used as a complement to the PDM, e.g. in Decision Support Systems that use Data Mining (DM) models. At the same time, it is important for the PDM to overcome barriers of time and space by making platforms as universal as possible, accessible anywhere and anytime, regardless of the device used. In this sense, a proliferation of pervasive systems in healthcare has been observed. These systems focus on providing healthcare to anyone, anytime and anywhere by removing restrictions of time and place, increasing both the coverage and the quality of health care. This approach is mainly based on information that is stored and available online. With the aim of supporting the PDM, a set of tests was carried out using static DM models and data that had been collected and entered manually into the Euricus database.
Preliminary results of these tests showed that it was possible to predict the organ failure and outcome of a patient using DM techniques, considering a set of physiological and clinical variables as input. High rates of sensitivity were achieved: cardiovascular - 93.4%; respiratory - 96.2%; renal - 98.1%; liver - 98.3%; hematologic - 97.5%; and outcome - 98.3%. Upon completion of this study a challenge emerged: how to achieve the same results, but dynamically and in real time? A research question was postulated: "To what extent may Intelligent Decision Support Systems (IDSS) be appropriate for critical clinical settings in a pervasive way?". The research work included: 1. Perceiving what challenges a universal approach brings to IDSS in the context of critical environments; 2. Understanding how pervasive approaches can be adapted to critical environments; 3. Developing and testing predictive models for pervasive approaches in health care. The main results achieved in this work made it possible: 1. To prove the adequacy of the pervasive approach in critical environments; 2. To design a new architecture that includes the information requirements of a pervasive approach and is able to automate the process of knowledge discovery in databases; 3. To develop models that support pervasive intelligent decisions, able to act automatically and in real time, and to induce DM ensembles in real time that adapt autonomously in order to achieve predefined quality thresholds (total error <= 40%, sensitivity >= 85% and accuracy >= 60%). The main contributions of this work include new knowledge that helps meet the requirements of a pervasive approach in critical environments. Some barriers inherent to information systems, such as the acquisition and processing of data in real time and the induction of adaptive ensembles in real time using DM, have been overcome. Results are disseminated via devices located anywhere, at any time.
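The abstract's acceptance criterion for real-time ensemble induction can be illustrated with a minimal sketch. The function name and metric representation below are assumptions for illustration only, not the thesis's actual code; only the three thresholds (total error <= 40%, sensitivity >= 85%, accuracy >= 60%) come from the abstract:

```python
def meets_quality_thresholds(total_error: float,
                             sensitivity: float,
                             accuracy: float) -> bool:
    """Check an induced DM ensemble against the predefined quality
    thresholds quoted in the abstract (metrics as fractions in [0, 1])."""
    return (total_error <= 0.40
            and sensitivity >= 0.85
            and accuracy >= 0.60)

# An ensemble meeting all three thresholds is kept; one that fails any
# threshold would be re-induced on newer data in the adaptive approach.
print(meets_quality_thresholds(0.35, 0.93, 0.72))  # all thresholds met
print(meets_quality_thresholds(0.45, 0.93, 0.72))  # total error too high
```

In the adaptive setting described, such a check would run each time a new ensemble is induced, triggering re-induction whenever it returns false.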
Yang, Tao. "Brand and usability in content-intensive websites." Thesis, 2014. http://hdl.handle.net/1805/4667.
Full text
Our connections to the digital world are invoked by brands, but the intersection of branding and interaction design is still an under-investigated area. In particular, current websites are designed not only to support essential user tasks, but also to communicate an institution's intended brand values and traits. What we do not yet know, however, is which design factors affect which aspects of a brand. To demystify this issue, three sub-projects were conducted. The first developed a systematic approach for evaluating the branding effectiveness of content-intensive websites (BREW). BREW gauges users' brand perceptions on four well-known branding constructs: brand as product, brand as organization, user image, and brand as person. It also provides rich guidelines for eBranding researchers on planning and executing a user study and making improvement recommendations based on the study results. The second project offered a standardized perceived-usability questionnaire entitled DEEP (design-oriented evaluation of perceived web usability). DEEP captures perceived website usability on five design-oriented dimensions: content, information architecture, navigation, layout consistency, and visual guidance. While existing questionnaires assess more holistic concepts, such as ease of use and learnability, DEEP can more transparently reveal where a problem actually lies. Moreover, DEEP suggests that the two most critical and reliable usability dimensions are interface consistency and visual guidance. Capitalizing on the BREW approach and the findings from DEEP, a controlled experiment (N=261) was conducted that manipulated the interface consistency and visual guidance of an anonymized university website to see how these variables affect the university's image. Unexpectedly, consistency did not significantly predict brand image, while the effect of visual guidance on brand perception showed a remarkable gender difference.
When visual guidance was significantly worsened, females became much less satisfied with the university in terms of brand as product (e.g., teaching and research quality) and user image (e.g., students' characteristics). In contrast, males' perceptions of the university's brand image stayed the same in most circumstances. The reason for this gender difference was revealed through a further path analysis and a follow-up interview, which inspired new research directions to further unpack the nexus between branding and interaction design.
Chen, Tseng-Yi, and 陳增益. "Based on a Novel Economic Evaluation Model to Design an Energy-efficient and Reliable Storage Mechanism with Associated Tools for Data-intensive Archive System." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/30953709336081462185.
Full text
National Tsing Hua University (國立清華大學)
Department of Computer Science (資訊工程學系)
103
Recently, the green data center issue has garnered much attention due to the dramatic growth of data in every conceivable industry and application. With high network bandwidth, mobile applications and user clients constantly back up program and user data to remote data centers. In addition to the data from users, a data center usually employs a fault-tolerance mechanism that generates redundant data, so as to keep user data from being lost or corrupted. To preserve these data, the storage system consumes about 27%-35% of the power in a typical data center. To reduce this energy consumption, previous studies conserved power in their respective storage systems by switching idle disks to standby/sleep modes. However, according to research conducted by Google and the IDEMA standard, frequently setting a disk to standby mode increases its annual failure rate and reduces its lifespan, and in most cases the authors did not analyze the reliability of their solutions. To address this issue, we propose an evaluation function called E3SaRC (Economic Evaluation of Energy Saving with Reliability Constraint), which comprehensively evaluates the effects of an energy-saving solution by considering the cost of the hardware failures incurred by applying energy-saving schemes. With both system reliability and energy efficiency in mind, this study proposes an energy-efficient and reliable storage system composed of an energy-efficient storage scheme with a data fault-tolerance algorithm, an adaptive simulation tool and a monitoring framework. First, because power consumption is the central issue of this dissertation, we developed a data placement mechanism called CacheRAID, based on a Redundant Array of Independent Disks (RAID-5) architecture, to mitigate the random access problems implicit in RAID techniques and thereby reduce the energy consumption of RAID disks.
Regarding system reliability, CacheRAID applies a control mechanism to the spin-down algorithm. To further enhance the energy efficiency of the proposed system, an adaptive simulation tool was developed to find the best system parameters for CacheRAID by quickly simulating the current workload on the storage system. The contributions of this dissertation are twofold. First, our experimental results show that the proposed storage system can reduce the power consumption of a conventional software RAID-5 system by 65-80%. Moreover, according to the E3SaRC measurement, the overall cost saving of CacheRAID is the largest among the systems we compared. Second, the analytical results demonstrate that the measurement error of the proposed simulation tool relative to real-world energy estimation experiments is below 2.5%, so the tool can accurately simulate the power consumption of a storage system under different system settings. Overall, the proposed system significantly reduces storage system power consumption while increasing system reliability.
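E3SaRC's central idea, weighing the money saved by an energy-saving scheme against the expected cost of the extra hardware failures it causes, can be sketched as a simple net-benefit comparison. This is an illustrative assumption about the shape of such an evaluation; the function name, parameters and example figures below are hypothetical and do not reproduce the thesis's actual formula:

```python
def net_benefit(energy_saved_kwh: float, price_per_kwh: float,
                expected_extra_failures: float,
                cost_per_failure: float) -> float:
    """Hypothetical E3SaRC-style score: electricity cost saved by the
    scheme minus the expected cost of additional disk failures caused
    by aggressive spin-down. Higher is better; negative means the
    reliability penalty outweighs the energy saving."""
    saving = energy_saved_kwh * price_per_kwh
    failure_cost = expected_extra_failures * cost_per_failure
    return saving - failure_cost

# A scheme saving 500 kWh at $0.12/kWh, expected to cause 0.2 extra
# disk failures at $150 per replacement:
print(net_benefit(500, 0.12, 0.2, 150))  # net benefit in dollars
```

Under such a measure, a scheme that spins disks down very aggressively could save more energy yet score worse overall, which matches the abstract's motivation for constraining energy saving by reliability.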
Brossier, David. "Élaboration et validation d’une base de données haute résolution destinée à la calibration d’un patient virtuel utilisable pour l’enseignement et la prise en charge personnalisée des patients en réanimation pédiatrique." Thesis, 2019. http://hdl.handle.net/1866/24620.
Full text
The complexity of patients in the intensive care unit justifies the use of clinical decision support systems. These systems bring together automated management protocols, which enable adherence to guidelines, and physiological or virtual patient simulators, which can be used to safely personalize care. Such devices, operating on algorithms and mathematical equations, can only be developed from a large volume of patient data. The main objective of this work was the elaboration of a high-resolution database automatically collected from critically ill children, which will be used to develop and validate a physiological simulator called SimulResp©. This manuscript presents the whole process of setting up the database, from concept to use.