To see the other types of publications on this topic, follow the link: ME163.

Journal articles on the topic 'ME163'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 journal articles for your research on the topic 'ME163.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as pdf and read online its abstract whenever available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Berardelli, A. "ME13 Pathophysiology of dystonia." Clinical Neurophysiology 119 (May 2008): S4. http://dx.doi.org/10.1016/s1388-2457(08)60020-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

H-Kittikun, Aran, Poonsuk Prasertsan, Wolfgang Zimmermann, Phisit Seesuriyachan, and Thanongsak Chaiyaso. "Sugar Ester Synthesis by Thermostable Lipase from Streptomyces thermocarboxydus ME168." Applied Biochemistry and Biotechnology 166, no. 8 (March 21, 2012): 1969–82. http://dx.doi.org/10.1007/s12010-012-9624-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Neilson, Lynn M., Jianquong Zhu, Jianwu Xie, M. Grazia Malabarba, Kazuhito Sakamoto, Kay-Uwe Wagner, Robert A. Kirken, and Hallgeir Rui. "Coactivation of Janus Tyrosine Kinase (Jak)1 Positively Modulates Prolactin-Jak2 Signaling in Breast Cancer: Recruitment of ERK and Signal Transducer and Activator of Transcription (Stat)3 and Enhancement of Akt and Stat5a/b Pathways." Molecular Endocrinology 21, no. 9 (September 1, 2007): 2218–32. http://dx.doi.org/10.1210/me.2007-0173.

Full text
Abstract:
Abstract Prolactin (PRL) receptors (PRLRs) have been considered selective activators of Janus tyrosine kinase (Jak)2 but not Jak1, Jak3, or Tyk2. We now report marked PRL-induced tyrosine phosphorylation of Jak1, in addition to Jak2, in a series of human breast cancer cell lines, including T47D, MCF7, and SKBR3. In contrast, PRL did not activate Jak1 in immortalized, noncancerous breast epithelial lines HC11, MCF10A, ME16C, and HBL-100, or in CWR22Rv1 prostate cancer cells or MDA-MB-231 breast cancer cells. However, introduction of exogenous PRLR into MCF10A, ME16C, or MDA-MB-231 cells reconstituted both PRL-Jak1 and PRL-Jak2 signals. In vitro kinase assays verified that PRL stimulated enzymatic activity of Jak1 in T47D cells, and PRL activated Jak1 and Jak2 with indistinguishable time and dose kinetics. Relative Jak2 deficiency did not cause PRLR activation of Jak1, because overexpression of Jak2 did not interfere with PRL activation of Jak1. Instead, PRL activated Jak1 through a Jak2-dependent mechanism, based on disruption of PRL activation of Jak1 after Jak2 suppression by 1) lentiviral delivery of Jak2 short hairpin RNA, 2) adenoviral delivery of dominant-negative Jak2, and 3) AG490 pharmacological inhibition. Finally, suppression of Jak1 by lentiviral delivery of Jak1 short hairpin RNA blocked PRL activation of ERK and signal transducer and activator of transcription (Stat)3 and suppressed PRL activation of Jak2, Stat5a, Stat5b, and Akt, as well as tyrosine phosphorylation of PRLR. The data suggest that PRL activation of Jak1 represents a novel, Jak2-dependent mechanism that may serve as a regulatory switch leading to PRL activation of ERK and Stat3 pathways, while also serving to enhance PRL-induced Stat5a/b and Akt signaling.
APA, Harvard, Vancouver, ISO, and other styles
4

Coulon, Frédéric, Panagiota-Myrsini Chronopoulou, Anne Fahy, Sandrine Païssé, Marisol Goñi-Urriza, Louis Peperzak, Laura Acuña Alvarez, et al. "Central Role of Dynamic Tidal Biofilms Dominated by Aerobic Hydrocarbonoclastic Bacteria and Diatoms in the Biodegradation of Hydrocarbons in Coastal Mudflats." Applied and Environmental Microbiology 78, no. 10 (March 9, 2012): 3638–48. http://dx.doi.org/10.1128/aem.00072-12.

Full text
Abstract:
ABSTRACTMudflats and salt marshes are habitats at the interface of aquatic and terrestrial systems that provide valuable services to ecosystems. Therefore, it is important to determine how catastrophic incidents, such as oil spills, influence the microbial communities in sediment that are pivotal to the function of the ecosystem and to identify the oil-degrading microbes that mitigate damage to the ecosystem. In this study, an oil spill was simulated by use of a tidal chamber containing intact diatom-dominated sediment cores from a temperate mudflat. Changes in the composition of bacteria and diatoms from both the sediment and tidal biofilms that had detached from the sediment surface were monitored as a function of hydrocarbon removal. The hydrocarbon concentration in the upper 1.5 cm of sediments decreased by 78% over 21 days, with at least 60% being attributed to biodegradation. Most phylotypes were minimally perturbed by the addition of oil, but at day 21, there was a 10-fold increase in the amount of cyanobacteria in the oiled sediment. Throughout the experiment, phylotypes associated with the aerobic degradation of hydrocarbons, including polycyclic aromatic hydrocarbons (PAHs) (Cycloclasticus) and alkanes (Alcanivorax,Oleibacter, andOceanospirillalesstrain ME113), substantively increased in oiled mesocosms, collectively representing 2% of the pyrosequences in the oiled sediments at day 21. Tidal biofilms from oiled cores at day 22, however, consisted mostly of phylotypes related toAlcanivorax borkumensis(49% of clones),Oceanospirillalesstrain ME113 (11% of clones), and diatoms (14% of clones). Thus, aerobic hydrocarbon biodegradation is most likely to be the main mechanism of attenuation of crude oil in the early weeks of an oil spill, with tidal biofilms representing zones of high hydrocarbon-degrading activity.
APA, Harvard, Vancouver, ISO, and other styles
5

YOKOYAMA, T., K. YUYAMA, I. AKOJIMA, and S. MORIYA. "The Rocket Fighter Shusui—as Re-developed from Incomplete and Vague Me163B Data." Transactions of the Newcomen Society 70, no. 1 (January 1998): 257–76. http://dx.doi.org/10.1179/tns.1998.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Middleton, Douglas. "The profile packThe profile pack Mailing MacDonald et al ME13 8AR FavershamKent£15.95 incl postage." Nursing Standard 6, no. 18 (January 22, 1992): 44. http://dx.doi.org/10.7748/ns.6.18.44.s53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Slingluff, Jr., Craig L., Gina R. Petroni, Kimberly A. Chianese-Bullock, Nolan A. Wages, Walter C. Olson, Kelly T. Smith, Kathleen Haden, et al. "Trial to evaluate the immunogenicity and safety of a melanoma helper peptide vaccine plus incomplete Freund’s adjuvant, cyclophosphamide, and polyICLC (Mel63)." Journal for ImmunoTherapy of Cancer 9, no. 1 (January 2021): e000934. http://dx.doi.org/10.1136/jitc-2020-000934.

Full text
Abstract:
BackgroundPeptide vaccines designed to stimulate melanoma-reactive CD4+ T cells can induce T cell and antibody (Ab) responses, associated with enhanced overall survival. We hypothesized that adding toll-like receptor 3 agonist polyICLC to an incomplete Freund’s adjuvant (IFA) would be safe and would support strong, durable CD4+ T cell and Ab responses. We also hypothesized that oral low-dose metronomic cyclophosphamide (mCy) would be safe, would reduce circulating regulatory T cells (T-regs) and would further enhance immunogenicity.Participants and methodsAn adaptive design based on toxicity and durable CD4+ T cell immune response (dRsp) was used to assign participants with resected stage IIA-IV melanoma to one of four study regimens. The regimens included a vaccine comprising six melanoma peptides restricted by Class II MHC (6MHP) in an emulsion with IFA alone (Arm A), with IFA plus systemic mCy (Arm B), with IFA+ local polyICLC (Arm C), or with IFA+ polyICLC+ mCy (Arm D). Toxicities were recorded (CTCAE V.4.03). T cell responses were measured by interferon γ ELIspot assay ex vivo. Serum Ab responses to 6MHP were measured by ELISA. Circulating T-regs were assessed by flow cytometry.ResultsForty-eight eligible participants were enrolled and treated. Early data on safety and dRsp favored enrollment on arm D. Total enrollment on Arms A-D were 3, 7, 6, and 32, respectively. Treatment-related dose-limiting toxicities (DLTs) were observed in 1/7 (14%) participants on arm B and 2/32 (6%) on arm D. None exceeded the 25% DLT threshold for early closure to enrollment for any arm. Strong durable T cell responses to 6MHP were detected ex vivo in 0%, 29%, 67%, and 47% of participants on arms A-D, respectively. IgG Ab responses were greatest for arms C and D. Circulating T-regs frequencies were not altered by mCy.Conclusions6MHP vaccines administered with IFA, polyICLC, and mCy were well tolerated. The dRsp rate for arm D of 47% (90% CI 32 to 63) exceeded the 18% (90% CI 11 to 26) rate previously observed with 6MHP in IFA alone. Vaccination with IFA+ polyICLC (arm C) also showed promise for enhancing T cell and Ab responses.
APA, Harvard, Vancouver, ISO, and other styles
8

Guan, Z., J. Sun, Z. Wang, Y. Geng, and W. Xu. "Development of an Open Metadata Schema for Prospective Clinical Research (openPCR) in China." Methods of Information in Medicine 53, no. 01 (2014): 39–46. http://dx.doi.org/10.3414/me13-01-0008.

Full text
Abstract:
SummaryObjectives: In China, deployment of electronic data capture (EDC) and clinical data management system (CDMS) for clinical research (CR) is in its very early stage, and about 90% of clinical studies collected and submitted clinical data manually. This work aims to build an open metadata schema for Prospective Clinical Research (openPCR) in China based on openEHR archetypes, in order to help Chinese researchers easily create specific data entry templates for registration, study design and clinical data collection.Methods: Singapore Framework for Dublin Core Application Profiles (DCAP) is used to develop openPCR and four steps such as defining the core functional requirements and deducing the core metadata items, developing archetype models, defining metadata terms and creating archetype records, and finally developing implementation syntax are followed.Results: The core functional requirements are divided into three categories: requirements for research registration, requirements for trial design, and requirements for case report form (CRF). 74 metadata items are identified and their Chinese authority names are created. The minimum metadata set of openPCR includes 3 documents, 6 sections, 26 top level data groups, 32 lower data groups and 74 data elements. The top level container in openPCR is composed of public document, internal document and clinical document archetypes. A hierarchical structure of openPCR is established according to Data Structure of Electronic Health Record Architecture and Data Stand -ard of China (Chinese EHR Standard). Meta-data attributes are grouped into six parts: identification, definition, representation, relation, usage guides, and administration.Discussions and Conclusion: OpenPCR is an open metadata schema based on research registration standards, standards of the Clinical Data Interchange Standards Consortium (CDISC) and Chinese healthcare related stand -ards, and is to be publicly available throughout China. It considers future integration of EHR and CR by adopting data structure and data terms in Chinese EHR Standard. Archetypes in openPCR are modularity models and can be separated, recombined, and reused. The authors recommend that the method to develop openPCR can be referenced by other countries when designing metadata schema of clinical research. In the next steps, openPCR should be used in a number of CR projects to test its applicability and to continuously improve its coverage. Besides, metadata schema for research protocol can be developed to structurize and standardize protocol, and syntactical interoperability of openPCR with other related standards can be considered.
APA, Harvard, Vancouver, ISO, and other styles
9

Keller, H., M. Müller-Engelmann, M. Heinzel-Gutenbrunner, T. Krones, N. Donner-Banzhoff, and O. Hirsch. "Trying to Optimise the German Version of the OPTION Scale Regarding the Dyadic Aspect of Shared Decision Making." Methods of Information in Medicine 52, no. 06 (2013): 514–21. http://dx.doi.org/10.3414/me13-01-0011.

Full text
Abstract:
Summary Objectives: The OPTION scale (“observing patient involvement in decision making”) assesses the extent to which clinicians involve patients in decisions across a range of situations in clinical practice. It so far just covers physician behavior. We intended to modify the scoring of the OPTION scale to incorpo -rate active patient behavior in consultations. Methods: Modification was done on scoring level, attempting a dyadic, relationship-centred approach in that high ratings can be evoked also by the behaviour of active patients. The German version of the OPTION scale was compared with a modified version by analysing video recordings of primary care consultations dealing with cardiovascular prevention. Fifteen general practitioners provided 40 videotaped consultations. Videos were analysed by two rater pairs and two experts in shared decision making (SDM). Results: Reliability measures of the modified version were lower than those of the original scale. Significant associations of the dichotomised scale with the expert SDM rating as well as with physicians’ expertise in SDM were only found for the modified OPTION scale. Receiver Operating Characteristic (ROC) analyses confirmed a valid differentiation between the presence of SDM (yes/no) on total score level, even though the cut-off point was quite low. Standard deviations of the single items in the modified version were higher compared to the original OPTION scale, while the means of total scores were similar. Conclusions: The original OPTION scale is physician-centered and neglects the activity and a possible self-involvement of the patient. Our modified instruction was able to capture the dyadic element partially. The development of a separate dyadic instrument might be more promising.
APA, Harvard, Vancouver, ISO, and other styles
10

Luzi, D., and F. Pecoraro. "The Integration of the Risk Management Process with the Lifecycle of Medical Device Software." Methods of Information in Medicine 53, no. 02 (2014): 92–98. http://dx.doi.org/10.3414/me13-01-0014.

Full text
Abstract:
SummaryObjectives: The application of software in the Medical Device (MD) domain has become central to the improvement of diagnoses and treatments. The new European regulations that specifically address software as an important component of MD, require complex procedures to make software compliant with safety requirements, introducing thereby new challenges in the qualification and classi -fication of MD software as well as in the performance of risk management activities. Under this perspective, the aim of this paper is to propose an integrated framework that combines the activities to be carried out by the manufacturer to develop safe software within the development lifecycle based on the regulatory requirements reported in US and European regulations as well as in the relevant standards and guidelines.Methods: A comparative analysis was carried out to identify the main issues related to the application of the current new regulations. In addition, standards and guidelines recently released to harmonise procedures for the validation of MD software have been used to define the risk management ac -tivities to be carried out by the manufacturer during the software development process.Results: This paper highlights the main issues related to the qualification and classification of MD software, providing an analysis of the different regulations applied in Europe and the US. A model that integrates the risk management process within the software development lifecycle has been proposed too. It is based on regulatory requirements and considers software risk analysis as a central input to be managed by the manufacturer already at the initial stages of the software design, in order to prevent MD failures.Conclusions: Relevant changes in the process of MD development have been introduced with the recognition of software being an important component of MDs as stated in regulations and standards. This implies the performance of highly iterative processes that have to integrate the risk management in the framework of software development. It also makes it necessary to involve both medical and software engineering competences to safeguard patient and user safety.
APA, Harvard, Vancouver, ISO, and other styles
11

Bergrath, S., R. Rossaint, S. Thelen, T. Brodziak, B. Valentin, F. Hirsch, S. K. Beckers, J. C. Brokmann, and M. Czaplik. "Employment of Telemedicine in Emergency Medicine." Methods of Information in Medicine 53, no. 02 (2014): 99–107. http://dx.doi.org/10.3414/me13-01-0022.

Full text
Abstract:
SummaryObjectives: Demographic change, rising comorbidity and an increasing number of emer -gencies are the main challenges that emer -gency medical services (EMS) in several countries worldwide are facing. In order to improve quality in EMS, highly trained personnel and well-equipped ambulances are essential. However several studies have shown a deficiency in qualified EMS physicians. Telemedicine emerges as a complementary system in EMS that may provide expertise and improve quality of medical treatment on the scene. Hence our aim is to develop and test a specific teleconsultation system.Methods: During the development process several use cases were defined and technically specified by medical experts and en -gineers in the areas of: system administration, start-up of EMS assistance systems, audio communication, data transfer, routine tele-EMS physician activities and research capabilities. Upon completion, technical field tests were performed under realistic conditions to test system properties such as robustness, feasibility and usability, providing end-to-end measurements.Results: Six ambulances were equipped with telemedical facilities based on the results of the requirement analysis and 55 scenarios were tested under realistic conditions in one month. The results indicate that the developed system performed well in terms of usability and robustness. The major challenges were, as expected, mobile communication and data network availability. Third generation networks were only available in 76.4% of the cases. Although 3G (third generation), such as Universal Mobile Telecommunications System (UMTS), provides beneficial conditions for higher bandwidth, system performance for most features was also acceptable under adequate 2G (second generation) test conditions.Conclusions: An innovative concept for the use of telemedicine for medical consultations in EMS was developed. Organisational and technical aspects were considered and practical requirements specified. Since technical feasibility was demonstrated in these technical field tests, the next step would be to prove medical usefulness and technical robustness under real conditions in a clinical trial.
APA, Harvard, Vancouver, ISO, and other styles
12

Liu, K. E., C. L. Lo, and Y. H. Hu. "Improvement of Adequate Use of Warfarin for the Elderly Using Decision Tree-based Approaches." Methods of Information in Medicine 53, no. 01 (2014): 47–53. http://dx.doi.org/10.3414/me13-01-0027.

Full text
Abstract:
SummaryObjectives: Due to the narrow therapeutic range and high drug-to-drug interactions (DDIs), improving the adequate use of warfarin for the elderly is crucial in clinical practice. This study examines whether the effectiveness of using warfarin among elderly inpatients can be improved when machine learning techniques and data from the laboratory information system are incorporated.Methods: Having employed 288 validated clinical cases in the DDI group and 89 cases in the non-DDI group, we evaluate the prediction performance of seven classification techniques, with and without an Adaptive Boosting (AdaBoost) algorithm. Measures including accuracy, sensitivity, specificity and area under the curve are used to evaluate model performance.Results: Decision tree-based classifiers outperform other investigated classifiers in all evaluation measures. The classifiers supplemented with AdaBoost can generally improve the performance. In addition, weight, congestive heart failure, and gender are among the top three critical variables affecting prediction accuracy for the non-DDI group, while age, ALT, and warfarin doses are the most influential factors for the DDI group.Conclusion: Medical decision support systems incorporating decision tree-based approaches improve predicting performance and thus may serve as a supplementary tool in clinical practice. Information from laboratory tests and inpatients’ history should not be ignored because related variables are shown to be decisive in our prediction models, especially when the DDIs exist.
APA, Harvard, Vancouver, ISO, and other styles
13

Kamper, C. H., N. H. Rasmussen, P. Ahrendt, T. S. Toftegaard, O. W. Bertelsen, and S. Wagner. "Reliable Blood Pressure Self-measurement in the Obstetric Waiting Room." Methods of Information in Medicine 53, no. 03 (2014): 225–34. http://dx.doi.org/10.3414/me13-01-0032.

Full text
Abstract:
SummaryBackground: Patients often fail to adhere to clinical recommendations when using current blood pressure self-measurement (BPSM) methods and equipment. As existing BPSM equipment is not able to detect non-adherent behavior, this could result in mis-diagnosis and treatment error. To overcome this problem, we suggest introducing an alternative method for achieving reliable BPSM by measuring additional context meta-data for validating patient adherence. To facilitate this, we have developed ValidAid, a context-aware system for determining patient adherence levels during BPSM.Objectives: The aim of this study was to validate this new reliable BPSM method based on ValidAid in the clinical setting. Specifically, we wanted to evaluate ValidAid’s ability to accurately detect and model patient adherence levels during BPSM in the clinic.Methods: The validation was done by asking 41 pregnant diabetic patients scheduled for self-measuring their blood pressure (BP) in the waiting room at an obstetrics department’s outpatient clinic to perform an additional BPSM using ValidAid. We then compared the automatically measured and classified values from ValidAid with our manual observations.Results: We found that a) the pregnant diabetics did not adhere to given instructions when performing BPSM in the waiting room, and that b) the ValidAid system was able to accurately classify patient adherence to the modeled recommendations.Conclusions: A new method for ensuring reliable BPSM based on the ValidAid system was validated. Results indicate that context-aware technology is useful for accurately modeling important aspects of non-adherent patient behavior. This may be used to identify patients in need of additional training, or to design better aids to actively assist the patients during measurements. ValidAid is also applicable to other self-measurement environments including the home setting and outpatient clinics in remote or underserved areas as it is built using telemedicine technology and thus well-suited for remote monitoring and diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
14

Semjonow, A., E. Oezguer, J. Herden, I. Akbarov, A. Tok, U. Engelmann, S. Wille, and O. Eminaga. "An Electronic Specimen Collection Protocol Schema (eSCPS)." Methods of Information in Medicine 53, no. 01 (2014): 29–38. http://dx.doi.org/10.3414/me13-01-0035.

Full text
Abstract:
SummaryBackground: The integrity of collection protocols in biobanking is essential for a high-quality sample preparation process. However, there is not currently a well-defined universal method for integrating collection protocols in the biobanking information system (BIMS). Therefore, an electronic schema of the collection protocol that is based on Extensible Markup Language (XML) is required to maintain the integrity and enable the exchange of collection protocols.Materials and Methods: The development and implementation of an electronic specimen collection protocol schema (eSCPS) was performed at two institutions (Muenster and Cologne) in three stages. First, we analyzed the infrastructure that was already established at both the biorepository and the hospital information systems of these institutions and determined the requirements for the sufficient preparation of specimens and documentation. Second, we designed an eSCPS according to these requirements. Fi -nally, a prospective study was conducted to implement and evaluate the novel schema in the current BIMS.Results: We designed an eSCPS that provides all of the relevant information about collection protocols. Ten electronic collection protocols were generated using the supplementary Protocol Editor tool, and these protocols were successfully implemented in the existing BIMS. Moreover, an electronic list of collection protocols for the current studies being performed at each institution was included, new collection protocols were added, and the existing protocols were redesigned to be modifiable. The documentation time was significantly reduced after implementing the eSCPS (5 ± 2 min vs. 7 ± 3 min; p = 0.0002).Conclusion: The eSCPS improves the integrity and facilitates the exchange of specimen collection protocols in the existing open-source BIMS.
APA, Harvard, Vancouver, ISO, and other styles
15

Brown, S. H., G. Wright, and P. L. Elkin. "Biomedical Informatics: We Are What We Publish." Methods of Information in Medicine 52, no. 06 (2013): 538–46. http://dx.doi.org/10.3414/me13-01-0041.

Full text
Abstract:
SummaryIntroduction: This article is part of a For-Discussion-Section of Methods of Information in Medicine on “Biomedical Informatics: We are what we publish“. It is introduced by an editorial and followed by a commentary paper with invited comments. In subsequent issues the discussion may continue through letters to the editor.Objective: Informatics experts have attempted to define the field via consensus projects which has led to consensus statements by both AMIA. and by IMIA. We add to the output of this process the results of a study of the Pubmed publications with abstracts from the field of Biomedical Informatics.Methods: We took the terms from the AMIA consensus document and the terms from the IMIA definitions of the field of Biomedical Informatics and combined them through human review to create the Health Infor -matics Ontology. We built a terminology server using the Intelligent Natural Language Processor (iNLP). Then we downloaded the entire set of articles in Medline identified by searching the literature by “Medical Informatics” OR “Bioinformatics”. The articles were parsed by the joint AMIA / IMIA terminology and then again using SNOMED CT and for the Bioinformatics they were also parsed using HGNC Ontology.Results: We identified 153,580 articles using “Medical Informatics” and 20,573 articles using “Bioinformatics”. This resulted in 168,298 unique articles and an overlap of 5,855 articles. Of these 62,244 articles (37%) had titles and abstracts that contained at least one concept from the Health Infor -matics Ontology. SNOMED CT indexing showed that the field interacts with most all clinical fields of medicine.Conclusions: Further defining the field by what we publish can add value to the consensus driven processes that have been the mainstay of the efforts to date. Next steps should be to extract terms from the literature that are uncovered and create class hierarchies and relationships for this content. We should also examine the high occurring of MeSH terms as markers to define Biomedical Informatics. Greater understanding of the Biomedical Informatics Literature has the potential to lead to improved self-awareness for our field.
APA, Harvard, Vancouver, ISO, and other styles
16

Sikora, M., and Ł. Wróbel. "Censoring Weighted Separate-and-Conquer Rule Induction from Survival Data." Methods of Information in Medicine 53, no. 02 (2014): 137–48. http://dx.doi.org/10.3414/me13-01-0046.

Full text
Abstract:
SummaryObjectives: Rule induction is one of the major methods of machine learning. Rule-based models can be easily read and interpreted by humans, that makes them particularly useful in survival studies as they can help clinicians to better understand analysed data and make informed decisions about patient treatment. Although of such usefulness, there is still a little research on rule learning in survival analysis. In this paper we take a step towards rule-based analysis of survival data.Methods: We investigate so-called covering or separate-and-conquer method of rule induction in combination with a weighting scheme for handling censored observations. We also focus on rule quality measures being one of the key elements differentiating particular implementations of separate-and-conquer rule induction algorithms. We examine 15 rule quality measures guiding rule induction process and reflecting a wide range of different rule learning heuristics.Results: The algorithm is extensively tested on a collection of 20 real survival datasets and compared with the state-of-the-art survival trees and random survival forests algorithms. Most of the rule quality measures outperform Kaplan-Meier estimate and perform at least equally well as tree-based algorithms.Conclusions: Separate-and-conquer rule induction in combination with weighting scheme is an effective technique for building rule-based models of survival data which, according to predictive accuracy, are competitive with tree-based representations.
APA, Harvard, Vancouver, ISO, and other styles
17

Karvounis, E. C., M. G. Tsipouras, A. T. Tzallas, N. S. Katertsidis, K. Stefanou, Y. Goletsis, M. Frigerio, et al. "A Decision Support System for the Treatment of Patients with Ventricular Assist Device Support." Methods of Information in Medicine 53, no. 02 (2014): 121–36. http://dx.doi.org/10.3414/me13-01-0047.

Full text
Abstract:
SummaryBackground: Heart failure (HF) is affecting millions of people every year and it is characterized by impaired ventricular performance, exercise intolerance and shortened life expectancy. Despite significant advancements in drug therapy, mortality of the disease remains excessively high, as heart transplant remains the gold standard treatment for end-stage HF when no contraindications subsist. Traditionally, implanted Ventricular Assist Devices (VADs) have been employed in order to provide circulatory support to patients who cannot survive the waiting time to transplantation, reducing the workload imposed on the heart. In many cases that process could recover its contractility performance.Objectives: The SensorART platform focuses on the management and remote treatment of patients suffering from HF. It provides an inter-operable, extendable and VAD-independent solution, which incorporates various hardware and software components in a holistic approach, in order to improve the quality of the patients’ treatment and the workflow of the specialists. This paper focuses on the description and analysis of Specialist’s Decision Support System (SDSS), an innovative component of the SensorART platform.Methods: The SDSS is a Web-based tool that assists specialists on designing the therapy plan for their patients before and after VAD implantation, analyzing patients’ data, extracting new knowledge, and making informative decisions.Results: SDSS offers support to medical and VAD experts through the different phases of VAD therapy, incorporating several tools covering all related fields; Statistics, Association Rules, Monitoring, Treatment, Weaning, Speed and Suction Detection.Conclusions: SDSS and its modules have been tested in a number of patients and the results are encouraging.
APA, Harvard, Vancouver, ISO, and other styles
18

Choi, Y., C. Jung, Y. Chae, M. Kang, J. Kim, K. Joung, J. Lim, et al. "Comparison of Validity of Mapping between Drug Indications and ICD-10." Methods of Information in Medicine 53, no. 03 (2014): 195–201. http://dx.doi.org/10.3414/me13-01-0053.

Full text
Abstract:
SummaryBackground: Mapping of drug indications to ICD-10 was undertaken in Korea by a public and a private institution for their own purposes. A different mapping approach was used by each institution, which presented a good opportunity to compare the validity of the two approaches.Objectives: This study was undertaken to compare the validity of a direct mapping approach and an indirect terminology based mapping approach of drug indications against the gold standard drawn from the results of the two mapping processes.Methods: Three hundred and seventy-five cardiovascular reference drugs were selected from all listed cardiovascular drugs for the study. In the direct approach, two experienced nurse coders mapped the free text indications directly to ICD-10. In the indirect terminology based approach, the indications were extracted and coded in the Korean Standard Terminology of Medicine. These terminology coded indications were then manually mapped to ICD-10. The results of the two approaches were compared to the gold standard. A kappa statistic was calculated to see the compatibility of both mapping approaches. Recall, precision and F1 score of each mapping approach were calculated and analyzed using a paired t-test.Results: The mean number of indications for the study drugs was 5.42. The mean number of ICD-10 codes that matched in direct approach was 46.32 and that of indirect terminology based approach was 56.94. The agreement of the mapping results between the two approaches were poor (kappa = 0.19). The indirect terminology based approach showed higher recall (86.78%) than direct approach (p < 0.001). However, there was no difference in precision and F1 score between the two approaches.Conclusions: Considering no differences in the F1 scores, both approaches may be used in practice for mapping drug indications to ICD-10. However, in terms of consistency, time and manpower, better results are expected from the indirect terminology based approach.
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, Y., S. Chiu, Y. Lin, and W. K. Chiou. "Pictogram-based Method of Visualizing Dietary Intake." Methods of Information in Medicine 53, no. 06 (2014): 493–500. http://dx.doi.org/10.3414/me13-01-0072.

Full text
Abstract:
SummaryBackground: Pictograms have been shown by many studies to be an effective way of conveying information. An easy-to-understand pictorial description is essential for communication of dietary intake in the computer era.Objectives: We proposed a novel approach that represents textual descriptions of dietary intake into a pictorial representation with the concept of pictograms. The computational implementation in terms of a web-based tool was investigated on how well the pictograms carry their intended message.Methods: 1) We investigated how well the pictograms are comprehended in terms of subjects’ accuracy rate and response time. In the study (n = 90), pictorial variants with three types of food images (black-and-white sketch, colored sketch, and colored photograph) were tested. 2) We also investigated how well subjects were able to select the standard food size among various food portions with the use of the tool. A comparison was made against the current standard of an educational session taught by a registered dietitian. We recruited 86 university students who were asked to select a standard size out of five different size categories. Three types of shapes were used. The bowl is the container that is widely used in the participants’ country. The pork strip was to represent foods with elliptical cross-section. The apple was used to represent a baseball-like size and shape.Results: Two pictograms with black-and-white food image were low of less than 50% in accuracy rate. The rest of the twenty-seven pictograms derived from portions of the nine foods were well understood with high accuracy rates (above 85%). Participants in using the tool without the dietitian’s session was better than participants in the dietitian education session in selecting a standard portion size of an apple (p < 0.0001; p = 0.0009 after adjustment for gender and age). The rate of correct bowl and pork strip size estimates were similar between the two conditions (p > 0.05).Conclusion: The development of pictograms could be used as a computational visual aid for comprehending and identifying dietary intake. Broader investigation is required for considering the effectiveness of the picto-grams on recall, measurement, or estimation as well as for further evaluation in the clinical practice.
APA, Harvard, Vancouver, ISO, and other styles
20

Preuß, M., and A. Ziegler. "A Simplification and Implementation of Random-effects Meta-analyses Based on the Exact Distribution of Cochran’s Q." Methods of Information in Medicine 53, no. 01 (2014): 54–61. http://dx.doi.org/10.3414/me13-01-0073.

Full text
Abstract:
SummaryBackground: The random-effects (RE) model is the standard choice for meta-analysis in the presence of heterogeneity, and the stand ard RE method is the DerSimonian and Laird (DSL) approach, where the degree of heterogeneity is estimated using a moment-estimator. The DSL approach does not take into account the variability of the estimated heterogeneity variance in the estimation of Cochran’s Q. Biggerstaff and Jackson derived the exact cumulative distribution function (CDF) of Q to account for the variability of Ť 2.Objectives: The first objective is to show that the explicit numerical computation of the density function of Cochran’s Q is not required. The second objective is to develop an R package with the possibility to easily calculate the classical RE method and the new exact RE method.Methods: The novel approach was validated in extensive simulation studies. The different approaches used in the simulation studies, including the exact weights RE meta-analysis, the I 2 and T 2 estimates together with their confidence intervals were implemented in the R package metaxa.Results: The comparison with the classical DSL method showed that the exact weights RE meta-analysis kept the nominal type I error level better and that it had greater power in case of many small studies and a single large study. The Hedges RE approach had inflated type I error levels. Another advantage of the exact weights RE meta-analysis is that an exact confidence interval for T 2is readily available. The exact weights RE approach had greater power in case of few studies, while the restricted maximum likelihood (REML) approach was superior in case of a large number of studies. Differences between the exact weights RE meta-analysis and the DSL approach were observed in the re-analysis of real data sets. Application of the exact weights RE meta-analysis, REML, and the DSL approach to real data sets showed that conclusions between these methods differed.Conclusions: The simplification does not require the calculation of the density of Cochran’s Q, but only the calculation of the cumulative distribution function, while the previous approach required the computation of both the density and the cumulative distribution function. It thus reduces computation time, improves numerical stability, and reduces the approximation error in meta-analysis. The different approaches, including the exact weights RE meta-analysis, the I 2 and T 2estimates together with their confidence intervals are available in the R package metaxa, which can be used in applications.
APA, Harvard, Vancouver, ISO, and other styles
21

Haag, M., L. R. Pilz, and D. Schrimpf. "Possible Combinations of Electronic Data Capture and Randomization Systems." Methods of Information in Medicine 53, no. 03 (2014): 202–7. http://dx.doi.org/10.3414/me13-01-0074.

Full text
Abstract:
SummaryBackground: Clinical trials (CT) are in a wider sense experiments to prove and establish clinical benefit of treatments. Nowadays electronic data capture systems (EDCS) are used more often bringing a better data management and higher data quality into clinical practice. Also electronic systems for the randomization are used to assign the patients to the treatments.Objectives: If the mentioned randomization system (RS) and EDCS are used, possibly identical data are collected in both, especially by stratified randomization. This separated data storage may lead to data inconsistency and in general data samples have to be aligned. The article discusses solutions to combine RS and EDCS. In detail one approach is realized and introduced.Methods: Different possible settings of combination of EDCS and RS are determined and the pros and cons for each solution are worked out. For the combination of two independent applications the necessary interfaces for the communication are defined. Thereby, existing standards are considered. An example realization is implemented with the help of open-source applications and state-of-the-art software development procedures.Results: Three possibilities of separate usage or combination of EDCS and RS are pre -sented and assessed: i) the complete independent usage of both systems; ii) realization of one system with both functions; and iii) two separate systems, which communicate via defined interfaces. In addition a realization of our preferred approach, the combination of both systems, is introduced using the open source tools RANDI2 and Open-Clinica.Conclusion: The advantage of a flexible independent development of EDCS and RS is shown based on the fact that these tool are very different featured. In our opinion the combination of both systems via defined interfaces fulfills the requirements of randomization and electronic data capture and is feasible in practice. In addition, the use of such a setting can reduce the training costs and the error-prone duplicated data entry.
APA, Harvard, Vancouver, ISO, and other styles
22

Cimino, J. J., and X. Jing. "A Complementary Graphical Method for Reducing and Analyzing Large Data Sets." Methods of Information in Medicine 53, no. 03 (2014): 173–85. http://dx.doi.org/10.3414/me13-01-0075.

Full text
Abstract:
SummaryObjectives: Graphical displays can make data more understandable; however, large graphs can challenge human comprehension. We have previously described a filtering method to provide high-level summary views of large data sets. In this paper we demonstrate our method for setting and selecting thresholds to limit graph size while retaining important information by applying it to large single and paired data sets, taken from patient and bibliographic databases.Methods: Four case studies are used to illustrate our method. The data are either patient discharge diagnoses (coded using the International Classification of Diseases, Clinical Modifications [ICD9-CM]) or Medline citations (coded using the Medical Subject Headings [MeSH]). We use combinations of different thresholds to obtain filtered graphs for detailed analysis. The thresholds setting and selection, such as thresholds for node counts, class counts, ratio values, p values (for diff data sets), and percentiles of selected class count thresholds, are demonstrated with details in case studies. The main steps include: data preparation, data manipulation, computation, and threshold selection and visualization. We also describe the data models for different types of thresholds and the considerations for thresholds selection.Results: The filtered graphs are 1%-3% of the size of the original graphs. For our case studies, the graphs provide 1) the most heavily used ICD9-CM codes, 2) the codes with most patients in a research hospital in 2011, 3) a profile of publications on “heavily represented topics” in MEDLINE in 2011, and 4) validated knowledge about adverse effects of the medication of rosiglitazone and new interesting areas in the ICD9-CM hierarchy associated with patients taking the medication of pioglitazone.Conclusions: Our filtering method reduces large graphs to a manageable size by re -moving relatively unimportant nodes. The graphical method provides summary views based on computation of usage frequency and semantic context of hierarchical ter -minology. The method is applicable to large data sets (such as a hundred thousand records or more) and can be used to generate new hypotheses from data sets coded with hierarchical terminologies.
APA, Harvard, Vancouver, ISO, and other styles
23

Abdekhoda, M., A. Dehnad, A. F. Hosseini, and M. Ahmadi. "Information Technology Acceptance in Health Information Management." Methods of Information in Medicine 53, no. 01 (2014): 14–20. http://dx.doi.org/10.3414/me13-01-0079.

Full text
Abstract:
SummaryObjective: User acceptance of information technology has been a significant area of research for more than two decades in the field of information technology. This study assessed the acceptance of information technology in the context of Health Information Management (HIM) by utilizing Technology Acceptance Model (TAM) which was modified and applied to assess user acceptance of health information technology as well as viability of TAM as a research construct in the context of HIM.Methods: This was a descriptive- analytical study in which a sample of 187 personnel from a population of 363 personnel, working in medical records departments of hospitals affiliated to Tehran University of Medical Sciences, was selected. Users’ perception of applying information technology was studiedby a researcher-developed questionnaire. Collected data were analyzed by SPSS software (version16) using descriptive statistics and regression analysis.Results: The results suggest that TAM is a useful construct to assess user acceptance of information technology in the context of HIM. The findings also evidenced the per -ceived ease of use (PEOU) and perceived usefulness (PE) were positively associated with favorable users’ attitudes towards HIM. PU was relatively more associated ( r= 0.22, p = 0.05) than PEOU (r = 0.014, p = 0.05) with favorable user attitudes towards HIM.Conclusions: Users’ perception of usefulness and ease of use are important determinants providing the incentive for users to accept information technologies when the application of a successful HIM system is attempted. The findings of the present study suggest that user acceptance is a key element and should subsequently be the major concern of health organizations and health policy makers.
APA, Harvard, Vancouver, ISO, and other styles
24

Kalpathy-Cramer, J., S. You, K. Keck, D. Erdogmus, M. F. Chiang, and E. Ataer-Cansizoglu. "Analysis of Underlying Causes of Inter-expert Disagreement in Retinopathy of Prematurity Diagnosis." Methods of Information in Medicine 54, no. 01 (2015): 93–102. http://dx.doi.org/10.3414/me13-01-0081.

Full text
Abstract:
SummaryObjective: Inter-expert variability in image-based clinical diagnosis has been demonstrated in many diseases including retinopathy of prematurity (ROP), which is a disease affecting low birth weight infants and is a major cause of childhood blindness. In order to better understand the underlying causes of variability among experts, we propose a method to quantify the variability of expert decisions and analyze the relationship between expert diagnoses and features computed from the images. Identification of these features is relevant for development of computer-based decision support systems and educational systems in ROP, and these methods may be applicable to other diseases where inter-expert variability is observed.Methods: The experiments were carried out on a dataset of 34 retinal images, each with diagnoses provided independently by 22 experts. Analysis was performed using concepts of Mutual Information (MI) and Kernel Density Estimation. A large set of structural features (a total of 66) were extracted from retinal images. Feature selection was utilized to identify the most important features that correlated to actual clinical decisions by the 22 study experts. The best three features for each observer were selected by an exhaustive search on all possible feature subsets and considering joint MI as a relevance criterion. We also compared our results with the results of Cohen’s Kappa [36] as an inter-rater reliability measure.Results: The results demonstrate that a group of observers (17 among 22) decide consistently with each other. Mean and second central moment of arteriolar tortuosity is among the reasons of disagreement between this group and the rest of the observers, meaning that the group of experts consider amount of tortuosity as well as the variation of tortuosity in the image.Conclusion: Given a set of image-based features, the proposed analysis method can identify critical image-based features that lead to expert agreement and disagreement in diagnosis of ROP. Although tree-based features and various statistics such as central moment are not popular in the literature, our results suggest that they are important for diagnosis.
APA, Harvard, Vancouver, ISO, and other styles
25

Wilk, S., W. Michalowski, R. Slowinski, R. Thomas, M. Kadzinski, K. Farion, and D. O´Sullivan. "Learning the Preferences of Physicians for the Organization of Result Lists of Medical Evidence Articles." Methods of Information in Medicine 53, no. 05 (2014): 344–56. http://dx.doi.org/10.3414/me13-01-0085.

Full text
Abstract:
SummaryBackground: Online medical knowledge repositories such as MEDLINE and The Cochrane Library are increasingly used by physicians to retrieve articles to aid with clinical decision making. The prevailing approach for organizing retrieved articles is in the form of a rank-ordered list, with the assumption that the higher an article is presented on a list, the more relevant it is.Objectives: Despite this common list-based organization, it is seldom studied how physicians perceive the association between the relevance of articles and the order in which articles are presented. In this paper we describe a case study that captured physician preferences for 3-element lists of medical articles in order to learn how to organize medical knowledge for decision-making.Methods: Comprehensive relevance evaluations were developed to represent 3-element lists of hypothetical articles that may be retrieved from an online medical knowledge source such as MEDLINE or The Cochrane Library. Comprehensive relevance evalua tions asses not only an article’s relevance for a query, but also whether it has been placed on the correct list position. In other words an article may be relevant and correctly placed on a result list (e.g. the most relevant article appears first in the result list), an article may be relevant for a query but placed on an incorrect list position (e.g. the most relevant article appears second in a result list), or an article may be irrelevant for a query yet still appear in the result list. The relevance evaluations were presented to six senior physi cians who were asked to express their preferences for an article’s relevance and its position on a list by pairwise comparisons representing different combinations of 3-element lists. The elicited preferences were assessed using a novel GRIP (Generalized Regression with Intensities of Preference) method and represented as an additive value function. Value functions were derived for individual physicians as well as the group of physicians.Results: The results show that physicians assign significant value to the 1st position on a list and they expect that the most relevant article is presented first. Whilst physicians still prefer obtaining a correctly placed article on position 2, they are also quite satisfied with misplaced relevant article. Low consideration of the 3rd position was uniformly confirmed.Conclusions: Our findings confirm the importance of placing the most relevant article on the 1st position on a list and the importance paid to position on a list significantly diminishes after the 2nd position. The derived value functions may be used by developers of clinical decision support applications to decide how best to organize medical knowledge for decision making and to create personalized evaluation measures that can augment typical measures used to evaluate information retrieval systems.
APA, Harvard, Vancouver, ISO, and other styles
26

König, J., M. Blettner, P. Wild, T. Münzel, K. Lackner, S. Blankenberg, N. Pfeiffer, M. Beutel, I. Zwiener, and B. Engelhardt. "Combining Cross-sectional Data on Prevalence with Risk Estimates from a Prediction Model." Methods of Information in Medicine 53, no. 05 (2014): 371–79. http://dx.doi.org/10.3414/me13-01-0088.

Full text
Abstract:
SummaryObjectives: Estimation of the attributable risk for fatal diseases by combining two different data sources.Methods: We derive a method to estimate the attributable risks of different risk factors by combining general mortality risks with upto-date prevalences of the risk factors using estimates from a risk prediction model and cross-sectional data of a cohort study. Partial attributable risks have been used to illustrate the proportions of the different risk factors for the attributable risk. In addition we derive standard errors for the attributable risk based on the Taylor series expansion. Since the data of our cohort study was sampled with the same size in each 10 years age stratum which does not reflect the age-structure of the general population, the attributable risk and its standard errors are calculated using an approach that allows the weighting of the data according to population proportions of age. The formula for the standard errors has been evaluated using bootstrap-techniques.Results: We successfully implemented the method for the estimation of the attributable risk and its standard errors by integrating risk information using data of the HeartScore Germany and cross-sectional data emerging from the Gutenberg Health Study. The attributable risk can now be calculated without using the information of the overall disease rate. The bootstrap method shows, that the formula for the standard errors is useful.Conclusion: Our method allows for the combination of different data sources in order to estimate attributable risks and our formula for the standard errors seems to yield a good approximation. But the validity of our method highly depends on the validity of the underlying data sources.
APA, Harvard, Vancouver, ISO, and other styles
27

Marceglia, S., F. Pinciroli, and S. Bonacina. "A Pictorial Schema for a Comprehensive User-oriented Identification of Medical Apps." Methods of Information in Medicine 53, no. 03 (2014): 208–24. http://dx.doi.org/10.3414/me13-01-0093.

Full text
Abstract:
SummaryObjectives: The huge amount of released medical apps prevents medical app users from believing that medical scientific societies and other accreditation bodies as well, have the resources and the power for assigning to any medical app a quality score. By the time being, any medical app user has to take the risks related to the frequently insufficient accreditation of that app. Providing clear user-oriented schemas, to be adopted both when putting a medical App on the market and when an App comes to be evaluated by a cohort or single users, becomes crucial. The aim of our research was to define a pictorial identification one-shot schema for a comprehensive user-oriented identification of medical apps.Methods: Adopting a pictorial approach is common in software design modeling. To build up our identification schema we started from the limited number of Apps already available on a web site of app reviews (iMedicalApps.com), and we identified an appropriately large set of attributes for describing medical apps. We arranged the attributes in six main families. We organized them in a one-shot comprehensive pictorial schema. We adopted a traffic light color code for assessing each attribute, that was sufficient to provide simple elements of alerts and alarms regarding a single App. Then, we considered apps from iMedicalApps.com web site belonging to three medical specialties: cardiology, oncology, and pharma and analyzed them according to the proposed pictorial schema.Results: A pictorial schema having the attributes grouped in the families related to “Responsible Promoters”, “Offered Services”, “Searching Methods”, “Applications Domains”, “Envisaged Users”, and “Qualifiers and Quantifiers” has been identified. Furthermore, we produced a one-shot pictorial schema for each considered app, and for each medical specialty, we produced it also in an aggregated form.Conclusions: The one-shot pictorial schema provides a useful perception of when and where to use a considered app. It fits positively the expectations of potential but different user’s profiles. It can be a first step towards a systematic assessment of apps from the user viewpoint.
APA, Harvard, Vancouver, ISO, and other styles
28

Thomas, A. M., J. M. Dean, L. M. Olson, and L. J. Cook. "The Utility of Imputed Matched Sets." Methods of Information in Medicine 53, no. 03 (2014): 186–94. http://dx.doi.org/10.3414/me13-01-0094.

Full text
Abstract:
Objective: To compare results from high-probability matched sets versus imputed matched sets across differing levels of linkage information. Methods: A series of linkages with varying amounts of available information was performed on two simulated datasets derived from multiyear motor vehicle crash (MVC) and hospital databases, where the true matches were known. Distributions of high-probability and imputed matched sets were compared against the true match population for occupant age, MVC county, and MVC hour. Regression models were fit to simulated log hospital charges and hospitalization status. Results: In high information settings, high-probability and imputed matched sets did not differ significantly from the true match population with respect to occupant age, MVC county, and MVC hour (p > 0.999). In low information settings, high-probability matched sets differed significantly with respect to occupant age and MVC county (p < 0.002), but imputed matched sets did not (p > 0.493). In high information settings there were no significant differences between the two methods in the inference on simulated log hospital charges and hospitalization status. In low information settings, both high-probability and imputed matched sets differed significantly from the true outcomes; however, imputed matched sets were more robust. Conclusions: The level of information available to a linkage is an important consideration. High-probability matched sets are suitable for high to moderate information settings and for situations involving case-specific analysis. Conversely, imputed matched sets are preferable for low information settings when conducting population-based analyses.
APA, Harvard, Vancouver, ISO, and other styles
29

Ryan, P., J. E. Hiller, and G. Mnatzaganian. "Does Co-morbidity Provide Significant Improvement on Age Adjustment when Predicting Medical Outcomes?" Methods of Information in Medicine 53, no. 02 (2014): 115–20. http://dx.doi.org/10.3414/me13-01-0095.

Full text
Abstract:
Objective: Using three risk-adjustment methods, we evaluated whether co-morbidity derived from electronic hospital patient data provided significant improvement over age adjustment when predicting major outcomes following an elective total joint replacement (TJR) due to osteoarthritis. Methods: Longitudinal data from 819 elderly men who had had a TJR were integrated with hospital morbidity data (HMD) and mortality records. For each participant, any morbidity or health-related outcome was retrieved from the linked data in the period 1970 through to 2007, which enabled us to better account for patient co-morbidities. Co-morbidities recorded in the HMD in all admissions preceding the index TJR admission were used to construct three risk-adjustment methods, namely the Charlson co-morbidity index (CCI), Elixhauser's adjustment method, and the number of co-morbidities. Postoperative outcomes evaluated included length of hospital stay, 90-day readmission, and 1-year and 2-year mortality. These were modelled using Cox proportional hazards regression as a function of age for the baseline models, and as a function of age and each of the risk-adjustment methods. The difference in statistical performance between the models that included age alone and those that also included a co-morbidity adjustment method was assessed by measuring the difference in Harrell's C estimates between pairs of models applied to the same patient data, using bootstrap analysis with 1000 replications. Results: The number of co-morbidities did not provide any significant improvement in model discrimination when added to the baseline models, for any outcome. The CCI significantly improved model discrimination when predicting postoperative mortality but not when length of stay or readmission was modelled. For every one-point increase in CCI, postoperative 1- and 2-year mortality increased by 37% and 30%, respectively. Elixhauser's method outperformed the other two, providing significant improvement over age adjustment for all outcomes. Conclusion: The predictive performance of co-morbidity derived from electronic hospital data is outcome- and risk-adjustment-method specific.
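To make the comparison of risk-adjustment methods concrete, here is a minimal sketch, on simulated data with invented coefficients, that fits an age-only and an age-plus-Charlson Cox model and reads off their Harrell's C (the lifelines package is assumed; the study additionally bootstraps the difference in C with 1000 replications):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({"age": rng.normal(72, 6, n), "charlson": rng.poisson(1.5, n)})

# Simulated time to death (months): the hazard increases with age and Charlson score.
risk = 0.03 * (df["age"] - 72) + 0.30 * df["charlson"]
time = rng.exponential(scale=np.exp(-risk) * 24)
df["event"] = (time < 24).astype(int)     # death observed within two years
df["time"] = np.minimum(time, 24.0)       # administrative censoring at two years

base = CoxPHFitter().fit(df[["age", "time", "event"]], "time", "event")
full = CoxPHFitter().fit(df, "time", "event")
print(f"Harrell's C, age only: {base.concordance_index_:.3f}; "
      f"age + Charlson: {full.concordance_index_:.3f}")
```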
APA, Harvard, Vancouver, ISO, and other styles
30

Goñi, A., A. Illarramendi, and D. Antón. "Exercise Recognition for Kinect-based Telerehabilitation." Methods of Information in Medicine 54, no. 02 (2015): 145–55. http://dx.doi.org/10.3414/me13-01-0109.

Full text
Abstract:
Background: An aging population and higher survival rates after diseases and traumas that leave physical consequences are challenging aspects for efficient health management. This is why telerehabilitation systems are being developed: they allow physiotherapy sessions to be monitored and supported at home, which could reduce healthcare costs while also improving the users' quality of life. Objectives: Our goal is the development of a Kinect-based algorithm that provides very accurate real-time monitoring of physical rehabilitation exercises and that also offers a friendly interface oriented to both users and physiotherapists. Methods: The two main constituents of our algorithm are the posture classification method and the exercise recognition method. The exercises consist of series of movements. Each movement is composed of an initial posture, a final posture, and the angular trajectories of the limbs involved in the movement. The algorithm was designed and tested with datasets of real movements performed by volunteers. We also explain in the paper how we obtained the optimal trade-off values for posture and trajectory recognition. Results: Two relevant aspects of the algorithm were evaluated in our tests: classification accuracy and real-time data processing. We achieved 91.9% accuracy in posture classification and 93.75% accuracy in trajectory recognition. We also checked whether the algorithm was able to process the data in real time and found that it could process more than 20,000 postures per second and all the required trajectory data series in real time, which in practice guarantees no perceptible delays. Later on, we carried out two clinical trials with real patients who suffered from shoulder disorders and obtained an exercise monitoring accuracy of 95.16%. Conclusions: We present an exercise recognition algorithm that handles the data provided by Kinect efficiently. The algorithm has been validated in a real scenario where we have verified its suitability. Moreover, we received positive feedback from both the users and the physiotherapists who took part in the tests.
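A much simplified illustration of the posture-classification step is a nearest-centroid classifier over joint-angle vectors; the reference postures and the distance threshold below are invented, and the published algorithm additionally matches the angular trajectories between initial and final postures:

```python
import numpy as np

# Hypothetical reference postures: mean joint angles (degrees) for three arm positions.
reference_postures = {
    "arms_down": np.array([10.0, 12.0, 5.0]),
    "arms_T":    np.array([90.0, 88.0, 4.0]),
    "arms_up":   np.array([170.0, 168.0, 6.0]),
}

def classify_posture(angles, max_distance=20.0):
    """Return the closest reference posture, or None if nothing is close enough."""
    label, dist = min(((name, np.linalg.norm(angles - ref))
                       for name, ref in reference_postures.items()),
                      key=lambda pair: pair[1])
    return label if dist <= max_distance else None

print(classify_posture(np.array([85.0, 92.0, 7.0])))   # -> arms_T
```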
APA, Harvard, Vancouver, ISO, and other styles
31

Arias, M., and F. J. Díez. "Cost-effectiveness Analysis with Influence Diagrams." Methods of Information in Medicine 54, no. 04 (2015): 353–58. http://dx.doi.org/10.3414/me13-01-0121.

Full text
Abstract:
Background: Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Objective: To develop a method for CEA in problems involving several dozen variables. Methods: We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. Results: The evaluation of an ID returns a set of intervals for the willingness to pay – separated by cost-effectiveness thresholds – and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. Conclusion: IDs can perform CEA on large problems that cannot be analyzed with decision trees.
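The interval structure of the result (one optimal intervention per willingness-to-pay range) can be illustrated by brute force with net monetary benefit, NMB = WTP x effectiveness - cost; the costs and QALY values below are invented, and the paper's point is precisely to avoid this kind of enumeration by evaluating the influence diagram directly:

```python
# Toy cost-effectiveness comparison: pick the intervention with the highest net
# monetary benefit for each willingness-to-pay (WTP) value and report the
# thresholds at which the optimal choice changes.
interventions = {
    "no treatment": {"cost": 0.0,     "qalys": 5.0},
    "drug A":       {"cost": 8000.0,  "qalys": 5.6},
    "drug B":       {"cost": 25000.0, "qalys": 6.1},
}

def best(wtp):
    return max(interventions,
               key=lambda name: wtp * interventions[name]["qalys"] - interventions[name]["cost"])

previous = None
for wtp in range(0, 60001, 1000):
    choice = best(wtp)
    if choice != previous:
        print(f"WTP >= {wtp:>6} per QALY: {choice}")
        previous = choice
```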
APA, Harvard, Vancouver, ISO, and other styles
32

Binder, H., O. Gefeller, M. Schmid, and A. Mayr. "The Evolution of Boosting Algorithms." Methods of Information in Medicine 53, no. 06 (2014): 419–27. http://dx.doi.org/10.3414/me13-01-0122.

Full text
Abstract:
Background: The concept of boosting emerged from the field of machine learning. The basic idea is to boost the accuracy of a weak classifying tool by combining multiple instances of it into a more accurate ensemble prediction. This general concept was later adapted to the field of statistical modelling. Nowadays, boosting algorithms are often applied to estimate and select predictor effects in statistical regression models. Objectives: This review article attempts to highlight the evolution of boosting algorithms from machine learning to statistical modelling. Methods: We describe the AdaBoost algorithm for classification as well as the two most prominent statistical boosting approaches, gradient boosting and likelihood-based boosting, for statistical modelling. We highlight the methodological background and present the most common software implementations. Results: Although gradient boosting and likelihood-based boosting are typically treated separately in the literature, they share the same methodological roots and follow the same fundamental concepts. Compared to the initial machine learning algorithms, which must be seen as black-box prediction schemes, they result in statistical models with a straightforward interpretation. Conclusions: Statistical boosting algorithms have gained substantial interest during the last decade and offer a variety of options to address important research questions in modern biomedicine.
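As a minimal illustration of the machine-learning origin of boosting, the sketch below fits AdaBoost with its default depth-one decision stumps on a toy dataset; scikit-learn is assumed, and this is not code from the review itself:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost's default base learner is a depth-1 decision tree ("stump"): a weak
# classifier that is repeatedly refit on reweighted samples and combined into
# a stronger ensemble.
model = AdaBoostClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```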
APA, Harvard, Vancouver, ISO, and other styles
33

Binder, H., O. Gefeller, M. Schmid, and A. Mayr. "Extending Statistical Boosting." Methods of Information in Medicine 53, no. 06 (2014): 428–35. http://dx.doi.org/10.3414/me13-01-0123.

Full text
Abstract:
Background: Boosting algorithms that simultaneously estimate and select predictor effects in statistical models have gained substantial interest during the last decade. Objectives: This review highlights recent methodological developments regarding boosting algorithms for statistical modelling, especially focusing on topics relevant for biomedical research. Methods: We suggest a unified framework for gradient boosting and likelihood-based boosting (statistical boosting), which have been addressed separately in the literature up to now. Results: The methodological developments on statistical boosting during the last ten years can be grouped into three different lines of research: i) efforts to ensure variable selection leading to sparser models, ii) developments regarding different types of predictor effects and how to choose them, and iii) approaches to extend the statistical boosting framework to new regression settings. Conclusions: Statistical boosting algorithms have been adapted to carry out unbiased variable selection and automated model choice during the fitting process and can nowadays be applied in almost any regression setting in combination with a large number of different types of predictor effects.
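A bare-bones sketch of componentwise least-squares boosting, a prototype of the statistical boosting family discussed here, shows where the intrinsic variable selection comes from: in each iteration only the single best-fitting predictor is updated by a small step. Data, step length and iteration count are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, nu, n_steps = 200, 10, 0.1, 250
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)   # only x0 and x3 matter

beta = np.zeros(p)
intercept = y.mean()
for _ in range(n_steps):
    residuals = y - intercept - X @ beta
    # Univariate least-squares coefficient of the residuals on each predictor ...
    coefs = X.T @ residuals / (X ** 2).sum(axis=0)
    # ... and the residual sum of squares each componentwise fit would leave.
    rss = [((residuals - X[:, j] * coefs[j]) ** 2).sum() for j in range(p)]
    j_best = int(np.argmin(rss))
    beta[j_best] += nu * coefs[j_best]        # small step on the best component only

print("estimated coefficients:", np.round(beta, 2))   # non-zero mainly at indices 0 and 3
```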
APA, Harvard, Vancouver, ISO, and other styles
34

Rühaak, J., R. Werner, H. Handels, J. Modersitzki, and T. Polzin. "Lung Registration Using Automatically Detected Landmarks." Methods of Information in Medicine 53, no. 04 (2014): 250–56. http://dx.doi.org/10.3414/me13-01-0125.

Full text
Abstract:
Objectives: Accurate registration of lung CT images is essential for numerous clinical applications. Usually, nonlinear intensity-based methods are used. Their accuracy is typically evaluated using corresponding anatomical points (landmarks; e.g. bifurcations of the bronchial and vessel trees) annotated by medical experts in the images to be registered. As image registration can be interpreted as a correspondence-finding problem, these corresponding landmarks can also be used in feature-based registration techniques. Recently, approaches for automated identification of such landmark correspondences in lung CT images have been presented. In this work, a novel combination of variational nonlinear intensity-based registration with an approach for automated landmark correspondence detection in lung CT pairs is presented and evaluated. Methods: The main blocks of the proposed hybrid intensity- and feature-based registration scheme are a two-step landmark correspondence detection and the so-called CoLD (Combining Landmarks and Distance Measures) framework. The landmark correspondence identification starts with feature detection in one image, followed by a blockmatching-based transfer of the features to the other image. The established correspondences are used to compute a thin-plate spline (TPS) transformation. Within CoLD, the TPS transformation is improved by minimization of an objective function consisting of a Normalized Gradient Field distance measure and a curvature regularizer; the landmark correspondences are guaranteed to be preserved by optimization on the kernel of the discretized landmark constraints. Results: Based on ten publicly available end-inspiration/expiration CT scan pairs with anatomical landmark sets annotated by medical experts from the DIR-Lab database, it is shown that the hybrid registration approach is superior in terms of accuracy: the mean distance of expert landmarks is decreased from 8.46 mm before to 1.15 mm after registration, outperforming both the TPS transformation (1.68 mm) and a nonlinear registration without use of automatically detected landmarks (2.44 mm). The improvement is statistically significant in eight of ten datasets in comparison to TPS and in nine of ten datasets in comparison to the intensity-based registration. Furthermore, CoLD estimates the global breathing-induced lung volume change well and results in smooth and physiologically plausible motion fields of the lungs. Conclusions: We demonstrated that our novel landmark-based registration pipeline outperforms both TPS and the underlying nonlinear intensity-based registration without landmark usage. This highlights the potential of automatic landmark correspondence detection for improving lung CT registration accuracy.
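The landmark-driven part of such a pipeline can be sketched as a thin-plate-spline interpolant fitted to corresponding landmark pairs; the coordinates and the synthetic deformation below are invented, SciPy (version 1.7 or later) is assumed, and this is not the CoLD implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks (x, y, z, in mm) in the fixed and the moving image.
fixed = np.array([[10, 20, 30], [40, 22, 31], [25, 60, 35],
                  [30, 40, 70], [55, 55, 50]], dtype=float)
moving = fixed + np.array([2.0, -1.5, 4.0]) + 0.05 * fixed   # synthetic deformation

# One TPS interpolant mapping fixed-image positions to moving-image positions.
tps = RBFInterpolator(fixed, moving, kernel="thin_plate_spline")

query = np.array([[20.0, 30.0, 40.0], [50.0, 45.0, 60.0]])
print(tps(query))   # estimated corresponding positions in the moving image
```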
APA, Harvard, Vancouver, ISO, and other styles
35

Palmerini, L., L. Chiari, and P. Palumbo. "A Probabilistic Model to Investigate the Properties of Prognostic Tools for Falls." Methods of Information in Medicine 54, no. 02 (2015): 189–97. http://dx.doi.org/10.3414/me13-01-0127.

Full text
Abstract:
Background: Falls are a prevalent and burdensome problem in the elderly. Tools for the assessment of fall risk are fundamental for fall prevention. Clinical studies for the development and evaluation of prognostic tools for falls show high heterogeneity in their settings and in the reported results. Newly developed tools are susceptible to over-optimism. Objectives: This study proposes a probabilistic model to address critical issues about fall prediction through the analysis of the properties of an ideal prognostic tool for falls. Methods: The model assumes that falls occur within a population according to the Greenwood and Yule scheme for accident-proneness. Parameters for the fall rate distribution are estimated from fall counts of four different epidemiological studies. Results: We obtained analytic formulas and quantitative estimates for the predictive and discriminative properties of the ideal prognostic tool. The area under the receiver operating characteristic curve (AUC) ranges between about 0.80 and 0.89 when any fall is predicted within a follow-up of one year. Predicting multiple falls results in a higher AUC. Conclusions: The discriminative ability of currently validated prognostic tools for falls is considerably lower than what the proposed ideal tool achieves. A sensitivity analysis of the predictive and discriminative properties of the tool with respect to study settings and fall rate distribution identifies major factors that can account for the high heterogeneity of results observed in the literature.
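The Greenwood and Yule accident-proneness scheme is a gamma-mixed Poisson (negative binomial) model for fall counts, so the properties of an ideal tool that knows each person's true fall rate can be approximated by simulation. The gamma parameters below are invented, not the estimates reported in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
rates = rng.gamma(shape=0.8, scale=1.0, size=n)    # individual falls per person-year
falls = rng.poisson(rates)                         # observed falls in a one-year follow-up
faller = falls >= 1

# AUC = P(rate of a random faller > rate of a random non-faller), Mann-Whitney style,
# estimated on a subsample to keep the pairwise comparison small.
r1, r0 = rates[faller][:2000], rates[~faller][:2000]
auc = (r1[:, None] > r0[None, :]).mean()
print(f"simulated AUC of the ideal prognostic tool: {auc:.3f}")
```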
APA, Harvard, Vancouver, ISO, and other styles
36

Hao, T., and C. Weng. "Adaptive Semantic Tag Mining from Heterogeneous Clinical Research Texts." Methods of Information in Medicine 54, no. 02 (2015): 164–70. http://dx.doi.org/10.3414/me13-01-0130.

Full text
Abstract:
Objectives: To develop an adaptive approach to mine frequent semantic tags (FSTs) from heterogeneous clinical research texts. Methods: We develop a “plug-n-play” framework that integrates replaceable unsupervised kernel algorithms with formatting, functional, and utility wrappers for FST mining. Temporal information identification and semantic equivalence detection were two example functional wrappers. We first compared this approach's recall and efficiency for mining FSTs from ClinicalTrials.gov to that of a recently published tag-mining algorithm. Then we assessed this approach's adaptability to two other types of clinical research texts, clinical data requests and clinical trial protocols, by comparing the prevalence trends of FSTs across the three text types. Results: Our approach increased the average recall and speed by 12.8% and 47.02%, respectively, over the baseline when mining FSTs from ClinicalTrials.gov, and maintained an overlap in relevant FSTs with the baseline ranging between 76.9% and 100% for varying FST frequency thresholds. The FSTs saturated when the data size reached 200 documents. Consistent trends in the prevalence of FSTs were observed across the three text types as the data size or frequency threshold changed. Conclusions: This paper contributes an adaptive tag-mining framework that is scalable and adaptable without sacrificing its recall. This component-based architectural design can potentially be generalized to improve the adaptability of other clinical text mining methods.
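Stripped of the semantic tagging and the wrappers, the frequency-threshold core of FST mining reduces to counting normalized tags across documents. A toy sketch with invented eligibility-criteria tags:

```python
from collections import Counter

documents = [
    ["age >= 18", "pregnancy", "hba1c measurement"],
    ["age >= 18", "type 2 diabetes", "hba1c measurement"],
    ["pregnancy", "age >= 18"],
]

def frequent_semantic_tags(tagged_docs, min_support=2):
    """Return tags occurring in at least `min_support` documents."""
    counts = Counter(tag for doc in tagged_docs for tag in set(doc))
    return {tag: n for tag, n in counts.items() if n >= min_support}

print(frequent_semantic_tags(documents))
```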
APA, Harvard, Vancouver, ISO, and other styles
37

Ohlmeier, C., R. Linder, D. Enders, R. Mikolajczyk, W. Haverkamp, D. Horenkamp-Sonntag, and E. Garbe. "Evaluating Methods for Intersectoral Comparison of Quality of Care." Methods of Information in Medicine 53, no. 04 (2014): 269–77. http://dx.doi.org/10.3414/me13-01-0132.

Full text
Abstract:
Objectives: To compare the quality of care regarding the use of elective percutaneous coronary interventions (PCIs) in the inpatient and outpatient setting and to evaluate different methods of confounder control in this context. Methods: Based on data of three statutory health insurances including more than nine million insured members, a retrospective cohort study between 2005 and 2009 was conducted. The occurrence of myocardial infarction, stroke, further coronary intervention and death was ascertained following the first PCI in the study period, which was preceded by a one-year period without a PCI. A Cox proportional hazard model was used to assess the influence of the setting of the elective PCI on the risk for complications after the PCI for each outcome separately. Age, sex, the number of diseases of the Elixhauser comorbidity measure, past acute coronary syndrome, coronary artery disease, dyslipidemia, past stroke, past coronary artery bypass surgery and the year of the PCI were included as covariables. The analyses were repeated in a propensity score matched cohort as well as in inverse probability of treatment weighted analyses. Results: The cohort comprised 4,269 patients with an outpatient PCI and 26,044 patients with an inpatient PCI. The majority of the analyses revealed no statistically significant effect of the setting of the PCI on the risk of myocardial infarction, stroke and further coronary interventions, whereas a reduced mortality risk was observed for outpatient PCIs. Similar results were obtained in the propensity score analyses. Conclusions: The analysis revealed that the adjusted risk for complications following an elective PCI is similar between the inpatient and the outpatient setting. For mortality the risk differed, but this might be explained by residual or unmeasured confounding. The different methods applied in this study revealed mostly similar results. Since our study only covered one aspect of quality of care in the field of PCI and did not consider drug treatment in hospital or in the outpatient setting, further studies are needed which include these aspects.
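The propensity-score and inverse-probability-of-treatment-weighting (IPTW) idea can be sketched on simulated data in which healthier patients are preferentially treated as outpatients; scikit-learn is assumed, and the variables and coefficients are invented rather than taken from the study:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({"age": rng.normal(68, 9, n), "n_comorbidities": rng.poisson(2.0, n)})

# Confounding: healthier patients are more likely to receive an outpatient procedure,
# while mortality depends only on age and comorbidity (no true treatment effect).
p_out = 1 / (1 + np.exp(0.05 * (df["age"] - 68) + 0.4 * df["n_comorbidities"] - 1.0))
df["outpatient"] = rng.uniform(size=n) < p_out
p_death = 1 / (1 + np.exp(-(0.06 * (df["age"] - 68) + 0.3 * df["n_comorbidities"] - 3.0)))
df["died"] = rng.uniform(size=n) < p_death

X = df[["age", "n_comorbidities"]]
ps = LogisticRegression().fit(X, df["outpatient"]).predict_proba(X)[:, 1]
w = np.where(df["outpatient"], 1 / ps, 1 / (1 - ps))       # IPTW weights

crude = df.loc[df.outpatient, "died"].mean() - df.loc[~df.outpatient, "died"].mean()
adjusted = (np.average(df.loc[df.outpatient, "died"], weights=w[df.outpatient])
            - np.average(df.loc[~df.outpatient, "died"], weights=w[~df.outpatient]))
print(f"crude risk difference: {crude:+.4f}, IPTW-adjusted: {adjusted:+.4f}")
```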
APA, Harvard, Vancouver, ISO, and other styles
38

Christoph, J., L. Griebel, I. Leb, I. Engel, F. Köpcke, D. Toddenroth, H. U. Prokosch, J. Laufer, K. Marquardt, and M. Sedlmayr. "Secure Secondary Use of Clinical Data with Cloud-based NLP Services." Methods of Information in Medicine 54, no. 03 (2015): 276–82. http://dx.doi.org/10.3414/me13-01-0133.

Full text
Abstract:
Objectives: The secondary use of clinical data provides large opportunities for clinical and translational research as well as for quality assurance projects. For such purposes, it is necessary to provide a flexible and scalable infrastructure that is compliant with privacy requirements. The major goals of the cloud4health project are to define such an architecture, to implement a technical prototype that fulfills these requirements, and to evaluate it with three use cases. Methods: The architecture provides components for multiple data provider sites, such as hospitals, to extract free text as well as structured data from local sources and to de-identify such data for further anonymous or pseudonymous processing. Free-text documentation is analyzed and transformed into structured information by text-mining services, which are provided within a cloud-computing environment. The newly gained annotations can thus be integrated along with the already available structured data items, and the resulting data sets can be uploaded to a central study portal for further analysis. Results: Based on the architecture design, a prototype has been implemented and is under evaluation in three clinical use cases. Data from several hundred patients provided by a University Hospital and a private hospital chain have already been processed. Conclusions: Cloud4health has shown how existing components for the secondary use of structured data can be complemented with text mining in a privacy-compliant manner. The cloud-computing paradigm allows a flexible and dynamically adaptable service provision that facilitates the adoption of services by data providers without their own investments in the respective hardware resources and software tools.
APA, Harvard, Vancouver, ISO, and other styles
39

Bache, R., M. McGilchrist, C. Daniel, M. Dugas, F. Fritz, and J. Doods. "Piloting the EHR4CR Feasibility Platform across Europe." Methods of Information in Medicine 53, no. 04 (2014): 264–68. http://dx.doi.org/10.3414/me13-01-0134.

Full text
Abstract:
Background: Pharmaceutical clinical trials are primarily conducted across many countries, yet recruitment numbers are frequently not met in time. Electronic health records store large amounts of potentially useful data that could aid in this process. The EHR4CR project aims at re-using EHR data for clinical research purposes. Objective: To evaluate whether the protocol feasibility platform produced by the Electronic Health Records for Clinical Research (EHR4CR) project can be installed and set up in accordance with local technical and governance requirements to execute protocol feasibility queries uniformly across national borders. Methods: We installed specifically engineered software and data warehouses at local sites. Approvals for data access and usage of the platform were acquired, and terminology mappings of local site codes to central platform codes were performed. A test data set, or real EHR data where approvals were in place, was loaded into the data warehouses. Test feasibility queries were created on a central component of the platform and sent to the local components at eleven university hospitals. Results: To use real, de-identified EHR data we obtained permissions and approvals from 'data controllers' and ethics committees. Through the platform we were able to create feasibility queries, distribute them to eleven university hospitals and retrieve aggregated patient counts of both test data and de-identified EHR data. Conclusion: It is possible to install a uniform piece of software in different university hospitals in five European countries and configure it to the requirements of the local networks, while complying with local data protection regulations. We were also able to set up ETL processes and data warehouses to reuse EHR data for feasibility queries distributed over the EHR4CR platform.
APA, Harvard, Vancouver, ISO, and other styles
40

Otte, S., L. Wittig, G. Hüttmann, C. Kugler, D. Drömann, A. Zell, A. Schlaefer, and C. Otte. "Investigating Recurrent Neural Networks for OCT A-scan Based Tissue Analysis." Methods of Information in Medicine 53, no. 04 (2014): 245–49. http://dx.doi.org/10.3414/me13-01-0135.

Full text
Abstract:
Objectives: Optical Coherence Tomography (OCT) has been proposed as a high-resolution image modality to guide transbronchial biopsies. In this study we address the question whether individual A-scans obtained in needle direction can contribute to the identification of pulmonary nodules. Methods: OCT A-scans from freshly resected human lung tissue specimens were recorded through a customized needle with an embedded optical fiber. Bidirectional Long Short Term Memory networks (BLSTMs) were trained on randomly distributed training and test sets of the acquired A-scans. Patient-specific training and different pre-processing steps were evaluated. Results: Classification rates from 67.5% up to 76% were achieved for different training scenarios. Sensitivity and specificity were highest for patient-specific training, at 0.87 and 0.85. Low-pass filtering decreased the accuracy from 73.2% on a reference distribution to 62.2% for higher cutoff frequencies and to 56% for lower cutoff frequencies. Conclusion: The results indicate that a grey-value based classification is feasible and may provide additional information for diagnosis and navigation. Furthermore, the experiments show patient-specific signal properties and indicate that both the lower and upper parts of the frequency spectrum contribute to the classification.
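A minimal sketch of a bidirectional LSTM operating on a single grey-value A-scan, written in PyTorch; the layer sizes, sequence length and class count are invented and are not the configuration used in the paper:

```python
import torch
import torch.nn as nn

class BLSTMClassifier(nn.Module):
    def __init__(self, hidden_size=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, x):                  # x: (batch, depth_samples) grey values
        x = x.unsqueeze(-1)                # -> (batch, depth_samples, 1)
        out, _ = self.lstm(x)              # -> (batch, depth_samples, 2 * hidden_size)
        return self.head(out[:, -1, :])    # classify from the last time step

model = BLSTMClassifier()
ascans = torch.randn(8, 512)               # a batch of 8 A-scans with 512 depth samples
print(model(ascans).shape)                  # torch.Size([8, 2])
```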
APA, Harvard, Vancouver, ISO, and other styles
41

Werner, R., M. Blendowski, J. Ortmüller, H. Handels, and M. Wilms. "Simulation of Range Imaging-based Estimation of Respiratory Lung Motion." Methods of Information in Medicine 53, no. 04 (2014): 257–63. http://dx.doi.org/10.3414/me13-01-0137.

Full text
Abstract:
Objectives: A major problem associated with the irradiation of thoracic and abdominal tumors is respiratory motion. In clinical practice, motion compensation approaches are frequently steered by low-dimensional breathing signals (e.g., spirometry) and patient-specific correspondence models, which are used to estimate the sought internal motion given a signal measurement. Recently, the use of multidimensional signals derived from range images of the moving skin surface has been proposed to better account for complex motion patterns. In this work, a simulation study is carried out to investigate the motion estimation accuracy of such multidimensional signals and the influence of noise, the signal dimensionality, and different sampling patterns (points, lines, regions). Methods: A diffeomorphic correspondence modeling framework is employed to relate multidimensional breathing signals derived from simulated range images to internal motion patterns represented by diffeomorphic non-linear transformations. Furthermore, an automatic approach for the selection of optimal signal combinations/patterns within this framework is presented. Results: This simulation study focuses on lung motion estimation and is based on 28 4D CT data sets. The results show that the use of multidimensional signals instead of one-dimensional signals significantly improves the motion estimation accuracy, which is, however, highly affected by noise. Only small differences exist between different multidimensional sampling patterns (lines and regions). Automatically determined optimal combinations of points and lines do not lead to accuracy improvements compared to results obtained by using all points or lines. Conclusions: Our results show the potential of multidimensional breathing signals derived from range images for the model-based estimation of respiratory motion in radiation therapy.
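In its simplest form, a correspondence model maps a multidimensional surrogate signal to an internal motion estimate. The sketch below fits an ordinary least-squares linear model on simulated breathing phases; all dimensions and values are invented, and the paper uses a diffeomorphic registration-based framework rather than this linear toy model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_phases, signal_dim, motion_dim = 10, 6, 3
signals = rng.normal(size=(n_phases, signal_dim))            # e.g. range-image features
true_map = rng.normal(size=(signal_dim, motion_dim))
motions = signals @ true_map + 0.05 * rng.normal(size=(n_phases, motion_dim))

# Least-squares fit of the correspondence model, then motion prediction for a new signal.
coeffs, *_ = np.linalg.lstsq(signals, motions, rcond=None)
new_signal = rng.normal(size=signal_dim)
print("estimated internal displacement (mm):", new_signal @ coeffs)
```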
APA, Harvard, Vancouver, ISO, and other styles
42

Brannath, W., and S. Schmidt. "Informative Simultaneous Confidence Intervals in Hierarchical Testing." Methods of Information in Medicine 53, no. 04 (2014): 278–83. http://dx.doi.org/10.3414/me13-01-0140.

Full text
Abstract:
Background and Objectives: In clinical trials involving multiple tests it is often difficult to obtain informative simultaneous confidence intervals (SCIs). In particular, in hierarchical testing no quantification of effects is possible for the first tested (and most important) hypothesis after its rejection. Our goal is a construction of SCIs that are always informative. Methods: We present an approach in which the level is split after rejection of each hypothesis to obtain an informative confidence bound. The splitting weights are continuous functions of the parameters. Our method can be realized by a simple algorithm and is illustrated by an intuitive graphical representation. Results: We show theoretically and by an example that the new SCIs always provide information when a hypothesis is rejected. The power to reject the first hypothesis is not smaller than for the classical fixed-sequence procedure. The price for the extra information is a small power loss for the hypotheses following the most important one. Conclusions: Given the substantial gain in information, a small loss of power for the non-primary hypotheses often seems acceptable. Especially in the context of non-inferiority trials, this method is a useful alternative. The flexibility in the choice of the weight functions makes the procedure attractive for applications.
APA, Harvard, Vancouver, ISO, and other styles
43

Fiehler, J., S. Suniaga, H. Wersching, S. Knecht, A. Kemmling, and N. D. Forkert. "A Statistical Cerebroarterial Atlas Derived from 700 MRA Datasets." Methods of Information in Medicine 52, no. 06 (2013): 467–74. http://dx.doi.org/10.3414/me13-02-0001.

Full text
Abstract:
Objectives: The cerebroarterial system is a complex network of arteries that supply the brain cells with vitally important nutrients and oxygen. The inter-individual differences of the cerebral arteries, especially at a finer level, are still not understood sufficiently. The aim of this work is to present a statistical cerebroarterial atlas that can be used to overcome this problem. Methods: Overall, 700 Time-of-Flight (TOF) magnetic resonance angiography (MRA) data sets of healthy subjects were used for atlas generation. For this purpose, the cerebral arteries were automatically segmented in each dataset and used for a quantification of the vessel diameters. After this, each TOF MRA dataset as well as the corresponding vessel segmentation and vessel diameter dataset were registered to the MNI brain atlas. Finally, the registered datasets were used to calculate a statistical cerebroarterial atlas that incorporates, for each voxel, information about the average TOF intensity, the probability of a vessel occurrence and the mean vessel diameter. Results: Visual analysis revealed that arteries with a diameter as small as 0.5 mm are well represented in the atlas, with quantitative values that are within the range of anatomical reference values. Moreover, a highly significant strong positive correlation between vessel diameter and occurrence probability was found. Furthermore, it was shown that an intensity-based automatic segmentation of cerebral vessels can be considerably improved by incorporating the atlas information, leading to results within the range of the inter-observer agreement. Conclusion: The presented cerebroarterial atlas seems useful for improving the understanding of normal variations of cerebral arteries and for initializing cerebrovascular segmentation methods, and may even lay the foundation for a reliable quantification of subtle morphological vascular changes.
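After registration to a common space, the atlas quantities reduce to voxelwise statistics over the subjects' vessel segmentations. A toy sketch with random 3-D masks standing in for the 700 registered TOF-MRA datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, shape = 50, (32, 32, 32)
segmentations = rng.uniform(size=(n_subjects, *shape)) < 0.1                  # binary vessel masks
diameters = rng.uniform(0.5, 4.0, size=(n_subjects, *shape)) * segmentations  # mm, 0 outside vessels

occurrence_probability = segmentations.mean(axis=0)   # fraction of subjects with a vessel per voxel
vessel_counts = segmentations.sum(axis=0)
mean_diameter = np.divide(diameters.sum(axis=0), vessel_counts,
                          out=np.zeros(shape), where=vessel_counts > 0)

print(f"max occurrence probability: {occurrence_probability.max():.2f}, "
      f"max mean diameter: {mean_diameter.max():.2f} mm")
```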
APA, Harvard, Vancouver, ISO, and other styles
44

Plischke, M., B. Haarbrandt, M. Rochon, J. Schwartze, E. Tute, T. Bartkiewicz, T. Kleinschmidt, et al. "The Lower Saxony Bank of Health." Methods of Information in Medicine 53, no. 02 (2014): 73–81. http://dx.doi.org/10.3414/me13-02-0003.

Full text
Abstract:
Introduction: This article is part of a Focus Theme of Methods of Information in Medicine on Health Record Banking. Background: Poor communication of health care information between health care providers (HCP) is still a major problem. One recent approach is the concept of Health Record Banking. Objectives: With this report we want to introduce the Lower Saxony Bank of Health (LSBH) to the international community. The main objective of this paper is to report and explain: 1) why this organization has been founded, 2) which basic principles have been set, 3) which services will be provided, 4) which type of organization has been chosen, and 5) which architectural framework has been selected. Methods: To report and discuss how we plan to achieve the intended objectives. Results: The LSBH was founded as an entrepreneurial company, regarding itself as a neutral third-party information broker. The bank does not store medical documents on its central servers but offers a document registry with links to documents stored at participating health care providers. Subject to valid patient consent, the LSBH grants access to these documents to authorized health care providers. To implement our services, we chose the established technical frameworks of the Integrating the Healthcare Enterprise (IHE) initiative using cross-enterprise document sharing (XDS). Conclusions: Different approaches to establish health information exchange (HIE) are in early stages and some have failed in the past. Health Record Banking can address major challenges described in the literature about HIE. The future will show if our provider-sponsored business model is sustainable. After reaching a stable network, we intend to add additional HCPs, e.g., care homes or ambulance services, to the network.
APA, Harvard, Vancouver, ISO, and other styles
45

von Bargen, T., M. Kohlmann, M. Marschollek, J. Schwartze, B. Song, M. Wagner, K. H. Wolf, R. Haux, and M. Gietzelt. "Home-centered Health-enabling Technologies and Regional Health Information Systems." Methods of Information in Medicine 53, no. 03 (2014): 160–66. http://dx.doi.org/10.3414/me13-02-0008.

Full text
Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records”. Objectives: In this paper, we present a prototype of a Home-Centered Health-Enabling Technology (HET-HC), which is able to capture, store, merge and process data from various sensor systems at people's home. In addition, we present an architecture designed to integrate the HET-HC into an exemplary regional Health Information System (rHIS). Methods: rHIS are traditionally document-based to fit the needs of a clinical context. However, HET-HC produce continuous data streams for which documents might be an inappropriate representation. Therefore, the HET-HC could register placeholder documents at the rHIS. These placeholder documents are assembled by the HET-HC upon user-authenticated request and are thus always up to date. Moreover, it is not trivial to find a clinical coding system for continuous sensor data and to make the data machine-readable in order to enhance the interoperability of such systems. Therefore, we propose the use of SNOCAP-HET, a nomenclature to describe the context of sensor-based measurements in health-enabling technologies. Results: We present an architectural approach to integrate the HET-HC into rHIS. Our solution is the centralized registration of placeholder documents with the rHIS and decentralized data storage at people's home. Conclusions: We conclude that the presented architecture for integrating the HET-HC into rHIS might fit well with the traditional approach of document-based data storage. Data security and privacy issues are also duly considered.
APA, Harvard, Vancouver, ISO, and other styles
46

Chung, J., T. Le, H. Thompson, G. Demiris, and B. Reeder. "Assessing Older Adults’ Perceptions of Sensor Data and Designing Visual Displays for Ambient Environments." Methods of Information in Medicine 53, no. 03 (2014): 152–59. http://dx.doi.org/10.3414/me13-02-0009.

Full text
Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records”. Objectives: Our objectives were to: 1) characterize older adult participants' perceived usefulness of in-home sensor data and 2) develop novel visual displays for sensor data from Ambient Assisted Living environments that can become part of electronic health records. Methods: Semi-structured interviews were conducted with community-dwelling older adult participants during three- and six-month visits. We engaged participants in two design iterations by soliciting feedback about display types and visual displays of simulated data related to a fall scenario. Interview transcripts were analyzed to identify themes related to the perceived usefulness of sensor data. Results: Thematic analysis identified three themes: perceived usefulness of sensor data for managing health; factors that affect the perceived usefulness of sensor data; and perceived usefulness of visual displays. Visual displays were cited as potentially useful for family members and health care providers. Three novel visual displays were created based on interview results, design guidelines derived from prior AAL research, and principles of graphic design theory. Conclusions: Participants identified potential uses of personal activity data for monitoring health status and capturing early signs of illness. One area for future research is to determine how visual displays of AAL data might be utilized to connect family members and health care providers through shared understanding of activity levels versus a more simplified view of self-management. Connecting informal and formal caregiving networks may facilitate better communication between older adults, family members and health care providers for shared decision-making.
APA, Harvard, Vancouver, ISO, and other styles
47

Thiele, S., A. Häber, A. Winter, and T. Nitzsche. "Communication Architecture for AAL." Methods of Information in Medicine 53, no. 03 (2014): 167–72. http://dx.doi.org/10.3414/me13-02-0010.

Full text
Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Using Data from Ambient Assisted Living and Smart Homes in Electronic Health Records”. Background: Concepts of Ambient Assisted Living (AAL) support long-term health monitoring and further medical and other services for multi-morbid patients with chronic diseases. In Germany, many AAL and telemedical applications exist, but synergy effects through common agreements on essential application components and standards have not been achieved. Objectives: It is necessary to define a communication architecture that is based on common definitions of communication scenarios, application components and communication standards. Methods: The development of a communication architecture requires several steps. To gain a reference model for the problem area, different AAL and telemedicine projects were compared and the relevant data elements were generalized. The derived reference model defines standardized communication links. Results: As a result, the authors present an approach towards a reference architecture for AAL communication. The focus of the architecture lies on the communication layer. The necessary application components are identified and a communication based on standards and their extensions is highlighted. Conclusion: The exchange of patient-individual events, supported by an event classification model, as well as of raw and aggregated data, from the personal home area via a telemedicine center to health care providers is made possible.
APA, Harvard, Vancouver, ISO, and other styles
48

Evans, J., T. A. Oniki, J. F. Coyle, L. Bain, S. M. Huff, R. D. Kush, C. G. Chute, and G. Jiang. "Harmonization of Detailed Clinical Models with Clinical Study Data Standards." Methods of Information in Medicine 54, no. 01 (2015): 65–74. http://dx.doi.org/10.3414/me13-02-0019.

Full text
Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Managing Interoperability and Complexity in Health Systems”. Background: Data sharing and integration between the clinical research data management system and the electronic health record system remains a challenging issue. To approach the issue, there is emerging interest in utilizing the Detailed Clinical Model (DCM) approach across a variety of contexts. The Intermountain Healthcare Clinical Element Models (CEMs) have been adopted by the Office of the National Coordinator awarded Strategic Health IT Advanced Research Projects for normalization (SHARPn) project for normalizing patient data from the electronic health records (EHR). Objective: The objective of the present study is to describe our preliminary efforts toward harmonization of the SHARPn CEMs with CDISC (Clinical Data Interchange Standards Consortium) clinical study data standards. Methods: We focused on three generic domains: demographics, lab tests, and medications. We performed a panel review on each data element extracted from the CDISC templates and the SHARPn CEMs. Results: We have identified a set of data elements that are common to the context of both clinical study and broad secondary use of EHR data and discussed outstanding harmonization issues. Conclusions: We consider that the outcomes would be useful for defining new requirements for the DCM modeling community and ultimately facilitating the semantic interoperability between systems for both the clinical study and the broad secondary use domains.
APA, Harvard, Vancouver, ISO, and other styles
49

García-Remesal, M., D. de la Iglesia, N. Graf, V. Maojo, and A. Anguita. "Toward a View-oriented Approach for Aligning RDF-based Biomedical Repositories." Methods of Information in Medicine 54, no. 01 (2015): 50–55. http://dx.doi.org/10.3414/me13-02-0020.

Full text
Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Managing Interoperability and Complexity in Health Systems”. Background: The need for complementary access to multiple RDF databases has fostered new lines of research, but also entailed new challenges due to data representation disparities. While several approaches for RDF-based database integration have been proposed, those focused on schema alignment have become the most widely adopted. All state-of-the-art solutions for aligning RDF-based sources resort to a simple technique inherited from legacy relational database integration methods. This technique – known as element-to-element (e2e) mapping – is based on establishing 1:1 mappings between single primitive elements – e.g. concepts, attributes, relationships, etc. – belonging to the source and target schemas. However, due to the intrinsic nature of RDF – a representation language based on defining tuples <subject, predicate, object> – one may find RDF elements whose semantics vary dramatically when combined into a view involving other RDF elements, i.e. they depend on their context. The latter cannot be adequately represented in the target schema by resorting to the traditional e2e approach. These approaches fail to properly address this issue without explicitly modifying the target ontology, thus lacking the required expressiveness for properly reflecting the intended semantics in the alignment information. Objectives: To enhance existing RDF schema alignment techniques by providing a mechanism to properly represent elements with context-dependent semantics, thus enabling users to perform more expressive alignments, including scenarios that cannot be adequately addressed by the existing approaches. Methods: Instead of establishing 1:1 correspondences between single primitive elements of the schemas, we propose adopting a view-based approach. The latter is targeted at establishing mapping relationships between RDF subgraphs – which can be regarded as the equivalent of views in traditional databases – rather than between single schema elements. This approach enables users to represent scenarios defined by context-dependent RDF elements that cannot be properly represented when adopting the currently existing approaches. Results: We developed a software tool implementing our view-based strategy. Our tool is currently being used in the context of the European Commission-funded p-medicine project, targeted at creating a technological framework to integrate clinical and genomic data to facilitate the development of personalized drugs and therapies for cancer, based on the genetic profile of the patient. We used our tool to integrate different RDF-based databases – including different repositories of clinical trials and DICOM images – using the Health Data Ontology Trunk (HDOT) ontology as the target schema. Conclusions: The importance of database integration methods and tools in the context of biomedical research has been widely recognized. Modern research in this area – e.g. the identification of disease biomarkers or the design of personalized therapies – heavily relies on the availability of a technical framework that enables researchers to uniformly access disparate repositories. We present a method and a tool that implement a novel alignment approach specifically designed to support and enhance the integration of RDF-based data sources at the schema (metadata) level. This approach provides an increased level of expressiveness compared to other existing solutions, and allows solving heterogeneity scenarios that cannot be properly represented using other state-of-the-art techniques.
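One way to realize such subgraph-level ("view") mappings is a SPARQL CONSTRUCT query that rewrites a whole source pattern into the target vocabulary. The sketch below uses rdflib with invented URIs; it does not use the HDOT ontology or the p-medicine tool itself:

```python
from rdflib import Graph, Literal, Namespace

SRC = Namespace("http://example.org/source#")

source = Graph()
source.add((SRC.patient1, SRC.hasMeasurement, SRC.m1))
source.add((SRC.m1, SRC.measuredQuantity, Literal("body weight")))
source.add((SRC.m1, SRC.value, Literal(82.5)))

# The meaning of src:value depends on its context (here: a body-weight measurement),
# so the mapping is defined over the whole subgraph rather than element-to-element.
view_mapping = """
CONSTRUCT { ?p <http://example.org/target#bodyWeightKg> ?v }
WHERE {
  ?p <http://example.org/source#hasMeasurement> ?m .
  ?m <http://example.org/source#measuredQuantity> "body weight" .
  ?m <http://example.org/source#value> ?v .
}
"""
target = Graph()
for triple in source.query(view_mapping):
    target.add(triple)
print(target.serialize(format="turtle"))
```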
APA, Harvard, Vancouver, ISO, and other styles
50

Vanhove, T., E. Dejonghe, M. Feys, T. Herinckx, J. Vankelecom, J. Decruyenaere, F. De Turck, and F. De Backere. "Platform for Efficient Switching between Multiple Devices in the Intensive Care Unit." Methods of Information in Medicine 54, no. 01 (2015): 05–15. http://dx.doi.org/10.3414/me13-02-0021.

Full text
Abstract:
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on “Managing Interoperability and Complexity in Health Systems”. Objectives: Handheld computers, such as tablets and smartphones, are becoming more and more accessible in the clinical care setting and in Intensive Care Units (ICUs). By making the most useful and appropriate data available on multiple devices and facilitating the switching between those devices, staff members can efficiently integrate them into their workflow, allowing for faster and more accurate decisions. This paper addresses the design of a platform for the efficient switching between multiple devices in the ICU. The key functionalities of the platform are the integration of the platform into the workflow of the medical staff and the provision of tailored and dynamic information at the point of care. Methods: The platform is designed on the basis of a 3-tier architecture with a focus on extensibility, scalability and an optimal user experience. After identification to a device using Near Field Communication (NFC), the appropriate medical information is shown on the selected device. The visualization of the data is adapted to the type of the device. A web-centric approach was used to enable extensibility and portability. Results: A prototype of the platform was thoroughly evaluated with respect to scalability, performance and user experience. Performance tests show that the response time of the system scales linearly with the amount of data. Measurements with up to 20 devices have shown no performance loss due to the concurrent use of multiple devices. Conclusions: The platform provides a scalable and responsive solution that enables efficient switching between multiple devices. Due to the web-centric approach, new devices can easily be integrated. The performance and scalability of the platform have been evaluated, and it was shown that the response time and scalability of the platform were within an acceptable range.
APA, Harvard, Vancouver, ISO, and other styles