Academic literature on the topic 'Medicine – Research – Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Medicine – Research – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Medicine – Research – Data processing"

1

Hilgers, Ralf-Dieter, Ralf Hofestädt, Petra Knaup-Gregori, Claudia Ose, Antje Timmer, and Alfred Winter. "Data Integration for Integrated Research and Care." Methods of Information in Medicine 55, no. 04 (2016): 365–66. http://dx.doi.org/10.3414/me2016040001.

Full text
Abstract:
A national German funding initiative for Medical Informatics focusing on data integration for medicine gives an opportunity to reopen a window to Germany. In the open window appears a best-paper selection from the 2015 annual conference of the German Society of Medical Informatics, Biometry and Epidemiology, together with papers from the German journal GMS Medical Informatics, Biometry and Epidemiology (MIBE). The publications in focus deal with data integration by transferring clinical routine data into an electronic data capture (EDC) system, by using natural language processing to make unstructured data processable, by measuring the quality of record linkage, and by using a unified metadata scheme for integrated documentation in laboratories. Two additional papers present methods for data analysis, especially for change-point detection in binary sequences and for analyzing categorical data.
APA, Harvard, Vancouver, ISO, and other styles
2

Bai, Qifeng. "Big Data Research: Database and Computing." Journal of Big Data Research 1, no. 1 (April 6, 2018): 1–4. http://dx.doi.org/10.14302/issn.2768-0207.jbr-17-1925.

Full text
Abstract:
Big data research has become a popular and exciting field of study in almost all scientific disciplines, such as biology, chemistry, epidemiology, medicine and drug discovery. Various systems and platforms produce large amounts of data every day, so researchers and practitioners benefit greatly when practical databases and useful software are introduced in time. The Journal of Big Data Research (JBR) supplies an efficient, open-access publishing platform for big data research. The first issue of JBR aims to foster the dissemination of high-quality big data studies on biological, medical and chemical databases, as well as new algorithms and software for big data processing. Databases and computing frameworks are selected to introduce the development of big data in biology, medicine and drug discovery. Mature, functional databases can serve big data research across scientific fields and help scientists extract useful, essential datasets from massive data. Grid computing and cloud computing supply a new paradigm that offers an effective framework of computing and services. Research papers are welcomed on practical databases and on new algorithms and software for big data studies. Such papers not only provide effective application methods and platforms, but also point to a promising future for big data research.
APA, Harvard, Vancouver, ISO, and other styles
3

Peterková, Andrea, and German Michaľčonok. "Preprocessing Raw Data in Clinical Medicine for a Data Mining Purpose." Research Papers Faculty of Materials Science and Technology Slovak University of Technology 24, no. 39 (December 1, 2016): 117–22. http://dx.doi.org/10.1515/rput-2016-0025.

Full text
Abstract:
Dealing with data from the field of medicine is currently both topical and difficult. On a global scale, a large amount of medical data is produced on an everyday basis. For the purpose of our research, we understand medical data as data about patients, such as results from laboratory analyses, results from screening examinations (CT, ECHO) and clinical parameters. These data are usually in a raw format: difficult to understand, non-standard and not suitable for further processing or analysis. This paper aims to describe a possible method of preparing and preprocessing such raw medical data into a form where further analysis algorithms can be applied.
APA, Harvard, Vancouver, ISO, and other styles
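The kind of preparation Peterková and Michaľčonok describe, turning raw, non-standard patient records into a form that analysis algorithms can accept, can be illustrated with a minimal sketch. The field names, units, and cleaning rules below are invented for illustration and are not taken from the paper:

```python
from statistics import median

def preprocess(records):
    """Clean raw clinical records: drop rows without an identifier,
    normalize a lab value reported in mixed units, and impute missing
    numeric values with the column median."""
    # Keep only records that carry a patient identifier.
    rows = [r for r in records if r.get("patient_id")]

    # Normalize a hypothetical haemoglobin value reported in g/l vs g/dl.
    for r in rows:
        if r.get("hb_unit") == "g/l" and r.get("hb") is not None:
            r["hb"] = r["hb"] / 10.0
            r["hb_unit"] = "g/dl"

    # Impute missing values with the median of the observed ones.
    observed = [r["hb"] for r in rows if r.get("hb") is not None]
    if observed:
        med = median(observed)
        for r in rows:
            if r.get("hb") is None:
                r["hb"] = med
    return rows
```

Real preprocessing pipelines add many more steps (outlier handling, code mapping, deduplication), but they follow this same shape: filter, standardize, impute.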
4

Volchenboum, Samuel L., Suzanne M. Cox, Allison Heath, Adam Resnick, Susan L. Cohn, and Robert Grossman. "Data Commons to Support Pediatric Cancer Research." American Society of Clinical Oncology Educational Book, no. 37 (May 2017): 746–52. http://dx.doi.org/10.1200/edbk_175029.

Full text
Abstract:
The falling costs and increasing fidelity of high-throughput biomedical research data have led to a renaissance in cancer surveillance and treatment. Yet the amount, velocity, and complexity of these data have outstripped the capacity of the increasing number of researchers collecting and analyzing this information. By centralizing the data, processing power, and tools, there is a valuable opportunity to share resources and thus increase the efficiency, power, and impact of research. Herein, we describe current data commons and how they operate in the oncology landscape, including an overview of the International Neuroblastoma Risk Group data commons as a paradigm case. We outline the practical steps and considerations in building data commons. Finally, we discuss the unique opportunities and benefits of creating a data commons within the context of pediatric cancer research, highlighting the particular advantages for clinical oncology and suggested next steps.
APA, Harvard, Vancouver, ISO, and other styles
5

Taylor, R., M. H. Ali, and I. Varley. "Automating the processing of data in research. A proof of concept using elasticsearch." International Journal of Surgery 55 (July 2018): S41. http://dx.doi.org/10.1016/j.ijsu.2018.05.179.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Qinglian, and Tao Qi. "Research of Digital Based on Network Model in the Fingerprint of Traditional Chinese Medicines (TCM)." ITM Web of Conferences 25 (2019): 01005. http://dx.doi.org/10.1051/itmconf/20192501005.

Full text
Abstract:
In this paper, a network model is used to simulate the fingerprint of TCM. The fingerprint is digitized, making it easy to store and display once the processed data are simplified. Digital processing technology and pattern recognition technology are applied to set up a standard fingerprint. Against this standard TCM fingerprint, the quality and authenticity of medicines can be identified, which is especially valuable when medicines are scarce. Based on the TCM fingerprint, testing of medicinal materials can be realized through comprehensive evaluation of internal quality and overall control of the whole material. This can improve the overall level of the TCM industry and promote the early development of the Chinese medicine industry onto the world stage.
APA, Harvard, Vancouver, ISO, and other styles
7

Krylov, A. "Statistical methods in medicine: basic knowledge for the internist." Terapevt (General Physician), no. 4 (April 1, 2020): 28–33. http://dx.doi.org/10.33920/med-12-2004-04.

Full text
Abstract:
The article describes methods of statistical analysis that a physician needs to know in order to conduct medical research. Involving internists in training in statistical methods for planning and data processing will improve their orientation among the variety of existing statistical methods and help them understand the importance and relevance of using statistics in medical research to create competent, personalized approaches to the patient.
APA, Harvard, Vancouver, ISO, and other styles
8

Fariha, Mutia. "Efektifitas Experiential Learning Method dalam Pembelajaran Pengolahan dan Analisis Data Penelitian Tindakan Kelas." Andragogi: Jurnal Diklat Teknis Pendidikan dan Keagamaan 8, no. 2 (December 31, 2020): 570–80. http://dx.doi.org/10.36052/andragogi.v8i2.178.

Full text
Abstract:
[EFFECTIVENESS OF THE EXPERIENTIAL LEARNING METHOD IN TEACHING DATA PROCESSING AND ANALYSIS FOR CLASSROOM ACTION RESEARCH]. This is qualitative research with a descriptive method which aims to determine the effectiveness of the Experiential Learning Method in teaching Data Processing and Analysis in Classroom Action Research (CAR) training, based on the perceptions of the participants. Effectiveness is judged by the match between the results obtained and the learning objectives. The research was conducted at a CAR training in Aceh Barat Daya with 40 research subjects. Data were collected using a Google Forms questionnaire and triangulated through interviews and observations of work documents. The results showed that, according to participants' perceptions, the application of the Experiential Learning Method in learning was effective, based on the achievement of learning outcomes. Applying the Experiential Learning Method can also increase motivation, provide experience in processing and analyzing data in real situations, and improve skills in processing and analyzing CAR data.
APA, Harvard, Vancouver, ISO, and other styles
9

Bernard, Francis, Clare Gallagher, Donald Griesdale, Andreas Kramer, Mypinder Sekhon, and Frederick A. Zeiler. "The CAnadian High-Resolution Traumatic Brain Injury (CAHR-TBI) Research Collaborative." Canadian Journal of Neurological Sciences / Journal Canadien des Sciences Neurologiques 47, no. 4 (March 16, 2020): 551–56. http://dx.doi.org/10.1017/cjn.2020.54.

Full text
Abstract:
In traumatic brain injury (TBI), future integration of multimodal monitoring of cerebral physiology and high-frequency signal processing techniques with advanced neuroimaging, proteomic and genomic analysis provides an opportunity to explore the molecular pathways involved in various aspects of cerebral physiologic dysfunction in vivo. The main obstacle to early and rapid discovery in this field of personalized medicine is the expertise and complexity of the data involved. This brief communication highlights the CAnadian High-Resolution Traumatic Brain Injury (CAHR-TBI) Research Collaborative, which has been formed from centers with specific expertise in high-frequency physiologic monitoring and processing, and outlines its objectives.
APA, Harvard, Vancouver, ISO, and other styles
10

Susanty, Aries, Bambang Purwanggono, Nia Budi Puspitasari, and Chellsy Allison. "Conjoint Analysis for Evaluation of Customer’s Preference of Analgesic Generic Medicines under Non-proprietary Names." E3S Web of Conferences 202 (2020): 12022. http://dx.doi.org/10.1051/e3sconf/202020212022.

Full text
Abstract:
The main objective of this research is to gain greater insight into customer preferences when purchasing analgesic generic medicines under non-proprietary names and to identify clusters with different preference structures. The research uses conjoint analysis (CA) and cluster analysis for data processing. Data were collected through a questionnaire from 200 respondents, chosen by convenience sampling from sixteen districts in Semarang. The conjoint analysis indicated that customers prefer an analgesic generic medicine under a non-proprietary name with the following attributes: a price 20% of that of the branded generic analgesic, a 15-minute onset time of effect, availability for purchase at a minimarket, syrup form, and family and friends as the source of information. Moreover, the most important attribute is the place of purchase, followed by price, onset time, the form of the drug, and the source of information. Finally, the clustering analysis indicated that the respondents can be grouped into four clusters; the attribute with the highest importance level in clusters 1 through 4 is 'form of drugs', 'the place of purchase', 'source of information', and 'price', respectively.
APA, Harvard, Vancouver, ISO, and other styles
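For readers unfamiliar with conjoint analysis, the attribute-importance ranking reported above is conventionally derived from part-worth utilities: each attribute's importance is the range of its level utilities divided by the sum of all attributes' ranges. A minimal sketch of that calculation, where the attributes and utility values are invented and are not the study's estimates:

```python
def attribute_importance(part_worths):
    """Relative importance of each attribute: the utility range across the
    attribute's levels divided by the sum of all attributes' ranges."""
    ranges = {a: max(u.values()) - min(u.values())
              for a, u in part_worths.items()}
    total = sum(ranges.values())
    return {a: r / total for a, r in ranges.items()}

# Hypothetical part-worth utilities for two of the study's attributes.
pw = {
    "place_of_purchase": {"pharmacy": -0.5, "minimarket": 0.5},
    "price": {"20% of branded": 0.3, "equal to branded": -0.3},
}
imp = attribute_importance(pw)
```

With these made-up utilities, "place_of_purchase" has range 1.0 and "price" range 0.6, so their importances are 0.625 and 0.375, mirroring the paper's finding that place of purchase outranks price.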
More sources

Dissertations / Theses on the topic "Medicine – Research – Data processing"

1

Suwarno, Neihl Omar 1963. "A computer based data acquisition and analysis system for a cardiovascular research laboratory." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/558111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liang, Yiheng. "Computational Methods for Discovering and Analyzing Causal Relationships in Health Data." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc804966/.

Full text
Abstract:
Publicly available datasets in health science are often large and observational, in contrast to experimental datasets, where a small amount of data is collected in controlled experiments. The causal relationships among variables in an observational dataset are yet to be determined. However, there is significant interest in health science in discovering and analyzing causal relationships from health data, since identified causal relationships will greatly help medical professionals prevent diseases or mitigate their negative effects. Recent advances in computer science, particularly in Bayesian networks, have initiated a renewed interest in causality research. Causal relationships can possibly be discovered through learning the network structures from data. However, the number of candidate graphs grows at a more than exponential rate with the number of variables, so exact learning of the optimal structure is computationally infeasible in practice. As a result, heuristic approaches are needed to alleviate the difficulty of the computations. This research provides effective and efficient learning tools for local causal discovery and novel methods of learning causal structures with a combination of background knowledge. Specifically, in the direction of constraint-based structural learning, polynomial-time algorithms for constructing causal structures are designed with first-order conditional independence. Algorithms for efficiently discovering non-causal factors are developed and proved. In addition, when the background knowledge is partially known, methods of graph decomposition are provided to reduce the number of conditioned variables. Experiments on both synthetic data and real epidemiological data indicate that the provided methods are applicable to large-scale datasets and scalable for causal analysis in health data.
Following the research methods and experiments, this dissertation gives thoughtful discussions of the reliability of causal discoveries in computational health science research, of complexity, and of the implications for health science research.
APA, Harvard, Vancouver, ISO, and other styles
3

Majeke, Lunga. "Preliminary investigation into estimating eye disease incidence rate from age specific prevalence data." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/464.

Full text
Abstract:
This study presents a methodology for estimating the incidence rate from age-specific prevalence data for three different eye diseases. We consider both situations in which mortality may differ from one person to another, with and without the disease. The method used was developed by Marvin J. Podgor for estimating incidence rates from prevalence data. It applies logistic regression to obtain smoothed prevalence rates, which then help in obtaining the incidence rate. The study concluded that the use of logistic regression can produce a meaningful model, and that the incidence rates of these diseases were not affected by the assumption of differential mortality.
APA, Harvard, Vancouver, ISO, and other styles
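The core relation behind Podgor's approach, for an irreversible disease and no differential mortality, links incidence to the slope of prevalence: i(a) = p'(a) / (1 - p(a)). When prevalence is smoothed with a logistic model p(a) = 1 / (1 + exp(-(b0 + b1*a))), the derivative is p'(a) = b1*p(a)*(1 - p(a)), so the incidence rate simplifies to i(a) = b1*p(a). A sketch under those assumptions, with illustrative coefficients:

```python
import math

def incidence_from_prevalence(b0, b1, age):
    """Incidence rate i(a) = p'(a) / (1 - p(a)) for logistic-smoothed
    prevalence p(a) = 1 / (1 + exp(-(b0 + b1*age))); algebraically this
    reduces to b1 * p(age)."""
    p = 1.0 / (1.0 + math.exp(-(b0 + b1 * age)))
    return b1 * p
```

For example, with b0 = -5 and b1 = 0.1, prevalence at age 50 is 0.5 and the implied incidence rate is 0.05 per person-year; under the no-differential-mortality assumption, incidence rises with age along with prevalence.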
4

Shi, H. (Henglin). "A GQM-based open research data technology evaluation method in open research context." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605221853.

Full text
Abstract:
Open Research Data is gaining popularity nowadays, and various research units and individuals are interested in joining this trend. However, owing to the variety of Open Research Data technologies, they find it difficult to select the proper ones for their specific requirements. Thus, a method for evaluating Open Research Data-related technologies is developed in this study to help researchers select the proper ones. Firstly, theoretical knowledge of the barriers to sharing and reusing research data is obtained from a structured literature review. From the 19 primary studies, 96 instances of existing barriers are identified and classified into seven categories, four of which concern research data sharing and the rest data reuse. This knowledge is regarded as an important resource for understanding researchers' requirements for Open Research Data technologies and is utilized to develop the technology evaluation method. Next, the Open Research Data Technology Evaluation Method (ORDTEM) is developed on the basis of the Goal/Question/Metric (GQM) approach and the identified sharing and reuse barriers. To develop this method, the GQM approach is adopted as the main skeleton for transforming these barriers into measurable criteria. The resulting ORDTEM, which consists of six GQM evaluation questions and 14 metrics, allows researchers to evaluate Open Research Data technologies. Furthermore, to validate the GQM-based ORDTEM, a focus group study is conducted in a workshop. For the workshop, nine researchers who need to participate in Open Research Data-related activities are recruited to form a focus group to discuss the resulting ORDTEM. By analysing the content of the discussion, 16 critical opinions are identified, which lead to eight improvements, including one refinement of an existing metric and seven new metrics for ORDTEM.
Lastly, a testing process applying ORDTEM to evaluate four selected Open Research Data technologies is carried out, also to validate whether the method can be used to solve real-world evaluation tasks. Beyond this validation, the experiment also produces materials on the usage of ORDTEM, which will be useful for future adopters. Beyond developing a solution that eases the selection of technologies for participating in the Open Research Data movement, this study makes two additional contributions. First, the identified barriers to sharing and reusing research data can direct future efforts to promote Open Research Data and Open Science. Moreover, the experience of utilizing the GQM approach to transform existing requirements into evaluation criteria can inform the development of other requirement-specific evaluations.
APA, Harvard, Vancouver, ISO, and other styles
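The Goal/Question/Metric chain that structures the ORDTEM can be sketched as a simple tree: a goal is refined into questions, and each question is made measurable through metrics. The goal, question, and metric texts below are invented placeholders, not the thesis's actual six questions and 14 metrics:

```python
# A minimal GQM tree: one goal, refined into questions, each made
# measurable through one or more metrics.
gqm = {
    "goal": "Select an Open Research Data technology that lowers sharing barriers",
    "questions": [
        {
            "question": "How easily can datasets be deposited?",
            "metrics": ["supported file formats", "upload size limit"],
        },
        {
            "question": "How well does the technology support reuse?",
            "metrics": ["metadata standard coverage", "licence clarity"],
        },
    ],
}

def all_metrics(tree):
    """Flatten the GQM tree into the full list of evaluation metrics."""
    return [m for q in tree["questions"] for m in q["metrics"]]
```

Scoring a candidate technology then amounts to rating it against each leaf metric and aggregating back up to the goal.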
5

Lee, Chi-hung, and 李志鴻. "An efficient content-based searching engine for medical image database." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31215506.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lynch, Kevin John. "Data manipulation in collaborative research systems." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184923.

Full text
Abstract:
This dissertation addresses data manipulation in collaborative research systems, including what data should be stored, the operations to be performed on that data, and a programming interface to effect this manipulation. Collaborative research systems are discussed, and requirements for next-generation systems are specified, incorporating a range of emerging technologies including multimedia storage and presentation, expert systems, and object-oriented database management systems. A detailed description of a generic query processor constructed specifically for one collaborative research system is given, and its applicability to next-generation systems and emerging technologies is examined. Chapter 1 discusses the Arizona Analyst Information System (AAIS), a successful collaborative research system being used at the University of Arizona and elsewhere. Chapter 2 describes the generic query processing approach used in the AAIS, as an efficient, nonprocedural, high-level programmer interface to databases. Chapter 3 specifies requirements for next-generation collaborative research systems that encompass the entire research cycle for groups of individuals working on related topics over time. These requirements are being used to build a next-generation collaborative research system at the University of Arizona called CARAT, for Computer Assisted Research and Analysis Tool. Chapter 4 addresses the underlying data management systems in terms of the requirements specified in Chapter 3. Chapter 5 revisits the generic query processing approach used in the AAIS, in light of the requirements of Chapter 3, and the range of data management solutions described in Chapter 4. Chapter 5 demonstrates the generic query processing approach as a viable one, for both the requirements of Chapter 3 and the DBMSs of Chapter 4. The significance of this research takes several forms. 
First, Chapters 1 and 3 provide detailed views of a current collaborative research system, and of a set of requirements for next-generation systems based on years of experience both using and building the AAIS. Second, the generic query processor described in Chapters 2 and 5 is shown to be an effective, portable programming language to database interface, ranging across the set of requirements for collaborative research systems as well as a number of underlying data management solutions.
APA, Harvard, Vancouver, ISO, and other styles
7

Woldeselassie, Tilahun. "A simple microcomputer-based nuclear medicine data processing system design and performance testing." Thesis, University of Aberdeen, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316066.

Full text
Abstract:
This thesis investigates the feasibility of designing a simple nuclear medicine data processing system based on an inexpensive microcomputer system, which is affordable to small hospitals and to developing countries where resources are limited. Since the main need for a computer is to allow dynamic studies to be carried out, the relevant criteria for choosing the computer are its speed and memory capacity. The benchmark chosen for these criteria is renography, one of the commonest nuclear medicine procedures. The Acorn Archimedes model 310 microcomputer was found to meet these requirements, and a suitable camera-computer interface has been designed. Because of the need for ensuring that the gain and offset controls of the interface are set optimally before connecting to the camera, it was necessary to design a circuit which produces a test pattern on the screen for use during this operation. Having also developed and tested the data acquisition and image display software successfully, attention was concentrated on finding ways of characterising and measuring the performance of the computer interface and the display device, two important areas which have been largely neglected in the quality control of camera-computer systems. One of the characteristics of the interface is its deadtime. A procedure has been outlined for measuring this by means of a variable frequency pulse generator and also for interpreting the data correctly. A theoretical analysis of the way in which the interface deadtime affects the overall count rate performance of the system has also been provided. The spatial linearity, resolution and uniformity characteristics of the interface are measured using a special dual staircase generator circuit designed to simulate the camera position and energy signals.
The test pattern set up on the screen consists of an orthogonal grid of points which can be used for a visual assessment of linearity, while analysis of the data in memory enables performance indices for resolution, linearity and uniformity to be computed. The thesis investigates the performance characteristics of display devices by means of radiometric measurements of screen luminance. These reveal that the relationship between screen luminance and display grey level value can be taken as quadratic. Characterisation of the display device in this way enables software techniques to be employed to ensure that screen luminance is a linear function of display grey level value; screen luminance measurements, coupled with film density measurements, are also used to optimise the settings of the display controls for using the film in the linear range of its optical densities. This in turn ensures that film density is a linear function of grey level value. An alternative approach for correcting for display nonlinearity is by means of an electronic circuit described in this thesis. Intensity coding schemes for improving the quality of grey scale images can be effective only if distortion due to the display device is corrected for. The thesis also draws attention to significant variations in film density which may have their origins in nonuniformities in the display screen, the recording film, or in the performance of the film processor. The work on display devices has been published in two papers.
APA, Harvard, Vancouver, ISO, and other styles
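Deadtime analysis of the kind described above conventionally distinguishes paralyzable and non-paralyzable counting systems. For the standard non-paralyzable model, the measured rate m relates to the true rate n by m = n / (1 + n*tau), which inverts to n = m / (1 - m*tau). A sketch of that correction, with an illustrative deadtime value; this is the textbook model, not the thesis's specific measurement procedure:

```python
def true_rate_nonparalyzable(measured_rate, dead_time):
    """Recover the true count rate n (counts/s) from the measured rate m
    for a non-paralyzable system with deadtime tau (s):
    m = n / (1 + n*tau)  =>  n = m / (1 - m*tau)."""
    loss = measured_rate * dead_time
    if loss >= 1.0:
        raise ValueError("measured rate is inconsistent with this deadtime")
    return measured_rate / (1.0 - loss)
```

For example, with tau = 1 microsecond a true rate of 50,000 counts/s is measured as about 47,619 counts/s, and the formula recovers the original rate from the measurement.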
8

Herzberg, Nico, and Mathias Weske. "Enriching raw events to enable process intelligence : research challenges." Universität Potsdam, 2013. http://opus.kobv.de/ubp/volltexte/2013/6401/.

Full text
Abstract:
Business processes are performed within a company's daily business, producing valuable data about process execution. The quantity and quality of these data depend strongly on the process execution environment, which ranges from predominantly manual to fully automated. Process improvement is one essential cornerstone of business process management, needed to ensure companies' competitiveness, and it relies on information about process execution. Especially in manual process environments, data directly related to process execution are rather sparse and incomplete. In this paper, we present an approach that supports the use and enrichment of process execution data with context data – data that exist orthogonally to business process data – and with knowledge from the corresponding process models, to provide a high-quality event base for process intelligence, subsuming, among others, process monitoring, process analysis, and process mining. Further, we discuss open issues and challenges that are subject to our future work.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhu, Hui, and 朱暉. "Deformable models and their applications in medical image processing." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31238075.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Halpin, Ross William. "A history of concern: the ethical dilemma of using Nazi medical research data in contemporary medical and scientific research." University of Sydney, 2008. http://hdl.handle.net/2123/4010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Medicine – Research – Data processing"

1

Association, Veterinary Marketing. Research and analysis: Systems usage in veterinary practice. Irchester, Northants: Veterinary Marketing Association, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Reichert, Kurt L. Automation & autoanalysis in medicine: Research & reference guidebook. Washington, D.C: Abbe Publishers Association, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Abell, Alphonse R. Recent advances of computers in medicine: Guidebook for reference & research. Washington, D.C: Abbe Publishers Association, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Elpida, Karavnou-Papiliou, and Shahar Yuval, eds. Temporal information systems in medicine. New York: Springer, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Weiner, John M. Knowledge utilization: Paths to creativity. Gallatin, TN: XXIV Century Press, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Alan, Cantor, ed. SAS survival analysis techniques for medical research. 2nd ed. Cary, NC, USA: SAS Pub., 2003.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

M, Gudnov V., Raushenbakh G. V, Kurochkina A. I, and Nauchnyĭ t͡sentr biologicheskikh issledovaniĭ (Akademii͡a nauk SSSR), eds. Materialy I Vsesoi͡uznoĭ shkoly-seminara Programmno-algoritmicheskoe obespechenie analiza dannykh v mediko-biologicheskikh issledovanii͡akh: 3-6 ii͡uni͡a 1985 g., Pushchino. Pushchino: Nauch. t͡sentr biologicheskikh issledovaniĭ AN SSSR v Pushchine, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Extending SAS survival analysis techniques for medical research. Cary, NC: SAS Institute, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Russ, Zajtchuk, Goeringer Fred, Mun Seong K, and United States. Army Medical Research and Materiel Command, eds. Proceedings of the National Forum: Military Telemedicine On-Line Today Research, Practice, and Opportunities, March 27-29, 1995, Mclean, Virginia. Los Alamitos, Calif: IEEE Computer Society Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cesario, Alfredo. Cancer Systems Biology, Bioinformatics and Medicine: Research and Clinical Applications. Dordrecht: Springer Science+Business Media B.V., 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Medicine – Research – Data processing"

1

Links, Jonathan M. "Nuclear Medicine Physics, Instrumentation, and Data Processing in Pharmaceutical Research." In Nuclear Imaging in Drug Discovery, Development, and Approval, 11–31. Boston, MA: Birkhäuser Boston, 1993. http://dx.doi.org/10.1007/978-1-4684-6808-3_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lalova, Teodora, Anastassia Negrouk, Laurent Dollé, Sofie Bekaert, Annelies Debucquoy, Jean-Jacques Derèze, Peggy Valcke, Els J. Kindt, and Isabelle Huys. "An Overview of Belgian Legislation Applicable to Biobank Research and Its Interplay with Data Protection Rules." In GDPR and Biobanking, 187–213. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-49388-2_10.

Full text
Abstract:
This contribution aims to present in a clear and concise manner the intricate legal framework for biobank research in Belgium. In Part 1, we describe the Belgian biobank infrastructure, with a focus on the concept of a biobank. In Part 2, we provide an overview of the applicable legal framework, namely the Act of 19 December 2008 on Human Body Material (HBM) and its amendments. Attention is given to an essential piece of self-regulation, namely the Compendium on biobanks issued by the Federal Agency for Medicines and Health Products (FAMHP). Furthermore, we delineate the interplay with relevant data protection rules. Part 3 is dedicated to the main research oversight bodies in the field of biobanking. In Part 4, we provide several examples of the ‘law in context’. In particular, we discuss issues pertaining to presumed consent, the processing of personal data associated with HBM, and the information provided to the donor of HBM. Finally, Parts 5 and 6 address the impact of the EU General Data Protection Regulation (GDPR), suggest lines for further research, and outline future possibilities for biobanking in Belgium.
APA, Harvard, Vancouver, ISO, and other styles
3

Durieux, Eric, and Luca Fiorani. "Data Processing." In Instrument Development for Atmospheric Research and Monitoring, 89–116. Berlin, Heidelberg: Springer Berlin Heidelberg, 1997. http://dx.doi.org/10.1007/978-3-662-03405-7_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Indrayan, Abhaya. "Processing of Data." In Research Methods for Medical Graduates, 181–207. Boca Raton, FL: CRC Press, 2019. http://dx.doi.org/10.1201/9780429435034-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nuyts, J. "Image Formation and Data Processing." In Diagnostic Nuclear Medicine, 237–50. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-662-06590-7_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

de Smit, Jacob. "Electronic Data Processing in Strategic Planning." In Operations Research Proceedings, 256. Berlin, Heidelberg: Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/978-3-642-73778-7_61.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ljubisavljevic, M., and M. B. Popovic. "Data Acquisition, Processing and Storage." In Modern Techniques in Neuroscience Research, 1277–311. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/978-3-642-58552-4_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Aksoy, Demet, Mehmet Altinel, Rahul Bose, Ugur Cetintemel, Michael Franklin, Jane Wang, and Stan Zdonik. "Research in Data Broadcast and Dissemination." In Advanced Multimedia Content Processing, 194–207. Berlin, Heidelberg: Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48962-2_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

De Prins, J., and B. Hecquet. "Data Processing in Chronobiological Studies." In Biologic Rhythms in Clinical and Laboratory Medicine, 90–113. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-78734-8_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bog, Anja. "Combined Transaction Processing and Reporting Benchmark." In In-Memory Data Management Research, 65–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38070-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Medicine – Research – Data processing"

1

"Research on Application of Large Data Processing Technology in Interactive Design." In 2017 3rd International Conference on Environment, Biology, Medicine and Computer Applications. Francis Academic Press, 2017. http://dx.doi.org/10.25236/icebmca.2017.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Veloso, L., J. McHugh, E. von Weltin, S. Lopez, I. Obeid, and J. Picone. "Big data resources for EEGs: Enabling deep learning research." In 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE, 2017. http://dx.doi.org/10.1109/spmb.2017.8257044.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Thiagarajan, Tara. "Brainbase: A research and data management platform for human EEG." In 2017 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE, 2017. http://dx.doi.org/10.1109/spmb.2017.8257017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

D'Souza, M. J., D. Wentzien, R. Bautista, J. Santana, M. Skivers, S. Stotts, and F. Fiedler. "Data-intensive Undergraduate Research Project Informs to Advance Healthcare Analytics." In 2018 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE, 2018. http://dx.doi.org/10.1109/spmb.2018.8615591.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Hongtao, Yuyan Zhang, Xinli Chi, and Xichang Han. "Research and Practice on Teaching Mode of Error Theory and Data Processing." In 2016 7th International Conference on Education, Management, Computer and Medicine (EMCM 2016). Paris, France: Atlantis Press, 2017. http://dx.doi.org/10.2991/emcm-16.2017.121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rahman, S., M. Miranda, I. Obeid, and J. Picone. "Software and Data Resources to Advance Machine Learning Research in Electroencephalography." In 2019 IEEE Signal Processing in Medicine and Biology Symposium (SPMB). IEEE, 2019. http://dx.doi.org/10.1109/spmb47826.2019.9037851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Song, Changxin, and Ke Ma. "Research on Hypertension Prediction and Diagnosis Based on Big Data." In DMIP '20: 2020 3rd International Conference on Digital Medicine and Image Processing. New York, NY, USA: ACM, 2020. http://dx.doi.org/10.1145/3441369.3441376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Medellín Castillo, Hugo I., and Manuel A. Ochoa Alfaro. "Development of a Tridimensional Visualization and Model Reconstruction System Based on Computed Tomographic Data." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-62822.

Full text
Abstract:
Medical image processing constitutes an important research area of biomedical engineering, since it provides accurate human body information for 3D visualization and analysis, diagnosis, surgical treatment planning, surgical training, prosthesis and implant design, and the design of wafers and surgical guides. Computed tomography (CT) and magnetic resonance imaging (MRI) have had a great impact in medicine, since they can represent complex three-dimensional (3D) anomalies or deformities. In this paper, the development of a system for tridimensional visualization and model reconstruction based on CT data is presented. The aim is to provide a system capable of assisting the design of prostheses, implants, and surgical guides by reconstructing anatomical 3D models that can be exported to any CAD program or computer-aided surgery (CAS) system. A complete description of the proposed system is presented. The new system is able to visualize and reconstruct bone and/or soft tissues. Three types of renderers are used: one for 3D visualization based on three planes, one for 3D surface reconstruction based on the well-known marching cubes algorithm, and one for 3D volume visualization based on the ray-casting algorithm. The functionality and performance of the system are evaluated by means of four case studies. The results have proved the capability of the system to visualize and reconstruct anatomical 3D models from medical images.
APA, Harvard, Vancouver, ISO, and other styles
9

Kotina, Elena D., Dmitri A. Ovsyannikov, Victor A. Ploskikh, Victor N. Latipov, Andrey V. Babin, and Alexander Yu Shirokolobov. "Data processing in nuclear medicine." In 2014 20th International Workshop on Beam Dynamics and Optimization (BDO). IEEE, 2014. http://dx.doi.org/10.1109/bdo.2014.6890037.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cepraga, D. G., G. Cambi, M. Frisoni, and D. Ene. "Cemented Containers Radiological Data From a Disused Uranium Mine Low-Level Waste Repository: A Calculated-Experiment Cross-Check for Data Verification and Validation." In ASME 2003 9th International Conference on Radioactive Waste Management and Environmental Remediation. ASMEDC, 2003. http://dx.doi.org/10.1115/icem2003-4516.

Full text
Abstract:
Code validation involves the calculation of experiments and a comparison between experimental and calculated results. Experimental data and the physical properties of these systems are used to determine the range of applicability of the validation. Once a sequence of calculation codes has been validated, it has to be underlined that the comparison of experimental and calculated results involving “complex systems” or “complex experimental measures” also permits a bilateral cross-check between the calculation scheme and the experimental procedures. This paper presents the results of the testing and validation effort related to the collection of information and measured data and the comparison of code results with experimental data coming from a low-level waste repository. The Baita-Bihor repository, sited in a disused former uranium mine in Transylvania, was considered as the source of experimental data. The study was developed through the following steps: a) collection and processing of measured data (radioactivity content and dose rate) from the cemented containers of the Baita-Bihor repository; b) decay gamma source calculation by the ANITA-2000 code package (the input data for the calculations are the measured isotope activities for each container); c) decay gamma transport calculation by the SCALENEA-1 shielding Sn sequence approach (Nitawl-Xsdrnpm-Xsdose modules of the Scale 4.4a code system, using the Vitenea-J library, based on FENDL/E-2 data) to obtain dose rates on the surfaces and at various points outside the containers; d) comparison of experimental and calculated dose rates, also taking into account the measurement uncertainties. The new version of the ANITA-2000 activation code package makes it possible to assess the behaviour of irradiated materials independently of any knowledge of the irradiation scenario, using only data on the isotopic composition of the radioactive material.
Radioactive waste disposed of at the Baita-Bihor repository consists of worn reactor parts, resins and filters, packing materials, mop heads, protective clothing, temporary floor coverings, and tools: the sources normally generated during the day-to-day operation of research reactors, remediation-treatment stations, and medical and biological activities. The low- and intermediate-level wastes are prepared for shipping and disposal in the treatment stations by confining them in a cement matrix inside 220-litre metallic drums. Each container consists of an iron cladding filled with Portland concrete. The radioisotope composition and radioactivity distributions inside the drum are measured by gamma spectroscopy. The calibration technique was based on the assumption of a uniform distribution of the source activity in the drum and also of a uniform sample matrix. Dose rate measurements are taken continuously and circularly, in the central plane, on the surface of the drum and 1 m from the surface, in the air. A “stuffing factor” model was adopted to simulate, for the calculation, the spatial distribution of the gamma sources in the concrete region. In order to guarantee complete quality assurance for codes and procedures, a simulation of the radioactive containers to evaluate the dose rates was also performed using the Monte Carlo MCNP-4C code. Its results are in very good agreement with those obtained by the Sn approach (discrepancies are around 2%, using the spherical approximation).
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Medicine – Research – Data processing"

1

Leavy, Michelle B., Danielle Cooke, Sarah Hajjar, Erik Bikelman, Bailey Egan, Diana Clarke, Debbie Gibson, Barbara Casanova, and Richard Gliklich. Outcome Measure Harmonization and Data Infrastructure for Patient-Centered Outcomes Research in Depression: Report on Registry Configuration. Agency for Healthcare Research and Quality (AHRQ), November 2020. http://dx.doi.org/10.23970/ahrqepcregistryoutcome.

Full text
Abstract:
Background: Major depressive disorder is a common mental disorder. Many pressing questions regarding depression treatment and outcomes exist, and new, efficient research approaches are necessary to address them. The primary objective of this project is to demonstrate the feasibility and value of capturing the harmonized depression outcome measures in the clinical workflow and submitting these data to different registries. Secondary objectives include demonstrating the feasibility of using these data for patient-centered outcomes research and developing a toolkit to support registries interested in sharing data with external researchers. Methods: The harmonized outcome measures for depression were developed through a multi-stakeholder, consensus-based process supported by AHRQ. For this implementation effort, the PRIME Registry, sponsored by the American Board of Family Medicine, and PsychPRO, sponsored by the American Psychiatric Association, each recruited 10 pilot sites from existing registry sites, added the harmonized measures to the registry platform, and submitted the project for institutional review board review. Results: The process of preparing each registry to calculate the harmonized measures produced three major findings. First, some clarifications were necessary to make the harmonized definitions operational. Second, some data necessary for the measures are not routinely captured in structured form (e.g., PHQ-9 item 9, adverse events, suicide ideation and behavior, and mortality data). Finally, capture of the PHQ-9 requires operational and technical modifications. The next phase of this project will focus on collection of the baseline and follow-up PHQ-9s, as well as other supporting clinical documentation. In parallel with the data collection process, the project team will examine the feasibility of using natural language processing to extract information on PHQ-9 scores, adverse events, and suicidal behaviors from unstructured data.
Conclusion: This pilot project represents the first practical implementation of the harmonized outcome measures for depression. Initial results indicate that it is feasible to calculate the measures within the two patient registries, although some challenges were encountered related to the harmonized definition specifications, the availability of the necessary data, and the clinical workflow for collecting the PHQ-9. The ongoing data collection period, combined with an evaluation of the utility of natural language processing for these measures, will produce more information about the practical challenges, value, and burden of using the harmonized measures in the primary care and mental health setting. These findings will be useful to inform future implementations of the harmonized depression outcome measures.
APA, Harvard, Vancouver, ISO, and other styles
2

Cheng, Yi-Wen, and Christian L. Sargent. Data-reduction and analysis procedures used in NIST's thermomechanical processing research. Gaithersburg, MD: National Institute of Standards and Technology, 1990. http://dx.doi.org/10.6028/nist.ir.3950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Duchesne, M. J., S. G. Kang, and U. Jang. 2017 Korea-Canada-U.S.A. Beaufort Sea (offshore Yukon and Northwest Territories) research program: processing of 2-D seismic data collected during expedition ARA08C. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 2019. http://dx.doi.org/10.4095/313533.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Salter, R., Quyen Dong, Cody Coleman, Maria Seale, Alicia Ruvinsky, LaKenya Walker, and W. Bond. Data Lake Ecosystem Workflow. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40203.

Full text
Abstract:
The Engineer Research and Development Center, Information Technology Laboratory’s (ERDC-ITL’s) Big Data Analytics team specializes in the analysis of large-scale datasets with capabilities across four research areas that require vast amounts of data to inform and drive analysis: large-scale data governance, deep learning and machine learning, natural language processing, and automated data labeling. Unfortunately, data transfer between government organizations is a complex and time-consuming process requiring coordination of multiple parties across multiple offices and organizations. Past successes in large-scale data analytics have placed a significant demand on ERDC-ITL researchers, highlighting that few individuals fully understand how to successfully transfer data between government organizations; future project success therefore depends on a small group of individuals to efficiently execute a complicated process. The Big Data Analytics team set out to develop a standardized workflow for the transfer of large-scale datasets to ERDC-ITL, in part to educate peers and future collaborators on the process required to transfer datasets between government organizations. Researchers also aim to increase workflow efficiency while protecting data integrity. This report provides an overview of the created Data Lake Ecosystem Workflow by focusing on the six phases required to efficiently transfer large datasets to supercomputing resources located at ERDC-ITL.
APA, Harvard, Vancouver, ISO, and other styles
5

Müller, Thomas. The 1946 - 1956 Hydrographic Data Archive at the Institut für Meereskunde, Kiel, digitized: a data guide. GEOMAR, 2021. http://dx.doi.org/10.3289/geomar_rep_ns_58_2021.

Full text
Abstract:
This report is intended as a guide to early hydrographic log sheets of bottle data obtained by the former Institut für Meereskunde, Kiel (IFMK, now integrated into GEOMAR) in the post-war years 1946 to 1956, which in summer 2018, when a building used by GEOMAR was to be cleared, were not available in digitized format at GEOMAR. The data were mostly taken by the research cutter FK “Südfall” in the Baltic. It turned out that some of these data from 1950 to 1956 were available in digitized form in the online data bank of the International Council for the Exploration of the Sea (ICES). Comparison with the original logged data sheets, however, showed that they needed to be improved with respect to time and position and to be completed with missing data. This report briefly describes the methods of sampling and measuring these old data, and the processing steps applied to improve the data set by using the data log sheets before archiving and submitting the now improved and complete data set to data centres.
APA, Harvard, Vancouver, ISO, and other styles
6

Lazdane, Gunta, Dace Rezeberga, Ieva Briedite, Elizabete Pumpure, Ieva Pitkevica, Darja Mihailova, and Marta Laura Gravina. Sexual and reproductive health in the time of COVID-19 in Latvia, qualitative research interviews and focus group discussions, 2020 (in Latvian). Rīga Stradiņš University, February 2021. http://dx.doi.org/10.25143/fk2/lxku5a.

Full text
Abstract:
Qualitative research is focused on the influence of the COVID-19 pandemic and restriction measures on sexual and reproductive health in Latvia. Results of the anonymous online survey (I-SHARE) of 1173 people living in Latvia, age 18 and over, were used as background in finalizing the interview and focus group discussion protocols, ensuring a better understanding of the influencing factors. Protocols included 9 parts: 0. Introduction; 1. COVID-19 general influence; 2. SRH; 3. Communication with health professionals; 4. Access to SRH services; 5. Communication with the population, incl. three target groups: 5.1. pregnant women, 5.2. people with suspected STIs, 5.3. women who require abortion; 6. HIV/COVID-19; 7. External support; 8. Conclusions and recommendations. Data include audio records in Latvian of: 1) 11 semi-structured interviews with policy makers, including representatives from governmental and non-governmental organizations involved in sexual and reproductive health information and health service provision; 2) 12 focus group discussions with pregnant women (1), women in the postpartum period (3) and their partners (3), people living with HIV (1), and health care providers involved in maternal health care and emergency health care for women (4). (2021-02-18) Subject: Medicine, Health and Life Sciences. Keywords: Sexual and reproductive health, COVID-19, access to services, Latvia
APA, Harvard, Vancouver, ISO, and other styles
7

Murdick, Dewey, Daniel Chou, Ryan Fedasiuk, and Emily Weinstein. The Public AI Research Portfolio of China’s Security Forces. Center for Security and Emerging Technology, March 2021. http://dx.doi.org/10.51593/20200057.

Full text
Abstract:
New analytic tools are used in this data brief to explore the public artificial intelligence (AI) research portfolio of China’s security forces. The methods contextualize Chinese-language scholarly papers that claim a direct working affiliation with components of the Ministry of Public Security, People's Armed Police Force, and People’s Liberation Army. The authors review potential uses of computer vision, robotics, natural language processing and general AI research.
APA, Harvard, Vancouver, ISO, and other styles
8

Lasko, Kristofer, and Sean Griffin. Monitoring Ecological Restoration with Imagery Tools (MERIT) : Python-based decision support tools integrated into ArcGIS for satellite and UAS image processing, analysis, and classification. Engineer Research and Development Center (U.S.), April 2021. http://dx.doi.org/10.21079/11681/40262.

Full text
Abstract:
Monitoring the impacts of ecosystem restoration strategies requires both short-term and long-term land surface monitoring. The combined use of unmanned aerial systems (UAS) and satellite imagery enable effective landscape and natural resource management. However, processing, analyzing, and creating derivative imagery products can be time consuming, manually intensive, and cost prohibitive. In order to provide fast, accurate, and standardized UAS and satellite imagery processing, we have developed a suite of easy-to-use tools integrated into the graphical user interface (GUI) of ArcMap and ArcGIS Pro as well as open-source solutions using NodeOpenDroneMap. We built the Monitoring Ecological Restoration with Imagery Tools (MERIT) using Python and leveraging third-party libraries and open-source software capabilities typically unavailable within ArcGIS. MERIT will save US Army Corps of Engineers (USACE) districts significant time in data acquisition, processing, and analysis by allowing a user to move from image acquisition and preprocessing to a final output for decision-making with one application. Although we designed MERIT for use in wetlands research, many tools have regional or global relevancy for a variety of environmental monitoring initiatives.
APA, Harvard, Vancouver, ISO, and other styles
9

DiGrande, Laura, Christine Bevc, Jessica Williams, Lisa Carley-Baxter, Craig Lewis-Owen, and Suzanne Triplett. Pilot Study on the Experiences of Hurricane Shelter Evacuees. RTI Press, September 2019. http://dx.doi.org/10.3768/rtipress.2019.rr.0035.1909.

Full text
Abstract:
Community members who evacuate to shelters may represent the most socially and economically vulnerable group within a hurricane’s affected geographic area. Disaster research has established associations between socioeconomic conditions and adverse effects, but data are overwhelmingly collected retrospectively on large populations and lack further explication. As Hurricane Florence approached North Carolina in September 2018, RTI International developed a pilot survey for American Red Cross evacuation shelter clients. Two instruments, an interviewer-led paper questionnaire and a short message service (SMS text) questionnaire, were tested. A total of 200 evacuees completed the paper survey, but only 34 participated in the SMS text portion of the study. Data confirmed that the sample represented very marginalized coastline residents: 60 percent were unemployed, 70 percent had no family or friends to stay with during evacuation, 65 percent could not afford to evacuate to another location, 36 percent needed medicine/medical care, and 11 percent were homeless. Although 19 percent of participants had a history of evacuating for prior hurricanes/disasters and 14 percent had previously utilized shelters, we observed few associations between previous experiences and current evacuation resources, behaviors, or opinions about safety. This study demonstrates that, for vulnerable populations exposed to storms of increasing intensity and frequency, traditional survey research methods are best employed to learn about their experiences and needs.
APA, Harvard, Vancouver, ISO, and other styles
10

Dy, Sydney M., Arjun Gupta, Julie M. Waldfogel, Ritu Sharma, Allen Zhang, Josephine L. Feliciano, Ramy Sedhom, et al. Interventions for Breathlessness in Patients With Advanced Cancer. Agency for Healthcare Research and Quality (AHRQ), November 2020. http://dx.doi.org/10.23970/ahrqepccer232.

Full text
Abstract:
Objectives. To assess benefits and harms of nonpharmacological and pharmacological interventions for breathlessness in adults with advanced cancer. Data sources. We searched PubMed®, Embase®, CINAHL®, ISI Web of Science, and the Cochrane Central Register of Controlled Trials through early May 2020. Review methods. We included randomized controlled trials (RCTs) and observational studies with a comparison group evaluating benefits and/or harms, and cohort studies reporting harms. Two reviewers independently screened search results, serially abstracted data, assessed risk of bias, and graded strength of evidence (SOE) for key outcomes: breathlessness, anxiety, health-related quality of life, and exercise capacity. We performed meta-analyses when possible and calculated standardized mean differences (SMDs). Results. We included 48 RCTs and 2 retrospective cohort studies (4,029 patients). The most commonly reported cancer types were lung cancer and mesothelioma. The baseline level of breathlessness varied in severity. Several nonpharmacological interventions were effective for breathlessness, including fans (SMD -2.09 [95% confidence interval (CI) -3.81 to -0.37]) (SOE: moderate), bilevel ventilation (estimated slope difference -0.58 [95% CI -0.92 to -0.23]), acupressure/reflexology, and multicomponent nonpharmacological interventions (behavioral/psychoeducational combined with activity/rehabilitation and integrative medicine). For pharmacological interventions, opioids were not more effective than placebo (SOE: moderate) for improving breathlessness (SMD -0.14 [95% CI -0.47 to 0.18]) or exercise capacity (SOE: moderate); most studies were of exertional breathlessness. Different doses or routes of administration of opioids did not differ in effectiveness for breathlessness (SOE: low). Anxiolytics were not more effective than placebo for breathlessness (SOE: low). Evidence for other pharmacological interventions was limited. 
Opioids, bilevel ventilation, and activity/rehabilitation interventions had some harms compared to usual care. Conclusions. Some nonpharmacological interventions, including fans, acupressure/reflexology, multicomponent interventions, and bilevel ventilation, were effective for breathlessness in advanced cancer. Evidence did not support opioids or other pharmacological interventions within the limits of the identified studies. More research is needed on when the benefits of opioids may exceed harms for broader, longer term outcomes related to breathlessness in this population.
APA, Harvard, Vancouver, ISO, and other styles