
Journal articles on the topic 'Electronic data processing – Data entry'


Consult the top 50 journal articles for your research on the topic 'Electronic data processing – Data entry.'


1

Guerra Chala, Bárbara, Cíntia Burille, and Lucas Moreschi Paulo. "The Protection of Consumer’s Personal Data and the Electronic Geodiscrimination Practice." Revista da Faculdade de Direito da Universidade Federal de Uberlândia 49, no. 1 (September 7, 2021): 709–31. http://dx.doi.org/10.14393/rfadir-v49n1a2021-62777.

Abstract:
The purpose of this study is to analyse the General Data Protection Law from the perspective of the protection of consumers' personal data, with a view to ascertaining the main aspects of the legislation and verifying its impact on the practices of geopricing and geoblocking. To that end, it begins by addressing the principles of the new legislation that govern the processing of personal data. It then presents the main structural axes of the law, focusing on the aspects that concern the processing of consumer data. Finally, it examines the practices of geodiscrimination, assessing their legal treatment and how they may be affected by the entry into force of the General Data Protection Law. The hypothetico-deductive method and the bibliographic research technique were adopted. The study finds that the new data protection legislation strengthens the protection of consumers' rights against the practices of geopricing and geoblocking, insofar as the law was designed to prevent data subjects from being left uninformed about the purposes for which their information is processed, to bar the illegitimate processing of personal data, and to provide redress to consumers who suffer damage from the handling of their personal data.
2

Meystre, S. "Electronic Patient Records: Some Answers to the Data Representation and Reuse Challenges." Yearbook of Medical Informatics 16, no. 01 (August 2007): 47–48. http://dx.doi.org/10.1055/s-0038-1638525.

Abstract:
To summarize current excellent research in the field of patient records. Synopsis of the papers selected for the IMIA Yearbook 2007. The Electronic Patient Record encompasses a broad field of research and development. Some current research topics were selected for this IMIA Yearbook: EHR representation and communication standards, and secondary uses of clinical data for research and decision support. Four excellent papers representing the research in those fields were selected for the Patient Records section. The best papers selected for this section focus on the analysis and comparison of two important clinical document representation standards, on direct structured data entry, on the use of Natural Language Processing to detect adverse events, and on the development and evaluation of a clinical text corpus annotated for part-of-speech information.
3

Sukumar, Sreenivas R., Ramachandran Natarajan, and Regina K. Ferrell. "Quality of Big Data in health care." International Journal of Health Care Quality Assurance 28, no. 6 (July 13, 2015): 621–34. http://dx.doi.org/10.1108/ijhcqa-07-2014-0080.

Abstract:
Purpose – The current trend in Big Data analytics, and in health information technology in particular, is toward building sophisticated models, methods and tools for business, operational and clinical intelligence. However, the critical issue of the data quality required for these models is not getting the attention it deserves. The purpose of this paper is to highlight the issues of data quality in the context of Big Data health care analytics. Design/methodology/approach – The insights presented in this paper are the results of analytics work that was done in different organizations on a variety of health data sets. The data sets include Medicare and Medicaid claims, provider enrollment data sets from both public and private sources, and electronic health records from regional health centers accessed through partnerships with health care claims processing entities under health privacy protected guidelines. Findings – Assessment of data quality in health care has to consider: first, the entire lifecycle of health data; second, problems arising from errors and inaccuracies in the data itself; third, the source(s) and the pedigree of the data; and fourth, how the underlying purpose of data collection impacts the analytic processing and the knowledge expected to be derived. Automation in the form of data handling, storage, entry and processing technologies is to be viewed as a double-edged sword: at one level automation can be a good solution, while at another level it can create a different set of data quality issues. Implementation of health care analytics with Big Data is enabled by a road map that addresses the organizational and technological aspects of data quality assurance. Practical implications – The value derived from the use of analytics should be the primary determinant of data quality. Based on this premise, health care enterprises embracing Big Data should have a road map for a systematic approach to data quality. Health care data quality problems can be so specific that organizations might have to build their own custom software or data quality rule engines. Originality/value – Today, data quality issues are diagnosed and addressed in a piece-meal fashion. The authors recommend a data lifecycle approach and provide a road map that is better matched to the dimensions of Big Data and fits the different stages of the analytical workflow.
4

Dewri, Rinku, Toan Ong, and Ramakrishna Thurimella. "Linking Health Records for Federated Query Processing." Proceedings on Privacy Enhancing Technologies 2016, no. 3 (July 1, 2016): 4–23. http://dx.doi.org/10.1515/popets-2016-0013.

Abstract:
A federated query portal in an electronic health record infrastructure enables large epidemiology studies by combining data from geographically dispersed medical institutions. However, an individual's health record may be distributed across multiple carrier databases in local settings. Privacy regulations may prohibit a data source from revealing clear-text identifiers, thereby making it non-trivial for a query aggregator to determine which records correspond to the same underlying individual. In this paper, we explore this problem of privately detecting and tracking the health records of an individual in a distributed infrastructure. We begin with a secure set intersection protocol based on commutative encryption, and show how to make it practical on comparison spaces as large as 10^10 pairs. Using bigram matching, precomputed tables, and data parallelism, we successfully reduced the execution time to a matter of minutes, while retaining a high degree of accuracy even in records with data entry errors. We also propose techniques to prevent the inference of identifier information when knowledge of underlying data distributions is known to an adversary. Finally, we discuss how records can be tracked utilizing the detection results during query processing.
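The commutative-encryption building block described here is straightforward to sketch. In the toy Python below, both keys live in one function purely for illustration (in the real protocol each party keeps its own key and only exchanges encrypted values); the Mersenne-prime modulus and SHA-256 hashing are assumptions of this sketch, not details from the paper:

```python
import hashlib
import secrets

P = 2**127 - 1  # illustrative Mersenne prime; a real deployment needs a vetted group


def h(identifier):
    """Hash an identifier (or an identifier bigram) into the group."""
    return int.from_bytes(hashlib.sha256(identifier.encode()).digest(), "big") % P


def enc(value, key):
    """Commutative 'encryption': enc(enc(x, a), b) == enc(enc(x, b), a) mod P."""
    return pow(value, key, P)


def private_intersection(set_a, set_b):
    """Toy PSI: both parties' keys are generated here only for demonstration."""
    ka = secrets.randbelow(P - 2) + 1  # party A's secret exponent
    kb = secrets.randbelow(P - 2) + 1  # party B's secret exponent
    # A sends enc_ka(h(x)); B re-encrypts to enc_kb(enc_ka(h(x))) — and vice versa.
    a_double = {enc(enc(h(x), ka), kb) for x in set_a}
    b_double = {enc(enc(h(y), kb), ka) for y in set_b}
    # Matching doubly-encrypted values identify shared records without
    # either party revealing clear-text identifiers.
    return a_double & b_double
```

Because (x^ka)^kb = (x^kb)^ka mod P, the two parties can compare doubly-encrypted values directly; the paper's bigram matching extends this idea to tolerate data entry errors.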
5

Rousseau, Justin, Ivan Ip, Ali Raja, Vladimir Valtchinov, Laila Cochon, Jeremiah Schuur, and Ramin Khorasani. "Can Automated Retrieval of Data from Emergency Department Physician Notes Enhance the Imaging Order Entry Process?" Applied Clinical Informatics 10, no. 02 (March 2019): 189–98. http://dx.doi.org/10.1055/s-0039-1679927.

Abstract:
Background When a paucity of clinical information is communicated from ordering physicians to radiologists at the time of radiology order entry, suboptimal imaging interpretations and patient care may result. Objectives Compare documentation of relevant clinical information in electronic health record (EHR) provider note to computed tomography (CT) order requisition, prior to ordering of head CT for emergency department (ED) patients presenting with headache. Methods In this institutional review board-approved retrospective observational study performed between April 1, 2013 and September 30, 2014 at an adult quaternary academic hospital, we reviewed data from 666 consecutive ED encounters for patients with headaches who received head CT. The primary outcome was the number of concept unique identifiers (CUIs) relating to headache extracted via ontology-based natural language processing from the history of present illness (HPI) section in ED notes compared with the number of concepts obtained from the imaging order requisition. Results Our analysis was conducted on cases where the HPI note section was completed prior to image order entry, which occurred in 23.1% (154/666) of encounters. For these 154 encounters, the number of CUIs specific to headache per note extracted from the HPI (median = 3, interquartile range [IQR]: 2–4) was significantly greater than the number of CUIs per encounter obtained from the imaging order requisition (median = 1, IQR: 1–2; Wilcoxon signed rank p < 0.0001). Extracted concepts from notes were distinct from order requisition indications in 92.9% (143/154) of cases. Conclusion EHR provider notes are a valuable source of relevant clinical information at the time of imaging test ordering. 
Automated extraction of clinical information from notes to prepopulate imaging order requisitions may improve communication between ordering physicians and radiologists, enhance the efficiency of the ordering process by reducing redundant data entry, and improve the clinical relevance of clinical decision support at the time of order entry, potentially reducing provider burnout from extraneous alerts.
6

KANEKO, Haruhiko, and Eiji FUJIWARA. "M-Ary Substitution/Deletion/Insertion/Adjacent-Symbol-Transposition Error Correcting Codes for Data Entry Systems." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences E92-A, no. 7 (2009): 1668–76. http://dx.doi.org/10.1587/transfun.e92.a.1668.

7

Haile-Mariam, M., E. Schelfhorst, and M. E. Goddard. "Effect of data collection methods on the availability of calving ease, fertility and herd health data for evaluating Australian dairy cattle." Australian Journal of Experimental Agriculture 47, no. 6 (2007): 664. http://dx.doi.org/10.1071/ea05267.

Abstract:
There is concern in the Australian dairy industry that the fertility, calving ease and disease resistance of cows is declining and that this decline is, at least in part, a genetic change. Improvement in these traits might be achieved through better herd management and genetic selection. Both these strategies are dependent on the availability of suitable data. The Australian Dairy Herd Improvement Scheme publishes estimated breeding values for fertility, calving ease and somatic cell count. However, the accuracy of the estimated breeding values is limited by the amount and quality of data collected. This paper reports on a project conducted to identify a more efficient system for collecting non-production data, with the hypothesis that the quantity and quality of data collected would improve if farmers used electronic data collection methods instead of ‘traditional’ methods, such as writing in a notebook. Of 78 farmers involved in the trial, 51 used a PALM handheld (PALM group), 18 wrote data on paper and later entered it in their farm computer (PC group) and nine submitted a paper record to their data processing centres for entry into the centres’ computers (PAPER group). Data collected from these 78 trial herds during the trial period (2002–04) were compared to data collected from 88 similar non-trial farms, which kept records on PC or paper. The ratio of the number of events (health, calving ease or fertility) recorded to the number of calvings was considered as a measure of the level of recording. The results showed that, after adjusting for location and level of recording before the trial started, the PALM group collected significantly more calving ease, pregnancy test and other fertility data per calving than farmers who were not involved in the trial and the PAPER and PC groups.
The number of records collected by the PALM group increased from 0.13 pregnancy tests per calving in 2001 to 0.36 in 2004, whereas there was little change in the amount of data collected by the other groups. Similarly, the number of calving ease records increased from 0.26 in 2001 to 0.33 in 2004 and the number of heats recorded increased from 0.02 in 2001 to 0.12 in 2004. This increase in data capture among farmers using the PALM was partly due to an increase in the number of farmers who submitted any data at all. For instance, of the PALM group, 86% sent data on calving ease and 61% on pregnancy, as compared to the PC and PAPER groups (below 57%) or those who were not involved in the trial (below 44%). When only farmers who submitted at least one record of each type of data are considered, farmers in the PALM group still submitted significantly more fertility event data than those who were not involved in the trial and those in the PAPER group. The quality of the data did not appear to be affected by the data collection methods, though the completeness of the mating data was better for PALM and PC users. The use of electronic data entry on farms would increase the amount of data available for the calculation of estimated breeding values and hence the accuracy of these values for fertility, calving ease and health traits.
8

Al-Shahir, Ali Abdul Fatah. "Employ information technology capabilities in building a data warehouse Organization." Journal of University of Human Development 2, no. 1 (January 31, 2016): 273. http://dx.doi.org/10.21928/juhd.v2n1y2016.pp273-295.

Abstract:
In the first step of the paper, the researcher seeks to establish the concept of information technology capabilities and to identify their types, after presenting the views of writers and researchers concerning them. In the second step, a working concept of information technology capabilities is proposed. The third step presents the concept of a data warehouse and its characteristics for the organization, again in light of the views of writers and researchers, and concludes with a procedural definition of the data warehouse, its architecture, and its data modeling process. The fourth step assesses the actual state of information technology capabilities in the company surveyed, using a checklist as the tool for collecting data and information, and proposes a model for building a data warehouse in the Home Furniture Company. The researcher reached a number of conclusions, chief among them the feasibility of building a data warehouse in the company surveyed: it would help achieve client satisfaction by reducing the time needed to provide information, owing to the system's ability to store data and information of the required type and quantity in an orderly fashion, while avoiding redundancy in data collection, entry, and processing. In light of this, the researcher recommends expanding reliance on electronic systems, addressing weaknesses in the IT infrastructure, and conducting new research in other organizations to draw managers' attention to the importance of the data warehouse and its potential contribution to the success of their organizations.
9

Madhavan, Ramesh, Chi Tang, Pratik Bhattacharya, Fadi Delly, and Maysaa M. Basha. "Evaluation of Documentation Patterns of Trainees and Supervising Physicians Using Data Mining." Journal of Graduate Medical Education 6, no. 3 (September 1, 2014): 577–80. http://dx.doi.org/10.4300/jgme-d-13-00267.1.

Abstract:
Background The electronic health record (EHR) includes a rich data set that may offer opportunities for data mining and natural language processing to answer questions about quality of care, key aspects of resident education, or attributes of the residents' learning environment. Objective We used data obtained from the EHR to report on inpatient documentation practices of residents and attending physicians at a large academic medical center. Methods We conducted a retrospective observational study of deidentified patient notes entered over 7 consecutive months by a multispecialty university physician group at an urban hospital. A novel automated data mining technology was used to extract patient note–related variables. Results A sample of 26,802 consecutive patient notes was analyzed using the data mining and modeling tool Healthcare Smartgrid. Residents entered most of their notes between noon and 4 pm (33%, 8178 of 24,787) and between 8 am and noon (31%, 7718 of 24,787). Attending physicians placed teaching attestation notes within 24 hours in only 73% (17,843 of 24,443) of the records. Surgical residents were more likely to place notes before noon (P < .001). Nonsurgical faculty were more likely to provide attestation of resident notes within 24 hours (P < .001). Conclusions Data related to patient note entry were successfully used to objectively measure the current workflow of resident physicians and their supervising faculty, and the findings have implications for physician oversight of residents' clinical work. We were able to demonstrate the utility of a data mining model as an assessment tool in graduate medical education.
10

Feng, Chenchen, David Le, and Allison McCoy. "Using Electronic Health Records to Identify Adverse Drug Events in Ambulatory Care: A Systematic Review." Applied Clinical Informatics 10, no. 01 (January 2019): 123–28. http://dx.doi.org/10.1055/s-0039-1677738.

Abstract:
Objective We identified the methods used and determined the roles of electronic health records (EHRs) in detecting and assessing adverse drug events (ADEs) in the ambulatory setting. Methods We performed a systematic literature review by searching PubMed and Google Scholar for studies on ADEs detected in the ambulatory setting involving any EHR use published before June 2017. We extracted study characteristics from included studies related to ADE detection methods for analysis. Results We identified 30 studies that evaluated ADEs in an ambulatory setting with an EHR. In 27 studies, EHRs were used only as the data source for ADE identification. In two studies, the EHR was used as both a data source and to deliver decision support to providers during order entry. In one study, the EHR was a source of data and generated patient safety reports that researchers used in the process of identifying ADEs. Methods of identification included manual chart review by trained nurses, pharmacists, and/or physicians; prescription review; computer monitors; electronic triggers; International Classification of Diseases codes; natural language processing of clinical notes; and patient phone calls and surveys. Seven studies provided examples of search phrases, laboratory values, and rules used to identify ADEs. Conclusion The majority of studies examined used EHRs as sources of data for ADE detection. This retrospective approach is appropriate to measure incidence rates of ADEs but not adequate to detect preventable ADEs before patient harm occurs. New methods involving computer monitors and electronic triggers will enable researchers to catch preventable ADEs and take corrective action.
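The "electronic triggers" surveyed above are typically simple rules over coded medications and laboratory results. The following sketch is hypothetical: the drug names, lab code, and threshold are illustrative choices, not taken from any of the reviewed studies:

```python
# Hypothetical ADE trigger: flag hyperkalemia in a patient on a drug known to
# raise potassium. Drug list, lab code, and threshold are illustrative only.
TRIGGER_DRUGS = {"spironolactone", "lisinopril"}  # assumed watch list
POTASSIUM_HIGH = 5.5                              # mmol/L, assumed threshold


def ade_trigger(active_meds, labs):
    """Return True if any potassium result exceeds the threshold while the
    patient is on a drug from the watch list.

    active_meds: iterable of medication names
    labs: iterable of (lab_code, value) pairs
    """
    on_watch_drug = any(med.lower() in TRIGGER_DRUGS for med in active_meds)
    high_potassium = any(code == "K" and value > POTASSIUM_HIGH
                         for code, value in labs)
    return on_watch_drug and high_potassium
```

Unlike retrospective chart review, a rule like this can fire at the moment the lab result posts, which is what lets trigger-based systems catch preventable ADEs before harm occurs.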
11

González Bernaldo de Quirós, Fernán, Carlos Otero, and Daniel Luna. "Terminology Services: Standard Terminologies to Control Health Vocabulary." Yearbook of Medical Informatics 27, no. 01 (April 22, 2018): 227–33. http://dx.doi.org/10.1055/s-0038-1641200.

Abstract:
Healthcare Information Systems should capture clinical data in a structured and preferably coded format. This is crucial for data exchange between health information systems, epidemiological analysis, quality and research, clinical decision support systems, and administrative functions, among others. Structured data entry is an obstacle to the usability of electronic health record (EHR) applications and their acceptance by physicians, who prefer to document patient EHRs using "free text". Natural language allows for rich expressiveness but is at the same time ambiguous; it depends heavily on context and uses jargon and acronyms. Although much progress has been made in knowledge and natural language processing techniques, the results are not yet satisfactory enough for the use of free text in all dimensions of clinical documentation. To address the trade-off between capturing data as free text and coding data for computer processing, numerous terminological systems for the systematic recording of clinical data have been developed. The purpose of terminology services is to represent facts that happen in the real world through database management in order to allow for semantic interoperability and computerized applications. These systems interrelate the concepts of a particular domain and provide references to related terms with standard codes. In this way, standard terminologies allow the creation of a controlled medical vocabulary, making terminology services a fundamental component of health data management in the healthcare environment. The Hospital Italiano de Buenos Aires has been developing its own terminology server. This work describes its experience in the field.
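At its core, a terminology service resolves free-text and synonymous entries to a single standard concept. A minimal sketch, using invented concept identifiers rather than real terminology codes:

```python
# Toy terminology service: map free-text terms to a standard concept code.
# The concept identifiers ("C001", "C002") are illustrative placeholders.
CONCEPTS = {
    "C001": {"preferred": "myocardial infarction",
             "synonyms": {"heart attack", "mi", "acute mi"}},
    "C002": {"preferred": "hypertension",
             "synonyms": {"high blood pressure", "htn"}},
}

# Reverse index from every term (preferred or synonym) to its concept code.
INDEX = {term: code
         for code, entry in CONCEPTS.items()
         for term in {entry["preferred"]} | entry["synonyms"]}


def lookup(free_text):
    """Normalize a free-text entry and return (code, preferred term), or None."""
    term = " ".join(free_text.lower().split())  # case/whitespace normalization
    code = INDEX.get(term)
    return (code, CONCEPTS[code]["preferred"]) if code else None
```

A production terminology server adds concept hierarchies, cross-maps between standards, and fuzzy matching, but the contract is the same: free text in, controlled concept out.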
12

Zolynski, Célia. "OS BIG DATA E OS DADOS PESSOAIS ENTRE OS PRINCÍPIOS DA PROTEÇÃO E DA INOVAÇÃO." Law, State and Telecommunications Review 12, no. 1 (March 16, 2020): 225–45. http://dx.doi.org/10.26512/lstr.v12i1.30007.

Abstract:
Objective – The article contrasts the problem of Big Data with the possibilities and limits of personal data protection. It is an original contribution to the academic discussion about the regulation of the Internet and the management of algorithms, focusing on Big Data. Methodology/approach/design – The article provides bibliographic research on the opposition between Big Data and personal data protection, focusing on European Union law and French law. From the research it is possible to identify regulatory alternatives to Big Data, whether of a legal-administrative or a technological nature. Findings – The article shows that, in addition to the traditional regulatory options based on the law, there are technological options for regulating Big Data and algorithms. The article analyses the performance of administrative authorities such as France's CNIL (Commission nationale de l'informatique et des libertés) to show that it has limits. Thus, the article concludes that there is a need to build a new type of regulation, one that is open to the inputs of regulated parties and civil society, in the form of new co-regulatory arrangements. Practical implications – The article has an obvious application, since the production of legal solutions for Internet regulation requires combining them with technological solutions. Brazil and several Latin American countries are experiencing this agenda, as they are building institutions and solutions to solve the dilemma of personal data protection. Originality/value – The article clarifies several parts of the General Data Protection Regulation (EU Regulation 2016/679) and its applicability to Big Data. These new types of data processing impose several legal and regulatory challenges, whose solutions cannot be trivial and will rely on new theories and practices.
13

Moacdieh, Nadine Marie, Travis Ganje, and Nadine Sarter. "Physician perceptions of clutter in electronic medical records." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 61, no. 1 (September 2017): 572. http://dx.doi.org/10.1177/1541931213601627.

Abstract:
Electronic medical records (EMRs) are now used by more than 95% of US hospitals (American Hospital Association (AHA) Annual Survey Information Technology Supplement, 2013). EMR systems typically provide a wide range of functionalities, including computerized physician order entry and the storage and presentation of patient medical data. The expectation has always been that these EMR functions would contribute to increased efficiency and safety of operations in hospital environments (Blumenthal & Glaser, 2007). However, display clutter in EMRs can lead to negative performance effects that can compromise the efficiency and safety of medical environments (e.g., Moacdieh & Sarter, 2015; Murphy, Reis, Sittig, & Singh, 2012). Yet it is not clear to what extent physicians view clutter as an impediment to their work, and, if so, whether it is solely the amount of visual data that leads to their perception of "clutter". To this end, the aims of this study were to determine 1) whether physicians believe the nature and amount of EMR visual data affect their use of EMRs, 2) whether physicians think improvements are needed, and 3) to what extent it is the amount of data that leads to clutter versus some other qualitative aspect of the data. An anonymous survey was conducted among emergency medicine residents at the University of Michigan Department of Emergency Medicine. The response rate was around 60%, with 31 residents responding (age range 21-40 years). Residents responded to 18 questions. The first five questions asked for demographic information, and participants responded using a dropdown menu. The next eight questions asked participants for their opinions about their satisfaction with their current EMR and the effects of visual data load on their work; participants responded using a 5-point Likert scale (strongly disagree or not at all (1) to strongly agree or extremely important (5)).
The next three questions were free text and allowed residents to suggest design improvements to their current EMRs. Finally, the last two questions asked residents to rate, on a 100-point scale, the amount of clutter and the amount of information on sample screenshots from their current EMRs. These ratings were then correlated with each of the clutter image-processing measures of Rosenholtz, Li, and Nakano (2007): feature congestion, subband entropy, and edge density. In general, the results showed that physicians place great importance on the design of visual information. Of the residents who responded, 52% indicated that visual data representation was "extremely important" for safety, and the same percentage said it was "extremely important" for efficiency. Also, 41% of residents agreed or strongly agreed that problems with visual data presentation had led to medical errors in their experience. In the free-text responses, physicians described many improvements that could be made to their EMR displays, particularly the reduction of excess irrelevant data. In addition, the correlation coefficients between the measure values and the ratings of the amount of information were lower than the coefficients for the ratings of clutter. This suggests that it is not just the quantity of information that factors into physicians' perception of clutter; other factors, such as color variation and organization, play a role as well. In conclusion, this study showed that there is more to EMR clutter than merely excess data, and physicians appear to be aware of the dangers of clutter in their EMR displays.
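Of the three Rosenholtz, Li, and Nakano measures, edge density is the simplest to illustrate: the fraction of pixels whose gradient magnitude exceeds a threshold. The sketch below uses plain central differences rather than a full edge detector, and the threshold is an assumed parameter:

```python
import numpy as np


def edge_density(img, threshold=0.2):
    """Fraction of interior pixels whose gradient magnitude exceeds `threshold`.

    `img` is a 2-D array of grayscale values in [0, 1]. This is a simplified
    gradient-based proxy for the edge-density clutter measure, not the
    original implementation.
    """
    gx = img[1:-1, 2:] - img[1:-1, :-2]  # horizontal central difference
    gy = img[2:, 1:-1] - img[:-2, 1:-1]  # vertical central difference
    mag = np.hypot(gx, gy)               # gradient magnitude per pixel
    return float((mag > threshold).mean())
```

A blank screen scores 0, while a screenshot dense with text boxes and rules scores higher; in the clutter literature, higher edge density tends to track higher perceived clutter.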
14

Simbolon, Nanci Yosepin, Muhammad Yasid, Beny Syahputra Sinaga, and Nancy Saragih. "PERLINDUNGAN HUKUM BAGI PENANAMAN MODAL ASING (PMA) DI INDONESIA." Jurnal Darma Agung 28, no. 1 (May 20, 2020): 64. http://dx.doi.org/10.46930/ojsuda.v28i1.461.

Abstract:
The existence of foreign investment in Indonesia is very important and strategic in supporting the implementation of national economic development and increasing economic growth in the community, as regulated in Law Number 25 of 2007. This is an empirical normative study; data were collected through interviews and a literature study, processed by checking the data, and analysed descriptively, that is, through systematic field research. The results show that every foreign company that wants to invest in Indonesia must obtain a principle permit from the Investment Coordinating Board (BKPM) and the One-Stop Integrated Investment and Service Office (PTSP), so that the company can be established in Indonesia in accordance with the procedures of the foreign investment law, and so that the foreign investment company falls fully under the responsibility and supervision of the Republic of Indonesia. The government also provides relief to foreign investors with respect to the cost of entry of goods or electronic equipment needed for foreign investment in Indonesia.
15

Almog, Yasmeen Adar, Angshu Rai, Patrick Zhang, Amanda Moulaison, Ross Powell, Anirban Mishra, Kerry Weinberg, et al. "Deep Learning With Electronic Health Records for Short-Term Fracture Risk Identification: Crystal Bone Algorithm Development and Validation." Journal of Medical Internet Research 22, no. 10 (October 16, 2020): e22550. http://dx.doi.org/10.2196/22550.

Abstract:
Background Fractures as a result of osteoporosis and low bone mass are common and give rise to significant clinical, personal, and economic burden. Even after a fracture occurs, high fracture risk remains widely underdiagnosed and undertreated. Common fracture risk assessment tools utilize a subset of clinical risk factors for prediction, and often require manual data entry. Furthermore, these tools predict risk over the long term and do not explicitly provide the short-term risk estimates necessary to identify patients likely to experience a fracture in the next 1-2 years. Objective The goal of this study was to develop and evaluate an algorithm for the identification of patients at risk of fracture in a subsequent 1- to 2-year period. In order to address the aforementioned limitations of current prediction tools, this approach focused on a short-term timeframe, automated data entry, and the use of longitudinal data to inform the predictions. Methods Using retrospective electronic health record data from over 1,000,000 patients, we developed Crystal Bone, an algorithm that applies machine learning techniques from natural language processing to the temporal nature of patient histories to generate short-term fracture risk predictions. Similar to how language models predict the next word in a given sentence or the topic of a document, Crystal Bone predicts whether a patient's future trajectory might contain a fracture event, or whether the signature of the patient's journey is similar to that of a typical future fracture patient. A holdout set with 192,590 patients was used to validate accuracy. Experimental baseline models and human-level performance were used for comparison. Results The model accurately predicted 1- to 2-year fracture risk for patients aged over 50 years (area under the receiver operating characteristic curve [AUROC] 0.81).
These algorithms outperformed the experimental baselines (AUROC 0.67) and showed meaningful improvements when compared to retrospective approximation of human-level performance by correctly identifying 9649 of 13,765 (70%) at-risk patients who did not receive any preventative bone-health-related medical interventions from their physicians. Conclusions These findings indicate that it is possible to use a patient’s unique medical history as it changes over time to predict the risk of short-term fracture. Validating and applying such a tool within the health care system could enable automated and widespread prediction of this risk and may help with identification of patients at very high risk of fracture.
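The AUROC figures quoted above (0.81 vs. the 0.67 baseline) can be checked on any scored holdout set with a standard rank-based computation. A minimal stdlib sketch on toy scores and labels (not the Crystal Bone data):

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    # Rank all scores ascending; tied scores get the average rank.
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    # Mann-Whitney U statistic normalised to [0, 1]
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy example: higher scores should indicate fracture cases (label 1).
scores = [0.1, 0.4, 0.35, 0.8]
labels = [0, 0, 1, 1]
print(auroc(scores, labels))  # 0.75
```

The same quantity is what a library call such as scikit-learn's `roc_auc_score` returns; the hand-rolled version just makes the ranking interpretation explicit.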
APA, Harvard, Vancouver, ISO, and other styles
16

Lifshin, Eric. "Minimizing Errors in Electron Microprobe Analysis." Microscopy and Microanalysis 5, S2 (August 1999): 568–69. http://dx.doi.org/10.1017/s1431927600016160.

Full text
Abstract:
Back in the sixties our ability to do both qualitative and quantitative microprobe analysis was often restricted by many factors we now take for granted. Limitations of power supplies, electron optical systems and the mechanical aspects of specimen stages and crystal spectrometers often led to unacceptable drift in both beam position and x-ray intensity measurements. Detector electronics were relatively primitive and phenomena such as pulse distribution shifts with count rate could cause serious errors when pulse height analysis was incorrectly used for noise reduction. The ZAF method, the principal approach for quantitative analysis, was undergoing frequent modification to make it applicable to a broader range of specimens. There were also continuing disputes about which equations to use for a given situation and what experimentally determined constants worked best in the models. Data logging and specimen stage and spectrometer automation were in their infancy and data processing was done on calculators or in batch mode on central computers. Opportunities for errors in data entry abounded.
APA, Harvard, Vancouver, ISO, and other styles
17

Rosa, F., P. Knesewitsch, K. Hahn, and W. Koch. "Guideline on radiation protection in medicine requires documentation of radioiodine therapy and follow-up." Nuklearmedizin 44, no. 02 (2005): 49–55. http://dx.doi.org/10.1055/s-0038-1623925.

Full text
Abstract:
Summary Aim: The lately updated German guideline on radiation protection in medicine (Richtlinie Strahlenschutz in der Medizin) requires the physician who administers radioactive substances for therapy, to perform and document follow-ups. In order to decrease the administrative burden, an electronic database was developed that interfaces with a word processing software to generate written reports and statistic analysis. Methods: Based on Microsoft® Access and Microsoft® Visual Basic a database was created to monitor patients with benign and malignant thyroid disorders after radioiodine therapy. It permits automatic creation of therapy documents and necessary patient reports in Microsoft® Word. Intuitive handling, third level of normalization in database architecture and automatic plausibility checks guarantee integrity of the data and the efficacy of the database. Results, conclusion: The new software has been a success in over 1500 patients and over 3800 in- and outpatient therapies and visits. The effort of data entry is easily offset by the automatic generation of the necessary patient reports. The required supervision of the follow-up appointments is now also user-friendly and efficient.
APA, Harvard, Vancouver, ISO, and other styles
18

Møller, Jens Kjølseth, Martin Sørensen, and Christian Hardahl. "Prediction of risk of acquiring urinary tract infection during hospital stay based on machine-learning: A retrospective cohort study." PLOS ONE 16, no. 3 (March 31, 2021): e0248636. http://dx.doi.org/10.1371/journal.pone.0248636.

Full text
Abstract:
Background Healthcare associated infections (HAI) are a major burden for the healthcare system and are associated with prolonged hospital stay, increased morbidity, mortality and costs. Healthcare associated urinary tract infections (HA-UTI) account for about 20–30% of all HAIs, and with the emergence of multi-resistant urinary tract pathogens, the total burden of HA-UTI will most likely increase. Objective The aim of the current study was to develop two predictive models, using data from the index admission as well as historic data on a patient, to predict the development of UTI at the time of entry to the hospital and after 48 hours of admission (HA-UTI). The ultimate goal is to predict the individual patient's risk of acquiring HA-UTI before it occurs so that health care professionals may take proper actions to prevent it. Methods Retrospective cohort analysis of approximately 300,000 adult admissions in a Danish region was performed. We developed models for UTI prediction with five machine-learning algorithms using demographic information, laboratory results, data on antibiotic treatment, past medical history (ICD10 codes), and clinical data obtained by transforming unstructured narrative text in Electronic Medical Records into structured data with Natural Language Processing. Results The five machine-learning algorithms were evaluated by the performance measures average squared error, cumulative lift, and area under the curve (ROC-index). The algorithms had an area under the curve (ROC-index) ranging from 0.82 to 0.84 for the entry model (T = 0 hours after admission) and from 0.71 to 0.77 for the HA-UTI model (T = 48 hours after admission). Conclusion The study is proof of concept that it is possible to create machine-learning models that can serve as early warning systems to predict patients at risk of acquiring urinary tract infections during admission.
The entry model and the HA-UTI models perform with a high ROC-index indicating a sufficient sensitivity and specificity, which may make both models instrumental in individualized prevention of UTI in hospitalized patients. The favored machine-learning methodology is Decision Trees to ensure the most transparent results and to increase clinical understanding and implementation of the models.
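Cumulative lift, one of the performance measures named in the abstract, compares the positive rate among the highest-ranked patients with the overall base rate. A minimal stdlib sketch on toy data (not the study's cohort):

```python
def cumulative_lift(scores, labels, fraction=0.1):
    """Lift of the top `fraction` of patients ranked by predicted risk:
    positive rate in that slice divided by the overall positive rate."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    cutoff = max(1, int(len(ranked) * fraction))
    top_rate = sum(y for _, y in ranked[:cutoff]) / cutoff
    base_rate = sum(labels) / len(labels)
    return top_rate / base_rate

# Toy cohort of 10 admissions, 2 of which develop a UTI (label 1).
scores = [0.95, 0.9, 0.4, 0.35, 0.3, 0.25, 0.2, 0.15, 0.1, 0.05]
labels = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(cumulative_lift(scores, labels, fraction=0.2))  # 5.0
```

A lift of 5.0 here means the top 20% of ranked patients contain positives at five times the base rate, which is the kind of ranking quality an early-warning system needs.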
APA, Harvard, Vancouver, ISO, and other styles
19

Motulsky, Aude, Claude Sicotte, Marie-Pierre Gagnon, Julie Payne-Gagnon, Julie-Alexandra Langué-Dubé, Christian M. Rochefort, and Robyn Tamblyn. "Challenges to the implementation of a nationwide electronic prescribing network in primary care: a qualitative study of users’ perceptions." Journal of the American Medical Informatics Association 22, no. 4 (April 16, 2015): 838–48. http://dx.doi.org/10.1093/jamia/ocv026.

Full text
Abstract:
Abstract Background and objective The objective of this study was to identify physicians’ and pharmacists’ perceptions of the challenges and benefits to implementing a nationwide electronic prescribing network linking medical clinics and community pharmacies in Quebec, Canada. Methods Forty-nine people (12 general practitioners, 2 managers, 33 community pharmacists, and 2 pharmacy staff members) from 40 points of care (10 primary care clinics (42% of all the connected sites) and 30 community pharmacies (44%)) were interviewed in 2013. Verbatim transcripts were analyzed using thematic analysis. Results A low level of network use was observed. Most pharmacists processed e-prescriptions by manual entry instead of importing electronically. They reported concerns about potential errors generated by importing e-prescriptions, mainly due to the instruction field. Paper prescriptions were still perceived as the best means for safe and effective processing of prescriptions in pharmacies. Speed issues when validating e-prescription messages were seen as an irritant by physicians, and resulted in several of them abandoning transmission. Displaying the medications based on the dispensing data was identified as the main obstacle to meaningful use of medication histories. Conclusions Numerous challenges impeded realization of the benefits of this network. Standards for e-prescription messages, as well as rules for message validation, need to be improved to increase the potential benefits of e-prescriptions. Standard drug terminology including the concept of clinical medication should be developed, and the implementation of rules in local applications to allow for the classification and reconciliation of medication lists from dispensing data should be made a priority.
APA, Harvard, Vancouver, ISO, and other styles
20

Szakács, Béla Benedek, and Tamás Mészáros. "Hybrid Distance-based, CNN and Bi-LSTM System for Dictionary Expansion." Infocommunications journal, no. 4 (2020): 6–13. http://dx.doi.org/10.36244/icj.2020.4.2.

Full text
Abstract:
Dictionaries like WordNet can help in a variety of Natural Language Processing applications by providing additional morphological data. They can be used in Digital Humanities research, in building knowledge graphs, and in other applications. Creating dictionaries from large corpora of texts written in a natural language is a task that has not been a primary focus of research, as other tasks (such as chat-bots) have dominated the field, but it can be a very useful tool in analysing texts. Even in the case of contemporary texts, categorizing words according to their dictionary entry is a complex task, and for less conventional texts (in old or less researched languages) it is even harder to solve this problem automatically. Our task was to create software that helps in expanding a dictionary containing word forms and in tagging unprocessed text. We used a manually created corpus for training and testing the model. We created a combination of Bidirectional Long Short-Term Memory networks, convolutional networks and a distance-based solution that outperformed other existing solutions. While manual post-processing of the tagged text is still needed, the system significantly reduces the amount of it.
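The distance-based component of such a hybrid can be illustrated with a plain edit-distance matcher that assigns an unseen word form to its nearest dictionary headword. This is an illustrative stdlib sketch under that assumption, not the authors' implementation; the mini-dictionary is hypothetical:

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def nearest_entry(word, dictionary):
    """Return the dictionary headword closest to `word` by edit distance."""
    return min(dictionary, key=lambda h: edit_distance(word, h))

# Hypothetical mini-dictionary of headwords.
lemmas = ["run", "runner", "running", "rune"]
print(nearest_entry("runing", lemmas))  # "running"
```

In the paper's setting the distance would be computed against a much larger headword list, with the neural models arbitrating when several candidates are equally close.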
APA, Harvard, Vancouver, ISO, and other styles
21

Dovzhik, V., G. Dovzhik, and T. Fedyanina. "Using e-marketing on the example of detailed targeting technology." E-Management 2, no. 3 (October 29, 2019): 29–37. http://dx.doi.org/10.26425/2658-3445-2019-3-29-37.

Full text
Abstract:
The concept of "Big Data" as a way of processing huge amounts of data has been considered, the principles of working with big data have been formulated, and the techniques of electronic marketing have been reviewed. The definition of targeting has been given and its possibilities described. The sources of information for the general targeting field have been presented. The concept of hyperlocal targeting has been interpreted. The specificity of using e-marketing technologies depending on the goals of promotion, and the ways of creating individual sales funnels, the so-called "narrow entry" technique, have been investigated. The content and possibilities of using detailed targeting technology have been revealed. Using the example of detailed targeting via automatic redirection of users from one URL to another (redirect), and on the basis of the obtained data, the formation of a portrait of a potential client and the segmentation of target audiences, an algorithm for applying detailed targeting technology has been developed. The process of segmentation and creation of a typical portrait of a representative of the target audience has been shown. On the basis of the developed portraits of representatives of the target audience, the client's journey was designed. Herewith, the standard logical chains of customer involvement in the process of making a purchase, namely attracting a potential buyer, increasing his level of interest in the product, and the final sale, have been described. The most significant technological difficulties have been studied, consisting primarily in the simultaneous connection of a variety of platforms and modules necessary for the automation of processes and the collection of statistics for analytical work.
The list of tools used in the implementation of detailed targeting technology has been presented: LeeLoo.ai, AmoCRM, LPGenerator, and Yandex.Metrica. The logic of expanding the applied e-marketing tools for different target audiences, with the aim of increasing conversion and, accordingly, saving advertising budget, has been described.
APA, Harvard, Vancouver, ISO, and other styles
22

Cordonnier-Pratt, Marie-Michèle, Chun Liang, Haiming Wang, Dmitri S. Kolychev, Feng Sun, Robert Freeman, Robert Sullivan, and Lee H. Pratt. "MAGIC Database and Interfaces: An Integrated Package for Gene Discovery and Expression." Comparative and Functional Genomics 5, no. 3 (2004): 268–75. http://dx.doi.org/10.1002/cfg.399.

Full text
Abstract:
The rapidly increasing rate at which biological data is being produced requires a corresponding growth in relational databases and associated tools that can help laboratories contend with that data. With this need in mind, we describe here a Modular Approach to a Genomic, Integrated and Comprehensive (MAGIC) Database. This Oracle 9i database derives from an initial focus in our laboratory on gene discovery via production and analysis of expressed sequence tags (ESTs), and subsequently on gene expression as assessed by both EST clustering and microarrays. The MAGIC Gene Discovery portion of the database focuses on information derived from DNA sequences and on its biological relevance. In addition to MAGIC SEQ-LIMS, which is designed to support activities in the laboratory, it contains several additional subschemas. The latter include MAGIC Admin for database administration, MAGIC Sequence for sequence processing as well as sequence and clone attributes, MAGIC Cluster for the results of EST clustering, MAGIC Polymorphism in support of microsatellite and single-nucleotide-polymorphism discovery, and MAGIC Annotation for electronic annotation by BLAST and BLAT. The MAGIC Microarray portion is a MIAME-compliant database with two components at present. These are MAGIC Array-LIMS, which makes possible remote entry of all information into the database, and MAGIC Array Analysis, which provides data mining and visualization. Because all aspects of interaction with the MAGIC Database are via a web browser, it is ideally suited not only for individual research laboratories but also for core facilities that serve clients at any distance.
APA, Harvard, Vancouver, ISO, and other styles
23

Schellenberger, Sven, Kilin Shi, Fabian Michler, Fabian Lurz, Robert Weigel, and Alexander Koelpin. "Continuous In-Bed Monitoring of Vital Signs Using a Multi Radar Setup for Freely Moving Patients." Sensors 20, no. 20 (October 15, 2020): 5827. http://dx.doi.org/10.3390/s20205827.

Full text
Abstract:
In hospitals, continuous monitoring of vital parameters can provide valuable information about the course of a patient’s illness and allows early warning of emergencies. To enable such monitoring without restricting the patient’s freedom of movement and comfort, a radar system is attached under the mattress which consists of four individual radar modules to cover the entire width of the bed. Using radar, heartbeat and respiration can be measured without contact and through clothing. By processing the raw radar data, the presence of a patient can be determined and movements are categorized into the classes “bed exit”, “bed entry”, and “on bed movement”. Using this information, the vital parameters can be assessed in sections where the patient lies calmly in bed. In the first step, the presence and movement classification is demonstrated using recorded training and test data. Next, the radar was modified to perform vital sign measurements synchronized to a gold standard device. The evaluation of the individual radar modules shows that, regardless of the lying position of the test person, at least one of the radar modules delivers accurate results for continuous monitoring.
APA, Harvard, Vancouver, ISO, and other styles
24

Andronova, Everita, Renāte Siliņa-Piņķe, Anta Trumpa, and Pēteris Vanags. "The Electronic Historical Latvian Dictionary Based on the Corpus of Early Written Latvian Texts." Acta Baltico-Slavica 40 (December 28, 2016): 1–37. http://dx.doi.org/10.11649/abs.2016.018.

Full text
Abstract:
The Electronic Historical Latvian Dictionary Based on the Corpus of Early Written Latvian Texts This article deals with the development of the Electronic Historical Latvian Dictionary (http://www.tezaurs.lv/lvvv) based on the Corpus of Early Written Latvian Texts (http://www.korpuss.lv/senie/). Some issues concerning the compilation and processing of the corpus data are discussed and the main sources added to the Corpus during the four-year project are described: the 16th c. Lord's Prayers, 17th c. dictionaries, texts of oaths and laws, religious texts and so-called dedication poetry. The aim of the project is to compile a pilot electronic dictionary of 16th–17th century Latvian where all parts of speech are represented among the entries. This dictionary will contain ca. 1,200 entries, including both proper names and common nouns. The main emphasis is on the description of the dictionary entries supplied with relevant practical and theoretical observations. Each part of the dictionary entry is discussed, followed by comments on various issues pertaining to that part (e.g., the choice of headword and the representation of spelling versions) and how these were resolved. Special attention is paid to the head of entry, explanation of meaning deduced from the examples found in the corpus, different types of collocations and their representation in the dictionary, as well as etymological information. Finally, we present a brief review of the dictionary writing software TLex 2013 based on our experience with this tool.
APA, Harvard, Vancouver, ISO, and other styles
25

Junger, A., L. Quinzio, A. Michel, G. Sciuk, C. Fuchs, K. Marquardt, G. Hempelmann, and M. Benson. "Data Processing at the Anesthesia Workstation: from Data Entry to Data Presentation." Methods of Information in Medicine 39, no. 04/05 (2000): 319–24. http://dx.doi.org/10.1055/s-0038-1634450.

Full text
Abstract:
Main requirements for an Anesthesia Information Management System (AIMS) are the supply of additional information for the anesthesiologist at the workstation and complete documentation of the anesthetic procedure. With the implementation of an AIMS (NarkoData) and effective user support, the quality of documentation and the information flow at the anesthesia workstation could be increased. Today, more than 20,000 anesthesia procedures are recorded annually with the AIMS at 112 decentralized workstations. A network for data entry was made operational, making the presentation and evaluation of data, statistics and results directly available at the clinical workstation.
APA, Harvard, Vancouver, ISO, and other styles
26

Schromm, T., T. Frankewitsch, M. Giehl, F. Keller, and D. Zellner. "Structured Data Entry for Reliable Acquisition of Pharmacokinetic Data." Methods of Information in Medicine 35, no. 03 (May 1996): 261–64. http://dx.doi.org/10.1055/s-0038-1634673.

Full text
Abstract:
Abstract:A pharmacokinetic database was constructed that is as free of errors as possible. Pharmacokinetic parameters were derived from the literature using a text-processing system and a database system. A random data sample from each system was compared with the original literature. The estimated error frequencies using statistical methods differed significantly between the two systems. The estimated error frequency in the text-processing system was 7.2%, that in the database system 2.7%. Compared with the original values in the literature, the estimated probability of error for identical pharmacokinetic parameters recorded in both systems is 2.4% and is not significantly different from the error frequency in the database. Parallel data entry with a text-processing system and a database system is, therefore, not significantly better than structured data entry for reducing the error frequency.
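The reported difference between the two estimated error frequencies (7.2% vs. 2.7%) can be illustrated with a standard two-proportion z-test. The sample sizes below are hypothetical, chosen only to reproduce the reported rates; the abstract does not state the actual sample counts:

```python
import math

def two_proportion_z(err1, n1, err2, n2):
    """Two-sided z-test for a difference between two error proportions,
    given error counts err1/err2 out of n1/n2 checked parameters."""
    p1, p2 = err1 / n1, err2 / n2
    p = (err1 + err2) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical 1000 checked parameters per system:
# 72/1000 = 7.2% (text-processing) vs. 27/1000 = 2.7% (database).
z, p = two_proportion_z(72, 1000, 27, 1000)
print(z, p)  # z is large enough to be significant at any conventional level
```

With these assumed sample sizes the difference is clearly significant, consistent with the abstract's claim; smaller samples would of course weaken the test.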
APA, Harvard, Vancouver, ISO, and other styles
27

McCoy, A. B., A. Wright, G. Eysenbach, B. A. Malin, E. S. Patterson, H. Xu, and D. F. Sittig. "State of the Art in Clinical Informatics: Evidence and Examples." Yearbook of Medical Informatics 22, no. 01 (August 2013): 13–19. http://dx.doi.org/10.1055/s-0038-1638827.

Full text
Abstract:
Summary Objective: The field of clinical informatics has expanded substantially in the six decades since its inception. Early research focused on simple demonstrations that health information technology (HIT) such as electronic health records (EHRs), computerized provider order entry (CPOE), and clinical decision support (CDS) systems were feasible and potentially beneficial in clinical practice. Methods: In this review, we present recent evidence on clinical informatics in the United States covering three themes: 1) clinical informatics systems and interventions for providers, including EHRs, CPOE, CDS, and health information exchange; 2) consumer health informatics systems, including personal health records and web-based and mobile HIT; and 3) methods and governance for clinical informatics, including EHR usability; data mining, text mining, natural language processing, privacy, and security. Results: Substantial progress has been made in demonstrating that various clinical informatics methodologies and applications improve the structure, process, and outcomes of various facets of the healthcare system. Conclusion: Over the coming years, much more will be expected from the field. As we move past the “early adopters” in Rogers' diffusion of innovations' curve through the “early majority” and into the “late majority,” there will be a crucial need for new research methodologies and clinical applications that have been rigorously demonstrated to work (i.e., to improve health outcomes) in multiple settings with different types of patients and clinicians.
APA, Harvard, Vancouver, ISO, and other styles
28

Rizqi, Muhammad Nur. "PERANAN SOFTWARE ACCURATE DALAM MENUNJANG SISTEM INFORMASI AKUNTANSI (Studi Kasus pada CV. Nuansa Elektronik)." Jurnal Ilmiah Binaniaga 8, no. 01 (May 24, 2019): 21. http://dx.doi.org/10.33062/jib.v8i01.316.

Full text
Abstract:
Along with the development of the age, the economy in each country will experience growth. Eventually, this development will affect the behavior and customs of a country's community. The end result generated by accounting information is the financial statements. Financial information is, of course, very important for the progress of a company. In order to provide a strategic competitive advantage for the company, a system that organizes accounting information is needed. The purpose of financial statements is to provide information connected with the financial position, performance, and changes in the financial position of a company, which is useful for most users in making economic decisions. Financial statements prepared for this purpose are expected to fulfill most user needs. Financial statements are useful to most users when their information is understandable, relevant, reliable, and comparable. However, financial statements do not supply all the information that users need for making economic decisions. Generally, financial statements describe the financial effects of past events and are not required to provide non-financial information. This research examined the use of the Accurate accounting program, the presentation of financial statements, and the function of the Accurate accounting program in increasing the effectiveness of financial statement presentation at CV Nuansa Electronic. The purpose of this research is to find out how the Accurate accounting program affects the effectiveness of financial statement presentation. In business, the use of personal computers has become an absolute need, for data entry, inventory, selling, human resources, production processes, and other office administration, as well as for correspondence. Popular programs such as WordStar, Lotus, and Dbase were used for these activities.
Although those programs were a great help, problems remained, and each could be used only for certain tasks. Even where they could be used, the procedures were complicated and data processing took considerable time. Financial statements, however, have to be prepared quickly, correctly, and accurately as information for management to make the right financial decisions. To handle these problems, CV Nuansa Electronic decided to use the Accurate accounting program. This program is easy and quick to use, comparable to working in a spreadsheet program. Keywords: Accurate software, accounting information system
APA, Harvard, Vancouver, ISO, and other styles
29

Burger, Franziska, Mark A. Neerincx, and Willem-Paul Brinkman. "Technological State of the Art of Electronic Mental Health Interventions for Major Depressive Disorder: Systematic Literature Review." Journal of Medical Internet Research 22, no. 1 (January 20, 2020): e12599. http://dx.doi.org/10.2196/12599.

Full text
Abstract:
Background Electronic mental (e-mental) health care for depression aims to overcome barriers to and limitations of face-to-face treatment. Owing to the high and growing demand for mental health care, a large number of such information and communication technology systems have been developed in recent years. Consequently, a diverse system landscape formed. Objective This literature review aims to give an overview of this landscape of e-mental health systems for the prevention and treatment of major depressive disorder, focusing on three main research questions: (1) What types of systems exist? (2) How technologically advanced are these systems? (3) How has the system landscape evolved between 2000 and 2017? Methods Publications eligible for inclusion described e-mental health software for the prevention or treatment of major depressive disorder. Additionally, the software had to have been evaluated with end users and developed since 2000. After screening, 270 records remained for inclusion. We constructed a taxonomy concerning software systems, their functions, how technologized these were in their realization, and how systems were evaluated, and then, we extracted this information from the included records. We define here as functions any component of the system that delivers either treatment or adherence support to the user. For this coding process, an elaborate classification hierarchy for functions was developed yielding a total of 133 systems with 2163 functions. The systems and their functions were analyzed quantitatively, with a focus on technological realization. Results There are various types of systems. However, most are delivered on the World Wide Web (76%), and most implement cognitive behavioral therapy techniques (85%). In terms of content, systems contain twice as many treatment functions as adherence support functions, on average. 
Furthermore, autonomous systems, those not including human guidance, are equally technologized and have one-third fewer functions than guided ones. Therefore, lack of guidance is compensated neither with additional functions nor by technologizing functions to a greater degree. Although several high-tech solutions could be found, the average system falls between a purely informational system and one that allows for data entry but without automatically processing these data. Moreover, no clear increase in the technological capabilities of systems was evident in the field between 2000 and 2017, despite a marked growth in system quantity. Finally, more sophisticated systems were evaluated less often in comparative trials than less sophisticated ones (OR 0.59). Conclusions The findings indicate that when developers create systems, there is a greater focus on implementing therapeutic treatment than adherence support. Although the field is very active, as evidenced by the growing number of systems developed per year, the technological possibilities explored are limited. In addition to allowing developers to compare their system with others, we anticipate that this review will help researchers identify opportunities in the field.
APA, Harvard, Vancouver, ISO, and other styles
30

Thompson, David A., Donald C. McEvers, and Charles H. Olson. "Case Study in Data Entry System Design." Proceedings of the Human Factors Society Annual Meeting 30, no. 8 (September 1986): 744–48. http://dx.doi.org/10.1177/154193128603000803.

Full text
Abstract:
This is a case study of changes successfully made in the job design, workspace layout, and wage payment plan for a very large mail remittance processing facility. The changes were necessitated by excessive musculoskeletal injury and disability claims history, by low morale, and by low production. In addition to the other work changes, programmed exercise breaks for all employees were introduced. Measures of the economic impact of ergonomic changes were made.
APA, Harvard, Vancouver, ISO, and other styles
31

Hammond, E., and A. McIndoe. "Electronic anaesthetic logbooks; standards for data entry." Anaesthesia 51, no. 1 (January 1996): 93–94. http://dx.doi.org/10.1111/j.1365-2044.1996.tb07674.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Weller, Susan C., and Roberta D. Baer. "Using Electronic Scanning Forms for Data Entry." Field Methods 13, no. 2 (May 2001): 198–203. http://dx.doi.org/10.1177/1525822x0101300205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Kompara, Tomaž, Janez Perš, David Susič, and Matjaž Gams. "A One-Dimensional Non-Intrusive and Privacy-Preserving Identification System for Households." Electronics 10, no. 5 (February 27, 2021): 559. http://dx.doi.org/10.3390/electronics10050559.

Full text
Abstract:
In many ambient-intelligence applications, including intelligent homes and cities, awareness of an inhabitant's presence and identity is of great importance. Such an identification system should be non-intrusive and therefore seamless for the user, especially if our goal is ubiquitous and pervasive surveillance. However, due to privacy concerns and regulatory restrictions, such a system should also strive to preserve the user's privacy as much as possible. In this paper, a novel identification system is presented based on a network of laser sensors, each attached on top of the room entry. Its sensor modality, a one-dimensional depth sensor, was chosen with privacy in mind. Each sensor is mounted on the top of a doorway, facing towards the entrance, at an angle. This position allows acquiring the user's body shape while the user is crossing the doorway, and the classification is performed by classical machine learning methods. The system is non-intrusive and preserves privacy: it omits specific user-sensitive information such as activity, facial expression or clothing. No video or audio data are required. The feasibility of such a system was tested on a nearly 4000-person, publicly available database of anthropometric measurements to analyze the relationships among accuracy, measured data and number of residents, while the evaluation of the system was conducted in a real-world scenario on 18 subjects. The evaluation was performed on a closed dataset with a 10-fold cross validation and showed 98.4% accuracy for all subjects. The accuracy for groups of five subjects averaged 99.1%. These results indicate that a network of one-dimensional depth sensors is suitable for the identification task with purposes such as surveillance and intelligent ambience.
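The closed-set identification with 10-fold cross-validation described above can be sketched with a nearest-centroid classifier on synthetic per-crossing body-shape features. The feature pair (height, width) and the classifier choice are illustrative assumptions, not the paper's actual pipeline:

```python
import random

random.seed(0)

# Synthetic data: 3 "residents", each doorway crossing yielding a noisy
# (height, width) body-shape profile in arbitrary units.
SUBJECTS = {0: (180.0, 45.0), 1: (165.0, 40.0), 2: (172.0, 50.0)}
data = [(s, (h + random.gauss(0, 1.5), w + random.gauss(0, 1.5)))
        for s, (h, w) in SUBJECTS.items() for _ in range(30)]
random.shuffle(data)

def nearest_centroid_cv(data, k=10):
    """k-fold cross-validated accuracy of a nearest-centroid classifier."""
    folds = [data[i::k] for i in range(k)]
    correct = total = 0
    for i in range(k):
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        # Per-subject centroids computed from the training folds only.
        sums = {}
        for s, (h, w) in train:
            hs, ws, n = sums.get(s, (0.0, 0.0, 0))
            sums[s] = (hs + h, ws + w, n + 1)
        cents = {s: (hs / n, ws / n) for s, (hs, ws, n) in sums.items()}
        for s, (h, w) in folds[i]:
            pred = min(cents, key=lambda c: (cents[c][0] - h) ** 2
                                            + (cents[c][1] - w) ** 2)
            correct += pred == s
            total += 1
    return correct / total

print(nearest_centroid_cv(data))  # high accuracy on well-separated subjects
```

As in the paper's findings, accuracy degrades as the number of residents grows and their body-shape profiles overlap; the synthetic subjects here are deliberately well separated.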
APA, Harvard, Vancouver, ISO, and other styles
34

Davidyuk, A. A., and I. V. Kambur. "Implementing the State Budget Accounting Policy with the Use of Automated Systems in Ukraine." Business Inform 2, no. 517 (2021): 232–39. http://dx.doi.org/10.32983/2222-4459-2021-2-232-239.

Full text
Abstract:
The functioning of a budgetary institution in market conditions is impossible to imagine without the computerization of its main economic and production processes. Accounting is no exception. To facilitate routine accounting work, a large number of different accounting programs have been created in recent years. The introduction of accounting software products allows not only automating accounting, but also establishing order in inventory control and in the supply and sale of products and goods, tracking contracts, quickly calculating wages, and submitting official statistical reports on time. Computerization of accounting (both in general and in its separate sections) not only facilitates the management of activities, but is also a prerequisite for fulfilling a number of legislative requirements, in particular regarding the submission of electronic reporting. In addition, automated accounting systems are used to process accounting information. The use of software allows the accountant to obtain up-to-date information and reduce the number of errors in the entry and processing of data caused by the human factor. The publication examines the current state and substantiates the need to automate the accounting system of budgetary institutions in order to ensure their efficient activities. The main features of software products for accounting automation in budgetary organizations are considered, and the main advantages of using these applications to optimize and adapt the accounting system of domestic institutions to the contemporary challenges of the information society and international practice of budget accounting are defined.
The peculiarities of use of the most popular software products of budgetary institutions of Ukraine are analyzed; the key requirements for companies that produce accounting automation software for the budgetary institutions of Ukraine are substantiated; and a phased algorithm is presented to ensure the effective implementation of the process of automating the accounting system in domestic budgetary organizations.
APA, Harvard, Vancouver, ISO, and other styles
35

Spalevic, Zaklina, Milos Ilic, and Dusan Jerotijevic. "Electronic monitoring devices and data processing." Ekonomika 62, no. 4 (2016): 155–66. http://dx.doi.org/10.5937/ekonomika1604155s.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Jacobson, Robert V. "Electronic Data Processing Security—An Overview." International Journal of Network Management 6, no. 2 (March 1996): 77–93. http://dx.doi.org/10.1002/(sici)1099-1190(199603/04)6:2<77::aid-nem184>3.0.co;2-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Jiang, Chunxiao, Nengqiang He, Yong Ren, Canfeng Chen, and Jian Ma. "uSD: universal sensor data entry card." IEEE Transactions on Consumer Electronics 56, no. 3 (August 2010): 1450–56. http://dx.doi.org/10.1109/tce.2010.5606282.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hyde, Andrew William. "The Changing Face of Electronic Data Capture: From Remote Data Entry to Direct Data Capture." Drug Information Journal 32, no. 4 (October 1998): 1089–92. http://dx.doi.org/10.1177/009286159803200429.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Li Ping. "The Data Acquisition and Processing of Cutting Parameter Database." Advanced Materials Research 945-949 (June 2014): 1717–21. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.1717.

Full text
Abstract:
Appropriate and reasonable cutting parameters are among the important factors in improving product quality and efficiency. Based on the dynamic simulation and optimization functions of a cutting parameter database, the acquisition data are standardized by definition, the cutting force coefficients are obtained through Dynacut dynamic testing and identification experiments, and the actual cutting parameters are verified through Opticut simulation and optimization and Simucut process validation before final entry into the database.
APA, Harvard, Vancouver, ISO, and other styles
40

Tsoi, A. C., and D. K. Patel. "Invited paper. A speech entry data logger." International Journal of Electronics 61, no. 2 (August 1986): 169–90. http://dx.doi.org/10.1080/00207218608920857.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Moomba, K., A. Williams, T. Savory, M. Lumpa, P. Chilembo, H. Tweya, A. D. Harries, and M. Herce. "Effects of real-time electronic data entry on HIV programme data quality in Lusaka, Zambia." Public Health Action 10, no. 1 (March 21, 2020): 47–52. http://dx.doi.org/10.5588/pha.19.0068.

Full text
Abstract:
Setting: Human immunodeficiency virus (HIV) clinics in five hospitals and five health centres in Lusaka, Zambia, which transitioned from daily entry of paper-based data records to an electronic medical record (EMR) system by dedicated data staff (Electronic-Last) to direct real-time data entry into the EMR by frontline health workers (Electronic-First). Objective: To compare completeness and accuracy of key HIV-related variables before and after transition of data entry from Electronic-Last to Electronic-First. Design: Comparative cross-sectional study using existing secondary data. Results: Registration data (e.g., date of birth) was 100% complete and pharmacy data (e.g., antiretroviral therapy regimen) was <90% complete under both approaches. Completeness of anthropometric and vital sign data was <75% across all facilities under Electronic-Last, and this worsened after Electronic-First. Completeness of TB screening and World Health Organization clinical staging data was also <75%, but improved with Electronic-First. Data entry errors for registration and clinical consultations decreased under Electronic-First, but errors increased for all anthropometric and vital sign variables. Patterns were similar in hospitals and health centres. Conclusion: With the notable exception of clinical consultation data, data completeness and accuracy did not improve after transitioning from Electronic-Last to Electronic-First. For anthropometric and vital sign variables, completeness and accuracy decreased. Quality improvement interventions are needed to improve Electronic-First implementation.
APA, Harvard, Vancouver, ISO, and other styles
42

Mitchel, Jules T., Yong Joong Kim, Joonhyuk Choi, Glen Park, Silvana Cappi, David Horn, Morgan Kist, and Ralph B. D’Agostino. "Evaluation of Data Entry Errors and Data Changes to an Electronic Data Capture Clinical Trial Database." Drug Information Journal 45, no. 4 (July 2011): 421–30. http://dx.doi.org/10.1177/009286151104500404.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Dunn, Edward J., and Paul J. Moga. "Patient Misidentification in Laboratory Medicine: A Qualitative Analysis of 227 Root Cause Analysis Reports in the Veterans Health Administration." Archives of Pathology & Laboratory Medicine 134, no. 2 (February 1, 2010): 244–55. http://dx.doi.org/10.5858/134.2.244.

Full text
Abstract:
Context: Mislabeled laboratory specimens are a common source of harm to patients, such as repeat phlebotomy; repeat diagnostic procedures, including tissue biopsy; delay of a necessary surgical procedure; and the execution of an unnecessary surgical procedure. Mislabeling has been estimated to occur at a rate of 0.1% of all laboratory and anatomic pathology specimens submitted. Objective: To identify system vulnerabilities in specimen collection, processing, analysis, and reporting associated with patient misidentification involving the clinical laboratory, anatomic pathology, and blood transfusion services. Design: A qualitative analysis was performed on 227 root cause analysis reports from the Veterans Health Administration. Content analysis of case reports from March 9, 2000, to March 1, 2008, was facilitated by a Natural Language Processing program. Data were categorized by the 3 stages of the laboratory test cycle. Results: Patient misidentification accounted for 182 of 253 adverse events, which occurred in all 3 stages of the test cycle. Of 132 misidentification events occurring in the preanalytic phase, events included wrist bands labeled for the wrong patient applied on admission (n = 8) and laboratory tests ordered for the wrong patient after selecting the wrong electronic medical record from a menu of similar names and Social Security numbers (n = 31). Specimen mislabeling during collection was associated with “batching” of specimens and printed labels (n = 35), misinformation from manual entry on laboratory forms (n = 14), failure of 2-source patient identification for clinical laboratory specimens (n = 24), and failure of 2-person verification of patient identity for blood bank specimens (n = 20). Of 37 events in the analytic phase, relabeling all specimens with accession numbers was associated with mislabeled specimen containers, tissue cassettes, and microscopic slides (n = 27).
Misidentified microscopic slides were associated with a failure of 2-pathologist verification for cancer diagnosis (n = 4), and wrong-patient transfusions were associated with mislabeled blood products (n = 3) and a failure of 2-person verification of blood products before release by the blood bank (n = 3). There were 13 events in the postanalytic phase, in which results were reported into the wrong patient medical record (n = 8) and incompatible blood transfusions were associated with failed 2-person verification of blood products (n = 5). Conclusions: Patient misidentification in the clinical laboratory, anatomic pathology, and blood transfusion processes was due to a limited set of causal factors in all 3 phases of the test cycle. A focus on these factors will inform systemic mitigation and prevention strategies.
APA, Harvard, Vancouver, ISO, and other styles
44

Howland, Marie. "Technological change and the spatial restructuring of data entry and processing services." Technological Forecasting and Social Change 43, no. 2 (March 1993): 185–96. http://dx.doi.org/10.1016/0040-1625(93)90016-z.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Essa, Youssef M., Gamal Attiya, Ayman El-Sayed, and Ahmed ElMahalawy. "Data processing platforms for electronic health records." Health and Technology 8, no. 4 (January 31, 2018): 271–80. http://dx.doi.org/10.1007/s12553-018-0219-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Mutiarni, Rita. "Implementasi Electronic Data Processing Pada Koperasi Wanita." Eksis: Jurnal Riset Ekonomi dan Bisnis 12, no. 2 (October 20, 2017): 135–48. http://dx.doi.org/10.26533/eksis.v12i2.203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Fowler, Kim. "Data processing in measurement instrumentation." IEEE Instrumentation and Measurement Magazine 9, no. 6 (December 2006): 36–42. http://dx.doi.org/10.1109/mim.2006.250649.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Cole, Elodia, Etta D. Pisano, Gregory J. Clary, Donglin Zeng, Marcia Koomen, Cherie M. Kuzmiak, Bo Kyoung Seo, Yeonhee Lee, and Dag Pavic. "A comparative study of mobile electronic data entry systems for clinical trials data collection." International Journal of Medical Informatics 75, no. 10-11 (October 2006): 722–29. http://dx.doi.org/10.1016/j.ijmedinf.2005.10.007.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Hsiao, A., A. Fenick, and J. Michel. "Using a Scripted Data Entry Process to Transfer Legacy Immunization Data While Transitioning Between Electronic Medical Record Systems." Applied Clinical Informatics 05, no. 01 (2014): 284–98. http://dx.doi.org/10.4338/aci-2013-11-ra-0096.

Full text
Abstract:
Summary. Background: Transitioning between Electronic Medical Records (EMR) can result in patient data being stranded in legacy systems, with subsequent failure to provide appropriate patient care. Manual chart abstraction is labor-intensive, error-prone, and difficult to institute for immunizations on a systems level in a timely fashion. Objectives: We sought to transfer immunization data from two of our health system’s soon-to-be-replaced EMRs to the future EMR using a single process instead of separate interfaces for each facility. Methods: We used scripted data entry, a process where a computer automates manual data entry, to insert data into the future EMR. Using the Center for Disease Control’s CVX immunization codes, we developed a bridge between immunization identifiers within our system’s EMRs. We performed a two-step process evaluation of the data transfer using automated data comparison and manual chart review. Results: We completed the data migration from two facilities in 16.8 hours with no data loss or corruption. We successfully populated the future EMR with 99.16% of our legacy immunization data – 500,906 records – just prior to our EMR transition date. A subset of immunizations, first recognized during clinical care, had not originally been extracted from the legacy systems. Once identified, this data – 1,695 records – was migrated using the same process with minimal additional effort. Conclusions: Scripted data entry for immunizations is more accurate than published estimates for manual data entry, and we completed our data transfer in 1.2% of the total time we predicted for manual data entry. Performing this process before EMR conversion helped identify obstacles to data migration. Drawing upon this work, we will reuse this process for other healthcare facilities in our health system as they transition to the future EMR. Citation: Michel J, Hsiao A, Fenick A.
Using a scripted data entry process to transfer legacy immunization data while transitioning between electronic medical record systems. Appl Clin Inf 2014; 5: 284–298. http://dx.doi.org/10.4338/ACI-2013-11-RA-0096
APA, Harvard, Vancouver, ISO, and other styles
50

Marehn, David Thomas, Detlef Wilhelm, Heike Pospisil, and Roberto Pizzoferrato. "Double entry method for the verification of data a chromatography data system receives." Journal of Sensors and Sensor Systems 8, no. 1 (May 17, 2019): 207–14. http://dx.doi.org/10.5194/jsss-8-207-2019.

Full text
Abstract:
The importance of software validation increases as the need for high usability and suitability of software applications grows. In order to reduce costs and manage risk factors, more and more recommendations and rules have been established. In the field of pharmacy, the vendors of so-called chromatography data systems (CDSs) had to implement the guidelines of the Code of Federal Regulations Title 21 (CFR 21) during the last few years in order to fulfill the increasing requirements. CFR 21 part 11 deals with electronic records and signatures. This part is binding for each company in the regulated environment that wishes to create, edit and sign electronic information instead of printing it on paper. Subsection CFR 21 part 11.10(h) explains how to perform an input check for manual user entries as well as for data collected from an external device. In this article we present an approach that performs the double entry method on data provided by the hardware instrument in order to investigate possible influences on the raw data by the handling CDS. A software tool has been written which allows us to communicate with a high-performance liquid chromatography (HPLC) detector and acquire data from it. The communication is completely independent of a CDS, which is started separately and connected to the same system. Using this configuration, we made parallel data acquisition by two instances at the same time possible. Two CDSs have been tested, and for at least one of them it has been shown that a comparison of the acquired data can be done as with the double entry method for data verification. For the second CDS, we checked whether it would be applicable after a few modifications. The given approach could be used either for live verification of produced raw data or as a single test during a software operational qualification to verify the data acquisition functionality of the software.
APA, Harvard, Vancouver, ISO, and other styles
