
Journal articles on the topic 'Biology – Research – Data processing'



Consult the top 50 journal articles for your research on the topic 'Biology – Research – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ahdesmäki, Miika J. "Improved PDX and CDX Data Processing—Letter." Molecular Cancer Research 16, no. 11 (2018): 1813. http://dx.doi.org/10.1158/1541-7786.mcr-18-0534.

2

Khandelwal, Garima, and Crispin Miller. "Improved PDX and CDX Data Processing—Response." Molecular Cancer Research 16, no. 11 (2018): 1814. http://dx.doi.org/10.1158/1541-7786.mcr-18-0535.

3

Bai, Qifeng. "Big Data Research: Database and Computing." Journal of Big Data Research 1, no. 1 (2018): 1–4. http://dx.doi.org/10.14302/issn.2768-0207.jbr-17-1925.

Abstract:
Big data research has become a popular and exciting area of study in almost all scientific fields, including biology, chemistry, epidemiology, medicine and drug discovery. Various systems and platforms produce large amounts of data every day, and researchers and practitioners dealing with big data benefit greatly when practical databases and useful software are introduced in a timely manner. The Journal of Big Data Research (JBR) provides an efficient, open-access publishing platform for big data research. The first issue of JBR aims to foster the dissemination of high-quality big data studies of biological, medical and chemical databases, as well as new algorithms and software for big data processing. Databases and computing frameworks were selected to introduce developments in big data across biology, medicine and drug discovery. Mature, functional databases can serve big data research in many scientific fields and help scientists extract useful, essential datasets from massive data. Grid computing and cloud computing supply a new paradigm that offers an effective framework for computing and services. Research papers are welcomed on practical databases and on new algorithms and software for big data studies. Such papers not only provide effective application methods and platforms, but also point to a promising future for big data research.
4

Fodje, Michel, Kathryn Janzen, Shanunivan Labiuk, James Gorin, and Pawel Grochulski. "AutoProcess: Automated strategy calculation, data processing & structure solution." Acta Crystallographica Section A Foundations and Advances 70, a1 (2014): C791. http://dx.doi.org/10.1107/s2053273314092080.

Abstract:
Two critical aspects of macromolecular crystallography experiments are (1) determining the optimal parameters and strategy for collecting good-quality data and (2) optimally processing the collected data to facilitate structure determination. These tasks can be daunting for inexperienced crystallographers and often lead to inefficiencies as valuable beam time is used up. To support automation, remote access and high-throughput crystallography, we have developed a software system that automates all data processing tasks required at the synchrotron. AutoProcess is layered on the XDS data processing package and makes use of other utilities such as BEST from the European Molecular Biology Laboratory (EMBL), the CCP4 utilities and SHELX. The software can be used from the command line as a standalone application, but it can also be run as a service on a high-performance computing cluster and integrated into beamline control and information management systems such as MxDC and MxLIVE, allowing users to determine the optimal strategy for data collection and/or process full datasets with the click of a button. Users are automatically presented with a graphical data processing report as well as reflection output files in popular formats. For small-molecule and peptide structures, an unrefined initial structure with an electron density map is automatically generated using only the raw diffraction images and the chemical composition of the molecule. Future developments will include sub-structure solution for MAD/SAD/SIRAS data. The software is freely available under an open-source license from the authors. The Canadian Light Source is supported by the Natural Sciences and Engineering Research Council of Canada, the National Research Council Canada, the Canadian Institutes of Health Research, the Province of Saskatchewan, Western Economic Diversification Canada, and the University of Saskatchewan.
5

Azhar, Dwi Yunar, Mieke Miarsyah, and Erna Heryanti. "A Correlation Self-Efficacy with Biology Students Participation in Waste Processing." BIOSFER: JURNAL PENDIDIKAN BIOLOGI 8, no. 1 (2018): 47–50. http://dx.doi.org/10.21009/biosferjpb.8-1.7.

Abstract:
The volume of waste produced by Jakarta residents doubled in 2015. If this waste is neglected, it will accumulate, damaging the environment and harming the surrounding community. Addressing the problem requires community participation in waste processing, including by biology students. One factor that can affect biology students' participation in waste processing is their self-efficacy. The aim of this study is to determine the correlation between self-efficacy and biology students' participation in waste processing. The research was conducted at the State University of Jakarta in May 2015. A survey method with correlational studies was used, and 116 biology students were selected by simple random sampling. After the prerequisite tests, the data were found to be normally distributed and homogeneous. The simple regression equation is Ŷ = 35.04 + 0.74X. The correlation coefficient obtained is 0.68, which indicates a correlation between self-efficacy and biology students' participation in waste processing. Biology students' self-efficacy contributed 46.11% to their participation in waste management.
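For readers checking the arithmetic (our note, not part of the paper): the reported contribution is the coefficient of determination, i.e. the square of the correlation coefficient.

    # The abstract reports r = 0.68 and a 46.11% contribution; r**2 links the two.
    r = 0.68
    print(f"{r**2:.2%}")  # -> 46.24%; the reported 46.11% implies r was about 0.679 before rounding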
6

Kahsay, Robel, Jeet Vora, Rahi Navelkar, et al. "GlyGen data model and processing workflow." Bioinformatics 36, no. 12 (2020): 3941–43. http://dx.doi.org/10.1093/bioinformatics/btaa238.

Abstract:
Summary: Glycoinformatics plays a major role in glycobiology research, and the development of a comprehensive glycoinformatics knowledgebase is critical. This application note describes the GlyGen data model, processing workflow and data access interfaces, featuring programmatic use-case example queries based on specific biological questions. The GlyGen project is a data integration, harmonization and dissemination project for carbohydrate- and glycoconjugate-related data retrieved from multiple international data sources including UniProtKB, GlyTouCan, UniCarbKB and other key resources. Availability and implementation: The GlyGen web portal is freely accessible at https://glygen.org. The data portal, web services, SPARQL endpoint and GitHub repository are also freely available at https://data.glygen.org, https://api.glygen.org, https://sparql.glygen.org and https://github.com/glygener, respectively. All code is released under the GNU General Public License version 3 (GNU GPLv3) and is available on GitHub at https://github.com/glygener. The datasets are made available under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Supplementary information: Supplementary data are available at Bioinformatics online.
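To illustrate the kind of programmatic access the note describes, here is a minimal sketch of our own against the public SPARQL endpoint; the query is a generic triple pattern, not one of the paper's use-case queries.

    # Minimal sketch: query the GlyGen SPARQL endpoint (generic query, not from the paper).
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://sparql.glygen.org")
    sparql.setQuery("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 5")  # any SPARQL endpoint accepts this
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["s"]["value"], row["p"]["value"], row["o"]["value"])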
7

Almasoud, Ameera, Hend Al-Khalifa, AbdulMalik Al-salman, and Miltiadis Lytras. "A Framework for Enhancing Big Data Integration in Biological Domain Using Distributed Processing." Applied Sciences 10, no. 20 (2020): 7092. http://dx.doi.org/10.3390/app10207092.

Abstract:
Massive heterogeneous big data residing at different sites in various types and formats need to be integrated into a single unified view before data mining can begin. Furthermore, in most applications and research, a single big data source is not enough to complete an analysis and achieve its goals. Unfortunately, there is no general or standardized integration process; the nature of an integration process depends on the data type, domain, and integration purpose. Based on these parameters, we proposed, implemented, and tested a big data integration framework that integrates big data in the biology domain, based on the domain ontology and using distributed processing. The distributed integration produced the same result as local integration. The results are equivalent in terms of the ontology size before integration; the number of added, skipped, and overlapping items; the ontology size after integration; and the number of edges, vertices, and roots. The results also do not violate any logical consistency rules, passing all logical consistency tests, such as the Jena Ontology API, HermiT, and Pellet reasoners. The integration result is a new big data source that combines big data from several critical sources in the biology domain and transforms it into one unified format to help researchers and specialists use it for further research and analysis.
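The kind of consistency check the authors mention can be sketched with the Python owlready2 package, which drives the same HermiT reasoner; this is our illustration with placeholder file names, not the authors' pipeline.

    # Sketch: merge two ontologies and check logical consistency with HermiT via owlready2.
    # The file paths are hypothetical placeholders.
    from owlready2 import get_ontology, sync_reasoner

    onto_a = get_ontology("file:///data/source_a.owl").load()
    onto_b = get_ontology("file:///data/source_b.owl").load()
    onto_a.imported_ontologies.append(onto_b)  # a simple merge by import

    with onto_a:
        sync_reasoner()  # runs HermiT; raises OwlReadyInconsistentOntologyError on inconsistency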
8

Hoffmann, R., J. A. Schultz, R. Schellhorn, et al. "Non-invasive imaging methods applied to neo- and paleontological cephalopod research." Biogeosciences Discussions 10, no. 11 (2013): 18803–51. http://dx.doi.org/10.5194/bgd-10-18803-2013.

Abstract:
Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-)biology. Different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology and biology of extinct and extant cephalopods.
9

Nazipova, N. N. "Big Data in Bioinformatics." Mathematical Biology and Bioinformatics 12, no. 1 (2017): 102–19. http://dx.doi.org/10.17537/2017.12.102.

Abstract:
Sequencing of the human genome began in 1994. It took ten years of collaborative work by many research groups from different countries to produce a draft of the human genome. Modern technologies allow a whole genome to be sequenced in a few days. We discuss here the advances in modern bioinformatics related to the emergence of high-performance sequencing platforms, which not only expanded the capabilities of biology and related sciences but also gave rise to the phenomenon of Big Data in biology. The necessity of developing new technologies and methods for organizing the storage, management, analysis and visualization of big data is substantiated. Modern bioinformatics faces not only the problem of processing enormous volumes of heterogeneous data, but also a variety of methods of interpreting and presenting results, and the simultaneous existence of various software tools and data formats. Ways of meeting these challenges are discussed, in particular by drawing on experience from other areas of modern life, such as web intelligence and business intelligence. The former is the area of research and development that explores the role of artificial intelligence and information technology (IT) in new products, services and frameworks empowered by the World Wide Web; the latter is the domain of IT that addresses decision-making. New database management systems, beyond relational ones, will help solve the problem of storing huge datasets while keeping search queries acceptably fast. New programming technologies, such as generic programming and visual programming, are designed to address the diversity of genomic data formats and to make it possible to quickly create one's own scripts for data processing.
10

Copenhaver-Parry, Paige E. "Taking Temperature with Leaves: A Semester-Long Structured-Inquiry Research Investigation for Undergraduate Plant Biology." American Biology Teacher 82, no. 4 (2020): 247–55. http://dx.doi.org/10.1525/abt.2020.82.4.247.

Abstract:
Inquiry- and course-based research pedagogies have demonstrated effectiveness for preparing undergraduate biology students with authentic scientific skills and competencies, yet many students lack the experience to engage successfully in open-ended research activities without sufficient scaffolding and structure. Further, curricula for student-centered laboratory activities are lacking for several biological disciplines, including plant biology and botany. In this article, I describe a semester-long structured-inquiry research curriculum developed for a plant biology course taught to second-year biology students that integrates key elements of inquiry and discovery while providing a structured approach to gaining research skills. In the research project, students collect leaves from woody dicot plants across a range of environments that are characterized by different mean annual temperatures, and investigate the relationship between various leaf characteristics and temperature. Curricular materials are provided to teach skills in scientific paper reading, field data collection, data processing including microscopy and image analysis, quantitative data analysis in R, biological inference, and scientific writing. This comprehensive, ready-to-implement curriculum is suitable for plant biology, botany, and plant ecology courses and is particularly valuable for students with no prior research experience.
11

Gîfu, Daniela, Diana Trandabăț, Kevin Cohen, and Jingbo Xia. "Special Issue on the Curative Power of Medical Data." Data 4, no. 2 (2019): 85. http://dx.doi.org/10.3390/data4020085.

Abstract:
With the massive amounts of medical data made available online, language technologies have proven indispensable for processing biomedical and molecular biology literature, health data and patient records. With huge numbers of reports, evaluating their impact has long ceased to be a trivial task. Linking the contents of these documents to each other, as well as to specialized ontologies, could enable access to and discovery of structured clinical information, and could foster a major leap in natural language processing and in health research. The aim of this Special Issue of Data, "Curative Power of Medical Data", is to gather innovative approaches for the exploitation of biomedical data using semantic web technologies and linked data, and to develop community involvement in biomedical research. The Special Issue contains four surveys covering a wide range of topics, from analyzing the writing style of biomedical articles, to automatically generating tests from medical references, constructing a gold-standard biomedical corpus, and visualizing biomedical data.
12

Kristiana, Evi. "Case Study: Learning Difficulties of Qualitative Research Methodology at Biology Education Postgraduate." IJECA (International Journal of Education and Curriculum Application) 3, no. 1 (2020): 31. http://dx.doi.org/10.31764/ijeca.v3i1.2039.

Abstract:
Qualitative research methodology courses prepare postgraduate students to carry out research, one of the three obligations of higher education. The course is a new subject, and most students had never practiced it or been familiar with its concepts. The aim of the study is to analyze the causes of learning difficulties, and the solutions adopted by students and lecturers, in qualitative research methodology courses. This study used a qualitative research approach with a case study design. Data were obtained from questionnaires and interviews with 20 postgraduate students of biology education at Universitas Negeri Malang. Data processing consisted of data reduction, data presentation, and drawing conclusions. The students' main problems were understanding the teaching materials, which were written in English, and the fact that the methods in various qualitative studies tend to be similar, causing confusion. Students pursued various solutions, for example conducting discussions, compiling concept maps, and practicing qualitative research. The supporting lecturers' main difficulty was assessment, and they took advantage of online media to facilitate it. The findings of this study provide alternative solutions for lecturers and students taking qualitative research methodology courses, in order to minimize these constraints.
13

Feltus, Frank A., Joseph R. Breen, Juan Deng, et al. "The Widening Gulf between Genomics Data Generation and Consumption: A Practical Guide to Big Data Transfer Technology." Bioinformatics and Biology Insights 9s1 (January 2015): BBI.S28988. http://dx.doi.org/10.4137/bbi.s28988.

Abstract:
In the last decade, high-throughput DNA sequencing has become a disruptive technology and pushed the life sciences into a distributed ecosystem of sequence data producers and consumers. Given the power of genomics and declining sequencing costs, biology is an emerging “Big Data” discipline that will soon enter the exabyte data range when all subdisciplines are combined. These datasets must be transferred across commercial and research networks in creative ways since sending data without thought can have serious consequences on data processing time frames. Thus, it is imperative that biologists, bioinformaticians, and information technology engineers recalibrate data processing paradigms to fit this emerging reality. This review attempts to provide a snapshot of Big Data transfer across networks, which is often overlooked by many biologists. Specifically, we discuss four key areas: 1) data transfer networks, protocols, and applications; 2) data transfer security including encryption, access, firewalls, and the Science DMZ; 3) data flow control with software-defined networking; and 4) data storage, staging, archiving and access. A primary intention of this article is to orient the biologist in key aspects of the data transfer process in order to frame their genomics-oriented needs to enterprise IT professionals.
14

Finak, Greg, Bryan Mayer, William Fulp, et al. "DataPackageR: Reproducible data preprocessing, standardization and sharing using R/Bioconductor for collaborative data analysis." Gates Open Research 2 (June 22, 2018): 31. http://dx.doi.org/10.12688/gatesopenres.12832.1.

Abstract:
A central tenet of reproducible research is that scientific results are published along with the underlying data and software code necessary to reproduce and verify the findings. A host of tools and software have been released that facilitate such work-flows and scientific journals have increasingly demanded that code and primary data be made available with publications. There has been little practical advice on implementing reproducible research work-flows for large ’omics’ or systems biology data sets used by teams of analysts working in collaboration. In such instances it is important to ensure all analysts use the same version of a data set for their analyses. Yet, instantiating relational databases and standard operating procedures can be unwieldy, with high "startup" costs and poor adherence to procedures when they deviate substantially from an analyst’s usual work-flow. Ideally a reproducible research work-flow should fit naturally into an individual’s existing work-flow, with minimal disruption. Here, we provide an overview of how we have leveraged popular open source tools, including Bioconductor, Rmarkdown, git version control, R, and specifically R’s package system combined with a new tool DataPackageR, to implement a lightweight reproducible research work-flow for preprocessing large data sets, suitable for sharing among small-to-medium sized teams of computational scientists. Our primary contribution is the DataPackageR tool, which decouples time-consuming data processing from data analysis while leaving a traceable record of how raw data is processed into analysis-ready data sets. The software ensures packaged data objects are properly documented and performs checksum verification of these along with basic package version management, and importantly, leaves a record of data processing code in the form of package vignettes. Our group has implemented this work-flow to manage, analyze and report on pre-clinical immunological trial data from multi-center, multi-assay studies for the past three years.
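DataPackageR itself is an R/Bioconductor tool; purely to illustrate the checksum-verification idea described above (processed data objects checked against recorded digests before analysis), here is a language-neutral sketch in Python with hypothetical file names and digests.

    # Concept sketch (not DataPackageR): verify processed data objects against recorded digests.
    import hashlib, pathlib

    def sha256(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    # Digest recorded when the data set was packaged (file name and value are hypothetical):
    manifest = {"analysis_ready.csv": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
    for fname, recorded in manifest.items():
        status = "OK" if sha256(fname) == recorded else "STALE: re-run data processing"
        print(fname, status)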
15

Finak, Greg, Bryan Mayer, William Fulp, et al. "DataPackageR: Reproducible data preprocessing, standardization and sharing using R/Bioconductor for collaborative data analysis." Gates Open Research 2 (July 10, 2018): 31. http://dx.doi.org/10.12688/gatesopenres.12832.2.

Abstract:
A central tenet of reproducible research is that scientific results are published along with the underlying data and software code necessary to reproduce and verify the findings. A host of tools and software have been released that facilitate such work-flows and scientific journals have increasingly demanded that code and primary data be made available with publications. There has been little practical advice on implementing reproducible research work-flows for large ’omics’ or systems biology data sets used by teams of analysts working in collaboration. In such instances it is important to ensure all analysts use the same version of a data set for their analyses. Yet, instantiating relational databases and standard operating procedures can be unwieldy, with high "startup" costs and poor adherence to procedures when they deviate substantially from an analyst’s usual work-flow. Ideally a reproducible research work-flow should fit naturally into an individual’s existing work-flow, with minimal disruption. Here, we provide an overview of how we have leveraged popular open source tools, including Bioconductor, Rmarkdown, git version control, R, and specifically R’s package system combined with a new tool DataPackageR, to implement a lightweight reproducible research work-flow for preprocessing large data sets, suitable for sharing among small-to-medium sized teams of computational scientists. Our primary contribution is the DataPackageR tool, which decouples time-consuming data processing from data analysis while leaving a traceable record of how raw data is processed into analysis-ready data sets. The software ensures packaged data objects are properly documented and performs checksum verification of these along with basic package version management, and importantly, leaves a record of data processing code in the form of package vignettes. Our group has implemented this work-flow to manage, analyze and report on pre-clinical immunological trial data from multi-center, multi-assay studies for the past three years.
16

Bensmail, Halima, and Abdelali Haoudi. "Postgenomics: Proteomics and Bioinformatics in Cancer Research." Journal of Biomedicine and Biotechnology 2003, no. 4 (2003): 217–30. http://dx.doi.org/10.1155/s1110724303209207.

Abstract:
Now that the human genome is completed, the characterization of the proteins encoded by the sequence remains a challenging task. The study of the complete protein complement of the genome, the “proteome,” referred to as proteomics, will be essential if new therapeutic drugs and new disease biomarkers for early diagnosis are to be developed. Research efforts are already underway to develop the technology necessary to compare the specific protein profiles of diseased versus nondiseased states. These technologies provide a wealth of information and rapidly generate large quantities of data. Processing the large amounts of data will lead to useful predictive mathematical descriptions of biological systems which will permit rapid identification of novel therapeutic targets and identification of metabolic disorders. Here, we present an overview of the current status and future research approaches in defining the cancer cell's proteome in combination with different bioinformatics and computational biology tools toward a better understanding of health and disease.
17

Yip, Y. L. "Accelerating Knowledge Discovery through Community Data Sharing and Integration." Yearbook of Medical Informatics 18, no. 01 (2009): 117–20. http://dx.doi.org/10.1055/s-0038-1638650.

Abstract:
Objectives: To summarize current excellent research in the field of bioinformatics. Method: Synopsis of the articles selected for the IMIA Yearbook 2009. Results: The selection process for this yearbook's section on bioinformatics resulted in six excellent articles highlighting several important trends. First, Semantic Web technology continues to play an important role in heterogeneous data integration; novel applications also put more emphasis on its ability to make logical inferences leading to new insights and discoveries. Second, translational research, due to its complex nature, increasingly relies on collective intelligence made available through the adoption of community-defined protocols or software architectures for secure data annotation, sharing and analysis. Advances in systems biology, bio-ontologies and text-mining can also be noted. Conclusions: Current biomedical research is gradually evolving towards an environment characterized by intensive collaboration and more sophisticated knowledge processing activities. Enabling technologies, whether Semantic Web or other solutions, are expected to play an increasingly important role in generating new knowledge in the foreseeable future.
18

Likhoshway, E. V. "Actual trends in water ecosystem biology development." Marine Biological Journal 2, no. 4 (2017): 3–14. http://dx.doi.org/10.21072/mbj.2017.02.4.01.

Abstract:
This synopsis characterizes new trends in oceanology that have arisen over the past several years as a result of the practical application of modern methods of data acquisition and processing. First of all, these are methods of massively parallel sequencing, "-omics" approaches, and bioinformatics methods of data storage and analysis. The identification of biologically active substances in the aquatic environment and the results of laboratory experiments show the existence of molecular signal transduction both at the level of populations and interspecies relations between microorganisms and at the level of their trophic connections. "From molecules to ecosystems" is a current trend in the biology of marine ecosystems. The unification and analysis of large databases, including satellite imagery, together with "cloud" technologies, have created a new direction of research in ecoinformatics; this makes it possible to understand the structural-functional organization of aquatic ecosystems as a whole.
19

Agustina, Putri, and Ike Wartini Ningsih. "The Observation of Biology Practical in Grade XI SMA Muhammadiyah 1 Surakarta 2015/2016 Based on Biology Practical Implementation Standard." Bioeducation Journal 1, no. 1 (2017): 34–44. http://dx.doi.org/10.24036/bioedu.v1i1.24.

Abstract:
The quality of biology learning must be supported by practicum in the laboratory. Practicum goes well when all of its components meet the minimum standards for school practicum: the laboratory and its administrators, the teacher, the learning process, and the learning materials used. This research aims to analyze the biology practicum process at SMA Muhammadiyah 1 Surakarta against the school biology practicum standards. The research was conducted in class XI of SMA Muhammadiyah 1 Surakarta in the second semester of the 2015/2016 year, using an expository research design with a qualitative approach. The data cover laboratory condition, laboratory administration, student worksheets, and the practicum process, based on observation, documentation, and interviews. The results showed that: (1) the biology laboratory of SMA Muhammadiyah 1 Surakarta meets the good criteria (score: 80); (2) the school does not have dedicated laboratory staff (laborant and laboratory technician); (3) students use worksheets for their laboratory work; and (4) the practicum feasibility of class XI in 2015/2016 meets the very good criteria (score: 92).
20

Kachala, Michael, John Westbrook, and Dmitri Svergun. "Extension of the sasCIF format and its applications for data processing and deposition." Journal of Applied Crystallography 49, no. 1 (2016): 302–10. http://dx.doi.org/10.1107/s1600576715024942.

Abstract:
Recent advances in small-angle scattering (SAS) experimental facilities and data analysis methods have prompted a dramatic increase in the number of users and of projects conducted, causing an upsurge in the number of objects studied, experimental data available and structural models generated. To organize the data and models and make them accessible to the community, the Task Forces on SAS and hybrid methods for the International Union of Crystallography and the Worldwide Protein Data Bank envisage developing a federated approach to SAS data and model archiving. Within the framework of this approach, the existing databases may exchange information and provide independent but synchronized entries to users. At present, ways of exchanging information between the various SAS databases are not established, leading to possible duplication and incompatibility of entries, and limiting the opportunities for data-driven research for SAS users. In this work, a solution is developed to resolve these issues and provide a universal exchange format for the community, based on the use of the widely adopted crystallographic information framework (CIF). The previous version of the sasCIF format, implemented as an extension of the core CIF dictionary, has been available since 2000 to facilitate SAS data exchange between laboratories. The sasCIF format has now been extended to describe comprehensively the necessary experimental information, results and models, including relevant metadata for SAS data analysis and for deposition into a database. Processing tools for these files (sasCIFtools) have been developed, and these are available both as standalone open-source programs and integrated into the SAS Biological Data Bank, allowing the export and import of data entries as sasCIF files. Software modules to save the relevant information directly from beamline data-processing pipelines in sasCIF format are also developed. This update of sasCIF and the relevant tools are an important step in the standardization of the way SAS data are presented and exchanged, to make the results easily accessible to users and to promote further the application of SAS in the structural biology community.
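Because sasCIF files use standard CIF syntax, any CIF parser can read them. As a minimal sketch of our own (the file name and data tag are placeholders, and gemmi is one of several CIF parsers, not a tool named in the paper):

    # Sketch: inspect a sasCIF file with the gemmi CIF parser.
    import gemmi

    doc = gemmi.cif.read("entry.sascif")   # sasCIF files follow standard CIF syntax
    block = doc.sole_block()               # assumes a single data block
    print("block:", block.name)
    print(block.find_value("_sas_scan.title"))  # hypothetical tag; find_value returns None if absent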
21

Griffin, Philippa C., Jyoti Khadake, Kate S. LeMay, et al. "Best practice data life cycle approaches for the life sciences." F1000Research 6 (August 31, 2017): 1618. http://dx.doi.org/10.12688/f1000research.12344.1.

Abstract:
Throughout history, the life sciences have been revolutionised by technological advances; in our era this is manifested by advances in instrumentation for data generation, and consequently researchers now routinely handle large amounts of heterogeneous data in digital formats. The simultaneous transitions towards biology as a data science and towards a ‘life cycle’ view of research data pose new challenges. Researchers face a bewildering landscape of data management requirements, recommendations and regulations, without necessarily being able to access data management training or possessing a clear understanding of practical approaches that can assist in data management in their particular research domain. Here we provide an overview of best practice data life cycle approaches for researchers in the life sciences/bioinformatics space with a particular focus on ‘omics’ datasets and computer-based data processing and analysis. We discuss the different stages of the data life cycle and provide practical suggestions for useful tools and resources to improve data management practices.
22

Griffin, Philippa C., Jyoti Khadake, Kate S. LeMay, et al. "Best practice data life cycle approaches for the life sciences." F1000Research 6 (June 4, 2018): 1618. http://dx.doi.org/10.12688/f1000research.12344.2.

Abstract:
Throughout history, the life sciences have been revolutionised by technological advances; in our era this is manifested by advances in instrumentation for data generation, and consequently researchers now routinely handle large amounts of heterogeneous data in digital formats. The simultaneous transitions towards biology as a data science and towards a ‘life cycle’ view of research data pose new challenges. Researchers face a bewildering landscape of data management requirements, recommendations and regulations, without necessarily being able to access data management training or possessing a clear understanding of practical approaches that can assist in data management in their particular research domain. Here we provide an overview of best practice data life cycle approaches for researchers in the life sciences/bioinformatics space with a particular focus on ‘omics’ datasets and computer-based data processing and analysis. We discuss the different stages of the data life cycle and provide practical suggestions for useful tools and resources to improve data management practices.
23

Van Chi, Phan, and Le Thi Bich Thao. "Proteogenomics and its applications in biology and precision medicine." Vietnam Journal of Biotechnology 19, no. 1 (2021): 1–14. http://dx.doi.org/10.15625/1811-4989/15386.

Abstract:
In this review, we briefly discuss proteogenomics, the integration of proteomics with genomics and transcriptomics, whose underlying technologies are next-generation sequencing (NGS) and mass spectrometry (MS) together with processing of the resulting data; it is an emerging field that promises to accelerate fundamental research on transcription and translation, as well as its applications. By combining genomic and proteomic information, scientists are achieving new results thanks to a more complete and unified understanding of complex molecular biological processes. This review introduces some of the results of using proteogenomics to solve problems such as gene/genome re-annotation, including the editing of open reading frames (ORFs), and improving the detection of new genes in a number of different organisms, including humans. In particular, the paper also discusses the potential of proteogenomics, through research achievements on the human genome/proteome, in precision medicine, especially in projects on phylogenetic and diagnostic research and cancer treatment. The challenges and future of proteogenomics are also discussed and documented.
24

Grunspan, Daniel Z., Benjamin L. Wiggins, and Steven M. Goodreau. "Understanding Classrooms through Social Network Analysis: A Primer for Social Network Analysis in Education Research." CBE—Life Sciences Education 13, no. 2 (2014): 167–78. http://dx.doi.org/10.1187/cbe.13-08-0162.

Abstract:
Social interactions between students are a major and underexplored part of undergraduate education. Understanding how learning relationships form in undergraduate classrooms, as well as the impacts these relationships have on learning outcomes, can inform educators in unique ways and improve educational reform. Social network analysis (SNA) provides the necessary tool kit for investigating questions involving relational data. We introduce basic concepts in SNA, along with methods for data collection, data processing, and data analysis, using a previously collected example study on an undergraduate biology classroom as a tutorial. We conduct descriptive analyses of the structure of the network of costudying relationships. We explore generative processes that create observed study networks between students and also test for an association between network position and success on exams. We also cover practical issues, such as the unique aspects of human subjects review for network studies. Our aims are to convince readers that using SNA in classroom environments allows rich and informative analyses to take place and to provide some initial tools for doing so, in the process inspiring future educational studies incorporating relational data.
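As a taste of what such relational analyses look like in practice, here is a toy sketch of ours using the Python networkx library (made-up student IDs; not the paper's data or tooling):

    # Toy sketch: a directed "who reports studying with whom" network and a simple centrality measure.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([("s1", "s2"), ("s3", "s2"), ("s2", "s4"), ("s5", "s2")])  # hypothetical reports

    # In-degree centrality: how often a student is named as a study partner.
    for student, c in sorted(nx.in_degree_centrality(G).items(), key=lambda kv: -kv[1]):
        print(student, round(c, 2))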
25

Alexiou, Athanasios, Dimitrios Zisis, Ioannis Kavakiotis, et al. "DIANA-mAP: Analyzing miRNA from Raw NGS Data to Quantification." Genes 12, no. 1 (2020): 46. http://dx.doi.org/10.3390/genes12010046.

Abstract:
microRNAs (miRNAs) are small non-coding RNAs (~22 nts) that are considered central post-transcriptional regulators of gene expression and key components in many pathological conditions. Next-Generation Sequencing (NGS) technologies have led to inexpensive, massive data production, revolutionizing every research aspect in the fields of biology and medicine. In particular, small RNA-Seq (sRNA-Seq) enables small non-coding RNA quantification on a high-throughput scale, providing a closer look into the expression profiles of these crucial regulators within the cell. Here, we present the DIANA-microRNA-Analysis-Pipeline (DIANA-mAP), a fully automated computational pipeline that allows the user to perform miRNA NGS data analysis from raw sRNA-Seq libraries to quantification and differential expression analysis in an easy, scalable, efficient, and intuitive way. Emphasis has been given to data pre-processing, an early step that is critical to the robustness of the final results and conclusions. Through modularity, parallelizability and customization, DIANA-mAP produces high-quality expression results, reports and graphs for downstream data mining and statistical analysis. In an extended evaluation, the tool outperforms similar tools that provide pre-processing without any adapter knowledge. In closing, DIANA-mAP is freely available: it can be used dockerized, with no dependency installations, or standalone, accompanied by an installation manual on GitHub.
26

Hammersley, A. P. "FIT2D: a multi-purpose data reduction, analysis and visualization program." Journal of Applied Crystallography 49, no. 2 (2016): 646–52. http://dx.doi.org/10.1107/s1600576716000455.

Abstract:
FIT2D is one of the principal area detector data reduction, analysis and visualization programs used at the European Synchrotron Radiation Facility and is also used by more than 400 research groups worldwide, including many other synchrotron radiation facilities. It has been developed for X-ray science, but is applicable to other structural techniques and is used in analysing electron diffraction data and microscopy, and neutron diffraction and scattering data. FIT2D works for both interactive and `batch'-style data processing. Calibration and correction of detector distortions, integration of two-dimensional data to a variety of one-dimensional scans, and one- and two-dimensional model fitting are the main uses. Many other general-purpose image processing and image visualization operations are available. Commands are available through a `graphical user interface' and operations common to certain types of analysis are grouped within `interfaces'. Executable versions for most workstation and personal computer systems, and web page documentation, are available at http://www.esrf.eu/computing/scientific/FIT2D.
27

Chen, Long, Chunhua Zhang, Yanling Wang, et al. "Data mining and pathway analysis of glucose-6-phosphate dehydrogenase with natural language processing." Molecular Medicine Reports 16, no. 2 (2017): 1900–1910. http://dx.doi.org/10.3892/mmr.2017.6785.

28

Urbano, Ferdinando, Francesca Cagnacci, Clément Calenge, Holger Dettki, Alison Cameron, and Markus Neteler. "Wildlife tracking data management: a new vision." Philosophical Transactions of the Royal Society B: Biological Sciences 365, no. 1550 (2010): 2177–85. http://dx.doi.org/10.1098/rstb.2010.0081.

Abstract:
To date, the processing of wildlife location data has relied on a diversity of software and file formats. Data management and the following spatial and statistical analyses were undertaken in multiple steps, involving many time-consuming importing/exporting phases. Recent technological advancements in tracking systems have made large, continuous, high-frequency datasets of wildlife behavioural data available, such as those derived from the global positioning system (GPS) and other animal-attached sensor devices. These data can be further complemented by a wide range of other information about the animals' environment. Management of these large and diverse datasets for modelling animal behaviour and ecology can prove challenging, slowing down analysis and increasing the probability of mistakes in data handling. We address these issues by critically evaluating the requirements for good management of GPS data for wildlife biology. We highlight that dedicated data management tools and expertise are needed. We explore current research in wildlife data management. We suggest a general direction of development, based on a modular software architecture with a spatial database at its core, where interoperability, data model design and integration with remote-sensing data sources play an important role in successful GPS data handling.
29

Hoffmann, R., J. A. Schultz, R. Schellhorn, et al. "Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research." Biogeosciences 11, no. 10 (2014): 2721–39. http://dx.doi.org/10.5194/bg-11-2721-2014.

Abstract:
Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-)biology. Different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum/maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells. In particular, we present a method for precise buoyancy calculation, in which cephalopod shells were scanned together with different reference bodies, an approach developed in the medical sciences. It is necessary to know the volume of the reference bodies, which should have absorption properties similar to those of the object of interest; exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied.
30

Li, Zhi, Tianyue Zhang, Haojie Lei, et al. "Research on Gastric Cancer’s Drug-resistant Gene Regulatory Network Model." Current Bioinformatics 15, no. 3 (2020): 225–34. http://dx.doi.org/10.2174/1574893614666190722102557.

Abstract:
Objective: Based on bioinformatics, differentially expressed drug-resistance gene data in gastric cancer were analyzed, screened and mined, through modeling and network construction, to find valuable data associated with multi-drug resistance in gastric cancer. Methods: First, the data sets were preprocessed in three respects: data processing, data annotation and classification, and functional clustering. Second, based on the preprocessed data, a primary gene regulatory network was constructed for each class by mining interactions among the genes. We computed a score for each node in each primary network and ranked the nodes accordingly; on this basis, appropriate core nodes were selected and the corresponding core networks were developed. Results and Conclusion: Finally, the mined core network modules were analyzed. Correlation analysis showed that the constructed network module contained 20 core genes, and that the module contained valuable data associated with multi-drug resistance in gastric cancer.
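The node-scoring and core-extraction step can be pictured with a small sketch (ours, with toy gene names and degree centrality standing in for the paper's unspecified scoring function):

    # Toy sketch: rank nodes of a gene network by degree centrality and keep a "core" subgraph.
    import networkx as nx

    G = nx.Graph([("g1", "g2"), ("g2", "g3"), ("g2", "g4"), ("g4", "g5"), ("g3", "g4")])
    scores = nx.degree_centrality(G)                        # stand-in for the paper's node score
    core_nodes = sorted(scores, key=scores.get, reverse=True)[:3]
    core = G.subgraph(core_nodes)
    print(core_nodes, list(core.edges()))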
31

Wajid, Bilal, Muhammad U. Sohail, Ali R. Ekti, and Erchin Serpedin. "The A, C, G, and T of Genome Assembly." BioMed Research International 2016 (2016): 1–10. http://dx.doi.org/10.1155/2016/6329217.

Abstract:
Genome assembly in its two decades of history has produced significant research, in terms of both biotechnology and computational biology. This contribution delineates sequencing platforms and their characteristics, examines key steps involved in filtering and processing raw data, explains assembly frameworks, and discusses quality statistics for the assessment of the assembled sequence. Furthermore, the paper explores recent Ubuntu-based software environments oriented towards genome assembly as well as some avenues for future research.
32

Playdon, Mary C., Amit D. Joshi, Fred K. Tabung, et al. "Metabolomics Analytics Workflow for Epidemiological Research: Perspectives from the Consortium of Metabolomics Studies (COMETS)." Metabolites 9, no. 7 (2019): 145. http://dx.doi.org/10.3390/metabo9070145.

Abstract:
The application of metabolomics technology to epidemiological studies is emerging as a new approach to elucidate disease etiology and for biomarker discovery. However, analysis of metabolomics data is complex and there is an urgent need for the standardization of analysis workflow and reporting of study findings. To inform the development of such guidelines, we conducted a survey of 47 cohort representatives from the Consortium of Metabolomics Studies (COMETS) to gain insights into the current strategies and procedures used for analyzing metabolomics data in epidemiological studies worldwide. The results indicated a variety of applied analytical strategies, from biospecimen and data pre-processing and quality control to statistical analysis and reporting of study findings. These strategies included methods commonly used within the metabolomics community and applied in epidemiological research, as well as novel approaches to pre-processing pipelines and data analysis. To help with these discrepancies, we propose use of open-source initiatives such as the online web-based tool COMETS Analytics, which includes helpful tools to guide analytical workflow and the standardized reporting of findings from metabolomics analyses within epidemiological studies. Ultimately, this will improve the quality of statistical analyses, research findings, and study reproducibility.
33

Radha, K., and B. Thirumala Rao. "A Study on Big Data Techniques and Applications." International Journal of Advances in Applied Sciences 5, no. 2 (2016): 101. http://dx.doi.org/10.11591/ijaas.v5.i2.pp101-108.

Abstract:
We are living in an on-demand digital universe, with data spread by users and organizations at a very high rate. This data is categorized as Big Data because of its Variety, Velocity, Veracity and Volume, and is further classified as unstructured, semi-structured or structured. Large datasets require special processing systems, a unique challenge for academics and researchers. MapReduce jobs use efficient data processing techniques applied in every phase of MapReduce, such as mapping, combining, shuffling, indexing, grouping and reducing. Big Data is one of the current and future research frontiers. It is changing many areas, such as public administration, scientific research, business, the financial services industry, the automotive industry, supply chains, logistics and industrial engineering, retail, and entertainment. Other Big Data applications exist in atmospheric science, astronomy, medicine, biology, biogeochemistry, genomics, and interdisciplinary and complex research. This paper presents the essential characteristics of Big Data applications and state-of-the-art tools and techniques for handling data-intensive applications; it also considers building an index for web pages available online and shows how Map and Reduce functions can be executed by treating the input as a set of documents.
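The document-indexing idea mentioned at the end can be shown in a few lines (a minimal in-memory imitation of the Map and Reduce phases, not Hadoop code; the documents are made up):

    # Minimal sketch of MapReduce-style inverted indexing over a set of documents.
    from collections import defaultdict

    docs = {"d1": "big data needs special processing", "d2": "map and reduce process big data"}

    mapped = [(word, doc_id) for doc_id, text in docs.items() for word in text.split()]  # map phase
    index = defaultdict(set)
    for word, doc_id in mapped:                                                          # shuffle + reduce
        index[word].add(doc_id)
    print(sorted(index["big"]))  # -> ['d1', 'd2']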
34

Coffa, Jordy, Mark A. van de Wiel, Begoña Diosdado, Beatriz Carvalho, Jan Schouten, and Gerrit A. Meijer. "MLPAnalyzer: Data Analysis Tool for Reliable Automated Normalization of MLPA Fragment Data." Analytical Cellular Pathology 30, no. 4 (2008): 323–35. http://dx.doi.org/10.1155/2008/605109.

Abstract:
Background: Multiplex Ligation-dependent Probe Amplification (MLPA) is a rapid, simple, reliable and customizable method for detecting copy number changes of individual genes at high resolution, and it allows high-throughput analysis. The technique is typically applied to study specific genes in large sample series. The large amount of data, dissimilarities in PCR efficiency among the different probe amplification products, and sample-to-sample variation pose a challenge to data analysis and interpretation. We therefore set out to develop an MLPA data analysis strategy and tool that is simple to use, while still taking the above-mentioned sources of variation into account. Materials and Methods: MLPAnalyzer was developed in Visual Basic for Applications, and can accept a large number of file formats directly from capillary sequence systems. Sizes of all MLPA probe signals are determined and filtered, quality control steps are performed, and variation in peak intensity related to size is corrected for. DNA copy number ratios of test samples are computed and displayed in a table view, and a set of comprehensive figures is generated. To validate this approach, MLPA reactions were performed using a dedicated MLPA mix on six different colorectal cancer cell lines. The generated data were normalized using our program, and the results were compared to previously obtained array-CGH results using both statistical methods and visual examination. Results and Discussion: Visual examination of bar graphs and direct ratios for both techniques showed very similar results, while the average Pearson moment correlation over all MLPA probes was 0.42. Our results thus show that automated MLPA data processing following our suggested strategy may be of significant use, especially when handling large MLPA data sets, when samples are of different quality, or when interpretation of MLPA electropherograms is too complex. It remains important to recognize, however, that automated MLPA data processing may only be successful when a dedicated experimental setup is also considered.
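The central normalization step, computing copy number ratios of a test sample against a reference, can be sketched as follows (our simplified illustration with made-up intensities; MLPAnalyzer itself adds filtering, QC and size-related corrections):

    # Simplified sketch of MLPA-style ratio normalization (toy numbers, not MLPAnalyzer's algorithm).
    test = {"probeA": 1200.0, "probeB": 2300.0, "probeC": 640.0}
    reference = {"probeA": 1150.0, "probeB": 1100.0, "probeC": 630.0}

    # Scale each sample to its total signal, then take per-probe ratios (~1.0 = unchanged copy number).
    t_total, r_total = sum(test.values()), sum(reference.values())
    for probe in test:
        ratio = (test[probe] / t_total) / (reference[probe] / r_total)
        print(probe, round(ratio, 2))  # ratios well above 1 suggest gains; well below 1, losses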
35

Pahomova, Valentina, and Nasiya Daminova. "Scientific Basis of Antioxidants Application in Agriculture and Other Spheres of Biology." Vestnik of Kazan State Agrarian University 12, no. 4 (2018): 53–56. http://dx.doi.org/10.12737/article_5a84350a4b57e1.91267946.

Abstract:
The article reviews experimental data on the universal functional role of antioxidants in the cells of biosystems under stressful conditions, as well as the practical application of synthetic and natural antioxidants in crop production, livestock farming, the processing and food industries, pharmacology, medicine, and other fields. Particular attention is paid to the authors' own research on the screening and scientific substantiation of compounds with antioxidant effects, including their liposomal forms, in plant growing, livestock and poultry farming. The diversified use of the natural universal antioxidant dihydroquercetin in various fields of the modern bioeconomy is described.
36

Torkamaneh, Davoud, Jérôme Laroche, and François Belzile. "Fast-GBS v2.0: an analysis toolkit for genotyping-by-sequencing data." Genome 63, no. 11 (2020): 577–81. http://dx.doi.org/10.1139/gen-2020-0077.

Abstract:
Genotyping-by-sequencing (GBS) is a rapid, flexible, low-cost, and robust genotyping method that simultaneously discovers variants and calls genotypes within a broad range of samples. These characteristics make GBS an excellent tool for many applications and research questions, from conservation biology to functional genomics, in both model and non-model species. Continued improvement of GBS relies on a more comprehensive understanding of data analysis, development of fast and efficient bioinformatics pipelines, accurate missing-data imputation, and active post-release support. Here, we present the second generation of Fast-GBS (v2.0), which offers several new options (e.g., processing paired-end reads and imputation of missing data) and features (e.g., summary statistics of genotypes) to improve the GBS data analysis process. A performance assessment showed that Fast-GBS v2.0 outperformed other available analytical pipelines, such as GBS-SNP-CROP and Gb-eaSy. Fast-GBS v2.0 provides an analysis platform that can be run on modest computational resources with different types of sequencing data, and it allows missing-data imputation for various species in different contexts.
37

Armstrong, Nicola J., and Mark A. van de Wiel. "Microarray Data Analysis: From Hypotheses to Conclusions Using Gene Expression Data." Analytical Cellular Pathology 26, no. 5-6 (2004): 279–90. http://dx.doi.org/10.1155/2004/943940.

Abstract:
We review several commonly used methods for the design and analysis of microarray data. To begin with, some experimental design issues are addressed. Several approaches for pre‐processing the data (filtering and normalization) before the statistical analysis stage are then discussed. A common first step in this type of analysis is gene selection based on statistical testing. Two approaches, permutation and model‐based methods are explained and we emphasize the need to correct for multiple testing. Moreover, powerful approaches based on gene sets are mentioned. Clustering of either genes or samples is frequently performed when analyzing microarray data. We summarize the basics of both supervised and unsupervised clustering (classification). The latter may be of use for creating diagnostic arrays, for example. Construction of biological networks, such as pathways, is a statistically challenging but complex task that is a relatively new development and hence mentioned only briefly. We finish with some remarks on literature and software. The emphasis in this paper is on the philosophy behind several statistical issues and on a critical interpretation of microarray related analysis methods.
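To make the testing step concrete, here is a toy sketch of ours: a permutation test for one gene followed by Benjamini-Hochberg correction across genes (made-up data; not code from the review):

    # Toy sketch: permutation test for one gene, then FDR correction across many p-values.
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    group_a, group_b = rng.normal(0, 1, 20), rng.normal(0.8, 1, 20)  # made-up expression values

    observed = abs(group_a.mean() - group_b.mean())
    pooled = np.concatenate([group_a, group_b])
    perm_diffs = [abs(p[:20].mean() - p[20:].mean())
                  for p in (rng.permutation(pooled) for _ in range(5000))]
    p_value = (1 + sum(d >= observed for d in perm_diffs)) / 5001
    print("permutation p =", p_value)

    pvals = np.array([p_value, 0.20, 0.01, 0.04])  # hypothetical per-gene p-values
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(p_adj)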
APA, Harvard, Vancouver, ISO, and other styles
38

Dill-McFarland, Kimberly A., Stephan G. König, Florent Mazel, et al. "An integrated, modular approach to data science education in microbiology." PLOS Computational Biology 17, no. 2 (2021): e1008661. http://dx.doi.org/10.1371/journal.pcbi.1008661.

Full text
Abstract:
We live in an increasingly data-driven world, where high-throughput sequencing and mass spectrometry platforms are transforming biology into an information science. This has shifted major challenges in biological research from data generation and processing to interpretation and knowledge translation. However, postsecondary training in bioinformatics, or more generally data science for life scientists, lags behind current demand. In particular, development of accessible, undergraduate data science curricula has the potential to improve research and learning outcomes as well as better prepare students in the life sciences to thrive in public and private sector careers. Here, we describe the Experiential Data science for Undergraduate Cross-Disciplinary Education (EDUCE) initiative, which aims to progressively build data science competency across several years of integrated practice. Through EDUCE, students complete data science modules integrated into required and elective courses augmented with coordinated cocurricular activities. The EDUCE initiative draws on a community of practice consisting of teaching assistants (TAs), postdocs, instructors, and research faculty from multiple disciplines to overcome several reported barriers to data science for life scientists, including instructor capacity, student prior knowledge, and relevance to discipline-specific problems. Preliminary survey results indicate that even a single module improves student self-reported interest and/or experience in bioinformatics and computer science. Thus, EDUCE provides a flexible and extensible active learning framework for integration of data science curriculum into undergraduate courses and programs across the life sciences.
APA, Harvard, Vancouver, ISO, and other styles
39

Sahlabadi, Amirhossein, Ravie Chandren Muniyandi, Mahdi Sahlabadi, and Hossein Golshanbafghy. "Framework for Parallel Preprocessing of Microarray Data Using Hadoop." Advances in Bioinformatics 2018 (March 29, 2018): 1–9. http://dx.doi.org/10.1155/2018/9391635.

Full text
Abstract:
Nowadays, microarray technology has become one of the popular ways to study gene expression and the diagnosis of disease. The National Center for Biotechnology Information (NCBI) hosts public databases containing large volumes of biological data that must be preprocessed, since they carry high levels of noise and bias. Robust Multiarray Average (RMA) is one of the standard and popular methods utilized to preprocess the data and remove the noise. Most preprocessing algorithms are time-consuming and unable to handle large numbers of datasets with thousands of experiments. Parallel processing can be used to address these issues. Hadoop is a well-known distributed file system framework that provides a parallel environment in which to run such experiments. In this research, for the first time, the capabilities of Hadoop and the statistical power of R have been leveraged to parallelize the available preprocessing algorithm RMA to efficiently process microarray data. The experiment was run on a cluster of five nodes, each with 16 cores and 16 GB of memory. The study compares the efficiency and performance of RMA parallelized with Hadoop against RMA parallelized with the affyPara package, as well as against sequential RMA. The results show that the speed-up of the proposed approach outperforms both the sequential and the affyPara approaches.
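
A full Hadoop deployment cannot be reproduced in a few lines, but the map-style idea the paper exploits — partition the array files into batches and preprocess the batches in parallel on separate workers — can be sketched schematically. In this Python stand-in, rma_preprocess is a hypothetical placeholder for the per-batch RMA call the authors dispatch to cluster nodes.

    # Schematic stand-in for the paper's Hadoop parallelization: split the
    # microarray files into batches and preprocess each batch in parallel.
    # `rma_preprocess` is a hypothetical placeholder for the per-batch RMA
    # step (background correction, quantile normalization, summarization).
    from multiprocessing import Pool

    def rma_preprocess(batch):
        # Placeholder: in the paper this is an R/RMA call executed on a
        # Hadoop node; here we just report which files the worker received.
        return f"processed {len(batch)} arrays: {batch[0]}..{batch[-1]}"

    def chunk(files, n_batches):
        """Partition the file list into roughly equal batches (the 'map' split)."""
        k = max(1, len(files) // n_batches)
        return [files[i:i + k] for i in range(0, len(files), k)]

    if __name__ == "__main__":
        cel_files = [f"array_{i:03d}.CEL" for i in range(80)]
        with Pool(processes=5) as pool:       # 5 workers, like the 5-node cluster
            results = pool.map(rma_preprocess, chunk(cel_files, 5))
        for r in results:
            print(r)
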
APA, Harvard, Vancouver, ISO, and other styles
40

Le Merrer, Julie, Jérôme A. J. Becker, Katia Befort, and Brigitte L. Kieffer. "Reward Processing by the Opioid System in the Brain." Physiological Reviews 89, no. 4 (2009): 1379–412. http://dx.doi.org/10.1152/physrev.00005.2009.

Full text
Abstract:
The opioid system consists of three receptors, mu, delta, and kappa, which are activated by endogenous opioid peptides processed from three protein precursors, proopiomelanocortin, proenkephalin, and prodynorphin. Opioid receptors are recruited in response to natural rewarding stimuli and drugs of abuse, and both endogenous opioids and their receptors are modified as addiction develops. Mechanisms whereby aberrant activation and modifications of the opioid system contribute to drug craving and relapse remain to be clarified. This review summarizes our present knowledge on brain sites where the endogenous opioid system controls hedonic responses and is modified in response to drugs of abuse in the rodent brain. We review 1) the latest data on the anatomy of the opioid system, 2) the consequences of local intracerebral pharmacological manipulation of the opioid system on reinforced behaviors, 3) the consequences of gene knockout on reinforced behaviors and drug dependence, and 4) the consequences of chronic exposure to drugs of abuse on expression levels of opioid system genes. Future studies will establish key molecular actors of the system and neural sites where opioid peptides and receptors contribute to the onset of addictive disorders. Combined with data from human and nonhuman primate (not reviewed here), research in this extremely active field has implications both for our understanding of the biology of addiction and for therapeutic interventions to treat the disorder.
APA, Harvard, Vancouver, ISO, and other styles
41

Rahayu, Aneke Dewi, and Ari Prasetyoaji. "PROBLEMATIC INTERNET USE (PIU) IDENTIFICATION USING THE BIOPSYCHOSOCIAL MODEL APPROACH IN EMERGING ADULTHOOD." International Journal of Business, Humanities, Education and Social Sciences (IJBHES) 2, no. 1 (2020): 19–23. http://dx.doi.org/10.46923/ijbhes.v2i1.58.

Full text
Abstract:
Individuals with Problematic Internet Use (PIU) use the internet excessively, which makes it difficult for them to control their internet use and has a negative impact on their physical and mental health. This research aims to examine the relationships among biological, psychological, and social factors and PIU, in order to build a model that explains this phenomenon. Simple random sampling was the sampling technique used, with 403 individuals in emerging adulthood as subjects. Data were collected with a PIU scale and a biopsychosocial scale consisting of three parts: biological, psychological, and social. Data processing yielded a model relating biological, psychological, and social factors to PIU, with a chi-square score of 0.102, a probability score of 0.061, a CMIN/DF of 1.518, a GFI of 0.919, an AGFI of 0.971, a CFI of 1.00, a TLI of 0.90, and an RMSEA of 0.072, so the proposed model falls into the strong and acceptable category. The strongest relationship in the model is between social factors and PIU (0.47), followed by psychological factors and PIU (0.22) and biological factors and PIU (0.12).
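
For readers less familiar with the structural-equation fit indices reported here, the two most mechanical ones have standard textbook definitions, where chi-square is the model fit statistic, df its degrees of freedom, and N the sample size:

    \[
      \mathrm{CMIN}/df = \frac{\chi^2}{df},
      \qquad
      \mathrm{RMSEA} = \sqrt{\max\!\left(\frac{\chi^2 - df}{df\,(N-1)},\; 0\right)}
    \]

By common convention, CMIN/df below about 3 and RMSEA below about 0.08 are read as acceptable fit, which is consistent with the values of 1.518 and 0.072 reported above.
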
APA, Harvard, Vancouver, ISO, and other styles
42

Peris-Díaz, Manuel D., Shannon R. Sweeney, Olga Rodak, Enrique Sentandreu, and Stefano Tiziani. "R-MetaboList 2: A Flexible Tool for Metabolite Annotation from High-Resolution Data-Independent Acquisition Mass Spectrometry Analysis." Metabolites 9, no. 9 (2019): 187. http://dx.doi.org/10.3390/metabo9090187.

Full text
Abstract:
Technological advancements have permitted the development of innovative multiplexing strategies for data independent acquisition (DIA) mass spectrometry (MS). Software solutions and extensive compound libraries facilitate the efficient analysis of MS1 data, regardless of the analytical platform. However, the development of comparable tools for DIA data analysis has significantly lagged. This research introduces an update to the former MetaboList R package and a workflow for full-scan MS1 and MS/MS DIA processing of metabolomic data from multiplexed liquid chromatography high-resolution mass spectrometry (LC-HRMS) experiments. Compared to the former version, the update incorporates new functions for isolated MS1 and MS/MS workflows, processing of MS/MS data from stepped collision energies, performance scoring of metabolite annotations, and batch job analysis. The flexibility and efficiency of this strategy were assessed through the study of the metabolite profiles of human urine, leukemia cell culture, and medium samples analyzed by either liquid chromatography quadrupole time-of-flight (q-TOF) or quadrupole orbital (q-Orbitrap) instruments. This open-source alternative was designed to promote global metabolomic strategies based on recursive retrospective research of multiplexed DIA analysis.
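
The core annotation operation in tools of this kind — matching observed precursor m/z values against a compound library within a tight mass tolerance — can be sketched generically in a few lines; the library entries and the 10 ppm window below are illustrative assumptions, not MetaboList defaults.

    # Generic sketch of the metabolite-annotation core: match observed m/z
    # values against a compound library within a ppm tolerance. The library
    # and the 10 ppm window are illustrative, not MetaboList defaults.

    def ppm_error(observed, theoretical):
        """Signed mass error in parts per million."""
        return (observed - theoretical) / theoretical * 1e6

    library = {  # theoretical [M+H]+ m/z values, illustrative entries
        "glucose":    181.0707,
        "citrate":    193.0343,
        "tryptophan": 205.0972,
    }

    def annotate(mz_values, tol_ppm=10.0):
        hits = []
        for mz in mz_values:
            for name, theo in library.items():
                err = ppm_error(mz, theo)
                if abs(err) <= tol_ppm:
                    hits.append((mz, name, round(err, 2)))
        return hits

    # Glucose and tryptophan match within tolerance; 300.1234 stays unannotated.
    print(annotate([181.0712, 205.0969, 300.1234]))
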
APA, Harvard, Vancouver, ISO, and other styles
43

Zinchenko, M. O., K. B. Sukhomlin, O. P. Zinchenko, and V. S. Tepliuk. "The biology of Simulium noelleri and Simulium dolini: morphological, ecological and molecular data." Biosystems Diversity 29, no. 2 (2021): 180–84. http://dx.doi.org/10.15421/012122.

Full text
Abstract:
Molecular genetic research has revolutionized the taxonomy and systematics of the family Simuliidae. Simulium noelleri Friederichs, 1920 is a species of blackfly common in the Holarctic and reported from 33 countries. In 1954, Topchiev recorded it in Ukraine for the first time. Simulium dolini Usova et Sukhomlin, 1989 has been recorded at the borders of Ukraine and Belarus. It was first described by Usova and Sukhomlin in 1989, from a 1985 collection from the territory of Volyn region. Usova and Sukhomlin, Yankovsky, and Adler state that S. noelleri and S. dolini are distinct species by morphological characteristics that differ in all phases of development. Adults differ in the structure of the genital appendages, the palps, the margin and shape of the face and forehead, and the colour of the legs; larvae – in the pattern on the frontal capsule, the number of rays in the fans, the teeth of the mandibles and hypostoma, and the structure of the hind organ of attachment; pupae – in the branching of the gills. Molecular data are becoming an increasingly important tool in insect taxonomy; therefore, we checked whether these two closely related species also differ genetically. The development of S. noelleri and S. dolini was studied in four small rivers of Volyn region, Ukraine (Chornohuzka, Konopelka, Putylivka, Omelyanivka) from 2017 to 2019. During initial processing of insect samples, we used the standard protocol EPPO PM7/129. We obtained the nucleotide sequence of S. dolini. It was shown that the populations of S. noelleri and S. dolini from medium and small rivers of Volyn differ in biological, morphological, behavioural, and genetic characteristics. Comparison of S. noelleri with GenBank data confirms the identification of three distinct morphotypes from Volyn, Great Britain, and Canada. As a result of this research, it was confirmed that the two closely related species S. dolini and S. noelleri from the noelleri species group differ in the structure of their mitochondrial DNA, which supports their independent taxonomic status. Additional studies comprising more individuals from larger areas of Europe are required to verify the taxonomic position of these two species.
APA, Harvard, Vancouver, ISO, and other styles
44

Mabrouk, Mai S., Safaa M. Naeem, and Mohamed A. Eldosoky. "DIFFERENT GENOMIC SIGNAL PROCESSING METHODS FOR EUKARYOTIC GENE PREDICTION: A SYSTEMATIC REVIEW." Biomedical Engineering: Applications, Basis and Communications 29, no. 01 (2017): 1730001. http://dx.doi.org/10.4015/s1016237217300012.

Full text
Abstract:
The field of bioinformatics has now firmly established itself as a discipline in molecular biology and encompasses a wide range of subject areas, from structural biology and genomics to gene expression studies. Bioinformatics is the application of computer technology to the management of biological information. Genomic signal processing (GSP) techniques have been applied widely in bioinformatics and will continue to play an essential part in the investigation of biomedical problems. GSP refers to using digital signal processing (DSP) methods for the analysis of genomic data (e.g., DNA sequences). Recently, applications of GSP in bioinformatics have received great attention, such as identification of DNA protein coding regions, identification of reading frames, cancer detection, and others. Cancer, known medically as malignant neoplasm, is one of the most dangerous diseases the world faces and has raised death rates in recent years, so detecting it at an early stage offers a promising approach for determining and taking actions to treat this risk. GSP is a method that can be used to detect cancerous cells, which are often caused by genetic abnormalities. This systematic review discusses some of the applications of GSP in bioinformatics generally. The GSP techniques used specifically for cancer detection are presented, collecting the recent results and the current state of the art in this emerging subject of research.
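
One classic GSP technique covered by reviews of this kind is exon detection via the period-3 property: after a DNA sequence is mapped to four binary indicator signals (the Voss representation), protein-coding stretches tend to produce a spectral peak at frequency N/3. A minimal sketch on a toy sequence, assuming numpy:

    # Minimal sketch of a standard GSP technique: Voss binary indicator
    # mapping of a DNA sequence followed by a DFT, where protein-coding
    # regions tend to show a peak at frequency k = N/3 (the period-3 property).
    import numpy as np

    def voss_spectrum(seq):
        seq = seq.upper()
        # One binary indicator signal per nucleotide.
        indicators = {b: np.array([1.0 if c == b else 0.0 for c in seq])
                      for b in "ACGT"}
        # Total spectral content: sum of squared DFT magnitudes of the four signals.
        return sum(np.abs(np.fft.fft(x))**2 for x in indicators.values())

    # Toy example: a strongly periodic "coding-like" sequence (repeated codon).
    seq = "ATG" * 60
    S = voss_spectrum(seq)
    n = len(seq)
    print("peak at k = N/3?", S[n // 3] == max(S[1:n // 2 + 1]))
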
APA, Harvard, Vancouver, ISO, and other styles
45

Nengsih, Sri, and Winda Afriani. "Pengembangan LKS Biologi Berbasis Inkuiri Terbimbing Materi Sistem Regulasi." BIOEDUSAINS: Jurnal Pendidikan Biologi dan Sains 2, no. 1 (2019): 50–59. http://dx.doi.org/10.31539/bioedusains.v2i1.618.

Full text
Abstract:
The purpose of this study is to produce a valid and practical guided-inquiry-based Biology LKS (student worksheet). This is research-and-development work, using the 4-D model (four D): define, design, develop, and disseminate. The guided-inquiry-based Biology LKS was validated and then tested on a limited basis at MAN 3 Payakumbuh with students of class XI MIA. To assess the practicality of the Biology LKS, the researchers used questionnaire response sheets given to teachers and students. Validation resulted in an average percentage of 81.98%, which belongs to the very valid category. Processing the data from the practicality questionnaires yielded an average of 88.2% for the teachers' responses, in the very practical category, and 81.9% for the students' responses, also in the very practical category. In conclusion, the developed guided-inquiry-based Biology LKS on regulatory system material is very practical for use in the learning process.
 Keywords: guided inquiry, LKS, regulation system
APA, Harvard, Vancouver, ISO, and other styles
46

Swift, S., and N. Peek. "Intelligent Data Analysis for Knowl edge Discovery, Patient Monitoring and Quality Assessment." Methods of Information in Medicine 51, no. 04 (2012): 318–22. http://dx.doi.org/10.1055/s-0038-1627045.

Full text
Abstract:
Summary Objective: To introduce the focus theme of Methods of Information in Medicine on Intelligent Data Analysis for Knowledge Discovery, Patient Monitoring and Quality Assessment. Methods: Based on two workshops on Intelligent Data Analysis in bioMedicine (IDAMAP) held in Washington, DC, USA (2010) and Bled, Slovenia (2011), six authors were invited to write full papers for the focus theme. Each paper was thoroughly reviewed by anonymous referees and revised one or more times by the authors. Results: The selected papers cover four ongoing and emerging topics in Intelligent Data Analysis (IDA), namely: i) systems biology and metabolic pathway modelling; ii) gene expression data modelling; iii) signal processing from in-home monitoring systems; and iv) quality of care assessment. Each of these topics is discussed in detail to introduce the papers to the reader. Conclusion: The development and application of IDA methods in biomedicine is an active area of research which continues to blend with other subfields of medical informatics. As data become increasingly ubiquitous in the biomedical domain, the demand for fast, smart and flexible data analysis methods is undiminished.
APA, Harvard, Vancouver, ISO, and other styles
47

Mölder, Felix, Kim Philipp Jablonski, Brice Letcher, et al. "Sustainable data analysis with Snakemake." F1000Research 10 (January 18, 2021): 33. http://dx.doi.org/10.12688/f1000research.29032.1.

Full text
Abstract:
Data analysis often entails a multitude of heterogeneous steps, from the application of various command line tools to the usage of scripting languages like R or Python for the generation of plots and tables. It is widely recognized that data analyses should ideally be conducted in a reproducible way. Reproducibility enables technical validation and regeneration of results on the original or even new data. However, reproducibility alone is by no means sufficient to deliver an analysis that is of lasting impact (i.e., sustainable) for the field, or even just one research group. We postulate that it is equally important to ensure adaptability and transparency. The former describes the ability to modify the analysis to answer extended or slightly different research questions. The latter describes the ability to understand the analysis in order to judge whether it is not only technically, but methodologically valid. Here, we analyze the properties needed for a data analysis to become reproducible, adaptable, and transparent. We show how the popular workflow management system Snakemake can be used to guarantee this, and how it enables an ergonomic, combined, unified representation of all steps involved in data analysis, ranging from raw data processing, to quality control and fine-grained, interactive exploration and plotting of final results.
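
To make the workflow idea concrete, here is a toy Snakefile in Snakemake's Python-based rule language; the sample names, paths, and shell commands are illustrative placeholders, not taken from the paper.

    # Toy Snakefile: each rule declares its inputs and outputs, and Snakemake
    # derives the execution order (and re-runs only what is out of date).
    # Sample names, paths, and the shell commands are illustrative placeholders.

    SAMPLES = ["a", "b"]

    rule all:
        input:
            "results/summary.tsv"

    rule clean:
        input:
            "raw/{sample}.csv"
        output:
            "clean/{sample}.csv"
        shell:
            # drop comment lines, as a stand-in for real preprocessing
            "grep -v '^#' {input} > {output}"

    rule summarize:
        input:
            expand("clean/{sample}.csv", sample=SAMPLES)
        output:
            "results/summary.tsv"
        shell:
            "wc -l {input} > {output}"

Running `snakemake --cores 2` would build results/summary.tsv, and a later change to one raw file would trigger re-execution of only the affected rules — the reproducibility and adaptability properties the authors analyze.
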
APA, Harvard, Vancouver, ISO, and other styles
48

Mölder, Felix, Kim Philipp Jablonski, Brice Letcher, et al. "Sustainable data analysis with Snakemake." F1000Research 10 (April 19, 2021): 33. http://dx.doi.org/10.12688/f1000research.29032.2.

Full text
Abstract:
Data analysis often entails a multitude of heterogeneous steps, from the application of various command line tools to the usage of scripting languages like R or Python for the generation of plots and tables. It is widely recognized that data analyses should ideally be conducted in a reproducible way. Reproducibility enables technical validation and regeneration of results on the original or even new data. However, reproducibility alone is by no means sufficient to deliver an analysis that is of lasting impact (i.e., sustainable) for the field, or even just one research group. We postulate that it is equally important to ensure adaptability and transparency. The former describes the ability to modify the analysis to answer extended or slightly different research questions. The latter describes the ability to understand the analysis in order to judge whether it is not only technically, but methodologically valid. Here, we analyze the properties needed for a data analysis to become reproducible, adaptable, and transparent. We show how the popular workflow management system Snakemake can be used to guarantee this, and how it enables an ergonomic, combined, unified representation of all steps involved in data analysis, ranging from raw data processing, to quality control and fine-grained, interactive exploration and plotting of final results.
APA, Harvard, Vancouver, ISO, and other styles
49

Aquili, Luca. "The Role of Tryptophan and Tyrosine in Executive Function and Reward Processing." International Journal of Tryptophan Research 13 (January 2020): 117864692096482. http://dx.doi.org/10.1177/1178646920964825.

Full text
Abstract:
The serotonergic precursor tryptophan and the dopaminergic precursor tyrosine have been shown to be important modulators of mood, behaviour and cognition. Specifically, research on the function of tryptophan has characterised this molecule as particularly relevant in the context of pathological disorders such as depression. Moreover, a large body of evidence has now been accumulated to suggest that tryptophan may also be involved in executive function and reward processing. Despite some clear differentiation with tryptophan, the data reviewed in this paper illustrates that tyrosine shares similar functions with tryptophan in the regulation of executive function and reward, and that these processes in turn, rather than acting in isolation, causally influence each other.
APA, Harvard, Vancouver, ISO, and other styles
50

Song, Bosheng, Zimeng Li, Xuan Lin, Jianmin Wang, Tian Wang, and Xiangzheng Fu. "Pretraining model for biological sequence data." Briefings in Functional Genomics 20, no. 3 (2021): 181–95. http://dx.doi.org/10.1093/bfgp/elab025.

Full text
Abstract:
With the development of high-throughput sequencing technology, biological sequence data reflecting life information have become increasingly accessible. Particularly against the background of the COVID-19 pandemic, biological sequence data play an important role in detecting diseases, analyzing mechanisms, and discovering specific drugs. In recent years, pretraining models that emerged in natural language processing have attracted widespread attention in many research fields, not only to decrease training cost but also to improve performance on downstream tasks. Pretraining models are used for embedding biological sequences and extracting features from large biological sequence corpora to comprehensively understand the biological sequence data. In this survey, we provide a broad review of pretraining models for biological sequence data. We first introduce biological sequences and the corresponding datasets, including brief descriptions and access links. Subsequently, we systematically summarize popular pretraining models for biological sequences in four categories: CNN, word2vec, LSTM, and Transformer. Then, we present applications of the pretraining models to downstream tasks to explain their role. Next, we provide a novel pretraining scheme for protein sequences and a multitask benchmark for protein pretraining models. Finally, we discuss the challenges and future directions in pretraining models for biological sequences.
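
Of the four model families the survey summarizes, word2vec is the simplest to demonstrate: sequences are tokenized into overlapping k-mers ("words"), and embeddings are learned over the resulting corpus. A minimal sketch with gensim follows; the toy sequences and the choice of k = 3 are illustrative assumptions.

    # Minimal word2vec-style sketch for biological sequences: tokenize each
    # sequence into overlapping k-mers, then learn k-mer embeddings.
    # Toy sequences and k = 3 are illustrative choices, not from the survey.
    from gensim.models import Word2Vec

    def kmers(seq, k=3):
        """Split a sequence into overlapping k-mer 'words'."""
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    sequences = ["ATGGCGTACGTT", "ATGGCGTACGAA", "TTGACCGGTACG"]
    corpus = [kmers(s) for s in sequences]

    # Skip-gram embeddings over the k-mer corpus (gensim 4.x API).
    model = Word2Vec(corpus, vector_size=16, window=5, min_count=1, sg=1)
    print(model.wv["ATG"][:4])                  # embedding of one k-mer
    print(model.wv.most_similar("ATG", topn=2)) # nearest k-mers in the space
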
APA, Harvard, Vancouver, ISO, and other styles