
Journal articles on the topic 'Information extraction strategies'


Consult the top 50 journal articles for your research on the topic 'Information extraction strategies.'


Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Sukhahuta, Rattasit, and Dan Smith. "Information Extraction Strategies for Thai Documents." International Journal of Computer Processing of Languages 14, no. 02 (June 2001): 153–72. http://dx.doi.org/10.1142/s0219427901000357.

2

Barrio, Pablo, and Luis Gravano. "Sampling strategies for information extraction over the deep web." Information Processing & Management 53, no. 2 (March 2017): 309–31. http://dx.doi.org/10.1016/j.ipm.2016.11.006.

3

Anderson-Mayes, Ann-Marie. "Strategies to Improve Information Extraction from Multivariate Geophysical Data Suites." Exploration Geophysics 33, no. 2 (June 2002): 57–64. http://dx.doi.org/10.1071/eg02057.

4

Gutierrez, Fernando, Dejing Dou, Stephen Fickas, Daya Wimalasuriya, and Hui Zong. "A hybrid ontology-based information extraction system." Journal of Information Science 42, no. 6 (July 11, 2016): 798–820. http://dx.doi.org/10.1177/0165551515610989.

Abstract:
Information Extraction is the process of automatically obtaining knowledge from plain text. Because of the ambiguity of written natural language, Information Extraction is a difficult task. Ontology-based Information Extraction (OBIE) reduces this complexity by including contextual information in the form of a domain ontology. The ontology provides guidance to the extraction process by providing concepts and relationships about the domain. However, OBIE systems have not been widely adopted because of the difficulties in deployment and maintenance. The Ontology-based Components for Information Extraction (OBCIE) architecture has been proposed as a way to encourage the adoption of OBIE by promoting reusability through modularity. In this paper, we propose two orthogonal extensions to OBCIE that allow the construction of hybrid OBIE systems with higher extraction accuracy and a new functionality. The first extension utilizes OBCIE modularity to integrate different types of implementation into one extraction system, producing a more accurate extraction. For each concept or relationship in the ontology, we can select the best implementation for extraction, or we can combine both implementations under an ensemble learning schema. The second extension is a novel ontology-based error detection mechanism. Following a heuristic approach, we can identify sentences that are logically inconsistent with the domain ontology. Because the implementation strategy for the extraction of a concept is independent of the functionality of the extraction, we can design a hybrid OBIE system with concepts utilizing different implementation strategies for extracting correct or incorrect sentences. Our evaluation shows that, in the implementation extension, our proposed method is more accurate in terms of correctness and completeness of the extraction. Moreover, our error detection method can identify incorrect statements with a high accuracy.
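The per-concept selection described in this abstract can be illustrated with a minimal, hypothetical sketch; the extractor names and scores below are illustrative assumptions, not the authors' actual OBCIE components:

```python
from typing import Callable, Dict, List

def select_extractors(
    concepts: List[str],
    extractors: Dict[str, Callable[[str], float]],
) -> Dict[str, str]:
    """For every ontology concept, keep the extractor implementation
    whose validation score (e.g., F1) is highest."""
    return {
        concept: max(extractors, key=lambda name: extractors[name](concept))
        for concept in concepts
    }

# Toy scores: a rule-based extractor wins on 'Drug', a statistical one on 'Disease'.
scores = {"rules": {"Drug": 0.91, "Disease": 0.74},
          "crf":   {"Drug": 0.85, "Disease": 0.82}}
extractors = {name: (lambda c, n=name: scores[n][c]) for name in scores}
print(select_extractors(["Drug", "Disease"], extractors))
# -> {'Drug': 'rules', 'Disease': 'crf'}
```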
5

Esponda, Ignacio, and Emanuel Vespa. "Hypothetical Thinking and Information Extraction in the Laboratory." American Economic Journal: Microeconomics 6, no. 4 (November 1, 2014): 180–202. http://dx.doi.org/10.1257/mic.6.4.180.

Abstract:
In several common-value environments (e.g., auctions or elections), players should make informational inferences from opponents' strategies under certain hypothetical events (e.g., winning the auction or being pivotal). We design a voting experiment that identifies whether subjects make these inferences and distinguishes between hypothetical thinking and information extraction. Depending on feedback, between 50 and 80 percent of subjects behave nonoptimally. More importantly, these mistakes are driven by difficulty in extracting information from hypothetical, but not from actual, events. Mistakes are robust to experience and hints, and also arise in more general settings where players have no private information. (JEL C91, D71, D72, D82, D83)
6

Maddumala, Venkata Rao, and Arunkumar R. "A Weight Based Feature Extraction Model on Multifaceted Multimedia Bigdata Using Convolutional Neural Network." Ingénierie des systèmes d information 25, no. 6 (December 31, 2020): 729–35. http://dx.doi.org/10.18280/isi.250603.

Abstract:
This paper presents a technique for feature extraction on multimedia, a challenging task when handling big data. Analyzing and extracting valuable features from high-dimensional datasets strains the limits of statistical methods and strategies. Conventional techniques generally perform poorly when managing high-dimensional datasets. Small sample size has always been an issue in statistical tests, and it is aggravated in high-dimensional data, where the feature dimension is equal to or higher than the number of samples. The power of any statistical test is directly proportional to its ability to reject a null hypothesis, and sample size is an important factor in producing the error probabilities needed for drawing valid conclusions. Thus, one of the effective methods for handling high-dimensional datasets is to reduce their dimension through feature selection and extraction so that valid, accurate analysis can be performed practically. Clustering is the act of finding hidden or similar patterns in data. It is one of the most widely recognized techniques for learning useful features, in which a weight is given to each feature without predefining the various classes. In any feature selection and extraction procedure, the three main considerations are statistical accuracy, model interpretability and computational complexity. For any classification model, it is important to ensure that none of these three components is undermined. In this manuscript, a Weight Based Feature Extraction Model on Multifaceted Multimedia Big Data (WbFEM-MMB) is proposed, which extracts useful features from videos. The feature extraction strategy uses features from the discrete cosine transform, and the features are extracted using a pre-trained Convolutional Neural Network (CNN). The proposed method is compared with traditional methods, and the results show that it exhibits better performance and accuracy in extracting features from multifaceted multimedia data.
7

Nasution, Mahyuddin K. M., et al. "Social Network Extraction Unsupervised." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (April 11, 2021): 4443–49. http://dx.doi.org/10.17762/turcomat.v12i3.1824.

Abstract:
In the era of information technology, the two developing sides are data science and artificial intelligence. On the data science side, one of the tasks is the extraction of social networks from information sources that have the nature of big data. Meanwhile, on the artificial intelligence side, the presence of contradictory methods has an impact on knowledge. This article describes an unsupervised approach as a stream of methods for extracting social networks from information sources. There are a variety of possible approaches and strategies, with shallow methods as a starting concept. Each method has its advantages, but in general each contributes to the integration of the others, namely by simplifying, enriching, and emphasizing the results.
8

Wang Yun-Bo, Li Gong-Ping, Pan Xiao-Dong, and Xu Nan-Nan. "Simulation of X-ray refraction information extraction using multiple image-collecting strategies." Acta Physica Sinica 63, no. 10 (2014): 104206. http://dx.doi.org/10.7498/aps.63.104206.

9

Silva, João Figueira, João Rafael Almeida, and Sérgio Matos. "Extraction of Family History Information From Clinical Notes: Deep Learning and Heuristics Approach." JMIR Medical Informatics 8, no. 12 (December 29, 2020): e22898. http://dx.doi.org/10.2196/22898.

Abstract:
Background Electronic health records store large amounts of patient clinical data. Despite efforts to structure patient data, clinical notes containing rich patient information remain stored as free text, greatly limiting their exploitation. This includes family history, which is highly relevant for applications such as diagnosis and prognosis. Objective This study aims to develop automatic strategies for annotating family history information in clinical notes, focusing not only on the extraction of relevant entities such as family members and disease mentions but also on the extraction of relations between the identified entities. Methods This study extends a previous contribution to the 2019 National Natural Language Processing Clinical Challenges (n2c2) track on family history extraction by improving a previously developed rule-based engine, using deep learning (DL) approaches for the extraction of entities from clinical notes, and combining both approaches in a hybrid end-to-end system capable of successfully extracting family member and observation entities and the relations between those entities. Furthermore, this study analyzes the impact of factors such as the use of external resources and different types of embeddings on the performance of DL models. Results The approaches developed were evaluated in a first task regarding entity extraction and in a second task concerning relation extraction. The proposed DL approach improved observation extraction, obtaining F1 scores of 0.8688 and 0.7907 in the training and test sets, respectively. However, DL approaches have limitations in the extraction of family members. The rule-based engine was adjusted to have higher generalizing capability and achieved family member extraction F1 scores of 0.8823 and 0.8092 in the training and test sets, respectively. The resulting hybrid system obtained F1 scores of 0.8743 and 0.7979 in the training and test sets, respectively. For the second task, the evaluator was adjusted to perform a more exact evaluation than the original one, and the hybrid system obtained F1 scores of 0.6480 and 0.5082 in the training and test sets, respectively. Conclusions We evaluated the impact of several factors on the performance of DL models, and we present an end-to-end system for extracting family history information from clinical notes, which can help in the structuring and reuse of this type of information. The final hybrid solution is provided in a publicly available code repository.
10

Wang, Su, Ming Ya Wang, Jun Zheng, and Kai Zheng. "A Hybrid Keyword Extraction Method Based on TF and Semantic Strategies for Chinese Document." Applied Mechanics and Materials 635-637 (September 2014): 1476–79. http://dx.doi.org/10.4028/www.scientific.net/amm.635-637.1476.

Abstract:
Keyword extraction is important for information retrieval. This paper presents a hybrid keyword extraction method based on TF and semantic strategies for Chinese documents. A new-word-finding method is proposed to identify words that do not exist in the dictionary. Moreover, semantic strategies are introduced to filter out dependent words and remove synonyms. Experimental results show that the proposed method can improve the accuracy and performance of keyword extraction.
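The TF-plus-semantics pipeline can be sketched roughly as below; note that this toy version uses whitespace tokenization and a one-entry synonym table, whereas the paper targets Chinese text and includes the new-word-finding step, which is not reproduced here:

```python
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "this", "by"}
SYNONYMS = {"approach": "method"}  # map variants onto one canonical term

def extract_keywords(text: str, top_k: int = 3) -> list:
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    tokens = [SYNONYMS.get(t, t) for t in tokens if t and t not in STOPWORDS]
    tf = Counter(tokens)                       # raw term frequency
    return [term for term, _ in tf.most_common(top_k)]

print(extract_keywords("This method extracts keywords; the approach "
                       "ranks keywords by frequency."))
# -> ['method', 'keywords', ...] after merging 'approach' into 'method'
```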
11

Farina, Dario, Roberto Merletti, and Roger M. Enoka. "The extraction of neural strategies from the surface EMG." Journal of Applied Physiology 96, no. 4 (April 2004): 1486–95. http://dx.doi.org/10.1152/japplphysiol.01070.2003.

Abstract:
This brief review examines some of the methods used to infer central control strategies from surface electromyogram (EMG) recordings. Among the many uses of the surface EMG in studying the neural control of movement, the review critically evaluates only some of the applications. The focus is on the relations between global features of the surface EMG and the underlying physiological processes. Because direct measurements of motor unit activation are not available and many factors can influence the signal, these relations are frequently misinterpreted. These errors are compounded by the counterintuitive effects that some system parameters can have on the EMG signal. The phenomenon of crosstalk is used as an example of these problems. The review describes the limitations of techniques used to infer the level of muscle activation, the type of motor unit recruited, the upper limit of motor unit recruitment, the average discharge rate, and the degree of synchronization between motor units. Although the global surface EMG is a useful measure of muscle activation and assessment, there are limits to the information that can be extracted from this signal.
12

Farina, Dario, Roberto Merletti, and Roger M. Enoka. "The extraction of neural strategies from the surface EMG: an update." Journal of Applied Physiology 117, no. 11 (December 1, 2014): 1215–30. http://dx.doi.org/10.1152/japplphysiol.00162.2014.

Abstract:
A surface EMG signal represents the linear transformation of motor neuron discharge times by the compound action potentials of the innervated muscle fibers and is often used as a source of information about neural activation of muscle. However, retrieving the embedded neural code from a surface EMG signal is extremely challenging. Most studies use indirect approaches in which selected features of the signal are interpreted as indicating certain characteristics of the neural code. These indirect associations are constrained by limitations that have been detailed previously (Farina D, Merletti R, Enoka RM. J Appl Physiol 96: 1486–1495, 2004) and are generally difficult to overcome. In an update on these issues, the current review extends the discussion to EMG-based coherence methods for assessing neural connectivity. We focus first on EMG amplitude cancellation, which intrinsically limits the association between EMG amplitude and the intensity of the neural activation and then discuss the limitations of coherence methods (EEG-EMG, EMG-EMG) as a way to assess the strength of the transmission of synaptic inputs into trains of motor unit action potentials. The debated influence of rectification on EMG spectral analysis and coherence measures is also discussed. Alternatively, there have been a number of attempts to identify the neural information directly by decomposing surface EMG signals into the discharge times of motor unit action potentials. The application of this approach is extremely powerful, but validation remains a central issue.
13

Popa, Ovidiu, Ellen Oldenburg, and Oliver Ebenhöh. "From sequence to information." Philosophical Transactions of the Royal Society B: Biological Sciences 375, no. 1814 (November 2, 2020): 20190448. http://dx.doi.org/10.1098/rstb.2019.0448.

Abstract:
Today massive amounts of sequenced metagenomic and metatranscriptomic data from different ecological niches and environmental locations are available. Scientific progress depends critically on methods that allow extracting useful information from the various types of sequence data. Here, we will first discuss types of information contained in the various flavours of biological sequence data, and how this information can be interpreted to increase our scientific knowledge and understanding. We argue that a mechanistic understanding of biological systems analysed from different perspectives is required to consistently interpret experimental observations, and that this understanding is greatly facilitated by the generation and analysis of dynamic mathematical models. We conclude that, in order to construct mathematical models and to test mechanistic hypotheses, time-series data are of critical importance. We review diverse techniques to analyse time-series data and discuss various approaches by which time-series of biological sequence data have been successfully used to derive and test mechanistic hypotheses. Analysing the bottlenecks of current strategies in the extraction of knowledge and understanding from data, we conclude that combined experimental and theoretical efforts should be implemented as early as possible during the planning phase of individual experiments and scientific research projects. This article is part of the theme issue ‘Integrative research perspectives on marine conservation’.
14

Deshmukh, Shilpa, et al. "Efficient Methodology for Deep Web Data Extraction." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 1S (April 11, 2021): 286–93. http://dx.doi.org/10.17762/turcomat.v12i1s.1769.

Abstract:
Deep Web contents are accessed through queries submitted to Web databases, and the returned data records are enwrapped in dynamically generated Web pages (called deep Web pages in this paper). Extracting structured data from deep Web pages is a challenging problem because of the complex underlying structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they are Web-page-programming-language dependent. As the most popular two-dimensional medium, the contents on Web pages are always displayed regularly for users to browse. This motivates us to seek a different path for deep Web data extraction that overcomes the limitations of previous works by utilizing some interesting common visual features on deep Web pages. In this paper, a novel vision-based methodology, the Visual Based Deep Web Data Extraction (VBDWDE) algorithm, is proposed. This methodology primarily uses the visual features on deep Web pages to perform deep Web data extraction, including data record extraction and data item extraction. We also propose a new evaluation measure, revision, to capture the amount of human effort needed to produce perfect extraction. Our experiments on a large set of Web databases show that the proposed vision-based methodology is highly effective for deep Web data extraction.
15

Rybinski, Maciej, Xiang Dai, Sonit Singh, Sarvnaz Karimi, and Anthony Nguyen. "Extracting Family History Information From Electronic Health Records: Natural Language Processing Analysis." JMIR Medical Informatics 9, no. 4 (April 30, 2021): e24020. http://dx.doi.org/10.2196/24020.

Abstract:
Background The prognosis, diagnosis, and treatment of many genetic disorders and familial diseases significantly improve if the family history (FH) of a patient is known. Such information is often written in the free text of clinical notes. Objective The aim of this study is to develop automated methods that enable access to FH data through natural language processing. Methods We performed information extraction by using transformers to extract disease mentions from notes. We also experimented with rule-based methods for extracting family member (FM) information from text and coreference resolution techniques. We evaluated different transfer learning strategies to improve the annotation of diseases. We provided a thorough error analysis of the contributing factors that affect such information extraction systems. Results Our experiments showed that the combination of domain-adaptive pretraining and intermediate-task pretraining achieved an F1 score of 81.63% for the extraction of diseases and FMs from notes when it was tested on a public shared task data set from the National Natural Language Processing Clinical Challenges (N2C2), providing a statistically significant improvement over the baseline (P<.001). In comparison, in the 2019 N2C2/Open Health Natural Language Processing Shared Task, the median F1 score of all 17 participating teams was 76.59%. Conclusions Our approach, which leverages a state-of-the-art named entity recognition model for disease mention detection coupled with a hybrid method for FM mention detection, achieved an effectiveness that was close to that of the top 3 systems participating in the 2019 N2C2 FH extraction challenge, with only the top system convincingly outperforming our approach in terms of precision.
16

Gururaj, C., and Satish Tunga. "AI Based Feature Extraction Through Content Based Image Retrieval." Journal of Computational and Theoretical Nanoscience 17, no. 9 (July 1, 2020): 4050–54. http://dx.doi.org/10.1166/jctn.2020.9018.

Abstract:
Medical images are increasingly being used within healthcare for diagnosis, planning treatment, guiding treatment and monitoring disease progression. In reality, medical imaging data predominantly exhibit vague, missing, uncertain, conflicting, redundant and distorted information, and the data have a strongly heterogeneous character. The proposed approach can be used to achieve accuracy by using artificial intelligence techniques, wherein the disease level is identified by comparing it with the artificial intelligence data. The twofold merit of this system is that it provides better accuracy and also determines all the possibilities of spreading of the disease, including its various stages. This research work also presents new automated strategies for the segmentation and classification of medical images using computational intelligence, i.e., soft computing techniques, data fusion and specific domain knowledge. Promising outcomes demonstrate the superiority of the soft computing and knowledge-based approach over the best conventional systems in terms of segmentation errors. The classification of different structures is made by executing rules acquired both from the domain literature and from medical experts.
17

Hsu, William. "Strategies for Managing Timbre and Interaction in Automatic Improvisation Systems." Leonardo Music Journal 20 (December 2010): 33–39. http://dx.doi.org/10.1162/lmj_a_00010.

Abstract:
Earlier interactive improvisation systems have mostly worked with note-level musical events such as pitch, loudness and duration. Timbre is an integral component of the musical language of many improvisers; some recent systems use timbral information in a variety of ways to enhance interactivity. This article describes the timbre-aware ARHS improvisation system, designed in collaboration with saxophonist John Butcher, in the context of recent improvisation systems that incorporate timbral information. Common practices in audio feature extraction, performance state characterization and management, response synthesis and control of improvising agents are summarized and compared.
18

Hahn, Udo, and Michel Oleynik. "Medical Information Extraction in the Age of Deep Learning." Yearbook of Medical Informatics 29, no. 01 (August 2020): 208–20. http://dx.doi.org/10.1055/s-0040-1702001.

Abstract:
Objectives: We survey recent developments in medical Information Extraction (IE) as reported in the literature from the past three years. Our focus is on the fundamental methodological paradigm shift from standard Machine Learning (ML) techniques to Deep Neural Networks (DNNs). We describe applications of this new paradigm concentrating on two basic IE tasks, named entity recognition and relation extraction, for two selected semantic classes—diseases and drugs (or medications)—and relations between them. Methods: For the time period from 2017 to early 2020, we searched for relevant publications from three major scientific communities: medicine and medical informatics, natural language processing, as well as neural networks and artificial intelligence. Results: In the past decade, the field of Natural Language Processing (NLP) has undergone a profound methodological shift from symbolic to distributed representations based on the paradigm of Deep Learning (DL). Meanwhile, this trend is, although with some delay, also reflected in the medical NLP community. In the reporting period, overwhelming experimental evidence has been gathered, as illustrated in this survey for medical IE, that DL-based approaches outperform non-DL ones by often large margins. Still, small-sized and access-limited corpora create intrinsic problems for data-greedy DL as do special linguistic phenomena of medical sublanguages that have to be overcome by adaptive learning strategies. Conclusions: The paradigm shift from (feature-engineered) ML to DNNs changes the fundamental methodological rules of the game for medical NLP. This change is by no means restricted to medical IE but should also deeply influence other areas of medical informatics, either NLP- or non-NLP-based.
19

Fox, Julianne, David Merwin, Roger Marsh, George McConkie, and Arthur Kramer. "Information Extraction during Instrument Flight: An Evaluation of the Validity of the Eye-Mind Hypothesis." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 40, no. 2 (October 1996): 77–81. http://dx.doi.org/10.1177/154193129604000215.

Abstract:
A study was performed to determine the extent to which flight-relevant information on instruments peripheral to fixation is extracted and used during fixed-wing instrument flight. Twenty student and twenty instructor pilots flew a series of missions in a fixed-wing flight simulator which was interfaced with an eye-tracker. In one mission flight-relevant information was removed from instruments peripheral to fixation while in the other mission peripheral information was intact. Pilots' performance was degraded and eye scan strategies were modified when peripheral information was removed. Furthermore, in several situations instructor pilots' performance was more adversely influenced by the removal of peripheral information than was student pilots' performance. The data are discussed in terms of attentional strategies during flight.
20

Monteiro, Juliana Cristina dos Santos, and Solina Richter. "The process of developing a content analysis study to evaluate the quality of breastfeeding information on the Internet-based media." Methodological Innovations 12, no. 2 (May 2019): 205979911986328. http://dx.doi.org/10.1177/2059799119863286.

Abstract:
The Internet offers a powerful network of information on breastfeeding that is used by doctors, patients, and scientists. The objective of this study is to describe the process of development of a data extraction tool to evaluate the content and quality of breastfeeding information on the Internet. Using a descriptive study method, we examined Internet pages to determine which variables needed to be measured in order to develop the data extraction tool. A purposive sampling of websites was selected to pilot test this tool. The developed data extraction tool has a descriptive structure to characterize websites and text pages. Using the developed tool, we can assess whether the information on text pages is supportive of breastfeeding and whether other strategies that protect breastfeeding are followed. The developed data extraction tool is a useful instrument that can assist researchers in evaluating the quality of information posted on the Internet related to breastfeeding.
21

Dogon-yaro, M. A., P. Kumar, A. Abdul Rahman, and G. Buyuksalih. "EXTRACTION OF URBAN TREES FROM INTEGRATED AIRBORNE BASED DIGITAL IMAGE AND LIDAR POINT CLOUD DATASETS - INITIAL RESULTS." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W1 (October 26, 2016): 81–88. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w1-81-2016.

Abstract:
Timely and accurate acquisition of information on the condition and structural changes of urban trees serves as a tool for decision makers to better appreciate urban ecosystems and their numerous values, which are critical to building strategies for sustainable development. The conventional techniques used for extracting tree features include ground surveying and interpretation of aerial photography. However, these techniques are associated with constraints, such as labour-intensive field work, high financial requirements, and the influence of weather conditions and topographical cover, which can be overcome by means of integrated airborne-based LiDAR and very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne-based LiDAR and multispectral digital image datasets over the city of Istanbul, Turkey. The scheme includes detection and extraction of shadow-free vegetation features based on the spectral properties of digital images, using shadow index and NDVI techniques, and automated extraction of 3D information about vegetation features from the integrated processing of the shadow-free vegetation image and LiDAR point cloud datasets. The performance of the developed algorithms shows promising results as an automated and cost-effective approach to estimating and delineating 3D information of urban trees. The research also shows that integrated datasets are a suitable technology and a viable source of information for city managers to use in urban tree management.
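For reference, the NDVI used in the shadow-free vegetation detection step is the standard normalized ratio of near-infrared and red reflectance:

```latex
\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{Red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{Red}}}
```

Values close to +1 indicate dense, healthy vegetation, while bare surfaces and water fall near or below zero.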
22

Vanegas, Jorge A., Sérgio Matos, Fabio González, and José L. Oliveira. "An Overview of Biomolecular Event Extraction from Scientific Documents." Computational and Mathematical Methods in Medicine 2015 (2015): 1–19. http://dx.doi.org/10.1155/2015/571381.

Abstract:
This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.
23

Li, Zhi-Ming, Wen-Juan Li, and Jun Wang. "Self-Adapting Patch Strategies for Face Recognition." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 02 (June 13, 2019): 2056002. http://dx.doi.org/10.1142/s0218001420560029.

Abstract:
In this paper, we propose two self-adapting patch strategies, which are obtained by employing the integral projection technique on images' edge images, while the edge images are recovered by the two-dimensional discrete wavelet transform. The patch strategies have the advantage of considering each image's unique properties and maintaining the integrity of some particular local information. Combining the self-adapting patch strategies with local binary pattern feature extraction and a classifier based on the forward and backward greedy algorithms under a strong sparsity constraint, we propose two new face recognition methods. Experiments are run on the Georgia Tech, LFW and AR face databases. The obtained numerical results show that the new methods clearly outperform some related patch-based methods.
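The local binary pattern step can be sketched with scikit-image; the uniform-LBP settings and the random patch below are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
patch = rng.integers(0, 256, size=(32, 32)).astype(float)  # one face patch

P, R = 8, 1                                    # 8 neighbours at radius 1
codes = local_binary_pattern(patch, P, R, method="uniform")
hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
print(hist)  # the per-patch LBP histogram used as a feature vector
```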
24

Zerhouni, Mourad, and Sidi Mohamed Benslimane. "Large-Scale Ontology Alignment- An Extraction Based Method to Support Information System Interoperability." International Journal of Strategic Information Technology and Applications 10, no. 2 (April 2019): 59–84. http://dx.doi.org/10.4018/ijsita.2019040104.

Abstract:
Ontology alignment is an important way of establishing interoperability between Semantic Web applications that use different but related ontologies. Ontology alignment is the process of identifying semantically equivalent entities from multiple ontologies. This is not always straightforward, because technical constraints such as data volume and execution time are determining factors in the choice of an alignment algorithm. Nowadays, partitioning and modularization are the two main strategies for breaking down large ontologies into blocks or ontology modules, respectively, to align ontologies. This article proposes ONTEM, an effective alignment method for large-scale ontologies based on ontology entity extraction. The article conducts a comprehensive evaluation using the datasets of the OAEI 2018 campaign. The obtained results are promising, and they reveal that ONTEM is one of the most effective systems.
25

Zhu, Yu, Shun Liang Mei, and Jian Le. "Co-Processing Strategies for Feature Detecting in Smart Video Sensor." Applied Mechanics and Materials 610 (August 2014): 320–24. http://dx.doi.org/10.4028/www.scientific.net/amm.610.320.

Abstract:
To assist people who are blind or visually impaired with navigation, a vision system built around the DM3730 as its core is developed, which can provide accurate information regarding the environment surrounding the patients. As one part of the project, the feature detection module must be suitable for an embedded implementation, so hardware parallelism and algorithmic modifications are needed. Harris detection is chosen as the main algorithm of this module. The experimental results show that the precision and speed of feature extraction are both better than those of the PC platform.
26

Klampfl, Stefan, Robert Legenstein, and Wolfgang Maass. "Spiking Neurons Can Learn to Solve Information Bottleneck Problems and Extract Independent Components." Neural Computation 21, no. 4 (April 2009): 911–59. http://dx.doi.org/10.1162/neco.2008.01-07-432.

Abstract:
Independent component analysis (or blind source separation) is assumed to be an essential component of sensory processing in the brain and could provide a less redundant representation about the external world. Another powerful processing strategy is the optimization of internal representations according to the information bottleneck method. This method would allow extracting preferentially those components from high-dimensional sensory input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. However, there exists a lack of models that could explain how spiking neurons could learn to execute either of these two processing strategies. We show in this article how stochastically spiking neurons with refractoriness could in principle learn in an unsupervised manner to carry out both information bottleneck optimization and the extraction of independent components. We derive suitable learning rules, which extend the well-known BCM rule, from abstract information optimization principles. These rules will simultaneously keep the firing rate of the neuron within a biologically realistic range.
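For context, the classical BCM rule that the derived learning rules extend takes the standard form below; the paper's extensions, derived from information optimization principles, add terms beyond this and are not reproduced here:

```latex
\frac{dw_j}{dt} = \eta \, x_j \, y \, (y - \theta), \qquad \theta = \langle y^2 \rangle
```

A synapse with presynaptic activity x_j is potentiated when the postsynaptic rate y exceeds the sliding threshold θ and depressed otherwise, which keeps the firing rate bounded.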
27

Wang, Wan, Lee Ann Kahlor, Won-Ki Moon, and Hilary Clement Olson. "Person, Place, or Thing: Individual, Community, and Risk Information Seeking." Science Communication 43, no. 3 (January 15, 2021): 307–35. http://dx.doi.org/10.1177/1075547020986805.

Abstract:
This study focuses on the relationship between the community and the environment to explore (1) how community attachment affects residents’ risk perceptions and risk-coping strategies and (2) how risk knowledge is influenced by community-level psychological factors and, in turn, affects the decision to seek risk information. To find answers, 438 Texans were randomly surveyed on the topic of seismic activity induced by nearby natural gas extraction activities. The findings suggest that risk knowledge and risk information seeking intent are related to lower community attachment. Insights and implications related to the study have been provided for communication practitioners.
28

Tresilian, James R. "Four Questions of Time to Contact: A Critical Examination of Research on Interceptive Timing." Perception 22, no. 6 (June 1993): 653–80. http://dx.doi.org/10.1068/p220653.

Abstract:
Four questions concerning the use and perception of time to contact, tc, are identified. (i) Is tc information used in the timing of interceptive actions? (ii) If so, what control strategies are used? (iii) What are the perceptual sources of tc information and which of them do people use? (iv) How is the information extracted by the perceptual systems? Research relevant to these questions is reviewed and analysed. In connection with question (i), theoretical work on the special case of catching a moving object is analysed. It is concluded that treatments of catching which involve the use of tc information provide the best account of timing. In connection with question (ii), two types of control strategy suggested in the literature are identified: an intermittent strategy and a continuous strategy. Evidence for a continuous strategy is reconsidered and shown to be at least as well if not better accounted for by an intermittent strategy. Other empirical evidence for intermittent control is also discussed. In connection with question (iii) a simple unifying method is outlined with which all tc information so far presented in the literature can be derived, and examples are given. The viability of various types of information as sources of tc is examined by considering the errors which would result from their use. Finally, in connection with question (iv) the role of ‘looming detectors’ in the extraction of tc information is considered. These are frequently proposed as mechanisms for extracting the tc information provided by Lee's optic variable, tau. The analysis provided indicates that, despite the existence of a well-known and popular theory, due mainly to Lee, about how interceptive actions are timed, very little is actually known about perceptual timing. It is not yet certain whether tc information is used in interceptive timing tasks, what kinds of control strategies are involved, what sources of information people use to time their actions, or what perceptual processing is involved in the extraction of tc information.
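Lee's optic variable tau, referred to above, estimates time to contact directly from optical expansion: for an approaching object subtending visual angle θ(t),

```latex
t_c \approx \tau(t) = \frac{\theta(t)}{\mathrm{d}\theta(t)/\mathrm{d}t}
```

The estimate is exact under constant closing velocity and requires no knowledge of the object's size or distance, which is what made tau-based accounts of interceptive timing attractive.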
29

Ghule, Sayalee. "Log File Data Extraction or Mining." International Journal for Research in Applied Science and Engineering Technology 9, no. VI (June 30, 2021): 4802–6. http://dx.doi.org/10.22214/ijraset.2021.35833.

Abstract:
Log files contain data such as client name, IP address, time stamp, access request, number of bytes transferred, result status, referrer URL, and user agent. The log files are maintained by web servers, and analysing them gives a clear idea about the user. The World Wide Web is a huge repository of web pages that provides Internet users with a wealth of information. With the growth in the number and complexity of websites, the size of the web has become massively wide. Web usage mining is a branch of web mining that involves the application of mining techniques to web server logs in order to extract the behaviour of users. Log files contain basic information about the execution of a system. This information is often used for debugging, operational profiling, finding anomalies, detecting security threats, and measuring performance.
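The fields listed correspond to the widely used Apache/NCSA combined log format, which can be parsed with a single regular expression; the sample line below is fabricated for illustration:

```python
import re

LOG_PATTERN = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('203.0.113.7 - alice [10/Oct/2021:13:55:36 +0000] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"http://example.com/start" "Mozilla/5.0"')

match = LOG_PATTERN.match(line)
if match:
    print(match.groupdict())  # dict with the eight named fields above
```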
30

S, Santhosh Kumar, Vishnu Vardhan S, Wasim Jaffar M, Sultan Saleem A, and Sharmasth Vali Y. "Social Communicative Extraction Analysis." International Research Journal of Multidisciplinary Technovation 2, no. 4 (September 26, 2020): 4–10. http://dx.doi.org/10.34256/irjmt2042.

Abstract:
The identification of social media communities has recently been of significant concern, since users participating in such communities can contribute to viral marketing campaigns. In this work we focus on users' communication, considering personality as a key characteristic for identifying communicative networks, i.e., networks with high information flows. We describe the Twitter Personality based Communicative Communities Extraction (T-PCCE) system, which identifies the most communicative communities in a Twitter network graph while considering users' personality. We then expand existing approaches to user personality extraction by aggregating data that represent several aspects of user behaviour, using machine learning techniques. We use an existing modularity-based community detection algorithm and extend it by inserting a post-processing step that eliminates graph edges based on users' personality. The effectiveness of our approach is demonstrated by sampling the Twitter graph and comparing the communication strength of the extracted communities with and without considering the personality factor. We define several metrics to quantify the strength of communication within each community. Our algorithmic framework and the subsequent implementation employ cloud infrastructure and the MapReduce programming environment. Our results show that the T-PCCE system creates the most communicative communities.
31

Han, Kyoung-Soo, Young-In Song, Sang-Bum Kim, and Hae-Chang Rim. "Answer extraction and ranking strategies for definitional question answering using linguistic features and definition terminology." Information Processing & Management 43, no. 2 (March 2007): 353–64. http://dx.doi.org/10.1016/j.ipm.2006.07.010.

32

Trappey, Amy J. C., Charles V. Trappey, and Ai-Che Chang. "Intelligent Extraction of a Knowledge Ontology From Global Patents." International Journal on Semantic Web and Information Systems 16, no. 4 (October 2020): 61–80. http://dx.doi.org/10.4018/ijswis.2020100104.

Abstract:
The growth of global patents increased over the last decade as enterprises and inventors sought greater protection of their intellectual property (IP) rights. Global patents represent state-of-the-art knowledge for given domains. This research develops a hierarchical Latent Dirichlet Allocation (LDA)-based approach as a computationally intelligent method to discover topics and form a top-down ontology, a semantic schema representing the collective patent knowledge. To validate the knowledge extraction, 1,546 smart retailing patents collected from the Derwent Innovation platform from 2011 to 2016 are used to build the domain ontology schema. The patent set focuses on in-use, globally established, and non-disputed IP covering payment, user experience, and information integration for smart retailing. The clustering and LDA-based ontology system automatically builds the knowledge map, which identifies technology trends and technology gaps, enabling the development of competitive R&D and management strategies.
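The LDA building block behind such a pipeline can be sketched with an off-the-shelf implementation; the four-line toy corpus and parameters below are illustrative, not the paper's 1,546-patent dataset or its hierarchical extension:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "mobile payment authentication for retail checkout",
    "sensor based shelf inventory tracking in stores",
    "payment token security for contactless transactions",
    "customer experience analytics from in-store sensors",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")  # candidate concepts for the ontology schema
```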
33

Marino, Simeone, and Eberhard O. Voit. "An Automated Procedure for the Extraction of Metabolic Network Information from Time Series Data." Journal of Bioinformatics and Computational Biology 04, no. 03 (June 2006): 665–91. http://dx.doi.org/10.1142/s0219720006002259.

Abstract:
Novel high-throughput measurement techniques in vivo are beginning to produce dense high-quality time series which can be used to investigate the structure and regulation of biochemical networks. We propose an automated information extraction procedure which takes advantage of the unique S-system structure and supports model building from time traces, curve fitting, model selection, and structure identification based on parameter estimation. The procedure comprises three modules: model Generation, parameter estimation or model Fitting, and model Selection (the GFS algorithm). The GFS algorithm has been implemented in MATLAB and returns a list of candidate S-systems which adequately explain the data and guides the search to the most plausible model for the time series under study. By combining two strategies (namely decoupling and limiting connectivity) with methods of data smoothing, the proposed algorithm is scalable up to realistic situations of moderate size. We illustrate the proposed methodology with a didactic example.
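For reference, the canonical S-system form that the GFS procedure exploits gives each metabolite pool X_i one power-law production term and one power-law degradation term:

```latex
\frac{dX_i}{dt} = \alpha_i \prod_{j=1}^{n} X_j^{g_{ij}} - \beta_i \prod_{j=1}^{n} X_j^{h_{ij}}, \qquad i = 1, \dots, n
```

Structure identification thus reduces to parameter estimation: a kinetic order g_ij or h_ij estimated as zero removes the corresponding interaction, which is how network structure is read off the fitted model.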
34

Souissi, Nessim. "The Implied Risk Neutral Density Dynamics: Evidence from the S&P TSX 60 Index." Journal of Applied Mathematics 2017 (2017): 1–10. http://dx.doi.org/10.1155/2017/3156250.

Abstract:
The risk neutral density is an important tool for analyzing the dynamics of financial markets and traders' attitudes and reactions to shocks already experienced by financial markets as well as potential ones. In this paper, we present a new method for the extraction of information content from option prices. By eliminating the bias caused by the daily variation of contract maturity through a completely nonparametric technique based on kernel regression, we can compare the evolution of the risk neutral density and extract time-continuous indicators that detect the evolution of traders' attitudes, risk perception, and belief homogeneity. This method is useful for developing trading strategies and monetary policies.
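The underlying identity for recovering a risk neutral density q from option prices is the standard Breeden-Litzenberger relation; the kernel-regression step described above can be read as producing the smooth call-price curve C(K, τ) that this relation requires:

```latex
q(K) = e^{r\tau} \, \frac{\partial^2 C(K,\tau)}{\partial K^2}
```

Here K is the strike, τ the time to maturity, and r the risk-free rate.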
35

L'Homme, Marie-Claude, Loubna Benali, Claudine Bertrand, and Patricia Lauduique. "Definition of an evaluation grid for term-extraction software." Terminology 3, no. 2 (January 1, 1996): 291–312. http://dx.doi.org/10.1075/term.3.2.04hom.

Abstract:
This paper examines evaluation criteria for term-extraction software. These tools have gained popularity over the past few years, but they come in all sorts of structures and their performance cannot be compared (qualitatively) to that of humans performing the same task. The lists obtained after an automated extraction must always be filtered by users. The evaluation form proposed here consists of a certain number of preprocessing criteria (such as the language analyzed by the software, identification strategies used, etc.) and a postprocessing criterion (performance of software) that users must take into account before they start using such systems. Each criterion is defined and illustrated with examples. Commercial tools have also been tested.
36

Vivaldi, Jorge, and Horacio Rodríguez. "Evaluation of terms and term extraction systems." Terminology 13, no. 2 (November 19, 2007): 225–48. http://dx.doi.org/10.1075/term.13.2.06viv.

Abstract:
Term extraction may be defined as a text mining activity whose main purpose is to obtain all the terms included in a text of a given domain. Since the eighties, and mainly due to rapid scientific advances as well as the evolution of communication systems, there has been a growing interest in obtaining the terms found in written documents. A number of techniques and strategies have been proposed for satisfying this requirement. At present it seems that term extraction has reached a maturity stage. Nevertheless, many of the systems proposed fail to qualitatively present their results, and almost every system evaluates its abilities in an ad hoc manner, if at all. Often, the authors do not explain their evaluation methodology; therefore comparisons between different implementations are difficult to draw. In this paper, we review the state of the art of term extraction systems evaluation in the framework of natural language systems evaluation. The main approaches are presented, with a focus on their limitations. As an instantiation of some ideas for overcoming these limitations, the evaluation framework is applied to YATE, a hybrid term extractor.
37

Buongiorno, Domenico, Giacomo Donato Cascarano, Cristian Camardella, Irio De Feudis, Antonio Frisoli, and Vitoantonio Bevilacqua. "Task-Oriented Muscle Synergy Extraction Using An Autoencoder-Based Neural Model." Information 11, no. 4 (April 17, 2020): 219. http://dx.doi.org/10.3390/info11040219.

Abstract:
The growing interest in wearable robots opens the challenge for developing intuitive and natural control strategies. Among several human–machine interaction approaches, myoelectric control consists of decoding the motor intention from muscular activity (or EMG signals) with the aim of driving prosthetic or assistive robotic devices accordingly, thus establishing an intimate human–machine connection. In this scenario, bio-inspired approaches, e.g., synergy-based controllers, are revealed to be the most robust. However, synergy-based myo-controllers already proposed in the literature consider muscle patterns that are computed considering only the total variance reconstruction rate of the EMG signals, without taking into account the performance of the controller in the task (or application) space. In this work, extending a previous study, the authors presented an autoencoder-based neural model able to extract muscle synergies for motion intention detection while optimizing the task performance in terms of force/moment reconstruction. The proposed neural topology has been validated with EMG signals acquired from the main upper limb muscles during planar isometric reaching tasks performed in a virtual environment while wearing an exoskeleton. The presented model has been compared with the non-negative matrix factorization algorithm (i.e., the most used approach in the literature) in terms of muscle synergy extraction quality, and with three techniques already presented in the literature in terms of goodness of shoulder and elbow predicted moments. The results of the experimental comparisons have shown that the proposed model outperforms the state-of-the-art synergy-based joint moment estimators at the expense of the quality of the EMG signal reconstruction. These findings demonstrate that a trade-off can be achieved between the capability of the extracted muscle synergies to better describe the EMG signal variability and the task performance in terms of force reconstruction. The results of this study might open new horizons on synergy extraction methodologies and optimized synergy-based myo-controllers and, perhaps, reveal useful hints about their origin.
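The non-negative matrix factorization baseline mentioned above can be sketched in a few lines; the synthetic EMG envelopes are illustrative, and the autoencoder model itself is not reproduced here:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
emg = np.abs(rng.normal(size=(8, 500)))  # envelopes: 8 muscles x 500 samples

model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(emg)             # 8 x 3 matrix of muscle synergies
H = model.components_                    # 3 x 500 synergy activation signals
print(W.shape, H.shape)                  # reconstruction: emg ~ W @ H
```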
38

Drouin, Nicolas, Serge Rudaz, and Julie Schappler. "Sample preparation for polar metabolites in bioanalysis." Analyst 143, no. 1 (2018): 16–20. http://dx.doi.org/10.1039/c7an01333g.

Abstract:
Sample preparation is a primary step of any bioanalytical workflow, especially in metabolomics where maximum information has to be obtained without spoiling the analytical instrument. The sample extraction of polar metabolites is still challenging but strategies exist to enable the phase transfer of hydrophilic metabolites from the biological phase to a clean interference-free phase.
39

Zhang, Ruichuan, and Nora El-Gohary. "A deep neural network-based method for deep information extraction using transfer learning strategies to support automated compliance checking." Automation in Construction 132 (December 2021): 103834. http://dx.doi.org/10.1016/j.autcon.2021.103834.

40

Cheng, Yi-Ting, Ankit Patel, Chenglu Wen, Darcy Bullock, and Ayman Habib. "Intensity Thresholding and Deep Learning Based Lane Marking Extraction and Lane Width Estimation from Mobile Light Detection and Ranging (LiDAR) Point Clouds." Remote Sensing 12, no. 9 (April 27, 2020): 1379. http://dx.doi.org/10.3390/rs12091379.

Abstract:
Lane markings are one of the essential elements of road information, which is useful for a wide range of transportation applications. Several studies have been conducted to extract lane markings through intensity thresholding of Light Detection and Ranging (LiDAR) point clouds acquired by mobile mapping systems (MMS). This paper proposes an intensity thresholding strategy using unsupervised intensity normalization and a deep learning strategy using automatically labeled training data for lane marking extraction. For comparative evaluation, original intensity thresholding and deep learning using manually established labels strategies are also implemented. A pavement surface-based assessment of lane marking extraction by the four strategies is conducted in asphalt and concrete pavement areas covered by MMS equipped with multiple LiDAR scanners. Additionally, the extracted lane markings are used for lane width estimation and reporting lane marking gaps along various highways. The normalized intensity thresholding leads to a better lane marking extraction with an F1-score of 78.9% in comparison to the original intensity thresholding with an F1-score of 72.3%. On the other hand, the deep learning model trained with automatically generated labels achieves a higher F1-score of 85.9% than the one trained on manually established labels with an F1-score of 75.1%. In concrete pavement area, the normalized intensity thresholding and both deep learning strategies obtain better lane marking extraction (i.e., lane markings along longer segments of the highway have been extracted) than the original intensity thresholding approach. For the lane width results, more estimates are observed, especially in areas with poor edge lane marking, using the two deep learning models when compared with the intensity thresholding strategies due to the higher recall rates for the former. The outcome of the proposed strategies is used to develop a framework for reporting lane marking gap regions, which can be subsequently visualized in RGB imagery to identify their cause.
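The normalized-intensity-thresholding idea can be sketched as below; the percentile normalization and cutoff are illustrative assumptions, not the calibrated procedure evaluated in the paper:

```python
import numpy as np

def lane_marking_mask(intensity: np.ndarray, cutoff: float = 0.8) -> np.ndarray:
    """Return True where a retro-reflective lane marking is likely."""
    lo, hi = np.percentile(intensity, [2, 98])           # robust intensity range
    normalized = np.clip((intensity - lo) / (hi - lo), 0.0, 1.0)
    return normalized > cutoff

rng = np.random.default_rng(0)
intensity = rng.gamma(shape=2.0, scale=10.0, size=10_000)  # fake LiDAR returns
print(lane_marking_mask(intensity).sum(), "candidate points")
```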
41

Ang, Teik-Hun, Kunlanan Kiatkittipong, Worapon Kiatkittipong, Siong-Chin Chua, Jun Wei Lim, Pau-Loke Show, Mohammed J. K. Bashir, and Yeek-Chia Ho. "Insight on Extraction and Characterisation of Biopolymers as the Green Coagulants for Microalgae Harvesting." Water 12, no. 5 (May 14, 2020): 1388. http://dx.doi.org/10.3390/w12051388.

Abstract:
This review presents the extraction, characterisation, applications and economic analysis of natural coagulants for separating pollutants and microalgae from water, a process known as microalgae harvesting. The promising future of microalgae as a next-generation energy source is reviewed, and the significant drawbacks of conventional microalgae harvesting using alum are evaluated. The performance of natural coagulants in microalgae harvesting is studied and proven to exceed that of alum. In addition, the details of each processing stage in the extraction of natural coagulants (plant, microbial and animal) are comprehensively discussed with justifications. This information could contribute to future exploration of novel natural coagulants by providing a description of optimised extraction steps for a number of natural coagulants. Besides, the characterisation of natural coagulants has garnered a great deal of attention, and strategies to enhance flocculating activity based on their characteristics are discussed. Several important characterisations have been tabulated in this review, such as physical aspects, including surface morphology and surface charge; chemical aspects, including molecular weight, functional groups and elemental properties; and thermal stability parameters, including thermogravimetric analysis and differential scanning calorimetry. Furthermore, various applications of natural coagulants in industries other than microalgae harvesting are revealed. A cost analysis of natural coagulant application in mass harvesting of microalgae allows its feasibility for industrial commercialisation to be evaluated. Lastly, potential new natural coagulants, which are yet to be exploited and applied, are listed as additional information for future study.
APA, Harvard, Vancouver, ISO, and other styles
42

Krestel, Ralf, Sabine Bergler, and René Witte. "Modeling human newspaper readers: The Fuzzy Believer approach." Natural Language Engineering 20, no. 2 (October 12, 2012): 261–88. http://dx.doi.org/10.1017/s1351324912000289.

Full text
Abstract:
The growing number of publicly available information sources makes it impossible for individuals to keep track of all the various opinions on one topic. The goal of our Fuzzy Believer system presented in this paper is to extract and analyze statements of opinion from newspaper articles. Beliefs are modeled using fuzzy set theory, applied after natural language processing-based information extraction. The Fuzzy Believer models a human agent, deciding which statements to believe or reject based on a range of configurable strategies.
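As a toy illustration of the configurable belief strategies described above (a sketch only, not the authors' implementation), the following Python snippet treats each extracted statement's support as a fuzzy membership degree and selects which statements to believe:

```python
def fuzzy_believe(statements, strategy="majority", threshold=0.5):
    """Toy sketch of fuzzy belief selection.

    statements: list of (text, support) pairs, where support in [0, 1]
    is a fuzzy membership degree, e.g. the fraction of sources
    asserting the statement. Strategy names are assumptions.
    """
    if strategy == "majority":
        # Believe statements supported by more than half the sources.
        return [text for text, mu in statements if mu > threshold]
    if strategy == "skeptical":
        # Believe only near-unanimous statements.
        return [text for text, mu in statements if mu > 0.9]
    raise ValueError(f"unknown strategy: {strategy}")

beliefs = fuzzy_believe(
    [("Company X will merge", 0.7), ("Company X denies merger", 0.3)],
    strategy="majority",
)
print(beliefs)  # ['Company X will merge']
```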
APA, Harvard, Vancouver, ISO, and other styles
43

Wissel, Tobias, and Ramaswamy Palaniappan. "Considerations on Strategies to Improve EOG Signal Analysis." International Journal of Artificial Life Research 2, no. 3 (July 2011): 6–21. http://dx.doi.org/10.4018/jalr.2011070102.

Full text
Abstract:
Electrooculogram (EOG) signals have been used in designing human-computer interfaces, though not as popularly as electroencephalogram (EEG) or electromyogram (EMG) signals. This paper explores several strategies for improving the analysis of EOG signals, comparing wavelet-based feature extraction with a parametric, frequency-based approach using an autoregressive (AR) model and with template matching as a time-based method. The results indicate that parametric AR modeling using the Burg method, which does not retain phase information, gives poor class separation. Conversely, projection onto the approximation space of the fourth level of a Haar wavelet decomposition yields feature sets that enhance class separation. Furthermore, this method produces a feature space of much lower dimensionality than template matching, making it far more efficient computationally. The paper also reports on an example application utilizing wavelet decomposition and Linear Discriminant Analysis (LDA) for classification, which was implemented and evaluated successfully. In this application, a virtual keyboard acts as the front end for user interactions.
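The feature extraction and classification pipeline the abstract describes can be sketched as below, assuming the PyWavelets and scikit-learn libraries; the epoch length, class count, and random data are placeholders, not the paper's experimental setup.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def haar_features(signal, level=4):
    # Keep only the level-4 Haar approximation coefficients as the
    # feature vector; this discards fine detail and sharply reduces
    # dimensionality (256 samples -> 16 coefficients).
    coeffs = pywt.wavedec(signal, "haar", level=level)
    return coeffs[0]

# Hypothetical EOG epochs: 40 trials of 256 samples, 4 gaze classes.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((40, 256))
y_train = rng.integers(0, 4, size=40)

features = np.array([haar_features(x) for x in X_train])
clf = LinearDiscriminantAnalysis().fit(features, y_train)
print(clf.predict(features[:5]))
```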
APA, Harvard, Vancouver, ISO, and other styles
44

Sosa-Hernández, Juan Eduardo, Zamantha Escobedo-Avellaneda, Hafiz M. N. Iqbal, and Jorge Welti-Chanes. "State-of-the-Art Extraction Methodologies for Bioactive Compounds from Algal Biome to Meet Bio-Economy Challenges and Opportunities." Molecules 23, no. 11 (November 12, 2018): 2953. http://dx.doi.org/10.3390/molecules23112953.

Full text
Abstract:
Over the years, significant research efforts have been made to extract bioactive compounds by applying different methodologies for various applications. For instance, the use of bioactive compounds in commercial sectors such as the biomedical, pharmaceutical, cosmeceutical, nutraceutical and chemical industries has created a need for suitable, standardized methods to extract these bioactive constituents in a sophisticated and cost-effective manner. In practice, conventional extraction methods suffer from numerous limitations, such as low efficacy, high energy cost and low yield, which urges the development of new state-of-the-art extraction methodologies. Thus, optimization, along with the integration of efficient pretreatment strategies followed by traditional extraction and purification processes, has been the primary goal of current research and development studies. Among different sources, the algal biome has been found to be a promising and feasible source from which to extract a broad spectrum of bioactive compounds with point-of-care application potential. As evident from the literature, algal bio-products include biofuels, lipids, polyunsaturated fatty acids, pigments, enzymes, polysaccharides and proteins. The recovery of products from algal biomass is a matter of constant development and progress. This review covers recent advances in extraction methodologies such as enzyme-assisted extraction (EAE), supercritical-fluid extraction (SFE), microwave-assisted extraction (MAE) and pressurized-liquid extraction (PLE), along with their working mechanisms for extracting bioactive compounds from algal sources to meet bio-economy challenges and opportunities. Particular focus is given to the design characteristics, performance evaluation and point-of-care applications of different bioactive compounds of microalgae. Previous and recent studies on the anticancer, antibacterial and antiviral potential of algal bioactive compounds are also discussed, with particular reference to the mechanisms and pathways underlying the effects of these active constituents. Towards the end, information is given on possible research gaps and future perspectives, followed by concluding remarks.
APA, Harvard, Vancouver, ISO, and other styles
45

Yang, Xi, Hansi Zhang, Xing He, Jiang Bian, and Yonghui Wu. "Extracting Family History of Patients From Clinical Narratives: Exploring an End-to-End Solution With Deep Learning Models." JMIR Medical Informatics 8, no. 12 (December 15, 2020): e22982. http://dx.doi.org/10.2196/22982.

Full text
Abstract:
Background Patients' family history (FH) is a critical risk factor associated with numerous diseases. However, FH information is not well captured in structured databases but is often documented in clinical narratives. Natural language processing (NLP) is the key technology for extracting patients' FH from clinical narratives. In 2019, the National NLP Clinical Challenge (n2c2) organized shared tasks to solicit NLP methods for FH information extraction. Objective This study presents our end-to-end FH extraction system developed during the 2019 n2c2 open shared task as well as the new transformer-based models that we developed after the challenge. We seek to develop a machine learning-based solution for FH information extraction without hand-crafted, task-specific rules. Methods We developed deep learning-based systems for FH concept extraction and relation identification. We explored deep learning models including long short-term memory-conditional random fields and bidirectional encoder representations from transformers (BERT), and developed ensemble models using a majority voting strategy. To further optimize performance, we systematically compared 3 different strategies for using BERT output representations for relation identification. Results Our system was among the top-ranked systems (3 out of 21) in the challenge. Our best system achieved micro-averaged F1 scores of 0.7944 and 0.6544 for concept extraction and relation identification, respectively. After the challenge, we further explored new transformer-based models and improved the performance on both subtasks to 0.8249 and 0.6775, respectively. For relation identification, our system achieved a performance comparable to the best system (0.6810) reported in the challenge. Conclusions This study demonstrated the feasibility of utilizing deep learning methods to extract FH information from clinical narratives.
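The majority voting strategy mentioned in the Methods can be illustrated with a short sketch; the BIO labels and tie-break rule here are assumptions for illustration, not the authors' exact scheme.

```python
from collections import Counter

def majority_vote(predictions):
    """Minimal sketch of a majority-voting ensemble over token labels.

    predictions: list of label sequences, one per model, e.g. three
    BIO taggings of the same sentence. On a tie, fall back to the
    first model's label (one simple, assumed tie-break rule).
    """
    ensembled = []
    for labels in zip(*predictions):
        top, freq = Counter(labels).most_common(1)[0]
        ensembled.append(top if freq > 1 else labels[0])
    return ensembled

# Example: three hypothetical models tag "mother had breast cancer".
model_outputs = [
    ["B-FamilyMember", "O", "B-Observation", "I-Observation"],
    ["B-FamilyMember", "O", "B-Observation", "O"],
    ["O",              "O", "B-Observation", "I-Observation"],
]
print(majority_vote(model_outputs))
```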
APA, Harvard, Vancouver, ISO, and other styles
46

Morán, Antonio, Serafín Alonso, Daniel Pérez, Miguel A. Prada, Juan José Fuertes, and Manuel Domínguez. "Feature Extraction from Building Submetering Networks Using Deep Learning." Sensors 20, no. 13 (June 30, 2020): 3665. http://dx.doi.org/10.3390/s20133665.

Full text
Abstract:
The understanding of the nature and structure of energy use in large buildings is vital for defining novel energy and climate change strategies. Advances in metering technology and low-cost devices make it possible to form a submetering network, which measures the main supply and other intermediate points, providing information about the behavior of different areas. However, an analysis by means of classical techniques can lead to wrong conclusions if the load is not balanced. This paper proposes the use of a deep convolutional autoencoder to reconstruct the whole consumption measured by the submeters from the learnt features, in order to analyze the behavior of different building areas. Inspecting the weights and the latent space of the autoencoder provides precise details of the influence of each area on the whole building consumption and its dependence on external factors such as temperature. A submetering network was deployed in the León University Hospital building to test the proposed methodology. The results show different correlations between environmental variables and building areas and indicate that areas can be grouped by their role in building performance. Furthermore, this approach provides discernible results even in the presence of large differences between the consumption ranges of the different areas, unlike conventional approaches, where the influence of smaller areas is usually hidden.
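A minimal PyTorch sketch of such a 1-D convolutional autoencoder follows; the layer sizes, window length, and submeter count are assumptions for illustration and do not reproduce the paper's architecture.

```python
import torch
from torch import nn

class SubmeterAutoencoder(nn.Module):
    """Sketch of a 1-D convolutional autoencoder that reconstructs the
    main-supply consumption from submeter time series (one channel per
    building area)."""

    def __init__(self, n_submeters: int, latent_channels: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_submeters, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, latent_channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_channels, 16, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(16, 1, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        latent = self.encoder(x)     # inspect this to study area influence
        return self.decoder(latent)  # reconstructed whole consumption

# Hypothetical batch: 8 windows, 12 submeters, 96 readings (15-min day).
model = SubmeterAutoencoder(n_submeters=12)
x = torch.randn(8, 12, 96)
print(model(x).shape)  # torch.Size([8, 1, 96])
```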
APA, Harvard, Vancouver, ISO, and other styles
47

Hatefi Hesari, Shahram, Mohammad Aminul Haque, and Nicole McFarlane. "A Comprehensive Survey of Readout Strategies for SiPMs Used in Nuclear Imaging Systems." Photonics 8, no. 7 (July 7, 2021): 266. http://dx.doi.org/10.3390/photonics8070266.

Full text
Abstract:
Silicon photomultipliers (SiPMs) offer advantages such as lower relative cost, smaller size, and lower operating voltages compared to photomultiplier tubes. A SiPM's readout circuit topology can significantly affect the characteristics of an imaging array. In nuclear imaging and detection, energy, timing, and position are the primary characteristics of interest. Nuclear imaging has applications in the medical, astronomy, and high-energy physics fields, making SiPMs an active research area. This work focuses on the circuit topologies required for nuclear imaging. We surveyed readout strategies, including the front-end preamplification topology choices of transimpedance amplifier, charge amplifier, and voltage amplifier. In addition, we review circuit topologies suitable for extracting energy, timing, and position information, together with a summary of performance limitations and current challenges.
APA, Harvard, Vancouver, ISO, and other styles
48

AL-Taie, Rana Riad K., Basma Jumaa Saleh, Ahmed Yousif Falih Saedi, and Lamees Abdalhasan Salman. "Analysis of WEKA data mining algorithms Bayes net, random forest, MLP and SMO for heart disease prediction system: A case study in Iraq." International Journal of Electrical and Computer Engineering (IJECE) 11, no. 6 (December 1, 2021): 5229. http://dx.doi.org/10.11591/ijece.v11i6.pp5229-5239.

Full text
Abstract:
Data mining is defined as a search through large amounts of data for valuable information. Association rules, grouping, clustering, prediction, and sequence modeling are among the most general and essential strategies for data extraction. Data processing plays a major role in disease detection in the healthcare industry. A variety of examinations may be required to diagnose a patient; using data mining strategies, however, the number of examinations can be decreased, which is crucial in terms of both time and results. Heart disease is a life-threatening disorder, and the hidden information in the abundant health data now available is important for decision making in the healthcare industry. For the prediction of cardiovascular problems, the WEKA 3.8.3 toolkit is used in this analysis with data mining algorithms: sequential minimal optimization (SMO), multilayer perceptron (MLP), random forest, and Bayes net. The reported results combine prediction accuracy, the receiver operating characteristic (ROC) curve, and the precision-recall curve (PRC) value. Bayes net (94.5%) and random forest (94%) show better performance than the sequential minimal optimization (SMO) and multilayer perceptron (MLP) methods.
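A rough scikit-learn analogue of this WEKA workflow is sketched below; the stand-in dataset and hyperparameters are assumptions, with SVC standing in for WEKA's SMO (an SVM trained by sequential minimal optimization) and Gaussian naive Bayes approximating Bayes net.

```python
from sklearn.datasets import load_breast_cancer  # stand-in medical dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "naive Bayes":     GaussianNB(),
    "random forest":   RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP":             make_pipeline(StandardScaler(),
                                     MLPClassifier(max_iter=1000, random_state=0)),
    "SVM (SMO-style)": make_pipeline(StandardScaler(), SVC()),
}
# 10-fold cross-validated ROC AUC, mirroring the accuracy/ROC comparison.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"{name:16s} mean ROC AUC = {scores.mean():.3f}")
```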
APA, Harvard, Vancouver, ISO, and other styles
49

Friesen, Deanna C., and Bailey Frid. "Predictors of Successful Reading Comprehension in Bilingual Adults: The Role of Reading Strategies and Language Proficiency." Languages 6, no. 1 (January 28, 2021): 18. http://dx.doi.org/10.3390/languages6010018.

Full text
Abstract:
The current study investigated the type of strategies that English–French bilingual adults utilize when reading in their dominant and non-dominant languages and which of these strategies are associated with reading comprehension success. Thirty-nine participants read short texts while reporting aloud what they were thinking as they read. Following each passage, readers answered three comprehension questions. Questions either required information found directly in the text (literal question) or required a necessary inference or an elaborative inference. Readers reported more necessary and elaborative inferences and referred to more background knowledge in their dominant language than in their non-dominant language. Engaging in both text analysis strategies and meaning extraction strategies predicted reading comprehension success in both languages, with differences observed depending on the type of question posed. Results are discussed with respect to how strategy use supports the development of text representations.
APA, Harvard, Vancouver, ISO, and other styles
50

Mao, Xuegang, Yueqing Deng, Liang Zhu, and Yao Yao. "Hierarchical Geographic Object-Based Vegetation Type Extraction Based on Multi-Source Remote Sensing Data." Forests 11, no. 12 (November 28, 2020): 1271. http://dx.doi.org/10.3390/f11121271.

Full text
Abstract:
Providing vegetation type information with an accurate surface distribution is one of the important tasks of remote sensing of the ecological environment. Many studies have explored ecosystem structure at specific spatial scales based on specific remote sensing data, but it is still rare to extract vegetation information at multiple landscape levels from a variety of remote sensing data. Based on Gaofen-1 satellite (GF-1) Wide-Field-View (WFV) data (16 m), Ziyuan-3 satellite (ZY-3) imagery and airborne LiDAR data, this study comparatively analyzed four levels of vegetation information using the geographic object-based image analysis (GEOBIA) method on a typical natural secondary forest in Northeast China. The four levels of vegetation information are vegetation/non-vegetation (L1), vegetation type (L2), forest type (L3), and canopy and canopy gap (L4). The results showed that the vegetation height and density provided by airborne LiDAR data extract vegetation features and categories more effectively than the spectral information provided by GF-1 and ZY-3 images. Only the 0.5 m LiDAR data could extract all four levels of vegetation information (L1-L4); from L1 to L4, the total classification accuracy decreased in order from 98% to 93%, 80% and 69%. Compared with the 2.1 m ZY-3 imagery, the total classification accuracies of L1, L2 and L3 extracted from 2.1 m LiDAR data increased by 3%, 17% and 43%, respectively. At the vegetation/non-vegetation level, the spatial resolution of the data plays the leading role, while at the vegetation type and forest type levels, the data type used becomes the main influencing factor. This study provides a reference for data selection and mapping strategies for hierarchical multi-scale vegetation type extraction.
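A toy sketch of such a hierarchical, height-driven classification is shown below, assuming a canopy height model and an NDVI raster as NumPy arrays; the thresholds and neighborhood size are illustrative guesses, not the paper's GEOBIA rules.

```python
import numpy as np
from scipy import ndimage

def classify_hierarchy(chm, ndvi):
    """Toy hierarchical vegetation classification from a canopy height
    model (chm, metres) and an NDVI raster (both 2-D arrays)."""
    labels = np.zeros(chm.shape, dtype=np.uint8)   # 0 = non-vegetation (L1)
    veg = ndvi > 0.3                               # L1: vegetation mask
    labels[veg] = 1                                # 1 = low vegetation (L2)
    # L3: forest objects, approximated by a high neighborhood mean height.
    forest = ndimage.uniform_filter(chm, size=15) > 5.0
    # L4: within forest, split canopy from canopy gaps by pixel height.
    labels[veg & forest & (chm >= 2.0)] = 2        # 2 = canopy
    labels[veg & forest & (chm < 2.0)] = 3         # 3 = canopy gap
    return labels

# Hypothetical rasters just to exercise the function.
rng = np.random.default_rng(0)
chm = rng.uniform(0.0, 20.0, size=(100, 100))
ndvi = rng.uniform(-0.2, 0.9, size=(100, 100))
print(np.bincount(classify_hierarchy(chm, ndvi).ravel(), minlength=4))
```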
APA, Harvard, Vancouver, ISO, and other styles