Journal articles on the topic "User-Driven Classification"

Follow this link to see other types of publications on the topic: User-Driven Classification.

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

See the top 50 journal articles for research on the topic "User-Driven Classification".

Next to each source in the list of references, there is an "Add to bibliography" button. Click on it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is present in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Todeschini, R., and E. Marengo. "Linear discriminant classification tree: A user-driven multicriteria classification method". Chemometrics and Intelligent Laboratory Systems 16, no. 1 (September 1992): 25–35. http://dx.doi.org/10.1016/0169-7439(92)80075-f.

Full text of the source
ABNT, Harvard, Vancouver, APA, and other styles
2

Arvor, Damien, Julie Betbeder, Felipe R. G. Daher, Tim Blossier, Renan Le Roux, Samuel Corgne, Thomas Corpetti, Vinicius de Freitas Silgueiro, and Carlos Antonio da Silva Junior. "Towards user-adaptive remote sensing: Knowledge-driven automatic classification of Sentinel-2 time series". Remote Sensing of Environment 264 (October 2021): 112615. http://dx.doi.org/10.1016/j.rse.2021.112615.

3

Bhukya, Raghuram. "Generalization Driven Fuzzy Classification Rules Extraction using OLAM Data Cubes". International Journal of Engineering and Computer Science 9, no. 2 (February 28, 2020): 24962–69. http://dx.doi.org/10.18535/ijecs/v9i2.4444.

Abstract:
This article presents a fuzzy classification-rule extraction model for online analytical mining (OLAM). Integrating data warehousing, online analytical processing (OLAP), and data mining into OLAM yields an efficient decision support system. Although associative classification has proven to be one of the most effective classification techniques, associative classification proposals for OLAM remain scarce. While most existing data cube models claim superiority over one another, fuzzy multidimensional data cubes have proven more intuitive from the user's perspective and handle data imprecision effectively. Considering these factors, we propose an associative classification model that performs classification over fuzzy data cubes. Our method aims to improve the accuracy and intuitiveness of the classification model using fuzzy concepts and hierarchical relations. We also propose a generalization-based criterion for ranking associative classification rules to improve classifier accuracy. The model's accuracy was tested on standard UCI databases.
4

Yamini, B., J. Sherine Glory, and S. Aravindkumar. "Intelligence Driven-Depression Identification of Facebook Users". Journal of Computational and Theoretical Nanoscience 17, no. 8 (August 1, 2020): 3770–75. http://dx.doi.org/10.1166/jctn.2020.9318.

Abstract:
Depression is a serious, widespread public health problem. Every year millions of people suffer from depression, and only a few receive adequate treatment. User interests, feelings, and daily routines can be gauged from Facebook posts and statuses, WhatsApp statuses and shared posts, the expressive words people use when speaking or posting, the emotional icons or pictures they post, and their browsing histories. Many researchers have shown that, used properly, User Generated Content helps determine a person's depression level; analyzing such content supports the prediction of depression (Marcus, M., et al., 2012. Depression: A Global Public Health Concern. WHO Dataset, pp. 6–8). Social media such as Facebook, Twitter, and Instagram carry valuable signals for detecting depression in individuals. The proposed work explores the potential use of Facebook to sense and identify major depressive disorder in individuals. The images and text an individual shares or posts play a vital role in identifying depression. Text-based analysis is performed on posted or shared textual content, while chromatic analysis is performed on images, combined with text-based analysis for images containing text. The emotion features and color features from these two analyses are used to identify an individual's depression level through psychological classification with a Support Vector Machine (SVM), whose performance is compared with a Naïve Bayes classifier. The findings and methods of the proposed work offer a road map for developing new methodologies to identify major depression levels in those who suffer and to guide healthcare agencies.
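The abstract above compares an SVM against a Naïve Bayes baseline on fused emotion and color features. As a rough, self-contained illustration of that baseline (the feature vectors and labels below are invented for the sketch, not the paper's data), a Gaussian Naïve Bayes classifier can be written in plain Python:

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class feature means and variances."""
    def fit(self, X, y):
        by_class = defaultdict(list)
        for xi, yi in zip(X, y):
            by_class[yi].append(xi)
        self.stats = {}
        n = len(y)
        for cls, rows in by_class.items():
            means = [sum(col) / len(rows) for col in zip(*rows)]
            varis = [max(sum((v - m) ** 2 for v in col) / len(rows), 1e-9)
                     for col, m in zip(zip(*rows), means)]
            self.stats[cls] = (math.log(len(rows) / n), means, varis)
        return self

    def predict(self, x):
        def loglik(cls):
            prior, means, varis = self.stats[cls]
            return prior + sum(-0.5 * math.log(2 * math.pi * v) - (xv - m) ** 2 / (2 * v)
                               for xv, m, v in zip(x, means, varis))
        return max(self.stats, key=loglik)

# Hypothetical fused feature vectors: [emotion score, mean hue] per post.
X = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.8], [0.2, 0.7]]
y = ["depressed", "depressed", "not", "not"]
model = GaussianNB().fit(X, y)
```

A new post's fused features are then classified with `model.predict([...])`; the paper's SVM would replace this classifier while keeping the same feature pipeline.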
5

Malik, Sadaf, Nadia Kanwal, Mamoona Naveed Asghar, Mohammad Ali A. Sadiq, Irfan Karamat, and Martin Fleury. "Data Driven Approach for Eye Disease Classification with Machine Learning". Applied Sciences 9, no. 14 (July 11, 2019): 2789. http://dx.doi.org/10.3390/app9142789.

Abstract:
Medical health systems have been concentrating on artificial intelligence techniques for speedy diagnosis. However, the recording of health data in a standard form still requires attention so that machine learning can be more accurate and reliable by considering multiple features. The aim of this study is to develop a general framework for recording diagnostic data in an international standard format to facilitate prediction of disease diagnosis based on symptoms using machine learning algorithms. Efforts were made to ensure error-free data entry by developing a user-friendly interface. Furthermore, multiple machine learning algorithms including Decision Tree, Random Forest, Naive Bayes and Neural Network algorithms were used to analyze patient data based on multiple features, including age, illness history and clinical observations. This data was formatted according to structured hierarchies designed by medical experts, whereas diagnosis was made as per the ICD-10 coding developed by the American Academy of Ophthalmology. Furthermore, the system is designed to evolve through self-learning by adding new classifications for both diagnosis and symptoms. The classification results from tree-based methods demonstrated that the proposed framework performs satisfactorily, given a sufficient amount of data. Owing to a structured data arrangement, the random forest and decision tree algorithms’ prediction rate is more than 90% as compared to more complex methods such as neural networks and the naïve Bayes algorithm.
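The tree-based methods this abstract credits with over 90% accuracy split structured records on one feature at a time. A depth-1 "stump" shows the core splitting step on hypothetical encoded patient rows (the features, codes, and labels below are invented for illustration; the paper uses full trees, random forests, and ICD-10 diagnoses):

```python
def majority(labels):
    """Most frequent label in a list (None for an empty split)."""
    return max(set(labels), key=labels.count) if labels else None

def best_stump(X, y):
    """Exhaustively pick the (feature, threshold) split that minimizes
    misclassifications when each side predicts its majority class."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            lp, rp = majority(left), majority(right)
            errs = sum(yi != lp for yi in left) + sum(yi != rp for yi in right)
            if best is None or errs < best[0]:
                best = (errs, f, t, lp, rp)
    return best[1:]  # feature index, threshold, left label, right label

# Hypothetical encoded patient records: [age, symptom-severity code].
X = [[55, 3], [60, 4], [25, 1], [30, 1]]
y = ["cataract", "cataract", "refractive", "refractive"]
feat, thresh, left_label, right_label = best_stump(X, y)
```

A full decision tree recurses this split on each side; a random forest averages many such trees over bootstrapped samples.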
6

Maggipinto, Marco, Elena Pesavento, Fabio Altinier, Giuliano Zambonin, Alessandro Beghi, and Gian Antonio Susto. "Laundry Fabric Classification in Vertical Axis Washing Machines Using Data-Driven Soft Sensors". Energies 12, no. 21 (October 25, 2019): 4080. http://dx.doi.org/10.3390/en12214080.

Abstract:
Embedding household appliances with smart capabilities is becoming common practice among major fabric-care producers that seek competitiveness on the market by providing more efficient and easy-to-use products. In Vertical Axis Washing Machines (VA-WM), knowing the laundry composition is fundamental to setting the washing cycle properly with positive impact both on energy/water consumption and on washing performance. An indication of the load typology composition (cotton, silk, etc.) is typically provided by the user through a physical selector that, unfortunately, is often placed by the user on the most general setting due to the discomfort of manually changing configurations. An automated mechanism to determine such key information would thus provide increased user experience, better washing performance, and reduced consumption; for this reason, we present here a data-driven soft sensor that exploits physical measurements already available on board a commercial VA-WM to provide an estimate of the load typology through a machine-learning-based statistical model of the process. The proposed method is able to work in a resource-constrained environment such as the firmware of a VA-WM.
7

Bi, Jun, Ru Zhi, Dong-Fan Xie, Xiao-Mei Zhao, and Jun Zhang. "Capturing the Characteristics of Car-Sharing Users: Data-Driven Analysis and Prediction Based on Classification". Journal of Advanced Transportation 2020 (March 9, 2020): 1–11. http://dx.doi.org/10.1155/2020/4680959.

Abstract:
This work explores the characteristics of the usage behaviour of station-based car-sharing users based on the actual operation data from a car-sharing company in Gansu, China. We analyse the characteristics of the users’ demands, such as usage frequency and order quantity, for a day with 24 1 h time intervals. Results show that most car-sharing users are young and middle-aged men with a low reuse rate. The distribution of users’ usage during weekdays shows noticeable morning and evening peaks. We define two attributes, namely, the latent ratio and persistence ratio, as classification indicators to understand the user diversity and heterogeneity thoroughly. We apply the k-means clustering algorithm to group the users into four categories, namely, lost, early loyal, late loyal, and motivated users. The usage characteristics of lost users, including maximum rental time and travel distance, minimum percentage of same pickup and return station, and low percentage of locals, have noticeable differences from those of the other users. Late loyal users have lower rental time and travel distance than those of the other users. This manifestation is in line with the short-term lease of shared cars to complete short- and medium-distance travel design concepts. We also propose a model that predicts the driver cluster based on the decision tree. Numerical tests indicate that the accuracy is 91.61% when the user category is predicted four months in advance using the observation-to-judgment period ratio of 3 : 1. The results in this study can support enterprises in user management.
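The clustering step described above, grouping users into four categories by indicators such as the latent ratio and persistence ratio, can be sketched with a plain-Python k-means (Lloyd's algorithm). The eight user points below are invented toy values, not the company's operation data:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm for 2-D points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        # Assign each point to its nearest centroid (squared Euclidean distance).
        for p in points:
            nearest = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                                + (p[1] - centroids[c][1]) ** 2)
            groups[nearest].append(p)
        # Recompute each centroid as its group's mean.
        for c, members in enumerate(groups):
            if members:  # keep the old centroid if a cluster empties out
                centroids[c] = (sum(p[0] for p in members) / len(members),
                                sum(p[1] for p in members) / len(members))
    return centroids, groups

# Hypothetical (latent ratio, persistence ratio) pairs for eight users.
users = [(0.9, 0.1), (0.85, 0.15), (0.1, 0.9), (0.15, 0.85),
         (0.9, 0.9), (0.85, 0.95), (0.1, 0.1), (0.05, 0.15)]
centroids, groups = kmeans(users, k=4)
```

The four resulting groups would then be interpreted by inspecting the centroids, much as the paper names its clusters lost, early loyal, late loyal, and motivated users.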
8

Liu, Weitao, Fuqing Wang, Hang Shi, Yan Zhang, and Ruobo Chen. "Analysis of User Energy Consumption Patterns Based on Data Mining". E3S Web of Conferences 213 (2020): 02040. http://dx.doi.org/10.1051/e3sconf/202021302040.

Abstract:
Energy-use behavior analysis can extract users' consumption patterns from energy big data, thereby improving the quality of grid-side management services in an integrated energy system. This paper first summarizes the characteristics of integrated energy systems and constructs an integrated energy service framework; second, it reviews data-driven models for analyzing electricity consumption behavior. It then elaborates on the collection and aggregation of electricity consumption information and on refined user classification. Next, the application of consumption behavior analysis to load forecasting, demand response modeling, and other typical scenarios is analyzed in depth. Finally, the challenges likely to be encountered in further research are identified and follow-up work is outlined.
9

Seymour, Zakiya A., Eugene Cloete, Margaret McCurdy, Mira Olson, and Joseph Hughes. "Understanding values of sanitation users: examining preferences and behaviors for sanitation systems". Journal of Water, Sanitation and Hygiene for Development 11, no. 2 (January 11, 2021): 195–207. http://dx.doi.org/10.2166/washdev.2021.119.

Abstract:
Sanitation policy and development have undergone a paradigm shift away from supply-driven toward behavioral-based demand-driven approaches. This shift to increase sanitation demand requires multiple stakeholders with varying degrees of interest, knowledge, and capacity. Currently, the design of appropriate sanitation technology disconnects user preference integration from sanitation technology design, resulting in fewer sanitation technologies being adopted and used. This research examines how preferences for specific attributes of appropriate sanitation technologies and implementation arrangements influence their adoption and usage. Data collected included interviews of 1,002 sanitation users living in a peri-urban area of South Africa; the surveyed respondents were asked about their existing sanitation technology, their preferences for various sanitation technology design attributes, as well as their perspectives on current and preferred sanitation implementation arrangements. The data revealed that user acceptability of appropriate sanitation technology is influenced by the adoption classification of the users. Statistically significant motives and barriers to sanitation usage showed a differentiation between users who share private sanitation and those who use communal sanitation facilities. The user acceptability of appropriate sanitation systems depends on the technical design attributes of sanitation. The development of utility functions detailed the significance of seven technical design attributes and determined their respective priorities.
10

Hess, M. R., V. Petrovic, and F. Kuester. "INTERACTIVE CLASSIFICATION OF CONSTRUCTION MATERIALS: FEEDBACK DRIVEN FRAMEWORK FOR ANNOTATION AND ANALYSIS OF 3D POINT CLOUDS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W5 (August 18, 2017): 343–47. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w5-343-2017.

Abstract:
Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector, or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.
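The user-defined classification functions mentioned above decide a point's material from attributes like color and laser intensity. A hypothetical rule of that kind (thresholds, material names, and point structure are all invented for illustration, not taken from the paper) might look like:

```python
def classify_point(point):
    """Hypothetical user-defined rule labeling a 3-D point by its
    observed color (r, g, b in 0-255) and laser intensity (0-1)."""
    r, g, b = point["color"]
    if point["intensity"] > 0.8 and abs(r - g) < 20 and abs(g - b) < 20:
        return "limestone"   # bright, near-grey return
    if r > g > b:
        return "brick"       # reddish surface
    return "unclassified"

points = [
    {"color": (200, 198, 205), "intensity": 0.9},
    {"color": (180, 90, 60),   "intensity": 0.4},
]
labels = [classify_point(p) for p in points]
```

In the framework described, such a function would be applied interactively to millions of points, with the analyst refining thresholds based on visual feedback.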
11

Suznjevic, Mirko, Lea Skorin-Kapov, and Iztok Humar. "Statistical user behavior detection and QoE evaluation for thin client services". Computer Science and Information Systems 12, no. 2 (2015): 587–605. http://dx.doi.org/10.2298/csis140810018s.

Abstract:
Remote desktop connection (RDC) services offer clients the ability to access remote content and services, commonly in the context of accessing their working environment. With the advent of cloud-based services, an example use case is that of delivering virtual PCs to users in WAN environments. In this paper, we aim to detect and analyze common user behavior when accessing RDC services, and use this as input for making Quality of Experience (QoE) estimations and subsequently providing input for effective QoE management mechanisms. We first identify different behavioral categories, and conduct traffic analysis to determine a feature set to be used for classification purposes. We propose a machine learning approach to be used for classifying behavior, and use this approach to classify a large number of real-world RDCs. We further conduct QoE evaluation studies to determine the relationship between different network conditions and subjective end user QoE for all identified behavioral categories. Results show an exponential relationship between QoE and delay and loss degradations, and a logarithmic relationship between QoE and bandwidth limitations. Obtained results may be applied in the context of network resource planning, as well as in making QoE-driven resource allocation decisions.
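The logarithmic QoE-bandwidth relationship reported above has the form MOS ≈ a + b·ln(bandwidth), which can be fitted in closed form by least squares. The bandwidth caps and MOS ratings below are invented toy values, not the study's measurements:

```python
import math

def fit_log_model(bandwidths, mos):
    """Least-squares fit of MOS ≈ a + b*ln(bandwidth), closed form."""
    xs = [math.log(b) for b in bandwidths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(mos) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, mos))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical mean opinion scores at different bandwidth caps (Mbit/s).
bw  = [0.5, 1, 2, 4, 8]
mos = [1.8, 2.5, 3.2, 3.9, 4.6]
a, b = fit_log_model(bw, mos)
```

The exponential delay/loss relationships the paper also reports would be fitted the same way after taking the logarithm of the MOS values instead.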
12

Kundra, Ritika, Hongxin Zhang, Robert Sheridan, Sahussapont Joseph Sirintrapun, Avery Wang, Angelica Ochoa, Manda Wilson, et al. "OncoTree: A Cancer Classification System for Precision Oncology". JCO Clinical Cancer Informatics, no. 5 (March 2021): 221–30. http://dx.doi.org/10.1200/cci.20.00108.

Abstract:
PURPOSE Cancer classification is foundational for patient care and oncology research. Systems such as International Classification of Diseases for Oncology (ICD-O), Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT), and National Cancer Institute Thesaurus (NCIt) provide large sets of cancer classification terminologies but they lack a dynamic modernized cancer classification platform that addresses the fast-evolving needs in clinical reporting of genomic sequencing results and associated oncology research. METHODS To meet these needs, we have developed OncoTree, an open-source cancer classification system. It is maintained by a cross-institutional committee of oncologists, pathologists, scientists, and engineers, accessible via an open-source Web user interface and an application programming interface. RESULTS OncoTree currently includes 868 tumor types across 32 organ sites. OncoTree has been adopted as the tumor classification system for American Association for Cancer Research (AACR) Project Genomics Evidence Neoplasia Information Exchange (GENIE), a large genomic and clinical data-sharing consortium, and for clinical molecular testing efforts at Memorial Sloan Kettering Cancer Center and Dana-Farber Cancer Institute. It is also used by precision oncology tools such as OncoKB and cBioPortal for Cancer Genomics. CONCLUSION OncoTree is a dynamic and flexible community-driven cancer classification platform encompassing rare and common cancers that provides clinically relevant and appropriately granular cancer classification for clinical decision support systems and oncology research.
13

WROE, CHRIS, ROBERT STEVENS, CAROLE GOBLE, ANGUS ROBERTS, and MARK GREENWOOD. "A SUITE OF DAML+OIL ONTOLOGIES TO DESCRIBE BIOINFORMATICS WEB SERVICES AND DATA". International Journal of Cooperative Information Systems 12, no. 02 (June 2003): 197–224. http://dx.doi.org/10.1142/s0218843003000711.

Abstract:
The growing quantity and distribution of bioinformatics resources means that finding and utilizing them requires a great deal of expert knowledge, especially as many resources need to be tied together into a workflow to accomplish a useful goal. We want to formally capture at least some of this knowledge within a virtual workbench and middleware framework to assist a wider range of biologists in utilizing these resources. Different activities require different representations of knowledge. Finding or substituting a service within a workflow is often best supported by a classification. Marshalling and configuring services is best accomplished using a formal description. Both representations are highly interdependent and maintaining consistency between the two by hand is difficult. We report on a description logic approach using the web ontology language DAML+OIL that uses property based service descriptions. The ontology is founded on DAML-S to dynamically create service classifications. These classifications are then used to support semantic service matching and discovery in a large grid based middleware project [Formula: see text]. We describe the extensions necessary to DAML-S in order to support bioinformatics service description; the utility of DAML+OIL in creating dynamic classifications based on formal descriptions; and the implementation of a DAML+OIL ontology service to support partial user-driven service matching and composition.
14

Rajaram, Gangothri, and KR Manjula. "Exploiting the Potential of VGI Metadata to Develop A Data-Driven Framework for Predicting User's Proficiency in OpenStreetMap Context". ISPRS International Journal of Geo-Information 8, no. 11 (October 31, 2019): 492. http://dx.doi.org/10.3390/ijgi8110492.

Abstract:
Volunteered geographic information (VGI) encourages citizens to contribute geographic data voluntarily, which helps to enhance geospatial databases. VGI's significant limitations are trustworthiness and reliability concerning data quality, due to the anonymity of data contributors. We propose a data-driven model to address these issues on OpenStreetMap (OSM), a particular case of VGI in recent times. This research examines the hypothesis of evaluating the proficiency of the contributor to assess the credibility of the data contributed. The proposed framework consists of two phases, namely, an exploratory data analysis phase and a learning phase. The former explores OSM data history to perform feature selection, resulting in "OSM Metadata" summarized using principal component analysis. The latter combines unsupervised and supervised learning through K-means for user clustering and multi-class logistic regression for user classification. We identified five major classes representing user-proficiency levels based on contribution behavior in this study. We tested the framework with the India OSM data history, where 17% of users are key contributors and 27% are inexperienced local users. The results for classifying new users are satisfactory, with 95.5% accuracy. Our conclusions recognize the potential of OSM metadata to illustrate the user's contribution behavior without knowledge of the user's profile information.
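The supervised half of the framework above is a multi-class logistic regression over metadata features. A compact plain-Python softmax regression trained by batch gradient descent illustrates that classifier; the two features, three proficiency classes, and training points below are invented stand-ins for the paper's "OSM Metadata" and five classes:

```python
import math

def train_softmax(X, y, classes, lr=0.5, epochs=5000):
    """Multi-class logistic regression fitted by batch gradient descent."""
    W = [[0.0] * (len(X[0]) + 1) for _ in classes]   # bias + one weight per feature
    for _ in range(epochs):
        grads = [[0.0] * len(W[0]) for _ in classes]
        for xi, yi in zip(X, y):
            logits = [w[0] + sum(wf * xf for wf, xf in zip(w[1:], xi)) for w in W]
            m = max(logits)
            exps = [math.exp(v - m) for v in logits]  # numerically stable softmax
            total = sum(exps)
            for c, cls in enumerate(classes):
                err = exps[c] / total - (1.0 if cls == yi else 0.0)
                grads[c][0] += err
                for f, xf in enumerate(xi):
                    grads[c][f + 1] += err * xf
        for c in range(len(classes)):
            for j in range(len(W[0])):
                W[c][j] -= lr * grads[c][j] / len(X)
    return W

def predict(W, classes, x):
    logits = [w[0] + sum(wf * xf for wf, xf in zip(w[1:], x)) for w in W]
    return classes[logits.index(max(logits))]

# Hypothetical normalized metadata features: [edit frequency, mean edit size].
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.1], [0.2, 0.2], [0.5, 0.1], [0.4, 0.2]]
y = ["key", "key", "novice", "novice", "local", "local"]
classes = ["key", "novice", "local"]
W = train_softmax(X, y, classes)
```

In the paper's pipeline, the class labels used for this supervised step come from the preceding K-means clustering rather than being hand-assigned as here.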
15

Cornelissen, Frans, Miroslav Cik, and Emmanuel Gustin. "Phaedra, a Protocol-Driven System for Analysis and Validation of High-Content Imaging and Flow Cytometry". Journal of Biomolecular Screening 17, no. 4 (January 10, 2012): 496–506. http://dx.doi.org/10.1177/1087057111432885.

Abstract:
High-content screening has brought new dimensions to cellular assays by generating rich data sets that characterize cell populations in great detail and detect subtle phenotypes. To derive relevant, reliable conclusions from these complex data, it is crucial to have informatics tools supporting quality control, data reduction, and data mining. These tools must reconcile the complexity of advanced analysis methods with the user-friendliness demanded by the user community. After review of existing applications, we realized the possibility of adding innovative new analysis options. Phaedra was developed to support workflows for drug screening and target discovery, interact with several laboratory information management systems, and process data generated by a range of techniques including high-content imaging, multicolor flow cytometry, and traditional high-throughput screening assays. The application is modular and flexible, with an interface that can be tuned to specific user roles. It offers user-friendly data visualization and reduction tools for HCS but also integrates Matlab for custom image analysis and the Konstanz Information Miner (KNIME) framework for data mining. Phaedra features efficient JPEG2000 compression and full drill-down functionality from dose-response curves down to individual cells, with exclusion and annotation options, cell classification, statistical quality controls, and reporting.
16

Cotter, Kelley, Mel Medeiros, Chankyung Pak, and Kjerstin Thorson. "“Reach the right people”: The politics of “interests” in Facebook’s classification system for ad targeting". Big Data & Society 8, no. 1 (January 2021): 205395172199604. http://dx.doi.org/10.1177/2053951721996046.

Abstract:
Political campaigns increasingly rely on Facebook for reaching their constituents, particularly through ad targeting. Facebook’s business model is premised on a promise to connect advertisers with the “right” users: those likely to click, download, engage, purchase. The company pursues this promise (in part) by algorithmically inferring users’ interests from their data and providing advertisers with a means of targeting users by their inferred interests. In this study, we explore for whom this interest classification system works in order to build on conversations in critical data studies about the ways such systems produce knowledge about the world rooted in power structures. We critically analyze the classification system from a variety of empirical vantage points—via user data; Facebook documentation, training, and patents; and Facebook’s tools for advertisers—and through theoretical concepts from a variety of domains. In this, we focus on the ways the classification system shapes possibilities for political representation and voice, particularly for people of color, women, and LGBTQ+ people. We argue that this “big data-driven” classification system should be read as political: it articulates a stance not only on what issues are or are not important in the U.S. public sphere, but also on who is considered a significant enough public to be adequately accounted for.
17

Hansen, Brandon, Cody Coleman, Yi Zhang, and Maria Seale. "Text Classification and Tagging of United States Army Ground Vehicle Fault Descriptions in Support of Data-Driven Prognostics". Annual Conference of the PHM Society 12, no. 1 (November 3, 2020): 8. http://dx.doi.org/10.36001/phmconf.2020.v12i1.1154.

Abstract:
The manner in which a prognostics problem is framed is critical for enabling its solution by the proper method. Recently, data-driven prognostics techniques have demonstrated enormous potential when used alone, or as part of a hybrid solution in conjunction with physics-based models. Historical maintenance data constitutes a critical element for the use of a data-driven approach to prognostics, such as supervised machine learning. The historical data is used to create training and testing data sets to develop the machine learning model. Categorical classes for prediction are required for machine learning methods; however, faults of interest in US Army Ground Vehicle Maintenance Records appear as natural language text descriptions rather than a finite set of discrete labels. Transforming linguistically complex data into a set of prognostics classes is necessary for utilizing supervised machine learning approaches for prognostics. Manually labeling fault description instances is effective, but extremely time-consuming; thus, an automated approach to labelling is preferred. The approach described in this paper examines key aspects of the fault text relevant to enabling automatic labeling. A method was developed based on the hypothesis that a given fault description could be generalized into a category. This method uses various natural language processing (NLP) techniques and a priori knowledge of ground vehicle faults to assign classes to the maintenance fault descriptions. The core component of the method used in this paper is a Word2Vec word-embedding model. Word embeddings are used in conjunction with a token-oriented rule-based data structure for document classification. This methodology tags text with user-provided classes using a corpus of similar text fields as its training set. 
With classes of faults reliably assigned to a given description, supervised machine learning with these classes can be applied using related maintenance information that preceded the fault. This method was developed for labeling US Army Ground Vehicle Maintenance Records, but is general enough to be applied to any natural language data sets accompanied with a priori knowledge of its contents for consistent labeling. In addition to applications in machine learning, generated labels are also conducive to general summarization and case-by-case analysis of faults. The maintenance components of interest for this current application are alternators and gaskets, with future development directed towards determining the RUL of these components based on the labeled data.
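The token-oriented, rule-based part of the labeling approach above can be sketched as matching a description's tokens against per-class trigger sets built from a priori knowledge of vehicle faults. The trigger lexicon and descriptions below are invented for the sketch; the paper's Word2Vec embeddings would additionally match near-synonyms rather than exact tokens only:

```python
def tag_fault(description, class_triggers):
    """Tag a free-text fault description with the class whose trigger
    tokens overlap it most; fall back to 'unlabeled' on no match."""
    tokens = set(description.lower().replace(",", " ").split())
    scores = {cls: len(tokens & triggers) for cls, triggers in class_triggers.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unlabeled"

# Hypothetical trigger lexicon for the two components of interest.
triggers = {
    "alternator": {"alternator", "charging", "battery", "voltage"},
    "gasket":     {"gasket", "leak", "seal", "coolant", "oil"},
}
label = tag_fault("Battery not charging, alternator belt worn", triggers)
```

Descriptions labeled this way become the categorical classes that downstream supervised prognostics models require.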
18

Konstantakis, Markos, Georgios Alexandridis, and George Caridakis. "A Personalized Heritage-Oriented Recommender System Based on Extended Cultural Tourist Typologies". Big Data and Cognitive Computing 4, no. 2 (June 4, 2020): 12. http://dx.doi.org/10.3390/bdcc4020012.

Abstract:
Recent developments in digital technologies in the cultural heritage domain have driven technological trends toward comfortable and convenient traveling, by offering interactive and personalized user experiences. The emergence of big data analytics, recommendation systems and personalization techniques has created a smart research field, augmenting the cultural heritage visitor's experience. In this work, a novel, hybrid recommender system for cultural places is proposed that combines user preference with cultural tourist typologies. Starting with the McKercher typology as a user classification research base, which extracts five categories of heritage tourists out of two variables (cultural centrality and depth of user experience), and using a questionnaire, an enriched cultural tourist typology is developed, in which three additional variables governing cultural visitor types are also proposed (frequency of visits, visiting knowledge and duration of the visit). The extracted categories per user are fused into a robust collaborative filtering, matrix factorization-based recommendation algorithm as extra user features. The obtained results on reference data collected from eight cities exhibit an improvement in system performance, thereby indicating the robustness of the presented approach.
19

Toader, Bogdan, Assaad Moawad, Thomas Hartmann, and Francesco Viti. "A Data-Driven Scalable Method for Profiling and Dynamic Analysis of Shared Mobility Solutions". Journal of Advanced Transportation 2021 (January 18, 2021): 1–15. http://dx.doi.org/10.1155/2021/5943567.

Abstract:
The advent of Internet of Things will revolutionise the sharing mobility by enabling high connectivity between passengers and means of transport. This generates enormous quantity of data which can reveal valuable knowledge and help understand complex travel behaviour. At the same time, it challenges analytics platforms to discover knowledge from data in motion (i.e., the analytics occur in real time as the event happens), extract travel habits, and provide reliable and faster sharing mobility services in dynamic contexts. In this paper, a scalable method for dynamic profiling is introduced, which allows the extraction of users’ travel behaviour and valuable knowledge about visited locations, using only geolocation data collected from mobile devices. The methodology makes use of a compact representation of time-evolving graphs that can be used to analyse complex data in motion. In particular, we demonstrate that using a combination of state-of-the-art technologies from data science domain coupled with methodologies from the transportation domain, it is possible to implement, with the minimum of resources, the next generation of autonomous sharing mobility services (i.e., long-term and on-demand parking sharing and combinations of car sharing and ride sharing) and extract from raw data, without any user input and in near real time, valuable knowledge (i.e., location labelling and activity classification).
20

Aguilera-Rueda, Vicente-Josué, Nicandro Cruz-Ramírez and Efrén Mezura-Montes. "Data-Driven Bayesian Network Learning: A Bi-Objective Approach to Address the Bias-Variance Decomposition". Mathematical and Computational Applications 25, no. 2 (June 20, 2020): 37. http://dx.doi.org/10.3390/mca25020037.

Abstract:
We present a novel bi-objective approach to address the data-driven learning problem of Bayesian networks. Both the log-likelihood and the complexity of each candidate Bayesian network are considered as objectives to be optimized by our proposed algorithm, named Nondominated Sorting Genetic Algorithm for learning Bayesian networks (NS2BN), which is based on the well-known NSGA-II algorithm. The core idea is to reduce the implicit selection bias-variance decomposition while identifying a set of competitive models using both objectives. Numerical results suggest that, in stark contrast to the single-objective approach, our bi-objective approach is useful for finding competitive Bayesian networks, especially in terms of complexity. Furthermore, our approach presents the end user with a set of solutions, showing different Bayesian networks and their respective MDL and classification accuracy results.
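The bi-objective idea, keeping every candidate that no other candidate beats on both log-likelihood and complexity at once, can be illustrated with a tiny Pareto-front filter. The objective values below are invented for illustration; this is not NS2BN itself.

```python
def nondominated(points):
    """Return the Pareto-optimal subset for two minimised objectives
    (e.g. negative log-likelihood and network edge count)."""
    front = []
    for p in points:
        # p survives unless some other point is at least as good on both axes.
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# (negative log-likelihood, edge count) of hypothetical candidate networks
candidates = [(10.0, 8), (12.0, 3), (9.0, 12), (11.0, 9), (15.0, 2)]
print(nondominated(candidates))
```

The surviving set is exactly the kind of trade-off menu the abstract describes presenting to the end user: sparser networks with worse fit at one end, denser better-fitting ones at the other.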
21

Villar, José R., Paula Vergara, Manuel Menéndez, Enrique de la Cal, Víctor M. González and Javier Sedano. "Generalized Models for the Classification of Abnormal Movements in Daily Life and its Applicability to Epilepsy Convulsion Recognition". International Journal of Neural Systems 26, no. 06 (July 19, 2016): 1650037. http://dx.doi.org/10.1142/s0129065716500374.

Abstract:
The identification and the modeling of epilepsy convulsions during everyday life using wearable devices would enhance patient anamnesis and monitoring. The psychology of the epilepsy patient penalizes the use of user-driven modeling, which means that the probability of identifying convulsions is driven through generalized models. Focusing on clonic convulsions, this pre-clinical study proposes a method for generating a type of model that can evaluate the generalization capabilities. A realistic experimentation with healthy participants is performed, each with a single 3D accelerometer placed on the most affected wrist. Unlike similar studies reported in the literature, this proposal makes use of [Formula: see text] cross-validation scheme, in order to evaluate the generalization capabilities of the models. Event-based error measurements are proposed instead of classification-error measurements, to evaluate the generalization capabilities of the model, and Fuzzy Systems are proposed as the generalization modeling technique. Using this method, the experimentation compares the most common solutions in the literature, such as Support Vector Machines, [Formula: see text]-Nearest Neighbors, Decision Trees and Fuzzy Systems. The event-based error measurement system records the results, penalizing those models that raise false alarms. The results showed the good generalization capabilities of Fuzzy Systems.
22

Jabla, Roua, Félix Buendía, Maha Khemaja and Sami Faiz. "Balancing Timing and Accuracy Requirements in Human Activity Recognition Mobile Applications". Proceedings 31, no. 1 (November 20, 2019): 15. http://dx.doi.org/10.3390/proceedings2019031015.

Abstract:
Timing requirements are present in many current context-aware and ambient intelligent applications. These kinds of applications usually demand a timing response according to needs dealing with context changes and user interactions. The current work introduces an approach that combines knowledge-driven and data-driven methods to check these requirements in the area of human activity recognition. Such recognition is traditionally based on machine learning classification algorithms. Since these algorithms are highly time consuming, it is necessary to choose alternative approaches when timing requirements are tight. In this case, the main idea consists of taking advantage of semantic ontology models that allow maintaining a level of accuracy during the recognition process while achieving the required response times. The experiments performed and their results in terms of checking such timing requirements along with keeping acceptable recognition levels confirm this idea as shown in the final section of the work.
23

Siddiquee, Masudur R., Roozbeh Atri, J. Sebastian Marquez, S. M. Shafiul Hasan, Rodrigo Ramon and Ou Bai. "Sensor Location Optimization of Wireless Wearable fNIRS System for Cognitive Workload Monitoring Using a Data-Driven Approach for Improved Wearability". Sensors 20, no. 18 (September 7, 2020): 5082. http://dx.doi.org/10.3390/s20185082.

Abstract:
Functional Near-Infrared Spectroscopy (fNIRS) is a hemodynamic modality in human cognitive workload assessment receiving popularity due to its easier implementation, non-invasiveness, low cost and other benefits from the signal-processing point of view. Wearable wireless fNIRS systems used in research have promisingly shown that fNIRS could be used in cognitive workload assessment in out-of-the-lab scenarios, such as in operators’ cognitive workload monitoring. In such a scenario, the wearability of the system is a significant factor affecting user comfort. In this respect, the wearability of the system can be improved if it is possible to minimize an fNIRS system without much compromise of the cognitive workload detection accuracy. In this study, cognitive workload-related hemodynamic changes were acquired using an fNIRS system covering the whole forehead, which is the region of interest in most cognitive workload-monitoring studies. A machine learning approach was applied to explore how the mean accuracy of the cognitive workload classification accuracy varied across various sensing locations on the forehead such as the Left, Mid, Right, Left-Mid, Right-Mid and Whole forehead. The statistical significance analysis result showed that the Mid location could result in significant cognitive workload classification accuracy compared to Whole forehead sensing, with a statistically insignificant difference in the mean accuracy. Thus, the wearable fNIRS system can be improved in terms of wearability by optimizing the sensor location, considering the sensing of the Mid location on the forehead for cognitive workload monitoring.
24

Zhao, Yan Ling, Chun Yu Che, Hong Bo Wang, Zi Yan Yun and Guan Hong Xu. "Application of Knowledge Based Engineering in Aeronautics Mold Parts Parametric Design". Advanced Materials Research 765-767 (September 2013): 47–50. http://dx.doi.org/10.4028/www.scientific.net/amr.765-767.47.

Abstract:
In the process of designing aeronautic composite mold parts, previous design knowledge and expertise cannot be reused well, the part-modeling cycle is long and production efficiency is low. According to the classification and characteristics of aeronautic mold parts and knowledge engineering technology, and based on the fusion module in UG6.0, we establish a repository through knowledge acquisition, knowledge representation and knowledge reasoning for aeronautic mold parts. With the UG secondary development tools UG/Open Menuscript and UG/Open Uistyler, we develop a user menu and a parametric design interface for aeronautic mold parts, make good use of the accumulated design knowledge, and fulfill knowledge-driven parametric part design. These satisfy the rapid design requirements of aeronautic mold parts and shorten the design cycle.
25

Siddharth, Siddharth, and Mohan M. Trivedi. "On Assessing Driver Awareness of Situational Criticalities: Multi-modal Bio-Sensing and Vision-Based Analysis, Evaluations, and Insights". Brain Sciences 10, no. 1 (January 15, 2020): 46. http://dx.doi.org/10.3390/brainsci10010046.

Abstract:
Automobiles for our roadways are increasingly using advanced driver assistance systems. The adoption of such new technologies requires us to develop novel perception systems not only for accurately understanding the situational context of these vehicles, but also to infer the driver’s awareness in differentiating between safe and critical situations. This manuscript focuses on the specific problem of inferring driver awareness in the context of attention analysis and hazardous incident activity. Even after the development of wearable and compact multi-modal bio-sensing systems in recent years, their application in driver awareness context has been scarcely explored. The capability of simultaneously recording different kinds of bio-sensing data in addition to traditionally employed computer vision systems provides exciting opportunities to explore the limitations of these sensor modalities. In this work, we explore the applications of three different bio-sensing modalities namely electroencephalogram (EEG), photoplethysmogram (PPG) and galvanic skin response (GSR) along with a camera-based vision system in driver awareness context. We assess the information from these sensors independently and together using both signal processing- and deep learning-based tools. We show that our methods outperform previously reported studies to classify driver attention and detecting hazardous/non-hazardous situations for short time scales of two seconds. We use EEG and vision data for high resolution temporal classification (two seconds) while additionally also employing PPG and GSR over longer time periods. We evaluate our methods by collecting user data on twelve subjects for two real-world driving datasets among which one is publicly available (KITTI dataset) while the other was collected by us (LISA dataset) with the vehicle being driven in an autonomous mode. 
This work presents an exhaustive evaluation of multiple sensor modalities on two different datasets for attention monitoring and hazardous events classification.
26

Dunai, Larisa, Martin Novak and Carmen García Espert. "Human Hand Anatomy-Based Prosthetic Hand". Sensors 21, no. 1 (December 28, 2020): 137. http://dx.doi.org/10.3390/s21010137.

Abstract:
The present paper describes the development of a prosthetic hand based on human hand anatomy. The hand phalanges are 3D-printed in Polylactic Acid. One of the main contributions is the investigation of the prosthetic hand joints; the proposed design enables the creation of personalized joints that give the prosthetic hand a high level of movement by increasing the degrees of freedom of the fingers. Moreover, the driven wire tendons show a progressive grasping movement, the friction of the tendons with the phalanges being very low. Another important point is the use of force sensitive resistors (FSR) for simulating the hand touch pressure. These are used to stop the grasp, simulating the touch pressure of the fingers. Surface Electromyogram (EMG) sensors allow the user to control the start of the prosthetic hand grasp. Their use may provide the prosthetic hand with the possibility of classifying hand movements. The practical results included in the paper prove the importance of the soft joints for object manipulation and for adapting to the object surface. Finally, the force sensitive sensors allow the prosthesis to actuate more naturally by adding conditions and classifications to the Electromyogram sensor.
27

Böhm, J., M. Bredif, T. Gierlinger, M. Krämer, R. Lindenberg, K. Liu, F. Michel and B. Sirmacek. "THE IQMULUS URBAN SHOWCASE: AUTOMATIC TREE CLASSIFICATION AND IDENTIFICATION IN HUGE MOBILE MAPPING POINT CLOUDS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 301–7. http://dx.doi.org/10.5194/isprsarchives-xli-b3-301-2016.

Abstract:
Current 3D data capturing as implemented on for example airborne or mobile laser scanning systems is able to efficiently sample the surface of a city by billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~ 10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class, and are separated next into individual trees. Five hours of processing at the 12 node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
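The "PCA driven local dimensionality analysis" mentioned in this abstract typically turns the eigenvalues of a point's local covariance matrix into linear/planar/scatter scores, with 3D-scattering points going to the vegetation class. One common eigenvalue-based recipe is sketched below on synthetic neighbourhoods; the exact formulation used in IQmulus may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def dimensionality(neigh):
    """Eigenvalue-based local shape fractions for a point neighbourhood:
    (1D linear, 2D planar, 3D scatter), computed from the sorted square
    roots of the covariance eigenvalues."""
    w = np.sort(np.linalg.eigvalsh(np.cov(neigh.T)))[::-1]   # l1 >= l2 >= l3
    s = np.sqrt(np.clip(w, 0, None))
    return (s[0] - s[1]) / s[0], (s[1] - s[2]) / s[0], s[2] / s[0]

# Synthetic neighbourhoods: a flat patch (facade/ground-like) and an
# isotropic blob (vegetation-like, scattering in 3 directions).
plane = rng.standard_normal((200, 3)) * [1, 1, 0.01]
blob = rng.standard_normal((200, 3))

print(dimensionality(plane), dimensionality(blob))
```

The planar patch scores high on the second fraction and the blob on the third, which is the cue used to cluster scattering points into the tree class.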
28

Böhm, J., M. Bredif, T. Gierlinger, M. Krämer, R. Lindenberg, K. Liu, F. Michel and B. Sirmacek. "THE IQMULUS URBAN SHOWCASE: AUTOMATIC TREE CLASSIFICATION AND IDENTIFICATION IN HUGE MOBILE MAPPING POINT CLOUDS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLI-B3 (June 9, 2016): 301–7. http://dx.doi.org/10.5194/isprs-archives-xli-b3-301-2016.

Abstract:
Current 3D data capturing as implemented on for example airborne or mobile laser scanning systems is able to efficiently sample the surface of a city by billions of unselective points during one working day. What is still difficult is to extract and visualize meaningful information hidden in these point clouds with the same efficiency. This is where the FP7 IQmulus project enters the scene. IQmulus is an interactive facility for processing and visualizing big spatial data. In this study the potential of IQmulus is demonstrated on a laser mobile mapping point cloud of 1 billion points sampling ~ 10 km of street environment in Toulouse, France. After the data is uploaded to the IQmulus Hadoop Distributed File System, a workflow is defined by the user consisting of retiling the data followed by a PCA driven local dimensionality analysis, which runs efficiently on the IQmulus cloud facility using a Spark implementation. Points scattering in 3 directions are clustered in the tree class, and are separated next into individual trees. Five hours of processing at the 12 node computing cluster results in the automatic identification of 4000+ urban trees. Visualization of the results in the IQmulus fat client helps users to appreciate the results, and developers to identify remaining flaws in the processing workflow.
29

Sharif, Mohammadreza. "Particle Filters vs Hidden Markov Models for Prosthetic Robot Hand Grasp Selection". International Journal of Robotic Computing 1, no. 2 (December 1, 2019): 98–122. http://dx.doi.org/10.35708/rc1868-126253.

Abstract:
Robotic prosthetic hands are commonly controlled using electromyography (EMG) signals as a means of inferring user intention. However, relying on EMG signals alone, although it provides very good results in lab settings, is not sufficiently robust in real-life conditions. For this reason, taking advantage of other contextual clues has been proposed in previous works. In this work, we propose a method for intention inference based on particle filtering (PF) over the user hand's trajectory information. Our methodology also provides an estimate of time-to-arrive, i.e. the time left until reaching the object, which is an essential variable for the successful grasping of objects. The proposed probabilistic framework can incorporate available sources of information to improve the inference process. We also provide a data-driven method based on a hidden Markov model (HMM) as a baseline for intention inference; HMMs are widely used for human gesture classification. The algorithms were tested (and trained) on 160 reaching trajectories collected from 10 subjects, each reaching for one of four objects at a time.
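A particle filter for grasp-target inference from trajectory information can be sketched as follows. The four-object layout, the von Mises-style alignment likelihood, and all parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

goals = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])  # candidate objects
n_part = 400
particles = rng.integers(0, len(goals), n_part)   # each particle = a goal hypothesis
weights = np.ones(n_part) / n_part

def update(pos, vel, kappa=4.0):
    """Reweight particles by how well the observed hand velocity points
    at each particle's goal, then apply systematic resampling."""
    global particles, weights
    to_goal = goals[particles] - pos
    to_goal /= np.linalg.norm(to_goal, axis=1, keepdims=True)
    v = vel / np.linalg.norm(vel)
    weights = weights * np.exp(kappa * (to_goal @ v))   # alignment likelihood
    weights /= weights.sum()
    u = (np.arange(n_part) + rng.random()) / n_part
    idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n_part - 1)
    particles, weights = particles[idx], np.ones(n_part) / n_part

# Simulated reach: the hand moves along +x, i.e. toward the first object.
pos = np.array([0.0, 0.0])
for _ in range(10):
    vel = np.array([1.0, 0.05 * rng.standard_normal()])
    update(pos, vel)
    pos = pos + 0.05 * vel

counts = np.bincount(particles, minlength=len(goals))
print(counts)   # particle support per goal hypothesis
```

A time-to-arrive estimate of the kind the abstract mentions could then be read off as the distance to the most supported goal divided by the current hand speed.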
30

Xiao, Mansheng, Yuezhong Wu, Guocai Zuo, Shuangnan Fan, Huijun Yu, Zeeshan Azmat Shaikh and Zhiqiang Wen. "Addressing Overfitting Problem in Deep Learning-Based Solutions for Next Generation Data-Driven Networks". Wireless Communications and Mobile Computing 2021 (August 10, 2021): 1–10. http://dx.doi.org/10.1155/2021/8493795.

Abstract:
Next-generation networks are data-driven by design but face uncertainty due to various changing user group patterns and the hybrid nature of the infrastructures running these systems. Meanwhile, the amount of data gathered in computer systems keeps increasing. How to classify and process this massive data to reduce the amount of data transmission in the network is a problem well worth studying. Recent research uses deep learning to propose solutions for these and related issues. However, deep learning faces problems like overfitting that may undermine the effectiveness of its applications in solving different network problems. This paper considers the overfitting problem of convolutional neural network (CNN) models in practical applications. An algorithm combining max-pooling dropout and weight decay is proposed to avoid overfitting. First, max-pooling dropout is applied in the pooling layer of the model to sparsify the neurons; then, regularization based on weight decay is introduced to reduce the complexity of the model when the gradient of the loss function is computed by backpropagation. Theoretical analysis and experiments show that the proposed method can effectively avoid overfitting and can reduce the classification error rate on the data sets by more than 10% on average compared with other methods. The proposed method can improve the quality of different deep learning-based solutions designed for data management and processing in next-generation networks.
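The two ingredients, max-pooling dropout and weight decay, can be sketched in a few lines of numpy. The shapes, rates and toy input below are assumptions for illustration, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(3)

def max_pool_dropout(x, pool=2, p_drop=0.5, train=True):
    """2x2 max pooling where, at training time, each activation in a
    pooling window may be dropped before the max is taken, so the
    winner is not always the largest unit (max-pooling dropout)."""
    h, w = x.shape
    x = x.reshape(h // pool, pool, w // pool, pool).transpose(0, 2, 1, 3)
    windows = x.reshape(h // pool, w // pool, pool * pool)
    if train:
        mask = rng.random(windows.shape) >= p_drop
        out = np.where(mask, windows, -np.inf).max(axis=-1)
        out[np.isneginf(out)] = 0.0        # every unit in the window dropped
        return out
    return windows.max(axis=-1)

def sgd_step(w, grad, lr=0.1, decay=1e-3):
    """Gradient update with an L2 weight-decay term added to the gradient."""
    return w - lr * (grad + decay * w)

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool_dropout(x, train=False))     # plain max pooling
print(max_pool_dropout(x, train=True))      # stochastic pooled output
```

At evaluation time the dropout mask is disabled, so the layer reduces to ordinary max pooling; the decay term shrinks weights toward zero on every step, which is the complexity penalty the abstract describes.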
31

Sevastjanova, Rita, Wolfgang Jentner, Fabian Sperrle, Rebecca Kehlbeck, Jürgen Bernard and Mennatallah El-assady. "QuestionComb: A Gamification Approach for the Visual Explanation of Linguistic Phenomena through Interactive Labeling". ACM Transactions on Interactive Intelligent Systems 11, no. 3-4 (December 31, 2021): 1–38. http://dx.doi.org/10.1145/3429448.

Abstract:
Linguistic insight in the form of high-level relationships and rules in text builds the basis of our understanding of language. However, the data-driven generation of such structures often lacks labeled resources that can be used as training data for supervised machine learning. The creation of such ground-truth data is a time-consuming process that often requires domain expertise to resolve text ambiguities and characterize linguistic phenomena. Furthermore, the creation and refinement of machine learning models is often challenging for linguists as the models are often complex, in-transparent, and difficult to understand. To tackle these challenges, we present a visual analytics technique for interactive data labeling that applies concepts from gamification and explainable Artificial Intelligence (XAI) to support complex classification tasks. The visual-interactive labeling interface promotes the creation of effective training data. Visual explanations of learned rules unveil the decisions of the machine learning model and support iterative and interactive optimization. The gamification-inspired design guides the user through the labeling process and provides feedback on the model performance. As an instance of the proposed technique, we present QuestionComb , a workspace tailored to the task of question classification (i.e., in information-seeking vs. non-information-seeking questions). Our evaluation studies confirm that gamification concepts are beneficial to engage users through continuous feedback, offering an effective visual analytics technique when combined with active learning and XAI.
32

Baek, Keon, Sehyun Kim, Eunjung Lee, Yongjun Cho and Jinho Kim. "Data-Driven Evaluation for Demand Flexibility of Segmented Electric Vehicle Chargers in the Korean Residential Sector". Energies 14, no. 4 (February 7, 2021): 866. http://dx.doi.org/10.3390/en14040866.

Abstract:
The rapid spread of renewable energy resources has increased the need for demand flexibility as one of the solutions to power system imbalance. However, to properly estimate demand flexibility, demand characteristics must first be analyzed and the corresponding flexibility measures validated. Thus, in this study, a novel approach is proposed to evaluate the demand flexibility provided by Electric Vehicle Chargers (EVC) in the residential sector, based upon a new process of analyzing electric charging demand characteristic data. The proposed model estimates the frequency, consistency, and operation scores of the flexible demand resource (FDR) during the identified ramp-up/down intervals presented in our previous work. These scores are combined into a flexibility score for which values closer to 1 indicate higher utility as an FDR. A case study was conducted considering EV user segmentation based on demand characteristic analysis. The results confirm that the flexibility scores of segmented EVC groups are about 0.0273 in both ramp-up and ramp-down intervals. Based on the experimental results, the flexibility score can be used for multi-dimensional analysis and verification from the perspectives of seasonality, participation time interval, and customer group classification and evaluation. Thus, the proposed method can serve as an indicator of how adequate a segmented EVC group is to participate as an FDR, while suggesting meaningful implications through EVC demand data analysis.
33

Cheng, Qi, Shuchun Wang and Xifeng Fang. "Intelligent design technology of automobile inspection tool based on 3D MBD model intelligent retrieval". Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering 235, no. 10-11 (March 2, 2021): 2917–27. http://dx.doi.org/10.1177/09544070211000174.

Abstract:
The utilization rate of existing process equipment design resources in the automobile industry is low, so it is urgent to change the design method to improve design efficiency. This paper proposes a fast design method for process equipment driven by classification retrieval of 3D model-based definition (MBD) models. Firstly, an information-integrated 3D model is established to fully express the product information definition and to effectively express the design characteristics of existing 3D models. Through a classification machine-learning algorithm for 3D MBD models based on the Extreme Learning Machine (ELM), the 3D MBD model with characteristics most similar to the auto part model to be designed is retrieved from a complex process equipment case database. Secondly, classification and retrieval of the model are realized, and the process equipment associated with the retrieved 3D MBD model is called up. The existing process equipment model is then adjusted and modified to complete the rapid design of the process equipment for the product to be designed. Finally, a corresponding process equipment design system was developed and verified through a case study. Applying machine learning to the design of industrial equipment greatly shortens the equipment development cycle. As the design system learns from engineers, it accumulates an understanding of the design that can help any user quickly design 3D models of complex products.
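An Extreme Learning Machine of the kind mentioned, i.e. a random hidden layer with the output weights solved in closed form by regularised least squares, can be sketched as follows. The toy feature vectors and the `ELM` class are illustrative assumptions, not the paper's retrieval system.

```python
import numpy as np

rng = np.random.default_rng(4)

class ELM:
    """Minimal Extreme Learning Machine: random, untrained hidden layer;
    output weights obtained by a single ridge-regression solve."""
    def __init__(self, n_hidden=50, reg=1e-3):
        self.n_hidden, self.reg = n_hidden, reg

    def _hidden(self, X):
        return np.tanh(X @ self.Win + self.b)

    def fit(self, X, y):
        self.Win = rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        T = np.eye(y.max() + 1)[y]          # one-hot class targets
        self.Wout = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden),
                                    H.T @ T)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.Wout).argmax(axis=1)

# Two separable clusters of toy shape descriptors, one per model class.
X = np.vstack([rng.standard_normal((50, 3)) + 3, rng.standard_normal((50, 3)) - 3])
y = np.array([0] * 50 + [1] * 50)
model = ELM().fit(X, y)
print((model.predict(X) == y).mean())
```

Because only the output layer is fitted, training reduces to one linear solve, which is why ELMs suit fast classification-based retrieval over large case databases.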
34

Gradolewski, Dawid, Damian Dziak, Damian Kaniecki, Adam Jaworski, Michal Skakuj and Wlodek J. Kulesza. "A Runway Safety System Based on Vertically Oriented Stereovision". Sensors 21, no. 4 (February 20, 2021): 1464. http://dx.doi.org/10.3390/s21041464.

Abstract:
In 2020, over 10,000 bird strikes were reported in the USA, with average repair costs exceeding $200 million annually, rising to $1.2 billion worldwide. These collisions of avifauna with airplanes pose a significant threat to human safety and wildlife. This article presents a system dedicated to monitoring the space over an airport and is used to localize and identify moving objects. The solution is a stereovision based real-time bird protection system, which uses IoT and distributed computing concepts together with advanced HMI to provide the setup’s flexibility and usability. To create a high degree of customization, a modified stereovision system with freely oriented optical axes is proposed. To provide a market tailored solution affordable for small and medium size airports, a user-driven design methodology is used. The mathematical model is implemented and optimized in MATLAB. The implemented system prototype is verified in a real environment. The quantitative validation of the system performance is carried out using fixed-wing drones with GPS recorders. The results obtained prove the system’s high efficiency for detection and size classification in real-time, as well as a high degree of localization certainty.
35

Lakhanpal, Shailendra, Kailee Hawkins, Steven G. Dunder, Karri Donahue, Madeline Richey, Edward Liu, Alexandra Brachfeld et al. "An automated EHR-based tool to facilitate patient identification for biomarker-driven trials." Journal of Clinical Oncology 39, no. 15_suppl (May 20, 2021): 1539. http://dx.doi.org/10.1200/jco.2021.39.15_suppl.1539.

Abstract:
1539 Background: Clinical trial eligibility increasingly requires information found in NGS tests; lack of structured NGS results hinders the automation of trial matching for this criterion, which may be a deterrent to open biomarker-driven trials in certain sites. We developed a machine learning tool that infers the presence of NGS results in the EHR, facilitating clinical trial matching. Methods: The Flatiron Health EHR-derived database contains patient-level pathology and genetic counseling reports from community oncology practices. An internal team of clinical experts reviewed a random sample of patients across this network to generate labels of whether each patient had been NGS tested. A supervised ML model was trained by scanning documents in the EHR and extracting n-gram features from text snippets surrounding relevant keywords (i.e. 'Lung biomarker', 'Biomarker negative'). Through k-fold cross-validation and l2-regularization, we found that a logistic regression was able to classify patients' NGS testing status. The model's offline performance on a 20% hold-out test set was measured with standard classification metrics: sensitivity, specificity, positive predictive value (PPV) and NPV. In an online setting, we integrated the tool into Flatiron's clinical trial matching software OncoTrials by including in each patient's profile an indicator of "likely NGS tested" or "unlikely NGS tested" based on the classifier's prediction. For patients inferred as tested, the model linked users to a test report view in the EHR. In this online setting, we measured sensitivity and specificity of the model after user review in two community oncology practices. Results: This NGS testing status inference model was characterized using a test sample of 15,175 patients. The model sensitivity and specificity (95%CI) were 91.3% (90.2, 92.3) and 96.2% (95.8, 96.5), respectively; PPV was 84.5% (83.2, 85.8) and NPV was 98.0% (97.7, 98.2). 
In the validation sample (N = 200 originated from 2 distinct care sites), users identified NGS testing status with a sensitivity of 95.2% (88.3%, 98.7%). Conclusions: This machine learning model facilitates the screening for potential patient enrollment in biomarker-driven trials by automatically surfacing patients with NGS test results at high sensitivity and specificity into a trial matching application to identify candidates. This tool could mitigate a key barrier for participation in biomarker-driven trials for community clinics.
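The classifier described in this abstract, a logistic regression over n-gram features extracted from text snippets around relevant keywords, can be sketched as below. The snippets, vocabulary and hyperparameters are invented for illustration and are not Flatiron's model or data.

```python
import numpy as np

def ngram_features(snippets, vocab):
    """Bag-of-n-grams counts (word unigrams and bigrams) over a fixed vocabulary."""
    X = np.zeros((len(snippets), len(vocab)))
    for i, s in enumerate(snippets):
        toks = s.lower().split()
        grams = toks + [" ".join(p) for p in zip(toks, toks[1:])]
        for g in grams:
            if g in vocab:
                X[i, vocab[g]] += 1
    return X

def train_logreg(X, y, lr=0.5, l2=1e-2, steps=500):
    """Plain batch gradient descent on the l2-regularised logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y) / len(y) + l2 * w)
    return w

# Hypothetical snippets around NGS-related keywords; labels: 1 = NGS tested.
snips = ["lung biomarker panel results positive", "biomarker negative no ngs ordered",
         "ngs panel results reported", "no ngs testing performed"]
y = np.array([1, 0, 1, 0])
vocab = {g: i for i, g in enumerate(["results", "panel", "no ngs", "negative", "reported"])}
X = ngram_features(snips, vocab)
w = train_logreg(X, y)
pred = (1 / (1 + np.exp(-(X @ w))) > 0.5).astype(int)
print(pred)
```

The l2 penalty plays the role of the regularisation the abstract mentions selecting via k-fold cross-validation; a real system would of course learn the vocabulary from the labeled corpus rather than fix it by hand.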
36

Maheshwari, Nishith, Srishti Srivastava and Krishnan Sundara Rajan. "Development of an Indoor Space Semantic Model and Its Implementation as an IndoorGML Extension". ISPRS International Journal of Geo-Information 8, no. 8 (July 27, 2019): 333. http://dx.doi.org/10.3390/ijgi8080333.

Abstract:
Geospatial data capture and handling of indoor spaces is increasing over the years and has had a varied history of data sources ranging from architectural and building drawings to indoor data acquisition approaches. While these have been more data format and information driven primarily for the physical representation of spaces, it is important to note that many applications look for the semantic information to be made available. This paper proposes a space classification model leading to an ontology for indoor spaces that accounts for both the semantic and geometric characteristics of the spaces. Further, a Space semantic model is defined, based on this ontology, which can then be used appropriately in multiple applications. To demonstrate the utility of the model, we also present an extension to the IndoorGML data standard with a set of proposed classes that can help capture both the syntactic and semantic components of the model. It is expected that these proposed classes can be appropriately harnessed for use in diverse applications ranging from indoor data visualization to more user customised building evacuation path planning with a semantic overtone.
37

Kog, Faikcan, and Hakan Yaman. "A multi-agent systems-based contractor pre-qualification model". Engineering, Construction and Architectural Management 23, no. 6 (November 21, 2016): 709–26. http://dx.doi.org/10.1108/ecam-01-2016-0013.

Abstract:
Purpose The selection of the contractor, as a main participant in a construction project, is the most important and challenging decision a client faces. The purpose of this paper is to propose a multi-agent systems (MAS)-based contractor pre-qualification (CP) model for the construction sector within a tender management system. Design/methodology/approach A meta-classification and analysis of the existing literature on CP, contractor selection and criteria weighting, examining the current and important CP criteria other than price, is presented. A quantitative survey, carried out to estimate initial weightings of the identified criteria, is summarized. MAS are used to model the pre-qualification process, and workflows are shown in Petri net formalism. A user-friendly prototype program is created to simulate the tendering process. In addition, a real case concerning construction work in Turkey is analyzed. Findings There is a lack of non-human-driven solutions and automation in CP and contractor selection. The proposed model simulates the pre-qualification process and provides consistent results. Research limitations/implications The meta-classification study covers only peer-reviewed papers published between 1992 and 2013, and the quantitative survey reflects the perspectives of actors in the Turkish construction sector. Only the traditional project delivery method is considered; other delivery methods, such as design/build and project management, are excluded. Open, selective, limited and negotiated tendering processes are examined, while direct supply is out of scope. Practical implications The implications will help to provide an objective CP and selection process and to prevent the delays, costs and other troubles caused by selecting the wrong contractor.
Originality/value Automating and simulating contractor pre-qualification and selection with a non-human-driven intelligent solution eases clients' decision processes in terms of cost, time and quality.
38

Kasapakis, Vlasios, and Damianos Gavalas. "Revisiting design guidelines for pervasive games". International Journal of Pervasive Computing and Communications 13, no. 4 (November 6, 2017): 386–407. http://dx.doi.org/10.1108/ijpcc-d-17-00007.

Abstract:
Purpose Existing guidelines are typically extracted from a few empirical evaluations of pervasive game prototypes featuring incompatible scenarios, game play design and technical characteristics. Hence, the applicability of those design guidelines across the increasingly diverse landscape of pervasive games is questionable and should be investigated. Design/methodology/approach This paper presents Barbarossa, a scenario-driven pervasive game that encompasses different game modes, purposely adopting opposing principles in addressing the core elements of challenge and control. Using Barbarossa as a testbed, this study aims at validating the applicability of existing design guidelines across diverse game design approaches. Findings The compilation of Barbarossa user evaluation results confirmed the limited applicability of existing guidelines and provided evidence that developers should handle core game elements, taking into account the game play characteristics derived from their scenario. Originality/value Stepping upon those findings, the authors propose a revision of design guidelines relevant to control and challenge based on elaborate classification criteria for pervasive game prototypes.
39

Ahmad, Naim, Noorulhasan Quadri, Mohamed Qureshi, and Mohammad Alam. "Relationship Modeling of Critical Success Factors for Enhancing Sustainability and Performance in E-Learning". Sustainability 10, no. 12 (December 14, 2018): 4776. http://dx.doi.org/10.3390/su10124776.

Abstract:
E-learning, a technology-mediated learning approach, is a pervasively adopted teaching/learning mode for transferring knowledge. Some of the motivational factors for its wide adoption are time and location independence, user-friendliness, on-demand service, resource richness, and multi-media and technology-driven factors. Achieving sustainability and performance in its delivery is of paramount importance. This research utilizes the critical success factors (CSFs) approach to identify a sustainable E-learning implementation model. Fifteen CSFs have been identified through the literature review, expert opinions, and in-depth interviews. These CSFs have been modeled for interdependence using interpretive structural modeling (ISM) and Matrice d'Impacts Croisés Multiplication Appliquée à un Classement (MICMAC) analysis. Further, the model has been validated through in-depth interviews. The present research provides quantification of CSFs of E-learning in terms of their driving and dependence powers and their classification through MICMAC analysis. The E-learning system organizers may focus on improving upon enablers such as organizational infrastructure readiness, efficient technology infrastructure, appropriate E-learning course design, course flexibility, understandable relevant content, stakeholders' training, security, access control and privileges, commitment, and being user-friendly and well-organized, in order to enhance sustainability and performance in E-learning. This study will also help E-learning stakeholders in relocating and prioritizing resources.
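The driving/dependence quantification described in this abstract — the MICMAC step of interpretive structural modeling — reduces to row and column sums of a final reachability matrix. A minimal sketch, using an invented four-factor matrix rather than the paper's fifteen CSFs:

```python
# Hypothetical ISM/MICMAC example: driving power is a factor's row sum in
# the reachability matrix, dependence power its column sum; together they
# place the factor in one of four MICMAC quadrants.

def micmac(reachability, names):
    n = len(reachability)
    driving = [sum(row) for row in reachability]                          # row sums
    dependence = [sum(row[j] for row in reachability) for j in range(n)]  # column sums
    mid = n / 2
    result = {}
    for i, name in enumerate(names):
        if driving[i] > mid and dependence[i] > mid:
            quadrant = "linkage"
        elif driving[i] > mid:
            quadrant = "independent (driver)"
        elif dependence[i] > mid:
            quadrant = "dependent"
        else:
            quadrant = "autonomous"
        result[name] = (driving[i], dependence[i], quadrant)
    return result

# Invented reachability matrix (1 = factor i drives factor j, incl. itself).
factors = ["infrastructure", "course design", "training", "performance"]
R = [[1, 1, 1, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]
for name, (drv, dep, quad) in micmac(R, factors).items():
    print(f"{name}: driving={drv}, dependence={dep} -> {quad}")
```

Here "infrastructure" comes out as a driver and "performance" as purely dependent, mirroring the enabler-versus-outcome reading the study makes of its own factors.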
40

Crocamo, Cristina, Marco Viviani, Francesco Bartoli, Giuseppe Carrà, and Gabriella Pasi. "Detecting Binge Drinking and Alcohol-Related Risky Behaviours from Twitter’s Users: An Exploratory Content- and Topology-Based Analysis". International Journal of Environmental Research and Public Health 17, no. 5 (February 26, 2020): 1510. http://dx.doi.org/10.3390/ijerph17051510.

Abstract:
Binge Drinking (BD) is a common risky behaviour that people hardly report to healthcare professionals, although it is not uncommon to find, instead, personal communications related to alcohol-related behaviours on social media. By following a data-driven approach focusing on User-Generated Content, we aimed to detect potential binge drinkers through the investigation of their language and shared topics. First, we gathered Twitter threads quoting BD and alcohol-related behaviours, by considering unequivocal keywords, identified by experts, from previous evidence on BD. Subsequently, a random sample of the gathered tweets was manually labelled, and two supervised learning classifiers were trained on both linguistic and metadata features, to classify tweets of genuine unique users with respect to media, bot, and commercial accounts. Based on this classification, we observed that approximately 55% of the 1 million collected alcohol-related tweets were automatically identified as belonging to non-genuine users. A third classifier was then trained on a subset of manually labelled tweets among those previously identified as belonging to genuine accounts, to automatically identify potential binge drinkers based only on linguistic features. On average, users classified as binge drinkers were quite similar to the standard genuine Twitter users in our sample. Nonetheless, the analysis of social media contents of genuine users reporting risky behaviours remains a promising source for informed preventive programs.
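The supervised step described here — training a classifier on linguistic features of manually labelled tweets — can be illustrated with a hand-rolled multinomial naive Bayes over bag-of-words counts. The four "tweets", their labels, and the model choice are stand-ins, not the authors' data or classifiers:

```python
# Toy text classification on bag-of-words (linguistic) features with
# Laplace-smoothed multinomial naive Bayes, implemented from scratch.
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (text, label). Returns priors, word counts, vocab."""
    class_counts = Counter(label for _, label in samples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def classify_nb(text, class_counts, word_counts, vocab):
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, count in class_counts.items():
        lp = math.log(count / total)                       # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Invented labelled tweets (genuine user vs commercial account).
labelled = [
    ("so wasted again last night lol", "genuine"),
    ("cant remember half the party haha", "genuine"),
    ("buy two beers get one free this weekend", "commercial"),
    ("discount on craft beer order online now", "commercial"),
]
model = train_nb(labelled)
print(classify_nb("free beer discount online", *model))   # → commercial
```

The paper additionally uses metadata features (posting topology, account properties); those would simply be appended to the feature vector in a real pipeline.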
41

Grant, Susan B. "Classifying emerging knowledge sharing practices and some insights into antecedents to social networking: a case in insurance". Journal of Knowledge Management 20, no. 5 (September 12, 2016): 898–917. http://dx.doi.org/10.1108/jkm-11-2015-0432.

Abstract:
Purpose The paper aims to explore a case of early adoption of the use of social media tools for the purposes of knowledge and information sharing across a supply chain in the UK home insurance market. Design/methodology/approach The methodology used includes genre and content analysis to analyze empirical data from blogs and posts via a customized social extranet [Engaging in Knowledge Networking via an interactive 3D Social Supplier Network (KNOWLEDGE NETWORK)] involving 130 users over a 13-month period. Findings The results uncover a set of emerging practices which support both information and knowledge exchange, but which are mainly driven by organizational factors such as buyer power and supplier competitive influencing. Research limitations/implications This study has contributed an overall conceptual understanding of reasons behind social media adoption by identifying organizational attributes of buyer power and supplier influence as key antecedents to knowledge sharing within a supply chain. Originality/value This paper builds on current thinking in social media theory by providing a window into organizational and supply chain attributes that can explain social media adoption within the context of knowledge sharing supply chains. A systematic classification of user posts over an extended period enabled this work to illuminate not only emerging knowledge sharing practices across a buyer-led supply chain but also the effects of buyer power on users in an online community.
42

Meiner, Andrus. "Integration of GIS and a dynamic spatially distributed model for non-point source pollution management". Water Science and Technology 33, no. 4-5 (February 1, 1996): 211–18. http://dx.doi.org/10.2166/wst.1996.0507.

Abstract:
The integrated system described in the paper combines GIS software based on a vector data structure with a simulation model for non-point source pollution management. Territory is disaggregated to spatial modelling units through a series of map overlays. The two major software components are linked through a set of the GIS package's macro language routines and operating system script files. The level of integration can be described as “running the model as a GIS subsystem.” It contains the following steps: invoking the GIS software; calling the model for a simulation run (with possible input editing); import of the model output to a GIS compatible database file; establishing a file relation between the model output and the georeferenced attribute file; and displaying and analysing model results. The user interface is menu-driven, and options are provided for defining the temporal and spatial extent of the modelling session. After a simulation run, results are available for further analysis in the form of a classification map and a network loaded with the simulated variable.
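The coupling steps listed in this abstract — run the model, import its output, relate it to the georeferenced attribute file by a shared unit ID, classify for the results map — can be sketched generically. The stub model, unit IDs, attribute fields, and load thresholds below are all invented:

```python
def run_model(units):
    """Stub standing in for the external non-point-source simulation:
    returns a nutrient load per spatial modelling unit."""
    return {uid: round(a["area_ha"] * a["load_factor"], 1)
            for uid, a in units.items()}

def join_to_attributes(units, model_output):
    """GIS-side step: relate model output to the georeferenced attribute
    table via the shared unit ID, then classify it for the results map."""
    joined = {}
    for uid, attrs in units.items():
        load = model_output[uid]
        klass = "high" if load > 50 else "moderate" if load > 10 else "low"
        joined[uid] = {**attrs, "load_kg": load, "class": klass}
    return joined

# Invented spatial modelling units, as produced by the map-overlay step.
units = {"u1": {"area_ha": 120.0, "load_factor": 0.6},
         "u2": {"area_ha": 35.0, "load_factor": 0.4},
         "u3": {"area_ha": 8.0, "load_factor": 0.2}}
result = join_to_attributes(units, run_model(units))
for uid, rec in result.items():
    print(uid, rec["load_kg"], rec["class"])
```

In the paper this linkage is done with GIS macro-language routines and operating-system scripts rather than Python, but the data flow is the same.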
43

Capdevila, Ignasi. "Joining a collaborative space: is it really a better place to work?" Journal of Business Strategy 40, no. 2 (April 15, 2019): 14–21. http://dx.doi.org/10.1108/jbs-09-2017-0140.

Abstract:
Purpose Collaborative spaces such as Fab Labs, Living Labs, coworking spaces, hackerspaces, makerspaces, etc. are localized spaces that offer open access to resources. The purpose of this paper is to explain what motivates participants in such spaces, according to different innovation logics. Design/methodology/approach The paper is based on qualitative studies of 43 collaborative spaces in Paris and Barcelona. Findings This paper proposes a typology of different collaborative spaces to understand what motivates their participants. The classification is based on the innovation approach of each type of space: methods and techniques of ideation, social innovation, open innovation and user-driven innovation. Research limitations/implications The classification of collaborative spaces clearly identifies different innovation approaches. However, it might prove too simplistic and may not represent all spaces under the same denomination. Practical implications This paper provides some guidelines for managers who run or intend to open a collaborative space. In bottom-up innovation modes, to increase the commitment of the participants, managers should provide the tools and resources needed to successfully achieve the goals of the members’ projects. In top-down innovation modes, managers should rather focus on designing an attractive and rewarding process of ideation. Originality/value This paper contributes to the understanding of collaborative spaces; it shows that participants’ engagement is related to the nature of the innovation activities that take place in collaborative spaces, and it compares different types of spaces to explain their differences and similarities.
44

Irawan, Dasapta Erwin, Muhammad Aswan Syahputra, Prana Ugi, and Deny Juanda Puradimaja. "Thermostats: an Open Source Shiny App for Your Open Data Repository". JOIV : International Journal on Informatics Visualization 3, no. 2-2 (August 17, 2019): 233. http://dx.doi.org/10.30630/joiv.3.2-2.282.

Abstract:
Hydrochemical analysis has emerged as a powerful methodology in geothermal system profiling. Indonesia is the capital of geothermal energy with its more than 100 active volcanoes. Therefore we need to have an analytical, data-driven, and user-focused online application of geothermal water quality. Proudly we introduce Thermostats (https://aswansyahputra.shinyapps.io/thermostats/). We collected water quality from 416 geothermal sites across Indonesia. Three main objectives are to provide an online open-free to use data repository, to visualize the dataset to suit user’s needs, and to help users understand the geothermal system of each particular site. At the end, we hope they like this system and donate their own dataset to make it better for future users. We designed this online app using Shiny, because it’s open source, lightweight and portable. It’s very intuitive to load our descriptive, bivariate and multivariate statistics. We selected Principal Component Analysis and Cluster Analysis as two strong statistics for water sample classification. Users could add their own dataset by making a pull request on Github (https://github.com/dasaptaerwin/thermostats) or sending it to us by email to make it visible in the application and included in the visualization. We make this application portable, so it can be installed on a local computer or a server, to enable an easy and fluid way of data sharing between collaborators.
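Of the two classification statistics the app offers, the PCA step can be sketched for two variables in plain Python (the app itself is written in R/Shiny). The chloride/boron pairs are invented; real hydrochemical input would carry many more ions and samples:

```python
# Two-variable PCA from first principles: covariance matrix, leading
# eigenvalue, and the matching eigenvector (the first principal component).
import math

def pca_2d(points):
    """Return (share of variance on PC1, unit PC1 direction) for 2-D data."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)   # leading eigenvalue
    vx, vy = sxy, lam1 - sxx                       # matching eigenvector
    norm = math.hypot(vx, vy)
    return lam1 / tr, (vx / norm, vy / norm)

# Invented (chloride, boron) concentrations in mg/L; the two ions rise
# together along a mixing line, so PC1 should capture nearly everything.
samples = [(100, 1.0), (400, 4.2), (800, 7.9), (1200, 12.1), (1500, 15.2)]
explained, direction = pca_2d(samples)
print(f"PC1 explains {explained:.1%} of the variance")
```

For this collinear toy data PC1 carries essentially all the variance; with real multi-ion data the variables should be standardized first, since a high-concentration ion like chloride would otherwise dominate the covariance.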
45

Han, Te, Dongxiang Jiang, Qi Zhao, Lei Wang, and Kai Yin. "Comparison of random forest, artificial neural networks and support vector machine for intelligent diagnosis of rotating machinery". Transactions of the Institute of Measurement and Control 40, no. 8 (June 1, 2017): 2681–93. http://dx.doi.org/10.1177/0142331217708242.

Abstract:
Nowadays, the data-driven diagnosis method, exploiting pattern recognition method to diagnose the fault patterns automatically, achieves much success for rotating machinery. Some popular classification algorithms such as artificial neural networks and support vector machine have been extensively studied and tested with many application cases, while the random forest, one of the present state-of-the-art classifiers based on ensemble learning strategy, is relatively unknown in this field. In this paper, the behavior of random forest for the intelligent diagnosis of rotating machinery is investigated with various features on two datasets. A framework for the comparison of different methods, that is, random forest, extreme learning machine, probabilistic neural network and support vector machine, is presented to find the most efficient one. Random forest has been proven to outperform the comparative classifiers in terms of recognition accuracy, stability and robustness to features, especially with a small training set. Additionally, compared with traditional methods, random forest is not easily influenced by environmental noise. Furthermore, the user-friendly parameters in random forest offer great convenience for practical engineering. These results suggest that random forest is a promising pattern recognition method for the intelligent diagnosis of rotating machinery.
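The ensemble strategy underlying random forest — base learners trained on bootstrap samples and combined by majority vote — can be illustrated in miniature with decision stumps. This toy bagging sketch and its two-feature "vibration" samples are illustrative, not the authors' diagnosis pipeline:

```python
# Bagging with decision stumps: each stump is trained on a bootstrap
# resample of the data; predictions are aggregated by majority vote.
import random
from collections import Counter

def fit_stump(data):
    """Exhaustively pick the (feature, threshold) split with fewest errors."""
    best = None
    for f in range(len(data[0][0])):
        for x, _ in data:
            t = x[f]
            left = [y for xx, y in data if xx[f] <= t]
            right = [y for xx, y in data if xx[f] > t]
            if not left or not right:
                continue
            lmaj = Counter(left).most_common(1)[0][0]
            rmaj = Counter(right).most_common(1)[0][0]
            errors = sum(y != lmaj for y in left) + sum(y != rmaj for y in right)
            if best is None or errors < best[0]:
                best = (errors, f, t, lmaj, rmaj)
    if best is None:                          # degenerate bootstrap: constant stump
        label = data[0][1]
        return (0, float("inf"), label, label)
    return best[1:]

def predict_stump(stump, x):
    f, t, lmaj, rmaj = stump
    return lmaj if x[f] <= t else rmaj

def bagged_predict(data, x, n_trees=25, seed=0):
    rng = random.Random(seed)                 # deterministic for reproducibility
    votes = Counter()
    for _ in range(n_trees):
        boot = [rng.choice(data) for _ in data]   # bootstrap sample
        votes[predict_stump(fit_stump(boot), x)] += 1
    return votes.most_common(1)[0][0]

# Hypothetical features: (RMS vibration amplitude, dominant frequency in Hz).
data = [((0.2, 50), "normal"), ((0.3, 52), "normal"), ((0.25, 49), "normal"),
        ((1.1, 120), "fault"), ((1.3, 118), "fault"), ((0.9, 125), "fault")]
print(bagged_predict(data, (1.0, 119)))   # → fault
```

A real random forest additionally grows full trees and samples a random feature subset at every split; the paper's point about robustness with small training sets stems from exactly this vote-averaging over resampled learners.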
46

Himeur, Yassine, Abdullah Alsalemi, Faycal Bensaali, and Abbes Amira. "A Novel Approach for Detecting Anomalous Energy Consumption Based on Micro-Moments and Deep Neural Networks". Cognitive Computation 12, no. 6 (September 25, 2020): 1381–401. http://dx.doi.org/10.1007/s12559-020-09764-y.

Abstract:
Nowadays, analyzing, detecting, and visualizing abnormal power consumption behavior of householders are among the principal challenges in identifying ways to reduce power consumption. This paper introduces a new solution to detect energy consumption anomalies based on extracting micro-moment features using a rule-based model. The latter is used to draw out load characteristics using daily intent-driven moments of user consumption actions. Besides micro-moment features extraction, we also experiment with a deep neural network architecture for efficient abnormality detection and classification. In the following, a novel anomaly visualization technique is introduced that is based on a scatter representation of the micro-moment classes, and hence providing consumers an easy solution to understand their abnormal behavior. Moreover, in order to validate the proposed system, a new energy consumption dataset at appliance level is also designed through a measurement campaign carried out at Qatar University Energy Lab, namely, Qatar University dataset. Experimental results on simulated and real datasets collected at two regions, which have extremely different climate conditions, confirm that the proposed deep micro-moment architecture outperforms other machine learning algorithms and can effectively detect anomalous patterns. For example, 99.58% accuracy and 97.85% F1 score have been achieved under Qatar University dataset. These promising results establish the efficacy of the proposed deep micro-moment solution for detecting abnormal energy consumption, promoting energy efficiency behaviors, and reducing wasted energy.
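The rule-based micro-moment extraction described in this abstract amounts to labelling each appliance reading with a consumption class; the class names and thresholds in this sketch are illustrative assumptions, not the paper's exact rules:

```python
# Toy rule-based micro-moment labelling: each (power, device state,
# occupancy) reading becomes a class that a downstream classifier or
# visualization can consume.

def micro_moment(power_w, device_on, room_occupied, typical_w):
    if not device_on:
        return "off"
    if not room_occupied:
        return "anomaly: consumption while outside"
    if power_w > 1.5 * typical_w:                 # assumed excess threshold
        return "anomaly: excessive consumption"
    return "normal usage"

# Invented appliance readings: (power W, device on?, room occupied?, typical W).
readings = [
    (900, True, True, 800),    # within the usual envelope
    (2000, True, True, 800),   # far above typical draw
    (800, True, False, 800),   # running with nobody in the room
    (0, False, True, 800),     # switched off
]
for r in readings:
    print(micro_moment(*r))
```

In the paper these class labels (together with the raw features) feed a deep neural network; the rules only bootstrap the feature space.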
47

Athanas, Argus J., Jamison M. McCorrison, Susan Smalley, Jamie Price, Jim Grady, Julie Campistron, and Nicholas J. Schork. "Association Between Improvement in Baseline Mood and Long-Term Use of a Mindfulness and Meditation App: Observational Study". JMIR Mental Health 6, no. 5 (May 8, 2019): e12617. http://dx.doi.org/10.2196/12617.

Abstract:
Background The use of smartphone apps to monitor and deliver health care guidance and interventions has received considerable attention recently, particularly with regard to behavioral disorders, stress relief, negative emotional state, and poor mood in general. Unfortunately, there is little research investigating the long-term and repeated effects of apps meant to impact mood and emotional state. Objective We aimed to investigate the effects of both immediate point-of-intervention and long-term use (ie, at least 10 engagements) of a guided meditation and mindfulness smartphone app on users’ emotional states. Data were collected from users of a mobile phone app developed by the company Stop, Breathe & Think (SBT) for achieving emotional wellness. To explore the long-term effects, we assessed changes in the users’ basal emotional state before they completed an activity (eg, a guided meditation). We also assessed the immediate effects of the app on users’ emotional states from preactivity to postactivity. Methods The SBT app collects information on the emotional state of the user before and after engagement in one or several meditation and mindfulness activities. These activities are recommended and provided by the app based on user input. We considered data on over 120,000 users of the app who collectively engaged in over 5.5 million sessions with the app during an approximate 2-year period. We focused our analysis on users who had at least 10 engagements with the app over an average of 6 months. We explored the changes in the emotional well-being of individuals with different emotional states at the time of their initial engagement with the app using mixed-effects models. In the process, we compared 2 different methods of classifying emotional states: (1) an expert-defined a priori mood classification and (2) an empirically driven cluster-based classification.
Results We found that among long-term users of the app, there was an association between the length of use and a positive change in basal emotional state (4% positive mood increase on a 2-point scale every 10 sessions). We also found that individuals who were anxious or depressed tended to have a favorable long-term emotional transition (eg, from a sad emotional state to a happier emotional state) after using the app for an extended period (the odds ratio for achieving a positive emotional state was 3.2 and 6.2 for anxious and depressed individuals, respectively, compared with users with fewer sessions). Conclusions Our analyses provide evidence for an association between both immediate and long-term use of an app providing guided meditations and improvements in the emotional state.
48

Thiesen, Stephanie, Paul Darscheid, and Uwe Ehret. "Identifying rainfall-runoff events in discharge time series: a data-driven method based on information theory". Hydrology and Earth System Sciences 23, no. 2 (February 19, 2019): 1015–34. http://dx.doi.org/10.5194/hess-23-1015-2019.

Abstract:
Abstract. In this study, we propose a data-driven approach for automatically identifying rainfall-runoff events in discharge time series. The core of the concept is to construct and apply discrete multivariate probability distributions to obtain probabilistic predictions of each time step that is part of an event. The approach permits any data to serve as predictors, and it is non-parametric in the sense that it can handle any kind of relation between the predictor(s) and the target. Each choice of a particular predictor data set is equivalent to formulating a model hypothesis. Among competing models, the best is found by comparing their predictive power in a training data set with user-classified events. For evaluation, we use measures from information theory such as Shannon entropy and conditional entropy to select the best predictors and models and, additionally, measure the risk of overfitting via cross entropy and Kullback–Leibler divergence. As all these measures are expressed in “bit”, we can combine them to identify models with the best tradeoff between predictive power and robustness given the available data. We applied the method to data from the Dornbirner Ach catchment in Austria, distinguishing three different model types: models relying on discharge data, models using both discharge and precipitation data, and recursive models, i.e., models using their own predictions of a previous time step as an additional predictor. In the case study, the additional use of precipitation reduced predictive uncertainty only by a small amount, likely because the information provided by precipitation is already contained in the discharge data. 
More generally, we found that the robustness of a model quickly dropped with the increase in the number of predictors used (an effect well known as the curse of dimensionality) such that, in the end, the best model was a recursive one applying four predictors (three standard and one recursive): discharge from two distinct time steps, the relative magnitude of discharge compared with all discharge values in a surrounding 65 h time window and event predictions from the previous time step. Applying the model reduced the uncertainty in event classification by 77.8 %, decreasing conditional entropy from 0.516 to 0.114 bits. To assess the quality of the proposed method, its results were binarized and validated through a holdout method and then compared to a physically based approach. The comparison showed similar behavior of both models (both with accuracy near 90 %), and the cross-validation reinforced the quality of the proposed model. Given enough data to build data-driven models, their potential lies in the way they learn and exploit relations between data unconstrained by functional or parametric assumptions and choices. And, beyond that, the use of these models to reproduce a hydrologist's way of identifying rainfall-runoff events is just one of many potential applications.
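The information-theoretic bookkeeping used in this study — predictive power measured as the drop from H(event) to H(event | predictor), in bits — can be computed directly from discrete counts. The ten-step series and the binary "rising discharge" predictor below are invented:

```python
# Shannon entropy and conditional entropy (in bits) from discrete labels,
# mirroring the uncertainty-reduction measure used for event classification.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(predictor, target):
    """H(target | predictor) as the weighted entropy within predictor bins."""
    n = len(target)
    h = 0.0
    for value in set(predictor):
        subset = [t for p, t in zip(predictor, target) if p == value]
        h += (len(subset) / n) * entropy(subset)
    return h

# Invented time steps: is this step part of a rainfall-runoff event?
event  = [0, 0, 1, 1, 1, 0, 0, 1, 1, 0]
rising = [0, 1, 1, 1, 1, 0, 0, 1, 1, 0]   # discharge rising at this step?

H = entropy(event)
H_cond = conditional_entropy(rising, event)
print(f"H(event) = {H:.3f} bit, H(event | rising) = {H_cond:.3f} bit")
```

For this toy series the predictor cuts uncertainty from 1.000 to roughly 0.390 bit — the same bookkeeping the paper applies at catchment scale, where the best model reduced conditional entropy from 0.516 to 0.114 bit.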
49

Vogeler, Jody, Robert Slesak, Patrick Fekety, and Michael Falkowski. "Characterizing over Four Decades of Forest Disturbance in Minnesota, USA". Forests 11, no. 3 (March 24, 2020): 362. http://dx.doi.org/10.3390/f11030362.

Abstract:
Spatial information about disturbance driven patterns of forest structure and ages across landscapes provide a valuable resource for all land management efforts including cross-ownership collaborative forest treatments and restoration. While disturbance events in general are known to impact stand characteristics, the agent of change may also influence recovery and the supply of ecosystem services. Our study utilizes the full extent of the Landsat archive to identify the timing, extent, magnitude, and agent, of the most recent fast disturbance event for all forested lands within Minnesota, USA. To account for the differences in the Landsat sensors through time, specifically the coarser spatial, spectral, and radiometric resolutions of the early MSS sensors, we employed a two-step approach, first harmonizing spectral indices across the Landsat sensors, then applying a segmentation algorithm to fit temporal trends to the time series to identify abrupt forest disturbance events. We further incorporated spectral, topographic, and land protection information in our classification of the agent of change for all disturbance patches. After allowing two years for the time series to stabilize, we were able to identify the most recent fast disturbance events across Minnesota from 1974–2018 with a change versus no-change validation accuracy of 97.2% ± 1.9%, and higher omission (14.9% ± 9.3%) than commission errors (1.6% ± 1.9%) for the identification of change patches. Our classification of the agent of change exhibited an overall accuracy of 96.5% ± 1.9% with classes including non-disturbed forest, land conversion, fire, flooding, harvest, wind/weather, and other rare natural events. Individual class errors varied, but all class user and producer accuracies were above 78%. 
The unmatched nature of the Landsat archive for providing comparable forest attribute and change information across more than four decades highlights the value of the totality of the Landsat program to the larger geospatial, ecological research, and forest management communities.
50

Chauhan, Swarup, Kathleen Sell, Wolfram Rühaak, Thorsten Wille, and Ingo Sass. "CobWeb 1.0: machine learning toolbox for tomographic imaging". Geoscientific Model Development 13, no. 1 (January 31, 2020): 315–34. http://dx.doi.org/10.5194/gmd-13-315-2020.

Abstract:
Abstract. Despite the availability of both commercial and open-source software, an ideal tool for digital rock physics analysis for accurate automatic image analysis at ambient computational performance is difficult to pinpoint. More often, image segmentation is driven manually, where the performance remains limited to two phases. Discrepancies due to artefacts cause inaccuracies in image analysis. To overcome these problems, we have developed CobWeb 1.0, which is automated and explicitly tailored for accurate greyscale (multiphase) image segmentation using unsupervised and supervised machine learning techniques. In this study, we demonstrate image segmentation using unsupervised machine learning techniques. The simple and intuitive layout of the graphical user interface enables easy access to perform image enhancement and image segmentation, and further to obtain the accuracy of different segmented classes. The graphical user interface enables not only processing of a full 3-D digital rock dataset but also provides a quick and easy region-of-interest selection, where a representative elementary volume can be extracted and processed. The CobWeb software package covers image processing and machine learning libraries of MATLAB® used for image enhancement and image segmentation operations, which are compiled into series of Windows-executable binaries. Segmentation can be performed using unsupervised, supervised and ensemble classification tools. Additionally, based on the segmented phases, geometrical parameters such as pore size distribution, relative porosity trends and volume fraction can be calculated and visualized. The CobWeb software allows the export of data to various formats such as ParaView (.vtk), DSI Studio (.fib) for visualization and animation, and Microsoft® Excel and MATLAB® for numerical calculation and simulations. 
The capability of this new software is verified using high-resolution synchrotron tomography datasets, as well as lab-based (cone-beam) X-ray microtomography datasets. Regardless of the high spatial resolution (submicrometre), the synchrotron dataset contained edge enhancement artefacts which were eliminated using a novel dual filtering and dual segmentation procedure.
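The unsupervised segmentation step at the core of such a toolbox can be reduced to its simplest form: k-means on greyscale intensities, assigning each voxel to the nearest class centre. The voxel values and the three-phase (pore / matrix / dense mineral) interpretation below are a made-up stand-in for a CT slice:

```python
# 1-D k-means over greyscale intensities: the minimal version of
# multiphase tomographic image segmentation.

def kmeans_1d(values, centres, iters=10):
    for _ in range(iters):
        buckets = [[] for _ in centres]
        for v in values:
            j = min(range(len(centres)), key=lambda c: abs(v - centres[c]))
            buckets[j].append(v)
        # move each centre to the mean of its bucket (keep empty centres)
        centres = [sum(b) / len(b) if b else centres[i]
                   for i, b in enumerate(buckets)]
    return centres

voxels = [12, 15, 10, 14,        # pore space (dark)
          120, 130, 125, 118,    # rock matrix
          240, 250, 245]         # dense mineral (bright)
centres = kmeans_1d(voxels, [0, 128, 255])
labels = [min(range(3), key=lambda c: abs(v - centres[c])) for v in voxels]
print([round(c) for c in centres])   # → [13, 123, 245]
```

CobWeb itself layers supervised and ensemble classifiers, artefact filtering, and 3-D region-of-interest handling on top of this basic idea.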
