Academic literature on the topic 'Dataset selection'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Dataset selection.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Dataset selection"

1

Kumar, H. M. Keerthi, and B. S. Harish. "A New Feature Selection Method for Sentiment Analysis in Short Text." Journal of Intelligent Systems 29, no. 1 (December 4, 2018): 1122–34. http://dx.doi.org/10.1515/jisys-2018-0171.

Abstract:
In the recent internet era, micro-blogging sites produce enormous amounts of short textual information in the form of users' opinions or sentiments. Sentiment analysis in short text is challenging due to informal language, misspellings, and shortened word forms, which lead to high dimensionality and sparsity. To deal with these challenges, this paper proposes a novel, simple, yet effective feature selection method that selects frequently distributed features related to each class. The method is based on class-wise information, identifying the relevant features for each class. The authors evaluate it against existing feature selection methods such as chi-square (χ2), entropy, information gain, and mutual information. Performance is measured by the classification accuracy of support vector machine, k-nearest neighbors, and random forest classifiers on two publicly available datasets, the Stanford Twitter dataset and the Ravikiran Janardhana dataset. Extensive experiments with different feature sets show that the proposed method outperforms the existing methods in classification accuracy on the Stanford Twitter dataset, and performs comparably to them on most feature subsets of the Ravikiran Janardhana dataset.
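To make the comparison concrete: the χ² criterion the authors benchmark against scores a term by how unevenly it is distributed across classes. Below is a minimal sketch of that baseline, not the paper's code; the toy documents, labels, and vocabulary are invented for illustration:

```python
from collections import Counter

def chi2_score(docs, labels, term):
    # 2x2 contingency of term presence vs. class membership, per class;
    # the term's score is its best chi-square over the classes.
    classes = sorted(set(labels))
    n = len(docs)
    present = [term in d for d in docs]
    score = 0.0
    for c in classes:
        a = sum(1 for p, y in zip(present, labels) if p and y == c)       # term & class
        b = sum(1 for p, y in zip(present, labels) if p and y != c)       # term & other
        c_ = sum(1 for p, y in zip(present, labels) if not p and y == c)  # no term & class
        d = sum(1 for p, y in zip(present, labels) if not p and y != c)   # neither
        num = n * (a * d - b * c_) ** 2
        den = (a + c_) * (b + d) * (a + b) * (c_ + d)
        score = max(score, num / den if den else 0.0)
    return score

docs = [{"good", "movie"}, {"bad", "plot"}, {"good", "acting"}, {"bad", "movie"}]
labels = ["pos", "neg", "pos", "neg"]
ranked = sorted({"good", "bad", "movie", "plot", "acting"},
                key=lambda t: chi2_score(docs, labels, t), reverse=True)
```

Class-discriminative terms ("good", "bad") rank above terms spread evenly across classes ("movie").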
2

Endalie, Demeke, and Getamesay Haile. "Hybrid Feature Selection for Amharic News Document Classification." Mathematical Problems in Engineering 2021 (March 11, 2021): 1–8. http://dx.doi.org/10.1155/2021/5516262.

Abstract:
Today, the number of Amharic digital documents has grown rapidly, making automatic text classification extremely important. Proper feature selection plays a crucial role in classification accuracy and computational time, and when the initial feature set is considerably large, it is important to pick the right features. This paper presents a hybrid feature selection method, called IGCHIDF, which combines the information gain (IG), chi-square (CHI), and document frequency (DF) feature selection methods. The method is evaluated on two datasets: dataset 1, containing 9 news categories, and dataset 2, containing 13. Experimental results show that the proposed method performs better than the others on both datasets 1 and 2. On dataset 2, IGCHIDF's classification accuracy is up to 3.96% higher than IG, up to 11.16% higher than CHI, and 7.3% higher than DF.
3

Peter, Timm J., and Oliver Nelles. "Fast and simple dataset selection for machine learning." at - Automatisierungstechnik 67, no. 10 (October 25, 2019): 833–42. http://dx.doi.org/10.1515/auto-2019-0010.

Abstract:
The task of data reduction is discussed and a novel selection approach is proposed that allows control over the point distribution of the selected data subset. The approach relies on estimating probability density functions (pdfs). Owing to its structure, the new method can select a subset either by approximating the pdf of the original dataset or by approximating an arbitrary, desired target pdf. The new strategy evaluates the estimated pdfs solely at the selected data points, resulting in a simple and efficient algorithm with low computational and memory demands. Performance is investigated in two scenarios. For representative subset selection of a dataset, the new approach is compared to a recently proposed, more complex method and shows comparable results. To demonstrate its ability to match a target pdf, a uniform distribution is chosen as an example; here the new method is compared to strategies for space-filling design of experiments and shows convincing results.
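The selection idea sketched in the abstract can be illustrated with a greedy toy version: estimate the subset's density only at candidate points, and at each step add the point where that density falls furthest below the desired target pdf. This is a simplified sketch under invented data and a uniform target, not the authors' algorithm:

```python
import numpy as np

def select_subset(data, k, target_pdf, bandwidth=0.2):
    # Greedy sketch: add the point where the subset's current Gaussian
    # kernel density estimate is furthest below the target pdf.
    selected = [data[np.argmin(np.abs(data - np.median(data)))]]  # seed near centre
    for _ in range(k - 1):
        best, best_gap = None, -np.inf
        for x in data:
            if any(np.isclose(x, s) for s in selected):
                continue  # already chosen
            kde = np.mean(np.exp(-0.5 * ((x - np.array(selected)) / bandwidth) ** 2)) \
                  / (bandwidth * np.sqrt(2 * np.pi))
            gap = target_pdf(x) - kde  # underrepresented regions have a large gap
            if gap > best_gap:
                best, best_gap = x, gap
        selected.append(best)
    return np.array(selected)

rng = np.random.default_rng(0)
data = rng.normal(0.5, 0.15, 200)               # clustered original dataset
uniform = lambda x: 1.0 if 0 <= x <= 1 else 0.0  # desired target pdf on [0, 1]
subset = select_subset(data, 10, uniform)
```

The selected subset spreads out toward the tails of the clustered data, approximating the uniform target rather than the original density.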
4

Perez-Alvarez, Susana, Guadalupe Gómez, and Christian Brander. "FARMS: A New Algorithm for Variable Selection." BioMed Research International 2015 (2015): 1–11. http://dx.doi.org/10.1155/2015/319797.

Abstract:
Large datasets including extensive numbers of covariates are generated these days in many different situations, for instance, in detailed genetic studies of outbred human populations or in complex analyses of immune responses to different infections. To inform clinical interventions or vaccine design, methods for variable selection that identify the variables with the best prediction performance for a specific outcome are crucial. However, testing all potential subsets of variables is not feasible, and alternatives to existing methods are needed. Here, we describe a new method for handling such complex datasets, referred to as FARMS, which combines forward and all-subsets regression for model selection. We apply FARMS to a host genetic and immunological dataset of over 800 HIV-infected individuals from Lima (Peru) and Durban (South Africa) who were tested for antiviral immune responses. This dataset includes more than 500 explanatory variables: around 400 with information on HIV immune reactivity and around 100 individual genetic characteristics. We have implemented FARMS in the R statistical language and showed that FARMS is fast and outcompetes other comparable, commonly used approaches, thus providing a new tool for the thorough analysis of complex datasets without the need for massive computational infrastructure.
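The combination described in the abstract can be sketched roughly as follows: forward selection shortlists a few promising covariates by residual sum of squares, then all small subsets of the shortlist are searched exhaustively. A hedged illustration on synthetic data (function names and sizes are invented; this is not the FARMS implementation):

```python
import numpy as np
from itertools import combinations

def rss(X, y, cols):
    # Residual sum of squares of a least-squares fit with an intercept.
    A = np.column_stack([np.ones(len(y))] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ beta
    return float(r @ r)

def forward_then_all_subsets(X, y, shortlist_size=4, max_model_size=2):
    # Stage 1: forward selection builds a small shortlist of covariates.
    shortlist, remaining = [], list(range(X.shape[1]))
    while len(shortlist) < shortlist_size:
        j = min(remaining, key=lambda j: rss(X, y, shortlist + [j]))
        shortlist.append(j)
        remaining.remove(j)
    # Stage 2: exhaustive search over all small subsets of the shortlist.
    best = min((c for k in range(1, max_model_size + 1)
                for c in combinations(shortlist, k)),
               key=lambda c: rss(X, y, list(c)))
    return sorted(best)

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 10))
y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(scale=0.1, size=120)
model = forward_then_all_subsets(X, y)
```

With 10 candidates the exhaustive stage only ever examines subsets of the 4 shortlisted columns, which is what keeps the all-subsets step tractable for hundreds of covariates.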
5

Dash, Ch Sanjeev Kumar, Ajit Kumar Behera, Sarat Chandra Nayak, Satchidananda Dehuri, and Sung-Bae Cho. "An Integrated CRO and FLANN Based Classifier for a Non-Imputed and Inconsistent Dataset." International Journal on Artificial Intelligence Tools 28, no. 03 (May 2019): 1950013. http://dx.doi.org/10.1142/s0218213019500131.

Abstract:
This paper presents an integrated approach that combines chemical reaction optimization (CRO) and functional link artificial neural networks (FLANNs) to build a classifier from a dataset with missing values, inconsistent records, and noisy instances. Imputation is carried out based on the known values of the two nearest neighbors to address the missing values. A probabilistic approach is used to remove inconsistency from either the original or the imputed dataset. The resulting dataset is then given as input to a boosted instance selection approach, which selects relevant instances to reduce the size of the dataset without loss of generality or classification accuracy. Finally, the transformed dataset (i.e., from a non-imputed and inconsistent dataset to an imputed and consistent one) is used to develop a classifier based on a CRO-trained FLANN. The method is evaluated extensively on several benchmark datasets from the University of California, Irvine (UCI) repository. The experimental results confirm that the preprocessing tasks, along with the integrated approach, can be a promising alternative tool for mitigating missing values, inconsistent records, and noisy instances.
6

Jamjoom, Mona. "The pertinent single-attribute-based classifier for small datasets classification." International Journal of Electrical and Computer Engineering (IJECE) 10, no. 3 (June 1, 2020): 3227. http://dx.doi.org/10.11591/ijece.v10i3.pp3227-3234.

Abstract:
Classifying a dataset with machine learning algorithms can be a big challenge when the target is a small dataset. The OneR classifier suits such cases due to its simplicity and efficiency. In this paper, we reveal the power of a single attribute by introducing the pertinent single-attribute-based heterogeneity-ratio classifier (SAB-HR), which uses a pertinent attribute to classify small datasets. SAB-HR applies a feature selection step based on the Heterogeneity-Ratio (H-Ratio) measure to identify the most homogeneous attribute in the set. Our empirical results on 12 benchmark datasets from the UCI machine learning repository show that the SAB-HR classifier significantly outperforms the classical OneR classifier on small datasets. In addition, using the H-Ratio as the feature selection criterion for the single attribute was more effective than other traditional criteria, such as Information Gain (IG) and Gain Ratio (GR).
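For context, the classical OneR baseline that SAB-HR is compared against builds one rule per attribute (each attribute value predicts its majority class) and keeps the attribute with the fewest training errors. A minimal sketch with an invented toy dataset:

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    # rows: list of attribute tuples. Pick the single attribute whose
    # value -> majority-class rule makes the fewest training errors.
    best = None
    for a in range(len(rows[0])):
        by_value = defaultdict(list)
        for row, y in zip(rows, labels):
            by_value[row[a]].append(y)
        rule = {v: Counter(ys).most_common(1)[0][0] for v, ys in by_value.items()}
        errors = sum(rule[row[a]] != y for row, y in zip(rows, labels))
        if best is None or errors < best[2]:
            best = (a, rule, errors)
    attr, rule, _ = best
    default = Counter(labels).most_common(1)[0][0]  # fallback for unseen values
    return lambda row: rule.get(row[attr], default)

rows = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "hot"), ("rainy", "mild")]
labels = ["no", "no", "yes", "yes"]
predict = one_r(rows, labels)
```

Here the weather attribute alone classifies the toy data perfectly, so OneR selects it and ignores the temperature attribute entirely.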
7

Dif, Nassima, and Zakaria Elberrichi. "An Enhanced Recursive Firefly Algorithm for Informative Gene Selection." International Journal of Swarm Intelligence Research 10, no. 2 (April 2019): 21–33. http://dx.doi.org/10.4018/ijsir.2019040102.

Abstract:
Feature selection is the process of identifying well-performing combinations of significant features among many possibilities. This preprocessing step improves classification accuracy and facilitates the learning task. The authors treat it as an optimization problem and take a metaheuristic approach. Their main contribution is an enhanced version of the firefly algorithm used as a wrapper, with a recursive behavior added to improve the search for the optimal solution. An SVM classifier is used to evaluate the proposed method on benchmark microarray datasets. The results show that the new enhanced recursive firefly algorithm (RFA) outperforms the standard version, with reduced dimensionality on all datasets. For example, on the leukemia microarray dataset it achieves a perfect performance score of 100% with only 18 informative genes selected from the 7,129 in the original dataset. RFA was competitive with other state-of-the-art approaches and achieved the best results on the CNS, ovarian cancer, MLL, prostate, Leukemia_4c, and lymphoma datasets.
8

Omara, Hicham, Mohamed Lazaar, and Youness Tabii. "Effect of Feature Selection on Gene Expression Datasets Classification Accuracy." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 5 (October 1, 2018): 3194. http://dx.doi.org/10.11591/ijece.v8i5.pp3194-3203.

Abstract:
Feature selection attracts researchers who deal with machine learning and data mining. It consists of selecting the variables that have the greatest impact on dataset classification and discarding the rest. This dimensionality reduction allows classifiers to be faster and more accurate. This paper examines the effect of feature selection on the accuracy of classifiers widely used in the literature. The classifiers are compared on three real datasets preprocessed with feature selection methods. An improvement of more than 9% in classification accuracy is observed, and k-means appears to be the classifier most sensitive to feature selection.
9

ROCCO, C. B. V., R. L. SILVA, O. C. JUNIOR, and M. RUDEK. "SELEÇÃO DE SOFTWARE BASEADA EM AHP PARA CRIAÇÃO DE DATASET SINTÉTICO 3D." Revista SODEBRAS 15, no. 176 (August 2020): 50–55. http://dx.doi.org/10.29367/issn.1809-3957.15.2020.176.50.

10

Devaraj, Senthilkumar, and S. Paulraj. "An Efficient Feature Subset Selection Algorithm for Classification of Multidimensional Dataset." Scientific World Journal 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/821798.

Abstract:
Multidimensional medical data classification has recently received increased attention from researchers working on machine learning and data mining. In a multidimensional dataset (MDD), each instance is associated with multiple class values. Due to its complex nature, feature selection and classifier construction on an MDD are typically expensive and time-consuming, so a robust feature selection technique is needed to select the optimal single feature subset of the MDD for further analysis or classifier design. This paper proposes an efficient feature selection algorithm for the classification of MDD. The proposed multidimensional feature subset selection (MFSS) algorithm yields a unique feature subset for further analysis or classifier construction, with a computational advantage on MDD over existing feature selection algorithms. Applied to benchmark multidimensional datasets, MFSS reduced the number of features to between 3% and 30% of the original. The results show that MFSS is an efficient feature selection algorithm that preserves classification accuracy even with the reduced feature set. The algorithm is suitable for both problem transformation and algorithm adaptation, and it has great potential in applications generating multidimensional datasets.

Dissertations / Theses on the topic "Dataset selection"

1

Sousa, Massáine Bandeira e. "Improving accuracy of genomic prediction in maize single-crosses through different kernels and reducing the marker dataset." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/11/11137/tde-07032018-163203/.

Abstract:
In plant breeding, genomic prediction (GP) can be an efficient tool to increase the accuracy of selecting genotypes, mainly under multi-environment trials. This approach has the advantage of increasing genetic gains for complex traits and reducing costs. However, strategies are needed to increase the accuracy and reduce the bias of genomic estimated breeding values. In this context, the objectives were: i) to compare two strategies for obtaining marker subsets based on marker effects, regarding their impact on the prediction accuracy of genomic selection; and ii) to compare the accuracy of four GP methods including genotype × environment (G×E) interaction and two kernels (GBLUP and Gaussian). We used a rice diversity panel (RICE) and two maize datasets (HEL and USP), evaluated for grain yield and plant height. Overall, prediction accuracy and the relative efficiency of genomic selection increased when marker subsets were used, which has potential for building fixed arrays and reducing genotyping costs. Furthermore, using the Gaussian kernel and including the G×E effect increased the accuracy of the genomic prediction models.
2

Awwad, Tarek. "Context-aware worker selection for efficient quality control in crowdsourcing." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI099/document.

Abstract:
Crowdsourcing has proved its ability to address large-scale data collection tasks at a low cost and in a short time. However, due to the dependence on unknown workers, the quality of the crowdsourcing process is questionable and must be controlled. Indeed, maintaining the efficiency of crowdsourcing requires the time and cost overhead of this quality control to stay low. Current quality control techniques suffer from high time and budget overheads and from their dependency on prior knowledge about individual workers. In this thesis, we address these limitations by proposing the CAWS (Context-Aware Worker Selection) method, which operates in two phases: in an offline phase, the correlations between worker declarative profiles and task types are learned; then, in an online phase, the learned profile models are used to select the most reliable online workers for incoming tasks depending on their types. Using declarative profiles helps eliminate any probing process, which reduces time and budget while maintaining crowdsourcing quality. To evaluate CAWS, we introduce an information-rich dataset called CrowdED (Crowdsourcing Evaluation Dataset). The generation of CrowdED relies on a constrained sampling approach that produces a dataset respecting the requester's budget and type constraints. Through its generality and richness, CrowdED also helps plug the benchmarking gap present in the crowdsourcing community. Using CrowdED, we evaluate the performance of CAWS in terms of quality, time, and budget gain. Results show that automatic grouping achieves a learning quality similar to job-based grouping, and that CAWS outperforms state-of-the-art profile-based worker selection in quality, especially under strong budget and time constraints. Finally, we propose CREX (CReate Enrich eXtend), which provides the tools to select and sample input tasks and to automatically generate custom crowdsourcing campaign sites in order to extend and enrich CrowdED.
3

Lingle, Jeremy Andrew. "Evaluating the Performance of Propensity Scores to Address Selection Bias in a Multilevel Context: A Monte Carlo Simulation Study and Application Using a National Dataset." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/eps_diss/56.

Abstract:
When researchers are unable to randomly assign students to treatment conditions, selection bias is introduced into the estimates of treatment effects. Random assignment to treatment conditions, historically the scientific benchmark for causal inference, is often impossible or unethical to implement in educational systems. For example, researchers cannot deny services to those who stand to gain from participation in an academic program. Additionally, students select into a particular treatment group through processes that are impossible to control, such as those that result in a child dropping out of high school or attending a resource-starved school. Propensity score methods provide valuable tools for removing selection bias from quasi-experimental research designs and observational studies by modeling the treatment assignment mechanism. The utility of propensity scores has been validated for removing selection bias when the observations are assumed to be independent; however, their ability to remove selection bias in a multilevel context, in which group membership plays a role in treatment assignment, is relatively unknown. A central purpose of the current study was to begin filling the gaps in knowledge regarding the performance of propensity scores for removing selection bias, as defined by covariate balance, in multilevel settings using a Monte Carlo simulation study. The performance of propensity scores was also examined using a large-scale national dataset. Results provide support for the conclusion that the multilevel characteristics of a sample have a bearing on the performance of propensity scores in balancing covariates between treatment and control groups. Findings suggest that propensity score estimation models should take cluster-level effects into account when working with multilevel data; however, the numbers of treatment and control individuals within each cluster must be sufficiently large to allow estimation of those effects. Propensity scores that account for cluster-level effects can have the added benefit of balancing covariates within each cluster as well as across the sample as a whole.
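The covariate-balance idea in this abstract can be illustrated in a simple single-level setting: fit a logistic model for treatment assignment, weight observations by inverse propensity, and compare standardized mean differences before and after weighting. A simplified sketch on synthetic data (not the study's multilevel models; all names and parameters are invented):

```python
import numpy as np

def fit_propensity(X, t, lr=0.1, steps=2000):
    # Plain logistic regression for P(treatment = 1 | X), by gradient ascent.
    Xb = np.column_stack([np.ones(len(t)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (t - p) / len(t)
    return 1 / (1 + np.exp(-Xb @ w))

def smd(x, t, w=None):
    # (Weighted) standardized mean difference of a covariate between groups.
    w = np.ones_like(x) if w is None else w
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    return abs(m1 - m0) / x.std()

rng = np.random.default_rng(3)
x = rng.normal(size=2000)                        # confounder drives selection
t = (rng.random(2000) < 1 / (1 + np.exp(-1.5 * x))).astype(float)
ps = fit_propensity(x[:, None], t)
ipw = t / ps + (1 - t) / (1 - ps)                # inverse-probability weights
before, after = smd(x, t), smd(x, t, ipw)
```

The weighted standardized mean difference shrinks markedly relative to the raw one, which is the covariate-balance criterion the study uses to judge the propensity scores.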
4

Zoghi, Zeinab. "Ensemble Classifier Design and Performance Evaluation for Intrusion Detection Using UNSW-NB15 Dataset." University of Toledo / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1596756673292254.

5

Silva, Wilbor Poletti. "Archaeomagnetic field intensity evolution during the last two millennia." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/14/14132/tde-19092018-135335/.

Abstract:
Temporal variations of Earth's magnetic field provide a wide range of geophysical information about the dynamics of different layers of the Earth. Since it is a planetary field, regional and global aspects can be explored, depending on the timescale of the variations. In this thesis, geomagnetic field variations over the last two millennia were investigated. To that end, the methods for recovering ancient magnetic field intensity from archaeological material were improved, new data were acquired, and a critical assessment of the global archaeomagnetic database was performed. Two methodological advances are reported: i) a correction, for the microwave method, of the cooling-rate effect, which is associated with the difference between the cooling time during the manufacture of the material and that of the heating steps during the archaeointensity experiment; and ii) a test for thermoremanent anisotropy correction based on the arithmetic mean of six orthogonally positioned samples. The temporal variation of magnetic intensity in South America was investigated from nine new data points, three from the ruins of the Guaraní Jesuit Missions and six from archaeological sites associated with jerky beef farms, all located in Rio Grande do Sul, Brazil, with ages covering the last 400 years. Combined with the regional archaeointensity database, these data demonstrate that the influence of significant non-dipole components in South America started at ~1800 CE. Finally, a reassessment of the global archaeointensity database led to a new interpretation of the evolution of the geomagnetic axial dipole, whereby this component has fallen steadily since ~700 CE, associated with the breaking of the symmetry of the advective sources operating in the outer core.
6

Hrabina, Martin. "VÝVOJ ALGORITMŮ PRO ROZPOZNÁVÁNÍ VÝSTŘELŮ." Doctoral thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-409087.

Abstract:
This thesis deals with gunshot recognition and its associated problems. First, the task is introduced and broken down into smaller steps. An overview of sound databases, significant publications, events, and the current state of the art is then provided, together with a survey of possible applications of gunshot detection. The second part compares features using various metrics, together with a comparison of their recognition performance. A comparison of recognition algorithms follows, and new features usable for recognition are introduced. The thesis culminates in the design of a two-stage gunshot recognition system that monitors its surroundings in real time. The conclusion summarizes the achieved results and outlines further work.
7

Khan, Md Jafar Ahmed. "Robust linear model selection for high-dimensional datasets." Thesis, University of British Columbia, 2006. http://hdl.handle.net/2429/31082.

Full text
Abstract:
This study considers the problem of building a linear prediction model when the number of candidate covariates is large and the dataset contains a fraction of outliers and other contaminations that are difficult to visualize and clean. We aim to predict future non-outlying cases. Therefore, we need methods that are robust and scalable at the same time. We consider two different strategies for model selection: (a) one-step model building and (b) two-step model building. For one-step model building, we robustify the step-by-step algorithms forward selection (FS) and stepwise (SW), with robust partial F-tests as stopping rules. Our two-step model building procedure consists of sequencing and segmentation. In sequencing, the input variables are sequenced to form a list such that the good predictors are likely to appear in the beginning, and the first m variables of the list form a reduced set for further consideration. For this step we robustify Least Angle Regression (LARS) proposed by Efron, Hastie, Johnstone and Tibshirani (2004). We use the bootstrap to stabilize the results obtained by robust LARS, and use "learning curves" to determine the size of the reduced set. The second step of the two-step model building procedure, which we call segmentation, carefully examines subsets of the covariates in the reduced set in order to select the final prediction model. For this we propose a computationally suitable robust cross-validation procedure. We also propose a robust bootstrap procedure for segmentation, similar to the method proposed by Salibian-Barrera and Zamar (2002) for conducting robust inference in linear regression. We introduce the idea of "multivariate-Winsorization", which we use for robust data cleaning (for the robustification of LARS). We also propose a new correlation estimate, which we call the "adjusted-Winsorized correlation estimate". This estimate is consistent, has bounded influence, and offers some advantages over the univariate-Winsorized correlation estimate (Huber, 1981; Alqallaf, 2003).
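The univariate-Winsorization idea discussed above can be sketched in a few lines: robustly standardize each variable with the median and MAD, clip extreme values, and take the ordinary correlation of the clipped data. This is an illustrative sketch of the general technique only, not the thesis's adjusted-Winsorized estimator; the clipping constant `c = 2` and the helper names are choices made here.

```python
import numpy as np

def winsorize(x, c=2.0):
    """Univariate Winsorization: standardize robustly with the median
    and MAD, then clip standardized values beyond +/- c."""
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))  # consistent scale for Gaussian data
    z = (x - med) / mad
    return np.clip(z, -c, c)

def winsorized_corr(x, y, c=2.0):
    """Pearson correlation of the Winsorized variables: outliers get
    bounded influence at the cost of some efficiency."""
    return np.corrcoef(winsorize(x, c), winsorize(y, c))[0, 1]
```

A single gross outlier can drag the plain Pearson correlation toward zero or flip its sign, while the Winsorized version is barely affected.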
Faculty of Science, Department of Statistics (graduate thesis)
APA, Harvard, Vancouver, ISO, and other styles
8

Mo, Dengyao. "Robust and Efficient Feature Selection for High-Dimensional Datasets." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1299010108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Poolsawad, Nongnuch. "Practical approaches to mining of clinical datasets : from frameworks to novel feature selection." Thesis, University of Hull, 2014. http://hydra.hull.ac.uk/resources/hull:8620.

Full text
Abstract:
This research investigated clinical data that embed numerous complexities and uncertainties in the form of missing values, class imbalance, and high dimensionality. The research in this thesis was motivated by these challenges: to minimise these problems whilst, at the same time, maximising the classification performance of the data and selecting a significant subset of variables. This led to the proposal of a data mining framework and a feature selection method. The proposed framework, called the Handling Clinical Data Framework (HCDF), has a simple algorithmic structure and makes use of a modified form of existing frameworks to address a variety of data issues. The assessment of data mining techniques reveals that missing-value imputation and resampling for class balancing can improve classification performance. Next, the proposed feature selection method is introduced: feature selection by projection onto principal components (FS-PPC) draws on ideas from both feature extraction and feature selection to select a significant subset of features from the data. The method selects features that have high correlation with the principal components, as measured by symmetrical uncertainty (SU), while irrelevant and redundant features are removed using mutual information (MI). This provides confidence that the selected subset of features will yield realistic results with less time and effort. FS-PPC is able to retain classification performance and meaningful features while keeping the selected subset free of redundancy. The proposed methods have been applied to the analysis of real clinical data and their effectiveness has been assessed. The results show that the proposed methods are able to minimise the clinical data problems whilst, at the same time, maximising classification performance.
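The symmetrical uncertainty measure mentioned in this abstract can be sketched as follows: SU normalizes mutual information by the two marginal entropies, giving a relevance score in [0, 1] for discrete features. This is a generic illustration of SU, not the thesis's FS-PPC implementation; function names and the binning-free discrete treatment are assumptions made here.

```python
import numpy as np

def entropy(x):
    """Shannon entropy (bits) of a discrete variable."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for discrete variables."""
    joint = np.array([f"{a}|{b}" for a, b in zip(x, y)])  # pair each (x, y) value
    return entropy(x) + entropy(y) - entropy(joint)

def symmetrical_uncertainty(x, y):
    """SU(X,Y) = 2 * I(X;Y) / (H(X) + H(Y)), normalized to [0, 1]."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    return 2.0 * mutual_information(x, y) / (hx + hy)
```

A feature perfectly predictive of the class scores 1; an independent feature scores near 0 (slightly above, due to finite-sample bias of the MI estimate).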
APA, Harvard, Vancouver, ISO, and other styles
10

Kurra, Goutham. "Pattern Recognition in Large Dimensional and Structured Datasets." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1014322308.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Dataset selection"

1

Geological Survey (U.S.), ed. Production of a national 1:1,000,000-scale hydrography dataset for the United States: Feature selection, simplification, and refinement. Reston, Va.: U.S. Dept. of the Interior, U.S. Geological Survey, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Veech, Joseph A. Habitat Ecology and Analysis. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198829287.001.0001.

Full text
Abstract:
Habitat is crucial to the survival and reproduction of individual organisms as well as persistence of populations. As such, species-habitat relationships have long been studied, particularly in the field of wildlife ecology and to a lesser extent in the more encompassing discipline of ecology. The habitat requirements of a species largely determine its spatial distribution and abundance in nature. One way to recognize and appreciate the overriding importance of habitat is to consider that a young organism must find and settle into the appropriate type of habitat as one of the first challenges of life. This process can be cast in a probabilistic framework and used to better understand the mechanisms behind habitat preferences and selection. There are at least six distinctly different statistical approaches to conducting a habitat analysis, that is, identifying and quantifying the environmental variables that a species most strongly associates with. These are (1) comparison among group means (e.g., ANOVA), (2) multiple linear regression, (3) multiple logistic regression, (4) classification and regression trees, (5) multivariate techniques (Principal Components Analysis and Discriminant Function Analysis), and (6) occupancy modelling. Each of these is lucidly explained and demonstrated by application to a hypothetical dataset. The strengths and weaknesses of each method are discussed. Given the ongoing biodiversity crisis largely caused by habitat destruction, there is a crucial and general need to better characterize and understand the habitat requirements of many different species, particularly those that are threatened and endangered.
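Of the six approaches listed, multiple logistic regression is the easiest to sketch: presence/absence (1/0) records at surveyed sites are modelled as a function of habitat covariates. The covariate ("canopy cover"), data, and optimizer settings below are hypothetical, chosen only to illustrate the technique.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=5000):
    """Logistic regression fit by gradient ascent on the log-likelihood.
    X: (n, p) habitat covariates; y: (n,) presence (1) / absence (0)."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted P(presence)
        w += lr * X.T @ (y - p) / len(y)       # average-gradient step
    return w  # [intercept, slope_1, ..., slope_p]
```

A positive fitted slope indicates the species is more likely to be present where that covariate (say, canopy cover) is higher, which is exactly the kind of species-habitat association these analyses aim to quantify.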
APA, Harvard, Vancouver, ISO, and other styles
3

Lutz, Wolfgang, William P. Butz, and Samir KC, eds. World Population & Human Capital in the Twenty-First Century. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198813422.001.0001.

Full text
Abstract:
Condensed into a detailed analysis and a selection of continent-wide datasets, this revised edition of World Population & Human Capital in the Twenty-First Century addresses the role of educational attainment in global population trends and models. Presenting the full chapter text of the original edition alongside a concise selection of data, it summarizes past trends in fertility, mortality, migration, and education, and examines relevant theories to identify key determining factors. Deriving from a global survey of hundreds of experts and five expert meetings on as many continents, World Population & Human Capital in the Twenty-First Century: An Overview emphasizes alternative trends in human capital, new ways of studying ageing and the quantification of alternative population, and education pathways in the context of global sustainable development. It is an ideal companion to the country-specific online Wittgenstein Centre Data Explorer.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Dataset selection"

1

Pfeuffer, Simon, Daniel Wehner, and Raed Bouslama. "Managing Uncertainties in LCA Dataset Selection." In Sustainable Design and Manufacturing 2019, 73–75. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-9271-9_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sahipov, Ilya, Alexey Zabashta, and Andrey Filchenkov. "Stabilization of Dataset Matrix Form for Classification Dataset Generation and Algorithm Selection." In Lecture Notes in Computer Science, 66–75. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-62365-4_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Refenes, Apostolos N. "ConSTrainer: A Generic Toolkit for Connectionist Dataset Selection." In Konnektionismus in Artificial Intelligence und Kognitionsforschung, 163–72. Berlin, Heidelberg: Springer Berlin Heidelberg, 1990. http://dx.doi.org/10.1007/978-3-642-76070-9_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Reggiani, Claudio, Yann-Aël Le Borgne, and Gianluca Bontempi. "Feature Selection in High-Dimensional Dataset Using MapReduce." In Communications in Computer and Information Science, 101–15. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-76892-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Rughbeer, Yastil, Anban W. Pillay, and Edgar Jembere. "Dataset Selection for Transfer Learning in Information Retrieval." In Artificial Intelligence Research, 53–65. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-66151-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Xu, Xian, and Aidong Zhang. "Boost Feature Subset Selection: A New Gene Selection Algorithm for Microarray Dataset." In Computational Science – ICCS 2006, 670–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11758525_91.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Shinmura, Shuichi. "Matroska Feature-Selection Method for Microarray Dataset (Method 2)." In New Theory of Discriminant Analysis After R. Fisher, 163–89. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2164-0_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pati, Soumen Kumar, and Asit Kumar Das. "Gene Selection and Classification Rule Generation for Microarray Dataset." In Advances in Computing and Information Technology, 73–83. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-31600-5_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gupta, Shelly, Shailendra Narayan Singh, and Parsid Kumar Jain. "Feature Selection on Public Maternal Healthcare Dataset for Classification." In Lecture Notes in Networks and Systems, 573–83. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-15-9712-1_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chowdhury, Kuntal, Debasis Chaudhuri, and Arup Kumar Pal. "Optimal Number of Seed Point Selection Algorithm of Unknown Dataset." In Proceedings of 3rd International Conference on Computer Vision and Image Processing, 257–69. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-32-9291-8_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Dataset selection"

1

Dittman, David J., Taghi Khoshgoftaar, Randall Wald, and Amri Napolitano. "Gene selection stability's dependence on dataset difficulty." In 2013 IEEE 14th International Conference on Information Reuse & Integration (IRI). IEEE, 2013. http://dx.doi.org/10.1109/iri.2013.6642491.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zargari, Shahrzad, and Dave Voorhis. "Feature Selection in the Corrected KDD-dataset." In 2012 Third International Conference on Emerging Intelligent Data and Web Technologies (EIDWT). IEEE, 2012. http://dx.doi.org/10.1109/eidwt.2012.10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Paiva, Antonio R. C. "Information-theoretic dataset selection for fast kernel learning." In 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017. http://dx.doi.org/10.1109/ijcnn.2017.7966107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Budhraja, Karan Kumar, and Tim Oates. "Dataset Selection for Controlling Swarms by Visual Demonstration." In 2017 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2017. http://dx.doi.org/10.1109/icdmw.2017.128.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Siegert, Ingo, Ronald Bock, Andreas Wendemuth, and Bogdan Vlasenko. "Exploring dataset similarities using PCA-based feature selection." In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2015. http://dx.doi.org/10.1109/acii.2015.7344600.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Khode, Disha, and Antara Bhattacharya. "A New Feature Selection Algorithm for DNA Dataset." In Third International Conference on Advances in Computer Science and Application - CSA 2014 and Third International Conference on Advances in Signal Processing and Communication - SPC 2014. Singapore: Research Publishing Services, 2014. http://dx.doi.org/10.3850/978-981-09-1137-9_p013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Khode, Disha, and Antara Bhattacharya. "A New Feature Selection Algorithm for DNA Dataset." In Third International Conference on Advances in Computer Science and Application - CSA 2014 and Third International Conference on Advances in Signal Processing and Communication - SPC 2014. Singapore: Research Publishing Services, 2014. http://dx.doi.org/10.3850/978-981-09-2579-6_p013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Ziheng, Liu. "Comparison of Feature Selection Methods on Arrhythmia Dataset." In IPMV 2021: 2021 3rd International Conference on Image Processing and Machine Vision. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3469951.3469963.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Tchaye-Kondi, Jude, Yanlong Zhai, and Liehuang Zhu. "A New Hashing based Nearest Neighbors Selection Technique for Big Datasets." In 11th International Conference on Computer Science and Information Technology (CCSIT 2021). AIRCC Publishing Corporation, 2021. http://dx.doi.org/10.5121/csit.2021.110708.

Full text
Abstract:
KNN has the reputation of being a simple and powerful supervised learning algorithm, used for either classification or regression. Although KNN's prediction performance depends strongly on the size of the training dataset, when that dataset is large KNN suffers from slow decision-making: each prediction requires the algorithm to search for nearest neighbors within the entire dataset. To overcome this slowness, we propose a new technique that enables the selection of nearest neighbors directly in the neighborhood of a given data point. The proposed approach consists of dividing the data space into sub-cells of a virtual grid built on top of the dataset; the mapping between data points and sub-cells is achieved using hashing. When selecting the nearest neighbors of a new observation, we first identify the central cell that contains the observation. Once that central cell is known, we then look for the nearest neighbors in it and the cells around it. In our experimental performance analysis on publicly available datasets, our algorithm outperforms the original KNN with predictive quality as good, and offers performance competitive with solutions such as the KD-tree.
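The grid-hashing scheme described in this abstract can be sketched as follows. This is a simplified illustration, not the authors' implementation: the class name, the fixed cell size, and the ring-expansion stopping rule are all choices made here, and the search is approximate (it stops once enough candidates are found rather than proving no closer point exists elsewhere).

```python
import numpy as np
from collections import defaultdict
from itertools import product

class GridKNN:
    """Approximate nearest-neighbor search on a virtual grid.

    Each point is hashed to the cell given by integer division of its
    coordinates by `cell_size`; queries examine only nearby cells.
    """

    def __init__(self, points, cell_size):
        self.points = np.asarray(points, dtype=float)
        self.cell = float(cell_size)
        self.buckets = defaultdict(list)
        for i, p in enumerate(self.points):
            self.buckets[self._key(p)].append(i)

    def _key(self, p):
        return tuple(np.floor(p / self.cell).astype(int))

    def query(self, q, k=1):
        q = np.asarray(q, dtype=float)
        center = self._key(q)
        ring, cand = 0, []
        # Expand the search window one ring of cells at a time
        # until at least k candidate points have been collected.
        while len(cand) < k and ring <= max(len(self.buckets), 1):
            cand = []
            for off in product(range(-ring, ring + 1), repeat=len(center)):
                key = tuple(c + o for c, o in zip(center, off))
                cand.extend(self.buckets.get(key, []))
            ring += 1
        dists = np.linalg.norm(self.points[cand] - q, axis=1)
        return [cand[i] for i in np.argsort(dists)[:k]]
```

Each query touches only a small fraction of the dataset, which is the source of the speed-up over a full linear scan.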
APA, Harvard, Vancouver, ISO, and other styles
10

Singh, Raman, Harish Kumar, and R. K. Singla. "Analysis of Feature Selection Techniques for Network Traffic Dataset." In 2013 International Conference on Machine Intelligence and Research Advancement (ICMIRA). IEEE, 2013. http://dx.doi.org/10.1109/icmira.2013.15.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Dataset selection"

1

McNabb, Kyle, Annalena Oppel, and Daniel Chachu. Government Revenue Dataset (2021): source selection. UNU-WIDER, August 2021. http://dx.doi.org/10.35188/unu-wider/wtn/2021-10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gradín, Carlos. WIID Companion (March 2021): data selection. UNU-WIDER, March 2021. http://dx.doi.org/10.35188/unu-wider/wtn/2021-4.

Full text
Abstract:
This document is part of a series of technical notes describing the compilation of a new companion database that complements the World Income Inequality Database (WIID). It aims at facilitating the analysis of inequality as well as progress in achieving the global goal of reducing inequality within and across countries. This new dataset also includes an annual series reporting the income distribution at the percentile level for all citizens in the world, regardless of where they live, from 1950 to the present. This technical note describes the first stage in constructing the first version of the companion datasets: data selection. It provides an overview of the approach followed in the selection of the series from different sources with information on income distribution and inequality that best represent each country and period. It also discusses the general criteria used and their implementation, which are illustrated with a few country examples.
APA, Harvard, Vancouver, ISO, and other styles
3

Pradhananga, Saurav, Arthur Lutz, Archana Shrestha, Indira Kadel, Bikash Nepal, and Santosh Nepal. Selection and downscaling of general circulation model datasets and extreme climate indices analysis - Manual. International Centre for Integrated Mountain Development (ICIMOD), 2020. http://dx.doi.org/10.53055/icimod.4.

Full text
Abstract:
A supplement to the Climate Change Scenarios for Nepal report published by the Ministry of Forests and Environment for the National Adaptation Plan (NAP) process, this manual provides detailed information about the processes through which the assessments highlighted in the report can be carried out. These include the selection of general circulation models (GCMs), the downscaling of the GCM datasets, the assessment of changes in precipitation and temperature, and the assessment of changes in climate extremes. The manual demonstrates the downscaling of climate datasets for the Koshi River basin, the Kabul River basin, and the Kailash Sacred Landscape to analyse future scenarios in these basins and the landscape.
APA, Harvard, Vancouver, ISO, and other styles
4

Gradín, Carlos. WIID Companion (March 2021): global income distribution. UNU-WIDER, March 2021. http://dx.doi.org/10.35188/unu-wider/wtn/2021-6.

Full text
Abstract:
This document is part of a series of technical notes describing the compilation of a new companion database that complements the UNU-WIDER World Income Inequality Database. It aims at facilitating the analysis of inequality as well as progress in achieving the global goal of reducing inequality within and across countries. This new dataset includes an annual series reporting the income distribution at the percentile level for all citizens in the world, regardless of where they live, from 1950 to the present. The global distribution is displayed along with the country-level information used to produce it. The dataset also includes estimates of various global absolute and relative inequality measures, and the income share of key population groups. All estimates are further disaggregated by the contribution of inequalities within and between countries, as well as by each country’s geographical region and income group. While previous technical notes described the selection of country income distribution series and the integration and standardization process to overcome the heterogeneity in original welfare concepts and other methods, I here describe all the necessary additional steps and assumptions made to construct the new global dataset.
APA, Harvard, Vancouver, ISO, and other styles
5

Gradín, Carlos. WIID Companion (March 2021): integrated and standardized series. UNU-WIDER, March 2021. http://dx.doi.org/10.35188/unu-wider/wtn/2021-5.

Full text
Abstract:
This document is part of a series of technical notes describing the compilation of a new companion database that complements the World Income Inequality Database. It aims at facilitating the analysis of inequality as well as progress in achieving the global goal of reducing inequality within and across countries. This new dataset also includes an annual series reporting the income distribution at the percentile level for all citizens in the world, regardless of where they live, from 1950 to the present. A previous note described the selection of income distribution series. Since these series may differ in the welfare concepts and other methods used, this technical note describes the second stage: constructing integrated and standardized country series. It discusses all the adjustments made to construct the final series for each country, with consistent estimates of the distribution of net income per capita over the entire period for which information is available. This is divided into two stages: first, integrating country series by interlinking series that overlap over time; second, using a more general regression-based approach.
APA, Harvard, Vancouver, ISO, and other styles
6

Bustelo, Monserrat, Suzanne Duryea, Claudia Piras, Breno Sampaio, Giuseppe Trevisan, and Mariana Viollaz. The Gender Pay Gap in Brazil: It Starts with College Students' Choice of Major. Inter-American Development Bank, January 2021. http://dx.doi.org/10.18235/0003011.

Full text
Abstract:
We herein discuss how the choice of college major affects gender wage gaps by highlighting the role that STEM majors play in explaining the gender wage gap in a developing country. We focus on a Latin American country where a systematic analysis of the interaction between students' choice of college major and the gender wage gap is currently lacking. We take advantage of a unique dataset of college students from the Universidade Federal de Pernambuco (UFPE), Brazil, to decompose the raw gender gap in hourly wages into one component that can be explained by differences in endowments between men and women and a second, residual component that reflects gender differences in the prices of market skills. We implement the commonly applied decomposition approach at the mean of the wage distribution, as well as a decomposition procedure that considers variation across the wage distribution. Our results reveal that the majors that women and men select explain 50% of the gender wage gap at the mean, and STEM majors contribute 30% of this difference. When examining different percentiles of the wage distribution, we find that the selection of a major is more important at the middle of the distribution than at the bottom or the top.
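The mean decomposition described here is in the spirit of the standard two-fold Oaxaca-Blinder split: the raw gap in mean wages separates exactly into an endowments (explained) part and a coefficients ("prices", unexplained) part. The sketch below uses synthetic data and group labels a/b; the paper's actual specification, covariates, and distributional method are not reproduced.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients (X must include an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Two-fold decomposition of mean(y_a) - mean(y_b), with group a's
    coefficients as the reference price structure:
      gap = (xbar_a - xbar_b)' beta_a  +  xbar_b' (beta_a - beta_b)
            (endowments, explained)      (coefficients, unexplained)
    """
    b_a, b_b = ols(X_a, y_a), ols(X_b, y_b)
    xa, xb = X_a.mean(axis=0), X_b.mean(axis=0)
    endowments = (xa - xb) @ b_a
    coefficients = xb @ (b_a - b_b)
    return endowments, coefficients
```

Because each OLS fit with an intercept reproduces its group mean exactly, the two components sum to the raw mean gap by construction.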
APA, Harvard, Vancouver, ISO, and other styles
7

Najafi, Farzaneh, Gamaleldin F. Elsayed, Robin Cao, Eftychios Pnevmatikakis, Peter E. Latham, John P. Cunningham, and Anne K. Churchland. Dataset from "Excitatory and inhibitory subnetworks are equally selective during decision-making and emerge simultaneously during learning" (bioRxiv, 2018). Cold Spring Harbor Laboratory, February 2019. http://dx.doi.org/10.14224/1.37693.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

de Caritat, Patrice, Brent McInnes, and Stephen Rowins. Towards a heavy mineral map of the Australian continent: a feasibility study. Geoscience Australia, 2020. http://dx.doi.org/10.11636/record.2020.031.

Full text
Abstract:
Heavy minerals (HMs) are minerals with a specific gravity greater than 2.9 g/cm3. They are commonly highly resistant to physical and chemical weathering, and therefore persist in sediments as lasting indicators of the (former) presence of the rocks they formed in. The presence/absence of certain HMs, their associations with other HMs, their concentration levels, and the geochemical patterns they form in maps or 3D models can be indicative of geological processes that contributed to their formation. Furthermore, trace element and isotopic analyses of HMs have been used to vector to mineralisation or to constrain the timing of geological processes. The positive role of HMs in mineral exploration is well established in other countries, but comparatively little understood in Australia. Here we present the results of a pilot project that was designed to establish, test and assess a workflow to produce a HM map (or atlas of maps) and dataset for Australia. This would represent a critical step in the ability to detect anomalous HM patterns, as it would establish the background HM characteristics (i.e., unrelated to mineralisation). Further, the extremely rich dataset produced would be a valuable input into any future machine learning/big data-based prospectivity analysis. The pilot project consisted of selecting ten sites from the National Geochemical Survey of Australia (NGSA) and separating and analysing the HM contents from the 75-430 µm grain-size fraction of the top (0-10 cm depth) sediment samples. A workflow was established and tested based on the density separation of the HM-rich phase by combining a shake table and the use of dense liquids. The automated mineralogy quantification was performed on a TESCAN® Integrated Mineral Analyser (TIMA) that identified and mapped thousands of grains in a matter of minutes for each sample.
The results indicated that: (1) the NGSA samples are appropriate for HM analysis; (2) over 40 HMs were effectively identified and quantified using TIMA automated quantitative mineralogy; (3) the resultant HMs’ mineralogy is consistent with the samples’ bulk geochemistry and regional geological setting; and (4) the HM makeup of the NGSA samples varied across the country, as shown by the mineral mounts and preliminary maps. Based on these observations, HM mapping of the continent using NGSA samples will likely result in coherent and interpretable geological patterns relating to bedrock lithology, metamorphic grade, degree of alteration and mineralisation. It could assist in geological investigations especially where outcrop is minimal, challenging to correctly attribute due to extensive weathering, or simply difficult to access. It is believed that a continental-scale HM atlas for Australia could assist in derisking mineral exploration and lead to investment, e.g., via tenement uptake, exploration, discovery and ultimately exploitation. As some HMs are hosts for technology critical elements such as rare earth elements, their systematic and internally consistent quantification and mapping could lead to resource discovery essential for a more sustainable, lower-carbon economy.
APA, Harvard, Vancouver, ISO, and other styles
