Academic literature on the topic 'Analytical solutions for data mining'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Analytical solutions for data mining.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Analytical solutions for data mining"

1

Konda, Bhargavi. "The impact of data preprocessing on data mining outcomes." World Journal of Advanced Research and Reviews 15, no. 3 (2022): 540–44. https://doi.org/10.30574/wjarr.2022.15.3.0931.

Full text
Abstract:
Data preprocessing is a vital initial step during knowledge discovery because it determines the success of data mining projects. A dataset's quality and representation stand as the primary element because any presence of redundant, irrelevant, too noisy, or unreliable information will severely disrupt the knowledge discovery process. The preprocessing phase first converts unstructured data into an analytical format alongside solutions for data inconsistencies, errors, and missing values to maintain data mining result integrity. The preprocessing corrects data quality problems and arranges data properly, improving data mining model accuracy, efficiency, and interpretability. The data mining pipeline requires data preprocessing as its essential foundation since it provides multiple techniques to convert raw data into an effective analytical format. Data mining depends heavily on preprocessing operations because they guarantee proper analysis results through accurate correction of errors and optimal data structure development and absent data point management.
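The operations this abstract describes, repairing missing values and normalizing raw measurements before mining, can be illustrated with a minimal sketch (an illustration of the general technique, not code from the paper; the data values are invented):

```python
# Minimal preprocessing sketch: mean-impute missing values, then
# min-max scale each feature so downstream mining treats features equally.
# Illustrative only -- real pipelines would also handle outliers and noise.

def mean_impute(column):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def min_max_scale(column):
    """Rescale values to the [0, 1] range."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]

raw = [4.0, None, 8.0, 6.0]      # a feature with a missing entry
clean = min_max_scale(mean_impute(raw))
print(clean)                     # -> [0.0, 0.5, 1.0, 0.5]
```

The mean-imputed entry (6.0) scales to the same value as the observed 6.0, which is exactly why naive imputation can bias a mining model: the guess is indistinguishable from real data.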
APA, Harvard, Vancouver, ISO, and other styles
2

Sadiku, Matthew N. O., Suman K. Guddi, and Sarhan M. Musa. "Big Data in Cybersecurity: A Primer." Journal of Scientific and Engineering Research 8, no. 9 (2021): 6–13. https://doi.org/10.5281/zenodo.10612875.

Full text
Abstract:
Big data refers to mining usable information from the massive amounts of data. It is becoming a focal point of cybersecurity. Cybersecurity has become a big data problem due to the size and complexity of the data and due to the fact that sophistication of threats has increased dramatically. While businesses and government agencies take advantage of big data analytics to improve operations, cyber criminals are mining the same data for unethical reasons. Traditional protection tools used for data mining and cyber-attack prevention are insufficient for several companies. Modern cybersecurity solutions are mostly driven by big data. Intelligent big data analytics allows data specialists to develop a predictive model. This paper is a primer on big data in cybersecurity.
APA, Harvard, Vancouver, ISO, and other styles
3

Gonnade, Priyanka, and Sonali Ridhorkar. "Data Driven Decision making Framework for Businesses." Journal of Neonatal Surgery 14, no. 6S (2025): 648–60. https://doi.org/10.52783/jns.v14.2301.

Full text
Abstract:
Digital technologies have revolutionized the way businesses are built and managed, requiring the development of new solutions and a diverse set of applications. Massive volumes of data are now easily accessible, database capacity has risen tremendously, and data collection methods have altered. As a result, while mining big data, issues arise with regression, the analytical process, and the complexity of the large data. To cope with the aforementioned issues, a data analysis framework is proposed for various business decision-making processes which collects the data and saves it in Hadoop, a Java-based data management system that allows enormous amounts of data to be handled in parallel clusters without failure. Generally, node failures are not concentrated in data storage management systems. Consequently, data mining techniques are used in the design phase to obtain pertinent and essential vital information. Thus, the proposed framework efficiently provides data analytics for various decision-making processes with improved accuracy.
APA, Harvard, Vancouver, ISO, and other styles
4

Dandi Sudrajat and Nur Alamsyah. "Penerapan Data Mining Menganalisa Pola Pembelian Sayur Hidroponik Sawargaloka Hydrofarm Metode Apriori" [Application of Data Mining to Analyze Hydroponic Vegetable Purchasing Patterns at Sawargaloka Hydrofarm Using the Apriori Method]. SABER : Jurnal Teknik Informatika, Sains dan Ilmu Komunikasi 2, no. 1 (2023): 200–210. http://dx.doi.org/10.59841/saber.v2i1.690.

Full text
Abstract:
The aim of this research is to apply the Apriori algorithm to determine vegetable purchasing patterns and analyze the results in order to control vegetable stocks at Sawargaloka Hydroponic Hydrofarm. The need for quality and safe food supplies is increasing along with population growth. In hydroponics, plants are grown without using land, using nutrient solutions that are rich in important substances. The application of data mining using the Apriori method can provide valuable insight into the purchasing patterns of hydroponic vegetables by customers. By understanding these patterns, companies can improve marketing strategies, plan production more efficiently, and provide product recommendations to customers. From the results of analytical research using the Apriori method on hydroponic vegetable purchase data at Sawargaloka Hydrofarm, it can be concluded that the application of data mining has great potential in identifying significant purchasing patterns.
APA, Harvard, Vancouver, ISO, and other styles
5

Bawiskar, Saurav. "Smart Profitable Solutions with Recommendation Framework." International Journal for Research in Applied Science and Engineering Technology 10, no. 6 (2022): 4099–105. http://dx.doi.org/10.22214/ijraset.2022.44835.

Full text
Abstract:
Discovering frequent patterns in transactional databases is one of the crucial functionalities of the Apriori algorithm. Apriori is an algorithm which works on the principle of association rule mining. It is a dynamic and skillful algorithm used for discovering frequent patterns in a database, hence proving to be efficient and important in data mining. The Apriori algorithm finds associations between different sets of data. Every different set of data has a collective number of items and is called a transaction. The output of Apriori is the set of rules that shows how often any particular item or set of items is contained in a set of data. In our proposed system, to provide efficiency, our basic aim is to implement the Apriori algorithm by setting up a threshold value and a varying support count which will act as a filter for our recommendation data. We can adjust the threshold value in order to increase or decrease the accuracy of the system. We have used the Apriori algorithm keeping in mind its application in the retailing industry and its capability of computing and handling large datasets, especially for the purpose of market basket analysis. The use of the Apriori algorithm along with analytical tools can provide insights into data and help the user in management and decision making, provided that the user feeds the system in a correct way. Our aim is to provide the user with recommendations which would ultimately help them in improving their business operations.
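The threshold-and-support mechanics the abstract relies on can be sketched in a few lines of Python (a minimal illustration of standard Apriori, not the authors' system; the basket contents are invented):

```python
# Sketch of Apriori frequent-itemset mining with a minimum-support
# threshold: itemsets seen in fewer than min_support transactions are
# filtered out, and only frequent itemsets are extended to larger ones.
from itertools import combinations

def apriori(transactions, min_support):
    """Return {itemset: support_count} for all frequent itemsets."""
    items = {frozenset([i]) for t in transactions for i in t}
    frequent, current = {}, items
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets that overlap in
        # all but one item into (k+1)-itemset candidates.
        keys = list(level)
        current = {a | b for a, b in combinations(keys, 2)
                   if len(a | b) == len(a) + 1}
    return frequent

baskets = [{"milk", "bread"}, {"milk", "eggs"},
           {"milk", "bread", "eggs"}, {"bread"}]
freq = apriori(baskets, min_support=2)
print(freq[frozenset({"milk", "bread"})])  # -> 2 (appears in 2 baskets)
```

Raising `min_support` prunes more candidates and speeds up the search at the cost of missing rarer patterns, which is the accuracy/efficiency trade-off the abstract describes.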
APA, Harvard, Vancouver, ISO, and other styles
6

Klebanov, A. F., A. V. Bondarenko, Yu L. Zhukovsky, and D. A. Klebanov. "Establishing remote control centers of a mining operation: strategic prerequisites and implementation stages." Mining Industry Journal (Gornay Promishlennost), no. 4/2024 (August 23, 2024): 174–83. http://dx.doi.org/10.30686/1609-9192-2024-4-174-183.

Full text
Abstract:
The article proposes the following plan to implement a project to create a remote control center of a mining company: (1) creation of infrastructural and technological conditions for remote control of equipment and mining operations; (2) organization of a control center located at a significant distance from the mining operations, with the successive transfer to it of the functions of planning, monitoring, control, and dispatching; (3) development of methodological and regulatory support for mining operations with the use of robotic equipment and the transition to remote control and autonomous mining technologies. It is shown that the necessary condition for effective execution of the project is the development and industrial implementation of digital platform solutions for integration, end-to-end optimization, centralized data collection and analysis, and control and monitoring of the complete management cycle of mining production. Arguments are provided for the expediency of organizing dedicated service management companies (based on IT companies, i.e. developers and/or integrators of digital mining technologies) for remote management of the Intelligent Mining Enterprise. The necessity of creating analytical centers to support decision making for the optimization of mining production processes (as one of the key sub-stages of the project) on the basis of leading research organizations and mining universities is justified. Goals and objectives of the Remote Analytical Center are formulated using the case of the Digital Mining Production Laboratory at the Empress Catherine II St. Petersburg Mining University. It is stated that the creation of analytical centers for decision support will contribute to the training of qualified academic staff and accelerate the transformation processes of Russian higher education.
APA, Harvard, Vancouver, ISO, and other styles
7

Oursatyev, Oleksii A. "Data Research in Industrial Data Mining Projects in the Big Data Generation Era." Control Systems and Computers, no. 3 (303) (2023): 33–53. http://dx.doi.org/10.15407/csc.2023.03.033.

Full text
Abstract:
Introduction. The review material is based mainly on business intelligence (BI) solutions designed for tasks with corporate data, but all the main aspects of working with data discussed in the work are also used on data processing platforms (Data Science Platforms). Many BI vendors have expanded the capabilities of their systems to perform more advanced analytics, including Data Science. They added the phrase "Data Science" to their marketing research, and the term "advanced analytics" lost some popularity in relation to corporate data. The Data Science Platform provides a comprehensive set of tools for use by advanced users who traditionally work with data. Capabilities that allow connecting to multi-structured data across different types of storage platforms, both on-premises and in the cloud, together with the infrastructure architecture of a modern BI analytics platform, enable high-performance workloads, including business intelligence. It uses distributed architecture, massively parallel processing, data virtualization, in-memory computing, etc. The combination of traditional relational data processing with calculations on the well-known Apache Hadoop software infrastructure, which integrates a number of components of the Hadoop ecosystem (Apache Hive, HBase, Spark, Solr, etc.) with the necessary target functions, allows the creation of a fully functional platform for storing and processing structured and unstructured data. Purpose. A review of data processing problems and an analysis of the use of world-class mathematical apparatus and tools for obtaining knowledge from information were carried out. Methods. The paper describes the use of Data Mining methods in big data processing tasks, as well as methods of business, recommendation, and predictive analytics. Result. 
The study suggests that machine learning-enhanced master data management (MDM), data quality, data preparation, and data catalogs will converge into a single, modern Enterprise Information Management (EIM) platform applicable to most new analytics projects. The results of the analysis of the process of identifying useful data can be useful to researchers and developers of modern platforms for processing and researching data in various spheres of society. Conclusion. A review of data processing problems and an analysis of the use of world-class mathematical apparatus and tools for obtaining knowledge from information were carried out. It is shown that a high-quality solution to the problems of working with first-level data indicated in this review will be provided by data research in modern analytical platforms. Successful penetration into their essence at the level of obtaining knowledge using machine learning and artificial intelligence algorithms will make it possible to predict future results in managed objects (processes) and make informed decisions.
APA, Harvard, Vancouver, ISO, and other styles
8

Siswono, Siswono. "Peran Business Intelligence dalam Solusi Bisnis." ComTech: Computer, Mathematics and Engineering Applications 4, no. 2 (2013): 812. http://dx.doi.org/10.21512/comtech.v4i2.2518.

Full text
Abstract:
The purpose of this study is to examine the use of Business Intelligence as a critical technology solution in decision making by management. Business Intelligence applications are able to address the needs of organizations in improving the analytical skills needed in making decisions, with the ability to collect, store, analyze, and provide access to data, as well as to do various activities such as statistical analysis, forecasting, and data mining.
APA, Harvard, Vancouver, ISO, and other styles
9

Goncharenko, S. N., and A. B. Avdeev. "Problem-oriented information analytical system development for management, planning and mining enterprise production activities optimization." Issues of radio electronics, no. 11 (November 20, 2019): 82–86. http://dx.doi.org/10.21778/2218-5453-2019-11-82-86.

Full text
Abstract:
The paper considers a scenario analysis tool in the form of simulating an initial set of scenarios and selecting the optimal option for enterprise development on the basis of an information analytical system, the integrated planning system (IPS). The practical field of application of the IPS as a simulation modeling tool is shown: optimizing enterprise activity in terms of the main technical and economic indicators, strategic planning and production system development, and the valuation and structuring of assets. Solving such tasks within the IPS framework will improve plan recosting speed and planning accuracy, allow modeling of multiple enterprise development scenarios that take into account changes in macroeconomic indicators, and solve the task of choosing the optimal one, which will significantly affect the quality and speed of management decisions. This system will allow data consolidation and integration with other information and analytical systems of the industrial enterprise. The IPS contains applications for creating and viewing reports, as well as data preparation and analysis functionality, tools for developing and editing program code, and analytical solutions for studying data structure and building analytical models, including scenario models; it is a complete analytical platform that provides a secure multi-user environment for simultaneous access to data. The system provides a specialized data warehouse section whose logical and physical structure is designed to create a special report, or a group of reports, for a specific section of the subject area.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Yang, Yourong Chen, Kelei Miao, Tiaojuan Ren, Changchun Yang, and Meng Han. "A Novel Data-Driven Evaluation Framework for Fork after Withholding Attack in Blockchain Systems." Sensors 22, no. 23 (2022): 9125. http://dx.doi.org/10.3390/s22239125.

Full text
Abstract:
In the blockchain system, mining pools are popular for miners to work collectively and obtain more revenue. Nowadays, there are consensus attacks that threaten the efficiency and security of mining pools. As a new type of consensus attack, the Fork After Withholding (FAW) attack can cause huge economic losses to mining pools. Currently, there are a few evaluation tools for FAW attacks, but it is still difficult to evaluate the FAW attack protection capability of target mining pools. To address the above problem, this paper proposes a novel evaluation framework for FAW attack protection of the target mining pools in blockchain systems. In this framework, we establish the revenue model for mining pools, including honest consensus revenue, block withholding revenue, successful fork revenue, and consensus cost. We also establish the revenue functions of target mining pools and other mining pools, respectively. In particular, we propose an efficient computing power allocation optimization algorithm (CPAOA) for FAW attacks against multiple target mining pools. We propose a model-solving algorithm based on improved Aquila optimization by improving the selection mechanism in different optimization stages, which can increase the convergence speed of the model solution and help find the optimal solution in computing power allocation. Furthermore, to greatly reduce the possibility of falling into local optimal solutions, we propose a solution update mechanism that combines the idea of scout bees in an artificial bee colony optimization algorithm and the constraint of allocating computing power. The experimental results show that the framework can effectively evaluate the revenue of various mining pools. CPAOA can quickly and accurately allocate the computing power of FAW attacks according to the computing power of the target mining pool. 
Thus, the proposed evaluation framework can effectively help evaluate the FAW attack protection capability of multiple target mining pools and ensure the security of the blockchain system.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Analytical solutions for data mining"

1

Reinartz, Thomas. "Focusing solutions for data mining : analytical studies and experimental results in real world domains / T. Reinartz." Berlin, 1999. http://d-nb.info/965635090/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Zhao. "Spatial Data Mining Analytical Environment for Large Scale Geospatial Data." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2284.

Full text
Abstract:
Nowadays, many applications are continuously generating large-scale geospatial data. Vehicle GPS tracking data, aerial surveillance drones, LiDAR (Light Detection and Ranging), world-wide spatial networks, and high-resolution optical or Synthetic Aperture Radar imagery data all generate a huge amount of geospatial data. However, as data collection increases, our ability to process this large-scale geospatial data in a flexible fashion is still limited. We propose a framework for processing and analyzing large-scale geospatial and environmental data using a "Big Data" infrastructure. Existing Big Data solutions do not include a specific mechanism to analyze large-scale geospatial data. In this work, we extend HBase with a spatial index (R-Tree) and HDFS to support geospatial data and demonstrate its analytical use with some common geospatial data types and data mining technology provided by the R language. The resulting framework has a robust capability to analyze large-scale geospatial data using spatial data mining and makes its outputs available to end users.
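As a much-simplified illustration of why a spatial index such as an R-Tree speeds up such queries (whole groups of points can be skipped by testing their bounding boxes), consider this sketch. It uses a single level of bounding boxes and invented coordinates; the thesis itself integrates a real R-Tree with HBase and HDFS.

```python
# Simplified spatial-index pruning: group nearby points under bounding
# boxes so a range query can discard whole groups without inspecting
# every point. A real R-Tree nests such boxes hierarchically.

def bbox(points):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a group."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def intersects(b, q):
    """True when two rectangles overlap."""
    return not (b[2] < q[0] or q[2] < b[0] or b[3] < q[1] or q[3] < b[1])

def range_query(groups, q):
    """Return points inside query rectangle q = (xmin, ymin, xmax, ymax)."""
    hits = []
    for pts in groups:
        if not intersects(bbox(pts), q):   # prune the whole group
            continue
        hits += [p for p in pts
                 if q[0] <= p[0] <= q[2] and q[1] <= p[1] <= q[3]]
    return hits

groups = [[(1, 1), (2, 2)], [(10, 10), (11, 12)]]
print(range_query(groups, (0, 0, 3, 3)))   # -> [(1, 1), (2, 2)]
```

For the query above, the second group's box (10, 10, 11, 12) fails the overlap test, so its points are never examined; that pruning is what makes index-backed queries scale to large datasets.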
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Chenghui. "Data mining for direct marketing: problems and solutions." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39847.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ur-Rahman, Nadeem. "Textual data mining applications for industrial knowledge management solutions." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/6373.

Full text
Abstract:
In recent years knowledge has become an important resource to enhance the business and many activities are required to manage these knowledge resources well and help companies to remain competitive within industrial environments. The data available in most industrial setups is complex in nature and multiple different data formats may be generated to track the progress of different projects either related to developing new products or providing better services to the customers. Knowledge Discovery from different databases requires considerable efforts and energies and data mining techniques serve the purpose through handling structured data formats. If however the data is semi-structured or unstructured the combined efforts of data and text mining technologies may be needed to bring fruitful results. This thesis focuses on issues related to discovery of knowledge from semi-structured or unstructured data formats through the applications of textual data mining techniques to automate the classification of textual information into two different categories or classes which can then be used to help manage the knowledge available in multiple data formats. Applications of different data mining techniques to discover valuable information and knowledge from manufacturing or construction industries have been explored as part of a literature review. The application of text mining techniques to handle semi-structured or unstructured data has been discussed in detail. A novel integration of different data and text mining tools has been proposed in the form of a framework in which knowledge discovery and its refinement processes are performed through the application of Clustering and Apriori Association Rule of Mining algorithms. Finally the hypothesis of acquiring better classification accuracies has been detailed through the application of the methodology on case study data available in the form of Post Project Reviews (PPRs) reports. 
The process of discovering useful knowledge, its interpretation and utilisation has been automated to classify the textual data into two classes.
APA, Harvard, Vancouver, ISO, and other styles
5

Cranley, Nikki. "Challenges and Solutions for Complex Gigabit FTI Networks." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595664.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada. This paper presents a case study of an FTI system with complex requirements in terms of the data acquisition, recording, and post-analysis. Gigabit Ethernet was the technology of choice to facilitate such a system. Recording in a Gigabit Ethernet environment raises a fresh challenge to perform fast data reduction and data mining for post-flight analysis. This paper describes the Quick Access Recorder used in this system and how it addresses this challenge.
APA, Harvard, Vancouver, ISO, and other styles
6

Laurinen, P. (Perttu). "A top-down approach for creating and implementing data mining solutions." Doctoral thesis, University of Oulu, 2006. http://urn.fi/urn:isbn:9514281268.

Full text
Abstract:
Abstract The information age is characterized by ever-growing amounts of data surrounding us. By reproducing this data into usable knowledge we can start moving toward the knowledge age. Data mining is the science of transforming measurable information into usable knowledge. During the data mining process, the measurements pass through a chain of sophisticated transformations in order to acquire knowledge. Furthermore, in some applications the results are implemented as software solutions so that they can be continuously utilized. It is evident that the quality and amount of the knowledge formed is highly dependent on the transformations and the process applied. This thesis presents an application independent concept that can be used for managing the data mining process and implementing the acquired results as software applications. The developed concept is divided into two parts – solution formation and solution implementation. The first part presents a systematic way for finding a data mining solution from a set of measurement data. The developed approach allows for easier application of a variety of algorithms to the data, manages the work chain, and differentiates between the data mining tasks. The method is based on storage of the data between the main stages of the data mining process, where the different stages of the process are defined on the basis of the type of algorithms applied to the data. The efficiency of the process is demonstrated with a case study presenting new solutions for resistance spot welding quality control. The second part of the concept presents a component-based data mining application framework, called Smart Archive, designed for implementing the solution. The framework provides functionality that is common to most data mining applications and is especially suitable for implementing applications that process continuously acquired measurements. 
The work also proposes an efficient algorithm for utilizing cumulative measurement data in the history component of the framework. Using the framework, it is possible to build high-quality data mining applications with shorter development times by configuring the framework to process application-specific data. The efficiency of the framework is illustrated using a case study presenting the results and implementation principles of an application developed for predicting steel slab temperatures in a hot strip mill. In conclusion, this thesis presents a concept that proposes solutions for two fundamental issues of data mining, the creation of a working data mining solution from a set of measurement data and the implementation of it as a stand-alone application.
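The core idea in this thesis's first part, storing data between the main stages of the data mining process so later stages can be re-run without repeating earlier ones, can be sketched as follows (a minimal illustration; the JSON-file store and the toy stages are assumptions for the example, not the thesis's design):

```python
# Staged data mining pipeline with intermediate storage: each stage reads
# the previous stage's persisted output and persists its own, so any
# stage can be re-run independently of the ones before it.
import json
import os
import tempfile

def run_stage(name, fn, workdir, source=None):
    """Run one pipeline stage, persisting its output for later stages."""
    payload = None
    if source:
        with open(os.path.join(workdir, source + ".json")) as f:
            payload = json.load(f)
    result = fn(payload)
    with open(os.path.join(workdir, name + ".json"), "w") as f:
        json.dump(result, f)
    return result

with tempfile.TemporaryDirectory() as wd:
    # Toy stages: acquire raw measurements, clean them, then "model".
    run_stage("raw", lambda _: [3, 1, 2, None], wd)
    run_stage("clean", lambda d: [x for x in d if x is not None], wd, "raw")
    result = run_stage("model", lambda d: sorted(d), wd, "clean")
    print(result)  # -> [1, 2, 3]
```

Because `clean.json` persists, the modeling stage can be repeated with a different algorithm without re-acquiring or re-cleaning the data, which is the efficiency argument the thesis makes for inter-stage storage.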
APA, Harvard, Vancouver, ISO, and other styles
7

Javaid, Muhammad Athar. "Data mining in GRACE monthly solutions / Muhammad Athar Javaid ; Betreuer: Wolfgang Keller." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2019. http://d-nb.info/1186063777/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Schwarz, Holger. "Integration von Data-Mining und online analytical processing : eine Analyse von Datenschemata, Systemarchitekturen und Optimierungsstrategien" [Integration of data mining and online analytical processing: an analysis of data schemas, system architectures, and optimization strategies]. [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10720634.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

He, Jianyi. "The Commercial Impact on Business Models of Medical Imaging Solutions through Data-Analytical Methodologies." Case Western Reserve University School of Graduate Studies / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=case1620233525109266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Techaplahetvanich, Kesaraporn. "A visualization framework for exploring correlations among attributes of a large dataset and its applications in data mining." University of Western Australia. School of Computer Science and Software Engineering, 2007. http://theses.library.uwa.edu.au/adt-WU2007.0216.

Full text
Abstract:
[Truncated abstract] Many databases in scientific and business applications have grown exponentially in size in recent years. Accessing and using databases is no longer a specialized activity as more and more ordinary users without any specialized knowledge are trying to gain information from databases. Both expert and ordinary users face significant challenges in understanding the information stored in databases. The databases are so large in most cases that it is impossible to gain useful information by inspecting data tables, which are the most common form of storing data in relational databases. Visualization has emerged as one of the most important techniques for exploring data stored in large databases. Appropriate visualization techniques can reveal trends, correlations and associations in data that are very difficult to understand from a textual representation of the data. This thesis presents several new frameworks for data visualization and visual data mining. The first technique, VisEx, is useful for visual exploration of large multi-attribute datasets and especially for exploring the correlations among the attributes in such datasets. Most previous visualization techniques can display correlations among two or three attributes at a time without excessive screen clutter. ... Although many algorithms for mining association rules have been researched extensively, they do not incorporate users in the process and most of them generate a large number of association rules. It is quite often difficult for the user to analyze a large number of rules to identify a small subset of rules that is of importance to the user. In this thesis I present a framework for the user to interactively mine association rules visually. Another challenging task in data mining is to understand the correlations among the mined association rules. It is often difficult to identify a relevant subset of association rules from a large number of mined rules. 
A further contribution of this thesis is a simple framework in the VisAR system that allows the user to explore a large number of association rules visually. A variety of businesses have adopted new technologies for storing large amounts of data. Analysis of historical data quite often offers new insights into business processes that may increase productivity and profit. On-line analytical processing (OLAP) has become a powerful tool for business analysts to explore historical data. Effective visualization techniques are very important for supporting OLAP technology. A new technique for the visual exploration of OLAP data cubes is also presented in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Analytical solutions for data mining"

1

Reinartz, Thomas, ed. Focusing Solutions for Data Mining. Springer Berlin Heidelberg, 1999. http://dx.doi.org/10.1007/3-540-48316-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Malaska, Ted, and Jonathan Seidman. Foundations for Architecting Data Solutions: Managing Successful Data Projects. Edited by Nicole Tache and Michele Cronin. O’Reilly Media, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Furtado, Pedro Nuno San-Banto, ed. Evolving application domains of data warehousing and mining: Trends and solutions. Information Science Reference, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Root, Randal. Pro SQL Server 2012 BI Solutions. Apress, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Furtado, Pedro Nuno San-Banto, ed. Evolving application domains of data warehousing and mining: Trends and solutions. Information Science Reference, 2010.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Westphal, Christopher R. Data mining solutions: Methods and tools for solving real-world problems. Wiley, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Eyob, Ephrem, ed. Social implications of data mining and information privacy: Interdisciplinary frameworks and solutions. Information Science Reference, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

van Dongen, Jos, ed. Pentaho Solutions: Business Intelligence and Data Warehousing with Pentaho and MySQL. Wiley [Imprint], 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

SAS Institute, ed. Predictive modeling with SAS Enterprise Miner: Practical solutions for business applications. 2nd ed. SAS Institute, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Sarma, Kattamuri S. Predictive modeling with SAS Enterprise Miner: Practical solutions for business applications. SAS Institute, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Analytical solutions for data mining"

1

Wong, Voon Hee, Wei Lun Tan, Jia Li Kor, and Xiao Ven Wan. "Autonomous Language Processing and Text Mining by Data Analytics for Business Solutions." In Proceedings of the International Conference on Mathematical Sciences and Statistics 2022 (ICMSS 2022). Atlantis Press International BV, 2022. http://dx.doi.org/10.2991/978-94-6463-014-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nurmalitasari, Zalizah Awang Long, and Mohammad Faizuddin Mohd Noor. "The Predictive Learning Analytics for Student Dropout Using Data Mining Technique: A Systematic Literature Review." In Advances in Technology Transfer Through IoT and IT Solutions. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-25178-8_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Naumenko, Maksym, Iryna Hrashchenko, Tetiana Tsalko, Svitlana Nevmerzhytska, Svitlana Krasniuk, and Yurii Kulynych. "Innovative technological modes of data mining and modelling for adaptive project management of food industry competitive enterprises in crisis conditions." In PROJECT MANAGEMENT: INDUSTRY SPECIFICS. TECHNOLOGY CENTER PC, 2024. https://doi.org/10.15587/978-617-8360-03-0.ch2.

Full text
Abstract:
The scientific and applied project solutions developed in this research regarding Data Mining for enterprises and companies (using the food industry as an example) involve the application of advanced cybernetic computing methods/algorithms, technological modes and scenarios (for integration, pre-processing, machine learning, testing and in-depth comprehensive interpretation of the results) of analysis and analytics of large structured and semi-structured data sets for training high-quality descriptive, predictive and even prescriptive models. The multi-mode adaptive Data Mining proposed by the authors synergistically combines, in parallel and sequential scenarios: methods of preliminary EDA, statistical analysis methods, business intelligence methods, classical machine learning algorithms and architectures, advanced methods of testing and verification of the obtained results, methods of interdisciplinary empirical expert interpretation of results, and knowledge engineering formats/techniques for the discovery of previously unknown, hidden and potentially useful patterns, relationships and trends (for innovative project management). The main methodological and technological goal of this methodology of multi-mode adaptive Data Mining for food industry enterprises is to increase the completeness (support) and accuracy of business and technical-technological modeling at all levels of project management of food industry enterprises: strategic, tactical and operational.
By optimally configuring the hyperparameters, parameters, algorithms/methods and architecture of multi-target and multidimensional explicit and implicit descriptive and predictive models, using high-performance hybrid parallel soft computing for machine learning, the improved methodology of multi-mode Data Mining proposed by the authors makes it possible to find/detect/mine new, useful, hidden corporate knowledge from previously collected, extracted and integrated Data Lakes, stimulating the overall efficiency, sustainability, and therefore competitiveness of food industry enterprises at various organizational scales (from individual craft productions to integrated international holdings) and in various food product groups and niches. In more detail, the purposes of this research are revealed in two meaningful modules. 1. The first part of the detailed goals and objectives of this research relates to the effective use of Data Mining (and modeling) in the competitive management of enterprises and companies in the modern economy, namely: – research and verification of the effectiveness of the three basic/main types of Data Mining in the management of a competitive enterprise; – detection of the basic/main difficulties and challenges of Data Mining technology in the management of a competitive enterprise; – research and generation of a list of the basic/main expedient functional applied corporate tasks for the application of the improved concept of Data Mining; – determination of the list of basic/main results of using the proposed Data Mining concept and methodology for an effective and competitive enterprise in dynamic and crisis conditions; – finding the basic/main advantages of using the proposed Data Mining concept and methodology for an effective and competitive enterprise in dynamic and crisis conditions; – research of the basic/main technological problems of using the proposed concept and methodology of Data Mining for an effective and competitive enterprise in dynamic and crisis conditions; – detection of the basic/main ethical problems of using the proposed Data Mining concept and methodology for an effective and competitive enterprise in dynamic and crisis conditions; – research and search for the basic/main perspectives of intelligent data analysis in the management of a competitive enterprise or company. 2. The second and main part of the detailed goals and objectives of this publication relates to the effective use of Data Mining (and modeling) in the competitive management of enterprises and companies in the food industry, namely: – determination of the features and methods of analysis and analytics of high-dimensional big data at enterprises of the food industry; – research of the features and development of methodological and technological techniques for an effective mode of OnLine Data Mining at food industry enterprises; – research of the specifics and development of recommendations regarding an effective mode of Ad-Hoc Data Mining at food industry enterprises; – research of the specifics and development of applied recommendations regarding an effective mode of Anomaly & Fraud Detection for technological data of food industry enterprises; – identification of directions and development of recommendations for the effective use of Hybrid Data Mining at food industry enterprises; – detection of the features and development of a complex of scientific and practical recommendations regarding an effective regime of Crisis Data Mining at food industry enterprises in dynamic and unstable external conditions; – identification of directions and development of recommendations for future trends in the effective use of Data Mining at food industry enterprises.
It can be argued that in modern conditions (pre-crisis, crisis and post-crisis conditions of both regional food industries and the globalized world; globalization and simultaneous very narrow specialization of food industry sectors; the need to take into account a huge amount of stream and packet information from various sources and in various formats; the need for a quick, adaptive and optimal management response to rapid changes in the global or regional market situation; unstable and difficult-to-predict dynamics of external influences: international, national, sectoral and local direct regulatory and indirect public regulation of the food industry), deployment of the multi-mode adaptive Data Mining methodology proposed by the authors will result in enterprises, companies and organizations/institutions of the food industry gaining additional competitive advantages at the state, regional, branch and corporate management levels.
APA, Harvard, Vancouver, ISO, and other styles
4

Mucherino, Antonio, Petraq J. Papajorgji, and Panos M. Pardalos. "Solutions to Exercises." In Data Mining in Agriculture. Springer New York, 2009. http://dx.doi.org/10.1007/978-0-387-88615-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Chong Ho Alex. "Cutting Edge Data Analytical Tools." In Data Mining and Exploration. CRC Press, 2022. http://dx.doi.org/10.1201/9781003153658-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Besong, Tabot M. D., and Arthur J. Rowe. "Acquisition and Analysis of Data from High Concentration Solutions." In Analytical Ultracentrifugation. Springer Japan, 2016. http://dx.doi.org/10.1007/978-4-431-55985-6_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Boire, Richard. "The Data Mining Process: Creation of the Analytical File." In Data Mining for Managers. Palgrave Macmillan US, 2014. http://dx.doi.org/10.1057/9781137406194_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Salleh, Mohd Najib Mohd, Noureen Talpur, and Kashif Hussain. "Adaptive Neuro-Fuzzy Inference System: Overview, Strengths, Limitations, and Solutions." In Data Mining and Big Data. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61845-6_52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

van der Aalst, Wil M. P., Matthias Jarke, István Koren, and Christoph Quix. "Digital Shadows: Infrastructuring the Internet of Production." In Internet of Production. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-030-98062-7_25-1.

Full text
Abstract:
Digitization in the field of production is fragmented in very different domains, ranging from materials to production technology to process and business models. Each domain comes with specialized knowledge, often incorporated into mathematical models. This heterogeneity makes it hard to naively exploit advances in data-driven machine learning that could facilitate situation adaptation and experience transfer. Innovative combinations of model-driven and data-driven solutions must be invented but also made comparable and interoperable to avoid ending up in information silos. In future World Wide Labs (WWLs), experiences can be shared, aggregated, and used for innovation. WWLs will be complex, evolving socio-technical networks of interconnected devices, software, data stores, and humans as users and contributors of expert knowledge and feedback. Integrating a large number of research labs, engineering, and production sites requires a capable cross-domain Internet of Production (IoP) infrastructure. The IoP project claims Digital Shadows (DSs) to offer a shared conceptual foundation for infrastructuring the IoP. In engineering, DSs were introduced as the data provision link to Digital Twins, whereas in computer science, DSs generalize the well-established concept of database views. In this chapter, we elaborate on the roles of DSs in infrastructuring the IoP from three perspectives: analytic functionality, conceptual organization, and technical networking. As an example where an integrative DS-like approach is already highly successful, we showcase the approach and infrastructure of the process mining field.
APA, Harvard, Vancouver, ISO, and other styles
10

van der Aalst, Wil, Matthias Jarke, István Koren, and Christoph Quix. "Digital Shadows: Infrastructuring the Internet of Production." In Internet of Production. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-44497-5_25.

Full text
Abstract:
Digitization in the field of production is fragmented in very different domains, ranging from materials to production technology to process and business models. Each domain comes with specialized knowledge, often incorporated into mathematical models. This heterogeneity makes it hard to naively exploit advances in data-driven machine learning that could facilitate situation adaptation and experience transfer. Innovative combinations of model-driven and data-driven solutions must be invented but also made comparable and interoperable to avoid ending up in information silos. In future World Wide Labs (WWLs), experiences can be shared, aggregated, and used for innovation. WWLs will be complex, evolving socio-technical networks of interconnected devices, software, data stores, and humans as users and contributors of expert knowledge and feedback. Integrating a large number of research labs, engineering, and production sites requires a capable cross-domain Internet of Production (IoP) infrastructure. The IoP project claims Digital Shadows (DSs) to offer a shared conceptual foundation for infrastructuring the IoP. In engineering, DSs were introduced as the data provision link to Digital Twins, whereas in computer science, DSs generalize the well-established concept of database views. In this chapter, we elaborate on the roles of DSs in infrastructuring the IoP from three perspectives: analytic functionality, conceptual organization, and technical networking. As an example where an integrative DS-like approach is already highly successful, we showcase the approach and infrastructure of the process mining field.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Analytical solutions for data mining"

1

Chen, Daxuan, Tristan S. Dyck, Carson K. Leung, Trevor D. Neudorf, and Linpu Zhang. "A Visual Analytics Solution for Analyzing and Mining Infectious Disease Data." In 2024 28th International Conference Information Visualisation (IV). IEEE, 2024. http://dx.doi.org/10.1109/iv64223.2024.00058.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Horvath, Jack W., Jose Cavallaro, and Craig Fissette. "A New Analytical Technique for Chemical Cleaning Solutions." In CORROSION 1987. NACE International, 1987. https://doi.org/10.5006/c1987-87390.

Full text
Abstract:
A new method was developed utilizing ion liquid chromatography (ILC) to analyze chelant boiler cleaning solutions. First efforts at analyzing Fe-Cu-(NH4)4 EDTA solutions encountered a separation problem: the iron and copper chelates were not separated enough to allow quantitative analysis of each. The problem was solved by changing the eluent composition. Linearity of chromatograph peak area versus concentration plots for each component was demonstrated and reproducibility confirmed. Commercial AA standards were used to prepare calibration solutions, and analytical data were confirmed by duplicate analyses of samples using separate analytical methods. The newly developed analytical method was first used in a mobile laboratory to analyze cleaning solutions during the cleaning of boiler number two and its economizer at Basin Electric Power Cooperative’s Laramie River Station near Wheatland, Wyoming. During the job, ILC analytical results were checked by titration and by spectrophotometric methods; data from the different methods were found to be in close agreement. A major benefit of ILC for during-the-job boiler cleaning solution analyses is that three analytical tests are replaced by one procedure. This allows a single chemist or technician to better keep up with frequent sampling schedules; not being so hard pressed, he is better able to do careful work and produce accurate results.
APA, Harvard, Vancouver, ISO, and other styles
3

Kelkar, Anuja, Utkarsh Naiknaware, Sachin Sukhlecha, Ashish Sanadhya, Maitreya Natu, and Vaishali Sadaphal. "Analytics-Based Solutions for Improving Alert Management Service for Enterprise Systems." In 2013 IEEE 13th International Conference on Data Mining Workshops (ICDMW). IEEE, 2013. http://dx.doi.org/10.1109/icdmw.2013.166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zaluski, Marvin, Sylvain Létourneau, Jeff Bird, and Chunsheng Yang. "Developing Data Mining-Based Prognostic Models for CF-18 Aircraft." In ASME Turbo Expo 2010: Power for Land, Sea, and Air. ASMEDC, 2010. http://dx.doi.org/10.1115/gt2010-22944.

Full text
Abstract:
The CF-18 aircraft is a complex system for which a variety of data are systematically being recorded: operational flight data from sensors and Built-In Test Equipment (BITE) and maintenance activities recorded by personnel. These data resources are stored and used within the operating organization but new analytical and statistical techniques and tools are being developed that could be applied to these data to benefit the organization. This paper investigates the utility of readily available CF-18 data to develop data mining-based models for prognostics and health management (PHM) systems. We introduce a generic data mining methodology developed to build prognostic models from operational and maintenance data and elaborate on challenges specific to the use of CF-18 data from the Canadian Forces. We focus on a number of key data mining tasks including: data gathering, information fusion, data pre-processing, model building, and evaluation. The solutions developed to address these tasks are described. A software tool developed to automate the model development process is also presented. Finally, the paper discusses preliminary results on the creation of models to predict F404 No. 4 Bearing and MFC (Main Fuel Control) failures on the CF-18.
APA, Harvard, Vancouver, ISO, and other styles
5

"SMART CITY SOLUTIONS FOR LOCATION TRACKING." In 15th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing (CGVCVIP 2021), the 7th International Conference on Connected Smart Cities (CSC 2021) and 6th International Conference on Big Data Analytics, Data Mining and Computational Intelligence (BigDaCI’21). IADIS Press, 2021. http://dx.doi.org/10.33965/mccsis2021_202107l012.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yan, Xin, Mu Qiao, Timothy W. Simpson, Jia Li, and Xiaolong Luke Zhang. "LIVE: A Work-Centered Approach to Support Visual Analytics of Multi-Dimensional Engineering Design Data With Interactive Visualization and Data-Mining." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48333.

Full text
Abstract:
During the process of trade space exploration, information overload has become a notable problem. To find the best design, designers need more efficient tools to analyze the data, explore possible hidden patterns, and identify preferable solutions. When dealing with large-scale, multi-dimensional, continuous data sets (e.g., design alternatives and potential solutions), designers can be easily overwhelmed by the volume and complexity of the data. Traditional information visualization tools have some limits to support the analysis and knowledge exploration of such data, largely because they usually emphasize the visual presentation of and user interaction with data sets, and lack the capacity to identify hidden data patterns that are critical to in-depth analysis. There is a need for the integration of user-centered visualization designs and data-oriented data analysis algorithms in support of complex data analysis. In this paper, we present a work-centered approach to support visual analytics of multi-dimensional engineering design data by combining visualization, user interaction, and computational algorithms. We describe a system, Learning-based Interactive Visualization for Engineering design (LIVE), that allows designer to interactively examine large design input data and performance output data analysis simultaneously through visualization. We expect that our approach can help designers analyze complex design data more efficiently and effectively. We report our preliminary evaluation on the use of our system in analyzing a design problem related to aircraft wing sizing.
APA, Harvard, Vancouver, ISO, and other styles
7

Sarbu, Daniela Anca. "ELEARNING SOLUTION TO SUSTAIN THE DECISION MAKING PROCESS FOR PROVISIONING OPTIMIZED TELEMEDICINE SERVICES." In eLSE 2014. Editura Universitatii Nationale de Aparare "Carol I", 2014. http://dx.doi.org/10.12753/2066-026x-14-024.

Full text
Abstract:
As eLearning is more and more employed nowadays as an alternative to traditional learning scenarios, a multitude of eLearning solutions and providers have emerged on the market. Moreover, this domain has advanced to such lengths that most organizations already possess a standard suite of such applications. This paper introduces an eLearning solution focused on exposing specialized and personalized content describing the usage of telemedicine services across widely spread geographical areas. The solution mainly consists of interactive dashboards that present, in a simple, intuitive and highly visual manner, the results of complex CDR (Call Detail Records) analyses. The system is fed by an analytical platform, whose architectural design is described in this paper together with an argument for why the proposed data modelling approach and techniques are optimal and ready to respond to the pursued areas of interest. Overall, the aim of the eLearning solution is to help decision makers better perceive their business from a global and analytical point of view and understand behaviours in terms of telemedicine usage trends. This paper focuses on a line of research that has not yet been pursued, but that could impact the optimization of telemedicine service offerings. In a nutshell, exposing the analysis of terabytes of CDRs through specific data-mining techniques in a clear and simplified manner supports learning about business facts and trends. Moreover, the assimilation of this type of knowledge is vital in enabling informed decisions about the optimization of provisioning telemedicine services in conformity with their actual usage. In addition, we envision an analogy to traditional Learning Analytics tools centred on personalized data visualization.
APA, Harvard, Vancouver, ISO, and other styles
8

Milani, Alessandra Maciel Paz, Fernando V. Paulovich, and Isabel Harb Manssour. "Preprocessing Profiling Model for Visual Analytics." In Conference on Graphics, Patterns and Images. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sibgrapi.est.2020.12991.

Full text
Abstract:
Analyzing and managing raw data remain a challenging part of the data analysis process, particularly during data preprocessing. Although there are studies proposing design implications or recommendations for visualization solutions within the data analysis scope, they do not focus on the challenges of the preprocessing phase. Likewise, current Visual Analytics processes do not treat preprocessing as an equally important stage. With this study, we aim to contribute to the discussion of how methods of visualization and data mining can be used and combined to assist data analysts during preprocessing activities. To achieve that, we introduce the Preprocessing Profiling Model for Visual Analytics, which contemplates a set of features to inspire the implementation of new solutions. These features were designed considering a list of insights we obtained during an interview study with thirteen data analysts. Our contributions can be summarized as offering resources to promote a shift to visual preprocessing.
APA, Harvard, Vancouver, ISO, and other styles
9

Yesselbayeva, Saltanat, Máté Urbán, and Péter Görög. "EFFECT OF SEISMIC LOAD ON THE STABILITY OF A DOLOMITE QUARRY IN HUNGARY." In 3rd Croatian Conference on Earthquake Engineering. University of Zagreb Faculty of Civil Engineering, 2025. https://doi.org/10.5592/co/3crocee.2025.140.

Full text
Abstract:
Rock slope stability in seismically active regions is a critical aspect of safe and sustainable quarrying. This study investigates the effect of seismic loading on the stability of a dolomite quarry, integrating traditional analytical methods with advanced two-dimensional finite element models. The site, located near a historically active earthquake zone, provides an ideal environment to explore slope stability under both static and dynamic conditions. Site measurements and rock mass data, supplemented by geological data from comparable formations, inform the development of a comprehensive geotechnical model. The methodology follows the slope design framework of Read and Stacey (2009), emphasizing kinematic analysis and failure mode evaluation, particularly relevant to the structural integrity of strong, jointed rock masses like dolomite. Regulations, including Eurocode 7 partial factors and criteria for safety and failure probability, frame the analytical approach. Slope stability is first assessed through empirical techniques, such as SMR and Q-slope, then through detailed analytical solutions using RocScience software. RocScience RS2 is used to examine the application of seismic load in finite element models and to highlight potential failures under earthquake conditions. While optimization of mining slopes is outside the scope, this analysis emphasizes model sensitivity to seismic inputs, joint geometry, and mesh size, demonstrating critical conditions under dynamic loading. Limitations in data availability, particularly historical earthquake data, are addressed with approximate time-history data.
APA, Harvard, Vancouver, ISO, and other styles
10

Berta, Dora Anca. "BUSINESS INTELLIGENCE IN EDUCATION." In eLSE 2012. Editura Universitara, 2012. http://dx.doi.org/10.12753/2066-026x-12-101.

Full text
Abstract:
In order to steer their internal organization, companies grappling with mountains of data need to obtain information about the processes and people within the organization, the companies’ environment, and other factors that influence their business. Business Intelligence (BI) provides support for delivering this information and helps organizations achieve this focus, giving them the complete vision to learn from the past, monitor the present and gain an insight into the future. Nowadays, the market offers a wide range of software under the BI "umbrella": products that are a combination of decision support systems, query and reporting tools, online analytical processing (OLAP), and forecasting and data mining systems. To discover initially hidden information received from a data warehouse, BI uses a variety of algorithms to establish relationships between different data and variables. The effectiveness of these tools is quite high, given that BI solutions have begun to appear that serve not only economic-profile organizations but also other types of organizations we might not expect, such as education. In this article we propose to prove that Business Intelligence solutions can also be used successfully in education. The educational environment needs this software because Business Intelligence combines data gathering, data storage, and knowledge management with analytical tools to present complex, useful and competitive information about students, results, performance and interesting correlations between different educational variables. BI solutions include the latest and most advanced technologies to support decision making and cover all information resources necessary to support decisions. The models offered make calculations and highlight the knowledge, while the final decision-maker relates the results to reality and takes the decision, even in education.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Analytical solutions for data mining"

1

Kamath, C., J. Franzman, and R. Ponmalai. Data Mining for Faster, Interpretable Solutions to Inverse Problems: A Case Study Using Additive Manufacturing. Office of Scientific and Technical Information (OSTI), 2021. http://dx.doi.org/10.2172/1763188.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gates, Allison, Michelle Gates, Shannon Sim, Sarah A. Elliott, Jennifer Pillay, and Lisa Hartling. Creating Efficiencies in the Extraction of Data From Randomized Trials: A Prospective Evaluation of a Machine Learning and Text Mining Tool. Agency for Healthcare Research and Quality (AHRQ), 2021. http://dx.doi.org/10.23970/ahrqepcmethodscreatingefficiencies.

Full text
Abstract:
Background. Machine learning tools that semi-automate data extraction may create efficiencies in systematic review production. We prospectively evaluated an online machine learning and text mining tool’s ability to (a) automatically extract data elements from randomized trials, and (b) save time compared with manual extraction and verification. Methods. For 75 randomized trials published in 2017, we manually extracted and verified data for 21 unique data elements. We uploaded the randomized trials to ExaCT, an online machine learning and text mining tool, and quantified performance by evaluating the tool’s ability to identify the reporting of data elements (reported or not reported), and the relevance of the extracted sentences, fragments, and overall solutions. For each randomized trial, we measured the time to complete manual extraction and verification, and to review and amend the data extracted by ExaCT (simulating semi-automated data extraction). We summarized the relevance of the extractions for each data element using counts and proportions, and calculated the median and interquartile range (IQR) across data elements. We calculated the median (IQR) time for manual and semiautomated data extraction, and overall time savings. Results. The tool identified the reporting (reported or not reported) of data elements with median (IQR) 91 percent (75% to 99%) accuracy. Performance was perfect for four data elements: eligibility criteria, enrolment end date, control arm, and primary outcome(s). Among the top five sentences for each data element at least one sentence was relevant in a median (IQR) 88 percent (83% to 99%) of cases. Performance was perfect for four data elements: funding number, registration number, enrolment start date, and route of administration. Among a median (IQR) 90 percent (86% to 96%) of relevant sentences, pertinent fragments had been highlighted by the system; exact matches were unreliable (median (IQR) 52 percent [32% to 73%]). 
A median 48 percent of solutions were fully correct, but performance varied greatly across data elements (IQR 21% to 71%). Using ExaCT to assist the first reviewer resulted in a modest time savings compared with manual extraction by a single reviewer (17.9 vs. 21.6 hours total extraction time across 75 randomized trials). Conclusions. Using ExaCT to assist with data extraction resulted in modest gains in efficiency compared with manual extraction. The tool was reliable for identifying the reporting of most data elements. The tool’s ability to identify at least one relevant sentence and highlight pertinent fragments was generally good, but changes to sentence selection and/or highlighting were often required.
APA, Harvard, Vancouver, ISO, and other styles
3

Navas-Alemán, Lizbeth. Innovation and Competitiveness in Mining Value Chains: The Case of Brazil. Inter-American Development Bank, 2021. http://dx.doi.org/10.18235/0003813.

Full text
Abstract:
Mining companies have mirrored other large multinational companies in setting up global value chains (GVCs), sourcing their inputs and services from an ever-larger number of highly capable suppliers in developing countries, such as those in resource-rich Latin America. However, recent empirical studies on the mining GVC in that region suggest that even innovative local suppliers find it difficult to exploit their innovations in local and foreign markets. Using a conceptual framework that combines literature on innovation and GVCs, this study analyzed how global/regional- and firm-level factors interact to explain the acquisition of local suppliers’ capabilities within Brazil’s mining industry. The study explored these issues using original data gathered in 2019 and secondary sources from Brazil. The main findings relate to (i) strategies used by domestic suppliers to develop innovative solutions for leading mining companies, (ii) how health and safety concerns spurred innovation after the disasters in Mariana and Brumadinho, (iii) new-to-the-world innovation capabilities among Brazilian suppliers to the mining industry, and (iv) the main barriers to developing innovative practices among domestic suppliers. The authors propose public policies to support major mining companies in acquiring innovations from domestic suppliers to the mining industry. Opportunities such as a Copper Rush in Brazil that could foster further innovations in mining are discussed.
4

Neyedley, K., J. J. Hanley, Z. Zajacz, and M. Fayek. Accessory mineral thermobarometry, trace element chemistry, and stable O isotope systematics, Mooshla Intrusive Complex (MIC), Doyon-Bousquet-LaRonde mining camp, Abitibi greenstone belt, Québec. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328986.

Abstract:
The Mooshla Intrusive Complex (MIC) is an Archean polyphase magmatic body located in the Doyon-Bousquet-LaRonde (DBL) mining camp of the Abitibi greenstone belt, Québec, that is spatially associated with numerous gold (Au)-rich VMS, epizonal 'intrusion-related' Au-Cu vein systems, and shear zone-hosted (orogenic?) Au deposits. To elucidate the P-T conditions of crystallization and the oxidation state of the MIC magmas, accessory minerals (zircon, rutile, titanite) have been characterized using a variety of analytical techniques (e.g., trace element thermobarometry). The resulting trace element and oxythermobarometric database for accessory minerals in the MIC represents the first examination of such parameters in an Archean magmatic complex in a world-class mineralized district. Mineral thermobarometry yields P-T constraints on accessory mineral crystallization consistent with the expected conditions of tonalite-trondhjemite-granite (TTG) magma genesis, well above peak metamorphic conditions in the DBL camp. Together with textural observations and mineral trace element data, the P-T estimates reassert that the studied minerals are of magmatic origin and not a product of metamorphism. Oxygen fugacity constraints indicate that while the magmas are relatively oxidizing (as indicated by the presence of magmatic epidote, titanite, and anhydrite), zircon trace element systematics indicate that the magmas were not as oxidized as arc magmas in younger (post-Archean) porphyry environments. The data presented provide the first constraints on the depth and other conditions of melt generation and crystallization of the MIC. The P-T estimates and qualitative fO2 constraints have significant implications for the overall model for formation (crystallization, emplacement) of the MIC and potentially related mineral deposits.
5

Robinson, W., Jeremiah Stache, Jeb Tingle, Carlos Gonzalez, Anastasios Ioannides, and James Rushing. Naval expeditionary runway construction criteria : P-8 Poseidon pavement requirements. Engineer Research and Development Center (U.S.), 2023. http://dx.doi.org/10.21079/11681/46857.

Abstract:
A full-scale airfield pavement test section was constructed and trafficked by the US Army Engineer Research and Development Center to determine minimum rigid and flexible pavement thickness requirements to support contingency operations of the P-8 Poseidon aircraft. Additionally, airfield damage repair solutions were tested to evaluate the compatibility of those solutions with the P-8 Poseidon. The test items consisted of various material thicknesses and strengths to yield a range of operations to failure, allowing development of performance predictions at a relatively lower number of design operations than are considered in traditional sustainment pavement design scenarios. Test items were trafficked with a dual-wheel P-8 test gear on a heavy-vehicle simulator. Flexible pavement rutting, rigid pavement cracking and spalling, instrumentation response, and falling-weight deflectometer data were monitored at select traffic intervals. The results of the trafficking tests indicated that existing design predictions were generally overconservative. Thus, minimum pavement layer thickness recommendations were made to support a minimum level of contingency operations. The results of the full-scale flexible pavement experiment were utilized to support an analytical modeling effort to extend flexible pavement thickness recommendations beyond those evaluated.
6

bin Ahsan, Wahid. The EDIT UX Framework: A User-Centered Approach to Effective Product Redesign. Userhub, 2024. http://dx.doi.org/10.58947/zxkd-kldq.

Abstract:
In the dynamic domain of web and mobile application development, the imperative to continuously evolve and enhance user experience is paramount. The EDIT UX Framework offers a robust, systematic approach to redesign, aimed at significantly enhancing user engagement, accessibility, and business performance. This framework is delineated into four pivotal stages: (1) Evaluation, which establishes a solid analytical foundation by synthesizing metrics analysis, heuristic evaluations, accessibility assessments, and user insights; (2) Design, where ideation and prototyping are driven by user-centric insights, fostering innovative solutions; (3) Iteration, a phase dedicated to refining designs through iterative user feedback and rigorous testing, with an unwavering focus on inclusivity and accessibility; and (4) Transformation, which transitions the refined product into the market, emphasizing continuous evaluation and iterative enhancements post-launch. By integrating principles of user-centered design, data-driven decision-making, and comprehensive accessibility, the EDIT UX Framework empowers design teams to create digital experiences that not only meet but exceed user expectations, ensuring a product's resilience and adaptability in an ever-evolving digital landscape.
7

Neyedley, K., J. J. Hanley, P. Mercier-Langevin, and M. Fayek. Ore mineralogy, pyrite chemistry, and S isotope systematics of magmatic-hydrothermal Au mineralization associated with the Mooshla Intrusive Complex (MIC), Doyon-Bousquet-LaRonde mining camp, Abitibi greenstone belt, Québec. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328985.

Abstract:
The Mooshla Intrusive Complex (MIC) is an Archean polyphase magmatic body located in the Doyon-Bousquet-LaRonde (DBL) mining camp of the Abitibi greenstone belt, Québec. The MIC is spatially associated with numerous gold (Au)-rich VMS, epizonal 'intrusion-related' Au-Cu vein systems, and shear zone-hosted (orogenic?) Au deposits. To elucidate genetic links between deposits and the MIC, mineralized samples from two of the epizonal 'intrusion-related' Au-Cu vein systems (Doyon and Grand Duc Au-Cu) have been characterized using a variety of analytical techniques. Preliminary results indicate gold (as electrum) from both deposits occurs relatively late in the systems as it is primarily observed along fractures in pyrite and gangue minerals. At Grand Duc gold appears to have formed syn- to post-crystallization relative to base metal sulphides (e.g. chalcopyrite, sphalerite, pyrrhotite), whereas base metal sulphides at Doyon are relatively rare. The accessory ore mineral assemblage at Doyon is relatively simple compared to Grand Duc, consisting of petzite (Ag3AuTe2), calaverite (AuTe2), and hessite (Ag2Te), while accessory ore minerals at Grand Duc are comprised of tellurobismuthite (Bi2Te3), volynskite (AgBiTe2), native Te, tsumoite (BiTe) or tetradymite (Bi2Te2S), altaite (PbTe), petzite, calaverite, and hessite. Pyrite trace element distribution maps from representative pyrite grains from Doyon and Grand Duc were collected and confirm petrographic observations that Au occurs relatively late. Pyrite from Doyon appears to have been initially trace-element poor, then became enriched in As, followed by the ore metal stage consisting of Au-Ag-Te-Bi-Pb-Cu enrichment and lastly a Co-Ni-Se(?) stage enrichment. Grand Duc pyrite is more complex with initial enrichments in Co-Se-As (Stage 1) followed by an increase in As-Co(?) concentrations (Stage 2). 
The ore metal stage (Stage 3) is indicated by another increase in As coupled with Au-Ag-Bi-Te-Sb-Pb-Ni-Cu-Zn-Sn-Cd-In enrichment. The final stage of pyrite growth (Stage 4) is represented by the same element assemblage as Stage 3 but at lower concentrations. Preliminary sulphur isotope data from Grand Duc indicates pyrite, pyrrhotite, and chalcopyrite all have similar delta-34S values (~1.5 ± 1 per mil) with no core-to-rim variations. Pyrite from Doyon has slightly higher delta-34S values (~2.5 ± 1 per mil) compared to Grand Duc but similarly does not show much core-to-rim variation. At Grand Duc, the occurrence of Au concentrating along the rim of pyrite grains and associated with an enrichment in As and other metals (Sb-Ag-Bi-Te) shares similarities with porphyry and epithermal deposits, and the overall metal association of Au with Te and Bi is a hallmark of other intrusion-related gold systems. The occurrence of the ore metal-rich rims on pyrite from Grand Duc could be related to fluid boiling which results in the destabilization of gold-bearing aqueous complexes. Pyrite from Doyon does not show this inferred boiling texture but shares characteristics of dissolution-reprecipitation processes, where metals in the pyrite lattice are dissolved and then reconcentrated into discrete mineral phases that commonly precipitate in voids and fractures created during pyrite dissolution.
8

Kong, Zhihao, and Na Lu. Determining Optimal Traffic Opening Time Through Concrete Strength Monitoring: Wireless Sensing. Purdue University, 2023. http://dx.doi.org/10.5703/1288284317613.

Abstract:
Construction and concrete production are time-sensitive and fast-paced; as such, it is crucial to monitor the in-place strength development of concrete structures in real time. Existing concrete strength testing methods, such as the traditional hydraulic compression method specified by ASTM C 39 and the maturity method specified by ASTM C 1074, are labor-intensive, time-consuming, and difficult to implement in the field. INDOT’s previous research (SPR-4210) on the electromechanical impedance (EMI) technique has established its feasibility for monitoring in-situ concrete strength to determine the optimal traffic opening time. However, limitations of the data acquisition and communication systems have significantly hindered the technology’s adoption for practical applications. Furthermore, the packaging of the piezoelectric sensor needs to be improved to enable robust performance and better signal quality. In this project, a wireless concrete sensor with a data transmission system was developed. It comprised an innovative EMI sensor and a miniaturized datalogger with both a wireless transmission and a USB module. A cloud-based platform for data storage and computation was established, which provides real-time data visualization to general users and data access to machine learning and data mining developers. Furthermore, field implementations were performed to prove the functionality of the innovative EMI sensor and wireless sensing system for real-time and in-place concrete strength monitoring. This project will benefit the DOTs in areas like construction, operation, and maintenance scheduling and asset management by delivering applicable concrete strength monitoring solutions.
9

Rimpel, Aaron. PR-316-17200-R03 A Study of the Effects of Liquid Contamination on Seal Performance. Pipeline Research Council International, Inc. (PRCI), 2021. http://dx.doi.org/10.55274/r0012015.

Abstract:
This project is a continuation of research to enhance dry gas seal (DGS) reliability. Previous work reviewed failures from literature and the experience of manufacturers and end-users and identified liquid contamination as the most common cause, but it was concluded that there was insufficient quantitative data on which to base recommendations for further DGS reliability enhancements. Therefore, experimental and analytical investigations were pursued to fill the void. The ultimate objective was to be able to predict DGS failures due to liquid contamination, which could lead to greater DGS reliability through improvements in design, instrumentation, and monitoring. In the previous project phase, testing had demonstrated that the introduction of small quantities of oil (liquid mass fraction up to 3%) produced a slight increase in torque, but impacts on temperatures and leakage were negligible. Previous simulations demonstrated converged two-phase computational fluid dynamics (CFD) with conjugate heat transfer (CHT) solutions of the seal and reasonable trends, but the agreement with test data was lower than desired. The current project phase made significant improvements to the single- and two-phase CFD simulation of the DGS, reducing the discrepancy in all previously reported performance parameters. The current simulations were performed only at the 700 psi supply pressure case. Ideal gas was used, and CHT coupling was used to predict temperatures of the primary ring. The previous wall thermal boundary conditions were not well understood, so the current work focused on establishing performance with adiabatic walls.
10

Li, Honghai, Lihwa Lin, Cody Johnson, et al. A revisit and update on the verification and validation of the Coastal Modeling System (CMS) : report 1--hydrodynamics and waves. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/45444.

Abstract:
This is the first part of a two-part report that revisits and updates the verification and validation (V&V) of the Coastal Modeling System (CMS). The V&V study in this part of the report focuses on hydrodynamic and wave modeling. With the updated CMS code (Version 5) and its latest graphical user interface, the Surface-water Modeling System (Version 13), the goal of this study is to revisit some early CMS V&V cases and assess some new cases on model performance in coastal applications. The V&V process includes the comparison and evaluation of the CMS output against analytical solutions, laboratory experiments in prototype cases, and field cases in and around coastal inlets and navigation projects. The V&V results prove that the basic physics incorporated are represented well, the computational algorithms implemented are accurate, and the coastal processes are reproduced well. This report provides detailed descriptions of those test simulations, including the model configuration, the selection of model parameters, the determination of model forcing, and the quantitative assessment of the model and data comparisons. It is hoped that, through the V&V process, CMS users will better understand the model’s capabilities and limitations as a tool to solve real-world problems.