
Journal articles on the topic 'Data structures and algorithms for data management'



Consult the top 50 journal articles for your research on the topic 'Data structures and algorithms for data management.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Nadkarni, Prakash M. "Management of Evolving Map Data: Data Structures and Algorithms Based on the Framework Map." Genomics 30, no. 3 (December 1995): 565–73. http://dx.doi.org/10.1006/geno.1995.1278.

2

Satya Sai Kumar, Avula, S. Mohan, and R. Arunkumar. "A Survey on Security Models for Data Privacy in Big Data Analytics." Asian Journal of Computer Science and Technology 7, S1 (November 5, 2018): 87–89. http://dx.doi.org/10.51983/ajcst-2018.7.s1.1798.

Abstract:
In the emerging data world of services such as Google and Wikipedia, data volumes grow steadily while being centralized to provide high availability. Storing and retrieving such large volumes of data is the province of big data techniques. Beyond data management, big data techniques need to pay more attention to security and data privacy when the data involved is access-controlled and confidential. The Attribute-Based Encryption (ABE) algorithm provides secure encryption and access control over centralized data: a set of descriptive attributes is used to derive the secret private key and to enforce access control. Several existing works propose ABE schemes based on different access structures. This paper surveys these algorithms and their proposed applications in detail and discusses and compares the functionality and performance aspects of the ABE systems considered.
3

Sapiecha, Krzysztof, and Grzegorz Lukawski. "Scalable Distributed Two-Layer Data Structures (SD2DS)." International Journal of Distributed Systems and Technologies 4, no. 2 (April 2013): 15–30. http://dx.doi.org/10.4018/jdst.2013040102.

Abstract:
Scalability and fault tolerance are important features of modern applications designed for distributed, loosely coupled computer systems. In the paper, two-layer scalable structures for storing data in the distributed RAM of a multicomputer (SD2DS) are introduced. A data unit of SD2DS (a component) is split into a header and a body. The header identifies the body and contains its address in the network. The headers are stored in the first layer of SD2DS, called the component file, while the bodies are stored in the second layer, called the component storage. Both layers are managed independently. Details of the management algorithms are given, along with an SD2DS variant suitable for storing plain data records. SD2DS is compared to similar distributed structures and frameworks, and comparison considerations together with test results are given. The results prove the superiority of SD2DS over similar structures.
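The header/body split described above can be illustrated with a minimal, single-process sketch (Python assumed; names such as ComponentFile and ComponentStorage are illustrative, not the paper's API). It only shows the idea of keeping lightweight headers in one layer that point to bodies kept in a separately managed layer, not the distributed, scalable implementation the authors evaluate.

```python
class ComponentStorage:
    """Second layer: holds component bodies, addressed by an opaque locator."""
    def __init__(self):
        self._cells = {}
        self._next = 0

    def put(self, body):
        addr = self._next          # stand-in for a network address of a storage bucket
        self._cells[addr] = body
        self._next += 1
        return addr

    def get(self, addr):
        return self._cells[addr]


class ComponentFile:
    """First layer: maps a component key to a header that locates its body."""
    def __init__(self, storage):
        self._headers = {}         # key -> header (here just the body address)
        self._storage = storage

    def insert(self, key, body):
        self._headers[key] = self._storage.put(body)

    def lookup(self, key):
        return self._storage.get(self._headers[key])


store = ComponentStorage()
sd2ds = ComponentFile(store)
sd2ds.insert("record-42", {"name": "example", "payload": b"..."})
print(sd2ds.lookup("record-42"))
```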
4

Chen, Chin Chun, Yuan Horng Lin, and Jeng Ming Yih. "Management of Abstract Algebra Concepts Based on Knowledge Structure." Applied Mechanics and Materials 284-287 (January 2013): 3537–42. http://dx.doi.org/10.4028/www.scientific.net/amm.284-287.3537.

Abstract:
Knowledge management of mathematics concepts is essential in the educational environment. The purpose of this study is to provide an integrated method, on a fuzzy theory basis, for individualized concept structure analysis. This method integrates the Fuzzy Logic Model of Perception (FLMP) and Interpretive Structural Modeling (ISM). The combined algorithm can analyze an individualized concept structure based on comparisons with the concept structure of an expert. Fuzzy clustering algorithms are based on the Euclidean distance function, which can only be used to detect spherical structural clusters. A Fuzzy C-Means algorithm based on the Mahalanobis distance (FCM-M) was proposed to improve on the limitations of the GG and GK algorithms, but it is not stable enough when some of its covariance matrices are not equal. A new improved Fuzzy C-Means algorithm based on a Normalized Mahalanobis distance (FCM-NM) is therefore proposed, and the best-performing clustering algorithm, FCM-NM, is used in data analysis and interpretation. Each cluster of data can easily describe features of the knowledge structures. The knowledge structures of mathematics concepts are managed to construct the model of features in pattern recognition completely. This procedure will also be useful for cognition diagnosis. To sum up, this integrated algorithm can improve the assessment methodology of cognition diagnosis and manage the knowledge structures of mathematics concepts easily.
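As a small illustration of the distance at the core of FCM-M/FCM-NM, the sketch below computes Mahalanobis distances of points to a cluster centre using NumPy. It shows only the distance computation under an assumed covariance matrix, not the authors' full fuzzy clustering algorithm or their normalization scheme.

```python
import numpy as np

def mahalanobis(points, center, cov):
    """Mahalanobis distance of each row in `points` to `center` under covariance `cov`."""
    inv_cov = np.linalg.inv(cov)
    diffs = points - center
    # sqrt of (x - c)^T Sigma^{-1} (x - c) for every row
    return np.sqrt(np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs))

rng = np.random.default_rng(0)
cluster = rng.multivariate_normal([0, 0], [[4.0, 1.5], [1.5, 1.0]], size=200)
center = cluster.mean(axis=0)
cov = np.cov(cluster, rowvar=False)

d = mahalanobis(cluster, center, cov)
print(d[:5])  # elongated (elliptical) clusters get sensible distances where Euclidean would not
```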
5

Denoyelle, Nicolas, John Tramm, Kazutomo Yoshii, Swann Perarnau, and Pete Beckman. "NUMA-AWARE DATA MANAGEMENT FOR NEUTRON CROSS SECTION DATA IN CONTINUOUS ENERGY MONTE CARLO NEUTRON TRANSPORT SIMULATION." EPJ Web of Conferences 247 (2021): 04020. http://dx.doi.org/10.1051/epjconf/202124704020.

Abstract:
The calculation of macroscopic neutron cross-sections is a fundamental part of the continuous-energy Monte Carlo (MC) neutron transport algorithm. MC simulations of full nuclear reactor cores are computationally expensive, making high-accuracy simulations impractical for most routine reactor analysis tasks because of their long time to solution. Thus, preparation of MC simulation algorithms for next generation supercomputers is extremely important as improvements in computational performance and efficiency will directly translate into improvements in achievable simulation accuracy. Due to the stochastic nature of the MC algorithm, cross-section data tables are accessed in a highly randomized manner, resulting in frequent cache misses and latency-bound memory accesses. Furthermore, contemporary and next generation non-uniform memory access (NUMA) computer architectures, featuring very high latencies and less cache space per core, will exacerbate this behaviour. The absence of a topology-aware allocation strategy in existing high-performance computing (HPC) programming models is a major source of performance problems in NUMA systems. Thus, to improve performance of the MC simulation algorithm, we propose a topology-aware data allocation strategies that allow full control over the location of data structures within a memory hierarchy. A new memory management library, known as AML, has recently been created to facilitate this mapping. To evaluate the usefulness of AML in the context of MC reactor simulations, we have converted two existing MC transport cross-section lookup “proxy-applications” (XSBench and RSBench) to utilize the AML allocation library. In this study, we use these proxy-applications to test several continuous-energy cross-section data lookup strategies (the nuclide grid, unionized grid, logarithmic hash grid, and multipole methods) with a number of AML allocation schemes on a variety of node architectures. We find that the AML library speeds up cross-section lookup performance up to 2x on current generation hardware (e.g., a dual-socket Skylake-based NUMA system) as compared with naive allocation. These exciting results also show a path forward for efficient performance on next-generation exascale supercomputer designs that feature even more complex NUMA memory hierarchies.
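The grid-based lookup strategies mentioned in this abstract all reduce, at their core, to locating an energy in a sorted grid and interpolating the tabulated cross section. The sketch below shows that kernel in plain Python using the bisect module; the energies and cross sections are made-up toy values, this is not XSBench/RSBench code, and it says nothing about the AML/NUMA allocation the paper actually studies.

```python
import bisect

def xs_lookup(energy, energy_grid, xs_values):
    """Interpolate a cross section at `energy` from an ascending energy grid."""
    i = bisect.bisect_right(energy_grid, energy)
    if i == 0:
        return xs_values[0]
    if i == len(energy_grid):
        return xs_values[-1]
    e0, e1 = energy_grid[i - 1], energy_grid[i]
    f = (energy - e0) / (e1 - e0)          # linear interpolation weight
    return (1.0 - f) * xs_values[i - 1] + f * xs_values[i]

grid = [1e-5, 1e-3, 1e-1, 1.0, 20.0]       # MeV, toy values
xs = [40.0, 12.0, 5.0, 3.2, 1.1]           # barns, toy values
print(xs_lookup(0.5, grid, xs))
```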
6

Ciaburro, Giuseppe, and Gino Iannace. "Machine Learning-Based Algorithms to Knowledge Extraction from Time Series Data: A Review." Data 6, no. 6 (May 25, 2021): 55. http://dx.doi.org/10.3390/data6060055.

Abstract:
To predict the future behavior of a system, we can exploit the information collected in the past, trying to identify recurring structures in what happened to predict what could happen, if the same structures repeat themselves in the future as well. A time series represents a time sequence of numerical values observed in the past at a measurable variable. The values are sampled at equidistant time intervals, according to an appropriate granular frequency, such as the day, week, or month, and measured according to physical units of measurement. In machine learning-based algorithms, the information underlying the knowledge is extracted from the data themselves, which are explored and analyzed in search of recurring patterns or to discover hidden causal associations or relationships. The prediction model extracts knowledge through an inductive process: the input is the data and, possibly, a first example of the expected output, the machine will then learn the algorithm to follow to obtain the same result. This paper reviews the most recent work that has used machine learning-based techniques to extract knowledge from time series data.
7

Li, Zhenlong, Wenwu Tang, Qunying Huang, Eric Shook, and Qingfeng Guan. "Introduction to Big Data Computing for Geospatial Applications." ISPRS International Journal of Geo-Information 9, no. 8 (August 12, 2020): 487. http://dx.doi.org/10.3390/ijgi9080487.

Abstract:
The convergence of big data and geospatial computing has brought challenges and opportunities to GIScience with regards to geospatial data management, processing, analysis, modeling, and visualization. This special issue highlights recent advancements in integrating new computing approaches, spatial methods, and data management strategies to tackle geospatial big data challenges and meanwhile demonstrates the opportunities for using big data for geospatial applications. Crucial to the advancements highlighted here is the integration of computational thinking and spatial thinking and the transformation of abstract ideas and models to concrete data structures and algorithms. This editorial first introduces the background and motivation of this special issue followed by an overview of the ten included articles. Conclusion and future research directions are provided in the last section.
8

Kallinikos, Jannis, and Ioanna D. Constantiou. "Big Data Revisited: A Rejoinder." Journal of Information Technology 30, no. 1 (March 2015): 70–74. http://dx.doi.org/10.1057/jit.2014.36.

Abstract:
We elaborate on key issues of our paper New games, new rules: big data and the changing context of strategy as a means of addressing some of the concerns raised by the paper's commentators. We initially deal with the issue of social data and the role it plays in the current data revolution. The massive involvement of lay publics as instrumented by social media breaks with the strong expert cultures that have underlain the production and use of data in modern organizations. It also sets apart the interactive and communicative processes by which social data is produced from sensor data and the technological recording of facts. We further discuss the significance of the very mechanisms by which big data is produced as distinct from the very attributes of big data, often discussed in the literature. In the final section of the paper, we qualify the alleged importance of algorithms and claim that the structures of data capture and the architectures in which data generation is embedded are fundamental to the phenomenon of big data.
9

Livnat, Joshua, and Jyoti Singh. "Machine Learning Algorithms to Classify Future Returns Using Structured and Unstructured Data." Journal of Investing 30, no. 3 (February 12, 2021): 62–78. http://dx.doi.org/10.3905/joi.2021.1.169.

10

Fayyadh, Moatasem Mohammed, and Hashim Abdul Razak. "DAMAGE IDENTIFICATION AND ASSESSMENT IN RC STRUCTURES USING VIBRATION DATA: A REVIEW." Journal of Civil Engineering and Management 19, no. 3 (June 14, 2013): 375–86. http://dx.doi.org/10.3846/13923730.2012.744773.

Abstract:
Inspection of structural components for damage is essential to decision-making for the maintenance of such structures. There have been many studies to assess the reinforced concrete (RC) structural elements. However, the experimental approach is still based on the conventional static test, which is time-consuming, costly, has intensive equipment and labour requirements and causes major disruptions to the existing use. Modal testing provides an integrated approach, i.e. both local and global characteristics can be ascertained for structural assessment. Depending on the accessibility to damage elements, little or no disruption to the existing use is incurred during testing works. The approach towards structural assessment work provides not only a viable but also a robust, less expensive and powerful alternative to conventional techniques. This paper presents the background of the behaviour of the RC material at different loading and unloading conditions, in order to understand its effect on the modal parameters. The use of modal testing for support stiffness deterioration is highlighted and studies on the use of modal testing for classification of damage source are presented. Studies on the use of modal testing for detection of damage severity and location algorithms and procedures are also presented.
11

Liu, Jin Gang. "Applied Research of Somatosensory Game Based on Kinect and Unity 3D Data Integration Technology." Applied Mechanics and Materials 667 (October 2014): 177–82. http://dx.doi.org/10.4028/www.scientific.net/amm.667.177.

Abstract:
The paper analyzes data integration technology for Kinect and Unity3D. The authors design a scheme based on WPF and Unity3D's internal calling mode. The system includes a screen display module, a Unity3D-Kinect interface module and a data acquisition module. Scene settings, rigging, mirrored motion, close-range models, smoothing and other functions are handled in Unity3D, while code implements device control, the rigging algorithm and device image acquisition for Kinect. Tested with C# management of unmanaged DLLs, the scheme imports the Kinect hardware driver and calls custom data structures and algorithms to drive the Unity3D scene. In the Unity3D scene, the Kinect somatosensory camera controls the motion of models, improving the development efficiency of somatosensory games, which has a certain social value in the development and application of somatosensory games.
12

Wang, Jiangning, Jing Ren, Tianyu Xi, Siqin Ge, and Liqiang Ji. "Specifications and Standards for Insect 3D Data." Biodiversity Information Science and Standards 2 (May 21, 2018): e26561. http://dx.doi.org/10.3897/biss.2.26561.

Abstract:
With the continuous development of imaging technology, the amount of insect 3D data is increasing, but research on data management is still virtually non-existent. This paper will discuss the specifications and standards relevant to the process of insect 3D data acquisition, processing and analysis. The collection of 3D data of insects includes specimen collection, sample preparation, image scanning specifications and 3D model specification. The specimen collection information uses existing biodiversity information standards such as Darwin Core. However, the 3D scanning process contains unique specifications for specimen preparation, depending on the scanning equipment, to achieve the best imaging results. Data processing of 3D images includes 3D reconstruction, tagging morphological structures (such as muscle and skeleton), and 3D model building. There are different algorithms in the 3D reconstruction process, but the processing results generally follow DICOM (Digital Imaging and Communications in Medicine) standards. There is no available standard for marking morphological structures, because this process is currently executed by individual researchers who create operational specifications according to their own needs. 3D models have specific file specifications, such as object files (https://en.wikipedia.org/wiki/Wavefront_.obj_file) and 3D max format (https://en.wikipedia.org/wiki/.3ds), which are widely used at present. There are only some simple tools for analysis of three-dimensional data and there are no specific standards or specifications in Audubon Core (https://terms.tdwg.org/wiki/Audubon_Core), the TDWG standard for biodiversity-related multi-media. There are very few 3D databases of animals at this time. Most of insect 3D data are created by individual entomologists and are not even stored in databases. Specifications for the management of insect 3D data need to be established step-by-step. Based on our attempt to construct a database of 3D insect data, we preliminarily discuss the necessary specifications.
13

Artemeva, O. V., S. Zareie, Y. Elhaei, N. A. Pozdnyakova, and N. D. Vasilev. "USING REMOTE SENSING DATA TO CREATE MAPS OF VEGETATION AND RELIEF FOR NATURAL RESOURCE MANAGEMENT OF A LARGE ADMINISTRATIVE REGION." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W18 (October 18, 2019): 103–9. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w18-103-2019.

Abstract:
The authors offer methods for mapping nature, in particular vegetation and relief maps, using remote sensing data. These thematic maps are most often used by administrators at different levels for environmental and territorial management. In the Russian Federation, administrative territories occupy large areas, and the algorithm for constructing visual models from remote sensing data for large administrative areas differs from algorithms for working with small territories. The automated mapping method includes analysis of the territory by indicators of topography and dominant vegetation, selection of satellite images, processing, composing mosaics and composites, classification of plant objects, and post-processing. The authors propose using a specific data source because the quality of its materials is sufficient for working with large areas. Classification is the most complicated step: at the moment, scientists have not proposed an unambiguous solution to the choice of algorithm. However, the authors of this study experimentally arrived at the most convenient algorithm, which they characterize as the main one for the purposes of managing natural resources of large administrative structures (regions with legally fixed boundaries). Examples of thematic map fragments and results of intermediate versions of visual models built by automated methods are given. The potential use of the methods by municipal employees, rather than narrow specialists, was taken into account. In this regard, the value of the study is of an exclusively applied nature, and the methods can be used in the administrative structures of the executive authorities.
14

Fu, Jiake, Huijing Tian, Lingguang Song, Mingchao Li, Shuo Bai, and Qiubing Ren. "Productivity estimation of cutter suction dredger operation through data mining and learning from real-time big data." Engineering, Construction and Architectural Management 28, no. 7 (January 25, 2021): 2023–41. http://dx.doi.org/10.1108/ecam-05-2020-0357.

Abstract:
Purpose: This paper presents a new approach to productivity estimation of cutter suction dredger operation through data mining and learning from real-time big data. Design/methodology/approach: The paper used big data, data mining and machine learning techniques to extract features of cutter suction dredgers (CSDs) for predicting their productivity. An ElasticNet-SVR (Elastic Net-Support Vector Machine) method is used to filter the original monitoring data. Along with the actual working conditions of the CSD, 15 features were selected. Then, a box plot was used to clean the corresponding data by filtering out outliers. Finally, four algorithms, namely SVR (Support Vector Regression), XGBoost (Extreme Gradient Boosting), LSTM (Long Short-Term Memory Network) and BP (Back Propagation) Neural Network, were used for modeling and testing. Findings: The paper provides a comprehensive forecasting framework for productivity estimation including feature selection, data processing and model evaluation. The optimal coefficients of determination (R2) of the four algorithms were all above 80.0%, indicating that the selected features were representative. Finally, the BP neural network model coupled with the SVR model was selected as the final model. Originality/value: A machine-learning algorithm incorporating domain expert judgments was used to select predictive features. The final optimal coefficient of determination (R2) of the coupled BP neural network and SVR model is 87.6%, indicating that the method proposed in this paper is effective for CSD productivity estimation.
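The box-plot cleaning step mentioned above is commonly implemented as an interquartile-range (IQR) filter; a minimal NumPy sketch of that generic rule is given below. The 1.5 x IQR fence is the conventional box-plot whisker choice and an assumption here, not a value taken from the paper.

```python
import numpy as np

def boxplot_filter(values, k=1.5):
    """Keep only values inside the box-plot fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    mask = (values >= lo) & (values <= hi)
    return values[mask], mask

readings = np.array([3.1, 2.9, 3.0, 3.2, 12.5, 3.3, 2.8, -4.0, 3.1])
clean, kept = boxplot_filter(readings)
print(clean)   # the 12.5 and -4.0 readings are dropped as outliers
```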
15

Annicchiarico, W., and M. Cerrolaza. "Identification of the dynamical properties of structures using free vibration data and distributed genetic algorithms." Engineering Optimization 39, no. 8 (December 2007): 969–80. http://dx.doi.org/10.1080/03052150701551628.

16

Nixon, Zachary, William Holton, Mark White, and Chris Locke. "Shoreline Oiling Data Management for the Deepwater Horizon oil spill, Gulf of Mexico, USA: Implications for Data Management Standards for Future Spills of Significance." International Oil Spill Conference Proceedings 2014, no. 1 (May 1, 2014): 300040. http://dx.doi.org/10.7901/2169-3358-2014-1-300040.1.

Abstract:
The Shoreline Cleanup Assessment Technique (SCAT) data management program for the Deepwater Horizon oil spill is, to our knowledge, the largest shoreline spill data management effort in history, still currently providing continuous data management of over 35,000 separate surveys covering over 22,000 survey-miles of shoreline and tens of millions of individual pieces of data. The program flexibly expanded the scope of coverage to support the changing requirements of survey and cleanup over the response, from surface and subsurface oiling survey data to treatment and administrative status, while maintaining backward compatibility and constant availability for planning and analysis. While we hope many of the tools and products developed will be of value in near-term future spill responses, we are less clear on the specific recommendations we can make to data managers during the next nationally significant spill in 10 or 20 years. The differing needs of each spill, varying levels of agency and industry personnel participation, and the inevitable and increasingly rapid change in software and information technology environments, all point to the need for a different way forward. We feel the most important contribution we can make is the recommendation of an open data standard for shoreline oiling and related data management, allowing interested parties in future responses to be assured of data interoperability, transparency, and data quality, while permitting required flexibility. We propose such a standard here. The proposed data standard does not compete with or supplant any existing vendor or solution, and is completely agnostic about physical spill environment, data collection methods, algorithms, software and computing environment. The standard requires only the most basic structured data so as to preserve the maximum flexibility for spill specific conditions and the unanticipated needs of future data collection. The core components are spatial geometry, temporal relationship, and database attribute and relationship requirements for storing and querying oiling, response, and administrative status via standard geometries over multiple time periods. We present generic entity-relationship diagrams and example core and expansion data structures that outline the proposed standard. In the same way that SCAT methodology protocols and terminology have been adopted across the spill response community by agency and industry, we suggest a core standard for storage and analysis of shoreline oiling and response data. This standard reflects, we hope, a root-level understanding of the purpose and scope of SCAT and related data, and imposes only the minimum uniformity required to be of use.
17

Ottaviano, Flavia, Fabing Cui, and Andy H. F. Chow. "Modeling and Data Fusion of Dynamic Highway Traffic." Transportation Research Record: Journal of the Transportation Research Board 2644, no. 1 (January 2017): 92–99. http://dx.doi.org/10.3141/2644-11.

Abstract:
This paper presents a data fusion framework for processing and integrating data collected from heterogeneous sources on motorways to generate short-term predictions. Considering the heterogeneity in spatiotemporal granularity in data from different sources, an adaptive kernel-based smoothing method was first used to project all data onto a common space–time grid. The data were then integrated through a Kalman filter framework built on the cell transmission model to generate short-term traffic state predictions. The algorithms were applied and tested with real traffic data collected from the California I-880 corridor in the San Francisco Bay Area from the Mobile Century experiment. Results revealed that the proposed fusion algorithm can work with data sources that are different in their spatiotemporal granularity and improve the accuracy of state estimation through incorporating multiple data sources. The present work contributed to the field of traffic engineering and management with the application of big data analytics.
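A minimal sketch of the first step, projecting irregular observations onto a common space-time grid with a Gaussian kernel, is shown below (NumPy assumed). The fixed space and time bandwidths are illustrative assumptions; the paper's method is adaptive, and the subsequent Kalman filter / cell transmission model stage is not shown.

```python
import numpy as np

def kernel_to_grid(xs, ts, values, grid_x, grid_t, hx=0.5, ht=60.0):
    """Gaussian-kernel weighted average of scattered (x, t, value) data onto a grid.

    xs: positions (km), ts: timestamps (s), values: e.g. speeds;
    grid_x, grid_t: 1-D grid coordinates; hx, ht: space/time bandwidths.
    """
    grid = np.zeros((len(grid_x), len(grid_t)))
    for i, gx in enumerate(grid_x):
        for j, gt in enumerate(grid_t):
            w = np.exp(-0.5 * (((xs - gx) / hx) ** 2 + ((ts - gt) / ht) ** 2))
            grid[i, j] = np.sum(w * values) / np.sum(w)
    return grid

xs = np.array([0.2, 1.1, 2.7, 0.9])        # detector / probe positions in km
ts = np.array([0.0, 30.0, 45.0, 110.0])    # observation times in s
v = np.array([95.0, 88.0, 40.0, 91.0])     # observed speeds
print(kernel_to_grid(xs, ts, v, np.arange(0, 3, 1.0), np.arange(0, 121, 60.0)))
```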
18

Butts, Carter T. "8. Permutation Models for Relational Data." Sociological Methodology 37, no. 1 (August 2007): 257–81. http://dx.doi.org/10.1111/j.1467-9531.2007.00183.x.

Abstract:
A common problem in sociology, psychology, biology, geography, and management science is the comparison of dyadic relational structures (i.e., graphs). Where these structures are formed on a common set of elements, a natural question that arises is whether there is a tendency for elements that are strongly connected in one set of structures to be more—or less—strongly connected within another set. We may ask, for instance, whether there is a correspondence between golf games and business deals, trade and warfare, or spatial proximity and genetic similarity. In each case, the data for such comparisons may be continuous or discrete, and multiple relations may be involved simultaneously (e.g., when comparing multiple measures of international trade volume with multiple types of political interactions). We propose here an exponential family of permutation models that is suitable for inferring the direction and strength of association among dyadic relational structures. A linear-time algorithm is shown for MCMC simulation of model draws, as is the use of simulated draws for maximum likelihood estimation (MCMC-MLE) and/or estimation of Monte Carlo standard errors. We also provide an easily performed maximum pseudo-likelihood estimation procedure for the permutation model family, which provides a reasonable means of generating seed models for the MCMC-MLE procedure. Use of the modeling framework is demonstrated via an application involving relationships among managers in a high-tech firm.
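The basic intuition behind permutation inference for dyadic data can be illustrated with a small QAP-style permutation test (NumPy assumed): permute the node labels of one matrix many times to build a null distribution for the correlation between corresponding off-diagonal entries. This is only the classical permutation-test idea; it is not the exponential-family permutation model, MCMC-MLE or pseudo-likelihood machinery developed in the paper.

```python
import numpy as np

def dyadic_correlation(a, b):
    """Correlation between corresponding off-diagonal dyads of two n x n matrices."""
    mask = ~np.eye(a.shape[0], dtype=bool)
    return np.corrcoef(a[mask], b[mask])[0, 1]

def qap_test(a, b, n_perm=2000, seed=0):
    """Two-sided p-value for association between relational structures a and b."""
    rng = np.random.default_rng(seed)
    observed = dyadic_correlation(a, b)
    null = np.empty(n_perm)
    for k in range(n_perm):
        p = rng.permutation(a.shape[0])        # relabel the nodes of one structure
        null[k] = dyadic_correlation(a[np.ix_(p, p)], b)
    p_value = np.mean(np.abs(null) >= abs(observed))
    return observed, p_value

rng = np.random.default_rng(1)
trade = rng.random((12, 12))
conflict = 0.6 * trade + 0.4 * rng.random((12, 12))   # correlated by construction
print(qap_test(trade, conflict))
```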
19

Arif, Dashne Raouf, and Nzar Abdulqadir Ali. "Improving the performance of big data databases." Kurdistan Journal of Applied Research 4, no. 2 (December 31, 2019): 206–20. http://dx.doi.org/10.24017/science.2019.2.20.

Abstract:
Real-time monitoring systems utilize two types of database: relational databases such as MySQL and non-relational databases such as MongoDB. A relational database management system (RDBMS) stores data in a structured format using rows and columns; it is relational because the values in its tables are connected. A non-relational database is a database that does not adopt the relational structure of traditional databases. In recent years, this class of databases has also been referred to as Not only SQL (NoSQL). This paper discusses several comparisons that have been conducted on the execution time performance of the two types of database (SQL and NoSQL). In SQL (Structured Query Language) databases, different algorithms are used for inserting and updating data, such as indexing, bulk insert and multiple updating. In NoSQL, different algorithms are used for inserting and updating operations, such as default indexing, batch insert, multiple updating and pipeline aggregation. As a result, firstly, compared with related papers, this paper shows that the performance of both SQL and NoSQL can be improved. Secondly, performance can be dramatically improved for inserting and updating operations in the NoSQL database compared to the SQL database. To demonstrate the performance of the different algorithms for inserting and updating data in SQL and NoSQL, this paper considers different numbers of records and different performance results. The SQL experiments are conducted on 50,000 to 3,000,000 records, while the NoSQL experiments are conducted on 50,000 to 16,000,000 documents (2 GB). In SQL, three million records are inserted within 606.53 seconds, while in NoSQL this number of documents is inserted within 67.87 seconds. For updating data, in SQL 300,000 records are updated within 271.17 seconds, while in NoSQL this number of documents is updated within just 46.02 seconds.
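As a small, generic illustration of why batch/bulk inserts dominate row-by-row inserts (the effect the paper measures on MySQL and MongoDB), the sketch below times the two approaches on an in-memory SQLite database. SQLite and the row count are stand-ins chosen to keep the example self-contained, not the systems or data sizes used in the paper.

```python
import sqlite3
import time

rows = [(i, f"sensor-{i % 50}", i * 0.1) for i in range(50_000)]

def timed(label, fn):
    t0 = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - t0:.2f} s")

def one_by_one():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE readings (id INTEGER, sensor TEXT, value REAL)")
    for r in rows:
        con.execute("INSERT INTO readings VALUES (?, ?, ?)", r)
        con.commit()                     # a transaction per row: slow
    con.close()

def bulk():
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE readings (id INTEGER, sensor TEXT, value REAL)")
    con.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)
    con.commit()                         # one transaction for the whole batch
    con.close()

timed("row-by-row inserts", one_by_one)
timed("bulk executemany", bulk)
```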
20

Fernando, Yudi, Ramanathan R. M. Chidambaram, and Ika Sari Wahyuni-TD. "The impact of Big Data analytics and data security practices on service supply chain performance." Benchmarking: An International Journal 25, no. 9 (November 29, 2018): 4009–34. http://dx.doi.org/10.1108/bij-07-2017-0194.

Abstract:
Purpose: The purpose of this paper is to investigate the effects of Big Data analytics, data security and service supply chain innovation capabilities on services supply chain performance. Design/methodology/approach: The paper draws on the relational view of resource-based theory to propose a theoretical model. The data were collected through a survey of 145 service firms. Findings: The results of this study found that Big Data analytics has a positive and significant relationship with a firm's ability to manage data security and a positive impact on service supply chain innovation capabilities and service supply chain performance. This study also found that most service firms participating in this study used Big Data analytics to execute existing algorithms faster with larger data sets. Practical implications: A main recommendation of this study is that service firms empower a chief data officer to establish the data needed and design the governance of data in the company to eliminate any security issues. Data security was a concern if a firm did not have ample data governance and protection, as the information was shared among members of service supply chain networks. Originality/value: Big Data analytics is a useful technology tool to forecast market preference based on open source, structured and unstructured data.
21

Chen, Chin Chun, Yuan Horng Lin, Jeng Ming Yih, and Sue Fen Huang. "Construct Knowledge Structure of Linear Algebra." Advanced Materials Research 211-212 (February 2011): 793–97. http://dx.doi.org/10.4028/www.scientific.net/amr.211-212.793.

Abstract:
Interpretive structural modeling is applied to construct the knowledge structure of linear algebra. A new fuzzy clustering algorithm, an improved fuzzy c-means algorithm based on the Mahalanobis distance, has better performance than the standard fuzzy c-means algorithm. Each cluster of data can easily describe the features of its knowledge structure individually. The results show that there are six clusters and each cluster has its own cognitive characteristics. The methodology can make knowledge management in the classroom more feasible.
22

Li, Ying, and Baotian Dong. "The Algebraic Operations and Their Implementation Based on a Two-Layer Cloud Data Model." Cybernetics and Information Technologies 16, no. 6 (December 1, 2016): 5–26. http://dx.doi.org/10.1515/cait-2016-0074.

Abstract:
The existing cloud data models cannot adequately meet the management requirements of structured data, including a great deal of relational data; therefore, a two-layer cloud data model is proposed. A composite object is defined to model nested data in the representation layer, while a 4-tuple is defined to model non-nested data in the storage layer. Referring to relational algebra, the concept of the SNO (Simple Nested Object) is defined as the basic operational unit of the algebraic operations, and formal definitions of the algebraic operations, consisting of set operations and query operations on the representation layer, are proposed. An algorithm for extracting all SNOs from the CAO (Component-Attribute-Object) set of a composite object is proposed first as the foundation, and pseudocode implementations of the algebraic operations on the storage layer are then given. Logical proofs and examples indicate that the definitions and algorithms of the algebraic operations are correct.
23

Turochy, Rod E., and Brian L. Smith. "New Procedure for Detector Data Screening in Traffic Management Systems." Transportation Research Record: Journal of the Transportation Research Board 1727, no. 1 (January 2000): 127–31. http://dx.doi.org/10.3141/1727-16.

Abstract:
Automated monitoring of traffic conditions in traffic management systems is of increasing importance as the sizes and complexities of these systems expand. Accurate monitoring of traffic conditions is dependent on accurate input data, yet techniques that can be used to screen data and remove erroneous records are not used in many traffic management systems. Procedures that can be used to perform quality checks on the data before their use in traffic management applications play a critical role in ensuring the proper functioning of condition-monitoring methods such as incident detection algorithms. Tests that screen traffic data can be divided into two categories: threshold value tests and tests that apply basic traffic flow theory principles. Tests that use traffic flow theory use the inherent relationships among speed, volume, and occupancy to assess data validity. In particular, a test that derives the average effective vehicle length from the observed traffic variables detects a wide range of erroneous data. A new data-screening procedure combines both threshold value tests and traffic flow theory–based tests and can serve as a valuable tool in traffic management applications.
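The traffic-flow-theory test described above can be sketched as a simple arithmetic check: with flow q (veh/h), speed v (mi/h) and occupancy o (as a fraction), the implied average effective vehicle length is roughly o*v/q, and records whose implied length falls outside a plausible band are flagged. The acceptance band of 10 to 60 ft below is an illustrative assumption, not a threshold taken from the paper.

```python
def effective_vehicle_length_ft(volume_vph, speed_mph, occupancy_frac):
    """Average effective vehicle length (vehicle + detection zone) implied by one record."""
    if volume_vph <= 0 or speed_mph <= 0:
        return None                      # cannot derive a length; fall back to threshold tests
    miles = occupancy_frac * speed_mph / volume_vph
    return miles * 5280.0                # convert miles to feet

def plausible(record, low_ft=10.0, high_ft=60.0):
    length = effective_vehicle_length_ft(*record)
    return length is not None and low_ft <= length <= high_ft

records = [
    (1800, 55.0, 0.12),   # implied ~19 ft: plausible
    (1800, 55.0, 0.90),   # implied ~145 ft: flag as erroneous
]
for r in records:
    print(r, "OK" if plausible(r) else "flag")
```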
24

Mahmood, Sadiqa, Eli Schwamm, Kenneth L. Kehl, Brett Glotzbecker, Robert Andrews, Prabhsimranjot Singh, Belen Fraile, et al. "A data driven approach to immunotherapy toxicity management." Journal of Clinical Oncology 36, no. 30_suppl (October 20, 2018): 326. http://dx.doi.org/10.1200/jco.2018.36.30_suppl.326.

Abstract:
Background: The immune checkpoint inhibitors (ICIs) confer a risk of unique inflammatory immune-related adverse events (irAEs), which are highly distinct from the adverse events historically observed with cytotoxic therapy. To develop a strategy for easier identification and mitigation of irAEs, we sought to understand the frequency of ED visits and hospitalizations in the 90 days following ICI start by implementing a rapid learning system (RLS). Methods: We convened an Immunotherapy Toxicities Management Committee with representatives from the Center for Immuno-Oncology, Quality and Patient Safety, and Informatics to draft a series of recommendations for the development of an irAE rapid learning system. The Committee requested an audit of all irAEs between June 2015 and April 2018 by drug, event type (clinical diagnosis), outcomes (including ED visits, hospitalization with length of stay, and death), and time frame (days since beginning the ICI course). An automated pipeline was created to merge structured data from the electronic health record (Epic) and the billing system (EPSi). These data were used to design a tool consisting of an automated dashboard to monitor patients and enable interventions. Results: Over the course of 3 years, a total of 2,020 unique patients receiving ICIs were seen. 918 were treated with Pembrolizumab (45.4%), 768 with Nivolumab (38.0%), 234 with Atezolizumab (11.6%), 111 with Nivolumab & Ipilimumab (5.6%), 68 with Ipilimumab (3.4%), 9 with Durvalumab (0.4%), and 9 with Avelumab (0.4%). ED visit and hospitalization rates over 90 days were similar among the three most prescribed therapies, ranging from 332 unique patient events in the Pembrolizumab cohort (36.1%) to 96 unique patient events in the Atezolizumab cohort (41.0%). Conclusions: The dashboard is an effective tool for building an RLS for irAEs. The immediate output of this tool is using natural language processing (NLP) to distinguish between irAE-related ED visits and hospitalizations and regular disease progression, and to measure the impact of interventions, including (a) developing standardized algorithms for monitoring for irAEs, (b) designing an educational program for providers, and (c) developing an inpatient and outpatient immunotherapy toxicity management service.
25

Isheyskiy, Valentin, Evgeny Martinyskin, Sergey Smirnov, Anton Vasilyev, Kirill Knyazev, and Timur Fatyanov. "Specifics of MWD Data Collection and Verification during Formation of Training Datasets." Minerals 11, no. 8 (July 22, 2021): 798. http://dx.doi.org/10.3390/min11080798.

Abstract:
This paper presents a structured analysis in the area of measurement while drilling (MWD) data processing and verification methods, as well as describes the main nuances and certain specifics of “clean” data selection in order to build a “parent” training database for subsequent use in machine learning algorithms. The main purpose of the authors is to create a trainable machine learning algorithm, which, based on the available “clean” input data associated with specific conditions, could correlate, process and select parameters obtained from the drilling rig and use them for further estimation of various rock characteristics, prediction of optimal drilling and blasting parameters, and blasting results. The paper is a continuation of a series of publications devoted to the prospects of using MWD technology for the quality management of drilling and blasting operations at mining enterprises.
26

Funkhouser, Thomas, Seth Teller, Carlo Séquin, and Delnaz Khorramabadi. "The UC Berkeley System for Interactive Visualization of Large Architectural Models." Presence: Teleoperators and Virtual Environments 5, no. 1 (January 1996): 13–44. http://dx.doi.org/10.1162/pres.1996.5.1.13.

Abstract:
Realistic-looking architectural models with furniture may consist of millions of polygons and require gigabytes of data, far more than today's workstations can render at interactive frame rates or store in physical memory. We have developed data structures and algorithms for identifying a small portion of a large model to load into memory and render during each frame of an interactive walkthrough. Our algorithms rely upon an efficient display database that represents a building model as a set of objects, each of which can be described at multiple levels of detail, and contains an index of spatial cells with precomputed cell-to-cell and cell-to-object visibility information. As the observer moves through the model interactively, a real-time visibility algorithm traces sightline beams through transparent cell boundaries to determine a small set of objects potentially visible to the observer. An optimization algorithm dynamically selects a level of detail and rendering algorithm with which to display each potentially visible object to meet a user-specified target frame time. Throughout, memory management algorithms predict observer motion and prefetch objects from disk that may become visible during imminent frames. This paper describes an interactive building walkthrough system that uses these data structures and algorithms to maintain interactive frame rates during visualization of very large models. So far, the implementation supports models whose major occluding surfaces are axis-aligned rectangles (e.g., typical buildings). This system is able to maintain over twenty frames per second with little noticeable detail elision during interactive walkthroughs of a building model containing over one million polygons.
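The level-of-detail step can be illustrated with a simplified greedy heuristic: give every potentially visible object its cheapest representation, then spend the remaining frame-time budget upgrading the objects with the best benefit per unit cost. This is only a sketch of the general idea under assumed cost/benefit numbers; the paper's system uses its own cost/benefit optimization and also selects among rendering algorithms, which is not shown here.

```python
def select_lods(objects, frame_budget_ms):
    """objects: {name: [(cost_ms, benefit), ...]} with LODs ordered coarse -> fine.

    Returns the chosen LOD index per object while keeping total cost within the budget.
    """
    choice = {name: 0 for name in objects}                 # start with the coarsest LOD
    spent = sum(objects[name][0][0] for name in objects)
    while True:
        best = None
        for name, lods in objects.items():
            i = choice[name]
            if i + 1 < len(lods):
                extra_cost = lods[i + 1][0] - lods[i][0]
                gain = lods[i + 1][1] - lods[i][1]
                if spent + extra_cost <= frame_budget_ms:
                    score = gain / extra_cost if extra_cost > 0 else float("inf")
                    if best is None or score > best[0]:
                        best = (score, name, extra_cost)
        if best is None:
            return choice                                  # no affordable upgrade remains
        _, name, extra_cost = best
        choice[name] += 1
        spent += extra_cost

scene = {
    "desk":  [(0.2, 1.0), (0.8, 2.5), (2.0, 3.0)],
    "chair": [(0.1, 0.5), (0.5, 1.8)],
    "wall":  [(0.3, 2.0)],
}
print(select_lods(scene, frame_budget_ms=2.0))
```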
27

Moertini, Veronica, and Mariskha Adithia. "Uncovering Active Communities from Directed Graphs on Distributed Spark Frameworks, Case Study: Twitter Data." Big Data and Cognitive Computing 5, no. 4 (September 22, 2021): 46. http://dx.doi.org/10.3390/bdcc5040046.

Abstract:
Directed graphs can be prepared from big data containing peoples’ interaction information. In these graphs the vertices represent people, while the directed edges denote the interactions among them. The number of interactions at certain intervals can be included as the edges’ attribute. Thus, the larger the count, the more frequent the people (vertices) interact with each other. Subgraphs which have a count larger than a threshold value can be created from these graphs, and temporal active communities can then be mined from each of these subgraphs. Apache Spark has been recognized as a data processing framework that is fast and scalable for processing big data. It provides DataFrames, GraphFrames, and GraphX APIs which can be employed for analyzing big graphs. We propose three kinds of active communities, namely, Similar interest communities (SIC), Strong-interacting communities (SC), and Strong-interacting communities with their “inner circle” neighbors (SCIC), along with algorithms needed to uncover them. The algorithm design and implementation are based on these APIs. We conducted experiments on a Spark cluster using ten machines. The results show that our proposed algorithms are able to uncover active communities from public big graphs as well from Twitter data collected using Spark structured streaming. In some cases, the execution time of the algorithms that are based on GraphFrames’ motif findings is faster.
28

Gong, Peisong, Haixiang Guo, Yuanyue Huang, and Shengyu Guo. "SAFETY RISK EVALUATIONS OF DEEP FOUNDATION CONSTRUCTION SCHEMES BASED ON IMBALANCED DATA SETS." JOURNAL OF CIVIL ENGINEERING AND MANAGEMENT 26, no. 4 (April 20, 2020): 380–95. http://dx.doi.org/10.3846/jcem.2020.12321.

Abstract:
Safety risk evaluations of deep foundation construction schemes are important to ensure safety. However, the amount of knowledge on these evaluations is large, and the historical data of deep foundation engineering is imbalanced. Some adverse factors influence the quality and efficiency of evaluations using traditional manual evaluation tools. Machine learning guarantees the quality of imbalanced data classifications. In this study, three strategies are proposed to improve the classification accuracy of imbalanced data sets. First, data set information redundancy is reduced using a binary particle swarm optimization algorithm. Then, a classification algorithm is modified using an Adaboost-enhanced support vector machine classifier. Finally, a new classification evaluation standard, namely, the area under the ROC curve, is adopted to ensure the classifier to be impartial to the minority. A transverse comparison experiment using multiple classification algorithms shows that the proposed integrated classification algorithm can overcome difficulties associated with correctly classifying minority samples in imbalanced data sets. The algorithm can also improve construction safety management evaluations, relieve the pressure from the lack of experienced experts accompanying rapid infrastructure construction, and facilitate knowledge reuse in the field of architecture, engineering, and construction.
29

Wang, Daopeng, Jifei Fan, Hanliang Fu, and Bing Zhang. "Research on Optimization of Big Data Construction Engineering Quality Management Based on RNN-LSTM." Complexity 2018 (July 5, 2018): 1–16. http://dx.doi.org/10.1155/2018/9691868.

Abstract:
The construction industry is the largest data industry but has the lowest degree of digitization. With the development and maturation of BIM information integration technology, this backward situation will be completely changed. Different business data from the construction phase and the operation and maintenance phase will be collected to add value to the data, and as BIM information integration technology matures, different business data from the design phase through the construction phase are integrated. Because BIM integrates massive, repetitive and unordered textual feature data, we first use the integrated BIM data as a basis for data cleansing and text segmentation of the text big data, turning the integrated data into "clean and orderly" valuable data. Then, with the aid of word cloud visualization and cluster analysis, the associations between data structures are tapped and the integrated unstructured data are converted into structured data. Finally, an RNN-LSTM network is used to predict quality problems with steel bars, formwork, concrete, cast-in-place structures and masonry in the construction project and to pinpoint where quality problems occur in the implementation of the project. Example verification shows that the algorithm proposed in this paper can effectively reduce the incidence of construction project quality problems and is worth promoting. It is of great practical significance for improving the quality management of construction projects and provides new ideas and methods for future research on construction project quality problems.
30

Gajewski, Byron J., Shawn M. Turner, William L. Eisele, and Clifford H. Spiegelman. "Intelligent Transportation System Data Archiving: Statistical Techniques for Determining Optimal Aggregation Widths for Inductive Loop Detector Speed Data." Transportation Research Record: Journal of the Transportation Research Board 1719, no. 1 (January 2000): 85–93. http://dx.doi.org/10.3141/1719-11.

Abstract:
Although most traffic management centers collect intelligent transportation system (ITS) traffic monitoring data from local controllers in 20-s to 30-s intervals, the time intervals for archiving data vary considerably from 1 to 5, 15, or even 60 min. Presented are two statistical techniques that can be used to determine optimal aggregation levels for archiving ITS traffic monitoring data: the cross-validated mean square error and the F-statistic algorithm. Both techniques seek to determine the minimal sufficient statistics necessary to capture the full information contained within a traffic parameter distribution. The statistical techniques were applied to 20-s speed data archived by the TransGuide center in San Antonio, Texas. The optimal aggregation levels obtained by using the two algorithms produced reasonable and intuitive results—both techniques calculated optimal aggregation levels of 60 min or more during periods of low traffic variability. Similarly, both techniques calculated optimal aggregation levels of 1 min or less during periods of high traffic variability (e.g., congestion). A distinction is made between conclusions about the statistical techniques and how the techniques can or should be applied to ITS data archiving. Although the statistical techniques described may not be disputed, there is a wide range of possible aggregation solutions based on these statistical techniques. Ultimately, the aggregation solutions may be driven by nonstatistical parameters such as cost (e.g., “How much do we/the market value the data?”), ease of implementation, system requirements, and other constraints.
31

Yang, Fan, Wenjin Zhang, Laifa Tao, and Jian Ma. "Transfer Learning Strategies for Deep Learning-based PHM Algorithms." Applied Sciences 10, no. 7 (March 30, 2020): 2361. http://dx.doi.org/10.3390/app10072361.

Abstract:
As we enter the era of big data, we have to face big data generated by industrial systems that are massive, diverse, high-speed and variable. In order to deal effectively with big data possessing these characteristics, deep learning technology has been widely used. However, the existing methods require great human involvement that depends heavily on domain expertise and may thus be non-representative and biased from one task to a similar task, so for a wide variety of prognostic and health management (PHM) tasks, how to apply the developed deep learning algorithms to similar tasks in order to reduce development and data collection costs has become an urgent problem. Based on the idea of transfer learning and the structures of deep learning PHM algorithms, this paper proposes two transfer strategies that transfer different elements of deep learning PHM algorithms, analyzes the possible transfer scenarios in practical applications, and proposes the transfer strategy applicable in each scenario. At the end of the paper, a deep learning algorithm for bearing fault diagnosis based on convolutional neural networks (CNN) is transferred using the proposed method, under different working conditions and for different objects, respectively. The experiments verify the value and effectiveness of the proposed method and give the best choice of transfer strategy.
32

Byun, Namju, Whi Seok Han, Young Woong Kwon, and Young Jong Kang. "Development of BIM-Based Bridge Maintenance System Considering Maintenance Data Schema and Information System." Sustainability 13, no. 9 (April 26, 2021): 4858. http://dx.doi.org/10.3390/su13094858.

Abstract:
Due to the significant increase in the age of infrastructure globally, maintenance of existing structures has been prioritized over the construction of new structures, which are very costly. However, many infrastructure facilities have not been managed efficiently due to a lack of well-trained staff and budget limitations. Bridge management systems (BMSs) have been constructed and operated globally to maintain the originally designed structural performance and to overcome the inefficiency of maintenance practices for existing bridges. Unfortunately, because most of the current BMSs are based on 2D information systems, bridge maintenance data and information are not utilized effectively for bridge management. To overcome these problems, studies of BMSs based on building information modeling (BIM) have significantly increased in number. Most previous studies have proposed comprehensive frameworks containing approximate and limited information for maintenance to utilize BIM technology. Moreover, the utilization level of the maintenance information is less efficient because detailed information regarding safety diagnosis and maintenance are not included in data formats that are interpretable by computer algorithms. Therefore, in this study, a BIM-based BMS, including detailed information relating to safety diagnosis and maintenance, was constructed for the sustainability of bridge maintenance. To consider detailed information in the BMS, a maintenance data schema and its information system were established via the compilation of detailed information for safety diagnosis, repair and strengthening, remaining life, and valuation. In addition, a web data management program (WDMP) was developed using the maintenance data schema and information system, and was connected with the Midas CIM, which is a 3D modeling program. Finally, a prototype of the proposed BMS was established for an actual bridge in Korea. The proposed BMS in this study may be expected to improve the existing management practices for maintenance, and to reduce maintenance cost and information loss.
33

Stantic, Bela, Rodney Topor, Justin Terry, and Abdul Sattar. "Advanced indexing technique for temporal data." Computer Science and Information Systems 7, no. 4 (2010): 679–703. http://dx.doi.org/10.2298/csis101020035s.

Abstract:
The need for efficient access and management of time dependent data in modern database applications is well recognized and researched. Existing access methods are mostly derived from the family of spatial R-tree indexing techniques. These techniques are particularly not suitable to handle data involving open ended intervals, which are common in temporal databases. This is due to overlapping between nodes and huge dead space found in the database. In this study, we describe a detailed investigation of a new approach called "Triangular Decomposition Tree" (TD-Tree). The underlying idea for the TD-Tree is to manage temporal intervals by virtual index structures relying on geometric interpretations of intervals, and a space partition method that results in an unbalanced binary tree. We demonstrate that the unbalanced binary tree can be efficiently manipulated using a virtual index. We also show that the single query algorithm can be applied uniformly to different query types without the need of dedicated query transformations. In addition to the advantages related to the usage of a single query algorithm for different query types and better space complexity, the empirical performance of the TD-tree has been found to be superior to its best known competitors.
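The geometric interpretation of intervals that underlies the TD-Tree can be illustrated very simply: treat each interval [start, end] as the 2-D point (start, end); common temporal predicates then become regions of that plane. The sketch below (plain Python) shows only this mapping with a linear scan over a toy relation; it does not implement the TD-Tree's triangular decomposition, virtual index or unbalanced binary tree.

```python
NOW = float("inf")   # stand-in for an open-ended ("until changed") interval end

intervals = {          # id -> (start, end), i.e. a point in the start/end plane
    "lease-1": (2001.0, 2004.5),
    "lease-2": (2003.0, NOW),
    "lease-3": (2006.0, 2009.0),
}

def overlapping(query_start, query_end):
    """Intervals overlapping [query_start, query_end]: start <= q_end and end >= q_start."""
    return [k for k, (s, e) in intervals.items()
            if s <= query_end and e >= query_start]

def alive_at(t):
    """Intervals containing time t: the region start <= t <= end of the plane."""
    return [k for k, (s, e) in intervals.items() if s <= t <= e]

print(overlapping(2004.0, 2007.0))   # ['lease-1', 'lease-2', 'lease-3']
print(alive_at(2005.0))              # ['lease-2']
```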
34

Jeihouni, Mehrdad, Ara Toomanian, and Ali Mansourian. "Decision Tree-Based Data Mining and Rule Induction for Identifying High Quality Groundwater Zones to Water Supply Management: a Novel Hybrid Use of Data Mining and GIS." Water Resources Management 34, no. 1 (December 9, 2019): 139–54. http://dx.doi.org/10.1007/s11269-019-02447-w.

Abstract:
Groundwater is an important source for supplying drinking water demands in both arid and semi-arid regions. Nevertheless, locating high quality drinking water is a major challenge in such areas. Against this background, this study utilizes and compares five decision tree-based data mining algorithms, including Ordinary Decision Tree (ODT), Random Forest (RF), Random Tree (RT), Chi-square Automatic Interaction Detector (CHAID), and Iterative Dichotomiser 3 (ID3), for rule induction in order to identify high quality groundwater zones for drinking purposes. The proposed methodology works by initially extracting key relevant variables affecting water quality (electrical conductivity, pH, hardness and chloride) out of a total of eight existing parameters, and using them as inputs for the rule induction process. The algorithms were evaluated with reference to both continuous and discrete datasets. The findings pointed to the superiority, performance-wise, of rule induction using the continuous dataset as opposed to the discrete dataset. Based on validation results for the continuous dataset, RF and ODT showed higher performance and RT showed acceptable performance. The groundwater quality maps were generated by combining the distribution maps of the effective parameters using the rules induced from RF, ODT and RT, in a GIS environment. A quick glance at the generated maps reveals a drop in the quality of groundwater from south to north as well as from east to west in the study area. RF showed the highest performance (accuracy of 97.10%) among its counterparts, and so the generated map based on rules induced from RF is the most reliable. The RF and ODT methods are more suitable in the case of the continuous dataset and can be applied for rule induction to determine water quality with higher accuracy compared to the other tested algorithms.
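A minimal scikit-learn sketch of decision tree-based rule induction on the four water-quality variables named above is shown below. The synthetic data, the two-class quality labels and the hyperparameters are assumptions for illustration only; they are not the study's dataset, thresholds or validation protocol.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
# Synthetic samples of the four variables used in the study: EC, pH, hardness, chloride.
X = np.column_stack([
    rng.uniform(200, 3000, n),    # electrical conductivity (uS/cm)
    rng.uniform(6.0, 9.0, n),     # pH
    rng.uniform(50, 600, n),      # hardness (mg/L)
    rng.uniform(10, 700, n),      # chloride (mg/L)
])
# Toy label: "high quality" when EC and chloride are low (an illustrative rule, not the paper's).
y = ((X[:, 0] < 1500) & (X[:, 3] < 250)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, rf.predict(X_te)))
print("feature importances (EC, pH, hardness, Cl):", rf.feature_importances_.round(3))
```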
APA, Harvard, Vancouver, ISO, and other styles
35

Potiyaraj, Pranut, Chutipak Subhakalin, Benchaphon Sawangharsub, and Werasak Udomkichdecha. "Recognition and re‐visualization of woven fabric structures." International Journal of Clothing Science and Technology 22, no. 2/3 (June 15, 2010): 79–87. http://dx.doi.org/10.1108/09556221011018577.

Full text
Abstract:
Purpose – The purpose of this paper is to develop a computerized program that can recognize woven fabric structures and simultaneously use the obtained data to re-visualize the corresponding woven fabric structures in 3D. Design/methodology/approach – A 2D bitmap image of a woven fabric was initially acquired using an ordinary desktop flatbed scanner. Through several image-processing and analysis techniques as well as recognition algorithms, the weave pattern was then identified and stored in a digital format. The weave pattern data were then used to construct warp and weft yarn paths based on Peirce's geometrical model. Findings – By combining relevant weave parameters, including yarn sizes, warp and weft densities, yarn colours as well as cross-sectional shapes, a 3D image of yarns assembled together as a woven fabric structure is produced and shown on screen through the virtual reality modelling language browser. Originality/value – Woven fabric structures can now be recognized automatically, and the obtained data used simultaneously to re-visualize the corresponding structures in 3D.
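A toy sketch of the recognition step might look like the following: a rectified grayscale image is reduced to a binary weave matrix by thresholding each interlacing cell. The grid dimensions and threshold are assumptions; the actual system applies considerably more image processing before reaching this point.

```python
# Toy sketch only: assumes the scanned image is already rectified so each interlacing
# point sits on a regular grid; real systems need de-skewing, yarn detection, etc.
import numpy as np

def weave_matrix(gray_image, n_warp, n_weft):
    """Reduce a grayscale image (2D array, 0..255) to an n_weft x n_warp binary weave pattern.
    A cell is 1 when the warp yarn is assumed to lie on top (the brighter cell in this toy setup)."""
    h, w = gray_image.shape
    cell_h, cell_w = h // n_weft, w // n_warp
    pattern = np.zeros((n_weft, n_warp), dtype=int)
    for i in range(n_weft):
        for j in range(n_warp):
            cell = gray_image[i * cell_h:(i + 1) * cell_h, j * cell_w:(j + 1) * cell_w]
            pattern[i, j] = int(cell.mean() > gray_image.mean())
    return pattern

# Example with a synthetic plain-weave image: alternating bright/dark cells.
img = np.kron([[200, 50], [50, 200]], np.ones((10, 10)))
print(weave_matrix(img, 2, 2))   # expected: [[1 0] [0 1]]
```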
APA, Harvard, Vancouver, ISO, and other styles
36

Allerton, D. J., and M. C. Gia. "The Application of Oct-Tree Terrain Models to Real-Time Aircraft Flight Path Planning." Journal of Navigation 53, no. 3 (September 2000): 483–98. http://dx.doi.org/10.1017/s037346330000103x.

Full text
Abstract:
This paper outlines a technique to represent terrain using tree structures, based on Morton ordering to avoid the use of pointers. This approach enables terrain data to be organised in a hierarchical form affording a trade-off between the speed of access to the terrain database and resolution of the terrain data extracted from the tree. A set of database access algorithms is developed that form the basis of path extraction needed for real-time mission management. Several examples are presented to illustrate the performance of the routeing algorithms developed in the paper.
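The pointer-free addressing mentioned here rests on Morton (Z-order) codes, which can be sketched in a few lines: interleaving the bits of a cell's column and row indices yields its key, and shifting the key moves between resolution levels. The functions below are illustrative, not the paper's implementation.

```python
# Sketch of Morton (Z-order) coding, the pointer-free addressing idea behind such
# terrain trees: interleaving coordinate bits gives a cell key, and shifting the key
# reaches coarser-resolution ancestor cells.
def morton2d(x, y, bits=16):
    """Interleave the bits of x and y into a single Morton key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

def parent(key):
    """Key of the enclosing quadtree cell one level up."""
    return key >> 2

# Cells that are close in space tend to be close in key order:
print(morton2d(3, 5))          # -> 39
print(parent(morton2d(3, 5)))  # -> 9, the coarser-resolution ancestor cell
```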
APA, Harvard, Vancouver, ISO, and other styles
37

Zhu, Yan, and Hai Tao Ma. "Algorithms for Generating XML Documents from Probabilistic XML." Applied Mechanics and Materials 263-266 (December 2012): 1578–83. http://dx.doi.org/10.4028/www.scientific.net/amm.263-266.1578.

Full text
Abstract:
Uncertain relational data management has been investigated for several years, but few works address uncertain XML. Its naturally flexible structure makes XML more appropriate for representing uncertain information. Based on possible-world semantics and probabilistic models with independent-distribution and mutually exclusive-distribution nodes, the problem of how to generate an instance from a probabilistic XML document and calculate its probability was studied, which is one of the key problems of uncertain XML management. Moreover, an algorithm for generating an XML document from a probabilistic XML document and calculating its probability is proposed, which has linear time complexity. Finally, experimental results demonstrate the correctness and efficiency of the algorithm.
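A hedged sketch of how such instance generation can work under possible-world semantics is given below: a probabilistic XML tree with independent (IND) and mutually exclusive (MUX) distribution nodes is traversed once, sampling one possible world and accumulating its probability, which keeps the cost linear in the number of nodes. The dictionary-based node representation and the example document are invented for illustration.

```python
# Hedged sketch: generate one possible-world document from a probabilistic XML tree
# with independent (IND) and mutually exclusive (MUX) distribution nodes.
import random

def generate(node, rng=random.random):
    """Return (ordinary_subtree_or_None, probability_of_that_choice)."""
    kind = node["kind"]
    if kind == "ordinary":
        kept_children, prob = [], 1.0
        for child in node.get("children", []):
            sub, p = generate(child, rng)
            prob *= p
            if sub is not None:
                if isinstance(sub, dict) and sub.get("tag") == "IND-group":
                    kept_children.extend(sub["children"])   # splice distribution group into parent
                else:
                    kept_children.append(sub)
        return {"tag": node["tag"], "children": kept_children}, prob

    if kind == "IND":          # each child kept independently with its own probability
        kept, prob = [], 1.0
        for p_child, child in node["children"]:
            if rng() < p_child:
                sub, p = generate(child, rng)
                kept.append(sub)
                prob *= p_child * p
            else:
                prob *= (1.0 - p_child)
        return {"tag": "IND-group", "children": kept}, prob

    if kind == "MUX":          # at most one child is kept; probabilities sum to <= 1
        r, acc = rng(), 0.0
        for p_child, child in node["children"]:
            acc += p_child
            if r < acc:
                sub, p = generate(child, rng)
                return sub, p_child * p
        return None, 1.0 - acc  # the "no child" alternative

doc = {"kind": "ordinary", "tag": "order", "children": [
    {"kind": "MUX", "children": [(0.7, {"kind": "ordinary", "tag": "paid"}),
                                 (0.3, {"kind": "ordinary", "tag": "pending"})]}]}
instance, probability = generate(doc)
print(instance, probability)
```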
APA, Harvard, Vancouver, ISO, and other styles
38

Mazari, Mehran, Cesar Tirado, Soheil Nazarian, and Raed Aldouri. "Impact of Geospatial Classification Method on Interpretation of Intelligent Compaction Data." Transportation Research Record: Journal of the Transportation Research Board 2657, no. 1 (January 2017): 37–46. http://dx.doi.org/10.3141/2657-05.

Full text
Abstract:
Intelligent compaction is an emerging technology in the management of pavement layers, more specifically, of unbound geomaterial layers. Different types of intelligent compaction measurement values (ICMVs) are available on the basis of the configuration of the roller, vibration mechanism, and data collection and reduction algorithms. The spatial distribution of the estimated ICMVs is usually displayed as a color-coded map, with the ICMVs categorized into a number of classes with specific color codes. The number of classes, as well as the values of the breaks between classes, significantly affect the perception of compaction quality during the quality management process. In this study, three sets of ICMV data collected as a part of a field investigation were subjected to geostatistical analyses to evaluate different classification scenarios and their impact on the interpretation of the data. The classification techniques were evaluated on the basis of the information theory concept of minimizing the information loss ratio. The effect of the ICMV distribution on the selection of the classification method was also studied. An optimization technique was developed to find the optimal class breaks that minimize the information loss ratio. The optimization algorithm returned the best results, followed by the natural breaks and quantile methods, which are suited to the skewness of the ICMV distribution. The identification of less-stiff areas by using the methods presented will assist highway agencies to improve process control approaches and further evaluate construction quality criteria. Although the concepts discussed can apply to any compacted geomaterial layer, the conclusions apply to the type of compacted soil in this particular test section.
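To make the classification question concrete, the sketch below (with invented, skewed ICMV values) computes quantile class breaks and a within-class squared-error score, a simple stand-in for the information-loss criterion that the natural-breaks method and the optimization approach seek to minimize.

```python
# Sketch of two of the classification ideas discussed: quantile class breaks and a
# within-class squared-error score as a rough proxy for "information loss".
import numpy as np

def quantile_breaks(values, n_classes):
    """Class boundaries that put roughly equal numbers of ICMVs in each class."""
    qs = np.linspace(0, 1, n_classes + 1)[1:-1]
    return np.quantile(values, qs)

def within_class_error(values, breaks):
    """Sum of squared deviations from each class mean; lower means less information lost."""
    classes = np.digitize(values, breaks)
    return sum(((values[classes == c] - values[classes == c].mean()) ** 2).sum()
               for c in np.unique(classes))

icmv = np.random.default_rng(0).lognormal(mean=3.0, sigma=0.4, size=500)  # skewed, like ICMVs
breaks = quantile_breaks(icmv, n_classes=5)
print("breaks:", np.round(breaks, 1))
print("within-class error:", round(within_class_error(icmv, breaks), 1))
```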
APA, Harvard, Vancouver, ISO, and other styles
39

Kulova, Nina, and Valeriy Anshin. "Innovation Project Management Based on Network Interactions." Economics 4, no. 1 (February 18, 2016): 41–49. http://dx.doi.org/10.12737/17720.

Full text
Abstract:
The article reviews an innovation management concept based on inter-organizational collaboration. A classification of collaboration types and the benefits of virtual networking are given. The key principles of organizational interaction in the field of innovation are discussed along with effective organizational structures. The presented data were compiled from a review of academic journals and a study of structural collaboration models proposed by several analytical centers and groups. Finally, the basic algorithms for constructing and operating a collaborative network are proposed by the authors. Prospects for future research in the field of collaboration among innovation actors are indicated within the article.
APA, Harvard, Vancouver, ISO, and other styles
40

Kamyab, Mohsen, Stephen Remias, Erfan Najmi, Sanaz Rabinia, and Jonathan M. Waddell. "Machine Learning Approach to Forecast Work Zone Mobility using Probe Vehicle Data." Transportation Research Record: Journal of the Transportation Research Board 2674, no. 9 (July 12, 2020): 157–67. http://dx.doi.org/10.1177/0361198120927401.

Full text
Abstract:
The aim of deploying intelligent transportation systems (ITS) is often to help engineers and operators identify traffic congestion. The future of ITS-based traffic management is the prediction of traffic conditions using ubiquitous data sources. There are currently well-developed prediction models for recurrent traffic congestion such as during peak hour. However, there is a need to predict traffic congestion resulting from non-recurring events such as highway lane closures. As agencies begin to understand the value of collecting work zone data, rich data sets will emerge consisting of historical work zone information. In the era of big data, rich mobility data sources are becoming available that enable the application of machine learning to predict mobility for work zones. The purpose of this study is to utilize historical lane closure information with supervised machine learning algorithms to forecast spatio-temporal mobility for future lane closures. Various traffic data sources were collected from 1,160 work zones on Michigan interstates between 2014 and 2017. This study uses probe vehicle data to retrieve a mobility profile for these historical observations, and uses these profiles to apply random forest, XGBoost, and artificial neural network (ANN) classification algorithms. The mobility prediction results showed that the ANN model outperformed the other models by reaching up to 85% accuracy. The objective of this research was to show that machine learning algorithms can be used to capture patterns for non-recurrent traffic congestion even when hourly traffic volume is not available.
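A minimal sketch of this modelling setup, assuming scikit-learn and using synthetic lane-closure records in place of the probe-vehicle dataset, compares a random forest with a small neural network, two of the model families evaluated in the paper (XGBoost is omitted here); the feature set and the congestion rule are invented.

```python
# Illustrative sketch only: synthetic lane-closure records stand in for the real dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.integers(1, 4, n),           # lanes closed
    rng.uniform(1, 12, n),           # closure duration (h)
    rng.integers(0, 24, n),          # start hour
    rng.uniform(10_000, 120_000, n)  # AADT of the segment
])
# Toy rule for the "congested" label: long daytime closures on busy segments.
y = ((X[:, 3] > 60_000) & (X[:, 1] > 4) & (X[:, 2] > 6) & (X[:, 2] < 20)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                                  random_state=0)).fit(X_tr, y_tr)
print("random forest accuracy:", round(rf.score(X_te, y_te), 3))
print("neural network accuracy:", round(ann.score(X_te, y_te), 3))
```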
APA, Harvard, Vancouver, ISO, and other styles
41

Mita, Akira. "SHM System Integration and Supporting Algorithms." Advances in Science and Technology 56 (September 2008): 386–94. http://dx.doi.org/10.4028/www.scientific.net/ast.56.386.

Full text
Abstract:
Sustainability of urban structures depends on quantitative and reliable information about their condition, such as levels of deterioration and safety. The structural health monitoring (SHM) system has been extensively studied to acquire such health data and provide this information. SHM requires modern technologies to extract the information relevant to the health of a structure from the enormous amount of data gathered by the system. In this paper, integral components of SHM systems are presented which have been developed and studied in our laboratory for several years. They include a smart sensor network for data acquisition and a database server for data storage and management together with diagnosis and prognosis applications. This paper aims to introduce the concept of the integrated SHM system and the related technologies. Sensors and networks, however, can be extended to more novel roles for civil and building engineering applications, such as detecting and recording the histories of environmental conditions and activities of the residents. Among many potential applications, we are particularly interested in using robots as moving sensors to gather information in living spaces. The information obtained by robotic sensors is used to record any activities in the living spaces as "genes", to transform the environment and its "genes", and to pass on selected information to future "generations" of living spaces. We call this concept "biofication of living spaces." We are studying extensively how to evolve this concept so that it can be applied to real buildings.
APA, Harvard, Vancouver, ISO, and other styles
42

Bai, Luyi, Nan Li, Lishuang Liu, and Xuesong Hao. "Querying multi-source heterogeneous fuzzy spatiotemporal data." Journal of Intelligent & Fuzzy Systems 40, no. 5 (April 22, 2021): 9843–54. http://dx.doi.org/10.3233/jifs-202357.

Full text
Abstract:
With the rapid development of environmental, meteorological, and marine data management, fuzzy spatiotemporal data has received considerable attention. Even though some achievements have been made in querying such data, some problems remain unsolved. Semantic and structural heterogeneity may exist among different data sources, which leads to incomplete results. In addition, query intentions and conditions may be ambiguous when the user queries the data. This paper proposes a fuzzy spatiotemporal data semantic model. Based on this model, relational data and XML data are mapped to RDF local semantic models, which are then converted into an RDF global semantic model. Existing methods mainly convert relational data to RDF Schema directly, but our approach converts relational data to XML Schema and then to RDF, exploiting the semi-structured nature of XML Schema to resolve the structural heterogeneity between different data sources. The integration process enables global queries against different data sources. In the proposed query algorithms, the query conditions entered by the user are converted into exact queries before the results are returned. Finally, this paper carries out extensive experiments, calculates the recall, precision, and F-score of the experimental results, and compares them with other state-of-the-art query methods. The results show the importance of the data integration method and the effectiveness of the query method proposed in this paper.
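The relational-to-XML-to-RDF chain described above can be sketched with the standard library alone, as below; the sample row, element names, and namespace URI are invented, and a real system would rely on a proper RDF library and explicit schema mappings.

```python
# Minimal standard-library sketch of the relational -> XML -> RDF chain.
import xml.etree.ElementTree as ET

NS = "http://example.org/fuzzy-st#"   # hypothetical namespace for the global model

def row_to_xml(table, row):
    """Relational row -> XML element (the intermediate, semi-structured form)."""
    elem = ET.Element(table)
    for column, value in row.items():
        ET.SubElement(elem, column).text = str(value)
    return elem

def xml_to_triples(elem, subject):
    """XML element -> RDF-style (subject, predicate, object) triples."""
    return [(subject, NS + child.tag, child.text) for child in elem]

row = {"station": "S12", "salinity": 34.7, "observed": "2020-07-01", "membership": 0.8}
xml_elem = row_to_xml("Observation", row)
print(ET.tostring(xml_elem, encoding="unicode"))
for triple in xml_to_triples(xml_elem, NS + "obs/S12-2020-07-01"):
    print(triple)
```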
APA, Harvard, Vancouver, ISO, and other styles
43

Wu, Lei, Ran Ding, Zhaohong Jia, and Xuejun Li. "Cost-Effective Resource Provisioning for Real-Time Workflow in Cloud." Complexity 2020 (March 30, 2020): 1–15. http://dx.doi.org/10.1155/2020/1467274.

Full text
Abstract:
In the era of big data, mining and analysis of the enormous amount of data has been widely used to support decision-making. This complex process including huge-volume data collecting, storage, transmission, and analysis could be modeled as workflow. Meanwhile, cloud environment provides sufficient computing and storage resources for big data management and analytics. Due to the clouds providing the pay-as-you-go pricing scheme, executing a workflow in clouds should pay for the provisioned resources. Thus, cost-effective resource provisioning for workflow in clouds is still a critical challenge. Also, the responses of the complex data management process are usually required to be real-time. Therefore, deadline is the most crucial constraint for workflow execution. In order to address the challenge of cost-effective resource provisioning while meeting the real-time requirements of workflow execution, a resource provisioning strategy based on dynamic programming is proposed to achieve cost-effectiveness of workflow execution in clouds and a critical-path based workflow partition algorithm is presented to guarantee that the workflow can be completed before deadline. Our approach is evaluated by simulation experiments with real-time workflows of different sizes and different structures. The results demonstrate that our algorithm outperforms the existing classical algorithms.
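The deadline check in such approaches typically rests on the workflow's critical path, i.e. the longest execution path through the task DAG; the sketch below computes it by topological ordering on an invented task graph and is only an illustration of that idea, not the paper's partition algorithm.

```python
# Sketch of the critical-path idea behind the deadline check: the longest execution
# path through the workflow DAG bounds the completion time.
from collections import defaultdict, deque

def critical_path(durations, edges):
    """durations: task -> execution time; edges: list of (pred, succ). Returns (length, path)."""
    succs, indeg = defaultdict(list), defaultdict(int)
    for u, v in edges:
        succs[u].append(v)
        indeg[v] += 1
    order, queue = [], deque(t for t in durations if indeg[t] == 0)
    while queue:                                  # topological order
        u = queue.popleft()
        order.append(u)
        for v in succs[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    finish = {}
    for u in order:                               # latest finishing time per task
        start = max((finish[p] for p, s in edges if s == u), default=0.0)
        finish[u] = start + durations[u]
    end = max(finish, key=finish.get)
    path = [end]                                  # walk back along latest-finishing predecessors
    while True:
        preds = [p for p, s in edges if s == path[-1]]
        if not preds:
            break
        path.append(max(preds, key=lambda p: finish[p]))
    return finish[end], list(reversed(path))

tasks = {"collect": 4, "clean": 2, "analyze": 6, "report": 1, "archive": 3}
deps = [("collect", "clean"), ("clean", "analyze"), ("clean", "archive"), ("analyze", "report")]
length, path = critical_path(tasks, deps)
print(length, path)   # 13.0 along collect -> clean -> analyze -> report
```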
APA, Harvard, Vancouver, ISO, and other styles
44

Loginovskiy, O. V., A. A. Maximov, S. A. Zolotykh, and V. O. Loginovskaya. "DEVELOPMENT OF ORGANIZATIONAL AND CORPORATE SYSTEMS USING MODERN MATHEMATICAL METHODS AND MODELS." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 1 (February 2021): 116–35. http://dx.doi.org/10.14529/ctcr210111.

Full text
Abstract:
Analysis of modern technologies, methods, and models used in various types of organizational and corporate structures convincingly shows that the preparation and making of management decisions for these structures is currently carried out mainly on the basis of approaches that are often outdated and do not correspond to the modern capabilities of computer technology, information systems, and software. The article shows that under modern conditions of global instability it becomes necessary to use adequate methods for data analysis and for preparing management decisions on the development of organizational and corporate structures. Proposals and recommendations are presented for improving the processes of analytical data processing and extracting useful information from the large amounts of data stored in the relevant organizational and corporate systems, as well as mathematical models and algorithms that can be successfully used to improve the quality of management decisions made by company management. The purpose of the study is to form methods and models for the analysis of strategic alternatives for the development of organizational and corporate systems using the concept of big data, technologies for extracting the necessary information from existing data banks, etc. Materials and methods. The research methods are based on modern information and analytical technologies, data science, and models developed by the authors for the analysis of strategic alternatives for the development of organizational and corporate systems. Results. The scientific provisions and developments presented in the article can be used to improve the efficiency of management in various information and analytical systems for various management structures. Conclusion. The results of the research presented in this article make it possible to perform a qualitative analysis of data and to model the operation of organizational and corporate structures online, which increases the efficiency of managing their development based on a comparison of alternative management decisions.
APA, Harvard, Vancouver, ISO, and other styles
45

Guo, Yuanjun, Zhile Yang, Shengzhong Feng, and Jinxing Hu. "Complex Power System Status Monitoring and Evaluation Using Big Data Platform and Machine Learning Algorithms: A Review and a Case Study." Complexity 2018 (September 20, 2018): 1–21. http://dx.doi.org/10.1155/2018/8496187.

Full text
Abstract:
Efficient and valuable strategies based on the large amount of available data are urgently needed for a sustainable electricity system that includes smart grid technologies and very complex power system situations. Big Data technologies, including the management and utilization of the ever-growing volumes of data collected from every component of the power grid, are crucial for the successful deployment and monitoring of the system. This paper reviews the key technologies of Big Data management and intelligent machine learning methods for complex power systems. Based on a comprehensive study of power systems and Big Data, several challenges are summarized to unlock the potential of Big Data technology in smart grid applications. This paper proposes a modified and optimized structure of the Big Data processing platform according to the power data sources and their different structures. Numerous open-source Big Data analytical tools and software are integrated as modules of the analytic engine, and self-developed advanced algorithms are also designed. The proposed framework comprises a data interface, Big Data management, an analytic engine, and the application and display modules. To fully investigate the proposed structure, three major applications are introduced: development of power grid topology and parallel computing using CIM files, high-efficiency load-shedding calculation, and power system transmission line tripping analysis using 3D visualization. The real-system cases demonstrate the effectiveness and great potential of the Big Data platform; therefore, data resources can achieve their full potential value for strategies and decision-making for the smart grid. The proposed platform can provide a technical solution to the multidisciplinary cooperation of Big Data technology and smart grid monitoring.
APA, Harvard, Vancouver, ISO, and other styles
46

Chen, Chin Chun, Yuan Horng Lin, Jeng Ming Yih, and Shu Yi Juan. "Construct Concept Structure for Linear Algebra Based on Cognition Diagnosis and Clustering with Mahalanobis Distances." Advanced Materials Research 211-212 (February 2011): 756–60. http://dx.doi.org/10.4028/www.scientific.net/amr.211-212.756.

Full text
Abstract:
Euclidean-distance-based fuzzy clustering algorithms can only detect spherical clusters. The purpose of this study is to improve the Fuzzy C-Means algorithm with a Mahalanobis distance in order to identify the concept structure of Linear Algebra. In addition, Concept Structure Analysis (CSA) can provide individualized knowledge structures. The CSA algorithm is the major methodology; it is based on the fuzzy logic model of perception (FLMP) and interpretive structural modeling (ISM). CSA can display an individualized knowledge structure and clearly represent hierarchies and linkages among concepts for each examinee. Each cluster of data can easily describe the features of the corresponding knowledge structure. The results show that there are five clusters and each cluster has its own cognitive characteristics. In this study, the authors provide empirical data on concepts of linear algebra from university students. To sum up, the methodology makes knowledge management in the classroom more feasible. Finally, the results show that the algorithm based on Mahalanobis distance performs better than the standard Fuzzy C-Means algorithm.
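The core modification can be sketched as a single membership-update step of fuzzy c-means in which the Euclidean distance is replaced by a per-cluster Mahalanobis distance, so elongated clusters are handled; the data point, centres, covariances, and fuzzifier m below are invented.

```python
# Sketch of FCM memberships computed from squared Mahalanobis rather than Euclidean
# distances, so elongated (non-spherical) concept clusters can be captured.
import numpy as np

def mahalanobis_sq(x, centre, cov):
    d = x - centre
    return float(d @ np.linalg.inv(cov) @ d)

def memberships(x, centres, covs, m=2.0):
    """Standard FCM membership formula using squared Mahalanobis distances."""
    d2 = np.array([mahalanobis_sq(x, c, s) for c, s in zip(centres, covs)])
    inv = (1.0 / d2) ** (1.0 / (m - 1.0))
    return inv / inv.sum()

centres = [np.array([0.0, 0.0]), np.array([4.0, 1.0])]
covs = [np.array([[2.0, 0.9], [0.9, 1.0]]),   # elongated cluster
        np.eye(2)]
x = np.array([1.0, 0.5])
print(memberships(x, centres, covs).round(3))
```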
APA, Harvard, Vancouver, ISO, and other styles
47

Yang, Zhaohui, Qingwang Liu, Peng Luo, Qiaolin Ye, Guangshuang Duan, Ram P. Sharma, Huiru Zhang, Guangxing Wang, and Liyong Fu. "Prediction of Individual Tree Diameter and Height to Crown Base Using Nonlinear Simultaneous Regression and Airborne LiDAR Data." Remote Sensing 12, no. 14 (July 13, 2020): 2238. http://dx.doi.org/10.3390/rs12142238.

Full text
Abstract:
The forest growth and yield models, which are used as important decision-support tools in forest management, are commonly based on the individual tree characteristics, such as diameter at breast height (DBH), crown ratio, and height to crown base (HCB). Taking direct measurements for DBH and HCB through the ground-based methods is cumbersome and costly. The indirect method of getting such information is possible from remote sensing databases, which can be used to build DBH and HCB prediction models. The DBH and HCB of the same trees are significantly correlated, and so their inherent correlations need to be appropriately accounted for in the DBH and HCB models. However, all the existing DBH and HCB models, including models based on light detection and ranging (LiDAR) have ignored such correlations and thus failed to account for the compatibility of DBH and HCB estimates, in addition to disregarding measurement errors. To address these problems, we developed a compatible simultaneous equation system of DBH and HCB error-in-variable (EIV) models using LiDAR-derived data and ground-measurements for 510 Picea crassifolia Kom trees in northwest China. Four versatile algorithms, such as nonlinear seemingly unrelated regression (NSUR), two-stage least square (2SLS) regression, three-stage least square (3SLS) regression, and full information maximum likelihood (FIML) were evaluated for their estimating efficiencies and precisions for a simultaneous equation system of DBH and HCB EIV models. In addition, two other model structures, namely, nonlinear least squares with HCB estimation not based on the DBH (NLS and NBD) and nonlinear least squares with HCB estimation based on the DBH (NLS and BD) were also developed, and their fitting precisions with a simultaneous equation system compared. The leave-one-out cross-validation method was applied to evaluate all estimating algorithms and their resulting models. We found that only the simultaneous equation system could illustrate the effect of errors associated with the regressors on the response variables (DBH and HCB) and guaranteed the compatibility between the DBH and HCB models at an individual level. In addition, such an established system also effectively accounted for the inherent correlations between DBH with HCB. However, both the NLS and BD model and the NLS and NBD model did not show these properties. The precision of a simultaneous equation system developed using NSUR appeared the best among all the evaluated algorithms. Our equation system does not require the stand-level information as input, but it does require the information of tree height, crown width, and crown projection area, all of which can be readily derived from LiDAR imagery using the delineation algorithms and ground-based DBH measurements. Our results indicate that NSUR is a more reliable and quicker algorithm for developing DBH and HCB models using large scale LiDAR-based datasets. The novelty of this study is that the compatibility problem of the DBH model and the HCB EIV model was properly addressed, and the potential algorithms were compared to choose the most suitable one (NSUR). The presented method and algorithm will be useful for establishing similar compatible equation systems of tree DBH and HCB EIV models for other tree species.
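A greatly simplified illustration of fitting the two equations jointly is given below: the DBH and HCB residuals are stacked and minimized together so that the DBH-to-HCB link is respected. This is only a rough stand-in for NSUR, which additionally weights the residuals by the estimated cross-equation error covariance and handles the error-in-variable structure; the functional forms, parameters, and data are invented, and SciPy is assumed.

```python
# Greatly simplified illustration, not the authors' NSUR estimator: the DBH and HCB
# equations are fitted jointly by stacking their residuals so that shared structure
# and the DBH -> HCB link are estimated together.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
height = rng.uniform(5, 25, 200)                                          # LiDAR-derived tree height (m)
dbh = 1.2 * height ** 0.9 + rng.normal(0, 1.0, 200)                       # "observed" DBH (cm), toy data
hcb = height / (1 + np.exp(2.0 - 0.08 * dbh)) + rng.normal(0, 0.5, 200)   # "observed" HCB (m), toy data

def residuals(theta):
    a, b, c, d = theta
    dbh_hat = a * height ** b                         # equation 1: DBH from height
    hcb_hat = height / (1 + np.exp(c - d * dbh_hat))  # equation 2: HCB uses the predicted DBH
    return np.concatenate([dbh - dbh_hat, hcb - hcb_hat])

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 0.05])
print("estimated parameters:", fit.x.round(3))
```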
APA, Harvard, Vancouver, ISO, and other styles
48

Li, Jing Zhao, Jian Cheng, and Xing Jin. "Research on Industrial Field Diagnostics and Management System Based on the IOT." Applied Mechanics and Materials 63-64 (June 2011): 765–69. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.765.

Full text
Abstract:
An industrial field diagnosis and management system based on the Internet of Things, featuring advanced technologies and complete functions, was designed through an analysis of industrial field diagnosis technology, information fusion methods, and the framework of the industrial Internet of Things. Field fault detection, fault diagnosis, and fault isolation were mapped onto the structures of the sensing layer, the middleware layer, and the application layer of the industrial Internet of Things, as well as onto information fusion algorithms at the data, feature, and decision levels. Site diagnosis knowledge, Internet of Things technology, and information fusion algorithms were used to realize industrial site diagnosis and management functions through a remote monitoring center or an on-site handheld terminal. This provides a practical solution for equipment manufacturers and industrial users applying the Internet of Things to site diagnosis and management.
APA, Harvard, Vancouver, ISO, and other styles
49

Wang, Z., and A. Zipf. "USING OPENSTREETMAP DATA TO GENERATE BUILDING MODELS WITH THEIR INNER STRUCTURES FOR 3D MAPS." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W4 (September 14, 2017): 411–16. http://dx.doi.org/10.5194/isprs-annals-iv-2-w4-411-2017.

Full text
Abstract:
With the development of Web 2.0, more and more data related to indoor environments has been collected within the volunteered geographic information (VGI) framework, which creates a need for construction of indoor environments from VGI. In this study, we focus on generating 3D building models from OpenStreetMap (OSM) data, and provide an approach to support construction and visualization of indoor environments on 3D maps. In this paper, we present an algorithm which can extract building information from OSM data, and can construct building structures as well as inner building components (e.g., doors, rooms, and windows). A web application is built to support the processing and visualization of the building models on a 3D map. We test our approach with an indoor dataset collected from the field. The results show the feasibility of our approach and its potentials to provide support for a wide range of applications, such as indoor and outdoor navigation, urban planning, and incident management.
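A standard-library sketch of the extraction step is shown below: an OSM XML extract is parsed, node coordinates are collected, and every way tagged building=* becomes a footprint with a height estimated from building:levels (about three metres per storey, an assumption); the tiny inline extract is invented.

```python
# Minimal standard-library sketch: read an OSM XML extract, collect node coordinates,
# and turn every building way into a footprint plus an estimated height.
import xml.etree.ElementTree as ET

OSM_XML = """<osm>
  <node id="1" lat="49.41" lon="8.69"/><node id="2" lat="49.41" lon="8.70"/>
  <node id="3" lat="49.42" lon="8.70"/><node id="4" lat="49.42" lon="8.69"/>
  <way id="10">
    <nd ref="1"/><nd ref="2"/><nd ref="3"/><nd ref="4"/><nd ref="1"/>
    <tag k="building" v="yes"/><tag k="building:levels" v="3"/>
  </way>
</osm>"""

root = ET.fromstring(OSM_XML)
nodes = {n.get("id"): (float(n.get("lat")), float(n.get("lon"))) for n in root.iter("node")}

buildings = []
for way in root.iter("way"):
    tags = {t.get("k"): t.get("v") for t in way.iter("tag")}
    if "building" in tags:
        footprint = [nodes[nd.get("ref")] for nd in way.iter("nd")]
        levels = int(tags.get("building:levels", 1))
        buildings.append({"footprint": footprint, "height_m": levels * 3.0})  # ~3 m per storey

print(buildings)
```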
APA, Harvard, Vancouver, ISO, and other styles
50

Sołkowski, Juliusz, and Maciej Jamka. "Deformations of the surface and trackbed beside the bridge objects - tests and diagnostics." Transportation Overview - Przeglad Komunikacyjny 2016, no. 4 (April 1, 2016): 46–60. http://dx.doi.org/10.35117/a_eng_16_04_06.

Full text
Abstract:
In the paper the results of research into dynamic rail deflections and rail accelerations under passing trains are presented. The research was carried out on five railway bridges within the precinct of Krakow Railway Management. The objects varied as to their construction (ballasted, ballastless structures) and tonnage borne. The measurements were carried out in the period of 18 months. As a result, special algorithms for predicting the geometrical deformations and the changes in the dynamic stiffness of the railway structure (track, subgrade, bridge) were worked out. These algorithms were implemented in the diagnostic data base DIAGTOR. Some examples of the algorithms and calculations are presented in the paper.
APA, Harvard, Vancouver, ISO, and other styles
