Academic literature on the topic 'Massive data set post-processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Massive data set post-processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Massive data set post-processing"

1

Singh, Gurinderbeer, Sreeraman Rajan, and Shikharesh Majumdar. "A Fast-Iterative Data Association Technique for Multiple Object Tracking." International Journal of Semantic Computing 12, no. 02 (June 2018): 261–85. http://dx.doi.org/10.1142/s1793351x18400135.

Abstract:
A massive amount of video data is recorded daily for forensic post-analysis and computer vision applications. The analysis of this data often requires multiple object tracking (MOT). Advancements in image analysis algorithms and global optimization techniques have improved the accuracy of MOT, often at the cost of slow processing speed, which limits their application to small video datasets. With the focus on speed, a fast-iterative data association technique (FIDA) for MOT that uses a tracking-by-detection paradigm and finds a locally optimal solution with a low computational overhead is introduced. The performance analyses conducted on a set of benchmark video datasets show that the proposed technique is significantly faster (50–600 times) than the existing state-of-the-art techniques that produce comparable tracking accuracy.
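The abstract above describes tracking-by-detection with a fast, locally optimal data association step rather than global optimization. FIDA itself is not reproduced here; the following Python sketch only illustrates the general idea of greedy, locally optimal association of detections to tracks, with the IoU metric, the threshold, and all names invented for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedy, locally optimal assignment of detections to tracks.

    Returns a list of (track_index, detection_index) matches; unmatched
    detections would typically start new tracks.  This is a generic
    tracking-by-detection step, not the FIDA algorithm itself.
    """
    pairs = sorted(
        ((iou(t, d), ti, di) for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < min_iou:
            break
        if ti not in matched_t and di not in matched_d:
            matched_t.add(ti)
            matched_d.add(di)
            matches.append((ti, di))
    return matches

print(associate([(0, 0, 10, 10), (20, 20, 30, 30)],
                [(1, 1, 11, 11), (50, 50, 60, 60)]))
```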
2

Nagy, Máté, János Tapolcai, and Gábor Rétvári. "R3D3: A Doubly Opportunistic Data Structure for Compressing and Indexing Massive Data." Infocommunications Journal, no. 2 (2019): 58–66. http://dx.doi.org/10.36244/icj.2019.2.7.

Abstract:
Opportunistic data structures are used extensively in big data practice to break down the massive storage space requirements of processing large volumes of information. A data structure is called (singly) opportunistic if it takes advantage of the redundancy in the input in order to store it in information-theoretically minimum space. Yet, efficient data processing requires a separate index alongside the data, whose size often substantially exceeds that of the compressed information. In this paper, we introduce doubly opportunistic data structures to attain the best possible compression not only on the input data but also on the index. We present R3D3, which encodes a bitvector of length n with Shannon entropy H0 to nH0 bits and the accompanying index to nH0(1/2 + O(log C/C)) bits, thus attaining provably minimum space (up to small error terms) on both the data and the index, and supports a rich set of queries to arbitrary positions in the compressed bitvector in O(C) time when C = o(log n). Our R3D3 prototype attains several times the space reduction of known compression techniques on a wide range of synthetic and real data sets, while supporting operations on the compressed data at comparable speed.
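The space bounds quoted above are expressed in terms of the empirical zero-order entropy H0 of the input bitvector. As a small worked example (not taken from the paper), the following computes H0 and the corresponding nH0 target size for a toy bitvector.

```python
import math

def h0(bits):
    """Empirical zero-order (Shannon) entropy of a bitvector, in bits per symbol."""
    n = len(bits)
    ones = sum(bits)
    if ones in (0, n):
        return 0.0
    p = ones / n
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

bits = [0, 1, 1, 0, 0, 0, 1, 0] * 1000     # invented toy input
n, entropy = len(bits), h0(bits)
print(f"plain size: {n} bits, entropy-compressed target: about {n * entropy:.0f} bits")
```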
3

Chang, Xu, Shan Shan Pei, and Na Su. "Research on Real-Time Network Forensics Based on Improved Data Mining Algorithm." Applied Mechanics and Materials 380-384 (August 2013): 1881–85. http://dx.doi.org/10.4028/www.scientific.net/amm.380-384.1881.

Abstract:
Real-time network forensics is characterized by high precision requirements and massive amounts of data, while the traditional Apriori algorithm must scan the data set many times. This paper improves the Apriori algorithm: the data set is divided into blocks for parallel processing; a dynamic itemset-counting method weights each block to construct a tree, which is then searched depth-first; the data blocks that have been split off are marked; and all counted itemsets are evaluated dynamically to obtain the frequent itemsets. This reduces the number of scans and improves the data-processing capability of network forensics. The K-medoids algorithm is then used for secondary mining to improve accuracy, reduce the loss of network data, and strengthen the legal value of network crime evidence.
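The improved Apriori variant above is not specified in enough detail in the abstract to reproduce; the sketch below only illustrates the underlying idea of partitioning the data set into blocks that can be counted independently (and, in principle, in parallel) and then deriving support and confidence. All names and the toy data are hypothetical.

```python
from itertools import combinations
from collections import Counter

def partition_support(transactions, itemsize, n_blocks=4):
    """Count candidate itemsets of a given size block by block.

    Splitting the data set into blocks lets each block be scanned
    independently; the per-block counters are then merged into
    global support counts.
    """
    blocks = [transactions[i::n_blocks] for i in range(n_blocks)]
    total = Counter()
    for block in blocks:                      # each block could be a parallel task
        local = Counter()
        for t in block:
            for cand in combinations(sorted(t), itemsize):
                local[cand] += 1
        total.update(local)
    return total

txns = [{"scan", "login", "exfil"}, {"scan", "login"},
        {"login", "exfil"}, {"scan", "exfil"}]
support = partition_support(txns, 2)
# confidence of the rule {scan} -> {exfil}
conf = support[("exfil", "scan")] / sum(1 for t in txns if "scan" in t)
print(support, conf)
```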
4

Lam, Ping-Man, Chi-Sing Leung, and Tien-Tsin Wong. "A compression method for a massive image data set in image-based rendering." Signal Processing: Image Communication 19, no. 8 (September 2004): 741–54. http://dx.doi.org/10.1016/j.image.2004.04.007.

5

Jia, Chen, and Hong Wei Chen. "The HGEDA Hybrid Algorithm for OLAP Data Cubes." Applied Mechanics and Materials 130-134 (October 2011): 3158–62. http://dx.doi.org/10.4028/www.scientific.net/amm.130-134.3158.

Abstract:
On-Line Analytical Processing (OLAP) tools are frequently used in business, science and health to extract useful knowledge from massive databases. An important and hard optimization problem in OLAP data warehouses is the view selection problem, consisting of selecting a set of aggregate views of the data for speeding up future query processing. In this paper we present a new approach, named HGEDA, a hybrid algorithm based on genetic and estimation of distribution algorithms. The objective is to get the benefits of both approaches. Experimental results show that HGEDA is competitive with the genetic algorithm on a variety of problem instances, often finding approximately optimal solutions in a reasonable amount of time.
6

Chen, Jia, Hong Wei Chen, and Xin Rong Hu. "Simulation for View Selection in Data Warehouse." Advanced Materials Research 748 (August 2013): 1028–32. http://dx.doi.org/10.4028/www.scientific.net/amr.748.1028.

Abstract:
On-Line Analytical Processing (OLAP) tools are frequently used in business, science and health to extract useful knowledge from massive databases. An important and hard optimization problem in OLAP data warehouses is the view selection problem, consisting of selecting a set of aggregate views of the data for speeding up future query processing. We apply Estimation of Distribution Algorithms (EDAs) to view selection under a size constraint. Our emphasis is to determine the suitability of combining EDAs with constraint handling for the view selection problem, compared to a widely used genetic algorithm. The EDAs are competitive with the genetic algorithm on a variety of problem instances, often finding approximately optimal solutions in a reasonable amount of time.
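Neither of the two view-selection abstracts above spells out the benefit function or the constraint handling, so the following is only a generic, PBIL-style estimation-of-distribution sketch for constrained subset selection, with an invented benefit vector and size cap standing in for the real view-selection objective.

```python
import random

def eda_view_selection(benefit, max_views, pop=30, iters=50, lr=0.2, seed=1):
    """Toy PBIL-style EDA: evolve a probability vector over 'select view i'."""
    rng = random.Random(seed)
    n = len(benefit)
    p = [0.5] * n                                    # selection probabilities
    best, best_val = None, float("-inf")
    for _ in range(iters):
        samples = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                   for _ in range(pop)]
        def value(s):                                # penalise constraint violation
            return (sum(b for b, x in zip(benefit, s) if x)
                    - 1e6 * max(0, sum(s) - max_views))
        samples.sort(key=value, reverse=True)
        elite = samples[: pop // 5]
        for i in range(n):                           # move p toward the elite samples
            freq = sum(s[i] for s in elite) / len(elite)
            p[i] = (1 - lr) * p[i] + lr * freq
        if value(samples[0]) > best_val:
            best, best_val = samples[0], value(samples[0])
    return best, best_val

view_benefit, size_cap = [12.0, 7.5, 9.0, 3.2, 8.8, 1.1], 3   # invented numbers
print(eda_view_selection(view_benefit, size_cap))
```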
7

Wang, Wei. "Optimization of Intelligent Data Mining Technology in Big Data Environment." Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 1 (January 20, 2019): 129–33. http://dx.doi.org/10.20965/jaciii.2019.p0129.

Abstract:
At present, storage technology cannot save all the data completely. Therefore, in such a big data environment, data mining technology needs to be optimized for intelligent data. First, in the face of massive intelligent data, the potential relationships between data items in the database are described by association rules. The data items are measured by support and confidence, the itemsets that meet the minimum support are found, and strong association rules are obtained according to the confidence level given by users. Second, in order to improve the scanning speed over data items, an optimized association mining technique based on hashing and transaction compression is proposed. A hash function counts the candidate itemsets; any itemset whose count falls below its support threshold is pruned, and transaction compression then deletes the items and transactions unrelated to the remaining itemsets, improving the processing efficiency of association rule mining. Experiments show that the optimized data mining technique significantly improves the efficiency of obtaining valuable intelligent data.
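The hashing optimization described above resembles bucket-based candidate pruning; since the exact hash and compression scheme are not given in the abstract, this is only a minimal DHP-style sketch in which candidate pairs are hashed into buckets and any pair whose bucket count already falls below the support threshold is pruned. All names and the toy data are invented.

```python
from collections import Counter
from itertools import combinations

def hash_prune_pairs(transactions, min_support, n_buckets=8):
    """DHP-style sketch: hash candidate pairs into buckets while scanning,
    then discard any pair whose bucket count is below min_support.
    The bucket count is an upper bound on a pair's true support, so
    pruned pairs cannot be frequent.  Python's built-in hash() is only
    a stand-in for whatever hash function a real implementation uses."""
    buckets = Counter()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    survivors = set()
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            if buckets[hash(pair) % n_buckets] >= min_support:
                survivors.add(pair)
    return survivors

txns = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c"}, {"a", "b", "d"}]
print(hash_prune_pairs(txns, min_support=2))
```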
8

Annapoorani, S., and B. Srinivasan. "Implementation of Effective Data Emplacement Algorithm in Heterogeneous Cloud Environment." Asian Journal of Computer Science and Technology 8, S1 (February 5, 2019): 87–88. http://dx.doi.org/10.51983/ajcst-2019.8.s1.1944.

Abstract:
This paper studies and implements an effective data emplacement algorithm for large sets of databases (big data) and proposes a model for improving the efficiency of data processing and storage utilization under dynamic load imbalance among nodes in a heterogeneous cloud environment. In the era of explosive information growth, more and more fields need to deal with massive, large-scale data. The proposed method, an improved data placement scheme called the Effective Data Emplacement Algorithm, uses the computing capacity of each node as the predominant factor so that large data sets can be processed efficiently in a short time. The adaptability of the proposed model is obtained by minimizing processing time according to the computing capacity of each node in the cluster. Experimental results with word-count applications show that the proposed solution improves the performance of a heterogeneous cluster environment by distributing data effectively based on performance-oriented sampling.
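The abstract identifies per-node computing capacity as the dominant placement factor but does not give the algorithm itself; the following is a hypothetical sketch of capacity-proportional block placement of that general kind, with node names and capacities invented.

```python
def place_blocks(n_blocks, capacities):
    """Assign data blocks to nodes in proportion to each node's measured
    computing capacity, so that faster nodes receive more of the data."""
    total = sum(capacities.values())
    shares = {node: round(n_blocks * c / total) for node, c in capacities.items()}
    # Fix rounding drift so every block is placed exactly once.
    drift = n_blocks - sum(shares.values())
    if drift:
        fastest = max(capacities, key=capacities.get)
        shares[fastest] += drift
    return shares

print(place_blocks(100, {"node-a": 4.0, "node-b": 2.0, "node-c": 1.0}))
```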
9

Wang, De Wen, and Lin Xiao He. "A Fault Diagnosis Model for Power Transformer Using Association Rule Mining-Based on Rough Set." Applied Mechanics and Materials 519-520 (February 2014): 1169–72. http://dx.doi.org/10.4028/www.scientific.net/amm.519-520.1169.

Abstract:
With the development of on-line monitoring technology for electric power equipment, and the accumulation of both on-line monitoring data and off-line testing data, the data available for fault diagnosis of power transformers is bound to be massive. How to utilize these massive data reasonably is an issue that urgently needs to be studied. Since on-line monitoring technology is not yet fully mature, the monitoring data can be incomplete, noisy, and erroneous, so preprocessing the initial data using rough sets is necessary. Furthermore, as the problem scale grows, the computational cost of association rule mining grows dramatically and easily causes data expansion, which calls for the attribute reduction algorithm of rough set theory. Taking these two points into account, this paper proposes a fault diagnosis model for power transformers using association rule mining based on rough sets.
10

Nguyen Mau Quoc, Hoan, Martin Serrano, Han Mau Nguyen, John G. Breslin, and Danh Le-Phuoc. "EAGLE—A Scalable Query Processing Engine for Linked Sensor Data." Sensors 19, no. 20 (October 9, 2019): 4362. http://dx.doi.org/10.3390/s19204362.

Abstract:
Recently, many approaches have been proposed to manage sensor data using semantic web technologies for effective heterogeneous data integration. However, our empirical observations revealed that these solutions primarily focused on semantic relationships and unfortunately paid less attention to spatio–temporal correlations. Most semantic approaches do not have spatio–temporal support. Some of them have attempted to provide full spatio–temporal support, but have poor performance for complex spatio–temporal aggregate queries. In addition, while the volume of sensor data is rapidly growing, the challenge of querying and managing the massive volumes of data generated by sensing devices still remains unsolved. In this article, we introduce EAGLE, a spatio–temporal query engine for querying sensor data based on the linked data model. The ultimate goal of EAGLE is to provide an elastic and scalable system which allows fast searching and analysis with respect to the relationships of space, time and semantics in sensor data. We also extend SPARQL with a set of new query operators in order to support spatio–temporal computing in the linked sensor data context.
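EAGLE extends SPARQL with spatio-temporal operators, which are not reproduced here; the sketch below only shows, in plain Python, the kind of spatio-temporal aggregate (the average of readings inside a bounding box and time window) that such a query would compute, over invented sensor records.

```python
from datetime import datetime

readings = [  # (sensor_id, latitude, longitude, timestamp, value) - invented
    ("s1", 53.27, -9.05, datetime(2019, 10, 1, 10, 0), 14.2),
    ("s2", 53.30, -9.10, datetime(2019, 10, 1, 10, 5), 15.1),
    ("s3", 48.85,  2.35, datetime(2019, 10, 1, 10, 7), 19.4),
]

def avg_in_region(rows, lat_range, lon_range, start, end):
    """Average the readings that fall inside a bounding box and time window --
    the kind of spatio-temporal aggregate the EAGLE operators target."""
    vals = [v for _, lat, lon, ts, v in rows
            if lat_range[0] <= lat <= lat_range[1]
            and lon_range[0] <= lon <= lon_range[1]
            and start <= ts <= end]
    return sum(vals) / len(vals) if vals else None

print(avg_in_region(readings, (53.0, 53.5), (-9.5, -8.5),
                    datetime(2019, 10, 1, 9, 0), datetime(2019, 10, 1, 11, 0)))
```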

Dissertations / Theses on the topic "Massive data set post-processing"

1

Mortensen, Clifton H. "A Computational Fluid Dynamics Feature Extraction Method Using Subjective Logic." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2208.

Abstract:
Computational fluid dynamics simulations are advancing to correctly simulate highly complex fluid flow problems that can require weeks of computation on expensive high performance clusters. These simulations can generate terabytes of data and pose a severe challenge to a researcher analyzing the data. Presented in this document is a general method to extract computational fluid dynamics flow features concurrent with a simulation and as a post-processing step to drastically reduce researcher post-processing time. This general method uses software agents governed by subjective logic to make decisions about extracted features in converging and converged data sets. The software agents are designed to work inside the Concurrent Agent-enabled Feature Extraction concept and operate efficiently on massively parallel high performance computing clusters. Also presented is a specific application of the general feature extraction method to vortex core lines. Each agent's belief tuple is quantified using a pre-defined set of information. The information and functions necessary to set each component in each agent's belief tuple is given along with an explanation of the methods for setting the components. A simulation of a blunt fin is run showing convergence of the horseshoe vortex core to its final spatial location at 60% of the converged solution. Agents correctly select between two vortex core extraction algorithms and correctly identify the expected probabilities of vortex cores as the solution converges. A simulation of a delta wing is run showing coherently extracted primary vortex cores as early as 16% of the converged solution. Agents select primary vortex cores extracted by the Sujudi-Haimes algorithm as the most probable primary cores. These simulations show concurrent feature extraction is possible and that intelligent agents following the general feature extraction method are able to make appropriate decisions about converging and converged features based on pre-defined information.
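The agents described above quantify extracted features with subjective-logic belief tuples. The thesis's specific functions for setting each component are not reproduced here; the following is a minimal sketch of a subjective-logic opinion (belief, disbelief, uncertainty, base rate) and its probability expectation, with the example values invented.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic opinion (belief, disbelief, uncertainty, base rate).
    The three components b, d, u must sum to 1."""
    b: float
    d: float
    u: float
    a: float = 0.5

    def expected(self):
        # Probability expectation used to rank competing hypotheses.
        return self.b + self.a * self.u

# e.g. an agent's opinion that a candidate vortex core is real,
# becoming more certain (smaller u) as the solution converges
early = Opinion(b=0.4, d=0.1, u=0.5)
late = Opinion(b=0.8, d=0.1, u=0.1)
print(early.expected(), late.expected())
```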

Book chapters on the topic "Massive data set post-processing"

1

Shahabi, Cyrus, and Farnoush Banaei-Kashani. "Querical Data Networks." In Encyclopedia of Database Technologies and Applications, 493–99. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-560-3.ch082.

Abstract:
Recently, a family of massive self-organizing data networks has emerged. These networks mainly serve as large-scale distributed query-processing systems. We term these networks querical data networks (QDN). A QDN is a federation of a dynamic set of peer, autonomous nodes communicating through a transient-form interconnection. Data is naturally distributed among the QDN nodes in extra-fine grain, where a few data items are dynamically created, collected, and/or stored at each node. Therefore, the network scales linearly to the size of the data set. With a dynamic data set, a dynamic and large set of nodes, and a transient-form communication infrastructure, QDNs should be considered as the new generation of distributed database systems with significantly less constraining assumptions as compared to their ancestors. Peer-to-peer networks (Daswani, Garcia-Molina, & Yang, 2003) and sensor networks (Akyildiz, Su, Sankarasubramaniam, & Cayirci, 2002; Estrin, Govindan, Heidemann, & Kumar, 1999) are well-known examples of QDNs.
2

Shahabi, Cyrus, and Farnoush Banaei-Kashani. "Querical Data Networks." In Handbook of Research on Innovations in Database Technologies and Applications, 788–97. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-242-8.ch083.

Abstract:
Recently, a family of massive self-organizing data networks has emerged. These networks mainly serve as large-scale distributed query processing systems. We term these networks Querical Data Networks (QDN). A QDN is a federation of a dynamic set of peer, autonomous nodes communicating through a transient-form interconnection. Data is naturally distributed among the QDN nodes in extra-fine grain, where a few data items are dynamically created, collected, and/or stored at each node. Therefore, the network scales linearly to the size of the dataset. With a dynamic dataset, a dynamic and large set of nodes, and a transient-form communication infrastructure, QDNs should be considered as the new generation of distributed database systems with significantly less constraining assumptions as compared to their ancestors. Peer-to-peer networks (Daswani, 2003) and sensor networks (Estrin, 1999, Akyildiz, 2002) are well-known examples of QDN.
3

Feng, Qinrong, Duoqian Miao, and Ruizhi Wang. "Multidimensional Model-Based Decision Rules Mining." In Post-Mining of Association Rules, 311–34. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-404-0.ch016.

Abstract:
Decision rule mining is an important technique in machine learning and data mining, and it has been studied intensively during the past few years. However, most existing algorithms are based on flat data tables, and the sets of decision rules mined from them may be very large for massive data sets. Such rule sets are neither easily understandable nor really useful for users; moreover, too many rules may lead to over-fitting. Thus, this chapter provides a method for mining decision rules at different abstraction levels, which aims to improve the efficiency of decision rule mining by combining the hierarchical structure of the multidimensional model with techniques from rough set theory. The algorithm follows the so-called separate-and-conquer strategy: certain rules are mined beginning from the most abstract level, the supporting sets of those rules are removed from the universe, and the algorithm then drills down to the next level to recursively mine further certain rules whose supporting sets are included in the remaining objects, until no objects remain in the universe or the primitive level is reached. The algorithm can therefore output generalized rules with different degrees of generalization.
4

Kaur, Harleen, Ritu Chauhan, and M. Alam. "An Optimal Categorization of Feature Selection Methods for Knowledge Discovery." In Data Mining, 92–106. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2455-9.ch005.

Abstract:
The continuous availability of massive experimental medical data has given impetus to a large effort in developing mathematical, statistical, and computationally intelligent techniques to infer models from medical databases. Feature selection has been an active research area in the pattern recognition, statistics, and data mining communities. However, there have been relatively few studies on preprocessing the data used as input for data mining systems in medicine. In this chapter, the authors focus on several feature selection methods and their effectiveness in preprocessing input medical data. They evaluate feature selection algorithms such as Mutual Information Feature Selection (MIFS), Fast Correlation-Based Filter (FCBF), and Stepwise Discriminant Analysis (STEPDISC) with the naive Bayesian and linear discriminant analysis machine learning techniques. The experimental analysis of feature selection techniques in medical databases has enabled the authors to find a small number of informative features, leading to potential improvements in medical diagnosis by reducing the size of the data set, eliminating irrelevant features, and decreasing the processing time.
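Of the filters evaluated above, mutual-information-based selection is the easiest to illustrate compactly. The sketch below ranks discrete features by mutual information with the class label; it is a generic filter of that kind, not the exact MIFS criterion (which also penalizes redundancy among already-selected features), and the toy data are invented.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two discrete feature/label sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def rank_features(rows, labels):
    """Rank feature columns by mutual information with the class label."""
    n_features = len(rows[0])
    scores = {j: mutual_information([r[j] for r in rows], labels)
              for j in range(n_features)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

rows = [(1, 0, 1), (1, 1, 0), (0, 0, 1), (0, 1, 0)]   # toy binary features
labels = [1, 1, 0, 0]
print(rank_features(rows, labels))
```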
5

Basirat, Amir, Asad I. Khan, and Heinz W. Schmidt. "Pattern Recognition for Large-Scale Data Processing." In Big Data, 929–40. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9840-6.ch043.

Abstract:
One of the main challenges for large-scale computer clouds dealing with massive real-time data is in coping with the rate at which unprocessed data is being accumulated. Transforming big data into valuable information requires a fundamental re-think of the way in which future data management models will need to be developed on the Internet. Unlike existing relational schemes, pattern-matching approaches can analyze data in ways similar to how our brain links information. Such interactions, when implemented in voluminous data clouds, can assist in finding overarching relations in complex and highly distributed data sets. In this chapter, a different perspective on data recognition is considered: rather than looking at conventional approaches, such as statistical computations and deterministic learning schemes, the chapter focuses on a distributed processing approach for scalable data recognition and processing.
6

Basirat, Amir, Asad I. Khan, and Heinz W. Schmidt. "Pattern Recognition for Large-Scale Data Processing." In Strategic Data-Based Wisdom in the Big Data Era, 198–208. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8122-4.ch011.

Abstract:
One of the main challenges for large-scale computer clouds dealing with massive real-time data is in coping with the rate at which unprocessed data is being accumulated. Transforming big data into valuable information requires a fundamental re-think of the way in which future data management models will need to be developed on the Internet. Unlike existing relational schemes, pattern-matching approaches can analyze data in ways similar to how our brain links information. Such interactions, when implemented in voluminous data clouds, can assist in finding overarching relations in complex and highly distributed data sets. In this chapter, a different perspective on data recognition is considered: rather than looking at conventional approaches, such as statistical computations and deterministic learning schemes, the chapter focuses on a distributed processing approach for scalable data recognition and processing.
7

Liu, Huawen, Jigui Sun, and Huijie Zhang. "Post-Processing for Rule Reduction Using Closed Set." In Post-Mining of Association Rules, 81–99. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-404-0.ch005.

Abstract:
In data mining, rule management is becoming more and more important. Usually, a large number of rules will be induced from large databases in many fields, especially when they are dense, which makes the gained knowledge hard to understand and interpret. To eliminate redundant rules from the rule base, many efforts have been made and various efficient and outstanding algorithms have been proposed. However, end-users are often unable to complete a mining task because insignificant rules remain. Thus, an efficient technique is needed to discard as many useless rules as possible without losing information. To achieve this goal, this paper proposes an efficient method to filter superfluous rules from the knowledge base in a post-processing manner. The main characteristic of the method is that it eliminates redundancy among rules through dependence relations, which can be discovered by closed set mining techniques. Performance evaluations show that the compression achieved by the proposed method is better, and its efficiency higher, than those of other techniques.
8

Massaro, Alessandro, and Angelo Galiano. "Image Processing and Post-Data Mining Processing for Security in Industrial Applications." In Handbook of Research on Intelligent Data Processing and Information Security Systems, 117–46. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-1290-6.ch006.

Abstract:
The chapter analyzes scientific approaches suitable for industrial security involving environmental health monitoring and production safety control. In particular, it discusses data mining algorithms able to extract hidden information important for security improvement. K-means and artificial intelligence algorithms are applied in different case studies, discussing the procedures used to set up the model, including image processing and post-clustering processing facilities. The chapter focuses on the information provided by the data mining results. The proposed model is matched with different architectures for different industrial applications, such as biometric classification and transport security, railway inspections, video surveillance image processing, and quarry risk evaluation; these architectures refer to specific industry projects. As advanced applications, a clustering analysis approach applied to thermal radiometric images and a dynamic contour extraction process suitable for oil spill monitoring are proposed.
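The clustering component mentioned above can be illustrated with a plain k-means sketch. The following 1-D version clusters toy radiometric pixel values into a few intensity groups; it is a generic k-means, not the chapter's processing pipeline, and the values and parameters are invented.

```python
import random

def kmeans_1d(values, k=3, iters=20, seed=0):
    """Plain 1-D k-means, e.g. for clustering pixel intensities of a
    thermal image into background / warm / hot regions."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

pixels = [18, 19, 21, 20, 35, 36, 34, 80, 82, 79]   # toy radiometric values
print(kmeans_1d(pixels, k=3))
```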
9

Smith, Gary. "Intelligent or obedient?" In The AI Delusion. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198824305.003.0003.

Abstract:
Jeopardy! is a popular game show that, in various incarnations, has been on television for more than 50 years. The show is a test of general knowledge with the twist that the clues are answers and the contestants respond with questions that fit the answers. For example, the clue, "16th President of the United States," would be answered correctly with "Who is Abraham Lincoln?" There are three contestants, and the first person to push his or her button is given the first chance to answer the question orally (with the exception of the Final Jeopardy clue, when all three contestants are given 30 seconds to write down their answers). In many ways, the show is ideally suited for computers because computers can store and retrieve vast amounts of information without error. (At a teen Jeopardy tournament, a boy lost the championship because he wrote "Who Is Annie Frank?" instead of "Who is Anne Frank?" A computer would not make such an error.) On the other hand, the clues are not always straightforward, and sometimes obscure. One clue was "Sink it and you've scratched." It is difficult for a computer that is nothing more than an encyclopedia of facts to come up with the correct answer: "What is the cue ball?" Another challenging clue was, "When translated, the full name of this major league baseball team gets you a double redundancy." (Answer: "What is the Los Angeles Angels?") In 2005 a team of 15 IBM engineers set out to design a computer that could compete with the best Jeopardy players. They named it Watson, after IBM's first CEO, Thomas J. Watson, who expanded IBM from 1,300 employees and less than $5 million in revenue in 1914 to 72,500 employees and $900 million in revenue when he died in 1956. The Watson program stored the equivalent of 200 million pages of information and could process the equivalent of a million books per second. Beyond its massive memory and processing speed, Watson can understand natural spoken language and use synthesized speech to communicate. Unlike search engines that provide a list of relevant documents or web sites, Watson was programmed to find specific answers to clues. Watson used hundreds of software programs to identify the keywords and phrases in a clue, match these to keywords and phrases in its massive database, and then formulate possible responses.
10

Phu, Vo Ngoc, and Vo Thi Ngoc Tran. "Artificial Neural Network Models for Large-Scale Data." In Advances in Data Mining and Database Management, 406–39. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7432-3.ch022.

Abstract:
Artificial intelligence (ARTINT) and information have been prominent fields for many years. One reason is that many different areas have advanced quickly on the basis of ARTINT and information, creating significant value over the years. This value has increasingly been used by national economies around the world, by other sciences, and by companies and organizations. Many massive corporations and big organizations have been established rapidly as these economies have developed strongly. Unsurprisingly, a great deal of information and many large-scale data sets have been created by these corporations and organizations, and processing and storing them successfully has become a major challenge for many commercial applications and studies. To handle this problem, many algorithms have been proposed for processing these big data sets.

Conference papers on the topic "Massive data set post-processing"

1

Zhe, Dai, and Liu Jianhui. "A dimensionality reduction based on rough set theory for complex massive data." In 2015 8th International Congress on Image and Signal Processing (CISP). IEEE, 2015. http://dx.doi.org/10.1109/cisp.2015.7408125.

2

Wang, Ziyue, Zhilin Chen, Ya-Feng Liu, Foad Sohrab, and Wei Yu. "An Efficient Active Set Algorithm for Covariance Based Joint Data and Activity Detection for Massive Random Access with Massive MIMO." In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021. http://dx.doi.org/10.1109/icassp39728.2021.9413525.

3

Wilcox, P. D. "Exploiting the Full Data Set from Ultrasonic Arrays by Post-Processing." In QUANTITATIVE NONDESTRUCTIVE EVALUATION. AIP, 2006. http://dx.doi.org/10.1063/1.2184614.

4

Adam, Erick, Elizabeth L'Heureux, Emmanuel Bongajum, and Bernd Milkereit. "3D Seismic imaging of Massive Sulfides: seismic modeling, data acquisition and processing issues." In SEG Technical Program Expanded Abstracts 2008. Society of Exploration Geophysicists, 2008. http://dx.doi.org/10.1190/1.3064083.

5

Pistone, Elisabetta, Hanno Töll, and Thomas Hauser. "Continuous Monitoring System of Metro Lines to Assess Long-term Behaviour of Massive Train Wheels." In IABSE Symposium, Guimarães 2019: Towards a Resilient Built Environment Risk and Asset Management. Zurich, Switzerland: International Association for Bridge and Structural Engineering (IABSE), 2019. http://dx.doi.org/10.2749/guimaraes.2019.0425.

Abstract:
This paper presents the results of a continuous monitoring system placed on the Metro line in Vienna, Austria, aimed at assessing the long-term behaviour of massive train wheels. To date, conventional resilient wheels are used on Viennese metro trains; however, it is planned to replace this type of wheel with massive wheels. Since 2016, three train sets have therefore been equipped with massive wheels and put into circulation in the underground railway network for trial runs. Nine measuring systems were installed within the metro network in the form of monitoring stations to continuously record data during these train passages. Selected indicators are permanently measured, post-processed, and transmitted in real time to an accessible web interface. On the basis of approximately 2,000 trains recorded daily, statistical analysis has been performed, providing information on train condition and on the impact of massive wheels.
6

Bongajum, E. L., I. White, and B. Milkereit. "Elastic seismic wave scattering and imaging of massive sulfides: rock physics and implications for seismic data acquisition and processing." In SEG Technical Program Expanded Abstracts 2010. Society of Exploration Geophysicists, 2010. http://dx.doi.org/10.1190/1.3513194.

7

Imai, Seira, Yasuharu Nakajima, and Motohiko Murai. "Experimental Study on Bubble Size Measurement for Development of Seafloor Massive Sulfides." In ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/omae2019-95186.

Abstract:
Seafloor massive sulfides are expected to become future mineral resources. To promote their development, seafloor mineral processing, a method of extracting valuable minerals from ores on the deep seafloor using flotation in order to reduce the cost of lifting ores from the seafloor to the sea surface, was proposed. To apply flotation to seafloor mineral processing, a bubble-size measurement method applicable to deep-sea conditions is needed, because fine air bubbles suitable for flotation must be generated under the pressure conditions of the deep seafloor. The authors have therefore studied bubble size measurement by image analysis, which is expected to be applicable to deep-sea conditions. In the first phase of this study, photographic conditions suitable for image analysis of air bubbles were determined. Air bubbles were generated using a porous nozzle at several air flow rates in a bubble column with a rectangular cross-section. Video images of the air bubbles were taken with both a high-speed camera and a video camera with a low frame rate, and bubble size was measured by binarizing the video images. Under optimal photographic conditions, bubble size was obtained not only from the high-speed camera but also from the video camera, and both sets of size data agreed relatively well, implying that bubble size measurement by image analysis would be applicable to deep-sea conditions. In the second phase, experiments were carried out under high-pressure conditions up to 2.4 MPa. Single bubble generation using a capillary nozzle in a small pressure chamber with a sight glass was observed with a digital microscope, and bubble size measurement by image analysis was carried out following the procedure established in the first phase. While the process of bubble generation at high pressures was similar to that at atmospheric pressure, the bubble size decreased as the pressure rose, implying a strong correlation between pressure and bubble size.
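The measurement principle described above is binarizing frames and sizing the resulting bubble blobs. The following pure-Python sketch thresholds a toy grayscale frame, labels connected dark blobs by flood fill, and converts each blob area to an equivalent diameter; the threshold, pixel scale, and frame are invented, and the authors' actual image-analysis procedure is not reproduced.

```python
def bubble_diameters(gray, threshold=128, pixel_mm=0.05):
    """Binarise a grayscale frame and estimate an equivalent diameter for
    each dark bubble blob via 4-connected flood fill (pure-Python sketch)."""
    h, w = len(gray), len(gray[0])
    binary = [[1 if gray[y][x] < threshold else 0 for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    diameters = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # equivalent diameter of a circle with the same pixel area
                diameters.append(2 * (area / 3.14159) ** 0.5 * pixel_mm)
    return diameters

frame = [[200, 200, 40, 35, 200],
         [200, 38, 30, 36, 200],
         [200, 200, 37, 200, 200],
         [200, 200, 200, 200, 90]]
print(bubble_diameters(frame))
```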
8

Pisani, Flávia, and Edson Borin. "Leveraging Constrained Devices for Custom Code Execution in the Internet of Things." In XXXVIII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos. Sociedade Brasileira de Computação, 2020. http://dx.doi.org/10.5753/sbrc_estendido.2020.12413.

Abstract:
With the ever-growing scale of the IoT, transmitting a massive volume of sensor data through the network will be too taxing. However, it will be challenging to include resource-constrained IoT devices as processing nodes in the fog computing hierarchy. To allow the execution of custom code sent by users on these devices, which are too limited for many current tools, we developed a platform called LibMiletusCOISA (LMC). Moreover, we created two models where the user can choose a cost metric (e.g., energy consumption) and then use it to decide whether to execute their code on the cloud or on the device that collected the data. We employed these models to characterize different scenarios and simulate future situations where changes in the technology can impact this decision.
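The LMC cost models themselves are not given in the abstract; the following is an invented two-term comparison that only illustrates the kind of decision described, i.e. whether processing a batch locally on the constrained device is cheaper, under a user-chosen cost metric, than transferring the raw data to the cloud.

```python
def run_on_device(data_size_kb, device_cost_per_kb, cloud_cost_per_kb, transfer_cost_per_kb):
    """Decide where to execute user code for one batch of sensor data.

    Costs are in an arbitrary user-chosen metric (e.g. millijoules per KB).
    Returns True if processing locally on the constrained device is cheaper
    than shipping the raw data to the cloud and processing it there."""
    local = data_size_kb * device_cost_per_kb
    remote = data_size_kb * (transfer_cost_per_kb + cloud_cost_per_kb)
    return local <= remote

print(run_on_device(512, device_cost_per_kb=0.9,
                    cloud_cost_per_kb=0.1, transfer_cost_per_kb=1.2))
```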
9

Dubetz, M. W., J. G. Kuhl, and E. J. Haug. "A Network Implementation of Real-Time Dynamic Simulation With Interactive Animated Graphics." In ASME 1988 Design Technology Conferences. American Society of Mechanical Engineers, 1988. http://dx.doi.org/10.1115/detc1988-0066.

Abstract:
This paper presents a network based implementation of real-time dynamic simulation methods. An interactive animated graphics environment is presented that permits the engineer to view high quality animated graphics rendering of dynamic performance, to interact with the simulation, and to study the effects of design variations, while the simulation is being carried out. An industry standard network computing system is employed to interface the parallel processor that carries out the dynamic simulation and a high speed graphics processor that creates and displays animated graphics. Multi-windowing and graphics processing methods that are employed to provide visualization and operator control of the simulation are presented. A vehicle dynamics application is used to illustrate the methods developed and to analyze communication bandwidth requirements for implementation with a compute server that is remote from the graphics workstation. It is shown that, while massive data sets are generated on the parallel processor during real-time dynamic simulation and extensive graphics data are generated on the workstation during rendering and display, data communication requirements between the compute server and the workstation are well within the capability of existing networks.
10

Yasa, Tolga, and Guillermo Paniagua. "Robust Post-Processing Procedure for Multi-Hole Pressure Probes." In ASME 2011 Turbo Expo: Turbine Technical Conference and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/gt2011-46719.

Abstract:
Aerodynamic probes have been extensively used in turbine performance measurements for over 60 years to provide flow direction and Mach numbers. In turbomachinery applications the absence of adequate optical access prevents the use of laser-Doppler-anemometry (LDA), laser-two-focus velocimetry, particle-image-velocimetry (PIV). Moreover, multi-hole pressure probes are more robust than hot-wire or hot-fiber probes, and less susceptible to gas contamination. The pressure readings are converted into flow direction using calibration maps. Some researchers tried to model theoretically or numerically the calibration map to speed up the process. Due to manufacturing abnormalities, experimental calibration is still essential. The calibration map is obtained in a wind tunnel varying the yaw and pitch angles, while recording the hole-pressures. With the advent of powerful computers, researchers introduced sophisticated techniques to process the calibration data. Depending on the geometry or manufacturing imperfections a conventional calibration map is distorted, with multiple crossings resulting in the inability to identify a unique flow direction. In the current paper, a new calibration and data processing procedure is introduced for multi-hole probe measurements. The new technique relies on a set of calibration data rather than a calibration map. The pressure readings from each hole are considered individually through a minimization algorithm. Hence, the new technique allows computing flow direction even when a hole is blocked during the test campaign. The new methodology is demonstrated in a five-hole probe including estimates on the uncertainty.
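The paper's minimization algorithm is not reproduced here; the sketch below only illustrates the general idea of working from a set of calibration records rather than a calibration map: it picks the calibration point whose hole pressures best match a measurement in a least-squares sense, and simply excludes any blocked hole from the residual. All numbers and names are invented.

```python
def infer_flow_angles(measured, calibration, blocked=()):
    """Pick the calibration point whose hole pressures best match a reading.

    'calibration' is a list of (yaw, pitch, pressures) tuples recorded in the
    wind tunnel; 'measured' is the in-test pressure vector.  Holes listed in
    'blocked' are excluded from the residual, which is what lets the per-hole
    formulation keep working when one hole is plugged during the test."""
    def residual(pressures):
        return sum((measured[i] - pressures[i]) ** 2
                   for i in range(len(measured)) if i not in blocked)
    yaw, pitch, _ = min(calibration, key=lambda rec: residual(rec[2]))
    return yaw, pitch

cal = [(-5.0, 0.0, [1.00, 0.80, 0.85, 0.82, 0.83]),
       ( 0.0, 0.0, [1.05, 0.84, 0.84, 0.84, 0.84]),
       ( 5.0, 0.0, [1.00, 0.88, 0.83, 0.86, 0.85])]
print(infer_flow_angles([1.01, 0.87, 0.83, 0.86, 0.85], cal, blocked={1}))
```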

Reports on the topic "Massive data set post-processing"

1

Müller, Thomas. The 1946 - 1956 Hydrographic Data Archive at the Institut für Meereskunde, Kiel, digitized: a data guide. GEOMAR, 2021. http://dx.doi.org/10.3289/geomar_rep_ns_58_2021.

Abstract:
This report is intended as a guide to early hydrographic log sheets of bottle data obtained by the former Institut für Meereskunde, Kiel (IFMK, now integrated into GEOMAR) in the post-war years 1946 to 1956, which were not available in digitized format at GEOMAR in summer 2018, when a building used by GEOMAR was to be cleared. The data were mostly taken by the research cutter FK "Südfall" in the Baltic. It turned out that some of these data, from 1950 to 1956, were available in digitized form in the on-line data bank of the International Council for the Exploration of the Sea (ICES). Comparison with the original logged data sheets, however, showed that they needed to be improved with respect to time and position and to be completed with missing data. This report briefly describes the methods used for sampling and measuring these old data, and the processing steps applied, using the data log sheets, to improve the data set before submitting the now improved and complete data set to data centres for archiving.
2

Bauer, Andrew, James Forsythe, Jayanarayanan Sitaraman, Andrew Wissink, Buvana Jayaraman, and Robert Haehnel. In situ analysis and visualization to enable better workflows with CREATE-AV™ Helios. Engineer Research and Development Center (U.S.), June 2021. http://dx.doi.org/10.21079/11681/40846.

Abstract:
The CREATE-AV™ Helios CFD simulation code has been used to accurately predict rotorcraft performance under a variety of flight conditions. The Helios package contains a suite of tools that together provide almost the entire set of functionality needed for a variety of workflows. These workflows include tools customized to properly specify many in situ analysis and visualization capabilities appropriate for rotorcraft analysis. In situ processing computes analysis and visualization information during a simulation run, before data is saved to disk; it has been referred to by a variety of terms including co-processing, co-visualization, coviz, etc. In this paper we describe the customization of the pre-processing GUI and the corresponding development of the Helios solver code base to effectively implement in situ analysis and visualization, reducing file I/O and speeding up workflows for CFD analysts. We showcase how the workflow enables the wide variety of Helios users to work effectively in post-processing tools they are already familiar with, rather than forcing them to learn new tools in order to post-process the in situ data extracts produced by Helios. These data extracts include various sources of information customized to Helios, such as knowledge about the near- and off-body grids, internal surface extracts with patch information, and volumetric extracts meant for fast post-processing of data. Additionally, we demonstrate how in situ processing can be used by workflow automation tools to convey information to the user that would be much more difficult to obtain when using full data dumps.
