To see the other types of publications on this topic, follow the link: High volume big data.

Journal articles on the topic 'High volume big data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'High volume big data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

S., Senthil Kumar, and Ms.V.Kirthika. "Big Data Analytics Architecture and Challenges, Issues of Big Data Analytics." International Journal of Trend in Scientific Research and Development 1, no. 6 (2017): 669–73. https://doi.org/10.31142/ijtsrd4673.

Abstract:
Big Data technologies use a new generation of technologies and architectures designed so that organizations can extract value from very large volumes of a wide variety of data by enabling high-velocity capture, discovery, and/or analysis. Big data is a massive amount of digital data being collected from various sources that are too large. Big data deals with challenges like complexity, security, and risks to privacy. Big data is redefining data management, from extraction and transformation to cleaning and reducing. The size and variety of data lead us to think ahead and develop new and faster methods
2

ADEBO, PHILIP. "BIG DATA IN BUSINESS." International Journal of Advanced Research in Computer Science and Software Engineering 8, no. 1 (2018): 160. http://dx.doi.org/10.23956/ijarcsse.v8i1.543.

Abstract:
ABSTRACTBusiness has always desired to derive insights from big data in order to make better, smarter, data-driven decisions. Big data refers to data that are generated at high volume, high velocity, high variety, high veracity, and high value. It has fundamentally changed the way business companies operate, make decisions, and compete. It can create value for businesses. This paper provides a brief introduction to how big data is being used in businesses.
3

Lakshmanasamy, Rameshbabu, and Girish Ganachari. "Data Integrity Problems in High-Volume High-Velocity Data Ingestion." INTERANTIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 10 (2024): 1–6. http://dx.doi.org/10.55041/ijsrem14175.

Abstract:
In the era of big data, with a never-ending data push from IoT devices, IT infrastructure is built to be scalable enough to handle huge batch loads or continuously streaming live data. However, the big question is how we can establish data integrity. How can we make sure no data is lost from origin to destination while passing through numerous touch points en route? How can we ensure quality with a continuous inflow? Should the inflow be suspended to perform the DQ checks? Or should it be a totally independent parallel activity? Let's explore. Key words: Quality Data Management, Data Pipeli
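The integrity questions raised in this abstract are usually answered in practice with reconciliation checks between the origin and the destination of a pipeline. The following Python sketch is only an illustration of that idea, not the authors' method; the field names and records are invented, and it simply compares record counts and per-record digests on both sides.

```python
import hashlib

def record_digest(record: dict, key_fields=("id", "amount")) -> str:
    """Digest of the fields we care about, independent of column order."""
    payload = "|".join(str(record.get(f)) for f in key_fields)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def reconcile(source_records, target_records):
    """Compare counts and per-record digests between two sides of a pipeline.

    Digests are held in sets, so exact duplicate records collapse; a real
    reconciliation job would also track multiplicities.
    """
    src = {record_digest(r) for r in source_records}
    tgt = {record_digest(r) for r in target_records}
    return {
        "source_count": len(source_records),
        "target_count": len(target_records),
        "missing_in_target": len(src - tgt),
        "unexpected_in_target": len(tgt - src),
    }

if __name__ == "__main__":
    source = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 20.5}]
    target = [{"id": 1, "amount": 10.0}]   # one record lost en route
    print(reconcile(source, target))       # missing_in_target == 1
```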
4

U., Prathibha, Thillainayaki M., and Jenneth A. "Big Data Analysis with R Programming and RHadoop." International Journal of Trend in Scientific Research and Development 2, no. 4 (2018): 2623–27. https://doi.org/10.31142/ijtsrd15705.

Abstract:
Big data is a technology to access huge data sets that have high Velocity, high Volume and high Variety and a complex structure, with the difficulties of management, analyzing, storing and processing. The paper focuses on extraction of data efficiently in big data tools using R programming techniques and how to manage the data and the components that are useful in handling big data. Data can be classified as public, confidential and sensitive. This paper proposes the big data applications with the Hadoop Distributed Framework for storing huge data in cloud in a highly efficient manner. This paper des
5

Huda, M. Misbachul, Dian Rahma Latifa Hayun, and Zhin Martun. "Data Modeling for Big Data." Jurnal ULTIMA InfoSys 6, no. 1 (2015): 1–11. http://dx.doi.org/10.31937/si.v6i1.273.

Abstract:
Today the rapid growth of the internet and the massive usage of the data have led to the increasing CPU requirement, velocity for recalling data, a schema for more complex data structure management, the reliability and the integrity of the available data. This kind of data is called as Large-scale Data or Big Data. Big Data demands high volume, high velocity, high veracity and high variety. Big Data has to deal with two key issues, the growing size of the datasets and the increasing of data complexity. To overcome these issues, today researches are devoted to kind of database management system
6

Jeong, Yoon-su, and Seung-soo Shin. "A Multidata Connection Scheme for Big Data High-Dimension Using the Data Connection Coefficient." Mathematical Problems in Engineering 2015 (2015): 1–6. http://dx.doi.org/10.1155/2015/931352.

Abstract:
In the era of big data and cloud computing, sources and types of data vary, and the volume and flow of data are massive and continuous. With the widespread use of mobile devices and the Internet, huge volumes of data distributed over heterogeneous networks move forward and backward across networks. In order to meet the demands of big data service providers and customers, efficient technologies for processing and transferring big data over networks are needed. This paper proposes a multidata connection (MDC) scheme that decreases the amount of power and time necessary for information to be comm
7

Chaudhary, Dr Sunita. "Analysis of Concept of Big Data Process, Strategies, Adoption and Implementation." International Journal on Future Revolution in Computer Science & Communication Engineering 8, no. 1 (2022): 05–08. http://dx.doi.org/10.17762/ijfrcsce.v8i1.2065.

Abstract:
Big data is the data set which contains variety of data which increases on a daily basis in an organisation and also it has large volume of data with high velocity. It is the complex form of data which is usually identified from new set of data sources. It is widely used in most of the companies as it will have latest data which is useful to solve critical issues in the business or an organisation. BIG data has such large volume of data that it cannot be handled by simple data handling software and need latest technology software to handle such large volumes of data. The amount of data present
8

Borodo, Salisu Musa, Siti Mariyam Shamsuddin, and Shafaatunnur Hasan. "Big Data Platforms and Techniques." Indonesian Journal of Electrical Engineering and Computer Science 1, no. 1 (2016): 191. http://dx.doi.org/10.11591/ijeecs.v1.i1.pp191-200.

Abstract:
Data is growing at unprecedented rate and has led to huge volume generated; the data sources include mobile, internet and sensors. This voluminous data is generated and updated at high velocity by batch and streaming platforms. This data is also varied along structured and unstructured types. This volume, velocity and variety of data led to the term big data. Big data has been premised to contain untapped knowledge, its exploration and exploitation is termed big data analytics. This literature reviewed platforms such as batch processing, real time processing and interactive analytics used in b
9

Nimankar, Sachchidanand, and Sushant Dagare. "BIG DATA ANALYTICS: 4A's." INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY 7, no. 2 (2018): 328–32. https://doi.org/10.5281/zenodo.1173488.

Abstract:
Basically, data process is seen to be gathering, processing and management of data for giving output of “new” information for end users [2]. Over time, key challenges are related to mining, storage, transportation and processing of high throughput data. It is different from Big Data challenges to which we have to add Volume, Velocity, Value, Veracity, variety, Visualization and Variability [4]. Consequently, these requirements imply an additional step where data are cleaned, tagged, classified and formatted. Big Data analysis currently splits into four steps: Acquisition or Access,
10

Kadhim Jawad, Wasnaa, and Abbas M. Al-Bakry. "Big Data Analytics: A Survey." Iraqi Journal for Computers and Informatics 49, no. 1 (2022): 41–51. http://dx.doi.org/10.25195/ijci.v49i1.384.

Abstract:
Internet-based programs and communication techniques have become widely used and respected in the IT industry recently. A persistent source of "big data," or data that is enormous in volume, diverse in type, and has a complicated multidimensional structure, is internet applications and communications. Today, several measures are routinely performed with no assurance that any of them will be helpful in understanding the phenomenon of interest in an era of automatic, large-scale data collection. Online transactions that involve buying, selling, or even investing are all examples of e-commerce. A
11

Parmar, Tarun. "Scaling Data Infrastructure for High-Volume Manufacturing: Challenges and Solutions in Big Data Engineering." International Scientific Journal of Engineering and Management 04, no. 01 (2025): 1–6. https://doi.org/10.55041/isjem01352.

Abstract:
Scaling data infrastructure for high-volume manufacturing presents significant challenges owing to the rapid growth, diversity, and complexity of the data generated by modern production processes. This review explores the key challenges and solutions in big-data engineering to enable efficient, scalable, and reliable data management in manufacturing environments. The primary challenges include handling the volume, velocity, and variety of data; ensuring real-time processing and analysis; managing data storage and retrieval at scale; and maintaining data quality and consistency. To ad
12

Shah, Syed Iftikhar Hussain, Vassilios Peristeras, and Ioannis Magnisalis. "DaLiF: a data lifecycle framework for data-driven governments." Journal of Big Data 8 (June 14, 2021): 1–44. https://doi.org/10.1186/s40537-021-00481-3.

Abstract:
The public sector, private firms, business community, and civil society are generating data that is high in volume, veracity, velocity and comes from a diversity of sources. This kind of data is known as big data. Public Administrations (PAs) pursue big data as “new oil” and implement data-centric policies to transform data into knowledge, to promote good governance, transparency, innovative digital services, and citizens’ engagement in public policy. From the above, the Government Big Data Ecosystem (GBDE) emerges. Managing big data throughout its lifecycle becomes a challen
13

U., Prathibha, G. Anitha Dr., and Ramyabharathi J. "Secured File Storage System In Big Data With Cloud Access Using Security Algorithms." International Journal of Trend in Scientific Research and Development 2, no. 5 (2018): 594–602. https://doi.org/10.31142/ijtsrd15895.

Abstract:
Big data is a technology to access huge data sets that have high Velocity, high Volume and high Variety and a complex structure, with the difficulties of management, analyzing, storing and processing. The paper focuses on extraction of data efficiently in big data and how to manage the data and the components that are useful in handling big data. It addresses security in the era of big data, and especially the problem of reconciling security and privacy models by exploiting the map reduce framework. Data can be classified as public, confidential and sensitive. This paper proposes the big data applications with the Hado
14

Hotton, Jonatan. "Modernisasi Official Statistic Dengan Big Data." Madani: Jurnal Ilmiah Multidisiplin 1, no. 11 (2023): 693–98. https://doi.org/10.5281/zenodo.10408398.

Abstract:
In this paper, we describe and discuss opportunities for modernizing official statistics through big data. Big data comes in high volume, high speed, and high variety. High volume can lead to better accuracy and more detail, high speed can lead to more frequent and more timely statistical estimates, and high variation can provide opportunities for statistics in new areas. However, there are also many challenges: there are uncontrollable changes in sources that threaten continuity and comparability, and data that are only indirectly related to phenomena of statistical interest. In addition,
15

S.Swarnalatha, K. Vidya. "Big Data Analytics: Map Reduce Function using BIRCH Clustering Algorithm." International Journal of Information Technology 1, no. 3 (2020): 1–7. https://doi.org/10.5281/zenodo.3674245.

Abstract:
It is well known that, in Big Data, information is represented in unstructured form and NoSQL is used for query processing. The volume of data is also too large, and simple query processing is not sufficient and is irrelevant. From that large volume of data, extracting the knowledgeable information is a big challenge. To analyze that, various Big Data analytical techniques are available in the market that uncover hidden patterns, market trends, customer preferences and other useful information that can help the organization to take useful decisions within less amount of time. For such applicat
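BIRCH is often chosen for large volumes because it summarises data into a compact clustering-feature tree and can be fed records incrementally. The snippet below is a minimal single-node sketch using scikit-learn's Birch, shown only to illustrate that incremental behaviour; it is not the MapReduce integration the paper describes, and the chunk sizes, cluster count, and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(seed=0)
model = Birch(n_clusters=3, threshold=0.5)

# Stream the data in chunks, as one would when the full set does not fit in memory;
# each chunk is shifted so that roughly three groups emerge across the stream.
for _ in range(10):
    chunk = rng.normal(size=(1_000, 2)) + rng.integers(0, 3) * 5
    model.partial_fit(chunk)

# Assign cluster labels to a handful of new points.
labels = model.predict(rng.normal(size=(5, 2)))
print(labels)
```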
16

GORLEVSKAYA, L. E., Vladimir Dmitrievich SEKERIN, A. Z. GUSOV, and A. E. GOROKHOVA. "Analytical Review of Interest for Big Data." Journal of Advanced Research in Law and Economics 8, no. 8 (2018): 2399. http://dx.doi.org/10.14505//jarle.v8.8(30).10.

Abstract:
High importance and great power of Big Data in connection with the lack of common understanding of the term has led to multiple challenges in the field. In order to overcome some of the challenges this study appeals to the Big Data term and presents an overview of approaches to Big Data. Authors argue that volume is the key characteristic of Big Data and suggest making the definition which is clear to a wide audience. This paper determines modern trends of interest for Big Data from different economy participants. It shows that the Big Data term has become mature. The paper proves high steady
17

Mahur, Chandramani, Shivam Tiwari, and Sansar Singh Chauhan. "A Review: Different Dimensions of Processing Units of Big Data." Journal of Big Data Technology and Business Analytics 1, no. 3 (2022): 11–16. http://dx.doi.org/10.46610/jbdtba.2022.v01i03.003.

Abstract:
Data generation is too easy, and at the same time dealing with a large amount of data is hard to manage. In this regard big data came into the picture. Data are being presented in different formats like structured, semi-structured, and unstructured form. Every device which is connecting to the internet generates different types of data in the same or different conditions with large volume. Big data deals with processing, storing, accessing and managing of high volume of data in stored or real time. In this paper, all focus goes to the problem to manage high volume. In big data, Hadoop, map-reduce, no-sql these
18

Gayatri Tavva. "Maximizing ETL efficiency: Patterns for high-volume data." International Journal of Science and Research Archive 15, no. 2 (2025): 1063–70. https://doi.org/10.30574/ijsra.2025.15.2.1477.

Abstract:
The increasing demands of big data environments have placed a renewed emphasis on the efficiency of Extract, Transform, and Load (ETL) processes. Traditional batch-oriented ETL approaches struggle to cope with the scale, velocity, and variety of modern datasets. This review explores emerging patterns and architectures for maximizing ETL efficiency in high-volume data contexts, focusing on serverless frameworks, real-time processing, distributed computation models, and cost optimization strategies. Experimental evaluations demonstrate that serverless and stream-based ETL frameworks achieve supe
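One of the simplest high-volume ETL patterns alluded to above is chunked, incremental processing, which keeps memory use bounded regardless of source size. The sketch below is a hedged illustration (the file name, column names, chunk size, and SQLite target are all invented), not one of the frameworks evaluated in the paper.

```python
import sqlite3
import pandas as pd

SOURCE_CSV = "events.csv"   # hypothetical input; a tiny one is written here for the demo
CHUNK_ROWS = 2              # in practice: tens or hundreds of thousands of rows

# Demo setup: create a small source file (a real pipeline would already have one).
pd.DataFrame(
    {"Event_ID ": [1, 2, None, 4], "payload": ["a", "b", "c", "d"]}
).to_csv(SOURCE_CSV, index=False)

def transform(chunk: pd.DataFrame) -> pd.DataFrame:
    """Example transformation: normalise column names and drop rows missing a key."""
    chunk.columns = [c.strip().lower() for c in chunk.columns]
    return chunk.dropna(subset=["event_id"])

# Extract/Load: stream the file chunk by chunk and append into the target table,
# so memory use stays bounded no matter how large the source grows.
with sqlite3.connect("warehouse.db") as conn:
    for chunk in pd.read_csv(SOURCE_CSV, chunksize=CHUNK_ROWS):
        transform(chunk).to_sql("events", conn, if_exists="append", index=False)
```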
19

Chen, Weiru, Jared Oliverio, Jin Ho Kim, and Jiayue Shen. "The Modeling and Simulation of Data Clustering Algorithms in Data Mining with Big Data." Journal of Industrial Integration and Management 04, no. 01 (2019): 1850017. http://dx.doi.org/10.1142/s2424862218500173.

Abstract:
Big Data is a popular cutting-edge technology nowadays. Techniques and algorithms are expanding in different areas including engineering, biomedical, and business. Due to the high-volume and complexity of Big Data, it is necessary to conduct data pre-processing methods when data mining. The pre-processing methods include data cleaning, data integration, data reduction, and data transformation. Data clustering is the most important step of data reduction. With data clustering, mining on the reduced data set should be more efficient yet produce quality analytical results. This paper presents the
20

Vargas Vieira dos Santos, Vinícius. "Práticas linguísticas em Big Data / Practical language in Big Data." Texto Livre: Linguagem e Tecnologia 10, no. 1 (2017): 31–52. http://dx.doi.org/10.17851/1983-3652.10.1.31-52.

Abstract:
This article aims to address possible relationships between new digital media and certain conceptual aspects of language, such as meaning and performativity. Big data is the term that refers to the accumulation of digital data that has characterized mass communication media over the last two decades and is directly related to the current configuration of the Web 2.0 technology services platform. The immense scales of volume and variety of digital data and the high rates of velocity that characterize Big data modify the landscapes of the social context, consequently provoking,
21

Chen, Tao, Jie Ma, Yi Liu, et al. "iProX in 2021: connecting proteomics data sharing with big data." Nucleic Acids Research 50, no. D1 (2021): D1522–D1527. http://dx.doi.org/10.1093/nar/gkab1081.

Abstract:
The rapid development of proteomics studies has resulted in large volumes of experimental data. The emergence of big data platforms provides the opportunity to handle these large amounts of data. The integrated proteome resource, iProX (https://www.iprox.cn), which was initiated in 2017, has been greatly improved with an up-to-date big data platform implemented in 2021. Here, we describe the main iProX developments since its first publication in Nucleic Acids Research in 2019. First, a hyper-converged architecture with high scalability supports the submission process. A Hadoop cluster
22

Samyukta Rongala. "Optimizing ETL Processes for High-Volume Data Warehousing in Financial Applications." Journal of Information Systems Engineering and Management 10, no. 8s (2025): 700–708. https://doi.org/10.52783/jisem.v10i8s.1130.

Abstract:
The Extract, Transform, Load (ETL) process is a critical backbone in financial data warehousing, where large-scale data volumes demand optimized performance to meet industry requirements. Financial institutions rely heavily on ETL systems to integrate, cleanse, and structure data for decision-making and regulatory compliance. This paper delves into the optimization of ETL processes for high-volume data warehousing in financial applications. By analyzing current challenges, exploring advanced architectures, and incorporating emerging technologies such as Big Data frameworks and cloud solutions,
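A staple optimisation in such warehousing pipelines is incremental extraction driven by a watermark column, so that only rows changed since the last load are pulled. The Python/SQLite sketch below illustrates the pattern under invented table and column names; it is not the architecture proposed in the paper.

```python
import sqlite3

def incremental_extract(conn: sqlite3.Connection, last_watermark: str):
    """Pull only rows updated after the previous load and return the new watermark."""
    rows = conn.execute(
        "SELECT id, amount, updated_at FROM transactions "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transactions (id INTEGER, amount REAL, updated_at TEXT)")
    conn.executemany(
        "INSERT INTO transactions VALUES (?, ?, ?)",
        [(1, 10.0, "2025-01-01"), (2, 20.0, "2025-01-03")],
    )
    rows, wm = incremental_extract(conn, "2025-01-02")
    print(rows, wm)   # only the 2025-01-03 row is pulled; the watermark advances
```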
23

K, Arulkumar. "High Performance and Fault Tolerent Techniques Used to Improve the Data Processing Performance in Big Data." Shanlax International Journal of Arts, Science and Humanities 6, S1 (2018): 47–54. https://doi.org/10.5281/zenodo.1410965.

Abstract:
Big data plays a major role in the real world. Every day the database access may be in an increased manner. The size of the database is also increased, as in terabytes. Different types of people in different places can access and store their data that are needed for their usage. Big data may contain a large volume of data. Everyone accesses data simultaneously. To retrieve particular data from the large database is not easy. To avoid these types of problems, many techniques are developed to extract the needed data in a large database. Sometimes the larger size o
24

Sazu, Mesbaul Haque, and Sakila Akter Jahan. "HIGH EFFICIENCY PUBLIC TRANSPORTATION SYSTEM: ROLE OF BIG DATA IN MAKING RECOMMENDATIONS." Journal of process management and new technologies 10, no. 3-4 (2022): 9–21. http://dx.doi.org/10.5937/jpmnt10-38013.

Abstract:
Big data has a huge impact on urban planning and cities morphology. Big data is utilized to appraise the requirements of the shared transport structure, by focusing on funding and portability plans inside the key cities. The research provides a recommendation-making system (RMS) focused on suggesting transport methods to automobile consumption by detailing a huge volume of transport methods information originating from various products. The research focuses on the utilization of big data to come down with shared transport, and presents a structural understanding for gathering, combining, aggre
25

Uhrin, Martin, and Sebastiaan Huber. "kiwiPy: Robust, high-volume, messaging for big-data and computational science workflows." Journal of Open Source Software 5, no. 52 (2020): 2351. http://dx.doi.org/10.21105/joss.02351.

26

J., Christina Bai Annapoorani, and Ruby Gnanaselvam C. "EMERGING TRENDS IN BIG DATA ANALYTICS IN HEALTHCARE." International Journal of Multidisciplinary Research and Modern Education 3, no. 1 (2017): 143–47. https://doi.org/10.5281/zenodo.437951.

Abstract:
Everyday data is growing exponentially and it becomes necessary to analyze the massive amount of data to achieve meaningful results. Big data Analytics helps in analyzing high volume of data to identify useful information patterns and trends using various Algorithms. Healthcare is one of the most important areas for developing and developed countries to facilitate the priceless human resource. The healthcare industry has produced enormous amount of data through patient information and record storage. This paper focuses on the emerging trends in big data analytics in the healthcare industry.
27

Sundarakumar M. R., Mahadevan G., Ramasubbareddy Somula, Sankar Sennan, and Bharat S. Rawal. "An Approach in Big Data Analytics to Improve the Velocity of Unstructured Data Using MapReduce." International Journal of System Dynamics Applications 10, no. 4 (2021): 1–25. http://dx.doi.org/10.4018/ijsda.20211001.oa6.

Abstract:
Big Data Analytics is an innovative approach for extracting the data from a huge volume of data warehouse systems. It reveals the method to compress the high volume of data into clusters by MapReduce and HDFS. However, the data processing has taken more time for extract and store in Hadoop clusters. The proposed system deals with the challenges of time delay in shuffle phase of map-reduce due to scheduling and sequencing. For improving the speed of big data, this proposed work using the Compressed Elastic Search Index (CESI) and MapReduce-Based Next Generation Sequencing Approach (MRBNGSA). Th
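For readers unfamiliar with the map, shuffle, and reduce phases this abstract refers to, the toy word count below imitates them in plain Python. It is purely illustrative and does not implement the CESI or MRBNGSA techniques proposed in the paper; the shuffle step is the grouping stage whose delay the authors target.

```python
from collections import defaultdict
from itertools import chain

documents = ["big data needs scale", "big data needs speed"]

# Map: emit (key, 1) pairs for every word.
mapped = chain.from_iterable(((word, 1) for word in doc.split()) for doc in documents)

# Shuffle: group values by key (the phase whose scheduling delay the paper addresses).
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each group into a final count.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)   # {'big': 2, 'data': 2, 'needs': 2, 'scale': 1, 'speed': 1}
```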
28

Najna Nazir M K and Ambili Antharjanam. "BIG DATA FRAMEWORK FOR EDUCATIONAL ANALYSIS." international journal of engineering technology and management sciences 7, no. 2 (2023): 860–65. http://dx.doi.org/10.46647/ijetms.2023.v07i02.096.

Abstract:
Huge amounts of educational data are being produced, and a common challenge that many educational organizations confront, is finding an effective method to harness and analyze this data for continuously delivering enhanced education. Nowadays, the educational data is evolving and has become large in volume, wide in variety and high in velocity. This produced data needs to be handled in an efficient manner to extract value and make informed decisions. For that, the proposed system confronts such data as a big data challenge and presents a comprehensive platform tailored to perform educational b
29

B, Sarada, Vinayaka Murthy. M, and Udaya Rani. V. "An approach to achieve high efficiency for large volume data processing using multiple clustering algorithms." International Journal of Engineering & Technology 7, no. 4.5 (2018): 689. http://dx.doi.org/10.14419/ijet.v7i4.5.25059.

Abstract:
Nowadays data is increasing exponentially daily in terms of velocity, variety and volume, which is also known as Big data. When the dataset has a small number of dimensions, a limited number of clusters and a small number of data points, the existing traditional clustering algorithms will give the expected results. As we know this is the Big Data age; with large volume data sets, through the traditional clustering algorithms we will not be able to get expected results. So there is a need to develop a new approach which gives better accuracy and computational time for large volume of data proces
30

Tahmazli-Khaligova, Firuza. "CHALLENGES OF USING BIG DATA IN DISTRIBUTED EXASCALE SYSTEMS." Azerbaijan Journal of High Performance Computing 3, no. 2 (2020): 245–54. http://dx.doi.org/10.32010/26166127.2020.3.2.245.254.

Abstract:
In a traditional High Performance Computing system, it is possible to process a huge data volume. The nature of events in classic High Performance Computing is static. A Distributed Exa-scale System has a different nature. Processing Big data in a distributed exascale system evokes a new challenge. The dynamic and interactive character of a distributed exascale system changes process statuses and system elements. This paper discusses the challenge posed by the Big data attributes volume, velocity and variety, and how they influence the dynamic and interactive nature of a distributed exascale system. While inves
31

Deng, Yaotian, Han Zheng, and Jingshi Yan. "Applications of Big Data in Economic Information Analysis and Decision-Making under the Background of Wireless Communication Networks." Wireless Communications and Mobile Computing 2022 (January 17, 2022): 1–7. http://dx.doi.org/10.1155/2022/7084969.

Abstract:
Owing to the growing volumes of mobile telecommunications customers, Internet websites, and digital services, there are more and more big data styles and types around the world. With the help of big data technology with high semantic information, this paper focuses on exploring the value and corresponding application of big data in finance. By comparing with the existing methods in terms of search speed and data volume, we can effectively see the effectiveness and superiority of the algorithm proposed in this paper. Furthermore, the algorithm proposed in this paper can provide some reference i
32

Manivannan, Tamilselvan, and Susainathan Amuthan. "Encroachment in Data Processing using Big Data Technology." International Journal of Computing Algorithm 9, no. 1 (2020): 10–18. http://dx.doi.org/10.20894/ijcoa.101.009.001.002.

Abstract:
The nature of big data is now growing and information is present all around us in different kinds of forms. The big data information plays a crucial role and it provides business value for the firms and its benefit sectors by accumulating knowledge. This growth of big data around all the concerns is high and a challenge in data processing technique because it contains a variety of data in enormous volume. The tools which are built on the data mining algorithm provide efficient data processing mechanisms, but do not fulfill the pattern of heterogeneous data, so the emerging tools such as Hadoop MapReduce,
33

Shivaji, Dodmise Usha. "Real-time Big Data Analytics for Financial Markets." International Journal of Science and Social Science Research 1, no. 3 (2024): 365–69. https://doi.org/10.5281/zenodo.13623483.

Abstract:
Financial markets are constantly generating new data, which can be used to make better investment decisions. However, the real-time processing of big data in financial markets is challenging due to the high volume and velocity of the data. This research topic seeks to develop real-time big data analytics methods for financial markets. Gap: Existing real-time big data analytics methods for financial markets are often not accurate enough to make reliable investment decisions. This research topic seeks to develop new methods that can improve the accuracy of real-time big data analytics for financ
34

Tao, Cui Xia. "The Research of High Efficient Data Mining Algorithms for Massive Data Sets." Applied Mechanics and Materials 556-562 (May 2014): 3901–4. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.3901.

Abstract:
Data mining means to extract information and knowledge that potentially useful while still unknown in advance, from a large quantity of implicit incomplete, random data. With the quick advancement of modern information technology, people are accumulating data volume on the increase sharply, often at the speed of TB. How to extract meaningful information from large amounts of data has become a big problem must be tackled. In view of the huge amounts of data mining, distributed parallel processing and incremental processing is valid solution.
35

Abbasova, P. A. "Big Data Analysys in Cyber Physical Systems." Herald of Azerbaijan Engineering Academy 16, no. 3 (2024): 81–86. https://doi.org/10.52171/2076-0515_2024_16_03_81_86.

Abstract:
Big Data is a modern analytical trend that enables decision-making based on more data than ever before. Developments in mobile computing, communications, and mass storage architectures over the past decade have given rise to Big Data, an unprecedented amount of valuable information generated in various forms at high speeds. The ability to process this large volume of data in real time using Big Data analytics tools brings many benefits that can be used in cyber threat analysis systems. This article describes Big Data for cyber-physical systems, real-time stream processing, modeling and behavioral le
36

Klipa, Djuro, Igor Ristić, Aleksandar Radonjić, and Ivan Scepanović. "BIG DATA AND ARTIFICIAL INTELLIGENCE." International Journal of Management Trends: Key Concepts and Research 1, no. 1 (2022): 3–14. http://dx.doi.org/10.58898/ijmt.v1i1.03-14.

Abstract:
The Big Data embodies a technology that permits the storage, processing, and management of broad and complex data sets in which traditional data processing applications are not applicable. These sets of data are usually characterized by a substantial volume of information that they carry, variety, versatility in terms of the format in which they are written, as well as a high-speed ingress which is often greater than the speed of processing. A particular challenge is the data which are coming from the Internet of Things (IoT) “world” that is constantly expanding and which already consists of s
37

Tiwari, Jyotindra, Dr Mahesh Pawar, and Dr Anjajana Pandey. "A Survey on Accelerated Mapreduce for Hadoop." Oriental journal of computer science and technology 10, no. 3 (2017): 597–602. http://dx.doi.org/10.13005/ojcst/10.03.07.

Abstract:
Big Data is defined by 3Vs which stands for variety, volume and velocity. The volume of data is very huge, data exists in variety of file types and data grows very rapidly. Big data storage and processing has always been a big issue. Big data has become even more challenging to handle these days. To handle big data high performance techniques have been introduced. Several frameworks like Apache Hadoop has been introduced to process big data. Apache Hadoop provides map/reduce to process big data. But this map/reduce can be further accelerated. In this paper a survey has been performed for map/r
38

Satya Sai Kumar, Avula, S. Mohan, and R. Arunkumar. "A Survey on Security Models for Data Privacy in Big Data Analytics." Asian Journal of Computer Science and Technology 7, S1 (2018): 87–89. http://dx.doi.org/10.51983/ajcst-2018.7.s1.1798.

Abstract:
As emerging data world like Google and Wikipedia, volume of the data growing gradually for centralization and provide high availability. The storing and retrieval in large volume of data is specialized with the big data techniques. In addition to the data management, big data techniques should need more concentration on the security aspects and data privacy when the data deals with authorized and confidential. It is to provide secure encryption and access control in centralized data through Attribute Based Encryption (ABE) Algorithm. A set of most descriptive attributes is used as categorize t
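Real Attribute-Based Encryption relies on pairing-based cryptography, so the sketch below only illustrates the access-control idea behind it: data is tagged with an attribute policy, and a key holder qualifies only if their attributes satisfy that policy. The policy format and attribute names are invented for illustration, and no actual encryption is performed; this is not the scheme surveyed in the paper.

```python
def satisfies(policy: dict, attributes: set) -> bool:
    """Grant access only if every 'all_of' attribute is held and,
    when an 'any_of' list is present, at least one of it matches."""
    has_required = set(policy.get("all_of", [])) <= attributes
    has_optional = not policy.get("any_of") or bool(attributes & set(policy["any_of"]))
    return has_required and has_optional

# Hypothetical policy attached to a confidential record.
policy = {"all_of": ["employee"], "any_of": ["finance", "audit"]}

print(satisfies(policy, {"employee", "finance"}))    # True: this key may decrypt
print(satisfies(policy, {"employee", "marketing"}))  # False: access denied
```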
39

S.P., Siddique Ibrahim. "EXTRACT DATA IN LARGE DATABASE WITH HADOOP." International Journal of Advances in Engineering & Scientific Research 1, no. 7 (2014): 05–09. https://doi.org/10.5281/zenodo.10725257.

Abstract:
Data is basic building block of any organization and extracting useful information from raw available data is the big task and high complexity task. Data are the patterns which are used to develop or enhance knowledge. The rapid growth in the size of datasets that are collected from different resources has made capturing, managing and analyzing the datasets beyond the ability of most software tools. The current
40

Jo, Junghee, and Kang-Woo Lee. "High-Performance Geospatial Big Data Processing System Based on MapReduce." ISPRS International Journal of Geo-Information 7, no. 10 (2018): 399. http://dx.doi.org/10.3390/ijgi7100399.

Abstract:
With the rapid development of Internet of Things (IoT) technologies, the increasing volume and diversity of sources of geospatial big data have created challenges in storing, managing, and processing data. In addition to the general characteristics of big data, the unique properties of spatial data make the handling of geospatial big data even more complicated. To facilitate users implementing geospatial big data applications in a MapReduce framework, several big data processing systems have extended the original Hadoop to support spatial properties. Most of those platforms, however, have incl
41

Muley, Jayshree Dnyandeo, and Harsha R. Vyawahare. "A HYBRID APPROACH FOR INFORMATION RETRIEVAL USING BIG DATA ANALYTICS." GLOBAL JOURNAL OF ENGINEERING SCIENCE AND RESEARCHES [NC-Rase 18] (November 13, 2018): 171–78. https://doi.org/10.5281/zenodo.1485451.

Abstract:
Digital world is growing very fast and become more complex in the volume (terabyte to petabyte), variety (structured and un-structured and hybrid), velocity (high speed in growth) in nature. This refers to as 'Big Data' that is a global phenomenon. NoSQL databases are better solution for the Big Data demands. Because the users want to analyze this data together, the integration of relational and NoSQL databases becomes necessity. A vast amount of research work has been done in the multimedia area, targeting different aspects of big data analytics, such as the capture, storage, inde
42

Deshai, N., S. Venkataramana, I. Hemalatha, and G. P. S. Varma. "A Study on Big Data Hadoop Map Reduce Job Scheduling." International Journal of Engineering & Technology 7, no. 3.31 (2018): 59. http://dx.doi.org/10.14419/ijet.v7i3.31.18202.

Abstract:
A latest tera to zeta era has been created during huge volume of data sets, which keep on collected from different social networks, machine to machine devices, google, yahoo, sensors etc. called as big data. Because day by day double the data storage size, data processing power, data availability and digital world data size in zeta bytes. Apache Hadoop is latest market weapon to handle huge volume of data sets by its most popular components like hdfs and mapreduce, to achieve an efficient storage ability and efficient processing on massive volume of data sets. To design an effective algorithm
43

Bojkovic, Zoran, and Dragorad Milovanovic. "Mobile cloud analytics in Big data era." WSEAS TRANSACTIONS ON COMPUTER RESEARCH 10 (March 22, 2022): 25–28. http://dx.doi.org/10.37394/232018.2022.10.3.

Abstract:
Voluminous data are generated from a variety of users and devices and are to be stored and processed in powerful data center. As such, there is a strong demand for building a network infrastructure to gather distributed and rapidly generated data and move them to data center for knowledge discovery. Big data has received considerable attention, because it can mine new knowledge for economic growth and technical innovation. Many research efforts have been directed to big data processing due to its high volume, velocity and variety, referred to as 3V. This paper first describes challenges for bi
44

Nityash Solanki, Et al. "Challenges of Big Data Technology in Aviation Management." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 9 (2023): 4501–5. http://dx.doi.org/10.17762/ijritcc.v11i9.9956.

Abstract:
Traditional data systems fall short of processing Big Data since the high volume of information collected to create successful business opportunities requires high performance computer programs including AI tools or processors. Deepfakes fabricate recordings, videos and images in a manner that misrepresents to the receiver of the information an event that in reality never took place. Processing Big Data safely for the Aviation industry’s benefit requires dedicated legislative regimes to improve predictability in decision making at the market place. Placing reliance on Big Data technology surel
45

Liu, Pan, Yinzhao Zhou, Wenbin Xiong, et al. "Big Data-aided Study of the Physical Process of Volume Ignition." Journal of Physics: Conference Series 2381, no. 1 (2022): 012064. http://dx.doi.org/10.1088/1742-6596/2381/1/012064.

Abstract:
Volume ignition is a method of igniting a fuel as a whole by simultaneously achieving ignition conditions throughout the fuel zone. The basic criterion for ignition is that the thermonuclear energy is greater than the energy leakage at the fuel boundary, resulting in self-sustaining heating and deep combustion. Deuterium-tritium fuels are wrapped in medium to high Z media to reduce radiative leakage and achieve lower-temperature holistic ignition and non-equilibrium combustion, ultimately allowing the fuel to achieve high combustion efficiency. Volume ignition is the use of energy bal
46

Monsia, Symphorien, and Sami Faiz. "A High-Level Interactive Query Language for Big Data Analytics Based on a Functional Model." International Journal of Data Analytics 1, no. 1 (2020): 22–37. http://dx.doi.org/10.4018/ijda.2020010102.

Abstract:
Information technologies such as the internet, and social networks, produce vast amounts of data exponentially (known as Big Data) and use conventional information systems. Big Data is characterized by volume, a high rate of generation, and variety. Systems integration and data querying systems must be adapted to cope with the emergence of Big Data. The authors' interest is with the impact Big Data has on the decision-making environment, most particularly, the data querying phase. Their contribution is the development of a parallel and distributed platform, named high level query language for
47

Gabr, Menna Ibrahim, Yehia M. Helmy, and Doaa Saad Elzanfaly. "DATA QUALITY DIMENSIONS, METRICS, AND IMPROVEMENT TECHNIQUES." Future Computing and Informatics Journal 6, no. 1 (2021): 25–44. http://dx.doi.org/10.54623/fue.fcij.6.1.3.

Abstract:
Achieving high level of data quality is considered one of the most important assets for any small, medium and large size organizations. Data quality is the main hype for both practitioners and researchers who deal with traditional or big data. The level of data quality is measured through several quality dimensions. High percentage of the current studies focus on assessing and applying data quality on traditional data. As we are in the era of big data, the attention should be paid to the tremendous volume of generated and processed data in which 80% of all the generated data is unstructured. H
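Two of the dimensions such surveys usually quantify, completeness and uniqueness, reduce to simple ratios over a table. The pandas snippet below is a minimal sketch on a made-up table; it is not the metric suite defined in the paper.

```python
import pandas as pd

# Toy table with a missing value and a duplicate key, for illustration only.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "email": ["a@x.com", None, "b@x.com", "c@x.com"],
})

# Completeness: share of non-null cells per column.
completeness = df.notna().mean()

# Uniqueness: share of distinct values among the non-null cells per column.
uniqueness = df.nunique() / df.notna().sum()

print(completeness)
print(uniqueness)
```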
48

Radha, K., and B. Thirumala Rao. "A Study on Big Data Techniques and Applications." International Journal of Advances in Applied Sciences 5, no. 2 (2016): 101. http://dx.doi.org/10.11591/ijaas.v5.i2.pp101-108.

Abstract:
We are living in on-Demand Digital Universe with data spread by users and organizations at a very high rate. This data is categorized as Big Data because of its Variety, Velocity, Veracity and Volume. This data is again classified into unstructured, semi-structured and structured. Large datasets require special processing systems; it is a unique challenge for academicians and researchers. Map Reduce jobs use efficient data processing techniques which are applied in every phase of Map Reduce such as Mapping, Combining, Shuffling, Indexing, Grouping and Reducing. Big Data has essential
49

ZASKÓRSKI, Piotr, and Wojciech ZASKÓRSKI. ""Big-Data" systems in improving modern organizations." Nowoczesne Systemy Zarządzania 12, no. 2 (2017): 119–32. http://dx.doi.org/10.37055/nsz/129432.

Abstract:
An attempt has been made to specify a determinant of effectiveness of a modern organization in this article. One of the determinants is information effectiveness. Gathering various types of information has been the basis for enhancing modern operating entities. Very large information resources can be subject to multifaceted exploration and discovery of knowledge. Extracting/generating maximum amount of knowledge from systems which gather polymorphic information of very high volume (so called "Big-Data" systems) in the form of numerical data, text files, images (video) is possible due to variou
50

Zhao, Jing, Lin Li, LiXia Luo, Fei Liu, and YuGang Wang. "Analyzing and Applying Big Data Technologies Based on Industrial Internet Data." Journal of Physics: Conference Series 2665, no. 1 (2023): 012003. http://dx.doi.org/10.1088/1742-6596/2665/1/012003.

Abstract:
With the development of the industrial internet, data has gradually become the key engine driving the digital transformation of enterprises. The progress of the industrial internet is inseparable from the application of industrial data. To more effectively utilize data to empower businesses, many enterprises have put forward higher requirements for the real-time and reliability of industrial data collection. In the actual process of industrial data collection, issues such as large data volume, high real-time data processing requirements, and inconsistent data protocol standards are of