Academic literature on the topic 'Cloud-based big data analytics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Cloud-based big data analytics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Cloud-based big data analytics"

1

Msweli, Andile Precious, Tshinakaho Seaba, Victor Ntala Paledi, and Khuliso Sigama. "Technology Factors Required for Adopting Cloud-Based Big Data Analytics in South African Banking." International Journal of Science Annals 7, no. 2 (2025): 47–55. https://doi.org/10.26697/ijsa.2024.2.5.

Full text
Abstract:
<strong>Background and Aim of Study:</strong> South African banks are generally known for early technology adoption. Even so, there is a need to integrate fourth industrial revolution technologies such as big data analytics and cloud computing, collectively referred to as cloud-based big data analytics, and subsequently to consider the technology-related aspects required for adopting integrated technologies of this nature. The aim of the study is to identify technology-related factors that are necessary for adopting cloud-based big data analytics in South African banking. <strong>Material and Methods:</strong> A qualitative research approach was followed, together with an interpretivist paradigm and a single case study research strategy. Semi-structured interviews were used to collect data from eleven professionals in the Information Technology division of a South African bank. <strong>Results:</strong> In total, 35 technology factors required for adopting cloud-based big data analytics were identified and categorized into: internal cloud-based big data analytics criteria; cloud-based big data analytics capabilities or skills; cloud-based big data analytics data integrity levels; data security and readiness for adopting cloud-based big data analytics; and cloud-based big data analytics external criteria. <strong>Conclusions:</strong> The results suggest that the adoption of cloud-based big data analytics in the banking sector takes place in an outsourcing model or setting. In this structure, technology factors are not specific only to the bank concerned: the banking sector has its own technology requirements that banks are expected to adhere to, while some technology factors can only be addressed by the cloud-based big data analytics service providers. The identified factors could be used to conceptualize a cloud-based big data analytics framework in future research.
APA, Harvard, Vancouver, ISO, and other styles
2

Sabbani, Goutham. "Big Data Analytics in Cloud Computing." International Journal of Science and Research (IJSR) 13, no. 6 (2024): 359–63. http://dx.doi.org/10.21275/sr24604002336.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

C, Pradeep, and Prof Rahul Pawar. "Big Data Analytics in Cloud Environments." International Journal of Research Publication and Reviews 5, no. 3 (2024): 4240–46. http://dx.doi.org/10.55248/gengpi.5.0324.07105.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Researcher. "Cloud-Based AI and Big Data Analytics for Real-Time Business Decision-Making." International Journal of Finance (IJFIN) 36, no. 6 (2023): 96–123. https://doi.org/10.5281/zenodo.14905134.

Full text
Abstract:
<em>Technological development has arrived to illuminate and innovate traditional business processes. Offering both academic and practical contributions, this essay explores the effect of cloud-based artificial intelligence and big data analytics on business decision-making. Cloud-based AI and big data analytics are observed to support real-time business decision-making. Unlike traditional decision-support frameworks, contemporary decision-support systems draw on several fields of data analysis, such as artificial intelligence, big data analytics, advanced analytics, and business intelligence, and the innovative data analysis processes of cloud-based AI and big data analytics are transforming business processes as well. The findings are expected to generate new knowledge about the role of contemporary AI and big data analytical tools in business intelligence and to bridge the gap between AI, business intelligence, and big data analytics by investigating the effect of AI and big data analytics on business intelligence environments. They may also motivate further studies applying new AI and big data analytical techniques to business decision-making.</em> <em>Real-time decision-making has become a significant aspect of business operations in the era of digitization and the evolution of contemporary artificial intelligence, deep learning, and machine learning, yet theoretical and industry-oriented analysis of AI, big data analytics, and machine learning in the context of cloud computing is lacking. The purpose of this essay is to understand the effect of cloud-based AI and big data analytics on business decision-making; its findings may yield an innovative understanding of groundbreaking AI and data analytical techniques for business intelligence and decision-making under complex conditions.</em>
APA, Harvard, Vancouver, ISO, and other styles
5

Vistro, Daniel Mago. "IoT based Big Data Analytics for Cloud Storage Using Edge Computing." Journal of Advanced Research in Dynamical and Control Systems 12, SP7 (2020): 1594–98. http://dx.doi.org/10.5373/jardcs/v12sp7/20202262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Anugraha, P. P. Hiba Fathima K. P. "Big Data Analytics in Cloud Computing." International Journal of Scientific Research and Technology 2, no. 1 (2025): 167–75. https://doi.org/10.5281/zenodo.14637762.

Full text
Abstract:
The convergence of big data and cloud computing offers numerous advantages, including scalability, cost-effectiveness, flexibility, collaboration, and accessibility. Cloud platforms allow for seamless resource scaling, eliminating the need for heavy infrastructure investments. Paying only for utilized resources reduces upfront expenses. Cloud-based solutions provide flexibility in storage and processing capabilities, allowing for tailored adjustments as organizational needs evolve. Collaboration is fostered, enabling data sharing and teamwork among diverse users and teams. Accessibility becomes universal, harnessing the potential of big data analytics from any location with an internet connection. However, challenges such as data security and privacy, latency issues, and the cost of long-term storage and complex analytics tasks in the cloud need to be addressed. Robust security measures, efficient data management strategies, and adherence to compliance standards are necessary to ensure the safe and effective utilization of big data within cloud environments.
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Xiang Ju. "Research of Big Data Processing Platform." Applied Mechanics and Materials 484-485 (January 2014): 922–26. http://dx.doi.org/10.4028/www.scientific.net/amm.484-485.922.

Full text
Abstract:
This paper introduces the operational characteristics and challenges of the big data era, and presents the research and design of a big data analytics platform based on cloud computing, covering the platform's overall architecture, software architecture, network architecture, and the features of a unified platform programme. The paper also analyzes the competitive advantages of a unified cloud-based big data analytics programme and the role it can play in the future development of telecom operators.
APA, Harvard, Vancouver, ISO, and other styles
8

Umbu Zogara, Lukas, and Cecilia Dai Payon Binti Gabriel. "Big Data Analytics for Healthcare Applications Mobile Cloud Based." Scientific Journal of Information System 1, no. 1 (2024): 16–21. http://dx.doi.org/10.70429/sjis.v1i1.85.

Full text
Abstract:
Mobile devices are increasingly becoming an indispensable part of our daily lives, as they make it easy to perform various useful tasks. Mobile cloud computing integrates mobile and cloud computing to extend the benefits of the cloud and to overcome device limitations such as limited memory and CPU power. Big data analytics technology allows extracting value from data characterized by four Vs: volume, variety, velocity, and veracity. This paper discusses mobile cloud-based healthcare and the application of big data analytics to it, and draws conclusions about the design of healthcare systems using big data and mobile cloud technologies.
APA, Harvard, Vancouver, ISO, and other styles
9

Yilmaz, Nesim, Tuncer Demir, Safak Kaplan, and Sevilin Demirci. "Demystifying Big Data Analytics in Cloud Computing." Fusion of Multidisciplinary Research, An International Journal 1, no. 01 (2020): 25–36. https://doi.org/10.63995/dopv8398.

Full text
Abstract:
Big Data Analytics in cloud computing represents a transformative synergy, enabling the processing and analysis of vast datasets with unprecedented efficiency and scalability. The cloud provides a flexible and cost-effective infrastructure for storing, managing, and analyzing big data, addressing the limitations of traditional on-premises systems. This combination allows organizations to harness the full potential of big data, deriving actionable insights to drive decision-making and innovation. The integration of big data analytics with cloud computing leverages advanced technologies such as distributed computing, machine learning, and artificial intelligence. These technologies facilitate the extraction of meaningful patterns and trends from large, complex datasets in real time. Key cloud-based platforms, including Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform, offer a range of tools and services designed to simplify the deployment and management of big data analytics. Challenges remain in areas such as data security, privacy, and governance, which are critical for maintaining the integrity and confidentiality of sensitive information. Additionally, optimizing the performance and cost-efficiency of big data analytics in the cloud requires careful planning and management. This abstract highlights the critical role of cloud computing in advancing big data analytics, emphasizing its potential to transform industries through enhanced data-driven strategies while acknowledging the associated challenges and considerations.
APA, Harvard, Vancouver, ISO, and other styles
10

Wang, Ruoyu, Daniel Sun, Guoqiang Li, Raymond Wong, and Shiping Chen. "Pipeline provenance for cloud‐based big data analytics." Software: Practice and Experience 50, no. 5 (2020): 658–74. http://dx.doi.org/10.1002/spe.2744.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Cloud-based big data analytics"

1

Talevi, Iacopo. "Big Data Analytics and Application Deployment on Cloud Infrastructure." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14408/.

Full text
Abstract:
This dissertation describes a project that began in October 2016. It was born from a collaboration between Mr. Alessandro Bandini and me, and was developed under the supervision of Professor Gianluigi Zavattaro. The main objective was to study, and in particular to experiment with, cloud computing in general and its potential in the field of data processing. Cloud computing is a utility-oriented and Internet-centric way of delivering IT services on demand. The first chapter is a theoretical introduction to cloud computing, analyzing the main aspects, the keywords, and the technologies behind clouds, as well as the reasons for the success of this technology and its problems. After the introduction, I briefly describe the three main cloud platforms on the market. During this project we developed a simple social network; consequently, in the third chapter I analyze its development, from the initial solution built on Amazon Web Services to the steps we took to obtain the final version on Google Cloud Platform and its characteristics. The last section is dedicated to data processing: it contains an initial theoretical part describing MapReduce and Hadoop, followed by a description of our analysis. We used Google App Engine to run these computations on a large dataset. I explain the basic idea, the code, and the problems encountered.
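The MapReduce model the dissertation introduces can be illustrated with a minimal, single-process sketch. The word-count task and all data below are invented for illustration; a real deployment would run the map and reduce phases distributed across a Hadoop or App Engine cluster rather than in one process:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: group the emitted pairs by key and sum their counts.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data analytics", "cloud based big data"]
result = reduce_phase(map_phase(docs))
# result: {'big': 2, 'data': 2, 'analytics': 1, 'cloud': 1, 'based': 1}
```

The point of the split is that the map phase is embarrassingly parallel and the reduce phase only needs pairs sharing a key to land on the same worker, which is what makes the model scale to large datasets.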
APA, Harvard, Vancouver, ISO, and other styles
2

Saker, Vanessa. "Automated feature synthesis on big data using cloud computing resources." Master's thesis, University of Cape Town, 2020. http://hdl.handle.net/11427/32452.

Full text
Abstract:
The data analytics process has many time-consuming steps. Combining data that sits in a relational database warehouse into a single relation while aggregating important information in a meaningful way and preserving relationships across relations, is complex and time-consuming. This step is exceptionally important as many machine learning algorithms require a single file format as an input (e.g. supervised and unsupervised learning, feature representation and feature learning, etc.). An analyst is required to manually combine relations while generating new, more impactful information points from data during the feature synthesis phase of the feature engineering process that precedes machine learning. Furthermore, the entire process is complicated by Big Data factors such as processing power and distributed data storage. There is an open-source package, Featuretools, that uses an innovative algorithm called Deep Feature Synthesis to accelerate the feature engineering step. However, when working with Big Data, there are two major limitations. The first is the curse of modularity - Featuretools stores data in-memory to process it and thus, if data is large, it requires a processing unit with a large memory. Secondly, the package is dependent on data stored in a Pandas DataFrame. This makes the use of Featuretools with Big Data tools such as Apache Spark, a challenge. This dissertation aims to examine the viability and effectiveness of using Featuretools for feature synthesis with Big Data on the cloud computing platform, AWS. Exploring the impact of generated features is a critical first step in solving any data analytics problem. If this can be automated in a distributed Big Data environment with a reasonable investment of time and funds, data analytics exercises will benefit considerably. In this dissertation, a framework for automated feature synthesis with Big Data is proposed and an experiment conducted to examine its viability. 
Using this framework, an infrastructure was built to support the process of feature synthesis on AWS that made use of S3 storage buckets, Elastic Compute Cloud (EC2) services, and an Elastic MapReduce cluster. A dataset of 95 million customers, 34 thousand fraud cases and 5.5 million transactions across three different relations was then loaded into the distributed relational database on the platform. The infrastructure was used to show how the dataset could be prepared to represent a business problem, and Featuretools used to generate a single feature matrix suitable for inclusion in a machine learning pipeline. The results show that the approach was viable. The feature matrix produced 75 features from 12 input variables and was time efficient, with a total end-to-end run time of 3.5 hours and a cost of approximately R 814 (approximately $52). The framework can be applied to a different set of data and allows analysts to experiment on a small section of the data until a final feature set is decided; they are then able to easily scale the feature matrix to the full dataset. This ability to automate feature synthesis, iterate and scale up will save time in the analytics process while providing a richer feature set for better machine learning results.
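The feature synthesis step the dissertation automates with Featuretools can be illustrated, in miniature, with a hand-rolled sketch: aggregating a child relation (transactions) onto a parent relation (customers) to produce the single flat feature matrix that machine learning algorithms expect. All relation names, feature labels, and data values below are invented for illustration; Featuretools' Deep Feature Synthesis generates and stacks such aggregates automatically across many relations:

```python
from statistics import mean

# Toy parent and child relations; all values invented for illustration.
customers = [{"customer_id": 1, "region": "WC"},
             {"customer_id": 2, "region": "GP"}]
transactions = [{"customer_id": 1, "amount": 100.0},
                {"customer_id": 1, "amount": 50.0},
                {"customer_id": 2, "amount": 75.0}]

def synthesize_features(customers, transactions):
    # For each parent row, aggregate the matching child rows into new
    # feature columns, yielding one flat row per customer.
    matrix = []
    for c in customers:
        amounts = [t["amount"] for t in transactions
                   if t["customer_id"] == c["customer_id"]]
        matrix.append({
            **c,
            "COUNT(transactions)": len(amounts),
            "SUM(transactions.amount)": sum(amounts),
            "MEAN(transactions.amount)": mean(amounts) if amounts else 0.0,
        })
    return matrix

fm = synthesize_features(customers, transactions)
# fm[0] → {'customer_id': 1, 'region': 'WC', 'COUNT(transactions)': 2,
#          'SUM(transactions.amount)': 150.0, 'MEAN(transactions.amount)': 75.0}
```

The dissertation's contribution is doing exactly this kind of join-and-aggregate at Big Data scale, where the in-memory approach above would not fit.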
APA, Harvard, Vancouver, ISO, and other styles
3

Flatt, Taylor. "CrowdCloud: Combining Crowdsourcing with Cloud Computing for SLO Driven Big Data Analysis." OpenSIUC, 2017. https://opensiuc.lib.siu.edu/theses/2234.

Full text
Abstract:
The evolution of structured data from simple rows and columns on a spreadsheet to more complex unstructured data such as tweets, videos, voice, and others has resulted in a need for more adaptive analytical platforms. It is estimated that upwards of 80% of data on the Internet today is unstructured, and there is a drastic need for crowdsourcing platforms to perform better in the wake of this tsunami of data. We investigated the employment of a monitoring service which would allow the system to take corrective action in the event that results were trending away from meeting the accuracy, budget, and time SLOs. Initial implementation and system validation showed that taking corrective action generally leads to a better success rate in reaching the SLOs. A system which can dynamically adjust its internal parameters in order to perform better can lead to more harmonious interactions between humans and machine algorithms, and to more efficient use of resources.
APA, Harvard, Vancouver, ISO, and other styles
4

Rashid, A. N. M. Bazlur. "Cooperative co-evolution-based feature selection for big data analytics." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2428.

Full text
Abstract:
The rapid progress of modern technologies generates a massive amount of high-throughput data, called Big Data, which provides opportunities to find new insights using machine learning (ML) algorithms. Big Data consists of many features (attributes). However, irrelevant features may degrade the classification performance of ML algorithms. Feature selection (FS) is a combinatorial optimisation technique used to select a subset of relevant features that represent the dataset. For example, FS is an effective preprocessing step for anomaly detection techniques in Big Cybersecurity Datasets. Evolutionary algorithms (EAs) are widely used search strategies for feature selection. A variant of EAs, called a cooperative co-evolutionary algorithm (CCEA) or simply cooperative co-evolution (CC), which uses a divide-and-conquer approach, is a good choice for large-scale optimisation problems. The goal of this thesis is to investigate and develop three key research issues related to feature selection in Big Data and anomaly detection using feature selection in Big Cybersecurity Data. The first research problem of this thesis is to investigate and develop a feature selection framework using CCEA. The objective of feature selection is twofold: selecting a suitable subset of features (in other words, reducing the number of features to decrease computations) and improving classification accuracy. These objectives are contradictory, but can be pursued using a single objective function. Using only classification accuracy as the objective function for FS, EAs such as CCEA achieve higher accuracy even with a higher number of features. Hence, this thesis proposes a penalty-based wrapper single objective function. This function has been used to evaluate the FS process using CCEA, henceforth called Cooperative Co-Evolutionary Algorithm-Based Feature Selection (CCEAFS). Experimental analysis was performed using six widely used classifiers on six different datasets, with and without FS.
The experimental results indicate that the proposed objective function is efficient at reducing the number of features in the final feature subset without significantly reducing classification accuracy. Furthermore, the performance results have been compared with four other state-of-the-art techniques. CC decomposes a large and complex problem into several subproblems, optimises each subproblem independently, and brings the subproblems together only to build a complete solution to the problem. Existing decomposition solutions perform poorly because of limitations such as not considering feature interactions, dealing only with an even number of features, and decomposing the dataset statically. However, for real-world problems without any prior information about how the features in a dataset interact, it is difficult to find a suitable problem decomposition technique for feature selection. Hence, the second research problem of this thesis is to investigate and develop a decomposition method that can decompose Big Datasets dynamically and can ensure a high probability of grouping interacting features into the same subcomponent. Accordingly, this thesis proposes random feature grouping (RFG) with three variants. RFG has been used in the CC-based FS process, hence called Cooperative Co-Evolution-Based Feature Selection with Random Feature Grouping (CCFSRFG). Experimental analysis performed using six widely used ML classifiers on seven different datasets, with and without FS, indicates that, in most cases, the proposed CCFSRFG-1 outperforms CCEAFS and CCFSRFG-2, as well as the use of all features. Furthermore, the performance results have been compared with five other state-of-the-art techniques. Anomaly detection from Big Cybersecurity Datasets is very important; however, it is a very challenging and computationally expensive task.
Feature selection in cybersecurity datasets may improve and quantify the accuracy and scalability of both supervised and unsupervised anomaly detection techniques. The third research problem of this thesis is to investigate and develop an anomaly detection approach using feature selection that can improve anomaly detection performance while also reducing execution time. Accordingly, this thesis proposes Anomaly Detection Using Feature Selection (ADUFS) to deal with this research problem. Experiments were performed on five different benchmark cybersecurity datasets, with and without feature selection, and the performance of both supervised and unsupervised anomaly detection techniques was investigated using ADUFS. The experimental results indicate that, instead of using the original dataset, a dataset with a reduced number of features yields better performance in terms of true positive rate (TPR) and false positive rate (FPR) than the existing techniques for anomaly detection. In addition, all anomaly detection techniques require less computational time when using datasets with a suitable subset of features rather than entire datasets. Furthermore, the performance results have been compared with six other state-of-the-art techniques.
APA, Harvard, Vancouver, ISO, and other styles
5

Sellén, David. "Big Data analytics for the forest industry : A proof-of-concept built on cloud technologies." Thesis, Mittuniversitetet, Avdelningen för informations- och kommunikationssystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-28541.

Full text
Abstract:
Large amounts of data in various forms are generated at a fast pace in today's society. This is commonly referred to as "Big Data". Making use of Big Data has become increasingly important both in business and in research. The forest industry generates large amounts of data during the different processes of forest harvesting. In Sweden, forest information is sent to SDC, the information hub for the Swedish forest industry. In 2014, SDC received reports on 75.5 million m3fub from harvester and forwarder machines. These machines use a global standard called StanForD 2010 for communication and to create reports about harvested stems. The arrival of scalable cloud technologies that combine Big Data with machine learning makes it interesting to develop an application to analyze the large amounts of data produced by the forest industry. In this study, a proof-of-concept has been implemented to analyze harvest production reports from the StanForD 2010 standard. The system consists of a back-end and a front-end application and is built using cloud technologies such as Apache Spark and Hadoop. System tests have proven that the concept is able to successfully handle storage, processing and machine learning on gigabytes of HPR files. It is capable of extracting information from raw HPR data into datasets and supports a machine learning pipeline with pre-processing and K-Means clustering. The proof-of-concept has provided a code base for further development of a system that could be used to find valuable knowledge for the forest industry.
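The K-Means clustering step in the proof-of-concept's machine learning pipeline can be sketched with a minimal pure-Python version of Lloyd's algorithm. The one-dimensional toy data and starting centroids below are invented for illustration; the actual system presumably runs K-Means through Spark's machine learning libraries over datasets extracted from HPR files:

```python
def kmeans(points, centroids, iters=10):
    # Lloyd's algorithm on 1-D points: assign each point to its nearest
    # centroid, then move each centroid to the mean of its cluster.
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centroids))}
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    return centroids

# Two obvious groups of toy readings; values invented for illustration.
data = [1.0, 1.5, 0.5, 10.0, 10.5, 9.5]
print(kmeans(data, centroids=[0.0, 5.0]))  # → [1.0, 10.0]
```

Each iteration is a map (assignment) followed by a per-cluster aggregation, which is why the algorithm distributes naturally over Spark at the gigabyte scales the thesis reports.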
APA, Harvard, Vancouver, ISO, and other styles
6

Olsén, Cleas, and Gustav Lindskog. "Big Data Analytics : A potential way to Competitive Performance." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104372.

Full text
Abstract:
Big data analytics (BDA) has become an increasingly popular topic over the years amongst academics and practitioners alike. Big data, an important part of BDA, was originally defined with three Vs: volume, velocity and variety. In later years more Vs have surfaced to better accommodate current needs. The analytics side of BDA consists of different methods of analysing gathered data. Analysing data can provide insights to organisations, which in turn can give them competitive advantage and enhance their businesses. Looking into the resources needed to build big data analytic capabilities (BDAC), this thesis set out to find how Swedish organisations enable and use BDA in their businesses. The thesis also investigated whether BDA could lead to performance enhancement and competitive advantage for organisations. A theoretical framework based on previous studies was adapted and used to help answer the thesis' purpose. A qualitative approach using semi-structured interviews was deemed the most suitable. Previous studies in this field pointed to the fact that organisations may not be aware of how or why to use or enable BDA. According to the current literature, different resources need to work in conjunction with each other to create BDAC and enable BDA to be utilized. Several studies discuss challenges such as organisational culture, human skills, and the need for top management to support BDA initiatives for them to succeed. The findings from the interviews in this study indicated that, in a Swedish context, resources such as data, technical skills, and a data-driven culture, amongst others, are being used to enable BDA. Furthermore, the results showed that business process improvement is typically the first way organisations benefit from BDA, because the profit and effect of such an investment are easier and safer to calculate.
Depending on how far an organisation has come in its transformation process, it may also innovate and/or create products or services from insights made possible by BDA.
APA, Harvard, Vancouver, ISO, and other styles
7

Denadija, Feda, and David Löfgren. "Revealing the Non-technical Side of Big Data Analytics : Evidence from Born analyticals and Big intelligent firms." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-298137.

Full text
Abstract:
This study aspired to gain a more nuanced understanding of the emerging analytics technologies and the vital capabilities that ultimately drive evidence-based decision making. Big data technology is widely discussed by varying groups in society and believed to revolutionize corporate decision making. In spite of big data's promising possibilities, only a trivial fraction of firms deploying big data analytics (BDA) have gained significant benefits from their initiatives. To explain this inability we drew on prior IT literature suggesting that IT resources can only be successfully deployed when combined with organizational capabilities. We identified key theoretical components at an organizational, relational, and human level. The data collection included 20 interviews with decision makers and data scientists from four analytical leaders. Early on we distinguished the companies into two categories based on their empirical characteristics, coining the terms "Born analyticals" and "Big intelligent firms". The analysis concluded that social, non-technical elements play a crucial role in building BDA abilities. These capabilities differ among companies but can still enable BDA in different ways, indicating that organizations' history and context seem to influence how firms deploy capabilities. Some capabilities have proven to be more important than others. The individual mindset towards data is seemingly the most determining capability in building BDA ability. Varying mindsets foster different BDA environments in which other capabilities behave accordingly. Born analyticals seemed to display an environment benefitting evidence-based decisions.
APA, Harvard, Vancouver, ISO, and other styles
8

Barros, Victor Perazzolo. "Big data analytics em cloud gaming: um estudo sobre o reconhecimento de padrões de jogadores." Universidade Presbiteriana Mackenzie, 2017. http://tede.mackenzie.br/jspui/handle/tede/3405.

Full text
Abstract:
The advances in Cloud Computing and communication technologies have enabled the concept of Cloud Gaming to become a reality. Through PCs, consoles, smartphones, tablets, smart TVs and other devices, people can access and play games via data streaming, regardless of the computing power of these devices. The Internet is the fundamental means of communication between the device and the game, which is hosted and processed in an environment known as the Cloud. In the Cloud Gaming model, games are available on demand and offered at large scale to users. The players' actions and commands are sent to servers that process the information and send the result (reaction) back to the players. The volume of data processed and stored in these Cloud environments exceeds the limits of analysis and manipulation of conventional tools, but this data contains information about players' profiles, singularities, actions, behavior and patterns that can be valuable when analyzed. For a proper comprehension of this raw data, and to make it interpretable, it is necessary to use appropriate techniques and platforms for manipulating data at this scale.

These platforms belong to an ecosystem that involves the concepts of Big Data. The model known as Big Data Analytics is an effective way not only to work with these data but to understand their meaning, providing inputs for assertive analysis and predictive actions. This study seeks to understand how these technologies work and proposes a method capable of analyzing and identifying patterns in players' behavior and characteristics in a virtual environment. By knowing the patterns of different players, it is possible to group and compare information in order to optimize the user experience, increase revenue for developers, and raise the level of control over the environment to the point that players' actions can be predicted. The results presented are based on different analysis models using the Hadoop technology combined with data visualization tools and information from open data sources, applied to a dataset from the game World of Warcraft. Fraud detection, users' game patterns, inputs for churn prevention, and relations with game attractiveness elements are examples of the modeling used. In this research, it was possible to map and identify players' behavior patterns and to predict their frequency of play and their tendency to leave or stay in the game.
APA, Harvard, Vancouver, ISO, and other styles
9

Ceccarello, Matteo. "Clustering-based Algorithms for Big Data Computations." Doctoral thesis, Università degli studi di Padova, 2017. http://hdl.handle.net/11577/3424849.

Full text
Abstract:
In the age of big data, the amount of information that applications need to process often exceeds the computational capabilities of single machines. To cope with this deluge of data, new computational models have been defined. The MapReduce model allows the development of distributed algorithms targeted at large clusters, where each machine can only store a small fraction of the data. In the streaming model, a single processor processes an incoming stream of data on the fly, using only limited memory. The specific characteristics of these models, combined with the necessity of processing very large datasets, rule out in many cases the adoption of known algorithmic strategies, prompting the development of new ones. In this context clustering, the process of grouping together elements according to some proximity measure, is a valuable tool that allows building succinct summaries of the input data. In this thesis we develop novel algorithms for some fundamental problems where clustering is a key ingredient for coping with very large instances or is itself the ultimate target.

First, we consider the problem of approximating the diameter of an undirected graph, a fundamental metric in graph analytics for which the known exact algorithms are too costly to use on very large inputs. We develop a MapReduce algorithm for this problem which, for the important class of graphs of bounded doubling dimension, features a polylogarithmic approximation guarantee, uses linear memory, and executes in a number of parallel rounds that can be made sublinear in the input graph's diameter. To the best of our knowledge, ours is the first parallel algorithm with these guarantees. Our algorithm leverages a novel clustering primitive to extract a concise summary of the input graph on which to compute the diameter approximation. We complement our theoretical analysis with an extensive experimental evaluation, finding that our algorithm features an approximation quality significantly better than the theoretical upper bound, together with high scalability.

Next, we consider the problem of clustering uncertain graphs, that is, graphs where each edge has a probability of existence specified as part of the input. These graphs, whose applications range from biology to privacy in social networks, have an exponential number of possible deterministic realizations, which imposes a big-data perspective. We develop the first algorithms for clustering uncertain graphs with provable approximation guarantees, aiming to maximize the probability that nodes are connected to the centers of their assigned clusters. A preliminary suite of experiments provides evidence that the quality of the clusterings returned by our algorithms compares very favorably with previous approaches that offer no theoretical guarantees.

Finally, we deal with the problem of diversity maximization, a fundamental primitive in big data analytics: given a set of points in a metric space, we are asked to provide a small subset maximizing some notion of diversity. We provide efficient streaming and MapReduce algorithms with approximation guarantees that can be made arbitrarily close to those of the best available sequential algorithms. The algorithms crucially rely on a k-center clustering primitive to extract a succinct summary of the data, and their analysis is expressed in terms of the doubling dimension of the input point set. Moreover, unlike previously known algorithms, ours feature an interesting tradeoff between approximation quality and memory requirements. Our theoretical findings are supported by the first experimental analysis of diversity maximization algorithms in streaming and MapReduce, which highlights the tradeoffs of our algorithms on both real-world and synthetic datasets. Moreover, our algorithms exhibit good scalability and significantly better performance than the approaches proposed in previous works.
APA, Harvard, Vancouver, ISO, and other styles
10

Huai, Yin. "Building High Performance Data Analytics Systems based on Scale-out Models." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1427553721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Cloud-based big data analytics"

1

Trovati, Marcello, Richard Hill, Ashiq Anjum, Shao Ying Zhu, and Lu Liu, eds. Big-Data Analytics and Cloud Computing. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25313-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Das, Himansu, Rabindra K. Barik, Harishchandra Dubey, and Diptendu Sinha Roy, eds. Cloud Computing for Geospatial Big Data Analytics. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-03359-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sarma, Hiren Kumar Deva, Valentina Emilia Balas, Bhaskar Bhuyan, and Nitul Dutta, eds. Contemporary Issues in Communication, Cloud and Big Data Analytics. Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-4244-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hosseinian-Far, Amin, Muthu Ramachandran, and Dilshad Sarwar, eds. Strategic Engineering for Cloud Computing and Big Data Analytics. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-52491-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Upadhyay, Nitin. CABology: Value of Cloud, Analytics and Big Data Trio Wave. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-10-8675-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kumar Rana, Arun, Sudeshna Chakraborty, Pallavi Goel, Sumit Kumar Rana, and Ahmed A. Elngar. Internet of Things and Big Data Analytics-Based Manufacturing. CRC Press, 2024. http://dx.doi.org/10.1201/9781032673479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Das, Sanjoy, Ram Shringar Rao, Indrani Das, Vishal Jain, and Nanhay Singh. Cloud Computing Enabled Big-Data Analytics in Wireless Ad-hoc Networks. CRC Press, 2022. http://dx.doi.org/10.1201/9781003206453.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pani, Subhendu Kumar, Somanath Tripathy, Talal Ashraf Butt, Sumit Kundu, and George Jandieri. Applications of Machine Learning in Big-Data Analytics and Cloud Computing. River Publishers, 2022. http://dx.doi.org/10.1201/9781003337218.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lotfi, Chaimaa, Swetha Srinivasan, Myriam Ertz, and Imen Latrous. Exploring the Aggregated and Granular Impact of Big Data Analytics on a Firm’s Performance Through Web Scraping-Based Methodology. SAGE Publications Ltd, 2023. http://dx.doi.org/10.4135/9781529667394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

FGBOU, VO. Digital analytics and financial security control of socially significant organizations. INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1863937.

Full text
Abstract:
The monograph is devoted to the formation of a concept of digital financial security analytics. It discusses the use of the digital environment and big data analysis tools in the system of monitoring sectoral risks and the activities of socially significant organizations from the standpoint of ESG strategy. Financial security is considered an aggregated result of the action of economic, environmental and social factors in a rapidly changing economy.

The monograph covers several key areas that together make it possible to digitalize and improve the effectiveness of monitoring the activities of socially significant organizations: the development of the conceptual apparatus of socially significant business; analytical tools for assessing and forecasting financial security risks based on the concept of sustainable development; and the standardization of risk management.

Intended for students, postgraduates, and teachers, as well as for the professional development of managerial personnel in business and government structures.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Cloud-based big data analytics"

1

Matter, Ulrich. "Cloud Computing." In Big Data Analytics. Chapman and Hall/CRC, 2023. http://dx.doi.org/10.1201/9781003378822-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Oppitz, Marcus, and Peter Tomsu. "Big Data Analytics." In Inventing the Cloud Century. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-61161-7_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Demirbaga, Ümit, Gagangeet Singh Aujla, Anish Jindal, and Oğuzhan Kalyon. "Cloud Computing for Big Data Analytics." In Big Data Analytics. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-55639-5_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Casturi, Rao, and Rajshekhar Sunderraman. "Distributed Financial Calculation Framework on Cloud Computing Environment." In Big Data Analytics. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04780-1_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Lihui, and Xi Vincent Wang. "Big Data Analytics for Scheduling and Machining." In Cloud-Based Cyber-Physical Systems in Manufacturing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67693-7_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sanjeev, B. S., and Dheeraj Chitara. "Big Data over Cloud: Enabling Drug Design Under Cellular Environment." In Big Data Analytics. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-93620-4_20.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gupta, Rajeev, Himanshu Gupta, and Mukesh Mohania. "Cloud Computing and Big Data Analytics: What Is New from Databases Perspective?" In Big Data Analytics. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-35542-4_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Nascimento, Dimas C., Carlos Eduardo Pires, and Demetrio Mestre. "Data Quality Monitoring of Cloud Databases Based on Data Quality SLAs." In Big-Data Analytics and Cloud Computing. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-25313-8_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jugulum, Rajesh, David J. Fogarty, Chris Heien, and Surya Putchala. "Big Data and Cloud Solutions." In Big Data Management and Analytics. CRC Press, 2025. https://doi.org/10.1201/9781003190325-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mourya, Ashish Kumar, Shafqat-Ul-Ahsaan, and Sheikh Mohammad Idrees. "Cloud Computing-Based Approach for Accessing Electronic Health Record for Healthcare Sector." In Microservices in Big Data Analytics. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-0128-9_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Cloud-based big data analytics"

1

Gupta, Shubham, Munikrishnaiah Sundararamaiah, and Geeta Geeta. "Leveraging Cloud-Native Data Engineering for Big Data Analytics." In 2025 3rd International Conference on Advancement in Computation & Computer Technologies (InCACCT). IEEE, 2025. https://doi.org/10.1109/incacct65424.2025.11011292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhou, Kun, Xinde Yi, Yaxiong Zhang, and Zhiqiang Wu. "A DDA-Based Cloud Computing System." In 2025 10th International Conference on Cloud Computing and Big Data Analytics (ICCCBDA). IEEE, 2025. https://doi.org/10.1109/icccbda64898.2025.11030504.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ordonez, Carlos, and Wojciech Macyna. "Optimizing Energy Consumed by Analytics in the Cloud." In 2024 IEEE International Conference on Big Data (BigData). IEEE, 2024. https://doi.org/10.1109/bigdata62323.2024.10825559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tiwari, Trishita, Ata Turk, Alina Oprea, Katzalin Olcoz, and Ayse K. Coskun. "User-profile-based analytics for detecting cloud security breaches." In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 2017. http://dx.doi.org/10.1109/bigdata.2017.8258494.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Manekar, Amit Kumar, and G. Pradeepini. "Cloud Based Big Data Analytics a Review." In 2015 International Conference on Computational Intelligence and Communication Networks (CICN). IEEE, 2015. http://dx.doi.org/10.1109/cicn.2015.160.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mantzoukas, Konstantinos, Christos Kloukinas, and George Spanoudakis. "Monitoring Data Integrity in Big Data Analytics Services." In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD). IEEE, 2018. http://dx.doi.org/10.1109/cloud.2018.00132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Babuji, Yadu N., Kyle Chard, Aaron Gerow, and Eamon Duede. "Cloud Kotta: Enabling secure and scalable data analytics in the cloud." In 2016 IEEE International Conference on Big Data (Big Data). IEEE, 2016. http://dx.doi.org/10.1109/bigdata.2016.7840616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wu, Dazhong, Connor Jennings, Janis Terpenny, and Soundar Kumara. "Cloud-based machine learning for predictive analytics: Tool wear prediction in milling." In 2016 IEEE International Conference on Big Data (Big Data). IEEE, 2016. http://dx.doi.org/10.1109/bigdata.2016.7840831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gehrmann, Christian, and Martin Gunnarsson. "An Identity Privacy Preserving IoT Data Protection Scheme for Cloud Based Analytics." In 2019 IEEE International Conference on Big Data (Big Data). IEEE, 2019. http://dx.doi.org/10.1109/bigdata47090.2019.9006017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Alrehaili, Ghadeer, Najla Galam, Rawan Alawad, and Lamya Albraheem. "Cloud-Based Big Data Analytics on IoT Applications." In 2023 International Conference on IT Innovation and Knowledge Discovery (ITIKD). IEEE, 2023. http://dx.doi.org/10.1109/itikd56332.2023.10100150.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Cloud-based big data analytics"

1

Mazorchuk, Mariia S., Tetyana S. Vakulenko, Anna O. Bychko, Olena H. Kuzminska, and Oleksandr V. Prokhorov. Cloud technologies and learning analytics: web application for PISA results analysis and visualization. [б. в.], 2021. http://dx.doi.org/10.31812/123456789/4451.

Full text
Abstract:
This article analyzes ways to apply Learning Analytics, Cloud Technologies, and Big Data in the field of education at the international level. The paper provides examples of international analytical research and the cloud technologies used to process the results of that research. It considers the PISA research methodology and related tools, including the IDB Analyzer application, the free intsvy package for the R environment for processing statistical data, and the cloud-based web application PISA Data Explorer. The paper justifies the necessity of creating a stand-alone web application that supports Ukrainian localization and provides Ukrainian researchers with rapid access to well-structured PISA data. In particular, such an application should provide data across the factorial features and indicators applied at the country level and demonstrate the Ukrainian indicators compared with other countries' results. The paper includes a description of the application's core functionality, architecture, and the technologies used for development. The proposed solution leverages the shiny package available in the R environment, which allows implementing both the UI and server sides of the application. The technical implementation is a proven solution that simplifies access to PISA data for Ukrainian researchers and helps them utilize the calculation results on the key features without having to apply separate tools for processing statistical data.
APA, Harvard, Vancouver, ISO, and other styles
2

Cimene, Dr Francis Thaise A. Emerging Technological Trends and Business Process Management: Preparing the Philippines for the Future. Asian Productivity Organization, 2024. https://doi.org/10.61145/dktv2301.

Full text
Abstract:
The Philippine IT-BPM sector plays a vital role in driving economic growth and global competitiveness. This mini-report highlights how emerging technologies such as cloud computing, IoT, and big data analytics are transforming traditional business processes. Grounded in endogenous growth theory, the report emphasizes the impact of innovation and human capital on productivity. Policy recommendations are provided to bolster the nation’s position as a leading outsourcing hub and prepare for future technological advancements.
APA, Harvard, Vancouver, ISO, and other styles
3

Guicheney, William, Tinashe Zimani, Hope Kyarisiima, and Louisa Tomar. Big Data in the Public Sector: Selected Applications and Lessons Learned. Inter-American Development Bank, 2016. http://dx.doi.org/10.18235/0007024.

Full text
Abstract:
This paper analyzes different ways in which big data can be leveraged to improve the efficiency and effectiveness of government. It describes five cases where massive and diverse sets of information are gathered, processed, and analyzed in three different policy areas: smart cities, taxation, and citizen security. The cases, compiled from extensive desk research and interviews with leading academics and practitioners in the field of data analytics, have been analyzed from the perspective of public servants interested in big data and thus address both the technical and the institutional aspects of the initiatives. Based on the case studies, a policy guide was built to orient public servants in Latin America and the Caribbean in the implementation of big data initiatives and the promotion of a data ecosystem. The guide covers aspects such as leadership, governance arrangements, regulatory frameworks, data sharing, and privacy, as well as considerations for storing, processing, analyzing, and interpreting data.
APA, Harvard, Vancouver, ISO, and other styles
4

van der Sloot, Bart. The Quality of Life: Protecting Non-personal Interests and Non-personal Data in the Age of Big Data. Universitätsbibliothek J. C. Senckenberg, Frankfurt am Main, 2021. http://dx.doi.org/10.21248/gups.64579.

Full text
Abstract:
Under the current legal paradigm, the rights to privacy and data protection provide natural persons with subjective rights to protect their private interests, such as those related to human dignity, individual autonomy and personal freedom. In principle, when data processing is based on non-personal or aggregated data, or when such data processes have an impact on societal rather than individual interests, citizens cannot rely on these rights. Although this legal paradigm has worked well for decades, it is increasingly put under pressure because Big Data processes are typically based on indiscriminate rather than targeted data collection, because the high volumes of data are processed on an aggregated rather than a personal level, and because the policies and decisions based on the statistical correlations found through algorithmic analytics are mostly addressed at large groups or society as a whole rather than specific individuals. This means that large parts of the data-driven environment are currently left unregulated and that individuals are often unable to rely on their fundamental rights when addressing the more systemic effects of Big Data processes. This article discusses how this tension might be relieved by turning to the notion of ‘quality of life’, which has the potential to become the new standard for the European Court of Human Rights (ECtHR) when dealing with privacy-related cases.
APA, Harvard, Vancouver, ISO, and other styles
5

Mohanty, Subhasish M., Bryan J. Jagielo, William I. Iverson, et al. Online stress corrosion crack and fatigue usages factor monitoring and prognostics in light water reactor components: Probabilistic modeling, system identification and data fusion based big data analytics approach. Office of Scientific and Technical Information (OSTI), 2014. http://dx.doi.org/10.2172/1168230.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Dhulipala, Sravya Sree. Cloud-based data analytics for assessing retrofit needs of residential buildings in rural Iowa. Iowa State University, 2023. http://dx.doi.org/10.31274/cc-20240624-372.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nasr, Elhami, Tariq Shehab, Nigel Blampied, and Vinit Kanani. Estimating Models for Engineering Costs on the State Highway Operation and Protection Program (SHOPP) Portfolio of Projects. Mineta Transportation Institute, 2024. http://dx.doi.org/10.31979/mti.2024.2365.

Full text
Abstract:
The State Highway Operation and Protection Program (SHOPP) is crucial for maintaining California’s 15,000-mile state highway system, which includes projects like pavement rehabilitation, bridge repair, safety enhancements, and traffic management systems. Administered by Caltrans, SHOPP aims to preserve highway efficiency and safety, supporting economic growth and public safety. This research aimed to develop robust cost-estimating models to improve budgeting and financial planning, aiding Caltrans, the California Transportation Commission (CTC), and the Legislature. The research team collected and refined comprehensive data from Caltrans project expenditures from 1983 to 2021, ensuring a high-quality dataset. Subject matter experts validated the data, enhancing its reliability. Two models were developed: a statistical model using exponential regression to account for non-linear cost growth, and an AI model employing neural networks to handle complex relationships in the data. Model performance was evaluated based on accuracy and reliability through repeated testing and validation. Key findings indicated that the new models significantly improved the precision of cost forecasts, reducing the variance between predicted and actual project costs. This advancement minimizes budget overruns and enhances resource allocation efficiency. Additionally, leveraging historical data with current market trends refined the models’ predictive power, boosting stakeholder confidence in project budgeting and financial planning. The study’s innovative approach, integrating machine learning and big data analytics, transforms traditional estimation practices and serves as a reference for other state highway programs. Continuous improvement and broader application of these models are recommended to further enhance cost estimation accuracy and support informed decision-making in transportation infrastructure management.
APA, Harvard, Vancouver, ISO, and other styles
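The abstract above mentions a statistical model that uses exponential regression to capture non-linear cost growth. As a rough illustration of that general technique (not the authors' actual model — the data below are invented and the real SHOPP dataset and model specification are not reproduced here), an exponential fit cost = a·exp(b·t) can be obtained by log-linearizing and solving ordinary least squares:

```python
import numpy as np

# Toy illustration of exponential regression for cost growth.
# All numbers are hypothetical; they are NOT from the SHOPP dataset.
years = np.array([0, 1, 2, 3, 4, 5], dtype=float)   # years since a base year
costs = np.array([1.0, 1.3, 1.7, 2.2, 2.9, 3.8])    # project cost (e.g., $M)

# Model: cost = a * exp(b * t). Taking logs gives a linear relation,
# log(cost) = log(a) + b * t, which polyfit solves by least squares.
b, log_a = np.polyfit(years, np.log(costs), 1)
a = np.exp(log_a)

predicted = a * np.exp(b * years)   # fitted cost curve
```

The log transform turns a multiplicative growth process into a straight-line fit, which is the usual motivation for exponential regression over plain linear regression when costs compound over time.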
8

Saz-Carranza, Angel, Oscar Fernández, Marie Vandendriessche, Javier Franco, and Núria Agell. Citizen Perceptions on EU Security and Defence Integration: A Big Data-based Analysis. EsadeGeo. Center for Global Economy and Geopolitics, 2022. https://doi.org/10.56269/202209/asc.

Full text
Abstract:
As European integration has advanced, public opinion on the EU has increasingly come into the spotlight. Yet when it comes to the EU’s Common Security and Foreign Policy (CFSP) and its Common Security and Defence Policy (CSDP), research into acceptability of integration is still “in its infancy” (Biedenkopf et al., 2021, p. 339). Building on an analytical framework of acceptability developed by Michaels & Kissack (2021), this paper sets out to investigate public beliefs and perceptions through big data-based analysis of security-related news published around the world. Our research design complements more traditional measurements of citizens’ perceptions such as opinion surveys and contributes to the growing body of literature on public opinion and acceptability in the EU. Basing ourselves on all web-based news published on security-related matters from 2017 to 2022, we first examine how public opinion in the EU and its Member States varies at key moments related to the EU’s CSDP, including reforms and mission launches. Our results reveal broad and steady acceptability among the public for EU efforts in defence and security, although attention to practical details related to EU defence and security operations and policies is low. We subsequently analyse in more detail the crisis moment of the 2022 Russian invasion of Ukraine to understand its effects on public perceptions of the EU’s role in security and defence. We find that the Russian invasion of Ukraine is a watershed moment, but rather than overturning existing trends in acceptability among the public, the invasion has accelerated them. In Member States where historical changes have been made to security and defence-related policies following the invasion, our data furthermore shows that the public was largely supportive of the EU’s role in security and defence even prior to the invasion, and that acceptability of CSDP can go hand in hand with NATO membership.
APA, Harvard, Vancouver, ISO, and other styles
9

Papadakis, Stamatios, Arnold Yu. Kiv, Hennadiy M. Kravtsov, et al. Revolutionizing education: using computer simulation and cloud-based smart technology to facilitate successful open learning. Kryvyi Rih State Pedagogical University, 2023. http://dx.doi.org/10.31812/123456789/7375.

Full text
Abstract:
The article presents the proceedings of two workshops: Cloud-based Smart Technologies for Open Education Workshop (CSTOE 2022) and Illia O. Teplytskyi Workshop on Computer Simulation in Education (CoSinE 2022) held in Kyiv, Ukraine, on December 22, 2022. The CoSinE workshop focuses on computer simulation in education, including topics such as computer simulation in STEM education, AI in education, and modeling systems in education. The CSTOE workshop deals with cloud-based learning resources, platforms, and infrastructures, with topics including personalized learning and research environment design, big data and smart data in open education and research, machine learning for open education and research, and more. The article includes a summary of successful cases and provides directions for future research in each workshop’s respective topics of interest. The proceedings consist of several peer-reviewed papers that present a state-of-the-art overview and provide guidelines for future research. The joint program committee consisted of members from universities and research institutions worldwide.
APA, Harvard, Vancouver, ISO, and other styles
10

Nalesso, Mauro, and Pedro Coli. Step by Step Guide: Hydro-BID Manual. Inter-American Development Bank, 2017. http://dx.doi.org/10.18235/0007997.

Full text
Abstract:
The following manual has been prepared to facilitate learning how to use the Hydro-BID model and the Analytical Hydrographic Database for Latin America and the Caribbean (LAC-AHD). The instructions below are supported by the material distributed in Hydro-BID’s installation package, which is based on the simplified case study of a river basin in the state of Pernambuco, Brazil. By following the instructions you should be able to understand how to set up a simulation in Hydro-BID, how to interpolate climate data, how to calibrate the model, and how to visualize the results obtained. The technical information relating to the model and the LAC-AHD database can be obtained by downloading the technical notes from our website www.hydrobidlac.org.
APA, Harvard, Vancouver, ISO, and other styles