Academic literature on the topic 'Sqoon'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sqoon.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Sqoon"

1

Porter, Nicholas. "Resolving the squoon." Physics World 33, no. 12 (February 1, 2021): 52. http://dx.doi.org/10.1088/2058-7058/33/12/35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chen, Liu, Junghyun Ko, and Jeongmo Yeo. "Analysis of the Influence Factors of Data Loading Performance Using Apache Sqoop." KIPS Transactions on Software and Data Engineering 4, no. 2 (February 28, 2015): 77–82. http://dx.doi.org/10.3745/ktsde.2015.4.2.77.

3

Bodepudi, Hariteja. "Data Transfer Between RDBMS and HDFS By Using the Spark Framework in Sqoop for Better Performance." International Journal of Computer Trends and Technology 69, no. 3 (March 25, 2021): 10–13. http://dx.doi.org/10.14445/22312803/ijctt-v69i3p103.

4

Yue, Hang. "Unstructured Healthcare Data Archiving and Retrieval Using Hadoop and Drill." International Journal of Big Data and Analytics in Healthcare 3, no. 2 (July 2018): 28–44. http://dx.doi.org/10.4018/ijbdah.2018070103.

Abstract:
A healthcare hybrid Hadoop ecosystem is analyzed for unstructured healthcare data archives. This ecosystem is composed of components such as Pig, Hive, Sqoop, ZooKeeper, the Hadoop Distributed File System (HDFS), MapReduce, and HBase. In addition, Apache Drill is applied for unstructured healthcare data retrieval. This article discusses the combination of Hadoop and Drill for data-analysis applications. Based on an analysis of the Hadoop components (including HBase design) and case studies of Drill query design for different kinds of unstructured healthcare data, the Hadoop ecosystem and Drill prove to be valid tools for integrating and accessing voluminous, complex healthcare data. They can improve healthcare systems, achieve savings on patient-care costs, optimize the healthcare supply chain, and infer useful knowledge from noisy and heterogeneous healthcare data sources.
5

Chen, Changai, and Shan Jiang. "Research of the Big Data Platform and the Traditional Data Acquisition and Transmission based on Sqoop Technology." Open Automation and Control Systems Journal 7, no. 1 (September 14, 2015): 1174–80. http://dx.doi.org/10.2174/1874444301507011174.

6

Xu, Zhihai, Donglin Shi, and Zhiwei Tu. "Research on Diagnostic Information of Smart Medical Care Based on Big Data." Journal of Healthcare Engineering 2021 (June 2, 2021): 1–10. http://dx.doi.org/10.1155/2021/9977358.

Abstract:
The medical field has gradually become intelligent, and research on intelligent medical diagnosis information has received extensive attention. In response to this situation, this paper proposes a Hadoop-based medical big-data processing system. The system first realizes an ETL module based on Sqoop and a transmission function for unstructured data, then realizes distributed storage management based on HDFS. Finally, a MapReduce algorithm with variable key values is proposed in the data statistical-analysis module. Simulation experiments on the system modules and each algorithm show that a traditional non-distributed big-data acquisition module handling the same scale of data consumes more than 3200 s, with a transmission time exceeding 3000 s, whereas the smart medical care system under the 6G protocol consumes 150 s with a transmission time of 146 s, reducing the time to about one tenth of the original. The research on intelligent medical diagnosis information based on big data thus shows good rationality and feasibility.
7

Chen, Hongsong, and Zhongchuan Fu. "Hadoop-Based Healthcare Information System Design and Wireless Security Communication Implementation." Mobile Information Systems 2015 (2015): 1–9. http://dx.doi.org/10.1155/2015/852173.

Abstract:
Human health information from a healthcare system can provide important diagnosis data and reference material to doctors. However, continuous monitoring and secure storage of human health data pose challenges for personal privacy and big-data storage. To build a secure and efficient healthcare application, a Hadoop-based healthcare security communication system is proposed. In a wireless biosensor network, authentication and key transfer should be lightweight, so an ECC (Elliptic Curve Cryptography)-based lightweight digital signature and key-transmission method is proposed to provide secure wireless communication in the healthcare information system. Sunspot wireless sensor nodes are used to build the healthcare secure communication network; wireless nodes and the base station are assigned different tasks to achieve the secure-communication goal. A MySQL database stores the Sunspot security entity table and measure entity table, while Hadoop is used to back up and audit the Sunspot security entity table. The Sqoop tool imports/exports data between the MySQL database and HDFS (Hadoop Distributed File System), and Ganglia monitors and measures the performance of the Hadoop cluster. Simulation results show that the Hadoop-based healthcare architecture and wireless security communication method are highly effective for building a wireless healthcare information system.
8

Syam Prasad, Gudapati, P. Rajesh, and Sk Wasim Akram. "A Unified Frame Work to Integrate Hadoop and IOT to Resolve the Issues of Storage, Processing with Leveraging Capacity of Analytics." International Journal of Engineering & Technology 7, no. 2.32 (May 31, 2018): 147. http://dx.doi.org/10.14419/ijet.v7i2.32.15390.

Abstract:
The new trend in research and real-time applications is the Internet of Things (IoT). The functional benefits of IoT range from smart houses to smart cities; its main purpose is to integrate various devices logically and to let the devices interact without human intervention. The current discussion mainly focuses on leveraging the capacity of analytics in IoT and on resolving the storage issues of the bulk data that IoT generates. The proposed idea uses the Hadoop platform to store the data and performs analytics on that data for better utilization of IoT communications. Its importance is explained with real-time scenarios where there is a perfect blend of the Hadoop platform and IoT. To store the various categories of data, the Hadoop Distributed File System (HDFS) can be used, and to ingest data from external platforms we can make use of Sqoop or Flume. The data available in HDFS can be processed using the MapReduce (MR) technique; once the data is in HDFS, analytics can be performed with Hive, Pig, or R in the context of machine learning or data mining. The outcome of the proposed idea is a unified framework which accommodates the integration of Hadoop and IoT, provides storage to handle bulk data, processes the stored data, and applies analytics so as to effectively serve various stakeholders.
9

Cholissodin, Imam, Diajeng Sekar Seruni, Junda Alfiah Zulqornain, Audi Nuermey Hanafi, Afwan Ghofur, Mikhael Alexander, and Muhammad Ismail Hasan. "Development of Big Data App for Classification based on Map Reduce of Naive Bayes with or without Web and Mobile Interface by RESTful API Using Hadoop and Spark." Journal of Information Technology and Computer Science 5, no. 3 (December 31, 2020): 302. http://dx.doi.org/10.25126/jitecs.202053233.

Abstract:
Big Data App is a framework that we developed based on our previous project research and have uploaded to GitHub; it provides a lightweight serverless setup on both Windows and Linux under the name EdUBig, an open-source Hadoop distribution. This study focuses on solving the difficulties of building the frontend and backend of a Big Data application, which by default only runs scripts through consoles in the terminal. This is quite a tribulation for end users once the Big Data application has been released and mass-produced for general users, and it also complicates testing the performance of the MapReduce Naive Bayes algorithm on several datasets. To address these problems, we created the Big Data App framework to make it easier for end users, especially developers, to build a Big Data application: the frontend integrates a Web App built with the Django framework and a native Mobile App, while the backend uses Django to communicate directly with scripts for Hadoop batch or streaming processing and Spark Streaming, as well as scripts for Pig, Hive, WebHDFS, Sqoop, Oozie, etc., all of which can be produced extremely quickly with reliable results. The test results show a very significant improvement in the ease of data-computation processing for end users, with a highest classification accuracy of 88.3576%. Keywords: big data, map reduce of naive bayes, serverless, web and mobile app, restful api, django framework
10

"Sqoop usage in Hadoop Distributed File System and Observations to Handle Common Errors." International Journal of Recent Technology and Engineering 9, no. 4 (November 30, 2020): 452–54. http://dx.doi.org/10.35940/ijrte.d4980.119420.

Abstract:
The Hadoop framework provides a way of storing and processing huge amounts of data. Social-media and e-commerce companies like Facebook, Twitter, and Amazon use Hadoop ecosystem tools to store data in the Hadoop Distributed File System and to process it with MapReduce (MR). The current work describes the usage of Sqoop in the process of importing to and exporting from HDFS, covering the various import/export commands the Sqoop tool supports in the Hadoop ecosystem. The importance of the work lies in highlighting the common errors encountered while installing and working with Sqoop, which many developers and researchers use to perform the import/export process and to handle source data in relational format. The work presents the connectivity between MySQL and Sqoop and the usage of various commands along with their results; for each command, the possible errors encountered and the corresponding solutions are given, together with the common configuration settings to follow so as to handle Sqoop without errors.
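The MySQL-to-HDFS import/export workflow this abstract describes can be illustrated with a minimal sketch. The connection details below (host `dbhost`, database `corp`, tables `employees` and `employees_backup`, directory `/data/employees`) are hypothetical examples, not taken from the paper; the flags shown are standard Sqoop 1 command-line options, and running them requires a live Hadoop cluster and MySQL server.

```shell
# Import a MySQL table into HDFS (hypothetical connection details).
sqoop import \
  --connect jdbc:mysql://dbhost/corp \
  --username analyst -P \
  --table employees \
  --target-dir /data/employees \
  --num-mappers 4

# Export the HDFS data back into a (pre-created) MySQL table.
sqoop export \
  --connect jdbc:mysql://dbhost/corp \
  --username analyst -P \
  --table employees_backup \
  --export-dir /data/employees
```

A common error class the abstract alludes to is connectivity: `--connect` fails unless the MySQL JDBC driver JAR is on Sqoop's classpath and the database host accepts remote connections.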

Dissertations / Theses on the topic "Sqoon"

1

Kiška, Vladislav. "Integrace Big Data a datového skladu [Integration of Big Data and a Data Warehouse]." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359181.

Abstract:
This master's thesis deals with the problem of data integration between a Big Data platform and an enterprise data warehouse. The main goal is to create a complete transfer system for moving data from a data warehouse to this platform using a suitable tool; the system should also store and manage metadata about previous transfers. The theoretical part describes the concepts of Big Data, gives a brief introduction to their history, and presents the factors which led to the need for this new approach. Subsequent chapters describe the main principles and attributes of these technologies and discuss the benefits of implementing them within an enterprise. The thesis also describes the technologies known as Business Intelligence, their typical use cases, and their relation to Big Data, and a minor chapter presents the main components of the Hadoop system and the most popular related applications. The practical part consists of the implementation of a system to execute and manage transfers from a traditional relational database, here representing a data warehouse, to a cluster of a few computers running Hadoop. This part also includes a summary of the applications most commonly used to move data into Hadoop and the design of a database metadata schema used to manage these transfers and to store transfer metadata.
2

Brotánek, Jan. "Apache Hadoop jako analytická platforma [Apache Hadoop as an Analytical Platform]." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-358801.

Abstract:
This diploma thesis focuses on integrating the Hadoop platform into a current data-warehouse architecture. In the theoretical part, the properties of Big Data are described together with their processing methods and models; the Hadoop framework, its components, and its distributions are discussed, as are the components which enable end users, developers, and analysts to access a Hadoop cluster. The practical part presents a case study of batch data extraction from a current data warehouse on the Oracle platform with the aid of the Sqoop tool, transformation of the data in the relational structures of the Hive component, and uploading it back to the original source. The compression of data and the efficiency of queries depending on various storage formats are also discussed, and the quality and consistency of the manipulated data are checked during all phases of the process. A portion of the practical part discusses ways of storing and capturing stream data: the Flume tool is used to capture stream data, which is then transformed in Pig. The purpose of implementing the process is to move part of the data and its processing from the current data warehouse to the Hadoop cluster; therefore, the integration of the current data warehouse with the Hortonworks Data Platform and its components was designed.

Books on the topic "Sqoon"

1

Inc, Game Counselor. Game Counselor's Answer Book for Nintendo Players. Redmond, USA: Microsoft Pr, 1991.

Find full text
2

Apache Sqoop Cookbook. O'Reilly Media, Inc, USA, 2013.

3

Ting, Kathleen, and Jarek Jarcec Cecho. Apache Sqoop Cookbook. O'Reilly Media, Incorporated, 2013.

4

Inc, Game Counsellor, ed. The Game Counsellor's answer book for Nintendo Game players: Hundreds of questions - and answers - about more than 250 popular Nintendo Games. Redmond, Washington: Microsoft Press, 1991.


Book chapters on the topic "Sqoon"

1

Vohra, Deepak. "Apache Sqoop." In Practical Hadoop Ecosystem, 261–86. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-2199-0_5.

2

Vohra, Deepak. "Using Apache Sqoop." In Pro Docker, 151–83. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1830-3_11.

3

Lakhe, Bhushan. "Implementing SQOOP and Flume-based Data Transfers." In Practical Hadoop Migration, 189–205. Berkeley, CA: Apress, 2016. http://dx.doi.org/10.1007/978-1-4842-1287-5_8.

4

Umachandran, Krishnan, and Debra Sharon Ferdinand-James. "Affordances of Data Science in Agriculture, Manufacturing, and Education." In Privacy and Security Policies in Big Data, 14–40. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-2486-1.ch002.

Abstract:
Continued technological advancements of the 21st century afford massive data generation in sectors of our economy, including the domains of agriculture, manufacturing, and education. However, harnessing such large-scale data with modern technologies for effective decision-making appears to be an evolving science that requires knowledge of Big Data management and analytics. Big data in agriculture, manufacturing, and education are varied, including voluminous text, images, and graphs. Applying Big Data science techniques (e.g., functional algorithms) for extracting intelligence from data affords decision makers quick responses to productivity, market-resilience, and student-enrollment challenges in today's unpredictable markets. This chapter serves to employ data science for potential solutions to Big Data applications in the sectors of agriculture, manufacturing, and, to a lesser extent, education, using modern technological tools such as Hadoop, Hive, Sqoop, and MongoDB.
5

Umachandran, Krishnan, and Debra Sharon Ferdinand-James. "Affordances of Data Science in Agriculture, Manufacturing, and Education." In Web Services, 953–78. IGI Global, 2019. http://dx.doi.org/10.4018/978-1-5225-7501-6.ch052.

