
Journal articles on the topic 'SQL-to-NoSQL'


Consult the top 50 journal articles for your research on the topic 'SQL-to-NoSQL.'


1

Laksmita, Nadea Cipta, Erwin Apriliyanto, I. Wayan Pandu, and Kusrini Rini. "Comparison of NoSQL Database Performance with SQL Server Database on Online Airplane Ticket Booking." Indonesian Journal of Applied Informatics 4, no. 2 (August 9, 2020): 64. http://dx.doi.org/10.20961/ijai.v4i2.38956.

Abstract:
<em>Flight ticket booking services have become more advanced: bookings can be made through Android/iOS applications or a web browser, so travellers no longer have to visit a travel agent or the airport to book a plane ticket. This study uses an online ticket booking database in two variants, one on a NoSQL database and one on SQL Server. The purpose of the research is to compare the speed of NoSQL and SQL Server on the Insert, Delete, and Select commands. The tests use 100, 500, 1000, and 5000 records, with each record count tested four times and the results averaged. The results show that the NoSQL Insert command is 4 times faster than SQL Server below 500 records but 5 times slower above 500 records; the NoSQL Delete command is 3 times faster than SQL Server; and for Select, NoSQL is 55 times faster on 1 table, but 18, 10, and 16 times slower than SQL Server on 2, 3, and 4 tables respectively.</em>
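The paper's method (fixed record counts, four runs averaged per count) can be sketched in Python. This is a minimal illustration, not the authors' harness: stdlib `sqlite3` stands in for SQL Server, and the table layout is an assumption.

```python
import sqlite3
import time

def time_inserts(n):
    """Time n single-row inserts into an in-memory SQL table."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE booking (id INTEGER, passenger TEXT)")
    start = time.perf_counter()
    for i in range(n):
        con.execute("INSERT INTO booking VALUES (?, ?)", (i, f"p{i}"))
    con.commit()
    return time.perf_counter() - start

def average_over_runs(n, runs=4):
    """Average the timing over several runs, as in the paper's method."""
    return sum(time_inserts(n) for _ in range(runs)) / runs

for n in (100, 500):
    print(f"{n} records: {average_over_runs(n):.4f}s")
```

The same harness shape would be pointed at a NoSQL client for the other side of the comparison.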
2

Arif, Dashne Raouf, and Nzar Abdulqadir Ali. "Improving the performance of big data databases." Kurdistan Journal of Applied Research 4, no. 2 (December 31, 2019): 206–20. http://dx.doi.org/10.24017/science.2019.2.20.

Abstract:
Real-time monitoring systems utilize two types of database: relational databases such as MySQL and non-relational databases such as MongoDB. A relational database management system (RDBMS) stores data in a structured format using rows and columns; it is relational because the values of the tables are connected. A non-relational database does not adopt the relational structure of traditional RDBMSs; in recent years, this class of databases has also been referred to as Not only SQL (NoSQL). This paper discusses many comparisons that have been conducted on the execution-time performance of the two types of databases (SQL and NoSQL). In SQL (Structured Query Language) databases, different algorithms are used for inserting and updating data, such as indexing, bulk insert, and multiple updating. In NoSQL, algorithms such as default indexing, batch insert, multiple updating, and pipeline aggregation are used. Compared with related papers, this paper shows, firstly, that the performance of both SQL and NoSQL can be improved, and secondly, that inserting and updating operations can be dramatically faster in the NoSQL database than in the SQL database. To demonstrate the performance of the different algorithms for inserting and updating data in SQL and NoSQL, the paper considers different dataset sizes and reports the corresponding results. The SQL experiments are conducted on 50,000 to 3,000,000 records, while the NoSQL experiments are conducted on 50,000 to 16,000,000 documents (2 GB). In SQL, three million records are inserted within 606.53 seconds, while in NoSQL the same number of documents is inserted within 67.87 seconds. For updating, in SQL 300,000 records are updated within 271.17 seconds, while in NoSQL the same number of documents is updated within just 46.02 seconds.
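The bulk-insert versus row-by-row contrast the abstract measures can be illustrated with stdlib `sqlite3` (on the NoSQL side, pymongo's `insert_many` is the batch-insert counterpart). The schema and row count below are illustrative only, not the paper's setup.

```python
import sqlite3
import time

def insert_one_by_one(rows):
    """Baseline: one INSERT statement per row."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, v TEXT)")
    t0 = time.perf_counter()
    for r in rows:
        con.execute("INSERT INTO t VALUES (?, ?)", r)
    con.commit()
    return time.perf_counter() - t0

def insert_bulk(rows):
    """Bulk path: hand the whole batch to the driver at once."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER, v TEXT)")
    t0 = time.perf_counter()
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    con.commit()
    return time.perf_counter() - t0

rows = [(i, f"v{i}") for i in range(10_000)]
naive, bulk = insert_one_by_one(rows), insert_bulk(rows)
print(f"one-by-one: {naive:.4f}s, bulk: {bulk:.4f}s")
```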
3

Chaudhary, Renu, and Gagangeet Singh. "A NOVEL TECHNIQUE IN NoSQL DATA EXTRACTION." International Journal of Research -GRANTHAALAYAH 1, no. 1 (August 31, 2014): 51–58. http://dx.doi.org/10.29121/granthaalayah.v1.i1.2014.3086.

Abstract:
NoSQL databases (commonly interpreted by developers as 'not only SQL databases' rather than 'no SQL') are an emerging alternative to the most widely used relational databases. As the name suggests, NoSQL does not completely replace SQL but complements it in such a way that the two can co-exist. In this paper we discuss the NoSQL data model, the types of NoSQL data stores, the characteristics and features of each data store, the query languages used in NoSQL, the advantages and disadvantages of NoSQL over RDBMSs, and the future prospects of NoSQL. Motivation/Background: NoSQL systems can store and index arbitrarily large data sets while serving a large number of concurrent user requests. Method: Many people think NoSQL is a derogatory term created to poke at SQL; in reality, the term means Not Only SQL, the idea being that both technologies can coexist and each has its place. Results: Large-scale data processing (parallel processing over distributed systems); embedded IR (basic machine-to-machine information look-up and retrieval); exploratory analytics on semi-structured data (expert level); large-volume data storage (unstructured, semi-structured, small-packet structured). Conclusions: This report aims to provide an independent understanding of the strengths and weaknesses of various NoSQL database approaches to supporting applications that process huge volumes of data, as well as a global overview of non-relational NoSQL databases.
4

Sokolova, Marina V., Francisco J. Gómez, and Larisa N. Borisoglebskaya. "Migration from an SQL to a hybrid SQL/NoSQL data model." Journal of Management Analytics 7, no. 1 (December 15, 2019): 1–11. http://dx.doi.org/10.1080/23270012.2019.1700401.

5

Dai, Jiao. "SQL to NoSQL: What to do and How." IOP Conference Series: Earth and Environmental Science 234 (March 8, 2019): 012080. http://dx.doi.org/10.1088/1755-1315/234/1/012080.

6

Schreiner, Geomar A., Denio Duarte, and Ronaldo dos S. Mello. "When Relational-Based Applications Go to NoSQL Databases: A Survey." Information 10, no. 7 (July 16, 2019): 241. http://dx.doi.org/10.3390/info10070241.

Abstract:
Several data-centric applications today produce and manipulate a large volume of data, the so-called Big Data. Traditional databases, in particular, relational databases, are not suitable for Big Data management. As a consequence, some approaches that allow the definition and manipulation of large relational data sets stored in NoSQL databases through an SQL interface have been proposed, focusing on scalability and availability. This paper presents a comparative analysis of these approaches based on an architectural classification that organizes them according to their system architectures. Our motivation is that wrapping is a relevant strategy for relational-based applications that intend to move relational data to NoSQL databases (usually maintained in the cloud). We also claim that this research area has some open issues, given that most approaches deal with only a subset of SQL operations or give support to specific target NoSQL databases. Our intention with this survey is, therefore, to contribute to the state-of-art in this research area and also provide a basis for choosing or even designing a relational-to-NoSQL data wrapping solution.
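A wrapper of the kind this survey classifies sits between an SQL interface and a NoSQL store, translating statements into the target database's query API. A toy sketch of that translation step follows; the supported query shape and the MongoDB-style (collection, filter, projection) output are our assumptions, not any surveyed system's design.

```python
import re

def translate_select(sql):
    """Rewrite "SELECT <cols> FROM <coll> [WHERE <field> = <value>]"
    into a MongoDB-style (collection, filter, projection) triple."""
    m = re.match(
        r"SELECT (.+) FROM (\w+)(?: WHERE (\w+) = '?([^']+)'?)?$",
        sql.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query shape")
    cols, coll, field, value = m.groups()
    # SELECT * means no projection; otherwise include listed columns.
    projection = None if cols.strip() == "*" else \
        {c.strip(): 1 for c in cols.split(",")}
    filt = {field: value} if field else {}
    return coll, filt, projection
```

Real wrappers cover far more of SQL (joins, aggregates, updates), which is exactly where the survey notes most approaches support only a subset.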
7

Wang, Peng, and Yan Qi. "Research of Load Balancing Based on NOSQL Database." Applied Mechanics and Materials 602-605 (August 2014): 3371–74. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.3371.

Abstract:
NOSQL databases support high-concurrency reads and writes, scalability, and high availability, and have been widely applied in distributed storage systems. This paper studies load balancing in distributed storage systems and proposes a consistent hashing algorithm with a virtual-node strategy to improve the load balance of the system and increase the cache hit ratio. The load-balancing behaviour of NOSQL and SQL Server is analysed and compared using experimental data. The results show that, as the number of virtual nodes increases, the cache hit ratio of NOSQL becomes higher than that of SQL Server.
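The consistent-hashing-with-virtual-nodes strategy the paper proposes can be sketched as follows; the hash function choice, vnode count, and node names are illustrative assumptions.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Each physical node is hashed onto the ring many times as
    'virtual nodes', which smooths the key distribution."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []  # sorted (position, node) pairs
        for node in nodes:
            for i in range(vnodes):
                self.ring.append((self._hash(f"{node}#vn{i}"), node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key):
        # First virtual node clockwise from the key's position.
        idx = bisect_right(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]
```

Because each physical node appears many times on the ring, adding or removing a node remaps only a small, even slice of the keys, which is what keeps cache hit ratios high as the cluster changes.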
8

Tudorica, Bogdan George. "Challenges for the NoSQL systems." International Journal of Sustainable Economies Management 2, no. 1 (January 2013): 55–64. http://dx.doi.org/10.4018/ijsem.2013010106.

Abstract:
The term NoSQL (Not Only SQL) describes a database that is distributed, may not require fixed table schemas, usually avoids join operations, and is typically horizontally scalable; it does not offer an SQL query interface and is available in most cases as open source. Some bibliographic sources use the term to refer to a completely unrelated system. The concept is also treated by academic sources as a structured form of storage, but the two notions are not entirely equivalent: relational databases, for example, also meet the formal definition of structured data storage, yet their qualities are somewhat opposite to those of NoSQL. The aim of this paper is to discuss the challenges met by NoSQL solutions and to propose solutions for these challenges.
9

Rats, Juris, and Gints Ernestsons. "Clustering and Ranked Search for Enterprise Content Management." International Journal of E-Entrepreneurship and Innovation 4, no. 4 (October 2013): 20–31. http://dx.doi.org/10.4018/ijeei.2013100102.

Abstract:
The aim of this work is to understand more closely where the border lies between the relational and Not Only Structured Query Language (NoSQL) platforms in the Enterprise Content Management (ECM) area. Another, closely related objective is to specify the conceptual architecture of a distributed ECM system. The authors specify the model of a prototype ECM system and compare two platforms for this model: MS SQL for the relational platform and Clusterpoint for the NoSQL platform. The results of performance measurements of SQL and NoSQL technologies on ECM-specific tasks are presented and analyzed, and the viability of a NoSQL document-oriented database solution based on clustering and ranked search is demonstrated. The ways to leverage the improved performance and scalability of the software to better serve the business needs of the enterprise are discussed, and the conceptual architecture of the prototype system is outlined.
10

Khashan, Eman, Ali Eldesouky, and Sally Elghamrawy. "An adaptive spark-based framework for querying large-scale NoSQL and relational databases." PLOS ONE 16, no. 8 (August 19, 2021): e0255562. http://dx.doi.org/10.1371/journal.pone.0255562.

Abstract:
The growing popularity of big data analysis and cloud computing has created new big data management standards. Programmers may have to interact with a number of heterogeneous data stores, both SQL and NoSQL, depending on the information they are responsible for. Interacting with heterogeneous data models via numerous APIs and query languages imposes challenging tasks on multi-data-processing developers; indeed, complex queries over heterogeneous data structures cannot currently be expressed declaratively when they span multiple data stores, and therefore require additional development effort. Many models have been presented to address complex queries via multistore applications; some implement a unified and fast model, while others are not efficient enough for this type of complex database query. This paper provides an automated, fast, and easy unified architecture, CQNS, for solving simple and complex SQL and NoSQL queries over heterogeneous data stores. The proposed framework can be used in cloud environments or in any big data application to help developers manage basic and complicated database queries. CQNS consists of three layers: a matching selector layer, a processing layer, and a query execution layer. The matching selector layer is the heart of the architecture: five user queries are examined to see whether they match another five queries stored in a single engine in the architecture's library, and a proposed algorithm directs each query to the right SQL or NoSQL database engine. Furthermore, CQNS deals with many NoSQL databases, such as MongoDB, Cassandra, Riak, CouchDB, and Neo4j. The paper presents a Spark framework that can handle both SQL and NoSQL databases. Four benchmark dataset scenarios are used to evaluate the proposed CQNS for querying different NoSQL databases in terms of optimization performance and query execution time. The results show that CQNS achieves the best latency and throughput in less time than the compared systems.
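The matching-selector idea — inspect a query and dispatch it to an engine that can run it — can be caricatured in a few lines. The matching rules and engine labels below are our invention, far simpler than CQNS's library-based matching.

```python
def route_query(query):
    """Dispatch a query to an engine family by its shape: dict filters
    go to a document store, SQL keywords to a relational engine,
    Cypher-style patterns to a graph store."""
    if isinstance(query, dict):
        return "document-store"
    text = query.lstrip().upper()
    if text.startswith(("SELECT", "INSERT", "UPDATE", "DELETE")):
        return "relational"
    if text.startswith("MATCH"):
        return "graph"
    raise ValueError("no engine matches this query shape")
```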
11

Aftab, Zain, Waheed Iqbal, Khaled Mohamad Almustafa, Faisal Bukhari, and Muhammad Abdullah. "Automatic NoSQL to Relational Database Transformation with Dynamic Schema Mapping." Scientific Programming 2020 (July 1, 2020): 1–13. http://dx.doi.org/10.1155/2020/8813350.

Abstract:
Recently, the use of NoSQL databases has grown to manage unstructured data for applications to ensure performance and scalability. However, many organizations prefer to transfer data from an operational NoSQL database to a SQL-based relational database for using existing tools for business intelligence, analytics, decision making, and reporting. The existing methods of NoSQL to relational database transformation require manual schema mapping, which requires domain expertise and consumes noticeable time. Therefore, an efficient and automatic method is needed to transform an unstructured NoSQL database into a structured database. In this paper, we proposed and evaluated an efficient method to transform a NoSQL database into a relational database automatically. In our experimental evaluation, we used MongoDB as a NoSQL database, and MySQL and PostgreSQL as relational databases to perform transformation tasks for different dataset sizes. We observed excellent performance, compared to the existing state-of-the-art methods, in transforming data from a NoSQL database into a relational database.
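Dynamic schema mapping of the kind this paper automates boils down to inferring a column set and column types from heterogeneous documents. A minimal sketch follows; the widen-to-TEXT rule for type conflicts is our assumption, not the paper's algorithm.

```python
def infer_schema(documents):
    """Derive SQL column names and types from schema-less documents."""
    type_map = {int: "INTEGER", float: "REAL", str: "TEXT"}
    columns = {}
    for doc in documents:
        for key, value in doc.items():
            sql_type = type_map.get(type(value), "TEXT")
            # Widen to TEXT when documents disagree on a field's type.
            if columns.get(key, sql_type) != sql_type:
                columns[key] = "TEXT"
            else:
                columns[key] = sql_type
    return columns

def to_rows(documents, columns):
    """Flatten documents into tuples; missing fields become NULLs."""
    return [tuple(doc.get(c) for c in columns) for doc in documents]
```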
12

BenAli-Sougui, Ines, Minyar Sassi Hidri, and Amel Grissa-Touzi. "No-FSQL." International Journal of Fuzzy System Applications 5, no. 2 (April 2016): 54–63. http://dx.doi.org/10.4018/ijfsa.2016040104.

Abstract:
NoSQL (Not only SQL) is an efficient database model for storing and manipulating huge quantities of precise data. Most NoSQL databases scale well as data grows, and they are often flexible enough to accommodate imprecise and ambiguous data. This paper presents fundamental concepts and practical solutions for combining fuzziness with NoSQL to handle fuzzy databases (FDB). The authors present a graph-based fuzzy NoSQL model that extends the NoSQL one to deal with large fuzzy databases, and consider the Cypher declarative query language of Neo4j, the current market leader, for querying fuzzy databases.
13

Chung, Wu-Chun, Hung-Pin Lin, Shih-Chang Chen, Mon-Fong Jiang, and Yeh-Ching Chung. "JackHare: a framework for SQL to NoSQL translation using MapReduce." Automated Software Engineering 21, no. 4 (September 28, 2013): 489–508. http://dx.doi.org/10.1007/s10515-013-0135-x.

14

Silalahi, Mesri. "PERBANDINGAN PERFORMANSI DATABASE MONGODB DAN MYSQL DALAM APLIKASI FILE MULTIMEDIA BERBASIS WEB." Computer Based Information System Journal 6, no. 1 (March 31, 2018): 63. http://dx.doi.org/10.33884/cbis.v6i1.574.

Abstract:
Databases appeared and developed in line with the need to process and store data to meet information needs; the database is an important building block of an information system. In addition to relational (SQL) databases, which store structured data in tables with defined schemas, there are non-relational (NoSQL) databases with dynamic or unstructured schemas. This study compares the performance of a NoSQL database (MongoDB) and an SQL database (MySQL) for a web-based multimedia file storage application that stores files as BLOBs. The performance comparison is based on execution speed and computer resource usage (CPU, memory, and virtual memory).
15

Adji, Teguh Bharata, Dwi Retno Puspita Sari, and Noor Akhmad Setiawan. "Relational into Non-Relational Database Migration with Multiple-Nested Schema Methods on Academic Data." IJITEE (International Journal of Information Technology and Electrical Engineering) 3, no. 1 (September 13, 2019): 16. http://dx.doi.org/10.22146/ijitee.46503.

Abstract:
The rapid development of internet technology has increased the need for data storage and processing technology; one application is managing academic records at educational institutions. Along with the massive growth of information, a decline in traditional database performance is inevitable, so many organizations choose to migrate to NoSQL, a technology able to overcome the shortcomings of traditional databases. However, existing SQL-to-NoSQL migration tools have not been able to represent SQL data relations in NoSQL without limiting query performance. In this paper, a transformation system migrating a relational MySQL database to the non-relational database MongoDB was developed for academic databases, using the Multiple Nested Schema method. The development began with a transformation scheme design, which was then implemented in the migration process using PDI/Kettle. Testing covered three aspects: query response time, data integrity, and storage requirements. The test results showed that the developed system successfully represented SQL data relations in NoSQL: complex queries were 13.32 times faster on the migrated database, basic queries involving SQL transaction tables were 28.6 times faster on the migrated database, and basic queries not involving SQL transaction tables were 3.91 times faster on the migration source. This supports the theory behind the Multiple Nested Schema method, which aims to overcome the poor performance of queries involving many JOIN operations. The system was also proven to maintain data integrity in all tested queries. The storage tests indicated that the migrated database required 10.53 times more storage than the source database, due to the large amount of data redundancy produced by the transformation process. However, storage is not currently a top priority in data processing technology, so the larger storage requirement is an acceptable cost of efficient query performance, which remains the first priority.
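The nested-schema idea — pre-joining related tables into nested documents so reads need no JOIN, at the cost of duplicated data — can be sketched like this. The table and field names are invented for illustration; the paper's actual academic schema is more involved.

```python
def embed(students, enrollments, courses):
    """Denormalize JOIN-heavy relational rows into nested documents.
    Course data is duplicated into every enrollment, trading storage
    for JOIN-free reads."""
    course_by_id = {c["id"]: c for c in courses}
    docs = []
    for s in students:
        doc = dict(s)
        doc["enrollments"] = [
            {**e, "course": course_by_id[e["course_id"]]}
            for e in enrollments if e["student_id"] == s["id"]
        ]
        docs.append(doc)
    return docs
```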
16

Wu, Jiang, Du Ni, and Zhi Xiao. "N-Tier Soft Set Data Model: An Approach to Combine the Logicality of SQL and the Flexibility of NoSQL." Mobile Information Systems 2021 (June 25, 2021): 1–23. http://dx.doi.org/10.1155/2021/5567234.

Abstract:
To process huge amounts of data, computing resources need to be organized in clusters that can be scaled out easily. Traditional SQL databases built on the relational data model are difficult to deploy on such clusters, which motivated the movement named NoSQL. However, NoSQL databases have their own limits arising from their data models. In this paper, the original soft set theory is extended into a new theory system called the n-tier soft set. We systematically construct its concepts, definitions, and operations, establishing it as a novel soft set algebra. Features of this algebra give it natural advantages as a data model that combines the logicality of the SQL (relational) model with the flexibility of NoSQL models. The data model provides a unified and normative logic for organizing and manipulating data, combines metadata (semantics) and data into a self-describing structure, and combines index and data to realize fast locating and correlating.
17

Balasubramanaian, Nagarajan, Suguna Jayapal, and Satheeshkumar Janakiraman. "A Contrivance to Encapsulate Virtual Scaffold with Comments and Notes." International Arab Journal of Information Technology 17, no. 3 (December 1, 2019): 338–46. http://dx.doi.org/10.34028/iajit/17/3/7.

Abstract:
CLOUD is an elision of Common Location-independent Online Utility available on-Demand and is based on Service-Oriented Architecture (SOA). Today many researchers are working on contrivances for multi-tenant-aware Software as a Service (SaaS) application development, yet a precise, pragmatic solution remains a challenge. The first step towards a solution is to enhance the virtual scaffold and propose it as a System under Test (SuT). The work is organized as Model View Controller (MVC), where tenants log in through the View and write their snippet code for encapsulation. The proposed VirScaff schema acts as the Controller, providing authentication and authorization through role/session assignment for tenants and thereby mediating access to data from the dashboard (Create, Read, Update, and Delete (CRUD)). The SuT supports and accommodates both SQL and Not only Structured Query Language (NoSQL) datasets. Finally, this paper shows that the SuT behaves well for both SQL and NoSQL datasets in terms of time and space complexity. To sum up, the work addresses the challenges of multi-tenant-aware SaaS application development and performs particularly well with NoSQL datasets.
18

Irshad, Lubna, Li Yan, and Zongmin Ma. "Schema-Based JSON Data Stores in Relational Databases." Journal of Database Management 30, no. 3 (July 2019): 38–70. http://dx.doi.org/10.4018/jdm.2019070103.

Abstract:
JSON is a simple, compact, and lightweight data exchange format used to communicate between web services and client applications. NoSQL document stores evolved with the popularity of JSON; they support schema-less JSON storage, reduce cost, and facilitate quick development. However, NoSQL still lacks a standard query language and supports the eventually consistent BASE transaction model rather than the ACID transaction model, which is very challenging and a burden on the developer. Relational database management systems (RDBMSs) support JSON in binary format with SQL functions (also known as SQL/JSON). However, these functions are not yet standardized and vary across vendors, with different limitations and complexities. More importantly, complex searches, partial updates, composite queries, and analyses are cumbersome and time-consuming in SQL/JSON compared to standard SQL operations. It is essential to integrate JSON into databases that use standard SQL features, support ACID transaction models, and can manage and organize data efficiently. In this article, the authors empower JSON to use relational databases for analysis and complex queries. They show that the descriptive nature of the JSON schema can be utilized to create a relational schema for storing a JSON document; the powerful SQL features can then be used to gain consistency and ACID compatibility when querying JSON instances from the relational schema. This approach opens a gateway to combining the best features of both worlds: the fast development of JSON, the consistency of the relational model, and the efficiency of SQL.
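The article's core move — deriving a relational schema from a JSON Schema's descriptive properties — can be sketched for the flat case. The type mapping and NOT NULL handling below are our simplifications; nested objects and arrays, which the article also covers, need further tables.

```python
def json_schema_to_ddl(table, schema):
    """Map a flat JSON Schema's properties to a CREATE TABLE statement."""
    type_map = {"string": "TEXT", "integer": "INTEGER",
                "number": "REAL", "boolean": "BOOLEAN"}
    required = set(schema.get("required", []))
    cols = []
    for name, spec in schema["properties"].items():
        col = f"{name} {type_map.get(spec.get('type'), 'TEXT')}"
        if name in required:           # required fields become NOT NULL
            col += " NOT NULL"
        cols.append(col)
    return f"CREATE TABLE {table} ({', '.join(cols)});"
```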
19

Schreiner, Geomar A., Denio Duarte, and Ronaldo dos Santos Mello. "Bringing SQL databases to key-based NoSQL databases: a canonical approach." Computing 102, no. 1 (June 29, 2019): 221–46. http://dx.doi.org/10.1007/s00607-019-00736-1.

20

Banerjee, Shreya, Sourabh Bhaskar, Anirban Sarkar, and Narayan C. Debnath. "A Unified Conceptual Model for Data Warehouses." Annals of Emerging Technologies in Computing 5, no. 5 (March 20, 2021): 162–69. http://dx.doi.org/10.33166/aetic.2021.05.020.

Abstract:
These days, NoSQL (Not only SQL) databases are being used as a deployment tool for Data Warehouses (DWs) due to their support for dynamic and scalable data modeling. Yet decision-makers face several challenges in accepting them as a major choice for implementing their DWs, the most significant being the lack of a common conceptual model and a systematic design methodology for the different NoSQL databases. The objective of this paper is to resolve these challenges by proposing an ontology-based formal conceptual model for NoSQL-based DWs. The proposed concepts are capable of realizing cube concepts for visualizing multi-dimensional data in NoSQL-based DW solutions. In this context, two strategies are specified, implemented, and illustrated using a case study for devising the proposed conceptual model.
21

Kaur, Harpreet. "Analysis of Nosql Database State-of-The-Art Techniques and their Security Issues." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 2 (April 11, 2021): 467–71. http://dx.doi.org/10.17762/turcomat.v12i2.852.

Abstract:
NoSQL database systems are highly optimized for retrieval and append operations on large quantities of data, where relational models are comparatively inefficient. They are used mainly for real-time applications and for statistical analysis of growing volumes of data. NoSQL databases emerging on the market claim to outperform SQL databases. Nowadays everyone wants to keep their data safe and secure so that no one can access it without permission; however, there are numerous security issues that are yet to be resolved. In this paper, we discuss and review NoSQL databases and the most popular security issues of two of them, Cassandra and MongoDB.
22

Pokorný, Jaroslav. "Integration of Relational and NoSQL Databases." Vietnam Journal of Computer Science 06, no. 04 (November 2019): 389–405. http://dx.doi.org/10.1142/s2196888819500210.

Abstract:
The analysis of relational and NoSQL databases leads to the conclusion that these data processing systems are to some extent complementary. In the current Big Data applications, especially where extensive analyses (so-called Big Analytics) are needed, it turns out that it is nontrivial to design an infrastructure involving data and software of both types. Unfortunately, the complementarity negatively influences integration possibilities of these data stores both at the data model and data processing levels. In terms of performance, it may be beneficial to use a polyglot persistence, a multimodel approach or multilevel modeling, or even to transform the SQL database schema into NoSQL and to perform data migration between the relational and NoSQL databases. Another possibility is to integrate a NoSQL database and relational database with the help of a third data model. The aim of the paper is to show these possibilities and present some new methods of designing such integrated database architectures.
23

Celesti, Antonio, Maria Fazio, and Massimo Villari. "A Study on Join Operations in MongoDB Preserving Collections Data Models for Future Internet Applications." Future Internet 11, no. 4 (March 27, 2019): 83. http://dx.doi.org/10.3390/fi11040083.

Abstract:
Presently, we are observing an explosion of data that need to be stored and processed over the Internet, characterized by large volume, velocity, and variety. For this reason, software developers have begun to look at NoSQL solutions for data storage. However, operations that are trivial in traditional Relational DataBase Management Systems (RDBMSs) can become very complex in NoSQL DBMSs. This is the case for the join operation, which establishes a connection between two or more DB structures and whose construct is not explicitly available in many NoSQL databases. As a consequence, the data model has to be changed, or a set of operations has to be performed, to address particular queries on data. The open questions are therefore: how do NoSQL solutions behave when they have to perform join operations that are not natively supported, and what is the quality of NoSQL solutions in such cases? In this paper, we deal with these issues specifically considering one of the major NoSQL document-oriented DBs available on the market: MongoDB. In particular, we discuss an approach to performing join operations at the application layer in MongoDB that allows us to preserve data models. We analyse the performance of the proposed approach, discussing the introduced overhead in comparison with SQL-like DBs.
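An application-layer join of the kind the paper evaluates replaces the missing server-side construct with two collection reads plus an in-memory hash join. A sketch over plain lists of documents follows; the collection and field names are illustrative, and in practice the two inputs would come from separate MongoDB queries.

```python
def app_level_join(left_docs, right_docs, left_key, right_key):
    """Hash join in application code: one pass to index the right
    collection, one pass to probe it. The overhead the paper measures
    is exactly these extra round trips and this in-memory work."""
    index = {}
    for doc in right_docs:
        index.setdefault(doc[right_key], []).append(doc)
    joined = []
    for doc in left_docs:
        for match in index.get(doc[left_key], []):
            # Prefix right-side fields to avoid key collisions.
            joined.append({**doc, **{f"r_{k}": v for k, v in match.items()}})
    return joined
```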
24

Ait El Mouden, Zakariyaa, and Abdeslam Jakimi. "A New Algorithm for Storing and Migrating Data Modelled by Graphs." International Journal of Online and Biomedical Engineering (iJOE) 16, no. 11 (October 5, 2020): 137. http://dx.doi.org/10.3991/ijoe.v16i11.15545.

Abstract:
NoSQL databases have moved from theoretical solutions for exceeding the limits of relational databases to a practical and indisputable choice for storing and manipulating big data. In terms of variety, NoSQL databases store heterogeneous data without being obliged to respect a predefined schema, as is the case in relational and object-relational databases. NoSQL solutions also surpass traditional databases in storage capacity: MongoDB, for example, is a document-oriented database capable of storing an unlimited number of documents, with a maximal size of 32 TB depending on the machine and operating system that run the database. In terms of velocity, many studies have compared the execution times of different transactions and shown that NoSQL databases are a good fit for real-time applications. This paper presents an algorithm for storing data modeled as graphs in NoSQL documents. The purpose of the study is to exploit the large amount of data stored in SQL databases and make it usable by recent clustering algorithms and other data science tools. The study links relational data to document datastores by defining an effective algorithm for reading relational data, modelling those data as graphs, and storing them as NoSQL documents.
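The storage step of such an algorithm — one NoSQL document per graph node, with the adjacency list embedded — can be sketched as follows. The document shape (`_id`, `label`, `neighbors`) is our assumption for illustration, not the paper's exact layout.

```python
def graph_to_documents(nodes, edges):
    """Serialize a directed graph as one document per node, embedding
    each node's outgoing adjacency so it can be stored in a document
    datastore such as MongoDB."""
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    return [{"_id": n, "label": label, "neighbors": adjacency.get(n, [])}
            for n, label in nodes.items()]
```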
APA, Harvard, Vancouver, ISO, and other styles
25

Bhatewara, Ankita, and Kalyani Waghmare. "Highly Scalable Network Management Solution Using Cassandra." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 13, no. 10 (October 30, 2014): 5085–89. http://dx.doi.org/10.24297/ijct.v13i10.2330.

Full text
Abstract:
With the current emphasis on Big Data, NoSQL databases have surged in popularity and are claimed to perform better than SQL databases. Traditional databases are designed for structured data and complex queries. In the cloud environment, the scale of data is very large, the data are unstructured, and requests for the data are dynamic; these characteristics raise new challenges for data storage and administration, and in this context the NoSQL database comes into the picture. This paper discusses some databases for non-structured data. It also shows how Cassandra can be used to improve the scalability of the network compared to an RDBMS.
APA, Harvard, Vancouver, ISO, and other styles
26

Matallah, Houcine, Ghalem Belalem, and Karim Bouamrane. "Comparative Study Between the MySQL Relational Database and the MongoDB NoSQL Database." International Journal of Software Science and Computational Intelligence 13, no. 3 (July 2021): 38–63. http://dx.doi.org/10.4018/ijssci.2021070104.

Full text
Abstract:
NoSQL databases are new architectures developed to remedy the various weaknesses that have affected relational databases in highly distributed systems such as cloud computing, social networks, and electronic commerce. Several companies loyal to traditional relational SQL databases for decades now seek to switch to the new NoSQL databases to meet requirements stemming from the change of scale in data volume, increasing load, the diversity of data types handled, and geographic distribution. This paper develops a comparative study in which the authors evaluate the performance of two widespread databases: MySQL as a relational database and MongoDB as a NoSQL database. To accomplish this comparison, the research uses the Yahoo! Cloud Serving Benchmark (YCSB). The contribution is to provide some answers for choosing the appropriate database management system for the type of data used and the type of processing performed on that data.
APA, Harvard, Vancouver, ISO, and other styles
27

Bathla, Gourav, Rinkle Rani, and Himanshu Aggarwal. "Comparative study of NoSQL databases for big data storage." International Journal of Engineering & Technology 7, no. 2.6 (March 11, 2018): 83. http://dx.doi.org/10.14419/ijet.v7i2.6.10072.

Full text
Abstract:
Big data is a collection of structured, semi-structured and unstructured data at large scale, generated by social networks, business organizations, and the interactions and views of socially connected users. It is used for important decision making in business and research organizations. Storage that can efficiently process data at this scale, extracting important information with low response time, is a pressing need in the current competitive environment. Relational databases, which have ruled storage technology for a long time, seem unsuitable for such mixed types of data: data cannot always be represented as rows and columns in tables. NoSQL (Not only SQL) is complementary to SQL technology and provides various storage formats compatible with high-velocity, large-volume and varied data. NoSQL databases are categorized into four techniques: column-oriented, key-value, graph-based and document-oriented databases. There are approximately 120 real solutions across these categories; the most commonly used ones are elaborated in the Introduction section. Several research works have analyzed these NoSQL solutions, but they have not indicated the situations in which a particular data storage technique should be chosen. In this study and analysis, we have tried our best to help the reader select a technology based on specific requirements. In previous research, comparisons among NoSQL data storage techniques have been described using real examples like MongoDB, Neo4j etc. Our observation is that if users have adequate knowledge of the NoSQL categories and their comparison, it is easy for them to choose the most suitable category and then select real solutions from that category.
APA, Harvard, Vancouver, ISO, and other styles
28

Bajaj, Akhilesh, and Wade Bick. "The Rise of NoSQL Systems." Journal of Database Management 31, no. 3 (July 2020): 67–82. http://dx.doi.org/10.4018/jdm.2020070104.

Full text
Abstract:
Transaction processing systems are primarily based on the relational model of data and offer the advantages of decades of research and experience in enforcing data quality through integrity constraints, allowing concurrent access and supporting recoverability. From a performance standpoint, they offer join-based query optimization and data structures to promote fast reads and writes, but are usually only vertically scalable from a hardware standpoint. NoSQL (Not Only SQL) systems follow data representation formats other than relations, such as key-value pairs, graphs, documents or column families. They offer a flexible data representation format as well as horizontal hardware scalability so that Big Data can be processed in real time. In this article, we review recent research on each type of system and then discuss how the teaching of NoSQL may be incorporated into traditional undergraduate database courses in information systems curricula.
APA, Harvard, Vancouver, ISO, and other styles
29

Esbai, Redouane, Fouad Elotmani, and Fatima Zahra Belkadi. "Toward Automatic Generation of Column-Oriented NoSQL Databases in Big Data Context." International Journal of Online and Biomedical Engineering (iJOE) 15, no. 09 (June 14, 2019): 4. http://dx.doi.org/10.3991/ijoe.v15i09.10433.

Full text
Abstract:
<span>The growth of application architectures in all areas (e.g. Astrology, Meteorology, E-commerce, social networks, etc.) has resulted in an exponential increase in data volumes, now measured in petabytes. Managing these volumes has become a problem that relational databases are no longer able to handle, because of their ACID properties. In response to this scaling up, new concepts such as NoSQL have emerged. In this paper, we show how to design and apply transformation rules to migrate from an SQL relational database to a Big Data solution within NoSQL. For this, we use Model Driven Architecture (MDA) and transformation languages such as MOF 2.0 QVT (Meta-Object Facility 2.0 Query-View-Transformation) and Acceleo, which define the meta-models for the development of the transformation model. The transformation rules defined in this work can generate, from a class diagram, CQL code for creating a column-oriented NoSQL database.</span>
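The final step of such a transformation, emitting CQL DDL from a class-diagram-like model, can be sketched as below. This is only an assumption-laden illustration: the model format (name/attribute tuples), the `TYPE_MAP`, and the `customer` class are invented, not the paper's QVT/Acceleo templates.

```python
# Hypothetical mapping from UML attribute types to CQL column types
TYPE_MAP = {"String": "text", "Integer": "int", "Date": "timestamp"}

def class_to_cql(name, attributes, key):
    """Render a CREATE TABLE statement for one class of the model."""
    cols = ",\n  ".join(f"{attr} {TYPE_MAP[utype]}" for attr, utype in attributes)
    return f"CREATE TABLE {name} (\n  {cols},\n  PRIMARY KEY ({key})\n);"

cql = class_to_cql("customer",
                   [("id", "Integer"), ("name", "String"), ("born", "Date")],
                   key="id")
```

Running this yields a CQL statement that Cassandra could execute to create the column-oriented table corresponding to the class.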
APA, Harvard, Vancouver, ISO, and other styles
30

Kriestanto, Danny, and Alif Benden Arnado. "IMPLEMENTASI WEBSITE PENCARIAN KOS DENGAN NoSQL." JIKO (Jurnal Informatika dan Komputer) 2, no. 2 (October 12, 2017): 103. http://dx.doi.org/10.26798/jiko.2017.v2i2.66.

Full text
Abstract:
Database technology has moved beyond relational databases. Massive and unstructured data have encouraged experts to create new types of databases that do not rely on SQL queries. One such technology is called NoSQL (Not Only SQL). One of the developing database systems using this approach is MongoDB, which supports data storage that no longer needs structured tables and rigidly typed data: the schema is flexible enough to handle changes in the data. MongoDB's characteristic of collecting data in the form of arrays is considered suitable for implementing a boarding house search, where each boarding house has its own structure. MongoDB also supports several programming languages, including PHP, with Bootstrap material as the interface. The results of the research showed many differences between implementing a NoSQL database and a regular relational one: the NoSQL database was considerably more complicated in structure, data types, and even the CRUD operations. The results also showed that viewing an array nested inside another array requires two processes.
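The "two processes" needed to read an array nested inside another array can be illustrated with a document shaped like the boarding house data: one pass over the outer array and one over each inner array. The field names (`rooms`, `facilities`) are illustrative assumptions, not the paper's schema.

```python
# A MongoDB-style document with an array nested inside another array
boarding_house = {
    "name": "Kos Melati",
    "rooms": [
        {"no": 1, "facilities": ["bed", "desk"]},
        {"no": 2, "facilities": ["bed", "ac"]},
    ],
}

def list_facilities(doc):
    pairs = []
    for room in doc["rooms"]:            # first pass: the outer array
        for fac in room["facilities"]:   # second pass: each inner array
            pairs.append((room["no"], fac))
    return pairs

pairs = list_facilities(boarding_house)
```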
APA, Harvard, Vancouver, ISO, and other styles
31

Guo, Dongming, and Erling Onstein. "State-of-the-Art Geospatial Information Processing in NoSQL Databases." ISPRS International Journal of Geo-Information 9, no. 5 (May 19, 2020): 331. http://dx.doi.org/10.3390/ijgi9050331.

Full text
Abstract:
Geospatial information has been indispensable for many application fields, including traffic planning, urban planning, and energy management. Geospatial data are mainly stored in relational databases that have been developed over several decades, and most geographic information applications are desktop applications. With the arrival of big data, geospatial information applications are also being modified into, e.g., mobile platforms and Geospatial Web Services, which require changeable data schemas, faster query response times, and more flexible scalability than traditional spatial relational databases currently have. To respond to these new requirements, NoSQL (Not only SQL) databases are now being adopted for geospatial data storage, management, and queries. This paper reviews state-of-the-art geospatial data processing in the 10 most popular NoSQL databases. We summarize the supported geometry objects, main geometry functions, spatial indexes, query languages, and data formats of these 10 NoSQL databases. Moreover, the pros and cons of these NoSQL databases are analyzed in terms of geospatial data processing. A literature review and analysis showed that current document databases may be more suitable for massive geospatial data processing than are other NoSQL databases due to their comprehensive support for geometry objects and data formats and their performance, geospatial functions, index methods, and academic development. However, depending on the application scenarios, graph databases, key-value, and wide column databases have their own advantages.
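The kind of geospatial query these NoSQL databases support natively (e.g. a `$near`-style proximity search over GeoJSON points) can be sketched in plain Python. The station names, coordinates and radius below are invented for illustration; a document database would answer this with a spatial index rather than a linear scan.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance between two (lon, lat) points, in km."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

# GeoJSON-style documents; coordinates are [longitude, latitude]
docs = [
    {"name": "station-a", "loc": {"type": "Point", "coordinates": [10.39, 63.43]}},
    {"name": "station-b", "loc": {"type": "Point", "coordinates": [10.75, 59.91]}},
]

def near(documents, lon, lat, max_km):
    """Names of documents whose point lies within max_km of (lon, lat)."""
    return [d["name"] for d in documents
            if haversine_km(lon, lat, *d["loc"]["coordinates"]) <= max_km]

hits = near(docs, 10.39, 63.43, max_km=50)
```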
APA, Harvard, Vancouver, ISO, and other styles
32

Nurhadi, Nurhadi, Rabiah Abdul Kadir, and Ely Salwana Mat Surin. "CLASSIFICATION COMPLEX QUERY SQL FOR DATA LAKE MANAGEMENT USING MACHINE LEARNING." Journal of Information System and Technology Management 6, no. 22 (September 1, 2021): 15–24. http://dx.doi.org/10.35631//jistm.622002.

Full text
Abstract:
A query is a request for data or information from a database table or a combination of tables, allowing a more precise database search. SQL queries are divided into two types: simple queries and complex queries. Complex SQL queries go beyond standard SELECT and WHERE statements; they often involve complex joins and subqueries, where queries are nested in a WHERE clause. Complex SQL queries can be grouped into two types: Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) queries. When implementing complex SQL queries over a NoSQL database, a classification process is needed because of the varying data formats: structured, semi-structured, and unstructured data. The classification process aims to make it easier to organize the query data by type of query. The classification methods used in this research are the Naive Bayes Classifier (NBC), which is often used on text data, and the Support Vector Machine (SVM), which is known to work very well on high-dimensional data. The two methods are compared to determine the best classification result. The results showed that SVM reached a classification accuracy of 84.61%, compared with 76.92% for NBC.
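A toy illustration of the surface features such a classifier might use: counting joins, subqueries and aggregates in the query text. The paper trains NBC and SVM models; this rule-based sketch, with an invented threshold, only shows how OLTP- and OLAP-style queries differ in those features.

```python
import re

def query_features(sql):
    """Surface features that separate OLTP- from OLAP-style queries."""
    s = sql.upper()
    return {
        "joins": len(re.findall(r"\bJOIN\b", s)),
        "subqueries": s.count("(SELECT"),
        "aggregates": len(re.findall(r"\b(SUM|AVG|COUNT|GROUP BY)\b", s)),
    }

def classify(sql):
    # Hypothetical threshold standing in for a trained model's decision rule
    f = query_features(sql)
    return "OLAP" if f["joins"] + f["subqueries"] + f["aggregates"] >= 2 else "OLTP"

oltp = "SELECT name FROM users WHERE id = 7"
olap = ("SELECT region, SUM(amount) FROM sales JOIN stores "
        "ON sales.store_id = stores.id GROUP BY region")
```

In the paper these features would be vectorized and fed to NBC or SVM rather than thresholded by hand.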
APA, Harvard, Vancouver, ISO, and other styles
33

Ferencz, Katalin. "Overview of Modern Nosql Database Management Systems. Case Study: Apache Cassandra." Műszaki Tudományos Közlemények 9, no. 1 (October 1, 2018): 83–86. http://dx.doi.org/10.33894/mtk-2018.09.16.

Full text
Abstract:
The widespread adoption of IoT devices makes it possible to collect enormous amounts of sensor data. Traditional SQL (structured query language) database management systems are not the most appropriate for storing this type of data; distributed database management systems are more adequate for the task. Apache Cassandra is open source, distributed database server software that stores large amounts of data on low-cost servers while providing high availability. Cassandra uses the gossip protocol to exchange information between the distributed servers, and its query language is CQL (Cassandra Query Language). In this paper we present an alternative to traditional SQL-based database management systems, the so-called NoSQL database management systems, summarize their main types, and provide a detailed description of the installation, configuration and operation of the Apache Cassandra open source distributed database server.
APA, Harvard, Vancouver, ISO, and other styles
34

Bukhari, Syed Ahmad Chan, Hafsa Shareef Dar, M. Ikramullah Lali, Fazel Keshtkar, Khalid Mahmood Malik, and Seifedine Kadry. "Frameworks for Querying Databases Using Natural Language." International Journal of Data Warehousing and Mining 17, no. 2 (April 2021): 21–38. http://dx.doi.org/10.4018/ijdwm.2021040102.

Full text
Abstract:
A natural language interface is useful for a wide range of users to retrieve their desired information from databases without requiring prior knowledge of a database query language such as SQL. The advent of user-friendly technologies, such as speech-enabled interfaces, has revived the use of natural language technology for querying databases; however, the most recent comprehensive survey of the state of the art was published back in 2013 and does not encompass several advancements. In this paper, the authors review 47 frameworks developed during the last decade and categorize them into SQL- and NoSQL-based frameworks. Furthermore, the frameworks are analyzed on criteria such as supported language, scheme of heuristic rules, interoperability support, scope of the dataset, and overall performance score. The study concludes that the majority of frameworks focus on translating English natural language text into SQL queries.
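A deliberately tiny rule-based translator in the spirit of the surveyed frameworks: it maps one English question pattern onto SQL. Real systems use far richer grammars and schema linking; the question pattern and table/column names here are assumptions for illustration only.

```python
import re

# One supported question form: "show all <table> where <column> is <value>"
PATTERN = re.compile(r"show all (\w+) where (\w+) is (\w+)", re.IGNORECASE)

def nl_to_sql(question):
    """Translate one English question pattern into an SQL query."""
    match = PATTERN.match(question.strip())
    if not match:
        raise ValueError("unsupported question form")
    table, column, value = match.groups()
    return f"SELECT * FROM {table} WHERE {column} = '{value}'"

sql = nl_to_sql("show all employees where city is Paris")
```

Even this single rule shows why such systems need heuristic-rule schemes: every new question shape requires another mapping.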
APA, Harvard, Vancouver, ISO, and other styles
35

Imam, Abdullahi Abubakar, Shuib Basri, Rohiza Ahmad, Amirudin A. Wahab, María T. González-Aparicio, Luiz Fernando Capretz, Ammar K. Alazzawi, and Abdullateef O. Balogun. "DSP: Schema Design for Non-Relational Applications." Symmetry 12, no. 11 (October 30, 2020): 1799. http://dx.doi.org/10.3390/sym12111799.

Full text
Abstract:
The way a database schema is designed has a high impact on its performance in relational databases, which are symmetric in nature. While the problem of schema optimization is even more significant for NoSQL (“Not only SQL”) databases, existing modeling tools for relational databases are inadequate for this asymmetric setting. As a result, NoSQL modelers rely on rules of thumb to model schemas that require a high level of competence. Several studies have been conducted to address this problem; however, they are either proprietary, symmetrical, relationally dependent or post-design assessment tools. In this study, a Dynamic Schema Proposition (DSP) model for NoSQL databases is proposed to handle the asymmetric nature of today’s data. This model aims to facilitate database design and improve its performance in relation to data availability. To achieve this, data modeling styles were aggregated and classified. Existing cardinality notations were empirically extended using synthetically generated queries. A binary integer formulation was used to guide the mapping of asymmetric entities from the application’s conceptual data model to a database schema. An experiment was conducted to evaluate the impact of the DSP model on NoSQL schema production and its performance. A profound improvement was observed in read/write query performance and schema production complexities. In this regard, DSP has significant potential to produce schemas that are capable of handling big data efficiently.
APA, Harvard, Vancouver, ISO, and other styles
36

Nwankwo, Wilson. "A Review of Critical Security Challenges in SQL-based and NoSQL Systems from 2010 to 2019." International Journal of Advanced Trends in Computer Science and Engineering 9, no. 2 (April 25, 2020): 2029–35. http://dx.doi.org/10.30534/ijatcse/2020/174922020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Fotache, Marin, and Ionuț Hrubaru. "Performance Analysis of Two Big Data Technologies on a Cloud Distributed Architecture. Results for Non-Aggregate Queries on Medium-Sized Data." Scientific Annals of Economics and Business 63, s1 (December 1, 2016): 21–50. http://dx.doi.org/10.1515/saeb-2016-0134.

Full text
Abstract:
Big Data systems manage and process huge volumes of data constantly generated by various technologies in a myriad of formats. Big Data advocates (and preachers) have claimed that, relative to classical relational/SQL DataBase Management Systems, Big Data technologies such as NoSQL, Hadoop and in-memory data stores perform better. This paper compares the data processing performance of two systems belonging to the SQL (PostgreSQL/Postgres XL) and Big Data (Hadoop/Hive) camps on a distributed five-node cluster deployed in the cloud. Unlike the benchmarks in use (YCSB, TPC), a series of R modules was devised for generating random non-aggregate queries on different subschemas (with increasing data size) of the TPC-H database. The overall performance of the two systems was compared. Subsequently, a number of models were developed relating performance to the system and to various query parameters such as the number of attributes in the SELECT and WHERE clauses, the number of joins, the number of processed rows, etc.
APA, Harvard, Vancouver, ISO, and other styles
38

Kotenko, Igor, Andrey Krasov, Igor Ushakov, and Konstantin Izrailov. "An Approach for Stego-Insider Detection Based on a Hybrid NoSQL Database." Journal of Sensor and Actuator Networks 10, no. 2 (March 30, 2021): 25. http://dx.doi.org/10.3390/jsan10020025.

Full text
Abstract:
One of the reasons information security threats materialize in organizations is the insider activity of their employees. A big challenge is to detect stego-insiders: employees who create stego-channels to secretly receive malicious information and transfer confidential information across the organization's perimeter. Especially now, with the great popularity of wireless sensor networks (WSNs) and Internet of Things (IoT) devices, there is a wide variety of information that could be gathered and processed by stego-insiders. Consequently, the problem arises of identifying such intruders and their transmission channels. The paper proposes an approach to solving this problem. It provides a review of related work in terms of insider models and methods of their identification, including techniques for handling insider attacks in WSNs, as well as methods for embedding and detecting stego-embeddings. This allows singling out the basic features of stego-insiders, which can be determined from their behavior in the network. To store these attributes of user behavior, including attributes from large-scale WSNs, a hybrid NoSQL database is created based on graph and document-oriented approaches. Algorithms for determining each of the features using the NoSQL database are specified, and the general scheme of stego-insider detection is provided. To confirm the efficiency of the approach, an experiment was carried out on a real network, during which a database of user behavior was collected. User behavior features were then retrieved from the database using special SQL queries. The results of these queries are analyzed, and their applicability for determining the attributes is justified. Weak points of the approach and ways to improve them are indicated.
APA, Harvard, Vancouver, ISO, and other styles
39

Khine, Pwint Phyu, and Zhaoshun Wang. "A Review of Polyglot Persistence in the Big Data World." Information 10, no. 4 (April 16, 2019): 141. http://dx.doi.org/10.3390/info10040141.

Full text
Abstract:
The inevitability of the relationship between big data and distributed systems is indicated by the fact that data characteristics cannot be easily handled by a standalone centric approach. Among the different concepts of distributed systems, the CAP theorem (Consistency, Availability, and Partition Tolerant) points out the prominent use of the eventual consistency property in distributed systems. This has prompted the need for other, different types of databases beyond SQL (Structured Query Language) that have properties of scalability and availability. NoSQL (Not-Only SQL) databases, mostly with the BASE (Basically Available, Soft State, and Eventual consistency), are gaining ground in the big data era, while SQL databases are left trying to keep up with this paradigm shift. However, none of these databases are perfect, as there is no model that fits all requirements of data-intensive systems. Polyglot persistence, i.e., using different databases as appropriate for the different components within a single system, is becoming prevalent in data-intensive big data systems, as they are distributed and parallel by nature. This paper reflects the characteristics of these databases from a conceptual point of view and describes a potential solution for a distributed system—the adoption of polyglot persistence in data-intensive systems in the big data era.
APA, Harvard, Vancouver, ISO, and other styles
40

Bazila Banu, A., R. K. Priyadarshini, and Ponniah Thirumalaikolundusubramanian. "Prediction of Children Diabetes by Autoregressive Integrated Moving Averages Model Using Big Data and Not Only SQL." Journal of Computational and Theoretical Nanoscience 16, no. 8 (August 1, 2019): 3510–13. http://dx.doi.org/10.1166/jctn.2019.8315.

Full text
Abstract:
Enormous efforts have been made by health care organizations to assess the frequency and occurrence of diabetes among children, and the epidemiology of diabetes is estimated with different methods. To effectively manage and estimate diabetes, monitoring systems like glucose meters and Continuous Glucose Monitoring Systems (CGM) can be used. CGM is a way to determine glucose levels right through the day and night. The data obtained from such systems can be utilized effectively to manage as well as to predict diabetes. As the glucose level of the patient is monitored throughout the day, an enormous amount of data results. It is difficult to analyze such large datasets using SQL, therefore NoSQL is used for big-data-based prediction. One such NoSQL tool, ArangoDB, is used to process the dataset with the Arango Query Language (AQL). Investigations relevant to the selection of attributes required for the model are discussed. In this paper, an ARIMA model is implemented to predict diabetes among children. The model is evaluated in terms of the moving average of the glucose value of a particular person on a specific day. The results show that the ARIMA model is appropriate for predicting time-series data, especially data obtained by CGM systems.
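The moving-average step the evaluation relies on can be sketched as follows: smoothing a day's CGM glucose readings before they feed an ARIMA model. The readings below are synthetic example values, not study data.

```python
def moving_average(values, window):
    """Simple moving average; returns len(values) - window + 1 points."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

glucose = [110, 118, 131, 140, 128, 121, 115]  # mg/dL, synthetic CGM readings
smoothed = moving_average(glucose, window=3)
```

In ARIMA terms this corresponds to the MA component; the paper's full model also includes the autoregressive and differencing parts.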
APA, Harvard, Vancouver, ISO, and other styles
41

Wercelens, Polyane, Waldeyr da Silva, Fernanda Hondo, Klayton Castro, Maria Emília Walter, Aletéia Araújo, Sergio Lifschitz, and Maristela Holanda. "Bioinformatics Workflows With NoSQL Database in Cloud Computing." Evolutionary Bioinformatics 15 (January 2019): 117693431988997. http://dx.doi.org/10.1177/1176934319889974.

Full text
Abstract:
Scientific workflows can be understood as arrangements of managed activities executed by different processing entities. It is a regular Bioinformatics approach applying workflows to solve problems in Molecular Biology, notably those related to sequence analyses. Due to the nature of the raw data and the in silico environment of Molecular Biology experiments, apart from the research subject, 2 practical and closely related problems have been studied: reproducibility and computational environment. When aiming to enhance the reproducibility of Bioinformatics experiments, various aspects should be considered. The reproducibility requirements comprise the data provenance, which enables the acquisition of knowledge about the trajectory of data over a defined workflow, the settings of the programs, and the entire computational environment. Cloud computing is a booming alternative that can provide this computational environment, hiding technical details, and delivering a more affordable, accessible, and configurable on-demand environment for researchers. Considering this specific scenario, we proposed a solution to improve the reproducibility of Bioinformatics workflows in a cloud computing environment using both Infrastructure as a Service (IaaS) and Not only SQL (NoSQL) database systems. To meet the goal, we have built 3 typical Bioinformatics workflows and ran them on 1 private and 2 public clouds, using different types of NoSQL database systems to persist the provenance data according to the Provenance Data Model (PROV-DM). We present here the results and a guide for the deployment of a cloud environment for Bioinformatics exploring the characteristics of various NoSQL database systems to persist provenance data.
APA, Harvard, Vancouver, ISO, and other styles
42

Pietroń, Marcin. "Analysis of performance of selected geospatial analyses implemented on the basis of relational and NoSQL databases." Polish Cartographical Review 51, no. 4 (December 1, 2019): 167–79. http://dx.doi.org/10.2478/pcr-2019-0014.

Full text
Abstract:
Databases are a basic component of every GIS system and many geoinformation applications, and they hold a prominent place in the tool kit of any cartographer. Solutions based on the relational model have long been the standard, but there is a new, increasingly popular technological trend: solutions based on NoSQL databases, which have many advantages for processing large data sets. This paper compares the performance of selected spatial relational and NoSQL databases executing queries with selected spatial operators. It was hypothesised that a non-relational solution would prove more effective, which the results of the study confirmed. The same spatial data set was loaded into PostGIS and MongoDB databases, which ensured standardised data for comparison purposes. Then, SQL queries and JavaScript commands were used to perform specific spatial analyses, while the parameters necessary to compare performance were measured. The study's results reveal which approach is faster and uses fewer computer resources. However, it is difficult to identify clearly which technology is better, because a number of other factors have to be considered when choosing the right tool.
APA, Harvard, Vancouver, ISO, and other styles
43

Li, Ling, Zihao Liu, Feng Ye, Peng Zhang, and Ming Lu. "Availability Enhancement of Riak TS Using Resource-Aware Mechanism." Mathematical Problems in Engineering 2019 (June 24, 2019): 1–11. http://dx.doi.org/10.1155/2019/2189125.

Full text
Abstract:
The dependability and elasticity of various NoSQL stores in critical applications are still worth studying. Currently, cluster and backup technologies are commonly used for improving NoSQL availability, but these approaches do not consider the availability reduction when NoSQL stores encounter performance bottlenecks. In order to enhance the availability of Riak TS effectively, a resource-aware mechanism is proposed. Firstly, the data table is sampled according to time, the correspondence between time and data is acquired, and the real-time resource consumption is recorded by Prometheus. Based on the sampling results, a polynomial curve fitting algorithm is used to construct a prediction curve. Then the resources required for an upcoming operation are predicted from the time interval in the SQL statement, and the operation is evaluated by comparison with the remaining resources. Using a real hydrological sensor dataset as experimental data, the effectiveness of the mechanism is evaluated in terms of sensitivity and specificity. The results show that with the availability enhancement mechanism, the average specificity is 80.55% and the sensitivity is 76.31% using the initial sampling dataset. As the training dataset grows, the specificity increases from 80.55% to 92.42%, and the sensitivity increases from 76.31% to 87.90%. Besides, availability increases from 40.33% to 89.15% in hydrological application scenarios. Experimental results show that this resource-aware mechanism can effectively prevent potential availability problems and enhance the availability of Riak TS. Moreover, as the number of users and the size of the collected data grow, the method will become more accurate.
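The prediction-and-admission idea can be sketched as below: fit a curve to observed (data size, resource use) samples, then reject operations whose predicted cost exceeds the remaining budget. A degree-1 least-squares fit stands in for the paper's polynomial fitting, and all sample values are invented.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def admit(query_rows, remaining_mb, a, b):
    """Allow the operation only if its predicted cost fits the budget."""
    return a * query_rows + b <= remaining_mb

# rows scanned -> MB of memory observed in past runs (invented samples)
a, b = fit_line([1000, 2000, 4000], [10, 20, 40])
ok = admit(3000, remaining_mb=35, a=a, b=b)  # predicted cost is about 30 MB
```

Higher-degree polynomials refine the curve as more samples accumulate, which matches the paper's observation that accuracy improves with a growing training set.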
APA, Harvard, Vancouver, ISO, and other styles
44

Samra, Halima, Alice Li, and Ben Soh. "GENE2D: A NoSQL Integrated Data Repository of Genetic Disorders Data." Healthcare 8, no. 3 (August 6, 2020): 257. http://dx.doi.org/10.3390/healthcare8030257.

Full text
Abstract:
There are few sources from which to obtain clinical and genetic data for use in research in Saudi Arabia. Numerous obstacles led to the difficulty of integrating these data from silos and scattered sources to provide standardized access to large data sets for patients with common health conditions. To this end, we sought to contribute to this area and offer a practical and easy-to-implement solution. In this paper, we aim to design and implement a “not only SQL” (NoSQL) based integration framework to generate an Integrated Data Repository of Genetic Disorders Data (GENE2D) to integrate data from various genetic clinics and research centers in Saudi Arabia and provide an easy-to-use query interface for researchers to conduct their studies on large datasets. The major components involved in the GENE2D architecture consists of the data sources, the integrated data repository (IDR) as a central database, and the application interface. The IDR uses a NoSQL document store via MongoDB (an open source document-oriented database program) as a backend database. The application interface called Query Builder provides multiple services for data retrieval from the database using a custom query to answer simple or complex research questions. The GENE2D system demonstrates its potential to help grow and develop a national genetic disorders database in Saudi Arabia.
APA, Harvard, Vancouver, ISO, and other styles
45

Sahu, Arvind, and Swati Ahirrao. "Graph Based Workload Driven Partitioning System by using MongoDB." International Journal of Advances in Applied Sciences 7, no. 1 (March 1, 2018): 29. http://dx.doi.org/10.11591/ijaas.v7.i1.pp29-37.

Full text
Abstract:
<p>Enterprise web applications and websites are accessed by huge numbers of users who expect reliability and high availability, and social networking sites generate exponentially growing volumes of data, so storing data efficiently is a challenging task. SQL and NoSQL systems are the most common storage choices. Because an RDBMS cannot handle unstructured data or very large data volumes well, NoSQL is the better choice for web applications, and the graph database is one of the most efficient NoSQL models: data is stored in the form of relationships, with each tuple represented by a node and each relationship by an edge. However, handling exponentially growing data on a single server degrades performance and increases response time, so data partitioning is a good way to maintain moderate performance as the workload increases. Classical partitioning techniques such as range, hash, and round-robin are inefficient for small transactions that access only a few tuples. NoSQL data stores provide scalability and availability through various partitioning methods; graph partitioning, which partitions the data horizontally and allocates it across geographically distributed data stores, is an efficient way to achieve scalability. If the partitions are formed poorly, the result is expensive distributed transactions with long response times, so tuples should be partitioned according to their relationships. The proposed system uses Schism, a workload-aware graph partitioning technique, so that after partitioning the related tuples fall into a single partition; each node of the graph is mapped to a unique partition. The overall aim of graph partitioning is to distribute nodes across partitions so that related data lands in the same cluster.</p>
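The workload-aware idea behind Schism can be illustrated with a small sketch (the function, the transaction data, and the balance heuristic below are invented for illustration and are not the paper's algorithm): tuples co-accessed by a transaction are linked by weighted edges, and each tuple is then greedily assigned to the partition that holds most of its neighbours, minus a penalty that keeps partition sizes even.

```python
from collections import defaultdict
from itertools import combinations

def partition_workload(transactions, k, balance=1.5):
    """Greedy workload-aware graph partitioning sketch.

    Each transaction is a set of tuple ids; co-accessed tuples get
    weighted edges, and each tuple joins the partition maximizing
    (neighbour weight already there) - balance * (partition size).
    """
    weight = defaultdict(int)          # edge weight = co-access count
    nodes = set()
    for txn in transactions:
        nodes.update(txn)
        for a, b in combinations(sorted(set(txn)), 2):
            weight[(a, b)] += 1

    neighbours = defaultdict(dict)
    for (a, b), w in weight.items():
        neighbours[a][b] = w
        neighbours[b][a] = w

    assignment, sizes = {}, [0] * k
    # place the most-connected tuples first (ties broken by name
    # so the result is deterministic)
    order = sorted(nodes, key=lambda n: (-sum(neighbours[n].values()), n))
    for node in order:
        score = [0.0] * k
        for nb, w in neighbours[node].items():
            if nb in assignment:
                score[assignment[nb]] += w
        best = max(range(k), key=lambda p: score[p] - balance * sizes[p])
        assignment[node] = best
        sizes[best] += 1
    return assignment

# transactions that each touch a small set of related tuples
txns = [("u1", "u2"), ("u1", "u2"), ("u3", "u4"), ("u3", "u4"), ("u1", "u3")]
parts = partition_workload(txns, 2)
```

With this workload the frequently co-accessed pairs (u1, u2) and (u3, u4) land in the same partition, while the weak u1–u3 edge is cut, which is the behaviour the abstract describes: related tuples end up together, so small transactions stay local.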
APA, Harvard, Vancouver, ISO, and other styles
46

Estrela, Vania V. "Biomedical Cyber-Physical Systems in the Light of Database as a Service (DBaaS) Paradigm." Medical Technologies Journal 4, no. 3 (December 7, 2020): 577. http://dx.doi.org/10.26415/2572-004x-vol4iss3p577-577.

Full text
Abstract:
Background: A database (DB) that stores indexed information about drug delivery, tests, and their temporal behavior is paramount in new Biomedical Cyber-Physical Systems (BCPSs). The term Database as a Service (DBaaS) means that a provider delivers the hardware, software, and other infrastructure required to operate a company's databases on demand, instead of the company keeping an internal data warehouse. Methods: BCPS attributes are presented and discussed. Detailed knowledge must be retrieved reliably to make adequate healthcare treatment decisions, and these DBs store, organize, manipulate, and retrieve the necessary data from an ocean of Big Data (BD) associated processes; both Structured Query Language (SQL) and NoSQL DBs are considered. Results: This work investigates how biomedical DBaaSs can retrieve such knowledge reliably to support healthcare treatment decisions. Conclusion: A NoSQL DB allows more flexibility with changes while the BCPSs are running, permitting queries and data handling according to context and situation. A DBaaS must be adaptive and allow the DB to be managed across a wide variety of distinct sources, modalities, and dimensionalities, as well as handling data in conventional ways.
APA, Harvard, Vancouver, ISO, and other styles
47

Pereira, Óscar Mortágua, and Rui L. Aguiar. "Enhancing Call-Level Interfaces with Thread-Safe Local Memory Structures." International Journal of Software Engineering and Knowledge Engineering 27, no. 09n10 (November 2017): 1549–65. http://dx.doi.org/10.1142/s0218194017400101.

Full text
Abstract:
Database applications are under increasing pressure to respond effectively to ever more demanding performance requirements. Software architects can resort to several well-known architectural tactics to minimize the possibility of a performance bottleneck; the usage of call-level interfaces (CLIs) is one strategy aimed at reducing the overhead of business components. CLIs are low-level APIs that provide a high-performance environment for executing standard SQL statements on relational and also on some NoSQL database (DB) servers. In spite of these valuable features, CLIs are not thread-safe when distinct threads need to share datasets retrieved from databases through Select statements. Thus, even in situations where two or more threads could share a dataset, there is no option but to provide each thread with its own copy, thereby increasing the demand for computational resources. To overcome this drawback, this paper proposes a new natively thread-safe architecture. The implementation presented is based on a thread-safe, updatable local memory structure (LMS) in which the data retrieved from databases is kept. A proof of concept based on a Java Database Connectivity (JDBC) type 4 driver for SQL Server 2008 is presented, along with a performance assessment.
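The core idea of a shared, thread-safe local memory structure can be sketched as follows (a simplified Python illustration rather than the paper's Java/JDBC implementation; the class and method names are invented for the example): one dataset is fetched from the database once, and all threads read and update it through a lock instead of each holding a private copy.

```python
import threading

class LocalMemoryStructure:
    """Sketch of a thread-safe, updatable in-memory dataset that
    many threads share instead of each keeping its own copy."""

    def __init__(self, rows):
        self._rows = [dict(r) for r in rows]   # data fetched once
        self._lock = threading.RLock()

    def read(self, index):
        with self._lock:                       # consistent snapshot of one row
            return dict(self._rows[index])

    def apply(self, index, fn):
        """Atomically replace one row with fn(row); read-modify-write
        happens entirely under the lock, so updates never get lost."""
        with self._lock:
            self._rows[index] = fn(dict(self._rows[index]))

# ten threads share one dataset and each increments the same counter
lms = LocalMemoryStructure([{"id": 1, "hits": 0}])
workers = [threading.Thread(
    target=lambda: lms.apply(0, lambda r: {**r, "hits": r["hits"] + 1}))
    for _ in range(10)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

Because the read-modify-write in `apply` runs under the lock, the counter ends at exactly 10 regardless of thread interleaving, which is the property a naive shared result set lacks.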
APA, Harvard, Vancouver, ISO, and other styles
48

Schuszter, Ioan Cristian, and Marius Cioca. "An implementation of a fault-tolerant database system using the actor model." MATEC Web of Conferences 342 (2021): 05001. http://dx.doi.org/10.1051/matecconf/202134205001.

Full text
Abstract:
Fault-tolerant systems are an important discussion subject in our world of interconnected devices. One of the major failure points of every distributed infrastructure is the database. A data migration or an overload of one of the servers could lead to a cascade of failures and service downtime for the users. NoSQL databases sacrifice some of the consistency provided by traditional SQL databases while privileging availability and partition tolerance. This paper presents the design and implementation of a distributed in-memory database that is based on the actor model. The benefits of the actor model and development using functional languages are detailed, and suitable performance metrics are presented. A case study is also performed, showcasing the system’s capacity to quickly recover from the loss of one of its machines and maintain functionality.
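The actor idea the system builds on can be sketched in a few lines (an illustrative Python miniature, not the authors' implementation, which uses a functional language): a single thread owns the store's state and mutates it only in response to messages from a mailbox, so no explicit locking is needed and a crashed actor can be restarted with a fresh mailbox.

```python
import queue
import threading

class StoreActor:
    """Minimal actor-style key-value store: one thread owns the
    dict and processes one message at a time from its mailbox."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._data = {}
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op, key, value, reply = self._mailbox.get()
            if op == "put":
                self._data[key] = value
                reply.put(True)
            elif op == "get":
                reply.put(self._data.get(key))

    def _ask(self, op, key=None, value=None):
        # send a message and block on a private reply queue
        reply = queue.Queue()
        self._mailbox.put((op, key, value, reply))
        return reply.get()

    def put(self, key, value):
        return self._ask("put", key, value)

    def get(self, key):
        return self._ask("get", key)

store = StoreActor()
store.put("user:1", "alice")
```

Since every operation is serialized through the mailbox, concurrent clients can never observe a half-applied update; fault tolerance in a real system then comes from supervising and restarting such actors, possibly on another machine.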
APA, Harvard, Vancouver, ISO, and other styles
49

Khennou, Fadoua, Nour El Houda Chaoui, and Youness Idrissi Khamlichi. "A Migration Methodology from Legacy to New Electronic Health Record based OpenEHR." International Journal of E-Health and Medical Communications 10, no. 1 (January 2019): 55–75. http://dx.doi.org/10.4018/ijehmc.2019010104.

Full text
Abstract:
Nowadays, having an electronic health record properly adopted by medical bodies is no longer a challenge; the critical issue for health practitioners is the exchange of health data between different institutions. While some existing standards provide interoperability for e-health systems, they still do not offer a coherent solution that can be integrated and used easily. In this article, the authors present OpenEHR, a consistent health standard based on a dual-level scheme that separates the reference model from the archetypes, allowing flexible modeling of clinical concepts. However, implementing OpenEHR can be very complex. The purpose of this article is to simplify the integration of OpenEHR by introducing a stepwise methodology for migrating from a legacy SQL-based EHR to an interoperable, OpenEHR-based, document-oriented NoSQL model. Successful consolidation was achieved through the deployment of metadata and mapping rules in a Java project, which allowed practical automation of the interoperability integration process.
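The row-to-document mapping at the heart of such a SQL-to-NoSQL migration can be sketched as follows (a toy two-table schema and mapping, assumed purely for illustration; the article's actual metadata-driven mapping rules and OpenEHR archetypes are far richer): each parent row and its child rows are folded into one self-contained document of the kind a document-oriented store would hold.

```python
import sqlite3

# A toy legacy schema: patients and their observations in two tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE observation (
        patient_id INTEGER REFERENCES patient(id),
        code TEXT, value REAL);
    INSERT INTO patient VALUES (1, 'Ada');
    INSERT INTO observation VALUES (1, 'heart-rate', 72.0),
                                   (1, 'temperature', 36.8);
""")

def migrate(conn):
    """Map each patient row plus its child rows to one nested
    document, denormalizing the relational join into the shape a
    document-oriented NoSQL store would persist."""
    docs = []
    for pid, name in conn.execute("SELECT id, name FROM patient"):
        obs = [{"code": c, "value": v} for c, v in conn.execute(
            "SELECT code, value FROM observation WHERE patient_id = ?",
            (pid,))]
        docs.append({"_id": pid, "name": name, "observations": obs})
    return docs

documents = migrate(conn)
```

The design choice here mirrors the migration the abstract describes: foreign-key relations in the legacy schema become embedded arrays, so each document can be read, exchanged, or validated against an archetype without further joins.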
APA, Harvard, Vancouver, ISO, and other styles
50

Ma, Zongmin, Miriam A. M. Capretz, and Li Yan. "Storing massive Resource Description Framework (RDF) data: a survey." Knowledge Engineering Review 31, no. 4 (September 2016): 391–413. http://dx.doi.org/10.1017/s0269888916000217.

Full text
Abstract:
The Resource Description Framework (RDF) is a flexible model for representing information about resources on the Web. As a W3C (World Wide Web Consortium) Recommendation, RDF has rapidly gained popularity. With the widespread acceptance of RDF on the Web and in the enterprise, a huge amount of RDF data is being proliferated and becoming available. Efficient and scalable management of RDF data is therefore of increasing importance. RDF data management has attracted attention in the database and Semantic Web communities. Much work has been devoted to proposing different solutions to store RDF data efficiently. This paper focusses on using relational databases and NoSQL (for ‘not only SQL (Structured Query Language)’) databases to store massive RDF data. A full up-to-date overview of the current state of the art in RDF data storage is provided in the paper.
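The simplest relational layout surveyed for RDF, a single three-column triple table, can be sketched as follows (an illustrative example with an assumed schema and data; SQLite stands in for any relational back end): each RDF statement becomes one (subject, predicate, object) row, and a basic graph pattern becomes a self-join on that table.

```python
import sqlite3

# One triple table holds the whole RDF graph.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name",  "Alice"),
    ("ex:bob",   "foaf:name",  "Bob"),
])

# The graph pattern "who does Alice know, and what is that
# person's name?" becomes a self-join on the triple table.
rows = conn.execute("""
    SELECT t2.o
    FROM triples t1 JOIN triples t2 ON t1.o = t2.s
    WHERE t1.s = 'ex:alice' AND t1.p = 'foaf:knows'
      AND t2.p = 'foaf:name'
""").fetchall()
```

This layout is flexible but query-intensive: every additional pattern in a query adds another self-join, which is exactly why the survey also covers alternative layouts (property tables, vertical partitioning) and NoSQL stores for massive RDF data.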
APA, Harvard, Vancouver, ISO, and other styles