Academic literature on the topic 'Continuous Deep Analytics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Continuous Deep Analytics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Continuous Deep Analytics"

1

Koutsomitropoulos, Dimitrios, Spiridon Likothanassis, and Panos Kalnis. "Semantics in the Deep: Semantic Analytics for Big Data." Data 4, no. 2 (May 7, 2019): 63. http://dx.doi.org/10.3390/data4020063.

2

Williams, Haney W., and Steven J. Simske. "Object Tracking Continuity through Track and Trace Method." Electronic Imaging 2020, no. 16 (January 26, 2020): 299–1. http://dx.doi.org/10.2352/issn.2470-1173.2020.16.avm-258.

Abstract:
The demand for object tracking (OT) applications has been increasing for the past few decades in many areas of interest: security, surveillance, intelligence gathering, and reconnaissance. Lately, newly-defined requirements for unmanned vehicles have enhanced the interest in OT. Advancements in machine learning, data analytics, and deep learning have facilitated the recognition and tracking of objects of interest; however, continuous tracking is currently a problem of interest to many research projects. This paper presents a system implementing a means to continuously track an object and predict its trajectory based on its previous pathway, even when the object is partially or fully concealed for a period of time. The system is composed of six main subsystems: Image Processing, Detection Algorithm, Image Subtractor, Image Tracking, Tracking Predictor, and the Feedback Analyzer. Combined, these systems allow for reasonable object continuity in the face of object concealment.
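The abstract describes the Tracking Predictor only at the block-diagram level. A minimal sketch of the underlying idea, coasting a constant-velocity estimate through frames where detection fails, is given below (an illustration with invented names, not the authors' implementation):

```scala
// Minimal constant-velocity trajectory predictor (illustrative only; the
// paper's Tracking Predictor subsystem is not specified at this level).
case class State(x: Double, y: Double, vx: Double, vy: Double)

object Coast {
  // Update with a detection when one is available; otherwise coast the
  // last velocity estimate so the track survives temporary concealment.
  def step(s: State, obs: Option[(Double, Double)], dt: Double): State =
    obs match {
      case Some((ox, oy)) =>
        State(ox, oy, (ox - s.x) / dt, (oy - s.y) / dt)
      case None =>
        State(s.x + s.vx * dt, s.y + s.vy * dt, s.vx, s.vy)
    }
}
```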
3

Williams, Haney W., Steven J. Simske, and Fr Gregory Bishay. "Unify The View of Camera Mesh Network to a Common Coordinate System." Electronic Imaging 2021, no. 17 (January 18, 2021): 175–1. http://dx.doi.org/10.2352/issn.2470-1173.2021.17.avm-175.

Abstract:
The demand for object tracking (OT) applications has been increasing for the past few decades in many areas of interest, including security, surveillance, intelligence gathering, and reconnaissance. Lately, newly-defined requirements for unmanned vehicles have enhanced the interest in OT. Advancements in machine learning, data analytics, and AI/deep learning have facilitated the improved recognition and tracking of objects of interest; however, continuous tracking is currently a problem of interest in many research projects. [1] In our past research, we proposed a system that implements the means to continuously track an object and predict its trajectory based on its previous pathway, even when the object is partially or fully concealed for a period of time. The second phase of this system proposed developing a common knowledge among a mesh of fixed cameras, akin to a real-time panorama. This paper discusses the method to coordinate the cameras' view to a common frame of reference so that the object location is known by all participants in the network.
4

Derhab, Abdelouahid, Arwa Aldweesh, Ahmed Z. Emam, and Farrukh Aslam Khan. "Intrusion Detection System for Internet of Things Based on Temporal Convolution Neural Network and Efficient Feature Engineering." Wireless Communications and Mobile Computing 2020 (December 22, 2020): 1–16. http://dx.doi.org/10.1155/2020/6689134.

Abstract:
In the era of the Internet of Things (IoT), connected objects produce an enormous amount of data traffic that feeds big data analytics, which can be used to discover unseen patterns and identify anomalous traffic. In this paper, we identify five key design principles that should be considered when developing a deep learning-based intrusion detection system (IDS) for the IoT. Based on these principles, we design and implement Temporal Convolution Neural Network (TCNN), a deep learning framework for intrusion detection systems in IoT, which combines Convolution Neural Network (CNN) with causal convolution. TCNN is combined with Synthetic Minority Oversampling Technique-Nominal Continuous (SMOTE-NC) to handle the unbalanced dataset. It is also combined with efficient feature engineering techniques, which consist of feature space reduction and feature transformation. TCNN is evaluated on the Bot-IoT dataset and compared with two common machine learning algorithms, i.e., Logistic Regression (LR) and Random Forest (RF), and two deep learning techniques, i.e., LSTM and CNN. Experimental results show that TCNN achieves a good trade-off between effectiveness and efficiency. It outperforms the state-of-the-art deep learning IDSs that are tested on the Bot-IoT dataset, records an accuracy of 99.9986% for multiclass traffic detection, and shows a very close performance to CNN with respect to the training time.
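The causal convolution that distinguishes TCNN from a plain CNN restricts each output sample to depend only on current and past inputs. A minimal sketch of that operation (illustrative only, not the paper's TCNN):

```scala
// Causal 1-D convolution: output at time t depends only on inputs <= t.
// Left-padding with zeros keeps the output aligned with the input.
object CausalConv {
  def apply(x: Array[Double], kernel: Array[Double]): Array[Double] = {
    val k = kernel.length
    val padded = Array.fill(k - 1)(0.0) ++ x
    Array.tabulate(x.length) { t =>
      var acc = 0.0
      for (j <- 0 until k) acc += kernel(j) * padded(t + j)
      acc
    }
  }
}
```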
5

Albrecht, Conrad M., Rui Zhang, Xiaodong Cui, Marcus Freitag, Hendrik F. Hamann, Levente J. Klein, Ulrich Finkler, et al. "Change Detection from Remote Sensing to Guide OpenStreetMap Labeling." ISPRS International Journal of Geo-Information 9, no. 7 (July 2, 2020): 427. http://dx.doi.org/10.3390/ijgi9070427.

Abstract:
The growing amount of openly available, meter-scale geospatial vertical aerial imagery and the need of the OpenStreetMap (OSM) project for continuous updates bring the opportunity to use the former to help with the latter, e.g., by leveraging the latest remote sensing data in combination with state-of-the-art computer vision methods to assist the OSM community in labeling work. This article reports our progress to utilize artificial neural networks (ANN) for change detection of OSM data to update the map. Furthermore, we aim at identifying geospatial regions where mappers need to focus on completing the global OSM dataset. Our approach is technically backed by the big geospatial data platform Physical Analytics Integrated Repository and Services (PAIRS). We employ supervised training of deep ANNs from vertical aerial imagery to segment scenes based on OSM map tiles to evaluate the technique quantitatively and qualitatively.
6

Chen, Joy Iong Zong, and S. Smys. "Social Multimedia Security and Suspicious Activity Detection in SDN using Hybrid Deep Learning Technique." Journal of Information Technology and Digital World 2, no. 2 (May 27, 2020): 108–15. http://dx.doi.org/10.36548/jitdw.2020.2.004.

Abstract:
Social multimedia traffic is growing exponentially with the increased usage and continuous development of services and applications based on multimedia. Quality of Service (QoS), Quality of Information (QoI), scalability, reliability and other factors that are essential for social multimedia networks are realized by secure data transmission. For delivering actionable and timely insights that meet the growing demands of the user, multimedia analytics is performed by means of a trust-based paradigm. Efficient management and control of the network is facilitated by capabilities such as energy-aware networking and runtime security in Software Defined Networks (SDN). In the social multimedia context, suspicious flow detection is performed by a hybrid deep learning-based anomaly detection scheme in order to enhance SDN reliability. The entire process is divided into two modules: an anomaly detection module, in which abnormal activities are detected using a gradient descent-based support vector machine and an improved restricted Boltzmann machine, and an end-to-end data delivery module, which satisfies strict QoS requirements such as low latency and high bandwidth in SDN. In social multimedia, data delivery and anomaly detection services are essential in order to improve the efficiency and effectiveness of the system. For this purpose, we use benchmark datasets as well as real-time evaluation to experimentally evaluate the proposed scheme. Detection of malicious events such as confidential data collection, profile cloning and identity theft is performed to analyze the performance of the system using the CMU-based insider threat dataset for large-scale analysis.
7

Sapci, A. Hasan, and H. Aylin Sapci. "Artificial Intelligence Education and Tools for Medical and Health Informatics Students: Systematic Review." JMIR Medical Education 6, no. 1 (June 30, 2020): e19285. http://dx.doi.org/10.2196/19285.

Abstract:
Background: The use of artificial intelligence (AI) in medicine will generate numerous application possibilities to improve patient care, provide real-time data analytics, and enable continuous patient monitoring. Clinicians and health informaticians should become familiar with machine learning and deep learning. Additionally, they should have a strong background in data analytics and data visualization to use, evaluate, and develop AI applications in clinical practice. Objective: The main objective of this study was to evaluate the current state of AI training and the use of AI tools to enhance the learning experience. Methods: A comprehensive systematic review was conducted to analyze the use of AI in medical and health informatics education, and to evaluate existing AI training practices. PRISMA-P (Preferred Reporting Items for Systematic Reviews and Meta-Analysis Protocols) guidelines were followed. The studies that focused on the use of AI tools to enhance medical education and the studies that investigated teaching AI as a new competency were categorized separately to evaluate recent developments. Results: This systematic review revealed that recent publications recommend the integration of AI training into medical and health informatics curricula. Conclusions: To the best of our knowledge, this is the first systematic review exploring the current state of AI education in both medicine and health informatics. Since AI curricula have not been standardized and competencies have not been determined, a framework for specialized AI training in medical and health informatics education is proposed.
8

Pereira, Filipe Dwan, Samuel C. Fonseca, Elaine H. T. Oliveira, David B. F. Oliveira, Alexandra I. Cristea, and Leandro S. G. Carvalho. "Deep learning for early performance prediction of introductory programming students: a comparative and explanatory study." Revista Brasileira de Informática na Educação 28 (October 12, 2020): 723–48. http://dx.doi.org/10.5753/rbie.2020.28.0.723.

Abstract:
Introductory programming may be complex for many students. Moreover, there is a high failure and dropout rate in these courses. A potential way to tackle this problem is to predict student performance at an early stage, as it facilitates human-AI collaboration towards prescriptive analytics, where the instructors/monitors will be told how to intervene and support students, and where early intervention is crucial. However, the literature states that there is no reliable predictor yet for programming students' performance, since even large-scale analyses of multiple features have resulted in only limited predictive power. Note that Deep Learning (DL) can provide high-quality results for huge amounts of data and complex problems. In this sense, we employed DL for early prediction of students' performance using data collected in the very first two weeks of introductory programming courses offered to a total of 2058 students during 6 semesters (longitudinal study). We compared our results with the state-of-the-art, an Evolutionary Algorithm (EA) that automatically creates and optimises machine learning pipelines. Our DL model achieved an average accuracy of 82.5%, which is statistically superior to the model constructed and optimised by the EA (p-value << 0.05 even with Bonferroni correction). In addition, we also adapted the DL model in a stacking ensemble for continuous prediction purposes. As a result, our regression model explained ~62% of the final grade variance. In closing, we also provide results on the interpretation of our regression model to understand the leading factors of success and failure in introductory programming.
9

Savoska, Snezana, and Blagoj Ristevski. "Towards Implementation of Big Data Concepts in a Pharmaceutical Company." Open Computer Science 10, no. 1 (October 27, 2020): 343–56. http://dx.doi.org/10.1515/comp-2020-0201.

Abstract:
Nowadays, big data is a widely utilized concept that has been spreading quickly in almost every domain. For pharmaceutical companies, using this concept is a challenging task because of the permanent pressure and business demands created through the legal requirements, research demands and standardization that have to be adopted. These legal and standards' demands are associated with human healthcare safety and drug control that demands continuous and deep data analysis. Companies update their procedures to the particular laws, standards, market demands and regulations all the time by using contemporary information technology. This paper highlights some important aspects of the experience and change methodology used in one Macedonian pharmaceutical company, which has employed information technology solutions that successfully tackle legal and business pressures when dealing with a large amount of data. We used a holistic view and deliverables analysis methodology to gain top-down insights into the possibilities of big data analytics. Also, structured interviews with the company's managers were used for information collection and proactive methodology with workshops was used in data integration toward the implementation of big data concepts. The paper emphasizes the information and knowledge used in this domain to improve awareness for the needs of big data analysis to achieve a competitive advantage. The main results are focused on systematizing the whole company's data, information and knowledge and propose a solution that integrates big data to support managers' decision-making processes.
10

Yadav, Piyush, Dhaval Salwala, Dibya Prakash Das, and Edward Curry. "Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing." International Journal of Semantic Computing 14, no. 03 (September 2020): 423–55. http://dx.doi.org/10.1142/s1793351x20500051.

Abstract:
Complex Event Processing (CEP) is an event processing paradigm to perform real-time analytics over streaming data and match high-level event patterns. Presently, CEP is limited to processing structured data streams. Video streams are complicated due to their unstructured data model and limit CEP systems to perform matching over them. This work introduces a graph-based structure for continuously evolving video streams, which enables the CEP system to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relationship interactions as edges over time and space. It creates a semantic knowledge representation of video data derived from the detection of high-level semantic concepts from the video using an ensemble of deep learning models. A CEP-based state optimization — VEKG-Time Aggregated Graph (VEKG-TAG) — is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time length. We defined a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries and are applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with an F-Score ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG was able to reduce 99% and 93% of VEKG nodes and edges, respectively, with 5.19× faster search time, achieving sub-second median latency of 4–20 ms.
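A minimal sketch of the VEKG idea, with objects as nodes and timestamped relations as edges, plus a TAG-style aggregation over a window (type and method names are invented for illustration, not the paper's API):

```scala
// Sketch of a VEKG-like structure: video objects become nodes, pairwise
// relations become timestamped edges; aggregation summarizes a time
// window into one graph, keeping the interval each relation was observed.
case class Edge(src: String, dst: String, relation: String, t: Long)

case class VideoGraph(edges: Vector[Edge]) {
  def aggregate(from: Long, to: Long): Map[(String, String, String), (Long, Long)] =
    edges
      .filter(e => e.t >= from && e.t <= to)
      .groupBy(e => (e.src, e.dst, e.relation))
      .view
      .mapValues(es => (es.map(_.t).min, es.map(_.t).max))
      .toMap
}
```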

Dissertations / Theses on the topic "Continuous Deep Analytics"

1

Mickos, Johan. "Design of a Network Library for Continuous Deep Analytics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232129.

Abstract:
Data-intensive stream processing applications have escalated in popularity in recent years, producing numerous designs and implementations for handling unbounded streams of high-volume data. The sheer size and dimensionality of these types of data require multiple machines to push processing throughput to hundreds of millions of events per second at low latencies. Advances in the fields of distributed deep learning and stream processing have highlighted networking-specific challenges and requirements such as flow control and scalable communication abstractions. Existing stream processing frameworks, however, only address subsets of these requirements. This thesis proposes a design and implementation, in the Rust programming language, of a modular networking library able to address these requirements together. The design entails protocol framing, buffer management, stream multiplexing, flow control, and stream prioritization. The implemented prototype handles multiplexing of logical streams and credit-based flow control through a flexible application programming interface. The prototype is tested for overall throughput and round-trip latency in a distributed environment, displaying promising results in both categories.
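The credit-based flow control mentioned in the abstract amounts to simple bookkeeping: the receiver grants a budget of sends, and the sender stalls when the budget is exhausted. A toy sketch of that bookkeeping (the thesis implements this in Rust; the Scala below is an illustration with invented names):

```scala
// Toy credit-based flow control: the receiver grants credits, the sender
// consumes one per message and stalls at zero. Illustrative bookkeeping
// only; the thesis's library also handles framing and multiplexing.
final class CreditGate(initial: Int) {
  private var credits = initial

  // Sender side: returns false when no credit is available.
  def trySend(): Boolean = synchronized {
    if (credits > 0) { credits -= 1; true } else false
  }

  // Receiver side: called after draining its buffer, returning credit.
  def grant(n: Int): Unit = synchronized { credits += n }
}
```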
2

Bjuhr, Oscar. "Dynamic Configuration of a Relocatable Driver and Code Generator for Continuous Deep Analytics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232079.

Abstract:
Modern stream processing engines usually use the Java virtual machine (JVM) as their execution platform. The JVM increases the portability and safety of applications at the cost of not fully utilising the performance of the physical machines. Being able to use hardware accelerators such as GPUs for computationally heavy analysis of data streams is also restricted when using the JVM. The project Continuous Deep Analytics (CDA) explores the possibility of a stream processor executing native code directly on the underlying hardware using Rust. Rust is a young programming language which can statically guarantee the absence of memory errors and data races in programs without incurring performance penalties during runtime. Rust is built on top of LLVM, which gives Rust a theoretical possibility to compile to a large set of target platforms. Each specific target platform does, however, require a specifically configured runtime environment for Rust's compiler to work properly. The CDA compiler will run in a distributed setting where the compiler has to be able to relocate to different nodes to handle node failures. Setting up a relocatable Rust compiler in such a setting can be error prone, and Docker is explored as a solution to this problem. A concurrent, thread-based system is implemented in Scala for building Docker images and compiling Rust in containers. Docker shows potential for enabling easy relocation of the driver without manual configuration, and has no major effect on Rust's compile time. The large Docker images required to compile Rust are a drawback of the solution: they require substantial network traffic to relocate the driver, so reducing the size of the images would make the solution more responsive.
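The thesis's Scala system builds Docker images and compiles Rust inside containers. A minimal sketch of that workflow using scala.sys.process (the image tag, paths, and commands are placeholders, not the thesis's actual configuration):

```scala
import scala.sys.process._

// Minimal sketch: build an image with a Rust toolchain, then compile a
// crate inside a container. Image tag and mount paths are placeholders.
object ContainerBuild {
  def buildImage(contextDir: String): Int =
    Seq("docker", "build", "-t", "cda-rustc", contextDir).!

  def compileCrate(crateDir: String): Int =
    Seq("docker", "run", "--rm",
        "-v", s"$crateDir:/src", "-w", "/src",
        "cda-rustc", "cargo", "build", "--release").!
}
```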
3

Segeljakt, Klas. "A Scala DSL for Rust code generation." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235358.

Abstract:
Continuous Deep Analytics (CDA) is a new form of analytics with performance requirements exceeding what the current generation of distributed systems can offer. This thesis is part of a five-year project in collaboration between RISE SICS and KTH to develop a next-generation distributed system capable of CDA. The two issues which the system aims to solve are computation sharing and hardware acceleration. The former refers to how big data and machine learning libraries such as TensorFlow, Pandas and Numpy must collaborate in the most efficient way possible. Hardware acceleration relates to how the back-ends of current-generation general purpose data processing systems such as Spark and Flink are bottlenecked by the Java Virtual Machine (JVM). As the JVM abstracts over the underlying hardware, its applications become portable but also forfeit the opportunity to fully exploit the available hardware resources. This thesis explores the area of Domain Specific Languages (DSLs) and code generation as a solution to hardware acceleration. The idea is to translate incoming queries to the system into low-level code, tailored to each worker machine's specific hardware. To this end, two Scala DSLs for generating Rust code have been developed for the translation step. Rust is a new, low-level programming language with a unique take on memory management which makes it as safe as Java and as fast as C. Scala is a language which is well suited to the development of DSLs due to its flexible syntax and semantics. The first DSL is implemented as a string interpolator. The interpolator splices strings of Rust code together, at compile time or runtime, and passes the result to an external process for static checking. The second DSL instead provides an API for constructing an abstract syntax tree, which after construction can be traversed and printed into Rust source code. The API combines three concepts: heterogeneous lists, fluent interfaces, and algebraic data types. These allow the user to express advanced Rust syntax such as polymorphic structs, functions, and traits, without sacrificing type safety.
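In Scala, the string-interpolator approach boils down to an extension method on StringContext. A minimal sketch of the mechanism (the external static-checking step is omitted, and names are invented):

```scala
// Minimal custom interpolator: rust"..." splices values into Rust source.
// The thesis's interpolator additionally pipes the result to an external
// process for static checking; that step is omitted here.
object RustInterp {
  implicit class RustHelper(private val sc: StringContext) extends AnyVal {
    def rust(args: Any*): String = sc.s(args: _*)
  }
}

object Demo extends App {
  import RustInterp._
  val n = 42
  println(rust"fn main() { println!(\"{}\", $n); }")
}
```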

Books on the topic "Continuous Deep Analytics"

1

Crowley, Kate, Jenny Stewart, Adrian Kay, and Brian Head. Reconsidering Policy. Policy Press, 2020. http://dx.doi.org/10.1332/policypress/9781447333111.001.0001.

Abstract:
For all nation-states, the context in which public policies must be developed and applied continues to become more complex and demanding. Yet policy studies has not fully responded to the challenges and opportunities represented by these developments. While governance has drawn attention to a globalising and network-based policy world, politics and the role of the state have been de-emphasised. The book addresses this imbalance through a process of reconsideration – re-visiting traditional policy-analytic concepts and re-developing and extending new ones. The objects of reconsideration are of two types: firstly, themes relating to ‘deep’ policy: policy systems; institutions, the state and borders; and secondly, policy-in-action: information, advice, implementation and policy change. Through these eight perspectives, each developed as a chapter of this book, the authors have produced a melded approach to policy, which they call systemic institutionalism. They define this approach as one that provides a broad analytic perspective that links policy with governance (implemented action) on the one hand, and the state (structured authority) on the other. By identifying research agendas based on these insights, the book suggests how real world issues might be substantively addressed, in particular more complex and challenging issues, through examples that bring out the ‘policy’ (the history and potential for collective public action) in the system.

Book chapters on the topic "Continuous Deep Analytics"

1

Srivastava, Kavita. "Deep Learning With Analytics on Edge." In Cases on Edge Computing and Analytics, 111–33. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-4873-8.ch006.

Abstract:
The steep rise in autonomous systems and the Internet of Things in recent years has influenced the way in which computation is performed. With built-in AI (artificial intelligence) in IoT and cyber-physical systems, the need for high-performance computing has emerged. Cloud computing is no longer sufficient for the sensor-driven systems which continuously keep on collecting data from the environment. Sensor-based systems such as autonomous vehicles require analysis of data and predictions in real time, which is not possible with the centralized cloud alone. This scenario has given rise to a new computing paradigm called edge computing. Edge computing requires the storage of data, analysis, and prediction to be performed on the network edge as opposed to a cloud server, thereby enabling quick response and less storage overhead. The intelligence at the edge can be obtained through deep learning. This chapter contains information about various deep learning frameworks, hardware, and systems for edge computing and examples of deep neural network training using the Caffe 2 framework.
2

Lu, Yang. "Deep Learning of Data Analytics in Healthcare." In Theory and Practice of Business Intelligence in Healthcare, 151–65. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2310-0.ch007.

Abstract:
The importance of data as the fuel of artificial intelligence is self-evident. As the degree of informatization in various industries deepens, the amount of accumulated data continues to increase; however, data processing capability lags far behind the exponential growth of data volume. To gather accurate results, more and more data should be collected. However, the more data collected, the slower the processing and analyzing of that data. The emergence of deep learning solves the problem of how to process large amounts of data quickly and precisely. With the advancement of technology, the healthcare industry has achieved a promising level of needed data. Moreover, if deep learning can be used to aid disease diagnosis, patient data can be processed efficiently, useful information can be screened, valuable diagnostic rules can be mined, and disease diagnosis results can be better formulated and treated. It is foreseeable that deep learning has the potential to improve the effectiveness and the efficiency of healthcare and relevant industries.
3

Choudhury, Punam Dutta, Ankumoni Bora, and Kandarpa Kumar Sarma. "Big Spectrum Data and Deep Learning Techniques for Cognitive Wireless Networks." In Deep Learning and Neural Networks, 994–1015. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-0414-7.ch055.

Abstract:
The present world is data driven. From social sciences to frontiers of research in science and engineering, one common factor is the continuous data generation. It has started to affect our daily lives. Big data concepts are found to have significant impact in modern wireless communication systems. The analytical tools of big data have been identified as full scale autonomous mode of operation which necessitates a strong role to be played by learning based systems. The chapter has focused on the synergy of big data and deep learning for generating better efficiency in evolving communication frameworks. The chapter has also included discussion on machine learning and cognitive technologies w.r.t. big data and mobile communication. Cyber Physical Systems being indispensable elements of M2M communication, Wireless Sensor Networks and its role in CPS, cognitive radio networking and spectrum sensing have also been discussed. It is expected that spectrum sensing, big data and deep learning will play vital roles in enhancing the capabilities of wireless communication systems.
4

Choudhury, Punam Dutta, Ankumoni Bora, and Kandarpa Kumar Sarma. "Big Spectrum Data and Deep Learning Techniques for Cognitive Wireless Networks." In Deep Learning Innovations and Their Convergence With Big Data, 33–60. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-3015-2.ch003.

Abstract:
The present world is data driven. From social sciences to frontiers of research in science and engineering, one common factor is the continuous data generation. It has started to affect our daily lives. Big data concepts are found to have significant impact in modern wireless communication systems. The analytical tools of big data have been identified as full scale autonomous mode of operation which necessitates a strong role to be played by learning based systems. The chapter has focused on the synergy of big data and deep learning for generating better efficiency in evolving communication frameworks. The chapter has also included discussion on machine learning and cognitive technologies w.r.t. big data and mobile communication. Cyber Physical Systems being indispensable elements of M2M communication, Wireless Sensor Networks and its role in CPS, cognitive radio networking and spectrum sensing have also been discussed. It is expected that spectrum sensing, big data and deep learning will play vital roles in enhancing the capabilities of wireless communication systems.
5

Moorthy, Usha, and Usha Devi Gandhi. "A Survey of Big Data Analytics Using Machine Learning Algorithms." In Advances in Human and Social Aspects of Technology, 95–123. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-2863-0.ch005.

Abstract:
Big data is an information management system realized through the integration of various traditional data techniques. Big data usually contains a high volume of personal and authenticated information, which makes privacy a major concern. Various techniques have evolved to provide security and effective processing of collected data. Machine Learning (ML) is considered one of the data technologies that handles the central and hidden parts of collected data. Like ML algorithms, Deep Learning (DL) algorithms learn automatically from the data and are considered to enhance the performance and security of the collected massive data. This paper reviews security issues in big data and evaluates the performance of ML and DL in a critical environment. First, the paper reviews ML and DL algorithms. Next, the study focuses on issues and challenges of ML and their remedies. Following that, the study investigates DL concepts in big data. Finally, the study surveys methods adopted in recent research trends and concludes with a future scope.
6

Zinn-Justin, Jean. "Euclidean path integrals and quantum mechanics (QM)." In Quantum Field Theory and Critical Phenomena, 18–41. Oxford University Press, 2021. http://dx.doi.org/10.1093/oso/9780198834625.003.0002.

Abstract:
Functional integrals are basic tools to study first quantum mechanics (QM), and quantum field theory (QFT). The path integral formulation of QM is well suited to the study of systems with an arbitrary number of degrees of freedom. It makes a smooth transition between nonrelativistic QM and QFT possible. The Euclidean functional integral also emphasizes the deep connection between QFT and the statistical physics of systems with short-range interactions near a continuous phase transition. The path integral representation of the matrix elements of the quantum statistical operator e^{-βH} for Hamiltonians of the simple separable form p²/2m + V(q) is derived. To the path integral corresponds a functional measure and expectation values called correlation functions, which are generalized moments, and related to quantum observables, after an analytic continuation in time. The path integral corresponding to the Euclidean action of a harmonic oscillator, to which is added a time-dependent external force, is calculated explicitly. The result is used to generate Gaussian correlation functions and also to reduce the evaluation of path integrals to perturbation theory. The path integral also provides a convenient tool to derive semi-classical approximations.
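For orientation, the representation referred to has the standard textbook form (not quoted from the chapter):

```latex
\langle q_f \,|\, \mathrm{e}^{-\beta H} \,|\, q_i \rangle
  = \int_{q(0)=q_i}^{q(\hbar\beta)=q_f} \mathcal{D}q \;
    \mathrm{e}^{-\mathcal{S}(q)/\hbar},
\qquad
\mathcal{S}(q) = \int_0^{\hbar\beta} \mathrm{d}\tau
  \left[ \tfrac{1}{2}\, m\, \dot q^2(\tau) + V\!\big(q(\tau)\big) \right],
```

where S is the Euclidean action and the integral runs over paths with the indicated endpoints.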

Conference papers on the topic "Continuous Deep Analytics"

1

Feng, Tengfei, Hong Tang, Miao Wang, Chi Zhang, Hongkai Wang, and Fengyu Cong. "Continuous Estimation of Left Ventricular Hemodynamic Parameters Based on Heart Sound and PPG Signals Using Deep Neural Network." In 2020 International Conference on Sensing, Measurement & Data Analytics in the era of Artificial Intelligence (ICSMD). IEEE, 2020. http://dx.doi.org/10.1109/icsmd50554.2020.9261681.

2

Jimeno Yepes, Antonio, Jianbin Tang, and Benjamin Scott Mashford. "Improving Classification Accuracy of Feedforward Neural Networks for Spiking Neuromorphic Chips." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/274.

Abstract:
Deep Neural Networks (DNN) achieve human-level performance in many image analytics tasks, but DNNs are mostly deployed to GPU platforms that consume a considerable amount of power. New hardware platforms using lower-precision arithmetic achieve drastic reductions in power consumption. More recently, brain-inspired spiking neuromorphic chips have achieved even lower power consumption, on the order of milliwatts, while still offering real-time processing. However, for deploying DNNs to energy-efficient neuromorphic chips, the incompatibility between the continuous neurons and synaptic weights of traditional DNNs and the discrete spiking neurons and synapses of neuromorphic chips needs to be overcome. Previous work has achieved this by training a network to learn continuous probabilities, before it is deployed to a neuromorphic architecture, such as the IBM TrueNorth Neurosynaptic System, by randomly sampling these probabilities. The main contribution of this paper is a new learning algorithm that learns a TrueNorth configuration ready for deployment. We achieve this by directly training a binary hardware crossbar that accommodates the TrueNorth axon configuration constraints, and we propose a different neuron model. Results of our approach trained on electroencephalogram (EEG) data show a significant improvement over previous work (76% vs 86% accuracy) while maintaining state-of-the-art performance on the MNIST handwritten data set.
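The sampling-based deployment used in the previous work, which this paper replaces with direct training, can be sketched as Bernoulli sampling of a learned probability matrix into a binary crossbar (an illustration with invented names, not the paper's new algorithm):

```scala
import scala.util.Random

// Baseline deployment step from prior work: turn a learned
// connection-probability matrix into a binary crossbar by Bernoulli
// sampling. Illustrative only; the paper trains the crossbar directly.
object SampleCrossbar {
  def apply(probs: Array[Array[Double]], rng: Random): Array[Array[Int]] =
    probs.map(_.map(p => if (rng.nextDouble() < p) 1 else 0))
}
```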
3

Pennathur, Sumita, Fabio Baldessari, Mike Kattah, Paul J. Utz, and Juan G. Santiago. "Electrophoresis in Nanochannels." In ASME 2006 2nd Joint U.S.-European Fluids Engineering Summer Meeting Collocated With the 14th International Conference on Nuclear Engineering. ASMEDC, 2006. http://dx.doi.org/10.1115/fedsm2006-98558.

Abstract:
Micro- and nanofabrication technology enables the application of electrokinetics as a method of performing chemical analyses and achieving liquid pumping in electronically-controlled microchip systems with no moving parts. We are studying and leveraging the unique separation modalities offered by nanoscale electrokinetic channels. We report analytical, numerical, and experimental investigations of nanochannel electrophoretic transport and separation dynamics of neutral and charged analytes. Our study includes continuum-theory-based analytical and numerical studies of nanofluidic electrophoretic separation dynamics, as well as experimental validation of these models. We have used 40, 100, and 1560 nm deep channels etched in fused silica to independently measure the mobility and valence of small ions. We also use these devices to separate 10 to 100 base pair DNA in the absence of a gel separation matrix. The effective free-solution mobilities of the ds-DNA oligonucleotides measured in the 1560 nm deep channel are consistent with reported literature values, while smaller values of the mobility were measured in 40 nm deep channels for the same charged species. The goal of our work is to explore and exploit electrokinetic flow regimes with extreme scales of length and charge density.
4

Pavlou, Dimitrios G. "Flow-Riser Interaction in Deep-Sea Mining: An Analytic Approach for Multi-Layered FRP Risers." In ASME 2018 37th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/omae2018-78576.

Abstract:
Fiber-reinforced polymeric laminated materials are suitable for risers in deep-sea applications due to their superior strength, corrosion and fatigue resistance, light weight, low maintenance cost, low transportation cost, and ability for continuous manufacturing. However, due to their anisotropic material properties, modeling the dynamic response due to interaction with the internal flow and the sea water is more complicated. In the present work, a model for flow-induced instability analysis of long, multi-layered, fiber-reinforced risers is developed. The motion equations take into account the elastic flexural restoring force of the anisotropic material, the centrifugal force of the fluid flowing in curved portions of the pipe, the Coriolis force, the inertia force of the mass of pump, pipe, and fluid, and the effect of the surrounding water. Combination of the motion equations yields a fourth-order partial differential equation in terms of flexural displacements. The transfer matrix method is applied to the above equation to calculate the critical flow velocities. The "global stiffness matrix" of the pipe-pump system, containing the boundary conditions, the anisotropic material properties and the flow parameters, is derived. The condition for a non-trivial solution is solved numerically, yielding the values of the critical flow velocity, i.e. the internal flow velocity causing flow-induced pipeline instability. The results are affected by the anisotropic properties of the material, the mass of the hanging pump, the drag coefficient, and the flow parameters. The results are commented on and discussed.
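For reference, the classical fourth-order equation for a straight uniform pipe conveying fluid has the form below; the paper's version additionally carries the anisotropic laminate stiffness, the pump mass, and the hydrodynamic terms (a textbook statement, not the paper's equation):

```latex
EI \frac{\partial^4 w}{\partial x^4}
+ M U^2 \frac{\partial^2 w}{\partial x^2}
+ 2 M U \frac{\partial^2 w}{\partial x\, \partial t}
+ (M + m) \frac{\partial^2 w}{\partial t^2} = 0
```

Here w is the lateral deflection, U the internal flow velocity, and M and m the fluid and pipe masses per unit length; the critical velocity is the U at which a non-trivial w first appears.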
5

Hsieh, Shou-Shing, Huang-Hsiu Tsai, Chih-Yi Lin, Ching-Fang Huang, and Cheng-Ming Chien. "Gaseous Slip Flow in a Micro-Channel." In ASME 2003 1st International Conference on Microchannels and Minichannels. ASMEDC, 2003. http://dx.doi.org/10.1115/icmm2003-1033.

Abstract:
An experimental and theoretical study of low Reynolds number compressible gas flow in a microchannel is presented. Nitrogen gas was used. The channels were microfabricated on silicon wafers and were 50 μm deep, 200 μm wide and 24000 μm long. The Knudsen number ranged from 0.001 to 0.02. Pressure drops were measured at different mass flow rates in terms of Re and found to be in good agreement with those predicted by analytical solutions in which a 2-D continuous flow model with first-order slip boundary conditions is employed and solved by perturbation methods.
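The first-order slip condition referred to is conventionally the Maxwell form (a textbook statement, not quoted from the paper):

```latex
u_s - u_w = \frac{2 - \sigma_v}{\sigma_v}\, \lambda
            \left. \frac{\partial u}{\partial n} \right|_{\text{wall}}
```

where λ is the mean free path, σ_v the tangential momentum accommodation coefficient, and n the wall normal.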
6

Alexander, Paul W., Diann Brei, and John W. Halloran. "DEPP Co-Extruded Functionally Graded Piezoceramics." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-80217.

Abstract:
Functionally Graded Piezoceramics (FGP) offer performance similar to conventional piezoceramic actuators while reducing the problems associated with their bonded construction (high stress levels, large stress discontinuities, delamination, etc.). This paper presents the Dual Electro/Piezo Property (DEPP) gradient method and the tools necessary for designing, modeling, and producing DEPP FGP actuators, including: material property gradient maps, a Micro-Fabrication by Co-eXtrusion (MFCX) process, and experimentally validated analytic and numeric performance and stress modeling methodologies that account for continuous and layered material gradients and complex electric field profiles. These models predict a dramatic internal stress reduction achieved by the DEPP method. Preliminary reliability testing confirms this, with piezoelectric actuator lifetimes of over 10^10 cycles, an improvement of almost four orders of magnitude compared to conventional piezoceramic actuation.
7

Hasan, A. Rashid, Rayhana N. Sohel, and Xiaowei Wang. "Estimating Zonal Flow Contributions in Deep Water Assets From Pressure and Temperature Data." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-62537.

Abstract:
Producing hydrocarbon from deep-water assets is extremely challenging and expensive. A good estimate of rates from multiple pay zones is essential for well monitoring, surveillance, and workover decisions. Such information can be gleaned from flowing fluid pressure and temperature; deep-water wells are often well instrumented, offering such data on a continuous basis. In this study, a model is presented that estimates zonal flow contributions based on energy and momentum balances. Kinetic and heat energy coming from the reservoir fluid to the production tubing is accounted for in the model. The momentum balance for the wellbore takes into account the differing flow profiles in laminar and turbulent flows. In addition, when sandface temperature data are not available, a recently developed analytical model for the effect of Joule-Thompson expansion on sandface temperature was used to estimate sandface temperature from reservoir temperature. The model developed can be applied to any reservoir with multiple pay zones and is especially useful for deep-water assets where production logging is practically impossible. Available field data for multiphase flow were used to validate the model. Sensitivity analyses were performed, which showed that accurate temperature data are essential for the model to estimate zonal contributions accurately.
8

Baird, Eric, and Kamran Mohseni. "Surface Tension Actuators Droplets in Microchannels." In ASME 2005 International Mechanical Engineering Congress and Exposition. ASMEDC, 2005. http://dx.doi.org/10.1115/imece2005-79371.

Abstract:
A unified model is presented for the velocity of discrete droplets in microchannels actuated by surface tension modulation. Specific results are derived for the cases of electrowetting on dielectric (EWOD), dielectrophoresis (DEP), continuous electrowetting (CEW), and thermocapillary pumping (TCP). This treatment differs from previously published works by presenting one unified analytic model which is then simply applied to the specific cases of EWOD, CEW, DEP and TCP. In addition, the roles of equilibrium contact angle and contact angle hysteresis are unambiguously described for each method. The model is shown to agree with experimental and theoretical results presented previously, predicting fluid velocities for a broad range of applications in digitized microfluidics.
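For the EWOD case, the actuating change in contact angle is conventionally described by the Young-Lippmann relation (a textbook formula, not quoted from the paper):

```latex
\cos\theta(V) = \cos\theta_0
  + \frac{\varepsilon_0\, \varepsilon_r}{2\, \gamma\, d}\, V^2
```

where d is the dielectric thickness and γ the liquid-vapor surface tension.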
9

Fernandez, Charles, Arun Kr Dev, Rose Norman, Wai Lok Woo, and Shashi Bhushan Kumar. "Dynamic Positioning System: Systematic Weight Assignment for DP Sub-Systems Using Multi-Criteria Evaluation Technique Analytic Hierarchy Process and Validation Using DP-RI Tool With Deep Learning Algorithm." In ASME 2019 38th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2019. http://dx.doi.org/10.1115/omae2019-95485.

Abstract:
The Dynamic Positioning (DP) system of a vessel involves complex interactions between a large number of sub-systems. Each sub-system plays a unique role in the continuous overall DP function for safe and reliable operation of the vessel. Rating the significance of, or assigning weightings to, the DP sub-systems in different operating conditions is a complex task that requires input from many stakeholders. The weighting assignment is a critical step in determining the reliability of the DP system during complex marine and offshore operations. Thus, an accurate weighting assignment is crucial as it, in turn, influences the decision-making of the operator concerning the DP system functionality execution. Often DP operators prefer to rely on intuition in assigning the weightings. However, this introduces an inherent uncertainty and level of inconsistency into the decision making. The systematic assignment of weightings requires a clear definition of criteria and objectives and data collection with the DP system operating continuously in different environmental conditions. The sub-systems of the overall DP system are characterized by multiple attributes, resulting in a high number of comparisons and thereby making weighting distribution complicated. If the weighting distribution were performed by simplifying the attributes, making the decision by excluding some of them, or compromising on the cognitive effort, then this could lead to inaccurate decision making. Multi-Criteria Decision Making (MCDM) methods have evolved over several decades and have been used in various applications within the maritime and oil and gas industries. DP, being a complex system, naturally lends itself to the implementation of MCDM techniques to assign the weight distribution among its sub-systems. In this paper, the Analytic Hierarchy Process (AHP) methodology is used for weight assignment among the DP sub-systems. An AHP model is effective in obtaining domain knowledge from numerous experts and representing knowledge-guided indexing. The approach involved examination of several criteria in terms of both quantitative and qualitative variables. A state-of-the-art advisory decision-making tool, Dynamic Positioning Reliability Index (DP-RI), is used to validate the results from AHP. The weighting assignments from AHP are close to reality and are verified using the tool through real-life scenarios.
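The AHP core referred to is standard: the weight vector is the principal eigenvector of the pairwise-comparison matrix, and its consistency is checked before the weights are accepted (textbook formulas, not the paper's notation):

```latex
A\, w = \lambda_{\max}\, w, \qquad
CI = \frac{\lambda_{\max} - n}{n - 1}, \qquad
CR = \frac{CI}{RI}
```

where A is the n by n pairwise-comparison matrix, RI the random consistency index, and CR at or below 0.1 is the usual acceptance threshold.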
10

Agbaji, Armstrong Lee. "An Empirical Analysis of Artificial Intelligence, Big Data and Analytics Applications in Exploration and Production Operations." In International Petroleum Technology Conference. IPTC, 2021. http://dx.doi.org/10.2523/iptc-21312-ms.

Abstract:
Oil and gas operations are now being "datafied." Datafication in the oil industry refers to systematically extracting data from the various oilfield activities that are naturally occurring. Successful digital transformation hinges critically on an organization's ability to extract value from data. Extracting and analyzing data is getting harder as the volume, variety, and velocity of data continue to increase. Analytics can help us make better decisions only if we can trust the integrity of the data going into the system. As digital technology continues to play a pivotal role in the oil industry, the role of reliable data and analytics has never been more consequential. This paper is an empirical analysis of how Artificial Intelligence (AI), big data and analytics have redefined oil and gas operations. It takes a deep dive into the various AI and analytics technologies reshaping the industry, specifically as they relate to exploration and production operations, as well as other sectors of the industry. Several illustrative examples of transformative technologies reshaping the oil and gas value chain, along with their innovative applications in real-time decision making, are highlighted. It also describes the significant challenges that AI presents in the oil industry, including algorithmic bias, cybersecurity, and trust. With digital transformation poised to re-invent the oil and gas industry, the paper also discusses the energy transition and makes some bold predictions about the oil industry of the future and the role of AI in that future. Big data lays the foundation for the broad adoption and application of artificial intelligence. Analytics and AI are going to be very powerful tools for making predictions with a precision that was previously impossible. Analysis of some of the AI and analytics tools studied shows that there is a huge gap between the people who use the data and the metadata. AI is only as good as the ecosystem that supports it. Trusting AI and feeling confident in its decisions starts with trustworthy data. The data needs to be clean, accurate, devoid of bias, and protected. As the relationship between man and machine continues to evolve, and organizations continue to rely on data analytics to provide decision support services, it is imperative that we safeguard against making important technical and management decisions based on invalid or biased data and algorithms. The variegated outcomes observed from some of the AI and analytics tools studied in this research show that, when it comes to adopting AI and analytics, the worm remains buried in the apple.

Reports on the topic "Continuous Deep Analytics"

1

Yatsymirska, Mariya. SOCIAL EXPRESSION IN MULTIMEDIA TEXTS. Ivan Franko National University of Lviv, February 2021. http://dx.doi.org/10.30970/vjo.2021.49.11072.

Abstract:
The article investigates functional techniques of extralinguistic expression in multimedia texts; the effectiveness of figurative expressions as a reaction to modern events in Ukraine and their influence on the formation of public opinion is shown. Publications of journalists, broadcasts of media resonators, experts, public figures, politicians, readers are analyzed. The language of the media plays a key role in shaping the worldview of the young political elite in the first place. The essence of each statement is a focused thought that reacts to events in the world or in one’s own country. The most popular platform for mass information and social interaction is, first of all, network journalism, which is characterized by mobility and unlimited time and space. Authors have complete freedom to express their views in direct language, including their own word formation. Phonetic, lexical, phraseological and stylistic means of speech create expression of the text. A figurative word, a good aphorism or proverb, a paraphrased expression, etc. enhance the effectiveness of a multimedia text. This is especially important for headlines that simultaneously inform and influence the views of millions of readers. Given the wide range of issues raised by the Internet as a medium, research in this area is interdisciplinary. The science of information, combining language and social communication, is at the forefront of global interactions. The Internet is an effective source of knowledge and a forum for free thought. Nonlinear texts (hypertexts) – «branching texts or texts that perform actions on request», multimedia texts change the principles of information collection, storage and dissemination, involving billions of readers in the discussion of global issues. Mastering the word is not an easy task if the author of the publication is not well-read, is not deep in the topic, does not know the psychology of the audience for which he writes. Therefore, the study of media broadcasting is an important component of the professional training of future journalists. The functions of the language of the media require the authors to make the right statements and convincing arguments in the text. Journalism education is not only knowledge of imperative and dispositive norms, but also apodictic ones. In practice, this means that there are rules in media creativity that are based on logical necessity. Apodicticity is the first sign of impressive language on the platform of print or electronic media. Social expression is a combination of creative abilities and linguistic competencies that a journalist realizes in his activity. Creative self-expression is realized in a set of many important factors in the media: the choice of topic, convincing arguments, logical presentation of ideas and deep philological education. Linguistic art, in contrast to painting, music, sculpture, accumulates all visual, auditory, tactile and empathic sensations in a universal sign – the word. The choice of the word for the reproduction of sensory and semantic meanings, its competent use in the appropriate context distinguishes the journalist-intellectual from other participants in forums, round tables, analytical or entertainment programs. Expressive speech in the media is a product of the intellect (ability to think) of all those who write on socio-political or economic topics. In the same plane with him – intelligence (awareness, prudence), the first sign of which (according to Ivan Ogienko) is a good knowledge of the language. 
Intellectual language is an important means of organizing a journalistic text. It, on the one hand, logically conveys the author’s thoughts, and on the other – encourages the reader to reflect and comprehend what is read. The richness of language is accumulated through continuous self-education and interesting communication. Studies of social expression as an important factor influencing the formation of public consciousness should open up new facets of rational and emotional media broadcasting; to trace physical and psychological reactions to communicative mimicry in the media. Speech mimicry as one of the methods of disguise is increasingly becoming a dangerous factor in manipulating the media. Mimicry is an unprincipled adaptation to the surrounding social conditions; one of the most famous examples of an animal characterized by mimicry (change of protective color and shape) is a chameleon. In a figurative sense, chameleons are called adaptive journalists. Observations show that mimicry in politics is to some extent a kind of game that, like every game, is always conditional and artificial.