To see the other types of publications on this topic, follow the link: Framework II (Computer file).

Journal articles on the topic 'Framework II (Computer file)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Framework II (Computer file).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Flügel, W. A., and C. Busch. "Development and implementation of an Integrated Water Resources Management System (IWRMS)." Advances in Science and Research 7, no. 1 (April 26, 2011): 83–90. http://dx.doi.org/10.5194/asr-7-83-2011.

Full text
Abstract:
One of the innovative objectives in the EC project BRAHMATWINN was the development of a stakeholder-oriented Integrated Water Resources Management System (IWRMS). The toolset integrates the findings of the project and presents them in a user-friendly way for decision support in sustainable integrated water resources management (IWRM) in river basins. IWRMS is a framework which integrates different types of basin information and supports the development of IWRM options for climate change mitigation. It is based on the River Basin Information System (RBIS) data models and delivers a graphical user interface for stakeholders. A special interface was developed for the integration of the enhanced DANUBIA model input and the NetSyMod model with its Mulino decision support system (mulino mDss) component. The web-based IWRMS contains and combines different types of data and methods to provide river basin data and information for decision support. IWRMS is based on a three-tier software framework which uses (i) HTML/JavaScript at the client tier, (ii) the PHP programming language to realize the application tier, and (iii) a PostgreSQL/PostGIS database tier to manage and store all data, except the DANUBIA modelling raw data, which are file-based and registered in the database tier. All three tiers can reside on one or on different computers and are adapted to the local hardware infrastructure. IWRMS as well as RBIS are based on Open Source Software (OSS) components, and flexible and time-saving access to the database is guaranteed by web-based interfaces for data visualization and retrieval. The IWRMS is accessible via the BRAHMATWINN homepage, http://www.brahmatwinn.uni-jena.de, and a user manual for the RBIS is available for download as well.
APA, Harvard, Vancouver, ISO, and other styles
2

BERGENTI, FEDERICO, and AGOSTINO POGGI. "SUPPORTING AGENT-ORIENTED MODELLING WITH UML." International Journal of Software Engineering and Knowledge Engineering 12, no. 06 (December 2002): 605–18. http://dx.doi.org/10.1142/s0218194002001086.

Full text
Abstract:
Software engineering relies on the possibility of describing a system at different levels of abstraction. Agent-oriented software engineering introduces a new level of abstraction, which we call the agent level, to allow the architect to model a system in terms of interacting agents. This level of abstraction is not yet supported by an accepted set of tools and notations, even if a number of proposals are available. This paper introduces: (i) a UML-based notation capable of modelling a system at the agent level and (ii) a development framework, called ParADE, exploiting such a notation. The notation we propose is formalized in terms of a UML profile, and it supports the realisation of artefacts modelling two basic concepts of the agent level, i.e., the architecture of the multi-agent system and the ontology followed by agents. The choice of formalising our notation in terms of a UML profile allows it to be used with any off-the-shelf CASE tool. The ParADE framework takes advantage of this choice by providing a code generator capable of producing skeletons of FIPA-compliant agents from XMI files of agent-oriented models. The developer is requested to complete the generated skeletons exploiting the services that ParADE and the underlying agent platform provide.
APA, Harvard, Vancouver, ISO, and other styles
3

Widmann, Tobias, Lucas P. Kreuzer, Matthias Kühnhammer, Andreas J. Schmid, Lars Wiehemeier, Sebastian Jaksch, Henrich Frielinghaus, et al. "Flexible Sample Environment for the Investigation of Soft Matter at the European Spallation Source: Part II—The GISANS Setup." Applied Sciences 11, no. 9 (April 29, 2021): 4036. http://dx.doi.org/10.3390/app11094036.

Full text
Abstract:
The FlexiProb project is a joint effort of three soft matter groups at the Universities of Bielefeld, Darmstadt, and Munich with scientific support from the European Spallation Source (ESS), the small-K advanced diffractometer (SKADI) beamline development group of the Jülich Centre for Neutron Science (JCNS), and the Heinz Maier-Leibnitz Zentrum (MLZ). Within this framework, a flexible and quickly interchangeable sample carrier system for small-angle neutron scattering (SANS) at the ESS was developed. In the present contribution, the development of a sample environment for the investigation of soft matter thin films with grazing-incidence small-angle neutron scattering (GISANS) is introduced. Therefore, components were assembled on an optical breadboard for the measurement of thin film samples under controlled ambient conditions, with adjustable temperature and humidity, as well as the optional in situ recording of the film thickness via spectral reflectance. Samples were placed in a 3D-printed spherical humidity metal chamber, which enabled the accurate control of experimental conditions via water-heated channels within its walls. A separately heated gas flow stream supplied an adjustable flow of dry or saturated solvent vapor. First test experiments proved the concept of the setup and respective component functionality.
APA, Harvard, Vancouver, ISO, and other styles
4

Lee, Seokjun, Wooyeon Jo, Soowoong Eo, and Taeshik Shon. "ExtSFR: scalable file recovery framework based on an Ext file system." Multimedia Tools and Applications 79, no. 23-24 (January 29, 2019): 16093–111. http://dx.doi.org/10.1007/s11042-019-7199-y.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Kim, Hyungchan, Sungbum Kim, Yeonghun Shin, Wooyeon Jo, Seokjun Lee, and Taeshik Shon. "Ext4 and XFS File System Forensic Framework Based on TSK." Electronics 10, no. 18 (September 20, 2021): 2310. http://dx.doi.org/10.3390/electronics10182310.

Full text
Abstract:
Recently, the number of Internet of Things (IoT) devices, such as artificial intelligence (AI) speakers and smartwatches, using a Linux-based file system has increased. Moreover, these devices are connected to the Internet and generate vast amounts of data. To efficiently manage these generated data and improve the processing speed, the function is improved by updating the file system version or using new file systems, such as an Extended File System (XFS), B-tree file system (Btrfs), or Flash-Friendly File System (F2FS). However, in the process of updating the existing file system, the metadata structure may be changed or the analysis of the newly released file system may be insufficient, making it impossible for existing commercial tools to extract and restore deleted files. In an actual forensic investigation, when deleted files become unrecoverable, important clues may be missed, making it difficult to identify the culprit. Accordingly, a framework for extracting and recovering files based on The Sleuth Kit (TSK) is proposed by deriving the metadata changed in Ext4 file system journal checksum v3 and XFS file system v5. Thereafter, by comparing the accuracy and recovery rate of the proposed framework with existing commercial tools using the experimental dataset, we conclude that sustained research on file systems should be conducted from the perspective of forensics.
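As an aside for readers, the file-system identification step such a forensic framework performs can be illustrated with the documented ext-family on-disk constants: the superblock starts 1024 bytes into the partition, and its s_magic field (offset 56, little-endian) holds 0xEF53. The sketch below is illustrative only, not code from the proposed TSK-based framework:

```python
import struct

EXT_SUPERBLOCK_OFFSET = 1024   # the superblock starts 1 KiB into the partition
EXT_MAGIC_OFFSET = 56          # position of the s_magic field within the superblock
EXT_MAGIC = 0xEF53             # ext2/3/4 magic value (stored little-endian)

def is_ext_image(path):
    """Return True if the image at `path` carries an ext-family superblock magic."""
    with open(path, "rb") as f:
        f.seek(EXT_SUPERBLOCK_OFFSET + EXT_MAGIC_OFFSET)
        raw = f.read(2)
    if len(raw) < 2:
        return False
    (magic,) = struct.unpack("<H", raw)
    return magic == EXT_MAGIC
```

A real recovery tool would go on to parse the superblock's block-size and inode-table fields; only the magic check is shown here.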
APA, Harvard, Vancouver, ISO, and other styles
6

Sheu, Ruey-Kai, Shyan-Ming Yuan, Win-Tsung Lo, and Chan-I. Ku. "Design and Implementation of File Deduplication Framework on HDFS." International Journal of Distributed Sensor Networks 10, no. 4 (January 2014): 561340. http://dx.doi.org/10.1155/2014/561340.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jang, Hyeongwon, Sang Youp Rhee, Jae Eun Kim, Yoonhee Kim, Hyuck Han, Sooyong Kang, and Hyungsoo Jung. "AutoBahn: a concurrency control framework for non-volatile file buffer." Cluster Computing 23, no. 2 (July 29, 2019): 895–910. http://dx.doi.org/10.1007/s10586-019-02964-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Yuan, Shuang, Chenxi Zhang, and Pin‐Han Ho. "A secure business framework for file purchasing in vehicular networks." Security and Communication Networks 1, no. 3 (May 2008): 259–68. http://dx.doi.org/10.1002/sec.24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

BAFTIU, Naim, and Samedin KRABAJ. "Creating Prototype Virus - Destroying Files and Texts on Any Computer." PRIZREN SOCIAL SCIENCE JOURNAL 3, no. 1 (April 26, 2019): 62. http://dx.doi.org/10.32936/pssj.v3i1.78.

Full text
Abstract:
To study how viruses work and how to prevent them, we developed a very simple application that demonstrates a prototype virus and its functioning, as well as how to neutralize a file by breaking down its structure at the bit level. Purpose: to understand how a virus works by programming it in a high-level programming language; in our case, the C# programming language with Visual Studio, which uses the .NET Framework and the Windows Forms Application module. The same application can also be used to neutralize a file known to be infected, by interfering with the file itself and disrupting its binary structure. Key words: Component, Virus, File, C# Programming, Visual Studio.
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Tao, XiangZheng Sun, Wei Xue, Nan Qiao, Huang Huang, JiWu Shu, and Weimin Zheng. "ParSA: High-throughput scientific data analysis framework with distributed file system." Future Generation Computer Systems 51 (October 2015): 111–19. http://dx.doi.org/10.1016/j.future.2014.10.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Prochnow, Dave. "Framework II: Database plus." Electronic Library 5, no. 2 (February 1987): 80–82. http://dx.doi.org/10.1108/eb044736.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Won, Jiwoong, Oseok Kwon, Junhee Ryu, Dongeun Lee, and Kyungtae Kang. "iFetcher: User-Level Prefetching Framework With File-System Event Monitoring for Linux." IEEE Access 6 (2018): 46213–26. http://dx.doi.org/10.1109/access.2018.2864820.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Ozel, Filiz, Robert Pahle, and Manu Juyal. "An XML Framework for Simulation and Analysis of Buildings." International Journal of Architectural Computing 1, no. 2 (June 2003): 191–204. http://dx.doi.org/10.1260/147807703771799175.

Full text
Abstract:
This study focuses on the problem of how to structure spatial and component-based building data with the intention of using it in the simulation and analysis of the performance of buildings. Special attention was paid to the interoperability and optimization of the resulting data files. The study builds its investigation on the XML (Extensible Markup Language) data modeling framework. The authors have studied different ways of arranging building information in XML format for effective data storage and have developed a data modeling framework called bmXML for buildings. Initial results are two-fold: a VBA application was developed to create the appropriate building model in AutoCAD and write building data in bmXML format, and a JAVA application was developed to view the file thus created. This paper primarily focuses on the former, i.e. the AutoCAD application and the bmXML format.
APA, Harvard, Vancouver, ISO, and other styles
14

KIM, E. "A Video Streaming File Server Framework for Digital Video Broadcasting Environments." IEICE Transactions on Information and Systems E88-D, no. 3 (March 1, 2005): 654–57. http://dx.doi.org/10.1093/ietisy/e88-d.3.654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Madeja, Matej, Jaroslav Porubän, Sergej Chodarev, Matúš Sulír, and Filip Gurbáľ. "Empirical Study of Test Case and Test Framework Presence in Public Projects on GitHub." Applied Sciences 11, no. 16 (August 6, 2021): 7250. http://dx.doi.org/10.3390/app11167250.

Full text
Abstract:
Automated tests are often considered an indicator of project quality. In this paper, we performed a large analysis of 6.3 M public GitHub projects using Java as the primary programming language. We created an overview of test occurrence in publicly available GitHub projects and of the use of test frameworks in them. The results showed that 52% of the projects contain at least one test case. However, there is a large number of example tests that do not represent relevant production code testing. It was also found that there is only a poor correlation between the number of occurrences of the word “test” in different parts of the project (e.g., file paths, file names, file content, etc.) and the number of test cases, creation date, date of the last commit, number of commits, or number of watchers. Testing framework analysis confirmed that JUnit is the most used testing framework with a 48% share. TestNG, considered the second most popular Java unit testing framework, occurred in only 3% of the projects.
APA, Harvard, Vancouver, ISO, and other styles
16

Eggbeer, D., R. Bibb, and R. Williams. "The computer-aided design and rapid prototyping fabrication of removable partial denture frameworks." Proceedings of the Institution of Mechanical Engineers, Part H: Journal of Engineering in Medicine 219, no. 3 (March 1, 2005): 195–202. http://dx.doi.org/10.1243/095441105x9372.

Full text
Abstract:
This study explores the application of computer-aided design and manufacture (CAD/CAM) to the process of electronically surveying a scanned dental cast as a prior stage to producing a sacrificial pattern for a removable partial denture (RPD) metal alloy framework. These are designed to retain artificial replacement teeth in the oral cavity. A cast produced from an impression of a patient's mouth was digitally scanned and the data converted to a three-dimensional computer file that could be read by the computer-aided design (CAD) software. Analysis and preparation were carried out in the digital environment according to established dental principles. The CAD software was then used to design the framework and generate a standard triangulation language (STL) file in preparation for its manufacture using rapid prototyping (RP) methods. Several RP methods were subsequently used to produce sacrificial patterns, which were then cast in a chromium-cobalt alloy using conventional methods and assessed for accuracy of fit. This work demonstrates that CAD/CAM techniques can be used for electronic dental cast analysis, preparation, and design of RPD frameworks. It also demonstrates that RP-produced patterns may be successfully cast using conventional methods and that the resulting frameworks can provide a satisfactory fit.
APA, Harvard, Vancouver, ISO, and other styles
17

Yashiro, Hisashi, Koji Terasaki, Takemasa Miyoshi, and Hirofumi Tomita. "Performance evaluation of a throughput-aware framework for ensemble data assimilation: the case of NICAM-LETKF." Geoscientific Model Development 9, no. 7 (July 5, 2016): 2293–300. http://dx.doi.org/10.5194/gmd-9-2293-2016.

Full text
Abstract:
Abstract. In this paper, we propose the design and implementation of an ensemble data assimilation (DA) framework for weather prediction at a high resolution and with a large ensemble size. We consider the deployment of this framework on the data throughput of file input/output (I/O) and multi-node communication. As an instance of the application of the proposed framework, a local ensemble transform Kalman filter (LETKF) was used with a Non-hydrostatic Icosahedral Atmospheric Model (NICAM) for the DA system. Benchmark tests were performed using the K computer, a massive parallel supercomputer with distributed file systems. The results showed an improvement in total time required for the workflow as well as satisfactory scalability of up to 10 K nodes (80 K cores). With regard to high-performance computing systems, where data throughput performance increases at a slower rate than computational performance, our new framework for ensemble DA systems promises drastic reduction of total execution time.
APA, Harvard, Vancouver, ISO, and other styles
18

Waterborg, Jakob H., and Rodney E. Harrington. "A standard multidimensional, easy-access data file structure for Apple II computers." Computer Methods and Programs in Biomedicine 23, no. 3 (December 1986): 255–60. http://dx.doi.org/10.1016/0169-2607(86)90059-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Rokicka-Murszewska, Karolina. "Glosa do wyroku NSA z 14 lutego 2019 r., II OSK 626/17 (aprobująca)." Radca Prawny, no. 2 (27) (2021): 219–27. http://dx.doi.org/10.4467/23921943rp.21.019.14212.

Full text
Abstract:
Gloss to the Supreme Administrative Court of Poland judgment of February 14, 2019 – case file no. II OSK 626/17 The author analyzes crucial problems identified by the Supreme Administrative Court of Poland in the judgment of February 14, 2019 – case file no. II OSK 626/17 concerning the application in practice of the principle of good neighborhood, referred to in Article 61(1)(1) of the Act on Spatial Planning and Development. It should be noted that although the existing buildings determine the manner in which the conditions of a new investment are specified (due to the necessity of existence of “at least one neighboring plot”), various functions may co-exist within the framework of the existing and future developments, provided that they can be mutually reconciled. The purpose of the gloss is to demonstrate the appropriateness of the decision of the Court, which concluded that the existing housing development will have a negative impact on the residents of the new single-family housing estate due to the conflicting functions of the sites. In this respect, the reviewed judgment constitutes a certain attempt to prevent the so-called single-unit urbanism.
APA, Harvard, Vancouver, ISO, and other styles
20

KLEIN, SHMUEL T., TAMAR C. SEREBRO, and DANA SHAPIRA. "MODELING DELTA ENCODING OF COMPRESSED FILES." International Journal of Foundations of Computer Science 19, no. 01 (February 2008): 137–46. http://dx.doi.org/10.1142/s0129054108005589.

Full text
Abstract:
The Compressed Delta Encoding paradigm is introduced, i.e., delta encoding directly on two given compressed files without decompressing them. Here we explore the case where the two given files are compressed using LZW, and devise the theoretical framework for modeling delta encoding of compressed files. In practice, although working on the compressed versions in processing time proportional to the size of the compressed files, our target file may be considerably smaller than the corresponding LZW form.
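As background, the LZW scheme the paper targets builds its dictionary greedily while scanning the input; the textbook-style codec below is a hypothetical sketch of the code sequences being compared, not the paper's delta-encoding algorithm (which operates on such sequences without decompressing):

```python
def lzw_compress(data: str):
    """Encode a string into a list of LZW codes (dictionary seeded with bytes 0-255)."""
    dictionary = {chr(i): i for i in range(256)}
    w, out = "", []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc                      # extend the current phrase
        else:
            out.append(dictionary[w])   # emit code for the longest known phrase
            dictionary[wc] = len(dictionary)
            w = c
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Rebuild the original string, reconstructing the dictionary on the fly."""
    dictionary = {i: chr(i) for i in range(256)}
    w = chr(codes[0])
    out = [w]
    for k in codes[1:]:
        # the k == len(dictionary) special case handles the cScSc pattern
        entry = dictionary[k] if k in dictionary else w + w[0]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)
```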
APA, Harvard, Vancouver, ISO, and other styles
21

Jin, Peiquan, Hong Chen, Xujian Zhao, Xiaowen Li, and Lihua Yue. "Indexing temporal information for web pages." Computer Science and Information Systems 8, no. 3 (2011): 711–37. http://dx.doi.org/10.2298/csis100407025j.

Full text
Abstract:
Temporal information plays important roles in Web search, as Web pages intrinsically involve crawled time and most Web pages contain time keywords in their content. How to integrate temporal information into Web search engines has been a research focus in recent years, within which some key issues such as temporal-textual indexing and temporal information extraction have to be studied first. In this paper, we first present a framework for a temporal-textual Web search engine. We then concentrate on designing a new hybrid index structure for the temporal and textual information of Web pages. In particular, we propose to integrate a B+-tree, an inverted file, and a typical temporal index called the MAP21-tree to handle temporal-textual queries. We study five mechanisms to implement a hybrid index structure for temporal-textual queries, which use different ways to organize the inverted file, B+-tree, and MAP21-tree. After a theoretical analysis of the performance of these five index structures, we conduct experiments on both simulated and real data sets to compare their performance. The experimental results show that among all the index schemes, the first-inverted-file-then-MAP21-tree index structure has the best query performance and is thus an acceptable choice as the temporal-textual index for future time-aware search engines.
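The inverted-file-with-intervals idea can be sketched as follows; the class and method names are hypothetical, and the real design combines a B+-tree and a MAP21-tree rather than linearly scanning the postings:

```python
from collections import defaultdict

class TemporalTextIndex:
    """Toy hybrid index: an inverted file whose postings carry validity intervals."""

    def __init__(self):
        self.inverted = defaultdict(list)  # term -> [(page_id, start, end)]

    def add(self, page_id, terms, start, end):
        # index each distinct term of the page with its temporal extent
        for t in set(terms):
            self.inverted[t].append((page_id, start, end))

    def query(self, term, qstart, qend):
        # keep postings whose interval overlaps the query interval [qstart, qend]
        return [pid for pid, s, e in self.inverted.get(term, [])
                if s <= qend and e >= qstart]
```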
APA, Harvard, Vancouver, ISO, and other styles
22

Sun, Li Yuan, and Yan Mei Zhang. "The Research and Application of the Variant Fuzz Testing Framework for Log Based on the Structured Data." Applied Mechanics and Materials 602-605 (August 2014): 1749–52. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.1749.

Full text
Abstract:
Fuzz testing is a software testing technique which provides invalid, unexpected, or random data to the inputs of a computer program to test the robustness and security of procedures [1]. For structured data like logs, the variant fuzz testing framework adopts a configuration file and applies traversal and stream processing to complete the structured fuzzing. This article starts with the features of structured data, then introduces the design and implementation of the variant fuzz testing framework, including its function modules, class structure, and logic processing. In conclusion, the framework is compared with the zzuf tool, and its advantages are elaborated.
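A minimal illustration of mutation-based fuzzing of structured log data, assuming a pipe-delimited record format (the actual framework drives its mutations from a configuration file, so the field layout here is a hypothetical stand-in):

```python
import random
import string

SAFE_BYTES = string.ascii_letters + string.digits  # never emit the delimiter itself

def fuzz_log_line(line, seed=0, mutations=2, delimiter="|"):
    """Mutate single characters inside the fields of a structured log line,
    leaving the field layout (delimiter positions) intact."""
    rng = random.Random(seed)  # deterministic, so fuzz cases are reproducible
    fields = line.split(delimiter)
    for _ in range(mutations):
        i = rng.randrange(len(fields))
        if not fields[i]:
            continue                   # skip empty fields
        chars = list(fields[i])
        chars[rng.randrange(len(chars))] = rng.choice(SAFE_BYTES)
        fields[i] = "".join(chars)
    return delimiter.join(fields)
```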
APA, Harvard, Vancouver, ISO, and other styles
23

Xiong, Qi, Xinman Zhang, Wen-Feng Wang, and Yuhong Gu. "A Parallel Algorithm Framework for Feature Extraction of EEG Signals on MPI." Computational and Mathematical Methods in Medicine 2020 (May 27, 2020): 1–10. http://dx.doi.org/10.1155/2020/9812019.

Full text
Abstract:
In this paper, we present a parallel framework based on MPI for extracting power spectrum features of EEG signals from a large dataset, so as to improve the speed of brain signal processing. At present, the Welch method has been widely used to estimate the power spectrum. However, the traditional Welch method takes a lot of time, especially for large datasets. In view of this, we added MPI to the traditional Welch method and developed it into a reusable master-slave parallel framework. As long as the EEG data of any format are converted into a text file of a specified format, the power spectrum features can be extracted quickly by this parallel framework. In the proposed parallel framework, the EEG signals recorded by a channel are divided into N overlapping data segments. Then, the PSDs of the N segments are computed by some nodes in parallel. The results are collected and summarized by the master node. The final PSD results of each channel are saved in a text file, which can be read and analyzed in Microsoft Excel. This framework can be implemented not only on clusters but also on a desktop computer. In the experiment, we deployed this framework on a desktop computer with a 4-core Intel CPU. It took only a few minutes to extract the power spectrum features from the 2.85 GB EEG dataset, seven times faster than using Python. This framework makes it easy for users who do not have any parallel programming experience to construct parallel algorithms to extract the EEG power spectrum.
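The per-segment computation that the framework parallelizes can be sketched in single-process form as follows. This is a hypothetical, minimal Welch estimate (Hann-windowed overlapping segments, averaged periodograms), whereas the paper distributes the N segments across MPI worker nodes and gathers the results at the master:

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, overlap=0.5):
    """Minimal Welch estimate: average the periodograms of Hann-windowed,
    overlapping segments of the signal x sampled at fs Hz."""
    step = int(nperseg * (1 - overlap))
    window = np.hanning(nperseg)
    scale = fs * (window ** 2).sum()          # density normalization
    periodograms = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg] * window
        spec = np.fft.rfft(seg)
        periodograms.append((np.abs(spec) ** 2) / scale)
    psd = np.mean(periodograms, axis=0)       # average over segments
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, psd
```

In the MPI version each worker would receive a subset of the segment indices, compute its partial periodogram sum, and the master would average the gathered partial sums.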
APA, Harvard, Vancouver, ISO, and other styles
24

Mishra, Kritika, Ilanthenral Kandasamy, Vasantha Kandasamy W. B., and Florentin Smarandache. "A Novel Framework Using Neutrosophy for Integrated Speech and Text Sentiment Analysis." Symmetry 12, no. 10 (October 18, 2020): 1715. http://dx.doi.org/10.3390/sym12101715.

Full text
Abstract:
With increasing data on the Internet, it is becoming difficult to analyze every bit of it and make sure it can be used efficiently by all businesses. One useful technique using Natural Language Processing (NLP) is sentiment analysis. Various algorithms can be used to classify textual data on various scales, ranging from just positive-negative and positive-neutral-negative to a wide spectrum of emotions. While a lot of work has been done on text, only a smaller amount of research has been done on audio datasets. An audio file contains more features that can be extracted from its amplitude and frequency than a plain text file. The neutrosophic set is symmetric in nature, and similarly the refined neutrosophic set has the refined indeterminacies I1 and I2 in the middle between the extremes Truth T and False F. Neutrosophy, which deals with the concept of indeterminacy, is another little-explored topic in NLP. Though neutrosophy has been used in sentiment analysis of textual data, it has not been used in speech sentiment analysis. We have proposed a novel framework that performs sentiment analysis on audio files by calculating their Single-Valued Neutrosophic Sets (SVNS) and clustering them into positive-neutral-negative, and combines these results with those obtained by performing sentiment analysis on the text files of those audio recordings.
APA, Harvard, Vancouver, ISO, and other styles
25

van den Brink, Cors, and Susanne Wuijts. "Towards an effective protection of groundwater resources: putting policy into practice with the drinking water protection file." Water Policy 18, no. 3 (November 18, 2015): 635–53. http://dx.doi.org/10.2166/wp.2015.197.

Full text
Abstract:
Groundwater in the Netherlands is a major resource for drinking water. As such it must be carefully monitored and managed. Evaluation of the European Water Framework Directive (EU-WFD) showed that protection of this valuable resource needs improvement. The Drinking Water Protection File identifies the measures needed per water abstraction site. The Protection File is part of the Dutch national EU-WFD implementation strategy, intended to improve the protection level of groundwater resources. It consists of a national top-down framework and a regional bottom-up process, which respectively enforce commitment and enhance stakeholder awareness of the risks and of the actions needed for the identification and implementation of measures enhancing the protection level of groundwater resources. It is as yet uncertain whether the initial implementation of the measures in the first planning cycle is adequate to obtain compliance with EU-WFD objectives in 2021, because (i) some of these measures are on a voluntary basis and (ii) standards for the remediation of point source pollution and the allowed application of nutrients do not currently comply with drinking water standards.
APA, Harvard, Vancouver, ISO, and other styles
26

Sethuraman, R., and T. Sasiprabha. "Semantic web service-based messaging framework for prediction of fitness data using Hadoop distributed file system." Automatika 60, no. 3 (July 3, 2019): 349–59. http://dx.doi.org/10.1080/00051144.2019.1637175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Meddah, Ishak H. A., and Khaled Belkadi. "Parallel Distributed Patterns Mining Using Hadoop MapReduce Framework." International Journal of Grid and High Performance Computing 9, no. 2 (April 2017): 70–85. http://dx.doi.org/10.4018/ijghpc.2017040105.

Full text
Abstract:
The treatment of large data is proving more difficult along several axes, but the arrival of the MapReduce framework offers a solution to this problem. With it we can analyze and process vast amounts of data, distributing the computational work across a cluster of virtual servers running in a cloud or across a large set of machines, while process mining provides an important bridge between data mining and business process analysis. Process mining techniques allow information to be extracted from event logs. In general, there are two steps in process mining: correlation definition or discovery, and process inference or composition. Firstly, the authors' work consists of mining small patterns from log traces. These patterns are the representation of the trace execution from the log file of a business process. In this step, they use existing techniques. The patterns are represented by finite state automata or their regular expressions. The final model is the combination of only two types of small patterns, which are represented by the regular expressions (ab)* and (ab*c)*. Secondly, the authors compute these patterns in parallel and then combine the small patterns using the MapReduce framework. There are two parts: the first is the Map step, in which they mine patterns from execution traces; the second is the combination of these small patterns in the Reduce step. The authors' results are promising in that they show that their approach is scalable, general, and precise. It minimizes the execution time by the use of the MapReduce framework.
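The Map and Reduce steps described in the abstract can be sketched as follows, with the two small-pattern shapes written as anchored regular expressions. This is a hypothetical miniature of the idea, not the authors' Hadoop implementation:

```python
import re
from collections import Counter
from functools import reduce

# The two small-pattern shapes from the paper, as anchored regular expressions.
PATTERNS = {
    "(ab)*": re.compile(r"(ab)*"),
    "(ab*c)*": re.compile(r"(ab*c)*"),
}

def map_trace(trace):
    # Map step: emit which pattern(s) a single execution trace matches.
    return Counter(name for name, rx in PATTERNS.items() if rx.fullmatch(trace))

def reduce_counts(c1, c2):
    # Reduce step: merge the per-trace counters.
    return c1 + c2

traces = ["abab", "abbc", "ac", "abc", "ba"]
totals = reduce(reduce_counts, map(map_trace, traces), Counter())
```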
APA, Harvard, Vancouver, ISO, and other styles
28

Wang, C., F. Hu, X. Hu, S. Zhao, W. Wen, and C. Yang. "A HADOOP-BASED DISTRIBUTED FRAMEWORK FOR EFFICIENT MANAGING AND PROCESSING BIG REMOTE SENSING IMAGES." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-4/W2 (July 10, 2015): 63–66. http://dx.doi.org/10.5194/isprsannals-ii-4-w2-63-2015.

Full text
Abstract:
Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other uses. However, it is challenging to efficiently store, query and process such big data due to data- and computing-intensive issues. In this paper, a Hadoop-based framework is proposed to manage and process the big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a wealth of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed directly in parallel in a scalable computing environment. The experiment results show that the proposed framework can efficiently manage and process such big remote sensing data.
APA, Harvard, Vancouver, ISO, and other styles
29

Sim, Kwang Mong. "Epistemic logic and logical omniscience II: A unifying framework." International Journal of Intelligent Systems 15, no. 2 (February 2000): 129–52. http://dx.doi.org/10.1002/(sici)1098-111x(200002)15:2<129::aid-int3>3.0.co;2-t.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Wu, Yixin, Yuqiang Sun, Cheng Huang, Peng Jia, and Luping Liu. "Session-Based Webshell Detection Using Machine Learning in Web Logs." Security and Communication Networks 2019 (November 22, 2019): 1–11. http://dx.doi.org/10.1155/2019/3093809.

Full text
Abstract:
Attackers upload webshells onto a web server to achieve the purpose of stealing data, launching DDoS attacks, modifying files with malicious intent, etc. Once these objectives are accomplished, they bring huge losses to website managers. With the gradual development of encryption and obfuscation technology, the most common detection approaches, using taint analysis and feature matching, may become less useful. Instead of applying source file codes, POST contents, or all received traffic, this paper demonstrates an intelligent and efficient framework that employs precise sessions derived from web logs to detect webshell communication. Features were extracted from the raw sequence data in web logs, while a statistical method based on time intervals was proposed to identify sessions specifically. Besides, the paper leveraged a long short-term memory model and a hidden Markov model, respectively, to constitute the framework. Finally, the framework was evaluated with real data. The experiment shows that the LSTM-based model can achieve a higher accuracy rate of 95.97% with a recall rate of 96.15%, which is a much better performance than the HMM-based model. Moreover, the experiment demonstrated the high efficiency of the proposed approach in terms of quick detection without source code, especially when it only considers detection over a period of time, as it takes 98.5% less time than the cited related approach to get the result. As long as webshell behavior is detected, we can pinpoint the anomalous session and utilize the statistical method to find the webshell file accurately.
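The time-interval session identification can be sketched as below; the fixed 1800-second idle threshold is an illustrative assumption, not the statistically derived threshold from the paper:

```python
def split_sessions(timestamps, gap=1800):
    """Group request timestamps (in seconds) into sessions, starting a new
    session whenever the idle interval between requests exceeds `gap`."""
    sessions = []
    current = []
    for t in sorted(timestamps):
        if current and t - current[-1] > gap:
            sessions.append(current)   # idle gap too long: close the session
            current = []
        current.append(t)
    if current:
        sessions.append(current)
    return sessions
```

Each resulting session would then be turned into a feature sequence and fed to the LSTM or HMM classifier.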
APA, Harvard, Vancouver, ISO, and other styles
31

Li, Wei, Chaolu Feng, Kun Yu, and Dazhe Zhao. "MISS-D: A fast and scalable framework of medical image storage service based on distributed file system." Computer Methods and Programs in Biomedicine 186 (April 2020): 105189. http://dx.doi.org/10.1016/j.cmpb.2019.105189.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Sukmana, Farid, and Fahrur Rozi. "Digitalization In Scanning And Remote Shutdown Host of Computer Using Apache Cordova Framework." JIPI (Jurnal Ilmiah Penelitian dan Pembelajaran Informatika) 4, no. 2 (December 1, 2019): 88. http://dx.doi.org/10.29100/jipi.v4i2.1122.

Full text
Abstract:
Internet of Things is one concept within Industry 4.0. This technology combines two components, the internet and the machine: the internet is used to send a signal from the machine to a second device such as a mobile phone, or from the mobile phone to the machine to execute a task. The information technology infrastructure of an industrial company has many complexities in its network, hardware, and software. For hardware in particular, such as desktop computers and laptops, each employee has one, and many employees sometimes forget to shut their computers down, or deliberately leave them running just to download large files. A computer that runs for more than 24 hours at a stretch in an industrial company will indirectly degrade in service life, and its electricity consumption incurs a high cost. In this paper, our purpose is to show how computers that do not need to run 24 hours a day can be monitored and shut down from outside the company area. This requires the concepts of the Internet of Things and digitalization. The problem can be addressed with a mobile monitoring application: the system shows which host computers are still on, and they can be shut down remotely from a mobile device. The application is built with Cordova, and the command from the mobile device is sent to a server that shuts down the computers that are still online.
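As a sketch of the relay idea described above, a server could translate a shutdown request into the host's native shutdown command; the per-OS commands below are illustrative assumptions, and the function only builds the command rather than executing it:

```python
def shutdown_command(os_name):
    """Return the shutdown command a relay server could issue to a host.
    The per-OS command choice is an illustrative assumption."""
    commands = {
        "windows": ["shutdown", "/s", "/t", "0"],
        "linux": ["shutdown", "-h", "now"],
    }
    try:
        return commands[os_name.lower()]
    except KeyError:
        raise ValueError("unsupported OS: " + os_name)

print(shutdown_command("Windows"))  # ['shutdown', '/s', '/t', '0']
```

A real deployment would run the returned command on the target host (e.g. via an agent) only after authenticating the mobile client.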
APA, Harvard, Vancouver, ISO, and other styles
33

Kancherla, Jayaram, Yifan Yang, Hyeyun Chae, and Hector Corrada Bravo. "Epiviz File Server: Query, transform and interactively explore data from indexed genomic files." Bioinformatics 36, no. 18 (July 3, 2020): 4682–90. http://dx.doi.org/10.1093/bioinformatics/btaa591.

Full text
Abstract:
Motivation: Genomic data repositories such as The Cancer Genome Atlas, the Encyclopedia of DNA Elements, and Bioconductor's AnnotationHub and ExperimentHub provide public access to large amounts of genomic data as flat files. Researchers often download a subset of data files from these repositories to perform exploratory data analysis. We developed Epiviz File Server, a Python library that implements an in situ data query system for local or remotely hosted indexed genomic files, for data transformation as well as visualization. The File Server library decouples data retrieval and transformation from specific visualization and analysis tools and provides an abstract interface to define computations independent of the location, format or structure of the file. We demonstrate the File Server in two use cases: (i) integration with Galaxy workflows and (ii) using Epiviz to create a custom genome browser from the Epigenome Roadmap dataset.
Availability and implementation: Epiviz File Server is open source and is available on GitHub at http://github.com/epiviz/epivizFileServer. The documentation for the File Server library is available at http://epivizfileserver.rtfd.io.
APA, Harvard, Vancouver, ISO, and other styles
34

Chandra G., Madhu, and Sreerama Reddy G. M. "Simplified Video Surveillance Framework for Dynamic Object Detection under Challenging Environment." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 4 (August 1, 2019): 2715. http://dx.doi.org/10.11591/ijece.v9i4.pp2715-2724.

Full text
Abstract:
An effective video surveillance system is essential in order to construct better forms of video analytics. Existing literature on video analytics is found to directly implement algorithms on top of the video file without much emphasis on the following problems: i) dynamic orientation of the subject, ii) poor illumination conditions, iii) identification and classification of subjects, and iv) faster response time. Therefore, the proposed system implements an analytical concept that uses a depth image of the video feed along with the original colored video feed to apply an algorithm for extracting significant information about the motion blobs of dynamic subjects. Implemented in MATLAB, the study outcome shows that the system is capable of addressing all of the above-mentioned problems associated with existing research trends in video analytics using a very simple and non-iterative implementation process. The applicability of the proposed system to the practical world is thereby demonstrated.
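The paper's algorithm combines depth and color feeds; as a generic illustration of the underlying motion-blob idea only (not the authors' method), simple frame differencing with a threshold yields a binary motion mask:

```python
def motion_mask(prev, curr, threshold=25):
    """Per-pixel absolute difference of two grayscale frames,
    thresholded to a binary motion mask (1 = moving pixel)."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

frame1 = [[10, 10, 10],
          [10, 10, 10]]
frame2 = [[10, 200, 10],
          [10, 210, 10]]
mask = motion_mask(frame1, frame2)
print(mask)  # [[0, 1, 0], [0, 1, 0]]
```

Connected regions of 1s in the mask are the motion blobs that a subsequent classification stage would label.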
APA, Harvard, Vancouver, ISO, and other styles
35

Qin, Tiancheng, and S. Rasoul Etesami. "Optimal Online Algorithms for File-Bundle Caching and Generalization to Distributed Caching." ACM Transactions on Modeling and Performance Evaluation of Computing Systems 6, no. 1 (June 2021): 1–23. http://dx.doi.org/10.1145/3445028.

Full text
Abstract:
We consider a generalization of the standard cache problem called file-bundle caching, where different queries (tasks), each containing l ≥ 1 files, sequentially arrive. An online algorithm that does not know the sequence of queries ahead of time must adaptively decide on what files to keep in the cache to incur the minimum number of cache misses. Here a cache miss refers to the case where at least one file in a query is missing among the cache files. In the special case where l = 1, this problem reduces to the standard cache problem. We first analyze the performance of the classic least recently used (LRU) algorithm in this setting and show that LRU is a near-optimal online deterministic algorithm for file-bundle caching with regard to competitive ratio. We then extend our results to a generalized (h, k)-paging problem in this file-bundle setting, where the performance of the online algorithm with a cache size k is compared to an optimal offline benchmark of a smaller cache size h < k. In this latter case, we provide a randomized O(l ln(k/(k−h)))-competitive algorithm for our generalized (h, k)-paging problem, which can be viewed as an extension of the classic marking algorithm. We complete this result by providing a matching lower bound for the competitive ratio, indicating that the performance of this modified marking algorithm is within a factor of 2 of any randomized online algorithm. Finally, we look at the distributed version of the file-bundle caching problem where there are m ≥ 1 identical caches in the system. In this case, we show that for m = l + 1 caches, there is a deterministic distributed caching algorithm that is (l² + l)-competitive and a randomized distributed caching algorithm that is O(l ln(2l + 1))-competitive when l ≥ 2. We also provide a general framework to devise other efficient algorithms for the distributed file-bundle caching problem and evaluate the performance of our results through simulations.
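The LRU policy analyzed above extends naturally to file bundles, where a query misses if any of its files is absent; a minimal sketch counting bundle misses (the query sequence is a made-up example):

```python
from collections import OrderedDict

def lru_bundle_misses(queries, cache_size):
    """Run LRU on a sequence of file-bundle queries and count cache
    misses; a miss occurs when at least one file of the bundle is absent."""
    cache = OrderedDict()          # file -> None, ordered by recency
    misses = 0
    for bundle in queries:
        if any(f not in cache for f in bundle):
            misses += 1
        for f in bundle:           # touch (or insert) every file in the bundle
            if f in cache:
                cache.move_to_end(f)
            else:
                cache[f] = None
                if len(cache) > cache_size:
                    cache.popitem(last=False)   # evict least recently used
    return misses

queries = [("a", "b"), ("a", "c"), ("a", "b"), ("d", "e")]
print(lru_bundle_misses(queries, cache_size=4))  # 3
```

With l = 1 (singleton bundles) this reduces to the standard LRU miss count, matching the paper's special case.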
APA, Harvard, Vancouver, ISO, and other styles
36

Filsinger, D. H., and L. K. Frevel. "Computer Automation of Structure-Sensitive Search-Match Algorithm for Powder Diffraction Analysis." Powder Diffraction 1, no. 1 (March 1986): 22–25. http://dx.doi.org/10.1017/s0885715600011246.

Full text
Abstract:
The Semi-manual Structure-sensitive SEARCH-MATCH procedure [Frevel, (1982), Anal. Chem. 54, 691–697] has been automated for the Hewlett-Packard HP 3354 Computer. The software is coded in the interpretive HP LAB BASIC II language. A core file of 1026 selected powder diffraction standards has been compiled covering the common crystalline phases encountered in industry, the ubiquitous crystalline minerals, and prototypes of the more frequently found crystal structures. Solid solutions can be readily identified.
APA, Harvard, Vancouver, ISO, and other styles
37

Okada, Masahiko, and Mihoko Okada. "Portable file management system in FORTRAN. II. The input/output routine for free-formal text." Computer Methods and Programs in Biomedicine 22, no. 2 (April 1986): 199–207. http://dx.doi.org/10.1016/0169-2607(86)90021-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lan, Divon, Ray Tobler, Yassine Souilmi, and Bastien Llamas. "Genozip: a universal extensible genomic data compressor." Bioinformatics 37, no. 16 (February 15, 2021): 2225–30. http://dx.doi.org/10.1093/bioinformatics/btab102.

Full text
Abstract:
We present Genozip, a universal and fully featured compression software for genomic data. Genozip is designed to be a general-purpose software and a development framework for genomic compression by providing five core capabilities: universality (support for all common genomic file formats), high compression ratios, speed, feature-richness and extensibility. Genozip delivers high-performance compression for widely used genomic data formats in genomics research, namely FASTQ, SAM/BAM/CRAM, VCF, GVF, FASTA, PHYLIP and 23andMe formats. Our test results show that Genozip is fast and achieves greatly improved compression ratios, even when the files are already compressed. Further, Genozip is architected with a separation of the Genozip Framework from file-format-specific Segmenters and data-type-specific Codecs. With this, we intend for Genozip to be a general-purpose compression platform where researchers can implement compression for additional file formats, as well as new codecs for data types or fields within files, in the future. We anticipate that this will ultimately increase the visibility and adoption of these algorithms by the user community, thereby accelerating further innovation in this space.
Availability and implementation: Genozip is written in C. The code is open source and available at http://www.genozip.com. The package is free for non-commercial use. It is distributed through the Conda package manager, GitHub, and as a Docker container on DockerHub. Genozip is tested on Linux, Mac and Windows.
Supplementary information: Supplementary data are available at Bioinformatics online.
APA, Harvard, Vancouver, ISO, and other styles
39

Riadi, Imam, Rusydi Umar, and Andi Sugandi. "Web Forensic on Container Services Using Grr Rapid Response Framework." Scientific Journal of Informatics 7, no. 1 (June 5, 2020): 33–42. http://dx.doi.org/10.15294/sji.v7i1.18299.

Full text
Abstract:
Cybercrime on the Internet keeps increasing, and it takes place not only in environments running web applications traditionally under an operating system, but also in more advanced environments such as container services. Docker, a currently popular container service on the Linux operating system, needs to be secured and to implement an incident response mechanism that can investigate a web server attacked by DDoS in a fast, valid, and comprehensive way. This paper discusses an investigation using the Grr Rapid Response framework on a web server that was attacked by DDoS while running in a container service on the Linux operating system, with the attacker running a DDoS script on the Windows operating system. This research successfully investigated digital evidence in the form of a log file from the web server running in the container service, as well as digital evidence obtained through netstat on the Windows computer.
APA, Harvard, Vancouver, ISO, and other styles
40

Shim, Jaewoo, Kyeonghwan Lim, Seong-je Cho, Sangchul Han, and Minkyu Park. "Static and Dynamic Analysis of Android Malware and Goodware Written with Unity Framework." Security and Communication Networks 2018 (June 20, 2018): 1–12. http://dx.doi.org/10.1155/2018/6280768.

Full text
Abstract:
Unity is the most popular cross-platform development framework for developing games for multiple platforms such as Android, iOS, and Windows Mobile. While Unity developers can easily develop mobile apps for multiple platforms, adversaries can also easily build malicious apps based on the "write once, run anywhere" (WORA) feature. Even though malicious apps have been discovered among Android apps written with the Unity framework (Unity apps), little research has been done on analysing them. We propose static and dynamic reverse engineering techniques for malicious Unity apps. We first inspect the executable file format of a Unity app and present an effective static analysis technique for it. We then propose a systematic technique to dynamically analyse the Unity app. Using the proposed techniques, a malware analyst can statically and dynamically analyse Java code, native code in C or C++, and the Mono runtime layer where the C# code runs.
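A first static-analysis step for a suspected Unity app is checking the APK for Unity artifacts. The sketch below assumes the common Unity APK layout (the `Assembly-CSharp.dll` path and `libunity.so` marker are assumptions about typical builds, not taken from the paper):

```python
import io
import zipfile

UNITY_MARKERS = (
    "assets/bin/Data/Managed/Assembly-CSharp.dll",  # compiled C# game code
    "lib/armeabi-v7a/libunity.so",                  # Unity native runtime
)

def looks_like_unity_apk(apk_bytes):
    """Heuristic: an APK is Unity-based if it contains known Unity artifacts.
    An APK is a ZIP archive, so we only need to read its file listing."""
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        names = set(apk.namelist())
    return any(marker in names for marker in UNITY_MARKERS)

# Build a tiny fake APK in memory to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("assets/bin/Data/Managed/Assembly-CSharp.dll", b"\x4d\x5a")
print(looks_like_unity_apk(buf.getvalue()))  # True
```

The extracted `Assembly-CSharp.dll` is then the natural target for C#-level static analysis of the app's game logic.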
APA, Harvard, Vancouver, ISO, and other styles
41

Garetto, M., D. R. Figueiredo, R. Gaeta, and M. Sereno. "A modeling framework to understand the tussle between ISPs and peer-to-peer file-sharing users." Performance Evaluation 64, no. 9-12 (October 2007): 819–37. http://dx.doi.org/10.1016/j.peva.2007.06.003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

POWELL, MARK W., and DMITRY GOLDGOF. "SOFTWARE TOOLKIT FOR TEACHING IMAGE PROCESSING." International Journal of Pattern Recognition and Artificial Intelligence 15, no. 05 (August 2001): 833–44. http://dx.doi.org/10.1142/s0218001401001180.

Full text
Abstract:
We introduce a software framework called the Java Vision Toolkit (JVT) for teaching image processing and computer vision. The toolkit provides over 50 image operations and presents them to the user in a GUI that can render grayscale, color and 3D range images. The software is written in Java, enabling it to be integrated into HTML documents and interactive course materials. The framework is designed for extensibility using a source code template that supports the implementation of any new operation with a minimal amount of supporting code. For students, this framework encapsulates the GUI, file I/O and other trivial programming details and allows them the maximum amount of time to spend on understanding computer vision. We compare the JVT with other computer vision software frameworks that are used for teaching and research. We also discuss the use of the JVT in an undergraduate image processing course at the University of South Florida.
APA, Harvard, Vancouver, ISO, and other styles
43

Olson, J. A., E. Stenehjem, W. R. Buckel, E. A. Thorell, S. Howe, X. Wu, P. S. Jones, J. F. Lloyd, and R. S. Evans. "Use of Computer Decision Support in an Antimicrobial Stewardship Program (ASP)." Applied Clinical Informatics 06, no. 01 (2015): 120–35. http://dx.doi.org/10.4338/aci-2014-11-ra-0102.

Full text
Abstract:
Summary
Objective: Document the information needs, gaps within the current electronic applications and reports, and workflow interruptions requiring manual information searches that decreased the ability of our antimicrobial stewardship program (ASP) at Intermountain Healthcare (IH) to prospectively audit and provide feedback to clinicians to improve antimicrobial use.
Methods: A framework was used to provide access to patient information contained in the electronic medical record, the enterprise-wide data warehouse, the data-driven alert file and the enterprise-wide encounter file to generate alerts and reports via pagers, emails and through the Centers for Disease Control and Prevention's National Healthcare Safety Network.
Results: Four new applications were developed and used by ASPs at Intermountain Medical Center (IMC) and Primary Children's Hospital (PCH), based on the design and input from the pharmacists and infectious diseases physicians and the new Centers for Disease Control and Prevention/National Healthcare Safety Network (NHSN) antibiotic utilization specifications. Data from IMC and PCH now show a general decrease in the use of drugs initially targeted by the ASP at both facilities.
Conclusions: To be effective, ASPs need an enormous amount of "timely" information. Members of the ASP at IH report these new applications help them improve antibiotic use by allowing efficient, timely review and effective prioritization of patients receiving antimicrobials in order to optimize patient care.
Citation: Evans RS, Olson JA, Stenehjem E, Buckel WR, Thorell EA, Howe S, Wu X, Jones PS, Lloyd JF. Use of computer decision support in an antimicrobial stewardship program (ASP). Appl Clin Inf 2015; 6: 120–135. http://dx.doi.org/10.4338/ACI-2014-11-RA-0102
APA, Harvard, Vancouver, ISO, and other styles
44

Pradeep, K. V., V. Vijayakumar, and V. Subramaniyaswamy. "An Efficient Framework for Sharing a File in a Secure Manner Using Asymmetric Key Distribution Management in Cloud Environment." Journal of Computer Networks and Communications 2019 (June 27, 2019): 1–8. http://dx.doi.org/10.1155/2019/9852472.

Full text
Abstract:
Cloud computing is a platform for sharing data and resources among various organizations, but surveys show that there is always a security threat. Security is an important aspect of cloud computing; hence, the responsibility falls on cloud service providers to deliver security as part of the quality of service. However, cloud computing still has many security challenges that have not yet been addressed well. Data accessed or shared through any device in the cloud environment are not safe, because they are likely to suffer various attacks, such as Identity and Access Management (IAM) abuse or the hijacking of an account or a service by internal or external intruders. Cryptography plays a major role in securing data within the cloud environment. Therefore, there is a need for a standard encryption/decryption mechanism to protect data stored in the cloud, in which the key is the mandatory element. Every cloud provider has its own security mechanisms to protect the key. The client cannot trust the service provider completely, since at any instant the provider has full access to both data and key. In this paper, we propose a new system that can prevent the exposure of the key, as well as a framework for sharing a file that ensures security (confidentiality, integrity, and availability) using an asymmetric key distributed within the cloud environment through a trusted third party. We compared RSA with ElGamal and Paillier in our proposed framework and found that RSA gives a better result.
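For illustration of the RSA mechanics the framework builds on, here is textbook RSA with toy primes; this is a pedagogical sketch only (no padding, insecure key sizes), not the paper's implementation:

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keypair(p, q, e=17):
    """Textbook RSA keypair from two primes (toy sizes, no padding)."""
    n, phi = p * q, (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime to phi(n)"
    return (e, n), (d % phi, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = make_keypair(61, 53)   # n = 3233, a classic toy example
cipher = encrypt(42, pub)
print(decrypt(cipher, priv))       # 42
```

In the paper's setting, the private key material would be held by the trusted third party rather than the cloud provider, which is what prevents key exposure.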
APA, Harvard, Vancouver, ISO, and other styles
45

Mattoussi, Ferdaouss, Matthieu Crussiere, Jean-Francois Helard, and Gheorghe Zaharia. "Analysis of Coding Strategies Within File Delivery Protocol Framework for HbbTV Based Push-VoD Services Over DVB Networks." IEEE Access 7 (2019): 15489–508. http://dx.doi.org/10.1109/access.2019.2893756.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Mavridou, Efthimia, Konstantinos M. Giannoutakis, Dionysios Kehagias, Dimitrios Tzovaras, and George Hassapis. "Automatic categorization of Web service elements." International Journal of Web Information Systems 14, no. 2 (June 18, 2018): 233–58. http://dx.doi.org/10.1108/ijwis-08-2017-0059.

Full text
Abstract:
Purpose: Semantic categorization of Web services comprises a fundamental requirement for enabling more efficient and accurate search and discovery of services in the semantic Web era. However, to efficiently deal with the growing presence of Web services, more automated mechanisms are required. This paper aims to introduce an automatic Web service categorization mechanism, by exploiting various techniques that aim to increase the overall prediction accuracy.
Design/methodology/approach: The paper proposes the use of Error Correcting Output Codes on top of a Logistic Model Trees-based classifier, in conjunction with a data pre-processing technique that reduces the original feature-space dimension without affecting data integrity. The proposed technique is generalized so as to adhere to all Web services with a description file. A semantic matchmaking scheme is also proposed for enabling the semantic annotation of the input and output parameters of each operation.
Findings: The proposed Web service categorization framework was tested with the OWLS-TC v4.0, as well as a synthetic data set, with a systematic evaluation procedure that enables comparison with well-known approaches. After conducting exhaustive evaluation experiments, categorization efficiency was measured in terms of accuracy, precision, recall and F-measure. The presented Web service categorization framework outperformed the benchmark techniques, which comprise different variations of it as well as third-party implementations.
Originality/value: The proposed three-level categorization approach is a significant contribution to the Web service community, as it allows the automatic semantic categorization of all functional elements of Web services that are equipped with a service description file.
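The Error Correcting Output Codes layer mentioned above reduces multi-class categorization to binary problems and decodes by nearest codeword. The sketch below shows only the decoding step, with hypothetical codewords and classifier outputs standing in for the trained Logistic Model Trees:

```python
def hamming(a, b):
    """Number of positions where two bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(predicted_bits, codebook):
    """Return the class whose codeword is nearest (Hamming distance) to
    the concatenated outputs of the binary classifiers."""
    return min(codebook, key=lambda cls: hamming(predicted_bits, codebook[cls]))

# Hypothetical 6-bit codewords for three service categories.
codebook = {
    "finance":   (1, 0, 1, 1, 0, 0),
    "travel":    (0, 1, 0, 1, 1, 0),
    "education": (0, 0, 1, 0, 1, 1),
}
# Classifier outputs with one flipped bit; ECOC still recovers "travel".
print(ecoc_decode((0, 1, 0, 1, 1, 1), codebook))  # travel
```

The redundancy in the codewords is what lets a few misfiring binary classifiers be corrected, which is the source of the accuracy gain the paper exploits.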
APA, Harvard, Vancouver, ISO, and other styles
47

Groh, Micah, Norman Buchanan, Derek Doyle, James B. Kowalkowski, Marc Paterno, and Saba Sehrish. "PandAna: A Python Analysis Framework for Scalable High Performance Computing in High Energy Physics." EPJ Web of Conferences 251 (2021): 03033. http://dx.doi.org/10.1051/epjconf/202125103033.

Full text
Abstract:
Modern experiments in high energy physics analyze millions of events recorded in particle detectors to select the events of interest and make measurements of physics parameters. These data can often be stored as tabular data in files with detector information and reconstructed quantities. Most current techniques for event selection in these files lack the scalability needed for high performance computing environments. We describe our work to develop a high energy physics analysis framework suitable for high performance computing. This new framework utilizes modern tools for reading files and implicit data parallelism. Framework users analyze tabular data using standard, easy-to-use data analysis techniques in Python while the framework handles the file manipulations and parallelism without the user needing advanced experience in parallel programming. In future versions, we hope to provide a framework that can be utilized on a personal computer or a high performance computing cluster with little change to the user code.
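The implicit data parallelism described above boils down to chunking a table, filtering chunks concurrently, and concatenating the results; a minimal sketch of that pattern (not PandAna's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def select_rows(rows, predicate, workers=4):
    """Split a table into chunks, filter each chunk in parallel, and
    concatenate the results (order-preserving)."""
    chunk = max(1, len(rows) // workers)
    parts = [rows[i:i + chunk] for i in range(0, len(rows), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        filtered = pool.map(lambda part: [r for r in part if predicate(r)], parts)
    return [r for part in filtered for r in part]

# Toy "events" table: (event_id, energy); select high-energy events.
events = [(i, e) for i, e in enumerate([3.1, 12.5, 0.4, 9.9, 15.2, 7.7])]
print(select_rows(events, lambda row: row[1] > 9.0))
# [(1, 12.5), (3, 9.9), (4, 15.2)]
```

The user supplies only the predicate; the framework owns the chunking and parallel execution, which is exactly the separation of concerns the paper describes.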
APA, Harvard, Vancouver, ISO, and other styles
48

Sahlabadi, Amirhossein, Ravie Chandren Muniyandi, Mahdi Sahlabadi, and Hossein Golshanbafghy. "Framework for Parallel Preprocessing of Microarray Data Using Hadoop." Advances in Bioinformatics 2018 (March 29, 2018): 1–9. http://dx.doi.org/10.1155/2018/9391635.

Full text
Abstract:
Nowadays, microarray technology has become one of the popular ways to study gene expression and the diagnosis of disease. The National Center for Biotechnology Information (NCBI) hosts public databases containing large volumes of biological data that need to be preprocessed, since they carry high levels of noise and bias. Robust Multiarray Average (RMA) is one of the standard and popular methods utilized to preprocess the data and remove the noise. Most preprocessing algorithms are time-consuming and unable to handle a large number of datasets with thousands of experiments. Parallel processing can be used to address these issues. Hadoop is a well-known distributed file system framework that provides a parallel environment in which to run the experiment. In this research, for the first time, the capability of Hadoop and the statistical power of R have been leveraged to parallelize the available preprocessing algorithm, RMA, to efficiently process microarray data. The experiment was run on a cluster containing 5 nodes, each with 16 cores and 16 GB of memory. It compares the efficiency and performance of parallelized RMA using Hadoop with parallelized RMA using the affyPara package, as well as sequential RMA. The results show that the speed-up of the proposed approach outperforms both the sequential approach and the affyPara approach.
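RMA comprises background correction, quantile normalization and summarization; as a small illustration of the quantile-normalization step alone (a toy matrix, not the paper's Hadoop pipeline):

```python
def quantile_normalize(matrix):
    """Quantile-normalize a samples-as-columns matrix: replace each
    column's sorted values with the row-wise mean of the sorted columns,
    then restore the original order, so every column shares one distribution."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    # Row indices of each column, in ascending order of value.
    order = [sorted(range(n_rows), key=lambda r: matrix[r][c])
             for c in range(n_cols)]
    # Mean of the k-th smallest value across columns.
    rank_means = [
        sum(matrix[order[c][k]][c] for c in range(n_cols)) / n_cols
        for k in range(n_rows)
    ]
    out = [[0.0] * n_cols for _ in range(n_rows)]
    for c in range(n_cols):
        for k, r in enumerate(order[c]):
            out[r][c] = rank_means[k]
    return out

expr = [[5.0, 4.0],
        [2.0, 1.0],
        [3.0, 6.0]]
print(quantile_normalize(expr))  # [[5.5, 3.5], [1.5, 1.5], [3.5, 5.5]]
```

On real data each column is one array (sample), and this per-rank averaging is the step the paper distributes across Hadoop nodes.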
APA, Harvard, Vancouver, ISO, and other styles
49

Kumar, Akshay, and T. V. Vijay Kumar. "Multi-Objective Big Data View Materialization Using NSGA-II." Information Resources Management Journal 34, no. 2 (April 2021): 1–28. http://dx.doi.org/10.4018/irmj.2021040101.

Full text
Abstract:
Big data views, in the context of a distributed file system (DFS), are defined over structured, semi-structured and unstructured data that are voluminous in nature, with the purpose of reducing the response time of queries over Big data. As the size of semi-structured and unstructured data in Big data is very large compared to structured data, a framework based on query attributes on Big data can be used to identify Big data views. Materializing Big data views can enhance query response time and facilitate efficient distribution of data over the DFS-based application. Since not all Big data views can be materialized, a subset of Big data views should be selected for materialization. The purpose of view selection for materialization is to improve query response time subject to resource constraints. The Big data view materialization problem was defined as a bi-objective problem with two objectives, minimization of the query evaluation cost and minimization of the update processing cost, under a constraint on the total size of the materialized views. This problem is addressed in this paper using the multi-objective genetic algorithm NSGA-II. The experimental results show that the proposed NSGA-II based Big data view selection algorithm is able to select reasonably good quality views for materialization.
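NSGA-II itself involves non-dominated sorting and crowding distance; the sketch below shows only the core non-dominated (Pareto) filtering for the two objectives under the size constraint, on made-up candidate view sets:

```python
def dominates(a, b):
    """Point a dominates point b if it is no worse in both objectives
    and strictly better in at least one (both objectives minimized)."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def pareto_front(candidates, max_size):
    """Drop candidates violating the size constraint, then keep only the
    non-dominated (query_cost, update_cost) points."""
    feasible = [c for c in candidates if c["size"] <= max_size]
    return [
        c for c in feasible
        if not any(dominates((o["query_cost"], o["update_cost"]),
                             (c["query_cost"], c["update_cost"]))
                   for o in feasible)
    ]

views = [
    {"name": "V1", "query_cost": 10, "update_cost": 8,  "size": 5},
    {"name": "V2", "query_cost": 12, "update_cost": 3,  "size": 6},
    {"name": "V3", "query_cost": 11, "update_cost": 9,  "size": 4},   # dominated by V1
    {"name": "V4", "query_cost": 6,  "update_cost": 20, "size": 30},  # violates size cap
]
print([v["name"] for v in pareto_front(views, max_size=10)])  # ['V1', 'V2']
```

NSGA-II repeatedly applies this ranking to evolving populations; the final front offers the decision maker trade-offs between query and update cost rather than a single answer.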
APA, Harvard, Vancouver, ISO, and other styles
50

Attouch, Hedy, and Roger J. B. Wets. "Quantitative Stability of Variational Systems II. A Framework for Nonlinear Conditioning." SIAM Journal on Optimization 3, no. 2 (May 1993): 359–81. http://dx.doi.org/10.1137/0803016.

Full text
APA, Harvard, Vancouver, ISO, and other styles
