
Journal articles on the topic 'Reusable software libraries (RSLs)'

Consult the top 39 journal articles for your research on the topic 'Reusable software libraries (RSLs).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Zimmermann, W. "Editorial: Reusable software libraries." IEE Proceedings - Software 152, no. 1 (2005): 1. http://dx.doi.org/10.1049/ip-sen:20051253.

2

Vo, Kiem-Phong. "The discipline and method architecture for reusable libraries." Software: Practice and Experience 30, no. 2 (2000): 107–28. http://dx.doi.org/10.1002/(sici)1097-024x(200002)30:2<107::aid-spe289>3.0.co;2-d.

3

Runciman, Colin, and Ian Toyn. "Retrieving reusable software components by polymorphic type." Journal of Functional Programming 1, no. 2 (1991): 191–211. http://dx.doi.org/10.1017/s0956796800020049.

Abstract:
Polymorphic types are labels classifying both (a) defined components in a library and (b) contexts of free variables in partially written programs. It is proposed to help programmers make better use of software libraries by providing a system that, given (b), identifies candidates from (a) with matching types. Assuming at first that matching means unifying (i.e. having a common instance), efficient ways of implementing such a retrieval system are discussed and its likely effectiveness, based on a quantitative study of currently available libraries, is indicated. The applicative instance relation between types, which captures some intuitions about generalization/specialization, is then introduced, and its use as the basis of a more flexible system is discussed.
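A minimal sketch of the retrieval idea described in this abstract, assuming a toy type representation (strings for type variables and atomic types, tuples for constructed types); this is an illustration, not the authors' implementation:

```python
# Retrieve library components whose polymorphic type unifies with a query type.
# The type encoding and the library entries below are hypothetical.

def unify(t1, t2, subst=None):
    """Return a substitution unifying t1 and t2, or None if there is none.
    Type variables are lowercase strings; constructed types are tuples whose
    first element is the constructor, e.g. ('->', 'a', ('list', 'a'))."""
    if subst is None:
        subst = {}
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return bind(t1, t2, subst)
    if is_var(t2):
        return bind(t2, t1, subst)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def is_var(t):
    return isinstance(t, str) and t[0].islower()

def resolve(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = resolve(t, subst)
    return t == v or (isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:]))

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None
    new = dict(subst)
    new[v] = t
    return new

# Hypothetical library: component name -> polymorphic type.
LIBRARY = {
    "length": ('->', ('list', 'a'), 'Int'),
    "head": ('->', ('list', 'a'), 'a'),
    "replicate": ('->', 'Int', ('->', 'a', ('list', 'a'))),
}

def retrieve(query):
    """Names of components whose type has a common instance with the query."""
    return [name for name, ty in LIBRARY.items() if unify(query, ty) is not None]

# A context of type ('->', ('list', 'b'), 'Int') retrieves "length" and also
# "head", whose result variable can be instantiated to Int.
print(retrieve(('->', ('list', 'b'), 'Int')))
```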
4

KATZ, MARTIN DAVID, and DENNIS J. VOLPER. "CONSTRAINT PROPAGATION IN SOFTWARE LIBRARIES OF TRANSFORMATION SYSTEMS." International Journal of Software Engineering and Knowledge Engineering 02, no. 03 (1992): 355–74. http://dx.doi.org/10.1142/s0218194092000178.

Abstract:
Domain modeling can be applied to collections of reusable designs and reusable software. Libraries of such collections can be used when applying a refinement by transformation technique to software construction. There already exist systems that can automatically or semi-automatically perform program transformations that constitute refinements. An important question is how to organize such libraries so that transformation tools may feasibly use them. We show that transformation of a high level program with constraints on the transformations is an NP-complete problem; however, appropriately organized libraries are tractable. Moreover, we define a property which a library of transformations can have, ensuring that any consistent high level program can be transformed into an executable form. Finally, we give approximations which reduce the complexity of transformations for libraries which do not have this property. The most important aspect of this work is that it implies certain rules should be followed in constructing libraries and the domains that are placed in them.
5

Breunese, A. P. J., J. F. Broenink, J. L. Top, and J. M. Akkermans. "Libraries of Reusable Models: Theory and Application." SIMULATION 71, no. 1 (1998): 7–22. http://dx.doi.org/10.1177/003754979807100101.

6

JENG, JUN-JANG, and BETTY H. C. CHENG. "USING AUTOMATED REASONING TECHNIQUES TO DETERMINE SOFTWARE REUSE." International Journal of Software Engineering and Knowledge Engineering 02, no. 04 (1992): 523–46. http://dx.doi.org/10.1142/s0218194092000245.

Abstract:
Reusing software may greatly increase the productivity of software engineers and improve the quality of developed software. Software component libraries have been suggested as a means for facilitating reuse. A major difficulty in designing software libraries is in the selection of a component representation that will facilitate the classification and the retrieval processes. Using formal specifications to represent software components facilitates the determination of reusable software because they more precisely characterize the functionality of the software, and the well-defined syntax makes processing amenable to automation. This paper presents an approach, based on formal methods, to the classification, organization and retrieval of reusable software components. From a set of formal specifications, a two-tiered hierarchy of software components is constructed. The formal specifications represent software that has been implemented and verified for correctness. The lower-level hierarchy is created by a subsumption test algorithm that determines whether one component is more general than another; this level facilitates the application of automated logical reasoning techniques for a fine-grained, exact determination of reusable candidates. The higher-level hierarchy provides a coarse-grained determination of reusable candidates and is constructed by applying a hierarchical clustering algorithm to the most general components from the lower-level hierarchy. The hierarchical organization of the software component specifications provides a means for storing, browsing, and retrieving reusable components that is amenable to automation. In addition, the formal specifications facilitate the verification process that proves a given software component correctly satisfies the current problem. A prototype browser that provides a graphical framework for the classification and retrieval process is described.
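As a rough illustration of the subsumption ordering described in this abstract, the sketch below approximates formal specifications as sets of atomic pre- and postcondition predicates, so that "more general than" reduces to set inclusion; the paper itself applies automated reasoning to real specifications, and the component names and predicates here are invented:

```python
# Toy subsumption test: a component is at least as general as another if it
# requires no more (weaker precondition) and guarantees no less (stronger
# postcondition). Specs are crude set-based stand-ins for formal specifications.

from dataclasses import dataclass

@dataclass(frozen=True)
class Spec:
    name: str
    pre: frozenset   # predicates the component requires
    post: frozenset  # predicates the component guarantees

def subsumes(a: Spec, b: Spec) -> bool:
    return a.pre <= b.pre and a.post >= b.post

def most_general(components):
    """Roots of the lower-level hierarchy: components not subsumed by any other."""
    return [c for c in components
            if not any(o != c and subsumes(o, c) for o in components)]

components = [
    Spec("sort_any", frozenset(), frozenset({"sorted"})),
    Spec("sort_nonempty", frozenset({"non_empty"}), frozenset({"sorted"})),
    Spec("sort_stable", frozenset(), frozenset({"sorted", "stable"})),
]

# 'sort_stable' subsumes the other two, so it is the sole most general component.
print([c.name for c in most_general(components)])
```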
7

Henninger, Scott. "Supporting the process of satisfying information needs with reusable software libraries." ACM SIGSOFT Software Engineering Notes 20, SI (1995): 267–70. http://dx.doi.org/10.1145/223427.211858.

8

Bhatia, Rajesh K., Mayank Dave, and R. C. Joshi. "A Hybrid Technique for Searching a Reusable Component from Software Libraries." DESIDOC Journal of Library & Information Technology 27, no. 5 (2007): 27–34. http://dx.doi.org/10.14429/djlit.27.5.137.

9

Sutcliffe, Alistair, George Papamargaritis, and Liping Zhao. "Comparing requirements analysis methods for developing reusable component libraries." Journal of Systems and Software 79, no. 2 (2006): 273–89. http://dx.doi.org/10.1016/j.jss.2005.06.027.

10

O'Connor, Martin J., Csongor Nyulas, Samson Tu, David L. Buckeridge, Anna Okhmatovskaia, and Mark A. Musen. "Software-engineering challenges of building and deploying reusable problem solvers." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 23, no. 4 (2009): 339–56. http://dx.doi.org/10.1017/s0890060409990047.

Abstract:
Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task–method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach.
11

ESTEVA, JUAN CARLOS, and ROBERT G. REYNOLDS. "IDENTIFYING REUSABLE SOFTWARE COMPONENTS BY INDUCTION." International Journal of Software Engineering and Knowledge Engineering 01, no. 03 (1991): 271–92. http://dx.doi.org/10.1142/s0218194091000202.

Abstract:
The goal of the Partial Metrics Project is the automatic acquisition of planning knowledge from target code modules in a program library. In the current prototype the system is given a target code module written in Ada as input, and the result is a sequence of generalized transformations that can be used to design a class of related modules. This is accomplished by embedding techniques from Artificial Intelligence into the traditional structure of a compiler. The compiler performs compilation in reverse, starting with detailed code and producing an abstract description of it. The principal task facing the compiler is to find a decomposition of the target code into a collection of syntactic components that are nearly decomposable. Here, nearly decomposable corresponds to the need for each code segment to be nearly independent syntactically from the others. The most independent segments are then the target of the code generalization process. This process can be described as a form of chunking and is implemented here in terms of explanation-based learning. The problem of producing nearly decomposable code components becomes difficult when the target code module is not well structured. The task facing users of the system is to identify well-structured code modules, from a library of modules, that are suitable for input to the system. In this paper we describe the use of inductive learning techniques, namely variations on Quinlan's ID3 system, that are capable of producing a decision tree that can be used to conceptually distinguish between well and poorly structured code. In order to accomplish that task, a set of high-level concepts used by software engineers to characterize structurally understandable code was identified. Next, each of these concepts was operationalized in terms of code complexity metrics that can be easily calculated during the compilation process. These metrics are related to various aspects of the program structure including its coupling, cohesion, data structure, control structure, and documentation. Each candidate module was then described in terms of a collection of such metrics. Using a training set of positive and negative examples of well-structured modules, each described in terms of the appointed metrics, a decision tree was produced that was used to recognize other well-structured modules in terms of their metric properties. This approach was applied to modules from existing software libraries in a variety of domains such as database, editor, graphics, window, data processing, FFT and computer vision software. The results achieved by the system were then benchmarked against the performance of experienced programmers in terms of recognizing well-structured code. In a test case involving 120 modules, the system was able to discriminate between poorly structured and well-structured code 99% of the time, as compared to an 80% average for the 52 programmers sampled. The results suggest that such an inductive system can serve as a practical mechanism for effectively identifying reusable code modules in terms of their structural properties.
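The metric-based induction step can be pictured with a small, hypothetical example; the paper uses ID3 variants, whereas the sketch below substitutes scikit-learn's CART decision tree, and the metric names and training data are invented:

```python
# Learn a decision tree over code-complexity metrics that separates
# well-structured from poorly structured modules (illustrative data only).

from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["coupling", "cohesion", "max_nesting", "comment_ratio"]

# One row of metric values per module; label 1 = well structured, 0 = poorly structured.
X = [
    [2, 0.9, 2, 0.30],
    [3, 0.8, 3, 0.25],
    [9, 0.2, 7, 0.02],
    [8, 0.3, 6, 0.05],
    [4, 0.7, 3, 0.20],
    [10, 0.1, 8, 0.01],
]
y = [1, 1, 0, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=FEATURES))

# Classify an unseen module by its metrics (likely labelled 0, poorly structured).
print(tree.predict([[7, 0.25, 5, 0.04]]))
```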
12

Rathee, Amit, and Jitender Kumar Chhabra. "Sensitivity Analysis of Evolutionary Algorithm for Software Reusability." MENDEL 25, no. 1 (2019): 31–38. http://dx.doi.org/10.13164/mendel.2019.1.031.

Abstract:
The fast and competitive software industry demands rapid development using Component Based Software Development (CBSD). CBSD is dependent on the availability of high-quality reusable component libraries. Recently, evolutionary multi-objective optimization algorithms have been used to identify sets of reusable software components from the source code of Object Oriented (OO) software, using different quality indicators (e.g. cohesion, coupling, etc.). Sometimes these quality indicators are quite sensitive to small variations in their values, although they should not be. Therefore, this paper analyzes the sensitivity of the evolutionary technique for three quality indicators used during the identification: Frequent Usage Pattern (FUP), semantic and evolutionary coupling. The sensitivity analysis is performed on three widely used open-source OO software systems. The experimentation is performed by mutating the system to different degrees. Results of the empirical analysis indicate that the semantic parameter is the most sensitive and important; ignoring this feature highly degrades the quality. The FUP relation is uniformly sensitive, and the evolutionary relation's sensitivity is non-uniform.
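A hedged sketch of the kind of mutation-based sensitivity analysis described here: perturb a system's dependency structure to different degrees and record the relative change in a quality indicator. The indicator and mutation operator below are stand-ins, not the FUP, semantic or evolutionary measures used in the paper:

```python
# Mutate a toy dependency graph at increasing degrees and measure how much a
# simple quality indicator moves; everything below is hypothetical.

import random

def coupling_quality(deps, membership):
    """Toy indicator: fraction of dependencies that stay inside one component."""
    internal = sum(1 for a, b in deps if membership[a] == membership[b])
    return internal / len(deps)

def mutate(deps, nodes, degree, rng):
    """Rewire a `degree` fraction of dependency edges to random targets."""
    mutated = list(deps)
    for i in rng.sample(range(len(mutated)), int(degree * len(mutated))):
        src, _ = mutated[i]
        mutated[i] = (src, rng.choice(nodes))
    return mutated

rng = random.Random(42)
nodes = [f"C{i}" for i in range(20)]
membership = {n: i % 4 for i, n in enumerate(nodes)}          # four components
deps = [(rng.choice(nodes), rng.choice(nodes)) for _ in range(80)]

base = coupling_quality(deps, membership)
for degree in (0.05, 0.10, 0.20, 0.40):
    q = coupling_quality(mutate(deps, nodes, degree, rng), membership)
    print(f"mutation degree {degree:.2f}: relative change {abs(q - base) / base:.3f}")
```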
13

Kancherla, Jayaram, Alexander Zhang, Brian Gottfried, and Hector Corrada Bravo. "Epiviz Web Components: reusable and extensible component library to visualize functional genomic datasets." F1000Research 7 (July 17, 2018): 1096. http://dx.doi.org/10.12688/f1000research.15433.1.

Abstract:
Interactive and integrative data visualization tools and libraries are integral to the exploration and analysis of genomic data. Web-based genome browsers allow integrative data exploration of a large number of data sets for a specific region in the genome. Currently available web-based genome browsers are developed for specific use cases and datasets; therefore, integration and extensibility of the visualizations and the underlying libraries from these tools is a challenging task. Genomic data visualization and software libraries that enable bioinformatic researchers and developers to implement customized genomic data viewers and data analyses for their applications are much needed. Using recent advances in core web platform APIs and technologies, including Web Components, we developed the Epiviz Component Library, a reusable and extensible data visualization library and application framework for genomic data. Epiviz Components can be integrated with most JavaScript libraries and frameworks designed for HTML. To demonstrate the ease of integration with other frameworks, we developed the R/Bioconductor epivizrChart package, which provides interactive, shareable and reproducible visualizations of genomic data objects in R and Shiny, and can also create standalone HTML documents. The component library is modular by design, reusable and natively extensible, and therefore simplifies the process of managing and developing bioinformatic applications.
14

REYNOLDS, ROBERT G., and ELENA ZANNONI. "EXTRACTING PROCEDURAL KNOWLEDGE FROM SOFTWARE SYSTEMS USING INDUCTIVE LEARNING IN THE PM SYSTEM." International Journal on Artificial Intelligence Tools 01, no. 03 (1992): 351–67. http://dx.doi.org/10.1142/s0218213092000247.

Abstract:
Biggerstaff and Richter suggest that there are four fundamental subtasks associated with operationalizing the reuse process [1]. They are finding reusable components, understanding these components, modifying these components, and composing components. Each of these sub-problems can be re-expressed as a knowledge acquisition sub-problem relative to producing a new representation for the components that make them more suitable for future reuse. In this paper, we express the first two subtasks for the software reuse activity, as described by Biggerstaff and Richter, as a problem in Machine Learning. From this perspective, the goal of software reuse is to learn to recognize reusable software in terms of code structure, run-time behavior, and functional specification. The Partial Metrics (PM) System supports the acquisition of reusable software at three different levels of granularity: the system level, the procedural level, and the code segment level. Here, we describe how the system extracts procedural knowledge from an example Pascal software system that satisfies a set of structural, behavioral, and functional constraints. These constraints are extracted from a set of positive and negative examples using inductive learning techniques. The constraints are expressed quantitatively in terms of various quality models and metrics. The general characteristics of learned constraints that were extracted from a variety of applications libraries are discussed.
15

Cechich, Alejandra, Agustina Buccella, Daniela Manrique, and Lucas Perez. "Towards Building Reuse-Based Digital Libraries for National Universities in Patagonia." Journal of Computer Science and Technology 18, no. 02 (2018): e10. http://dx.doi.org/10.24215/16666038.18.e10.

Abstract:
This article presents a case study exploring the use of software product lines and reference models as mechanisms of a reuse-based design process to build digital libraries. As a key component in a modern digital library, the reference architecture is responsible for helping define quality of the resulting repository. It is true that many efforts have been addressed towards providing interoperability; however, repositories are expected to provide high levels of reuse too, which goes beyond that of simple object sharing. This work presents the main steps we followed towards building a reusable digital library capable of accommodating such needs by (i) providing mechanisms to reuse resources, and (ii) enabling explicit sharing of commonalities in a distributed environment.
16

Capiluppi, Andrea, Klaas-Jan Stol, and Cornelia Boldyreff. "Software Reuse in Open Source." International Journal of Open Source Software and Processes 3, no. 3 (2011): 10–35. http://dx.doi.org/10.4018/jossp.2011070102.

Abstract:
A promising way to support software reuse is based on Component-Based Software Development (CBSD). Open Source Software (OSS) products are increasingly available that can be freely used in product development. However, OSS communities still face several challenges before taking full advantage of the “reuse mechanism”: many OSS projects duplicate effort, for instance when many projects implement a similar system in the same application domain and in the same topic. One successful counter-example is the FFmpeg multimedia project; several of its components are widely and consistently reused in other OSS projects. Documented is the evolutionary history of the various libraries of components within the FFmpeg project, which presently are reused in more than 140 OSS projects. Most use them as black-box components; although a number of OSS projects keep a localized copy in their repositories, eventually modifying them as needed (white-box reuse). In both cases, the authors argue that FFmpeg is a successful project that provides an excellent exemplar of a reusable library of OSS components.
17

BENJAMINS, V. RICHARD, and MANFRED ABEN. "Structure-preserving knowledge-based system development through reusable libraries: a case study in diagnosis." International Journal of Human-Computer Studies 47, no. 2 (1997): 259–88. http://dx.doi.org/10.1006/ijhc.1997.0117.

18

Kaur, Jagmeet, and Dr Dheerendra Singh. "Implementing Clustering Based Approach for Evaluation of Success of Software Reuse using K-means algorithm." INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 4, no. 3 (2013): 807–12. http://dx.doi.org/10.24297/ijct.v4i3.4199.

Abstract:
A great deal of research over the past several years has been devoted to the development of methodologies to create reusable software components and component libraries. But the issue of how to find the contribution of each factor towards the success of a reuse programme is still at a naïve stage, and very little work has been done on modelling the success of reuse. The success and failure factors are the key factors that predict successful reuse of software. An algorithm is proposed in which the inputs are given to a K-means clustering system in the form of tuned values of the data factors, and the developed model shows high-precision results that describe the success of software reuse.
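A minimal sketch of the proposed idea, assuming tuned factor scores as input: cluster reuse programmes with K-means and inspect the resulting groups. The factor names, data and number of clusters are illustrative only:

```python
# Cluster reuse programmes by their tuned success/failure factor scores.

import numpy as np
from sklearn.cluster import KMeans

# Rows: reuse programmes; columns: factor scores in [0, 1]
# (e.g. management support, library quality, developer training).
factors = np.array([
    [0.90, 0.80, 0.85],
    [0.80, 0.90, 0.80],
    [0.20, 0.30, 0.10],
    [0.30, 0.20, 0.25],
    [0.85, 0.75, 0.90],
    [0.15, 0.25, 0.20],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(factors)
print("cluster labels:", kmeans.labels_)        # e.g. successful vs unsuccessful reuse
print("cluster centres:\n", kmeans.cluster_centers_)
```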
19

Urquia, A., and S. Dormido. "Object-oriented Design of Reusable Model Libraries of Hybrid Dynamic Systems – Part One: A Design Methodology." Mathematical and Computer Modelling of Dynamical Systems 9, no. 1 (2003): 65–90. http://dx.doi.org/10.1076/mcmd.9.1.65.16516.

20

Urquia, A., and S. Dormido. "Object-oriented Design of Reusable Model Libraries of Hybrid Dynamic Systems – Part Two: A Case Study." Mathematical and Computer Modelling of Dynamical Systems 9, no. 1 (2003): 91–118. http://dx.doi.org/10.1076/mcmd.9.1.91.16515.

21

HOLZMAN, LARS E., TODD A. FISHER, LEON M. GALITSKY, APRIL KONTOSTATHIS, and WILLIAM M. POTTENGER. "A SOFTWARE INFRASTRUCTURE FOR RESEARCH IN TEXTUAL DATA MINING." International Journal on Artificial Intelligence Tools 13, no. 04 (2004): 829–49. http://dx.doi.org/10.1142/s0218213004001843.

Abstract:
Few tools exist that address the challenges facing researchers in the Textual Data Mining (TDM) field. Some are too specific to their application, or are prototypes not suitable for general use. More general tools often are not capable of processing large volumes of data. We have created a Textual Data Mining Infrastructure (TMI) that incorporates both existing and new capabilities in a reusable framework conducive to developing new tools and components. TMI adheres to strict guidelines that allow it to run in a wide range of processing environments – as a result, it accommodates the volume of computing and diversity of research occurring in TDM. A unique capability of TMI is support for optimization. This facilitates text mining research by automating the search for optimal parameters in text mining algorithms. In this article we describe a number of applications that use the TMI. A brief tutorial is provided on the use of TMI. We present several novel results that have not been published elsewhere. We also discuss how the TMI utilizes existing machine-learning libraries, thereby enabling researchers to continue and extend their endeavors with minimal effort. Towards that end, TMI is available on the web.
22

Patil, M. K., and P. P. Jamsandekar. "Retrieval of Similarity Measures of Code Component." IRA-International Journal of Technology & Engineering (ISSN 2455-4480) 6, no. 3 (2017): 38. http://dx.doi.org/10.21013/jte.v6.n3.p1.

Abstract:
Modern programming languages, especially object-oriented languages, make it possible to create libraries of reusable components (e.g. class definitions). Most software companies design components and reuse them wherever applicable. Maintaining such components (i.e. a class library) and accessing them at the right time and in the right form is challenging because of the large number of components in a library. Object-oriented programming supports reusability of code. A major challenge in programming is to improve the learning quality and productivity of software developers, subject teachers and students. To support programming in Java, the researchers implemented a design retrieval algorithm which makes it possible to search through potentially reusable Java classes. The proposed work selects the appropriate descriptors of the input cases (.java files), separates the code components automatically and stores them in the repository. The different levels of ambiguity in the selection of cases are controlled through a data preprocessing technique from data mining, and a set of adjustments is applied to obtain the similarity of the code components.
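As a rough, hypothetical illustration of descriptor-based retrieval (not the authors' algorithm), the sketch below extracts identifier descriptors from stored Java components and ranks them against a query by Jaccard similarity:

```python
# Rank repository components by descriptor overlap with a query component.
# The preprocessing, similarity measure and repository contents are invented.

import re

def descriptors(source: str) -> set:
    """Light preprocessing: lower-cased identifiers, minus a few Java keywords."""
    keywords = {"public", "class", "void", "int", "return", "new", "static"}
    return {tok.lower() for tok in re.findall(r"[A-Za-z_]\w*", source)} - keywords

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

repository = {
    "Stack.java": "public class Stack { void push(int x) {} int pop() { return 0; } }",
    "Queue.java": "public class Queue { void enqueue(int x) {} int dequeue() { return 0; } }",
    "Logger.java": "public class Logger { void log(String msg) {} }",
}

query = "class IntStack { void push(int v) {} int pop() { return top; } }"
q = descriptors(query)
ranked = sorted(repository,
                key=lambda name: jaccard(q, descriptors(repository[name])),
                reverse=True)
print(ranked)   # 'Stack.java' should rank first
```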
23

GAVA, FRÉDÉRIC. "A MODULAR IMPLEMENTATION OF DATA STRUCTURES IN BULK-SYNCHRONOUS PARALLEL ML." Parallel Processing Letters 18, no. 01 (2008): 39–53. http://dx.doi.org/10.1142/s0129626408003211.

Abstract:
A functional data-parallel language called BSML has been designed for programming Bulk-Synchronous Parallel algorithms. Many sequential algorithms do not have parallel counterparts and many non-computer science researchers do not want to deal with parallel programming. In sequential programming environments, common data structures are often provided through reusable libraries to simplify the development of applications. A parallel representation of such data structures is thus a solution for writing parallel programs without suffering from disadvantages of all the features of a parallel language. In this paper we describe a modular implementation in BSML of some data structures and show how those data types can address the needs of many potential users of parallel machines who have so far been deterred by the complexity of parallelizing code.
24

Tsoeunyane, Lekhobola, Simon Winberg, and Michael Inggs. "Software-Defined Radio FPGA Cores: Building towards a Domain-Specific Language." International Journal of Reconfigurable Computing 2017 (2017): 1–28. http://dx.doi.org/10.1155/2017/3925961.

Abstract:
This paper reports on the design and implementation of an open-source library of parameterizable and reusable Hardware Description Language (HDL) Intellectual Property (IP) cores designed for the development of Software-Defined Radio (SDR) applications that are deployed on FPGA-based reconfigurable computing platforms. The library comprises a set of cores that were chosen, together with their parameters and interfacing schemas, based on recommendations from industry and academic SDR experts. The operation of the SDR cores is first validated and then benchmarked against two other cores libraries of a similar type to show that our cores do not take much more logic elements than existing cores and that they support a comparable maximum clock speed. Finally, we propose our design for a Domain-Specific Language (DSL) and supporting tool-flow, which we are in the process of building using our SDR library and the Delite DSL framework. We intend to take this DSL and supporting framework further to provide a rapid prototyping system for SDR application development to programmers not experienced in HDL coding. We conclude with a summary of the main characteristics of our SDR library and reflect on how our DSL tool-flow could assist other developers working in SDR field.
25

Li, Chi, Zuxing Gu, Min Zhou, Jiecheng Wu, Jiarui Zhang, and Ming Gu. "API Misuse Detection in C Programs: Practice on SSL APIs." International Journal of Software Engineering and Knowledge Engineering 29, no. 11n12 (2019): 1761–79. http://dx.doi.org/10.1142/s0218194019400205.

Abstract:
Libraries offer reusable functionality through Application Programming Interfaces (APIs) with usage constraints such as call conditions or orders. Constraint violations, i.e. API misuses, commonly lead to bugs and security issues. Although researchers have developed various API misuse detectors in the past few decades, recent studies show that API misuse is prevalent in real-world projects, especially for secure socket layer (SSL) certificate validation, which is completely broken in many security-critical applications and libraries. In this paper, we introduce SSLDoc to effectively detect API misuse bugs, specifically for SSL API libraries. The key insight behind SSLDoc is a constraint-directed static analysis technique powered by a domain-specific language (DSL) for specifying API usage constraints. Through studying real-world API misuse bugs, we propose ISpec DSL, which covers majority types of API usage constraints and enables simple but precise specification. Furthermore, we design and implement SSLDoc to automatically parse ISpec into checking targets and employ a static analysis engine to identify potential API misuses and prune false positives with rich semantics. We have instantiated SSLDoc for OpenSSL APIs and applied it to large-scale open-source programs. SSLDoc found 45 previously unknown security-sensitive bugs in OpenSSL implementation and applications in Ubuntu. Up to now, 35 have been confirmed by the corresponding development communities and 27 have been fixed in master branch.
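The flavour of constraint-directed misuse checking can be conveyed with a toy example in which call-order constraints are checked against call sequences extracted from client code; this is not SSLDoc's ISpec DSL or its static analysis engine, and the rules and trace below are illustrative:

```python
# Check recorded API call sequences against simple "must call X before Y" rules.

RULES = [
    # (required_earlier_call, guarded_call)
    ("SSL_get_verify_result", "SSL_write"),
    ("SSL_connect", "SSL_write"),
]

def find_misuses(call_sequence):
    seen, misuses = set(), []
    for call in call_sequence:
        for required, guarded in RULES:
            if call == guarded and required not in seen:
                misuses.append(f"{guarded} called without prior {required}")
        seen.add(call)
    return misuses

# Hypothetical call sequence recovered from a client program.
trace = ["SSL_connect", "SSL_write", "SSL_shutdown"]
print(find_misuses(trace))   # flags the missing certificate-verification check
```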
26

Nédellec, Claire. "Integration of Machine Learning and Knowledge Acquisition." Knowledge Engineering Review 10, no. 1 (1995): 77–81. http://dx.doi.org/10.1017/s026988890000730x.

Abstract:
“Integration of Machine Learning and Knowledge Acquisition” may be a surprising title for an ECAI-94 workshop, since most machine learning (ML) systems are intended for knowledge acquisition (KA). So what seems problematic about integrating ML and KA? The answer lies in the difference between the approaches developed by what is referred to as ML and KA research. Apart from some major exceptions, such as learning apprentice tools (Mitchell et al., 1989), or libraries like the Machine Learning Toolbox (MLT Consortium, 1993), most ML algorithms have been described without any characterization in terms of real application needs, in terms of what they could be effectively useful for. Although ML methods have been applied to “real world” problems, few general and reusable conclusions have been drawn from these knowledge acquisition experiments. As ML techniques become more and more sophisticated and able to produce various forms of knowledge, the number of possible applications grows. ML methods tend then to be more precisely specified in terms of the domain knowledge initially required, the control knowledge to be set and the nature of the system output (MLT Consortium, 1993; Kodratoff et al., 1994).
27

COUNSELL, STEVE, PETE NEWSON, and EMILIA MENDES. "DESIGN LEVEL HYPOTHESIS TESTING THROUGH REVERSE ENGINEERING OF OBJECT-ORIENTED SOFTWARE." International Journal of Software Engineering and Knowledge Engineering 14, no. 02 (2004): 207–20. http://dx.doi.org/10.1142/s0218194004001609.

Abstract:
Comprehension of an object-oriented (OO) system, its design and use of OO features such as aggregation, generalisation and other forms of association is a difficult task to undertake without the original design documentation for reference. In this paper, we describe the collection of high-level class metrics from the UML design documentation of five industrial-sized C++ systems. Two of the systems studied were libraries of reusable classes. Three hypotheses were tested between these high-level features and the low-level class features of a number of class methods and attributes in each of the five systems. A further two conjectures were then investigated to determine features of key classes in a system and to investigate any differences between library-based systems and the other systems studied in terms of coupling. Results indicated that, for the three application-based systems, no clear patterns emerged for hypotheses relating to generalisation. There was, however, a clear (positive) statistical significance for all three systems studied between aggregation, other types of association and the number of methods and attributes in a class. Key classes in the three application-based systems tended to contain large numbers of methods, attributes, and associations, significant amounts of aggregation but little inheritance. No consistent, identifiable key features could be found in the two library-based systems; both showed a distinct lack of any form of coupling (including inheritance) other than through the C++ friend facility.
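One of the reported hypothesis tests can be mimicked on fabricated data: correlate the amount of aggregation and other associations in a class with its number of methods and attributes. This sketches the statistical step only, not the authors' measurements:

```python
# Rank-correlation test between class associations and class size.

from scipy.stats import spearmanr

# One pair per class: (aggregations + other associations, methods + attributes).
associations = [0, 1, 2, 2, 3, 4, 5, 6, 8, 10]
members = [4, 6, 7, 9, 10, 14, 15, 18, 22, 30]

rho, p_value = spearmanr(associations, members)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
# A small p-value with positive rho would support the reported positive association.
```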
28

Liao, Zitian, Shah Nazir, Habib Ullah Khan, and Muhammad Shafiq. "Assessing Security of Software Components for Internet of Things: A Systematic Review and Future Directions." Security and Communication Networks 2021 (February 15, 2021): 1–22. http://dx.doi.org/10.1155/2021/6677867.

Abstract:
Software components play a significant role in the functionality of software systems. A software component is an existing, reusable part of a software system that has previously been debugged, verified, and used in practice. The use of such components in a newly developed software system can save effort, time, and many resources. Because of this practice of using components for new developments, security is one of the major concerns for researchers to tackle. Security of software components can protect the software from illegal access and damage to its contents. Several existing approaches address the security of components from different perspectives in general, while security evaluation is specific. A detailed report of the existing approaches and techniques used for security purposes is needed so that researchers know about them. To tackle this issue, the current research presents a systematic literature review (SLR) of the approaches used by practitioners in the literature to assess the security of software components and protect software systems for the Internet of Things (IoT). The study searches the literature in popular and well-known libraries, filters the relevant literature, organizes the filtered papers, and extracts derivations from the selected studies based on different perspectives. The proposed study will benefit practitioners and researchers by supporting the report and helping them devise novel algorithms, techniques, and solutions for effective evaluation of the security of software components.
29

ABELSON, HAROLD, ANDREW A. BERLIN, JACOB KATZENELSON, et al. "THE SUPERCOMPUTER TOOLKIT: A GENERAL FRAMEWORK FOR SPECIAL-PURPOSE COMPUTING." International Journal of High Speed Electronics and Systems 03, no. 03n04 (1992): 337–61. http://dx.doi.org/10.1142/s0129156492000138.

Abstract:
The Supercomputer Toolkit is a family of hardware modules (processors, memory, interconnect, and input-output devices) and a collection of software modules (compilers, simulators, scientific libraries, and high-level front ends) from which high-performance special-purpose computers can be easily configured and programmed. Although there are many examples of special-purpose computers (see Ref. 4), the Toolkit approach is different in that our aim is to construct these machines from standard, reusable parts. These are combined by means of a user-reconfigurable, static interconnect technology. The Toolkit’s software support, based on novel compilation techniques, produces extremely high-performance numerical code from high-level language input. We have completed fabrication of the Toolkit processor module, and several critical software modules. An eight-processor configuration is running at MIT. We have used the prototype Toolkit to perform a breakthrough computation of scientific importance—an integration of the motion of the Solar System that extends previous results by nearly two orders of magnitude. While the Toolkit project is not complete, we believe our results show evidence that generating special-purpose computers from standard modules can be an important method of performing intensive scientific computing. This paper briefly describes the Toolkit’s hardware and software modules, the Solar System simulation, conclusions and future plans.
30

Dore, C., and M. Murphy. "CURRENT STATE OF THE ART HISTORIC BUILDING INFORMATION MODELLING." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W5 (August 18, 2017): 185–92. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w5-185-2017.

Abstract:
In an extensive review of existing literature a number of observations were made in relation to the current approaches for recording and modelling existing buildings and environments: Data collection and pre-processing techniques are becoming increasingly automated to allow for near real-time data capture and fast processing of this data for later modelling applications. Current BIM software is almost completely focused on new buildings and has very limited tools and pre-defined libraries for modelling existing and historic buildings. The development of reusable parametric library objects for existing and historic buildings supports modelling with high levels of detail while decreasing the modelling time. Mapping these parametric objects to survey data, however, is still a time-consuming task that requires further research. Promising developments have been made towards automatic object recognition and feature extraction from point clouds for as-built BIM. However, results are currently limited to simple and planar features. Further work is required for automatic accurate and reliable reconstruction of complex geometries from point cloud data. Procedural modelling can provide an automated solution for generating 3D geometries but lacks the detail and accuracy required for most as-built applications in AEC and heritage fields.
31

Klesmith, Justin R., and Benjamin J. Hackel. "Improved mutant function prediction via PACT: Protein Analysis and Classifier Toolkit." Bioinformatics 35, no. 16 (2018): 2707–12. http://dx.doi.org/10.1093/bioinformatics/bty1042.

Abstract:
Motivation: Deep mutational scanning experiments have enabled the measurement of the sequence-function relationship for thousands of mutations in a single experiment. The Protein Analysis and Classifier Toolkit (PACT) is a Python software package that marries the fitness metric of a given mutation within these experiments to sequence and structural features, enabling downstream analyses. PACT enables the easy development of user-sharable protocols for custom deep mutational scanning experiments, as all code is modular and reusable between protocols. Protocols for mutational libraries with single or multiple mutations are included. To exemplify its utility, PACT assessed two deep mutational scanning datasets that measured the tradeoff of enzyme activity and enzyme stability. Results: PACT efficiently evaluated classifiers that predict protein mutant function tested on deep mutational scanning screens. We found that the classifiers with the lowest false positive and highest true positive rate assess sequence homology, contact number and whether the mutation involves proline. Availability and implementation: PACT and the processed datasets are distributed freely under the terms of the GPL-3 license. The source code is available at GitHub (https://github.com/JKlesmith/PACT). Supplementary information: Supplementary data are available at Bioinformatics online.
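A hedged sketch of the classifier-evaluation step reported for PACT: score mutants with a single hypothetical feature and compute true and false positive rates against deep mutational scanning labels; PACT's own protocols and features (homology, contact number, proline) are richer:

```python
# Evaluate a toy mutant-function classifier with an ROC curve (invented data).

import numpy as np
from sklearn.metrics import roc_curve

# 1 = mutant retained function in the screen, 0 = lost function.
labels = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
# Hypothetical per-mutant score (e.g. a conservation-based feature).
scores = np.array([0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.6, 0.45, 0.85, 0.1])

fpr, tpr, thresholds = roc_curve(labels, scores)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold {th:.2f}: TPR={t:.2f}, FPR={f:.2f}")
```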
32

Latt, Jonas, Christophe Coreixas, and Joël Beny. "Cross-platform programming model for many-core lattice Boltzmann simulations." PLOS ONE 16, no. 4 (2021): e0250306. http://dx.doi.org/10.1371/journal.pone.0250306.

Abstract:
We present a novel, hardware-agnostic implementation strategy for lattice Boltzmann (LB) simulations, which yields massive performance on homogeneous and heterogeneous many-core platforms. Based solely on C++17 Parallel Algorithms, our approach does not rely on any language extensions, external libraries, vendor-specific code annotations, or pre-compilation steps. Thanks in particular to a recently proposed GPU back-end to C++17 Parallel Algorithms, it is shown that a single code can compile and reach state-of-the-art performance on both many-core CPU and GPU environments for the solution of a given non trivial fluid dynamics problem. The proposed strategy is tested with six different, commonly used implementation schemes to test the performance impact of memory access patterns on different platforms. Nine different LB collision models are included in the tests and exhibit good performance, demonstrating the versatility of our parallel approach. This work shows that it is less than ever necessary to draw a distinction between research and production software, as a concise and generic LB implementation yields performances comparable to those achievable in a hardware specific programming language. The results also highlight the gains of performance achieved by modern many-core CPUs and their apparent capability to narrow the gap with the traditionally massively faster GPU platforms. All code is made available to the community in form of the open-source project stlbm, which serves both as a stand-alone simulation software and as a collection of reusable patterns for the acceleration of pre-existing LB codes.
33

Jackson, Michael, Kostas Kavoussanakis, and Edward W. J. Wallace. "Using prototyping to choose a bioinformatics workflow management system." PLOS Computational Biology 17, no. 2 (2021): e1008622. http://dx.doi.org/10.1371/journal.pcbi.1008622.

Abstract:
Workflow management systems represent, manage, and execute multistep computational analyses and offer many benefits to bioinformaticians. They provide a common language for describing analysis workflows, contributing to reproducibility and to building libraries of reusable components. They can support both incremental build and re-entrancy—the ability to selectively re-execute parts of a workflow in the presence of additional inputs or changes in configuration and to resume execution from where a workflow previously stopped. Many workflow management systems enhance portability by supporting the use of containers, high-performance computing (HPC) systems, and clouds. Most importantly, workflow management systems allow bioinformaticians to delegate how their workflows are run to the workflow management system and its developers. This frees the bioinformaticians to focus on what these workflows should do, on their data analyses, and on their science. RiboViz is a package to extract biological insight from ribosome profiling data to help advance understanding of protein synthesis. At the heart of RiboViz is an analysis workflow, implemented in a Python script. To conform to best practices for scientific computing which recommend the use of build tools to automate workflows and to reuse code instead of rewriting it, the authors reimplemented this workflow within a workflow management system. To select a workflow management system, a rapid survey of available systems was undertaken, and candidates were shortlisted: Snakemake, cwltool, Toil, and Nextflow. Each candidate was evaluated by quickly prototyping a subset of the RiboViz workflow, and Nextflow was chosen. The selection process took 10 person-days, a small cost for the assurance that Nextflow satisfied the authors’ requirements. The use of prototyping can offer a low-cost way of making a more informed selection of software to use within projects, rather than relying solely upon reviews and recommendations by others.
34

DI FABBRIZIO, GIUSEPPE, GOKHAN TUR, DILEK HAKKANI-TÜR, et al. "Bootstrapping spoken dialogue systems by exploiting reusable libraries." Natural Language Engineering 14, no. 03 (2007). http://dx.doi.org/10.1017/s1351324907004561.

35

"Retrieval of Java Program Code Components using Case Based Reasoning (CBR)." International Journal of Engineering and Advanced Technology 9, no. 3 (2020): 2938–53. http://dx.doi.org/10.35940/ijeat.c5483.029320.

Abstract:
Object Oriented Programming (OOP) facilitates the creation of libraries of reusable software components. The reusability approach in developing a new system can be applied to an existing system with prior modifications. Reusability definitely decreases the time and effort required for developing the new system. To support reusability of program code, a proper code retrieval process is necessary; it makes it possible to search for similar code components in the Java programming environment. The OOP paradigm has a specific style of writing program code: the program code is a collection of objects, classes and methods. It is very easy to store the cases and reuse or revise them wherever necessary. To get the similarity between program code components, it is necessary to have an efficient retrieval method. The retrieval phase can retrieve program code components as classes, methods, and interfaces depending on the components selected by the user. A purely case-based approach is adopted for revising or reusing the existing cases to solve new problems. Case Based Reasoning (CBR) is the process of solving new problems based on the experience coming from similar past problems.
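A minimal case-based-reasoning sketch of the retrieval phase, assuming each stored case carries a simple numeric descriptor vector: past cases are ranked against a new problem by cosine similarity, and the best match is returned for reuse or revision. Cases and features are hypothetical:

```python
# Retrieve the most similar past case (Java code component) for a new problem.

from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Each case: a descriptor vector (e.g. counts of methods, fields, interfaces, keyword hits).
case_base = {
    "LinkedListStack.java": [5, 2, 1, 3],
    "FileLogger.java": [3, 1, 0, 0],
    "ArrayQueue.java": [6, 2, 1, 2],
}

def retrieve(query_vector, k=1):
    ranked = sorted(case_base.items(),
                    key=lambda kv: cosine(query_vector, kv[1]),
                    reverse=True)
    return ranked[:k]   # top-k cases to reuse, or to revise for the new problem

print(retrieve([5, 2, 1, 2]))
```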
36

Rodchenkov, Igor, Ozgun Babur, Augustin Luna, et al. "Pathway Commons 2019 Update: integration, analysis and exploration of pathway data." Nucleic Acids Research, October 24, 2019. http://dx.doi.org/10.1093/nar/gkz946.

Abstract:
Pathway Commons (https://www.pathwaycommons.org) is an integrated resource of publicly available information about biological pathways including biochemical reactions, assembly of biomolecular complexes, transport and catalysis events and physical interactions involving proteins, DNA, RNA, and small molecules (e.g. metabolites and drug compounds). Data is collected from multiple providers in standard formats, including the Biological Pathway Exchange (BioPAX) language and the Proteomics Standards Initiative Molecular Interactions format, and then integrated. Pathway Commons provides biologists with (i) tools to search this comprehensive resource, (ii) a download site offering integrated bulk sets of pathway data (e.g. tables of interactions and gene sets), (iii) reusable software libraries for working with pathway information in several programming languages (Java, R, Python and Javascript) and (iv) a web service for programmatically querying the entire dataset. Visualization of pathways is supported using the Systems Biological Graphical Notation (SBGN). Pathway Commons currently contains data from 22 databases with 4794 detailed human biochemical processes (i.e. pathways) and ∼2.3 million interactions. To enhance the usability of this large resource for end-users, we develop and maintain interactive web applications and training materials that enable pathway exploration and advanced analysis.
37

Appleby, Phil. "Linking Genomic Data with Phenotypes Derived from Electronic Health Records." International Journal of Population Data Science 1, no. 1 (2017). http://dx.doi.org/10.23889/ijpds.v1i1.173.

Abstract:
Objectives: To build a searchable database for SNP array data from the GoDARTS data set, in which a combined view of genotype data derived from multiple assay platforms can be extracted for both candidate gene and GWA studies, and to combine this with a database of phenotype descriptors which are saved as shareable, reusable database objects and which persist beyond the lifetime of any analysis script. Also, to build databases and software solutions which can be made readily available to laboratories and academic institutions which may not have the resources to adopt one of the larger genotype/phenotype integration solutions. Approach: Two databases were built. The first is a hybrid genomics database in which variant and study subject data are stored in a database, with variant detail data retained in Variant Call Format (VCF) files. The second database saves phenotype descriptors as shareable, modifiable database objects alongside a table of events derived from the set of available Electronic Health Records (EHRs). All detail from the EHRs is also retained in the database, which is delivered on a project-by-project basis using virtual machines. Both databases are accessed using web applications, allowing delivery of data to the users’ desktops. Results: Traditionally the process of deriving genotype and phenotype data for epidemiological studies can be a laborious one, with genotype data being retrieved from large, flat data files and phenotypes being defined by codes in flat EHR records which are tested and filtered in scripts written for analysis in a statistical package such as Stata, SPSS or R. In our solution, genotype data can be retrieved in seconds and delivered to the users’ desktops. Similarly, lists of cases and controls can be downloaded based on saved or transient phenotype descriptors. Phenotype descriptors derived from codes in Electronic Health Records are saved as reusable, shareable and modifiable database objects, allowing rapid retrieval of phenotype data. Conclusion: The ability to access genomic data from multiple assay platforms and to use this in conjunction with shareable libraries of phenotype objects allows rapid access to data for analysis using both genomic SNP array data and linked Electronic Health Records. Analysis on data extracted from our linked databases should proceed more rapidly and should be more easily reproducible.
38

Triebel, Dagmar, Dragan Ivanovic, Gila Kahila Bar-Gal, Sven Bingert, and Tanja Weibulat. "Towards a COST MOBILISE Guideline for Long Term Preservation and Archiving of Data Constructs from Scientific Collections Facilities." Biodiversity Information Science and Standards 5 (September 3, 2021). http://dx.doi.org/10.3897/biss.5.73901.

Abstract:
COST (European Cooperation in Science and Technology) is a funding organisation for research and innovation networks. One of the objectives of the COST Action called “Mobilising Data, Policies and Experts in Scientific Collections” (MOBILISE) is to work on documents for expert training with broad involvement of professionals from the participating European countries. The guideline presented here in its general concept will address principles, strategies and standards for long term preservation and archiving of data constructs (data packages, data products) as addressed by and under control of the scientific collections community. The document is being developed as part of the MOBILISE Action, targeted primarily towards scientific staff at natural scientific collection facilities, as well as management bodies of collections like museums and herbaria, and information technology personnel less familiar with data archiving principles and routines. The challenges of big data storage and (distributed, cloud-based) storage solutions, as well as those of data mirroring, backing up, synchronisation and publication in productive data environments, are well addressed by documents, guidelines and online platforms, e.g., in the DISSCo knowledge base (see Hardisty et al. (2020)) and as part of concepts of the European Open Science Cloud (EOSC). Archival processes and the resulting data constructs, however, are often left outside of these considerations. This is a large gap because archival issues are not only simple technical ones, as addressed by the term “bit preservation”, but also envisage a number of logical, functional, normative, administrative and semantic issues, as addressed by the term “functional long-term archiving”. The main target digital object types addressed by this COST MOBILISE Guideline are data constructs called Digital or Digital Extended Specimens and data products with the persistent identifier assignment lying under the authority of scientific collections facilities. Such digital objects are specified according to the Digital Object Architecture (DOA; see Wittenburg et al. 2018) and similar abstract models introduced by Harjes et al. (2020) and Lannom et al. (2020). The scientific collection-specific types are defined following evolving concepts in the context of the Consortium of European Taxonomic Facilities (CETAF), the research infrastructure DiSSCo (Distributed System of Scientific Collections), and the Biodiversity Information Standards (TDWG). Archival processes are described following the OAIS (Open Archival Information System) reference model. The archived objects should be reusable in the sense of the FAIR (Findable, Accessible, Interoperable, and Reusable) guiding principles. Organisations like national (digital) archives, computing or professional (domain-specific) data centers as well as libraries might offer specific archiving services and act as partner organisations of scientific collections facilities. The guideline consists of key messages that have been defined. They address the collection community, especially the staff and leadership of taxonomic facilities. Aspects of several groups of stakeholders are discussed, as well as cost models. The guideline does not recommend specific solutions for archiving software and workflows. Supplementary information is delivered via a wiki-based platform for the COST MOBILISE Archiving Working Group WG4.
39

Penev, Lyubomir, Donat Agosti, Teodor Georgiev, et al. "The Open Biodiversity Knowledge Management (eco-)System: Tools and Services for Extraction, Mobilization, Handling and Re-use of Data from the Published Literature." Biodiversity Information Science and Standards 2 (May 17, 2018). http://dx.doi.org/10.3897/biss.2.25748.

Abstract:
The Open Biodiversity Knowledge Management System (OBKMS) is an end-to-end, eXtensible Markup Language (XML)- and Linked Open Data (LOD)-based ecosystem of tools and services that encompasses the entire process of authoring, submission, review, publication, dissemination, and archiving of biodiversity literature, as well as the text mining of published biodiversity literature (Fig. 1). These capabilities lead to the creation of interoperable, computable, and reusable biodiversity data with provenance linking facts to publications. OBKMS is the result of a joint endeavour by Plazi and Pensoft lasting many years. The system was developed with the support of several biodiversity informatics projects: initially ViBRANT (Virtual Biodiversity Research and Access Network for Taxonomy), followed by pro-iBiosphere, the European Biodiversity Observation Network (EU BON), and BIG4 (Biosystematics, informatics and genomics of the big 4 insect groups). The system includes the following key components:
ARPHA Journal Publishing Platform: a journal publishing platform based on the TaxPub XML extension for the National Library of Medicine (NLM) Journal Publishing Document Type Definition (DTD) (Version 3.0); its advanced ARPHA-BioDiv component deals with integrated biodiversity data and narrative publishing (Penev et al. 2017).
GoldenGATE Imagine: an environment for marking up, enhancing, and extracting text and data from PDF files, supporting the TaxonX XML schema. It has specific enhancements for articles containing descriptions of taxa ("taxonomic treatments") in the field of biological systematics, but its core features may be used for general purposes as well.
Biodiversity Literature Repository (BLR): a public repository hosted at Zenodo (CERN) for published articles (PDF and XML) and images extracted from articles.
Ocellus/Zenodeo: a search interface for the images stored at BLR.
TreatmentBank: an XML-based repository for taxonomic treatments and the data extracted from literature.
The OpenBiodiv knowledge graph: a biodiversity knowledge graph built according to the Linked Open Data (LOD) principles. It uses the RDF data model and the SPARQL query language, is open to the public, and is powered by the OpenBiodiv-O ontology (Senderov et al. 2018).
OpenBiodiv portal: semantic search and browsing of the biodiversity knowledge graph, with multiple semantic apps packaging specific views of the graph.
Supporting tools: the Pensoft Markup Tool (PMT), the ARPHA Writing Tool (AWT), ReFindit, R libraries for working with RDF and for converting XML to RDF (ropenbio, RDF4R), and the Plazi RDF converter, web services and APIs.
As part of OBKMS, Plazi and Pensoft offer the following services beyond supplying the software toolkit:
Digitization through imaging and text capture of paper-based or digitally born (PDF) legacy literature.
XML markup of both legacy and newly published literature (journals and books).
Data extraction and markup of taxonomic names, literature references, taxonomic treatments and organism occurrence records.
Export and storage of text, images, and structured data in data repositories.
Linking and semantic enhancement of text and data, bibliographic references, taxonomic treatments, illustrations, organism occurrences and organism traits.
Re-packaging of extracted information into new, user-demanded outputs via semantic apps at the OpenBiodiv portal.
Re-publishing of legacy literature (e.g., Flora, Fauna, and Mycota series, and important biodiversity monographs).
Semantic open access publishing (including data publishing) of journals and books.
Integration of biodiversity information from legacy and newly published literature into interoperable biodiversity repositories and platforms (Global Biodiversity Information Facility (GBIF), Encyclopedia of Life (EOL), Species-ID, Plazi, Wikidata, and others).
In this presentation we make the case for why OpenBiodiv is an essential tool for advancing biodiversity science. Our argument is that through OpenBiodiv, biodiversity science makes a step towards the ideals of open science (Senderov and Penev 2016). Furthermore, by linking data from various silos, OpenBiodiv allows for the discovery of hidden facts. A particular example of how OpenBiodiv can advance biodiversity science is demonstrated by its solution to "taxonomic anarchy" (Garnett and Christidis 2017), a term coined by Garnett and Christidis to denote the instability of taxonomic names as symbols for taxonomic meaning. They propose an "authoritarian" top-down approach to stabilize the naming of species. OpenBiodiv, on the other hand, relies on taxonomic concepts as integrative units, so integration can occur through alignment of taxonomic concepts via Region Connection Calculus (RCC-5) (Franz and Peet 2009). The alignment is "democratically" created by the users of the system, but no consensus is forced and "anarchy" is avoided by using unambiguous taxonomic concept labels (Franz et al. 2016) in addition to Linnean names.