
Journal articles on the topic 'Enterprise Repository'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'Enterprise Repository.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Lee, Heeseok, and Jae-Woo Joung. "An Enterprise Model Repository." Journal of Database Management 11, no. 1 (2000): 16–28. http://dx.doi.org/10.4018/jdm.2000010102.

2

Whitman, Lawrence E., and John Huffman. "A PRACTICAL ENTERPRISE MODEL REPOSITORY." IFAC Proceedings Volumes 42, no. 4 (2009): 918–23. http://dx.doi.org/10.3182/20090603-3-ru-2001.0558.

3

Whitman, Lawrence E., and Danny Santanu. "ENABLING AN ENTERPRISE MODEL REPOSITORY." IFAC Proceedings Volumes 38, no. 1 (2005): 13–18. http://dx.doi.org/10.3182/20050703-6-cz-1902.01521.

4

Rieger, Oya Yildirim. "Sustainability: Scholarly repository as an enterprise." Bulletin of the American Society for Information Science and Technology 39, no. 1 (2012): 27–31. http://dx.doi.org/10.1002/bult.2012.1720390110.

5

ГОНТАР, Юлія Миколаївна, Ольга Юріївна ЧЕРЕДНІЧЕНКО, Олексій Олександрович КУСТОВ, and Світлана Іванівна ЄРШОВА. "DEVELOPMENT OF REPOSITORY OF BUSINESS-INFORMATION ON THE ENTERPRISE." Bulletin of NTU "KhPI". Series: Strategic Management, Portfolio, Program and Project Management 4, no. 2(1111) (2015): 120–25. http://dx.doi.org/10.20998/2413-3000.2015.1111.22.

6

Easton, Annette, and George Easton. "Enterprise Content Management Should Be Academic." International Journal of Management & Information Systems (IJMIS) 18, no. 1 (2013): 27. http://dx.doi.org/10.19030/ijmis.v18i1.8336.

Abstract:
The digital divide between a curriculum that affords no direct exposure to business-oriented enterprise content management systems and the surprising ubiquity of, and dependency on, such systems in business motivated a classroom test of SharePoint as a surrogate for a university-supported course management system. The classroom test became the basis of a proof-of-concept model for a college-wide document repository conceived to manage most of the college's departmental and committee documents, including those related to AACSB maintenance of accreditation. The use of a business-tested enterprise content management system for academic purposes could narrow the academic/industry digital divide and may remove an impediment to the adage 'practice what you teach.'
7

Ding, Baobao, Tong Wu, Yingming Yang, Liang Dou, and Tiancheng Jin. "ArchiMate Customization and Architecture Repository Management Practices: for a Technology-Intensive Enterprise." Journal of Physics: Conference Series 1187, no. 4 (2019): 042049. http://dx.doi.org/10.1088/1742-6596/1187/4/042049.

8

Kopp, Andrii M., and Dmytro L. Orlovskyi. "Estimation and analysis of business process models similarity in enterprise continuum repository." System research and information technologies, no. 4 (December 18, 2018): 48–57. http://dx.doi.org/10.20535/srit.2308-8893.2018.4.04.

9

Plasek, Joseph M., Foster R. Goss, Kenneth H. Lai, et al. "Food entries in a large allergy data repository." Journal of the American Medical Informatics Association 23, e1 (2015): e79-e87. http://dx.doi.org/10.1093/jamia/ocv128.

Abstract:
Objective: Accurate food adverse sensitivity documentation in electronic health records (EHRs) is crucial to patient safety. This study examined, encoded, and grouped foods that caused any adverse sensitivity in a large allergy repository using natural language processing and standard terminologies.
Methods: Using the Medical Text Extraction, Reasoning, and Mapping System (MTERMS), we processed both structured and free-text entries stored in an enterprise-wide allergy repository (Partners’ Enterprise-wide Allergy Repository), normalized diverse food allergen terms into concepts, and encoded these concepts using the Systematized Nomenclature of Medicine – Clinical Terms (SNOMED-CT) and Unique Ingredient Identifiers (UNII) terminologies. Concept coverage also was assessed for these two terminologies. We further categorized allergen concepts into groups and calculated the frequencies of these concepts by group. Finally, we conducted an external validation of MTERMS’s performance when identifying food allergen terms, using a randomized sample from a different institution.
Results: We identified 158 552 food allergen records (2140 unique terms) in the Partners repository, corresponding to 672 food allergen concepts. High-frequency groups included shellfish (19.3%), fruits or vegetables (18.4%), dairy (9.0%), peanuts (8.5%), tree nuts (8.5%), eggs (6.0%), grains (5.1%), and additives (4.7%). Ambiguous, generic concepts such as “nuts” and “seafood” accounted for 8.8% of the records. SNOMED-CT covered more concepts than UNII in terms of exact (81.7% vs 68.0%) and partial (14.3% vs 9.7%) matches.
Discussion: Adverse sensitivities to food are diverse, and existing standard terminologies have gaps in their coverage of the breadth of allergy concepts.
Conclusion: New strategies are needed to represent and standardize food adverse sensitivity concepts, to improve documentation in EHRs.
10

Weichselbraun, Albert, Daniel Streiff, and Arno Scharl. "Consolidating Heterogeneous Enterprise Data for Named Entity Linking and Web Intelligence." International Journal on Artificial Intelligence Tools 24, no. 02 (2015): 1540008. http://dx.doi.org/10.1142/s0218213015400084.

Abstract:
Linking named entities to structured knowledge sources paves the way for state-of-the-art Web intelligence applications which assign sentiment to the correct entities, identify trends, and reveal relations between organizations, persons and products. For this purpose this paper introduces Recognyze, a named entity linking component that uses background knowledge obtained from linked data repositories, and outlines the process of transforming heterogeneous data silos within an organization into a linked enterprise data repository which draws upon popular linked open data vocabularies to foster interoperability with public data sets. The presented examples use comprehensive real-world data sets from Orell Füssli Business Information, Switzerland's largest business information provider. The linked data repository created from these data sets comprises more than nine million triples on companies, the companies' contact information, key people, products and brands. We identify the major challenges of tapping into such sources for named entity linking, and describe required data pre-processing techniques to use and integrate such data sets, with a special focus on disambiguation and ranking algorithms. Finally, we conduct a comprehensive evaluation based on business news from the New Journal of Zurich and AWP Financial News to illustrate how these techniques improve the performance of the Recognyze named entity linking component.
11

Ahmad, Mohammad Nazir, Nor Hidayati Zakaria, and Darshana Sedera. "Ontology-Based Knowledge Management for Enterprise Systems." International Journal of Enterprise Information Systems 7, no. 4 (2011): 64–90. http://dx.doi.org/10.4018/jeis.2011100104.

Abstract:
Companies face the challenges of expanding their markets, improving products, services and processes, and exploiting intellectual capital in a dynamic network. Therefore, more companies are turning to an Enterprise System (ES). Knowledge management (KM) has also received considerable attention and is continuously gaining the interest of industry, enterprises, and academia. For ES, KM can provide support across the entire lifecycle, from selection and implementation to use. In addition, it is also recognised that an ontology is an appropriate methodology to accomplish a common consensus of communication, as well as to support a diversity of KM activities, such as knowledge repositories, retrieval, sharing, and dissemination. This paper examines the role of ontology-based KM for ES (OKES) and investigates the possible integration of ontology-based KM and ES. The authors develop a taxonomy as a framework for understanding OKES research. In order to achieve the objective of this study, a systematic review of existing research was conducted. Based on a theoretical framework covering the ES lifecycle, KM, KM for ES, ontology, and ontology-based KM, and guided by the framework of the study, a taxonomy for OKES is established.
12

O'Leary, Daniel E. "Enterprise Resource Planning (ERP) Systems: An Empirical Analysis of Benefits." Journal of Emerging Technologies in Accounting 1, no. 1 (2004): 63–72. http://dx.doi.org/10.2308/jeta.2004.1.1.63.

Abstract:
This paper uses a database, derived from a data repository, to analyze enterprise resource planning (ERP) system benefits. ERP benefits are important for a number of reasons, including establishing a match between what ERP system benefits are—as compared to ERP expectations—setting a benchmark for other firms, and measuring those benefits. ERP benefits also are central to the business case for deciding whether a firm will invest in an ERP system. It is found that some benefits vary across industry, while others seem to be important to firms independent of industry. In particular, tangible benefits are largely industry-independent, while intangible benefits vary across industry. In addition, when compared to an earlier study by Deloitte Consulting, the results are statistically consistent with their findings, but reveal substantial additional intangible benefits.
13

Rábová, Ivana. "Business rules formalisation for information systems." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 58, no. 6 (2010): 399–406. http://dx.doi.org/10.11118/actaun201058060399.

Abstract:
The article deals with the relation between business rules and business applications, and describes a number of structures that support information system implementation and customization. Particular structure formats differ according to the type of business rule. We start from the enterprise architecture model, a significant document of everything that happens in the business, which serves as a blueprint and facilitates managers' decisions. The most complicated part of enterprise architecture is the business rule. Once we obtain its accurate formulation, and once we can formalize the business rule and store it in a special repository, we can manage it, update it, and use it for many purposes. The article emphasizes formats of business rule formalization and their relevance to business application implementation.
14

Panwar, Arvind, and Vishal Bhatnagar. "Data Lake Architecture." International Journal of Organizational and Collective Intelligence 10, no. 1 (2020): 63–75. http://dx.doi.org/10.4018/ijoci.2020010104.

Abstract:
Data is the biggest asset after people for businesses, and it is a new driver of the world economy. The volume of data that enterprises gather every day is growing rapidly. This rapid growth of data in terms of volume, variety, and velocity is known as Big Data. Big Data is a challenge for enterprises, and the biggest challenge is how to store it. In the past, and in some organizations still today, data warehouses have been used to store Big Data. Enterprise data warehouses work on the schema-on-write concept, but Big Data analytics requires storage that works on the schema-on-read concept. To meet market demand, researchers are working on a new data repository system for Big Data storage known as a data lake, defined as a landing area for raw data from many sources. There is some confusion, and there are questions that must be answered, about data lakes. The objective of this article is to reduce that confusion and address some of these questions with the help of architecture.
15

Jansen, Gregory, Aaron Coburn, Adam Soroka, Will Thomas, and Richard Marciano. "DRAS-TIC Linked Data: Evenly Distributing the Past." Publications 7, no. 3 (2019): 50. http://dx.doi.org/10.3390/publications7030050.

Abstract:
Memory institutions must be able to grow a fully functional repository incrementally as collections grow, without expensive enterprise storage, massive data migrations, and the performance limits that stem from vertical storage strategies. The Digital Repository at Scale that Invites Computation (DRAS-TIC) Fedora research project, funded by a two-year National Digital Platform grant from the Institute for Museum and Library Services (IMLS), is producing open-source software, tested cluster configurations, documentation, and best-practice guides that enable institutions to manage linked data repositories with petabyte-scale collections reliably. DRAS-TIC is a research initiative at the University of Maryland (UMD). The first DRAS-TIC repository system, named Indigo, was developed in 2015 and 2016 through a collaboration between U.K.-based storage company Archive Analytics Ltd. and the UMD iSchool Digital Curation Innovation Center (DCIC), through funding from an NSF DIBBs (Data Infrastructure Building Blocks) grant (NCSA “Brown Dog”). DRAS-TIC Indigo leverages industry-standard distributed database technology, in the form of Apache Cassandra, to provide open-ended scaling of repository storage without performance degradation. With the DRAS-TIC Fedora initiative, we make use of the Trellis Linked Data Platform (LDP), developed by Aaron Coburn at Amherst College, to add the LDP API over similar Apache Cassandra storage. This paper will explain our partner use cases, explore the system components, and showcase our performance-oriented approach, with the most emphasis given to performance measures available through the analytical dashboard on our testbed website.
16

Lu, Shu Ping, Kuei Kai Shao, and Kuo Shu Luo. "A Service-Oriented After-Sales Services System in Mechanical Engineering Industry." Applied Mechanics and Materials 307 (February 2013): 447–50. http://dx.doi.org/10.4028/www.scientific.net/amm.307.447.

Abstract:
This paper presents a service-oriented after-sales services system for the mechanical engineering industry. Typical after-sales services include status tracking by customers, customer services, assignors, and assignees. The proposed after-sales service tracking management system is presented as work in progress based on a case study. Our system can connect with other service-related systems, such as an enterprise content management repository system and a business process management system. The after-sales services system was developed by consulting and visiting machine tool manufacturers.
17

Denny, Diane, and Caitlin Keenan. "Cost, value, and policy in quality: Evolving relationship of payers and providers." Journal of Clinical Oncology 32, no. 30_suppl (2014): 44. http://dx.doi.org/10.1200/jco.2014.32.30_suppl.44.

Abstract:
Background: Cancer Treatment Centers of America (CTCA), a network of five hospitals, has multiple pay-for-performance contracts. Some apply to all hospitals, while others are unique to individual facilities. While many metrics are managed by site, some metrics are collected and measured centrally by corporate. Varying levels of oversight were required from site quality teams, executive teams, and externally-facing corporate departments. In order to coordinate these efforts, CTCA implemented a real-time tracking system for monitoring pay-for-performance metrics.
Methods: CTCA’s corporate Quality department designed a central repository for monitoring metrics that is populated monthly using customized workbooks submitted from each site. The central repository compiles monthly data and displays it in several formats broken down by timeframe, site, and contract. The repository incorporates features allowing easy sharing and data accuracy. An accompanying process for collecting data was created to ensure timely, efficient updates to the repository.
Results: The monthly monitoring process was implemented in January 2014 and continues to operate successfully. For one enterprise-wide contract cycle closing on April 31, CTCA met requirements and outperformed the contract-specified goal for each by an average of 19.8%. Pre-implementation, metrics were monitored quarterly. Although two metrics showed a small (<2.5%) decrease in average performance, given the intensified focus, the remaining metrics showed an average 7.95% improvement in performance during the second six months with monthly monitoring. Metrics associated with the remaining contracts are on track to meet or exceed specified performance standards.
Conclusions: The new tool and monthly reporting process successfully monitored pay-for-performance metrics at CTCA. Most importantly, the system allowed for close to real-time assessment and therefore promoted ongoing intervention if warranted. The repository created a single source of data sharing and visibility for these important elements of performance. As CTCA continues to participate in pay-for-performance activities, this method will be adapted to accommodate more contracts and metrics.
18

Blanco, Carlos, Ignacio García-Rodríguez de Guzmán, Eduardo Fernández-Medina, and Juan Trujillo. "An MDA approach for developing secure OLAP applications: Metamodels and transformations." Computer Science and Information Systems 12, no. 2 (2015): 541–65. http://dx.doi.org/10.2298/csis140617007b.

Abstract:
Decision makers query enterprise information stored in Data Warehouses (DW) by using tools (such as On-Line Analytical Processing (OLAP) tools) which employ specific views or cubes from the corporate DW or Data Marts, based on multidimensional modelling. Since the information managed is critical, security constraints have to be correctly established in order to avoid unauthorized access. In previous work we defined a Model-Driven based approach for developing a secure DW repository by following a relational approach. Nevertheless, it is also important to define security constraints in the metadata layer that connects the DW repository with the OLAP tools; that is, over the same multidimensional structures that end users manage. This paper incorporates a proposal for developing secure OLAP applications within our previous approach: it improves a UML profile for conceptual modelling; it defines a logical metamodel for OLAP applications; and it defines and implements transformations from conceptual to logical models, as well as from logical models to secure implementation in a specific OLAP tool (SQL Server Analysis Services).
19

Dumont, Emmanuel L. P., Benjamin Tycko, and Catherine Do. "CloudASM: an ultra-efficient cloud-based pipeline for mapping allele-specific DNA methylation." Bioinformatics 36, no. 11 (2020): 3558–60. http://dx.doi.org/10.1093/bioinformatics/btaa149.

Abstract:
Summary: Methods for quantifying the imbalance in CpG methylation between alleles genome-wide have been described, but their algorithmic time complexity is quadratic and their practical use requires painstaking attention to infrastructure choice, implementation, and execution. To solve this problem, we developed CloudASM, a scalable, ultra-efficient, turn-key, portable pipeline on Google Cloud Platform (GCP) that uses a novel pipeline manager and GCP’s serverless enterprise data warehouse.
Availability and implementation: CloudASM is freely available in the GitHub repository https://github.com/TyckoLab/CloudASM, and a sample dataset and its results are also freely available at https://console.cloud.google.com/storage/browser/cloudasm.
Contact: emmanuel.dumont@hmh-cdi.org
20

Kapsoulis, Nikolaos, Alexandros Psychas, Georgios Palaiokrassas, Achilleas Marinakis, Antonios Litke, and Theodora Varvarigou. "Know Your Customer (KYC) Implementation with Smart Contracts on a Privacy-Oriented Decentralized Architecture." Future Internet 12, no. 2 (2020): 41. http://dx.doi.org/10.3390/fi12020041.

Abstract:
Enterprise blockchain solutions attempt to solve the crucial matter of user privacy, albeit that blockchain was initially directed towards full transparency. In the context of Know Your Customer (KYC) standardization, a decentralized schema that enables user privacy protection on enterprise blockchains is proposed with two types of developed smart contracts. Through the public KYC smart contract, a user registers and uploads their KYC information to the exploited IPFS storage, actions interpreted in blockchain transactions on the permissioned blockchain of Alastria Network. Furthermore, through the public KYC smart contract, an admin user approves or rejects the validity and expiration date of the initial user’s KYC documents. Inside the private KYC smart contract, CRUD (Create, read, update and delete) operations for the KYC file repository occur. The presented system introduces effectiveness and time efficiency of operations through its schema simplicity and smart integration of the different technology modules and components. This developed scheme focuses on blockchain technology as the most important and critical part of the architecture and tends to accomplish an optimal schema clarity.
21

Sołek-Borowska, Celina. "The use and benefits of information communication technology by Polish small and medium sized enterprises." Online Journal of Applied Knowledge Management 6, no. 1 (2018): 211–25. http://dx.doi.org/10.36965/ojakm.2018.6(1)211-225.

Abstract:
Despite the significant contribution that information communication technology (ICT) has made to business, prior studies indicate a large number of unsuccessful ICT implementations in small and medium sized enterprises (SMEs). Moreover, the literature also indicates that SMEs’ utilization of ICT tools is low worldwide, and that is also the case with Polish SMEs. This paper aimed to identify the ICT tools used by Polish SMEs, the areas in which they are used, and the benefits companies perceive from such use. Findings of this research study of 153 Polish SMEs suggest that the most popular ICT tool is email, used by 88% of surveyed entities. Emails contain knowledge and information that is not codified in any knowledge repository. The main area of the enterprise where employees use new technologies is marketing, as confirmed by 44% of the surveyed entities. About 59% of the companies surveyed recognize that the use of new technologies brings benefits, which translate into an increase in the company's profits. This study found that supporting the development of modern ICT tools is important for stimulating the economic growth of Polish SMEs.
22

Klann, Jeffrey G., Michael Mendis, Lori C. Phillips, et al. "Taking advantage of continuity of care documents to populate a research repository." Journal of the American Medical Informatics Association 22, no. 2 (2014): 370–79. http://dx.doi.org/10.1136/amiajnl-2014-003040.

Abstract:
Objective: Clinical data warehouses have accelerated clinical research, but even with available open source tools, there is a high barrier to entry due to the complexity of normalizing and importing data. The Office of the National Coordinator for Health Information Technology's Meaningful Use Incentive Program now requires that electronic health record systems produce standardized consolidated clinical document architecture (C-CDA) documents. Here, we leverage this data source to create a low-volume, standards-based import pipeline for the Informatics for Integrating Biology and the Bedside (i2b2) clinical research platform. We validate this approach by creating a small repository at Partners Healthcare automatically from C-CDA documents.
Materials and methods: We designed an i2b2 extension to import C-CDAs into i2b2. It is extensible to other sites with variances in C-CDA format without requiring custom code. We also designed new ontology structures for querying the imported data.
Results: We implemented our methodology at Partners Healthcare, where we developed an adapter to retrieve C-CDAs from Enterprise Services. Our current implementation supports demographics, encounters, problems, and medications. We imported approximately 17 000 clinical observations on 145 patients into i2b2 in about 24 min. We were able to perform i2b2 cohort-finding queries and view patient information through SMART apps on the imported data.
Discussion: This low-volume import approach can serve small practices with local access to C-CDAs and will allow patient registries to import patient-supplied C-CDAs. These components will soon be available open source on the i2b2 wiki.
Conclusions: Our approach will lower barriers to entry in implementing i2b2 where informatics expertise or data access are limited.
23

Kumar, Amit, and T. V. Vijay Kumar. "Materialized View Selection Using Swap Operator Based Particle Swarm Optimization." International Journal of Distributed Artificial Intelligence 13, no. 1 (2021): 58–73. http://dx.doi.org/10.4018/ijdai.2021010103.

Abstract:
The data warehouse is a key data repository of any business enterprise that stores enormous historical data meant for answering analytical queries. These queries need to be processed efficiently in order to make efficient and timely decisions. One way to achieve this is by materializing views over a data warehouse. An n-dimensional star schema can be mapped into an n-dimensional lattice from which Top-K views can be selected for materialization. Selection of such Top-K views is an NP-Hard problem. Several metaheuristic algorithms have been used to address this view selection problem. In this paper, a swap operator-based particle swarm optimization technique has been adapted to address such a view selection problem.
24

Brzozowski, Bartosz, Karol Kawka, Krzysztof Kaźmierczak, Zdzisław Rochala, and Konrad Wojtowicz. "Supporting the Process of Aircraft Maintenance with Mobile Devices." Transactions on Aerospace Research 2017, no. 2 (2017): 7–18. http://dx.doi.org/10.2478/tar-2017-0011.

Abstract:
Maintenance of aircraft is a complex process; therefore, in order to optimize the process, integrated information systems are increasingly used. The rapid development and wide availability of mobile devices equipped with powerful processors and a wide range of modern communication connections suggest their high usability for enterprise IT systems. In the Department of Avionics and Air Armament of the Military University of Technology (WAT), an ERP-class (Enterprise Resource Planning) system intended to support aircraft maintenance [4] has been designed and developed. The main concept of the system is to store aircraft-related and maintenance information in a central repository, i.e., in databases hosted on a central database server. This solution ensures concurrent availability of the data to a large group of authorized users. The key components of the system include the database server and client applications, which ensure access to centralized information resources according to assigned user rights. The project involves development of client applications using three technologies: web, desktop, and mobile. The developed client applications have successfully passed integration tests performed using sample maintenance data. Work on user authorization security and wireless data security is currently under way.
25

Yun, Jinhyo Joseph, Abiodun A. Egbetoku, and Xiaofei Zhao. "How Does a Social Open Innovation Succeed? Learning from Burro Battery and Grassroots Innovation Festival of India." Science, Technology and Society 24, no. 1 (2018): 122–43. http://dx.doi.org/10.1177/0971721818806101.

Abstract:
As people pay attention to social innovation as a source of innovative ideas and a repository of new business models, this study poses the following research questions: How does a social open innovation succeed? What is the success factor of social open innovation? What are the successful dynamics of social open innovation? This article selected two case studies: one is the Burro Battery Company in Ghana, and the other is a grassroots innovation enterprise of India known as the Honey Bee Network and its collaborator, the National Innovation Foundation (NIF), Ahmedabad. The first is a social open innovation firm case, while the second is a social open innovation policy case. Through in-depth case study, we identified the ways in which social open innovation strategy and social open innovation policy succeed.
26

Paul, Manoj, and S. K. Ghosh. "A Service-Oriented Approach for Integrating Heterogeneous Spatial Data Sources: Realization of a Virtual Geo-Data Repository." International Journal of Cooperative Information Systems 17, no. 01 (2008): 111–53. http://dx.doi.org/10.1142/s0218843008001774.

Abstract:
Searching and accessing geospatial information in the open and distributed environments of geospatial information systems poses several challenges due to the heterogeneity in geospatial data. Geospatial data is highly heterogeneous — both at the syntactic and semantic level. The requirement for an integration architecture for seamless access of geospatial data has been raised over the past decades. The paper proposes a service-based model for geospatial integration where each geospatial data provider is interfaced on the web as services. The interface for these services has been described with Open Geospatial Consortium (OGC) specified service standards. Catalog service provides service descriptions for the services to be discovered. The semantic of each service description is captured in the form of ontology. The similarity assessment method of request service with candidate services proposed in this paper is aimed at resolving the heterogeneity in semantics of locational terms of service descriptions. In a way, we have proposed an architecture for enterprise geographic information system (E-GIS), which is an organization-wide approach to GIS integration, operation, and management. A query processing mechanism for accessing geospatial information in the service-based distributed environment has also been discussed with the help of a case study.
APA, Harvard, Vancouver, ISO, and other styles
27

Melouk, Sharif H., Uzma Raja, and Burcu B. Keskin. "Managing Resource Allocation and Task Prioritization Decisions in Large Scale Virtual Collaborative Development Projects." Information Resources Management Journal 23, no. 2 (2010): 53–76. http://dx.doi.org/10.4018/irmj.2010102604.

Full text
Abstract:
The authors use a simulation approach to determine effective management of resource allocation and task prioritization decisions for the development of open source enterprise solutions software in the context of a large scale collaborative development project (CDP). Unlike traditional software systems where users have limited access to the development team, in open source environments the resolution of issues is a collaborative effort among users and the team. However, as the project grows in size, complexity, and usage, effective allocation of resources and prioritization of tasks become a necessity to improve the operational performance of the software system. In this paper, by mining an open source software repository, the authors analyze collaborative issue resolution in a CDP and its effects on the resource allocation of team developers. This article examines several scenarios to evaluate the effects of forum discussions, resource allocation, and task prioritization on the operational performance of the software system.
APA, Harvard, Vancouver, ISO, and other styles
28

Searle, Samantha, Malcolm Wolski, Natasha Simons, and Joanna Richardson. "Librarians as partners in research data service development at Griffith University." Program: electronic library and information systems 49, no. 4 (2015): 440–60. http://dx.doi.org/10.1108/prog-02-2015-0013.

Full text
Abstract:
Purpose – The purpose of this paper is to describe the evolution to date and future directions in research data policy, infrastructure, skills development and advisory services in an Australian university, with a focus on the role of librarians. Design/methodology/approach – The authors have been involved in the development of research data services at Griffith, and the case study presents observations and reflections arising from their first-hand experiences. Findings – Griffith University’s organisational structure and “whole-of-enterprise” approach has facilitated service development to support research data. Fostering strong national partnerships has also accelerated development of institutional capability. Policies and strategies are supported by pragmatic best practice guidelines aimed directly at researchers. Iterative software development and a commitment to well-supported enterprise infrastructure enable the provision of a range of data management solutions. Training programs, repository support and data planning services are still relatively immature. Griffith recognises that information services staff (including librarians) will need more opportunities to develop knowledge and skills to support these services as they evolve. Originality/value – This case study provides examples of library-led and library-supported activities that could be used for comparative purposes by other libraries. At the same time, it provides a critical perspective by contrasting areas of good practice within the University with those of less satisfactory progress. While other institutions may have different constraints or opportunities, some of the major concepts within this paper may prove useful to advance the development of research data capability and capacity across the library profession.
APA, Harvard, Vancouver, ISO, and other styles
29

Denny, Diane, and Danielle Gross. "Establishing an oncology quality dashboard for improvement." Journal of Clinical Oncology 34, no. 7_suppl (2016): 223. http://dx.doi.org/10.1200/jco.2016.34.7_suppl.223.

Full text
Abstract:
223 Background: Cancer Treatment Centers of America (CTCA), a national network of five hospitals, formed a Quality Improvement Committee to bring together both clinical and administrative leadership (physician, nursing, pharmacy and quality) across the enterprise vested in the oversight of quality and patient safety. Tasked with facilitating and driving measurement and improvement, the group established a Quality dashboard housing metrics of importance given current performance, evidence-based practice, and evolving external standards. The dashboard includes a mix of outcome, process, and structure measures with targets, both externally driven and CTCA established. Methods: The dashboard houses data sourced from a multitude of systems and is segmented into clinically relevant categories such as Infection Prevention, Nursing, Medication Management, and Medical Oncology, to name a few. These categories are further broken down into specific measures, reported by quarter, and color-coded (red/yellow/green) to display progress. Targets are established based upon external benchmarks, where data is available. Each metric is also graphically displayed, trending the 6 most recent quarters of data as another visual reference. Finally, a glossary was created to standardize key measure details across the enterprise for accurate data collection, including documentation of source, numerator, denominator, references, inclusions, exclusions and rationale. Data is updated monthly to reflect the most current performance and promote timely intervention. Results: Since development of the Quality dashboard, CTCA has experienced various red-to-yellow-to-green progressions and has raised its targets, indicating significant advancement in performance. Standard data collection and visual identifiers within one repository have enabled CTCA to accurately focus time and resources where needed in order to continually improve care.
Conclusions: The establishment of the enterprise-wide quality dashboard has led to great collaboration within CTCA’s network of hospitals. The display of the data allows both a focus upon individual and network strengths and opportunities for improvement as well as a mechanism for communicating shared goals.
APA, Harvard, Vancouver, ISO, and other styles
30

Trisnawarman, Dedi, and Muhammad Choirul Imam. "BUSINESS INTELLIGENCE FRAMEWORK FOR PERFORMANCE MEASUREMENT IN HIGHER EDUCATION STUDY PROGRAMS." Jurnal Muara Sains, Teknologi, Kedokteran dan Ilmu Kesehatan 4, no. 2 (2020): 249. http://dx.doi.org/10.24912/jmstkik.v4i2.8877.

Full text
Abstract:
Business Intelligence (BI) is an online, real-time class of application needed by large and modern organizations to increase competitive advantage in the global competitive environment. Higher Education (HE) institutions are modern organizations with substantial resources, with student numbers reaching tens of thousands, and thus require an application that can be used as a tool to achieve the organization's business goals. This study aims to build a BI model and framework for developing decision-making applications to measure the business performance of HE. The method used in this research is a BI development method derived from software engineering development methods and adapted to the Study Program Accreditation Instrument (IAPS) 4.0. The case study used is the Information Systems Study Program at Tarumanagara University. The BI development stages are: Business Case Assessment, Enterprise Infrastructure Evaluation, Project Planning, Project Requirements, Data Analysis, Prototyping, Meta Data Analysis, Database Design, ETL Design, Meta Data Design, ETL Development, Application Development, Data Mining, Meta Data Repository Development, Implementation, Release Evaluation. The results of this study are a development stage model and framework that can be used to build BI applications for study program performance measurement. Keywords: Business Intelligence framework; performance measurement; higher education.
APA, Harvard, Vancouver, ISO, and other styles
31

Saraç-Lesavre, Başak. "Desire for the ‘worst’: Extending nuclear attachments in southeastern New Mexico." Environment and Planning D: Society and Space 38, no. 4 (2019): 753–71. http://dx.doi.org/10.1177/0263775819889969.

Full text
Abstract:
In the city of Carlsbad, New Mexico, where a deep geological repository named the Waste Isolation Pilot Plant receives regular shipments of military transuranic waste that will remain radioactive for 10,000 years, a group of local actors has been making major efforts to extend their attachments to nuclear waste futures. In January 2012, Forbes magazine named Carlsbad ‘The Town that Wants America’s Worst Atomic Waste’. Obviously, linking the verb ‘want’ with ‘worst’ is meant to show how unexpected this desire might be. This paper offers an ethnographic account of activities undertaken by their Nuclear Task Force at a peculiar moment. Articulating the relationship between valuation and desiring, it develops the notion of proactive valuation undertakings to refer to local actors’ attempts to generate new attachments and to maintain existing ones through the succession of a series of mediated and equipped valuation undertakings. Rather than approaching desire for waste with scepticism, it follows what ‘desiring’ actors do and how they do it in ‘enterprise form’, and offers a novel, symmetrical and more comprehensive approach to examine not only the co-production of places and technopolitical orders, but also the constitution of political and moral values at specific sites.
APA, Harvard, Vancouver, ISO, and other styles
32

Georgiadou, Anna, Spiros Mouzakitis, and Dimitris Askounis. "Assessing MITRE ATT&CK Risk Using a Cyber-Security Culture Framework." Sensors 21, no. 9 (2021): 3267. http://dx.doi.org/10.3390/s21093267.

Full text
Abstract:
The MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) Framework provides a rich and actionable repository of adversarial tactics, techniques, and procedures. Its innovative approach has been broadly welcomed by both vendors and enterprise customers in the industry. Its usage extends from adversary emulation, red teaming and behavioral analytics development to defensive gap and SOC (Security Operations Center) maturity assessment. While extensive research has been done on analyzing specific attacks or the specific organizational culture and human behavior factors leading to such attacks, a holistic view on the association of both is currently missing. In this paper, we present our research results on associating a comprehensive set of organizational and individual culture factors (as described in our developed cyber-security culture framework) with security vulnerabilities mapped to specific adversary behavior and patterns utilizing the MITRE ATT&CK framework. We thus exploit MITRE ATT&CK’s possibilities in a scientific direction that has not yet been explored: security assessment and defensive design, a step prior to its current application domain. The suggested cyber-security culture framework was originally designed to aim at critical infrastructures and, more specifically, the energy sector. Organizations of these domains exhibit a co-existence and strong interaction of the IT (Information Technology) and OT (Operational Technology) networks. As a result, we focus our scientific effort on the hybrid MITRE ATT&CK for Enterprise and ICS (Industrial Control Systems) model as a broader and more holistic approach. The results of our research can be utilized in an extensive set of applications, including the efficient organization of security procedures as well as enhancing security readiness evaluation results by providing more insights into imminent threats and security risks.
APA, Harvard, Vancouver, ISO, and other styles
33

Bilal, Ahmad Raza, Aaisha Arbab Khan, and Michèle Eunice Marie Akoorie. "Constraints to growth: a cross country analysis of Chinese, Indian and Pakistani SMEs." Chinese Management Studies 10, no. 2 (2016): 365–86. http://dx.doi.org/10.1108/cms-06-2015-0127.

Full text
Abstract:
Purpose: This paper aims to identify the barriers that are linked to the institutional, external and social environmental factors in the emerging economies of South-East Asia (SEA). Through a comparative analysis of China, India and Pakistan, this study attempts to understand the constraints that might inhibit small and medium-sized enterprises (SMEs) in this region from becoming more successful. Design/methodology/approach: This study proposes an empirical research framework to identify the constraints to determinants of SMEs’ growth (the CDSG model) in an important geographic and industrial cluster of SEA countries including China, India and Pakistan. Six propositions are tested, using data from 1,443 SMEs obtained from the World Bank’s Enterprise Survey Data Repository. Ordinary least-squares estimation is applied for statistical analyses and testing of the research propositions. Findings: The results show the differential effects of the proposed CDSG model in China, India and Pakistan. Access to external finance is found to be irrelevant to the growth of SMEs in China, while it has a positive influence in India and Pakistan. Furthermore, in terms of the innovation process, partial mediation is traced. Using the tax rate factor, negative mediation is found between CDSG variables and SMEs’ growth. Both mediators play different roles in firm growth activities, while the level of significance of some variables is found to be more relevant to a specific region rather than to all. Practical implications: The prudent management of the proposed CDSG variables could turn the constraints facing SME growth into success factors. This could invigorate the growth of SMEs in SEA countries. The paper concludes with practical implications for policymakers and investors. Originality/value: This theoretical SME framework is the first to use innovation and tax rate mediators to highlight the determinants of business growth in three SEA regional economies (China, India and Pakistan).
APA, Harvard, Vancouver, ISO, and other styles
34

Kumar, Amit, and T. V. Vijay Kumar. "Improved Quality View Selection for Analytical Query Performance Enhancement Using Particle Swarm Optimization." International Journal of Reliability, Quality and Safety Engineering 24, no. 06 (2017): 1740001. http://dx.doi.org/10.1142/s0218539317400010.

Full text
Abstract:
A data warehouse, which is a central repository of the detailed historical data of an enterprise, is designed primarily for supporting high-volume analytical processing in order to support strategic decision-making. Queries for such decision-making are exploratory, long and intricate in nature and involve the summarization and aggregation of data. Furthermore, the rapidly growing volume of data warehouses makes the response times of queries substantially large. The query response times need to be reduced in order to reduce delays in decision-making. Materializing an appropriate subset of views has been found to be an effective alternative for achieving acceptable response times for analytical queries. This problem, being NP-Complete, can be addressed using swarm intelligence techniques. One such technique, the similarity interaction operator-based particle swarm optimization (SIPSO), has been used to address this problem. Accordingly, a SIPSO-based view selection algorithm (SIPSOVSA), which selects the Top-k views from a multidimensional lattice, has been proposed in this paper. Experimental comparison with the most fundamental view selection algorithm shows that the former is able to select relatively better quality Top-k views for materialization. As a result, the views selected using SIPSOVSA improve the performance of analytical queries, leading to greater efficiency in decision-making.
APA, Harvard, Vancouver, ISO, and other styles
35

Prorokowski, Lukasz. "Bank’s perspective on regulatory-driven changes to collateral management." Journal of Financial Regulation and Compliance 22, no. 2 (2014): 128–46. http://dx.doi.org/10.1108/jfrc-07-2013-0025.

Full text
Abstract:
Purpose – The aim of this paper is to discuss the impact of regulatory-driven changes to the collateral management landscape, indicating operational and technological challenges faced by global investment banks while complying with the new regulatory framework. As it transpires, collateral management strategies need to be revised to find optimal solutions for the regulatory-shaped landscape. Furthermore, set against the regulatory background, this paper attempts to provide some insights into the future risks and shocks to collateral management. Design/methodology/approach – This paper recognizes the dearth of up-to-date studies on current issues with collateral management and overnight indexed swap (OIS) discounting. Therefore, to introduce new theoretical avenues, this paper is based on an exploratory, qualitative approach to analyse the regulatory-driven collateral management. Findings – The increased use of collateral, with a sharp focus on its quality, liquidity and eligibility for central clearing, requires a new approach to collateral management and discounting methods. At this point, banks (especially those with agency businesses) should develop an enterprise-wide view of collateral by having a central data repository, which allows access to information about the transactions conducted with all counterparties. Originality/value – Analysing the regulatory-driven (Basel III; Dodd-Frank; EMIR) changes to collateral management, this paper adopts banks’ perspectives on the new regulations in collateral management. The paper contributes to the widespread, albeit complex, discussion on how banks adapt to the rapidly changing environment in collateral management and risk operations.
APA, Harvard, Vancouver, ISO, and other styles
36

Pinet, Simone. "Walk on the Wild Side." Medieval Encounters 14, no. 2-3 (2008): 368–89. http://dx.doi.org/10.1163/157006708x366308.

Full text
Abstract:
The figure of the wild man is one that crosses artistic disciplines and genres in the cultures of medieval Iberia. In this article I show how the wild man operates within a variety of meanings in diverse literary contexts that, working simultaneously at different narrative levels, cross over from literature into daily life and spectacles, from legal to political discourses. The figure's continued presence from the medieval period into the sixteenth and seventeenth centuries suggests its use as a commonplace, as a motif with a number of fixed meanings that are put to work through context, providing the possibility of different, perhaps even contradictory readings. As commonplace, then, the wild man is presented as a case study for the reconsideration of other elements in the paintings of the Hall of Justice of the Alhambra, often interpreted to have a specific or fixed meaning, and thus programmed within a particular narrative. Seen in its entirety as a repository of commonplaces, I interpret the complex of the lateral paintings of the Hall of Justice in relation to the central one, in which a set of ten kings in Nasrid dress are depicted as conversing, as pretexts for narration that can be of a literary or juridical nature. I then go on to provide a possible itinerary of reading for the wild man scene not only in its immediate context, but as part of the overall visual project in a political key that illustrates the productive makeup of the paintings as pedagogical and ideological enterprise.
APA, Harvard, Vancouver, ISO, and other styles
37

Sarmento, PT, PhD, Caio V. M., Mehrdad Maz, MD, Taylor Pfeifer, DPT, Marco Pessoa, PhD, and Wen Liu, PhD. "Opioid prescription patterns among patients with fibromyalgia." Journal of Opioid Management 15, no. 6 (2019): 469–77. http://dx.doi.org/10.5055/jom.2019.0537.

Full text
Abstract:
Objectives: To investigate opioid prescribing patterns among patients with fibromyalgia (FM) in terms of age, gender, race, and type of opioids, and to examine changes in opioid prescription over the past 8 years compared to US Food and Drug Administration (FDA)-approved FM medications. Design: Retrospective review of data using the Healthcare Enterprise Repository for Ontological Narration database. The collected data were analyzed descriptively, and a chi-square test for trend was used to analyze a possible linear relationship between the proportions of opioid and non-opioid users over time. Participants: Patients with a diagnosis of FM who had received opioid prescriptions from January 1, 2010 to December 31, 2017, and FM patients who had received prescriptions of FDA-approved FM medications in the same period. Main outcome measure: Trends in opioid and non-opioid prescriptions in patients with FM. Results: Opioid medications were prescribed more frequently in 2010 (40 percent) and 2011 (42 percent), but the percentages have decreased since 2012 and reached the lowest numbers in 2016 (27 percent). The chi-square test for trend shows that from 2010 to 2017 the prescription of opioids had a statistically significant (p < 0.0001) decrease. Conclusion: This study suggests that the frequency of prescribed opioids in FM patients has decreased since 2012. This decline could be attributed to (1) FDA monitoring programs, (2) national efforts to increase awareness of the addictive and harmful effects of opioids, and (3) the growing research on the efficacy of non-opioid therapies to treat chronic pain conditions including FM.
APA, Harvard, Vancouver, ISO, and other styles
38

Lyashenko, V. I., V. I. Golik, and V. Z. Dyatchin. "Increasing environmental safety by reducing technogenic load in mining regions." Izvestiya. Ferrous Metallurgy 63, no. 7 (2020): 529–38. http://dx.doi.org/10.17073/0368-0797-2020-7-529-538.

Full text
Abstract:
One of the most problematic points in the technology for storing ore enrichment waste materials with hardener admixture into underground mined space and tailing dumps is the tailings of the hydrometallurgical plant (HMP). They are supplied through a slurry pipeline to the tailing dump in the form of pulp with a solid-to-liquid mass ratio of 1:2. The liquid phase of the pulp, after gravity separation and clarification in the tailing dump, is returned to the technological cycle of the HMP. The storage technology under consideration has several disadvantages: high nonrecurrent capital costs for construction of the tailing dump at full design capacity; high probability of harmful chemicals migrating into groundwater if the protective shields of the base or sides of the tailings are damaged. The authors have used data from the literature and patent documentation considering storage parameters, laboratory and production experiments, physical modeling and selection of compositions of hardening mixtures. Analytical studies and comparative analysis of theoretical and practical results by standard and new methods were performed. The possibility of using hardening mixtures with adjacent production wastes as binders was established. The optimal composition of ingredients per 1 m3 of hardening mixture is proposed as follows: 1350–1500 kg of HMP tailings; 50–70 kg of binder (cement); 350 liters of mixing water. The proposed technology of storing ore enrichment waste into underground mined space and tailings with hardener admixture allows, at an enterprise production capacity of 1,500 thousand tons per year, using the underground mined space to store 50–55% of the tailings and storing the rest of the wastes, cemented by binding material, in a repository. When the entire area of the tailing dump mirror is filled with cemented tails to a height of 10 m at an HMP capacity of up to 1.5 million tons per year, its operation life is extended by 50 years.
APA, Harvard, Vancouver, ISO, and other styles
39

Goss, Foster R., Kenneth H. Lai, Maxim Topaz, et al. "A value set for documenting adverse reactions in electronic health records." Journal of the American Medical Informatics Association 25, no. 6 (2017): 661–69. http://dx.doi.org/10.1093/jamia/ocx139.

Full text
Abstract:
Objective: To develop a comprehensive value set for documenting and encoding adverse reactions in the allergy module of an electronic health record. Materials and Methods: We analyzed 2 471 004 adverse reactions stored in Partners Healthcare’s Enterprise-wide Allergy Repository (PEAR) of 2.7 million patients. Using the Medical Text Extraction, Reasoning, and Mapping System, we processed both structured and free-text reaction entries and mapped them to Systematized Nomenclature of Medicine – Clinical Terms. We calculated the frequencies of reaction concepts, including rare, severe, and hypersensitivity reactions. We compared PEAR concepts to a Federal Health Information Modeling and Standards value set and University of Nebraska Medical Center data, and then created an integrated value set. Results: We identified 787 reaction concepts in PEAR. Frequently reported reactions included: rash (14.0%), hives (8.2%), gastrointestinal irritation (5.5%), itching (3.2%), and anaphylaxis (2.5%). We identified an additional 320 concepts from Federal Health Information Modeling and Standards and the University of Nebraska Medical Center to resolve gaps due to missing and partial matches when comparing these external resources to PEAR. This yielded 1106 concepts in our final integrated value set. The presence of rare, severe, and hypersensitivity reactions was limited in both external datasets. Hypersensitivity reactions represented roughly 20% of the reactions within our data. Discussion: We developed a value set for encoding adverse reactions using a large dataset from one health system, enriched by reactions from 2 large external resources. This integrated value set includes clinically important severe and hypersensitivity reactions. Conclusion: This work contributes a value set, harmonized with existing data, to improve the consistency and accuracy of reaction documentation in electronic health records, providing the necessary building blocks for more intelligent clinical decision support for allergies and adverse reactions.
APA, Harvard, Vancouver, ISO, and other styles
40

CHEN, JINHUA. "A Daoist princess and a Buddhist temple: a new theory on the causes of the canon-delivering mission originally proposed by princess Jinxian (689–732) in 730." Bulletin of the School of Oriental and African Studies 69, no. 2 (2006): 267–92. http://dx.doi.org/10.1017/s0041977x06000127.

Full text
Abstract:
Yunjusi has, over the past few decades, earned a worldwide reputation for the immense repository of Buddhist scriptures carved on the stone slabs that are stored there (the so-called Stone Canon of Fangshan [Fangshan shijing]). The heroic enterprise of carving the whole Buddhist canon into stone had already been initiated during the Daye era (604–617) thanks to the resolve of the monk Jingwan (?–639) and support from Empress Xiao (d. after 630) of Sui Yangdi (r. 604–617) and her brother Xiao Yu (574–647). However, it did not accelerate until 740 when Xuanzong, as urged by his sister Princess Jinxian (689–732), ordered two eminent monks from the capital monastery Great Chongfusi (one of them being the great Buddhist historian and cataloguer Zhisheng [fl. 740s]) to deliver over four-thousand fascicles of Buddhist translations, which constituted the main body of the newly compiled Kaiyuan Buddhist canon, to Yunjusi to serve as base texts for the stone scriptures. This event is remarkable and puzzling for at least three reasons. First, although Yunjusi, a local temple situated far from the capitals, was not a Kaiyuan monastery, it still had the honour of being chosen as a recipient of the Kaiyuan canon. Second, one cannot help but wonder why and how two Chongfusi monks, who were of obvious prestige, should have demonstrated such enthusiasm in escorting so many Buddhist texts to this apparently marginal temple. Finally, it is difficult to understand why Princess Jinxian, who was then an ordained Taoist nun, played such an active and decisive role in this project. Such a remarkable and important event inevitably invited considerable attention from scholars, who have noted, and attempted to explain, several aspects of the mystery surrounding Princess Jinxian's Yunjusi ties. This article attempts to address this old issue from a perspective that has never been explored. 
It broaches and elaborates on the possibility that the great Avataṃsaka master Fazang's (643–712) ties with Yunjusi form a major missing piece in this complex puzzle.
APA, Harvard, Vancouver, ISO, and other styles
41

Geraldes, Mauro Cesar, Corbiniano Silva, Ariadne Marra de Souza, et al. "ALKALINE COMPLEXES AS POTENTIAL AREAS FOR INSTALLATION OF REPOSITORIES OF NUCLEAR WASTE: CRETACEOUS INTRUSIONS IN BRAZIL." Journal of Sedimentary Environments 2, no. 2 (2017): 111–32. http://dx.doi.org/10.12957/jse.2017.30051.

Full text
Abstract:
Brazil initiated its nuclear program in the 1970s with the installation of the Angra I thermonuclear plant. The implementation of this type of enterprise involves activities which generate environmental risks, mainly due to the disposal of nuclear waste. The storage of nuclear waste must follow strict international standards and faces the resistance of civil society. Thus, complex technical and political procedures are required to find a suitable place for this end. The use of geologic bodies that fit the required features for the installation of nuclear repositories is described in the literature. This kind of geological unit must be deep, structured and with low infiltration. These aspects should be identified through geologic, hydrologic and geophysical studies, as well as structural models. This study aims to report the results of the regional geophysical survey performed to delimit the gneissic host rocks of the Ribeira Belt and to characterize their magnetic properties and main textural variations. This work also includes the analysis of the structural features of the rocks (faults and fractures), and of how these structures change within the massif, coupled with mineralogical and chemical signatures. The magnetic anomaly map displays a series of semi-circular magnetic highs, following the general configuration of the alkaline intrusions. In Brazil, the intrusive bodies with potential to function as repositories are the alkaline complexes, since they are isotropic, massive, have well-defined geometry and compatible depth, and show low or no human occupation. However, special attention must be given to the possible occurrence of water percolation in fractures and faults. Accordingly, detailed studies of alkaline bodies that fit the expected features comprise the following intrusions: Mendanha, Tinguá, Tanguá, Soarinho, Rio Bonito and Morro de São João. This study intends to be a contribution to the knowledge of the properties of alkaline bodies with potential to be used for the storage of nuclear waste in Brazil in the near future.
APA, Harvard, Vancouver, ISO, and other styles
42

Cheng, Xiufeng, Ziming Zhang, Yue Yang, and Zhonghua Yan. "Open collaboration between universities and enterprises: a case study on GitHub." Internet Research 30, no. 4 (2020): 1251–79. http://dx.doi.org/10.1108/intr-01-2019-0013.

Full text
Abstract:
Purpose – Social coding platforms (SCPs) have been adopted by scores of developers for building, testing and managing their code collaboratively. Accordingly, this type of platform enables collaboration between enterprises and universities (c-EU) at a lower cost, in the form of online team-building projects (repositories). This paper investigates the open collaboration patterns between these two parties on GitHub by measuring their online behaviours. The purpose of this investigation is to identify the most attractive collaboration features that enterprises can offer to increase university students' participation intentions.
Design/methodology/approach – The research process is divided into four steps. First, the authors crawled numerical data for each interactive repository feature created by employees of Alibaba on GitHub and identified the student accounts associated with these repositories. Second, a categorisation schema of feature classification was proposed on a behavioural basis. Third, the authors clustered the aforementioned repositories based on feature data and recognised four types of repositories (popular, formal, normal and obsolete) to represent four open collaboration patterns. The effects of the four repository types on university students' collaboration behaviour were measured using a multiple linear regression model, and an ANOVA test was implemented to examine the robustness of the research results. Finally, the authors proposed practical suggestions to enhance collaboration between both sides of SCPs.
Findings – Several counterintuitive but reasonable findings were revealed, for example, those based on the "star" repository feature: the actual coding contribution of the repositories had a negative correlation with student attention. This result indicates that students were inclined to imitate rather than innovate.
Originality/value – This research explores the open collaboration patterns between enterprises and universities on GitHub and their impact on student coding behaviour. According to the analysis, both parties benefit from open collaboration on SCPs, and the allocation or customisation of online repository features may affect students' participation in coding. This research brings a new perspective to the measurement of users' collaboration behaviour with output rates on SCPs.
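Although this entry only summarizes the study, the clustering step it describes can be sketched in a few lines. Everything below — the feature names, the numbers, and the toy k-means — is invented for illustration and is not the authors' actual procedure:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means over repository feature vectors."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[i])))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its members (keep old one if empty).
        centroids = [tuple(sum(col) / len(members) for col in zip(*members))
                     if members else centroids[i]
                     for i, members in enumerate(clusters)]
    return centroids, clusters

# Hypothetical repository feature vectors: (stars, forks, commits per month).
repos = [(2500, 400, 120), (2300, 380, 100),   # e.g. "popular"
         (300, 90, 60), (280, 85, 55),         # "formal"
         (40, 10, 20), (35, 12, 18),           # "normal"
         (5, 1, 0), (3, 0, 1)]                 # "obsolete"
centroids, clusters = kmeans(repos, k=4)
```

Regressing student participation counts on the resulting cluster assignments would then correspond to the paper's multiple linear regression step.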
APA, Harvard, Vancouver, ISO, and other styles
43

Moulin, Pierre, Katrien Grünberg, Erio Barale-Thomas, and Jeroen van der Laak. "IMI—Bigpicture: A Central Repository for Digital Pathology." Toxicologic Pathology 49, no. 4 (2021): 711–13. http://dx.doi.org/10.1177/0192623321989644.

Full text
Abstract:
To address the challenges posed by the large-scale development, validation, and adoption of artificial intelligence (AI) in pathology, we have constituted a consortium of academics, small enterprises, and pharmaceutical companies and proposed the BIGPICTURE project to the Innovative Medicines Initiative. Our vision is to become the catalyst in the digital transformation of pathology by creating the first European, ethically compliant, and quality-controlled whole-slide imaging platform, in which both large-scale data and AI algorithms will exist. Our mission is to develop this platform in a sustainable and inclusive way, connecting the community of pathologists, researchers, AI developers, patients, and industry parties, and creating value and reciprocity of use through a community model as the mechanism for ensuring the platform's sustainability.
APA, Harvard, Vancouver, ISO, and other styles
44

He, Zhao Xia, Geng Liu, Lan Liu, and Bing Han. "Data Management System for Complex Mechanical Product in Collaborative Simulation Environment." Key Engineering Materials 431-432 (March 2010): 196–200. http://dx.doi.org/10.4028/www.scientific.net/kem.431-432.196.

Full text
Abstract:
In order to implement collaborative work in enterprises, it is important to exchange and share data during the product design process. According to the characteristics and types of simulation data for mechanical-property analysis, an effective data management method is presented. Based on the functional framework of a collaborative simulation environment, a new database system has been designed by combining a relational database with a content repository. The content repository for simulation data has been established and designed with user-defined node types whose model is a simple hierarchy. In this repository, simulation metadata are stored as node properties. Furthermore, simulation data files can be stored in nodes, which facilitates the upload and download of files. Meanwhile, simulation flow data are stored in the relational database. An engineering application is employed to demonstrate the feasibility and practicability of the data management system.
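The node-based repository design described here resembles a JCR-style content repository. A minimal sketch follows; the node-type names and metadata fields are hypothetical, not the authors' actual schema:

```python
class Node:
    """A hierarchical content-repository node (JCR-style sketch)."""
    def __init__(self, name, node_type="nt:unstructured"):
        self.name = name
        self.node_type = node_type
        self.properties = {}   # simulation metadata stored as node properties
        self.children = {}
        self.binary = None     # attached simulation data file, if any

    def add_child(self, name, node_type="nt:unstructured"):
        child = Node(name, node_type)
        self.children[name] = child
        return child

# Hypothetical hierarchy: a root node holding one simulation case.
root = Node("simulations")
case = root.add_child("gear_contact_case_01", node_type="sim:case")
case.properties.update({"solver": "FEA", "mesh_elements": 120000})
case.binary = b"placeholder for an uploaded result file"
```

Simulation flow data, per the abstract, would live in the relational database rather than in these nodes.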
APA, Harvard, Vancouver, ISO, and other styles
45

Dumas, Colette, Susan Foley, Pat Hunt, Miriam Weismann, and Aimee Williamson. "Accelerating collaboration to find a cure: a nonprofit's evolving business model." CASE Journal 11, no. 1 (2015): 58–75. http://dx.doi.org/10.1108/tcj-05-2014-0032.

Full text
Abstract:
Synopsis – This is a field-researched case about a nonprofit organization, the Accelerated Cure Project (ACP), dedicated to accelerating advances toward a cure for multiple sclerosis (MS). Inspired by the successful open source software development platform, ACP brings the strengths of that platform into the medical research and development environment. At the opening of the case, Robert McBurney, an Australian scientist with extensive experience in the biotech world, has been named CEO. McBurney and his team want to use ACP's bio-sample and data repository to drive innovation in the search for a cure for MS by fostering collaborative research and development across research institutions, pharmaceutical companies, and biotech companies. To encourage such collaboration, ACP waives its rights to potentially lucrative intellectual property. This decision to foster collaboration at the expense of revenue sources appears problematic, since ACP does not have the staff or resources to undertake fundraising at the scale needed to fund current projects. ACP chooses instead to serve as an open-access research accelerator, making an impact on the field by functioning as an innovation driver rather than a profit maker. Is this an innovative recipe for success in finding a cure for MS, or a recipe for financial disaster for ACP?
Research methodology – Interviews provided the primary source of data for this case. Four semi-structured interviews were conducted with the CEO of ACP, the Vice President of Scientific Operations, a member of the organization's Board of Trustees, a collaborating university researcher, and the President of a biotech company working with ACP. Interview data were supplemented with additional information from ACP's web site, news reports, McBurney's comments at Suffolk University's Global Leadership in Innovation and Collaboration Award event, and follow-up conversations.
Relevant courses and levels – This case is intended for use in an undergraduate course examining strategic management issues midway through the term. The case discussion can center on issues relating to: first, the development of the business model; second, revenue sources and fundraising. Students are expected to spend two to three hours of outside preparation reviewing concepts of change leadership and the collaborative enterprise business model. They should read the case materials and brainstorm options for improved change leadership. The case can be taught in one two-hour class period.
Theoretical basis – The purpose of this case is to introduce students to the strategic management and funding challenges faced by an organization that is using a non-traditional business model in an increasingly complex environment. As a result of discussing this case, students should be able to: first, examine strategic organizational strengths, analyze opportunities created by business, market, and environmental factors, strategize to minimize weaknesses and address threats, identify an organization's strategic focus, and recognize and recommend options at crucial decision-making junctures in a business situation; second, assess an organization's revenue model and analyze how it can be improved; third, analyze the functionality and sustainability of an organization's business model.
APA, Harvard, Vancouver, ISO, and other styles
46

Koval, Olexandr, Valeriy Kuzminykh, Maxim Voronko, and Dmitriy Khaustov. "DEVELOPMENT OF A SCENARIO-BASED PROJECT MANAGEMENT SYSTEM CONSTRUCTION IN ENTERPRISES WITH THE FUNCTIONAL ORGANIZATIONAL STRUCTURE." Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska 3, no. 4 (2013): 26–30. http://dx.doi.org/10.35784/iapgos.1474.

Full text
Abstract:
It is commonly accepted that a functional organization structure is a poor environment for project management. The scenario repository approach is designed especially to support project management in an environment with a great amount of operational activity, taking into account that, in such cases, project tasks have their reflection in the tasks of business processes. The creation of a special set of project and operational task management tools enables a new approach to change management in enterprises with a functional organization structure: optimizing the processes involved in project management activities and compiling a knowledge base of project and operational task management, using the mapping between project tasks and operational tasks to reveal the bottlenecks of both.
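The project-task to operational-task mapping mentioned above can be sketched as a simple tally; the task names and the overload threshold below are invented for illustration, not taken from the paper:

```python
from collections import Counter

# Hypothetical mapping: each project task is reflected in one or more
# operational (business-process) tasks.
mapping = [
    ("migrate-ERP", "backup-db"), ("migrate-ERP", "retrain-staff"),
    ("migrate-ERP", "update-reports"), ("new-branch", "hire-staff"),
]

def bottlenecks(pairs, threshold=3):
    """Project tasks whose operational load reaches the threshold."""
    load = Counter(project for project, _ in pairs)
    return [task for task, n in load.items() if n >= threshold]

print(bottlenecks(mapping))  # prints ['migrate-ERP']
```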
APA, Harvard, Vancouver, ISO, and other styles
47

HE, ZHENGWEI, and RENZHONG TANG. "NUMERICAL SIMULATION OF DYNAMIC PRODUCTION SCHEDULING BASED ON ICAM." Journal of Advanced Manufacturing Systems 07, no. 01 (2008): 55–58. http://dx.doi.org/10.1142/s0219686708001085.

Full text
Abstract:
In order to resolve the problem of unreasonable resource allocation during the production process in discrete manufacturing enterprises, a method of dynamic production scheduling based on the Information Coordinates Analysis Method (ICAM) was put forward. The communication modes of manufacturing equipment and the characteristics of the manufacturing process were analyzed, and a real-time data acquisition system for the production field was established. The production information repository was built by analyzing the collected real-time data. By abstracting the correlative information, information nodes were produced and information coordinates were established. With the support of the real-time information repository, information from the production field can be made available at any time by analyzing the typical information nodes. The feedback information can affect the original scheduling; thus, dynamic production scheduling is realized. Compared with classical analysis methods, ICAM reflects the utilization state of resources more directly during production scheduling, scheduling rules can be set down more conveniently, and it is easier to apply object-oriented methods to the numerical simulation of production scheduling. The result of the numerical simulation proves that ICAM can effectively optimize typical production scheduling processes.
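The idea of an information repository keyed by information coordinates, feeding utilization data back into scheduling, might be sketched as follows; the resource names, attribute keys, and threshold are hypothetical, not from the paper:

```python
import time

class InfoRepository:
    """Real-time production information keyed by an information coordinate."""
    def __init__(self):
        self.nodes = {}  # (resource, attribute) -> (value, timestamp)

    def acquire(self, resource, attribute, value, ts=None):
        """Store one real-time reading from the production field."""
        self.nodes[(resource, attribute)] = (value, ts if ts is not None else time.time())

    def utilization(self, resource):
        value, _ = self.nodes.get((resource, "utilization"), (0.0, None))
        return value

repo = InfoRepository()
repo.acquire("lathe-3", "utilization", 0.92)  # hypothetical machine feed
# Feedback: high utilization triggers a dynamic re-scheduling pass.
reschedule = repo.utilization("lathe-3") > 0.85
```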
APA, Harvard, Vancouver, ISO, and other styles
48

Costello, Laura. "Survey of Canadian Academic Librarians Outlines Integration of Traditional and Emerging Services." Evidence Based Library and Information Practice 15, no. 3 (2020): 184–86. http://dx.doi.org/10.18438/eblip29789.

Full text
Abstract:
A Review of: Ducas, A., Michaud-Oystryk, N., & Speare, M. (2020). Reinventing ourselves: New and emerging roles of academic librarians in Canadian research-intensive universities. College & Research Libraries, 81(1), 43–65. https://doi.org/10.5860/crl.81.1.43 Objective – To identify new and emerging roles for librarians and understand how those new roles impact their confidence, training needs, and job satisfaction. To understand how librarians conceptualize the impact of these new roles on the academic enterprise. Design – Electronic survey. Setting – Academic research libraries at Canadian research-intensive universities. Subjects – 205 academic librarians. Methods – An electronic survey was distributed to all librarians working at the 15 research-intensive universities in Canada. Archivists were included in this population, but senior administrators, such as university librarians, deans, and associate administrators, were not included. The 38-question survey was produced in English and French. Five focus areas for emerging skills were drawn from the literature and a review of job postings. Librarians were asked about their participation in particular activities associated with the different focus areas and about their training and confidence in those areas. The survey was sent to 743 librarians and had a 27% response rate with a total of 205 complete responses. Librarians participated from each of the 15 research universities, and institutional response rates ranged from 14% to 51%. Survey Monkey was used to distribute the online survey. Cronbach's alpha was used to measure reliability for each section of the survey and ranged from .735 in the confidence area to .934 in the job satisfaction area, indicating sufficient internal consistency. The data were analyzed using SPSS and RStudio.
Main Results – In the general area of research support, a majority (75%) of participants reported that they provided information discovery services like consultations and literature reviews, 28% engaged in grant application support, 27% provided assistance with systematic reviews, 26% provided bibliometric services, and 23% provided data management services. In the teaching and learning area, 78% of participants provided classroom teaching to students, 75% provided one-on-one instruction, 48% created tutorials, 47% taught workshops for faculty, and 43% conducted copyright consultations. Only around half of participants offered digital scholarship services, and copyright consultations were the most frequently offered service in this area, with 36% of participants indicating that they offered this service. The area of user experience had the highest number of respondents, and the top services offered in this area included liaison services for staff and faculty (87%), library services assessment (46%), and student engagement initiatives (41%). In the scholarly communication area, 49% of respondents indicated that they provided consultation on alternative publishing models, including open access, and 41% provided copyright and intellectual property services. The majority of librarians were confident that they could perform their duties in the five focus areas. Teaching and learning had the highest confidence rate, with 75% of respondents indicating that they felt confident or very confident in their roles. Digital scholarship had the lowest confidence rating, with only 50% indicating that they felt confident or very confident about these roles. The survey also asked participants about their training and skills acquisition in the five areas. Most participants indicated that they acquired these skills through professional work experience and self-teaching. 
Based on the calculations from the survey focusing on participation in new and traditional roles, 13% of librarian participants performed only new roles, 44% performed only traditional roles, and 44% performed some new and some traditional roles. Additionally, 45% of librarians spent the majority of their time delivering traditional services, 19% delivering new services, and 36% dividing their time between new and traditional services. Job satisfaction and new or traditional roles were also examined, and statistically significant results indicated that librarians performing new roles were more satisfied with assigned duties (p = 0.009084), more satisfied with opportunities for challenge (p = 0.02499), and less satisfied with opportunities for independent action (p = 0.02904). Librarians performing new roles perceived a higher impact on scholarly communication (p = 0.02621) and supporting researchers (p = 0.0002126) than those performing traditional roles. Librarians performing new roles perceived a lower impact on contributing to student success (p = 0.003686) and supporting teaching and learning at the classroom level (p = 0.01473) than librarians performing traditional roles. Conclusion – Results demonstrate that librarians are still engaged in traditional roles, but new roles are emerging particularly in the areas of copyright and publishing, bibliometrics, online learning initiatives, and new communication strategies. Job satisfaction and confidence in these roles are similar between traditional and emerging roles. Overall, participants felt that they had a significant impact on the academic enterprise when performing new or traditional roles but that the roles had different areas of impact. This study is meant to be a baseline for future investigations in the trends and developments of roles for Canadian librarians. The survey and data are available from the University of Manitoba’s Dataverse repository: https://doi.org/10.5203/FK2/RHOFFU
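The Cronbach's alpha reliability statistic reported above can be computed directly from item-level responses. A minimal sketch; the Likert responses below are invented for illustration, not the study's data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of responses per survey item (same respondents throughout)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]       # per-respondent totals
    item_var = sum(pvariance(item) for item in items)  # summed item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical 5-point Likert responses: three items, four respondents.
items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
alpha = cronbach_alpha(items)
print(round(alpha, 3))  # prints 0.818
```

Values in the study's reported range (.735–.934) would, by the usual rule of thumb, indicate acceptable to excellent internal consistency.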
APA, Harvard, Vancouver, ISO, and other styles
49

Drake, Philip, and Stuart Toddington. "Clinical Pathways to Ethically Substantive Autonomy." International Journal of Clinical Legal Education 19 (July 8, 2014): 311. http://dx.doi.org/10.19164/ijcle.v19i0.32.

Full text
Abstract:
<p>There is no shortage of support for the idea that ethics should be incorporated into the academic and professional curriculum. There is a difference, however, between, on the one hand, teaching professionals about ethics, and, on the other, demanding that they give ethical expression to the range of professional skills they are expected to apply daily in their work. If this expression is not to be perfunctory, ethical judgement must be genuinely integrated into the professional skill set. The mark of integration in this regard is the capacity for autonomous judgement. Ethical autonomy cannot be achieved by a mechanical, rule-bound and circumstance-specific checklist of ethical do’s and don’ts, and it is only partially achieved by a move from mechanistic rules to ‘outcome based’ processes. Rather, professional ethical autonomy presupposes not only a formal understanding of the requirements of an ethical code of conduct, but a genuine engagement with the substantive values and techniques that enable practitioners to interpret and apply principles confidently over a range of circumstances. It is not then, that ethical skill is not valued by the legal profession or legal education, or that the shortfall of ethical skill goes unacknowledged, it is rather that the language of professional ethics struggles to break free from the cautious circularity that is the mark of its formal expression. To require a professional to ‘act in their client’s interests’, or ‘act in accordance with the expectations of the profession’ or act ‘fairly and effectively’ are formal, infinitely ambiguous and entirely safe suggestions; to offer a substantive account of what, specifically, those interests might be, or what expectations we should have, are rather more contentious. Fears of dogma and a narrowing of discretion do, of course, accompany the idea of a search for ethical substance, and caution is to be expected in response to it. 
Notwithstanding these anxieties, there would appear to be no coherent alternative to the aspiration to substantive autonomy, and this must remain the goal of teaching legal ethics. In light of this, the problem facing educationalists is perhaps expressed more diplomatically in terms of how ethical skill might be substantively developed, imparted, and integrated into a genuinely comprehensive conception of professional skill.</p><p>Clinical education can go a long way to solving this problem: exposure to the practical tasks of lawyering is the surest and best way of raising consciousness in this regard. 'Hands-on' is good, and consciousness-raising is a step in the direction of autonomy, but raw experience and elevated awareness are not enough. We know that our most influential theories of learning tell us that it is in the process of reflection upon problem solving that the practitioner begins to take autonomous control of skill development. In the view of the author, reflection requires content and direction, and in this paper, with the aid of three models of skill integration inspired by Nigel Duncan's detailed analysis and video reconstruction of the ethical and technical skill deficiencies brought to light by R v Griffiths, we attempt to specify what might be understood in this regard: reflective content refers to the discrete interests and values that compete to produce tension in what we will refer to as the 'matrix' of concerns that feature in all forms of dispute resolution; reflective direction points to an engagement with the resources and techniques that can empower critical and autonomous judgment.
In the context of a clinical process broadly structured by the insights of Wenger and by Rest’s model of ethical skill, guided reflection so specified thus serves as an interface between on the one hand, indeterminate ethical form, and, on the other, the substantive ethical wisdom to be found in the repository of values that underpin the very idea of the legal enterprise.</p>
APA, Harvard, Vancouver, ISO, and other styles
50

Silgard, Emily, Vicky Sandhu, Elihu H. Estey, and Daniel Herman. "An Automated System for Parsing and Risk Classifying Karyotype Nomenclature for Acute Myeloid Leukemia." Blood 126, no. 23 (2015): 2602. http://dx.doi.org/10.1182/blood.v126.23.2602.2602.

Full text
Abstract:
Introduction: Cytogenetic analyses are critical in the management of a variety of malignancies. In acute myeloid leukemia (AML), tumor karyotype is among the most valuable features for prognosis and therapy selection. The reporting of patient karyotypes is reasonably standardized through the use of the International System for Human Cytogenetic Nomenclature (ISCN). However, karyotypes can be extremely complicated, sometimes leading to confusion in their interpretation. In addition, secondary use of karyotype data requires re-interpretation of ISCN formulas, which acts as a barrier to research. To improve the speed and accuracy of karyotype interpretation and classification, we developed an automated, modular pipeline to parse and interpret cytogenetics reports and have initially applied this to risk prediction in AML. Methods: We developed a modular set of Python scripts that (1) process a batch of cytogenetics reports, (2) identify and clean ISCN formulas, (3) parse ISCN into an efficient representation of cells and their mutations, (4) classify karyotypes according to Southwest Oncology Group (SWOG) risk categories (Slovak, 2000), and (5) output a JSON document that details salient mutations, risk categories, algorithm confidence levels, and references to the supporting evidence for interpretation. We classified each case according to AML risk categories: unfavorable, intermediate, miscellaneous, favorable, unknown, or insufficient. This program will be embedded within the incoming document pipeline for an enterprise-wide data repository and archive being constructed to facilitate research. The modular design will facilitate extension to additional data sources and classification schemas. We collected reports (N=4,169) from two cytogenetics laboratories for patients newly diagnosed with AML at an academic hospital between 2008 and 2014. We split random subsets of these reports into training and testing sets.
For training and testing, cytogenetic reports were matched to a research database of manual cytogenetic interpretations using patient identifier and report date. In addition to the standard SWOG risk category labels, the research database supplied a label of "not done." Results: We trained our algorithm on 1,058 reports and tested it with 1,301 reports (See table). We demonstrated 95.5% and 94.7% strict accuracy, respectively for training and testing sets. In testing, the algorithm failed to interpret 29 documents (2.2% of all testing records) due to incorrectly specified ISCN formulas that could not be definitively parsed. All of these reports were assigned an "unknown" label, indicating the need for further manual review. There were 40 (3.1% of testing records) disagreements between automated and manual interpretations. On further review, 14 (35%) of these appeared to be due to mismatched underlying data, in part due to imperfect pairing of reports and manual interpretations, in the absence of unique identifiers. Other errors included differences in the applied clonality criteria and/or the limitations of interpreting a single karyotype without considering historical and concurrent testing (45%) and the inability of our algorithm to identify some implied abnormalities not explicitly specified by ISCN (15%). Conclusions: We developed an automated program that rapidly interprets karyotype reports for AML risk classification with ~95% accuracy. We plan to improve our interpretation algorithm to better identify implied abnormalities and update our risk classification to reflect the most recent SWOG risk classification guidelines. This tool should dramatically decrease the barrier to incorporating cytogenetic data in research studies and potentially improve the accuracy of clinical interpretations. Table 1. 
Confusion matrix of test set results (rows: manual data labels; columns: system output labels)

Manual \ System   Favorable  Insufficient  Intermediate  Miscellaneous  Not done  Unfavorable  Unknown  Total
Favorable                20             1             1              1         0            3        2     28
Insufficient              0            74             0              0         0            2        9     85
Intermediate              1             1           818              0         0            0        2    822
Miscellaneous             0             0             1             45         0            2        1     49
Not done                  0             2             2              1         0            2        4     11
Unfavorable               0             5             8              5         0          275       11    304
Unknown                   0             1             0              0         0            1        0      2
Total                    21            84           830             52         0          285       29   1301

Disclosures: No relevant conflicts of interest to declare.
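As a rough illustration of the kind of parsing the pipeline performs, a toy classifier for single-clone ISCN strings follows. The regular expression and the abnormality sets are drastically simplified stand-ins, not the published SWOG criteria or the authors' code:

```python
import re

# Toy stand-ins for risk rules; the published SWOG criteria are far
# more detailed than these sets.
FAVORABLE = {"t(8;21)", "t(15;17)", "inv(16)"}
UNFAVORABLE = {"-5", "-7", "del(5)", "inv(3)", "t(6;9)"}

def classify_karyotype(iscn):
    """Return a coarse risk label for a single-clone ISCN string."""
    fields = iscn.split(",")
    abnormalities = set()
    for field in fields[2:]:  # skip chromosome count and sex chromosomes
        match = re.match(r"([a-z]+\(\d+;?\d*\)|[+-]\w+)", field.strip())
        if match:
            abnormalities.add(match.group(1))
    if abnormalities & UNFAVORABLE:
        return "unfavorable"
    if abnormalities & FAVORABLE:
        return "favorable"
    if not abnormalities:
        return "intermediate"  # normal karyotype
    return "miscellaneous"

print(classify_karyotype("46,XY,t(8;21)(q22;q22)"))  # prints favorable
```

A production system, as the abstract notes, must also handle multi-clone formulas, clonality criteria, and abnormalities that ISCN leaves implied.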
APA, Harvard, Vancouver, ISO, and other styles