Academic literature on the topic 'Proof system interoperability'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Proof system interoperability.'

Journal articles on the topic "Proof system interoperability"

1

Myeong, Go Eun, and Kim Sa Ram. "Blockchain Based Zero Knowledge Proof Protocol For Privacy Preserving Healthcare Data Sharing." Journal of Technology Informatics and Engineering 4, no. 1 (2025): 171–89. https://doi.org/10.51903/jtie.v4i1.296.

Abstract:
The rise of digital healthcare has intensified concerns over data privacy, particularly in cross-institutional medical data exchanges. This study introduces a blockchain-based protocol leveraging Zero-Knowledge Proofs (ZKP), specifically zk-SNARK, to enable verifiable yet privacy-preserving health data sharing. Built on a permissioned Ethereum blockchain, the protocol ensures that medical data validity can be confirmed without disclosing sensitive content. System implementation involves Python-based zk-circuits, smart contracts in Solidity, and RESTful APIs supporting HL7 FHIR formats for interoperability. Performance evaluations show promising results: proof verification times remained under 100 ms, with average proof sizes below 2 KB, even under complex transaction scenarios. Gas consumption analysis indicates a trade-off: ZKP-enabled transactions consumed approximately 93,000 gas units, compared to 52,800 in baseline cases. Interoperability testing across 10 FHIR-based scenarios resulted in 100% parsing success and an average data integration time of 1.7 seconds. Security assessments under white-box threat models confirmed that sensitive information remains unreconstructable, preserving patient confidentiality. Compared to previous implementations using zk-STARK, this protocol offers a 30% improvement in verification efficiency and a 45% reduction in proof size. The novelty lies in combining lightweight ZKP mechanisms with an interoperability-focused design, tailored for realistic hospital infrastructures. This research delivers a scalable, standards-compliant architecture poised to advance secure digital healthcare ecosystems while complying with regulations such as the GDPR.
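As a quick editorial illustration of the trade-off reported above, the sketch below simply recomputes the gas overhead and the verification budget from the figures quoted in the abstract; it makes no assumption about the underlying zk-SNARK implementation.

```python
# Back-of-the-envelope check of the figures quoted in the abstract.
zkp_gas, baseline_gas = 93_000, 52_800          # gas per transaction, with and without ZKP
overhead = (zkp_gas - baseline_gas) / baseline_gas
print(f"ZKP gas overhead per transaction: {overhead:.0%}")   # roughly 76% more gas

verification_ms, proof_kb = 100, 2               # reported upper bounds per proof
scenarios, integration_s = 10, 1.7               # FHIR interoperability test figures
print(f"Upper bound over {scenarios} scenarios: "
      f"{scenarios * verification_ms} ms verification, "
      f"{scenarios * proof_kb} KB of proofs, "
      f"{scenarios * integration_s:.1f} s data integration")
```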
2

Lemus-Zúñiga, Lenin-Guillermo, Juan M. Félix, Alvaro Fides-Valero, José-Vte Benlloch-Dualde, and Antonio Martinez-Millana. "A Proof-of-Concept IoT System for Remote Healthcare Based on Interoperability Standards." Sensors 22, no. 4 (2022): 1646. http://dx.doi.org/10.3390/s22041646.

Abstract:
The Internet of Things paradigm in healthcare has boosted the design of new solutions for the promotion of healthy lifestyles and remote care. Thanks to the efforts of academia and industry, there is a wide variety of platforms, systems and commercial products enabling the real-time exchange of environmental data and information on people's health status. However, one of the problems of these types of prototypes and solutions is the lack of interoperability and the compromised scalability in large scenarios, which limits their potential to be deployed in real cases of application. In this paper, we propose a health monitoring system based on the integration of rapid prototyping hardware and interoperable software to build a system capable of transmitting biomedical data to healthcare professionals. The proposed system involves Internet of Things technologies and interoperability standards for health information exchange, such as Fast Healthcare Interoperability Resources (FHIR), and UniversAAL, a reference framework architecture for Ambient Assisted Living.
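For readers unfamiliar with the FHIR exchange the abstract refers to, here is a minimal sketch of posting a single vital-sign reading to a FHIR server. The endpoint URL and patient reference are hypothetical placeholders, not taken from the paper; only the resource structure (an Observation with a LOINC code and a UCUM quantity) follows the FHIR standard.

```python
import requests

FHIR_BASE = "https://example.org/fhir"   # hypothetical FHIR endpoint, not from the paper

# A minimal FHIR Observation: one heart-rate reading coded with LOINC and UCUM.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "subject": {"reference": "Patient/123"},   # hypothetical patient id
    "valueQuantity": {"value": 72, "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org", "code": "/min"},
}

resp = requests.post(f"{FHIR_BASE}/Observation", json=observation,
                     headers={"Content-Type": "application/fhir+json"})
resp.raise_for_status()
print("Server assigned id:", resp.json().get("id"))
```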
3

Klausen, Tobias, Valentin Hartig, Dominik Fuchs, et al. "A Digital Vaccination Pass Using Fast Healthcare Interoperability Resources: A Proof of Concept." Digital 4, no. 2 (2024): 389–409. http://dx.doi.org/10.3390/digital4020019.

Abstract:
The traditional manual recording of vaccination records in Germany faced challenges during the COVID-19 pandemic, prompting the introduction of a COVID smartphone app with QR codes. However, this solution brought new challenges, emphasizing the need for a centrally managed European digital vaccination record for efficiency and validity. This study assesses the feasibility of using the HL7 FHIR standard in the healthcare industry for implementing a digital vaccination pass management and monitoring system. The system aims to offer convenience and improved efficiency for both patients and healthcare providers while promoting interoperability with other healthcare systems. To this end, we developed a prototype using modern technologies, such as React, Quarkus, and Keycloak. Results indicate potential benefits for patients and healthcare providers, offering access to immunization records, personalized recommendations, and streamlined management. However, integrating nuanced vaccination processes into the standardized FHIR system requires custom extensions, which might hinder interoperability. Manual data entry and the integration of an identity provider present further obstacles in industry scenarios. Despite these challenges, this study suggests that implementing HL7 FHIR can enhance efficiency, data accessibility, and accuracy in the vaccination process, supporting broader digitization efforts in the German healthcare system and beyond.
4

Kumar, Adarsh, Deepak Kumar Sharma, Anand Nayyar, Saurabh Singh, and Byungun Yoon. "Lightweight Proof of Game (LPoG): A Proof of Work (PoW)’s Extended Lightweight Consensus Algorithm for Wearable Kidneys." Sensors 20, no. 10 (2020): 2868. http://dx.doi.org/10.3390/s20102868.

Abstract:
In healthcare, interoperability is widely adopted for cross-departmental and cross-specialization cases. Because the human body demands multiple specialized and cross-disciplinary medical examinations, interoperability between business entities alone (different departments, different specializations, the involvement of legal and government monitoring bodies, etc.) is not sufficient to reduce the number of active medical cases. A patient-centric system with a high capability to collect, retrieve, store and exchange data is the demand of present and future times. Such data-centric health processes would enable automated patient medication and patient self-driven care that is trusted and highly satisfying. However, data-centric processes face a large set of challenges such as security, technology, governance, adoption, deployment and integration. This work explores the feasibility of integrating wearable kidney systems built from resource-constrained devices into the Industry 4.0 network, and facilitates data collection, liquidity, storage, retrieval and exchange. Thereafter, a Healthcare 4.0 process-based wearable kidney system is proposed that benefits from blockchain technology. Further, game-theory-based consensus algorithms are proposed for the resource-constrained devices in the kidney system. The overall system design exemplifies the transition from a specialization- or department-centric approach to a data- and patient-centric approach, bringing more transparency, trust and healthy practices to the healthcare sector. Results show a hash rate varying from 0.10 million GH/s to 0.18 million GH/s for the proposed approach. The probability of a majority attack in the proposed scheme is statistically shown to be minimal. Further, the Average Packet Delivery Rate (APDR) lies between approximately 95% and 97% in the absence of outliers. In the presence of outliers, network performance drops below 80% APDR (to a minimum of 41.3%), which indicates that outliers are present in the network. Simulation results show that the Average Throughput (AT) lies between 120 Kbps and 250 Kbps.
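To give the quoted hash-rate figures some intuition, the sketch below relates a network hash rate in the reported range to the expected time to find a proof at a purely hypothetical difficulty; it is not the LPoG algorithm itself, only the generic proof-of-work arithmetic.

```python
# Generic PoW arithmetic: expected attempts = 2**difficulty_bits for a leading-zero target.
hash_rate = 0.10e6 * 1e9        # 0.10 million GH/s, expressed in hashes per second
difficulty_bits = 32            # hypothetical difficulty, chosen only for illustration
expected_attempts = 2 ** difficulty_bits
print(f"Expected time per proof: {expected_attempts / hash_rate:.2e} s "
      f"at {hash_rate:.2e} hashes/s")
```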
5

Trivedi, Sapna, Stephen Hall, Fiona Inglis, and Afzal Chaudhry. "Proof-of-concept solution to create an interoperable timeline of healthcare data." BMJ Health & Care Informatics Online 30, no. 1 (2023): e100754. http://dx.doi.org/10.1136/bmjhci-2023-100754.

Abstract:
Objectives: To overcome the barriers of interoperability by sharing simulated patient data from different electronic health record systems and presenting them in an intuitive timeline of events. Methods: The 'Patient Story' software, comprising database and blockchain, PS Timeline Windows interface, PS Timeline Web interface and network relays on the Azure cloud, was customised for the Epic and Lorenzo electronic patient record (EPR) systems used at different hospitals, using site-specific adapters. Results: Each site could view its own clinical documents and view the other site's site-specific, fully coded test sets of (Care Connect) medications, conditions and allergies in an aggregated single view. Discussion: This work has shown that clinical data from different EPR systems can be successfully integrated and visualised on a single timeline, accessible by clinicians and patients. Conclusion: The Patient Story system combined the timeline visualisation with successful interoperability across healthcare settings, as well as giving patients the ability to directly interact with their timeline.
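The core aggregation idea, merging events from two EPR systems into one chronological view, can be sketched in a few lines; the records and field names below are illustrative stand-ins, not the actual PS Timeline schema or Care Connect profiles.

```python
from datetime import date

# Illustrative events "exported" from two different EPR systems.
epic_events = [
    {"source": "Epic",    "date": date(2022, 3, 1),  "kind": "Medication", "text": "Metformin started"},
    {"source": "Epic",    "date": date(2022, 6, 10), "kind": "Allergy",    "text": "Latex allergy recorded"},
]
lorenzo_events = [
    {"source": "Lorenzo", "date": date(2022, 4, 22), "kind": "Condition",  "text": "Asthma diagnosed"},
]

# One aggregated, chronologically ordered timeline across both sites.
timeline = sorted(epic_events + lorenzo_events, key=lambda e: e["date"])
for e in timeline:
    print(f'{e["date"]}  [{e["source"]:<7}] {e["kind"]}: {e["text"]}')
```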
6

Santiago Luis Delgado, Mateo Santos, and Carlos Mateo Ramirez. "Blockchain Technology for Secure Digital Health Records in Pediatric Care." Proceeding of The International Conference of Inovation, Science, Technology, Education, Children, and Health 4, no. 2 (2024): 329–33. https://doi.org/10.62951/icistech.v4i2.133.

Abstract:
Blockchain technology is gaining attention in healthcare for its potential to secure and streamline digital health records. This paper explores how blockchain can enhance the security, accessibility, and interoperability of pediatric health data. By providing a decentralized and tamper-proof system, blockchain can address issues related to data privacy, patient consent, and medical history tracking in children's healthcare.
7

Shailesh Shetty S. "A Private Blockchain Based Approach for Securing Clinical Trials Data to Provide Data Security and Interoperability." Journal of Information Systems Engineering and Management 10, no. 33s (2025): 1139–49. https://doi.org/10.52783/jisem.v10i33s.6724.

Abstract:
This research proposes a Blockchain-driven solution for enhancing the integrity and security of clinical trials, introducing a specialized system called Blockchain for Securing Clinical Trials (BC-SCT). The system reimagines traditional clinical trial data management by offering a decentralized, tamper-resistant platform that ensures trust, transparency, and efficiency across stakeholders including researchers, sponsors, and regulatory bodies. BC-SCT employs modern consensus mechanisms such as Proof-of-Authority (PoA) and Delegated Proof of Stake (DPoS) to significantly reduce transaction processing delays, from 900 ms to 550 ms across 50 transactions, ensuring faster data validation without compromising reliability. It also demonstrates strong performance under simultaneous query loads, cutting response times from 70 ms to 40 ms, a 43% improvement in real-time data access. To handle the scale and complexity of clinical data, the system incorporates features like data sharding, in-memory caching, and off-chain storage. These enhancements reduce the Blockchain ledger load by 20%, lowering storage requirements from 100 GB to 80 GB for 10,000 entries, while maintaining high-speed access and data fidelity. Through these innovations, BC-SCT offers a future-proof foundation for conducting and overseeing clinical trials, addressing long-standing issues related to data manipulation, inefficiency, and lack of transparency in research workflows.
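As a small sanity check of the improvements quoted above, the percentages follow directly from the before/after figures given in the abstract:

```python
def reduction(before, after):
    """Relative reduction from a 'before' value to an 'after' value."""
    return (before - after) / before

print(f"Transaction processing delay: {reduction(900, 550):.0%} lower (900 ms -> 550 ms)")
print(f"Query response time:          {reduction(70, 40):.0%} lower (70 ms -> 40 ms)")    # ~43%
print(f"Ledger storage:               {reduction(100, 80):.0%} lower (100 GB -> 80 GB)")  # 20%
```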
8

Bai, Tianyu, Yangsheng Hu, Jianfeng He, Hongbo Fan, and Zhenzhou An. "Health-zkIDM: A Healthcare Identity System Based on Fabric Blockchain and Zero-Knowledge Proof." Sensors 22, no. 20 (2022): 7716. http://dx.doi.org/10.3390/s22207716.

Abstract:
The issue of identity authentication for online medical services has been one of the key focuses of the healthcare industry in recent years. Most healthcare organizations use centralized identity management systems (IDMs), which not only limit the interoperability of patient identities between healthcare institutions, but also create isolation between data islands. More importantly, centralized IDMs may lead to privacy disclosure. Therefore, we propose Health-zkIDM, a decentralized identity authentication system based on zero-knowledge proof and blockchain technology, which allows patients to identify and verify their identities transparently and safely in different health fields and promotes the interaction between IDM providers and patients. Users in Health-zkIDM are uniquely identified by a single registered ID. The zero-knowledge proof technology is deployed on the client, which provides the user with a proof of identity information and automatically verifies the user's identity after registration. We implemented chaincodes on Fabric, including functions for uploading proofs of identity information, identification, and verification. The experiments show that the Health-zkIDM system can achieve throughputs higher than 400 TPS in Caliper.
9

L'Amrani, Hasnae, Younès El Bouzekri El Idrissi, and Rachida Ajhoun. "Technical Interoperability to Solve Cross-Domain Issues Among Federation Systems." International Journal of Smart Security Technologies 7, no. 1 (2020): 21–40. http://dx.doi.org/10.4018/ijsst.2020010102.

Abstract:
With the metamorphosis of web services, digital identity management faces new security challenges. A range of identity management systems exists to deal with these identities, with the goal of improving user experience and gaining secure access. Nowadays, one faces a large number of heterogeneous identity management approaches. This study examines several identity management systems; the federated system proves its eligibility for identity management, so the researchers' interest is in the federated model, since it distributes digital identity between different security domains. Security domains are based on a trust agreement between the communicating entities. Federated identity management faces the problem of interoperability between heterogeneous federated systems. This study is an approach to technical interoperability between federations. The authors propose an approach that permits inter-operation and the exchange of identity information among heterogeneous federations.
10

Ahmad, Hafiz Farooq, Fatimah Mohammad Alhassan, Asrar Haque, and Sarah Shafqat. "A Secure Architecture for Interoperable Personal Health Records (PHR) Based on Blockchain and FHIR." Journal of Pioneering Medical Sciences 14, no. 2 (2025): 42–48. https://doi.org/10.47310/jpms2025140207.

Abstract:
Personal Health Records (PHRs) would empower patients to play an active role in quality healthcare provision and access, for routine checkups and self-care management. There is a need to recognize security, privacy and interoperability issues for the successful design, implementation and adoption of PHRs at a wider scale. However, achieving interoperability, security and privacy simultaneously is one of the greatest challenges in the healthcare domain. The Health Level Seven (HL7) international standards body is working to propose interoperability standards for healthcare information systems; however, privacy and security need to be incorporated into system design and implementation. This work focuses on designing an HL7-compliant PHR with security and privacy using blockchain, a distributed-ledger data storage mechanism. The scope of the paper is limited to a number of core functional requirements of Fast Healthcare Interoperability Resources (FHIR). The PHR models applications of blockchain to these requirements to propose a base system architecture. Several tools support FHIR-compliant development of the HL7 family of standards. We analyzed existing blockchain-based PHRs and their data-sharing services in the domain to integrate FHIR and blockchain technologies. The goal is to share patient data to facilitate health services by designing a trusted, interoperable sharing of data among different custodians such as physicians and insurance companies. A proof of concept is created through a prototype implementation in Python using the open-source Spyder IDE.
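One common pattern behind blockchain/FHIR designs of this kind is to keep the FHIR resource off-chain and anchor only its digest on the ledger. The sketch below shows that pattern in plain Python; the resource content is illustrative and nothing here is taken from the paper's actual architecture.

```python
import hashlib
import json

# Illustrative FHIR Patient resource; in a real PHR this would come from the FHIR server.
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1990-01-01",
}

# Canonicalize before hashing so that key order does not change the digest.
canonical = json.dumps(patient, sort_keys=True, separators=(",", ":"))
digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
print("Digest to anchor on the ledger:", digest)
```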

Dissertations / Theses on the topic "Proof system interoperability"

1

Cauderlier, Raphaël. "Object-Oriented Mechanisms for Interoperability Between Proof Systems." Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1065/document.

Abstract:
Dedukti is a Logical Framework resulting from the combination of dependent typing and rewriting. It can be used to encode many logical systems using shallow embeddings preserving their notion of reduction. These translations of logical systems into a common format are a necessary first step for exchanging proofs between systems. This objective of interoperability of proof systems is the main motivation of this thesis. To achieve it, we take inspiration from the world of programming languages and more specifically from object-oriented languages, because they feature advanced mechanisms for encapsulation, modularity, and default definitions. For this reason we start with a shallow translation of an object calculus to Dedukti. The most interesting point in this translation is the treatment of subtyping. Unfortunately, it seems very hard to incorporate logic in this object calculus. To proceed, object-oriented mechanisms should be restricted to static ones, which seem enough for interoperability. Such a combination of static object-oriented mechanisms and logic is already present in the FoCaLiZe environment, so we propose a shallow embedding of FoCaLiZe in Dedukti. The main difficulties arise from the integration of FoCaLiZe's automatic theorem prover Zenon and from the translation of FoCaLiZe's functional implementation language, which features two constructs with no simple counterparts in Dedukti: local pattern matching and recursion. We then demonstrate how this embedding of FoCaLiZe into Dedukti can be used in practice for achieving interoperability of proof systems through FoCaLiZe, Zenon, and Dedukti. In order to avoid strengthening too much the theory in which the final proof is expressed, we use Dedukti as a meta-language for eliminating unnecessary axioms.
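The abstract singles out subtyping as the most interesting part of the object-calculus encoding. As a loose, language-agnostic illustration (not the Dedukti encoding itself), width subtyping on object interfaces can be phrased as: an object type is a subtype of another if it offers at least the same methods with the same signatures.

```python
def is_subtype(candidate: dict, expected: dict) -> bool:
    """Width subtyping on object interfaces: candidate <: expected holds when candidate
    provides every method of expected with an identical signature."""
    return all(name in candidate and candidate[name] == sig
               for name, sig in expected.items())

# Toy interfaces, written as method-name -> signature maps.
point         = {"get_x": "unit -> nat", "get_y": "unit -> nat"}
colored_point = {"get_x": "unit -> nat", "get_y": "unit -> nat", "color": "unit -> string"}

print(is_subtype(colored_point, point))   # True: usable wherever a point is expected
print(is_subtype(point, colored_point))   # False: the color method is missing
```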
2

Cauderlier, Raphaël. "Object-Oriented Mechanisms for Interoperability Between Proof Systems." Electronic Thesis or Diss., Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1065.

Abstract:
Dedukti is a Logical Framework resulting from the combination of dependent typing and rewriting. It can be used to encode many logical systems using shallow embeddings preserving their notion of reduction. These translations of logical systems into a common format are a necessary first step for exchanging proofs between systems. This objective of interoperability of proof systems is the main motivation of this thesis. To achieve it, we take inspiration from the world of programming languages and more specifically from object-oriented languages, because they feature advanced mechanisms for encapsulation, modularity, and default definitions. For this reason we start with a shallow translation of an object calculus to Dedukti. The most interesting point in this translation is the treatment of subtyping. Unfortunately, it seems very hard to incorporate logic in this object calculus. To proceed, object-oriented mechanisms should be restricted to static ones, which seem enough for interoperability. Such a combination of static object-oriented mechanisms and logic is already present in the FoCaLiZe environment, so we propose a shallow embedding of FoCaLiZe in Dedukti. The main difficulties arise from the integration of FoCaLiZe's automatic theorem prover Zenon and from the translation of FoCaLiZe's functional implementation language, which features two constructs with no simple counterparts in Dedukti: local pattern matching and recursion. We then demonstrate how this embedding of FoCaLiZe into Dedukti can be used in practice for achieving interoperability of proof systems through FoCaLiZe, Zenon, and Dedukti. In order to avoid strengthening too much the theory in which the final proof is expressed, we use Dedukti as a meta-language for eliminating unnecessary axioms.
3

Thiré, François. "Interoperability between proof systems using the logical framework Dedukti." Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG053.

Abstract:
There is today a large family of proof systems based upon various logics: the Calculus of Inductive Constructions, Higher-Order Logic, Set Theory, etc. The diversity of proof systems has the negative consequence that theorems are formalized many times. One way to overcome this issue would be to make proof systems interoperable. In this thesis, we have tackled the interoperability problem for proof systems both on the theoretical and the practical side using the Dedukti logical framework. We begin our journey by looking at Cumulative Type Systems (CTS), a family of type systems which extends that of Pure Type Systems with a subtyping relation. CTS provides a common skeleton to many logics used today: the logic behind Coq, HOL-Light, Lean, Matita or PVS can be seen as an extension of CTS with various features (inductive types, proof irrelevance, predicate subtyping, etc.). We define a new notion of embedding between CTS. We also provide a sound but incomplete algorithm to decide whether a proof in one CTS can be translated into another CTS; this algorithm can be seen as an extension of Coq's algorithm to check that floating universe constraints are consistent. Then, we propose a new embedding of CTS into Dedukti and give a soundness proof of this embedding. These results show that Dedukti is suitable for studying interoperability on the theoretical side. We continue our journey with a case study: the proof of Fermat's little theorem written in Matita. We show how we were able to translate this proof to various proof systems through Dedukti. This translation mainly relies on two tools created for this purpose: Dkmeta, a tool which proposes rewriting as a language for writing proof transformation programs (one advantage being that it reuses the syntax of Dedukti itself), and Universo, a tool which implements the aforementioned algorithm for translating a proof in one CTS (written in Dedukti) into another. This semi-automatic translation allows the proof of Fermat's little theorem to be translated into a weak but expressive logic called STTforall, a constructive version of Simple Type Theory with prenex polymorphism. As a consequence, a proof in STTforall can easily be exported to many proof systems. This case study shows that Dedukti is also suitable for interoperability on the practical side. The tools used for these transformations could also be reused for proofs coming from other proof systems for which an encoding in Dedukti is known (such as Coq or Agda). The journey ends with the exportation of the proof of Fermat's little theorem, encoded in STTforall, to five different proof systems: Coq, Lean, Matita, OpenTheory (a member of the HOL family of proof systems) and PVS. We have implemented a user interface for this via a website called Logipedia. Logipedia was designed with the goal of containing many more proofs that could be shared between proof systems, and as such is intended to be an encyclopedia of formal proofs.
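The abstract mentions an algorithm that generalizes Coq's check that a set of floating universe constraints is consistent. In its simplest form, such a check looks for a cycle of constraints that contains at least one strict inequality; the sketch below implements only that simplified core and is not Universo itself.

```python
from collections import defaultdict

def consistent(constraints):
    """constraints: list of (u, rel, v) with rel in {'<', '<='}, read as 'u rel v'.
    The set is inconsistent exactly when some cycle contains a strict '<' edge."""
    graph = defaultdict(list)
    for u, rel, v in constraints:
        graph[u].append((v, rel == "<"))
    nodes = set(graph) | {v for _, _, v in constraints}

    def strict_cycle_from(start):
        # Depth-first search over (node, seen-a-strict-edge) states.
        stack, visited = [(start, False)], set()
        while stack:
            node, strict = stack.pop()
            for succ, is_strict in graph[node]:
                s = strict or is_strict
                if succ == start and s:
                    return True
                if (succ, s) not in visited:
                    visited.add((succ, s))
                    stack.append((succ, s))
        return False

    return not any(strict_cycle_from(n) for n in nodes)

print(consistent([("u0", "<", "u1"), ("u1", "<=", "u2")]))   # True: no strict cycle
print(consistent([("u0", "<", "u1"), ("u1", "<=", "u0")]))   # False: strict cycle u0 < u1 <= u0
```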
4

Felicissimo, Thiago. "Generic bidirectional typing in a logical framework for dependent type theories." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG049.

Abstract:
Dependent type theories are formal systems that can be used both as programming languages and for the formalization of mathematics, and constitute the foundation of popular proof assistants such as Coq and Agda. In order to unify their study, Logical Frameworks (LFs) provide a unified meta-language for defining such theories, in which various universal notions are built in by default and metatheorems can be proven in a theory-independent way. This thesis focuses on LFs designed with implementation in mind, with the goal of providing generic type-checkers. Our main contribution is a new such LF which allows for representing type theories with their usual non-annotated syntaxes. The key to allowing the removal of annotations without jeopardizing decidability of typing is the integration of bidirectional typing, a discipline in which the typing judgment is decomposed into inference and checking modes. While bidirectional typing has been well known in the literature for quite some time, one of the central contributions of our work is that, by formulating it in an LF, we give it a generic treatment for all theories fitting our framework. Our proposal has been implemented in the generic type-checker BiTTs, allowing it to be used in practice with various theories. In addition to our main contribution, we also advance the study of Dedukti, a sibling LF of our proposed framework. First, we revisit the problem of showing that theories are correctly represented in Dedukti by proposing a methodology for encodings which allows their conservativity to be shown easily. Furthermore, we demonstrate how Dedukti can be used in practice as a tool for translating proofs by proposing a transformation for sharing proofs with predicative systems. This transformation has allowed for the translation of proofs from Matita to Agda, yielding the first-ever Agda proofs of Fermat's Little Theorem and Bertrand's Postulate.
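Since bidirectional typing is the technical core here, a minimal sketch of the discipline for a simply typed lambda calculus may help: inference handles variables, applications and annotated terms, while bare lambdas are only checked against an expected arrow type. This is a generic textbook-style illustration, not the BiTTs implementation.

```python
from dataclasses import dataclass

# Types: a base type name or an arrow type.
@dataclass(frozen=True)
class TBase:
    name: str

@dataclass(frozen=True)
class TArrow:
    dom: object
    cod: object

# Terms: variables, unannotated lambdas, applications, and annotated terms.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

@dataclass(frozen=True)
class Ann:
    term: object
    type: object

def infer(ctx, t):
    """Inference mode: synthesize a type from the term."""
    if isinstance(t, Var):
        return ctx[t.name]
    if isinstance(t, App):
        fn_ty = infer(ctx, t.fn)
        if not isinstance(fn_ty, TArrow):
            raise TypeError("applying a non-function")
        check(ctx, t.arg, fn_ty.dom)
        return fn_ty.cod
    if isinstance(t, Ann):
        check(ctx, t.term, t.type)
        return t.type
    raise TypeError("cannot infer an unannotated lambda; check it against an arrow type")

def check(ctx, t, ty):
    """Checking mode: verify the term against an expected type."""
    if isinstance(t, Lam):
        if not isinstance(ty, TArrow):
            raise TypeError("lambda checked against a non-arrow type")
        return check({**ctx, t.param: ty.dom}, t.body, ty.cod)
    if infer(ctx, t) != ty:          # mode switch: infer, then compare
        raise TypeError("type mismatch")

nat = TBase("nat")
identity = Ann(Lam("x", Var("x")), TArrow(nat, nat))
print(infer({}, identity))           # TArrow(dom=TBase(name='nat'), cod=TBase(name='nat'))
```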
5

Grienenberger, Emilie. "Combining computational theories." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASG011.

Abstract:
Proof checkers and proof assistants are used to formalize mathematical theorems and to verify software, notably critical systems such as medical, industrial and transport systems. The diversity of proof systems raises the question of their interoperability: how can proofs be rechecked and reused across systems? Logical frameworks such as the lambdaPi-calculus modulo theory provide a common formalism in which various logical systems and mathematical theories can be expressed, and proof transformation tools within the lambdaPi-calculus modulo theory then allow proofs to be translated from one logical system to another. In a similar way to logical frameworks, theories that combine several theories provide a common language between those theories. The main focus of this manuscript is the study of the definition and properties of such combined theories, specifically in the context of computational theories, that is, theories whose definition relies on rewriting. More specifically, combinations of computational theories are studied in two contexts: ecumenical logics, i.e. logics that combine intuitionistic and classical logics, and theories of pure type systems modulo rewriting, which are used in the lambdaPi-calculus modulo theory to express the logical systems of various proof assistants. First, we look at the special case of ecumenical logics. A new ecumenical logic called NE is defined and studied. Its main properties are that its definition, though guided by double-negation translations, does not rely on them, and that it allows the definition of computational ecumenical theories, such as an ecumenical simple type theory. Second, the computational theories of pure type systems, an example of which is the logical framework of the lambdaPi-calculus modulo theory, are studied. In particular, we give tools to establish the well-typedness of computational theories in a modular way, whether for extension or restriction (called fragmentation) of theories. We then prove that, for a given proof in a theory combining multiple fragments, it is possible to establish to which fragment this proof belongs from the constants it contains, without explicit indication of the rewriting rules used. This result is crucial in justifying the use of such theories for the interoperability of proof systems. Finally, based on the definition of NE and the previous modularity results, we focus on a computational theory of the lambdaPi-calculus called theory U, combining first- and higher-order logics, minimal, intuitionistic, classical and ecumenical logics, prenex polymorphism and predicate subtyping, and the calculus of constructions. Properties of soundness, conservativity, consistency, normalization, and decidability of type-checking are established for the first- and higher-order logical fragments of theory U.

Books on the topic "Proof system interoperability"

1

Folino, Antonietta, and Roberto Guarasci, eds. Knowledge Organization and Management in the Domain of Environment and Earth Observation (KOMEEO). Ergon – ein Verlag in der Nomos Verlagsgesellschaft, 2022. http://dx.doi.org/10.5771/9783956508752.

Abstract:
The volume contains the proceedings of the KOMEEO (Knowledge Organization and Management in the domain of Environment and Earth Observation) international conference, organized in the field of the European ERA-PLANET (The European Network for observing our changing Planet) H2020 program. Papers present research projects and experiences related to different aspects of organizing knowledge in the environmental domain, which nowadays is receiving great attention from the European Union. In particular, they address topics related to Knowledge Organization Systems (KOSs), to their application in specific contexts, to the extraction of metadata, to the achievement of semantic interoperability. With contributions by Richard Absalom, Prof. Stefano Allegrezza, Dr. Giovanna Aracri, Armando Bartucci, Dr. Assunta Caruso, Prof. Eugenio Casario, Dr. Maria Teresa Chiaravalloti, Sergio Cinnirella, Martin Critelli, Sabina Di Franco, Prof. Antonietta Folino, Dr. Claudia Lanza, Francesca M.C. Messiniti, Prof. Alexander Murzaku, Dr. Anna Perri, Dr. Erika Pasceri, Paolo Plini, Prof. Anna Rovella and Rosamaria Salvatori.

Book chapters on the topic "Proof system interoperability"

1

Dowek, Gilles. "From the Universality of Mathematical Truth to the Interoperability of Proof Systems." In Automated Reasoning. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-10769-6_2.

Abstract:
The development of computerized proof systems, such as Coq, Matita, Agda, Lean, HOL 4, HOL Light, Isabelle/HOL, Mizar, etc., is a major step forward in the never-ending quest for mathematical rigor.
2

Storck, Michael, Luca Hollenberg, Martin Dugas, and Iñaki Soto-Rey. "Interoperability Improvement of Mobile Patient Survey (MoPat) Implementing Fast Health Interoperability Resources (FHIR)." In Studies in Health Technology and Informatics. IOS Press, 2019. https://doi.org/10.3233/978-1-61499-959-1-141.

Abstract:
Despite the advances in health information technology and the increasing usage of electronic systems, syntactic and semantic interoperability between different health information systems remains challenging. An emerging standard to tackle interoperability issues is HL7 FHIR, which uses modern web technologies for communication, like Representational State Transfer. The electronic patient-reported outcome system Mobile Patient Survey (MoPat) was adapted to support metadata import and clinical data export using HL7 FHIR. To this end, the data models of HL7 FHIR and MoPat were compared and the existing import and export functions of MoPat were extended to support HL7 FHIR. A test protocol including eight test datasets was successfully conducted to prove the functioning of the new features. In the near future, a real-time search toolbar for FHIR metadata resources will be integrated within MoPat. The MoPat FHIR import and export functions are ready to be used in a clinical setting in combination with a FHIR-compliant clinical data server.
3

Urbauer, Philipp, Maximilian Kmenta, Matthias Frohner, Alexander Mense, and Stefan Sauermann. "Propose of Standards Based IT Architecture to Enrich the Value of Allergy Data by Telemonitoring Data." In Studies in Health Technology and Informatics. IOS Press, 2017. https://doi.org/10.3233/978-1-61499-759-7-136.

Abstract:
Interoperability is a key requirement for any IT system to be future-proof and cost-efficient, due to the increasing interaction of IT systems in healthcare. This feasibility study is part of a larger project focusing on the conceptualization and evaluation of interoperable and modular IT framework components for exchanging big data information sets. Hence, this project investigates the applicability of a standards-based IT architecture for the integration of Personal Health Device data and open data sources. As a proof-of-concept use case, pollen forecast data from the Medical University of Vienna were combined with Personal Health Device data and a data correlation was investigated. The standards were identified and selected in expert reviews, and the architecture was designed based on a literature review. Subsequently the prototype was implemented and successfully tested in interoperability tests. The study shows that the architecture meets the requirements. It can be flexibly extended according to further requirements due to its generic setup. However, further extensions of the Interoperability-Connector and a full test setup need to be realized in the future.
4

Abreu Maia, Thais, Cristiana Fernandes De Muylder, and Rodrigo Mendonça Queiroga. "Archetype Development Process of Electronic Health Record of Minas Gerais." In Studies in Health Technology and Informatics. IOS Press, 2015. https://doi.org/10.3233/978-1-61499-564-7-938.

Abstract:
The Electronic Health Record (EHR) supports health systems and aims to reduce fragmentation, which will enable continuity of patient care. The paper's main objective is to define the steps, roles and artifacts of an archetype development process (ADP) for the EHR of the Brazilian National Health System (SUS) in the State of Minas Gerais (MG). This study was conducted using qualitative analysis based upon an applied case. It had an exploratory purpose, methodologically defined in four stages: literature review; descriptive comparison; proposition of an archetype development process; and proof of concept. The proof of concept showed that the proposed ADP ensures archetype quality and supports semantic interoperability in SUS to improve clinical safety and the continuity of patient care.
5

Dustdar, Schahram, Harald Gall, and Roman Schmidt. "Web Services for Groupware." In Service-Oriented Software System Engineering. IGI Global, 2005. http://dx.doi.org/10.4018/978-1-59140-426-2.ch017.

Abstract:
While some years ago the focus of many Groupware systems was on Web-based information systems supporting access with Web browsers, the focus today is shifting towards programmatic access to software services, regardless of their location and the application used to manipulate those services. Whereas the goal of Web computing has been to support group work on the Web (browser), Web services support for Groupware aims to provide interoperability between many Groupware systems. The contribution of this chapter is threefold: (1) to present a framework consisting of three levels of Web services for Groupware support, (2) to present a novel Web services management and configuration architecture with the aim of integrating various Groupware systems in one overall configurable architecture, and (3) to provide a use case scenario and a preliminary proof-of-concept implementation. Our overall goal for this chapter is to provide a sound and flexible architecture for gluing together various Groupware systems using Web services technologies.
6

Hochedlinger, Nina, Michael Nitzlnader, Markus Falgenhauer, et al. "Standardized Data Sharing in a Paediatric Oncology Research Network – A Proof-of-Concept Study." In Studies in Health Technology and Informatics. IOS Press, 2015. https://doi.org/10.3233/978-1-61499-524-1-27.

Abstract:
Data that have been collected in the course of clinical trials are potentially valuable for additional scientific research questions in so-called secondary use scenarios. This is of particular importance in rare disease areas like paediatric oncology. If data from several research projects need to be connected, so-called Core Datasets can be used to define which information needs to be extracted from every involved source system. In this work, the utility of the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) as a format for Core Datasets was evaluated, and a web tool was developed which received source ODM XML files and, via Extensible Stylesheet Language Transformation (XSLT), generated standardized Core Dataset ODM XML files. Using this tool, data from different source systems were extracted and pooled for joint analysis in a proof-of-concept study, facilitating both basic syntactic and semantic interoperability.
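The transformation step described above (source ODM in, standardized Core Dataset ODM out, via XSLT) can be reproduced with a generic XSLT processor. The sketch below uses lxml; the stylesheet and input file names are hypothetical placeholders, not artifacts of the study.

```python
from lxml import etree

# Hypothetical file names standing in for the study's mapping stylesheet and a source export.
stylesheet = etree.XSLT(etree.parse("core_dataset_mapping.xsl"))
source_odm = etree.parse("source_system_export.xml")

# Apply the transformation and serialize the standardized Core Dataset ODM document.
core_dataset = stylesheet(source_odm)
print(etree.tostring(core_dataset, pretty_print=True, xml_declaration=True,
                     encoding="UTF-8").decode("utf-8"))
```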
7

Baihan, Mohammed S., Yaira K. Rivera Sánchez, Xian Shao, Christopher Gilman, Steven A. Demurjian, and Thomas P. Agresta. "A Blueprint for Designing and Developing M-Health Applications for Diverse Stakeholders Utilizing FHIR." In Advances in Healthcare Information Systems and Administration. IGI Global, 2018. http://dx.doi.org/10.4018/978-1-5225-5036-5.ch006.

Abstract:
The FHIR standard is designed to enable interoperability and integration with the newest technologies adopted by the industry. This chapter presents a number of blueprints for the design and development of FHIR servers that enable the integration of HIT systems with m-health applications via FHIR. Each blueprint is based on the location at which FHIR servers can be placed with respect to the components of the m-health application (UI, API, server) or a HIT system, in order to define and design the necessary infrastructure to facilitate the exchange of information via FHIR. To demonstrate the feasibility of the work, this chapter utilizes the Connecticut Concussion Tracker (CT2) m-health application as a proof-of-concept prototype that fully illustrates the blueprints of the design and development steps involved. The blueprints can be applied to any m-health application and are informative and instructional for medical stakeholders, researchers, and developers.
8

Manivel, P., and Ramesh Kumar Yadav. "Blockchain and Smart Contracts for Secure, Transparent, and Immutable Student Feedback Management in OBE." In Artificial Intelligence-Powered Learning Analytics and Student Feedback Mechanisms for Dynamic Curriculum Enhancement and Continuous Quality Improvement in Outcome-Based Education. RADemics Research Institute, 2025. https://doi.org/10.71443/9789349552531-08.

Abstract:
Blockchain technology has emerged as a transformative solution for ensuring transparency, security, and immutability in student feedback management within OBE frameworks. Traditional feedback systems often suffer from inefficiencies, data manipulation risks, and lack of trust, necessitating the integration of decentralized and tamper-proof mechanisms. This book chapter explores the potential of blockchain and smart contracts in addressing these challenges by establishing a secure, transparent, and immutable student feedback system. The study examines the scalability limitations of blockchain networks and investigates advanced optimization techniques, including Layer 2 scaling solutions, sharding mechanisms, and hybrid storage models, to enhance performance and efficiency. The chapter highlights energy-efficient consensus protocols to improve sustainability in educational blockchain applications. Data availability challenges in off-chain storage and interoperability issues with existing LMS are also analyzed to ensure seamless adoption. By leveraging blockchain’s decentralized architecture, cryptographic security, and automated validation mechanisms, institutions can enhance the reliability and accountability of student feedback systems. The findings contribute to the ongoing discourse on blockchain applications in education, offering a scalable and efficient model for feedback management in OBE.
9

Rinner, Christoph, and Georg Duftschmid. "Bridging the Gap between HL7 CDA and HL7 FHIR: A JSON Based Mapping." In Studies in Health Technology and Informatics. IOS Press, 2016. https://doi.org/10.3233/978-1-61499-645-3-100.

Abstract:
The Austrian electronic health record (EHR) system ELGA went live in December 2015. It is a document-oriented EHR system and is based on the HL7 Clinical Document Architecture (CDA). HL7 Fast Healthcare Interoperability Resources (FHIR) is a relatively new standard that combines the advantages of HL7 messages and CDA documents. In order to offer easier access to information stored in ELGA, we present a method based on adapted FHIR resources to map CDA documents to FHIR resources. A proof-of-concept tool using Java, the open-source FHIR framework HAPI-FHIR and publicly available FHIR servers was created to evaluate the presented mapping. In contrast to other approaches, the close resemblance of the mapping file to the FHIR specification allows existing FHIR infrastructure to be reused. In order to reduce information overload and facilitate access to CDA documents, FHIR could offer a standardized way to query CDA data on a fine-granular basis in Austria.
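To make the direction of the mapping concrete, the sketch below turns a handful of already-extracted CDA patient fields into a FHIR Patient resource serialized as JSON. The input dictionary stands in for a parsed CDA recordTarget; element names are simplified and the values are invented, so this shows only the shape of such a mapping, not the paper's mapping file.

```python
import json

# Simplified stand-in for data extracted from a CDA document's recordTarget section.
cda_patient_role = {
    "id": "1.2.40.0.10.1.4.3.1|0001",   # invented identifier
    "family": "Musterfrau",
    "given": "Anna",
    "birthTime": "19800101",            # CDA-style YYYYMMDD timestamp
}

# Corresponding FHIR Patient resource (JSON representation).
birth = cda_patient_role["birthTime"]
fhir_patient = {
    "resourceType": "Patient",
    "identifier": [{"value": cda_patient_role["id"]}],
    "name": [{"family": cda_patient_role["family"],
              "given": [cda_patient_role["given"]]}],
    "birthDate": f"{birth[:4]}-{birth[4:6]}-{birth[6:]}",
}
print(json.dumps(fhir_patient, indent=2))
```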
10

Xu, Ronghua, Deeraj Nagothu, and Yu Chen. "AR-Edge: Autonomous and Resilient Edge Computing Architecture for Smart Cities." In Edge Computing - Architecture and Applications for Smart Cities [Working Title]. IntechOpen, 2024. http://dx.doi.org/10.5772/intechopen.1005876.

Abstract:
With the rapid advancements in artificial intelligence (AI), the Internet of Things (IoT), and network communication technologies, recent years have witnessed a boom in smart cities that has dramatically changed human life and society. While many smart city applications rely on cloud servers, enabling comprehensive information fusion among users, smart devices, and service providers to provide diverse, intelligent applications, IoT networks’ high dynamicity and heterogeneity also bring performance, security, and interoperability challenges to centralized service frameworks. This chapter introduces a novel Autonomous and Resilient Edge (AR-Edge) computing architecture, which integrates AI, software-defined network (SDN), and Blockchain technologies to enable next-generation edge computing networks. Thanks to capabilities in terms of logically centralized control, global network status, and programmable traffic rules, SDN allows for efficient edge resource coordination and optimization with the help of artificial intelligence methods, like large language models (LLM). In addition, a federated microchain fabric is utilized to ensure the security and resilience of edge networks in a decentralized manner. The AR-Edge aims to provide autonomous, secure, resilient edge networks for dynamic and complex IoT ecosystems. Finally, a preliminary proof-of-concept prototype of an intelligent transportation system (ITS) demonstrates the feasibility of applying AR-Edge in real-world scenarios.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Proof system interoperability"

1

Miranda, Nelson, Matheus Matos Machado, and Dilvan A. Moreira. "OntoDrug: Enhancing Brazilian Health System Interoperability with a National Medication Ontology." In Proceedings of the Brazilian Symposium on Multimedia and the Web. Sociedade Brasileira de Computação - SBC, 2024. http://dx.doi.org/10.5753/webmedia.2024.242062.

Full text
Abstract:
This paper presents OntoDrug, an ontology designed to enhance medicine management in Brazil by integrating regulatory frameworks and standardizing terminologies. OntoDrug improves patient safety and treatment efficacy by accurately identifying and classifying medications and by supporting interoperability with health information systems. A proof-of-concept application integrated into the EHR system of the Hospital das Clínicas de Marília demonstrated OntoDrug's utility, achieving high precision and recall. An experimental study using large language models (LLMs) grounded on the ontology achieved, with GPT-4 Turbo, 0.97 precision, 1.0 recall, and an F1-score of 0.99. We also evaluated the open-source models llama3-8b, llama3-70b, and gemma-7b-it; their performance was close to GPT-4's. This effectiveness is primarily due to the use of LLMs, although challenges related to cost, privacy, and service availability were identified. OntoDrug represents a significant advancement in the standardization and optimization of medication information in Brazil.
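For readers unfamiliar with the reported metrics, this short Python sketch shows how precision, recall, and F1 are computed from raw counts. The counts below are invented to show the arithmetic and are not taken from the OntoDrug evaluation.

```python
# Precision, recall, and F1 from raw true-positive / false-positive /
# false-negative counts; the example counts are invented.
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=5)
print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```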
APA, Harvard, Vancouver, ISO, and other styles
2

Severo, Liverson Paulo Furtado, and Jean Everson Martina. "Digital Prescription and Dispensation of Medications." In Simpósio Brasileiro de Sistemas de Informação. Sociedade Brasileira de Computação, 2025. https://doi.org/10.5753/sbsi.2025.246608.

Full text
Abstract:
Context: In Brazil, the prescription and dispensing of medications remain largely manual, relying on physical documents. This approach poses challenges for security, traceability, and regulatory compliance, especially for controlled substances. Problem: Manual systems are insufficient for tracking medication dispensing, preventing misuse, and ensuring interoperability between healthcare providers and pharmacies. Solution: This study proposes a system that integrates the FHIR interoperability standard, adapted to produce self-contained documents, with JAdES digital signatures for secure and authentic prescription records. Blockchain is used to enable traceability and control over medication dispensing through an immutable record of transactions. Method: The research employed a Proof of Concept (PoC) methodology to validate the proposed system, focusing on analyzing the current manual processes and proposing a digital solution to address identified gaps. This PoC was conducted in a controlled laboratory environment to simulate real-world scenarios and test the integration of FHIR, JAdES signatures, and Blockchain technologies to evaluate the system’s functionality and compliance with regulatory requirements. Results: The system successfully generated secure, self-contained digital prescriptions and used Blockchain to trace and control medication dispensing. It improved regulatory compliance and addressed interoperability issues by eliminating external dependencies. Contributions: This work advances healthcare information systems by combining interoperability standards, electronic signatures, and Blockchain to digitize and secure critical processes, addressing key challenges in medication management.
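A heavily simplified Python sketch of the traceability idea follows: a self-contained FHIR-style prescription bundle is hashed, and dispensation events referencing that hash are appended to a log. The JAdES signatures and the actual blockchain used in the paper are omitted, and all values are invented.

```python
# Simplified model of tamper-evident prescription tracking: hash the
# prescription bundle and append dispensation events that reference the hash.
# The real system also signs documents with JAdES and anchors records on a
# blockchain; both are omitted in this sketch.
import hashlib
import json
from datetime import datetime, timezone

prescription_bundle = {
    "resourceType": "Bundle",
    "type": "document",
    "entry": [
        {"resource": {"resourceType": "MedicationRequest",
                      "medicationCodeableConcept": {"text": "Amoxicillin 500 mg"},
                      "subject": {"display": "Example Patient"}}},
    ],
}

def bundle_hash(bundle: dict) -> str:
    canonical = json.dumps(bundle, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

dispensation_log: list = []  # stand-in for an immutable ledger

def dispense(bundle: dict, pharmacy: str) -> None:
    dispensation_log.append({
        "prescription_hash": bundle_hash(bundle),
        "pharmacy": pharmacy,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

dispense(prescription_bundle, pharmacy="Farmacia Exemplo")
print(dispensation_log[0]["prescription_hash"][:16], dispensation_log[0]["pharmacy"])
```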
APA, Harvard, Vancouver, ISO, and other styles
3

Henry, Chris, and Steven Grant. "Implementing New Automated Ticketing Technology at Virginia Railway Express." In 2012 Joint Rail Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/jrc2012-74054.

Full text
Abstract:
Virginia Railway Express (VRE) is at a crossroads with its current technology. In the near future, VRE will be required to replace its existing Automated Fare Collection (AFC) system. While this may not initially sound so different from what all rail agencies must eventually go through, ensuring that the system can be integrated into its neighboring Washington Metropolitan Area Transit Authority's (WMATA) impending New Electronic Payments Program (NEPP) is a completely different story, and for many reasons. VRE is a key regional partner of WMATA and, as such, the two work hand-in-hand to ensure that interoperability between the two systems is maximized for the passengers who ride both services. Key to this is NEPP as an eventual replacement of WMATA's SmarTrip® program. Since the majority of VRE's ridership consists of Federal employees who carry PIV (Personal Identity Verification)/CAC (Common Access Card) cards and make their way into the nation's capital from Virginia and Maryland, the SmarTrip® program has been a major focus for VRE. While the NEPP program has several years before it goes live, it presents VRE with a valuable opportunity to review its current AFC system and use the interim to pilot various concepts of operations for a future system. As such, VRE has become a willing partner for WMATA as a host for technology proofs of concept that will aid both VRE and WMATA in the long term. VRE is looking into hosting various technology options to pilot at key stations that may include mobile ticketing, Near Field Communication (NFC), or PIV/CAC cards as forms of payment, as well as proof of payment. As an open-gated system, VRE must tackle the problem of fare evasion, so maximizing its proof-of-payment capabilities with the latest technology is essential. VRE would like to share with the rail community its thoughts and ideas for proofs of concept that utilize the latest payment technologies, as well as discuss its plans on interoperability with WMATA to assist agencies with similar challenges.
APA, Harvard, Vancouver, ISO, and other styles
4

Saylor, Kase J., Cyril F. Meyer, Theodore Wilmes, and Michael S. Moore. "ADVANCED SA – MODELING AND VISUALIZATION ENVIRONMENT." In 2024 NDIA Michigan Chapter Ground Vehicle Systems Engineering and Technology Symposium. National Defense Industrial Association, 2024. http://dx.doi.org/10.4271/2024-01-3246.

Full text
Abstract:
In this paper, we present a proof-of-concept prototype system created in an applied research and development effort at Southwest Research Institute. The Advanced Situational Awareness (ASA) Modeling and Visualization Environment is a response to the need for applications that improve the value and presentation of situational awareness information by leveraging the increased integration of sensors, Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR), and Electronic Warfare (EW) systems with networks in ground vehicles. The ongoing U.S. Army Vehicular Integration for C4ISR/EW Interoperability (VICTORY) initiative is providing the framework by which this integration of sensors and systems can be realized. By utilizing the VICTORY concepts and current specifications, the research team was able to develop an ASA system that provides cross-vehicle reasoning, visualization of situation awareness (SA) data overlaid on video, and a mapping capability.
APA, Harvard, Vancouver, ISO, and other styles
5

Rahamtallah, M., A. Abri, W. Abdel Rahman, and Q. AlZain. "Building the Next-Generation Process Automation System: An O-PAS Field Pilot Initiative Between Schneider Electric & Petroleum Development Oman." In International Petroleum Technology Conference. IPTC, 2024. http://dx.doi.org/10.2523/iptc-24610-ea.

Full text
Abstract:
The evolution of process automation has been largely dominated by legacy control systems. Although these systems are reliable and robust, they are closed and proprietary in nature. These legacy systems carry high upgrade and maintenance costs due to the specific, custom nature of their components and the lack of interoperability between different systems. Additionally, system security often comes as an afterthought, and maintenance updates are required after installation to maintain the current level of security or regulatory compliance, especially in an increasingly connected world. Industrial manufacturers are under increasing pressure to reduce both the capital and lifecycle expenses of their process control systems while enhancing operational profitability. Even though modern automation systems incorporate certain improvements, integrating best-in-class, third-party components into proprietary, closed systems remains a challenge. Moreover, despite the ongoing efforts by automation vendors to adopt industry-standard security measures and best practices, these systems lack the built-in cybersecurity measures needed to protect operations, equipment assets, and other capital investments. The solution to these impediments is an open, interoperable, and secure-by-design process automation architecture. Open, interoperable systems stimulate growth in the supplier market, reducing costs through heightened choice and competition. They facilitate the integration of products from multiple vendors, enabling the adoption of best-fit and best-in-class components. Ensuring that future automation systems embrace and reinforce standards that promote genuine heterogeneity, inherent security, multi-vendor interoperability, future-proof innovation, and an uncomplicated pathway for system migration will enable end users to derive more value and profitability from their controlled operations. This case study describes a field pilot, a collaboration between Schneider Electric and Petroleum Development Oman (PDO), aimed at demonstrating some of the requirements of the Open Process Automation™ Standard (O-PAS™), one of the emerging next-generation process automation standards. Additionally, O-PAS will accelerate the digital transformation of industry. O-PAS™ redefines the architecture of industrial automation and control systems, enabling the development of scalable, modular, interchangeable, secure-by-design, and interoperable distributed control systems. This initiative leverages the IEC 61499 architecture and UniversalAutomation.org (UAO) to achieve the pilot's objectives of demonstrating these requirements and implementing a multi-vendor architecture.
APA, Harvard, Vancouver, ISO, and other styles
6

Santos, Nuno, Paula Monteiro, Francisco Morais, et al. "Towards Implementing a Collaborative Manufacturing Cloud Platform: Experimenting Testbeds Aiming Asset Efficiency." In ASME 2020 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/imece2020-24044.

Full text
Abstract:
Developing Industrial Internet of Things (IIoT) systems requires addressing challenges that range from acquiring data at the shop-floor level, to integrating it at the edge level, to managing it at the cloud level. Managing manufacturing operations at the cloud level has opened the opportunity to extend decisions to entities of the supply chain in a collaborative way. This has raised many challenges, not only due to several interoperability needs, but also in properly defining an effective way to take advantage of the available data, leading to the implementation of the Industrial Digital Thread (IDT) and Asset Efficiency (AE). This paper discusses implementation concerns for a collaborative manufacturing environment in an IIoT system in order to monitor equipment AE. Each concern was addressed in a separate proof-of-concept testbed. The demonstration is based on a project for the IIoT domain called PRODUTECH-SIF (Solutions for the Industry of the Future).
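The abstract does not spell out how Asset Efficiency is computed; the Python sketch below follows the common OEE-style decomposition (availability x performance x quality) as one plausible reading, with invented shop-floor figures.

```python
# OEE-style asset-efficiency calculation (availability x performance x quality).
# This follows the widely used OEE decomposition as one plausible reading of
# "AE"; the cited project may define the metric differently. Figures are invented.
def asset_efficiency(planned_min, downtime_min, ideal_cycle_s, total_count, good_count):
    run_time_min = planned_min - downtime_min
    availability = run_time_min / planned_min
    performance = (ideal_cycle_s * total_count) / (run_time_min * 60)
    quality = good_count / total_count
    return availability * performance * quality

ae = asset_efficiency(planned_min=480, downtime_min=45,
                      ideal_cycle_s=30, total_count=780, good_count=760)
print(f"asset efficiency = {ae:.1%}")  # roughly 79% with these figures
```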
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, G. Q., S. Bin, and K. L. Mak. "ppXML: Towards Generic and Extensible Modelling of Platform Products." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/cie-48222.

Full text
Abstract:
ppXML is an information infrastructure that enables and facilitates meaningful information and knowledge management within, and interoperability between, web services (applications) for Platform Product Development (PPD), a formidable approach to agile product development (APD) for mass customization (MC). ppXML carries four layers of meaning. Firstly, ppXML represents a set of constructs that are consistent with the concepts and methods of Platform Product Development for Mass Customization. Secondly, derived from XML (eXtensible Markup Language) as a sublanguage, ppXML is a standard yet extensible modelling language dedicated to the modelling of products and product platforms for PPD web services. Thirdly, ppXML serves as a product platform repository and a PPD web service registry, together with a set of online facilities for data representation and transformation between the different components and parties involved in the web services. Finally, ppXML is a proof-of-concept online PPD portal, incorporating some essential web-based Decision Support Systems (DSS) for product platform development and product platform customization. This paper presents an overview of ppXML together with its background and underlying philosophy.
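As a toy illustration of XML-based platform-product modelling, the Python sketch below reads a small platform description and counts its configuration space. The element and attribute names are invented and do not follow the actual ppXML schema.

```python
# Toy example of reading a platform-product description in XML. The element
# and attribute names are invented for illustration and do not correspond to
# the ppXML schema described in the paper.
import xml.etree.ElementTree as ET

doc = """
<productPlatform name="CameraPlatform">
  <module id="lens" variants="3"/>
  <module id="sensor" variants="2"/>
  <module id="body" variants="4"/>
</productPlatform>
"""

root = ET.fromstring(doc)
variant_space = 1
for module in root.findall("module"):
    count = int(module.get("variants"))
    variant_space *= count
    print(module.get("id"), "->", count, "variants")
# Number of distinct product configurations the platform can generate.
print("total configurations:", variant_space)
```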
APA, Harvard, Vancouver, ISO, and other styles
8

Banerjee, Subharthi, Michael Hempel, Pejman Ghasemzadeh, Hamid Sharif, and Tarek Omar. "Wireless Communication for High-Speed Passenger Rail Services: A Study on the Design and Evaluation of a Unified Architecture." In 2020 Joint Rail Conference. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/jrc2020-8068.

Full text
Abstract:
High-speed trains, though prevalent in Europe and Asia, are not yet a reality in the US. Interest and industry engagement are growing, however, especially around commercial hubs close to commuter homes, with the aim of alleviating commute times. With support from the Federal Railroad Administration in the United States, the authors are exploring the design requirements, challenges, and technology capabilities for wireless communication between passenger cars, on-board systems, and trackside infrastructure, all using next-generation radio access technologies. Key aspects of this work focus on interoperability, modularity of the architecture to facilitate a future-proof design, high-performance operation for passenger services, and ultra-low-latency capabilities for train control operations. This paper presents the theoretical studies and computer simulations of the proposed network architectures, as well as the results of an LTE/5G field test framework using an OpenAir-Interface (OAI)-based software-defined radio (SDR) approach. Through various test scenarios, the OAI LTE/5G implementation is first evaluated in a lab environment and then through field tests. These tests provide ground-truth data that can be leveraged to refine the computer simulation model for evaluating large-scale environments with high fidelity and accuracy. Of particular focus in this evaluation are performance aspects related to delay, handover, bit error rate, frequency offset, and achievable uplink/downlink throughput.
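As background for the bit-error-rate aspect, the Python sketch below evaluates the standard textbook BER expression for QPSK over an AWGN channel. It is only a reference point, not the empirical OAI/SDR measurement methodology used in the paper.

```python
# Theoretical bit-error rate of QPSK over AWGN: BER = 0.5 * erfc(sqrt(Eb/N0)).
# Standard textbook expression, included purely as a reference point.
import math

def qpsk_ber_awgn(ebn0_db: float) -> float:
    ebn0 = 10 ** (ebn0_db / 10)  # convert dB to linear scale
    return 0.5 * math.erfc(math.sqrt(ebn0))

for snr_db in (0, 4, 8, 12):
    print(f"Eb/N0 = {snr_db:>2} dB -> BER = {qpsk_ber_awgn(snr_db):.2e}")
```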
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Proof system interoperability"

1

Shapovalov, Yevhenii B., Viktor B. Shapovalov, Roman A. Tarasenko, Stanislav A. Usenko, and Adrian Paschke. A semantic structuring of educational research using ontologies. [n.p.], 2021. http://dx.doi.org/10.31812/123456789/4433.

Full text
Abstract:
This article is devoted to presenting the semantic interoperability of research and scientific results through an ontological taxonomy. To achieve this, the principles of systematization and structuring of scientific/research results in scientometric databases have been analysed. We use the existing cognitive IT platform Polyhedron and extend it with an ontology-based information model as the main contribution. As a proof of concept, we have modelled two ontological graphs, “Development of a rational way for utilization of methane tank waste at LLC Vasylkivska poultry farm” and “Development of a method for utilization of methane tank effluent”. Also, to demonstrate the potential of ontological systems for the systematization of research and scientific results, the “Hypothesis test system” ontological graph has been created.
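To give a flavour of the kind of class/subclass structuring the report describes, the Python sketch below builds a tiny taxonomy-style graph with rdflib. The vocabulary and URIs are invented and unrelated to the Polyhedron platform's actual model.

```python
# A small taxonomy-style ontological graph built with rdflib, illustrating
# class/subclass structuring of research results. URIs and labels are invented.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/research#")
g = Graph()

g.add((EX.ResearchResult, RDF.type, RDFS.Class))
g.add((EX.MethaneTankWasteStudy, RDFS.subClassOf, EX.ResearchResult))
g.add((EX.HypothesisTestSystem, RDFS.subClassOf, EX.ResearchResult))
g.add((EX.MethaneTankWasteStudy, RDFS.label, Literal("Utilization of methane tank waste")))

# List every declared specialization of ResearchResult.
for subclass in g.subjects(RDFS.subClassOf, EX.ResearchResult):
    print(subclass)
```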
APA, Harvard, Vancouver, ISO, and other styles