Journal articles on the topic 'Modular redundancy principle'

Consult the top 32 journal articles for your research on the topic 'Modular redundancy principle.'

1

Arifeen, Tooba, Abdus Hassan, and Jeong-A. Lee. "A Fault Tolerant Voter for Approximate Triple Modular Redundancy." Electronics 8, no. 3 (2019): 332. http://dx.doi.org/10.3390/electronics8030332.

Abstract:
Approximate Triple Modular Redundancy has been proposed in the literature to overcome the area overhead of Triple Modular Redundancy (TMR). The outcome of the TMR/Approximate TMR modules serves as the voter input to produce the final output of a system. Because the working principle of Approximate TMR conditionally allows one of the approximate modules to differ from the original circuit, it is critical for Approximate TMR that a voter be tolerant not only toward its internal faults but also toward faults that occur at the voter inputs. Herein, we present a novel compact voter for Approximate TMR using pass transistors and quadded transistor-level redundancy to achieve higher fault masking. The design also targets a better Quality of Circuit (QoC), a new metric we propose to highlight the ability of a circuit to fully mask all possible internal faults for an input vector. Comparing the fault-masking features with those of existing works, the proposed voter delivered up to 45.1%, 62.5%, and 26.6% improvements in Fault Masking Ratio (FMR), QoC, and reliability, respectively. With respect to electrical characteristics, our proposed voter achieves improvements of up to 50% and 56% in transistor count and power-delay product, respectively.
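The 2-of-3 voting that underlies TMR, and that this voter hardens, can be sketched in a few lines. This is a minimal bitwise majority function for illustration only, not the paper's transistor-level design:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: any single faulty module is masked."""
    return (a & b) | (a & c) | (b & c)

golden = 0b1011_0010
faulty = golden ^ 0b0000_0100            # one module suffers a single bit flip
assert tmr_vote(golden, golden, faulty) == golden
```

Approximate TMR complicates this picture because one input is allowed to disagree by design, which is why the paper's voter must also tolerate faults at its inputs.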
2

Shi, Yong, and Zhuoyi Xu. "Wide Load Range ZVS Three-level DC-DC Converter: Modular Structure, Redundancy Ability, and Reduced Filters Size." Energies 12, no. 18 (2019): 3537. http://dx.doi.org/10.3390/en12183537.

Abstract:
In future dc distributed power systems, high-performance high-voltage dc-dc converters with redundancy ability are welcome. However, most existing high-voltage dc-dc converters do not have redundancy ability. To solve this problem, a wide-load-range zero-voltage switching (ZVS) three-level (TL) dc-dc converter is proposed, which has several attractive features. The primary switches have reduced voltage stress of only Vin/2. Moreover, no extra clamping component is needed, which results in a simple primary structure. Redundancy can be provided on both the primary and secondary sides, which means high system reliability. With proper design of the magnetizing inductance, all primary switches can obtain ZVS down to zero output current, and the added conduction loss can be neglected. A TL voltage waveform is obtained before the output inductor, which leads to a small output filter volume. The four secondary MOSFETs can be switched under zero-current switching (ZCS) conditions over a wide load range. Finally, both the primary and secondary power stages have a modular architecture, which permits realizing any given system specification with low-voltage, standardized power modules. The operating principle and soft-switching characteristics are presented in this paper, and experimental results from a 1 kW prototype are provided to validate the proposed converter.
3

Klochan, A., P. Dyachenko, Yu. Bozhok, H. Al-Ammori, and I. Zhykhariev. "OPTIMIZATION OF INFORMATION BACKUP OF DATA PROTECTION SYSTEMS." SCIENTIFIC-DISCUSSION, no. 100 (May 15, 2025): 41–46. https://doi.org/10.5281/zenodo.15427386.

Abstract:
For data protection systems, ensuring high reliability of incoming and outgoing data is very important. To increase data reliability, a method of parallel information backup can be used, which significantly reduces the likelihood of a missed detection but does little to reduce the likelihood of a false alarm. Applying the principles of majority logic makes it possible to reduce the probability of false alarms, but requires increasing the number of parallel channels, which is associated with economic constraints. In the future, developing the method of parallel information redundancy using the nested-modules method will make it possible to create simple, technically reliable, cost-effective, highly informative systems with high reliability of the monitored data. The proposed method of nested information redundancy is an effective way of building integrated automated decision-support systems.
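Assuming independent channels with identical per-channel probabilities (an assumption the abstract does not state explicitly), the trade-off between missed detections and false alarms under k-of-n majority logic can be illustrated with a simple binomial calculation; the probabilities below are purely illustrative:

```python
from math import comb

def at_least_k(p: float, n: int, k: int) -> float:
    """P(at least k of n independent channels fire), per-channel probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n, k = 3, 2                      # 2-of-3 majority logic
p_detect, p_false = 0.95, 0.01   # illustrative per-channel probabilities
print("P(missed detection):", 1 - at_least_k(p_detect, n, k))  # ~7.3e-3 vs 5e-2 per channel
print("P(false alarm):     ", at_least_k(p_false, n, k))       # ~3.0e-4 vs 1e-2 per channel
```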
4

Bakay, B. Ya, and V. M. Hobela. "Formation of parameters of elements of hoisting and transport machines of manipulator type at the design stage." Forestry, Forest, Paper and Woodworking Industry 44 (December 30, 2018): 64–71. http://dx.doi.org/10.36930/42184409.

Abstract:
Techniques and principles for designing manipulator-type loading machines are diverse and complex. Modern methods of designing loading machines and their individual elements are based on an analysis of the technological process in which they are expected to operate. To reduce the cost and time of designing, manufacturing and commissioning special-purpose loading machines, to increase their maintainability, and to simplify procurement, many domestic and foreign companies have begun to use unit-modular design. This approach allows dividing manipulator-type loading machines into simpler functional elements, whose pliability is easy to determine by the methods of strength of materials. The transition from the pliability of such individual elements to the pliability of the loading machine as a whole is made using the matrix of transmission relations obtained in the course of force analysis and accuracy calculations of the elements. The aggregate-modular principle can be one of the main principles for realizing manipulator-type loading machines. It makes it possible, given a limited number of standardized elements, to create a specialized loading machine design that best meets the requirements of a particular technological task without redundancy. In each case, this approach reduces the time of development and design of specialized manipulator-type hoisting machines, increases reliability owing to the durability of the elements included, and reduces production costs by reducing the range of parts and components. It is proposed to form the design parameters of the elements of manipulator-type loading machines at the design stage by performing force analysis and accuracy calculations of the elements. This improves known design solutions, making them more suitable for practical application.
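As a rough illustration of the transition from element pliabilities to the machine's overall pliability, a standard series composition through transmission matrices looks like the following. The matrices here are random placeholders, not the paper's force-analysis results:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical transmission matrices T_i (from force analysis) and
# element compliance matrices C_i for three modular elements.
T = [rng.normal(size=(6, 6)) for _ in range(3)]
C = [np.diag(rng.uniform(0.1, 1.0, size=6) * 1e-6) for _ in range(3)]

# Series composition: the machine's end-point compliance accumulates each
# element's compliance reflected through its transmission matrix.
C_total = sum(Ti @ Ci @ Ti.T for Ti, Ci in zip(T, C))
```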
5

Zhadnov, V. V. "Assessing the sufficiency indicators of a set of spare parts, tools and accessories for uninterruptible power supplies of a data centre using data sheet specifications." Dependability 22, no. 3 (2022): 11–20. http://dx.doi.org/10.21683/1729-2646-2022-22-3-11-20.

Abstract:
Aim. To suggest a method of estimating the parameters of a set of spare parts, tools and accessories (SPTA) according to data sheet specifications for industrial uninterruptible power supplies (UPS) of data centres using state-of-the-art techniques. Methods. The paper uses methods of the dependability theory, the Markov process theory and the optimisation method. Results. Using the suggested approach, the stages of parametric synthesis of an SPTA kit were defined for mainline modular UPS that feature redundancy with repair and limited SPTA. For each stage, the application of the mathematical models required for calculating the dependability characteristics and parameters of power module components based on UPS dependability indicators is substantiated, along with the mathematical models that associate the sufficiency indicators of an SPTA kit with its parameters. Those models allow calculating the failure and recovery rates of UPS power modules, as well as the mean time to failure and restoration, based on the data sheet specifications of reliability, maintainability and availability. In turn, the obtained dependability characteristics are the input data for calculating the SPTA sufficiency values (average delay in meeting a request). Using the value of the average delay in meeting a request with an SPTA kit as a criterion for the mean time to power module restoration makes it possible to conclude whether it is, in principle, possible to ensure the specified dependability indicators in the course of operation and, therefore, whether such a UPS can be used. Should the latter be possible, then using the value of the average delay in meeting a request as a restriction, while taking into account the restrictions on the initial SPTA inventory, will allow synthesising the SPTA kit (selecting a replenishment strategy and defining its parameters: delivery time, etc.). Comparing the logistical capabilities with the resulting data for the selected replenishment strategy will allow making a final conclusion regarding the capability to maintain the specified UPS dependability characteristics throughout the operation period. Using the above method, the parameters of a single kit of spare parts, tools and accessories were synthesised, using the Protect 3.M UPS as an example. Conclusion. The approach suggested in the paper allows estimating both the general feasibility of ensuring the specified dependability and the economic expediency of using industrial mainline modular UPS with redundancy and recovery. Additionally, if ensuring the UPS dependability is possible, but the operating costs of its maintenance are unacceptable, the possibility of reducing the number of repair teams (reducing the cost of their deployment) and/or using more efficient redundancy methods (mixed redundancy, mixed redundancy with rotation, etc.) should be evaluated. However, it should be taken into consideration that the proposed approach, based on the use of mathematical models, does not guarantee 100% accuracy of SPTA parameter estimation, as the mathematical models it uses, like any other models, have limited accuracy, and the results obtained with their help require experimental confirmation by means of testing or controlled operation.
6

Burgas, Llorenç, Joaquim Melendez, Joan Colomer, Joaquim Massana, and Carles Pous. "N-dimensional extension of unfold-PCA for granular systems monitoring." Engineering Applications of Artificial Intelligence 71 (May 1, 2018): 113–24. https://doi.org/10.1016/j.engappai.2018.02.013.

Abstract:
This work is focused on the data-based modelling and monitoring of a family of modular systems that have multiple replicated structures with the same nominal variables and show temporal behaviour with a certain periodicity. These characteristics are present in many systems in numerous fields, such as the construction and energy sectors or industry. The challenge for these systems is to exploit the redundancy in both time and the physical structure. In this paper the authors present a method for representing such granular systems using N-dimensional data arrays, which are then transformed into the suitable 2-dimensional matrices required to perform statistical processing. Here, the focus is on pre-processing data using a non-unique folding–unfolding algorithm in a way that allows different statistical models to be built in accordance with the selected monitoring requirements. Principal Component Analysis (PCA) is assumed as the underlying principle for the monitoring. Thus, the method extends Unfold Principal Component Analysis (Unfold-PCA or Multiway PCA), applied to 3D arrays, to deal with N-dimensional matrices. However, the method is general enough to be applied in other multivariate monitoring strategies. Two examples in the area of energy efficiency illustrate the application of the method for modelling. Both examples show how a single data-set, folded and unfolded in different ways, offers different modelling capabilities. Moreover, one of the examples is extended to exploit real data: data collected over a two-year period from a multi-dwelling social building located in downtown Barcelona (Catalonia).
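A minimal numpy sketch of the folding–unfolding idea, using a hypothetical 4-D array (units x days x hours x variables); the choice of which dimensions become observations versus features is exactly the non-unique unfolding the paper exploits:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 30, 24, 5))   # units x days x hours x variables (synthetic)

# One possible unfolding: rows = unit-day observations, columns = hour-variable features.
X2 = X.reshape(8 * 30, 24 * 5)
X2 = X2 - X2.mean(axis=0)             # mean-centre before PCA

U, S, Vt = np.linalg.svd(X2, full_matrices=False)
scores = U * S                        # principal-component scores per observation
explained = S**2 / (S**2).sum()       # variance explained per component
```

Reshaping instead to (8, 30 * 24 * 5) would treat each replicated unit as one observation, supporting a different monitoring question on the same data.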
7

Efanov, D. V. "Fault-tolerant Structures of Digital Devices Based on Boolean Complement with the Calculations Checking by Sum Codes." Èlektronnoe modelirovanie 43, no. 5 (2021): 21–42. http://dx.doi.org/10.15407/emodel.43.05.021.

Abstract:
The article considers the construction of fault-tolerant digital devices and computing systems that do not use the principles of modular redundancy. To correct the signals, a special distorted-signal fixation unit, a concurrent error-detection circuit based on a pre-selected redundant code, and a signal correction block are used. The distorted-signal fixation unit is implemented by the Boolean complement method, which makes it possible to design a large number of such blocks with different indicators of implementation complexity. When synthesizing a fault-tolerant device according to the proposed method, it is possible to organize a concurrent error-detection circuit for both the source device and the Boolean complement block in the structure of the distorted-signal fixation unit. This makes it possible to choose, among the variety of ways to implement fault-tolerant devices according to the proposed method, the one that gives a device with the least structural redundancy. Various redundant codes can be used to organize concurrent error-detection circuits, including classical and modified sum codes. The author provides algorithms for the synthesis of the distorted-signal fixation unit and the Boolean complement block. The results of experiments with combinational benchmark circuits from the well-known LG'91 and MCNC benchmark sets are highlighted. The article presents the possibilities of the considered method for the organization of fault-tolerant digital devices and computing systems.
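The classical sum code in this literature is the Berger code, whose check symbol is the count of zeros in the information word; here is a minimal sketch of that encoding and check (not the paper's Boolean-complement circuitry):

```python
def berger_encode(info):
    """Append the zero-count of the info bits as a little-endian check field."""
    check_len = len(info).bit_length()   # enough bits to count all zeros
    zeros = info.count(0)
    return info + [(zeros >> i) & 1 for i in range(check_len)]

def berger_check(word, k):
    info, check = word[:k], word[k:]
    return info.count(0) == sum(b << i for i, b in enumerate(check))

w = berger_encode([1, 0, 1, 1, 0, 0, 1])
assert berger_check(w, 7)
w[0] ^= 1                                # a unidirectional 1 -> 0 error
assert not berger_check(w, 7)            # detected: zero count no longer matches
```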
8

Букирёв, А. С., А. Ю. Савченко, М. И. Яцечко, and В. А. Малышев. "Diagnostic system for the technical condition of the aircraft avionics complex based on intelligent information technologies." МОДЕЛИРОВАНИЕ, ОПТИМИЗАЦИЯ И ИНФОРМАЦИОННЫЕ ТЕХНОЛОГИИ 8, no. 1(28) (2020): 10–11. http://dx.doi.org/10.26102/2310-6018/2020.28.1.010.

Abstract:
An approach to constructing a system for diagnosing the technical condition of an aircraft avionics complex based on intelligent information technologies is proposed, with the aim of ensuring flight safety. An intelligent diagnostic system is developed, and the problem of diagnosing the technical condition of objects performing information signal transformations is solved. The possibility of managing redundancy in the on-board equipment complex using an intelligent diagnostic system is substantiated. The principle of constructing such a system serves to automatically build a diagnostic model of the object under diagnosis through the use of artificial intelligence methods. This makes it possible to implement, in software, a unified intelligent diagnostic system (invariant to different objects) in an on-board equipment complex built on the principle of integrated modular avionics. In turn, an important feature of the implementation and application of the intelligent diagnostic system is its ability to function (learn) and to perform its intended task (diagnosing the technical condition) in real time. The learning process of the intelligent diagnostic system can be carried out in two main ways: supervised learning (most relevant when testing an item of aviation equipment for reliability) and unsupervised learning (a fully autonomous mode, most relevant during testing of the monitored object or during its intended use). In the process of reliability testing of the monitored object, the intelligent diagnostic system will form an intelligent database of models of the initial (correct) functioning of the monitored object of the on-board equipment complex, with subsequent recognition of pre-failure states and their classification (clustering).
9

Aparakin, Аnton. "Modular-Parametric Principle of Design Development of Gear Hydraulic Machines." Central Ukrainian Scientific Bulletin. Technical Sciences 2, no. 7(38) (2023): 51–58. http://dx.doi.org/10.32515/2664-262x.2023.7(38).2.51-58.

Abstract:
The conventional system for developing and implementing designs of gear-type hydraulic machines is imperfect for a number of reasons, and it cannot be effective under the conditions of large-scale production. The purpose of this work is to create a principle for designing a model range of gear hydraulic machines that optimizes production and marketing conditions, with a subsequent reduction in production costs. To achieve this goal, the theory of "redundant connections" was used and the design scheme of the hydraulic machine was analyzed. Based on the results of this analysis, several possible schemes for removing "redundant connections" were developed and calculated. From the proposed schemes, the most appropriate scheme for creating a gear-type hydraulic machine was determined: a scheme using central loading of the driving gear and a barrel-shaped profile of the longitudinal contour of the tooth of the driven gear. The paper also considers additional benefits arising from the removal of redundant connections. On the basis of the developed diagram of the action of forces in gear engagement, it is shown how deviations from the geometric accuracy of the gear affect the operation of the hydraulic machine and why the proposed scheme with a barrel-shaped profile of the longitudinal tooth contour is more appropriate. An important result of the research is the synthesized, promising design scheme of a gear hydraulic machine. The use of the proposed scheme reduces the number of redundant connections (from 7 to 5, relative to the conventional scheme), which will contribute to the reduction of additional deformations and energy losses when working in hydraulic motor mode and of fluid losses when working in hydraulic pump mode. Implementing one of the mating gears with a barrel-shaped longitudinal tooth profile will ensure stabilization of the displacement moment when the unit operates in hydraulic motor mode and will stabilize the hydraulic efficiency when it operates in pump mode.
10

Kalmykov, Igor Anatolyevich, Vladimir Petrovich Pashintsev, Kamil Talyatovich Tyncherov, Aleksandr Anatolyevich Olenev, and Nikita Konstantinovich Chistousov. "Error-Correction Coding Using Polynomial Residue Number System." Applied Sciences 12, no. 7 (2022): 3365. http://dx.doi.org/10.3390/app12073365.

Abstract:
There has been a tendency to use the theory of finite Galois fields GF(2^n) in cryptographic ciphers (AES, Kuznyechik) and digital signal processing (DSP) systems. For these, it is advisable to use modular codes of the polynomial residue number system (PRNS). Modular codes of PRNS are arithmetic codes in which addition, subtraction and multiplication are performed in parallel across the bases of the code, which are irreducible polynomials. In this case, the operands are small-bit residues. At the same time, the independence of calculations across the bases of the code and the lack of data exchange between the residues can serve as the basis for constructing PRNS codes capable of detecting and correcting errors that occur during calculations. The article considers the principles of constructing redundant codes of the polynomial residue number system. The results of a study of PRNS codes with minimal redundancy are presented. It is shown that these codes are only able to detect an error in a PRNS code combination. To increase the error-correction ability of the polynomial residue number system code, it is proposed to use two control bases, which make it possible to correct an error in any residue of the code combination. Therefore, the development of an algorithm for detecting and correcting errors in the code of the polynomial residue number system, which performs this procedure using modular operations that are effectively implemented in PRNS codes, is an urgent task.
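Carry-less (GF(2)) polynomial arithmetic makes the residue-wise parallelism easy to demonstrate; a small sketch with two irreducible moduli, x^2+x+1 and x^3+x+1, chosen purely for illustration:

```python
def clmul(a: int, b: int) -> int:
    """Carry-less multiply: polynomials over GF(2) encoded as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def polymod(a: int, m: int) -> int:
    """Remainder of polynomial a modulo m over GF(2)."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

M1, M2 = 0b111, 0b1011      # x^2+x+1 and x^3+x+1, irreducible over GF(2)
A, B = 0b10110, 0b1101      # two operand polynomials

# Residue-wise multiplication agrees with the residue of the full product:
for m in (M1, M2):
    direct = polymod(clmul(A, B), m)
    residue_wise = polymod(clmul(polymod(A, m), polymod(B, m)), m)
    assert direct == residue_wise
```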
11

ДАВЛЕТОВА, АЛІНА. "ПОБУДОВА КОДІВ ХЕММІНГА В СКІНЧЕННИХ ПОЛЯХ ГАЛУА". Herald of Khmelnytskyi National University. Technical sciences 333, № 2 (2024): 28–34. http://dx.doi.org/10.31891/2307-5732-2024-333-2-4.

Abstract:
Hamming codes and their modifications are key to numerous technological processes and systems in which minimizing errors is crucial for enhancing the reliability and efficiency of data transmission or storage. They allow for the detection and automatic correction of single errors in each data block. Their relative simplicity to implement at the hardware and software levels renders them useful in systems demanding high reliability. Moreover, they optimize transmission channel use due to their minimal redundancy. This work considers the task of applying the properties of modular arithmetic to develop codes that can effectively correct errors in systems of higher radix than binary. The conducted research confirms that such codes can be tailored to meet specific data transmission system requirements, showcasing adaptability. The use of Hamming codes within finite Galois fields, leveraging modular arithmetic, further enhances correction capabilities and efficiency, streamlining implementation and computation processes. Error correction codes in finite Galois fields use the same principles applied in Hamming codes but take into account the mathematical properties of these fields for effective encoding, detection, and correction of errors. The proposed error correction method, through integration with modular arithmetic, opens possibilities for optimizing encoding and decoding processes, allowing for a higher level of data transmission reliability with minimal redundancy, which was previously limited by the properties of traditional codes. The proposed solution demonstrates the potential of using modular arithmetic for error correction and can provide a basis for further research in this area, opening new possibilities for improving information processing and transmission technologies.
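For reference, the binary special case these non-binary constructions generalize: a plain Hamming(7,4) encoder and single-error corrector (binary only; the paper's codes work over larger Galois fields):

```python
def encode74(d1, d2, d3, d4):
    """Hamming(7,4): parity bits at positions 1, 2, 4 (1-indexed)."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def correct74(c):
    """Syndrome is the 1-indexed position of a single flipped bit (0 = no error)."""
    s = (c[0] ^ c[2] ^ c[4] ^ c[6]) \
      | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1 \
      | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2
    if s:
        c[s - 1] ^= 1
    return c

cw = encode74(1, 0, 1, 1)
cw[5] ^= 1                       # inject a single-bit error
assert correct74(cw) == encode74(1, 0, 1, 1)
```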
12

Chen, Yan, Wen Zhuo Chen, Ken Chen, Jun Yi Shao, and Wei Ming Zhang. "Mechanism Configuration of Super-Redundant Robot." Advanced Materials Research 744 (August 2013): 68–73. http://dx.doi.org/10.4028/www.scientific.net/amr.744.68.

Abstract:
Structural synthesis of a super-redundant mechanism is key to the design of a painting robot for S-shaped ducts. Based on the principles of modular structure design, the painting robot for S-shaped ducts falls naturally into three separate control and design components: a moving platform, a locating mechanism, and a painting manipulator. The moving platform consists of a robot body, wheels, gasbags for position adjustment, and screw jacks. The 3PR-type serial mechanism is adopted for locating. The super-redundant 3P7R-type serial mechanism is used for the painting manipulator. The restriction on the distance between the connecting rods and the axis of the duct is employed to control self-motion of the painting manipulator for joint trajectory planning. Experimental results show that the minimal distance between the robot joints and the duct interior is 18 mm and that the average dynamic accuracy is ±6.8 mm, which satisfies the working requirements.
13

Selianinau, Mikhail. "Computationally Efficient Approach to Implementation of the Chinese Remainder Theorem Algorithm in Minimally Redundant Residue Number System." Theory of Computing Systems 65, no. 7 (2021): 1117–40. http://dx.doi.org/10.1007/s00224-021-10035-y.

Abstract:
In this paper, we deal with the critical problem of performing non-modular operations in the Residue Number System (RNS). The Chinese Remainder Theorem (CRT) is widely used in many modern computer applications. Throughout the article, an efficient approach for implementing the CRT algorithm is described. The structure of the rank of an RNS number, a principal positional characteristic of the residue code, is investigated. It is shown that the rank of a number can be represented by a sum of an inexact rank and a two-valued correction to it. We propose a new variant of minimally redundant RNS, which provides low computational complexity for the rank calculation, and its effectiveness is analyzed against conventional non-redundant RNS. Owing to the extension of the residue code by adding the excess residue modulo 2, the complexity of the rank calculation goes down from O(k²) to O(k) with respect to the required modular addition operations and lookup tables, where k equals the number of non-redundant RNS moduli.
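The baseline CRT reconstruction whose positional conversion the paper accelerates can be sketched directly (small pairwise-coprime integer moduli for illustration; the paper's minimally redundant RNS adds an excess residue modulo 2 to cheapen the rank computation):

```python
from math import prod

def crt(residues, moduli):
    """Reconstruct x (mod prod(moduli)) from its residues; moduli pairwise coprime."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse (Python 3.8+)
    return x % M

assert crt([2, 3, 2], [3, 5, 7]) == 23   # x = 2 mod 3, 3 mod 5, 2 mod 7
```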
14

Brima, Yusuf, Ulf Krumnack, Simone Pika, and Gunther Heidemann. "Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction." Information 15, no. 2 (2024): 114. http://dx.doi.org/10.3390/info15020114.

Abstract:
Self-supervised learning (SSL) has emerged as a promising paradigm for learning flexible speech representations from unlabeled data. By designing pretext tasks that exploit statistical regularities, SSL models can capture useful representations that are transferable to downstream tasks. Barlow Twins (BTs) is an SSL technique inspired by theories of redundancy reduction in human perception. In downstream tasks, BT representations accelerate learning and transfer across applications. This study applies BTs to speech data and evaluates the obtained representations on several downstream tasks, showing the applicability of the approach. However, limitations exist in disentangling key explanatory factors, with redundancy reduction and invariance alone being insufficient for factorizing the learned latents into modular, compact, and informative codes. Our ablation study isolated gains from invariance constraints, but the gains were context-dependent. Overall, this work substantiates the potential of Barlow Twins for sample-efficient speech encoding. However, challenges remain in achieving fully hierarchical representations. The analysis methodology and insights presented in this paper pave a path for extensions incorporating further inductive priors and perceptual principles to further enhance the BT self-supervision framework.
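The redundancy-reduction objective at the heart of Barlow Twins is compact enough to sketch in numpy: an invariance term pulls the diagonal of the embedding cross-correlation matrix toward 1, while the off-diagonal term decorrelates (de-redundantizes) the dimensions. The weight lam is the usual trade-off hyperparameter (value here illustrative):

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """z1, z2: (N, D) embeddings of two augmented views of the same batch."""
    z1 = (z1 - z1.mean(0)) / z1.std(0)   # standardize each dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    n = z1.shape[0]
    c = z1.T @ z2 / n                    # (D, D) cross-correlation matrix
    invariance = ((np.diag(c) - 1) ** 2).sum()
    redundancy = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return invariance + lam * redundancy
```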
15

Zatsarinny, A. A., Yu A. Stepchenkov, Yu G. Diachenko, and Yu V. Rogdestvenski. "Failure-tolerant synchronous and self-timed circuits comparison." Izvestiya Vysshikh Uchebnykh Zavedenii. Materialy Elektronnoi Tekhniki = Materials of Electronics Engineering 24, no. 4 (2022): 229–33. http://dx.doi.org/10.17073/1609-3577-2021-4-229-233.

Abstract:
The article considers the problem of developing synchronous and self-timed (ST) digital circuits that are tolerant to soft errors. Synchronous circuits traditionally use the 2-of-3 voting principle to tolerate a single failure, at three times the hardware cost. In ST circuits, due to dual-rail signal coding and two-phase control, even duplication provides a soft-error tolerance level 2.1 to 3.5 times higher than that of the triple-modular-redundant synchronous counterpart. The development of new high-precision software simulating microelectronic failure mechanisms will provide more accurate estimates of electronic circuits' failure tolerance.
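For context on the 2-of-3 baseline being compared against: with an ideal voter, triplicating a module of reliability R gives R_TMR = 3R² − 2R³, an improvement only while R > 0.5 and always at triple the hardware. A quick numerical check:

```python
# Ideal-voter TMR reliability versus a single (simplex) module.
for R in (0.999, 0.99, 0.9, 0.6, 0.4):
    R_tmr = 3 * R**2 - 2 * R**3
    print(f"R = {R:0.3f}  ->  R_TMR = {R_tmr:0.6f}")
# Above R = 0.5 TMR wins (e.g. 0.99 -> 0.999702); below it, TMR is worse.
```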
16

Anania, Chiara, Rafael D. Acemel, Johanna Jedamzick, et al. "In vivo dissection of a clustered-CTCF domain boundary reveals developmental principles of regulatory insulation." Nature Genetics 54, no. 7 (2022): 1026–36. http://dx.doi.org/10.1038/s41588-022-01117-9.

Abstract:
Vertebrate genomes organize into topologically associating domains, delimited by boundaries that insulate regulatory elements from nontarget genes. However, how boundary function is established is not well understood. Here, we combine genome-wide analyses and transgenic mouse assays to dissect the regulatory logic of clustered-CCCTC-binding factor (CTCF) boundaries in vivo, interrogating their function at multiple levels: chromatin interactions, transcription and phenotypes. Individual CTCF binding site (CBS) deletions revealed that the characteristics of specific sites can outweigh other factors such as CBS number and orientation. Combined deletions demonstrated that CBSs cooperate redundantly and provide boundary robustness. We show that divergent CBS signatures are not strictly required for effective insulation and that chromatin loops formed by nonconvergently oriented sites could be mediated by a loop interference mechanism. Further, we observe that insulation strength constitutes a quantitative modulator of gene expression and phenotypes. Our results highlight the modular nature of boundaries and their control over developmental processes.
17

Xing, Jinquan. "Advanced analysis and construction techniques for long-span spatial structures in steel engineering." Applied and Computational Engineering 66, no. 1 (2024): 172–77. http://dx.doi.org/10.54254/2755-2721/66/20240944.

Abstract:
This paper explores advanced analysis methods and construction techniques for long-span spatial structures in steel engineering. It delves into the principles of the Direct Analysis Method (DAM), emphasizing equilibrium, compatibility, plasticity, and stability, as well as load path and redundancy considerations. The DAM provides a robust framework for analyzing complex steel structures, ensuring stability, resilience, and efficiency. Furthermore, the paper discusses modeling techniques, including Finite Element Analysis (FEA), dynamic analysis, and nonlinear analysis, highlighting their significance in optimizing design solutions and predicting structural behavior under diverse loading conditions. Practical considerations in material selection, connection design, and construction are addressed, focusing on enhancing performance, durability, and constructability. Prefabrication and modular construction techniques are explored as effective strategies for accelerating project schedules, improving quality control, and enhancing site safety. Through a comprehensive review of literature and case studies, this paper provides valuable insights for engineers and researchers involved in the design and construction of long-span spatial structures in steel engineering.
18

Menna, F., A. Torresani, R. Battisti, E. Nocerino, and F. Remondino. "A MODULAR AND LOW-COST PORTABLE VSLAM SYSTEM FOR REAL-TIME 3D MAPPING: FROM INDOOR AND OUTDOOR SPACES TO UNDERWATER ENVIRONMENTS." International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLVIII-2/W1-2022 (December 8, 2022): 153–62. http://dx.doi.org/10.5194/isprs-archives-xlviii-2-w1-2022-153-2022.

Abstract:
The bond between surveying and computer vision and robotics is revolutionizing traditional surveying approaches. Algorithms such as visual odometry and SLAM are embedded in surveying systems to make on-site and processing operations more efficient, both in terms of time and the quality of the achieved results. In this paper, we present the latest developments of GuPho, a mobile mapping concept based on photogrammetry that leverages a vSLAM solution to provide innovative and unique features supporting image acquisition and optimising the processing steps. These include visual feedback on ground sample distance and on the maximum allowed speed to avoid motion blur. Two efficient image acquisition strategies, based on geometric principles, are implemented to optimise disk storage and avoid unnecessary redundancy. Moreover, an innovative automatic exposure control that adjusts the shutter speed or gain based on the tracked object in 3D is part of the system. The paper reports the motivations behind the design choices, details the hardware and software components, and discusses several case studies to showcase the potential of our low-cost, lightweight, and portable modular prototype system.
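The two acquisition feedbacks mentioned (ground sample distance and maximum speed before motion blur) follow from standard photogrammetric relations; a sketch with illustrative camera parameters, not GuPho's actual configuration:

```python
def gsd(pixel_pitch_m, distance_m, focal_length_m):
    """Ground sample distance: the object-space footprint of one pixel."""
    return pixel_pitch_m * distance_m / focal_length_m

def max_speed(gsd_m, exposure_s, max_blur_px=0.5):
    """Highest camera speed that keeps motion blur below max_blur_px pixels."""
    return max_blur_px * gsd_m / exposure_s

g = gsd(3.45e-6, 2.0, 8e-3)    # ~0.86 mm/px at 2 m range with an 8 mm lens
print(f"GSD = {g * 1e3:.2f} mm, v_max = {max_speed(g, 1 / 500):.2f} m/s")
```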
19

Spies, Simon, Niklas Mück, Haoyi Zeng, et al. "Destabilizing Iris." Proceedings of the ACM on Programming Languages 9, PLDI (2025): 848–73. https://doi.org/10.1145/3729284.

Abstract:
The separation logic framework Iris has been built on the premise that all assertions are stable, meaning they unconditionally enjoy the famous frame rule. This gives Iris—and the numerous program logics that build on it—very modular reasoning principles. But stability also comes at a cost. It excludes a core feature of the Viper verifier family, heap-dependent expression assertions, which lift program expressions to the assertion level in order to reduce redundancy between code and specifications and better facilitate SMT-based automation. In this paper, we bring heap-dependent expression assertions to Iris with Daenerys. To do so, we must first revisit the very core of Iris, extending it with a new form of unstable resources (and adapting the frame rule accordingly). On top, we then build a program logic with heap-dependent expression assertions and lay the foundations for connecting Iris to SMT solvers. We apply Daenerys to several case studies, including some that go beyond what Viper and Iris can do individually and others that benefit from the connection to SMT.
20

Ji, Yirun, Qian Yuan, Chengjie Zhou, et al. "Analysis of the Impact of Short Circuit Faults in Converter Valve Submodules on Valve Power Transmission." Energies 18, no. 6 (2025): 1496. https://doi.org/10.3390/en18061496.

Abstract:
Faults of a Modular Multilevel Converter (MMC)-type converter valve significantly impact the reliability of flexible DC transmission systems. This paper analyzed the impact of ongoing short-circuit faults in submodules on the power transmission of an MMC-type converter valve whose redundant submodules have been depleted. First, the MMC's working principle and the possible operational states of its submodules were investigated. Then, fault mechanisms for intra-submodule Insulated-Gate Bipolar Transistor (IGBT) short circuits and inter-submodule short circuits were modeled to infer changes in power transmission during submodule faults. To quantify the impact of submodule faults on the energy transfer efficiency of the converter valve, an energy transfer efficiency index was proposed to obtain analytical expressions for the energy transfer efficiency in the case of intra-submodule and inter-submodule short-circuit faults. Finally, the effectiveness of the proposed analytical model was verified through Simulink simulations. Simulation results indicate that ongoing intra-submodule and inter-submodule short circuits increase the input power of the converter valve, reducing energy transfer efficiency. Moreover, the energy transfer efficiency continues to decline as the number of faulty submodules increases.
21

GOSWAMI, SANJAY, and PARTHA BHATTACHARYA. "A SCALABLE NEURAL-NETWORK MODULAR-ARRAY ARCHITECTURE FOR REAL-TIME MULTI-PARAMETER DAMAGE DETECTION IN PLATE STRUCTURES USING SINGLE SENSOR OUTPUT." International Journal of Computational Intelligence and Applications 11, no. 04 (2012): 1250024. http://dx.doi.org/10.1142/s1469026812500241.

Abstract:
A scalable modular neural network array architecture is proposed for real-time damage detection in plate-like structures for structural health monitoring applications. Damages in a plate-like structure are simulated using the finite element method. Various damage states are numerically simulated by varying the Young's modulus of the material at various locations of the structure. Transient vibratory loads are applied at one end of the beam and picked up at the other end by point sensors. The vibration signals thus obtained are then filtered and subjected to wavelet transform (WT)-based multi-resolution analysis (MRA) to extract features and identify them. The redundant features are removed and only the principal features are retained using principal component analysis (PCA). A large database of principal features (the feature base) corresponding to different damage scenarios is created. This feature base is used to train individual multi-layer perceptron (MLP) networks to identify different parameters of the damage, such as location and extent (Young's modulus). Individually trained MLP units are then organized and connected in parallel so that different damage parameters can be identified almost simultaneously when fed with new signal feature vectors. For a given case, the damage classification success rate has been found to be encouraging. The main feature of this implementation is that it is scalable: any number of trained MLP units capable of identifying a certain damage parameter can be integrated into the architecture, and theoretically it will take almost the same time to identify various damage parameters irrespective of their number.
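A condensed sketch of the described pipeline (wavelet MRA features, then PCA, then an MLP) on synthetic signals; the wavelet choice, sub-band energies as features, and network size are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def mra_features(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band as a compact MRA feature vector."""
    return np.array([np.sum(c**2) for c in pywt.wavedec(signal, wavelet, level=level)])

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 256))          # synthetic stand-in for sensor signals
y = rng.integers(0, 4, size=200)             # synthetic damage-location labels

X = np.vstack([mra_features(s) for s in X_raw])
X = PCA(n_components=0.95).fit_transform(X)  # retain only the principal features
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
```

In the paper's architecture, one such trained MLP unit exists per damage parameter, and the units run in parallel on the same feature vector.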
22

Bai, Qingbo, Xu Li, Zhenze Ma, Xiaokang Li, and Long Liu. "A novel digital design method for railway subgrade sections." Railway Sciences 4, no. 3 (2025): 375–87. https://doi.org/10.1108/rs-11-2024-0046.

Abstract:
Purpose: Conventional high-speed railway (HSR) subgrade design methods remain constrained by platform-dependent drafting systems, leading to data-interaction hindrances and redundant design processes. This study strives to develop a digital earthwork design methodology that enhances design while reducing collaborative expenses. Design/methodology/approach: A novel digital subgrade design approach, utilizing sophisticated analysis and modeling tools customized for different subgrade elements, is put forward in this study. The methodology incorporates the following essential steps: (1) the advancement of digital analysis and modeling techniques for diverse subgrade components, including surfaces, filling, slopes, retaining structures, and foundation treatments; (2) the formulation of a digital design principle repository incorporating various slope protection combinations; (3) the establishment of a comprehensive digital design framework and process for subgrade cross-sections; and (4) the development and implementation of an open-source digital design system. Findings: The proposed method liberates subgrade design from the constraints of conventional drawing platforms, elevating efficiency, intelligence, and flexibility. The open software architecture and code have achieved over 60% efficiency gains in design workflows during deployment on three major high-speed rail projects: the Baotou-Yinchuan HSR corridor, the Shenyang-Baihe HSR network, and the Weifang-Yantai HSR system. Originality/value: This paper introduces an innovative digital design methodology that enables modular and parametric design for railway subgrade sections. The proposed approach provides a digital base for the intelligent design and maintenance of next-generation high-speed railways.
23

Tao, Zhicheng, Shineng Sheng, Zhipei Chen, and Guanjun Bao. "Novel design method for multi-configuration dexterous hand based on a gesture primitives analysis." Industrial Robot: the international journal of robotics research and application 48, no. 3 (2021): 463–72. http://dx.doi.org/10.1108/ir-09-2020-0211.

Abstract:
Purpose: This paper aims to propose a novel method, based on a gesture-primitives analysis of human daily grasping tasks, for designing dexterous hands with various grasping and in-hand manipulation abilities, which simplifies the complex and redundant humanoid five-finger hand system. Design/methodology/approach: First, the authors developed the finger and joint configuration from a series of gesture-primitive configurations and a modular virtual finger scheme, refined from a daily work gesture library by principal component analysis. Then, the authors optimized the joint degree-of-freedom configuration through a bionic design analysis based on anatomy and optimized the dexterity workspace. Furthermore, the adaptive fingertip and routing structure were designed based on dexterous manipulation theory. Finally, the effectiveness of the design method was experimentally validated. Findings: A novel lightweight three-finger, nine-degree-of-freedom dexterous hand with force/position perception was designed. The proposed routing structure was shown to be capable of mapping the relationship between the joint space and the actuator space. The adaptive fingertip with an embedded force sensor can effectively increase the robustness of grasping operations. Moreover, the dexterous hand can grasp various objects in different configurations and perform in-hand manipulation dexterously. Originality/value: The dexterous hand design developed in this study is less complex and performs better in dexterous manipulation than previous designs.
24

Yuriy, Polissky. "Study of the division operation by two in the remainder class system with all paired modules." System technologies 6, no. 155 (2025): 218–22. https://doi.org/10.34185/1562-9945-6-155-2024-21.

Abstract:
The development of modern technology and of information and control systems requires the development of new principles focused on the representation of numbers in the system of residual classes. The traditional residue class system is a system in which an arbitrary number is represented as a set of smallest non-negative remainders with respect to the moduli. Moreover, if the moduli are pairwise coprime, then only one number in the range corresponds to this representation. At the same time, the implementation of new trends in the system of residual classes requires, along with systems of pairwise coprime moduli, the use of systems with non-coprime moduli, in particular with all even moduli. Such a system of all even moduli, each of which is not a factor of any other modulus of the system, is built on the basis of a system of pairwise coprime moduli (the basis) by multiplying each basis modulus by an even number, the transition coefficient. One of the complex operations in such a system is dividing a number by two. The proposed approach to solving this problem is as follows. The remainders are divided by two with respect to the moduli of the system. A modular equation is compiled whose results determine two remainder values for each modulus, located in different number intervals and having opposite parities. Since in an even system of moduli all remainders are either even or odd, the set of all even remainders and the set of all odd remainders are formed. Since division by two transfers numbers to the lower half of the number range, the smaller of these sets is selected. The proposed approach provides the desired result, and it seems appropriate to apply it as a promising direction for studying complex operations in a system of residual classes with all even moduli.
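The two-candidate structure described here is easy to verify numerically: for an even modulus m and an even number x with residue r = x mod m, the residue of x/2 is either r/2 or r/2 + m/2, and the two candidates have opposite parity whenever m/2 is odd. A minimal check of that property, not of the paper's full selection procedure:

```python
import random

for _ in range(10_000):
    m = 2 * random.randrange(1, 50)       # an even modulus
    x = 2 * random.randrange(0, 10_000)   # an even number
    r = x % m                             # r is necessarily even here
    assert (x // 2) % m in {r // 2, r // 2 + m // 2}
```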
25

Stovpnyk, O. V. "Building integrated management systems: achieving synergy and sustainability in modern organizations." Вісник Східноукраїнського національного університету імені Володимира Даля, no. 4 (284) (June 15, 2024): 89–95. http://dx.doi.org/10.33216/1998-7927-2024-284-4-89-95.

Abstract:
In today's competitive global marketplace, the quest for efficiency, sustainability, and quality is at the forefront of organizational management. The implementation of international management standards—such as ISO 9001 for quality, ISO 14001 for environmental management, and ISO 45001 for occupational health and safety—has become widespread across industries. However, these systems are often implemented separately, which can lead to inefficiencies and resource redundancy. An Integrated Management System (IMS) offers a strategic solution by consolidating these disparate systems into a unified framework, providing organizations with a comprehensive, cohesive approach to management. This paper explores the evolution of management systems and the growing need for integration in response to globalization, increased competition, and stakeholder demands. By combining the principles of quality management, environmental sustainability, social responsibility, and occupational safety into a single system, organizations can streamline their operations, improve resource allocation, and enhance overall performance. This approach allows organizations to not only meet regulatory requirements but also achieve a synergistic effect, where the whole system performs better than the sum of its individual parts. Two primary models for integrating management systems are discussed: the additive model and the comprehensive integrated model. The additive model involves the step-by-step incorporation of additional systems into an existing framework, typically beginning with ISO 9001 for quality management and gradually adding systems like ISO 14001 and ISO 45001. This modular approach allows for flexibility and is easier to implement for organizations with existing quality management systems. However, the additive model can sometimes result in systems that operate in parallel rather than truly integrated. The comprehensive integrated model, on the other hand, is designed from the outset as a unified system. This approach ensures that all components are interrelated and work together seamlessly, minimizing duplication of processes and promoting better alignment of organizational goals. The integrated model fosters greater synergy and reduces the administrative burden associated with managing multiple systems independently. The paper also highlights the importance of adopting a structured approach to integration based on the Deming PDCA (Plan-Do-Check-Act) cycle, which is central to ISO management standards. The PDCA cycle provides a framework for continuous improvement, enabling organizations to adapt and refine their management systems over time. Through effective leadership and stakeholder engagement, organizations can achieve not only compliance with international standards but also significant improvements in operational efficiency and long-term sustainability. The benefits of integrated management systems are numerous: they reduce duplication, optimize resource use, and improve communication and coordination across departments. Moreover, an IMS enables organizations to respond more effectively to external challenges, such as changing regulations, customer demands, and environmental concerns. The paper concludes by advocating for the widespread adoption of integrated management systems, emphasizing that their success depends on both the structural framework and the active involvement of management and staff. 
As organizations continue to face complex, interconnected challenges, the integration of management systems is a crucial strategy for enhancing competitiveness, sustainability, and long-term organizational resilience. The future of organizational success lies in the ability to harmonize different management approaches into a single, efficient system that supports continuous improvement and adaptability in an ever-changing global environment.
26

Bach, Felix, Kerstin Soltau, Sandra Göller, and Minella Christian Bonatto. "Current Developments in the Research Data Repository RADAR." Research Ideas and Outcomes 8 (October 12, 2022): e96005. https://doi.org/10.3897/rio.8.e96005.

Abstract:
RADAR is a cross-disciplinary internet-based service for long-term and format-independent archiving and publishing of digital research data from scientific studies and projects. The focus is on data from disciplines that are not yet supported by specific research data management infrastructures. The repository aims to ensure access and long-term availability of deposited datasets according to the FAIR criteria (Wilkinson et al. 2016) for the benefit of the scientific community. Published datasets are retained for at least 25 years; for archived datasets, the retention period can be flexibly selected up to 15 years. The RADAR Cloud service was developed as a cooperation project funded by the DFG (2013-2016) and started operations in 2017. It is operated by FIZ Karlsruhe - Leibniz-Institute for Information Infrastructure. As a distributed, multilayer application, RADAR is structured into a multitude of services and interfaces. The system architecture is modular and consists of a user interface (frontend), management layer (backend) and storage layer (archive), which communicate with each other via application programming interfaces (API). This open structure and the access to the APIs from outside allow integrating RADAR into existing systems and work processes, e.g. for automated upload of metadata from other applications using the RADAR API. RADAR's storage layer is encapsulated via the Data Center API. This approach guarantees independence from a specific storage technology and makes it possible to integrate alternative archives for the bitstream preservation of the research data. The data transfer to RADAR takes place in two steps: In the first step, the data is transferred to a temporary work storage. The ingest service accepts individual files and packed archives, optionally unpacks them while retaining the original directory structure, and creates a dataset. For each file found, the MIME type (see the Multipurpose Internet Mail Extensions specification) is analysed and stored in the technical metadata. When archiving and publishing, a dataset is created in the second step. The structure of this dataset - the AIP (archival information package) in the sense of the OAIS standard - corresponds to the BagIt standard. It contains, in addition to the actual research data in original order, technical and descriptive metadata (if created) for each file or directory, as well as a manifest, within one single TAR ("tape archive", a unix archiving format and utility) file as an entity in one place. This TAR file is stored permanently on magnetic tapes, redundantly in three copies at different locations in two academic computing centres. The FAIR Principles are currently being given special importance in the research community. They define measures that ensure the optimal processing of research data, accessibility for both humans and machines, as well as reusability for further research. RADAR also promotes the implementation of the FAIR Principles with different measures and functional features, amongst others: Descriptive metadata are recorded using the internal RADAR Metadata Schema (based on DataCite Metadata Schema 4.0), which supports 10 mandatory and 13 optional metadata fields. Annotations can be made on the dataset level and on the individual files and folders level. A user licence, which rules re-use of the data, must be defined for each dataset. Each published dataset receives a DOI which is registered with DataCite. RADAR metadata uses a combination of controlled lists and free text entries.
Author identification is ensured by using an ORCID ID and funder identification by the CrossRef Open Funder Registry. More interfacing options, e.g. ROR and the Integrated Authority File (GND), are currently being implemented. Datasets can be easily linked with other digital resources (e.g. text publications) via a "related identifier". To maximise data dissemination and discoverability, the metadata of published datasets are indexed in various formats (e.g. DataCite and DublinCore) and offered for public metadata harvesting, e.g. via an OAI provider. These measures are - to our minds - undoubtedly already significant, but not yet sufficient in the medium to long term. Especially in terms of interoperability, we see development potential for RADAR. The FAIR Digital Object (FDO) Framework seems to offer a promising concept, especially to further promote data interoperability and to close respective gaps in the current infrastructure and repository landscape. RADAR aims to participate in this community-driven approach, also in its role within the National Research Data Infrastructure (NFDI). As part of the NFDI, RADAR already plays a relevant role as a generic infrastructure service in several NFDI consortia (e.g. NFDI4Culture and NFDI4Chem). With RADAR4Chem and RADAR4Culture, FIZ Karlsruhe for example offers researchers from chemistry and the cultural sciences low-threshold data publication services based on RADAR. We successively develop these services further according to the needs of the communities, e.g. by integrating and linking them with subject-specific terminologies, by providing annotation options with subject-specific metadata, or by enabling selective reading or previewing options for individual files in existing datasets. In our presentation, we would like to describe the present and future functionality of RADAR and its current level of FAIRness as possible starting points for further discussion with the FDO community with regard to the implementation of the FDO framework for our service.
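For orientation, the generic BagIt layout (RFC 8493) that RADAR's AIPs follow looks like the sketch below; the exact metadata files RADAR adds inside its bags are not spelled out here:

```
<bag>/
  bagit.txt              # bag declaration: "BagIt-Version: ..." plus encoding
  bag-info.txt           # optional key/value metadata about the bag
  manifest-sha256.txt    # one "<checksum>  <path>" line per payload file
  data/                  # the payload: research data in its original order
    ...
```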
APA, Harvard, Vancouver, ISO, and other styles
27

Westover, Jonathan. "Designing for Resilience: Principles for Building Organizational Adaptability." Human Capital Leadership Review 14, no. 1 (2024). http://dx.doi.org/10.70175/hclreview.2020.14.1.8.

Full text
Abstract:
Organizational disruptions from events like the COVID-19 pandemic have demonstrated the importance of resilience - an organization's ability to anticipate risks, maintain core functions during crises, and adapt successfully. This article outlines eight design principles that research indicates can help build resilience into an organization's structures, systems, culture and operations. Drawing from literature in fields like organizational development, crisis leadership and strategic management, the principles focus on distributing leadership and information sharing; promoting flexible, modular designs; ensuring redundancy of critical resources; cultivating a learning culture; maintaining flexible funding and resources; conducting scenario planning and training; fostering strategic partnerships; and taking an adaptive approach to goals. The principles are grounded in academic research but presented through the lens of a practitioner's consulting experience. The article also discusses practical strategies for applying the resilience design framework to assess risks, strengthen crisis response capabilities, and nurture continuous organizational learning and adaptation.
APA, Harvard, Vancouver, ISO, and other styles
28

Pirro, Nicholas. "The Synergistic Organizational Resilience and Evolution (SORE) Theory." April 2, 2025. https://doi.org/10.5281/zenodo.15460186.

Full text
Abstract:
The Synergistic Organizational Resilience and Evolution (SORE) Theory proposes that businesses achieve long-term success not merely through efficiency, competition, or market positioning but through the ability to synergize with their environment, build resilience through redundancy, and evolve by leveraging crisis-driven innovation. Unlike traditional growth models that emphasize market expansion or operational efficiency, SORE Theory underscores proactive adaptability, layered redundancy, and evolution through adversity as critical elements for sustainable business practices. As global markets become increasingly volatile, businesses must shift from a linear, efficiency-driven mindset to an adaptive, networked approach. SORE Theory highlights the importance of resilience by advocating for built-in redundancy, modular strategies, and continuous learning mechanisms. These elements allow companies to absorb shocks, navigate crises, and leverage disruptions as opportunities for transformation. This paper explores the core principles of SORE Theory, comparing it to existing business models and highlighting its advantages in an era of increasing global uncertainties. It integrates insights from complexity science, resilience theory, and evolutionary economics to demonstrate how businesses that embrace systemic flexibility and redundancy outperform those that rely on linear strategic planning. Furthermore, real-world case studies illustrate how companies that foster resilience and adaptability navigate crises more effectively than their rigid counterparts. Companies like Tesla, Amazon, and Google have demonstrated the power of layered redundancy and crisis-driven innovation, providing concrete examples of SORE Theory in practice.
APA, Harvard, Vancouver, ISO, and other styles
29

Zhao, Yongjie, Xiaogang Song, Xingwei Zhang, and Xinjian Lu. "A Hyper-redundant Elephant’s Trunk Robot with an Open Structure: Design, Kinematics, Control and Prototype." Chinese Journal of Mechanical Engineering 33, no. 1 (2020). http://dx.doi.org/10.1186/s10033-020-00509-4.

Full text
Abstract:
Traditional robots cannot accomplish complex operational tasks, or meet the required dexterity demands, in unstructured environments with narrow workspaces and numerous obstacles. Hyper-redundant bionic robots can complete such tasks by imitating the motion characteristics of elephant trunks and octopus tentacles; their flexible structure gives them a dexterity that traditional robots lack. A hyper-redundant elephant's trunk robot (HRETR) with an open structure is developed in this paper. The work covers mechanical structure design, kinematic analysis, virtual prototype simulation, control system design, and prototype building. The design is inspired by the flexible motion of an elephant's trunk; it is expandable and composed of six 3UPS-PS parallel unit modules connected in series. First, the mechanical design of the HRETR is completed according to the motion characteristics of an elephant's trunk and the principles of bionic mechanical design. After that, the backbone mode method is used to establish the kinematic model of the robot. The simulation packages SolidWorks and ADAMS are combined to analyse the kinematic characteristics when a trajectory is assigned to the end moving platform of the robot. With the help of ANSYS, the static stiffness of each component and of the whole robot is analysed. On this basis, the materials for the weak parts of the mechanical structure and the hardware are selected reasonably. Next, the extensible software and hardware control system is constructed according to modular and hierarchical design criteria. Finally, the prototype is built and its performance is tested. The proposed research provides a method for the design and development of hyper-redundant bionic robots.
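As a generic illustration of the serially stacked modular architecture the abstract describes, the following Python sketch composes per-module homogeneous transforms into an end pose for a chain of six unit modules. The toy transform model is an assumption for illustration; it does not reproduce the paper's backbone mode method or the 3UPS-PS kinematics.

```python
# Generic sketch: end pose of a serially stacked modular robot obtained by
# composing per-module homogeneous transforms. Six modules echo the HRETR's
# six units, but the simple rotate-then-translate model below is a toy
# assumption, not the paper's backbone mode method.
import numpy as np

def module_transform(theta: float, d: float) -> np.ndarray:
    """Rotation about z by theta followed by translation d along z (toy model)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0, 0.0],
                     [s,   c,  0.0, 0.0],
                     [0.0, 0.0, 1.0, d],
                     [0.0, 0.0, 0.0, 1.0]])

def end_pose(params) -> np.ndarray:
    """Compose the chain base-to-tip; params is a list of (theta, d) per module."""
    T = np.eye(4)
    for theta, d in params:
        T = T @ module_transform(theta, d)
    return T

# Six identical unit modules, each rotated 0.1 rad and extended 0.05 m.
print(end_pose([(0.1, 0.05)] * 6))
```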
APA, Harvard, Vancouver, ISO, and other styles
30

Min, Chulhong, Akhil Mathur, Utku Günay Acer, Alessandro Montanari, and Fahim Kawsar. "SensiX++: Bringing MLOps and Multi-tenant Model Serving to Sensory Edge Devices." ACM Transactions on Embedded Computing Systems, September 7, 2023. http://dx.doi.org/10.1145/3617507.

Full text
Abstract:
We present SensiX++ - a multi-tenant runtime for adaptive model execution with integrated MLOps on edge devices, e.g., a camera, a microphone, or IoT sensors. SensiX++ operates on two fundamental principles - highly modular componentisation to externalise data operations with clear abstractions and document-centric manifestation for system-wide orchestration. First, a data coordinator manages the lifecycle of sensors and serves models with correct data through automated transformations. Next, a resource-aware model server executes multiple models in isolation through model abstraction, pipeline automation and feature sharing. An adaptive scheduler then orchestrates the best-effort executions of multiple models across heterogeneous accelerators, balancing latency and throughput. Finally, microservices with REST APIs serve synthesised model predictions, system statistics, and continuous deployment. Collectively, these components enable SensiX++ to serve multiple models efficiently with fine-grained control on edge devices while minimising data operation redundancy, managing data and device heterogeneity, and reducing resource contention. We benchmark SensiX++ with ten different vision and acoustics models across various multi-tenant configurations on different edge accelerators (Jetson AGX and Coral TPU) designed for sensory devices. We report on the overall throughput and quantified benefits of various automation components of SensiX++ and demonstrate its efficacy in significantly reducing operational complexity and lowering the effort to deploy, upgrade, reconfigure and serve embedded models on edge devices.
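As a loose illustration of the document-centric manifestation and best-effort scheduling the abstract describes, the following Python sketch derives model placements from a declarative manifest. The manifest format, all field names and the fallback rule are assumptions for illustration, not SensiX++'s actual interfaces.

```python
# Illustrative sketch in the SensiX++ spirit: a declarative manifest
# describes each model's sensor, rate and preferred accelerator, and a
# planner assigns best-effort placements. Field names and the CPU-fallback
# rule are assumptions, not the system's real manifest format.
from dataclasses import dataclass

MANIFEST = [
    {"model": "object_detector", "sensor": "camera",     "rate_hz": 15, "accelerator": "gpu"},
    {"model": "keyword_spotter", "sensor": "microphone", "rate_hz": 50, "accelerator": "tpu"},
]

@dataclass
class Deployment:
    model: str
    sensor: str
    rate_hz: int
    accelerator: str

def plan(manifest, free_accelerators):
    """Best-effort placement: honour the requested accelerator when free,
    otherwise fall back to CPU so every model still runs."""
    deployments = []
    for entry in manifest:
        acc = entry["accelerator"] if entry["accelerator"] in free_accelerators else "cpu"
        deployments.append(Deployment(entry["model"], entry["sensor"],
                                      entry["rate_hz"], acc))
    return deployments

# Only the GPU is free here, so the keyword spotter falls back to CPU.
for d in plan(MANIFEST, {"gpu"}):
    print(d)
```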
APA, Harvard, Vancouver, ISO, and other styles
31

Bach, Felix, Kerstin Soltau, Sandra Göller, and Christian Bonatto Minella. "Current Developments in the Research Data Repository RADAR." Research Ideas and Outcomes 8 (October 12, 2022). http://dx.doi.org/10.3897/rio.8.e96005.

Full text
Abstract:
RADAR is a cross-disciplinary internet-based service for long-term and format-independent archiving and publishing of digital research data from scientific studies and projects. The focus is on data from disciplines that are not yet supported by specific research data management infrastructures. The repository aims to ensure access and long-term availability of deposited datasets according to FAIR criteria (Wilkinson et al. 2016) for the benefit of the scientific community. Published datasets are retained for at least 25 years; for archived datasets, the retention period can be flexibly selected up to 15 years. The RADAR Cloud service was developed as a cooperation project funded by the DFG (2013-2016) and started operations in 2017. It is operated by FIZ Karlsruhe - Leibniz-Institute for Information Infrastructure. As a distributed, multilayer application, RADAR is structured into a multitude of services and interfaces. The system architecture is modular and consists of a user interface (frontend), management layer (backend) and storage layer (archive), which communicate with each other via application programming interfaces (APIs). This open structure and external access to the APIs allow RADAR to be integrated into existing systems and work processes, e.g. for automated upload of metadata from other applications using the RADAR API. RADAR's storage layer is encapsulated via the Data Center API. This approach guarantees independence from a specific storage technology and makes it possible to integrate alternative archives for the bitstream preservation of the research data. The data transfer to RADAR takes place in two steps: In the first step, the data is transferred to a temporary work storage. The ingest service accepts individual files and packed archives, optionally unpacks them while retaining the original directory structure, and creates a dataset. For each file found, the MIME type (see the Multipurpose Internet Mail Extensions specification) is analysed and stored in the technical metadata. When archiving and publishing, a dataset is created in the second step. The structure of this dataset - the AIP (archival information package) in the sense of the OAIS standard - corresponds to the BagIt standard. It contains, in addition to the actual research data in original order, technical and descriptive metadata (if created) for each file or directory as well as a manifest within one single TAR ("tape archive", a unix archiving format and utility) file as an entity in one place. This TAR file is stored permanently on magnetic tapes, redundantly in three copies at different locations in two academic computing centres. The FAIR Principles are currently being given special importance in the research community. They define measures that ensure the optimal processing of research data, accessibility for both humans and machines, as well as reusability for further research. RADAR also promotes the implementation of the FAIR Principles with different measures and functional features, amongst others: Descriptive metadata are recorded using the internal RADAR Metadata Schema (based on DataCite Metadata Schema 4.0), which supports 10 mandatory and 13 optional metadata fields. Annotations can be made on the dataset level and on the individual files and folders level. A user licence, which governs re-use of the data, must be defined for each dataset. Each published dataset receives a DOI which is registered with DataCite. RADAR metadata uses a combination of controlled lists and free text entries.
Author identification is ensured by using an ORCID ID and funder identification by the CrossRef Open Funder Registry. More interfacing options, e.g. ROR and the Integrated Authority File (GND), are currently being implemented. Datasets can easily be linked with other digital resources (e.g. text publications) via a "related identifier". To maximise data dissemination and discoverability, the metadata of published datasets are indexed in various formats (e.g. DataCite and Dublin Core) and offered for public metadata harvesting, e.g. via an OAI provider. These measures are - to our minds - undoubtedly already significant, but not yet sufficient in the medium to long term. Especially in terms of interoperability, we see development potential for RADAR. The FAIR Digital Object (FDO) Framework seems to offer a promising concept, especially to further promote data interoperability and to close respective gaps in the current infrastructure and repository landscape. RADAR aims to participate in this community-driven approach also in its role within the National Research Data Infrastructure (NFDI). As part of the NFDI, RADAR already plays a relevant role as a generic infrastructure service in several NFDI consortia (e.g. NFDI4Culture and NFDI4Chem). With RADAR4Chem and RADAR4Culture, FIZ Karlsruhe for example offers researchers from chemistry and the cultural sciences low-threshold data publication services based on RADAR. We successively develop these services further according to the needs of the communities, e.g. by integrating and linking them with subject-specific terminologies, by providing annotation options with subject-specific metadata or by enabling selective reading or previewing options for individual files in existing datasets. In our presentation, we would like to describe the present and future functionality of RADAR and its current level of FAIRness as possible starting points for further discussion with the FDO community with regard to the implementation of the FDO framework for our service.
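As an illustration of the public metadata harvesting the abstract mentions, the following Python sketch issues a standard OAI-PMH ListRecords request and extracts record identifiers. The endpoint URL is a placeholder assumption; the verb and metadataPrefix parameters follow the OAI-PMH specification.

```python
# Minimal sketch of harvesting published-dataset metadata from an OAI-PMH
# provider like the one the abstract mentions. The endpoint URL below is a
# placeholder, not RADAR's real address; the request parameters follow the
# OAI-PMH specification.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_ENDPOINT = "https://example.org/radar/oai"   # placeholder endpoint
OAI_NS = {"oai": "http://www.openarchives.org/OAI/2.0/"}

def list_record_identifiers(endpoint: str, prefix: str = "oai_dc"):
    """Yield the OAI identifier of each harvested record."""
    query = urllib.parse.urlencode({"verb": "ListRecords",
                                    "metadataPrefix": prefix})
    with urllib.request.urlopen(f"{endpoint}?{query}") as resp:
        tree = ET.parse(resp)
    # Each <record> carries a header (with the identifier) plus metadata.
    for record in tree.iter(f"{{{OAI_NS['oai']}}}record"):
        header = record.find("oai:header", OAI_NS)
        yield header.findtext("oai:identifier", namespaces=OAI_NS)

# for identifier in list_record_identifiers(OAI_ENDPOINT):
#     print(identifier)
```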
APA, Harvard, Vancouver, ISO, and other styles
32

Temate-Tiagueu, Yvette, Joseph Amlung, Dennis Stover, et al. "Dashboard Prototype for Improved HIV Monitoring and Reporting for Indiana." Online Journal of Public Health Informatics 11, no. 1 (2019). http://dx.doi.org/10.5210/ojphi.v11i1.9699.

Full text
Abstract:
Objective: The objective was to design and develop a dashboard prototype (DP) that integrates HIV data from disparate sources to improve monitoring and reporting of HIV care continuum metrics in Indiana. The tool aimed to support the Indiana State Department of Health (ISDH) to monitor key HIV performance indicators, more fully understand populations served, more quickly identify and respond to crucial needs, and assist in planning and decision-making.
Introduction: In 2015, ISDH responded to an HIV outbreak among persons using injection drugs in Scott County [1]. Information to manage the public health response to this event and its aftermath included data from multiple sources (e.g., HIV testing, surveillance, contact tracing, medical care, and HIV prevention activities). During the outbreak, access to timely and accurate data for program monitoring and reporting was difficult for health department staff. Each dataset was managed separately and tailored to the relevant HIV program area's needs. Our challenge was to create a platform that allowed separate systems to communicate with each other and to design a DP that offered a consolidated view of the data. ISDH initiated efforts to integrate these HIV data sources to better track HIV prevention, diagnosis, and care metrics statewide, support decision-making and policies, and facilitate a more rapid response to future HIV-related investigations. The Centers for Disease Control and Prevention (CDC), through its Info-Aid program, provided technical assistance to support ISDH's data integration process and develop a DP that could aggregate these data and improve reporting of crucial statewide metrics. After an initial assessment phase, an in-depth analysis of requirements resulted in several design principles and lessons learned that later translated into standardization of data formats and the design of the data integration process [2].
Methods: Specific design principles and prototyping methods were applied during the nine-month DP design and development process, which started in June 2017. Requirements elicitation, analysis, and validation: The elicitation and analysis of the requirements were done using a dashboard content inventory tool to gather and analyze HIV reporting needs and dashboard requirements from stakeholders. Results of this analysis allowed us to validate project goals, list required functionalities, prioritize features, and design the initial dashboard architecture. The initial scope was Scott County. Design mapping: The design mapping exercise reviewed different scenarios involving data visualization using the DP, clarified associations among data from different programs and determined how best to capture and present them in the DP. For example, we linked data in separate datasets using a unique identifier or county name. This step's output was a refined DP architecture. Parallel design: In a parallel design session, we drew dashboard mockups on paper with end users. These mockups helped illustrate how information captured during design mapping would be translated into visual design before prototype implementation. Drawings were converted to PowerPoint mockups for validation and modifications. The mockup helped testers and future users interact with and rapidly understand the DP architecture. The model can be used for designing other DPs. Integration: Data integration was conducted in SAS by merging datasets from different program areas iteratively. Next, we cleaned (e.g., deleted records missing crucial information) and validated the data.
The integration step solved certain challenges with ISDH data (e.g., linking data across systems; automating data cleaning was planned for later), increased data consistency, reduced redundancy, and resulted in a consolidated view of the data. Prototyping: After data integration, we extracted a reduced dataset to implement and test different DP features. The first prototype was in Excel. We applied a modular design that allowed frequent feedback and input from ISDH program managers. Developers of the first prototype were in two locations, but team members kept in close contact and further refined the DP through weekly communications. We expanded the DP scope from Scott County to include all counties in Indiana. Beta version: To enable advanced analysis and ease collaboration on the final tool across users, we moved to Tableau Desktop Professional version 10. All Excel screens were redeveloped and integrated into a unique dashboard for a consolidated view of ISDH programs. After beta version completion, usability tests were conducted to guide the DP production version. Technical requirements: All users were provided Tableau Reader to interact with the tool. The DP is not online, but is shared by ISDH through a protected shared drive. Provisions are made for the DP to use a relational database that will provide greater flexibility in data storage, management, and retrieval. The DP benefits from the existing security infrastructure at ISDH, which allows for safeguarding personally identifiable information, secured access, backup and restoration.
Results: System content: ISDH's data generated at the county and state level were used to assess the following domains: HIV Testing, HIV Surveillance, Contact Tracing, HIV Care Coordination, and Syringe Exchange. The DP was populated through an offline extract of the integrated datasets. This approach sped up the Tableau workbook and allowed monthly updates to the uploaded datasets. The system also included reporting features to display aggregate information for multiple population groups. Stakeholders' feedback: To improve users' experience, the development team trained stakeholders and offered them multiple opportunities to provide feedback, which was collected informally from ISDH program directors to guide DP enhancements. The initial feedback was collected through demonstrations to CDC domain experts and ISDH staff. They were led through different scenarios and provided comments on the overall design and suggestions for improvement. The goal of the demos was to assess ease of use and benefits and determine how the tool could be used to engage with stakeholders inside and outside of ISDH. DP action reporting: The DP reporting function will allow users to download spreadsheets and graphs. Some reports will be automatically generated and some will be ad hoc. All users, including the ISDH Quality Manager and grant writers, can use the tool to guide program evaluations and justifications for funding. The tool will provide a way for ISDH staff to stay current about the work of grantees, document key interactions with each community, and track related next steps. In addition, through an extract of the integrated dataset (e.g., out-of-care HIV positives), the DP could support another ISDH program area, Linkage to Care.
Conclusions: We describe the process to design and develop a DP to improve monitoring and reporting of statewide HIV-related data. The solution from this technical assistance project was a useful and innovative tool that allows for the capture of time-crucial information about populations at high risk. The system is expected to help ISDH improve HIV surveillance and prevention in Indiana. Our approach could be adapted to similar public health areas in Indiana.
References: 1. Peters PJ et al. HIV infection linked to injection use of oxymorphone in Indiana, 2014–2015. N Engl J Med. 2016;375(3):229-39. 2. Ahmed K et al. Integrating data from disparate data systems for improved HIV reporting: Lessons learned. OJPHI. 2018 May 17;10(1).
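The abstract states that the integration step was performed in SAS. As a loose analogue, the following pandas sketch shows the same pattern: merging program-area datasets on a shared identifier and deleting records missing crucial information. All dataset and column names are hypothetical.

```python
# Pandas analogue of the SAS integration step the abstract describes:
# merge program-area datasets on a unique identifier (county name would be
# the fallback key), then drop records missing crucial fields. Dataset and
# column names are hypothetical illustrations.
import pandas as pd

testing = pd.DataFrame({"person_id": [1, 2, 3],
                        "county": ["Scott", "Scott", "Scott"],
                        "test_date": ["2015-03-01", "2015-03-04", None]})
surveillance = pd.DataFrame({"person_id": [1, 2, 4],
                             "diagnosis_date": ["2015-03-10", "2015-03-12",
                                                "2015-04-01"]})

# Link the datasets across systems on the unique identifier.
merged = testing.merge(surveillance, on="person_id", how="outer")

# Basic cleaning: delete records missing crucial information.
clean = merged.dropna(subset=["test_date", "diagnosis_date"])
print(clean)
```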
APA, Harvard, Vancouver, ISO, and other styles