Journal articles on the topic "Automated data processing systems"

To view the other types of publications on this topic, follow this link: Automated data processing systems.

Browse the top 50 journal articles for research on the topic "Automated data processing systems".

Next to every entry in the bibliography you will find an "Add to bibliography" button. Click it, and a bibliographic reference for the selected work will be generated automatically in the citation style of your choice (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online annotation, provided the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Dzyubetsʹka, M. O., and P. O. Yahanov. "Data processing modules for automated systems building management". Electronics and Communications 16, no. 3 (28.03.2011): 92–100. http://dx.doi.org/10.20535/2312-1807.2011.16.3.266193.

Full text of the source
APA, Harvard, Vancouver, ISO, and other citation styles
Annotation:
New program modules of the ACS «The Intelligent Building» were developed that use regression methods, neural networks, queueing systems, linear programming, and fuzzy logic for data processing. Control panels for the operator of the ACS «The Intelligent Building» were developed using the virtual-instrument technology of the LabVIEW graphical programming environment. The developed modules are convenient to use and can be easily integrated into an already existing ACS.
2

Sarsembayev, M., M. Turdalyuly, and P. Omarova. "Cloud data-processing system for the automated generation of combustion models". International Journal of Mathematics and Physics 7, no. 1 (2016): 65–68. http://dx.doi.org/10.26577/2218-7987-2016-7-1-65-68.

3

Manjunath, Akanksh Aparna, Manjunath Sudhakar Nayak, Santhanam Nishith, Satish Nitin Pandit, Shreyas Sunkad, Pratiba Deenadhayalan, and Shobha Gangadhara. "Automated invoice data extraction using image processing". IAES International Journal of Artificial Intelligence (IJ-AI) 12, no. 2 (01.06.2023): 514. http://dx.doi.org/10.11591/ijai.v12.i2.pp514-521.

Annotation:
Manually processing invoices which are in the form of scanned photocopies is a time-consuming process. There is a need to automate the task of extraction of data from the invoices with a similar format. In this paper we investigate and analyse various techniques of image processing and text extraction to improve the results of the optical character recognition (OCR) engine, which is applied to extract the text from the invoice. This paper also proposes the design and implementation of a web enabled invoice processing system (IPS). The IPS consists of an annotation tool and an extraction tool. The annotation tool is used to mark the fields of interest in the invoice which are to be extracted. The extraction tool makes use of opensource computer vision library (OpenCV) algorithms to detect text. The proposed system was tested on more than 25 types of invoices with the average accuracy score lying between 85% and 95%. Finally, to provide ease of use, a web application is developed which also presents the results in a structured format. The entire system is designed so as to provide flexibility and automate the process of extracting details of interest from the invoices.
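The extraction step summarized above lends itself to a small illustration. The sketch below is not the authors' implementation; the field names and regular expressions are hypothetical. It shows the kind of pattern matching that turns raw OCR output into structured fields once the annotated regions have been recognized:

```python
import re

# Hypothetical post-OCR step: field names and patterns are illustrative,
# not taken from the paper's invoice processing system.
FIELD_PATTERNS = {
    "invoice_no": re.compile(r"Invoice\s*(?:No\.?|#)\s*[:\-]?\s*(\S+)", re.I),
    "date": re.compile(r"Date\s*[:\-]?\s*(\d{2}[./-]\d{2}[./-]\d{4})", re.I),
    "total": re.compile(r"Total\s*[:\-]?\s*\$?\s*([\d,]+\.\d{2})", re.I),
}

def extract_fields(ocr_text: str) -> dict:
    """Return the first match for each known field, or None if absent."""
    out = {}
    for name, pattern in FIELD_PATTERNS.items():
        m = pattern.search(ocr_text)
        out[name] = m.group(1) if m else None
    return out

sample = "Invoice No: INV-0042\nDate: 01.06.2023\nTotal: $1,234.50"
print(extract_fields(sample))
```

A production system would, as in the paper, first run OpenCV-based text detection and an OCR engine to obtain the raw text that stands in here as `ocr_text`.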
4

Bystrov, I. I. "Automated Processing of Unstructured Data in Prospective Military Automated Systems: A Conceptual Basis". Military Thought 27, no. 004 (31.12.2018): 102–13. http://dx.doi.org/10.21557/mth.52771238.

5

Shakhanova, M. V. "Optimization of protection in automated data transmission and processing systems". Automatic Control and Computer Sciences 47, no. 3 (May 2013): 139–46. http://dx.doi.org/10.3103/s0146411613030061.

6

Minaev, V. A., A. V. Mazin, K. B. Zdiruk, and E. V. Poddubnaya. "MODELING OF INTERNAL CONFLICTS OF AUTOMATED DATA COLLECTION AND DATA PROCESSING SYSTEMS". Radio industry, no. 1 (10.03.2018): 118–23. http://dx.doi.org/10.21778/2413-9599-2018-1-118-123.

7

Simavoryan, Simon Zh, Arsen R. Simonyan, Elena I. Ulitina, and Rafik A. Simonyan. "Projecting Intelligent Systems to Protect Information in Automated Data Processing Systems (Functional Approach)". Modeling of Artificial Intelligence 7, no. 3 (05.09.2015): 212–20. http://dx.doi.org/10.13187/mai.2015.7.212.

8

Giacalone, V. M., G. Garofalo, G. D'Anna, F. Badalamenti, and C. Pipitone. "Fi.S.A.R.: A Data-managing and Processing Software for Automated Telemetry Systems". Marine Technology Society Journal 40, no. 1 (01.03.2006): 47–50. http://dx.doi.org/10.4031/002533206787353592.

Annotation:
Ultrasonic telemetry systems are increasingly used in studies on the behavioral ecology of marine and freshwater animals. Systems based on automated omnidirectional receivers in particular offer a powerful and relatively cheap tool to provide presence/absence data of tagged animals, but they do not provide the geographic position of the transmitters. In this paper a new software called FiSAR (Fish-finder Software for Automated Receivers), developed for application with Vemco VR1/VR2 receivers, is presented (available at www.marecofree.org/download.asp). FiSAR is able to manage large datasets as well as to calculate the activity center of each transmitter from simple presence/absence data. This is achieved through (1) the collation of the data collected by each receiver in a unique MS Access cross-table, and (2) the use of the Simpfendorfer et al. (2002) algorithm. The resulting cross-table can be easily exported to most statistical packages for further analysis.
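The activity-center calculation described above can be sketched as the detection-count weighted mean of receiver positions, following the cited Simpfendorfer et al. (2002) idea. This is a minimal illustration under that assumption, not FiSAR's actual code:

```python
def activity_center(detections):
    """
    Estimate a tagged animal's activity center as the detection-count
    weighted mean of receiver positions (after Simpfendorfer et al. 2002).
    `detections` maps (x, y) receiver coordinates to the number of
    detections logged there during the time interval.
    """
    total = sum(detections.values())
    if total == 0:
        raise ValueError("no detections in interval")
    x = sum(n * pos[0] for pos, n in detections.items()) / total
    y = sum(n * pos[1] for pos, n in detections.items()) / total
    return x, y

# Three receivers on a line; most detections at the middle one.
counts = {(0.0, 0.0): 2, (1.0, 0.0): 6, (2.0, 0.0): 2}
print(activity_center(counts))  # (1.0, 0.0)
```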
9

Liu, Weiping, Jennifer Fung, W. J. de Ruijter, Hans Chen, John W. Sedat, and David A. Agard. "Automated electron tomography: from data collection to image processing". Proceedings, annual meeting, Electron Microscopy Society of America 53 (13.08.1995): 26–27. http://dx.doi.org/10.1017/s0424820100136507.

Annotation:
Electron tomography is a technique where many projections of an object are collected from the transmission electron microscope (TEM), and are then used to reconstruct the object in its entirety, allowing internal structure to be viewed. As vital as is the 3-D structural information and with no other 3-D imaging technique to compete in its resolution range, electron tomography of amorphous structures has been exercised only sporadically over the last ten years. Its general lack of popularity can be attributed to the tediousness of the entire process starting from the data collection, image processing for reconstruction, and extending to the 3-D image analysis. We have been investing effort to automate all aspects of electron tomography. Our systems of data collection and tomographic image processing will be briefly described.To date, we have developed a second generation automated data collection system based on an SGI workstation (Fig. 1) (The previous version used a micro VAX). The computer takes full control of the microscope operations with its graphical menu driven environment. This is made possible by the direct digital recording of images using the CCD camera.
10

Morozov, A. O. "Decision-making. Terms and definitions". Mathematical machines and systems 2 (2022): 64–67. http://dx.doi.org/10.34121/1028-9763-2022-2-64-67.

Annotation:
Decision-making is directly connected with purposeful human activity. All people are engaged in this process on a daily basis, both personally and using automatic or automated systems. The paper considers the issues of decision-making technology in automatic and automated systems and identifies the main stages of decision making: data – information – knowledge – decision making – decision implementation. The paper defines the terms «data», «information», «knowledge», «decision», and «implementation», which are used at the stages of decision making and decision implementation in automated and automatic control systems. It also provides definitions of automatic and automated systems, of robots as a separate class of automatic systems, and of the next stage of development of robots: self-organizing automatic machines, which independently configure themselves to perform various target functions based on the rules of acquiring knowledge. Some features of large systems such as an industry or a state are noted. For such systems it is impossible to get all the necessary data about the processes that take place in them, so the information obtained as a result of their processing will not provide enough knowledge for decision-making on system management. The missing knowledge can be obtained thanks to the unformalized knowledge of people, a definition of which is provided as well. The paper formulates the principles of building automated and automatic systems using artificial intelligence and describes the sequence of control processes in any automatic or automated control system.
11

Hanson, Mark L., Paul G. Gonsalves, Jessica Tse, and Rachel Grey. "AUTOMATED DATA FUSION AND SITUATION ASSESSMENT IN SPACE SYSTEMS". International Journal on Artificial Intelligence Tools 13, no. 01 (March 2004): 255–71. http://dx.doi.org/10.1142/s021821300400151x.

Annotation:
Space systems are an important part of everyday life. They provide global positioning data, communications, and Earth science data such as weather information. All space systems require satellite operators to ensure high performance and continuous operations in the presence of off-nominal conditions due to space weather and onboard anomalies. Similar to other high-stress, time critical operations (e.g., piloting an aircraft or operating a nuclear power plant), situation awareness is a crucial factor in operator performance during these conditions. Because situation awareness is largely acquired by monitoring large numbers of parameters, it is difficult to rapidly and accurately fuse the data to develop an accurate assessment. To aid operators in this task, we have developed a prototype Multi-Agent Satellite System for Information Fusion (MASSIF) for automated data fusion and situation awareness. This system is based on human cognitive decision-making models and integrates a fuzzy logic system for semantic data processing, Bayesian belief networks for multi-source data fusion and situation assessment, and rule-bases for automatic network construction. This paper describes initial simulation-based results to establish feasibility and baseline performance. We describe knowledge engineering efforts, belief network construction, and operator-interfaces for automated data fusion and situation awareness for a hypothetical geosynchronous satellite.
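The belief-network fusion step described above can be illustrated with the simplest possible Bayesian update over two hypotheses (anomaly vs. nominal). The numbers are illustrative only; MASSIF's actual networks are far richer:

```python
def posterior_anomaly(prior, likelihoods):
    """
    Naive-Bayes style fusion of independent evidence about a satellite
    anomaly. `likelihoods` is a list of
    (P(evidence | anomaly), P(evidence | nominal)) pairs.
    """
    p_a, p_n = prior, 1.0 - prior
    for l_a, l_n in likelihoods:
        p_a *= l_a
        p_n *= l_n
    return p_a / (p_a + p_n)

# Two parameters both read off-nominal: the evidence is more likely
# under the anomaly hypothesis, so belief rises well above the 5% prior.
print(posterior_anomaly(0.05, [(0.9, 0.2), (0.8, 0.3)]))
```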
12

Simavoryan, Simon Zhorzhevich, Arsen Rafikovich Simonyan, Georgii Aleksandrovich Popov, and Elena Ivanovna Ulitina. "Functionality of the system of information security in automated data processing systems in the conditions of external intrusions by analogy with the human immune system". Программные системы и вычислительные методы, no. 3 (March 2021): 11–24. http://dx.doi.org/10.7256/2454-0714.2021.3.36226.

Annotation:
This article is dedicated to construction of the system of information security in automated data processing systems that function by analogy with the human immune system. The subject of this research is the development of the procedure for countering external intrusions of viruses, spam, and other destructive software programs in automated data processing systems. The object of this research is the systems of ensuring information security in automated data processing systems and human immune system. Methodological research on elaboration of the procedure for identification of intrusion is conducted via methods of artificial intelligence, systemic analysis, theory of neural and immune systems in the sphere of ensuring information security based on the achievements of systemic analysis and a systemic-conceptual approach towards information security in automated data processing systems. The main result lies in the developed general procedure for the functionality of the system of ensuring information security in countering external intrusions in the form of block-diagram and its description. The procedure is based on the idea of similarity in functionality of the mechanisms and procedures for protection against external intrusions in both, human immune system and automated data processing system, as well as drawing parallel between them. The main peculiarity of the developed procedure lies in its applicability to the accepted classification of the initial external environment of intrusion onto physical, information, field, and infrastructure environments. Such approach guarantees the novelty of the development from the perspective of constant updating of human immune system countering mechanisms to the external intrusions and its application for each environment in applicable to automated data processing systems.
13

Takahashi, K., R. Ooka, and S. Ikeda. "Anomaly detection and missing data imputation in building energy data for automated data pre-processing". Journal of Physics: Conference Series 2069, no. 1 (01.11.2021): 012144. http://dx.doi.org/10.1088/1742-6596/2069/1/012144.

Annotation:
Abstract A new trend in building automation is the implementation of smart energy management systems to measure and control building systems without a need for decision-making by human operators. Artificial intelligence can optimize these systems by predicting future demand to make informed decisions about how to efficiently operate individual equipment. These machine learning algorithms use historical data to learn demand trends and require high-quality datasets in order to make accurate predictions. But because of issues with data transmission or sensor errors, real-world datasets often contain outliers or have data missing. In most research settings, these values can simply be omitted, but in practice, anomalies compromise the automation system’s prediction accuracy, rendering it unable to maximize energy savings. This study explores different machine learning algorithms for anomaly detection, namely cluster-based techniques such as k-means clustering and neural network-based approaches such as the autoencoder, for automatically pre-processing incoming data, using a case study on actual electrical demand in a hospital building in Japan. Once anomalies were identified, the missing data was filled with prediction values from a deep neural network model. The newly composed data was then evaluated based on detection accuracy, prediction accuracy and training time. The proposed method of processing anomaly values allows the prediction model to process collected data without interruption, and shows similar predictive accuracy as manually processing the data. These predictions allow energy systems to optimize HVAC equipment control, increasing energy savings and reducing peak building loads.
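The pre-processing loop the abstract describes (detect anomalies, then impute the gaps) can be sketched minimally. The snippet below substitutes a z-score distance criterion for the study's k-means/autoencoder detectors and a mean of the clean points for its deep-network imputation, so it shows the shape of the workflow rather than the study's models:

```python
from statistics import mean, stdev

def flag_anomalies(series, k=3.0):
    """Flag points far from the bulk of the data. A fuller system would
    use distance to k-means centroids or autoencoder reconstruction
    error, as in the study; a z-score threshold is the simplest stand-in."""
    m, s = mean(series), stdev(series)
    return [abs(x - m) > k * s for x in series]

def impute(series, flags):
    """Replace flagged points with the mean of the unflagged ones
    (the study predicts them with a deep neural network instead)."""
    fill = mean(x for x, bad in zip(series, flags) if not bad)
    return [fill if bad else x for x, bad in zip(series, flags)]

demand = [100, 102, 98, 101, 950, 99, 103]   # one transmission spike
flags = flag_anomalies(demand, k=2.0)
print(impute(demand, flags))
```

The spike at index 4 is flagged and replaced, so a downstream demand-prediction model can consume the series without interruption.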
14

Suwardhi, D., S. W. Trisyanti, L. Kamal, H. A. Permana, A. Murtiyoso, and K. N. Fauzan. "POLYFIT ASSISTED MONOSCOPIC MULTI-IMAGE MEASUREMENT SYSTEMS". ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-2/W17 (29.11.2019): 347–54. http://dx.doi.org/10.5194/isprs-archives-xlii-2-w17-347-2019.

Annotation:
Abstract. Nowadays, in light of the latest development in three-dimensional (3D) modeling technology, an essential role is given to the research and development of fully-automated or semi-automated processes in order to increase workflow effectiveness. A key challenge is thus to automate the process leading to the geometric model which supports the Building Information Modeling (BIM) or 3D-Geographical Information Systems (3D-GIS). This 3D model usually originates from image-based or range-based point clouds. This research is the beginning of the development of a 3D modeling approach that is semi-automatic, and possibly fully-automatic, by combining polygon surface fitting (polyfit) technique and monoscopic multi-image measurement system. With the advent of dense matching and Structure from Motion methods (SfM), point clouds can be generated from multiple images obtained from digital cameras. Then, to reduce the data and to allow for efficient processing, it is necessary to extract polygonal surface data from point clouds delivered by the dense matching process. The polygonal surface is then used for the basis of further manual monoscopic measurements which are achieved separately on each image to obtain more detailed 3D model. Next, this approach analyzed the polygonal surface deformations in comparison to the initial point cloud data. It can be seen how the resolution and noise of the original point clouds affect the subsequent Polyfit-based modeling and monoscopic measurements. The deformations and the accuracy evaluation have been undertaken using different open source software. Also, the geometric error in the polyfit-derived polyhedral reconstruction propagating to the subsequent monoscopic-derived measurements was evaluated. Finally, our modeling approach shows that it can improve the processing speed and level of detail of the 3D models achieved using existing monoscopic measurements. 
Typically, geometric accuracy alone does not carry enough information to produce an accurate geometry model.
15

Smith, Jaclyn, Michael Benedikt, Milos Nikolic, and Amir Shaikhha. "Scalable querying of nested data". Proceedings of the VLDB Endowment 14, no. 3 (November 2020): 445–57. http://dx.doi.org/10.14778/3430915.3430933.

Annotation:
While large-scale distributed data processing platforms have become an attractive target for query processing, these systems are problematic for applications that deal with nested collections. Programmers are forced either to perform non-trivial translations of collection programs or to employ automated flattening procedures, both of which lead to performance problems. These challenges only worsen for nested collections with skewed cardinalities, where both handcrafted rewriting and automated flattening are unable to enforce load balancing across partitions. In this work, we propose a framework that translates a program manipulating nested collections into a set of semantically equivalent shredded queries that can be efficiently evaluated. The framework employs a combination of query compilation techniques, an efficient data representation for nested collections, and automated skew-handling. We provide an extensive experimental evaluation, demonstrating significant improvements provided by the framework in diverse scenarios for nested collection programs.
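The shredding idea sketched above, translating one nested collection into flat, independently partitionable relations linked by synthetic labels, can be shown on toy data. This is a data-layout illustration only, not the paper's query compiler:

```python
def shred(records):
    """
    Split a nested collection into a flat top-level relation plus a
    child relation keyed by synthetic labels, so each part can be
    partitioned (and skew-handled) independently.
    """
    top, children = [], {}
    for label, rec in enumerate(records):
        top.append({"label": label, "name": rec["name"]})
        children[label] = rec["items"]
    return top, children

orders = [
    {"name": "a", "items": [1, 2, 3]},
    {"name": "b", "items": [4]},          # skewed inner cardinalities
]
top, children = shred(orders)
print(top)
print(children)
```

Because the inner lists now live in their own relation, a distributed engine can spread a heavily skewed `items` collection across partitions instead of co-locating it with its parent record.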
16

Chakaberia, Irakli, Jerome Lauret, Michael Poat, and Jefferson Porter. "Data transfer for STAR grid jobs". Journal of Physics: Conference Series 2438, no. 1 (01.02.2023): 012022. http://dx.doi.org/10.1088/1742-6596/2438/1/012022.

Annotation:
Abstract The Solenoidal Tracker at RHIC (STAR) is a multipurpose experiment at the Relativistic Heavy Ion Collider (RHIC) with the primary goal to study the formation and properties of the quark-gluon plasma. STAR is an international collaboration of member institutions and laboratories from around the world. Yearly data-taking period produces PBytes of raw data collected by the experiment. STAR primarily uses its dedicated facility at BNL to process this data, but has routinely leveraged distributed systems, both high throughput (HTC) and high performance (HPC) computing clusters, to significantly augment the processing capacity available to the experiment. The ability to automate the efficient transfer of large data sets on reliable, scalable, and secure infrastructure is critical for any large-scale distributed processing campaign. For more than a decade, STAR computing has relied upon GridFTP with its x509-based authentication to build such data transfer systems and integrate them into its larger production workflow. The end of support by the community for both GridFTP and the x509 standard requires STAR to investigate other approaches to meet its distributed processing needs. In this study we investigate two multi-purpose data distribution systems, Globus.org and XRootD, as alternatives to GridFTP. We compare both their performance and the ease by which each service is integrated into the type of secure and automated data transfer systems STAR has previously built using GridFTP. The presented approach and study may be applicable to other distributed data processing use cases beyond STAR.
17

Ignaszak, Z., R. Sika, M. Perzyk, A. Kochański, and J. Kozłowski. "Effectiveness of SCADA Systems in Control of Green Sands Properties". Archives of Foundry Engineering 16, no. 1 (01.03.2016): 5–12. http://dx.doi.org/10.1515/afe-2015-0094.

Annotation:
Abstract The paper undertakes an important topic of evaluation of effectiveness of SCADA (Supervisory Control and Data Acquisition) systems, used for monitoring and control of selected processing parameters of classic green sands used in foundry. Main focus was put on process studies of properties of so-called 1st generation molding sands in the respect of their preparation process. Possible methods of control of this processing are presented, with consideration of application of fresh raw materials, return sand (regenerate) and water. The studies conducted in one of European foundries were aimed at pointing out how much application of new, automated plant of sand processing incorporating the SCADA systems allows stabilizing results of measurement of selected sand parameters after its mixing. The studies concerned two comparative periods of time, before an implementation of the automated devices for green sands processing (ASMS - Automatic Sand Measurement System and MCM – Main Control Module) and after the implementation. Results of measurement of selected sand properties after implementation of the ASMS were also evaluated and compared with testing studies conducted periodically in laboratory.
18

Yalova, K., K. Yashyna, and O. Tarasiyk. "AUTOMATED INFORMATION SYSTEM FOR GPS MONITORING DATA PROCESSING". Collection of scholarly papers of Dniprovsk State Technical University (Technical Sciences) 2, no. 37 (23.04.2021): 88–92. http://dx.doi.org/10.31319/2519-2884.37.2020.16.

Annotation:
The use of automated information systems for geolocation data processing increases the efficiency of control and management of freight and passenger traffic. The article presents the results of the design and software implementation of an automated information system that monitors GPS tracking data in real time, builds routes, sets control points along them, generates system messages about the status of vehicles on the route, and produces reports in response to user requests. The system architecture and interface were designed on the basis of developed object and functional domain models that take its structural and functional features into account. Microservice principles were applied in developing the architecture: the system software is a set of independent services, each running in its own process, implementing a specific piece of business logic, and communicating with the other services over HTTP. The services comprise a GPS data service, a geolocation data processing service, and a web application service. The main algorithms of the developed services and their functional features are described in the paper. The article's figures show the system's site map and typical web forms, illustrating the composition of the web pages, the paths between them, and the user interface, which was designed with the quality requirements for graphical web interfaces in mind.
19

Mao, Yi, Yi Yang, and Yuxin Hu. "Research into a Multi-Variate Surveillance Data Fusion Processing Algorithm". Sensors 19, no. 22 (15.11.2019): 4975. http://dx.doi.org/10.3390/s19224975.

Annotation:
Targeted information sources include radar and ADS (Automatic Dependent Surveillance) for civil ATM (Air Traffic Management) systems, and the new navigation system based on satellites has the capability of global coverage. In order to solve the surveillance problem in mid-and-high altitude airspace and approaching airspace, this paper proposes a filter-based covariance matrix weighting method, measurement variance weighting method, and measurement-first weighted fusion method weighting integration algorithm to improve the efficiency of data integration calculation under fixed accuracy. Besides this, this paper focuses on the technology of the integration of a multi-radar surveillance system and automated related surveillance system in the ATM system and analyzes the constructional method of a multigeneration surveillance data integration system, as well as establishing the targeted model of sensors and the target track and designing the logical structure of multi-radar and ADS data integration.
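For a single scalar state, the measurement-variance weighting the paper builds on reduces to classical inverse-variance fusion, sketched here with made-up radar and ADS numbers (the paper's algorithm operates on full covariance matrices and filtered tracks):

```python
def fuse(measurements):
    """
    Inverse-variance weighted fusion: each sensor's estimate is weighted
    by 1/variance, so the more precise sensor dominates the result.
    `measurements` is a list of (value, variance) pairs.
    """
    weights = [1.0 / var for _, var in measurements]
    fused = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)          # fused estimate is tighter
    return fused, fused_var

# Radar report (value 10.0, var 4.0) and ADS report (value 12.0, var 1.0):
# the fused estimate sits closer to the low-variance ADS report.
value, var = fuse([(10.0, 4.0), (12.0, 1.0)])
print(value, var)  # 11.6 0.8
```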
20

Umetalieva, Ch T., N. Y. Temirbaeva, and B. N. Nurtaev. "THEORY OF BUILDING AUTOMATED CONTROL SYSTEMS". Herald of KSUCTA, №2, Part 1, 2022, no. 2-1-2022 (30.04.2022): 295–99. http://dx.doi.org/10.35803/1694-5298.2022.2.295-299.

Annotation:
The task of collecting, processing, and disseminating (exchanging) information has accompanied every stage of human development. For a long time the main tools for solving it were the human brain, hearing, and language. Storing information in computer memory gives it a fundamentally new dynamic quality: the ability to retrieve it quickly and use it directly in problems solved on a computer. Modern printing equipment, driven by modern computers, can quickly put any selected information on paper when needed. In the course of developing administrative data management systems, the need arose for automated control systems (ACS) for facility management.
21

Shevchenko, Olga. "Regulatory Architecture of Data Processing for Connected and Automated Driving in Europe". International Journal of Law and Public Administration 2, no. 2 (01.12.2019): 24. http://dx.doi.org/10.11114/ijlpa.v2i2.4594.

Annotation:
The beginning of the 2020s ought to reflect a steady conclusion of the vast majority of the European Union’s projects with regards to the new era of connectivity and mobility within the European Union dimension. We expect Intelligent Connected Vehicles (ICVs) to step into free circulation within the internal market. Since the operation of the ICVs depends on the number of data processing operations, data processing operations should be precisely determined and framed beforehand. ICVs data operations consist of extraordinarily large volumes and velocity of a data flow which previously existed in traditional relational database systems and could not have been processed within the desired timeframe. Even though the currently adopted database systems are ready to face the new level of data processing, a huge data stream is also faced with complex obstacles and new risks which have never been experienced beforehand.While seeking to ensure safe and secure introduction of a new level of data processing for connectivity and automation at the European Union market, the author precisely examines all potential risks and possibilities of integration into a uniform legal regulation to ensure secured ICVs data processing at all levels. The regulatory framework should document adequate security requirements and defences against ICVs attacks e.g. interference and remote-control interception.
22

Díaz-Choque, Martín, Vidalina Chaccara-Contreras, Giorgio Aquije-Cárdenas, Raquel Atoche-Wong, Victor Villanueva-Acosta, Carlos Gamarra-Bustillos, and Oscar Samanamud-Loyola. "Supervision, control, and data acquisition system of a heat exchanger". Indonesian Journal of Electrical Engineering and Computer Science 28, no. 1 (01.10.2022): 155. http://dx.doi.org/10.11591/ijeecs.v28.i1.pp155-164.

Annotation:
Today, when Industry 4.0 and its organizational advantages are already widely discussed, there are still industrial processes that lack automatic regulation mechanisms, so neither optimal operation nor adequate monitoring and supervision can be guaranteed. The purpose of the article is therefore to demonstrate the feasibility of integrating a programmable logic controller with Arduino nanocontrollers as an alternative way to automate a concentric-tube heat exchanger, with monitoring and data acquisition through a supervisory control and data acquisition system. The integration is demonstrated through the design and implementation of a temperature transducer made up of a MAX6675 converter module and an Arduino Nano controller, whose signal is amplified through its pulse width modulation (PWM) interface and the TL081CP integrated circuit and coupled to the analog inputs of the automaton. Experimental tests determined that the flow rate of the automated system is directly proportional to the temperature drop in the hot fluid and inversely proportional to the temperature rise in the cold fluid, verifying the effectiveness of the automated heat exchanger.
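The proportionality the tests verified follows from the steady-state energy balance of a single stream; a minimal numeric sketch (values illustrative, not taken from the article's rig):

```python
def heat_duty(m_dot, cp, t_in, t_out):
    """Steady-state energy balance Q = m_dot * cp * (t_in - t_out)
    for one stream, in watts; at fixed duty, a larger flow rate
    implies a smaller temperature change and vice versa."""
    return m_dot * cp * (t_in - t_out)

# Hot water stream: 0.5 kg/s, cp = 4186 J/(kg*K), cooled from 80 to 60 C.
q_hot = heat_duty(0.5, 4186.0, 80.0, 60.0)
print(q_hot)  # 41860.0 W
```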
23

Branisavljević, Nemanja, Zoran Kapelan, and Dušan Prodanović. "Improved real-time data anomaly detection using context classification". Journal of Hydroinformatics 13, no. 3 (06.01.2011): 307–23. http://dx.doi.org/10.2166/hydro.2011.042.

Annotation:
The number of automated measuring and reporting systems used in water distribution and sewer systems is dramatically increasing and, as a consequence, so is the volume of data acquired. Since real-time data is likely to contain a certain amount of anomalous values and data acquisition equipment is not perfect, it is essential to equip the SCADA (Supervisory Control and Data Acquisition) system with automatic procedures that can detect the related problems and assist the user in monitoring and managing the incoming data. A number of different anomaly detection techniques and methods exist and can be used with varying success. To improve the performance, these methods must be fine tuned according to crucial aspects of the process monitored and the contexts in which the data are classified. The aim of this paper is to explore if the data context classification and pre-processing techniques can be used to improve the anomaly detection methods, especially in fully automated systems. The methodology developed is tested on sets of real-life data, using different standard and experimental anomaly detection procedures including statistical, model-based and data-mining approaches. The results obtained clearly demonstrate the effectiveness of the suggested anomaly detection methodology.
24

Eszteri, Dániel. „Blockchain and Artificial Intelligence: Connecting Two Distinct Technologies to Comply with GDPR's Data Protection By Design Principle“. Masaryk University Journal of Law and Technology 16, no. 1 (30.06.2022): 59–88. http://dx.doi.org/10.5817/mujlt2022-1-3.

Abstract:
The aim of the paper is to present some general principles of data protection law that can be applied to automated decision-making applications embedded in blockchain technology in order to comply with the provisions of the European Union's General Data Protection Regulation (GDPR). The analysis focuses on the applicability of the ‘data protection by design’ principle during the development of such systems. Because blockchain-based networks are built on distributed data processing operations, the data controlling or processing of participating nodes should comply with abstract data protection patterns predetermined and collectively built in during the system's development phase. On the other hand, the imprint of AI's automated data processing can also be observed and traced back in the blockchain due to its historically retroactive nature. In the end, the study presents the human mind and its ‘uploading’ with conscious and unconscious contents as an analogy to blockchain-based AI systems. My goal is to highlight that the synergy of blockchain and machine-learning-based AI can hypothetically be suitable for developing robust yet transparent automated decision-making systems. The compliance of these distributed AI systems with the principles of data protection law is a key issue regarding the high risks they pose to data subjects' rights and freedoms.
25

Zhelyazova, Velqna. „Algorithm for information protection in the automated systems“. Journal scientific and applied research 1, no. 1 (05.05.2012): 35–40. http://dx.doi.org/10.46687/jsar.v1i1.14.

Abstract:
This scientific report proposes an algorithm to protect information from unauthorized access (IPUA) in automated systems (AS) with integrated procedures of analysis and synthesis. Currently, ensuring information security (IS) in automated systems for information processing in the management of different objects is of major importance. These objects include telecommunication systems, banking systems, systems supporting the operation of nuclear power plants, systems for the management of sea, air and land transport, and systems for processing and storing confidential and secret information [1, 2, and 9]. The extensive use of local, corporate and global networks with open data transmission protocols intensifies the problem of information protection.
26

Badalyan, V. G., A. Kh Vopilkin, S. A. Dolenko, Yu V. Orlov and I. G. Persiantsev. „Data-processing algorithms for automatic operation of ultrasonic systems with coherent data processing“. Russian Journal of Nondestructive Testing 40, no. 12 (December 2004): 791–800. http://dx.doi.org/10.1007/s11181-005-0108-7.
27

Yanjie, Zhang, Liu Hongyu and Liu Xin Yu. „Intelligent metering systems designed to achieve energy efficiency of the production process“. IOP Conference Series: Earth and Environmental Science 990, no. 1 (01.02.2022): 012033. http://dx.doi.org/10.1088/1755-1315/990/1/012033.

Abstract:
The intelligent electricity metering system is designed to collect data from electricity meters in an automatic mode (i.e. without the direct participation of the electricity consumer and other persons). Intelligent energy metering systems rely on a network of smart meters and have a number of new features: bidirectional interaction with meters; automated processing and storage of large amounts of data; a flexible and user-friendly interface; and active involvement of consumers in the process of energy management.
28

Rathee, Geetanjali, Adel Khelifi and Razi Iqbal. „Artificial Intelligence- (AI-) Enabled Internet of Things (IoT) for Secure Big Data Processing in Multihoming Networks“. Wireless Communications and Mobile Computing 2021 (11.08.2021): 1–9. http://dx.doi.org/10.1155/2021/5754322.

Abstract:
Automated techniques enabled with Artificial Neural Networks (ANN), the Internet of Things (IoT) and cloud-based services affect the real-time analysis and processing of information in a variety of applications. In addition, multihoming is a type of network that combines various types of networks into a single environment while managing a huge amount of data. Big data processing and monitoring in multihoming networks have so far received little attention with respect to reducing security risk and improving efficiency during processing or monitoring of the information. The use of AI-based systems in multihoming big data with IoT- and AI-integrated systems may be beneficial in various respects. Although multihoming security issues and their analysis have been well studied by various scientists and researchers, not much attention has been paid to big data security processing in multihoming, especially using automated techniques and systems. The aim of this paper is to propose an IoT-based artificial neural network to process and compute big data while ensuring a secure multihoming communication network using the Bayesian Rule (BR) and Levenberg-Marquardt (LM) algorithms. Further, the efficiency and effect on multihoming information processing using an AI-assisted mechanism are evaluated over various parameters such as classification accuracy, classification time, specificity, sensitivity, ROC, and F-measure.
29

Simavoryan, Simon Zhorzhevich, Arsen Rafikovich Simonyan, Georgii Aleksandrovich Popov and Elena Ivanovna Ulitina. „Analysis of possible adaptation of the general pattern of immune system within the systems for preventing intrusions“. Вопросы безопасности, no. 4 (April 2020): 36–46. http://dx.doi.org/10.25136/2409-7543.2020.4.33736.

Abstract:
The subject of this research is the analysis of possible implementation of the mechanisms of the human immune system as applicable to information security systems in automated data processing systems. The objects of this research are the human immune system, information security systems, and automated data processing systems. The research is conducted on the basis of the achievements of a systemic-conceptual approach toward information protection in automated data processing systems, developed within the framework of the project sponsored by the Russian Foundation for Basic Research No. 19-01-00383 on the creation of intelligent information protection systems based on neural network intrusion detection systems and the mechanisms of artificial immune systems. The article reviews the similarities and differences between the human immune system and information security systems. Special attention is given to the peculiarities of the mechanisms for detecting harmful intrusions into each of these systems. Methodological research on the topic is carried out using achievements in the creation of neural network intrusion detection systems built on the basis of artificial immune mechanisms that function similarly to the human immune system. The main result is the conclusion that adaptive information security systems containing means and mechanisms of protection built by analogy with the human immune system may provide successful and effective protection of information in automated data processing systems. The significance of this conclusion is substantiated by the fact that it can be implemented despite the absence of a full analogy between the human immune system and information security systems; moreover, many protection mechanisms implemented in the human immune system are absent in information security systems, and vice versa.
30

Pasyanos, Michael E., Douglas S. Dreger and Barbara Romanowicz. „Toward real-time estimation of regional moment tensors“. Bulletin of the Seismological Society of America 86, no. 5 (01.10.1996): 1255–69. http://dx.doi.org/10.1785/bssa0860051255.

Abstract:
Abstract Recent advances in broadband station coverage, continuous telemetry systems, moment-tensor procedures, and computer data-processing methods have given us the opportunity to automate the two regional moment-tensor methods employed at the UC Berkeley Seismographic Station for events in northern and central California. Preliminary solutions are available within minutes after an event has occurred and are subsequently human reviewed. We compare the solutions of the two methods to each other, as well as the automatic and revised solutions of each individual method. Efforts are being made to establish robust criteria for determining accurate solutions with human review and to fully automate the moment-tensor procedures into the already-existing automated earthquake-location system.
31

Onishchenko, P. S., K. Y. Klyshnikov and E. A. Ovcharenko. „Artificial Neural Networks in Cardiology: Analysis of Numerical and Text Data“. Mathematical Biology and Bioinformatics 15, no. 1 (18.02.2020): 40–56. http://dx.doi.org/10.17537/2020.15.40.

Abstract:
This review discusses works on the use of artificial neural networks for processing numerical and textual data. Application of a number of widely used approaches is considered, such as decision support systems; prediction systems, providing forecasts of outcomes of various methods of treatment of cardiovascular diseases, and risk assessment systems. The possibility of using artificial neural networks as an alternative approach to standard methods for processing patient clinical data has been shown. The use of neural network technologies in the creation of automated assistants to the attending physician will make it possible to provide medical services better and more efficiently.
32

Korniienko, I., S. Korniienko, S. Moskalets, S. Kaznachey and O. Zhyrna. „GEOINFORMATION SUPPORT FOR AUTOMATED TEST PLANNING SUBSYSTEM“. Наукові праці Державного науково-дослідного інституту випробувань і сертифікації озброєння та військової техніки, no. 3 (28.05.2020): 49–55. http://dx.doi.org/10.37701/dndivsovt.3.2020.07.

Abstract:
The process of testing weapons and military equipment involves numerous manual labor-intensive operations. Such operations can be simplified by fully or partially automating the test-planning stages, the tests themselves, and the processing of test results. A feature of testing weapons and military equipment is the large amount of data that has a spatial location. One of the modern tools for the cartographic representation, processing and analysis of arrays of spatially localized statistical data, geospatial modeling and situation forecasting is geoinformation system technology. The article substantiates the feasibility of using geoinformation systems as part of the weapons and military equipment testing system. A functional scheme for integrating the geoinformation component into the structure of the test automation subsystem is presented for geoinformation support of test planning and the processing of measurement results. An approach to the creation of geoinformation models of test sites is proposed, based on methods of remote sensing of the Earth and open Web-GIS resources. The functional modules for spatial data processing and analysis that can be applied to testing tasks are identified within the geoinformation toolkit, together with examples of typical spatial tasks that can be performed during test planning, the tests themselves, and the processing and analysis of measurement results when such data are spatially linked.
The use of geoinformation technology in the test system will provide an arsenal of qualitatively new methods of digital cartography, such as the automated preparation of cartographic information in the accepted cartographic projections and symbols, mass processing of arrays of measured data, a wide toolkit of mathematical and cartographic methods and functions, the ability to add custom methods, algorithms and techniques for statistical information processing, the creation and use of object-oriented geoinformation data models, and a set of visualization tools for the best presentation of research and simulation results.
33

Zakharev, Andrei A., Evgenii S. Kukin, Iaroslav V. Mazurov and Aleksandr I. Chizhov. „AUTOMATED SITUATION AWARENESS IN THE NAVY’S COMPUTER-AIDED CONTROL SYSTEMS“. АВТОМАТИЗАЦИЯ ПРОЦЕССОВ УПРАВЛЕНИЯ 63, no. 1 (2021): 4–12. http://dx.doi.org/10.35752/1991-2927-2021-1-63-4-12.

Abstract:
Current development of the Armed Forces of the Russian Federation is determined by advancing information technologies. The role of information support in the Navy is constantly growing and is crucial in military decision-making. The article deals with a conceptual approach to the integrated automation of situation awareness in the Navy's computer-aided control system in order to enhance forces/troops management by providing force command and military authority officers/operators with analytical data obtained through the acquisition and processing of situation information. For automation upgrade and effective management, mathematical models of data processing, represented by first-degree linear differential equations, are given. These models describe the kinetics of various generated information resources and give analytical solutions for the volume of output information resources versus the volume of initial data and the conversion rate.
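As an illustration of the kind of model the abstract describes, a first-degree linear differential equation for the volume of a generated information resource and its analytical solution might take the following form; the notation here is assumed for illustration, not taken from the article:

```latex
\frac{\mathrm{d}V_{\text{out}}(t)}{\mathrm{d}t} = k\,\bigl(V_{0} - V_{\text{out}}(t)\bigr),
\qquad
V_{\text{out}}(t) = V_{0}\,\bigl(1 - e^{-kt}\bigr),
```

where \(V_0\) is the volume of initial data and \(k\) the conversion rate: the output resource volume approaches the initial data volume at a speed set by the conversion rate.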
34

Gubernatorov, Oleg. „Assessing the quality indicators of automated control system facilities“. Automation and modeling in design and management 2022, no. 4 (21.12.2022): 12–17. http://dx.doi.org/10.30987/2658-6436-2022-4-12-17.

Abstract:
The aim of the study is to show the necessity of solving the problem of assessing the quality indicators of automated system facilities and to demonstrate that the problem of the work quality of system facilities is closely connected with choosing rational technologies for processing information in the system. The article is devoted to calculating the main quality indicators of operating automated system facilities, which include the time to complete system tasks, the output information accuracy and the system feasibility. The research methods are statistical analysis of the operating quality of automated system facilities using the methods of probability theory and parallel data processing. The novelty of the work lies in using methods of parallel data processing to calculate the main quality indicators of operating automated control system facilities. The study results in a methodology for assessing the quality of automated system facilities, namely the time to complete system tasks, the output information accuracy and the system feasibility. The article concludes that the given methods for calculating the main quality indicators of operating automated system facilities make it possible to increase the efficiency of automated control systems by boosting the reliability and accuracy of information processing.
35

Vydrin, Nikita K. „STUDY OF PROCESSING METHODS OF GEO-SPATIAL DATA TO MANAGE SUSTAINABLE DEVELOPMENT OF TERRITORIES“. Interexpo GEO-Siberia 6, no. 1 (08.07.2020): 87–90. http://dx.doi.org/10.33764/2618-981x-2020-6-1-87-90.

Abstract:
The article discusses a technique for processing satellite images from PlanetScope satellites, implemented as the Planet Analytics Feeds web service. The basic characteristics of the survey equipment of these satellite systems and the key steps of automated processing are considered. The obtained information can be used to study and analyze the sustainable development of territories, as well as for making managerial decisions.
36

Abdella, Yisak and Knut Alfredsen. „A GIS toolset for automated processing and analysis of radar precipitation data“. Computers & Geosciences 36, no. 4 (April 2010): 422–29. http://dx.doi.org/10.1016/j.cageo.2009.08.008.
37

Selzer, Sean, Amber L. Annett and William B. Homoky. „RaDeCC Reader: Fast, accurate and automated data processing for Radium Delayed Coincidence Counting systems“. Computers & Geosciences 149 (April 2021): 104699. http://dx.doi.org/10.1016/j.cageo.2021.104699.
38

Kobets, D. A., I. V. Balashov, I. G. Sychugov and V. A. Tolpin. „Organization of control and performance analysis of systems for automated processing of satellite data“. Sovremennye problemy distantsionnogo zondirovaniya Zemli iz kosmosa 14, no. 3 (2017): 92–103. http://dx.doi.org/10.21046/2070-7401-2017-14-3-92-103.
39

Lagerev, D. G. and E. A. Makarova. „FEATURES OF PRELIMINARY PROCESSING OF SEMI-STRUCTURED MEDICAL DATA IN RUSSIAN FOR USE IN ENSEMBLES OF DATA MINING MODELS“. Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 193 (July 2020): 44–54. http://dx.doi.org/10.14489/vkit.2020.07.pp.044-054.

Abstract:
The paper considers the problem of integration, processing and mining of poorly structured data from medical information systems in order to make managerial decisions in healthcare. The problems of medical data are described, such as the lack of sufficient structure, a large number of abbreviations characteristic of specific nosologies, and the complexity of the automatic semantic interpretation of some fields. The authors demonstrate an approach to finding and expanding abbreviations in texts based on a combination of machine and human processing. The proposed method, a hybrid approach combining the strengths of machine and human processing, increased the number of abbreviations found by automatic methods by 21 %, expanded up to 55 % of cases in automated mode (with a probability of correctness above 70 %) and significantly reduced the time spent by specialists on processing the remaining abbreviations. Further research will be aimed at solving problems associated with the processing and specificity of medical data, such as a large number of spelling errors and specific grammatical constructions. Using a hybrid approach to preprocessing poorly structured data will increase the efficiency of management decisions in the field of healthcare by reducing the time spent by experts on their creation and support. The hybrid approach to the preprocessing of text data in Russian can be applied in other subject areas; however, it may be necessary to adjust the technique to the specifics of the processed data.
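The routing logic of such a hybrid pipeline can be sketched as follows; the abbreviation dictionary and probability scores are invented for illustration, with only the 70 % threshold taken from the abstract:

```python
# Sketch of hybrid abbreviation expansion: expand automatically when the
# estimated probability of correctness clears a threshold, otherwise
# route the case to a human reviewer. Entries and scores are invented.
ABBREVIATIONS = {
    "bp": [("blood pressure", 0.92), ("bipolar", 0.08)],
    "ms": [("multiple sclerosis", 0.55), ("mitral stenosis", 0.45)],
}

def expand(token, threshold=0.70):
    """Return (expansion, 'auto') or (original token, 'human_review')."""
    candidates = ABBREVIATIONS.get(token.lower())
    if not candidates:
        return token, "human_review"          # unknown abbreviation
    best, score = max(candidates, key=lambda c: c[1])
    if score >= threshold:
        return best, "auto"                   # confident: expand in place
    return token, "human_review"              # ambiguous: defer to a specialist
```

"bp" expands automatically, while the ambiguous "ms" is deferred: the machine handles the confident majority and specialists see only the residue, mirroring the division of labor the abstract describes.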
41

Zavyalov, Aleksandr A. and Dmitry A. Andreev. „Management of the radiotherapy quality control using automated Big Data processing“. Health Care of the Russian Federation 64, no. 6 (30.12.2020): 368–72. http://dx.doi.org/10.46563/0044-197x-2020-64-6-368-372.

Abstract:
Introduction. In Moscow, the state-of-the-art information technologies for cancer care data processing are widely used in routine practice. Data Science approaches are increasingly applied in the field of radiation oncology. Novel arrays of radiotherapy performance indices can be introduced into real-time cancer care quality and safety monitoring. The purpose of the study. The short review of the critical structural elements of automated Big Data processing and its perspectives in the light of the internal quality and safety control organization in radiation oncology departments. Material and methods. The PubMed (Medline) and E-Library databases were used to search the articles published mainly in the last 2-3 years. In total, about 20 reports were selected. Results. This paper highlights the applicability of the next-generation Data Science approaches to quality and safety assurance in radiation oncological units. The structural pillars for automated Big Data processing are considered. Big Data processing technologies can facilitate improvements in quality management at any radiotherapy stage. Simultaneously, the high requirements for quality and integrity across indices in the databases are crucial. Detailed dose data may also be linked to outcomes and survival indices integrated into larger registries. Discussion. Radiotherapy quality control could be automated to some extent through further introduction of information technologies making comparisons of the real-time quality measures with digital targets in terms of minimum norms / standards. The implementation of automated systems generating early electronic notifications and rapid alerts in case of serious quality violation could drastically improve the internal medical processes in local clinics. Conclusion. The role of Big Data tools in internal quality and safety control will dramatically increase over time.
42

Zhuang, Zhen Sheng. „Research on Data Sharing for College Financial Systems and Digital Campus“. Applied Mechanics and Materials 687-691 (November 2014): 2706–9. http://dx.doi.org/10.4028/www.scientific.net/amm.687-691.2706.

Abstract:
With the rapid development of Internet technology and the wide use of computers, digitalization at colleges has also grown substantially thanks to the fast development of science and technology. Data sharing between the financial system and digital campus data files has become the bottleneck of digital campus development. In this paper, we analyze the sharing of financial data between the financial system and the digital campus network, together with automated processing strategies, and implement financial data subscription and automated data extraction. Combining the financial system with informatization in order to achieve complete sharing of financial information, we have designed a college financial management system that includes the following subsystems: account management, student fees management, compensation management and so on. The paper also first introduces the technology adopted in the system and then explains the function of each subsystem.
43

Bounabi, Mariem, Karim EL Moutaouakil and Khalid Satori. „The Optimal Inference Rules Selection for Unstructured Data Multi-Classification“. Statistics, Optimization & Information Computing 10, no. 1 (08.02.2022): 225–35. http://dx.doi.org/10.19139/soic-2310-5070-1131.

Abstract:
The Fuzzy Inference System (FIS) is frequently utilized in a variety of text mining applications. Inserting manual rules for a FIS remains a real issue, especially in text processing domains, where the size of the processed databases is enormous. Therefore, automated and optimal inference rule (IR) selection strengthens the FIS process. In this work, we propose to apply FP-Growth as an association model algorithm and an automatic way to identify IRs for fuzzy text vectorization. Once the fuzzy vectors are generated, we call variable selection algorithms, e.g. Info Gain and Relief, to reduce the descriptor dimensionality. To test the performance of the new descriptor, we propose multi-class text classification systems using several machine learning algorithms. Applied to benchmark databases, the new technique for producing fuzzy descriptors achieves a significant gain in time, rule precision and weighting quality. Moreover, comparing the classification systems, the accuracy is improved by 10 % compared with other approaches.
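Association-rule selection of the kind described above can be sketched as follows; this uses a naive pairwise count as a stand-in for FP-Growth (which finds the same frequent itemsets without the pairwise scan), and the documents and thresholds are invented:

```python
# Simplified association-rule mining over term "transactions": keep rules
# whose support and confidence clear a threshold. A stand-in for FP-Growth.
from itertools import combinations
from collections import Counter

def mine_rules(transactions, min_support=0.4, min_conf=0.6):
    """Return (lhs, rhs, confidence) rules from frequent term pairs."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n < min_support:          # prune infrequent pairs
            continue
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_counts[lhs]  # P(rhs | lhs)
            if conf >= min_conf:
                rules.append((lhs, rhs, round(conf, 2)))
    return rules

# Invented term sets, one per document.
docs = [{"price", "cheap"}, {"price", "cheap"}, {"price", "offer"},
        {"cheap", "offer"}, {"price", "cheap"}]
rules = mine_rules(docs)
```

Only the frequent pair survives pruning, yielding the two rules cheap→price and price→cheap; in the paper's setting such surviving rules would feed the fuzzy vectorization step instead of manually written ones.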
44

BERRY, IAN, JULIE WILSON, JON DIPROSE, DAVE STUART, STEPHEN FULLER and ROBERT ESNOUF. „IMAGE STORAGE FOR AUTOMATED CRYSTALLIZATION IMAGING SYSTEMS“. International Journal of Neural Systems 15, no. 06 (December 2005): 415–25. http://dx.doi.org/10.1142/s0129065705000384.

Abstract:
To use crystallography for the determination of the three-dimensional structures of proteins, protein crystals need to be grown. Automated imaging systems are increasingly being used to monitor these crystallization experiments. These present problems of accessibility to the data, repeatability of any image analysis performed, and the amount of storage required. Various image formats and techniques can be combined to provide effective solutions to high-volume processing problems such as these; however, the lack of widespread support for the most effective algorithms, such as JPEG2000, which yielded a 64% improvement in file size over the bitmap, currently inhibits the immediate take-up of this approach.
45

Vogel, Sven C. „gsaslanguage: a GSAS script language for automated Rietveld refinements of diffraction data“. Journal of Applied Crystallography 44, no. 4 (13.07.2011): 873–77. http://dx.doi.org/10.1107/s0021889811023181.

Abstract:
A description of the gsaslanguage software is presented. The software provides input to and processes output from the GSAS package. It allows the development of scripts for the automatic evaluation of large numbers of data sets and provides documentation of the refinement strategies employed, thus fostering the development of efficient refinement strategies. Use of the bash shell and standard Unix text-processing tools, available natively on Linux and Mac OS X platforms and via the free cygwin software on Windows systems, makes this software platform independent.
46

Maldjian, Joseph A., Aaron H. Baer, Robert A. Kraft, Paul J. Laurienti and Jonathan H. Burdette. „Fully Automated Processing of fMRI Data in SPM: from MRI Scanner to PACS“. Neuroinformatics 7, no. 1 (21.01.2009): 57–72. http://dx.doi.org/10.1007/s12021-008-9040-z.
47

Safronov, Aleksey V. „The Gosplan Automated Planning System as a Necessary Step Toward the Nationwide Automated Data Processing and Control System (NACS)“. Economic History 15, no. 4 (31.12.2019): 395–409. http://dx.doi.org/10.15507/2409-630x.047.015.201904.395-409.

Abstract:
Introduction. Given the ongoing digitalization of public administration, the Soviet experience in introducing computer-aided planning warrants more research with regard to overcoming departmentalism in creating supra-institutional (national) information systems. Based on the Russian State Archive of the Economy funds, interviews with the planners and Soviet economics literature, this paper examines the reasons behind the creation of the Automated Planning System (ASPR), which was meant to serve as a necessary intermediate step toward the National Information System (NACS). Materials and Methods. Through the lens of institutional economics and actor-network theory, this research explores the inter-institutional struggle for the right to supervise the creation of the new instrument for national economic planning. Results. A vast body of Soviet literature on the Automated Planning System and archival documents from the Russian State Archive of the Economy were compiled and analyzed alongside interviews conducted with the chairmen of the Main Computing Centre of the Gosplan (V. B. Bezrukov, V. V. Kossov, Y. M. Urinson) and its other employees. This enabled the author to reconstruct the history of the ASPR, describe its architecture (key blocks and functions) and trace the political struggles over the computerization of the nationwide planning system. Discussion and Conclusions. This research improves upon the historiographic narrative of the NACS as a completely unimplemented project and offers an explanatory model for the Gosplan tactics which proved most effective in the competition for the right to run the nationwide administration system. The ASPR is regarded as an interim solution crucial for overcoming bureaucratic opposition.
As a Gosplan institutional system, it allowed for the exclusion of other institutions from the creation process, while the conceptual shift from a single system to a unified one turned all the opponents aspiring to have their own information systems into observers. At the later stage, when the ASPR had been launched, the Gosplan used the obtained results to insist on linking up other systems to build the OGAS with the ASPR at its core.
48

MacRae, Colin, Nick Wilson and Mark Pownceby. „Electron Microprobe Mapping as a Tool in Ilmenite Characterisation“. Microscopy and Microanalysis 7, S2 (August 2001): 710–11. http://dx.doi.org/10.1017/s1431927600029627.

Abstract:
The demand for accurate mineralogical data is increasing rapidly as exploration methods, prospect evaluation procedures and metallurgical optimisation studies become more sophisticated. In response to these needs, semi-automated and automated image processing systems which detect minerals using optical microscopy, scanning electron microscopy or electron microprobe microanalysis (EPMA) are becoming increasingly important tools in the exploration, mining and mineral processing industries. CSIRO Minerals has developed an EPMA-based imaging (or mapping) method for characterising ilmenite concentrates. The method uses a JEOL 8900R EPMA to collect elemental X-ray maps which are then processed using in-house software, Chimage. The mapping procedure differs from traditional automated identification systems in that no detailed a priori knowledge of the mineral phases is required. In addition, the Chimage software enables complete processing and interpretation of the data set off-line. Elemental data can be displayed in either scatter or ternary diagrams showing clusters which allow mineral phases to be identified.
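The clustering step described above, assigning each map pixel's elemental composition to a mineral phase, can be sketched roughly as follows. This is a hypothetical illustration only, not the Chimage implementation: the phase names, reference compositions and distance metric are assumptions for the sake of the example.

```python
# Hypothetical sketch: assign each pixel of an elemental map to the
# nearest reference phase composition (not the Chimage algorithm).

def nearest_phase(pixel, centroids):
    """Return the phase whose reference composition is closest to the pixel."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda name: dist(pixel, centroids[name]))

# Assumed reference compositions (wt% Fe, wt% Ti) for two candidate phases.
CENTROIDS = {
    "ilmenite": (36.8, 31.6),   # FeTiO3
    "rutile":   (0.0, 59.9),    # TiO2
}

# Toy "elemental map": one (Fe, Ti) measurement per pixel.
pixels = [(35.0, 30.0), (1.2, 58.0), (37.5, 32.1)]
phase_map = [nearest_phase(p, CENTROIDS) for p in pixels]
print(phase_map)  # ['ilmenite', 'rutile', 'ilmenite']
```

In practice the clusters would be found in scatter or ternary diagrams of the full element set rather than against fixed centroids, but the pixel-by-pixel assignment idea is the same.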
49

Shtepa, Denis, Vladimir Chen-Shan, Ivan Antonov and Evgenij Gritskevich. „OVERVIEW OF DATA CONTROL SYSTEMS AND METHODS AT DEVICE ENGINEERING ENTERPRISE“. Interexpo GEO-Siberia 6, Nr. 2 (2019): 99–104. http://dx.doi.org/10.33764/2618-981x-2019-6-2-99-104.

Annotation:
The article analyzes the situation related to compliance with the regime for preserving the integrity of information circulating in an automated information system, control of the required level of information protection in system databases, and the correctness of the algorithms used in processing. The situation considered is characteristic, first of all, of a device engineering enterprise in which such a system is already in operation or is planned for introduction. Recommendations are given for improving its information security, and promising directions for scientific study of the arising problems are outlined.
50

Archanjo, Gabriel A., and Fernando J. Von Zuben. „Genetic Programming for Automating the Development of Data Management Algorithms in Information Technology Systems“. Advances in Software Engineering 2012 (05.07.2012): 1–14. http://dx.doi.org/10.1155/2012/893701.

Annotation:
Information technology (IT) systems are present in almost all fields of human activity, with emphasis on processing, storage, and handling of datasets. Automated methods to provide access to data stored in databases have been proposed mainly for tasks related to knowledge discovery and data mining (KDD). However, for this purpose, the database is used only to query data in order to find relevant patterns associated with the records. Processes modelled on IT systems should manipulate the records to modify the state of the system. Linear genetic programming for databases (LGPDB) is a tool proposed here for automatic generation of programs that can query, delete, insert, and update records on databases. The obtained results indicate that the LGPDB approach is able to generate programs for effectively modelling processes of IT systems, opening the possibility of automating relevant stages of data manipulation, and thus allowing human programmers to focus on more complex tasks.
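The core idea above, a linear program whose instructions query, insert, update and delete database records, and whose fitness is how closely the resulting state matches a desired one, can be sketched as follows. This is an illustrative toy, not the LGPDB tool: the instruction set, table representation and fitness function are assumptions, and the evolutionary loop (mutation, crossover, selection) is omitted.

```python
# Illustrative sketch of the linear-program-over-database-primitives idea
# (not the LGPDB implementation). A "program" is a sequence of instructions,
# each a primitive operation on a toy table of records.

def op_insert(table, rec):
    table.append(dict(rec))

def op_delete(table, key):
    table[:] = [r for r in table if r.get("id") != key]

def op_update(table, key, field, value):
    for r in table:
        if r.get("id") == key:
            r[field] = value

def run_program(program, table):
    """Execute a linear sequence of (operation, args) instructions."""
    for op, args in program:
        op(table, *args)
    return table

def fitness(table, target):
    """Count records whose final state matches the target (higher is better)."""
    return sum(1 for r in table if r in target)

target = [{"id": 1, "qty": 5}, {"id": 2, "qty": 3}]
program = [(op_insert, ({"id": 1, "qty": 0},)),
           (op_insert, ({"id": 2, "qty": 3},)),
           (op_update, (1, "qty", 5))]
state = run_program(program, [])
print(fitness(state, target))  # 2
```

In a genetic programming setting, many such instruction sequences would be generated, scored by the fitness function against examples of the desired state transition, and the best ones varied and recombined until a program modelling the process emerges.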
