
Journal articles on the topic 'Numerical analysis – Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Numerical analysis – Data processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Martyniuk, Tatiana, Andrii Kozhemiako, Bohdan Krukivskyi, and Antonina Buda. "Associative Operations Based on Difference-Slice Data Processing." Herald of Khmelnytskyi National University. Technical Sciences 311, no. 4 (August 2022): 159–63. http://dx.doi.org/10.31891/2307-5732-2022-311-4-159-163.

Full text
Abstract:
Associative operations are effective for solving such applied problems as sorting, searching for particular features, and identifying extreme (maximum/minimum) elements in data sets. For example, determining the maximum element by sorting a numerical array is a suitable way to implement the competition mechanism in neural networks. Likewise, determining the median of a numerical series by sorting significantly speeds up median filtering of images and signals; in this case, median filtering requires sorting with ranking of the elements of the number array. This paper analyses how associative operations on the elements of a vector (one-dimensional) array of numbers can be implemented via processing by difference slices (DS). A simplified description of DS processing is given, with selection of the common part of the vector's elements and of the difference slice formed from them. In addition, elements of a binary mask matrix are used as an example of a topological feature matrix. The proposed approach forms the ranks of the elements of the initial vector as a result of sorting their numerical values in ascending order. The paper gives a schematic representation of the DS processing procedure, as well as a tabulated example of DS processing of a number vector that shows the sequence in which the numbers of the sorted array and the ranks of the numbers of the initial array are formed. The proposed use of topological features thus makes it possible to determine the comparative relations between the elements of the numerical array during spatially distributed DS processing, and confirms the versatility of the approach.
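The abstract's difference-slice mechanics are not fully specified here, but the end product it describes — each element of the initial vector assigned the rank it receives under an ascending sort — can be sketched in Python (all names hypothetical):

```python
def ranks_by_sorting(values):
    """Assign each element of `values` the 1-based rank it receives
    when the array is sorted in ascending order."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

vec = [7, 2, 9, 4]
print(ranks_by_sorting(vec))  # → [3, 1, 4, 2]
```

The rank array is exactly what median filtering needs: the element whose rank is (n + 1) / 2 is the window median.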
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Fan Xiu, Xing Ping Wen, and Shao Jin Yi. "Numerical Measurement and Data Processing of Air Pollution." Applied Mechanics and Materials 577 (July 2014): 1219–22. http://dx.doi.org/10.4028/www.scientific.net/amm.577.1219.

Full text
Abstract:
Relational analysis is a data-processing method used to rank the degree of correlation of influencing factors in a system with uncertain information, where common mathematical methods are not applicable for describing the relationships. A new method, the equivalent numerical relational degree (ENRD) model, was developed to evaluate the effect of different factors on air pollution. The effects on atmospheric environmental quality of port throughput, amount of coal, industrial output, motor vehicle ownership, investment in fixed assets, and real estate development and housing construction area were studied. The degrees of correlation calculated with ENRD for these factors were 0.7947, 0.7943, 0.7289, 0.7238, 0.6702, and 0.6527, respectively. From these values, the relations of the factors to atmospheric environmental quality can be described and evaluated; port throughput and amount of coal were the relatively major factors.
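The ENRD formulas are not given in the abstract; as a reference point, the classical (Deng) grey relational degree that this family of methods builds on can be sketched as follows, with hypothetical series and the customary distinguishing coefficient rho = 0.5:

```python
import numpy as np

def grey_relational_degree(reference, factor, rho=0.5):
    """Deng's grey relational degree between a reference series
    (e.g. an air-quality index) and one factor series.
    Assumes the normalized series are not identical (otherwise the
    denominator degenerates to zero)."""
    # normalize both series to [0, 1] so units do not matter
    norm = lambda x: (x - x.min()) / (x.max() - x.min())
    delta = np.abs(norm(reference) - norm(factor))
    d_min, d_max = delta.min(), delta.max()
    coeffs = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeffs.mean()

ref = np.array([1.0, 2.0, 3.0, 4.0])  # hypothetical air-quality index
fac = np.array([2.0, 3.0, 5.0, 9.0])  # hypothetical factor series
print(round(grey_relational_degree(ref, fac), 4))
```

A degree closer to 1 means the factor's trend tracks the reference series more closely, which is how the six factors in the abstract are ranked.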
3

Onishchenko, P. S., K. Y. Klyshnikov, and E. A. Ovcharenko. "Artificial Neural Networks in Cardiology: Analysis of Numerical and Text Data." Mathematical Biology and Bioinformatics 15, no. 1 (February 18, 2020): 40–56. http://dx.doi.org/10.17537/2020.15.40.

Full text
Abstract:
This review discusses work on the use of artificial neural networks for processing numerical and textual data. A number of widely used approaches are considered: decision support systems; prediction systems that forecast the outcomes of various treatments of cardiovascular diseases; and risk assessment systems. Artificial neural networks are shown to be a viable alternative to standard methods for processing patients' clinical data. Using neural network technologies to create automated assistants for the attending physician will make it possible to provide medical services better and more efficiently.
4

Yu, Zhi Wei, and Sheng Guo Cheng. "Contrast Analysis of Data Processing Method Based on the MATLAB in Compaction Test." Applied Mechanics and Materials 170-173 (May 2012): 611–14. http://dx.doi.org/10.4028/www.scientific.net/amm.170-173.611.

Full text
Abstract:
The data of a compaction test are processed with a numerical method and a least-squares fitting method, respectively, in MATLAB. A simple comparative analysis of the two results leads to the conclusion that when the distribution of the test data points is consistent with the characteristics of soil compaction, the numerical method is better and more accurate.
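The paper's exact MATLAB procedure is not reproduced here; a minimal least-squares sketch of the underlying idea — fitting a parabola to compaction-test points and reading the optimum water content off its vertex — with hypothetical data:

```python
import numpy as np

# Hypothetical compaction-test points: water content (%) vs dry density (g/cm^3)
w = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
rho_d = np.array([1.72, 1.81, 1.86, 1.84, 1.76])

# Least-squares parabola; its vertex estimates the optimum water content
a, b, c = np.polyfit(w, rho_d, 2)
w_opt = -b / (2 * a)
rho_max = np.polyval([a, b, c], w_opt)
print(f"optimum water content ≈ {w_opt:.2f}%, max dry density ≈ {rho_max:.3f} g/cm^3")
```

A numerical (interpolation-based) method would instead locate the peak directly on a curve passed through the data points, which is what the paper finds preferable when the points already follow the compaction-curve shape.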
5

Suresh, G. V., and E. V. Srinivasa Reddy. "Uncertain Data Analysis with Regularized XGBoost." Webology 19, no. 1 (January 20, 2022): 3722–40. http://dx.doi.org/10.14704/web/v19i1/web19245.

Full text
Abstract:
Uncertainty is a ubiquitous element of available knowledge about the real world. Data sampling error, obsolete sources, network latency, and transmission error all contribute to it, and such uncertainty has to be handled cautiously or the classification results may be unreliable or even erroneous. Numerous methodologies have been developed to understand and control uncertainty in data, which takes many forms: inconsistency, imprecision, ambiguity, incompleteness, vagueness, unpredictability, noise, and unreliability. Missing information is inevitable in real-world data sets. While some conventional multiple imputation approaches are well studied and have shown empirical validity, they have limitations in processing large datasets with complex structures, and they tend to be computationally inefficient for medium and large datasets. In this paper, we propose a scalable multiple imputation framework based on XGBoost, bootstrapping, and regularization. XGBoost, one of the fastest implementations of gradient-boosted trees, automatically captures interactions and non-linear relations in a dataset while achieving high computational efficiency with the aid of bootstrapping and regularization. In the context of high-dimensional data, this methodology provides less biased estimates and reflects imputation variability better than previous regression approaches. We validate our adaptive imputation approaches against standard methods on numerical and real data sets and show promising results.
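A sketch of the bootstrapped multiple-imputation idea, with an ordinary least-squares regressor standing in for XGBoost and all data hypothetical — each imputation refits on a bootstrap resample of the complete rows so that the spread of the draws reflects imputation uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_impute(X, col, m=5):
    """Multiple imputation of NaNs in column `col` of matrix X.
    A plain least-squares regressor stands in for XGBoost here;
    returns the mean and spread of the m imputation draws."""
    miss = np.isnan(X[:, col])
    obs = ~miss
    predictors = np.delete(X, col, axis=1)
    A_obs = np.c_[np.ones(obs.sum()), predictors[obs]]   # design matrix, observed rows
    y_obs = X[obs, col]
    A_mis = np.c_[np.ones(miss.sum()), predictors[miss]]  # rows to impute
    draws = []
    for _ in range(m):
        idx = rng.integers(0, len(y_obs), len(y_obs))  # bootstrap resample
        beta, *_ = np.linalg.lstsq(A_obs[idx], y_obs[idx], rcond=None)
        draws.append(A_mis @ beta)
    return np.mean(draws, axis=0), np.std(draws, axis=0)
```

Swapping the `lstsq` fit for an `xgboost` regressor, as the paper proposes, keeps the same resample-fit-predict loop while capturing non-linear relations.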
6

Ames, W. F. "Underwater acoustic data processing." Mathematics and Computers in Simulation 31, no. 6 (February 1990): 594. http://dx.doi.org/10.1016/0378-4754(90)90066-r.

Full text
7

Kuzminets, Nikolai, and Yurii Dubovenko. "Constraints of optimization of statistical analysis of data of engineering monitoring of transport networks." Automobile Roads and Road Construction, no. 109 (2021): 157–65. http://dx.doi.org/10.33744/0365-8171-2021-109-157-165.

Full text
Abstract:
Time series from the technical monitoring of transport networks contain interference and omissions, so they require special statistical analysis. Known statistical packages do not cover the full processing cycle for large time series, and a linear timeline for processing periodic data is not available in digital statistics packages. The sliding-window approach is suitable for processing interrupted time series; its disadvantages are a restriction on the length of the series and sensitivity to data gaps. The time-series processing graph therefore needs internal optimization. The necessary optimization steps are determined: store the data in an internal database, build data samples on a single time scale, sample on the basis of the meta-description of the series, average in a sliding window, apply calendar bindings and omission masks, generalize the graphs, store graphics in vector format, and so on. The conditions for studying the series are identified: a database, a calendar structure of the data, processing of gaps, a package of numerical methods of analysis, and processing in a sliding window.
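A minimal sketch of the sliding-window averaging step made tolerant of data gaps, assuming (as is common but not stated in the abstract) that omissions are encoded as NaN:

```python
import numpy as np

def sliding_mean_with_gaps(series, window):
    """Centered moving average that ignores NaN gaps inside each
    window and returns NaN only where a window holds no data at all."""
    x = np.asarray(series, dtype=float)
    half = window // 2
    out = np.full(x.shape, np.nan)
    for i in range(len(x)):
        chunk = x[max(0, i - half): i + half + 1]
        valid = chunk[~np.isnan(chunk)]
        if valid.size:
            out[i] = valid.mean()
    return out
```

Masking gaps inside the window, rather than discarding any window touching a gap, is one way around the gap sensitivity the abstract criticizes.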
8

Siregar, Hans Elmaury Andreas. "GROUND PENETRATING RADAR DATA ANALYSIS BY USING MODELLING WITH FINITE DIFFERENCE METHOD: CASE STUDY IN BELAWAN PORT." Buletin Sumber Daya Geologi 11, no. 1 (May 10, 2016): 15–24. http://dx.doi.org/10.47599/bsdg.v11i1.7.

Full text
Abstract:
Ground-penetrating radar (GPR) is a non-destructive geophysical method well suited to identifying subsurface objects at penetration depths of less than 70 m. High data resolution, together with relatively quick and manageable data acquisition, makes it a convenient supporting method for supplementing near-surface data obtained with other geophysical methods. The penetration depth of GPR varies with antenna frequency, so before field acquisition some numerical simulation should be carried out to choose the antenna frequency and processing technique with which the depth of the target zone can be reached. The finite difference (FD) method is one of the numerical techniques most widely used to solve differential equations. Using the FD method, the electromagnetic wave equation can be solved and the numerically simulated radar image displayed. From these simulated images, the relationship between frequency and penetration depth in the media used is obtained. The media used in this simulation are sand, clay, sandy clay, clayey sand, and concrete. The numerical simulations show that the GPR method is able to distinguish the boundary layers between these media. A processing workflow was developed to establish suitable processing stages for producing interpretable, high-resolution radar images. The acquisition and processing techniques from the simulation were implemented in a field experiment and proved very helpful for understanding the character of the GPR signal in subsurface mapping of Belawan port.
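A minimal 1-D finite-difference time-domain sketch of the kind of simulation the abstract describes (normalized units, an assumed permittivity for the lower medium; the paper's actual GPR modelling is more elaborate):

```python
import numpy as np

# 1-D FDTD: free space above grid point 120, a clay-like dielectric
# (assumed eps_r = 9) below; Courant factor 0.5 for stability.
nz, nt = 200, 400
ez = np.zeros(nz)        # electric field on the grid
hy = np.zeros(nz - 1)    # magnetic field between grid points
eps_r = np.ones(nz)
eps_r[120:] = 9.0

for n in range(nt):
    hy += np.diff(ez) * 0.5                      # update H from curl E
    ez[1:-1] += np.diff(hy) * 0.5 / eps_r[1:-1]  # update E from curl H
    ez[20] += np.exp(-((n - 30) / 10.0) ** 2)    # soft Gaussian source pulse

print(f"peak field transmitted into dielectric: {np.abs(ez[120:]).max():.3f}")
```

The partial transmission and reflection at grid point 120 is the numerical analogue of the layer boundaries the paper says GPR can distinguish.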
9

Novikov-Borodin, Andrey. "Experimental Data Processing Using Shift Methods." EPJ Web of Conferences 226 (2020): 03014. http://dx.doi.org/10.1051/epjconf/202022603014.

Full text
Abstract:
Numerical methods of step-by-step and combined shifts are proposed for correcting and reconstructing experimental data convolved with different blur kernels. The methods use a shift technique for direct deconvolution of the experimental data; they are fast and effective for data reconstruction, especially in the case of discrete measurements. A comparative analysis of the proposed methods is presented, and reconstruction inaccuracies for different blur kernels, data volumes, and noise levels are estimated. Examples are given of using the shift methods in processing statistical data from TOF neutron spectrometers and in proton therapy treatment planning. Multi-dimensional data processing with shift methods is also considered, with examples of restoring 2D images blurred by uniform motion and distorted with Gaussian-like blur kernels.
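The shift methods themselves are not fully specified in the abstract; as a sketch of the underlying idea of direct deconvolution on discrete data, one can subtract shifted, scaled copies of the already-recovered signal, which works for noise-free measurements when the kernel's leading tap is nonzero:

```python
def deconvolve_by_shifts(y, h):
    """Recover x from y = h * x (discrete convolution) by stepping
    through y and subtracting shifted, scaled copies of the samples
    of x already found. Requires h[0] != 0 and noise-free y."""
    n = len(y) - len(h) + 1
    x = [0.0] * n
    for i in range(n):
        acc = y[i]
        for k in range(1, min(len(h), i + 1)):
            acc -= h[k] * x[i - k]
        x[i] = acc / h[0]
    return x
```

With noisy data this recursion accumulates error, which is why the paper analyzes reconstruction inaccuracy across noise levels.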
10

Dobronets, Boris, and Olga Popova. "Numerical Analysis for Problems of Remote Sensing with Random Input Data." E3S Web of Conferences 75 (2019): 01004. http://dx.doi.org/10.1051/e3sconf/20197501004.

Full text
Abstract:
The study is devoted to remote sensing data processing using models with random input data. In this article, we propose a new approach to calculating functions with random arguments: a technique of fast computation based on parallel computing and numerical probability analysis. To calculate a function with random arguments, we apply one of the basic concepts of numerical probabilistic analysis, the probabilistic extension. To implement the fast-computation technique, a new method based on parallel recursive calculation is proposed.
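A probabilistic extension evaluates a deterministic function on random arguments. Numerical probabilistic analysis does this with arithmetic on piecewise-polynomial densities; as a simpler stand-in, a Monte Carlo sketch:

```python
import random

def probabilistic_extension(f, samplers, n=100_000, seed=1):
    """Approximate the distribution of f(X1, ..., Xk) for independent
    random arguments by sampling (a Monte Carlo stand-in for the
    density arithmetic used in numerical probabilistic analysis)."""
    rng = random.Random(seed)
    values = [f(*(s(rng) for s in samplers)) for _ in range(n)]
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, var

# Example: Z = X + Y with X, Y ~ U(0, 1); exactly E[Z] = 1, Var[Z] = 1/6
m, v = probabilistic_extension(lambda x, y: x + y,
                               [lambda r: r.random(), lambda r: r.random()])
```

The independent samples map naturally onto the parallel computation the paper emphasizes, since each draw can be evaluated on a separate worker.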
11

Acho, Leonardo, Gisela Pujol-Vázquez, and José Gibergans-Báguena. "Electronic Device and Data Processing Method for Soil Resistivity Analysis." Electronics 10, no. 11 (May 27, 2021): 1281. http://dx.doi.org/10.3390/electronics10111281.

Full text
Abstract:
This paper presents a mathematical algorithm and an electronic device to study soil resistivity. The system was based on introducing a time-varying electrical signal into the soil by using two electrodes and then collecting the electrical response of the soil. Hence, the proposed electronic system relied on a single-phase DC-to-AC converter followed by a transformer for the soil-to-circuit coupling. By using the maximum likelihood statistical method, a mathematical algorithm was realized to discern soil resistivity. The novelty of the numerical approach consisted of modeling a set of random data from the voltmeters by using a parametric uniform probability distribution function, and then, a parametric estimation was carried out for dataset analysis. Furthermore, to validate our contribution, a two-electrode laboratory experiment with soil was also designed. Finally, and according to the experimental outcomes, our electronic circuit and mathematical data analysis approach were able to detect different soil resistivities.
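For the parametric uniform model the abstract mentions, the maximum-likelihood estimate has a closed form — the tightest interval covering the data — sketched here with hypothetical voltmeter readings:

```python
def uniform_mle(samples):
    """Maximum-likelihood estimates of (a, b) for U(a, b): the
    likelihood (b - a)^(-n) is maximized by the tightest interval
    containing all samples, i.e. the sample minimum and maximum."""
    return min(samples), max(samples)

readings = [2.31, 2.47, 2.40, 2.36, 2.52]  # hypothetical voltmeter data (V)
a, b = uniform_mle(readings)
print(f"estimated support: [{a}, {b}]")
```

The estimated support width then serves as the dataset summary from which a resistivity-dependent quantity can be discerned; the paper's mapping from these parameters to soil resistivity is not reproduced here.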
12

Podvesovskii, A. G., E. V. Karpenko, D. G. Lagerev, and A. N. Baburin. "ENSEMBLE MODELS FOR INTELLIGENT ANALYSIS OF SOCIOLOGICAL DATA." Vestnik komp'iuternykh i informatsionnykh tekhnologii, no. 207 (September 2021): 43–52. http://dx.doi.org/10.14489/vkit.2021.09.pp.043-052.

Full text
Abstract:
The paper investigates an approach to processing sociological information that applies intelligent data analysis methods to the results of a questionnaire survey. The advantages of intelligent analysis of sociological data over traditional statistical processing are discussed, as are the implementation features and limits of applicability of various intelligent data analysis methods in problems of association, clustering, and classification. The structure and representation of respondents' survey data are considered, and the appropriateness and advantages of processing them with a combination of intelligent analysis methods within an ensemble of models are substantiated. An ensemble structure is proposed that combines association rules, clustering algorithms, and decision trees; it makes it possible to jointly process the numerical and categorical data contained in respondents' answers and to interpret the results of data clustering. The paper describes the use of the constructed ensemble for processing and analyzing data from a sociological survey conducted as part of the annual project for monitoring the drug abuse situation in the Bryansk region in 2013–2018. Using an ensemble of intelligent data analysis models to process survey results not only reveals patterns that traditional statistical processing cannot detect, but also increases the reliability, completeness, and coherence of the analysis results, giving the analyst a holistic, systemic picture of the social phenomenon or process under study.
13

Kane, Michael J. "Towards a Grammar for Processing Clinical Trial Data." R Journal 13, no. 1 (2021): 563. http://dx.doi.org/10.32614/rj-2021-052.

Full text
14

Parhwal, Divyam. "DATA ANALYSIS OF METROLOGICAL DATA." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 08, no. 05 (May 22, 2024): 1–5. http://dx.doi.org/10.55041/ijsrem34399.

Full text
Abstract:
This project report presents an interactive visualization and analytical study of meteorological records for Finland. The records were produced by integrating three existing infrastructures for numerical weather prediction, observational information, and satellite image processing. The meteorological data used in the study consist of near-surface atmospheric elements including wind direction, apparent temperature, cloud layers, ceiling height, visibility, present weather, wind speed, cloud cover, precipitation amount, and so on. The data comprise hourly records for Finland over ten years, from 2006-04-01 00:00:00 to 2016-09-09 23:00:00. The analysis was performed for the 2-m surface temperature. Through this analysis, insights into changing weather patterns and environmental conditions in Finland are presented in an interactive visualization, providing a foundation for understanding and addressing the impacts of global warming in the region. The project's main objective is to present a complete analysis of the influence of global warming on apparent temperature and humidity in Finland over the ten years from 2006 to 2016. Keywords — Data Analysis, Data Visualization, Meteorological Data, Climate Change, Global Warming, Apparent Temperature, Humidity
15

Xu, Dong, Yan Ma, Jining Yan, Peng Liu, and Lajiao Chen. "Spatial-feature data cube for spatiotemporal remote sensing data processing and analysis." Computing 102, no. 6 (December 20, 2018): 1447–61. http://dx.doi.org/10.1007/s00607-018-0681-y.

Full text
16

Chernykh, A. M. "Blockchain and Processing of Judicial Data." Rossijskoe pravosudie, no. 9 (August 23, 2021): 54–62. http://dx.doi.org/10.37399/issn2072-909x.2021.9.54-62.

Full text
Abstract:
Improving the electronic document management system of the judicial system requires new information technologies. Conducting trials with guaranteed protection of the documentary data of all trial participants against change or loss will reduce the corruption component and increase the parties' mutual confidence in litigation documents. A system analysis was made of the possibility of using a distributed registry of databases and building on it a secure document exchange network using blockchain technology. The work defines directions for the interaction of the information resources of federal state systems and the information system of justice on a blockchain platform, in the interest of ensuring the openness of user services of the judicial system and the security of the legal data of trial participants. A conceptual-logical model is proposed for the interaction of the information resources of parties with increased requirements for mutual trust, based on blockchain technology, for maintaining a distributed data register and processing multidimensional data (numerical, text, graphic, coordinate, etc.) with a high degree of information security over a long period.
17

Wortha, Silke M., Elise Klein, Katharina Lambert, Tanja Dackermann, and Korbinian Moeller. "The relevance of basic numerical skills for fraction processing: Evidence from cross-sectional data." PLOS ONE 18, no. 1 (January 31, 2023): e0281241. http://dx.doi.org/10.1371/journal.pone.0281241.

Full text
Abstract:
Recent research indicated that fraction understanding is an important predictor of later mathematical achievement. In the current study we investigated associations between basic numerical skills and students’ fraction processing. We analyzed data of 939 German secondary school students (age range = 11.92 to 18.00 years) and evaluated the determinants of fraction processing considering basic numerical skills as predictors (i.e., number line estimation, basic arithmetic operations, non-symbolic magnitude comparison, etc.). Additionally, we controlled for general cognitive ability, grade level, and sex. We found that multiplication, subtraction, conceptual knowledge, number line estimation, and basic geometry were significantly associated with fraction processing beyond significant associations of general cognitive ability and sex. Moreover, relative weight analysis revealed that addition and approximate arithmetic should also be considered as relevant predictors for fraction processing. The current results provide food for thought that further research should focus on investigating whether recapitulating basic numerical content in secondary school mathematics education can be beneficial for acquiring more complex mathematical concepts such as fractions.
18

He, Qing Qiang, Jia Sun, Jun You Zhao, Bao Min Yuan, and Li Jian Xu. "Numerical Analysis of Multi-Pass H-Beam Hot Rolling Processing." Applied Mechanics and Materials 190-191 (July 2012): 385–89. http://dx.doi.org/10.4028/www.scientific.net/amm.190-191.385.

Full text
Abstract:
Hot rolling is a basic metal forming technique used to transform a preformed shape into final products more suitable for further processing. As the hot stock progresses through the forming surfaces, its shape eventually reaches a constant state. Under the assumption that the forming process has reached a steady state, a simulation technique based on element re-meshing was constructed to analyze the H-beam hot rolling process. The technique comprises the following steps. The solution is halted as soon as the steady-state criteria are met, and the plane of elements that first satisfied them is written to a database (SSES for short). A two-dimensional model is created to simulate the cooling of the hot stock between two roll passes, and a geometric part is generated and meshed with quadrilateral elements to transfer the nodal temperatures. A new three-dimensional model, extruded from the two-dimensional model, is then constructed to simulate the next roll pass, with the nodal temperatures and the element integration-point equivalent plastic strain (PEEQ, which quantifies the extent of plastic deformation in classical metal plasticity models) transferred from the new two-dimensional model and the first three-dimensional model, respectively. A Gleeble-1500 tester was used to obtain the true stress and true plastic strain data for modeling the yield behavior of material Q235. The effectiveness of the simulation technique is demonstrated by a simulation of an 11-pass H-beam rolling process.
19

Xu, Li, and Dechun Zheng. "Data Acquisition and Performance Analysis of Image-Based Photonic Encoder Using Field-Programmable Gate Array (FPGA)." Journal of Nanoelectronics and Optoelectronics 18, no. 12 (December 1, 2023): 1475–83. http://dx.doi.org/10.1166/jno.2023.3542.

Full text
Abstract:
With the continuous advancement of numerical control technology, the requirements for the position detection resolution, precision, and size of photoelectric encoders in computer numerical control machine tools are increasingly stringent. In pursuit of high resolution and precision, this work investigates the principles of electronic subdivision and embedded hardware and designs a high-precision image-based photonic encoder using a field-programmable gate array (FPGA). The encoder captures the pattern of a rotating code disk with a complementary metal-oxide-semiconductor (CMOS) image sensor. Its core is the XC6SLX25T chip from the Spartan-6 series, with peripheral circuits comprising only A/D sampling and low-pass signal processing circuits. The FPGA module handles digital signal reception, waveform conversion, quadrature-frequency coarse-count calculation, fine-count subdivision calculation, and the final position calculation of the encoder. In experiments, the raw output signal of the photonic encoder contained considerable noise; after the signal processing module, the A- and B-phase signals were free of the earlier interference, with a phase difference of 90°, meeting the requirements of the subsequent signal processing stages. After fine-count subdivision, the resolution within one cycle increases significantly: after quadrupling the frequency, 30 subdivisions are performed within each cycle. Noise was introduced into the image positioning, and positioning was evaluated under different noise conditions. Experimental results show that an improved centroid algorithm helps further suppress noise and enhance measurement accuracy in the design of image-based photonic encoders.
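The coarse-count stage ("quadrupling the frequency") can be illustrated with a standard quadrature state table — four counts per electrical cycle of the A/B phases. This is a generic sketch with an assumed direction convention, not the paper's FPGA logic:

```python
# Map each valid (A_prev, B_prev, A_cur, B_cur) transition to a count step.
TRANSITIONS = {
    (0, 0, 0, 1): +1, (0, 1, 1, 1): +1, (1, 1, 1, 0): +1, (1, 0, 0, 0): +1,
    (0, 1, 0, 0): -1, (1, 1, 0, 1): -1, (1, 0, 1, 1): -1, (0, 0, 1, 0): -1,
}

def quadrature_count(samples):
    """Count edges from a sequence of (A, B) samples: four counts per
    electrical cycle, i.e. the 'quadrupled frequency' coarse count."""
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        count += TRANSITIONS.get(prev + cur, 0)  # unknown transitions ignored
    return count

# One full forward cycle 00 -> 01 -> 11 -> 10 -> 00 yields 4 counts
cycle = [(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]
print(quadrature_count(cycle))  # → 4
```

The fine count described in the abstract then interpolates a further 30 subdivisions inside each of these quadrature counts from the imaged code-disk pattern.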
20

Arnold, Taylor. "A Tidy Data Model for Natural Language Processing using cleanNLP." R Journal 9, no. 2 (2017): 248. http://dx.doi.org/10.32614/rj-2017-035.

Full text
21

Baigereyev, Dossan, Syrym Kasenov, and Laura Temirbekova. "Empowering geological data analysis with specialized software GIS modules." Indonesian Journal of Electrical Engineering and Computer Science 34, no. 3 (June 1, 2024): 1953. http://dx.doi.org/10.11591/ijeecs.v34.i3.pp1953-1964.

Full text
Abstract:
This research is devoted to the development of a geographic information system (GIS) for the analysis of geological data. It presents two specialized software modules designed to solve complex geological problems related to the continuation of potential fields toward disturbing masses and to magnetotelluric sounding. These modules are integrated into the QGIS environment, offering efficient data processing and analysis capabilities and contributing to a deeper understanding of geological structures. The study presents a mathematical model for the problem of magnetotelluric sounding (MTS) and the continuation of potentials toward the perturbed masses, and demonstrates numerical results obtained with the developed algorithm. To confirm the accuracy of the model, a comparative analysis was carried out against empirical data for various chemical elements; it showed high accuracy, especially at shallow depths, with an error rate of less than 2%. In addition, the study highlights the importance of a powerful GIS for the analysis and interpretation of geological data, including geochemical, geophysical, and remote sensing information. The advanced functionality of QGIS simplifies data processing and visualization, making it an invaluable tool for geologists and researchers.
22

Lei, Changyi, Yunbo Bi, and Jiangxiong Li. "Continuous Numerical Analysis of Slug Rivet Installation Process Using Parameterized Modeling Method." Coatings 11, no. 2 (February 6, 2021): 189. http://dx.doi.org/10.3390/coatings11020189.

Full text
Abstract:
The slug rivet installation process is complex, and many parameters are involved in the riveting deformation. The workload and time cost of a traditional simulation study are very high, since a traditional numerical model must be modified manually each time the riveting parameters change, and post-simulation data processing is another complex task. To improve this situation, this paper presents a parameterized modeling method in which the modeling process and the data processing algorithm are implemented in Python script. The parameterized model automatically and continuously rebuilds itself, without manual intervention, as the riveting parameters are updated, and the post-processing analysis is conducted and saved automatically as well. The paper then carries out a continuous analysis to illustrate the impact of riveting parameters on riveting quality; the parameterized model runs 41 times until the riveting parameter goes out of range. The parameterized modeling method is a useful tool for simulation studies and paves the way for further investigations.
23

Grakovski, A., and A. Alexandrov. "SPECTRAL METHOD FOR NUMERICAL CALCULATION OF DERIVATIVES IN DIGITAL PROCESSING OF SUBSURFACE RADAR SOUNDING SIGNALS." Mathematical Modelling and Analysis 12, no. 1 (March 31, 2005): 31–40. http://dx.doi.org/10.3846/13926292.2005.9637268.

Full text
Abstract:
The practical realization of the morphological analysis of radar subsurface sounding as the processing of digital input data is presented, together with processing results for real data. The analysis relies on a new numerical differentiation algorithm in the frequency domain. It is compared with a central finite difference scheme, and the computations demonstrate an approximately 80-fold decrease in the maximum errors.
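The frequency-domain differentiation idea can be sketched with the FFT — multiply each Fourier coefficient by ik and transform back — and compared against central differences on a smooth periodic signal:

```python
import numpy as np

def spectral_derivative(y, dx):
    """Differentiate a periodic, uniformly sampled signal via the FFT:
    multiply each Fourier coefficient by i*k and transform back."""
    n = len(y)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(y)))

# Compare with central differences on sin(3x), whose derivative is known
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
y = np.sin(3 * x)
exact = 3 * np.cos(3 * x)
dx = x[1] - x[0]
err_spec = np.abs(spectral_derivative(y, dx) - exact).max()
err_cd = np.abs((np.roll(y, -1) - np.roll(y, 1)) / (2 * dx) - exact).max()
print(f"spectral max error: {err_spec:.2e}, central-difference max error: {err_cd:.2e}")
```

For band-limited signals the spectral derivative is exact to rounding, so the gap here far exceeds the roughly 80-fold improvement the paper reports on its real radar data.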
24

Hu, Xiao Min, Hui Wang, and Zhi Xing Hu. "Data Processing in Biological Behavior Analysis of a Delayed Impulsive Lotka-Volterra Model with Mutual Interference." Advanced Materials Research 1046 (October 2014): 396–402. http://dx.doi.org/10.4028/www.scientific.net/amr.1046.396.

Full text
Abstract:
A delayed impulsive Lotka-Volterra model with mutual interference was established. With the help of Mawhin’s Continuation Theorem in coincidence degree theory, a sufficient condition is found for the existence of positive periodic solutions of the system. A numerical simulation is given to illustrate main results.
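The cited model's delays and impulses are beyond a short sketch, but the kind of numerical simulation the abstract mentions can be illustrated by integrating the classical (undelayed, non-impulsive) Lotka-Volterra system with a fixed-step RK4 scheme and hypothetical coefficients:

```python
def lotka_volterra(x, y, a=1.0, b=0.5, c=0.5, d=1.0):
    """Classical predator-prey right-hand side (coefficients assumed)."""
    return a * x - b * x * y, c * x * y - d * y

def rk4_step(x, y, h):
    """One fourth-order Runge-Kutta step of size h."""
    k1 = lotka_volterra(x, y)
    k2 = lotka_volterra(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = lotka_volterra(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = lotka_volterra(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x, y = 1.5, 1.0  # initial prey and predator populations
for _ in range(10_000):
    x, y = rk4_step(x, y, 0.001)
print(f"populations at t = 10: prey {x:.3f}, predator {y:.3f}")
```

The bounded, positive orbit produced here corresponds to the kind of positive periodic solution whose existence the paper establishes for the richer delayed impulsive model.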
25

Yang, Hao-Yi, Zhi-Rong Lin, and Ko-Chih Wang. "Efficient and Portable Distribution Modeling for Large-Scale Scientific Data Processing with Data-Parallel Primitives." Algorithms 14, no. 10 (September 29, 2021): 285. http://dx.doi.org/10.3390/a14100285.

Full text
Abstract:
The use of distribution-based data representation to handle large-scale scientific datasets is a promising approach. Distribution-based approaches often transform a scientific dataset into many distributions, each of which is calculated from a small number of samples. Most of the proposed parallel algorithms focus on modeling single distributions from many input samples efficiently, but these may not fit the large-scale scientific data processing scenario because they cannot utilize computing resources effectively. Histograms and the Gaussian Mixture Model (GMM) are the most popular distribution representations used to model scientific datasets. Therefore, we propose the use of multi-set histogram and GMM modeling algorithms for the scenario of large-scale scientific data processing. Our algorithms are developed by data-parallel primitives to achieve portability across different hardware architectures. We evaluate the performance of the proposed algorithms in detail and demonstrate use cases for scientific data processing.
APA, Harvard, Vancouver, ISO, and other styles
26

Bobrov, Konstantin, and Aleksandr Iskoldsky. "Stability of numerical estimations of time series characteristics." Izvestiya VUZ. Applied Nonlinear Dynamics 10, no. 1-2 (July 31, 2002): 127–36. http://dx.doi.org/10.18500/0869-6632-2002-10-1-127-136.

Full text
Abstract:
Numerical methods for the analysis of data (finite ordered sequences of natural binary codes) corresponding to fragments of trajectories, obtained by numerically solving a finite number of nonlinear ordinary differential equations, are discussed. These equations represent deterministic dissipative chaotic dynamic systems. The measuring properties of such sequences are characterized by an estimation, i.e., the code obtained by processing the source data sequence with a given algorithm implemented on a computer, without the participation of an expert. The concept of stability of the estimation is formalized. The stability of the obtained estimations with respect to numerically simulated small variations of the registration scheme parameters and of the processing algorithm parameters is investigated. Examples of data sequences with parameters (digit capacity, time step, sequence length) typical of many real experiments are considered. It is shown that an estimation obtained by an algorithm based on the analysis of properties that depend essentially on the behavior of the trajectory at every point is unstable. At the same time, an estimation obtained by an algorithm based on the analysis of properties that depend essentially on some "average" behavior of the trajectory over different points is stable.
APA, Harvard, Vancouver, ISO, and other styles
27

Akanbi, Adeyinka, and Muthoni Masinde. "A Distributed Stream Processing Middleware Framework for Real-Time Analysis of Heterogeneous Data on Big Data Platform: Case of Environmental Monitoring." Sensors 20, no. 11 (June 3, 2020): 3166. http://dx.doi.org/10.3390/s20113166.

Full text
Abstract:
In recent years, the application and wide adoption of Internet of Things (IoT)-based technologies have increased the proliferation of monitoring systems, which has consequently exponentially increased the amount of heterogeneous data generated. Processing and analysing the massive amount of data produced is cumbersome and is gradually moving from the classical 'batch' processing (extract, transform, load, ETL) technique to real-time processing. For instance, in the environmental monitoring and management domain, time-series data and historical datasets are crucial for prediction models. However, the environmental monitoring domain still utilises legacy systems, which complicates the real-time analysis of the essential data, hinders integration with big data platforms, and entails reliance on batch processing. Herein, as a solution, a distributed stream processing middleware framework for real-time analysis of heterogeneous environmental monitoring and management data is presented and tested on a cluster using open-source technologies in a big data environment. The system ingests datasets from legacy systems and sensor data from heterogeneous automated weather systems, irrespective of the data types, into Apache Kafka topics using Kafka Connect APIs for processing by the Kafka stream processing engine. The stream processing engine executes the predictive numerical models and algorithms represented in event processing (EP) languages for real-time analysis of the data streams. To prove the feasibility of the proposed framework, we implemented the system using a case study scenario of drought prediction and forecasting based on the Effective Drought Index (EDI) model. Firstly, we transform the predictive model into a form that can be executed by the streaming engine for real-time computing. Secondly, the model is applied to the ingested data streams and datasets to predict drought through persistent querying of the infinite streams to detect anomalies. Finally, a performance evaluation of the distributed stream processing middleware infrastructure is conducted to determine the real-time effectiveness of the framework.
APA, Harvard, Vancouver, ISO, and other styles
28

Zhao, Lin. "Numerical Control Lathe Cutting Force Signal On-Line Monitoring Design." Applied Mechanics and Materials 711 (December 2014): 329–32. http://dx.doi.org/10.4028/www.scientific.net/amm.711.329.

Full text
Abstract:
The main research direction of on-line monitoring of the cutting-force signal of a numerical control lathe is real-time monitoring of the machining process, using a sensor, charge amplifier, acquisition card, and computer to collect the data and signals. Signal acquisition uses piezoelectric sensors and sends their signals to the computer in order to acquire real-time data and display the dynamic signal, so that the process can be monitored. Signal processing is the stage in which the collected data are subsequently processed and analyzed. It includes display, filtering, correlation analysis, spectral analysis, etc. The signal's characteristics can be determined from time-domain and frequency-domain analysis of the signals.
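The frequency-domain step mentioned above (spectral analysis of an acquired signal) can be sketched generically; the 1 kHz sampling rate and the 50 Hz test component below are illustrative assumptions, not values from the paper:

```python
import numpy as np

fs = 1000.0                             # assumed sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
# Toy "cutting force" signal: one 50 Hz component plus measurement noise.
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# One-sided amplitude spectrum of the real-valued signal.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
dominant = freqs[spectrum.argmax()]     # frequency of the strongest component
```

Locating the dominant spectral peak like this is the basic building block behind the correlation and spectral analysis the abstract lists.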
APA, Harvard, Vancouver, ISO, and other styles
29

Wlodarczyk-Sielicka, Marta, and Wioleta Blaszczak-Bak. "Processing of Bathymetric Data: The Fusion of New Reduction Methods for Spatial Big Data." Sensors 20, no. 21 (October 30, 2020): 6207. http://dx.doi.org/10.3390/s20216207.

Full text
Abstract:
Floating autonomous vehicles are very often equipped with modern systems that collect information about the situation under the water surface, e.g., the depth or type of bottom and obstructions on the seafloor. One such system is the multibeam echosounder (MBES), which collects very large sets of bathymetric data. The development and analysis of such large sets are laborious and expensive. Reduction of the spatial data obtained from bathymetric and other systems collecting spatial data is currently widely used. In commercial programs used in the development of data from hydrographic systems, methods of interpolation to a specific mesh size are very frequently used. The authors of this article previously proposed the original true bathymetric data reduction (TBDRed) and Optimum Dataset (OptD) reduction methods, which maintain the actual position and depth for each of the measured points, without interpolation. The effectiveness of the proposed methods has already been presented in previous articles. This article proposes a fusion of the original reduction methods, which is a new and innovative approach to the problem of bathymetric data reduction. The article contains a description of the methods used and the methodology of developing bathymetric data. The proposed fusion of reduction methods allows the generation of numerical models that can be a safe, reliable source of information and a basis for design. Numerical models can also be used in comparative navigation, during the creation of electronic navigation maps and other hydrographic products.
APA, Harvard, Vancouver, ISO, and other styles
30

Song, Homin, Ukyong Woo, and Hajin Choi. "Numerical Analysis of Ultrasonic Multiple Scattering for Fine Dust Number Density Estimation." Applied Sciences 11, no. 2 (January 8, 2021): 555. http://dx.doi.org/10.3390/app11020555.

Full text
Abstract:
In this study, a method is presented for estimating the number density of fine dust particles (the number of particles per unit area) through numerical simulations of multiply scattered ultrasonic wavefields. The theoretical background of the multiple scattering of ultrasonic waves under different regimes is introduced. A series of numerical simulations were performed to generate multiply scattered ultrasonic wavefield data. The generated datasets are subsequently processed using an ultrasound data processing approach to estimate the number density of fine dust particles in the air based on the independent scattering approximation theory. The data processing results demonstrate that the proposed approach can estimate the number density of fine dust particles with an average error of 43.4% in the frequency band 1–10 MHz (wavenumber × particle radius ≤ 1) at a particle volume fraction of 1%. Several other factors that affect the accuracy of the number density estimation are also presented.
APA, Harvard, Vancouver, ISO, and other styles
32

Zheng, Xiangcheng, Vincent J. Ervin, and Hong Wang. "Numerical Approximations for the Variable Coefficient Fractional Diffusion Equations with Non-smooth Data." Computational Methods in Applied Mathematics 20, no. 3 (July 1, 2020): 573–89. http://dx.doi.org/10.1515/cmam-2019-0038.

Full text
Abstract:
In this article, we study the numerical approximation of a variable coefficient fractional diffusion equation. Using a change of variable, the variable coefficient fractional diffusion equation is transformed into a constant coefficient fractional diffusion equation of the same order. The transformed equation retains the desirable stability property of being an elliptic equation. A spectral approximation scheme is proposed and analyzed for the transformed equation, with error estimates for the approximated solution derived. An approximation to the unknown of the variable coefficient fractional diffusion equation is then obtained by post-processing the computed approximation to the transformed equation. Error estimates are also presented for the approximation to the unknown of the variable coefficient equation with both smooth and non-smooth diffusivity coefficient and right-hand side. Numerical experiments are presented to test the performance of the proposed method.
APA, Harvard, Vancouver, ISO, and other styles
33

Walldén, Marcus, Masao Okita, Fumihiko Ino, Dimitris Drikakis, and Ioannis Kokkinakis. "Accelerating In-Transit Co-Processing for Scientific Simulations Using Region-Based Data-Driven Analysis." Algorithms 14, no. 5 (May 12, 2021): 154. http://dx.doi.org/10.3390/a14050154.

Full text
Abstract:
Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario. The data decompression time was sped up by 2× compared to using a single compression method uniformly.
APA, Harvard, Vancouver, ISO, and other styles
34

Bezdek, James C. "Generalized C-Means Algorithms for Medical Image Analysis." Proceedings, annual meeting, Electron Microscopy Society of America 48, no. 1 (August 12, 1990): 448–49. http://dx.doi.org/10.1017/s0424820100180999.

Full text
Abstract:
Diagnostic machine vision systems that attempt to interpret medical imagery almost always include (and depend upon) one or more pattern recognition algorithms (cluster analysis and classifier design) for low- and intermediate-level image data processing. This includes, of course, image data collected by electron microscopes. Approaches based on both statistical and fuzzy models are found in the texts by Bezdek, Duda and Hart, Dubes and Jain, and Pao. Our talk examines the c-means families as they relate to medical image processing. We discuss and exemplify applications in segmentation (MRI data); clustering (flow cytometry data); and boundary analysis. The structure of partition spaces underlying clustering algorithms is described briefly. Let c be an integer, 1 < c < n, and let X = {x1, x2, ..., xn} denote a set of n column vectors in R^s. X is numerical object data; the k-th object (some physical entity such as a medical patient, PAP smear image, color photograph, etc.) has xk as its numerical representation; xkj is the j-th characteristic (or feature) associated with object k.
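A minimal hard c-means (k-means) loop of the kind the c-means families generalize can be sketched as follows; this is a generic textbook illustration, not Bezdek's published algorithm:

```python
import numpy as np

def farthest_point_init(X, c):
    """Deterministic initialization: start at X[0], then repeatedly pick
    the point farthest from the centers chosen so far."""
    idx = [0]
    for _ in range(c - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[idx][None], axis=2), axis=1)
        idx.append(int(d.argmax()))
    return X[idx].astype(float)

def hard_c_means(X, c, iters=100):
    """Hard c-means (Lloyd's algorithm): alternate nearest-center
    assignment and centroid update until the centers stop moving."""
    centers = farthest_point_init(X, c)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = centers.copy()
        for j in range(c):
            members = X[labels == j]
            if len(members):              # keep old center if a cluster empties
                new_centers[j] = members.mean(axis=0)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Two well-separated Gaussian blobs in R^2 should be recovered exactly.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centers = hard_c_means(X, c=2)
```

The fuzzy c-means variant replaces the hard argmin assignment with graded memberships, but the alternating structure is the same.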
APA, Harvard, Vancouver, ISO, and other styles
35

Sánchez-Puga, Pablo, Javier Tajuelo, Juan Pastor, and Miguel Rubio. "Dynamic Measurements with the Bicone Interfacial Shear Rheometer: Numerical Bench-Marking of Flow Field-Based Data Processing." Colloids and Interfaces 2, no. 4 (December 7, 2018): 69. http://dx.doi.org/10.3390/colloids2040069.

Full text
Abstract:
Flow field-based methods are becoming increasingly popular for the analysis of interfacial shear rheology data. Such methods take properly into account the subphase drag by solving the Navier–Stokes equations for the bulk phase flows, together with the Boussinesq–Scriven boundary condition at the fluid–fluid interface and the probe equation of motion. Such methods have been successfully implemented on the double wall-ring (DWR), the magnetic rod (MR), and the bicone interfacial shear rheometers. However, a study of the errors introduced directly by the numerical processing is still lacking. Here, we report on a study of the errors introduced exclusively by the numerical procedure corresponding to the bicone geometry at an air–water interface. In our study, we set an input value of the complex interfacial viscosity, and we numerically obtained the corresponding flow field and the complex amplitude ratio for the probe motion. Then, we used the standard iterative procedure to obtain the calculated complex viscosity value. A detailed comparison of the set and calculated complex viscosity values was made in wide ranges of the three parameters herein used, namely the real and imaginary parts of the complex interfacial viscosity and the frequency. The observed discrepancies yield a detailed landscape of the numerically-introduced errors.
APA, Harvard, Vancouver, ISO, and other styles
36

Pushpalakshmi, S., and R. Castro. "A Novel K-Means Clustering-Based FPGA Parallel Processing in Big Data Analysis." Applied Mathematics & Information Sciences 13, no. 5 (September 1, 2019): 777–82. http://dx.doi.org/10.18576/amis/130510.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Sultangazin, U., A. Terekhov, and N. Muratova. "Problems of satellite data calibration and thematic processing." Mathematics and Computers in Simulation 67, no. 4-5 (December 2004): 403–10. http://dx.doi.org/10.1016/j.matcom.2004.06.010.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Krivtsov, Serhii, Yurii Parfeniuk, Kseniia Bazilevych, Ievgen Meniailov, and Dmytro Chumachenko. "PERFORMANCE EVALUATION OF PYTHON LIBRARIES FOR MULTITHREADING DATA PROCESSING." Advanced Information Systems 8, no. 1 (February 26, 2024): 37–47. http://dx.doi.org/10.20998/2522-9052.2024.1.05.

Full text
Abstract:
Topicality. The rapid growth of data in various domains has necessitated the development of efficient tools and libraries for data processing and analysis. Python, a popular programming language for data analysis, offers several libraries, such as NumPy and Numba, for numerical computations. However, there is a lack of comprehensive studies comparing the performance of these libraries across different tasks and data sizes. The aim of the study. This study aims to fill this gap by comparing the performance of Python, NumPy, Numba, and Numba.Cuda across different tasks and data sizes. Additionally, it evaluates the impact of multithreading and GPU utilization on computation speed. Research results. The results indicate that Numba and Numba.Cuda significantly optimize the performance of Python applications, especially for functions involving loops and array operations. Moreover, GPU and multithreading in Python further enhance computation speed, although with certain limitations and considerations. Conclusion. This study contributes to the field by providing valuable insights into the performance of different Python libraries and the effectiveness of GPU and multithreading in Python, thereby aiding researchers and practitioners in selecting the most suitable tools for their computational needs.
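The kind of pure-Python versus NumPy gap such studies benchmark can be illustrated with a toy micro-benchmark; this is a generic sketch, not the paper's benchmark suite:

```python
import time
import numpy as np

def python_sum_of_squares(values):
    """Pure-Python loop: one interpreter dispatch per element."""
    total = 0.0
    for v in values:
        total += v * v
    return total

def numpy_sum_of_squares(arr):
    """Vectorised version: the loop runs inside compiled C code."""
    return float(np.dot(arr, arr))

data = np.random.default_rng(0).random(1_000_000)

t0 = time.perf_counter()
r_py = python_sum_of_squares(data.tolist())
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
r_np = numpy_sum_of_squares(data)
t_np = time.perf_counter() - t0
```

On typical hardware the NumPy version is one to two orders of magnitude faster; Numba's `@njit` decorator closes a similar gap for loop-heavy code that cannot be vectorised.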
APA, Harvard, Vancouver, ISO, and other styles
39

Ignatov, Yuri, Oleg Tailakov, Evgeniy Saltymakov, and Daniil Gorodilov. "Development of an electrical exploration data post-processor." E3S Web of Conferences 315 (2021): 03027. http://dx.doi.org/10.1051/e3sconf/202131503027.

Full text
Abstract:
In modern times, the development of geology and geophysics is associated with complex experiments. The results of these experiments are large arrays of numerical data, which require processing and further analysis. Processing these data manually can be a very difficult and routine task. For such studies, specialized tools are important to significantly speed up processing and to visualize geophysical data in real time. Software has been developed to automate the processing of geophysical data obtained by the electrical exploration procedure. The designed post-processor performs data correction and visualizes the geological and geophysical profile. The user interface of the program provides researchers with the ability to interactively process the initial geophysical data.
APA, Harvard, Vancouver, ISO, and other styles
40

Ma, Sixiang, Feng Chang, Feng Zhang, Meng Yang, and Jian Li. "Evaluation method of the turbine impeller imbalance based on the headroom analysis." Journal of Physics: Conference Series 2473, no. 1 (April 1, 2023): 012002. http://dx.doi.org/10.1088/1742-6596/2473/1/012002.

Full text
Abstract:
Impeller imbalance is an unavoidable problem in the operation of wind turbines, and it often has a serious impact on their safety and stability, thus shortening their life cycle. Aiming at current problems such as the high cost and instability caused by impeller imbalance in wind turbines, this paper proposes an assessment method for impeller imbalance based on image processing and headroom analysis. Firstly, deep-learning image-processing technology is used to convert video monitoring data into numerical data; then the comprehensive imbalance index of the three blades is calculated from the numerical data, so as to determine whether an impeller imbalance problem exists. The proposed method was validated with the operation data of wind turbines in the field, and good results were achieved.
APA, Harvard, Vancouver, ISO, and other styles
41

Wadowska, Agata, Agnieszka Pęska-Siwik, and Kamil Maciuk. "PROBLEMS OF COLLECTING, PROCESSING AND SHARING GEOSPATIAL DATA." Acta Scientiarum Polonorum Formatio Circumiectus 21, no. 3/4 (April 8, 2023): 5–16. http://dx.doi.org/10.15576/asp.fc/2022.21.3/4.5.

Full text
Abstract:
Aim of the study: The paper describes the problems of collecting, processing and sharing geospatial data on the example of the National Geodetic and Cartographic Resource. The Head Office of Geodesy and Cartography (GUGiK), on the basis of the acquired data, prepares spatial databases for the whole country, such as the database of topographic objects (BDOT) or the digital terrain model. These data are used for further studies and environmental analyses such as hydrographic or sozological maps of Poland. Material and methods: The article indicates what functionalities and, above all, what geospatial data are collected in the geoportal, a government map service managed by GUGiK. The study included a survey concerning the geoportal, whose purpose was to obtain information on whether this service is known to potential audiences seeking spatial data. Results and conclusions: The survey proved that the geoportal is a popular service used to display and process spatial data. Survey respondents use much of the data collected in the geoportal and take advantage of its functionality. Finally, an analysis was carried out of data obtained from the District Geodetic and Cartographic Documentation Center of Małopolska regarding the number of notifications of geodetic works, the number of applications for access to materials from the PZGiK, and the number of applications for extracts or map extracts from the cadastral record, for the period 2014-2019.
APA, Harvard, Vancouver, ISO, and other styles
42

Alfaris, Lulut, Ruben Cornelius Siagian, Budiman Nasution, Goldberd Harmuda Duva Sinaga, and Indah Indah. "Non-Linear Regresion And Bisection Method Numerical Analysis of Humidity And Temperature Relationships." Jurnal Pendidikan Fisika dan Teknologi 8, no. 2 (December 23, 2022): 238–44. http://dx.doi.org/10.29303/jpft.v8i2.4394.

Full text
Abstract:
This study aims to analyze the correlation between temperature and humidity based on climatological data from Kuningan, West Java province, Indonesia, using non-linear regression and numerical analysis with the bisection method. In the Indonesian education curriculum, linear regression is widely taught, but only within certain limits, namely simple and multiple regression; it is still rare for students to use non-linear regression in their analyses. This paper explains how non-linear regression serves as a source of data and equations for numerical analysis, so that it can be applied in the context of the numerical bisection method. Microsoft Excel is used for data processing and numerical analysis. The authors combined the two variables in the non-linear analysis and used geometric equations for the numerical bisection analysis. The authors display the results as numerical graphs together with real-time data obtained from the NASA Data Access Viewer.
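For reference, the bisection method used in such numerical analyses follows the standard interval-halving scheme; a minimal generic sketch (the test function is illustrative, not from the paper):

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite
    signs, by repeatedly halving the bracketing interval."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must bracket a root")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or 0.5 * (b - a) < tol:
            return m
        if fa * fm < 0:       # root lies in the left half
            b, fb = m, fm
        else:                 # root lies in the right half
            a, fa = m, fm
    return 0.5 * (a + b)

root = bisection(lambda x: x * x - 2.0, 0.0, 2.0)  # converges to sqrt(2)
```

Each iteration halves the bracket, so the error after k steps is bounded by (b - a) / 2**(k + 1), which is why the method is slow but unconditionally convergent.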
APA, Harvard, Vancouver, ISO, and other styles
43

Tonkoshkur, A. S., and A. S. Lozovskyi. "ALGORITHM FOR PROCESSING GAS SENSOR’S RESPONSE KINETICS DATA USING EXTENDED EXPONENTIAL FUNCTION WITHOUT NUMERICAL DIFFERENTIATION." System technologies 1, no. 144 (May 11, 2023): 26–35. http://dx.doi.org/10.34185/1562-9945-1-144-2023-04.

Full text
Abstract:
The features of using computer technologies to process experimental data in order to automate the research of materials for gas-sensitive sensors are considered. An algorithm for processing the kinetic dependence of the response of gas sensors, based on the extended exponential function model, is proposed; it does not use numerical differentiation operations when finding the parameters of this model. This significantly reduces the influence of data scatter in the coordinates of the approximating diagrams used in calculating the model parameters, increases the accuracy of their determination, and contributes to the implementation of an automated information-measuring system for computer processing and analysis of experimental data.
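One way to extract extended (stretched) exponential parameters without numerical differentiation, in the spirit of the approach described, is a double-logarithmic linearization followed by linear least squares. The sketch below is a generic illustration, assuming the saturation level A is known, and is not the authors' published algorithm:

```python
import numpy as np

def fit_stretched_exponential(t, y, A):
    """Recover tau and beta of y = A*(1 - exp(-(t/tau)**beta)) by a
    double-log linearisation and a linear least-squares fit; no numerical
    differentiation of the (possibly noisy) data is required."""
    z = np.log(-np.log(1.0 - y / A))          # = beta*ln(t) - beta*ln(tau)
    beta, intercept = np.polyfit(np.log(t), z, 1)
    tau = np.exp(-intercept / beta)
    return tau, beta

# Synthetic noise-free response with tau = 5, beta = 0.7, A = 1.
t = np.linspace(0.5, 30.0, 60)
y = 1.0 - np.exp(-((t / 5.0) ** 0.7))
tau, beta = fit_stretched_exponential(t, y, A=1.0)
```

Because the fit is linear in the transformed coordinates, scatter in the data enters only through the least-squares average rather than through point-by-point derivatives.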
APA, Harvard, Vancouver, ISO, and other styles
44

de Oliveira, Iran Rodrigues, Sandro Campos Amico, Jeferson Avila Souza, and Antônio Gilson Barbosa de Lima. "Resin Transfer Molding Process: A Numerical Analysis." Defect and Diffusion Forum 353 (May 2014): 44–49. http://dx.doi.org/10.4028/www.scientific.net/ddf.353.44.

Full text
Abstract:
This work aims to investigate the infiltration of a CaCO3-filled resin using experiments and the PAM-RTM software. A preform of glass fiber mat, with dimensions 320 x 150 x 3.6 mm, was used for experiments conducted at room temperature with an injection pressure of 0.25 bar. The resin contained 10 and 40% CaCO3 with particle size 38 μm. The numerical results were evaluated by direct comparison with experimental data. The flat flow-front profile of the rectilinear flow was reached approximately halfway along the length of the mold. It was observed that the filling speed decreases with increasing CaCO3 content, and the higher the amount of CaCO3 in the resin, the lower the permeability of the reinforcement. The reduction in permeability is due to the presence of calcium carbonate particles between the fibers, hindering the resin flow in the fibrous medium. The computational fluid flow analysis with PAM-RTM proved to be an accurate tool for studying the processing of composite materials.
APA, Harvard, Vancouver, ISO, and other styles
45

YAO, WENJUAN, JIANWEI MA, XUEMEI LUO, and BOTE LUO. "NUMERICAL ANALYSIS OF TYMPANOSCLEROSIS AND TREATMENT EFFECT." Journal of Mechanics in Medicine and Biology 14, no. 04 (July 3, 2014): 1450051. http://dx.doi.org/10.1142/s0219519414500511.

Full text
Abstract:
Tympanosclerosis is a typical middle ear disease and one of the main causes of conductive deafness. We investigate the effects of tympanosclerosis and lesion excision on sound transmission in the human ear using the finite element technique. Based on CT scan images of the normal human middle ear from Zhongshan Hospital of Fudan University, numerical values of the CT scans were obtained by further processing of the images using a self-compiled program. The CT data of the right ear of a healthy volunteer were digitalized and imported into the PATRAN software to reconstruct a finite element model of the ear. A frequency response analysis was made for the model, and a comparative analysis was made between the calculated results and experimental data, which validated the model in this paper. The results show that with sclerosis of the ligaments and tensor muscle in the middle ear, the force on the ossicles is larger than in the normal ear and the amplitude of the stapes footplate is larger than in the normal ear, which leads to a decrease of the final conductive hearing function. Furthermore, excision of the stapes ligament and tensor tympani is good for the restoration of normal hearing. This paper provides a new research perspective for clinical treatment.
APA, Harvard, Vancouver, ISO, and other styles
46

Mendizabal-Ruiz, Gerardo, Israel Román-Godínez, Sulema Torres-Ramos, Ricardo A. Salido-Ruiz, Hugo Vélez-Pérez, and J. Alejandro Morales. "Genomic signal processing for DNA sequence clustering." PeerJ 6 (January 24, 2018): e4264. http://dx.doi.org/10.7717/peerj.4264.

Full text
Abstract:
Genomic signal processing (GSP) methods which convert DNA data to numerical values have recently been proposed, which would offer the opportunity of employing existing digital signal processing methods for genomic data. One of the most used methods for exploring data is cluster analysis which refers to the unsupervised classification of patterns in data. In this paper, we propose a novel approach for performing cluster analysis of DNA sequences that is based on the use of GSP methods and the K-means algorithm. We also propose a visualization method that facilitates the easy inspection and analysis of the results and possible hidden behaviors. Our results support the feasibility of employing the proposed method to find and easily visualize interesting features of sets of DNA data.
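The GSP-plus-clustering pipeline can be illustrated with a toy example: sequences are mapped to numerical features (here, per-base frequencies derived from the Voss 0/1 indicator representation, one common GSP mapping, which may differ from the paper's choice) and then clustered with a small k-means loop:

```python
import numpy as np

BASES = "ACGT"

def voss_features(seq):
    """4-D nucleotide-frequency vector, i.e. the per-channel mean of the
    Voss binary indicator representation (one 0/1 channel per base)."""
    seq = seq.upper()
    return np.array([seq.count(b) for b in BASES], dtype=float) / len(seq)

def two_means(X, iters=50):
    """Tiny Lloyd's algorithm for c = 2 clusters with a crude but
    deterministic initialization (first and last sample)."""
    centers = X[[0, -1]].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# GC-rich vs AT-rich toy sequences should land in different clusters.
seqs = ["GCGCGGCC", "GGCCGCGC", "ATATAATT", "AATTATAT"]
labels = two_means(np.array([voss_features(s) for s in seqs]))
```

Real pipelines replace the frequency summary with richer spectral features of the numerical sequence, but the clustering step is the same unsupervised K-means idea the abstract describes.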
APA, Harvard, Vancouver, ISO, and other styles
47

Otsuka, Shigenori, Seiya Nishizawa, Takeshi Horinouchi, and Shigeo Yoden. "An Experimental Data Handling System for Ensemble Numerical Weather Predictions Using a Web-Based Data Server and Analysis Tool “Gfdnavi”." Journal of Disaster Research 8, no. 1 (February 1, 2013): 48–56. http://dx.doi.org/10.20965/jdr.2013.p0048.

Full text
Abstract:
Gfdnavi is a web-based data and knowledge server program for geophysical fluid data that constructs databases, provides analysis and visualization tools, and shares knowledge documents. A new Gfdnavi user interface for analyzing and visualizing data on web browsers is developed to improve the user experience by providing seamless analysis and visualization operations, multiple diagram editing, a layer function, and so on. An experimental data handling system for ensemble numerical weather prediction data is constructed using Gfdnavi to address such issues as data processing and transfer between weather centers and decision makers in various sectors, including that of disaster management. Special tools to analyze and visualize ensemble numerical weather prediction data are implemented as user-defined Gfdnavi plug-ins. An interactive document that provides basic ideas of how to utilize probabilistic ensemble data information is written with the Gfdnavi knowledge documentation system, in which hyperlinks enable users to edit diagrams in the document.
APA, Harvard, Vancouver, ISO, and other styles
48

Tokhmetov, А. Т., A. D. Tusupov, and L. A. Tanchenko. "DATA ANALYSIS WHEN CREATING AN EXTENDED GIGABIT OPTICAL NETWORK." Bulletin Series of Physics & Mathematical Sciences 75, no. 3 (September 15, 2021): 148–57. http://dx.doi.org/10.51889/2021-3.1728-7901.18.

Full text
Abstract:
The article describes the application of data analysis, which makes it possible, based on the processed experimental data, to obtain new knowledge about the behavior and capabilities of a gigabit passive optical network (GPON network). A description of the test bench of the GPON network is given. The paper studies the characteristics of semiconductor optical amplifiers used to increase the range of GPON networks, as well as their dependence on the input power and signal wavelength. For data processing, the MATLAB mathematical calculation automation package and the OriginLab package for numerical data analysis and scientific graphics were used. It is shown that the use of an EDFA amplifier (an optical amplifier on an erbium-doped fiber) in the architecture of a gigabit passive optical network is the best choice and allows the range of the GPON network to be extended from 20 kilometers to 60 kilometers.
APA, Harvard, Vancouver, ISO, and other styles
49

Tonkikh, G. P., V. A. Neshchadimov, and I. A. Averin. "Non-parametric data processing in experimental studies of spirally reinforced concrete samples." Bulletin of Science and Research Center of Construction 39, no. 4 (December 17, 2023): 95–105. http://dx.doi.org/10.37538/2224-9494-2023-4(39)-95-105.

Full text
Abstract:
Introduction. Statistical methods in the analysis of experimental data can be applied to identify patterns and test hypotheses, determine the quality of experimental data, and draw conclusions based on objective data. In addition, experimental data after non-parametric processing can be used in numerical modeling with contemporary computing suites. Aim. To outline a methodology for non-parametric processing of experimental results using the tools of the SCAD computing suite, certified in the territory of the Russian Federation. In the proposed methodology, experimental test data for spirally reinforced concrete samples of various strengths were used. Results. As a result of non-parametric processing of the spirally reinforced concrete samples, empirical coefficients of the Prandtl bilinear diagram were determined according to the proposed method. This diagram is used in the SCAD computing suite to set the physical nonlinearity of the material behavior. A method for processing a small volume of experimental results is proposed so that the available data can be used in SCAD CS numerical studies with an acceptable level of probability. Conclusions. The empirical coefficients obtained by non-parametric processing for setting the Prandtl bilinear diagram can be used to perform numerical modeling of sample behavior when planning further experimental studies, in order to find more general patterns that take into account other behavioral factors of real structural elements in the load-bearing systems of buildings and structures with spiral reinforcement, including high-intensity dynamic effects. According to experimental and theoretical studies, spiral reinforcement can significantly increase the deformability and energy capacity of reinforced concrete structures, which fundamentally affects the behavioral pattern of structures and of the supporting framework of buildings and structures as a whole.
These behavioral features of spirally reinforced structures can further be taken into account in the computational justification of design solutions in the SCAD CS and other software programs using the Padé approximation of the Prandtl bilinear diagram.
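The Prandtl bilinear diagram referred to in this abstract is an elastic-perfectly-plastic stress-strain law: stress grows linearly with the elastic modulus up to the yield point, then stays constant on a plastic plateau. A minimal sketch, using hypothetical steel-like coefficients (a 200 GPa modulus and a 400 MPa yield stress) rather than the empirical values determined in the study:

```python
def prandtl_stress(strain, e_modulus, yield_stress):
    """Elastic-perfectly-plastic (Prandtl) bilinear diagram:
    a linear branch up to the yield strain, a flat branch beyond it."""
    yield_strain = yield_stress / e_modulus
    if abs(strain) <= yield_strain:
        return e_modulus * strain                          # elastic branch
    return yield_stress if strain > 0 else -yield_stress   # plastic plateau

# Hypothetical coefficients: E = 200 GPa, sigma_y = 400 MPa.
elastic = prandtl_stress(0.001, 200e9, 400e6)  # below yield: stress = E * strain
plastic = prandtl_stress(0.010, 200e9, 400e6)  # past yield: capped at sigma_y
```

A bilinear diagram with a hardening (non-horizontal) second branch is a common generalization; the two empirical coefficients determined in the study would replace the modulus and yield values assumed here.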
APA, Harvard, Vancouver, ISO, and other styles
50

Jiang, Fan, and Carson Leung. "A Data Analytic Algorithm for Managing, Querying, and Processing Uncertain Big Data in Cloud Environments." Algorithms 8, no. 4 (December 11, 2015): 1175–94. http://dx.doi.org/10.3390/a8041175.

Full text
APA, Harvard, Vancouver, ISO, and other styles