Academic literature on the topic 'Numerical analysis – Data processing'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Numerical analysis – Data processing.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Numerical analysis – Data processing"

1

MARTYNIUK, Tatiana, Andrii KOZHEMIAKO, Bohdan KRUKIVSKYI, and Antonina BUDA. "ASSOCIATIVE OPERATIONS BASED ON DIFFERENCE-SLICE DATA PROCESSING." Herald of Khmelnytskyi National University. Technical sciences 311, no. 4 (August 2022): 159–63. http://dx.doi.org/10.31891/2307-5732-2022-311-4-159-163.

Full text
Abstract:
Associative operations are effectively used to solve application problems such as sorting, searching for certain features, and identifying extreme (maximum/minimum) elements in data sets. For example, determining the maximum number by sorting a numerical array is a suitable operation for implementing the competition mechanism in neural networks. In addition, determining the middle (median) number of a numerical series by sorting significantly speeds up median filtering of images and signals; in this case, median filtering requires sorting with ranking of the elements of the number array. This paper analyses how associative operations on the elements of a vector (one-dimensional) array of numbers can be implemented using processing by difference slices (DS). A simplified description of DS processing is given, with selection of the common part of the elements of the vector and the difference slice formed from its elements. In addition, elements of a binary mask matrix are used as an example of a topological feature matrix. The proposed approach allows the ranks of the elements of the initial vector to be formed as a result of sorting their numerical values in ascending order. The paper shows a schematic representation of the DS process, as well as a tabulated example of DS processing of a number vector, showing the sequence in which the numbers of the sorted array and the ranks of the numbers of the initial array are formed. The proposed use of topological features therefore makes it possible to determine the comparative relations between the elements of the numerical array in the course of spatially distributed DS processing, and confirms the versatility of this approach.
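The sorting-with-ranking operation this abstract builds on can be sketched in plain NumPy; this is only the conventional software baseline, not the paper's difference-slice scheme, and the function names are illustrative:

```python
import numpy as np

def ranks_by_sorting(v):
    # Rank of each element of the initial vector after an ascending
    # sort (0 = smallest), the "sorting with ranking" step the
    # abstract describes.
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v), dtype=int)
    ranks[order] = np.arange(len(v))
    return ranks

def median_filter_1d(signal, width=3):
    # Median filtering of a 1-D signal: the median of each sliding
    # window is found by sorting, which is the operation the abstract
    # says ranking-based sorting accelerates.
    half = width // 2
    padded = np.pad(signal, half, mode="edge")
    return np.array([np.median(padded[i:i + width])
                     for i in range(len(signal))])
```

For instance, `ranks_by_sorting(np.array([30, 10, 20]))` yields `[2, 0, 1]`, and median filtering suppresses the isolated spike in `[1, 9, 1, 1]`.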
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Fan Xiu, Xing Ping Wen, and Shao Jin Yi. "Numerical Measurement and Data Processing of Air Pollution." Applied Mechanics and Materials 577 (July 2014): 1219–22. http://dx.doi.org/10.4028/www.scientific.net/amm.577.1219.

Full text
Abstract:
Relational analysis is a data-processing method used to rank the degree of influence of different factors in a system with uncertain information, where common mathematical methods are not applicable for describing the relationships. A new method, the equivalent numerical relational degree (ENRD) model, was developed to evaluate the effect of different factors on air pollution. The effects on the quality of the atmospheric environment of port throughput, amount of coal, industrial output, motor vehicle ownership, investment in fixed assets, and real estate development and housing construction area were studied. The degrees of correlation calculated with ENRD for these factors were 0.7947, 0.7943, 0.7289, 0.7238, 0.6702, and 0.6527, respectively. From these values, the relations of these factors to the quality of the atmospheric environment could be described and evaluated; port throughput and amount of coal were the relatively major factors.
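The abstract does not give the ENRD formula, but the model belongs to the grey relational analysis family; a minimal sketch of the classical (Deng) grey relational degree, using per-factor extrema for simplicity, is:

```python
import numpy as np

def grey_relational_degree(reference, factors, rho=0.5):
    # Classical grey relational degree between a reference series and
    # each factor series after min-max normalisation. The paper's
    # ENRD model is a refinement of this family; its exact formula is
    # not stated in the abstract, so this is only the standard baseline.
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    ref = norm(reference)
    degrees = []
    for f in factors:
        delta = np.abs(ref - norm(f))
        dmax = delta.max()
        if dmax == 0.0:
            degrees.append(1.0)  # identical series: perfect relation
            continue
        xi = (delta.min() + rho * dmax) / (delta + rho * dmax)
        degrees.append(float(xi.mean()))
    return degrees
```

A factor identical to the reference scores 1.0; less correlated factors score lower, which is how the 0.79-0.65 ranking in the abstract would be read.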
APA, Harvard, Vancouver, ISO, and other styles
3

Onishchenko, P. S., K. Y. Klyshnikov, and E. A. Ovcharenko. "Artificial Neural Networks in Cardiology: Analysis of Numerical and Text Data." Mathematical Biology and Bioinformatics 15, no. 1 (February 18, 2020): 40–56. http://dx.doi.org/10.17537/2020.15.40.

Full text
Abstract:
This review discusses works on the use of artificial neural networks for processing numerical and textual data. Application of a number of widely used approaches is considered, such as decision support systems; prediction systems, providing forecasts of outcomes of various methods of treatment of cardiovascular diseases, and risk assessment systems. The possibility of using artificial neural networks as an alternative approach to standard methods for processing patient clinical data has been shown. The use of neural network technologies in the creation of automated assistants to the attending physician will make it possible to provide medical services better and more efficiently.
APA, Harvard, Vancouver, ISO, and other styles
4

Yu, Zhi Wei, and Sheng Guo Cheng. "Contrast Analysis of Data Processing Method Based on the MATLAB in Compaction Test." Applied Mechanics and Materials 170-173 (May 2012): 611–14. http://dx.doi.org/10.4028/www.scientific.net/amm.170-173.611.

Full text
Abstract:
The data of a compaction test are processed using a numerical method and a least-squares fitting method, respectively, in MATLAB. A simple comparative analysis of the two results leads to the conclusion that when the distribution of the test data points is consistent with the characteristics of soil compaction, the numerical method is better and more accurate.
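The least-squares side of the comparison can be sketched as fitting a parabola to moisture-density points and reading off the optimum at the vertex; the data values below are illustrative, not from the paper, and NumPy stands in for MATLAB:

```python
import numpy as np

# Hypothetical compaction-test points: moisture content (%) vs.
# dry density (g/cm^3). The real test data are not given in the
# abstract, so these values are illustrative only.
w = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
rho_d = np.array([1.72, 1.80, 1.85, 1.83, 1.76])

# Least-squares fit of a parabola, one of the two approaches the
# paper compares (the other being direct numerical interpolation).
a, b, c = np.polyfit(w, rho_d, 2)

# The optimum moisture content is the vertex of the fitted parabola.
w_opt = -b / (2 * a)
rho_max = a * w_opt**2 + b * w_opt + c
```

For this data the fitted curve opens downward and places the optimum moisture content a little above 12%.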
APA, Harvard, Vancouver, ISO, and other styles
5

G.V., Suresh, and Srinivasa Reddy E.V. "Uncertain Data Analysis with Regularized XGBoost." Webology 19, no. 1 (January 20, 2022): 3722–40. http://dx.doi.org/10.14704/web/v19i1/web19245.

Full text
Abstract:
Uncertainty is a ubiquitous element of available knowledge about the real world. Data sampling error, obsolete sources, network latency, and transmission error all contribute to uncertainty. These kinds of uncertainty have to be handled cautiously, or the classification results could be unreliable or even erroneous. Numerous methodologies have been developed to understand and control uncertainty in data, which has many faces: inconsistency, imprecision, ambiguity, incompleteness, vagueness, unpredictability, noise, and unreliability. Missing information is inevitable in real-world data sets. While some conventional multiple imputation approaches are well studied and have shown empirical validity, they have limitations in processing large datasets with complex data structures, and they tend to be computationally inefficient for medium and large datasets. In this paper, we propose a scalable multiple imputation framework based on XGBoost, bootstrapping, and regularization. XGBoost, one of the fastest implementations of gradient-boosted trees, can automatically capture interactions and non-linear relations in a dataset while achieving high computational efficiency with the aid of bootstrapping and regularization. In the context of high-dimensional data, this methodology provides less biased estimates and reflects imputation variability better than previous regression approaches. We validate our adaptive imputation approaches against standard methods on numerical and real data sets and show promising results.
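The imputation loop the abstract outlines (regress each incomplete column on the others over a bootstrap sample) can be sketched with a linear least-squares learner standing in for XGBoost; the function name and demo data are illustrative, and the sketch assumes the predictor columns are complete for the rows used:

```python
import numpy as np

def impute_once(X, rng):
    # One pass of regression-based imputation: each column with
    # missing entries is regressed on the remaining columns using a
    # bootstrap sample of the observed rows, so repeated calls yield
    # multiple plausible imputations, as in multiple imputation.
    X = X.copy()
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        others = np.delete(X, j, axis=1)
        # Bootstrap the observed rows (stand-in for the paper's
        # XGBoost-with-bootstrapping learner).
        idx = rng.choice(np.flatnonzero(~miss), size=(~miss).sum(),
                         replace=True)
        A = np.column_stack([others[idx], np.ones(len(idx))])
        coef, *_ = np.linalg.lstsq(A, X[idx, j], rcond=None)
        B = np.column_stack([others[miss], np.ones(miss.sum())])
        X[miss, j] = B @ coef
    return X

# Demo on exact linear data (illustrative, not from the paper):
x = np.arange(50.0)
y = 2 * x + 1
y[10] = np.nan
imputed = impute_once(np.column_stack([x, y]), np.random.default_rng(0))
```

On this exactly linear demo, the missing value at row 10 is recovered as 2*10 + 1 = 21.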
APA, Harvard, Vancouver, ISO, and other styles
6

Ames, W. F. "Underwater acoustic data processing." Mathematics and Computers in Simulation 31, no. 6 (February 1990): 594. http://dx.doi.org/10.1016/0378-4754(90)90066-r.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Kuzminets, Nikolai, and Yurii Dubovenko. "Constraints of optimization of statistical analysis of data of engineering monitoring of transport networks." Automobile Roads and Road Construction, no. 109 (2021): 157–65. http://dx.doi.org/10.33744/0365-8171-2021-109-157-165.

Full text
Abstract:
Time series from the technical monitoring of transportation networks include interference and omissions, so their analysis requires special statistical treatment. Known statistical packages do not provide a full processing cycle for large time series, and a linear timeline for processing periodic data is not available in digital statistics packages. The sliding-window approach is suitable for processing interrupted time series; its disadvantages are a restriction on the length of the series and sensitivity to data gaps. The time series processing graph therefore needs internal optimization. The necessary steps for optimizing the time series processing graph are determined: storing data in an internal database, building data samples on a single time scale, sampling based on the meta-description of the series, averaging in a sliding window, calendar bindings and omission masks, generalization of graphs, storage of graphics in vector format, and so on. The conditions for the study of the series are identified: a database, a calendar structure for the data, processing of gaps, a package of numerical methods of analysis, and processing in a sliding window.
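The "averaging in a sliding window" step with omission masks mentioned in the abstract can be sketched generically (this is not the authors' processing graph; the function name is illustrative):

```python
import numpy as np

def sliding_mean_with_gaps(series, width):
    # Averaging in a sliding window while tolerating omissions (NaNs):
    # each output point is the mean of the valid samples in its
    # window, and windows containing only gaps stay NaN.
    half = width // 2
    out = np.full(len(series), np.nan)
    for i in range(len(series)):
        window = series[max(0, i - half):i + half + 1]
        valid = window[~np.isnan(window)]
        if valid.size:
            out[i] = valid.mean()
    return out
```

A series with a gap, such as `[1.0, nan, 3.0]` with `width=3`, is smoothed to `[1.0, 2.0, 3.0]` because the gap is masked out rather than propagated.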
APA, Harvard, Vancouver, ISO, and other styles
8

Siregar, Hans Elmaury Andreas. "GROUND PENETRATING RADAR DATA ANALYSIS BY USING MODELLING WITH FINITE DIFFERENCE METHOD: CASE STUDY IN BELAWAN PORT." Buletin Sumber Daya Geologi 11, no. 1 (May 10, 2016): 15–24. http://dx.doi.org/10.47599/bsdg.v11i1.7.

Full text
Abstract:
Ground-penetrating radar (GPR) is a non-destructive geophysical method well suited to identifying subsurface objects at penetration depths of less than 70 meters. High data resolution, together with relatively quick and manageable data acquisition, makes it a convenient supporting method for adding near-surface data to other geophysical methods. The depth penetration of GPR varies with antenna frequency. To obtain optimum depth penetration before field data acquisition, numerical simulations should be carried out to establish the antenna frequency and processing technique to be used, so that the depth of the target zone can be reached. The finite difference (FD) method is one of the numerical analysis techniques most widely used to solve differential equations. Using the FD method, the solution of the electromagnetic wave equation can be obtained and the numerical simulation displayed as an image. From these simulated radar images, the relationship between frequency and depth penetration in the media used is obtained. The media used in this simulation are sand, clay, sandy clay, clayey sand, and concrete. From the numerical simulations in this research, we conclude that the GPR method is able to distinguish the boundary layers between these media. A processing technique was developed to establish suitable processing stages for a high-resolution radar image that can be interpreted. The data acquisition and processing techniques from the simulation were implemented in a field experiment and were very helpful for understanding the characteristics of the GPR signal in subsurface mapping at Belawan port.
APA, Harvard, Vancouver, ISO, and other styles
9

Novikov-Borodin, Andrey. "Experimental Data Processing Using Shift Methods." EPJ Web of Conferences 226 (2020): 03014. http://dx.doi.org/10.1051/epjconf/202022603014.

Full text
Abstract:
Numerical methods of step-by-step and combined shifts are proposed for correcting and reconstructing experimental data convolved with different blur kernels. The methods use a shift technique for direct deconvolution of experimental data; they are fast and effective for data reconstruction, especially in the case of discrete measurements. A comparative analysis of the proposed methods is presented, and reconstruction inaccuracies with different blur kernels, data volumes, and noise levels are estimated. Examples are given of using the shift methods to process statistical data from TOF neutron spectrometers and to plan proton therapy treatment. Multi-dimensional data processing using shift methods is also considered, with examples of restoring 2D images blurred by uniform motion and distorted with Gaussian-like blur kernels.
APA, Harvard, Vancouver, ISO, and other styles
10

Dobronets, Boris, and Olga Popova. "Numerical Analysis for Problems of Remote Sensing with Random Input Data." E3S Web of Conferences 75 (2019): 01004. http://dx.doi.org/10.1051/e3sconf/20197501004.

Full text
Abstract:
The study is devoted to processing remote sensing data using models with random input data. In this article we propose a new approach to calculating functions with random arguments: a technique of fast computation based on parallel computing and numerical probabilistic analysis. To calculate a function with random arguments we apply one of the basic concepts of numerical probabilistic analysis, the probabilistic extension. To implement the technique of fast computation, a new method based on parallel recursive calculations is proposed.
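For orientation, the "function with random arguments" being computed can be illustrated with a plain Monte Carlo baseline; the paper's probabilistic extension replaces sampling with arithmetic on density approximations, so this sketch (with names of my choosing) only shows the problem, not the authors' method:

```python
import numpy as np

def function_of_random_args_mc(f, samplers, n=100_000, seed=0):
    # Baseline Monte Carlo estimate of the distribution of
    # f(X1, ..., Xk) for independent random arguments. Numerical
    # probabilistic analysis, as used in the paper, computes the
    # probabilistic extension of f directly instead of sampling.
    rng = np.random.default_rng(seed)
    args = [s(rng, n) for s in samplers]
    return f(*args)

# Example: the sum of two independent U(0, 1) arguments.
vals = function_of_random_args_mc(
    lambda a, b: a + b,
    [lambda rng, n: rng.uniform(0, 1, n),
     lambda rng, n: rng.uniform(0, 1, n)])
```

The sample mean of `vals` converges to 1, the mean of the triangular distribution of U(0,1) + U(0,1).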
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Numerical analysis – Data processing"

1

Chen, Chuan. "Numerical algorithms for data processing and analysis." HKBU Institutional Repository, 2016. https://repository.hkbu.edu.hk/etd_oa/277.

Full text
Abstract:
Magnetic nanoparticles (NPs) with sizes ranging from 2 to 20 nm in diameter represent an important class of artificial nanostructured materials, since the NP size is comparable to the size of a magnetic domain. They have potential applications in data storage, catalysis, permanent magnetic nanocomposites, and biomedicine. To begin with, a brief overview of the background of Fe-based bimetallic NPs and their applications in data storage and catalysis is presented in Chapter 1. In Chapter 2, L10-ordered FePt NPs with high coercivity were directly prepared from a novel bimetallic acetylenic alternating copolymer P3 by a one-step pyrolysis method without post-thermal annealing. The chemical ordering, morphology, and magnetic properties were studied. Magnetic measurements showed that a record coercivity of 3.6 T (1 T = 10 kOe) was obtained in FePt NPs. Comparison of the FePt NPs synthesized under Ar and Ar/H2 proved that the incorporation of H2 affects nucleation and promotes the growth of FePt NPs. The L10 FePt NPs were also successfully patterned on a Si substrate by nanoimprint lithography (NIL). The highly ordered ferromagnetic arrays on a desired substrate for bit-patterned media (BPM) were studied and promise bright prospects for progress in data storage. Furthermore, we also report a new FePt-containing metallopolymer P4 as the single-source precursor for metal alloy NP synthesis, where the metal fractions are on the side chain and the ratio can be easily controlled. This polymer was synthesized from the random copolymer poly(styrene-4-ethynylstyrene) PES-PS and the bimetallic precursor TPy-FePt ([Pt(4’-ferrocenyl-(N^N^N))Cl]Cl) by a Sonogashira coupling reaction. After pyrolysis of P4, the stoichiometry of Fe and Pt atoms in the synthesized NPs is close to 1:1, which is more precise than using TPy-FePt as the precursor.
Polymer P4 was also more favorable than TPy-FePt for patterning by high-throughput NIL. Ferromagnetic nanolines, potentially useful as bit-patterned magnetic recording media, were successfully fabricated from P4 and fully characterized. In Chapter 3, a novel organometallic compound TPy-FePd-1 [4’-ferrocenyl-(N^N^N)PdOCOCH3] was synthesized and structurally characterized; its crystal structure showed a coplanar Pd center and a Pd-Pd distance of 3.17 Å. The two metals Fe and Pd were evenly embedded in the molecular dimension and remained tightly coupled to each other owing to the metal-metal (Pd-Pd) and ligand π-π stacking interactions, all of which facilitated nucleation without sintering during preparation of the FePd NPs. Ferromagnetic FePd NPs of ca. 16.2 nm in diameter were synthesized by one-pot pyrolysis of the single-source precursor TPy-FePd-1 under getter gas with metal-ion reduction and minimal nanoparticle coalescence; these NPs have a nearly equal atomic ratio (Fe/Pd = 49/51) and exhibited a coercivity of 4.9 kOe at 300 K. By imprinting a mixed chloroform solution of TPy-FePd-1 and polystyrene (PS) on Si, reproducible patterning of nanochains was achieved, owing to the excellent self-assembly properties and the incompatibility between TPy-FePd-1 and PS under slow evaporation of the solvents. FePd nanochains with an average length of ca. 260 nm were evenly dispersed around the PS nanospheres by self-assembly of TPy-FePd-1. In addition, the orientation of the FePd nanochains could be controlled by tuning the morphology of the PS, and their length was shorter in the confined space of the PS. The organic skeletons of TPy-FePd-1 and PS were carbonized and removed by pyrolysis under Ar/H2 (5 wt%), leaving only magnetic FePd alloy nanochains with a domain structure. In addition, a bimetallic complex TPy-FePd-2 was prepared and used as a single-source precursor to synthesize ferromagnetic FePd NPs by one-pot pyrolysis.
The resultant FePd NPs have a mean size of 19.8 nm and show a coercivity of 1.02 kOe. In addition, the functional group (-NCMe) in TPy-FePd-2 is easily substituted by a pyridyl group. A random copolymer PS-P4VP was used to coordinate with TPy-FePd-2, and the as-synthesized polymer dispersed the metal fraction evenly along the flexible chain. Fabrication of FePd NPs from these polymers was also investigated, and the particle size could be easily controlled by tuning the metal fraction in the polymer: FePd NPs with mean sizes of 10.9, 14.2, and 17.9 nm were prepared from the metallopolymer with 5 wt%, 10 wt%, and 20 wt% metal fractions, respectively. In Chapter 4, molybdenum disulfide (MoS2) monolayers decorated with ferromagnetic FeCo NPs on the edges were synthesized through one-step pyrolysis of precursor molecules in an argon atmosphere. The FeCo precursor was spin-coated on MoS2 monolayers grown on a Si/SiO2 substrate. Highly ordered body-centered cubic (bcc) FeCo NPs were obtained under optimized pyrolysis conditions, possessing coercivity of up to 1000 Oe at room temperature. The FeCo NPs were well positioned along the edge sites of the MoS2 monolayers. The vibration modes of the Mo and S atoms were confined after FeCo NP decoration, as characterized by Raman spectroscopy. These MoS2 monolayers decorated with ferromagnetic FeCo NPs can be used as novel catalytic materials with magnetic recycling capabilities. The sizes of the NPs grown on MoS2 monolayers are more uniform than those from other preparation routes. Finally, the optimized pyrolysis temperature and conditions provide recipes for decorating related noble catalytic materials. Chapters 5 and 6 present the concluding remarks and the experimental details of the work described in Chapters 2-4.
APA, Harvard, Vancouver, ISO, and other styles
2

Jia, Hong. "Clustering of categorical and numerical data without knowing cluster number." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1495.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Siu, Ka Wai. "Numerical algorithms for data analysis with imaging and financial applications." HKBU Institutional Repository, 2018. https://repository.hkbu.edu.hk/etd_oa/550.

Full text
Abstract:
In this thesis, we study models and numerical algorithms for data analysis, with applications to image processing and financial forecasting. The thesis is composed of two parts, namely tensor regression and data assimilation methods for image restoration.
We start by investigating the tensor regression problem in Chapter 2. It generalizes classical regression in order to adopt and analyze much more information by using multi-dimensional arrays. Since the regression problem admits multiple solutions, we propose a regularized tensor regression model. By imposing a low-rank property on the solution and considering the structure of the tensor product, we develop an algorithm suitable for scalable implementations. The regularization method is used to select useful solutions depending on the application. The proposed model is solved by the alternating minimization method, and we prove the convergence of the objective function values and iterates by the majorization-minimization (MM) technique. We study different factors that affect the performance of the algorithm, including sample size, solution rank, and noise level. Applications include image compression and financial forecasting.
In Chapter 3, we apply filtering methods from data assimilation to image restoration problems. Traditionally, data assimilation methods optimally combine a predictive state from a dynamical system with partial real observations; the motivation is to improve the model forecast using real observations. We construct an artificial dynamics for non-blind deblurring problems. By making use of the spatial information of a single image, a span of ensemble members is constructed. A two-stage use of the ensemble transform Kalman filter (ETKF) is adopted to deblur corrupted images. The theoretical background of the ETKF and the use of artificial dynamics via the stage augmentation method are provided. Numerical experiments include image and video processing.
Concluding remarks and a discussion of future extensions are included in Chapter 4.
APA, Harvard, Vancouver, ISO, and other styles
4

Jones, Jonathan A. "Nuclear magnetic resonance data processing methods." Thesis, University of Oxford, 1992. http://ora.ox.ac.uk/objects/uuid:7df97c9a-4e65-4c10-83eb-dfaccfdccefe.

Full text
Abstract:
This thesis describes the application of a wide variety of data processing methods, in particular the Maximum Entropy Method (MEM), to data from Nuclear Magnetic Resonance (NMR) experiments. Chapter 1 provides a brief introduction to NMR and to data processing, which is developed in chapter 2. NMR is described in terms of the classical model due to Bloch, and the principles of conventional (Fourier transform) data processing developed. This is followed by a description of less conventional techniques. The MEM is derived on several grounds, and related to both Bayesian reasoning and Shannon information theory. Chapter 3 describes several methods of evaluating the quality of NMR spectra obtained by a variety of data processing techniques; the simple criterion of spectral appearance is shown to be completely unsatisfactory. A Monte Carlo method is described which allows several different techniques to be compared, and the relative advantages of Fourier transformation and the MEM are assessed. Chapter 4 describes in vivo NMR, particularly the application of the MEM to data from Phase Modulated Rotating Frame Imaging (PMRFI) experiments. In this case the conventional data processing is highly unsatisfactory, and MEM processing results in much clearer spectra. Chapter 5 describes the application of a range of techniques to the estimation and removal of splittings from NMR spectra. The various techniques are discussed using simple examples, and then applied to data from the amino acid iso-leucine. The thesis ends with five appendices which contain historical and philosophical notes, detailed calculations pertaining to PMRFI spectra, and a listing of the MEM computer program.
APA, Harvard, Vancouver, ISO, and other styles
5

Choy, Wing Yiu. "Using numerical methods and artificial intelligence in NMR data processing and analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0024/NQ50131.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Choy, Wing Yiu 1969. "Using numerical methods and artificial intelligence in NMR data processing and analysis." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35864.

Full text
Abstract:
In this thesis, we applied both numerical methods and artificial intelligence techniques to NMR data processing and analysis. First, a comprehensive study of the Iterative Quadratic Maximum Likelihood (IQML) method applied to NMR spectral parameter estimation is reported. The IQML method is compared to other conventional time-domain data analysis methods, and extensive simulations demonstrate its superior performance. We also develop a new technique, which uses a genetic algorithm with a priori knowledge, to improve the quantification of NMR spectral parameters. The proposed method outperforms the other conventional methods, especially in situations where signals are close in frequency and the signal-to-noise ratio of the FID is low.
The usefulness of the Singular Value Decomposition (SVD) method in NMR data processing is further exploited. A new two-dimensional spectral processing scheme based on SVD is proposed for suppressing strong diagonal peaks. The superior performance of this method is demonstrated on an experimental phase-sensitive COSY spectrum.
Finally, we studied the feasibility of using neural-network-predicted secondary structure information in NMR data analysis. Protein chemical shift databases are compiled and used together with the predicted protein secondary structure to improve the accuracy of protein chemical shift prediction. The potential of this strategy for amino acid classification in NMR resonance assignment is explored.
APA, Harvard, Vancouver, ISO, and other styles
7

Howe, Bill. "Gridfields: Model-Driven Data Transformation in the Physical Sciences." PDXScholar, 2006. https://pdxscholar.library.pdx.edu/open_access_etds/2676.

Full text
Abstract:
Scientists' ability to generate and store simulation results is outpacing their ability to analyze them via ad hoc programs. We observe that these programs exhibit an algebraic structure that can be used to facilitate reasoning and improve performance. In this dissertation, we present a formal data model that exposes this algebraic structure, then implement the model, evaluate it, and use it to express, optimize, and reason about data transformations in a variety of scientific domains. Simulation results are defined over a logical grid structure that allows a continuous domain to be represented discretely in the computer. Existing approaches for manipulating these gridded datasets are incomplete. The performance of SQL queries that manipulate large numeric datasets is not competitive with that of specialized tools, and the up-front effort required to deploy a relational database makes them unpopular for dynamic scientific applications. Tools for processing multidimensional arrays can only capture regular, rectilinear grids. Visualization libraries accommodate arbitrary grids, but no algebra has been developed to simplify their use and afford optimization. Further, these libraries are data dependent—physical changes to data characteristics break user programs. We adopt the grid as a first-class citizen, separating topology from geometry and separating structure from data. Our model is agnostic with respect to dimension, uniformly capturing, for example, particle trajectories (1-D), sea-surface temperatures (2-D), and blood flow in the heart (3-D). Equipped with data, a grid becomes a gridfield. We provide operators for constructing, transforming, and aggregating gridfields that admit algebraic laws useful for optimization. We implement the model by analyzing several candidate data structures and incorporating their best features. 
We then show how to deploy gridfields in practice by injecting the model as middleware between heterogeneous, ad hoc file formats and a popular visualization library. In this dissertation, we define, develop, implement, evaluate and deploy a model of gridded datasets that accommodates a variety of complex grid structures and a variety of complex data products. We evaluate the applicability and performance of the model using datasets from oceanography, seismology, and medicine and conclude that our model-driven approach offers significant advantages over the status quo.
APA, Harvard, Vancouver, ISO, and other styles
8

Kirsch, Matthew Robert. "Signal Processing Algorithms for Analysis of Categorical and Numerical Time Series: Application to Sleep Study Data." Case Western Reserve University School of Graduate Studies / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1278606480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Dannenberg, Matthew. "Pattern Recognition in High-Dimensional Data." Scholarship @ Claremont, 2016. https://scholarship.claremont.edu/hmc_theses/76.

Full text
Abstract:
Vast amounts of data are produced all the time. Yet this data does not easily equate to useful information: extracting information from large amounts of high dimensional data is nontrivial. People are simply drowning in data. A recent and growing source of high-dimensional data is hyperspectral imaging. Hyperspectral images allow for massive amounts of spectral information to be contained in a single image. In this thesis, a robust supervised machine learning algorithm is developed to efficiently perform binary object classification on hyperspectral image data by making use of the geometry of Grassmann manifolds. This algorithm can consistently distinguish between a large range of even very similar materials, returning very accurate classification results with very little training data. When distinguishing between dissimilar locations like crop fields and forests, this algorithm consistently classifies more than 95 percent of points correctly. On more similar materials, more than 80 percent of points are classified correctly. This algorithm will allow for very accurate information to be extracted from these large and complicated hyperspectral images.
APA, Harvard, Vancouver, ISO, and other styles
10

Dorica, Mark. "Novel electromagnetic design system enhancements using computational intelligence strategies." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=102972.

Full text
Abstract:
This thesis presents a wide spectrum of novel extensions and enhancements to critical components of modern electromagnetic analysis and design systems. These advancements are achieved through the use of computational intelligence, which comprises neural networks, evolutionary algorithms, and fuzzy systems. These tools have been proven in myriad industrial applications ranging from computer network optimization to heavy machinery control.
The analysis module of an electromagnetic analysis and design system typically comprises mesh generation and mesh improvement stages. A novel method for discovering optimal orderings of mesh improvement operators is proposed and leads to a suite of novel mesh improvement techniques. The new techniques outperform existing methods in both mesh quality improvement and computational cost.
The remaining contributions pertain to the design module. Specifically, a novel space mapping method is proposed, which allows for the optimization of response surface models. The method is able to combine the accuracy of fine models with the speed of coarse models. Optimal results are achieved for a fraction of the cost of the standard optimization approach.
Models built from computational data often do not take into consideration the intrinsic characteristics of the data. A novel model building approach is proposed, which customizes the model to the underlying responses and accelerates searching within the model. The novel approach is able to significantly reduce model error and accelerate optimization.
Automatic design schemes for 2D structures typically preconceive the final design or create an intractable search space. A novel non-preconceived approach is presented, which relies on a new genome structure and genetic operators. The new approach is capable of a threefold performance improvement and improved manufacturability.
Automatic design of 3D wire structures is often based on "in-series" architectures, which limit performance. A novel technique for automatic creative design of 3D wire antennas is proposed. The antenna structures are grown from a starting wire and invalid designs are avoided. The high quality antennas that emerge from this bio-inspired approach could not have been obtained by a human designer and are able to outperform standard designs.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Numerical analysis – Data processing"

1

Chartered Association of Certified Accountants, ed. Numerical analysis & data processing. 3rd ed. London: Brierley Price Prior, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Woolf, Emile (1938-), Suresh Tanna, and Karam Singh, eds. Numerical analysis and data processing. London: Macdonald & Evans, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hamming, R. W. Introduction to applied numerical analysis. Mineola, N.Y: Dover Publications, 2012.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kincaid, David (David Ronald), ed. Numerical mathematics and computing. 7th ed. Boston, MA: Brooks/Cole, Cengage Learning, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Chartered Association of Certified Accountants, ed. Numerical analysis and data processing: Paper 1.5. London: BPP, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lindfield, G. R. Microcomputers in numerical analysis. Chichester, West Sussex, England: E. Horwood, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lindfield, G. R., ed. Numerical methods using MATLAB. New York: Ellis Horwood, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lindfield, G. R., ed. Numerical methods using MATLAB. 2nd ed. Upper Saddle River, NJ: Prentice Hall, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hamming, R. W. Introduction to applied numerical analysis. New York: Hemisphere Pub. Corp., 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Björck, Åke (1934-), ed. Numerical methods in scientific computing. Philadelphia: Society for Industrial and Applied Mathematics, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Numerical analysis – Data processing"

1

Johansson, Robert. "Data Processing and Analysis." In Numerical Python, 285–311. Berkeley, CA: Apress, 2015. http://dx.doi.org/10.1007/978-1-4842-0553-2_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Johansson, Robert. "Data Processing and Analysis." In Numerical Python, 405–41. Berkeley, CA: Apress, 2018. http://dx.doi.org/10.1007/978-1-4842-4246-9_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ó Ruanaidh, Joseph J. K., and William J. Fitzgerald. "Integration in Bayesian Data Analysis." In Numerical Bayesian Methods Applied to Signal Processing, 161–93. New York, NY: Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0717-7_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Atoyan, A., and J. Patera. "Continuous extension of the discrete cosine transform, and its applications to data processing." In Group Theory and Numerical Analysis, 1–15. Providence, Rhode Island: American Mathematical Society, 2005. http://dx.doi.org/10.1090/crmp/039/01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Wang, Liang, and Jianxin Zhao. "Performance Accelerators." In Architecture of Advanced Numerical Analysis Systems, 191–213. Berkeley, CA: Apress, 2022. http://dx.doi.org/10.1007/978-1-4842-8853-5_7.

Full text
Abstract:
The Graphics Processing Unit (GPU) has become one of the most important types of hardware accelerator. It was designed to render 3D graphics and video and remains core to the gaming industry. Beyond creating stunning visual effects, programmers also exploit the GPU's strength in parallel processing to perform computing-heavy tasks in many fields, such as health data analytics, physical simulation, and artificial intelligence.
APA, Harvard, Vancouver, ISO, and other styles
6

Caruso, Giulia, Adelia Evangelista, and Stefano Antonio Gattone. "Profiling visitors of a national park in Italy through unsupervised classification of mixed data." In Proceedings e report, 135–40. Florence: Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-304-8.27.

Full text
Abstract:
Cluster analysis has long been an effective tool for analysing data, and several disciplines, such as marketing, psychology and computer science, have benefited from its contribution over time. Traditionally, this kind of algorithm handles only numerical or only categorical data at a time. In this work, instead, we analyse a dataset composed of mixed data, that is, both numerical and categorical variables. More precisely, we focus on profiling visitors of the National Park of Majella in the Abruzzo region of Italy, whose observations are characterized by variables such as gender, age, profession, expectations and satisfaction rate with park services. Applying a standard clustering procedure would be wholly inappropriate in this case. Therefore, we propose an unsupervised classification of mixed data, a procedure capable of processing numerical and categorical variables simultaneously, yielding truly valuable information. In conclusion, our application emphasizes how cluster analysis for mixed data can uncover particularly informative patterns, laying the groundwork for accurate customer profiling as the starting point for a detailed marketing analysis.
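A minimal sketch of the kind of mixed-data clustering this abstract describes, assuming a k-prototypes-style cost (squared Euclidean distance on numeric columns plus a weighted mismatch count on categorical columns). All function and parameter names are hypothetical, and this is not the authors' exact procedure:

```python
import numpy as np

def k_prototypes(num, cat, k, gamma=1.0, iters=10):
    """Sketch of k-prototypes clustering for mixed data.
    num: (n, p) numeric array; cat: (n, q) categorical array.
    Cost per cluster = squared Euclidean on num + gamma * mismatches on cat."""
    n = num.shape[0]
    idx = np.linspace(0, n - 1, k).astype(int)  # deterministic init for the sketch
    cen_num, cen_cat = num[idx].copy(), cat[idx].copy()
    labels = np.zeros(n, dtype=int)
    for _ in range(iters):
        d = (((num[:, None, :] - cen_num[None]) ** 2).sum(-1)
             + gamma * (cat[:, None, :] != cen_cat[None]).sum(-1))
        labels = d.argmin(1)
        for j in range(k):
            m = labels == j
            if m.any():
                cen_num[j] = num[m].mean(0)
                # categorical "centroid" = per-column mode
                for c in range(cat.shape[1]):
                    vals, cnt = np.unique(cat[m, c], return_counts=True)
                    cen_cat[j, c] = vals[cnt.argmax()]
    return labels
```

The `gamma` weight balances the two cost terms; in practice it would be tuned to the scales of the numeric variables.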
APA, Harvard, Vancouver, ISO, and other styles
7

Yang, Yixin, Jinbi Ye, Chuanxiong Hong, Chaozi Chen, Liyu Lian, and Lanyu Mao. "Numerical Study of Lateral Bearing Capacity of Conical Composite Pile Based on Data Analysis." In 2020 International Conference on Data Processing Techniques and Applications for Cyber-Physical Systems, 93–101. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1726-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Jiang, Hongliang, Chaobo Lu, Chunfa Xiong, and Mengkun Ran. "Seismic Data Denoising Analysis Based on Monte Carlo Block Theory." In Lecture Notes in Civil Engineering, 339–49. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-2532-2_28.

Full text
Abstract:
Denoising of seismic data has always been an important focus in seismic exploration and is critical for the processing and interpretation of seismic data. As exploration environments and targets grow more complex, seismic records containing strong noise and weak-amplitude events often carry many weak feature signals. These weak-amplitude events are highly susceptible to noise, and the useful signal is often submerged in the background, seriously degrading the precision of seismic data interpretation. This paper studies a seismic data denoising method based on dictionary learning with Monte Carlo block selection: expected data blocks are sampled to train a more accurate MOD dictionary, yielding higher-quality denoising of the seismic data. The Monte Carlo block strategy is compared through example analyses with regular block selection and random block selection within the same dictionary-learning algorithm. The numerical results show that the Monte Carlo approach has better denoising ability: the denoised results have a higher SNR and effectively preserve the weak signal features of the data. In terms of computational efficiency, the proposed method requires less time, verifying its feasibility and effectiveness.
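As a small illustration of the block-selection stage the abstract refers to, the following sketch draws random training patches from a 2D section and scores denoising quality by SNR. The MOD dictionary-learning step itself is omitted, and all names are hypothetical:

```python
import numpy as np

def sample_patches(section, patch_size, n_patches, seed=0):
    """Monte Carlo block selection: draw n_patches random square blocks
    from a 2D seismic section and flatten each into a training vector
    for subsequent dictionary learning."""
    rng = np.random.default_rng(seed)
    rows = rng.integers(0, section.shape[0] - patch_size + 1, n_patches)
    cols = rng.integers(0, section.shape[1] - patch_size + 1, n_patches)
    return np.stack([section[r:r + patch_size, c:c + patch_size].ravel()
                     for r, c in zip(rows, cols)])

def snr_db(clean, denoised):
    """Signal-to-noise ratio in dB, used to score denoising quality."""
    err = clean - denoised
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))
```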
APA, Harvard, Vancouver, ISO, and other styles
9

Chen, Yang. "Back Analysis and Numerical Simulation of Surrounding Rock Parameters of Tunnel Close Construction Based on PSO-BP Algorithm." In 2020 International Conference on Data Processing Techniques and Applications for Cyber-Physical Systems, 1237–43. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1726-3_157.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Holzer, Lorenz, Philip Marmet, Mathias Fingerle, Andreas Wiegmann, Matthias Neumann, and Volker Schmidt. "Image Based Methodologies, Workflows, and Calculation Approaches for Tortuosity." In Tortuosity and Microstructure Effects in Porous Media, 91–159. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-30477-4_4.

Full text
Abstract:
In this chapter, modern methodologies for characterization of tortuosity are thoroughly reviewed. Thereby, 3D microstructure data is considered as the most relevant basis for characterization of all three tortuosity categories, i.e., direct geometric, indirect physics-based and mixed tortuosities. The workflows for tortuosity characterization consist of the following methodological steps, which are discussed in great detail: (a) 3D imaging (X-ray tomography, FIB-SEM tomography and serial sectioning, electron tomography and atom probe tomography), (b) qualitative image processing (3D reconstruction, filtering, segmentation) and (c) quantitative image processing (e.g., morphological analysis for determination of direct geometric tortuosity). (d) Numerical simulations are used for the estimation of effective transport properties and associated indirect physics-based tortuosities. Mixed tortuosities are determined by geometrical analysis of flow fields from numerical transport simulation. (e) Microstructure simulation by means of stochastic geometry or discrete element modeling enables the efficient creation of numerous virtual 3D microstructure models, which can be used for parametric studies of micro-macro relationships (e.g., in the context of digital materials design or digital rock physics). For each of these methodologies, the underlying principles as well as the current trends in technical evolution and associated applications are reviewed. In addition, a list with 75 software packages is presented, and the corresponding options for image processing, numerical simulation and stochastic modeling are discussed. Overall, the information provided in this chapter shall help the reader to find suitable methodologies and tools that are necessary for efficient and reliable characterization of specific tortuosity types.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Numerical analysis – Data processing"

1

Kumkov, Sergey I., Alexander A. Redkin, Svetlana V. Pershina, Evgeniya A. Il’ina, Alexander A. Kataev, and Yury P. Zaikov. "Interval approach to processing the noised thermophysical data." In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2019. AIP Publishing, 2020. http://dx.doi.org/10.1063/5.0028149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kaliyagina, L., J. Mopoz, Theodore E. Simos, George Psihoyios, and Ch Tsitouras. "Methods of Empirical Data Processing in Developing Processes." In ICNAAM 2010: International Conference of Numerical Analysis and Applied Mathematics 2010. AIP, 2010. http://dx.doi.org/10.1063/1.3497818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fang Chen, Markus Flatken, Ingrid Hotz, and Andreas Gerndt. "In-situ processing and interactive visualization for large-scaled numerical simulations." In 2014 IEEE 4th Symposium on Large Data Analysis and Visualization (LDAV). IEEE, 2014. http://dx.doi.org/10.1109/ldav.2014.7013211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ivanov, Oleg Yu, and Andrey V. Sosnovsky. "Modification of the frost speckle-noise filter for solving problems of radar data processing." In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2020. AIP Publishing, 2022. http://dx.doi.org/10.1063/5.0082351.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Xiaofei, Xu, Li Denghua, Wang Hehu, and Zeng Rui. "A Detecting Method based on Data Fusion Processing And Numerical Analysis." In 2007 8th International Conference on Electronic Measurement and Instruments. IEEE, 2007. http://dx.doi.org/10.1109/icemi.2007.4351258.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Yi, and Wenbin Liu. "Regression Analysis Model Based on Data Processing and MATLAB Numerical Simulation." In 2022 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC). IEEE, 2022. http://dx.doi.org/10.1109/ipec54454.2022.9777319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Alakus, Talha Burak, Bihter Das, and Ibrahim Turkoglu. "DNA encoding with entropy based numerical mapping technique for phylogenetic analysis." In 2019 International Artificial Intelligence and Data Processing Symposium (IDAP). IEEE, 2019. http://dx.doi.org/10.1109/idap.2019.8875937.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gabriele, Fatuzzo, Mangiameli Michele, Mussumeci Giuseppe, and Zito Salvatore. "Laser scanner data processing and 3D modeling using a free and open source software." In PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON NUMERICAL ANALYSIS AND APPLIED MATHEMATICS 2014 (ICNAAM-2014). AIP Publishing LLC, 2015. http://dx.doi.org/10.1063/1.4912990.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Polyakov, M. V., and A. S. Astakhov. "Mathematical Processing and Computer Analysis of Data from Numerical Modelling of Radiothermometric Medical Examinations." In Mathematical Biology and Bioinformatics. Pushchino: IMPB RAS - Branch of KIAM RAS, 2020. http://dx.doi.org/10.17537/icmbb20.23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vétel, Jérôme, André Garon, and Dominique Pelletier. "Analysis of Complex Flows From Experimental Data and Numerical Experiments." In ASME 2010 3rd Joint US-European Fluids Engineering Summer Meeting collocated with 8th International Conference on Nanochannels, Microchannels, and Minichannels. ASMEDC, 2010. http://dx.doi.org/10.1115/fedsm-icnmm2010-30033.

Full text
Abstract:
Fluid mechanics is considered a privileged field in physics because its phenomena can be made visible. This is unfortunately not the case in turbulence, where diffusion and mixing of passive tracers are enhanced by turbulent transport. Consequently, the analysis of the rich flow physics provided by direct numerical simulations (DNS) and by modern optical diagnostic techniques requires advanced post-processing tools to extract fine flow details. In this context, this paper reviews the most recent techniques used to reveal coherent structures and their dynamics in turbulent flows. In particular, results obtained with standard Eulerian techniques are compared to those obtained from a more recent Lagrangian technique. Even though the latter can provide finer details, the two methods are found to be complementary. This is illustrated with DNS results and with experimental data, including planar measurements as well as time-resolved measurements converted to quasi-instantaneous volumetric data by using the Taylor hypothesis.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Numerical analysis – Data processing"

1

Blundell, S. User guide : the DEM Breakline and Differencing Analysis Tool—gridded elevation model analysis with a convenient graphical user interface. Engineer Research and Development Center (U.S.), August 2022. http://dx.doi.org/10.21079/11681/45040.

Full text
Abstract:
Gridded elevation models of the earth’s surface derived from airborne lidar data or other sources can provide qualitative and quantitative information about the terrain and its surface features through analysis of the local spatial variation in elevation. The DEM Breakline and Differencing Analysis Tool was developed to extract and display micro-terrain features and vegetative cover based on the numerical modeling of elevation discontinuities or breaklines (breaks-in-slope), slope, terrain ruggedness, local surface optima, and the local elevation difference between first surface and bare earth input models. Using numerical algorithms developed in-house at the U.S. Army Engineer Research and Development Center, Geospatial Research Laboratory, various parameters are calculated for each cell in the model matrix in an initial processing phase. The results are combined and thresholded by the user in different ways for display and analysis. A graphical user interface provides control of input models, processing, and display as color-mapped overlays. Output displays can be saved as images, and the overlay data can be saved as raster layers for input into geographic information systems for further analysis.
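The per-cell parameters described above can be illustrated with a minimal sketch, assuming central-difference slope and a mean-absolute-difference ruggedness index. This is not the ERDC tool's actual implementation; all names and thresholds are hypothetical:

```python
import numpy as np

def slope_and_tri(dem, cell=1.0):
    """Per-cell slope magnitude (central differences on an edge-padded
    grid) and terrain ruggedness index (mean |elevation difference|
    to the 8 neighbors) for a 2D elevation model."""
    z = np.pad(dem, 1, mode='edge')
    dzdx = (z[1:-1, 2:] - z[1:-1, :-2]) / (2 * cell)
    dzdy = (z[2:, 1:-1] - z[:-2, 1:-1]) / (2 * cell)
    slope = np.hypot(dzdx, dzdy)
    neigh = [z[i:i + dem.shape[0], j:j + dem.shape[1]]
             for i in range(3) for j in range(3) if not (i == 1 and j == 1)]
    tri = np.mean([np.abs(n - dem) for n in neigh], axis=0)
    return slope, tri

def breakline_mask(dem, slope_thresh):
    """Threshold the slope field to flag breaks-in-slope candidate cells."""
    slope, _ = slope_and_tri(dem)
    return slope > slope_thresh
```

In the actual tool, masks like this one would be combined with the other per-cell parameters and user-chosen thresholds for overlay display.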
APA, Harvard, Vancouver, ISO, and other styles
2

DeMarle, David, and Andrew Bauer. In situ visualization with temporal caching. Engineer Research and Development Center (U.S.), January 2022. http://dx.doi.org/10.21079/11681/43042.

Full text
Abstract:
In situ visualization is a technique in which plots and other visual analyses are performed in tandem with numerical simulation processes in order to better utilize HPC machine resources. Especially with unattended exploratory engineering simulation analyses, events may occur during the run that justify supplemental processing. Sometimes, though, when the events do occur, the phenomena of interest include the physics that precipitated them, and this may be the key insight into understanding what is being simulated. In situ temporal caching is the temporary storing of produced data in memory for possible later analysis, including time-varying visualization. The later analysis and visualization still occur during the simulation run, but not until after the significant events have been detected. In this article, we demonstrate how temporal caching can be used with in-line in situ visualization to reduce simulation run time while still capturing essential simulation results.
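The caching idea can be sketched in a few lines. This is a hedged illustration, not ParaView/Catalyst's actual API; the class and method names are invented:

```python
from collections import deque

class TemporalCache:
    """Sketch of in situ temporal caching: keep the last `capacity`
    simulation timesteps in memory; when an event is detected, hand back
    the cached history (the physics that preceded the event) for
    visualization, then keep running."""

    def __init__(self, capacity):
        # deque with maxlen silently drops the oldest entry when full
        self.buf = deque(maxlen=capacity)

    def push(self, step, state):
        """Record one timestep's data; old steps age out automatically."""
        self.buf.append((step, state))

    def flush_on_event(self):
        """Return the cached history for time-varying visualization and
        reset the cache for the rest of the run."""
        history = list(self.buf)
        self.buf.clear()
        return history
```

A capacity-bounded buffer keeps memory overhead fixed, which matters when the cache shares nodes with the simulation itself.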
APA, Harvard, Vancouver, ISO, and other styles
3

Lopez Aldama, Daniel, and Roberto Capote Noy. Processing La-139 in the Unresolved Resonance Region for the FENDL Library. IAEA Nuclear Data Section, January 2021. http://dx.doi.org/10.61092/iaea.mrpt-xx9q.

Full text
Abstract:
The analysis of a numerical benchmark for a pure 1-meter sphere of La-139 proposed by C. Konno, using ACE- and MATXS-formatted cross section files from the FENDL-3.1c library, revealed problems in the unresolved resonance region (URR). The total neutron flux computed with MCNP/ACE differed by up to 50% from the one calculated with ANISN/MATXS. The problem arose when the PURR module of NJOY2016 sampled total cross section values that were too small, mainly due to limitations of the processing method. A patch to NJOY2016 (PURR) correcting this issue was proposed and applied. The patch increases the neutron flux in the URR for Monte Carlo calculations (a 4% increase for the ENDF/B-VIII.0 evaluation vs. 30% for the FENDL-3.1c evaluation). The corresponding increase of the calculated neutron flux for deterministic codes ranges from 15% for the ENDF/B-VIII.0 data up to 50% for FENDL-3.1c. The agreement between deterministic and Monte Carlo benchmark results was significantly improved. This report documents the NJOY2016 patch and summarizes the main results.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhao, George, Grang Mei, Bulent Ayhan, Chiman Kwan, and Venu Varma. DTRS57-04-C-10053 Wave Electromagnetic Acoustic Transducer for ILI of Pipelines. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), March 2005. http://dx.doi.org/10.55274/r0012049.

Full text
Abstract:
In this project, Intelligent Automation, Incorporated (IAI) and Oak Ridge National Lab (ORNL) propose a novel and integrated approach to inspecting mechanical dents and metal loss in pipelines. It combines the state-of-the-art SH-wave Electromagnetic Acoustic Transducer (EMAT) technique with detailed numerical modeling, data collection instrumentation, and advanced signal processing and pattern classification to detect and characterize mechanical defects in underground pipeline transportation infrastructure. The technique has four components: (1) thorough guided wave modal analysis; (2) a recently developed three-dimensional (3-D) Boundary Element Method (BEM) for selecting the best operational conditions and extracting defect features; (3) ultrasonic Shear Horizontal (SH) wave EMAT sensor design and data collection; and (4) advanced signal processing algorithms, such as a nonlinear split-spectrum filter, Principal Component Analysis (PCA), and Discriminant Analysis (DA), for signal-to-noise-ratio enhancement, crack signature extraction, and pattern classification. This technology not only addresses the problems of existing methods, detecting mechanical dents and metal loss in pipelines consistently and reliably, but can also determine the defect shape and size to a certain extent.
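A toy version of the PCA-plus-discriminant stage can be sketched as follows. This is illustrative only: the project's actual filters, EMAT specifics, and classifier are not reproduced, and a nearest-centroid rule stands in for the discriminant step:

```python
import numpy as np

def pca_features(signals, n_comp):
    """Project signals (rows = measurements) onto their top principal
    components - a common SNR-enhancement / feature-extraction step."""
    x = signals - signals.mean(0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_comp].T

def nearest_centroid(train_feats, train_labels, test_feats):
    """Minimal discriminant step: assign each test feature vector to the
    class with the nearest mean feature vector."""
    classes = np.unique(train_labels)
    cents = np.array([train_feats[train_labels == c].mean(0) for c in classes])
    d = ((test_feats[:, None, :] - cents[None]) ** 2).sum(-1)
    return classes[d.argmin(1)]
```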
APA, Harvard, Vancouver, ISO, and other styles
5

Fowler, Kimberly M., Alison H. A. Colotelo, Janelle L. Downs, Kenneth D. Ham, Jordan W. Henderson, Sadie A. Montgomery, Christopher R. Vernon, and Steven A. Parker. Simplified Processing Method for Meter Data Analysis. Office of Scientific and Technical Information (OSTI), November 2015. http://dx.doi.org/10.2172/1255411.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Martin, Grant. X-ray Scattering Data Processing and Analysis. Office of Scientific and Technical Information (OSTI), June 2023. http://dx.doi.org/10.2172/1985841.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Jha, Alok K. Spar Floating Platform: Numerical Analysis and Comparison with Data. Fort Belvoir, VA: Defense Technical Information Center, June 1997. http://dx.doi.org/10.21236/ada390462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Christodoulou, Christos. Ionosphere-Thermosphere Coupling - Data Analysis and Numerical Simulation Study. Fort Belvoir, VA: Defense Technical Information Center, December 2013. http://dx.doi.org/10.21236/ada599154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hodgkiss, W. S. Shallow Water Adaptive Array Processing and Data Analysis. Fort Belvoir, VA: Defense Technical Information Center, September 1995. http://dx.doi.org/10.21236/ada306525.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Boyd, Timothy J. Processing and Analysis of SCICEX-2000 CTD Data. Fort Belvoir, VA: Defense Technical Information Center, September 2002. http://dx.doi.org/10.21236/ada628072.

Full text
APA, Harvard, Vancouver, ISO, and other styles