
Journal articles on the topic 'Extended Parallel Processing Model'



Consult the top 50 journal articles for your research on the topic 'Extended Parallel Processing Model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Russell, Jessica C., Sandi Smith, Wilma Novales, Lisa L. Massi Lindsey, and Joseph Hanson. "Use of the Extended Parallel Processing Model to Evaluate Culturally Relevant Kernicterus Messages." Journal of Pediatric Health Care 27, no. 1 (January 2013): 33–40. http://dx.doi.org/10.1016/j.pedhc.2011.06.003.

2

Chen, Yu, Dongxiang Lu, and Guy Courbebaisse. "A Parallel Image Registration Algorithm Based on a Lattice Boltzmann Model." Information 11, no. 1 (December 19, 2019): 1. http://dx.doi.org/10.3390/info11010001.

Abstract:
Image registration is a key pre-processing step for high-level image processing. However, given the required complexity and accuracy of the algorithm, image registration always has high time complexity. To speed up the registration algorithm, parallel computation is a relevant strategy, and parallelizing the algorithm by implementing the Lattice Boltzmann method (LBM) is a good candidate. In consequence, this paper proposes a novel parallel LBM-based model (LB model) for image registration. The main idea of our method consists of simulating the convection-diffusion equation through an LB model with an ad hoc collision term. By applying our method to computed tomography angiography (CTA) images, magnetic resonance (MR) images, natural scene images and artificial images, our model proves to be faster than classical methods and achieves accurate registration. In continuity with the 2D image registration model, the LB model is extended to 3D volume registration, providing excellent results in domains such as medical imaging. Our method can run on massively parallel architectures, ranging from embedded field programmable gate arrays (FPGAs) and digital signal processors (DSPs) up to graphics processing units (GPUs).
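
For readers unfamiliar with lattice Boltzmann schemes, the sketch below shows the core collide-and-stream loop of a D2Q5 lattice Boltzmann relaxation that diffuses a 2D scalar field; in a registration setting such a field would encode the intensity mismatch that drives the deformation. This is an illustrative stand-in written for this listing, not code from the cited paper, and the function name, BGK relaxation time tau and periodic boundaries are assumptions.

    import numpy as np

    def lbm_diffuse(field, tau=0.8, steps=100):
        """Diffuse a 2D field with a D2Q5 lattice Boltzmann (BGK) scheme."""
        w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])        # D2Q5 lattice weights
        ex = [0, 1, -1, 0, 0]                          # lattice velocities, x
        ey = [0, 0, 0, 1, -1]                          # lattice velocities, y
        f = w[:, None, None] * field[None, :, :]       # start at equilibrium
        for _ in range(steps):
            rho = f.sum(axis=0)                        # macroscopic quantity
            feq = w[:, None, None] * rho[None, :, :]   # equilibrium distributions
            f -= (f - feq) / tau                       # BGK collision
            for i in range(5):                         # streaming (periodic)
                f[i] = np.roll(np.roll(f[i], ex[i], axis=1), ey[i], axis=0)
        return f.sum(axis=0)

    # Example: smooth a random field; diffusivity grows with (tau - 0.5).
    smoothed = lbm_diffuse(np.random.rand(64, 64), tau=0.9, steps=50)

Because collision is purely local and streaming only touches nearest neighbours, every lattice site can be updated independently, which is what makes such schemes attractive for FPGA, DSP and GPU implementations.
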
3

Meadows, Cui Zhang, Charles W. Meadows, and Lu Tang. "The CDC and State Health Department Facebook Messages: An Examination of Frames and the Extended Parallel Processing Model." Communication Studies 71, no. 5 (October 9, 2020): 740–52. http://dx.doi.org/10.1080/10510974.2020.1819839.

4

ISHII, RENATO P., RODRIGO F. DE MELLO, LUCIANO J. SENGER, MARCOS J. SANTANA, REGINA H. C. SANTANA, and LAURENCE TIANRUO YANG. "IMPROVING SCHEDULING OF COMMUNICATION INTENSIVE PARALLEL APPLICATIONS ON HETEROGENEOUS COMPUTING ENVIRONMENTS." Parallel Processing Letters 15, no. 04 (December 2005): 423–38. http://dx.doi.org/10.1142/s0129626405002349.

Abstract:
This paper presents a new model for evaluating the processing impacts that result from communication among processes. The model quantifies the traffic volume imposed on the communication network by means of latency and overhead parameters. These parameters represent the load that each process imposes on the network and the delay incurred on the CPU as a consequence of network operations; this delay is represented in the model by means of a slowdown metric. The equations that quantify the costs involved in processing operations and message exchange are defined. In the same way, equations to determine the maximum network bandwidth are used in scheduling decisions. The proposed model uses a constant that bounds the maximum allowed usage of the communication network, and this constant distinguishes two possible scheduling techniques: group scheduling or scheduling through the communication network. These techniques are incorporated into the DPWP policy, generating an extension of this policy. Experimental and simulation results confirm the performance enhancement of parallel applications under supervision of the extended DPWP policy, compared to executions supervised by the original DPWP policy.
5

Jackson, Ian. "Viscoelastic Behaviour from Complementary Forced-Oscillation and Microcreep Tests." Minerals 9, no. 12 (November 21, 2019): 721. http://dx.doi.org/10.3390/min9120721.

Abstract:
There is an important complementarity between experimental methods for the study of high-temperature viscoelasticity in the time and frequency domains that has not always been fully exploited. Here, we show that the parallel processing of forced-oscillation data and microcreep records, involving the consistent use of either Andrade or extended Burgers creep function models, yields a robust composite modulus-dissipation dataset spanning a broader range of periods than either technique alone. In fitting this dataset, the alternative Andrade and extended Burgers models differ in their partitioning of strain between the anelastic and viscous contributions. The extended Burgers model is preferred because it involves a finite range of anelastic relaxation times and, accordingly, a well-defined anelastic relaxation strength. The new strategy offers the prospect of better constraining the transition between transient and steady-state creep or, equivalently, between anelastic and viscous behaviour.
6

Świetlicka, Aleksandra, Karol Gugała, Marta Kolasa, Jolanta Pauk, Andrzej Rybarczyk, and Rafał Długosz. "A New Model of the Neuron for Biological Spiking Neural Network Suitable for Parallel Data Processing Realized in Hardware." Solid State Phenomena 199 (March 2013): 217–22. http://dx.doi.org/10.4028/www.scientific.net/ssp.199.217.

Abstract:
The paper presents a modification of the structure of a biological neural network (BNN) based on spiking neuron models. The proposed modification makes it possible to influence the level of the stimulus response of particular neurons in the BNN. We consider an extended, three-dimensional Hodgkin-Huxley model of the neural cell. A typical BNN composed of such neural cells has been expanded by the addition of resistors at each branch point. The resistors can be treated as the weights in such a BNN. We demonstrate that adding these elements to the BNN significantly affects the waveform of the membrane potential of the neuron, causing an uncontrolled excitation. This provides a better description of the processes that take place in the nerve cell. Such a BNN enables easy adaptation of the learning rules used in artificial or spiking neural networks. The modified BNN has been implemented on a Graphics Processing Unit (GPU) in the CUDA C language. This platform enables parallel data processing, which is an important feature in such applications.
7

DE VOCHT, MELANIE, VEROLIEN CAUBERGHE, BENEDIKT SAS, and MIEKE UYTTENDAELE. "Analyzing Consumers' Reactions to News Coverage of the 2011 Escherichia coli O104:H4 Outbreak, Using the Extended Parallel Processing Model." Journal of Food Protection 76, no. 3 (March 1, 2013): 473–81. http://dx.doi.org/10.4315/0362-028x.jfp-12-339.

Abstract:
This article describes and analyzes Flemish consumers' real-life reactions after reading online newspaper articles related to the enterohemorrhagic Escherichia coli (EHEC) O104:H4 outbreak associated with fresh produce in May and June 2011 in Germany. Using the Extended Parallel Processing Model (EPPM) as the theoretical framework, the present study explored the impact of Flemish (Belgian) online news coverage on consumers' perception of the risk induced by the EHEC outbreak and their behavioral intentions as consumers of fresh produce. After the consumers read a newspaper article related to the outbreak, EPPM concepts were measured, namely, perceived severity, susceptibility, self-efficacy, and affective response, combined with behavioral intentions to eat less fresh produce, to rinse fresh produce better, and to alert loved ones concerning the risk. The consumers' reactions were measured by inserting a link to an online survey below every online newspaper article on the EHEC outbreak that appeared in two substantial Flemish newspapers. The reactions of 6,312 respondents were collected within 9 days for 17 different online newspaper articles. Looking at the perceived values of the EPPM concepts, the perceived severity and the perceived susceptibility of the risk were, as expected, high. However, the consumers thought they could prevent the risk from happening, which stresses the importance of increasing consumers' knowledge of emerging food safety risks. Furthermore, analyses showed the moderating role of government trust and its influence on the way consumers perceived the risk, how worried they were, and their behavioral intentions.
8

RUBIN, DANIEL J., DEBORAH A. SWAVELY, JESSE BRAJUHA, PATRICK J. KELLY, SHANEISHA ALLEN, ARIEL HOADLEY, AMY IWAMAYE, YAARA ZISMAN-ILANI, and SARAH B. BASS. "548-P: Understanding T2D Self-Management in Racial/Ethnic Minorities: Application of the Extended Parallel Processing Model and Sensemaking Theory." Diabetes 70, Supplement 1 (June 2021): 548—P. http://dx.doi.org/10.2337/db21-548-p.

9

Zhang, Fengxuan, Silu Chen, Yongyi He, Guoyun Ye, Chi Zhang, and Guilin Yang. "A Kinematic Calibration Method of a 3T1R 4-Degree-of-Freedom Symmetrical Parallel Manipulator." Symmetry 12, no. 3 (March 2, 2020): 357. http://dx.doi.org/10.3390/sym12030357.

Abstract:
This paper proposes a method for kinematic calibration of a 3T1R, 4-degree-of-freedom symmetrical parallel manipulator driven by two pairs of linear actuators. The kinematic model of the individual branched chain is established by using the local product of exponentials formula. Based on this model, the model of the end effector’s pose error is established from a pair of symmetrical branched chains, and a recursive least square method is applied for the parameter identification. By installing built-in sensors at the passive joints, a calibration method for a serial manipulator is eventually extended to this parallel manipulator. Specifically, the sensor installed at the second revolute joint of each branched chain is saved, replaced by numerical calculation according to kinematic constraints. The simulation results validate the effectiveness of the proposed kinematic error modeling and identification methods. The procedure for pre-processing compensation on this 3T1R parallel manipulator is eventually given to improve its absolute positioning accuracy, using the inverse of the calibrated kinematic model.
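
As an aside, the recursive least-squares estimator mentioned in the abstract follows the generic update sketched below; the regressor, measurement and variable names are assumptions made for this listing rather than the authors' formulation.

    import numpy as np

    def rls_update(theta, P, phi, y, lam=1.0):
        """One recursive least-squares step.
        theta: (n,) parameter estimate, P: (n, n) covariance,
        phi: (n,) regressor, y: measured error, lam: forgetting factor."""
        phi = phi.reshape(-1, 1)
        K = P @ phi / (lam + float(phi.T @ P @ phi))   # gain vector (n, 1)
        err = y - float(phi.T @ theta)                 # prediction error
        theta = theta + K.ravel() * err                # parameter update
        P = (P - K @ phi.T @ P) / lam                  # covariance update
        return theta, P
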
10

HOU, CHAOFENG, and WEI GE. "A NOVEL MODE AND ITS VERIFICATION OF PARALLEL MOLECULAR DYNAMICS SIMULATION WITH THE COUPLING OF GPU AND CPU." International Journal of Modern Physics C 23, no. 02 (February 2012): 1250015. http://dx.doi.org/10.1142/s0129183112500155.

Abstract:
The graphics processing unit (GPU) is becoming a powerful computational tool in scientific and engineering fields. In this paper, in order to fully employ the available computing capability, a novel mode for parallel molecular dynamics (MD) simulation is presented and implemented on the basis of multiple GPUs working in hybrid with central processing units (CPUs). Taking into account the interactions between CPUs, GPUs, and the threads on the GPU in a multi-scale and multilevel computational architecture, several cases, such as polycrystalline silicon and heat transfer on the surface of silicon crystals, are provided and taken as model systems to verify the feasibility and validity of the mode. Furthermore, the mode can be extended to MD simulation in other areas such as biology and chemistry.
11

Gajger, Tomasz, and Pawel Czarnul. "Modelling and simulation of GPU processing in the MERPSYS environment." Scalable Computing: Practice and Experience 19, no. 4 (December 29, 2018): 401–22. http://dx.doi.org/10.12694/scpe.v19i4.1439.

Abstract:
In this work, we evaluate an analytical GPU performance model based on Little's law, which expresses the kernel execution time in terms of a latency bound, a throughput bound, and achieved occupancy. We then combine it with the results of several research papers, introduce equations for data transfer time estimation, and finally incorporate it into the MERPSYS framework, which is a general-purpose simulator for parallel and distributed systems. The resulting solution enables the user to express a CUDA application in a MERPSYS editor using an extended Java language and then conveniently evaluate its performance for various launch configurations using different hardware units. We also provide a systematic methodology for extracting kernel characteristics that are used as input parameters of the model. The model was evaluated using kernels representing different traits and for a large variety of launch configurations. We found it to be very accurate for computation-bound kernels and realistic workloads, whilst for memory-throughput-bound kernels and uncommon scenarios the results were still within acceptable limits. We have also proven its portability between two devices of the same hardware architecture but different processing power. Consequently, MERPSYS with the theoretical models embedded in it can be used for evaluation of application performance on various GPUs, for performance prediction, and, e.g., for purchase decision making.
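
As a rough illustration of what such a Little's-law-based model estimates, the sketch below combines compute and memory throughput bounds with a latency bound and the achieved occupancy into a single kernel-time figure. It is a simplified sketch written for this listing, not the MERPSYS model; every parameter name is an assumption.

    def estimate_kernel_time(n_threads, flops_per_thread, bytes_per_thread,
                             peak_flops, peak_bandwidth, issue_latency_s,
                             achieved_occupancy):
        """Crude kernel-time estimate from throughput and latency bounds."""
        t_compute = n_threads * flops_per_thread / peak_flops     # compute bound
        t_memory = n_threads * bytes_per_thread / peak_bandwidth  # memory bound
        t_throughput = max(t_compute, t_memory)
        # Little's law: concurrency = latency x throughput. Low occupancy means
        # too little work in flight to hide latency, inflating the bound.
        return max(t_throughput / max(achieved_occupancy, 1e-6), issue_latency_s)

    # Example: 1e7 threads doing 200 FLOPs and moving 64 bytes each on a
    # 10 TFLOP/s, 500 GB/s device at 60% achieved occupancy.
    t_est = estimate_kernel_time(1e7, 200, 64, 10e12, 500e9, 5e-6, 0.6)
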
12

Alwall Svennefelt, Catharina, Erik Hunter, and Peter Lundqvist. "Evaluating The Swedish Approach to Motivating Improved Work Safety Conditions on Farms: Insights from Fear Appeals and the Extended Parallel Processing Model." Journal of Agromedicine 23, no. 4 (September 19, 2018): 355–73. http://dx.doi.org/10.1080/1059924x.2018.1501454.

13

Szabó, Viktor. "The economicalness of apple production in view of post harvest technology." Acta Agraria Debreceniensis, no. 63 (February 17, 2015): 125–31. http://dx.doi.org/10.34101/actaagrar/63/1847.

Abstract:
This study analyses how the level of development of postharvest technology influences the economic efficiency of apple production, with the help of a deterministic simulation model based on primary data gathered from producer undertakings. To accomplish our objectives and to support our hypotheses, three processing plant types are included in the model: firstly, apple production with no postharvest activity and prompt sale after the harvest; secondly, parallel production and storage combined with an extended selling period; and thirdly, production with an entire postharvest infrastructure (storage, sorting-grading, packing), the highest level of goods production and continuous sales. Based on our results, it can be stated that parallel production (plantation) and cold storage, i.e. the second case, proves to be totally inefficient, considering that establishing a cold store carries enormously high costs while resulting in a relatively low additional profit compared to the first type of processing plant. The reason for this is that this type sells bulk goods without sorting-grading or packaging; storage itself – as a means of continuously servicing the market – is not properly paid for by the consumers. An absolute efficiency ranking cannot be established for the other two processing plants: a plantation without post-harvest infrastructure results in a lower NPV, but a higher IRR, DPP and PI than developing a plantation and a whole post-harvest infrastructure. The former processing plant type is favourable in terms of efficiency ratios describing capital adequacy, while the latter is favourable in terms of income-generating capacity.
14

Szabó, Viktor. "The economic efficiency of apple production in terms of post‑harvest technology." Applied Studies in Agribusiness and Commerce 8, no. 2-3 (September 30, 2014): 99–106. http://dx.doi.org/10.19041/apstract/2014/2-3/12.

Abstract:
This study analyses how the level of development of postharvest technology influences the economic efficiency of apple production, with the help of a deterministic simulation model based on primary data gathered from producer undertakings. To accomplish our objectives and to support our hypotheses, three processing plant types are included in the model: firstly, apple production with no postharvest activity and prompt sale after the harvest; secondly, parallel production and storage combined with an extended selling period; and thirdly, production with an entire postharvest infrastructure (storage, sorting-grading, packing), the highest level of goods production and continuous sales. Based on our results, it can be stated that parallel production (plantation) and cold storage, i.e. the second case, proves to be totally inefficient, considering that establishing a cold store carries enormously high costs while resulting in a relatively low additional profit compared to the first type of processing plant. The reason for this is that this type sells bulk goods without sorting-grading or packaging; storage itself – as a means of continuously servicing the market – is not properly paid for by the consumers. An absolute efficiency ranking cannot be established for the other two processing plants: a plantation without post-harvest infrastructure results in a lower NPV, but a more favourable IRR, DPP and PI than developing a plantation and a whole post-harvest infrastructure.
15

Maree, Mohammed, Saadat M. Alhashmi, and Mohammed Belkhatir. "Towards Improving Meta-Search through Exploiting an Integrated Search Model." Journal of Information & Knowledge Management 10, no. 04 (December 2011): 379–91. http://dx.doi.org/10.1142/s0219649211003073.

Abstract:
Meta-search engines are created to reduce the burden on the user by dispatching queries to multiple search engines in parallel. Decisions on how to rank the returned results are made based on the query's keywords. Although the keyword-based search model produces good results, better results can be obtained by integrating semantic and statistics-based relatedness measures into this model. Such integration allows the meta-search engine to search by meaning rather than only by literal strings. In this article, we present Multi-Search+, the next generation of the Multi-Search general-purpose meta-search engine. The extended version of the system employs additional knowledge, represented by multiple domain-specific ontologies, to enhance both query processing and the merging of returned results. In addition, new general-purpose search engines are plugged into its architecture. Experimental results demonstrate that our integrated search model obtains a significant improvement in the quality of the produced search results.
16

Fu, Xiao, Jiaxu Zhang, and Yue Zhang. "An Online Map Matching Algorithm Based on Second-Order Hidden Markov Model." Journal of Advanced Transportation 2021 (July 17, 2021): 1–12. http://dx.doi.org/10.1155/2021/9993860.

Abstract:
Map matching is a key preprocess of trajectory data which recently have become a major data source for various transport applications and location-based services. In this paper, an online map matching algorithm based on the second-order hidden Markov model (HMM) is proposed for processing trajectory data in complex urban road networks such as parallel road segments and various road intersections. Several factors such as driver’s travel preference, network topology, road level, and vehicle heading are well considered. An extended Viterbi algorithm and a self-adaptive sliding window mechanism are adopted to solve the map matching problem efficiently. To demonstrate the effectiveness of the proposed algorithm, a case study is carried out using a massive taxi trajectory dataset in Nanjing, China. Case study results show that the accuracy of the proposed algorithm outperforms the baseline algorithm built on the first-order HMM in various testing experiments.
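
For context, HMM-based map matching treats candidate road segments as hidden states and GPS fixes as observations, and decodes the most likely segment sequence with the Viterbi algorithm. The sketch below is the textbook first-order recursion in log space, written for this listing; the cited work extends the recursion to a second-order HMM (transitions conditioned on the two previous states) over a self-adaptive sliding window, which is not reproduced here.

    import numpy as np

    def viterbi(log_pi, log_A, log_B):
        """First-order Viterbi decoding in log space.
        log_pi: (S,) initial, log_A: (S, S) transition, log_B: (T, S) emission."""
        T, S = log_B.shape
        dp = np.full((T, S), -np.inf)        # best log-probability per state
        bp = np.zeros((T, S), dtype=int)     # back-pointers
        dp[0] = log_pi + log_B[0]
        for t in range(1, T):
            scores = dp[t - 1][:, None] + log_A + log_B[t][None, :]
            bp[t] = scores.argmax(axis=0)
            dp[t] = scores.max(axis=0)
        path = [int(dp[-1].argmax())]
        for t in range(T - 1, 0, -1):        # backtrack
            path.append(int(bp[t][path[-1]]))
        return path[::-1]
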
17

Pak, Jung Min. "Switching Extended Kalman Filter Bank for Indoor Localization Using Wireless Sensor Networks." Electronics 10, no. 6 (March 18, 2021): 718. http://dx.doi.org/10.3390/electronics10060718.

Abstract:
This paper presents a new filtering algorithm, switching extended Kalman filter bank (SEKFB), for indoor localization using wireless sensor networks. SEKFB overcomes the problem of uncertain process-noise covariance that arises when using the constant-velocity motion model for indoor localization. In the SEKFB algorithm, several extended Kalman filters (EKFs) run in parallel using a set of covariance hypotheses, and the most probable output obtained from the EKFs is selected using Mahalanobis distance evaluation. Simulations demonstrated that the SEKFB can provide accurate and reliable localization without the careful selection of process-noise covariance.
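
To make the idea concrete, the sketch below runs a small bank of linear Kalman filters, each with its own process-noise hypothesis Q, and returns the state of the filter whose innovation has the smallest Mahalanobis distance. It is a simplified, linear stand-in written for this listing (the cited work uses extended Kalman filters for the nonlinear range measurements); the matrix names and selection rule shown are assumptions.

    import numpy as np

    def kf_bank_step(bank, z, F, H, R):
        """One predict/update step for a bank of Kalman filters.
        Each element of bank is a dict with keys "x", "P" and its own "Q"."""
        best_x, best_d = None, np.inf
        for flt in bank:
            x_pred = F @ flt["x"]
            P_pred = F @ flt["P"] @ F.T + flt["Q"]        # hypothesis-specific Q
            y = z - H @ x_pred                            # innovation
            S = H @ P_pred @ H.T + R                      # innovation covariance
            d = float(y @ np.linalg.solve(S, y))          # Mahalanobis distance
            K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
            flt["x"] = x_pred + K @ y
            flt["P"] = (np.eye(len(flt["x"])) - K @ H) @ P_pred
            if d < best_d:
                best_x, best_d = flt["x"], d
        return best_x
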
18

Zhou, Ri-Gui, Canyun Tan, and Ping Fan. "Quantum multidimensional color image scaling using nearest-neighbor interpolation based on the extension of FRQI." Modern Physics Letters B 31, no. 17 (June 14, 2017): 1750184. http://dx.doi.org/10.1142/s0217984917501846.

Abstract:
Past research on quantum image scaling has studied only 2D images. Moreover, in a quantum system, processing speed increases exponentially compared with a classical computer, since parallel computation can be realized with superposition states. Consequently, this paper proposes quantum multidimensional color image scaling based on nearest-neighbor interpolation for the first time. Firstly, the flexible representation of quantum images (FRQI) is extended to a multidimensional color model. Meanwhile, nearest-neighbor interpolation is extended to multidimensional color images, and a cyclic translation operation is designed to perform the scaling-up operation. Then, circuits are designed for quantum multidimensional color image scaling, including scaling up and scaling down, based on the extension of FRQI. In addition, complexity analysis shows that the circuits in the paper have lower complexity. Examples and simulation experiments are given to elaborate the procedure of quantum multidimensional scaling.
19

BINGHAM, JILL, and MARK HINDERS. "3D ELASTODYNAMIC FINITE INTEGRATION TECHNIQUE SIMULATION OF GUIDED WAVES IN EXTENDED BUILT-UP STRUCTURES CONTAINING FLAWS." Journal of Computational Acoustics 18, no. 02 (June 2010): 165–92. http://dx.doi.org/10.1142/s0218396x10004097.

Abstract:
In order to understand guided wave propagation through real structures containing flaws, a parallel processing, 3D elastic wave simulation using the elastodynamic finite integration technique (EFIT) has been developed. This full field, numeric simulation technique easily examines models too complex for analytical solutions, and is developed to handle built up 3D structures as well as layers with different material properties and complicated surface detail. The simulations produce informative visualizations of the guided wave modes in the structures as well as the output from sensors placed in the simulation space to mimic experiment.
20

Ye, Yutong, Hongyin Zhu, Chaoying Zhang, and Binghai Wen. "Efficient graphic processing unit implementation of the chemical-potential multiphase lattice Boltzmann method." International Journal of High Performance Computing Applications 35, no. 1 (October 27, 2020): 78–96. http://dx.doi.org/10.1177/1094342020968272.

Abstract:
The chemical-potential multiphase lattice Boltzmann method (CP-LBM) has the advantages of satisfying the thermodynamic consistency and Galilean invariance, and it realizes a very large density ratio and easily expresses the surface wettability. Compared with the traditional central difference scheme, the CP-LBM uses the Thomas algorithm to calculate the differences in the multiphase simulations, which significantly improves the calculation accuracy but increases the calculation complexity. In this study, we designed and implemented a parallel algorithm for the chemical-potential model on a graphic processing unit (GPU). Several strategies were used to optimize the GPU algorithm, such as coalesced access, instruction throughput, thread organization, memory access, and loop unrolling. Compared with dual-Xeon 5117 CPU server, our methods achieved 95 times speedup on an NVIDIA RTX 2080Ti GPU and 106 times speedup on an NVIDIA Tesla P100 GPU. When the algorithm was extended to the environment with dual NVIDIA Tesla P100 GPUs, 189 times speedup was achieved and the workload of each GPU reached 96%.
21

Hong, Hyehyun. "An Extension of the Extended Parallel Process Model (EPPM) in Television Health News: The Influence of Health Consciousness on Individual Message Processing and Acceptance." Health Communication 26, no. 4 (June 2011): 343–53. http://dx.doi.org/10.1080/10410236.2010.551580.

22

Legalov, Alexander I., Ivan V. Matkovskii, Mariya S. Ushakova, and Darya S. Romanova. "Dynamically Changing Parallelism with the Asynchronous Sequential Data Flows." Modeling and Analysis of Information Systems 27, no. 2 (June 24, 2020): 164–79. http://dx.doi.org/10.18255/1818-1015-2020-2-164-179.

Abstract:
A statically typed version of the data-driven functional parallel computing model is proposed. It enables a representation of dynamically changing parallelism by means of asynchronous serial data flows. We consider the features of the syntax and semantics of the statically typed data-driven functional parallel programming language Smile, which supports asynchronous sequential flows. Our main idea is to apply Hoare's concept of communicating sequential processes to computation control based on data readiness. It is assumed that, on data readiness, a control signal is emitted to inform the processes about the occurrence of certain events. The special feature of our approach is that the model is extended with special asynchronous containers that can generate events on their partial filling. These containers are the stream and the swarm, each of which has its own specifics. A stream is used to process data of identical type. The data arrive sequentially and asynchronously at arbitrary moments in time. The number of incoming data elements is initially unknown, so processing completes on the signal of the end of the stream. A swarm is used to contain independent data of the same type and may be used for performing massively parallel operations. Unlike a stream, the swarm's size is fixed and known in advance. General principles of operations on asynchronous sequential flows with an arbitrary order of data arrival are described. The use of streams and swarms in various situations is considered. We propose language constructions that allow us to operate on swarms and streams and describe the specifics of their application. We provide sample functions to illustrate the different approaches to describing parallelism: recursive processing of asynchronous flows, processing of flows in an arbitrary or predefined order of operations, direct access and access by reference to the elements of streams and swarms, and pipelining of calculations. We give a preliminary parallelism assessment which depends on the ratio of the rates of data arrival and data processing. The proposed methods can be used in the development of future languages and tool-kits for architecture-independent parallel programming.
23

Camargos, Ana Flávia P., Viviane C. Silva, Jean-M. Guichon, and Gérard Meunier. "GPU-accelerated iterative solution of complex-entry systems issued from 3D edge-FEA of electromagnetics in the frequency domain." International Journal of High Performance Computing Applications 31, no. 2 (July 28, 2016): 119–33. http://dx.doi.org/10.1177/1094342015584476.

Abstract:
We present a performance analysis of a parallel implementation of both preconditioned conjugate gradient and preconditioned bi-conjugate gradient solvers running on graphics processing units (GPUs) with the CUDA programming model. The solvers were mainly optimized for the solution of sparse systems of algebraic equations with complex entries, arising from the three-dimensional edge-finite-element analysis of the electromagnetic phenomena involved in the open-bound earth diffusion of currents under time-harmonic excitation. We used a shifted incomplete Cholesky (IC) factorization as the preconditioner. Results show a significant speedup from using either a single-GPU or a multi-GPU device, compared to a serial central processing unit (CPU) implementation, thereby allowing the simulation of large-scale problems on low-cost personal computers. Additional experiments with the optimized solvers show that their use can be extended successfully to other complex systems of equations arising in electrical engineering, such as those obtained in power-system analysis.
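
As background, a preconditioned conjugate gradient iteration for a Hermitian positive-definite system with complex entries follows the pattern sketched below. This is a serial NumPy illustration with a simple Jacobi (diagonal) preconditioner, written for this listing; it is not the authors' CUDA implementation, which uses a shifted incomplete Cholesky preconditioner.

    import numpy as np

    def pcg(A, b, tol=1e-8, max_iter=1000):
        """Jacobi-preconditioned CG for a Hermitian positive-definite system."""
        x = np.zeros_like(b)
        m_inv = 1.0 / np.diag(A)                 # Jacobi preconditioner
        r = b - A @ x                            # residual
        z = m_inv * r
        p = z.copy()
        rz = np.vdot(r, z)                       # vdot conjugates its first argument
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / np.vdot(p, Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = m_inv * r
            rz_new = np.vdot(r, z)
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x
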
24

Tóth, Norbert, and Gyula Kulcsár. "New models and algorithms to solve integrated problems of production planning and control taking into account worker skills in flexible manufacturing systems." International Journal of Industrial Engineering Computations 12, no. 4 (2021): 381–400. http://dx.doi.org/10.5267/j.ijiec.2021.5.004.

Abstract:
The paradigm of the cyber-physical manufacturing system is playing an increasingly important role in the development of production systems and management of manufacturing processes. This paper presents an optimization model for solving an integrated problem of production planning and manufacturing control. The goal is to create detailed production plans for a complex manufacturing system and to control the skilled manual workers. The detailed optimization model of the problem and the developed approach and algorithms are described in detail. To consider the impact of human workers performing the manufacturing primary operations, we elaborated an extended simulation-based procedure and new multi-criteria control algorithms that can manage varying availability constraints of parallel workstations, worker-dependent processing times, different product types and process plans. The effectiveness of the proposed algorithms is demonstrated by numerical results based on a case study.
25

Dong, Rui, Yating Yang, and Tonghai Jiang. "Spelling Correction of Non-Word Errors in Uyghur–Chinese Machine Translation." Information 10, no. 6 (June 6, 2019): 202. http://dx.doi.org/10.3390/info10060202.

Abstract:
This research was conducted to solve the out-of-vocabulary problem caused by Uyghur spelling errors in Uyghur–Chinese machine translation, so as to improve the quality of Uyghur–Chinese machine translation. This paper assesses three spelling correction methods based on machine translation: 1. Using a Bilingual Evaluation Understudy (BLEU) score; 2. Using a Chinese language model; 3. Using a bilingual language model. The best results were achieved in both the spelling correction task and the machine translation task by using the BLEU score for spelling correction. A maximum F1 score of 0.72 was reached for spelling correction, and the translation result increased the BLEU score by 1.97 points, relative to the baseline system. However, the method of using a BLEU score for spelling correction requires the support of a bilingual parallel corpus, which is a supervised method that can be used in corpus pre-processing. Unsupervised spelling correction can be performed by using either a Chinese language model or a bilingual language model. These two methods can be easily extended to other languages, such as Arabic.
26

Chakraborty, Tanmoy, Dipankar Das, and Sivaji Bandyopadhyay. "Identifying Bengali Multiword Expressions using semantic clustering." Lingvisticæ Investigationes. International Journal of Linguistics and Language Resources 37, no. 1 (September 5, 2014): 106–28. http://dx.doi.org/10.1075/li.37.1.04cha.

Abstract:
One of the key issues in both natural language understanding and generation is the appropriate processing of Multiword Expressions (MWEs). MWEs pose a huge problem for precise language processing due to their idiosyncratic nature and diversity in lexical, syntactic and semantic properties. The semantics of an MWE cannot be derived by combining the semantics of its constituents. Therefore, the formalism of semantic clustering is often viewed as an instrument for extracting MWEs, especially for resource-constrained languages like Bengali. The present semantic clustering approach helps locate clusters of the synonymous noun tokens present in the document. These clusters in turn help measure the similarity between the constituent words of a candidate phrase using a vector space model and judge the suitability of this phrase to be an MWE. In this experiment, we apply the semantic clustering approach to noun-noun bigram MWEs, though it can be extended to any type of MWE. In parallel, the well-known statistical models, namely Point-wise Mutual Information (PMI), the Log Likelihood Ratio (LLR) and a Significance function, are also employed to extract MWEs from the Bengali corpus. The comparative evaluation shows that the semantic clustering approach outperforms all the competing statistical models. As a byproduct of this experiment, we have started developing a standard lexicon in Bengali that serves as a productive Bengali linguistic thesaurus.
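
For reference, the point-wise mutual information baseline mentioned in the abstract scores a candidate bigram by how much more often its two words co-occur than independence would predict. A minimal sketch (the counts and the absence of smoothing are assumptions made for this listing):

    import math

    def pmi(count_xy, count_x, count_y, total):
        """Point-wise mutual information of a bigram (x, y) from corpus counts."""
        p_xy = count_xy / total                  # joint probability estimate
        p_x = count_x / total                    # marginal of the first word
        p_y = count_y / total                    # marginal of the second word
        return math.log2(p_xy / (p_x * p_y))

    # Example: a pair seen 50 times, with word counts 400 and 300,
    # in a corpus of 1,000,000 counted positions.
    score = pmi(50, 400, 300, 1_000_000)
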
27

Lewis, Mike, and Linda Brackenbury. "CADRE: A Low-power, Low-EMI DSP Architecture for Digital Mobile Phones." VLSI Design 12, no. 3 (January 1, 2001): 333–48. http://dx.doi.org/10.1155/2001/47640.

Abstract:
Current mobile phone applications demand high performance from the DSP, and future generations are likely to require even greater throughput. However, it is important to balance these processing demands against the requirement of low power consumption for extended battery lifetime. A novel low-power digital signal processor (DSP) architecture CADRE (Configurable Asynchronous DSP for Reduced Energy) addresses these requirements through a multi-level power reduction strategy. A parallel architecture and configurable compressed instruction set meets the throughput requirements without excessive program memory bandwidth, while a large register file reduces the cost of data accesses. Sign-magnitude representation is used for data, to reduce switching activity within the datapath. Asynchronous design gives fine-grained activity control without the complexities of clock gating, and gives low electromagnetic interference. Finally, the operational model of the target application allows for a reduced interrupt structure, simplifying processor design by avoiding the need for exact exceptions.
28

Schuegraf, Philipp, and Ksenia Bittner. "Automatic Building Footprint Extraction from Multi-Resolution Remote Sensing Images Using a Hybrid FCN." ISPRS International Journal of Geo-Information 8, no. 4 (April 12, 2019): 191. http://dx.doi.org/10.3390/ijgi8040191.

Abstract:
Recent technical developments have made it possible to supply large-scale satellite image coverage. This poses the challenge of efficient discovery of imagery. One very important task in applications like urban planning and reconstruction is to automatically extract building footprints. The integration of different information, which is presently achievable due to the availability of high-resolution remote sensing data sources, makes it possible to improve the quality of the extracted building outlines. Recently, deep neural networks were extended from image-level to pixel-level labelling, allowing dense prediction of semantic labels. Based on these advances, we propose an end-to-end U-shaped neural network, which efficiently merges depth and spectral information within two parallel networks combined at the late stage for binary building mask generation. Moreover, as satellites usually provide high-resolution panchromatic images, but only low-resolution multi-spectral images, we tackle this issue by using a residual neural network block. It fuses those images with different spatial resolutions at the early stage, before passing the fused information to the Unet stream responsible for processing spectral information. In a parallel stream, a stereo digital surface model (DSM) is also processed by the Unet. Additionally, we demonstrate that our method generalizes for use in cities which are not included in the training data.
29

Ayres, Daniel L., Michael P. Cummings, Guy Baele, Aaron E. Darling, Paul O. Lewis, David L. Swofford, John P. Huelsenbeck, Philippe Lemey, Andrew Rambaut, and Marc A. Suchard. "BEAGLE 3: Improved Performance, Scaling, and Usability for a High-Performance Computing Library for Statistical Phylogenetics." Systematic Biology 68, no. 6 (April 23, 2019): 1052–61. http://dx.doi.org/10.1093/sysbio/syz020.

Abstract:
BEAGLE is a high-performance likelihood-calculation library for phylogenetic inference. The BEAGLE library defines a simple, but flexible, application programming interface (API), and includes a collection of efficient implementations for calculation under a variety of evolutionary models on different hardware devices. The library has been integrated into recent versions of popular phylogenetics software packages including BEAST and MrBayes and has been widely used across a diverse range of evolutionary studies. Here, we present BEAGLE 3 with new parallel implementations, increased performance for challenging data sets, improved scalability, and better usability. We have added new OpenCL and central processing unit-threaded implementations to the library, allowing the effective utilization of a wider range of modern hardware. Further, we have extended the API and library to support concurrent computation of independent partial likelihood arrays, for increased performance of nucleotide-model analyses with greater flexibility of data partitioning. For better scalability and usability, we have improved how phylogenetic software packages use BEAGLE in multi-GPU (graphics processing unit) and cluster environments, and introduced an automated method to select the fastest device given the data set, evolutionary model, and hardware. For application developers who wish to integrate the library, we also have developed an online tutorial. To evaluate the effect of the improvements, we ran a variety of benchmarks on state-of-the-art hardware. For a partitioned exemplar analysis, we observe run-time performance improvements as high as 5.9-fold over our previous GPU implementation. BEAGLE 3 is free, open-source software licensed under the Lesser GPL and available at https://beagle-dev.github.io.
30

Jasiukaitytė-Grojzdek, Edita, Filipa A. Vicente, Miha Grilc, and Blaž Likozar. "Ambient-Pressured Acid-Catalysed Ethylene Glycol Organosolv Process: Liquefaction Structure–Activity Relationships from Model Cellulose–Lignin Mixtures to Lignocellulosic Wood Biomass." Polymers 13, no. 12 (June 17, 2021): 1988. http://dx.doi.org/10.3390/polym13121988.

Abstract:
Raising awareness of carbon dioxide emissions, global warming and fossil fuel depletion has renewed the transition towards a circular economy approach, starting by addressing active bio-economic precepts so that all portions of wood are valorised as products. This is accomplished by minimizing the residues formed (preferably no waste materials), maximizing reaction productivity yields, and optimising catalysed chemical by-products. Within framework structure determination, the present work aims at drawing a parallel between the characterisation of cellulose–lignin mixture (derived system model) liquefaction and the real conversion process in acidified ethylene glycol at moderate process conditions, i.e., 150 °C, ambient atmospheric pressure and a potential bio-based solvent, for 4 h. The extended-processing liquid phase is characterized with respect to the catalyst-transformed reactant species being produced, mainly the recovered lignin-based polymer, by quantitative 31P, 13C and 1H nuclear magnetic resonance (NMR) spectroscopy, as well as by size exclusion (SEC) or high performance liquid chromatography (HPLC) separation for higher or lower molecular weight compound compositions, respectively. Such mechanistic pathway analytics help to understand the steps in mild organosolv biopolymer fractionation, which is one of the key industrial barriers preventing a more widespread manufacturing of biomass-derived (hydroxyl, carbonyl or carboxyl) aromatic monomers or oligomers for polycarbonates, polyesters, polyamides, polyurethanes and (epoxy) resins.
31

Pisso, Ignacio, Espen Sollum, Henrik Grythe, Nina I. Kristiansen, Massimo Cassiani, Sabine Eckhardt, Delia Arnold, et al. "The Lagrangian particle dispersion model FLEXPART version 10.4." Geoscientific Model Development 12, no. 12 (December 2, 2019): 4955–97. http://dx.doi.org/10.5194/gmd-12-4955-2019.

Abstract:
The Lagrangian particle dispersion model FLEXPART in its original version in the mid-1990s was designed for calculating the long-range and mesoscale dispersion of hazardous substances from point sources, such as those released after an accident in a nuclear power plant. Over the past decades, the model has evolved into a comprehensive tool for multi-scale atmospheric transport modeling and analysis and has attracted a global user community. Its application fields have been extended to a large range of atmospheric gases and aerosols, e.g., greenhouse gases, short-lived climate forcers like black carbon and volcanic ash, and it has also been used to study the atmospheric branch of the water cycle. Given suitable meteorological input data, it can be used for scales from dozens of meters to global. In particular, inverse modeling based on source–receptor relationships from FLEXPART has become widely used. In this paper, we present FLEXPART version 10.4, which works with meteorological input data from the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecast System (IFS) and data from the United States National Centers of Environmental Prediction (NCEP) Global Forecast System (GFS). Since the last publication of a detailed FLEXPART description (version 6.2), the model has been improved in different aspects such as performance, physicochemical parameterizations, input/output formats, and available preprocessing and post-processing software. The model code has also been parallelized using the Message Passing Interface (MPI). We demonstrate that the model scales well up to using 256 processors, with a parallel efficiency greater than 75 % for up to 64 processes on multiple nodes in runs with very large numbers of particles. The deviation from 100 % efficiency is almost entirely due to the remaining nonparallelized parts of the code, suggesting large potential for further speedup. A new turbulence scheme for the convective boundary layer has been developed that considers the skewness in the vertical velocity distribution (updrafts and downdrafts) and vertical gradients in air density. FLEXPART is the only model available considering both effects, making it highly accurate for small-scale applications, e.g., to quantify dispersion in the vicinity of a point source. The wet deposition scheme for aerosols has been completely rewritten and a new, more detailed gravitational settling parameterization for aerosols has also been implemented. FLEXPART has had the option of running backward in time from atmospheric concentrations at receptor locations for many years, but this has now been extended to also work for deposition values and may become useful, for instance, for the interpretation of ice core measurements. To our knowledge, to date FLEXPART is the only model with that capability. Furthermore, the temporal variation and temperature dependence of chemical reactions with the OH radical have been included, allowing for more accurate simulations for species with intermediate lifetimes against the reaction with OH, such as ethane. Finally, user settings can now be specified in a more flexible namelist format, and output files can be produced in NetCDF format instead of FLEXPART's customary binary format. In this paper, we describe these new developments. Moreover, we present some tools for the preparation of the meteorological input data and for processing FLEXPART output data, and we briefly report on alternative FLEXPART versions.
32

Ort, Alexander, and Andreas Fahr. "The effectiveness of a positively vs. negatively valenced PSA against sexually transmitted diseases – evidence from an experimental study." Studies in Communication and Media 9, no. 3 (2020): 341–66. http://dx.doi.org/10.5771/2192-4007-2020-3-341.

Abstract:
This study examines the effects of positive compared to negative appeals in persuasive health communication about sexually transmitted diseases (STDs). The theoretical background draws on the Extended Parallel Process Model, which is mainly used to explain the processing of negative appeals (fear) in these contexts. Participants (N = 160; Mage = 22.59, SD = 2.48, 57.4% female; mainly students) took part in a one-factorial experiment by viewing an advertisement promoting the use of condoms that was emotionally framed as either humorous (positive) or threatening (negative) to induce an emotional experience of joy or fear, respectively. Emotional experiences were tested as predictors for health behavior-related outcomes by means of hierarchical regression analyses. Data provides evidence for the beneficial effect of positive emotional appeals on message judgment and attitudes towards the proposed behavior. The threatening appeal reduced perceptions of efficacy and led to an increase in reactance. These findings provide further evidence for carefully using fear appeals in persuasive health communication and speak in favor of integrating positive emotional appeals in these contexts.
33

Harlan, William S. "Simultaneous velocity filtering of hyperbolic reflections and balancing of offset‐dependent wavelets." GEOPHYSICS 54, no. 11 (November 1989): 1455–65. http://dx.doi.org/10.1190/1.1442609.

Abstract:
Hyperbolic reflections and convolutional wavelets are fundamental models for seismic data processing. Each sample of a “stacked” zero‐offset section can parameterize an impulsive hyperbolic reflection in a midpoint gather. Convolutional wavelets can model source waveforms and near‐surface filtering at the shot and geophone positions. An optimized inversion of the combined modeling equations for hyperbolic traveltimes and convolutional wavelets makes explicit any interdependence and nonuniqueness in these two sets of parameters. I first estimate stacked traces that best model the recorded data and then find nonimpulsive wavelets to improve the fit with the data. These wavelets are used for a new estimate of the stacked traces, and so on. Estimated stacked traces model short average wavelets with a superposition of approximately parallel hyperbolas; estimated wavelets adjust the phases and amplitudes of inconsistent traces, including static shifts. Deconvolution of land data with estimated wavelets makes wavelets consistent over offset; remaining static shifts are midpoint‐consistent. This phase balancing improves the resolution of stacked data and of velocity analyses. If precise velocity functions are not known, then many stacked traces can be inverted simultaneously, each with a different velocity function. However, the increased number of overlain hyperbolas can more easily model the effects of inconsistent wavelets. As a compromise, I limit velocity functions to reasonable regions selected from a stacking velocity analysis—a few functions cover velocities of primary and multiple reflections. Multiple reflections are modeled separately and then subtracted from marine data. The model can be extended to include more complicated amplitude changes in reflectivity. Migrated reflectivity functions would add an extra constraint on the continuity of reflections over midpoint. Including the effect of dip moveout in the model would make stacking and migration velocities equivalent.
34

Pazouki, Arman, Radu Serban, and Dan Negrut. "A High Performance Computing Approach to the Simulation of Fluid-Solid interaction Problems with Rigid and Flexible Components." Archive of Mechanical Engineering 61, no. 2 (August 15, 2014): 227–51. http://dx.doi.org/10.2478/meceng-2014-0014.

Abstract:
This work outlines a unified multi-threaded, multi-scale High Performance Computing (HPC) approach for the direct numerical simulation of Fluid-Solid Interaction (FSI) problems. The simulation algorithm relies on the extended Smoothed Particle Hydrodynamics (XSPH) method, which approaches the fluid flow in a Lagrangian framework consistent with the Lagrangian tracking of the solid phase. A general 3D rigid body dynamics and an Absolute Nodal Coordinate Formulation (ANCF) are implemented to model rigid and flexible multibody dynamics. The two-way coupling of the fluid and solid phases is supported through use of Boundary Condition Enforcing (BCE) markers that capture the fluid-solid coupling forces by enforcing a no-slip boundary condition. The solid-solid short range interaction, which has a crucial impact on the small-scale behavior of fluid-solid mixtures, is resolved via a lubrication force model. The collective system states are integrated in time using an explicit, multi-rate scheme. To alleviate the heavy computational load, the overall algorithm leverages parallel computing on Graphics Processing Unit (GPU) cards. Performance and scaling analysis are provided for simulation scenarios involving one or multiple phases with up to tens of thousands of solid objects. The software implementation of the approach, called Chrono:Fluid, is part of the Chrono project and available as open-source software.
35

Grush, Rick. "The emulation theory of representation: Motor control, imagery, and perception." Behavioral and Brain Sciences 27, no. 3 (June 2004): 377–96. http://dx.doi.org/10.1017/s0140525x04000093.

Abstract:
The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language.
36

Slifkin, Andrew B., David E. Vaillancourt, and Karl M. Newell. "Intermittency in the Control of Continuous Force Production." Journal of Neurophysiology 84, no. 4 (October 1, 2000): 1708–18. http://dx.doi.org/10.1152/jn.2000.84.4.1708.

Abstract:
The purpose of the current investigation was to examine the influence of intermittency in visual information processes on intermittency in the control of continuous force production. Adult human participants were required to maintain force at, and minimize variability around, a force target over an extended duration (15 s), while the intermittency of on-line visual feedback presentation was varied across conditions. This was accomplished by varying the frequency of successive force-feedback deliveries presented on a video display. As a function of a 128-fold increase in feedback frequency (0.2 to 25.6 Hz), performance quality improved according to hyperbolic functions (e.g., force variability decayed), reaching asymptotic values near the 6.4-Hz feedback frequency level. Thus, the briefest interval over which visual information could be integrated and used to correct errors in motor output was approximately 150 ms. The observed reductions in force variability were correlated with parallel declines in spectral power at about 1 Hz in the frequency profile of force output. In contrast, power at higher frequencies in the force output spectrum was uncorrelated with increases in feedback frequency. Thus, there was a considerable lag between the generation of motor output corrections (1 Hz) and the processing of visual feedback information (6.4 Hz). To reconcile these differences in visual and motor processing times, we proposed a model where error information is accumulated by visual information processes at a maximum frequency of 6.4 per second, and the motor system generates a correction on the basis of the accumulated information at the end of each 1-s interval.
37

Leshner, Glenn, I.-Huei Cheng, Hyun Joo Song, Yoonhyueng Choi, and Cynthia Frisby. "The Role of Spiritual Health Locus of Control in Breast Cancer Information Processing between African American and Caucasian Women." Integrative Medicine Insights 1 (January 2006): 117863370600100. http://dx.doi.org/10.1177/117863370600100004.

Abstract:
Spirituality seems to be an important cultural factor for African American women when thinking about their health. It is, however, not clear how spiritual health locus of control (SLOC) impacts health-related outcomes in the context of health message processing models, such as the Extended Parallel Process and the Risk Perception Attitude framework. Using a survey of African American and Caucasian women in the context of breast cancer, the role of SLOC and its interactions with perceived efficacy and risk was examined on four health outcomes: message acceptance, talking about breast cancer, information seeking, and behavioral intentions. For African American women, SLOC had a positive impact on talking about breast cancer through an interaction with risk and efficacy, such that women high in both SLOC and perceived efficacy, but low in perceived risk, were more likely to talk about breast cancer than when efficacy was low. However, high SLOC exacerbated the negative effects of efficacy on talking about breast cancer regardless of the risk level for Caucasian women. SLOC also had a positive influence on attending to breast cancer information in the media for African American women. SLOC played no role in attending to breast cancer information for Caucasian women. Interestingly, SLOC played no role for African American women on behavioral intentions; however, it worked to decrease behavioral intentions for Caucasian women when risk was high.
APA, Harvard, Vancouver, ISO, and other styles
38

Fahad, Muhammad, Tariq Javid, Hira Beenish, Adnan Ahmed Siddiqui, and Ghufran Ahmed. "Extending ONTAgri with Service-Oriented Architecture towards Precision Farming Application." Sustainability 13, no. 17 (August 31, 2021): 9801. http://dx.doi.org/10.3390/su13179801.

Full text
Abstract:
From a computer science perspective, ontology is treated as a technology, albeit one approached with different questions and concerns when constructing engineering models of reality. Agriculture-centered architectures are among the rich sources of knowledge that are developed, preserved, and released for farmers and agro professionals. Many researchers have developed different variants of existing ontology-based information systems. These systems primarily adopt agriculture-related ontological strategies based on activities such as crops, weeds, implantation, irrigation, and planting, to name a few. By considering the limitations on agricultural resources in the ONTAgri scenario, an extension of the ontology is proposed in this paper. The extended ONTAgri is a service-oriented architecture that connects precision farming with both local and global decision-making methods. These decision-making methods are connected with Internet of Things systems in parallel for the input processing of the system ontology. The proposed architecture fulfills the requirements of Agriculture 4.0. The significance of the proposed approach, which aims to solve a multitude of agricultural problems faced by farmers, is demonstrated through SPARQL queries.
APA, Harvard, Vancouver, ISO, and other styles
39

Lustig, Maayan, Qingling Feng, Yohan Payan, Amit Gefen, and Dafna Benayahu. "Noninvasive Continuous Monitoring of Adipocyte Differentiation: From Macro to Micro Scales." Microscopy and Microanalysis 25, no. 1 (February 2019): 119–28. http://dx.doi.org/10.1017/s1431927618015520.

Full text
Abstract:
3T3-L1 cells serve as model systems for studying adipogenesis and for research on adipose tissue-related diseases, e.g., obesity and diabetes. Here, we present two novel and complementary nondestructive methods for adipogenesis analysis of living cells, which facilitate continuous monitoring of the same culture over extended periods of time and are applied in parallel at the macro- and micro-scales. At the macro-scale, we developed visual differences mapping (VDM), a novel method which makes it possible to determine the level of adipogenesis (LOA), a numerical index which quantitatively describes the extent of differentiation in the whole culture, and the percentage area populated by adipocytes (PAPBA) across a whole culture, based on the apparent morphological differences between preadipocytes and adipocytes. At the micro-scale, we developed an improved version of our previously published image-processing algorithm, which now provides data regarding single-cell morphology and lipid contents. Both methods were applied here synergistically for measuring differentiation levels in cultures over multiple weeks. VDM revealed that the mean LOA value reached 1.11 ± 0.06 and the mean PAPBA value reached >60%. Micro-scale analysis revealed that during differentiation, the cells transformed from a fibroblast-like shape to a circular shape with a build-up of lipid droplets. We predict a vast potential for implementation of these methods in adipose-related pharmacological research, such as in metabolic-syndrome studies.
APA, Harvard, Vancouver, ISO, and other styles
40

Onisawa, Takehisa, and Sadaaki Miyamoto. "Applications of Soft Computing to Human-centered Information Systems." Journal of Advanced Computational Intelligence and Intelligent Informatics 3, no. 1 (February 20, 1999): 1–2. http://dx.doi.org/10.20965/jaciii.1999.p0001.

Full text
Abstract:
Soft computing was advocated by Prof. Zadeh as a total technology complementary to the advantages and disadvantages of fuzzy theory, neural network models, genetic algorithms, and so on - a wide variety of topics covered at scientific conferences, in books, in papers, etc. In human-centered information systems, human beings play a central role in information processing. Human information processing involves uncertainty, fuzziness, ambiguity, subjectivity, etc., be dealt with well by soft computing. Human-centered information processing systems are important fields of soft computing. This special issue was motivated by the editors' research project at the Tsukuba Advanced Research Alliance (TARA), University of Tsukuba. The title of this issue is thus similar to the TARA project title, Soft Computing and Human Centered Information Systems. This special issue comprehensively covers soft computing, including chaos theory, rough sets, multisets, as well as fuzzy theory, neural network models, and genetic algorithms. Human-centered information systems are also covered extensively, e.g., human imperfect information processing, human evaluation/judgment, optimal allocation problems, vehicle systems, and human intelligent information processing. This issue focuses on eight papers: The first, A Semantic-Ambiguity-Free Relational Model for Handling Imperfect Information, by Nakata, focuses on imperfect information without semantic ambiguity from the standpoint that an extension of relational models causes semantic ambiguity. This paper proposes an extended relational model in the framework of fuzzy sets and the theory of possibility. The paper formulates set and relational operations as extended relational algebra in the proposed model. The paper is applicable to human imperfect information processing. The second paper, Fuzzy Clustering for Detecting Linear Structures with Different Dimensions, by Umayahara et al., proposes a new objective function and an algorithm for detecting clusters with different dimensionalities. The proposed algorithm improves conventional approaches for detecting linear varieties with different dimensionalities. The paper also uses the noise cluster to deal with extraordinary data. The procedures of the proposed algorithm are demonstrated using numerical examples. The algorithm is useful for human evaluation data processing. As shown by Takahara et al., in An Adaptive Tabu Search and Other Metaheuristics for a Class of Optimal Allocation Problems, an adaptive tabu search for a class of optimal allocation problems uses a set of tables for objects as memory elements in which the search region becomes large, and the structure of memory and the search framework are simplified. This is applied to a class of optimal allocation problems in which small and irregular shapes are placed on a large sheet. The method's effectiveness is compared to results obtained by other metaheuristics. This method is useful for optimal allocation problems faced by human beings. The fourth paper, On Dynamic Clustering Models for 3-Way Data, by Sato, deals with 3-way data consisting of objects, attributes, and times using several clustering models. This paper focuses on the models for 3-way data observed by similarities of objects. The paper proposes models showing exact changes over time by fixing clusters during time. The model configuration is based on fuzzy additive clustering models. Models are modified based on data features. 
Numerical examples demonstrate that the proposed model shows the movements of objects over time. The fifth paper, A Fuzzy Linear Regression Analysis for Fuzzy Input-Output Data Using the Least Squares Method under Linear Constraints and Its Application to Fuzzy Rating Data, by Takemura, applies a fuzzy linear regression model to the analysis of fuzzy rating data. The paper considers a fuzzy linear regression model with fuzzy input data, fuzzy output data, and fuzzy parameters, since human rating data is usually fuzzy. The paper discusses fuzzy linear regression analysis using the least squares method under linear constraints. The present approach is rather heuristic in that it is an extension of the ordinary least squares method for crisp data. Fuzzy linear regression analysis is applied to psychological studies, i.e., the effect of perceived temperature and humidity on unpleasantness and behavioral intention in fashion shopping. This paper deals with human judgment, considering the human being as a human-centered system. The sixth paper, Study on Intelligent Vehicle Control Considering Driver Perception of Driving Environment, by Takahashi et al., discusses an approach of the design of an intelligent vehicle controller supporting driver vehicle use. The approach considers the interaction of the driving environment, vehicle behavior, and driver expectations of vehicle behavior. The paper uses a multiobjective decision-making model as the intelligent vehicle controller and a fuzzy measures and fuzzy integrals model to reflect driver characteristics. The simulation and experimental results show good vehicle control performance. A vehicle does not move without human control. In this sense, the paper deals with human-centered systems as such. The seventh paper, Determinism Measurement in Time Series by Chaotic Approach and Its Applications, by Fujimoto et al., discusses deterministic chaos. The proposed method, trajectory parallel measure (TPM), distinguishes chaos from embedded time series data. This is simpler than conventional methods and examines only the direction of tangential unit vectors of the trajectory in its neighborhood. This is applied to chaotic time series data with random noise. Fast Fourier transform (FFT) analysis is applied to data to verify the effectiveness of the proposed method. Although FFT analysis cannot distinguish the degree of random noise, the proposed TPM clearly distinguishes it. TPM is also applied to the diagnosis of automobile components. TPM detects abnormal acoustic time series data well. TPM is applicable to fault diagnosis of human-centered systems, e.g., vehicles. The final paper, Linguistic Expression Generation Model of Subjective Content in a Picture, by Iwata et al., proposes a model that expresses subjective contents in a picture given objective information on the picture. Objective information is information on object's location, size, direction, etc. Subjective content is emotions of a human object, the relationship between objects, and object behavior obtained from objective information. Human emotions are recognized from facial expressions using neural network models. Fuzzy reasoning is applied to infer the relationship between objects. Case-based reasoning is used to express object behavior. The effectiveness of the present model is verified by experiments. This paper deals with human intelligent information processing, considering the human being as a human-centered system. We thank Drs. 
T. Fukuda and K. Hirota, editors in chief of the JACI, for accepting our proposals for this special issue and for their ongoing encouragement during editing. Special thanks are due to all referees for their kind cooperation in helping prepare this issue. We also thank Mr. Y. Inoue for his advice on editing.
APA, Harvard, Vancouver, ISO, and other styles
41

Reshef, Moshe, Dan Kosloff, Mickey Edwards, and Chris Hsiung. "Three‐dimensional elastic modeling by the Fourier method." GEOPHYSICS 53, no. 9 (September 1988): 1184–93. http://dx.doi.org/10.1190/1.1442558.

Full text
Abstract:
Earlier work on three‐dimensional forward modeling is extended to elastic waves using the equations of conservation of momentum and the stress‐strain relations for an isotropic elastic medium undergoing infinitesimal deformation. In addition to arbitrary compressional (or P‐wave) velocity and density variation in lateral and vertical directions, elastic modeling permits shear (or S‐wave) velocity variation as well. The elastic wave equation is solved using a generalization of the method for the acoustic case. Computation of each time step begins by computing six strain components by performing nine spatial partial differentiation operations on the three displacement components from the previous time step. The six strains and two Lamé constants are linearly combined to yield six stress components. Nine spatial partial differentiation operations on the six stresses, three body forces, and density are used to compute second partial time derivatives of the three displacement components. Time stepping to obtain the three displacement components for the current time step is performed with second‐order difference operators. The modeling includes an optional free surface above the spatial grid. An absorbing boundary is applied on the lateral and bottom edges of the spatial grid. This modeling scheme is implemented on a four‐processor CRAY X‐MP computer system using the solid‐state storage device (SSD). Using parallel processing with four CPUs, a reasonable geologic model can be computed within a few hours. The modeling scheme provides a variety of seismic source types and many possible output displays. These features enable the modeling of a wide range of seismic surveys. Numerical and analytic results are presented.
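For orientation, the isotropic stress-strain relation, the momentum-conservation equation, and the second-order time stepping summarized above can be written in standard continuum-mechanics notation as follows (a paraphrase in common symbols, not equations quoted from the paper); in the Fourier method, the spatial partial derivatives are evaluated with FFTs:

```latex
% Strains from displacement gradients; stresses from the two Lame constants:
\[
\varepsilon_{ij} = \tfrac{1}{2}\left(\partial_j u_i + \partial_i u_j\right),
\qquad
\sigma_{ij} = \lambda\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\,\varepsilon_{ij}.
\]
% Conservation of momentum gives the second time derivative of displacement,
% which is advanced with a second-order difference operator:
\[
\rho\,\frac{\partial^{2} u_i}{\partial t^{2}} = \partial_j \sigma_{ij} + f_i,
\qquad
u_i^{\,n+1} = 2\,u_i^{\,n} - u_i^{\,n-1}
  + \Delta t^{2}\left(\frac{\partial^{2} u_i}{\partial t^{2}}\right)^{\!n}.
\]
```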
APA, Harvard, Vancouver, ISO, and other styles
42

Sun, Yifang, Sen Zou, Guang Zhao, and Bei Yang. "THE IMPROVEMENT AND REALIZATION OF FINITE-DIFFERENCE LATTICE BOLTZMANN METHOD." Aerospace technic and technology, no. 1 (February 26, 2021): 4–13. http://dx.doi.org/10.32620/aktt.2021.1.01.

Full text
Abstract:
The Lattice Boltzmann Method (LBM) is a numerical method developed in recent decades. It has the characteristics of high parallel efficiency and simple boundary processing. The basic idea is to construct a simplified dynamic model so that the macroscopic behavior of the model is the same as that of the macroscopic equation. From the perspective of micro-dynamics, LBM treats macro-physical quantities as micro-quantities and obtains results by statistical averaging. The Finite-difference LBM (FDLBM) is a new numerical method developed on the basis of LBM. The first finite-difference LBE (FDLBE) was perhaps due to Tamura and Akinori and was examined by Cao et al. in more detail. Finite-difference LBM was further extended to curvilinear coordinates with nonuniform grids by Mei and Shyy. By improving the FDLBE proposed by Mei and Shyy, a new finite-difference LBM is obtained in this paper. In the model, the collision term is treated implicitly, just as is done in the Mei-Shyy model. However, by introducing another distribution function based on the earlier distribution function, the implicitness of the discrete scheme is eliminated, and a simple explicit scheme, like the standard LBE, is finally obtained. Furthermore, this trick for the FDLBE can also easily be used to develop more efficient FVLBE and FELBE schemes. To verify the correctness and feasibility of this improved FDLBM model, it is used to calculate the square cavity model, and the calculated results are compared with the data of the classic square cavity benchmark. The comparison covers two items: the velocity on the centerline of the square cavity and the position of the vortex center in the square cavity. The simulation results of FDLBM are very consistent with the data in the literature. When Re=400, the velocity profiles of u and v on the centerline of the square cavity agree with the results in Ghia's paper, and the vortex center position in the square cavity is also almost the same as in Ghia's paper. Therefore, the verification of FDLBM is successful and FDLBM is feasible. This improved method can also serve as a reference for subsequent research.
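One common way to write the change of variables described above, i.e. an implicit BGK collision term made explicit by introducing a second distribution function, is the following; this is a hedged sketch in standard notation (shown for a half-implicit collision term), and the exact definitions in the paper may differ:

```latex
% Update with the BGK collision term treated implicitly; \bar f_i^n collects
% the explicitly known streaming / finite-difference terms:
\[
f_i^{\,n+1} = \bar f_i^{\,n}
  - \frac{\Delta t}{2\tau}\left(f_i^{\,n+1} - f_i^{\mathrm{eq},\,n+1}\right).
\]
% Introducing a second distribution function removes the implicitness:
\[
g_i = f_i + \frac{\Delta t}{2\tau}\left(f_i - f_i^{\mathrm{eq}}\right)
\quad\Longrightarrow\quad
g_i^{\,n+1} = \bar f_i^{\,n},
\]
% because mass and momentum are collision invariants, the macroscopic moments
% (and hence f^eq) computed from g equal those computed from f, so f can be
% recovered algebraically from g at each node, as in the standard explicit LBE.
```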
APA, Harvard, Vancouver, ISO, and other styles
43

Karthik, Victor U., Sivamayam Sivasuthan, Arunasalam Rahunanthan, Ravi S. Thyagarajan, Paramsothy Jayakumar, Lalita Udpa, and S. Ratnajeevan H. Hoole. "Faster, more accurate, parallelized inversion for shape optimization in electroheat problems on a graphics processing unit (GPU) with the real-coded genetic algorithm." COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 34, no. 1 (January 5, 2015): 344–56. http://dx.doi.org/10.1108/compel-06-2014-0146.

Full text
Abstract:
Purpose – Inverting electroheat problems involves synthesizing the electromagnetic arrangement of coils and geometries to realize a desired heat distribution. To this end, two finite element problems need to be solved: first for the magnetic fields and the joule heat that the associated eddy currents generate, and then, based on these heat sources, the second problem for heat distribution. This two-part problem needs to be iterated on to obtain the desired thermal distribution by optimization. Because this is a time-consuming process, the purpose of this paper is to parallelize the process using the graphics processing unit (GPU) and the real-coded genetic algorithm, each for both speed and accuracy. Design/methodology/approach – This coupled problem represents a heavy computational load with long wait-times for results. The GPU has recently been demonstrated to enhance the efficiency and accuracy of finite element computations and cut down solution times. It has also been used to speed up the naturally parallel genetic algorithm. The authors use the GPU to perform coupled electroheat finite element optimization by the genetic algorithm to achieve computational efficiencies far better than those reported for a single finite element problem. In the genetic algorithm, coding objective functions in real numbers rather than binary arithmetic gives added speed and accuracy. Findings – The feasibility of the method proposed to reduce computational time and increase accuracy is established through the simple problem of shaping a current-carrying conductor so as to yield a constant temperature along a line. The authors obtained a speedup (CPU time to GPU time ratio) saturating to about 28 at a population size of 500 because of increasing communications between threads. But this is far better than what is possible on a workstation. Research limitations/implications – By using the intrinsically parallel genetic algorithm on a GPU, large complex coupled problems may be solved very quickly. The method demonstrated here, without accounting for radiation and convection, may be trivially extended to more completely modeled electroheat systems. Since the primary purpose here is to establish methodology and feasibility, the thermal problem is simplified by neglecting convection and radiation. While that introduces some error, the computational procedure is still validated. Practical implications – The methodology established has direct applications in electrical machine design, metallurgical mixing processes, and hyperthermia treatment in oncology. In these three practical application areas, the authors need to compute the exciting coil (or antenna) arrangement (current magnitude and phase) and the device geometry that would accomplish a desired heat distribution to achieve mixing, reduce machine heat, or burn cancerous tissue. The process presented here does this more accurately and speedily. Social implications – In particular, the above-mentioned application in oncology will alleviate human suffering through use in hyperthermia treatment planning in cancer treatment. The method presented provides scope for new commercial software development and employment. Originality/value – Previous finite element shape optimization of coupled electroheat problems by this group used gradient methods, whose difficulties are explained. Others have used analytical and circuit models in place of finite elements. This paper applies the massive parallelization possible with GPUs to the inherently parallel genetic algorithm, and extends it from single-field system problems to coupled problems, thereby realizing practicable solution times for such a computationally complex problem. Further, by using GPU computations rather than CPU, accuracy is enhanced. And then, by using real-number rather than binary coding for objective functions, further accuracy and speed gains are realized.
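As a rough illustration of the real-coded genetic algorithm component discussed above (selection, real-valued crossover, Gaussian mutation), here is a minimal sequential sketch. The objective function is a toy stand-in for the coupled magnetic/thermal finite element solve, and all names and parameter values are assumptions for illustration, not the authors' GPU implementation:

```python
import numpy as np

def real_coded_ga(objective, bounds, pop_size=500, generations=100,
                  crossover_alpha=0.5, mutation_sd=0.05, seed=0):
    """Minimal real-coded GA of the kind the abstract pairs with GPU-based
    finite element evaluations. `objective` stands in for the coupled
    electroheat FE solve; all parameters here are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fitness = np.array([objective(x) for x in pop])   # FE solves parallelize here
        order = np.argsort(fitness)                       # minimization
        parents = pop[order[: pop_size // 2]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            gamma = rng.uniform(-crossover_alpha, 1 + crossover_alpha, size=a.shape)
            child = gamma * a + (1 - gamma) * b            # BLX-style real-coded crossover
            child += rng.normal(0.0, mutation_sd, size=child.shape)  # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    return pop[np.argmin([objective(x) for x in pop])]

# Toy stand-in objective: distance of a "temperature profile" from a constant target.
best = real_coded_ga(lambda x: np.sum((x - 0.7) ** 2), bounds=[(0, 1)] * 4, generations=30)
print(best)
```

On a GPU, the per-individual objective evaluations (the finite element solves) are the naturally parallel part of this loop.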
APA, Harvard, Vancouver, ISO, and other styles
44

Popova, Lucy. "The Extended Parallel Process Model." Health Education & Behavior 39, no. 4 (October 14, 2011): 455–73. http://dx.doi.org/10.1177/1090198111418108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Ruf, B., T. Pollok, and M. Weinmann. "EFFICIENT SURFACE-AWARE SEMI-GLOBAL MATCHING WITH MULTI-VIEW PLANE-SWEEP SAMPLING." ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences IV-2/W7 (September 16, 2019): 137–44. http://dx.doi.org/10.5194/isprs-annals-iv-2-w7-137-2019.

Full text
Abstract:
Online augmentation of an oblique aerial image sequence with structural information is an essential aspect in the process of 3D scene interpretation and analysis. One key aspect in this is efficient dense image matching and depth estimation. Here, the Semi-Global Matching (SGM) approach has proven to be one of the most widely used algorithms for efficient depth estimation, providing a good trade-off between accuracy and computational complexity. However, SGM only models a first-order smoothness assumption, thus favoring fronto-parallel surfaces. In this work, we present a hierarchical algorithm that allows for efficient depth and normal map estimation together with confidence measures for each estimate. Our algorithm relies on plane-sweep multi-image matching followed by an extended SGM optimization that incorporates local surface orientations, thus achieving more consistent and accurate estimates in areas made up of slanted surfaces, inherent to oblique aerial imagery. We evaluate numerous configurations of our algorithm on two different datasets using an absolute and a relative accuracy measure. In our evaluation, we show that the results of our approach are comparable to those achieved by refined Structure-from-Motion (SfM) pipelines, such as COLMAP, which are designed for offline processing. In contrast, however, our approach only considers a confined image bundle of an input sequence, thus allowing it to perform an online and incremental computation at 1 Hz to 2 Hz.
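For context, the first-order smoothness model that standard SGM aggregates along each path direction r (and that this work extends with local surface orientations) is usually written as follows; this is the textbook recursion, quoted here for orientation rather than from the paper itself:

```latex
\[
L_{\mathbf r}(\mathbf p, d) = C(\mathbf p, d)
 + \min\Bigl(
     L_{\mathbf r}(\mathbf p - \mathbf r, d),\;
     L_{\mathbf r}(\mathbf p - \mathbf r, d \pm 1) + P_1,\;
     \min_{k} L_{\mathbf r}(\mathbf p - \mathbf r, k) + P_2
   \Bigr)
 - \min_{k} L_{\mathbf r}(\mathbf p - \mathbf r, k),
\qquad
S(\mathbf p, d) = \sum_{\mathbf r} L_{\mathbf r}(\mathbf p, d).
\]
```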
APA, Harvard, Vancouver, ISO, and other styles
46

Fountoukis, S. G., and M. P. Bekakos. "Extended OQL for Object Oriented Parallel Query Processing." Data Science Journal 6 (2007): 121–36. http://dx.doi.org/10.2481/dsj.6.121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Full text
Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.I.IntroductionThe goal of Systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with system theory, mathematics and computer science [1][3], since the isolated knowledge of parts can not explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, being called the “dry-computational” experiment, plays a more and more important role in computing systems biology [2]. Among many methods explored in systems biology, discrete event stochastic simulation is of greatly importance [4][5][6], since a great number of researches have present that stochasticity or “noise” have a crucial effect on the dynamics of small population biological reaction systems [4][7]. Furthermore, recent research shows that the stochasticity is not only important in biological reaction systems with small population but also in some moderate/large population systems [7].To date, Gillespie’s SSA [8] is widely considered to be the most accurate way to capture the dynamics of biological reaction systems instead of traditional mathematical method [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges: Firstly, this type of simulation is extremely time-consuming, since when the types of species and the number of reactions in the biological system are large, SSA requires a huge amount of steps to sample these reactions; Secondly, the assumption that the systems are spatially homogeneous or well-stirred is hardly met in most real biological systems and spatial factors play a key role in the behaviors of most real biological systems [19][20][21][22][23][24]. The next sub-volume method (NSM) [18], presents us an elegant way to access the special problem via domain partition. To our disappointment, sequential stochastic simulation with the NSM is still very time-consuming, and additionally introduced diffusion among neighbor sub-volumes makes things worse. Whereas, the NSM explores a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. 
Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation would be particularly promising. Although there are a few attempts have been conducted [29][30][31], research in this filed is still in its infancy and many issues are in need of further discussion. The next section of the paper presents the background and related work in this domain. In section III, we give the details of design and implementation of model interfaces of LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.II. Background and Related WorkA. Parallel Discrete Event Simulation (PDES)The notion Logical Process (LP) is introduced to PDES as the abstract of the physical process [26], where a system consisting of many physical processes is usually modeled by a set of LP. LP is regarded as the smallest unit that can be executed in PDES and each LP holds a sub-partition of the whole system’s state variables as its private ones. When a LP processes an event, it can only modify the state variables of its own. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say event message exchanging is the only way that LPs interact with each other. Because of the data dependences or interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, there are a larger number of synchronization algorithms have been proposed, e.g. the null-message [26], the time warp (TW) [32], breath time warp (BTW) [33] and etc. According to whether can events of LPs be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of pure conservative parallel simulation for biochemical reaction systems [31]. B. NSM and ANSM The NSM is a spatial variation of Gillespie’ SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents us a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much more smaller “homogeneous” ones, which can be simulated by SSA separately. However, the NSM is inherently combined with the sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, directly parallelization of the NSM may be confronted with the so-called boundary problem and high costs of synchronously accessing the common data structure [29]. In order to obtain higher efficiency of parallel simulation, parallelization of NSM has to firstly free the NSM from the sequential semantics and secondly partition the shared data structure into many “parallel” ones. One of these is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP held its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see algorithm 1). Besides, based on the ANSM, Wang etc. [30] have experimentally tested the performance of several PDES algorithms in the platform called YH-SUPE [27]. 
However, their platform is designed for general simulation applications, thus it would sacrifice some performance for being not able to take into account the characteristics of biological reaction systems. Using the similar ideas of the ANSM, Dematté and Mazza have designed and realized an optimistic simulator. However, they processed events in time-stepped manner, which would lose a specific degree of precisions compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one. In addition, Jeschke etc.[29] have designed and implemented a dynamic time-window simulator to execution the NSM in parallel on the grid computing environment, however, they paid main attention on the analysis of communication costs and determining a better size of the time-window.Fig. 1: the variations from SSA to NSM and from NSM to ANSMC. JAMES II JAMES II is an open source discrete event simulation experiment framework developed by the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on the plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML-files [13]. Combined with the factory method pattern JAMES II innovatively split up the model and simulator, which makes JAMES II is very flexible to add and reuse both of models and simulators. In addition, JAMES II supports various types of modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi and etc.[14]. Besides, a well-defined simulator selection mechanism is designed and developed in JAMES II, which can not only automatically choose the proper simulators according to the modeling formalism but also pick out a specific simulator from a serious of simulators supporting the same modeling formalism according to the user settings [15].III. The Model Interface and SimulatorAs we have mentioned in section II (part C), model and simulator are split up into two separate parts. Thus, in this section, we introduce the designation and implementation of model interface of LP paradigm and more importantly the time warp simulator.A. The Mod Interface of LP ParadigmJAMES II provides abstract model interfaces for different modeling formalism, based on which Wang etc. have designed and implemented model interface of LP paradigm[16]. However, this interface is not scalable well for parallel and distributed simulation of larger scale systems. In our implementation, we accommodate the interface to the situation of parallel and distributed situations. Firstly, the neighbor LP’s reference is replaced by its name in LP’s neighbor queue, because it is improper even dangerous that a local LP hold the references of other LPs in remote memory space. In addition, (pseudo-)random number plays a crucial role to obtain valid and meaningful results in stochastic simulations. However, it is still a very challenge work to find a good random number generator (RNG) [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II to this model interface, where each LP holds a private RNG so that random number streams of different LPs can be independent stochastically. B. The Time Warp SimulatorBased on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator, (LP-)simulator. 
The simulator works strictly as master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master-and LP-)simulators, where a simulator holds all the proxies of targeted ones that work on remote workers. One of the advantages of this communication approach is that PDES codes can be transferred to various hardwire environment, such as Clusters, Grids and distributed computing environment, with only a little modification; The other is that RMI mechanism is easy to realized and independent to any other non-Java libraries. Since the straggler event problem, states have to be saved to rollback events that are pre-processed optimistically. Each time being modified, the state is cloned to a queue by Java clone mechanism. Problem of this copy state saving approach is that it would cause loads of memory space. However, the problem can be made up by a condign GVT calculating mechanism. GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the highest time boundary of events that can be committed so that memories of fossils (processed events and states) less than GVT can be reallocated. GVT calculating is a very knotty for the notorious simultaneous reporting problem and transient messages problem. According to our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see algorithm 2), is contributed to this already rich repository instead of implementing one of GVT algorithms in reference [26] and [28].This algorithm looks like the synchronous algorithm described in reference [26] (pp. 114), however, they are essentially different from each other. This algorithm has never stopped the simulators from processing events when GVT reduction, while algorithm in reference [26] blocks all simulators for GVT calculating. As for the transient message problem, it can be neglect in our implementation, because RMI based remote communication approach is synchronized, that means a simulator will not go on its processing until the remote the massage get to its destination. And because of this, the high-costs message acknowledgement, prevalent over many classical asynchronous GVT algorithms, is not needed anymore too, which should be constructive to the whole performance of the time warp simulator.IV. Benchmark Model and Experiment ResultsA. The Lotka-Volterra Predator-prey SystemIn our experiment, the spatial version of Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2). We choose the system for two considerations: 1) this system is a classical experimental model that has been used in many related researches [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but helpful enough to test the issues we are interested in. The space of predator-prey System is partitioned into a2D NXNgrid, whereNdenotes the edge size of the grid. Initially the population of the Grass, Preys and Predators are set to 1000 in each single sub-volume (LP). In Fig. 2,r1,r2,r3stand for the reaction constants of the reaction 1, 2 and 3 respectively. We usedGrass,dPreyanddPredatorto stand for the diffusion rate of Grass, Prey and Predator separately. 
Being similar to reference [8], we also take the assumption that the population of the grass remains stable, and thusdGrassis set to zero.R1:Grass + Prey ->2Prey(1)R2:Predator +Prey -> 2Predator(2)R3:Predator -> NULL(3)r1=0.01; r2=0.01; r3=10(4)dGrass=0.0;dPrey=2.5;dPredato=5.0(5)Fig. 2: predator-prey systemB. Experiment ResultsThe simulation runs have been executed on a Linux Cluster with 40 computing nodes. Each computing node is equipped with two 64bit 2.53 GHz Intel Xeon QuadCore Processors with 24GB RAM, and nodes are interconnected with Gigabit Ethernet connection. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments have been conducted on the benchmark model of different size of mode to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution time of simulation on single processor with 8 cores is compared. The result shows that it will take more wall clock time to simulate much larger scale systems for the same simulation time. This testifies the fact that larger scale systems will leads to more events in the same time interval. More importantly, the blue line shows that the sequential simulation performance declines very fast when the mode scale becomes large. The bottleneck of sequential simulator is due to the costs of accessing a long event queue to choose the next events. Besides, from the comparison between group 1 and group 2 in this experiment, we could also conclude that high diffusion rate increased the simulation time greatly both in sequential and parallel simulations. This is because LP paradigm has to split diffusion into two processes (diffusion (in) and diffusion (out) event) for two interactive LPs involved in diffusion and high diffusion rate will lead to high proportional of diffusion to reaction. In the second step shown in Fig. 4, the relationship between the speedups from time warp of two different model sizes and the number of work cores involved are demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion systems model with the same model size and parameters using NSM.Fig. 4 shows the comparison of speedup of time warp on a64X64grid and a100X100grid. In the case of a64X64grid, under the condition that only one node is used, the lowest speedup (a little bigger than 1) is achieved when two cores involved, and the highest speedup (about 6) is achieved when 8 cores involved. The influence of the number of cores used in parallel simulation is investigated. In most cases, large number of cores could bring in considerable improvements in the performance of parallel simulation. Also, compared with the two results in Fig. 4, the simulation of larger model achieves better speedup. Combined with time tests (Fig. 3), we find that sequential simulator’s performance declines sharply when the model scale becomes very large, which makes the time warp simulator get better speed-up correspondingly.Fig. 3: Execution time (wall clock time) of Seq. and time warp with respect to different model sizes (N=32, 64, 100, and 128) and model parameters based on single computing node with 8 cores. Results of the test are grouped by the diffusion rates (Group 1: Sequential 1 and Time Warp 1. dPrey=2.5, dPredator=5.0; Group 2: dPrey=0.25, dPredator=0.5, Sequential 2 and Time Warp 2).Fig. 4: Speedup of time warp with respect to the number of work cores and the model size (N=64 and 100). Work cores are chose from one computing node. 
Diffusion rates are dPrey=2.5, dPredator=5.0 and dGrass=0.0.V. Conclusion and Future WorkIn this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, roll back and especially GVT reduction in parallel execution of simulations. The Lotka-Volterra Predator-Prey system is chosen as the benchmark model to test the performance of our time warp simulator and the best experiment results show that it can obtain about 6 times of speed-up against the sequential simulation. The domain this paper concerns with is in the infancy, many interesting issues are worthy of further investigated, e.g. there are many excellent PDES optimistic synchronization algorithms (e.g. the BTW) as well. Next step, we would like to fill some of them into JAMES II. In addition, Gillespie approximation methods (tau-leap[10] etc.) sacrifice some degree of precision for higher simulation speed, but still could not address the aspect of space of biological reaction systems. The combination of spatial element and approximation methods would be very interesting and promising; however, the parallel execution of tau-leap methods should have to overcome many obstacles on the road ahead.AcknowledgmentThis work is supported by the National Natural Science Foundation of China (NSF) Grant (No.60773019) and the Ph.D. Programs Foundation of Ministry of Education of China (No. 200899980004). The authors would like to show their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany for their invaluable advice and kindly help with JAMES II.ReferencesH. Kitano, "Computational systems biology." Nature, vol. 420, no. 6912, pp. 206-210, November 2002.H. Kitano, "Systems biology: a brief overview." Science (New York, N.Y.), vol. 295, no. 5560, pp. 1662-1664, March 2002.A. Aderem, "Systems biology: Its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020.H. de Jong, "Modeling and simulation of genetic regulatory systems: A literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5016, ch. 5, pp. 125-167.Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.J. Himmelspach, R. Ewald, and A. M. 
Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 827-835.J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework james ii-implications for computational systems biology," Brief Bioinform, vol. 11, no. 3, pp. bbp067-300, January 2010.A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! a perspective for computational biology based on james ii," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9.R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," pads, vol. 0, pp. 91-98, 2008.Bing Wang, Jan Himmelspach, Roland Ewald, Yiping Yao, and Adelinde M Uhrmacher. Experimental analysis of logical process simulation algorithms in james ii[C]// In M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, editors, Proceedings of the Winter Simulation Conference, IEEE Computer Science, 2009. 1167-1179.Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 836-844.J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases." Systems biology, vol. 1, no. 2, pp. 230-236, December 2004.K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive lotka-volterra system: Individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.K. Kruse and J. Elf. Kinetics in spatially extended systems. In Z. Szallasi, J. Stelling, and V. Periwal, editors, System Modeling in Cellular Biology. From Concepts to Nuts and Bolts, pages 177–198. MIT Press, Cambridge, MA, 2006.M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.R. M. 
Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.Y. Yao and Y. Zhang, “Solution for analytic simulation based on parallel processing,” Journal of System Simulation, vol. 20, No.24, pp. 6617–6621, 2008.G. Chen and B. K. Szymanski, "Dsim: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th conference on Winter simulation. Winter Simulation Conference, 2005, pp. 346-355.M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," High Performance Computational Systems Biology, International Workshop on, vol. 0, pp. 91-100, October 2009.L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5307, ch. 16, pp. 191-210.D. R. Jefferson, "Virtual time," ACM Trans. Program. Lang. Syst., vol. 7, no. 3, pp. 404-425, July 1985.J. S. Steinman, "Breathing time warp," SIGSIM Simul. Dig., vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473 S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Commun. ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
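Since all of the spatial methods discussed above build on Gillespie's stochastic simulation algorithm, a compact sketch of the sequential direct method applied to the (non-spatial) Lotka-Volterra benchmark may help. This is an illustrative single-sub-volume version using the reaction constants from the abstract, not the authors' parallel ANSM/time-warp code:

```python
import numpy as np

def gillespie_direct(x0, rates, stoich, propensities, t_end, seed=0):
    """Gillespie's direct-method SSA, the sequential kernel that the NSM and
    the parallel ANSM described above build on (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x, rates)          # reaction propensities
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)      # time to the next reaction
        j = rng.choice(len(a), p=a / a0)    # which reaction fires
        x += stoich[j]
    return t, x

# Non-spatial Lotka-Volterra benchmark from the abstract (single sub-volume):
# R1: Grass + Prey -> 2 Prey, R2: Predator + Prey -> 2 Predator, R3: Predator -> 0
rates = np.array([0.01, 0.01, 10.0])
stoich = np.array([[0, 1, 0], [0, -1, 1], [0, 0, -1]])   # [Grass, Prey, Predator]
prop = lambda x, r: np.array([r[0]*x[0]*x[1], r[1]*x[2]*x[1], r[2]*x[2]])
print(gillespie_direct([1000, 1000, 1000], rates, stoich, prop, t_end=0.001))
```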
APA, Harvard, Vancouver, ISO, and other styles
48

Vos, Piet G., and Erwin W. Van Geenen. "A Parallel-Processing Key-Finding Model." Music Perception 14, no. 2 (1996): 185–223. http://dx.doi.org/10.2307/40285717.

Full text
Abstract:
A model of key finding is presented for single-voiced pieces of tonal music. Each tone is input as a pitch class and a duration. The model makes a parallel search for the key in the scalar and chordal domains, taking into account primacy and memory constraints. The model has been tested for a range of tonal music including the fugue subjects of J. S. Bach's Wohltemperierte Klavier (WTK). The notated key was usually found after a few processing steps and from then on remained stable— but was still sensitive to modulation. The performance of the parallel-processing model was compared with the performance of key-finding models previously proposed by Krumhansl and Schmuckler and by Longuet-Higgins and Steedman. The comparison showed that the new model's most distinctive features, implementation of parallel key search in the scalar and chordal domains, as well as the implementation of search-restricting factors, primacy and memory, make the new model a powerful and plausible alternative to the other models. Subsequently, the parallel-processing model's perceptual plausibility has been tested in two experiments, in which 20 musically well-trained subjects had to produce the key(s) of eight WTK fugue themes (Experiment 1) and to rate the key transparency for seven contrapuntal variations of the A minor subject of J. S. Bach's Kunst der Fuge (Experiment 2). A substantial concordance between listeners' judgments and the key inferences produced by the model was found in both experiments. Conceptual limitations, such as the model's disregard for the potential impact of recency on key finding and for expectations from functional implications of tone order, are discussed. Potential extensions of the model are suggested, as well as ideas for further perceptual studies in which the model might be tested in a more advanced manner than in the present study.
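For readers unfamiliar with the Krumhansl-Schmuckler baseline against which the parallel-processing model is compared, a compact correlation-based key estimator looks roughly like this. It is an illustrative sketch using the standard Krumhansl-Kessler probe-tone profiles, and it implements the comparison model, not the parallel scalar/chordal model of the paper:

```python
import numpy as np

# Krumhansl-Kessler major/minor key profiles (probe-tone ratings), as used by
# the Krumhansl-Schmuckler algorithm mentioned in the abstract.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def ks_key_estimate(pitch_classes, durations):
    """Correlate the duration-weighted pitch-class distribution of the input
    with the 24 rotated key profiles and return the best-matching key."""
    hist = np.zeros(12)
    for pc, dur in zip(pitch_classes, durations):
        hist[pc % 12] += dur
    best = None
    for tonic in range(12):
        for name, profile in (("major", MAJOR), ("minor", MINOR)):
            r = np.corrcoef(hist, np.roll(profile, tonic))[0, 1]
            if best is None or r > best[0]:
                best = (r, tonic, name)
    return best

# C-major-like fragment: pitch classes (0 = C) with quarter-note durations.
print(ks_key_estimate([0, 4, 7, 5, 4, 2, 0], [1, 1, 1, 1, 1, 1, 2]))
```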
APA, Harvard, Vancouver, ISO, and other styles
49

Feng, Wei, and Guo Zong Cheng. "A Parallel Algorithm of Image Processing Model." Advanced Materials Research 121-122 (June 2010): 325–28. http://dx.doi.org/10.4028/www.scientific.net/amr.121-122.325.

Full text
Abstract:
The basic algorithm for the parallel FFT is briefly introduced and, combined with parallel-algorithm research results from recent years, an image processing model based on a practical parallel algorithm is analyzed.
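As a concrete illustration of the kind of parallel FFT decomposition such an image processing model typically relies on, here is a row-column 2-D FFT sketch that distributes the 1-D transforms across worker processes. This is a generic example under assumed names, not the model from the paper:

```python
import numpy as np
from multiprocessing import Pool

def _fft_rows(block):
    # 1-D FFT applied to each row of a block; blocks are processed in parallel.
    return np.fft.fft(block, axis=1)

def parallel_fft2(image, workers=4):
    """Row-column 2-D FFT: parallel row FFTs, transpose, parallel row FFTs again.
    A common decomposition for image processing; illustrative only."""
    blocks = np.array_split(image, workers, axis=0)
    with Pool(workers) as pool:
        rows = np.vstack(pool.map(_fft_rows, blocks))                # FFT along rows
        cols = np.vstack(pool.map(_fft_rows,
                                  np.array_split(rows.T, workers, axis=0)))  # FFT along columns
    return cols.T

if __name__ == "__main__":
    img = np.random.rand(256, 256)
    assert np.allclose(parallel_fft2(img), np.fft.fft2(img))
```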
APA, Harvard, Vancouver, ISO, and other styles
50

Das, Debasis, and Rajiv Misra. "Parallel Processing Concept Based Road Traffic Model." Procedia Technology 4 (2012): 267–71. http://dx.doi.org/10.1016/j.protcy.2012.05.041.

Full text
APA, Harvard, Vancouver, ISO, and other styles