Academic literature on the topic 'Simulation Problem Analysis and Research Kernel (SPARK)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Simulation Problem Analysis and Research Kernel (SPARK).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as pdf and read online its abstract whenever available in the metadata.

Journal articles on the topic "Simulation Problem Analysis and Research Kernel (SPARK)"

1

Igue, Fathia Dahir, Anh Dung Tran Le, Alexandra Bourdot, Geoffrey Promis, Sy Tuan Nguyen, Omar Douzane, Laurent Lahoche, and Thierry Langlet. "Impact of Temperature on the Moisture Buffering Performance of Palm and Sunflower Concretes." Applied Sciences 11, no. 12 (June 10, 2021): 5420. http://dx.doi.org/10.3390/app11125420.

Full text
Abstract:
The use of bio-based materials (BBM) in buildings is an interesting solution as they are eco-friendly materials and have low embodied energy. This article aims to investigate the hygric performance of two bio-based materials: palm and sunflower concretes. The moisture buffering value (MBV) characterizes the ability of a material or multilayer component to moderate the variation in the indoor relative humidity (RH). In the literature, the moisture buffer values of bio-based concretes were measured at a constant temperature of 23 °C. However, in reality, the indoor temperature of the buildings is variable. The originality of this article is found in studying the influence of the temperature on the moisture buffer performance of BBM. A study at wall scale on its impact on the indoor RH at room level will be carried out. First, the physical models are presented. Second, the numerical models are implemented in the Simulation Problem Analysis and Research Kernel (SPARK) suited to complex problems. Then, the numerical model validated with the experimental results found in the literature is used to investigate the moisture buffering capacity of BBM as a function of the temperature and its application in buildings. The results show that the temperature has a significant impact on the moisture buffering capacity of bio-based building materials and its capacity to dampen indoor RH variation. Using the numerical model presented in this paper can predict and optimize the hygric performance of BBM designed for building application.
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Ze Fang, and Miao Song. "Research on Modeling Method Based Kernel Principal Component Analysis for Ball and Beam System." Applied Mechanics and Materials 233 (November 2012): 292–96. http://dx.doi.org/10.4028/www.scientific.net/amm.233.292.

Full text
Abstract:
To simulate the human control behavior, and to solve the dimension disaster problem of the simulation model which can simulate complicated controlling behavior, the ball and beam system is researched. The kernel principal component analysis and false nearest neighbor method is adopted, and the simplified ball and beam system controller is designed. The embedding dimension of input time series is determined by the false nearest neighbor method, and then the characteristic value is extracted by the kernel principal component analysis method, so the nonlinear auxiliary variable space feature can be extracted, phase space is reconstructed and the variable is selected. Last from simplified input space to the output space regression mathematical model is fit by using the method of least squares linear regression. Test shows that the control algorithm is effective, with high control precision and stability.
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Min, Ru Chuan Wang, Jing Li, and Zhi Jie Han. "A Novel P2P Business Traffic Prediction Algorithm." Key Engineering Materials 467-469 (February 2011): 1339–44. http://dx.doi.org/10.4028/www.scientific.net/kem.467-469.1339.

Full text
Abstract:
The increasing P2P network traffic on the Internet has leaded to the problem of network congestion. In the consequence of the diversification of the P2P business and protocol, research on the management of P2P traffic has many problems to resolve. Prediction of the P2P traffic is the kernel problem in the P2P traffic management. Based on the existed P2P traffic characters, this paper structures a P2P traffic model, gives a traffic prediction algorithm bases on wavelet-analysis, and proves the accuracy of the algorithm. Simulation experiment figures that the algorithm has a high prediction precision and a superior real-time performance.
APA, Harvard, Vancouver, ISO, and other styles
4

Long, Hao, Na Huo, Yong Yang, and Ben Chen Yu. "DWT Blind Detection Algorithm of Digital Watermarking on still Image Based on KFDA." Applied Mechanics and Materials 373-375 (August 2013): 454–58. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.454.

Full text
Abstract:
The algorithm of blind detection on DWT (Discrete Wavelet Transform) digital watermarking of still image is proposed to overcome the lower detection rate and higher false alarm rate problem. The algorithm utilizes KFDA (Kernel Fisher Discrimination Analysis) theory. With the help of research results of blind detection on DCT digital watermarking, the algorithm passes the test information by stochastic resonance system so as to amplify weak signals. Then the algorithm chooses suitable sample vector by computation. KFDA theory, a kind of learning machine with high precision is used to realize blind detection. Both theoretical analysis and simulation results show that the algorithm improves detection probability at low embedding strength. At the same time the algorithm also decreases false alarm rate.
APA, Harvard, Vancouver, ISO, and other styles
5

Phan, Quoc-Huy, Su-Lim Tan, Ian McLoughlin, and Duc-Lung Vu. "A Unified Framework for GPS Code and Carrier-Phase Multipath Mitigation Using Support Vector Regression." Advances in Artificial Neural Systems 2013 (March 5, 2013): 1–14. http://dx.doi.org/10.1155/2013/240564.

Full text
Abstract:
Multipath mitigation is a long-standing problem in global positioning system (GPS) research and is essential for improving the accuracy and precision of positioning solutions. In this work, we consider multipath error estimation as a regression problem and propose a unified framework for both code and carrier-phase multipath mitigation for ground fixed GPS stations. We use the kernel support vector machine to predict multipath errors, since it is known to potentially offer better-performance traditional models, such as neural networks. The predicted multipath error is then used to correct GPS measurements. We empirically show that the proposed method can reduce the code multipath error standard deviation up to 79% on average, which significantly outperforms other approaches in the literature. A comparative analysis of reduction of double-differential carrier-phase multipath error reveals that a 57% reduction is also achieved. Furthermore, by simulation, we also show that this method is robust to coexisting signals of phenomena (e.g., seismic signals) we wish to preserve.
APA, Harvard, Vancouver, ISO, and other styles
6

Chukhray, Andrey, and Olena Havrylenko. "The engineering skills training process modeling using dynamic bayesian nets." RADIOELECTRONIC AND COMPUTER SYSTEMS, no. 2 (June 2, 2021): 87–96. http://dx.doi.org/10.32620/reks.2021.2.08.

Full text
Abstract:
The subject of research in the article is the process of intelligent computer training in engineering skills. The aim is to model the process of teaching engineering skills in intelligent computer training programs through dynamic Bayesian networks. Objectives: To propose an approach to modeling the process of teaching engineering skills. To assess the student competence level by considering the algorithms development skills in engineering tasks and the algorithms implementation ability. To create a dynamic Bayesian network structure for the learning process. To select values for conditional probability tables. To solve the problems of filtering, forecasting, and retrospective analysis. To simulate the developed dynamic Bayesian network using a special Genie 2.0-environment. The methods used are probability theory and inference methods in Bayesian networks. The following results are obtained: the development of a dynamic Bayesian network for the educational process based on the solution of engineering problems is presented. Mathematical calculations for probabilistic inference problems such as filtering, forecasting, and smoothing are considered. The solution of the filtering problem makes it possible to assess the current level of the student's competence after obtaining the latest probabilities of the development of the algorithm and its numerical calculations of the task. The probability distribution of the learning process model is predicted. The number of additional iterations required to achieve the required competence level was estimated. The retrospective analysis allows getting a smoothed assessment of the competence level, which was obtained after the task's previous instance completion and after the computation of new additional probabilities characterizing the two checkpoints implementation. The solution of the described probabilistic inference problems makes it possible to provide correct information about the learning process for intelligent computer training systems. It helps to get proper feedback and to track the student's competence level. The developed technique of the kernel of probabilistic inference can be used as the decision-making model basis for an automated training process. The scientific novelty lies in the fact that dynamic Bayesian networks are applied to a new class of problems related to the simulation of engineering skills training in the process of performing algorithmic tasks.
APA, Harvard, Vancouver, ISO, and other styles
7

Schuster, Alfons, and Daniel Berrar. "Special Issue on Omnipresent Intelligent Computing – New Developments and Societal Impact." Journal of Advanced Computational Intelligence and Intelligent Informatics 15, no. 7 (August 20, 2011): 785. http://dx.doi.org/10.20965/jaciii.2011.p0785.

Full text
Abstract:
When the computer revolution began in the second half of the 20th century, few could have foreseen the pervasiveness that intelligent devices would have only half a century later. Today, consumers deal with numerous computing devices providing increasingly sophisticated services. Arguably, no other invention has so profoundly impacted on daily home and work lives as the computer. The downside, however, holds the worrying realization that many artifacts of modern technology now touch on the human sphere to the point of risking an individualfs privacy, security, and well-being. The new millennium carries the computer revolution to unprecedented levels where new computing paradigms excite researchers beyond the limits of science fiction. The burgeoning field of synthetic biology, for example, has given rise to novel computing approaches based on biomolecular materials. Indeed, silicon is no longer the only substrate for intelligent information processing. Other unconventional approaches such as computing with slime molds, for example, now embrace even living organisms. Information processing and problem solving strategies observed in nature have inspired the design of novelmachine learning algorithms. Seemingly unlimited computer power now enables the in silico simulation of living organisms and the study of evolutionary processes with enormous efficiency. Although many of these novel, nature-inspired approaches are still in their infancy, they might bring a paradigm shift in computational science. How such a technology-driven paradigm shift may affect the gsofth components of our modern complex society is a many-faceted issue that deserves our consideration and exploration. This special issue focuses on new developments in intelligent computing. A. Schuster and D. Berrar analyze the potentials and risks of current and emerging intelligent computing paradigms. Their article focuses on the interface between humans and intelligent systems and explores potentials and risks emerging for individuals and for the information society at large. L. Palafox and H. Hashimoto propose a new human activity recognition system that relies on the analysis of five key variables to categorize human activities. A prototypical implementation of the system demonstrates promising results for applications in intelligent room settings. M. Kimura and M. Sugiyama propose a novel approach to unsupervised clustering, which is based on least squares mutual information. The advantage of this approach is that hyperparameters of clustering algorithms such as kernel parameters no longer need to be manually calibrated, but they can be automatically optimized. D. Ricinschi and E. Tokumitsu explore new ways of exploiting physical properties of ferroelectric materials. They investigate how the amount of polarization generated by two electrical pulses can be modeled and explained in the framework of game theory. This special issue informs the research community about exciting new developments in intelligent computing, with an outlook on their societal impacts.
APA, Harvard, Vancouver, ISO, and other styles
8

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Full text
Abstract:
To date, discrete event stochastic simulations of large scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factor plays a critical role in the dynamics of most biological reaction systems. The NSM (the Next Sub-Volume Method), a spatial variation of the Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While being able to explore high degree of parallelism in systems, NSM is inherently sequential, which still suffers from the problem of low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) TW (time warp) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (the Abstract NSM), a parallel variation of the NSM. The simulation results of classical Lotka-Volterra biological reaction system show that our time warp simulator obtains remarkable parallel speed-up against sequential execution of the NSM.I.IntroductionThe goal of Systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with system theory, mathematics and computer science [1][3], since the isolated knowledge of parts can not explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, being called the “dry-computational” experiment, plays a more and more important role in computing systems biology [2]. Among many methods explored in systems biology, discrete event stochastic simulation is of greatly importance [4][5][6], since a great number of researches have present that stochasticity or “noise” have a crucial effect on the dynamics of small population biological reaction systems [4][7]. Furthermore, recent research shows that the stochasticity is not only important in biological reaction systems with small population but also in some moderate/large population systems [7].To date, Gillespie’s SSA [8] is widely considered to be the most accurate way to capture the dynamics of biological reaction systems instead of traditional mathematical method [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges: Firstly, this type of simulation is extremely time-consuming, since when the types of species and the number of reactions in the biological system are large, SSA requires a huge amount of steps to sample these reactions; Secondly, the assumption that the systems are spatially homogeneous or well-stirred is hardly met in most real biological systems and spatial factors play a key role in the behaviors of most real biological systems [19][20][21][22][23][24]. The next sub-volume method (NSM) [18], presents us an elegant way to access the special problem via domain partition. To our disappointment, sequential stochastic simulation with the NSM is still very time-consuming, and additionally introduced diffusion among neighbor sub-volumes makes things worse. Whereas, the NSM explores a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation would be particularly promising. Although there are a few attempts have been conducted [29][30][31], research in this filed is still in its infancy and many issues are in need of further discussion. The next section of the paper presents the background and related work in this domain. In section III, we give the details of design and implementation of model interfaces of LP paradigm and the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.II. Background and Related WorkA. Parallel Discrete Event Simulation (PDES)The notion Logical Process (LP) is introduced to PDES as the abstract of the physical process [26], where a system consisting of many physical processes is usually modeled by a set of LP. LP is regarded as the smallest unit that can be executed in PDES and each LP holds a sub-partition of the whole system’s state variables as its private ones. When a LP processes an event, it can only modify the state variables of its own. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say event message exchanging is the only way that LPs interact with each other. Because of the data dependences or interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, there are a larger number of synchronization algorithms have been proposed, e.g. the null-message [26], the time warp (TW) [32], breath time warp (BTW) [33] and etc. According to whether can events of LPs be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of pure conservative parallel simulation for biochemical reaction systems [31]. B. NSM and ANSM The NSM is a spatial variation of Gillespie’ SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents us a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much more smaller “homogeneous” ones, which can be simulated by SSA separately. However, the NSM is inherently combined with the sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, directly parallelization of the NSM may be confronted with the so-called boundary problem and high costs of synchronously accessing the common data structure [29]. In order to obtain higher efficiency of parallel simulation, parallelization of NSM has to firstly free the NSM from the sequential semantics and secondly partition the shared data structure into many “parallel” ones. One of these is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP held its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see algorithm 1). Besides, based on the ANSM, Wang etc. [30] have experimentally tested the performance of several PDES algorithms in the platform called YH-SUPE [27]. However, their platform is designed for general simulation applications, thus it would sacrifice some performance for being not able to take into account the characteristics of biological reaction systems. Using the similar ideas of the ANSM, Dematté and Mazza have designed and realized an optimistic simulator. However, they processed events in time-stepped manner, which would lose a specific degree of precisions compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one. In addition, Jeschke etc.[29] have designed and implemented a dynamic time-window simulator to execution the NSM in parallel on the grid computing environment, however, they paid main attention on the analysis of communication costs and determining a better size of the time-window.Fig. 1: the variations from SSA to NSM and from NSM to ANSMC. JAMES II JAMES II is an open source discrete event simulation experiment framework developed by the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on the plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML-files [13]. Combined with the factory method pattern JAMES II innovatively split up the model and simulator, which makes JAMES II is very flexible to add and reuse both of models and simulators. In addition, JAMES II supports various types of modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi and etc.[14]. Besides, a well-defined simulator selection mechanism is designed and developed in JAMES II, which can not only automatically choose the proper simulators according to the modeling formalism but also pick out a specific simulator from a serious of simulators supporting the same modeling formalism according to the user settings [15].III. The Model Interface and SimulatorAs we have mentioned in section II (part C), model and simulator are split up into two separate parts. Thus, in this section, we introduce the designation and implementation of model interface of LP paradigm and more importantly the time warp simulator.A. The Mod Interface of LP ParadigmJAMES II provides abstract model interfaces for different modeling formalism, based on which Wang etc. have designed and implemented model interface of LP paradigm[16]. However, this interface is not scalable well for parallel and distributed simulation of larger scale systems. In our implementation, we accommodate the interface to the situation of parallel and distributed situations. Firstly, the neighbor LP’s reference is replaced by its name in LP’s neighbor queue, because it is improper even dangerous that a local LP hold the references of other LPs in remote memory space. In addition, (pseudo-)random number plays a crucial role to obtain valid and meaningful results in stochastic simulations. However, it is still a very challenge work to find a good random number generator (RNG) [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II to this model interface, where each LP holds a private RNG so that random number streams of different LPs can be independent stochastically. B. The Time Warp SimulatorBased on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator, (LP-)simulator. The simulator works strictly as master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master-and LP-)simulators, where a simulator holds all the proxies of targeted ones that work on remote workers. One of the advantages of this communication approach is that PDES codes can be transferred to various hardwire environment, such as Clusters, Grids and distributed computing environment, with only a little modification; The other is that RMI mechanism is easy to realized and independent to any other non-Java libraries. Since the straggler event problem, states have to be saved to rollback events that are pre-processed optimistically. Each time being modified, the state is cloned to a queue by Java clone mechanism. Problem of this copy state saving approach is that it would cause loads of memory space. However, the problem can be made up by a condign GVT calculating mechanism. GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the highest time boundary of events that can be committed so that memories of fossils (processed events and states) less than GVT can be reallocated. GVT calculating is a very knotty for the notorious simultaneous reporting problem and transient messages problem. According to our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see algorithm 2), is contributed to this already rich repository instead of implementing one of GVT algorithms in reference [26] and [28].This algorithm looks like the synchronous algorithm described in reference [26] (pp. 114), however, they are essentially different from each other. This algorithm has never stopped the simulators from processing events when GVT reduction, while algorithm in reference [26] blocks all simulators for GVT calculating. As for the transient message problem, it can be neglect in our implementation, because RMI based remote communication approach is synchronized, that means a simulator will not go on its processing until the remote the massage get to its destination. And because of this, the high-costs message acknowledgement, prevalent over many classical asynchronous GVT algorithms, is not needed anymore too, which should be constructive to the whole performance of the time warp simulator.IV. Benchmark Model and Experiment ResultsA. The Lotka-Volterra Predator-prey SystemIn our experiment, the spatial version of Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2). We choose the system for two considerations: 1) this system is a classical experimental model that has been used in many related researches [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but helpful enough to test the issues we are interested in. The space of predator-prey System is partitioned into a2D NXNgrid, whereNdenotes the edge size of the grid. Initially the population of the Grass, Preys and Predators are set to 1000 in each single sub-volume (LP). In Fig. 2,r1,r2,r3stand for the reaction constants of the reaction 1, 2 and 3 respectively. We usedGrass,dPreyanddPredatorto stand for the diffusion rate of Grass, Prey and Predator separately. Being similar to reference [8], we also take the assumption that the population of the grass remains stable, and thusdGrassis set to zero.R1:Grass + Prey ->2Prey(1)R2:Predator +Prey -> 2Predator(2)R3:Predator -> NULL(3)r1=0.01; r2=0.01; r3=10(4)dGrass=0.0;dPrey=2.5;dPredato=5.0(5)Fig. 2: predator-prey systemB. Experiment ResultsThe simulation runs have been executed on a Linux Cluster with 40 computing nodes. Each computing node is equipped with two 64bit 2.53 GHz Intel Xeon QuadCore Processors with 24GB RAM, and nodes are interconnected with Gigabit Ethernet connection. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments have been conducted on the benchmark model of different size of mode to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution time of simulation on single processor with 8 cores is compared. The result shows that it will take more wall clock time to simulate much larger scale systems for the same simulation time. This testifies the fact that larger scale systems will leads to more events in the same time interval. More importantly, the blue line shows that the sequential simulation performance declines very fast when the mode scale becomes large. The bottleneck of sequential simulator is due to the costs of accessing a long event queue to choose the next events. Besides, from the comparison between group 1 and group 2 in this experiment, we could also conclude that high diffusion rate increased the simulation time greatly both in sequential and parallel simulations. This is because LP paradigm has to split diffusion into two processes (diffusion (in) and diffusion (out) event) for two interactive LPs involved in diffusion and high diffusion rate will lead to high proportional of diffusion to reaction. In the second step shown in Fig. 4, the relationship between the speedups from time warp of two different model sizes and the number of work cores involved are demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion systems model with the same model size and parameters using NSM.Fig. 4 shows the comparison of speedup of time warp on a64X64grid and a100X100grid. In the case of a64X64grid, under the condition that only one node is used, the lowest speedup (a little bigger than 1) is achieved when two cores involved, and the highest speedup (about 6) is achieved when 8 cores involved. The influence of the number of cores used in parallel simulation is investigated. In most cases, large number of cores could bring in considerable improvements in the performance of parallel simulation. Also, compared with the two results in Fig. 4, the simulation of larger model achieves better speedup. Combined with time tests (Fig. 3), we find that sequential simulator’s performance declines sharply when the model scale becomes very large, which makes the time warp simulator get better speed-up correspondingly.Fig. 3: Execution time (wall clock time) of Seq. and time warp with respect to different model sizes (N=32, 64, 100, and 128) and model parameters based on single computing node with 8 cores. Results of the test are grouped by the diffusion rates (Group 1: Sequential 1 and Time Warp 1. dPrey=2.5, dPredator=5.0; Group 2: dPrey=0.25, dPredator=0.5, Sequential 2 and Time Warp 2).Fig. 4: Speedup of time warp with respect to the number of work cores and the model size (N=64 and 100). Work cores are chose from one computing node. Diffusion rates are dPrey=2.5, dPredator=5.0 and dGrass=0.0.V. Conclusion and Future WorkIn this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, roll back and especially GVT reduction in parallel execution of simulations. The Lotka-Volterra Predator-Prey system is chosen as the benchmark model to test the performance of our time warp simulator and the best experiment results show that it can obtain about 6 times of speed-up against the sequential simulation. The domain this paper concerns with is in the infancy, many interesting issues are worthy of further investigated, e.g. there are many excellent PDES optimistic synchronization algorithms (e.g. the BTW) as well. Next step, we would like to fill some of them into JAMES II. In addition, Gillespie approximation methods (tau-leap[10] etc.) sacrifice some degree of precision for higher simulation speed, but still could not address the aspect of space of biological reaction systems. The combination of spatial element and approximation methods would be very interesting and promising; however, the parallel execution of tau-leap methods should have to overcome many obstacles on the road ahead.AcknowledgmentThis work is supported by the National Natural Science Foundation of China (NSF) Grant (No.60773019) and the Ph.D. Programs Foundation of Ministry of Education of China (No. 200899980004). The authors would like to show their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany for their invaluable advice and kindly help with JAMES II.ReferencesH. Kitano, "Computational systems biology." Nature, vol. 420, no. 6912, pp. 206-210, November 2002.H. Kitano, "Systems biology: a brief overview." Science (New York, N.Y.), vol. 295, no. 5560, pp. 1662-1664, March 2002.A. Aderem, "Systems biology: Its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020.H. de Jong, "Modeling and simulation of genetic regulatory systems: A literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5016, ch. 5, pp. 125-167.Y. Tao, Y. Jia, and G. T. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.J. Himmelspach, R. Ewald, and A. M. Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 827-835.J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework james ii-implications for computational systems biology," Brief Bioinform, vol. 11, no. 3, pp. bbp067-300, January 2010.A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! a perspective for computational biology based on james ii," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9.R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," pads, vol. 0, pp. 91-98, 2008.Bing Wang, Jan Himmelspach, Roland Ewald, Yiping Yao, and Adelinde M Uhrmacher. Experimental analysis of logical process simulation algorithms in james ii[C]// In M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, editors, Proceedings of the Winter Simulation Conference, IEEE Computer Science, 2009. 1167-1179.Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 836-844.J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases." Systems biology, vol. 1, no. 2, pp. 230-236, December 2004.K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways? Towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (pts) pathway in escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive lotka-volterra system: Individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.K. Kruse and J. Elf. Kinetics in spatially extended systems. In Z. Szallasi, J. Stelling, and V. Periwal, editors, System Modeling in Cellular Biology. From Concepts to Nuts and Bolts, pages 177–198. MIT Press, Cambridge, MA, 2006.M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.R. M. Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.Y. Yao and Y. Zhang, “Solution for analytic simulation based on parallel processing,” Journal of System Simulation, vol. 20, No.24, pp. 6617–6621, 2008.G. Chen and B. K. Szymanski, "Dsim: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th conference on Winter simulation. Winter Simulation Conference, 2005, pp. 346-355.M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," High Performance Computational Systems Biology, International Workshop on, vol. 0, pp. 91-100, October 2009.L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, vol. 5307, ch. 16, pp. 191-210.D. R. Jefferson, "Virtual time," ACM Trans. Program. Lang. Syst., vol. 7, no. 3, pp. 404-425, July 1985.J. S. Steinman, "Breathing time warp," SIGSIM Simul. Dig., vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473 S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Commun. ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Simulation Problem Analysis and Research Kernel (SPARK)"

1

Tran, Le Anh Dung. "Etude des transferts hygrothermiques dans le béton de chanvre et leur application au bâtiment." Phd thesis, Reims, 2010. http://theses.univ-reims.fr/sciences/2010REIMS012.pdf.

Full text
Abstract:
Dans le cadre du développement durable, les nouvelles réglementations en matière d’isolation thermique dans le secteur du bâtiment, conduisent les chercheurs à la recherche de nouveaux matériaux pour constituer des systèmes économes en énergie tout en assurant le confort de l’habitat. Cette recherche s’est très vite dirigée vers l’utilisation de matériaux issus de la matière végétale. Parmi les nouveaux matériaux à base végétale, le chanvre est le plus utilisé dans la construction. Les recherches effectuées jusqu’à ce jour ont permis de déterminer les propriétés physiques de ce matériau. Ces valeurs ne reflètent que partiellement le confort ressenti dans les locaux dont les parois sont en béton de chanvre. L’objectif de cette thèse est d’étudier le comportement hygrothermique du béton de chanvre au niveau d’une paroi et à l’échelle d’un local. La première partie de cette thèse est consacrée à l’étude bibliographique liée aux intérêts d’utilisation du béton de chanvre puis sa performance est comparée aux autres matériaux du génie civil. Après avoir présenté le modèle des transferts hygrothermiques dans le bâtiment ainsi que l’environnement de simulation SPARK, adapté à la résolution des systèmes d’équations non linéaires, les simulations sont effectuées à l’échelle d’une paroi simple, d’une paroi multicouche et du bâtiment afin de valider les modèles par rapport à des résultats trouvés dans la littérature. La deuxième partie est consacrée à l’étude du comportement hygrothermique d’une paroi et d’un local en béton de chanvre sous des conditions climatiques statique et dynamique. Les résultats obtenus pour les conditions hivernales réelles ont montré que la combinaison de la ventilation hygroréglable avec la capacité de sorption des parois en béton de chanvre permet de réduire de 12% les besoins énergétiques par rapport à une ventilation classique. Enfin, la dernière partie est consacrée à l’étude préliminaire d’un nouveau matériau 100% végétal à base d’une matrice d’amidon de blé et de chènevotte. Sa performance hygrothermique dans le bâtiment est mise en évidence pour les conditions climatiques de Nancy
Within the framework of the sustainable development, new thermal regulations concerning thermal insulation in building sector, lead researchers to develop new materials in order to establish energy efficient systems which insure thermal comfort. Vegetal fibre materials are a good choice to respond to this demand and in particular hemp concrete which is more and more used in the construction. The researches done until this day allowed us to determine its physical properties but there are no works about its hygrothermal performance in building envelopes. Therefore, the objective of this thesis is to study transient hygrothermal behaviour of hemp concrete in buildings. The first part of this thesis is dedicated to the bibliographical studies concerning hemp concrete use and its physical properties. Besides, it is compared to other materials used in construction. After presenting the mathematical models for heat and mass transfer in buildings and their implementation in the simulation environment SPARK suited to non linear complex problems, simulations are used for simple layer walls, multilayered walls and on the whole building level. The second part is concentrated on the hygrothermal behaviour of hemp concrete walls and buildings under static and dynamic conditions. Our results suggest that due to its high moisture buffering capacity, coupling hemp concrete with relative humidity sensitive ventilation can achieve a reduction of 12% in energy consumption when compared to a classical ventilation system. Finally, the last part of this work presents a new 100% vegetal material made of hemp fibres within starch matrix. Its hygrothermal performance in buildings is shown for the climatic conditions of Nancy City
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography