Journal articles on the topic 'Graphical Modeling Framework (GMF)'

Consult the top 50 journal articles for your research on the topic 'Graphical Modeling Framework (GMF).'

You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Roubi, Sarra, Mohammed Erramdani, and Samir Mbarki. "A Model Driven Approach for generating Graphical User Interface for MVC Rich Internet Application." Computer and Information Science 9, no. 2 (April 19, 2016): 91. http://dx.doi.org/10.5539/cis.v9n2p91.

Abstract:
Web applications have witnessed significant improvements and now exhibit advanced user interface behaviors and functionalities. Along with this evolution, Rich Internet Applications (RIAs) were proposed as a response to these needs, combining the richness and interactivity of desktop interfaces with the web distribution model. However, RIAs are complex applications whose design and implementation are time-consuming, and the available tools are specialized for manual design. In this paper, we present a new model-driven approach in which well-known Model-Driven Engineering (MDE) frameworks and technologies, such as the Eclipse Modeling Framework (EMF), the Graphical Modeling Framework (GMF), Query View Transformation (QVTo), and Acceleo, are used to enable the design and automatic code generation of the RIA. The method focuses on simplifying the designer's task, so that the designer need not be aware of the implementation specifics.
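As an illustration of the model-to-text generation idea the abstract describes, the following minimal Python sketch walks a hand-written UI "model" and emits HTML. It is not the authors' EMF/GMF/Acceleo tool chain; the widget model, field names, and HTML target are all invented for the example.

```python
# Toy model-to-text transformation: a declarative "model" of a user
# interface is walked and turned into code. Real MDE tool chains operate
# on EMF models with template languages such as Acceleo; this is only an
# analogy with invented structures.

ui_model = {
    "form": "login",
    "widgets": [
        {"kind": "text",     "name": "username", "label": "User name"},
        {"kind": "password", "name": "password", "label": "Password"},
        {"kind": "button",   "name": "submit",   "label": "Sign in"},
    ],
}

def generate_widget(widget):
    """Map one abstract widget onto a concrete HTML snippet."""
    if widget["kind"] == "button":
        return f'<button name="{widget["name"]}">{widget["label"]}</button>'
    return (f'<label>{widget["label"]}'
            f'<input type="{widget["kind"]}" name="{widget["name"]}"></label>')

def generate_form(model):
    """Model-to-text transformation: abstract UI model -> HTML form."""
    body = "\n  ".join(generate_widget(w) for w in model["widgets"])
    return f'<form id="{model["form"]}">\n  {body}\n</form>'

if __name__ == "__main__":
    print(generate_form(ui_model))
```
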
2

Chama, Wafa, Allaoua Chaoui, and Seidali Rehab. "Formal Modeling and Analysis of Object Oriented Systems using Triple Graph Grammars." International Journal of Embedded and Real-Time Communication Systems 6, no. 2 (April 2015): 48–64. http://dx.doi.org/10.4018/ijertcs.2015040103.

Abstract:
This paper proposes a Model Driven Engineering automatic translation approach based on the integration of rewriting logic formal specifications and UML semi-formal models. This integration contributes to formalizing UML models, since they lack formal semantics. It aims at providing UML with the capabilities of rewriting logic and its Maude language to control and detect inconsistencies in UML diagrams. The rewriting-logic-based Maude language allows simulation and verification of system properties using its LTL model checker. This automatic translation approach is based on meta-modeling and graph transformation, since UML diagrams are graphs. More precisely, the authors propose five meta-models and three triple graph grammars to perform the translation process. The authors use Eclipse Generative Modeling tools: the Eclipse Modeling Framework (EMF) for meta-modeling, the Graphical Modeling Framework (GMF) for generating visual modeling tools, and the TGG Interpreter for specifying the triple graph grammars. The approach is illustrated through an example.
3

Balaji, V., Rusty Benson, Bruce Wyman, and Isaac Held. "Coarse-grained component concurrency in Earth system modeling: parallelizing atmospheric radiative transfer in the GFDL AM3 model using the Flexible Modeling System coupling framework." Geoscientific Model Development 9, no. 10 (October 11, 2016): 3605–16. http://dx.doi.org/10.5194/gmd-9-3605-2016.

Abstract:
Abstract. Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
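The coarse-grained concurrency described above can be sketched, very schematically, as two components advanced in parallel from the same starting state and combined at the end of a step. The Python fragment below is only a conceptual illustration with invented component functions; FMS itself schedules Fortran components across MPI processor sets.

```python
# Coarse-grained component concurrency, toy version: radiation runs in
# parallel with the rest of the atmospheric physics and dynamics, and the
# tendencies are summed once both have finished. Component functions and
# numbers are invented stand-ins.
from concurrent.futures import ThreadPoolExecutor
import time

def radiative_transfer(temperature):
    """Stand-in for the expensive radiation component."""
    time.sleep(0.2)
    return -0.5                          # radiative temperature increment [K]

def other_physics_and_dynamics(temperature):
    """Stand-in for dynamics plus all remaining physics components."""
    time.sleep(0.1)
    return +0.8                          # combined increment from everything else [K]

def step(temperature):
    # Both components see the state at the start of the step and run
    # concurrently; their increments are combined afterwards.
    with ThreadPoolExecutor(max_workers=2) as pool:
        rad = pool.submit(radiative_transfer, temperature)
        dyn = pool.submit(other_physics_and_dynamics, temperature)
        return temperature + rad.result() + dyn.result()

print(step(288.0))                       # 288.3
```
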
4

Norling, Magnus Dahler, Leah Amber Jackson-Blake, José-Luis Guerrero Calidonio, and James Edward Sample. "Rapid development of fast and flexible environmental models: the Mobius framework v1.0." Geoscientific Model Development 14, no. 4 (April 9, 2021): 1885–97. http://dx.doi.org/10.5194/gmd-14-1885-2021.

Abstract:
Abstract. The Mobius model building system is a new open-source framework for building fast and flexible environmental models. Mobius makes it possible for researchers with limited programming experience to build performant models with potentially complicated structures. Mobius models can be easily interacted with through the MobiView graphical user interface and through the Python programming language. Mobius was initially developed to support catchment-scale hydrology and water-quality modelling but can be used to represent any system of hierarchically structured ordinary differential equations, such as population dynamics or toxicological models. Here, we demonstrate how Mobius can be used to quickly prototype several different model structures for a dissolved organic carbon catchment model and use built-in auto-calibration and statistical uncertainty analysis tools to help decide on the best model structures. Overall, we hope the modular model building platform offered by Mobius will provide a step forward for environmental modelling, providing an alternative to the “one size fits all” modelling paradigm. By making it easier to explore a broader range of model structures and parameterisations, users are encouraged to build more appropriate models, and in turn this improves process understanding and allows for more robust modelling in support of decision making.
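As a rough illustration of the kind of hierarchically structured ODE system such a framework targets, the sketch below integrates a two-box linear-reservoir model of dissolved organic carbon with SciPy. The structure, parameter names, and values are invented for illustration; this is not the Mobius or MobiView API.

```python
# A small stand-in for the kind of ODE system Mobius-style frameworks target:
# two linked linear reservoirs (soil water and stream) carrying dissolved
# organic carbon. Parameters and fluxes are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

k_soil, k_stream = 0.05, 0.2    # first-order rate constants [1/day]
input_flux = 1.0                # DOC input to the soil box [kg/day]

def rhs(t, y):
    soil, stream = y
    d_soil = input_flux - k_soil * soil
    d_stream = k_soil * soil - k_stream * stream
    return [d_soil, d_stream]

sol = solve_ivp(rhs, t_span=(0, 365), y0=[0.0, 0.0],
                t_eval=np.linspace(0, 365, 12))
print(sol.y[1])   # DOC stock in the stream box over the year
```
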
5

Lu, Guo Liang, Yi Qi Zhou, and Xue Yong Li. "Mechanical Parts Recognition with 3D Graphical Modeling." Applied Mechanics and Materials 644-650 (September 2014): 4505–8. http://dx.doi.org/10.4028/www.scientific.net/amm.644-650.4505.

Abstract:
Computer vision based mechanical parts recognition has received much research attention in recent years. In this paper, we present a new framework to address this problem. The framework utilizes computer graphics technology to model mechanical parts. Recognition is realized by comparing a query image to the instance images using an improved affine transformation based on particle swarm optimization (PSO). Our experiments show that the proposed framework outperforms conventional invariant-moment-based recognition methods in recognition rate.
6

Figueroa, Pablo A. "Visual Programming for Virtual Reality Applications Based on InTml." Journal on Interactive Systems 3, no. 1 (June 15, 2012): 1. http://dx.doi.org/10.5753/jis.2012.607.

Abstract:
This paper presents our work on a visual programming environment (VPE) for portable, implementation-independent virtual reality (VR) applications. Previously, we defined InTml, the Interaction Techniques Markup Language, a domain-specific language for VR applications, and some initial command-line-based development tools. Using the concept of Model Driven Development (MDD) and with the aid of tools from the Eclipse Graphical Modeling Framework (GMF), we built an IDE for VR applications that allows the visual description of components, application creation, and code generation for targeted runtime environments in C++, Java, and ActionScript. We report some advantages and shortcomings of this approach to tool development, some results from our preliminary user studies, and lessons learned. In general, an MDD-based approach to a VPE is challenging in terms of both the learning curve and the usability of the final IDE.
7

Arabas, S., A. Jaruga, H. Pawlowska, and W. W. Grabowski. "libcloudph++ 1.0: a single-moment bulk, double-moment bulk, and particle-based warm-rain microphysics library in C++." Geoscientific Model Development 8, no. 6 (June 9, 2015): 1677–707. http://dx.doi.org/10.5194/gmd-8-1677-2015.

Abstract:
Abstract. This paper introduces a library of algorithms for representing cloud microphysics in numerical models. The library is written in C++, hence the name libcloudph++. In the current release, the library covers three warm-rain schemes: the single- and double-moment bulk schemes, and the particle-based scheme with Monte Carlo coalescence. The three schemes are intended for modelling frameworks of different dimensionalities and complexities ranging from parcel models to multi-dimensional cloud-resolving (e.g. large-eddy) simulations. A two-dimensional (2-D) prescribed-flow framework is used in the paper to illustrate the library features. The libcloudph++ and all its mandatory dependencies are free and open-source software. The Boost.units library is used for zero-overhead dimensional analysis of the code at compile time. The particle-based scheme is implemented using the Thrust library that allows one to leverage the power of graphics processing units (GPU), retaining the possibility of compiling the unchanged code for execution on single or multiple standard processors (CPUs). The paper includes a complete description of the programming interface (API) of the library and a performance analysis including comparison of GPU and CPU set-ups.
8

Sundström, Gunilla A. "Modeling Information Search Behavior for Design Purposes: An Example from Process Control." Proceedings of the Human Factors Society Annual Meeting 32, no. 19 (October 1988): 1376–80. http://dx.doi.org/10.1177/154193128803201915.

Abstract:
Current models of operator behavior in supervisory control systems are reviewed, with special focus on their usefulness for the graphical design of human-machine interfaces in dynamic technical systems. An alternative framework is described and used in a knowledge-based approach to represent the information search behavior of operators for graphical design purposes.
9

Broman, David. "Interactive Programmatic Modeling." ACM Transactions on Embedded Computing Systems 20, no. 4 (June 2021): 1–26. http://dx.doi.org/10.1145/3431387.

Abstract:
Modeling and computational analyses are fundamental activities within science and engineering. Analysis activities can take various forms, such as simulation of executable models, formal verification of model properties, or inference of hidden model variables. Traditionally, tools for modeling and analysis have similar workflows: (i) a user designs a textual or graphical model or the model is inferred from data, (ii) a tool performs computational analyses on the model, and (iii) a visualization tool displays the resulting data. This article identifies three inherent problems with the traditional approach: the recomputation problem, the variable inspection problem, and the model expressiveness problem. As a solution, we propose a conceptual framework called Interactive Programmatic Modeling. We formalize the interface of the framework and illustrate how it can be used in two different domains: equation-based modeling and probabilistic programming.
10

Vallis, Geoffrey K., Greg Colyer, Ruth Geen, Edwin Gerber, Martin Jucker, Penelope Maher, Alexander Paterson, Marianne Pietschnig, James Penn, and Stephen I. Thomson. "Isca, v1.0: a framework for the global modelling of the atmospheres of Earth and other planets at varying levels of complexity." Geoscientific Model Development 11, no. 3 (March 6, 2018): 843–59. http://dx.doi.org/10.5194/gmd-11-843-2018.

Abstract:
Abstract. Isca is a framework for the idealized modelling of the global circulation of planetary atmospheres at varying levels of complexity and realism. The framework is an outgrowth of models from the Geophysical Fluid Dynamics Laboratory in Princeton, USA, designed for Earth's atmosphere, but it may readily be extended into other planetary regimes. Various forcing and radiation options are available, from dry, time invariant, Newtonian thermal relaxation to moist dynamics with radiative transfer. Options are available in the dry thermal relaxation scheme to account for the effects of obliquity and eccentricity (and so seasonality), different atmospheric optical depths and a surface mixed layer. An idealized grey radiation scheme, a two-band scheme, and a multiband scheme are also available, all with simple moist effects and astronomically based solar forcing. At the complex end of the spectrum the framework provides a direct connection to comprehensive atmospheric general circulation models. For Earth modelling, options include an aquaplanet and configurable continental outlines and topography. Continents may be defined by changing albedo, heat capacity, and evaporative parameters and/or by using a simple bucket hydrology model. Oceanic Q fluxes may be added to reproduce specified sea surface temperatures, with arbitrary continental distributions. Planetary atmospheres may be configured by changing planetary size and mass, solar forcing, atmospheric mass, radiation, and other parameters. Examples are given of various Earth configurations as well as a giant planet simulation, a slowly rotating terrestrial planet simulation, and tidally locked and other orbitally resonant exoplanet simulations. The underlying model is written in Fortran and may largely be configured with Python scripts. Python scripts are also used to run the model on different architectures, to archive the output, and for diagnostics, graphics, and post-processing. All of these features are publicly available in a Git-based repository.
11

CHOI, BYOUNGKYU, BUMCHUL PARK, and HO YEOL RYU. "VIRTUAL FACTORY SIMULATOR FRAMEWORK FOR LINE PROTOTYPING." Journal of Advanced Manufacturing Systems 03, no. 01 (June 2004): 5–20. http://dx.doi.org/10.1142/s0219686704000363.

Abstract:
Presented in the paper is a virtual factory simulator (VFS) framework, a 3D solid-based factory simulator to be used as a line-prototyping tool for an AMS (automated manufacturing system). The VFS framework supports 3D resource and factory layout modeling, physical emulation, and performance simulation: the physical emulation is used for kinematics simulation, interference checks between resources, and the generation and verification of device programs, while the performance simulation is used for material flow and control logic evaluation and system performance evaluation. The proposed VFS framework supports a fully integrated graphical modeling environment for emulation scenario modeling and material flow and control logic modeling. Therefore, the proposed VFS meets the modeling requirements of high modeling power, ease of model building and validation, and ease of communication with stakeholders compared to other commercial systems. Based on the VFS framework, a virtual factory simulator named VM-Factory® has been developed in C++ with OpenGL and a DEVS engine, and its validity was demonstrated by applying it to construct a virtual factory for an FMS (flexible manufacturing system).
12

XIANG, YANG, and FRANK HANSHAR. "MULTIAGENT EXPEDITION WITH GRAPHICAL MODELS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 06 (December 2011): 939–76. http://dx.doi.org/10.1142/s0218488511007416.

Abstract:
We investigate a class of multiagent planning problems termed multiagent expedition, where agents move around an open, unknown, partially observable, stochastic, and physical environment, in pursuit of multiple and alternative goals of different utility. Optimal planning in multiagent expedition is highly intractable. We introduce the notion of conditional optimality, decompose the task into a set of semi-independent optimization subtasks, and apply a decision-theoretic multiagent graphical model to solve each subtask optimally. A set of techniques are proposed to enhance modeling so that the resultant graphical model can be practically evaluated. Effectiveness of the framework and its scalability are demonstrated through experiments. Multiagent expedition can be characterized as decentralized partially observable Markov decision processes (Dec-POMDPs). Hence, this work contributes towards practical planning in Dec-POMDPs.
13

Eremin, I. E., V. I. Trukhin, A. V. Natsvin, and A. N. Cherkasov. "THREE-DIMENSIONAL COMPUTER MODELING OF ALBAZIN FORTRESS IN 1684. PART IV." Informatika i sistemy upravleniya, no. 4 (2020): 3–16. http://dx.doi.org/10.22250/isu.2020.66.3-16.

Abstract:
The article focuses on the effectiveness of the historical reconstruction of medieval Russian defensive structures by means of graphical applications. The fourth part of the paper presents the results of computer modeling of the Resurrection prison church and the voivodship courtyard, carried out within the framework of the ontological replication method proposed by the authors.
14

Sato, T., and Y. Kameya. "Parameter Learning of Logic Programs for Symbolic-Statistical Modeling." Journal of Artificial Intelligence Research 15 (December 1, 2001): 391–454. http://dx.doi.org/10.1613/jair.912.

Abstract:
We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.
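The forward-backward recursion that the graphical EM algorithm generalises can be written compactly. The sketch below implements it for a toy two-state HMM with arbitrary illustrative parameters; it is not the authors' support-graph implementation.

```python
# Forward-backward (the E-step of Baum-Welch) for a toy two-state HMM.
# All parameters are invented illustrative values.
import numpy as np

pi = np.array([0.6, 0.4])                    # initial state distribution
A = np.array([[0.7, 0.3],                    # transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],                    # emission probabilities
              [0.2, 0.8]])                   # rows: states, cols: symbols
obs = [0, 1, 1, 0]                           # observed symbol sequence

T, N = len(obs), len(pi)
alpha = np.zeros((T, N))                     # forward ("inside") probabilities
beta = np.zeros((T, N))                      # backward ("outside") probabilities

alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]

beta[T - 1] = 1.0
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

likelihood = alpha[-1].sum()                 # P(observations)
gamma = alpha * beta / likelihood            # posterior state probabilities
print(likelihood, gamma)
```
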
15

Feng, Jinchao, Joshua L. Lansford, Markos A. Katsoulakis, and Dionisios G. Vlachos. "Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences." Science Advances 6, no. 42 (October 2020): eabc3204. http://dx.doi.org/10.1126/sciadv.abc3204.

Abstract:
Data science has primarily focused on big data, but for many physics, chemistry, and engineering applications, data are often small, correlated and, thus, low dimensional, and sourced from both computations and experiments with various levels of noise. Typical statistics and machine learning methods do not work for these cases. Expert knowledge is essential, but a systematic framework for incorporating it into physics-based models under uncertainty is lacking. Here, we develop a mathematical and computational framework for probabilistic artificial intelligence (AI)–based predictive modeling combining data, expert knowledge, multiscale models, and information theory through uncertainty quantification and probabilistic graphical models (PGMs). We apply PGMs to chemistry specifically and develop predictive guarantees for PGMs generally. Our proposed framework, combining AI and uncertainty quantification, provides explainable results leading to correctable and, eventually, trustworthy models. The proposed framework is demonstrated on a microkinetic model of the oxygen reduction reaction.
16

Samuel, Kehinde G., Nourou-Dine M. Bouare, Oumar Maïga, and Mamadou K. Traoré. "A DEVS-based pivotal modeling formalism and its verification and validation framework." SIMULATION 96, no. 12 (September 26, 2020): 969–92. http://dx.doi.org/10.1177/0037549720958056.

Abstract:
System verification is an everlasting system engineering challenge. The increasing complexity of system simulation requires some level of expertise in handling the idioms of logic and discrete mathematics to correctly drive a full verification process. It is recognized that visual modeling can help to fill the knowledge gap between system experts and analysis experts. However, such an approach has been used on the one hand to specify the behavior of complex systems, and on the other hand to specify complex requirement properties, but not simultaneously. This paper proposes a framework that is unique in supporting a full system verification process based on the graphical modeling of both the system of interest and the requirements to be checked. Patterns are defined to transform the resulting models into formal specifications that a model checker can manipulate. A real-time crossing system is used to illustrate the proposed framework.
17

Jiao, Jiajia. "HEAP: A Holistic Error Assessment Framework for Multiple Approximations Using Probabilistic Graphical Models." Electronics 9, no. 2 (February 22, 2020): 373. http://dx.doi.org/10.3390/electronics9020373.

Abstract:
Approximate computing has been a good paradigm of energy-efficient accelerator design. Accurate and fast error estimation is critical for appropriate approximate technique selection so that power saving (or performance improvement) can be maximized with acceptable output quality in approximate accelerators. In this paper, we propose HEAP, a Holistic Error assessment framework to characterize multiple Approximate techniques with Probabilistic graphical models (PGM) in a joint way. HEAP maps the problem of evaluating errors induced by different approximate techniques into a PGM issue, including: (1) a heterogeneous Bayesian network is represented by converting an application's data flow graph, where the various approximate options are {precise, approximate} two-state X*-type nodes, while input or operating variables are {precise, approximate, unacceptable} three-state X-type nodes; these two kinds of nodes are used separately to configure the available approximate techniques and to track the corresponding error propagation for guaranteed configurability; (2) node learning is accomplished via an approximate library, which consists of probability mass functions of multiple approximate techniques, to quickly calculate each node's conditional probability table by mechanistic or empirical modeling; (3) exact inference provides the probability distribution of output quality at three levels: precise, approximate, and unacceptable. We present a complete case study of 3 × 3 Gaussian kernels with different approximate configurations to verify HEAP. The comprehensive results demonstrate that HEAP is helpful for exploring the design space of power-efficient approximate accelerators, with just 4.18% accuracy loss and a 3.34 × 10⁵ speedup on average over Monte Carlo simulation.
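To give a feel for the kind of exact inference HEAP performs over {precise, approximate, unacceptable} states, here is a deliberately tiny sketch with a single parent configuration node and one conditional probability table. All states and probabilities are invented; this is not the HEAP model itself.

```python
# Miniature exact inference by enumeration: a parent node's approximation
# choice propagates through a conditional probability table to the quality
# of a downstream value. Probabilities are invented for illustration.
STATES = ["precise", "approximate", "unacceptable"]

# P(config): whether an operator is run precisely or approximately.
p_config = {"precise": 0.3, "approximate": 0.7}

# P(output | config): conditional probability table of the output quality.
cpt_output = {
    "precise":     {"precise": 1.0, "approximate": 0.0, "unacceptable": 0.0},
    "approximate": {"precise": 0.2, "approximate": 0.7, "unacceptable": 0.1},
}

def output_distribution():
    """Marginal P(output) obtained by enumerating the parent's states."""
    marginal = {s: 0.0 for s in STATES}
    for config, p_c in p_config.items():
        for state, p_o in cpt_output[config].items():
            marginal[state] += p_c * p_o
    return marginal

print(output_distribution())
# {'precise': 0.44, 'approximate': 0.49, 'unacceptable': 0.07}
```
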
18

Bukunova, Olga, and Konstantin Shumilov. "Parametric design as information modeling tool in dynamic architecture." SHS Web of Conferences 44 (2018): 00019. http://dx.doi.org/10.1051/shsconf/20184400019.

Abstract:
Dynamic architecture is emerging as a distinct trend in the educational process. The article gives examples of the development of parametric programming languages built into graphics packages. The implementation is carried out within the framework of improving the educational process for training specialists in the construction profile, with the aim of implementing the digital economy programme and information modeling technology in construction. Visual programming allows programs to be created without directly writing code, by manipulating graphical objects. At present, parametric design can be regarded as a young and rapidly developing field. An essential condition for its further development is the availability of trained specialists with the appropriate qualifications.
19

Attoh-Okine, Nii O. "Probabilistic analysis of factors affecting highway construction costs: a belief network approach." Canadian Journal of Civil Engineering 29, no. 3 (June 1, 2002): 369–74. http://dx.doi.org/10.1139/l02-003.

Abstract:
This paper presents the application of belief networks to make inferences about highway construction costs. The methodology is evolving; it works very well when sufficient information and incomplete quantitative data are available. It is an attempt to identify the extent of influence of selected variables on highway construction costs. Belief networks are an expressive graphical language for representing uncertain knowledge about causal and associational relations among construction cost variables. This then provides a graphical representation of probabilistic construction cost models. The graph-theoretic framework of the belief network lends itself to modeling probabilistic dependence and the flow of information between different construction costs and related variables in overall highway construction cost determination. Key words: construction costs, belief networks, graphical models, uncertainty, and Bayesian network.
20

Smyth, Padhraic, David Heckerman, and Michael I. Jordan. "Probabilistic Independence Networks for Hidden Markov Probability Models." Neural Computation 9, no. 2 (February 1, 1997): 227–69. http://dx.doi.org/10.1162/neco.1997.9.2.227.

Abstract:
Graphical techniques for modeling the dependencies of random variables have been explored in a variety of different areas, including statistics, statistical physics, artificial intelligence, speech recognition, image processing, and genetics. Formalisms for manipulating these models have been developed relatively independently in these research communities. In this paper we explore hidden Markov models (HMMs) and related structures within the general framework of probabilistic independence networks (PINs). The paper presents a self-contained review of the basic principles of PINs. It is shown that the well-known forward-backward (F-B) and Viterbi algorithms for HMMs are special cases of more general inference algorithms for arbitrary PINs. Furthermore, the existence of inference and estimation algorithms for more general graphical models provides a set of analysis tools for HMM practitioners who wish to explore a richer class of HMM structures. Examples of relatively complex models to handle sensor fusion and coarticulation in speech recognition are introduced and treated within the graphical model framework to illustrate the advantages of the general approach.
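The Viterbi algorithm mentioned in the abstract, viewed as max-product inference on such a network, can be sketched for a toy two-state HMM as follows; the parameters are illustrative only.

```python
# Viterbi decoding for a toy two-state HMM: dynamic programming over the
# most probable state sequence, with back-pointers for the traceback.
import numpy as np

pi = np.array([0.5, 0.5])                    # initial state distribution
A = np.array([[0.8, 0.2], [0.3, 0.7]])       # state transitions
B = np.array([[0.6, 0.4], [0.1, 0.9]])       # emissions (rows: states)
obs = [0, 0, 1, 1]

T, N = len(obs), len(pi)
delta = np.zeros((T, N))                     # best path probabilities
psi = np.zeros((T, N), dtype=int)            # back-pointers

delta[0] = pi * B[:, obs[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] * A       # scores[i, j] = delta_i * A[i, j]
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) * B[:, obs[t]]

# Backtrack the most probable state sequence.
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
path.reverse()
print(path, delta[-1].max())
```
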
21

Hajjar, Dany, Simaan AbouRizk, and Jianfei Xu. "Construction site dewatering analysis using a special purpose simulation-based framework." Canadian Journal of Civil Engineering 25, no. 5 (October 1, 1998): 819–28. http://dx.doi.org/10.1139/l98-016.

Abstract:
Computer simulation has been successfully implemented in the area of construction management. However, this success has generally been limited to the academic arena, with the industry lagging far behind. This failure is partly due to the inherent complexity of general simulators and their inability to abstract the underlying modeling fundamentals. Special purpose simulation (SPS) is a framework developed to address the stated drawbacks by focusing on the needs of the construction practitioner. The idea is to build modeling environments tailored to the specific requirements of a given industry domain. This paper presents the development and implementation of a construction dewatering analysis framework based on the ideas of SPS. Object-oriented design and graphical user interfaces are used in the development of an abstraction layer between a steady-state hydrological model and the user. The integration capabilities of the framework are then presented by constructing an optimization module and linking it to the main modeling environment. A case study is provided to demonstrate the usefulness, intuitiveness, and validity of the framework. Key words: simulation, special purpose simulation, construction dewatering, optimization, computer applications.
22

Roubi, Sarra, Mohammed Erramdani, and Samir Mbarki. "A Model Driven Approach based on Interaction Flow Modeling Language to Generate Rich Internet Applications." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (December 1, 2016): 3073. http://dx.doi.org/10.11591/ijece.v6i6.10541.

Abstract:
Rich Internet Applications (RIAs) combine the simplicity of the hypertext paradigm with the flexibility of desktop interfaces. These applications were proposed as a solution to follow the rapid growth and evolution of graphical user interfaces. However, RIAs are complex applications whose design and implementation are time-consuming, and the available tools are specialized for manual design. In this paper, we present a model-driven approach to generate GUIs for Rich Internet Applications. The approach exploits the new IFML language recently adopted by the Object Management Group. We use frameworks and technologies known in Model-Driven Engineering, such as the Eclipse Modeling Framework (EMF) for meta-modeling, Query View Transformation (QVT) for model transformations, and Acceleo for code generation. The approach allows a RIA to be generated quickly and efficiently, focusing on the graphical aspect of the application.
23

Roubi, Sarra, Mohammed Erramdani, and Samir Mbarki. "A Model Driven Approach based on Interaction Flow Modeling Language to Generate Rich Internet Applications." International Journal of Electrical and Computer Engineering (IJECE) 6, no. 6 (December 1, 2016): 3073. http://dx.doi.org/10.11591/ijece.v6i6.pp3073-3079.

Abstract:
Rich Internet Applications (RIAs) combine the simplicity of the hypertext paradigm with the flexibility of desktop interfaces. These applications were proposed as a solution to follow the rapid growth and evolution of graphical user interfaces. However, RIAs are complex applications whose design and implementation are time-consuming, and the available tools are specialized for manual design. In this paper, we present a model-driven approach to generate GUIs for Rich Internet Applications. The approach exploits the new IFML language recently adopted by the Object Management Group. We use frameworks and technologies known in Model-Driven Engineering, such as the Eclipse Modeling Framework (EMF) for meta-modeling, Query View Transformation (QVT) for model transformations, and Acceleo for code generation. The approach allows a RIA to be generated quickly and efficiently, focusing on the graphical aspect of the application.
24

Frapolli, Fulvio, Amos Brocco, Apostolos Malatras, and Béat Hirsbrunner. "Decoupling Aspects in Board Game Modeling." International Journal of Gaming and Computer-Mediated Simulations 2, no. 2 (April 2010): 18–35. http://dx.doi.org/10.4018/jgcms.2010040102.

Abstract:
Existing research on computer enhanced board games is mainly focused on user interaction issues and look-and-feel; however, this overlooks the flexibility of traditional board games when it comes to game rule handling. In this respect, the authors argue that successful game designs need to exploit the advantages of the digital world as well as retain such flexibility. To achieve this goal, both the rules of the game and the graphical representation should be simple to define at the design stage, and easy to change before or even during a game session. For that reason, the authors propose a framework allowing the implementation of all aspects of a board game in a fully flexible and decoupled way. This paper describes the Flexiblerules approach, which combines both a model-driven and an aspect-oriented design of computer enhanced board games. The benefits of this approach are discussed and illustrated in the case of three different board games.
25

Aburatani, Sachiyo, Satoru Kuhara, Hiroyuki Toh, and Katsuhisa Horimoto. "Deduction of a gene regulatory relationship framework from gene expression data by the application of graphical Gaussian modeling." Signal Processing 83, no. 4 (April 2003): 777–88. http://dx.doi.org/10.1016/s0165-1684(02)00476-0.

26

Dixon, Mark B. "A Graphical Based Approach to the Conceptual Modeling, Validation and Generation of XML Schema Definitions." International Journal of Information Technology and Web Engineering 8, no. 1 (January 2013): 1–22. http://dx.doi.org/10.4018/jitwe.2013010101.

Abstract:
This paper discusses the research and development of a modeling tool that provides a graphical approach to the definition, validation, and generation of XML schemas. Although XML has had a ubiquitous web presence for a number of years, the strength of its underlying validation framework is often not leveraged to its maximum potential. Additionally, the design process followed when developing XML data formats is often rather ad hoc and driven by the technical requirements of the application rather than a conceptual-level analysis of the problem domain. This work contributes to research knowledge by proposing and validating a mechanism for allowing non-programmers to easily visualise and design the rules with which XML documents should comply. The use of an underlying meta-case platform provides a unique opportunity to allow highly customisable support and automatic code generation for any number of schema definition languages, thus providing a test-bed for future research activities.
27

Zhang, Qingyang. "Testing Differential Gene Networks under Nonparanormal Graphical Models with False Discovery Rate Control." Genes 11, no. 2 (February 5, 2020): 167. http://dx.doi.org/10.3390/genes11020167.

Abstract:
The nonparanormal graphical model has emerged as an important tool for modeling dependency structure between variables because it is flexible to non-Gaussian data while maintaining the good interpretability and computational convenience of Gaussian graphical models. In this paper, we consider the problem of detecting differential substructure between two nonparanormal graphical models with false discovery rate control. We construct a new statistic based on a truncated estimator of the unknown transformation functions, together with a bias-corrected sample covariance. Furthermore, we show that the new test statistic converges to the same distribution as its oracle counterpart does. Both synthetic data and real cancer genomic data are used to illustrate the promise of the new method. Our proposed testing framework is simple and scalable, facilitating its applications to large-scale data. The computational pipeline has been implemented in the R package DNetFinder, which is freely available through the Comprehensive R Archive Network.
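A generic sketch of the nonparanormal idea underlying the paper (not its differential-network test statistic or the DNetFinder implementation): each variable is pushed through a rank-based normal-score transform, after which a Gaussian graphical model, i.e. the precision matrix and its partial correlations, can be examined.

```python
# Nonparanormal (Gaussian copula) sketch: rank-based normal-score transform
# followed by inspection of the precision matrix, whose off-diagonal zeros
# encode conditional independencies. Data and sizes are illustrative.
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
x = rng.lognormal(size=(200, 4))             # skewed, non-Gaussian data

def normal_scores(column):
    ranks = rankdata(column)                 # ranks 1..n
    return norm.ppf(ranks / (len(column) + 1))

z = np.column_stack([normal_scores(x[:, j]) for j in range(x.shape[1])])
precision = np.linalg.inv(np.cov(z, rowvar=False))

# Partial correlations: rho_ij = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)
print(np.round(partial_corr, 2))
```
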
28

Liu, Yan Fang, Di Xie, and Yong Jiu Yuan. "A Research in Knowledge Management System for the Development of DSS." Advanced Materials Research 945-949 (June 2014): 3037–40. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.3037.

Abstract:
This paper investigates a knowledge management system for the development of decision support systems (DSS). The DSS is presented within the framework of a knowledge management system; it supports database modeling and data mining, and it can define, analyze, and track models flexibly. It supports key working-performance evaluation, and these performance models are shown to managers directly in the form of a graphical interface, making it easy for leaders and related persons to make specific decisions.
29

Yüksek, K., M. Alparslan, and E. Mendi. "Effective 3-D surface modeling for geographic information systems." Natural Hazards and Earth System Sciences 16, no. 1 (January 18, 2016): 123–33. http://dx.doi.org/10.5194/nhess-16-123-2016.

Abstract:
Abstract. In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.
30

Yüksek, K., M. Alparslan, and E. Mendi. "Effective 3-D surface modeling for geographic information systems." Natural Hazards and Earth System Sciences Discussions 1, no. 6 (November 5, 2013): 6093–131. http://dx.doi.org/10.5194/nhessd-1-6093-2013.

Abstract:
Abstract. In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM) and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.
31

Ayadi, Rim, Yasser Hachaichi, and Jamel Feki. "A Framework for Knowledge Models Transformation: A Step Towards Knowledge Integration and Warehousing." Journal of Information & Knowledge Management 18, no. 02 (June 2019): 1950025. http://dx.doi.org/10.1142/s0219649219500254.

Abstract:
An intelligent decision support system should be based on a knowledge warehouse (KW). A KW gathers knowledge that is initially expressed in different formalisms and is therefore heterogeneous. Consequently, the KW building process requires knowledge homogenisation. This paper deals with this main issue; it introduces a three-layer architecture for a KW and, more precisely, focuses on the first layer of the architecture, called Knowledge Acquisition and Transformation. This layer aims to transform heterogeneous knowledge models into the MOT (Modeling with Object Types) semi-formal language [Paquette, G (2002). Knowledge and Skills Modeling: A Graphical Language for Designing and Learning. Sainte-Foy: University of Quebec Press (in French).], which we have selected as a pivot knowledge model. For this transformation step, we first design four meta-models: one for MOT and one for each of the three explicit knowledge models, namely decision trees, association rules, and clustering. Second, we define 15 transformation rules that we formalise in ATL (Atlas Transformation Language). Finally, we exemplify the knowledge transformation in order to show its usefulness for the KW building process.
32

Pastorino, Martina, Alessandro Montaldo, Luca Fronda, Ihsen Hedhli, Gabriele Moser, Sebastiano B. Serpico, and Josiane Zerubia. "Multisensor and Multiresolution Remote Sensing Image Classification through a Causal Hierarchical Markov Framework and Decision Tree Ensembles." Remote Sensing 13, no. 5 (February 25, 2021): 849. http://dx.doi.org/10.3390/rs13050849.

Abstract:
In this paper, a hierarchical probabilistic graphical model is proposed to tackle joint classification of multiresolution and multisensor remote sensing images of the same scene. This problem is crucial in the study of satellite imagery and jointly involves multiresolution and multisensor image fusion. The proposed framework consists of a hierarchical Markov model with a quadtree structure to model information contained in different spatial scales, a planar Markov model to account for contextual spatial information at each resolution, and decision tree ensembles for pixelwise modeling. This probabilistic graphical model and its topology are especially fit for application to very high resolution (VHR) image data. The theoretical properties of the proposed model are analyzed: the causality of the whole framework is mathematically proved, granting the use of time-efficient inference algorithms such as the marginal posterior mode criterion, which is non-iterative when applied to quadtree structures. This is mostly advantageous for classification methods linked to multiresolution tasks formulated on hierarchical Markov models. Within the proposed framework, two multimodal classification algorithms are developed, that incorporate Markov mesh and spatial Markov chain concepts. The results obtained in the experimental validation conducted with two datasets containing VHR multispectral, panchromatic, and radar satellite images, verify the effectiveness of the proposed framework. The proposed approach is also compared to previous methods that are based on alternate strategies for multimodal fusion.
33

Payandeh, Shahram, John Dill, and Zhu Liang Cai. "On interacting with physics-based models of graphical objects." Robotica 22, no. 2 (March 2004): 223–30. http://dx.doi.org/10.1017/s0263574703005617.

Abstract:
Enhancing graphical objects whose behaviors are governed by the laws of physics is an important requirement in modeling virtual physical environments. In such environments, the user can interact with graphical objects and either feel the simulated reaction forces through a physical computer interface, such as a force-feedback mouse, or observe the objects behaving in a natural way under such interactions. One of the key requirements for such interaction is determination of the type of contact between the user-controlled object and the objects representing the environment. This paper presents an approach for reconstructing the contact configuration between two objects. This is accomplished through the use of the time history of the motion of the approaching objects for inverse trajectory mapping of the polygonal representation. In the case of deformable objects, and through the use of a mass-spring-damper system, this paper also presents a special global filter that can map the local deformation of an object to the adjacent vertices of the polygonal mesh. In addition to offering a fast computational framework, the proposed method also offers a more realistic representation of the deformation. The results of this paper are shown through detailed examples and comparative analysis using different computational platforms.
34

Li, Hua Peng, Dong Wang, Jian Hui Xi, and Yi Bo Li. "Review on Modeling Control System of Earth Pressure Balance Shield Machine." Advanced Materials Research 181-182 (January 2011): 389–94. http://dx.doi.org/10.4028/www.scientific.net/amr.181-182.389.

Abstract:
Analysis and design cannot proceed until a proper model of the hydraulic control system of an earth pressure balance shield machine is built. There are two kinds of modeling methods in common use. (1) Dynamic mathematical modeling. Two model forms, the transfer function and the state-space representation, are mainly used. For a linear system, these two methods are supported by a fairly complete theoretical framework. However, if the system operates in a complicated working environment or is non-linear, the modeling process requires a great deal of mathematical derivation, which limits the application of these methods. (2) Graphical modeling. It is more direct for the complex hydraulic control system of an earth pressure balance shield machine. First, the main components are modeled separately. Then they are connected according to the flow direction of the signals. Specialized software, such as AMESim, can be used to simplify the simulation process.
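The two mathematical model forms mentioned in point (1) can be illustrated for a generic second-order actuator with SciPy; this is only a sketch and not a model of an actual earth pressure balance shield machine.

```python
# A generic second-order system expressed both as a transfer function
# G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2) and as an equivalent
# state-space realisation. Parameter values are illustrative.
import numpy as np
from scipy import signal

wn, zeta = 10.0, 0.6                             # natural frequency, damping
G = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])

ss = G.to_ss()                                   # equivalent state-space model
print("A =", ss.A, "\nB =", ss.B, "\nC =", ss.C, "\nD =", ss.D)

t, y = signal.step(G, T=np.linspace(0, 2, 200))  # step response of either form
print("final value ≈", y[-1])                    # approaches 1.0 (unity DC gain)
```
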
35

PAUL, SANTANU, and ATUL PRAKASH. "SUPPORTING QUERIES ON SOURCE CODE: A FORMAL FRAMEWORK." International Journal of Software Engineering and Knowledge Engineering 04, no. 03 (September 1994): 325–48. http://dx.doi.org/10.1142/s0218194094000167.

Abstract:
Querying source code interactively for information is a critical task in reverse engineering of software. However, current source code query systems succeed in handling only small subsets of the wide range of queries possible on code, trading generality and expressive power for ease of implementation and practicality. We attribute this to the absence of clean formalisms for modeling and querying source code. In this paper, we present an algebraic framework (Source Code Algebra or SCA) that forms the basis of our source code query system. The benefits of using SCA include the integration of structural and flow information into a single source code data model, the ability to process high-level source code queries (command-line, graphical, relational, or pattern-based) by expressing them as equivalent SCA expressions, the use of SCA itself as a powerful low-level source code query language, and opportunities for query optimization. We present the SCA’s data model and operators and show that a variety of source code queries can be easily expressed using them. An algebraic model of source code addresses the issues of conceptual integrity, expressive power, and performance of a source code query system within a unified framework.
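The flavour of such source-code queries, e.g. "which functions call open?", can be illustrated directly with Python's standard ast module; this is only an analogy, not the paper's Source Code Algebra.

```python
# A small source-code query -- "which functions call `open`?" -- answered
# by walking the abstract syntax tree of a code fragment.
import ast

source = """
def load(path):
    with open(path) as fh:
        return fh.read()

def shout(text):
    return text.upper()
"""

tree = ast.parse(source)

def callers_of(tree, target):
    """Return the names of functions whose bodies call `target`."""
    hits = []
    for func in ast.walk(tree):
        if isinstance(func, ast.FunctionDef):
            for node in ast.walk(func):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id == target):
                    hits.append(func.name)
    return hits

print(callers_of(tree, "open"))   # ['load']
```
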
36

Ahuja, L. R., J. C. Ascough, and O. David. "Developing natural resource models using the object modeling system: feasibility and challenges." Advances in Geosciences 4 (August 9, 2005): 29–36. http://dx.doi.org/10.5194/adgeo-4-29-2005.

Abstract:
Abstract. Current challenges in natural resource management have created demand for integrated, flexible, and easily parameterized hydrologic models. Most of these monolithic models are not modular, thus modifications (e.g., changes in process representation) require considerable time, effort, and expense. In this paper, the feasibility and challenges of using the Object Modeling System (OMS) for natural resource model development will be explored. The OMS is a Java-based modeling framework that facilitates simulation model development, evaluation, and deployment. In general, the OMS consists of a library of science, control, and database modules and a means to assemble the selected modules into an application-specific modeling package. The framework is supported by data dictionary, data retrieval, GIS, graphical visualization, and statistical analysis utility modules. Specific features of the OMS that will be discussed include: 1) how to reduce duplication of effort in natural resource modeling; 2) how to make natural resource models easier to build, apply, and evaluate; 3) how to facilitate long-term maintainability of existing and new natural resource models; and 4) how to improve the quality of natural resource model code and ensure credibility of model implementations. Examples of integrating a simple water balance model and a large monolithic model into the OMS will be presented.
37

Harris, Marcus, and Martin Zwick. "Graphical Models in Reconstructability Analysis and Bayesian Networks." Entropy 23, no. 8 (July 30, 2021): 986. http://dx.doi.org/10.3390/e23080986.

Abstract:
Reconstructability Analysis (RA) and Bayesian Networks (BN) are both probabilistic graphical modeling methodologies used in machine learning and artificial intelligence. There are RA models that are statistically equivalent to BN models and there are also models unique to RA and models unique to BN. The primary goal of this paper is to unify these two methodologies via a lattice of structures that offers an expanded set of models to represent complex systems more accurately or more simply. The conceptualization of this lattice also offers a framework for additional innovations beyond what is presented here. Specifically, this paper integrates RA and BN by developing and visualizing: (1) a BN neutral system lattice of general and specific graphs, (2) a joint RA-BN neutral system lattice of general and specific graphs, (3) an augmented RA directed system lattice of prediction graphs, and (4) a BN directed system lattice of prediction graphs. Additionally, it (5) extends RA notation to encompass BN graphs and (6) offers an algorithm to search the joint RA-BN neutral system lattice to find the best representation of system structure from underlying system variables. All lattices shown in this paper are for four variables, but the theory and methodology presented in this paper are general and apply to any number of variables. These methodological innovations are contributions to machine learning and artificial intelligence and more generally to complex systems analysis. The paper also reviews some relevant prior work of others so that the innovations offered here can be understood in a self-contained way within the context of this paper.
38

Saragih, Hoga, Gusvita Gusvita, Bobby Reza, Didik Setiyadi, and Rufman Akbar. "PENGEMBANGAN SISTEM INFORMASI DISTRIBUSI INFORMASI SEKOLAH MELALUI SMS GATEWAY DENGAN ZACHMAN FRAMEWORK." Jurnal Sistem Informasi 8, no. 1 (October 4, 2013): 32. http://dx.doi.org/10.21609/jsi.v8i1.320.

Abstract:
The education sector takes advantage of developments in information and communication technology to further improve the quality of education. One way of doing so is by developing an information system for distributing school information through an SMS gateway. Distributing information through an SMS gateway is expected to optimise the information distribution system currently in use at SDS Gembala Baik I and to make it easier for parents to access school information. This study produced a design for a school information distribution system based on an SMS gateway, with the design process following the Zachman Framework approach. The design starts with identifying the system requirements, which are modeled using the Unified Modeling Language (UML), followed by the design of the graphical user interface (GUI) and the database design. The resulting design of the school information distribution system has three main menu tabs: SMS Server, General Announcements, and Class Announcements. The result of this design can be used as a structured reference for coding the program and for testing the development of the school information distribution system through the SMS gateway at SDS Gembala Baik I Pontianak.
39

Rodrigo, Enrique G., Juan C. Alfaro, Juan A. Aledo, and José A. Gámez. "Mixture-Based Probabilistic Graphical Models for the Label Ranking Problem." Entropy 23, no. 4 (March 31, 2021): 420. http://dx.doi.org/10.3390/e23040420.

Abstract:
The goal of the Label Ranking (LR) problem is to learn preference models that predict the preferred ranking of class labels for a given unlabeled instance. Different well-known machine learning algorithms have been adapted to deal with the LR problem. In particular, fine-tuned instance-based algorithms (e.g., k-nearest neighbors) and model-based algorithms (e.g., decision trees) have performed remarkably well in tackling the LR problem. Probabilistic Graphical Models (PGMs, e.g., Bayesian networks) have not been considered to deal with this problem because of the difficulty of modeling permutations in that framework. In this paper, we propose a Hidden Naive Bayes classifier (HNB) to cope with the LR problem. By introducing a hidden variable, we can design a hybrid Bayesian network in which several types of distributions can be combined: multinomial for discrete variables, Gaussian for numerical variables, and Mallows for permutations. We consider two kinds of probabilistic models: one based on a Naive Bayes graphical structure (where only univariate probability distributions are estimated for each state of the hidden variable) and another where we allow interactions among the predictive attributes (using a multivariate Gaussian distribution for the parameter estimation). The experimental evaluation shows that our proposals are competitive with the state-of-the-art algorithms in both accuracy and CPU time requirements.
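The Mallows component of the hybrid network can be sketched directly from its definition, P(π) ∝ exp(−θ·d(π, π₀)) with d the Kendall tau distance; the central ranking and dispersion parameter below are illustrative.

```python
# Mallows distribution over rankings: probability decays exponentially with
# the Kendall tau distance to a central ranking pi0. Rankings and theta are
# illustrative values.
from itertools import permutations, combinations
from math import exp

def kendall_tau(p, q):
    """Number of item pairs ordered differently by rankings p and q."""
    pos_p = {item: i for i, item in enumerate(p)}
    pos_q = {item: i for i, item in enumerate(q)}
    return sum(
        (pos_p[a] - pos_p[b]) * (pos_q[a] - pos_q[b]) < 0
        for a, b in combinations(p, 2)
    )

def mallows_pmf(pi0, theta):
    """Exact Mallows probabilities by enumerating all rankings of pi0."""
    weights = {p: exp(-theta * kendall_tau(p, pi0))
               for p in permutations(pi0)}
    z = sum(weights.values())
    return {p: w / z for p, w in weights.items()}

pmf = mallows_pmf(("a", "b", "c"), theta=1.5)
for ranking, prob in sorted(pmf.items(), key=lambda kv: -kv[1]):
    print(ranking, round(prob, 3))
```
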
APA, Harvard, Vancouver, ISO, and other styles
40

Venugopala, Katharigatta N., Sandeep Chandrashekharappa, Christophe Tratrat, Pran Kishore Deb, Rahul D. Nagdeve, Susanta K. Nayak, Mohamed A. Morsy, et al. "Crystallography, Molecular Modeling, and COX-2 Inhibition Studies on Indolizine Derivatives." Molecules 26, no. 12 (June 10, 2021): 3550. http://dx.doi.org/10.3390/molecules26123550.

Full text
Abstract:
The cyclooxygenase-2 (COX-2) enzyme is an important target for drug discovery and development of novel anti-inflammatory agents. Selective COX-2 inhibitors have the advantage of reduced side-effects, which result from the COX-1 inhibition usually observed with nonselective COX inhibitors. In this study, the design and synthesis of a new series of 7-methoxy indolizines as bioisostere indomethacin analogues (5a–e) were carried out, and the compounds were evaluated for COX-2 enzyme inhibition. All the compounds showed activity in micromolar ranges, and the compound diethyl 3-(4-cyanobenzoyl)-7-methoxyindolizine-1,2-dicarboxylate (5a) emerged as a promising COX-2 inhibitor with an IC50 of 5.84 µM, as compared to indomethacin (IC50 = 6.84 µM). The molecular modeling study of indolizines indicated that hydrophobic interactions made the major contribution to COX-2 inhibition. The title compound diethyl 3-(4-bromobenzoyl)-7-methoxyindolizine-1,2-dicarboxylate (5c) was subjected to single-crystal X-ray studies, Hirshfeld surface analysis, and energy framework calculations. The X-ray diffraction analysis showed that the molecule (5c) crystallizes in the monoclinic crystal system, space group P21/n, with a = 12.0497(6) Å, b = 17.8324(10) Å, c = 19.6052(11) Å, α = 90.000°, β = 100.372(1)°, γ = 90.000°, and V = 4143.8(4) Å³. In addition, with the help of the CrystalExplorer software program and the B3LYP/6-31G(d,p) basis set, the interaction energies were calculated and represented graphically as an energy framework in terms of coulombic, dispersion, and total energy.
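As a quick arithmetic check of the reported cell parameters (not part of the original study), the volume of a monoclinic cell with α = γ = 90° is V = a·b·c·sin β, which reproduces the quoted value:

```python
import math

# Monoclinic unit cell: V = a * b * c * sin(beta), with alpha = gamma = 90 deg
a, b, c = 12.0497, 17.8324, 19.6052        # in Angstrom
beta = math.radians(100.372)

volume = a * b * c * math.sin(beta)
print(f"V = {volume:.1f} cubic Angstrom")  # ~4143.8, matching the reported value
```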
APA, Harvard, Vancouver, ISO, and other styles
41

Kryštof, Jan. "Towards an MDA-based approach for development of a structural scope of the presentation layer." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 57, no. 6 (2009): 123–32. http://dx.doi.org/10.11118/actaun200957060123.

Full text
Abstract:
This paper presents an approach for developing the presentation layer of software applications. The approach is based on the concept of the Model Driven Architecture (MDA) and uses a UML-based model of graphical user interfaces, which is created according to rules defined in a meta-model. The meta-model is not oriented to a particular platform, so all designed models can be created independently of the programming language and widget library. This platform-independent UML-based model can be transformed into source code for an arbitrary programming language and can be used in a software development process. The meta-model of our approach is an extension of common UML and provides support for modeling the presentation layer. The meta-model thus fills a gap that exists in modeling three-layered software applications, beside the application and the data layer. By providing this possibility for modeling the presentation layer, we can crucially impact current approaches to the development of three-layered software applications. All model artifacts contain essential information about the graphical user interface and can be used for code generation. Since UML is widely used by analysts, they can produce models which de facto represent source code and thus reduce the workload for programmers, who would otherwise create source code using traditional approaches. Our model-based approach also strictly separates the appearance and the structure of graphical user interfaces; the two are developed separately, which brings higher modularity to the software. In this paper, we demonstrate our development approach by focusing on the structure of graphical user interfaces. Our approach is influenced by the concept of Model Driven Architecture, and we deal with all related issues, such as the meta-model, user models, model transformations, and source-code generation. To evaluate our approach, we designed and developed a software framework, integrated it into a generic modeling tool, and applied the approach's principles during the development of a module of an information system.
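To illustrate the platform-independent-to-platform-specific chain the abstract describes, here is a deliberately simplified, hypothetical sketch (plain Python data instead of the author's UML meta-model and tooling): one widget model is rendered by two interchangeable code generators.

```python
# Hypothetical sketch: a platform-independent GUI model (plain data) is turned
# into platform-specific source text by pluggable generators, mirroring the
# model-to-code transformation idea described in the abstract.
form_model = {
    "name": "LoginForm",
    "widgets": [
        {"kind": "text_field", "id": "username", "label": "User name"},
        {"kind": "password_field", "id": "password", "label": "Password"},
        {"kind": "button", "id": "submit", "label": "Log in"},
    ],
}

def generate_html(model):
    lines = [f'<form id="{model["name"]}">']
    for w in model["widgets"]:
        if w["kind"] == "button":
            lines.append(f'  <button id="{w["id"]}">{w["label"]}</button>')
        else:
            input_type = "password" if w["kind"] == "password_field" else "text"
            lines.append(f'  <label>{w["label"]}'
                         f' <input type="{input_type}" id="{w["id"]}"></label>')
    lines.append("</form>")
    return "\n".join(lines)

def generate_tkinter(model):
    lines = [f'# auto-generated from model "{model["name"]}"',
             "import tkinter as tk", "root = tk.Tk()"]
    for w in model["widgets"]:
        if w["kind"] == "button":
            lines.append(f'tk.Button(root, text="{w["label"]}").pack()')
        else:
            lines.append(f'tk.Label(root, text="{w["label"]}").pack()')
            show = ', show="*"' if w["kind"] == "password_field" else ""
            lines.append(f'tk.Entry(root{show}).pack()')
    lines.append("root.mainloop()")
    return "\n".join(lines)

print(generate_html(form_model))
print(generate_tkinter(form_model))
```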
APA, Harvard, Vancouver, ISO, and other styles
42

Leitch, Michael, Yishak Yusuf, and Yongsheng Ma. "Interdisciplinary semantic model for managing the design of a steam-assisted gravity drainage tooling system." Journal of Computational Design and Engineering 5, no. 1 (November 13, 2017): 68–79. http://dx.doi.org/10.1016/j.jcde.2017.11.004.

Full text
Abstract:
Complex engineering systems often require extensive coordination between different expert areas in order to avoid costly design iterations and rework. Cyber-physical system (CPS) engineering methods could provide valuable insights to help model these interactions and optimize the design of such systems. In this work, steam-assisted gravity drainage (SAGD), a complex oil extraction process that requires deep understanding of several physical-chemical phenomena, is examined, and the complexities and interdependencies of the system are explored. Based on an established unified feature modeling scheme, a software modeling framework is proposed to manage the design process of the production tools used for SAGD oil extraction. Applying CPS methods to unify complex phenomenon and engineering models, the proposed CPS model combines effective simulation with embedded knowledge of completion tooling design in order to optimize reservoir performance. The system design is expressed using graphical diagrams of the unified modelling language (UML) convention. To demonstrate the capability of this system, a distributed research group is described and its activities are coordinated using the described CPS model. Highlights: a modelling framework is proposed to manage interaction between engineering systems; the phenomenon feature concept is introduced to facilitate knowledge representation; the model framework is extensible and facilitates interoperability; and the steam-assisted gravity drainage oil extraction process is modelled.
APA, Harvard, Vancouver, ISO, and other styles
43

Pugh, Zachary H., and Douglas J. Gillan. "Propositional Constraint Graphs: An Intuitive, Domain-General Tool for Diagramming Knowledge, Assumptions, and Uncertainties." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 64, no. 1 (December 2020): 254–58. http://dx.doi.org/10.1177/1071181320641060.

Full text
Abstract:
A diagramming method called Propositional Constraint (PC) graphing was developed as an aid for tasks involving argumentation, planning, and design. Motivated by several AI models of defeasible (or non-monotonic) reasoning, PC graphs were designed to represent knowledge according to an analogical framework in which constraints (e.g., evidence, goals, system constraints) may elicit or deny possibilities (e.g., explanations, decisions, behaviors). In cases of underspecification, an absence of constraints yields uncertainty and competition among plausible outcomes. In cases of overspecification, no plausible outcome is yielded until one of the constraints is amended or forfeited. This framework shares features with theoretical models of reasoning and argumentation, but despite its intuitiveness and applicability, we know of no modeling language or graphical aid that explicitly depicts this defeasible constraint structure. We describe the syntax and semantics for PC graphing and then illustrate potential uses for it.
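The elicit/deny semantics sketched in this abstract can be mimicked with a tiny, hypothetical example (not the authors' formal syntax; the constraint and possibility names are invented): a possibility is plausible when at least one constraint elicits it and no constraint denies it, so no survivors signals overspecification and several survivors signal underspecification.

```python
def plausible(possibilities, elicits, denies):
    """Return the possibilities elicited by some constraint and denied by none.

    elicits / denies: dicts mapping a constraint name to a set of possibilities.
    """
    elicited = set().union(*elicits.values()) if elicits else set()
    denied = set().union(*denies.values()) if denies else set()
    return {p for p in possibilities if p in elicited and p not in denied}

possibilities = {"repair_pump", "replace_pump", "ignore"}
elicits = {"low_pressure_reading": {"repair_pump", "replace_pump"}}
denies = {"budget_freeze": {"replace_pump"}}

print(plausible(possibilities, elicits, denies))
# {'repair_pump'}: the constraints leave exactly one plausible outcome
```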
APA, Harvard, Vancouver, ISO, and other styles
44

Leanza, Antonio, Giulio Reina, and José-Luis Blanco-Claraco. "A Factor-Graph-Based Approach to Vehicle Sideslip Angle Estimation." Sensors 21, no. 16 (August 10, 2021): 5409. http://dx.doi.org/10.3390/s21165409.

Full text
Abstract:
Sideslip angle is an important variable for understanding and monitoring vehicle dynamics, but there is currently no inexpensive method for its direct measurement. Therefore, it is typically estimated from proprioceptive sensors onboard using filtering methods from the family of the Kalman filter. As a novel alternative, this work proposes modeling the problem directly as a graphical model (factor graph), which can then be optimized using a variety of methods, such as whole-dataset batch optimization for offline processing or fixed-lag smoothing for on-line operation. Experimental results on real vehicle datasets validate the proposal, demonstrating a good agreement between estimated and actual sideslip angle, showing similar performance to state-of-the-art methods but with a greater potential for future extensions due to the more flexible mathematical framework. An open-source implementation of the proposed framework has been made available online.
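As a generic, hedged illustration of the factor-graph formulation (not the paper's vehicle model, sensors, or datasets), the sketch below treats whole-dataset batch estimation as nonlinear least squares over unary measurement factors and binary smoothness factors on a toy sideslip-angle trajectory.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
N = 50
true_beta = 0.05 * np.sin(np.linspace(0, 2 * np.pi, N))  # toy sideslip profile [rad]
meas = true_beta + rng.normal(0.0, 0.01, N)              # noisy unary measurements
sigma_meas, sigma_proc = 0.01, 0.005

def residuals(beta):
    # One whitened residual per factor: unary measurement factors and
    # binary random-walk (smoothness) factors between consecutive states.
    r_meas = (beta - meas) / sigma_meas
    r_proc = np.diff(beta) / sigma_proc
    return np.concatenate([r_meas, r_proc])

solution = least_squares(residuals, x0=np.zeros(N))      # whole-dataset batch solve
print("RMSE:", np.sqrt(np.mean((solution.x - true_beta) ** 2)))
```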
APA, Harvard, Vancouver, ISO, and other styles
45

Sbaï, Zohra, and Rawand Guerfel. "CTL Model Checking of Web Services Composition based on Open Workflow Nets Modeling." International Journal of Service Science, Management, Engineering, and Technology 7, no. 1 (January 2016): 27–42. http://dx.doi.org/10.4018/ijssmet.2016010102.

Full text
Abstract:
Web services composition (WSC) has enormous potential for organizations in the B2B area. In fact, different services collaborate through the exchange of messages to implement complex business processes. BPEL is one of the most widely used languages to develop such cooperation. However, its use has proved complex and can require expertise in XML syntax; even its graphical representation is not straightforward to handle. This is why the authors propose to model Web services using oWF-nets, a subclass of Petri nets, and then to translate them to BPEL. However, a WSC adds value only if the involved services are compatible. In this context, on top of the proposed translation, the researchers develop a verification layer for WSC compatibility. Hence, they propose a framework named D&A4WSC which allows modeling the WSC with oWF-nets, checking their compatibility with the model checker NuSMV, and translating them, if compatible, into BPEL processes using the oWFN2BPEL compiler. Furthermore, D&A4WSC permits formal analysis of a BPEL process.
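To illustrate the kind of compatibility property at stake (a sketch only; D&A4WSC itself delegates the check to the NuSMV model checker rather than to the toy search below), the following snippet builds the reachability graph of a small composed net and verifies weak termination: the final marking remains reachable from every reachable marking.

```python
from collections import deque

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return frozenset((p, n) for p, n in m.items() if n > 0)

def reachability_graph(net, start):
    """Breadth-first construction of marking -> successor markings."""
    start = frozenset((p, n) for p, n in start.items() if n > 0)
    seen, succ, queue = {start}, {}, deque([start])
    while queue:
        m = queue.popleft()
        md, succ[m] = dict(m), []
        for pre, post in net.values():
            if enabled(md, pre):
                m2 = fire(md, pre, post)
                succ[m].append(m2)
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return succ

def can_reach(succ, source, target):
    frontier, visited = deque([source]), {source}
    while frontier:
        x = frontier.popleft()
        if x == target:
            return True
        for y in succ.get(x, []):
            if y not in visited:
                visited.add(y)
                frontier.append(y)
    return False

def weakly_terminating(net, start, final):
    """True iff the final marking stays reachable from every reachable marking."""
    succ = reachability_graph(net, start)
    final_m = frozenset((p, n) for p, n in final.items() if n > 0)
    return all(can_reach(succ, m, final_m) for m in succ)

# Two toy services composed over an interface place 'msg':
net = {
    "send":    ({"p0": 1}, {"p1": 1, "msg": 1}),
    "receive": ({"q0": 1, "msg": 1}, {"q1": 1}),
}
print(weakly_terminating(net, start={"p0": 1, "q0": 1}, final={"p1": 1, "q1": 1}))  # True
```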
APA, Harvard, Vancouver, ISO, and other styles
46

Spiegler, Ran. "Behavioral Implications of Causal Misperceptions." Annual Review of Economics 12, no. 1 (August 2, 2020): 81–106. http://dx.doi.org/10.1146/annurev-economics-072219-111921.

Full text
Abstract:
This review presents an approach to modeling decision making under misspecified subjective models. The approach is based on the idea that decision makers impose subjective causal interpretations on observed correlations, and it borrows basic concepts and tools from the statistics and artificial intelligence literatures on Bayesian networks. While these background literatures used Bayesian networks as a platform for normative and computational analysis of probabilistic and causal inference, in the framework proposed here graphical models represent causal misperceptions and help analyze their behavioral implications. I show how this approach sheds light on earlier equilibrium models with nonrational expectations and demonstrate its scope of economic applications.
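The central object in this literature is the factorization of the true joint distribution along the decision maker's subjective DAG, p_R(x) = Π_i p(x_i | x_pa(i)). The sketch below (an illustrative example, not code from the review) computes that distorted belief for a hypothetical misperceived graph.

```python
import numpy as np

def bn_factorization(joint, parents):
    """Belief obtained by factorizing `joint` along a subjective DAG.

    joint: array whose k-th axis indexes variable k.
    parents: dict variable index -> tuple of parent indices (must be acyclic);
             variables missing from the dict are treated as parentless.
    """
    n = joint.ndim
    belief = np.ones_like(joint)
    for i in range(n):
        pa = set(parents.get(i, ()))
        keep = pa | {i}
        num = joint.sum(axis=tuple(k for k in range(n) if k not in keep), keepdims=True)
        den = joint.sum(axis=tuple(k for k in range(n) if k not in pa), keepdims=True)
        belief = belief * num / den      # multiply in p(x_i | x_parents(i))
    return belief

# A generic true joint over (X0, X1, X2); the subjective DAG perceives only
# X0 -> X2 and (by default here) treats X1 as an isolated node, so the
# resulting belief is a proper distribution that generally differs from the truth.
rng = np.random.default_rng(2)
joint = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
belief = bn_factorization(joint, {0: (), 2: (0,)})
print(np.isclose(belief.sum(), 1.0), np.allclose(belief, joint))  # True, (almost surely) False
```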
APA, Harvard, Vancouver, ISO, and other styles
47

Schwanghart, W., and D. Scherler. "Short Communication: TopoToolbox 2 – MATLAB-based software for topographic analysis and modeling in Earth surface sciences." Earth Surface Dynamics 2, no. 1 (January 15, 2014): 1–7. http://dx.doi.org/10.5194/esurf-2-1-2014.

Full text
Abstract:
TopoToolbox is a MATLAB program for the analysis of digital elevation models (DEMs). With the release of version 2, the software adopts an object-oriented programming (OOP) approach to work with gridded DEMs and derived data such as flow directions and stream networks. The introduction of a novel technique to store flow directions as topologically ordered vectors of indices enables calculation of flow-related attributes such as flow accumulation ∼20 times faster than conventional algorithms while at the same time reducing memory overhead to 33% of that required by the previous version. Graphical user interfaces (GUIs) enable visual exploration and interaction with DEMs and derivatives and provide access to tools targeted at fluvial and tectonic geomorphologists. With its new release, TopoToolbox has become a more memory-efficient and faster tool for basic and advanced digital terrain analysis that can be used as a framework for building hydrological and geomorphological models in MATLAB.
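The flow-direction representation described above lends itself to a very short sketch (here in Python rather than the toolbox's MATLAB, and only an illustration of the idea): once the giver-receiver edge list is topologically ordered from upstream to downstream, flow accumulation is a single pass over the edges.

```python
import numpy as np

def flow_accumulation(givers, receivers, n_cells, weights=None):
    """Single-pass flow accumulation over a topologically ordered edge list.

    givers[i] drains into receivers[i]; edges must be ordered so that a cell
    appears as a giver only after all of its own givers have been processed.
    """
    acc = np.ones(n_cells) if weights is None else np.asarray(weights, float).copy()
    for g, r in zip(givers, receivers):
        acc[r] += acc[g]
    return acc

# Tiny example: cells 0 and 1 drain into 2, which drains into 3.
print(flow_accumulation(givers=[0, 1, 2], receivers=[2, 2, 3], n_cells=4))
# [1. 1. 3. 4.]
```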
APA, Harvard, Vancouver, ISO, and other styles
48

LEE, BURTON H. "Using FMEA models and ontologies to build diagnostic models." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 15, no. 4 (September 2001): 281–93. http://dx.doi.org/10.1017/s089006040115403x.

Full text
Abstract:
Product design and diagnosis are, today, worlds apart. Despite strong areas of overlap at the ontological level, traditional design process theory and practice does not recognize diagnosis as a part of the modeling process chain; neither do diagnosis knowledge engineering processes reference design modeling tasks as a source of knowledge acquisition. This paper presents the DAEDALUS knowledge engineering framework as a methodology for integrating design and diagnosis tasks, models, and modeling environments around a common Domain Ontology and Product Models Library. The approach organizes domain knowledge around the execution of a set of tasks in an enterprise product engineering task workflow. Each task employs a Task Application which uses a customized subset of the Domain Ontology—the Task Ontology—to construct a graphical Product Model. The Ontology is used to populate the models with relevant concepts (variables) and relations (relationships), thus serving as a concept dictionary-style mechanism for knowledge sharing and reuse across the different Task Applications. For inferencing, each task employs a local Problem-solving Method (PSM), and a Model-PSM Mapping, which operate on the local Product Model to produce reasoning outcomes. The use of a common Domain Ontology across tasks and models facilitates semantic consistency of variables and relations in constructing Bayesian networks for design and diagnosis. The approach is motivated by inefficiencies encountered in cleanly exchanging and integrating design FMEA and diagnosis models. Demonstration software under development is intended to illustrate how the DAEDALUS framework can be applied to knowledge sharing and exchange between Bayesian network-based design FMEA and diagnosis modeling tasks. Anticipated limitations of the DAEDALUS methodology are discussed, as is its relationship to Tomiyama's Knowledge Intensive Engineering Framework (KIEF). DAEDALUS is grounded in formal knowledge engineering principles and methodologies established during the past decade. Finally, the framework is presented as one possible approach for improved integration of generalized design and diagnostic modeling and knowledge exchange.
APA, Harvard, Vancouver, ISO, and other styles
49

Kotiang, Stephen, and Ali Eslami. "A probabilistic graphical model for system-wide analysis of gene regulatory networks." Bioinformatics 36, no. 10 (February 25, 2020): 3192–99. http://dx.doi.org/10.1093/bioinformatics/btaa122.

Full text
Abstract:
Motivation: The inference of gene regulatory networks (GRNs) from DNA microarray measurements forms a core element of systems biology-based phenotyping. In the recent past, numerous computational methodologies have been formalized to enable the deduction of reliable and testable predictions in today's biology. However, little focus has been aimed at quantifying how well existing state-of-the-art GRNs correspond to measured gene-expression profiles. Results: Here, we present a computational framework that combines the formulation of probabilistic graphical modeling, standard statistical estimation, and integration of high-throughput biological data to explore the global behavior of biological systems and the global consistency between experimentally verified GRNs and corresponding large microarray compendium data. The model is represented as a probabilistic bipartite graph, which can handle highly complex network systems and accommodates partial measurements of diverse biological entities, e.g. messenger RNAs, proteins, metabolites, and various stimulators participating in regulatory networks. This method was tested on microarray expression data from the M3D database, corresponding to sub-networks of one of the best-researched model organisms, Escherichia coli. Results show a surprisingly high correlation between the observed states and the inferred system's behavior under various experimental conditions. Availability and implementation: Processed data and software implementation using Matlab are freely available at https://github.com/kotiang54/PgmGRNs. Full dataset available from the M3D database.
APA, Harvard, Vancouver, ISO, and other styles
50

Wada, Hiroshi, Junichi Suzuki, and Katsuya Oba. "Leveraging Early Aspects in End-to-End Model Driven Development for Non-Functional Properties in Service Oriented Architecture." Journal of Database Management 22, no. 2 (April 2011): 93–123. http://dx.doi.org/10.4018/jdm.2011040104.

Full text
Abstract:
In Service Oriented Architecture (SOA), each application is designed with a set of reusable services and a business process. To retain the reusability of services, non-functional properties of applications must be separated from their functional properties. This paper investigates a model-driven development framework that separates non-functional properties from functional properties and manages them. This framework proposes two components: (1) a programming language, called BALLAD, for a new per-process strategy to specify non-functional properties for business processes, and (2) a graphical modeling method, called FM-SNFPs, to define a series of constraints among non-functional properties. BALLAD leverages aspects in aspect oriented programming/modeling. Each aspect is used to specify a set of non-functional properties that crosscut multiple services in a business process. FM-SNFPs leverage the notion of feature modeling to define constraints among non-functional properties like dependency and mutual exclusion constraints. BALLAD and FM-SNFPs free application developers from manually specifying, maintaining and validating non-functional properties and constraints for services one by one, reducing the burdens/costs in development and maintenance of service-oriented applications. This paper describes the design details of BALLAD and FM-SNFPs, and demonstrates how they are used in developing service-oriented applications. BALLAD significantly reduces the costs to implement and maintain non-functional properties in service-oriented applications.
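As a hedged illustration of what constraints among non-functional properties make checkable (this is not BALLAD or FM-SNFPs syntax, and the property names are invented), a small validator only needs 'requires' and 'excludes' rules to flag an inconsistent selection.

```python
def validate_selection(selected, requires=(), excludes=()):
    """Check a set of selected non-functional properties against constraints.

    requires: pairs (a, b) meaning that selecting a demands also selecting b.
    excludes: pairs (a, b) meaning that a and b are mutually exclusive.
    Returns a list of human-readable violations (empty if the selection is valid).
    """
    errors = []
    for a, b in requires:
        if a in selected and b not in selected:
            errors.append(f"'{a}' requires '{b}'")
    for a, b in excludes:
        if a in selected and b in selected:
            errors.append(f"'{a}' and '{b}' are mutually exclusive")
    return errors

selection = {"message_encryption", "at_most_once_delivery", "message_logging"}
print(validate_selection(
    selection,
    requires=[("message_encryption", "key_exchange")],
    excludes=[("at_most_once_delivery", "message_logging")],
))
# ["'message_encryption' requires 'key_exchange'",
#  "'at_most_once_delivery' and 'message_logging' are mutually exclusive"]
```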
APA, Harvard, Vancouver, ISO, and other styles