Dissertations / Theses on the topic 'Sequence diagrams'

Consult the top 50 dissertations / theses for your research on the topic 'Sequence diagrams.'

1

Alwanain, Mohammed Ibrahim. "Automated composition of sequence diagrams." Thesis, University of Birmingham, 2016. http://etheses.bham.ac.uk//id/eprint/6919/.

Full text
Abstract:
Software design is a significant stage in the software development life cycle, as it creates a blueprint for the implementation of the software. Design errors lead to costly and deficient implementations, so it is crucial to discover them at an early stage of system development and resolve them. Inspired by various engineering disciplines, the software community proposed the concept of modelling in order to reduce these costly errors. Modelling provides a platform for creating abstract representations of software systems and has given rise to various modelling languages such as the Unified Modelling Language (UML), automata, and Petri nets. Because modelling raises the level of abstraction throughout the analysis and design process, it enables system designers to identify errors efficiently. Since modern systems are increasingly complex, models are often produced part by part to help reduce the complexity of the design. This often results in partial specifications captured in models that focus on a subset of the system. To produce an overall model of the system, such partial models must be composed together. Model composition is the process of combining partial models to create a single coherent model. Because manual model composition is error-prone, time-consuming and tedious, it should be replaced by automated model composition. This thesis presents a novel approach to the automatic composition of behaviour models, such as sequence diagrams, from partial specifications captured in multiple sequence diagrams, with the help of constraint solvers.
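As a rough, invented illustration of what composing partial sequence diagrams can mean in practice (it is not the constraint-solver technique of the thesis), the Python sketch below glues two partial diagrams on their shared messages; the diagram encoding, the compose function and the ATM-style messages are all assumptions made for the example.

    # Illustrative sketch only: a naive composition of two partial sequence
    # diagrams.  A diagram is modelled as an ordered list of
    # (sender, receiver, message) triples.

    def compose(sd_a, sd_b):
        """Merge two partial diagrams, keeping shared messages single."""
        shared = [m for m in sd_a if m in sd_b]      # glue points
        result = list(sd_a)
        for msg in sd_b:
            if msg not in shared:
                result.append(msg)                   # append non-shared behaviour
        return result

    login = [("User", "ATM", "insertCard"), ("ATM", "Bank", "verifyPIN")]
    withdraw = [("ATM", "Bank", "verifyPIN"), ("User", "ATM", "withdraw")]
    print(compose(login, withdraw))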
APA, Harvard, Vancouver, ISO, and other styles
2

Asikhan-Berlinguette, Nursel. "Communications service synthesis from informal specifications and sequence diagrams." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0015/MQ57081.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ameedeen, Mohamed Ariff. "A model driven approach to analysis and synthesis of sequence diagrams." Thesis, University of Birmingham, 2012. http://etheses.bham.ac.uk//id/eprint/3282/.

Full text
Abstract:
Software design is a vital phase in the software development life cycle, as it creates a blueprint for the implementation of the software. It is crucial that software designs are error-free, since any unresolved design errors could lead to costly implementation errors. To minimize these errors, the software community adopted the concept of modelling from other engineering disciplines. Modelling provides a platform for creating and sharing abstract or conceptual representations of the software system, and has led to various modelling languages, among them the Unified Modelling Language (UML) and Petri nets. While the strong mathematical foundation of Petri nets allows various formal analyses to be performed on the models, UML's user-friendly nature presents a more appealing platform for system designers. Using multi-paradigm modelling, this thesis presents an approach in which system designers may have the best of both worlds: SD2PN, a model transformation that maps UML sequence diagrams into Petri nets, allows system designers to model in UML while still using Petri nets for the analysis. Multi-paradigm modelling also provides a platform for a well-established Petri net theory, synthesis, to be adopted for sequence diagrams as a method of putting together different sequence diagrams based on a set of techniques and algorithms.
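For readers unfamiliar with the idea of mapping interactions to Petri nets, the sketch below gives a deliberately simplified, invented mapping (not the SD2PN rules of the thesis): each message becomes a transition chained by places so that the diagram's ordering is preserved; all names in it are hypothetical.

    # Rough illustration only: one transition per message, with a place before
    # and after it, so the message order becomes a causal chain of tokens.

    def sd_to_pn(messages):
        places, transitions, arcs = ["p0"], [], []
        for i, (sender, receiver, label) in enumerate(messages):
            t = f"{sender}->{receiver}:{label}"      # one transition per message
            p_next = f"p{i + 1}"
            transitions.append(t)
            places.append(p_next)
            arcs.append((places[i], t))              # consume: previous step done
            arcs.append((t, p_next))                 # produce: this message sent
        return places, transitions, arcs

    sd = [("User", "ATM", "insertCard"), ("ATM", "Bank", "verifyPIN")]
    print(sd_to_pn(sd))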
APA, Harvard, Vancouver, ISO, and other styles
4

Ma, Jinyong. "Topics in sequence analysis." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45908.

Full text
Abstract:
This thesis studies two topics in sequence analysis. In the first part, we investigate the large deviations of the shape of the random RSK Young diagrams associated with a random word of size n whose letters are independently drawn from an alphabet of size m = m(n). When the letters are drawn uniformly and when both n and m converge together to infinity, with m not growing too fast with respect to n, the large deviations of the shape of the Young diagrams are shown to be the same as those of the spectrum of the traceless GUE. Since the length of the top row of the Young diagrams is the length of the longest (weakly) increasing subsequence of the random word, the corresponding large deviations follow. When the letters are drawn with non-uniform probability, a control of the two highest probabilities ensures that the length of the top row of the diagrams satisfies a large deviation principle. In either case, both speeds and rate functions are identified. To complete our study, non-asymptotic concentration bounds for the length of the top row of the diagrams are obtained for both models. In the second part, we investigate the order of the r-th, 1 <= r < +∞, central moment of the length of the longest common subsequence of two independent random words of size n whose letters are identically distributed and independently drawn from a finite alphabet. When all but one of the letters are drawn with small probabilities, which depend on the size of the alphabet, the r-th central moment is shown to be of order n^(r/2). In particular, when r = 2, we obtain the order of the variance of the longest common subsequence.
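As a purely illustrative aside (not taken from the thesis), the statistic studied in the second part, the length of the longest common subsequence of two random words, can be computed with the standard dynamic programme below and its fluctuations sampled by simulation; the alphabet, word length and number of trials are arbitrary choices.

    # Illustrative only: LCS length by dynamic programming, plus a small
    # Monte Carlo estimate of its mean and variance for random binary words.
    import random

    def lcs_length(u, v):
        dp = [[0] * (len(v) + 1) for _ in range(len(u) + 1)]
        for i, a in enumerate(u, 1):
            for j, b in enumerate(v, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if a == b else max(dp[i - 1][j], dp[i][j - 1])
        return dp[-1][-1]

    alphabet, n, trials = "ab", 200, 50
    samples = [lcs_length([random.choice(alphabet) for _ in range(n)],
                          [random.choice(alphabet) for _ in range(n)])
               for _ in range(trials)]
    mean = sum(samples) / trials
    print(mean, sum((x - mean) ** 2 for x in samples) / trials)   # empirical variance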
APA, Harvard, Vancouver, ISO, and other styles
5

Bell, Cameron Pearce MacDonald. "A critical assessment of ages derived using pre-main-sequence isochrones in colour-magnitude diagrams." Thesis, University of Exeter, 2012. http://hdl.handle.net/10036/4017.

Full text
Abstract:
In this thesis a critical assessment of the ages derived using theoretical pre-main-sequence (pre-MS) stellar evolutionary models is presented by comparing the predictions to the low-mass pre-MS population of 14 young star-forming regions (SFRs) in colour-magnitude diagrams (CMDs). Deriving pre-MS ages requires precise distances and estimates of the reddening. Therefore, the main-sequence (MS) members of the SFRs have been used to derive a self-consistent set of statistically robust ages, distances and reddenings with associated uncertainties using a maximum-likelihood fitting statistic and MS evolutionary models. A photometric method (known as the Q-method) for de-reddening individual stars in regions where the extinction is spatially variable has been updated and is presented. The effects of both the model dependency and the SFR composition on these derived parameters are also discussed. The problem of calibrating photometric observations of red pre-MS stars is examined and it is shown that using observations of MS stars to transform the data into a standard photometric system can introduce significant errors in the position of the pre-MS locus in CMD space. Hence, it is crucial that precise photometric studies (especially of pre-MS objects) be carried out in the natural photometric system of the observations. This therefore requires a robust model of the system responses for the instrument used, and thus the calculated responses for the Wide-Field Camera on the Isaac Newton Telescope are presented. These system responses have been tested using standard star observations and have been shown to be a good representation of the photometric system. A benchmark test for the pre-MS evolutionary models is performed by comparing them to a set of well-calibrated CMDs of the Pleiades in the wavelength regime 0.4−2.5 μm. The masses predicted by these models are also tested against dynamical masses using a sample of MS binaries by calculating the system magnitude in a given photometric bandpass. This analysis shows that for Teff ≤ 4000 K the models systematically overestimate the flux by a factor of 2 at 0.5 μm, though this decreases with wavelength, becoming negligible at 2.2 μm. Thus, before the pre-MS models are used to derive ages, a recalibration of the models is performed by incorporating an empirical colour-Teff relation and bolometric corrections based on the Ks-band luminosity of Pleiades members, with theoretical corrections for the dependence on the surface gravity (log g). The recalibrated pre-MS model isochrones are used to derive ages from the pre-MS populations of the SFRs. These ages are then compared with the MS derivations, thus providing a powerful diagnostic tool with which to discriminate between the different pre-MS age scales that arise from a much stronger model dependency in the pre-MS regime. The revised ages assigned to each of the 14 SFRs are up to a factor of two older than previous derivations, a result with wide-ranging implications, including that circumstellar discs survive longer and that the average Class II lifetime is greater than currently believed.
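A loosely related, invented sketch of what grid-based fitting of age and distance modulus to colour-magnitude data can look like (it is not the maximum-likelihood statistic used in the thesis); the toy isochrone function and all numbers are assumptions for the example.

    # Invented toy example: least-squares grid search over age and distance
    # modulus against synthetic colour-magnitude points.
    import numpy as np

    def isochrone(colour, log_age):
        # toy locus: older stars are fainter, more so at redder colours
        return 4.0 + (2.5 + 1.5 * (log_age - 6.5)) * colour

    def fit(colours, mags):
        best = None
        for log_age in np.arange(6.0, 8.0, 0.05):
            for dist_mod in np.arange(5.0, 12.0, 0.05):
                model = isochrone(colours, log_age) + dist_mod
                score = float(np.sum((mags - model) ** 2))
                if best is None or score < best[0]:
                    best = (score, round(float(log_age), 2), round(float(dist_mod), 2))
        return best

    rng = np.random.default_rng(0)
    colours = rng.uniform(0.5, 2.0, 40)
    observed = isochrone(colours, 6.8) + 7.5 + rng.normal(0.0, 0.05, colours.size)
    print(fit(colours, observed))   # should land close to log_age ~ 6.8, dist_mod ~ 7.5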
APA, Harvard, Vancouver, ISO, and other styles
6

Alanazi, Mohammad N. "Consistency checking in multiple UML state diagrams using super state analysis." Diss., Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Gopidi, Vijay Kumar. "Evaluation of Live Sequence Charts Using Play Engine Tool." Thesis, Blekinge Tekniska Högskola, Avdelningen för programvarusystem, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-5428.

Full text
Abstract:
Capturing requirements in the initial stages of software development is a great challenge for software engineers, whether they are system requirements or customer requirements. Understanding a requirement and predicting or distinguishing what may happen from what must happen is difficult, especially in complex real-time systems. Live sequence charts (LSCs) are an extension of message sequence charts that can specify the liveness of requirements, and the Play Engine tool is used to specify, validate and analyse requirement scenarios. This thesis evaluates live sequence charts using the Play Engine tool and examines whether the built-in model checkers can detect inconsistencies in LSCs.
Requirements capture and analysis have always been the initial and central problem for software engineers during software design and development. Natural language is commonly used in industry to capture requirements because of its ease of use, while graphical languages such as UML are used to represent requirements, their behaviour and their scenarios visually. UML sequence diagrams are used in real-time software development to capture requirements, specifying scenarios of system behaviour and the interactions between objects graphically. Message sequence charts are also a graphical language for representing scenarios and system behaviour, especially in the telecommunications domain. However, both are only useful for specifying one aspect of the behaviour and are of little help in specifying the liveness of a requirement. Liveness can be defined as "something good will happen" [34], or something that must happen. For this reason, live sequence charts were developed: they can specify the liveness of a requirement and are capable of specifying both scenarios that may happen and scenarios that must happen. This thesis evaluates live sequence charts using the Play Engine tool running on a Windows machine and studies the built-in model checkers for formal verification. The thesis starts with the various types of graphical representation of requirements in software engineering, followed by the research methodology, a more detailed explanation of live sequence charts, the evaluation, the results, the conclusions and the lessons learned.
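As an invented illustration of the hot/must versus cold/may distinction that live sequence charts add (this is not the Play Engine semantics), the sketch below checks a recorded trace against a chart whose events are tagged hot or cold; the chart, trace and function names are assumptions.

    # Illustrative sketch only: an LSC is modelled as an ordered list of
    # (event, temperature) pairs; 'hot' events must occur, 'cold' may be skipped.

    def satisfies(chart, trace):
        pos = 0
        for event, temperature in chart:
            if event in trace[pos:]:
                pos = trace.index(event, pos) + 1    # event observed, advance
            elif temperature == "hot":
                return False                         # mandatory event missing
        return True

    chart = [("insertCard", "hot"), ("askPIN", "cold"), ("dispenseCash", "hot")]
    print(satisfies(chart, ["insertCard", "dispenseCash"]))   # True
    print(satisfies(chart, ["insertCard", "askPIN"]))         # False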
APA, Harvard, Vancouver, ISO, and other styles
8

Nejad-Hosseinian, Seyed Hamed. "Automatic generation of generalized event sequence diagrams for guiding simulation based dynamic probabilistic risk assessment of complex systems." College Park, Md. : University of Maryland, 2007. http://hdl.handle.net/1903/7750.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2007.
Thesis research directed by: Mechanical Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
9

Bannour, Boutheina. "Symbolic analysis of scenario based timed models for component based systems : Compositionality results for testing." Phd thesis, Ecole Centrale Paris, 2012. http://tel.archives-ouvertes.fr/tel-00997776.

Full text
Abstract:
In this thesis, we describe how to use UML sequence diagrams with MARTE timing constraints to specify the behavior of component-based systems entirely, while abstracting as much as possible from the functional roles of the components composing them. We have shown how to conduct compositional analysis of such specifications. For this, we have defined an operational semantics for sequence diagrams by translating them into TIOSTS, which are symbolic automata with timing constraints. We have used symbolic execution techniques to compute the possible executions of the system in the form of a symbolic tree. We have defined projection mechanisms to extract the execution tree associated with any distinguished component. The resulting projected tree characterizes the possible behaviors of the component with respect to the context of the whole system specification. As such, it represents a constraint to be satisfied by the component, and it can be used as a correctness reference to validate the system in a compositional manner. For that purpose, we have grounded our validation framework in testing techniques. We have presented compositionality results relating the correctness of a system to the correctness of its components. Based on these results, we have defined an incremental approach for testing from sequence diagrams.
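The projection idea can be illustrated with a small, invented sketch (it does not use TIOSTS or timing constraints): an execution tree is projected onto one component by keeping only the events that involve it; the tree encoding and event names are assumptions.

    # Rough illustration of projection: a tree node is (event, [subtrees]),
    # where event is (sender, receiver, label) or None at the root.

    def project(tree, component):
        event, children = tree
        projected_children = [project(c, component) for c in children]
        if event is None or component in (event[0], event[1]):
            return (event, projected_children)
        # component not involved: splice this node out, keep its subtrees
        if len(projected_children) == 1:
            return projected_children[0]
        return (None, projected_children)

    t = (None, [(("User", "ATM", "insertCard"),
                 [(("ATM", "Bank", "verifyPIN"), [])]),
                (("Operator", "ATM", "refill"), [])])
    print(project(t, "Bank"))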
APA, Harvard, Vancouver, ISO, and other styles
10

Becker, Marcelo. "Uma alternativa para o ensino de geometria : visualização geométrica e representações de sólidos no plano." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/17161.

Full text
Abstract:
This dissertation focuses on geometric visualization and the representation of three-dimensional objects in two-dimensional diagrams. The aim of the research is to build a teaching sequence that serves these purposes. To develop this sequence, several pilot activities were carried out with primary, secondary and tertiary education students. The analysis of the students' work determined the selection and adaptation of activities to compose the teaching sequence, which was tested with students in the final year of secondary school and is presented together with an analysis of the results. One activity stood out in this work and, being the author's own creation, was named the "Becker Box"; it consists of interacting with geometric solids through touch. The theories of van Hiele, Gutiérrez and Piaget were used for the data analysis. A brief review of the history of mathematics teaching, specifically in the area of geometry, was made in order to understand how this content is approached in textbooks.
APA, Harvard, Vancouver, ISO, and other styles
11

Noda, Kunihiro, Takashi Kobayashi, Kiyoshi Agusa, and Shinichiro Yamamoto. "Sequence Diagram Slicing." IEEE, 2009. http://hdl.handle.net/2237/14042.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Milone, A. P., A. F. Marino, L. R. Bedin, J. Anderson, D. Apai, A. Bellini, P. Bergeron, A. J. Burgasser, A. Dotter, and J. M. Rees. "The HST large programme on omega Centauri - I. Multiple stellar populations at the bottom of the main sequence probed in NIR-Optical." OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/624420.

Full text
Abstract:
As part of a large investigation with the Hubble Space Telescope to study the faintest stars within the globular cluster omega Centauri, in this work we present early results on the multiplicity of its main sequence (MS) stars, based on deep optical and near-infrared observations. By using appropriate colour-magnitude diagrams, we have identified, for the first time, the two main stellar populations I and II along the entire MS, from the turn-off towards the hydrogen-burning limit. We have compared the observations with suitable synthetic spectra of MS stars and conclude that the two main sequences (MSs) are consistent with stellar populations with different metallicity, helium and light-element abundances. Specifically, MS-I corresponds to a metal-poor stellar population ([Fe/H] ~ -1.7) with Y ~ 0.25 and [O/Fe] ~ 0.30. The MS-II hosts helium-rich (Y ~ 0.37-0.40) stars with metallicity ranging from [Fe/H] ~ -1.7 to -1.4. Below the MS knee (m_F160W ~ 19.5), our photometry reveals that each of the two main MSs hosts stellar subpopulations with different oxygen abundances, with very O-poor stars ([O/Fe] ~ -0.5) populating the MS-II. Such complexity has never been observed in previous studies of M dwarfs in globular clusters. A few months before the launch of the James Webb Space Telescope, these results demonstrate the power of optical and near-infrared photometry in the study of multiple stellar populations in globular clusters.
APA, Harvard, Vancouver, ISO, and other styles
13

Kelmaitė, Lina. "MagicDraw UML įrankio praplėtimas sekų ir būsenų diagramų suderinimo galimybe." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070816_142342-94258.

Full text
Abstract:
The goal of this work is to improve UML-based design processes by adding the ability to reconcile sequence and state diagrams, and to extend the UML specification so that bidirectional transformation between sequence diagrams and state machines becomes possible. The second chapter reviews several algorithms proposed in the literature for transforming sequence diagrams into state machines, with examples of the transformations, a comparison of the algorithms against selected criteria, and an overview of CASE tools in which the algorithms could be implemented. The third chapter presents a method for bidirectional transformation between sequence and state diagrams. The fourth chapter describes the design of a plug-in that implements these transformations by extending the MagicDraw UML tool, following MDA standards: the plug-in takes a sequence (state) diagram model as input and generates the corresponding state (sequence) diagram according to the transformation rules; the plug-in requirements, functional specification and architecture are described in the project chapter. The fifth chapter reports an investigation of the efficiency of the developed plug-in, together with transformation examples, in particular the working efficiency of a designer trying to reconcile model diagrams. The sixth chapter presents the overall conclusions.
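A minimal, invented sketch of one common idea behind such transformations (not necessarily the plug-in's algorithm): deriving a per-lifeline state machine by walking the sequence diagram's messages in order; all names are hypothetical.

    # Illustrative only: one transition per message the lifeline sends or
    # receives, with fresh states in between.

    def lifeline_state_machine(messages, lifeline):
        states, transitions = ["s0"], []
        for sender, receiver, label in messages:
            if lifeline not in (sender, receiver):
                continue
            kind = "send" if sender == lifeline else "receive"
            nxt = f"s{len(states)}"
            transitions.append((states[-1], f"{kind} {label}", nxt))
            states.append(nxt)
        return states, transitions

    sd = [("User", "ATM", "insertCard"),
          ("ATM", "Bank", "verifyPIN"),
          ("Bank", "ATM", "pinOK")]
    print(lifeline_state_machine(sd, "ATM"))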
APA, Harvard, Vancouver, ISO, and other styles
14

Chraibi, Kaadoud Ikram. "apprentissage de séquences et extraction de règles de réseaux récurrents : application au traçage de schémas techniques." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0032/document.

Full text
Abstract:
Two important aspects of the knowledge that an individual acquires through experience are semantic memory (explicit knowledge, such as the learning of concepts and categories describing the objects of the world) and procedural or syntactic memory (knowledge relating to the learning of rules or syntax). This "syntactic memory" is built from experience, and particularly from the observation of sequences of objects whose organization obeys syntactic rules. It must later support recognizing as well as generating valid sequences, i.e. sequences respecting the learnt rules. This production of valid sequences can be done either explicitly, by evoking the underlying rules, or implicitly, when the learning phase has captured the principle of organization of the sequences without explicit recourse to the rules. Although the implicit process is faster, more robust and less expensive in terms of cognitive load than explicit reasoning, it has the disadvantage of not giving access to the rules and thus of being less flexible and less explainable. These memory mechanisms also apply to business expertise. The capitalization of information and knowledge is a major issue for any company and concerns both explicit and implicit knowledge. At first, the expert chooses to follow the rules of the trade explicitly; with repetition, the choice comes to be made automatically, without explicit evocation of the underlying rules. This change in how rules are encoded, in individuals in general and in domain experts in particular, can be problematic when the expert has to explain or transmit his or her knowledge: business concepts can usually be formalized, but expertise is much more difficult to extract and transmit. In this work, we study sequences of electrical components, and in particular the problem of extracting the rules hidden in these sequences, an important aspect of extracting business expertise from technical drawings. We work in the connectionist setting and consider neural models capable of processing sequences. We implemented two recurrent neural networks: the Elman model and a model with LSTM (Long Short-Term Memory) units. We evaluated these two models on different artificial grammars (Reber's grammar and its variations) in terms of learning, generalization ability and handling of sequential dependencies. Finally, we showed that it is possible to extract the rules encoded in the LSTM-based recurrent network from the sequences, in the form of an automaton. The electrical domain is particularly relevant for this problem: it is more constrained, with a smaller combinatorics, than task planning in more general settings such as navigation, which could constitute a perspective of this work.
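As a small illustration of the kind of artificial grammar mentioned above, the sketch below generates strings of the classic Reber grammar from its standard transition table; it is offered as background, not as code from the thesis.

    # Illustrative sketch: generator for the classic Reber grammar.
    import random

    REBER = {
        0: [("T", 1), ("P", 2)],
        1: [("S", 1), ("X", 3)],
        2: [("T", 2), ("V", 4)],
        3: [("X", 2), ("S", 5)],
        4: [("P", 3), ("V", 5)],
    }

    def reber_string(rng=random):
        state, out = 0, ["B"]                 # strings begin with B and end with E
        while state != 5:
            symbol, state = rng.choice(REBER[state])
            out.append(symbol)
        out.append("E")
        return "".join(out)

    print([reber_string() for _ in range(3)])   # e.g. 'BTSSXSE', 'BPVVE', ...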
APA, Harvard, Vancouver, ISO, and other styles
15

Moutet, Laurent. "Diagrammes et théorie de la relativité restreinte : une ingénierie didactique." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC275/document.

Full text
Abstract:
We developed and tested several activities that use a register based on diagrams for teaching the special theory of relativity to final-year (terminale S) students. The graphical approach can create learning difficulties, but its educational potential can outweigh them. An epistemological study of the diagrams usable in special relativity shows the important links between mathematics and the genesis of the theory; this is the case of the Minkowski diagram. We also studied the Brehme and Loedel diagrams, which were created much later for teaching purposes. Following the pilot sessions, we developed a new theoretical framework to analyse more finely the interactions developed by students solving a problem that uses diagrams in special relativity. We modified the mathematical working spaces (MWS) framework by adding the rationality framework of physics to that of mathematics. The extended MWS framework allowed us to design several versions of the teaching sequences, to carry out an a priori analysis of their level of difficulty, and an a posteriori analysis by studying students' work. The work of groups of students was analysed during a sequence using the Minkowski diagram with GeoGebra, a graphical simulation tool. The degree of mastery of the Minkowski diagram was assessed for each student from both the mathematical and the physical point of view. The results are promising: they show a real appropriation of the concepts of the special theory of relativity through an approach using diagrams.
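Not part of the thesis, but as background for readers unfamiliar with the Minkowski diagram: the sketch below computes the Lorentz transformation (with c = 1) that such a diagram visualizes; the speed and sample events are arbitrary.

    # Minimal computation of Lorentz-transformed coordinates, units with c = 1.
    import math

    def lorentz(t, x, v):
        """Coordinates (t', x') of the event (t, x) in a frame moving at speed v."""
        gamma = 1.0 / math.sqrt(1.0 - v * v)
        return gamma * (t - v * x), gamma * (x - v * t)

    # The worldline of the moving frame's origin (x' = 0) becomes the slanted
    # time axis drawn on the diagram.
    for t in range(4):
        print(lorentz(t, 0.6 * t, v=0.6))   # stays at x' = 0, as expected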
APA, Harvard, Vancouver, ISO, and other styles
16

Toresson, Gabriel. "Documenting and Improving the Design of a Large-scale System." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-157733.

Full text
Abstract:
As software systems become increasingly large and complex, the need to make them easy to maintain increases, since large systems are expected to last for many years. It has been estimated that system maintenance is a large part of many IT departments' software development costs. In order to design a complex system to be maintainable it is necessary to introduce structure, often as models in the form of a system architecture and a system design. As development of complex large-scale systems progresses over time, the models may need to be reconstructed, perhaps because development has diverged from the initial plan, or because changes had to be made during implementation. This thesis presents a reconstructed documentation of a complex large-scale system, as well as suggestions for how to improve the existing design based on identified needs and insufficiencies. The work was performed primarily using a qualitative manual code review of the source code, and the proposal was generated iteratively. The proposed design was evaluated and it was concluded that it does address the needs and insufficiencies, and that it can be realistically implemented.
APA, Harvard, Vancouver, ISO, and other styles
17

Marehalli, Jayavardhan N. "Assembly Sequence Optimization and Assembly Path Planning." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/44837.

Full text
Abstract:
This thesis addresses two important aspects of automatic assembly, namely assembly sequence planning and assembly path planning. These issues are addressed separately, starting with sequence planning and followed by assembly path planning. For efficient assembly without feedback systems (passive assembly), an assembler should know the ideal orientation of each component and the order in which to put the parts together (the assembly sequence). A heuristic is presented to find the optimal assembly sequence and prescribe the orientation of the components for a minimum set of grippers, ideally one. The heuristic utilizes an index of difficulty (ID) that quantifies assembly. The ID for each task in the assembly process is computed on the basis of a number of geometrical and operational properties. The objective of the optimization problem here is to minimize the assembly ID and categorize parts/subassemblies based on their preferred direction of assembly while allowing re-orientation of the base part. It is assumed that the preferred direction of assembly is vertically downward, consistent with manual as well as most automatic assembly protocols. Our aim is to minimize the number of degrees of freedom required in a re-orienting fixture and to derive the requirements for such a fixture. The assembly of a small engine is used as an example in this study due to the variety of ideally rigid parts involved. In high-precision assembly tasks, contact motion is common and often desirable. This entails a careful study of the contact states of the parts being assembled. Recognition of contact states is crucial in planning and executing contact motion plans due to inevitable uncertainties. Dr. Jing Xiao of UNCC introduced the concepts of principal contacts (PC) and contact formation (CF) for contact state recognition. The concept of using CFs (as sets of PCs) has the inherent advantage that a change of CF often coincides with a discontinuity of the general contact force (force and torque). Previous work in contact motion planning has shown that contact information at the level of PCs, along with the sensed location and force information, is often sufficient for planning high-precision assembly operations. In this thesis, we present results from experiments involving planned contact motions to validate the notion of PCs and CFs: an abrupt change in general contact force often accompanies a change between CFs. We are only concerned with solving the 2D peg-in-corner problem.
Master of Science
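As an invented illustration only (the thesis's index of difficulty is computed from geometrical and operational properties, not the toy numbers used here), the sketch below picks an assembly order that minimizes a summed ID plus a penalty for each change of the preferred assembly direction.

    # Toy example: brute-force search for the ordering with the lowest cost.
    from itertools import permutations

    parts = {"piston": (2.0, "down"), "rod": (3.5, "down"),
             "cover": (1.0, "side"), "bolt": (0.5, "down")}
    REORIENT_PENALTY = 2.0

    def total_id(order):
        cost, prev_dir = 0.0, None
        for name in order:
            difficulty, direction = parts[name]
            cost += difficulty
            if prev_dir is not None and direction != prev_dir:
                cost += REORIENT_PENALTY     # fixture must re-orient the base part
            prev_dir = direction
        return cost

    best = min(permutations(parts), key=total_id)
    print(best, total_id(best))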
APA, Harvard, Vancouver, ISO, and other styles
18

Zunic, Dragisa. "Computing with sequents and diagrams in classical logic - calculi *X, dX and ©X." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2007. http://tel.archives-ouvertes.fr/tel-00265549.

Full text
Abstract:
This doctoral thesis studies the computational interpretation of proofs in classical logic. It presents three calculi reflecting three different approaches to the question.

The thesis is therefore composed of three parts.

The first part introduces the *X calculus, whose terms represent proofs in the classical sequent calculus. The reduction rules of the *X calculus capture most of the features of cut elimination in the sequent calculus. This calculus introduces terms that allow an implicit implementation of erasure and duplication. To the best of our knowledge, it is the first such calculus for classical logic.

The second part studies the possibility of representing classical computation by means of diagrams. We present the dX calculus, the diagrammatic calculus of classical logic, whose diagrams are derived from *X-terms. The main difference lies in the fact that dX works at a higher level of abstraction. It captures the essence of sequent-calculus proofs as well as the essence of classical cut elimination.

The third part relates the first two. It presents the ©X calculus, which is a one-dimensional version of the diagrammatic calculus. We start from *X and explicitly identify the terms that should be identified: these are the terms that encode sequent proofs which are equivalent modulo permutation of independent inference rules, and which also have the same diagrammatic representation. Such an identification induces a congruence relation on terms. The reduction relation is defined modulo this congruence, and the reduction rules correspond to those of the dX calculus.
APA, Harvard, Vancouver, ISO, and other styles
19

Genest, Blaise. "L'odyssée des graphes de diagrammes de séquences ( MSC-Graphes)." Paris 7, 2004. http://www.theses.fr/2004PA077210.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Mohanty, Pragyan Paramita. "Function-based Algorithms for Biological Sequences." OpenSIUC, 2015. https://opensiuc.lib.siu.edu/dissertations/1120.

Full text
Abstract:
AN ABSTRACT OF THE DISSERTATION OF PRAGYAN P. MOHANTY, for the Doctor of Philosophy degree in ELECTRICAL AND COMPUTER ENGINEERING, presented on June 11, 2015, at Southern Illinois University Carbondale. TITLE: FUNCTION-BASED ALGORITHMS FOR BIOLOGICAL SEQUENCES. MAJOR PROFESSOR: Dr. Spyros Tragoudas. Two problems at two different abstraction levels of computational biology are studied. At the molecular level, efficient pattern-matching algorithms for DNA sequences are presented. For gene order data, an efficient data structure is presented that is capable of storing all gene re-orderings in a systematic manner. A common characteristic of the presented methods is the use of binary decision diagrams, which store and manipulate binary functions. Searching for a particular pattern in a very large DNA database is a fundamental and essential component of computational biology. In the biological world, pattern matching is required for finding repeats in a particular DNA sequence, finding motifs, aligning sequences, and so on. Due to the immense amount and continuous increase of biological data, the searching process requires very fast algorithms, and this in turn requires encoding schemes for the efficient storage of the structures these searches operate on. With continuous progress in genome sequencing, genome rearrangements and the construction of evolutionary genome graphs, which represent the relationships between genomes, become challenging tasks. Previous approaches are largely based on distance measures, so that relationships between phylogenetic species can be established with certain required rearrangement operations and hence within a certain computational time. However, because of the large volume of the available data, the storage space and construction time of such an evolutionary graph remain a problem. In addition, it is important to keep track of all possible rearrangement operations for a particular genome, as biological processes are uncertain. This study presents a binary-function-based tool set for efficient DNA sequence storage. A novel scalable method is also developed for fast offline pattern searches in large DNA sequences. This study also presents a method that efficiently stores all the gene sequences associated with all possible genome rearrangements, such as transpositions, and constructs the evolutionary genome structure much faster for multiple species. The developed methods benefit from the use of Boolean functions, their compact storage using a canonical data structure, and the existence of built-in operators for these data structures. The time complexities depend on the size of the data structures used to store the functions that represent the DNA sequences and/or gene sequences. It is shown that the presented approaches exhibit sublinear time complexity with respect to the sequence size. The number of nodes in the DNA data structure, the string search time on these data structures, the depth of the genome graph structure, and the time of the rearrangement operations are reported. Experiments on DNA sequences from the NCBI database are conducted for the DNA sequence storage and search process. Experiments on large gene order data sets, such as human mitochondrial data and plant chloroplast data, are conducted, and the depth of this structure is studied for evolutionary processes on gene sequences. The results show that the developed approaches are scalable.
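As a simplified, invented illustration of the encoding idea only (the thesis uses binary decision diagrams, which this sketch does not), DNA can be represented with two Boolean variables per base and a pattern scanned over that encoding; the sequences shown are arbitrary.

    # Illustrative only: 2-bit encoding of nucleotides plus a naive scan.
    ENCODE = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

    def to_bits(seq):
        return [bit for base in seq for bit in ENCODE[base]]

    def find(pattern, text):
        p, t = to_bits(pattern), to_bits(text)
        hits = []
        for i in range(0, len(t) - len(p) + 1, 2):   # step by whole bases
            if t[i:i + len(p)] == p:
                hits.append(i // 2)                  # position in bases
        return hits

    print(find("GAT", "ACGATTGGAT"))   # -> [2, 7]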
APA, Harvard, Vancouver, ISO, and other styles
21

Sin, Thant. "Improving Novice Analyst Performance in Modeling the Sequence Diagram in Systems Analysis: A Cognitive Complexity Approach." FIU Digital Commons, 2009. http://digitalcommons.fiu.edu/etd/86.

Full text
Abstract:
The Unified Modeling Language (UML) has quickly become the industry standard for object-oriented software development and is widely used in organizations and institutions around the world. However, UML is often found to be too complex for novice systems analysts. Although prior research has identified difficulties novice analysts encounter in learning UML, no viable solution has been proposed to address these difficulties. Sequence-diagram modeling, in particular, has largely been overlooked. The sequence diagram models the behavioral aspects of an object-oriented software system in terms of interactions among its building blocks, i.e. objects and classes. It is one of the most commonly used UML diagrams in practice, yet there has been little research on sequence-diagram modeling, and the current literature scarcely provides effective guidelines for developing a sequence diagram. Such guidelines would greatly benefit novice analysts who, unlike experienced systems analysts, do not possess the relevant prior experience to easily learn how to develop a sequence diagram. There is therefore a need for an effective sequence-diagram modeling technique for novices. This dissertation reports a research study that identified novice difficulties in modeling a sequence diagram and proposed a technique called CHOP (CHunking, Ordering, Patterning), designed to reduce cognitive load by addressing the cognitive complexity of sequence-diagram modeling. The CHOP technique was evaluated in a controlled experiment against a technique recommended in a well-known textbook, which was found to be representative of approaches provided in many textbooks as well as in the practitioner literature. The results indicated that novice analysts were able to perform better using the CHOP technique, an outcome that seems to have been enabled by the pattern-based heuristics provided by the technique. Meanwhile, novice analysts rated the CHOP technique as more useful, although not significantly easier to use, than the control technique. The study established that CHOP is an effective sequence-diagram modeling technique for novice analysts.
APA, Harvard, Vancouver, ISO, and other styles
22

Nandakumar, Govind. "L’archéologie galactique et son application au centre galactique." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4064/document.

Full text
Abstract:
Galactic archaeology deals with dissecting the Milky Way into its various components with the objective of disentangling the processes contributing to the Milky Way's formation and evolution. This relies on precise estimation of the positions, velocities and stellar atmospheric properties of individual stars belonging to the different stellar populations that make up each of these components. The field therefore relies on photometric, astrometric and spectroscopic observations to measure these stellar properties in detail, in addition to accurate models against which to compare the observed results. In this thesis, I carry out a detailed study of selection-function effects on metallicity trends using large-scale spectroscopic surveys, followed by high- and low-resolution spectroscopic observations towards the inner Milky Way to characterise the chemical nature of the inner Galactic bulge and to measure the star formation rate in the central molecular zone (CMZ), respectively. With ongoing and upcoming large Galactic-archaeology spectroscopic surveys such as APOGEE, RAVE, LAMOST and GALAH, it is essential to know the specific selection function related to the targeting strategy of each of them. Using common fields along similar lines of sight between APOGEE, LAMOST, GES and RAVE, together with stellar population synthesis models, I investigate the selection-function effect on the metallicity distribution function (MDF) and on the vertical metallicity gradients in the solar neighbourhood. My results indicate a negligible selection-function effect on the MDF and on the vertical metallicity gradients, suggesting that different spectroscopic surveys (with different resolutions and wavelength ranges) can be combined for such studies provided their metallicities are put on the same scale. While more and more spectroscopic observations of the outer bulge regions reveal the complex morphological, kinematic and chemical nature of the Milky Way bulge, detailed chemical-abundance studies of the inner bulge region (400-500 pc) are lacking. I present high-resolution K-band spectra of K/M giants in this highly obscured region, obtained with the high-resolution infrared spectrograph CRIRES (R~50,000) on the VLT, and discuss the MDF and detailed chemical abundances of our sample in this region, as well as the North-South symmetry of the MDF along the bulge minor axis. A major challenge for chemical evolution models is the lack of knowledge about the star formation history and the star formation rate in the Milky Way. The inner 200 pc of the Milky Way, the so-called central molecular zone, hosts a large reservoir of molecular gas with evidence of star formation activity during the last 100,000 years. I used low-resolution KMOS spectra (VLT) to identify and analyse massive young stellar objects (YSOs) and estimated the star formation rate in the CMZ using the YSO counting method.
APA, Harvard, Vancouver, ISO, and other styles
23

Kučera, Antonín. "Návrh využití vývojového rámce Scrum a modelovacího jazyka UML pro zefektivnění tvorby webových stránek." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-165084.

Full text
Abstract:
The main goal of this diploma thesis is a proposal to use the agile development practices of the Scrum framework together with the UML modeling language for creating small-scale websites. The emphasis is placed especially on creating a practically usable type model for such websites. The theoretical part of the thesis has two main chapters. The "Agile methodology" chapter lists the principles of the agile approach to creating software and then surveys agile methodologies (Scrum, XP, ASD, FDD, DSDM, LD and Crystal). The "UML" chapter introduces the UML modeling language and its principles, and then focuses on the UML diagrams used in the practical part of this thesis (use case diagram, class diagram, sequence diagram, activity diagram and deployment diagram). The practical part of the thesis focuses on the design of the type model for creating small-scale websites. The type model is based both on the theoretical part of this work and on the practical experience of the author. The first part introduces the scheme of the model and shows how its individual phases relate to each other; the following part discusses the individual phases, where the principles of the Scrum development framework are applied, and gives examples of the use of the UML diagrams.
APA, Harvard, Vancouver, ISO, and other styles
24

Mhanna, Hussein. "Intégration du model-based testing dans un processus de développement logiciel." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS115.

Full text
Abstract:
Software testing plays a significant role in minimizing software development costs. One of the most important trends towards this goal is the use of MBT (Model-Based Testing). MBT consists of automatically generating validation tests from a dedicated model describing certain functional aspects of the system under test (SUT). However, such a dedicated test model must be created, and this process is time- and labour-consuming, which is why MBT is not widely adopted in industry. In this work, we present a methodology to facilitate the use of MBT in companies by using existing project artifacts to automatically create a test model based on the usage of the SUT (what we call a usage model). We focus on transforming UML sequence diagrams into a test model. This transformation is based on the notion of context, i.e. the history of previous events on the SUT, in order to factorize behaviours and obtain a valid usage model. This notion is important when two identical events are handled differently by the SUT depending on the set of its previous inputs. An academic case study modelling an ATM (Automated Teller Machine) is presented to describe the need and the associated problems. Our methodology is then implemented to meet some requirements of the French project Clarity.
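A minimal, invented sketch of the context idea (not the thesis's transformation from sequence diagrams): states of a usage model are identified by a bounded history of previous events, so that the same event can lead to different states depending on what happened before; the traces and the depth parameter are assumptions.

    # Illustrative only: build (context, event, next_context) transitions from traces.

    def build_usage_model(traces, depth=2):
        transitions = set()
        for trace in traces:
            context = ()
            for event in trace:
                nxt = (context + (event,))[-depth:]   # bounded history
                transitions.add((context, event, nxt))
                context = nxt
        return transitions

    traces = [["insertCard", "enterPIN", "withdraw"],
              ["insertCard", "enterPIN", "enterPIN", "blockCard"]]
    for t in sorted(build_usage_model(traces)):
        print(t)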
APA, Harvard, Vancouver, ISO, and other styles
25

He, Xiongwei. "Synthese et caracterisation d'un copolymere multisequence contenant des sequences cristalisables : etude de ses gels physiques." Université Louis Pasteur (Strasbourg) (1971-2008), 1986. http://www.theses.fr/1986STR13321.

Full text
Abstract:
Preparation, by hydrosilylation, of block copolymers with crystallizable organosilicon blocks and a dimethylsiloxane block. Kinetics of formation of the physical gels. Thermodynamic properties of the two homopolymers. Mechanism of gelation: a liquid-liquid phase separation precedes crystallization.
APA, Harvard, Vancouver, ISO, and other styles
26

Hort, Jan. "Automatické vyhodnocování e-learningových testů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235443.

Full text
Abstract:
The thesis deals with the automatic evaluation of e-learning tests. This technology is useful in the domains of computer-aided learning, e-learning, and intelligent tutoring systems, as it extends the possibilities for online testing of students' knowledge. The thesis also covers standards for learning and intelligent tutoring systems, and briefly introduces several projects in the field of intelligent tutoring systems.
APA, Harvard, Vancouver, ISO, and other styles
27

Justová, Markéta. "Aplikace objektových metod v návrhu informačního systému platební instituce." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-204003.

Full text
Abstract:
The aim of the diploma thesis is to evaluate, on the basis of defined criteria, whether the selected object-oriented (OO) methodology, as defined by its author, is applicable in practice when designing an information system, with the main focus on the analysis of a new core banking system supporting the key processes of a payment institution and Forex broker. The thesis describes selected OO methodologies and notations used in the analysis and design of information systems. Further, it focuses on evaluating the real usage of the selected method (Unified Process) in the environment of a payment institution, confronting the theoretical definition of the chosen OO methodology with its application during the analysis of the IS through practical demonstrations created within the case study.
APA, Harvard, Vancouver, ISO, and other styles
28

Schlosser, Edson Rodrigo. "Síntese de redes lineares de antenas de microfita com diagramas de irradiação conformados para sistemas de comunicação 4G." Universidade Federal do Pampa, 2014. http://dspace.unipampa.edu.br:8080/xmlui/handle/riu/254.

Full text
Abstract:
This work presents a study of linear antenna arrays, with isotropic elements and with designs in microstrip technology, having a shaped radiation pattern and sidelobe control. The specification to be met is that of an antenna for radio base stations operating in the frequency band allocated to 4G technology in Brazil. The work describes computational tools that assist in the synthesis of antenna arrays; as a result of the optimization process, the excitation coefficients of the elements that compose the arrays are obtained. The cellular mobile telephony system is first described, presenting the main points of interest for this work, such as the evolution of wireless communication systems, the concept of frequency reuse, system capacity, co-channel interference and the positioning of radio base stations in the cellular system, in addition to the main electrical characteristics of the antennas currently used to serve subscribers. Next, a line of sight between the radio base station and the user is assumed, which makes it possible to obtain the approximate received power level as the user moves away from the tower. The goal is a uniform power distribution up to the cell boundary, which results in a cosecant-squared radiation pattern for the antenna; in addition, the co-channel interference and the percentage of energy radiated towards the horizon should be minimized. Pattern synthesis methods, namely the Fourier transform, Woodward-Lawson and least-squares methods, were studied and implemented in Matlab to obtain the desired pattern from the excitation coefficients of the linear array elements. Although these methods are widely used, they could not synthesize the desired pattern adequately, so iterative optimization methods were investigated to control the sidelobes and shape the pattern. A combination of a genetic algorithm with sequential quadratic programming was chosen, the latter searching for a local minimum starting from the best solution found by the genetic algorithm; this combination allowed fast convergence in obtaining the array excitation coefficients. A microstrip antenna array was then designed in the HFSS software to reproduce the cosecant-squared pattern when excited with the coefficients calculated by the computational tool. During the synthesis process, all the effects observed in an antenna array, such as mutual coupling and edge effects, were taken into account. Once the excitation coefficients were known, a feeding network in microstrip technology was designed to supply the corresponding current values to the antennas of the array. Finally, the synthesized and the obtained patterns were compared, which allowed the computational tool to be validated.
In this work, the development of a computational tool that performs the synthesis of linear antenna arrays is presented and extensively discussed. The main intended application is the design of an antenna suitable for radio base stations of mobile communication systems, whereby uniform power distribution should be achieved inside the cell. The work starts with a review of mobile communication systems, in which the main aspects are briefly discussed. An analysis is carried out to derive an expression that relates the antenna radiation pattern to the uniform power distribution inside a cell; this feature is achieved if the radiation pattern of the transmitting structure has a cosecant-squared shape. Classical methods for pattern synthesis were implemented in Matlab and tested for the case of a cosecant-squared shape. Neither the Fourier transform nor the Woodward-Lawson technique could synthesize such a pattern satisfactorily, so iterative methods were studied and implemented. The first iterative technique was based on the minimization of least-square errors, which has been used successfully for beamforming purposes; however, it proved unsuitable for the synthesis of cosecant-squared shaped patterns. Acceptable results could only be obtained by means of a combined genetic algorithm and sequential quadratic programming approach, which allowed fast convergence of the optimization of the excitation coefficients of the linear array elements. In order to demonstrate the efficiency of the developed tool, two microstrip antenna arrays operating in the frequency band allocated to the Brazilian 4G systems were studied. The embedded radiation pattern of each array element was taken into account during the synthesis of the cosecant-squared shaped pattern, allowing mutual coupling, as well as the truncation of the ground plane, to be compensated during the optimization process. The synthesized pattern made it possible to reduce the power radiated above the horizon, as well as the co-channel interference. In order to validate the technique, a complete array including the power divider was designed using the electromagnetic simulator HFSS. Good agreement was obtained between the synthesized pattern and the one simulated in HFSS.
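As a very rough illustration of the quantity being optimised (and emphatically not the thesis' GA + SQP tool), the Java sketch below evaluates the array factor of an N-element uniform linear array for a given set of excitation coefficients; a synthesis loop would adjust those coefficients until the resulting pattern fits a cosecant-squared mask. Element count, spacing and the uniform excitation are placeholder values.

```java
/** Evaluates the array factor of a uniform linear array for given excitation coefficients. */
public class ArrayFactorSketch {
    public static void main(String[] args) {
        int n = 8;
        double spacingOverLambda = 0.5;                 // element spacing d / wavelength
        double[] amplitude = new double[n];
        double[] phase = new double[n];                 // radians
        java.util.Arrays.fill(amplitude, 1.0);          // uniform excitation as a placeholder

        for (int deg = 0; deg <= 180; deg += 5) {
            double theta = Math.toRadians(deg);
            double re = 0, im = 0;
            for (int k = 0; k < n; k++) {
                // progressive spatial phase of element k seen from direction theta
                double psi = 2 * Math.PI * spacingOverLambda * k * Math.cos(theta) + phase[k];
                re += amplitude[k] * Math.cos(psi);
                im += amplitude[k] * Math.sin(psi);
            }
            // normalise and floor the nulls at -120 dB to avoid -Infinity in the printout
            double magnitudeDb = 20 * Math.log10(Math.max(Math.hypot(re, im) / n, 1e-6));
            System.out.printf("theta=%3d deg  AF=%7.2f dB%n", deg, magnitudeDb);
        }
    }
}
```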
APA, Harvard, Vancouver, ISO, and other styles
29

Avellaneda, Florent. "Vérification de réseaux de Pétri avec états sous une sémantique d'ordres partiels." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM4087/document.

Full text
Abstract:
Message Sequence Graphs (MSGs) are a well-known formalism, often used in the field of communication protocols to describe sets of scenarios visually. The first part of the thesis addresses divergence detection, the verification of global cooperation, and the verification of reachability and coverage properties; our first contribution is to use SAT solvers to solve these problems efficiently. In order to equip the MSG formalism with counters, timers and other features, we introduce the model of Petri nets with states (PNS), together with a non-branching, non-sequential process semantics. This model is not only more expressive than MSGs, it also allows more concise specifications. We consider three classical verification problems on the set of markings reachable by prefixes of processes: boundedness, covering and reachability. To handle parametrized systems, we also introduce the notion of semi-structural boundedness: the initial marking is fixed only for a suitable subset of the places, and one checks that the system is bounded whatever the values of the parameters. We show how an unfolding leads to a simpler problem, in the form of a linear programme. A particularly attractive feature of MSGs and PNS lies in their graphical, automaton-like representation, which makes it interesting to describe bugs visually; we show how to compute, in polynomial time, a simple and concise representation of a bug.
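As a rough illustration of the SAT-based approach (not the encoding used in the thesis), the following Java sketch emits a toy DIMACS CNF file in which Boolean variables stand for "node n of the MSG appears on the selected path", and the clauses force the final node to be reached while two mutually exclusive branches are never selected together. The node names, variable numbering and clauses are invented for the example; any off-the-shelf solver that reads DIMACS (MiniSat, Glucose, ...) can then answer the query.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.*;

/** Toy DIMACS CNF generator, only meant to illustrate "encode the property, call a SAT solver". */
public class MsgSatSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical MSG nodes, numbered as SAT variables 1..4.
        // 1 = start, 2 = branchA, 3 = branchB, 4 = finalNode
        List<int[]> clauses = new ArrayList<>();
        clauses.add(new int[]{1});          // the start node is always on the path
        clauses.add(new int[]{-1, 2, 3});   // from start, at least one branch is taken
        clauses.add(new int[]{-2, -3});     // branches A and B are mutually exclusive
        clauses.add(new int[]{-2, 4});      // branch A leads to the final node
        clauses.add(new int[]{-3, 4});      // branch B leads to the final node
        clauses.add(new int[]{4});          // query: is the final node reachable?

        StringBuilder dimacs = new StringBuilder("p cnf 4 " + clauses.size() + "\n");
        for (int[] clause : clauses) {
            for (int literal : clause) dimacs.append(literal).append(' ');
            dimacs.append("0\n");
        }
        Files.writeString(Path.of("msg-reachability.cnf"), dimacs);
        System.out.println(dimacs);   // feed this file to any DIMACS-compatible SAT solver
    }
}
```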
APA, Harvard, Vancouver, ISO, and other styles
30

Pospíšil, Jiří. "Analýza a návrh informačního systému řízení know-how v ICT společnosti." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2007. http://www.nusl.cz/ntk/nusl-412760.

Full text
Abstract:
The thesis deals with the design of an information system using the Rational Unified Process (RUP) methodology. It compiles a list of requirements for the system and carries out an analysis and design of the current information system. The RUP methodology is applied to the first two phases, inception and elaboration. The resulting elaboration document serves as the basis for a programming prototype built in the Ruby on Rails environment, using the Ruby language in combination with HTML.
APA, Harvard, Vancouver, ISO, and other styles
31

Miranda, Carranza Pablo. "Program Matters : From Drawing to Code." Doctoral thesis, KTH, Stadsbyggnad, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-218462.

Full text
Abstract:
Whether on paper, on site or mediating between both, means for reading and writing geometry have been central to architecture: the use of compasses and rulers, strings, pins, stakes or plumb-lines enabled the analysis and reproduction of congruent figures on different surfaces since antiquity, and from the renaissance onwards, the consistent planar representation of three-dimensional shapes by means of projective geometry. Tacitly through practice, or explicitly encoded in classical geometry, the operational syntaxes of drawing instruments, real or imaginary, have determined the geometric literacies regulating the production and instruction of architecture. But making marks on the surfaces of paper, stone or the ground has recently given way to the fundamentally different sequential operations of computers as the material basis of architectural inscription. Practices which have dominated architecture since antiquity make little sense in its current reading and writing systems.  This thesis examines technologies of digital inscription in a search for literacies equivalent to those of drawn geometry. It particularly looks at programming as a form of notation in close correspondence with its material basis as a technology, and its effects on architecture. It includes prototypes and experiments, graphics, algorithms and software, together with their descriptions and theoretical analyses. While the artefacts and texts respond to the different forms, styles, interests and objectives specific to the fields and contexts in which they have originated, their fundamental purpose is always to critique and propose ways of writing and reading architecture through programming, the rationale of the research and practice they stem from.


APA, Harvard, Vancouver, ISO, and other styles
32

Paoli, Elena. "Classificazione spettrale delle stelle." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/11441/.

Full text
Abstract:
Man has always observed the stars, and the first to realize that the study of spectra would allow an understanding of the physics and chemistry of stars was the Jesuit father Secchi. A spectrum contains information about the various stellar atmospheric processes: the depth and shape of a line make it possible to gather data on the physical conditions of the gas in the regions where the line formed. Spectral classification began with father Secchi, who divided the stars into four categories. It was later replaced by the Harvard classification, composed of seven classes, O, B, A, F, G, K, M, characterized by different temperature ranges. To solve the problem of the different luminosities within the same class, Yerkes created a new classification, a two-dimensional system that, in addition to temperature, takes luminosity into account, since luminosity strongly influences the structure of the spectrum. Peculiar spectra, i.e. those showing anomalies such as an unusual abundance of metals, were also considered. The last chapter treats the H-R diagram as an application of spectral classification, mentioning how the turn-off point of the diagram makes it possible to calculate the age of a star cluster.
APA, Harvard, Vancouver, ISO, and other styles
33

Sendrowski, Janek. "Feigenbaum Scaling." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-96635.

Full text
Abstract:
In this thesis I hope to provide a clear and concise introduction to Feigenbaum scaling that is accessible to undergraduate students. This is accompanied by a description of how to obtain numerical results by various means. A more intricate approach drawing on renormalization theory, as well as a short consideration of some of the topological properties, is also presented. Furthermore, I have placed great emphasis on diagrams throughout the text to make the contents more comprehensible and intuitive.
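For readers who want to reproduce the kind of numerical results mentioned above, the short sketch below (mine, not the thesis') prints attractor points of the logistic map x(n+1) = r·x(n)·(1 − x(n)) for a range of r values; plotting the output gives the familiar bifurcation diagram in which the period-doubling cascade and Feigenbaum scaling can be observed. The parameter range and iteration counts are arbitrary choices.

```java
/** Generates (r, x) pairs for a logistic-map bifurcation diagram; plot the output to see the cascade. */
public class BifurcationSketch {
    public static void main(String[] args) {
        for (double r = 2.8; r <= 4.0; r += 0.002) {
            double x = 0.5;
            for (int i = 0; i < 1000; i++) x = r * x * (1 - x);   // discard the transient
            for (int i = 0; i < 100; i++) {                        // record points on the attractor
                x = r * x * (1 - x);
                System.out.printf("%.4f %.6f%n", r, x);
            }
        }
    }
}
```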
APA, Harvard, Vancouver, ISO, and other styles
34

Hsiao, ShihChieh, and 蕭世杰. "A Test Case Generator For Sequence Diagrams." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/36169595306961101665.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
Academic year 100 (ROC calendar)
Software testing is one of the main approaches to ensuring software quality. However, manual software testing incurs high development costs and is prone to errors. This thesis implements a test case generator that automatically generates test cases for integration testing, which can reduce development costs and improve software quality. The tool uses UML sequence diagrams, UML class diagrams, and the Object Constraint Language (OCL) as specification languages. Sequence diagrams describe the behaviour of the method calls among objects, while class diagrams and OCL describe the behaviour within a method. The tool first converts each class diagram and its OCL constraints into a method constraint graph, and converts each sequence diagram into a method sequence graph. It then systematically generates test paths on the method sequence graph and, for each test path, generates a corresponding constraint logic programming predicate from the path and the method constraint graphs of the methods called along it. A constraint logic programming predicate represents the set of constraints on a test path; its solution is the test input and expected output satisfying those constraints. Finally, the test input and expected output are used to generate a Java test class. Users can then execute the generated Java test class in the JUnit testing framework to automatically test the class under test.
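To give a concrete, purely hypothetical picture of the final step, a test class generated for a path whose constraint solution is, say, balance = 100 and amount = 40 might look roughly like the JUnit code below. The Account class and its methods are invented here so that the snippet is self-contained; they stand in for whatever class the generator targets.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

/** Hypothetical example of a generated JUnit test class for one test path. */
public class AccountWithdrawPath1Test {

    /** Minimal stand-in for the class under test (invented for this sketch). */
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        int withdraw(int amount) { balance -= amount; return balance; }
        int getBalance() { return balance; }
    }

    @Test
    public void testWithdrawPath1() {
        // Test input produced by the constraint solver for this path
        Account account = new Account(100);

        // Method call sequence taken from the sequence diagram's test path
        int newBalance = account.withdraw(40);

        // Expected output derived from the constraints (e.g. OCL postconditions) on the path
        assertEquals(60, newBalance);
        assertEquals(60, account.getBalance());
    }
}
```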
APA, Harvard, Vancouver, ISO, and other styles
35

Liu, Pang Yu, and 劉邦渝. "Tool Support for Creating and Animating Sequence Diagrams." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/68153141837108046424.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Master's Program, Department of Computer Science and Information Engineering
Academic year 91 (ROC calendar)
Recently, design patterns have been used to solve software design problems. Since designing with patterns is a complex activity, tool support for pattern-oriented design is important. PDA (Pattern-oriented Design Assistant) is such a tool, which assists programmers in constructing a pattern-oriented design model. Currently, the prototype of PDA only supports the creation of the static structures of a software design. Our research adds the functionality of drawing and animating sequence diagrams to PDA. Our system can display the collaborations of a sequence diagram at the object level, the pattern level and the system level, and the user can freely navigate among the three levels. Our file format conforms to the XMI (XML Metadata Interchange) standard and makes use of the XMI Framework.
APA, Harvard, Vancouver, ISO, and other styles
36

Lei, Yao-cheng, and 雷曜誠. "A Test Case Generator Based on Sequence Diagrams." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/32308548301416272662.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
Academic year 96 (ROC calendar)
It is usually difficult to prove the correctness of software in the software development process; software errors can only be discovered by using verification techniques, a process generally called software testing. Formerly, software testing was performed as the last step of the software development process, meaning that verification only proceeded after the implementation was complete. Such late application of software testing is very cost-ineffective. Although software testing has since been integrated into every step of the development process and its effectiveness has increased greatly, the cost of software testing is still very high, so the automation of software testing becomes very important. This thesis describes a semi-automatic test case generator based on sequence diagrams. Sequence diagrams specify the dynamic behaviours among objects; the tool first generates test paths by analyzing these dynamic behaviours. After obtaining input data and expected output data for each test path from the user, the tool automatically generates Java testing code for each test path following the JUnit framework.
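The path-generation step can be imagined as enumerating the alternative orderings allowed by a diagram's combined fragments. The sketch below is a simplified, hypothetical Java illustration, not the tool described in the thesis: a diagram is reduced to a list of segments, an alt fragment contributes several alternative message lists, and all test paths are produced by taking one alternative per fragment. The message names are invented.

```java
import java.util.*;

/** Simplified sketch: enumerate test paths from a flattened sequence diagram with alt fragments. */
public class TestPathSketch {

    /** Each segment is either a single mandatory message or a set of alternative message lists. */
    static List<List<String>> expand(List<List<List<String>>> segments) {
        List<List<String>> paths = new ArrayList<>();
        paths.add(new ArrayList<>());
        for (List<List<String>> alternatives : segments) {
            List<List<String>> extended = new ArrayList<>();
            for (List<String> path : paths) {
                for (List<String> alternative : alternatives) {
                    List<String> longer = new ArrayList<>(path);
                    longer.addAll(alternative);
                    extended.add(longer);
                }
            }
            paths = extended;
        }
        return paths;
    }

    public static void main(String[] args) {
        // login(); then alt { validate() -> showMenu() } or { validate() -> showError() }; then logout()
        List<List<List<String>>> diagram = List.of(
                List.of(List.of("login()")),
                List.of(List.of("validate()", "showMenu()"),
                        List.of("validate()", "showError()")),
                List.of(List.of("logout()")));
        expand(diagram).forEach(System.out::println);
        // [login(), validate(), showMenu(), logout()]
        // [login(), validate(), showError(), logout()]
    }
}
```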
APA, Harvard, Vancouver, ISO, and other styles
37

LIANG, HONGZHI. "Sequence Diagrams Integration via Typed Graphs: Theory and Implementation." Thesis, 2009. http://hdl.handle.net/1974/5129.

Full text
Abstract:
It is widely accepted within the software engineering community that support for integration is necessary for requirement models. Several methodologies that have appeared in the literature, such as role-based software development, rely on some kind of integration. However, current integration techniques and their tool support are insufficient. In this research, we discuss our solution to the problem. More precisely, we present a general integration approach for scenario-based models, particularly for UML Sequence Diagrams, based on the colimit construction known from category theory. In our approach, Sequence Diagrams are represented by SD-graphs, a special kind of typed graphs. The merge algorithm for SD-graphs is an extension of existing merge operations on sets and graphs. On the one hand, the merge algorithm ensures traceability and guarantees key theoretical properties (e.g., "everything is represented and nothing extra is acquired" during the merge). On the other hand, our formalization of Sequence Diagrams as SD-graphs retains the graphical nature of Sequence Diagrams, yet is amenable to algebraic manipulations. Another important property of our process is that our approach is applicable to other kinds of models as long as they can be represented by typed graphs. A prototype Sequence Diagram integration tool following the approach has been implemented. The tool is not only a fully functional integration tool, but also served as a test bed for our theory and provided feedback for our theoretical framework. To support the discovery and specification of model relationships, we also present a list of high-level merge patterns in this dissertation. We believe our theory and tool are beneficial to both academia and industry, as the initial evaluation has shown that the ideas presented in this dissertation represent promising steps towards the more rigorous management of requirement models. We also present an approach connecting model transformation with source transformation and allowing an existing source transformation language (TXL) to be used for model transformation. Our approach leverages grammar generators to ease the task of creating model transformations and inherits many of the strengths of the underlying transformation language (e.g., efficiency and maturity).
Thesis (Ph.D, Computing) -- Queen's University, 2009-08-28 13:03:08.607
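The colimit-based merge can be approximated informally as a union of graphs in which elements identified by the correspondence are glued together and every merged element remembers where it came from. The Java sketch below is a drastically simplified, hypothetical illustration of that idea for plain labelled graphs, not an implementation of the SD-graph formalism; shared node names play the role of the correspondence, and all names are invented.

```java
import java.util.*;

/** Naive graph merge with traceability; shared node names act as the correspondence. */
public class GraphMergeSketch {

    record Edge(String source, String target) {}

    static class Graph {
        final Set<String> nodes = new LinkedHashSet<>();
        final Set<Edge> edges = new LinkedHashSet<>();
        Graph node(String n) { nodes.add(n); return this; }
        Graph edge(String a, String b) { node(a); node(b); edges.add(new Edge(a, b)); return this; }
    }

    /** Merges two graphs and records, for each node, the input graphs it was traced from. */
    static Graph merge(Graph left, Graph right, Map<String, Set<String>> traceability) {
        Graph merged = new Graph();
        for (String n : left.nodes)  { merged.node(n); traceability.computeIfAbsent(n, k -> new TreeSet<>()).add("left"); }
        for (String n : right.nodes) { merged.node(n); traceability.computeIfAbsent(n, k -> new TreeSet<>()).add("right"); }
        merged.edges.addAll(left.edges);
        merged.edges.addAll(right.edges);   // shared elements are not duplicated, nothing extra is added
        return merged;
    }

    public static void main(String[] args) {
        Graph sd1 = new Graph().edge("User", "ATM").edge("ATM", "Bank");
        Graph sd2 = new Graph().edge("User", "ATM").edge("ATM", "Printer");
        Map<String, Set<String>> trace = new LinkedHashMap<>();
        Graph merged = merge(sd1, sd2, trace);
        System.out.println(merged.nodes);                     // [User, ATM, Bank, Printer]
        System.out.println(merged.edges.size() + " edges");   // 3 edges
        System.out.println(trace);  // {User=[left, right], ATM=[left, right], Bank=[left], Printer=[right]}
    }
}
```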
APA, Harvard, Vancouver, ISO, and other styles
38

Bennett, Chris. "Tool features for understanding large reverse engineered sequence diagrams." Thesis, 2008. http://hdl.handle.net/1828/1004.

Full text
Abstract:
Originally devised as a notation to capture scenarios during analysis and design, sequence diagrams can also aid understanding of existing software through visualization of execution call traces. Reverse engineered sequence diagrams are typically huge and designing tools to help users cope with the size and complexity of such traces is a major problem. While preprocessing may be necessary to reduce the complexity of a sequence diagram, interactive tool support is critical to help the user explore and understand the resulting diagram. This thesis examines tool features necessary to effectively support sequence diagram exploration by reverse engineers. Features were derived from a literature survey and empirically evaluated using an exploratory user study. The tool features were further evaluated by situating them within theories of cognitive support.
APA, Harvard, Vancouver, ISO, and other styles
39

Soares, João António Custódio. "Automatic Model Transformation from UML Sequence Diagrams to Coloured Petri Nets." Dissertação, 2017. https://repositorio-aberto.up.pt/handle/10216/106158.

Full text
Abstract:
The dependence of our society on ever more complex software systems makes the task of testing and validating this software increasingly important and challenging. In many cases, multiple independent and heterogeneous systems form a system of systems responsible for providing services to users, and current test automation tools and techniques provide little support for this task. This dissertation is part of a larger project that aims to produce a Model-Based Testing tool that automates the process of testing distributed systems from UML sequence diagrams. These diagrams graphically define the interaction between the different modules of a system and its actors in a sequential way, facilitating the understanding of the system's operation and allowing the definition of critical sections of distributed systems, such as situations of concurrency and parallelism. This dissertation develops the component of the project in charge of converting the descriptive diagrams of the system into Petri nets. Petri nets are a modelling formalism well suited to describing distributed systems, thanks to their ability to define communication and synchronization tasks and the possibility of executing them at runtime using tools such as CPN Tools. The objective is to define model-to-model translation rules that allow the conversion of models and their integration with the target system, taking advantage of existing model transformation frameworks (e.g. EMF - Eclipse Modeling Framework). With this, the complexity of the system analysis is hidden from the user (the software tester), introducing the possibility of automated generation and execution of tests from the test case diagrams and presenting the results (errors and code coverage) visually. This document is divided into four sections. The first section introduces the context and motivation for the dissertation and defines the problem and goals. The second section summarizes the concepts required to understand the dissertation, the state of the art in this domain, and an analysis of the tools used to implement the solution. The third section explains the architecture and technological choices of the proposed solution. Finally, the last section presents the conclusions of this study and defines the future work plan.
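A core transformation rule of this kind can be caricatured as: every message in the sequence diagram becomes a Petri net transition, with an input place representing "the previous step has completed" and an output place representing "the receiver has received the message". The Java sketch below is a hand-written, hypothetical illustration of such a rule, not the EMF-based transformation developed in the dissertation; all names are invented.

```java
import java.util.*;

/** Hypothetical message-to-transition rule: a toy stand-in for a model-to-model transformation. */
public class SdToPetriNetSketch {

    record Message(String sender, String receiver, String name) {}
    record Transition(String name, String inputPlace, String outputPlace) {}

    /** Chains the messages of one scenario: the output place of message i feeds the transition of message i+1. */
    static List<Transition> transform(List<Message> scenario) {
        List<Transition> net = new ArrayList<>();
        String previousPlace = "p_start";
        for (Message m : scenario) {
            String outputPlace = "p_" + m.receiver() + "_got_" + m.name();
            net.add(new Transition("t_" + m.name(), previousPlace, outputPlace));
            previousPlace = outputPlace;   // sequencing: the next message can only fire afterwards
        }
        return net;
    }

    public static void main(String[] args) {
        List<Message> scenario = List.of(
                new Message("User", "ATM", "insertCard"),
                new Message("ATM", "Bank", "verifyPin"),
                new Message("Bank", "ATM", "pinOk"));
        transform(scenario).forEach(System.out::println);
    }
}
```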
APA, Harvard, Vancouver, ISO, and other styles
40

Soares, João António Custódio. "Automatic Model Transformation from UML Sequence Diagrams to Coloured Petri Nets." Master's thesis, 2017. https://repositorio-aberto.up.pt/handle/10216/106158.

Full text
Abstract:
The dependence of our society on ever more complex software systems makes the task of testing and validating this software increasingly important and challenging. In many cases, multiple independent and heterogeneous systems form a system of systems responsible for providing services to users, and current test automation tools and techniques provide little support for this task. This dissertation is part of a larger project that aims to produce a Model-Based Testing tool that automates the process of testing distributed systems from UML sequence diagrams. These diagrams graphically define the interaction between the different modules of a system and its actors in a sequential way, facilitating the understanding of the system's operation and allowing the definition of critical sections of distributed systems, such as situations of concurrency and parallelism. This dissertation develops the component of the project in charge of converting the descriptive diagrams of the system into Petri nets. Petri nets are a modelling formalism well suited to describing distributed systems, thanks to their ability to define communication and synchronization tasks and the possibility of executing them at runtime using tools such as CPN Tools. The objective is to define model-to-model translation rules that allow the conversion of models and their integration with the target system, taking advantage of existing model transformation frameworks (e.g. EMF - Eclipse Modeling Framework). With this, the complexity of the system analysis is hidden from the user (the software tester), introducing the possibility of automated generation and execution of tests from the test case diagrams and presenting the results (errors and code coverage) visually. This document is divided into four sections. The first section introduces the context and motivation for the dissertation and defines the problem and goals. The second section summarizes the concepts required to understand the dissertation, the state of the art in this domain, and an analysis of the tools used to implement the solution. The third section explains the architecture and technological choices of the proposed solution. Finally, the last section presents the conclusions of this study and defines the future work plan.
APA, Harvard, Vancouver, ISO, and other styles
41

Campean, I. Felician, and Unal Yildirim. "Enhanced sequence diagram for function modelling of complex systems." 2017. http://hdl.handle.net/10454/12111.

Full text
Abstract:
This paper introduces a novel method referred to as Enhanced Sequence Diagram (ESD) to support rigorous functional modelling of complex multidisciplinary systems. The ESD concept integrates an exchanges based functional requirements reasoning based on a coherent graphical schema, integrated with the system operational analysis based on a sequence diagram. The effectiveness of the method to support generic function modelling of complex multidisciplinary systems at the early conceptual design stages is discussed in conjunction with an electric vehicle powertrain example, followed by an assessment of potential impact for broader application of the method in the industry.
APA, Harvard, Vancouver, ISO, and other styles
42

張淯僑. "Fourth-grade students’reading comprehension on the sequence of definition diagrams and definition texts of quadrilaterals." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/03594747442503512442.

Full text
Abstract:
Master's thesis
National Hsinchu University of Education
Graduate Institute of Mathematics and Science Education
Academic year 104 (ROC calendar)
In this study, three different ways of presenting the definition diagrams and definition texts of quadrilaterals were designed in order to explore students' reading comprehension. The subjects were 18 fourth-grade students who had not yet learned the geometric content: 6 low achievers, 6 average students, and 6 high achievers. We aimed to investigate how the sequence in which definition diagrams and definition texts are presented influences students' reading comprehension, focusing on three reading levels: "focusing on and retrieving explicitly stated information", "interpreting and integrating ideas and information", and "examining and evaluating content, language and textual elements". The findings are as follows. 1. Different ways of presenting definition diagrams and definition texts did influence students' reading comprehension of quadrilaterals. (1) When the definition diagrams were presented first and then the definition texts, fourth-grade students performed better on both "interpreting and integrating ideas and information" and "examining and evaluating content, language and textual elements"; they were more able to justify the quadrilateral definitions and infer the relations among the diagrams. (2) When the definition texts were presented first and then the definition diagrams, fourth-grade students, especially low achievers, were more likely to perform worse on reading comprehension, as they thought the definition texts for the different quadrilaterals were the same. (3) When the definition diagrams and definition texts were presented at the same time, students performed better on "focusing on and retrieving explicitly stated information". Students were able to clarify a definition when observing definition diagrams they had learned previously; however, they could not infer the relevant geometric properties from diagrams they had not seen before. 2. Findings on reading comprehension of geometric diagrams. We found that the concept images evoked by the definition diagrams play a more important role in student learning than the definition texts. Teacher questioning and the manipulation of concrete objects helped students retrieve geometric meaning and identify prototype and non-prototype diagrams. Regarding the labels and marks on the diagrams, students who correctly identified them could successfully infer geometric properties from the given diagrams, whereas students who did not understand their meaning misread the diagrams accordingly. 3. Findings on reading comprehension of definition texts. (1) The property "two pairs of opposite sides are parallel" involves three chunks of information, which significantly influenced students' understanding of the meaning of the quadrilaterals. The number of words in the definition texts also significantly influenced students in establishing the relationships among the different definition diagrams: students sometimes incorrectly treated the definition with more chunks of information as the superset and the one with fewer chunks as its subset, or relied on the shared parts of the definition texts to judge the relationship between the quadrilateral figures. (2) We also noted that students confused mathematical language with everyday language, which resulted in misunderstanding of the meaning of geometric properties.
Among the four types of quadrilaterals (square, rectangle, rhombus, and parallelogram), teaching that starts from the definition of the rectangle contributes to students' understanding of "opposite sides", "parallel", "congruent sides" and "right angles". Keywords: graphic order, adverbial conjunction, mental image
APA, Harvard, Vancouver, ISO, and other styles
43

Yildirim, Unal, and I. Felician Campean. "Functional modelling of complex multi‑disciplinary systems using the enhanced sequence diagram." 2020. http://hdl.handle.net/10454/17978.

Full text
Abstract:
This paper introduces an Enhanced Sequence Diagram (ESD) as the basis for a structured framework for the functional analysis of complex multidisciplinary systems. The ESD extends the conventional sequence diagrams (SD) by introducing a rigorous functional flow-based modelling schemata to provide an enhanced basis for model-based functional requirements and architecture analysis in the early systems design stages. The proposed ESD heuristics include the representation of transactional and transformative functions required to deliver the use case sequence, and fork and join nodes to facilitate analysis of combining and bifurcating operations on flows. A case study of a personal mobility device is used to illustrate the deployment of the ESD methodology in relation to three common product development scenarios: (i) reverse engineering, (ii) the introduction of a specific technology to an existent system; and (iii) the introduction of a new feature as user-centric innovation for an existing system, at a logical design level, without reference to any solution. The case study analysis provides further insights into the effectiveness of the ESD to support function modelling and functional requirements capture, and architecture development. The significance of this paper is that it establishes a rigorous ESD-based functional analysis methodology to guide the practitioner with its deployment, facilitating its impact to both the engineering design and systems engineering communities, as well as the design practice in the industry.
APA, Harvard, Vancouver, ISO, and other styles
44

Stevenson, Sean. "Investigating Software Reconnaissance as a Technique to Support Feature Location and Program Analysis Tasks using Sequence Diagrams." Thesis, 2013. http://hdl.handle.net/1828/5112.

Full text
Abstract:
Software reconnaissance is a very useful technique for locating features in software systems that are unfamiliar to a developer. The technique was, however, limited by the need to execute multiple test cases and record the components used in each one. Tools that recorded the execution traces of a program made it more practical to use the software reconnaissance technique. Diver was developed as an execution trace visualization tool using sequence diagrams to display the dynamic behaviour of a program. The addition of software reconnaissance to Diver and its trace-focused user interface feature improved the filtering of the Eclipse environment based on the contents of execution traces and led to a very powerful program comprehension tool. Myers' work on Diver was grounded in cognitive support theory research into how to build tools. He conducted a user study to validate the work done on Diver, but the study's findings were limited due to a number of issues. In this thesis, we expand on the study run by Myers, improve on its design, and investigate if software reconnaissance improves Diver's effectiveness and efficiency for program comprehension tasks. We also analyze the influence of software reconnaissance on the interactions of Diver's users, which allows us to identify successful usage patterns for completing program comprehension and feature location tasks. We research the connection between cognitive support theory and the design of Diver and use the study to attempt to validate the cognitive support offered by Diver. Finally, we present the results of a survey of the study participants to determine the usefulness, ease of use, and ease of learning of the tool.
APA, Harvard, Vancouver, ISO, and other styles
45

Liao, Jian-chih, and 廖建智. "Transformation from Sequence Diagram to Class Diagram." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/09705155237689350207.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Information Management
Academic year 92 (ROC calendar)
Modeling software with object-oriented techniques and the Unified Modeling Language (UML) has become the new paradigm of modern information systems analysis and design. Selonen et al. (2003) proposed a framework for transformation within UML; however, they did not precisely define the operations and the rules they use. This research presents a systematic method, which enhances Selonen et al.'s work, to transform the sequence diagram into the class diagram. The transformation process consists of three phases: (1) mapping the given sequence diagram to a notation-independent and semantically equivalent minimal model, (2) transforming the minimal model of the sequence diagram into the minimal model of a class diagram, and (3) mapping the minimal model to a class diagram. A real-world case is used to illustrate the concepts, application, and advantages of the proposed method. With this approach, the system developer can transform a sequence diagram into a class diagram automatically and thereby enhance the efficiency of system development.
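The intuition behind such a transformation can be shown with a toy rule: each lifeline becomes a class, each message becomes an operation owned by the receiver's class, and each sender-receiver pair becomes an association. The following Java sketch is a simplified, hypothetical illustration of that rule, not the three-phase method of the thesis; the class and message names are invented.

```java
import java.util.*;

/** Toy derivation of class-diagram facts (classes, operations, associations) from messages. */
public class SeqToClassSketch {

    record Message(String sender, String receiver, String operation) {}

    public static void main(String[] args) {
        List<Message> sequenceDiagram = List.of(
                new Message("OrderUI", "OrderController", "placeOrder"),
                new Message("OrderController", "Inventory", "reserveItems"),
                new Message("OrderController", "PaymentGateway", "charge"));

        Map<String, Set<String>> operationsPerClass = new LinkedHashMap<>();
        Set<String> associations = new LinkedHashSet<>();
        for (Message m : sequenceDiagram) {
            operationsPerClass.computeIfAbsent(m.sender(), k -> new LinkedHashSet<>());
            operationsPerClass.computeIfAbsent(m.receiver(), k -> new LinkedHashSet<>())
                              .add(m.operation() + "()");           // the receiver owns the operation
            associations.add(m.sender() + " -- " + m.receiver());   // the caller needs a link to the callee
        }
        operationsPerClass.forEach((cls, ops) -> System.out.println("class " + cls + " " + ops));
        associations.forEach(System.out::println);
    }
}
```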
APA, Harvard, Vancouver, ISO, and other styles
46

Hsu, Chih-Tung, and 許志同. "A Methodology for Transformation from Sequence Diagram to Class Diagram." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/47213188397963439856.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Information Management
Academic year 94 (ROC calendar)
Today, modeling software with the Unified Modeling Language (UML) and computer-aided software engineering (CASE) tools has become the mainstream approach for object-oriented systems analysis and design. To enhance the degree of automation and reuse in the system development process, prior research suggested that most parts of the class diagram can be transformed directly from the sequence diagram. However, an explicit guideline for the transformation is lacking. This study presents a methodology, extended from Selonen et al. (2003), to transform the sequence diagram into the class diagram. A real-world case using the integrated techniques is presented to illustrate the concepts, application, and advantages of the proposed approach. With this approach, the system developer can transform most parts of the sequence diagram into its associated class diagram automatically and thereby enhance the efficiency of system development.
APA, Harvard, Vancouver, ISO, and other styles
47

LIN, HSIN-YU, and 林信又. "UML Sequence Diagram Transformation-Based on XML." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/18777877059699983788.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Computer Science and Information Engineering (Master's and PhD Program)
Academic year 94 (ROC calendar)
Using the Unified Modeling Language (UML) to represent object-oriented analysis and design (OOA/D) has clearly become a standard for information system development in this era of software engineering. UML defines many models that present an information system from different views and at different levels of abstraction; among these models there are interdependent relations and overlapping parts. Although many researchers have proposed transformation theories between the models, they did not define the transformation mappings and detailed steps. In this study we propose a procedure that transforms a sequence diagram into a class diagram by using the metamodel. We propose a model transformation system that uses XML specifications to describe UML models, and we verify that the proposed transformation procedure works correctly. By means of the proposed system, a developer can transform a sequence diagram into a class diagram automatically, so that the software system can be developed more effectively, since the developer can manually refine the static structure to obtain the final class diagram. Another advantage is that using XML to represent the data enhances exchangeability and reusability.
APA, Harvard, Vancouver, ISO, and other styles
48

Myers, Del. "Improving the scalability of tools incorporating sequence diagram visualizations of large execution traces." Thesis, 2011. http://hdl.handle.net/1828/3444.

Full text
Abstract:
Sequence diagrams are a popular way to visualize dynamic software execution traces. However, they tend to be extremely large, causing significant scalability problems. Not only is it difficult from a technical perspective to build interactive sequence diagram tools that are able to display large traces, it is also difficult for people to understand them. While cognitive support theory exists to help cope with the latter problem, no work to date has described how to implement the cognitive support theory in sequence diagram tools. In this thesis, we tackle both the technical and cognitive support problems. First, we use previous research about cognitive support feature requirements to design and engineer an interactive, widget-based sequence diagram visualization. After implementing the visualization, we use benchmarks to test its scalability and ensure that it is efficient enough to be used in realistic applications. Then, we present two novel approaches for reducing the cognitive overhead required to understand large sequence diagrams. The first approach is to compact sequence diagrams using loops found in source code. We present an algorithm that is able to compact diagrams by up to 80%. The second approach is called the trace-focused user interface, which uses software reconnaissance to create a degree-of-interest model to help users focus on particular software features and navigate to portions of the sequence diagram that are related to those features. We present a small user study that indicates the viability of the trace-focused user interface. Finally, we present the results of a small survey indicating that users of the software find both the loop compaction and the trace-focused user interface useful.
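Loop compaction of a trace can be illustrated with a much simpler, hypothetical version of the idea: scan the list of calls and replace consecutive repetitions of a block with a single loop marker. The Java sketch below does this greedily for blocks up to a fixed length and is only meant to convey the flavour of the technique, not Diver's source-code-driven algorithm; the trace contents are invented.

```java
import java.util.*;

/** Greedy compaction of immediately repeated call blocks into loop markers (simplified sketch). */
public class LoopCompactionSketch {

    static List<String> compact(List<String> calls, int maxBlock) {
        List<String> out = new ArrayList<>();
        int i = 0;
        while (i < calls.size()) {
            int bestLen = 1, bestReps = 1;
            for (int len = maxBlock; len >= 1; len--) {
                int reps = 1;
                while (i + (reps + 1) * len <= calls.size()
                        && calls.subList(i, i + len)
                                .equals(calls.subList(i + reps * len, i + (reps + 1) * len))) {
                    reps++;
                }
                if (reps > 1) { bestLen = len; bestReps = reps; break; }  // prefer the longest repeating block
            }
            if (bestReps > 1) {
                out.add("loop " + bestReps + "x " + calls.subList(i, i + bestLen));
                i += bestLen * bestReps;
            } else {
                out.add(calls.get(i++));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> trace = List.of("init", "read", "parse", "read", "parse", "read", "parse", "close");
        System.out.println(compact(trace, 3));
        // [init, loop 3x [read, parse], close]
    }
}
```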
APA, Harvard, Vancouver, ISO, and other styles
49

Hua, Jin-di, and 花金地. "A Supporting Tool for Establishing Aspectual Code from Aspect-Enhanced Goal-Driven Sequence Diagram." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/40262260270798618614.

Full text
Abstract:
Master's thesis
National Central University
Department of Computer Science and Information Engineering (In-service Master's Program)
Academic year 100 (ROC calendar)
This thesis defines six conversion rules, one for each of the six operators introduced in previous research [1]. Based on these rules, a conversion tool was developed that automatically converts behaviours modelled in an Aspect-Enhanced Sequence Diagram (AESD) [1] into aspect code, so that a consistent representation is kept across the requirements and implementation phases of the software development life cycle. Finally, a Meeting Scheduler System [1] was implemented by modelling the early-aspect behaviours in an AESD and converting them into an aspect code template, in order to verify the consistency between the requirements and implementation phases and to validate the conversion mechanism.
APA, Harvard, Vancouver, ISO, and other styles
50

Zamboj, Michal. "Matematická teorie žonglování." Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-341198.

Full text
Abstract:
Title: The mathematical theory of juggling Author: Bc. Michal Zamboj Department: Department of Mathematics Education Supervisor: RNDr. Antonín Slavík, Ph.D. Abstract: This diploma thesis extends the bachelor thesis of the same name. It deals with the graphic representation of juggling sequences by the cyclic diagram. Using the Burnside theorem and cyclic diagrams, we calculate the number of all generators of juggling sequences. The relation between juggling and the theory of braids is described as well. The mathematical model of inside and outside throws is built from an empirical observation of the trajectories of balls. Braids of juggling sequences and their attributes are presented using a real model of a ladder. A sketch of the proof of the theorem that any braid is juggleable is given as well.
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography