
Dissertations / Theses on the topic 'Incremental Model'


Consult the top 50 dissertations / theses for your research on the topic 'Incremental Model.'


1

Ogunyomi, Babajide J. "Incremental model-to-text transformation." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/14244/.

Full text
Abstract:
Model-driven engineering (MDE) promotes the use of abstractions to simplify the development of complex software systems. Through several model management tasks (e.g., model verification, re-factoring, model transformation), many software development tasks can be automated. For example, model-to-text (M2T) transformations are used to realize textual development artefacts (e.g., documentation, configuration scripts, code) from underlying source models. Despite the importance of M2T transformation, contemporary M2T languages lack support for developing transformations that scale. As MDE is applied to systems of increasing size and complexity, a lack of scalable M2T transformations and other model management tasks hinders industrial adoption. This is largely because model management tools do not support efficient propagation of changes from models to other development artefacts. As such, re-synchronising generated textual artefacts with the underlying system models can take a considerable amount of time due to redundant re-computations. This thesis investigates scalability in the context of M2T transformation and proposes two novel techniques that enable efficient incremental change propagation from models to generated textual artefacts. In contrast to the existing incremental M2T transformation technique, which relies on model differencing, our techniques employ a fundamentally different approach to incremental change propagation: they use a form of runtime analysis that identifies the impact of source model changes on generated textual artefacts. The structures produced by this runtime analysis are used to perform efficient (scalable) incremental transformations. This claim is supported by the results of an empirical evaluation, which shows that the techniques proposed in this thesis attain an average reduction of 60% in transformation execution time compared to non-incremental (batch) transformation.
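The runtime-analysis idea in this abstract — record which model properties each generated artefact reads, then re-run only the transformations affected by a change — can be illustrated with a small sketch. This is not the thesis's implementation; `TracedModel`, the template lambdas, and the change-set interface are invented for illustration.

```python
class TracedModel:
    """Wraps a model dict and records property accesses per artefact."""
    def __init__(self, data):
        self.data = data
        self.trace = {}      # artefact name -> set of accessed property keys
        self.current = None  # artefact currently being generated

    def get(self, key):
        if self.current is not None:
            self.trace.setdefault(self.current, set()).add(key)
        return self.data[key]

def generate(model, templates):
    """Batch run: generate every artefact, building the access trace."""
    out = {}
    for name, template in templates.items():
        model.current = name
        out[name] = template(model)
    model.current = None
    return out

def regenerate(model, templates, outputs, changed_keys):
    """Incremental run: re-execute only templates whose trace hits the change set."""
    for name, template in templates.items():
        if model.trace.get(name, set()) & changed_keys:
            model.current = name
            outputs[name] = template(model)
    model.current = None
    return outputs
```

After an initial `generate`, a change to one model property triggers regeneration of only the artefacts whose trace contains that property, which is the essence of trace-based incremental M2T.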
APA, Harvard, Vancouver, ISO, and other styles
2

Forsman, Mikael. "A Model Implementation of Incremental Risk Charge." Thesis, KTH, Matematisk statistik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-102752.

Full text
Abstract:
In 2009 the Basel Committee on Banking Supervision released the final guidelines for computing capital for the Incremental Risk Charge, a complement to traditional Value at Risk intended to measure migration risk and default risk in the trading book. Before Basel III, banks will have to develop their own Incremental Risk Charge models following these guidelines. This thesis describes the development of such a model, which computes the capital charge for a portfolio of corporate bonds. Essential input parameters such as the credit ratings of the underlying issuers, credit spreads, recovery rates at default, liquidity horizons, and correlations among the positions in the portfolio are discussed. The model also requires a transition matrix with probabilities of migrating between different credit states, estimated from historical data from Moody's rating institute. Several sensitivity analyses and stress tests are then made by generating different scenarios, running them in the model, and comparing the results to a base case. As it turns out, default risk accounts for most of the Incremental Risk Charge.
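The simulation core the abstract describes — drawing rating migrations from a transition matrix and taking a high quantile of the resulting loss distribution — can be sketched as follows. The 3-state matrix, bond values, and recovery value are invented toy numbers, not Moody's data, and a real IRC model would also handle liquidity horizons and correlations, which are omitted here.

```python
import numpy as np

# Toy 3-state transition matrix: state 0 = high grade, 1 = low grade, 2 = default.
P = np.array([[0.95, 0.04, 0.01],
              [0.05, 0.90, 0.05],
              [0.00, 0.00, 1.00]])   # default is absorbing

# Bond value per rating state; 40 stands in for the recovery value at default.
bond_values = np.array([102.0, 97.0, 40.0])

def simulate_losses(start_state, n_paths, rng):
    """Simulate one-period rating migrations; return the loss distribution."""
    states = rng.choice(3, size=n_paths, p=P[start_state])
    return bond_values[start_state] - bond_values[states]

rng = np.random.default_rng(0)
losses = simulate_losses(1, 100_000, rng)
# IRC-style capital charge: a high quantile of the loss distribution.
irc = np.quantile(losses, 0.999)
```

With a 5% default probability from state 1, the 99.9% quantile lands on the default loss, illustrating the abstract's finding that default risk dominates the charge.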
APA, Harvard, Vancouver, ISO, and other styles
3

Hinkel, Georg [Verfasser]. "Implicit Incremental Model Analyses and Transformations / Georg Hinkel." Karlsruhe : KIT Scientific Publishing, 2021. http://d-nb.info/1239420544/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Fang, Yimai. "Proposition-based summarization with a coherence-driven incremental model." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/287468.

Full text
Abstract:
Summarization models which operate on meaning representations of documents have been neglected in the past, although they are a very promising and interesting class of methods for summarization and text understanding. In this thesis, I present one such summarizer, which uses the proposition as its meaning representation. My summarizer is an implementation of Kintsch and van Dijk's model of comprehension, which uses a tree of propositions to represent the working memory. The input document is processed incrementally in iterations. In each iteration, new propositions are connected to the tree under the principle of local coherence, and then a forgetting mechanism is applied so that only a few important propositions are retained in the tree for the next iteration. A summary can be generated using the propositions which are frequently retained. Originally, this model was only played through by hand by its inventors using human-created propositions. In this work, I turned it into a fully automatic model using current NLP technologies. First, I created propositions by obtaining and then transforming a syntactic parse. Second, I devised algorithms to numerically evaluate alternative ways of adding a new proposition, as well as to predict necessary changes in the tree. Third, I compared different methods of modelling local coherence, including coreference resolution, distributional similarity, and lexical chains. In the first group of experiments, my summarizer realizes summary propositions by sentence extraction. These experiments show that my summarizer outperforms several state-of-the-art summarizers. The second group of experiments concerns abstractive generation from propositions, which is a collaborative project. I have investigated the option of compressing extracted sentences, but generation from propositions has been shown to provide better information packaging.
APA, Harvard, Vancouver, ISO, and other styles
5

ラハディアン, ユスフ, and Rahadian Yusuf. "Evolving user-specific emotion recognition model via incremental genetic programming." Thesis, https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13044976/?lang=0, 2017. https://doors.doshisha.ac.jp/opac/opac_link/bibid/BB13044976/?lang=0.

Full text
Abstract:
This research proposes a model to tackle challenges common in emotion recognition based on facial expression. First, we use a pervasive sensor and environment, enabling natural expressions from the user, as opposed to the unnatural expressions of a large dataset. Second, the model analyzes relevant temporal information, unlike many other studies. Third, we employ a user-specific approach with adaptation to the user. We also show that the model evolved by genetic programming can be analyzed to see how it actually works, rather than being a black-box model.
Doctor of Philosophy in Engineering
Doshisha University
APA, Harvard, Vancouver, ISO, and other styles
6

Mao, Ai-sheng. "A Theoretical Network Model and the Incremental Hypercube-Based Networks." Thesis, University of North Texas, 1995. https://digital.library.unt.edu/ark:/67531/metadc277860/.

Full text
Abstract:
The study of multicomputer interconnection networks is an important area of research in parallel processing. We introduce vertex-symmetric Hamming-group graphs as a model to design a wide variety of network topologies including the hypercube network.
APA, Harvard, Vancouver, ISO, and other styles
7

Josimovic, Aleksandra. "AI as a Radical or Incremental Technology Tool Innovation." Thesis, KTH, Industriell Management, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230603.

Full text
Abstract:
Researchers have found that, throughout history, a common challenge for companies across different industries is that their ability to leverage and capture value from a technological innovation is strongly influenced by the company's dominant business model, an established framework through which assessment takes place. The overall purpose of this study is to provide a deeper understanding of the role that a company's dominant business model has on the assessment of the impact that a new technological innovation, in this case AI, will have on the company and the market on which the company operates. This thesis is partially exploratory and partially descriptive, with a qualitative and deductive nature. In order to fulfil this purpose, a case-study research strategy was used, in which empirical data were collected from interviews with 47 of the company's top executives from different hierarchical levels and business units in Sweden, Switzerland, the USA, Germany, and Finland. A theoretical framework was created that describes how AI as a new technology tool is perceived from Company X's perspective, either as a radical, game-changing innovation technology tool or an incremental one, and that examines the role the dominant business model has on this perception. The developed framework had its foundation in previous research concerning innovation and business model theories. The data collected from the company's executives were then analyzed and compared to the model. The most significant findings suggest that AI as a new technology tool is perceived as a game-changing, radical innovation tool for some areas within Company X, and that the company's dominant business model profoundly influences this perception.
APA, Harvard, Vancouver, ISO, and other styles
8

Hinkel, Georg [Verfasser], and R. [Akademischer Betreuer] Reussner. "Implicit Incremental Model Analyses and Transformations / Georg Hinkel ; Betreuer: R. Reussner." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1163320390/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sindhu, Muddassar. "Incremental Learning and Testing of Reactive Systems." Licentiate thesis, KTH, Teoretisk datalogi, TCS, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37763.

Full text
Abstract:
This thesis concerns the design, implementation, and evaluation of a specification-based testing architecture for reactive systems using the paradigm of learning-based testing. As part of this work we have designed, verified, and implemented new incremental learning algorithms for DFA and Kripke structures. These have been integrated with the NuSMV model checker to give a new learning-based testing architecture. We have evaluated our architecture on case studies and shown that the method is effective.
APA, Harvard, Vancouver, ISO, and other styles
10

Balasubramanian, Harish. "Incremental Design Migration Support in Industrial Control Systems Development." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/50990.

Full text
Abstract:
Industrial control systems (ICSes) play an extremely important role in the world around us. They have helped reduce human effort and contributed to the automation of processes in oil refining, power generation, food and beverage, and production lines. With advancements in technology, embedded platforms have emerged as ideal platforms for implementing such ICSes. Traditional approaches to ICS design involve switching from a model or modeling environment directly to a real-world implementation. Errors have the potential to go unnoticed in the modeling environment and have a tendency to affect real control systems. Current models for error identification are complex and appreciably affect the ICS design process. This thesis adds an additional layer to ICS design: an Interface Abstraction Process (IAP). The IAP supports incremental migration from a modeling environment to a real physical environment by supporting intermediate design versions. Implementation of the IAP is simple and independent of control system complexity. Early error identification is possible since intermediate versions are supported. Existing control system designs can be modified minimally to facilitate the addition of this extra layer. The overhead of adding the IAP is measured and analysed. With early validation, the actual behavior of the ICS in the real physical setting matches the expected behavior in the modeling environment. This approach adds a significant amount of latency to existing ICSes without otherwise affecting the design process. Since the IAP helps in early design validation, it can be removed before deployment in the real world.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
11

Pinkel, Christoph [Verfasser], and Heiner [Akademischer Betreuer] Stuckenschmidt. "i3MAGE: Incremental, Interactive, Inter-Model Mapping Generation / Christoph Pinkel. Betreuer: Heiner Stuckenschmidt." Mannheim : Universitätsbibliothek Mannheim, 2016. http://d-nb.info/1104129094/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Rodrigues, Thiago Fredes. "A probabilistic and incremental model for online classification of documents : DV-INBC." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/142171.

Full text
Abstract:
Recently the fields of Data Mining and Machine Learning have seen a rapid increase in the creation and availability of data repositories, mainly due to the rapid creation of such data in social networks. A large part of those data consists of text documents. The information stored in such texts can range from a description of a user profile to common textual topics such as politics, sports, and science, information very useful for many applications. Moreover, since much of this data is created in streams, scalable and on-line algorithms are desired, because tasks like the organization and exploration of large document collections would benefit from them. In this thesis an incremental, on-line, and probabilistic model for document classification is presented in an effort to tackle this problem. The algorithm is called DV-INBC and is an extension of the INBC algorithm. The two main characteristics of DV-INBC are: only a single scan over the data is necessary to create a model of it, and the vocabulary of the data need not be known a priori. Therefore, little knowledge about the data stream is needed. To assess its performance, tests using well-known datasets are presented.
APA, Harvard, Vancouver, ISO, and other styles
13

Jayyousi, Enan Fakhri. "Evaluation of Flood Routing Techniques for Incremental Damage Assessment." DigitalCommons@USU, 1994. https://digitalcommons.usu.edu/etd/4529.

Full text
Abstract:
Incremental damage assessment is a tool used to assess the justification for expensive modifications of inadequate dams. The input data for incremental damage assessment are the output from the breach analysis and flood routing. For this reason, flood routing should be conducted carefully. Distorted results from the flood-routing technique or unstable modeling of the problem will distort the results of an incremental damage assessment, because an error in the estimated incremental stage will cause a corresponding error in the estimated incremental damages. The objectives of this study were (1) to perform a comprehensive survey of the available dam-break flood-routing techniques, (2) to evaluate the performance of commonly used flood-routing techniques for predicting failure and no-failure stages, incremental stage, average velocities, and travel times, and (3) to develop a set of recommendations upon which future applications of dam-break models can be based. The flood-routing techniques evaluated cover dynamic routing as contained in DAMBRK, and kinematic, Muskingum-Cunge, and normal-depth storage routing as contained in the Hydrologic Engineering Center's HEC-1. These techniques were evaluated against the more accurate two-dimensional flood-routing technique contained in the diffusion hydrodynamic model (DHM). Results and errors from the different techniques for different downstream conditions were calculated and conclusions drawn. The effects of the errors on the incremental stage, and the errors in the incremental stage itself, were estimated. Overall, the performance of the one-dimensional techniques in predicting peak stages and assessing a two-foot criterion showed that DAMBRK did best, and normal-depth storage and outflow did worst. This overall ranking matches the degree of simplification in representing the true flood-routing situation. However, in some circumstances DAMBRK performed worst, and normal-depth storage and outflow outperformed either the Muskingum-Cunge or kinematic techniques. Thus, it is important to understand the specific performance characteristics of all the methods when selecting one for a flood-routing application.
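Of the one-dimensional techniques compared in this abstract, classical Muskingum routing is simple enough to sketch in a few lines. The K, X, and dt values below are illustrative; this is the textbook recurrence, not the HEC-1 or DAMBRK implementations evaluated in the thesis.

```python
def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0):
    """Route an inflow hydrograph through a channel reach.

    K: storage time constant (hours), X: weighting factor (0..0.5),
    dt: time step (hours). Requires dt > 2*K*X so c0 stays positive.
    """
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom
    # c0 + c1 + c2 == 1, so a steady inflow passes through unchanged
    outflow = [inflow[0]]
    for i in range(1, len(inflow)):
        outflow.append(c0 * inflow[i] + c1 * inflow[i - 1] + c2 * outflow[-1])
    return outflow
```

Routing a flood pulse with this recurrence attenuates and delays the peak, which is the behavior the study's stage comparisons hinge on.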
APA, Harvard, Vancouver, ISO, and other styles
14

Alameddin, Shadi [Verfasser]. "A semi-incremental model order reduction approach for fatigue damage computations / Shadi Alameddin." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1209267985/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Kitchen, Ryan L. "Improving Steering Module Efficiency for Incremental Loading Finite Element Numeric Models." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1248.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Hao. "Incremental sheet forming process : control and modelling." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:a80370f5-2287-4c6b-b7a4-44f06211564f.

Full text
Abstract:
Incremental Sheet Forming (ISF) is a progressive metal forming process in which deformation occurs locally around the point of contact between a tool and the metal sheet. The final work-piece is formed cumulatively by the movements of the tool, which is usually attached to a CNC milling machine. The ISF process is dieless in nature and capable of producing parts of different geometries with a universal tool. The tooling cost of ISF can be as low as 5–10% of that of conventional sheet metal forming processes. On the laboratory scale, the accuracy of parts created by ISF is between ±1.5 mm and ±3 mm. However, for ISF to be competitive with a stamping process, an accuracy below ±1.0 mm, and more realistically below ±0.2 mm, would be needed. In this work, we first studied the ISF deformation process with a simplified phenomenological linear model and employed a predictive controller to obtain an optimised tool trajectory, in the sense of minimising the geometrical deviations between the target shape and the shape made by the ISF process. The algorithm was implemented on a rig at Cambridge University, and the experimental results demonstrate the capability of the model predictive control (MPC) strategy: we can achieve deviation errors of around ±0.2 mm for a number of simple geometrical shapes with our controller. The limitations of the underlying linear model for a highly nonlinear problem led us to study the ISF process with a physics-based model. We use an elastoplastic constitutive relation to model the material law and contact mechanics with Signorini-type boundary conditions to model the process, resulting in an infinite-dimensional system described by a partial differential equation. We further developed a computational method to solve the proposed mathematical model, using an augmented Lagrangian method in function space and discretising by the finite element method. The preliminary results demonstrate the possibility of using this model for optimal controller design.
APA, Harvard, Vancouver, ISO, and other styles
17

Lauder, Marius [Verfasser], Andy [Akademischer Betreuer] Schürr, and Holger [Akademischer Betreuer] Giese. "Incremental Model Synchronization with Precedence-Driven Triple Graph Grammars / Marius Lauder. Betreuer: Andy Schürr ; Holger Giese." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2012. http://d-nb.info/1106454367/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Lauder, Marius Paul [Verfasser], Andy [Akademischer Betreuer] Schürr, and Holger [Akademischer Betreuer] Giese. "Incremental Model Synchronization with Precedence-Driven Triple Graph Grammars / Marius Lauder. Betreuer: Andy Schürr ; Holger Giese." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2012. http://nbn-resolving.de/urn:nbn:de:tuda-tuprints-33520.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Oliveira, Luan Soares. "Classificação de fluxos de dados não estacionários com algoritmos incrementais baseados no modelo de misturas gaussianas." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-06042016-143503/.

Full text
Abstract:
Learning concepts from data streams differs significantly from traditional batch learning. In batch learning there is an implicit assumption that the concept to be learned is static and does not evolve significantly over time. In data stream learning, on the other hand, the concepts to be learned may evolve over time. This evolution is called concept drift, and it makes a fixed training set no longer applicable. The incremental learning paradigm is a promising approach for learning in a data stream setting. However, in the presence of concept drift, outdated concepts can cause misclassifications. Several incremental Gaussian mixture model methods have been proposed in the literature, but these algorithms lack an explicit policy to discard outdated concepts. In this work, a new incremental algorithm for data streams with concept drift, based on Gaussian mixture models, is proposed. The proposed method is compared to various algorithms widely used in the literature, and the results show that it is competitive with them in various scenarios, overcoming them in some cases.
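The abstract's key idea — an incremental Gaussian mixture learner with an explicit policy for discarding stale components — can be illustrated in one dimension. The novelty threshold, age limit, and update rules below are invented for this sketch and are not the thesis's method.

```python
import math

class IncrementalGMM:
    """1-D incremental Gaussian mixture with an explicit discard policy:
    components that receive no data for max_age updates are dropped."""

    def __init__(self, novelty=3.0, max_age=50):
        self.comps = []          # each: {"mean", "var", "n", "age"}
        self.novelty = novelty   # standardized-distance threshold for new components
        self.max_age = max_age   # updates a component may go unused before discard

    def update(self, x):
        for c in self.comps:
            c["age"] += 1
        best = min(self.comps,
                   key=lambda c: abs(x - c["mean"]) / math.sqrt(c["var"]),
                   default=None)
        if best is None or abs(x - best["mean"]) / math.sqrt(best["var"]) > self.novelty:
            # novel point: start a new component (unit variance as a prior)
            self.comps.append({"mean": x, "var": 1.0, "n": 1, "age": 0})
        else:
            # Welford-style incremental mean/variance update
            best["n"] += 1
            d = x - best["mean"]
            best["mean"] += d / best["n"]
            best["var"] += (d * (x - best["mean"]) - best["var"]) / best["n"]
            best["age"] = 0
        # explicit discard policy: drop components unused for too long
        self.comps = [c for c in self.comps if c["age"] <= self.max_age]
```

Feeding data from one concept, then switching to another, creates a second component and eventually retires the first, mimicking adaptation to concept drift.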
APA, Harvard, Vancouver, ISO, and other styles
20

Rhee, Jay Hyuk. "Toward a contingency model of incremental international expansion : the impact of firm, industry and host country characteristics." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1272392336.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Baba, Reizo, Emiko Mori, Nobuo Tauchi, and Masami Nagashima. "Simple exponential regression model to describe the relation between minute ventilation and oxygen uptake during incremental exercise." Nagoya University School of Medicine, 2002. http://hdl.handle.net/2237/5381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Hapfelmeier, Andreas [Verfasser], Burkhard [Akademischer Betreuer] [Gutachter] Rost, and Stefan [Gutachter] Kramer. "Incremental Linear Model Trees on Big Data / Andreas Hapfelmeier ; Gutachter: Burkhard Rost, Stefan Kramer ; Betreuer: Burkhard Rost." München : Universitätsbibliothek der TU München, 2016. http://d-nb.info/1114885037/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Gerlitz, Thomas [Verfasser], Stefan [Akademischer Betreuer] Kowalewski, and Ina [Akademischer Betreuer] Schaefer. "Incremental Integration and Static Analysis of Model-Based Automotive Software Artifacts / Thomas Gerlitz ; Stefan Kowalewski, Ina Schaefer." Aachen : Universitätsbibliothek der RWTH Aachen, 2017. http://d-nb.info/1162451181/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Carter, Devin. "Examining the Incremental Validity of Working Memory for Predicting Learning and Task Performance: A Partial Mediation Model." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/81312.

Full text
Abstract:
General intelligence ("g") has long been used as an effective predictor of both learning and job performance. Further, other more specific cognitive abilities have not been able to consistently predict incremental variance in job knowledge and job performance beyond "g". However, the processes associated with working memory (WM) are important for these outcomes and are not captured by our traditional tests of "g". This study tested a partial mediation model in which WM incrementally predicts task performance above "g", both through task knowledge and through a direct effect. Participants were given measures of "g" and WM in a lab. They were then given a learning opportunity and a task that applies the newly learned knowledge, in order to test the effects of WM. Results indicate that WM explains additional variance in both task knowledge and task performance, and the partial mediation model was supported for one of the two WM tasks used.
Master of Science
General intelligence is widely used in personnel selection because it is consistent in predicting the job performance of future employees. Other cognitive abilities have also been examined to determine whether they are able to predict job performance as well as general intelligence. However, most of these other cognitive abilities have come up short. This study hypothesized that working memory (WM) is a cognitive ability that may be able to predict job performance even after controlling for general intelligence. A sample of undergraduates completed tasks that measured general intelligence and WM, and this study examined how well each measure predicted both learning and performance on a relatively novel task. Results indicated that WM was able to predict both learning and performance after controlling for general intelligence.
APA, Harvard, Vancouver, ISO, and other styles
25

CONFESSOR, Kliver Lamarthine Alves. "Payout incremental e o modelo de três fatores de Fama e French: um estudo das empresas brasileiras." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/18580.

Full text
Abstract:
This study analyzes whether including a Payout factor in the Fama and French (1993) three-factor model is relevant for explaining the returns of companies listed on the BM&FBOVESPA between 2004 and 2014. Payout measures the level of dividend payment. The premium for the Payout factor is obtained as the difference between the returns of companies that paid an incremental payout, a dividend percentage above what legislation requires, and the returns of companies that paid no dividends. The method used in this paper is based on the Fama and French (1993) model, to which the Payout factor was added alongside the market risk premium (RM-RF), the size premium (SMB) and the book-to-market premium (HML), creating a new four-factor model. The explanatory power of this model was tested against the returns of 12 portfolios created by orthogonalizing these factors. The results indicate that the Payout factor is significant in the model and that it generally has a negative relationship with portfolio returns. The model better explains the returns of seven of the twelve portfolios studied. Among these, the small, high-value portfolios that paid incremental dividends, the small, low-value portfolios that paid incremental dividends, and the small, low-value portfolios that paid no dividends stand out, with an explanatory power of over 70%. For the large, high-value portfolios that paid no dividends, the large, low-value portfolios that paid no dividends, the small, low-value portfolios that paid minimum dividends, and the small, high-value portfolios that paid no dividends, the model explains more than 50% of returns with the variables presented. The Payout variable was not significant only for the small, low-value portfolio that paid dividends. Therefore, including the Payout factor in the Fama and French (1993) model is relevant for portfolio evaluation studies.
This study contributes to the discussion and improvement of asset pricing models in the Brazilian market.
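The four-factor regression the abstract describes can be sketched numerically. The sketch below is illustrative only: it uses synthetic factor series and invented loadings (none of the thesis's actual data), estimating R_p − R_f = α + b·(RM−RF) + s·SMB + h·HML + p·PAY + ε by ordinary least squares, where PAY stands in for the payout-factor premium.

```python
import numpy as np

# Synthetic illustration of a four-factor time-series regression with a
# hypothetical payout factor; all series and loadings are invented.
rng = np.random.default_rng(0)
T = 240  # months of synthetic factor returns

mkt = rng.normal(0.01, 0.05, T)   # market excess return (RM - RF)
smb = rng.normal(0.00, 0.03, T)   # size premium
hml = rng.normal(0.00, 0.03, T)   # book-to-market premium
pay = rng.normal(0.00, 0.02, T)   # payout premium (assumed factor)

# Simulated portfolio excess returns with known loadings; the negative
# payout loading mirrors the abstract's generally negative relation.
true_betas = np.array([0.002, 1.1, 0.4, 0.3, -0.5])  # alpha, b, s, h, p
X = np.column_stack([np.ones(T), mkt, smb, hml, pay])
r = X @ true_betas + rng.normal(0, 0.01, T)

# OLS estimate of (alpha, b, s, h, p)
betas, *_ = np.linalg.lstsq(X, r, rcond=None)
print(np.round(betas, 2))  # estimates close to the true loadings
```

With enough observations the estimated loadings recover the simulated ones, which is the sense in which a significant Payout coefficient would show up in such a regression.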
APA, Harvard, Vancouver, ISO, and other styles
26

García, Hernández Mònica, and Madeleine Volter. "Incremental digital product innovation in social mobile games : A case study of King Digital Entertainment." Thesis, Umeå universitet, Institutionen för informatik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-90205.

Full text
Abstract:
The aim of this thesis was to increase understanding of King's success in the social mobile game industry by asking: how does a company manage to organize innovation work on successful casual games within the social mobile gaming industry? To answer this, we conducted a case study based on secondary data, examining the company to discover the elements that contribute to its success, despite a lack of research on how these kinds of companies build their business models and strategies, with emphasis on players' behaviour. Our findings conclude that it is possible to succeed in the social mobile game industry by using incremental innovation in different aspects: game design, game implementation, and the business model. By applying this innovation, together with a good viral strategy and giving players the choice to play for free or to purchase virtual goods, King has become the largest game developer on Facebook.
APA, Harvard, Vancouver, ISO, and other styles
27

Zuñiga, Prieto Miguel Ángel. "Reconfiguración Dinámica e Incremental de Arquitecturas de Servicios Cloud Dirigida por Modelos." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/86288.

Full text
Abstract:
Cloud computing represents a fundamental change in the way organizations acquire technological resources (e.g., hardware, development and execution environments, applications): instead of buying them, they acquire remote access to them in the form of cloud services supplied through the Internet. Among the main characteristics of cloud computing is the allocation of resources in an agile and elastic way, reserved or released depending on the demand of users or applications, enabling a payment model based on consumption metrics. The development of cloud applications mostly follows an incremental approach, where the incremental delivery of functionalities to the client successively changes, or reconfigures, the current architecture of the application. Cloud providers have their own standards for both implementation technologies and service management mechanisms, requiring solutions that facilitate: building, integrating and deploying portable services; interoperability between services deployed across different cloud providers; and continuity in the execution of the application while its architecture is reconfigured as a result of integrating the successive increments. The principles of model-driven development, the service-oriented architectural style, and dynamic reconfiguration play an important role in this context. The hypothesis of this doctoral thesis is that model-driven development methods provide cloud service developers with abstraction and automation mechanisms for the systematic application of the principles of model engineering during the design, implementation, and incremental deployment of cloud services, facilitating the dynamic reconfiguration of the service-oriented architecture of cloud applications. 
The main objective of this doctoral thesis is therefore to define and empirically validate DIARy, a method for the dynamic and incremental reconfiguration of service-oriented architectures of cloud applications. This method makes it possible to specify the architectural integration of an increment with the current cloud application and, with this information, to automate the derivation of implementation artifacts that facilitate the integration and dynamic reconfiguration of the application's service architecture. This dynamic reconfiguration is achieved by running reconfiguration artifacts that not only deploy or undeploy the increment's services and the orchestration services connecting them with the current application's services, but also change the links between services at runtime. A software infrastructure that supports the activities of the proposed method has also been designed and implemented. The software infrastructure includes the following components: i) a set of DSLs, with their respective graphical editors, for describing aspects related to the architectural integration, implementation and provisioning of increments in cloud environments; ii) transformations that generate platform-specific implementation and provisioning models; iii) transformations that generate artifacts implementing the integration logic and orchestration of services, as well as provisioning, deployment and dynamic reconfiguration scripts for different cloud providers. This doctoral thesis contributes to the field of service-oriented architectures and, in particular, to the dynamic reconfiguration of cloud service architectures in an iterative and incremental development context. 
The main contribution is a well-defined method, based on the principles of model-driven development, that raises the level of abstraction and automates, through transformations, the generation of artifacts that perform the dynamic reconfiguration of cloud applications.
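As a rough illustration of the kind of artifact derivation the abstract describes, the sketch below diffs a current architecture against the model of the next increment and emits deploy/undeploy steps plus runtime re-binding of links. The dictionary "model" format and service names are invented for illustration and are not DIARy's actual DSLs or notation.

```python
# Toy model-to-script derivation: compare the current application model with
# the increment's model and emit only the reconfiguration steps needed,
# instead of redeploying the whole application.

current = {"catalog": {"links": ["db"]}, "db": {"links": []}}
increment = {"catalog": {"links": ["db", "reviews"]},
             "db": {"links": []},
             "reviews": {"links": ["db"]}}

def reconfigure(old, new):
    script = []
    for svc in sorted(set(new) - set(old)):       # services added by the increment
        script.append(f"deploy {svc}")
    for svc in sorted(set(old) - set(new)):       # services removed
        script.append(f"undeploy {svc}")
    for svc in sorted(set(new) & set(old)):       # surviving services: rebind links
        if old[svc]["links"] != new[svc]["links"]:
            script.append(f"rebind {svc} -> {','.join(new[svc]['links'])}")
    return script

for step in reconfigure(current, increment):
    print(step)
```

Here only the new `reviews` service is deployed and `catalog` is re-bound at runtime; an unchanged model yields an empty script, which is the incremental property the method relies on.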
Zuñiga Prieto, MÁ. (2017). Reconfiguración Dinámica e Incremental de Arquitecturas de Servicios Cloud Dirigida por Modelos [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/86288
TESIS
APA, Harvard, Vancouver, ISO, and other styles
28

Lloyd, Evan Robert. "A model for the economic analysis of road projects in an urban network with interrelated incremental traffic assignment method." University of Western Australia. Economics Discipline Group, 2005. http://theses.library.uwa.edu.au/adt-WU2005.0083.

Full text
Abstract:
[Truncated abstract] In an urban network, any change to the capacity of a road or an intersection will generally result in some traffic changing its route. In addition, the presence of intersections creates the need for frequent stops. These stops increase fuel consumption by anywhere between thirty and fifty percent, as evidenced by published standardised vehicle fuel consumption figures for urban and country driving. Other components of vehicle operating costs, such as tyre and brake wear and time costs, will also be increased by varying amounts. Yet almost all methods in use for the economic evaluation of urban road projects use open-road vehicle operating costs (sometimes factored to represent an average allowance for stopping at intersections) for one year, or sometimes two years, of the analysis period, and then make assumptions about how the year-by-year road user benefits may change throughout the period in order to complete the analysis. This thesis describes a system for estimating road user costs in an urban network that calculates intersection effects separately and then adds these effects to the travel costs of moving between intersections. Daily traffic estimates are used with a distribution of the flow rate throughout the twenty-four hours, giving variable travel speeds according to the level of congestion at different times of the day. For each link, estimates of traffic flow at two points in time are used to estimate the year-by-year traffic flow throughout the analysis period by linear interpolation or extrapolation. The annual road user costs are then calculated from these estimates. Annual road user benefits are obtained by subtracting the annual road user costs for a modified network from those for an unmodified network. The change in road network maintenance costs is estimated by applying an annual per-lane maintenance cost to the change in lane-kilometres of road between the two networks. 
The Benefit Cost Ratio is calculated for three discount rates. An estimate of the likely range of error in the Benefit Cost Ratio is also calculated.
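The incremental traffic assignment named in the title can be sketched as follows: total demand is loaded in slices, and after each slice the link travel times are updated with a BPR volume-delay function, so later slices may divert to the now-faster alternative route. The two-route network, BPR parameters and demand below are invented for illustration and are not the thesis's model.

```python
# Incremental traffic assignment on a toy two-route network.

def bpr_time(t0, flow, capacity, alpha=0.15, beta=4):
    """BPR volume-delay function: congested travel time on a link."""
    return t0 * (1 + alpha * (flow / capacity) ** beta)

# Two parallel routes between one origin-destination pair
routes = [
    {"t0": 10.0, "cap": 1000.0, "flow": 0.0},  # fast but low capacity
    {"t0": 15.0, "cap": 2000.0, "flow": 0.0},  # slower but high capacity
]

demand, n_slices = 3000.0, 10
for _ in range(n_slices):
    # Recompute congested times, then assign this slice all-or-nothing
    times = [bpr_time(r["t0"], r["flow"], r["cap"]) for r in routes]
    best = times.index(min(times))
    routes[best]["flow"] += demand / n_slices

print([round(r["flow"]) for r in routes])  # → [1500, 1500]
```

Early slices all take the fast route until congestion makes it slower than the alternative, after which the remaining slices divert; this captures the route-changing behaviour the abstract says capacity changes induce.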
APA, Harvard, Vancouver, ISO, and other styles
29

Lawrence, Lisa Knopp. "The Long-term effects of an incremental development model of instruction upon student achievement and student attitude toward mathematics /." Access abstract and link to full text, 1992. http://0-wwwlib.umi.com.library.utulsa.edu/dissertations/fullcit/9222150.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

SOARES, Felipe Santana Furtado. "Uma estratégia incremental para implantação de gestão ágil de projeto sem organizações de desenvolvimento de software que buscam aderência ao CMMI." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/18414.

Full text
Abstract:
The transition from traditional to agile project management methods, and the changes needed to obtain their real benefits, are difficult to achieve. The change affects not only the team involved in management and software development but also several organizational areas and, above all, requires a cultural change. Applying agile methods in compliance with maturity models such as the Capability Maturity Model Integration (CMMI) or the Organizational Project Management Maturity Model (OPM3) has been a focus of discussion in academia and in the software industry. The two approaches appear to rest on some different fundamental principles and bases; on the other hand, adopting them together is increasingly a reality for organizations that wish to produce higher-quality software with faster development times. However, the rush to reach maturity levels within ever shorter deadlines may result in improvement programs whose sole objective is adherence to these models, often reflected in unnecessary activities and the generation of excessive documentation. In this context, agile methods are more attractive because they are lighter and apparently offer faster development at lower cost. Thus, processes, models and frameworks that yield process maturity based on agile principles have become a common goal among software companies. Considering the high failure rate in the adoption of agility, this work seeks to answer how project management practices adherent to CMMI can be defined using an agile strategy in software development organizations in a gradual and disciplined manner. In this scenario, this work proposes an incremental strategy based on the CMMI maturity model, drawing on the best practices of Agile Project Management (APM) and the main agile methods: Scrum, Feature Driven Development (FDD), Lean, Kanban, Crystal and Extreme Programming (XP). 
The method used to evaluate the research was based on two focus groups and a survey with experts from academia and industry. Each group, with its own specialties, suggested changes to the strategy throughout its construction and confirmed its completeness, clarity and suitability for the reality of the industry, showing its use for agile project management in conjunction with CMMI to be viable.
APA, Harvard, Vancouver, ISO, and other styles
31

Katko, Nicholas John. "Hard-Hearted Doctors: Hard-Hearted Doctors: The Incremental Validity of Explicit and Implicit-Based Methods in Predicting Cardiovascular Disease in Physicians." University of Toledo / OhioLINK, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1290084946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Ernst, Jan [Verfasser]. "The Trace Model for Spatial Invariance with Applications in Structured Pattern Recognition, Image Patch Matching and Incremental Visual Tracking / Jan Ernst." Aachen : Shaker, 2014. http://d-nb.info/1060622025/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Stern, Barry L. "Fear of intimacy, adult attachment theory, and the five-factor model of personality : a test of empirical convergence and incremental validity /." free to MU campus, to others for purchase, 1999. http://wwwlib.umi.com/cr/mo/fullcit?p9951126.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Holm, Oscar. "Improving the Development of Safety Critical Software : Automated Test Case Generation for MC/DC Coverage using Incremental SAT-Based Model Checking." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161335.

Full text
Abstract:
The importance and requirements of certifying safety critical software are today more apparent than ever. This study focuses on the standards and practices used for safety critical software within the avionics, automotive and medical domains. We identify critical problems and trends in certifying safety critical software and propose a proof of concept using static analysis, model checking and incremental SAT solving as a contribution towards solving the identified problems. We present quantitative execution-time and code-coverage results for our proposed solution. The proposed solution is developed under the assumptions of safety critical software standards and compared to other studies proposing similar methods. Lastly, we conclude with the issues and advantages of our proof of concept from the perspective of the software developer community.
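To make the MC/DC criterion concrete: for each condition in a decision, MC/DC requires a pair of tests where toggling only that condition flips the decision's outcome. The sketch below brute-forces such independence pairs over a full truth table for a toy decision; a tool like the one the abstract describes would instead encode this search as incremental SAT queries over real program code. The decision and search here are illustrative assumptions, not the thesis's implementation.

```python
from itertools import product

def decision(a, b, c):
    # Toy decision with three conditions
    return a and (b or c)

conditions = ["a", "b", "c"]

def independence_pairs(fn, n):
    """For each condition, find two test vectors differing only in that
    condition whose decision outcomes differ (an MC/DC independence pair)."""
    pairs = {}
    for i in range(n):
        for tv in product([False, True], repeat=n):
            flipped = list(tv)
            flipped[i] = not flipped[i]
            if fn(*tv) != fn(*flipped):
                pairs[conditions[i]] = (tv, tuple(flipped))
                break
    return pairs

pairs = independence_pairs(decision, 3)
print(sorted(pairs))  # → ['a', 'b', 'c']: every condition has a pair
```

The brute-force search is exponential in the number of conditions, which is one reason generating MC/DC tests for real programs is cast as a (preferably incremental) SAT problem.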
APA, Harvard, Vancouver, ISO, and other styles
35

Johansson, Nils. "Estimation of fatigue life by using a cyclic plasticity model and multiaxial notch correction." Thesis, Linköpings universitet, Mekanik och hållfasthetslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158095.

Full text
Abstract:
Mechanical components often possess notches. These notches give rise to stress concentrations, which in turn increase the likelihood that the material will undergo yielding. The finite element method (FEM) can be used to calculate the transient stress and strain to be used in fatigue analyses. However, since yielding occurs, an elastic-plastic finite element analysis (FEA) must be performed. If the loading sequence to be analysed with respect to fatigue is long, elastic-plastic FEA is often not a viable option because of its high computational cost. In this thesis, a method that estimates the elastic-plastic stress and strain response from an elastic input stress and strain, using plasticity modelling with the incremental Neuber rule, has been derived and implemented. A numerical methodology to increase the accuracy of the Neuber rule under cyclic loading has been proposed and validated for proportional loading. The results show fair, albeit not ideal, accuracy compared with elastic-plastic finite element analysis. Different types of loading have been tested, including proportional and non-proportional loading as well as complex load histories with several load reversals. Based on the computed elastic-plastic stresses and strains, fatigue life is predicted by the critical plane method. Such a method has been reviewed, implemented and tested in this thesis. A comparison has been made between a new damage parameter by Ince and an established damage parameter by Fatemi and Socie (FS). The implemented algorithm and damage parameters were evaluated by comparing the program's results, using either damage parameter, to fatigue experiments for several different load cases, including non-proportional loading. The results are fairly accurate for both damage parameters, but the one by Ince tends to be slightly more accurate when no fitted constant for the FS damage parameter can be obtained.
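The Neuber correction the abstract builds on relates the elastic-plastic notch state to the purely elastic solution. The sketch below shows the basic (non-incremental) Neuber rule with an assumed Ramberg-Osgood cyclic stress-strain curve and invented material constants, solved by bisection; it is a simplified illustration, not the thesis's incremental, multiaxial implementation.

```python
# Neuber's rule for a notch: the elastic-plastic state (sigma, eps) must
# satisfy sigma * eps = (Kt * S)**2 / E, with eps given by a Ramberg-Osgood
# cyclic stress-strain curve. All constants below are hypothetical.

E = 200_000.0          # Young's modulus [MPa]
K, n = 1200.0, 0.15    # cyclic Ramberg-Osgood constants (assumed)
Kt, S = 2.5, 200.0     # elastic stress concentration, nominal stress [MPa]

def strain(sigma):
    """Ramberg-Osgood: elastic part plus plastic part."""
    return sigma / E + (sigma / K) ** (1.0 / n)

target = (Kt * S) ** 2 / E   # Neuber product from the elastic solution

# sigma * strain(sigma) is monotone, so bisection finds the notch stress;
# it must lie below the purely elastic estimate Kt * S.
lo, hi = 0.0, Kt * S
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid * strain(mid) < target:
        lo = mid
    else:
        hi = mid

sigma = 0.5 * (lo + hi)
print(round(sigma, 1), round(strain(sigma), 5))
```

The resulting notch stress is well below the elastic estimate Kt·S while the strain exceeds the elastic strain, which is exactly the redistribution an elastic-plastic FEA would capture at far higher cost.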
APA, Harvard, Vancouver, ISO, and other styles
36

Du, Wenjie (James). "EXAMINING THE INCREMENTAL EFFECTS OF PARTICIPANT SPORTING EVENTS IN PROMOTING ACTIVE LIVING: CREATING ACTIONABLE KNOWLEDGE TO TACKLE A PUBLIC HEALTH CRISIS." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/428449.

Full text
Abstract:
Tourism and Sport
Ph.D.
Using a theoretical synergy between the Psychological Continuum Model (PCM) and the Behavioral Ecological Model (BEM), this dissertation research provides empirical evidence that organized participant sporting events can play a significant role in building a healthier community. First, using proprietary U.S. community-based panel data from 2008 to 2014, study 1 examines the incremental effects of participant sporting events (PSE) in promoting active living at the population level. Panel regression with an instrumental variable approach and multigroup latent growth curve analysis were administered. The key findings were that (1) these population-based interventions have the capacity to affect population health at the state level; and (2) this influence varies significantly across the United States, contingent upon a state's economic development and the geographical region to which the state belongs. In study 2, a multilevel mediation analysis was conducted with spatially clustered cross-sectional data from 2014. The findings revealed that access to exercise opportunities at the state level is the underlying mechanism through which various forms of participant sporting events can elicit positive effects on mental health, physical health, and physical activity participation at the county level. The findings suggest that PSEs are an effective public health platform for creating healthier communities by integrating physically active leisure into the population's everyday routines. Overall, the empirical results also help us better understand the importance of effectively leveraging community sporting events to deliver the required health benefits to the general public, and they create practical guidelines to inform policy formation on resource allocation.
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
37

García, Díaz Vicente. "MDCI: Model-Driven Continuous Integration." Doctoral thesis, Universidad de Oviedo, 2011. http://hdl.handle.net/10803/80298.

Full text
Abstract:
The purpose of this Thesis is to create a process in which the continuous integration practice can be applied to a model-driven software development in an effective way, through which software developments can benefit jointly and simultaneously from the improvements and advantages provided by the model-driven engineering development approach and the continuous integration practice. The model-driven engineering approach is the last natural step of software engineering in the search for development approaches that raise the level of abstraction to the point that experts in a domain of knowledge, outside the computer world, are able to guide and change the logic of computer systems. The continuous integration practice is a recommendation of the most widely accepted development methodologies that aims to carry out automatic software integrations in early stages of development, offering benefits such as reducing the inherent risk that, given its unique nature, every project has. By merging model-driven engineering and the continuous integration practice, the aim is to provide development teams that work using some kind of model-driven engineering initiative with the possibility to integrate their developments in a continuous and distributed way. At the same time, customers, the real experts in the domain of knowledge in their field of business, can benefit from the increased level of abstraction of the development techniques. Thus they, in a transparent manner, are able to modify their own computer system without the help of external technical staff, saving time and costs. To meet the objective of this Thesis, a prototype is built which overcomes the current constraints that prevent the union of these two new tools of software engineering. 
The main problems found were related to the selection of an appropriate development initiative, version control systems specially adapted to working with models, the incremental generation of artifacts from models, and the optimized adaptation to existing continuous integration tools. The separation of the work into different blocks allows solutions to be provided both in isolation and in conjunction, resulting in iterative and incremental work from beginning to end. To analyze the benefits of the proposal in this work compared to other development possibilities, an evaluation is performed by creating different test cases in which the measurement of different parameters gives a numerical estimate of the real benefits obtained. Descriptive analysis, hypothesis testing, and regression techniques allow a better interpretation of the results. Finally, the process, the main objective of this work, is defined by answering various questions posed to facilitate its comprehension and understanding.
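One of the obstacles the thesis lists, the incremental generation of artifacts from models, can be illustrated with a minimal content-hashing scheme that re-runs the generator only for model elements whose fingerprint changed (a sketch with hypothetical names, not the thesis's actual solution):

```python
import hashlib

def fingerprint(element):
    """Stable hash of a model element's properties, used to detect changes."""
    return hashlib.sha256(repr(sorted(element.items())).encode()).hexdigest()

def regenerate(model, cache, generate):
    """Run the (expensive) generator only for model elements whose
    fingerprint differs from the cached one; reuse cached artifacts
    for everything else."""
    artifacts, regenerated = {}, []
    for name, element in model.items():
        fp = fingerprint(element)
        cached_fp, cached_artifact = cache.get(name, (None, None))
        if fp != cached_fp:
            cached_artifact = generate(name, element)  # expensive M2T step
            regenerated.append(name)
        cache[name] = (fp, cached_artifact)
        artifacts[name] = cached_artifact
    return artifacts, regenerated
```

On a first run every artifact is generated; after a change to a single element, only that element's artifact is regenerated.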
APA, Harvard, Vancouver, ISO, and other styles
38

Tezeghdanti, Walid. "Stratégie de réduction de modèle appliquée à un problème de fissuration dans un milieu anisotrope : application à la modélisation de la plasticité crystalline." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLN006/document.

Full text
Abstract:
Les aubes des turbines à haute pression des réacteurs d'avion subissent des chargements complexes dans un environnement réactif. Prédire leur durée de vie peut nécessiter une approche en tolérance aux dommages, basée sur la prédiction de la propagation d'une fissure supposée. Mais cette approche est confrontée au comportement non linéaire sous des chargements à amplitudes variables et au coût énorme des calculs elasto-plastiques des structures 3D complexes sur des millions des cycles. Dans ce cadre, un modèle incrémental de fissuration a été proposé. Ce modèle est basé sur la plasticité comme mécanisme principal de propagation de fissure par fatigue pure. Cette modélisation passe par une réduction de modèle de type POD. La plasticité en pointe de la fissure est alors modélisée par un nombre réduit de variables non locales et des variables internes. Un ensemble d'hypothèses doit être respecté pour garantir la validité de cette modélisation. Pour décliner ce modèle dans le cas d'un matériau anisotrope représentatif du comportement des monocristaux, une première étude a été faite sur le cas d'une élasticité cubique avec de la plasticité de Von-Mises. Une stratégie a été proposée pour identifier un modèle matériau basé sur les facteurs d'intensité non locaux. Cette stratégie comporte une détermination de la fonction critère basée sur les solutions élastiques en anisotrope. L'étude des directions d'écoulement plastique avec les variables non locales montre une forte dépendance à l'anisotropie élastique du modèle même avec une plasticité associée de Von-Mises. La stratégie comporte également une identification des variables internes.Dans la deuxième partie, le problème d'une fissure avec un modèle de plasticité cristalline a été traité. L'activation de différents systèmes de glissement a été alors prise en compte dans la modélisation. 
Finalement, différentes méthodologies ont été explorées en vue de transposer le modèle local de plasticité cristalline à l'échelle non locale de la région en pointe de la fissure
The fatigue life prediction of high pressure turbine blades may require a damage tolerance approach based on the study of possible crack propagation. The nonlinear behavior of the material under complex nonproportional loadings and the high cost of running long and expensive elastic-plastic FE computations on complex 3D structures over millions of cycles are major issues that this type of approach must face. Within this context, an incremental model was proposed based on plasticity as the main mechanism for fatigue crack growth. A model reduction strategy using the Proper Orthogonal Decomposition (POD) was used to reduce the cost of FEA. Based on a set of hypotheses, the number of degrees of freedom of the problem is reduced drastically. The plasticity at the crack tip is finally described by a set of empirical equations of a few nonlocal variables and some internal variables. In order to apply this modeling strategy to the case of anisotropic materials that represent the behavior of single crystals, a first study was done with cubic elasticity and Von Mises plasticity. Elastic and plastic reference fields, required to reduce the model, were determined. Then, a material model of the near-crack-tip region was proposed based on nonlocal intensity factors. A yield criterion function was proposed based on Hoenig's asymptotic solutions for anisotropic materials. The study of plastic flow directions with the nonlocal variables of the model shows a strong dependency on the cubic elasticity. A strategy to identify internal variables is proposed as well. In the second part, a crystal plasticity model was implemented. The activation of different slip systems was taken into account in the model reduction strategy. A kinematic basis was constructed for each slip system. Finally, a strategy was proposed to transpose the local crystal plasticity model to the nonlocal scale of the crack tip region.
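The snapshot-POD reduction this abstract relies on can be sketched with an SVD of a snapshot matrix: the leading left singular vectors form the reduced basis onto which full-order fields are projected, leaving only a few generalized degrees of freedom (a generic illustration, not the thesis's implementation):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """POD of a (dofs x n_snapshots) matrix: keep the leading left singular
    vectors capturing the requested fraction of the snapshot energy
    (sum of squared singular values)."""
    U, s, _ = np.linalg.svd(np.asarray(snapshots, float), full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :k]

def reduce_field(field, basis):
    """Project a full-order field onto the reduced basis (few coefficients)."""
    return basis.T @ field

def reconstruct(coeffs, basis):
    """Expand reduced coefficients back to the full-order field."""
    return basis @ coeffs
```

When the snapshots span a low-dimensional subspace, a handful of modes reconstructs them almost exactly, which is what makes the reduced model cheap to evaluate.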
APA, Harvard, Vancouver, ISO, and other styles
39

Higa, Mali Naomi. "Determinação do limiar de anaerobiose pela análise visual gráfica e pelo modelo matemático de regressão linear bi-segmentado de Hinkley em mulheres saudáveis." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/17/17145/tde-07122006-084132/.

Full text
Abstract:
O limiar de anaerobiose (LA) é definido como a intensidade de exercício físico em que a produção de energia pelo metabolismo aeróbio é suplementada pelo metabolismo anaeróbio. Este índice constitui-se de um delimitador fisiológico de grande importância para o fornecimento de informações concernentes aos principais sistemas biológicos do organismo, os quais estão envolvidos na realização de um exercício físico. O LA é um importante parâmetro de determinação da capacidade aeróbia funcional de um indivíduo. Diversos métodos são usados para estimar o LA durante exercício. Existem métodos invasivos, como a medida repetida da concentração de lactato sanguíneo; e métodos não-invasivos, por meio de análise de variáveis biológicas como medidas contínuas dos gases respiratórios, através da análise de mudança do padrão de resposta das variáveis ventilatórias e metabólicas, e também pela análise da mudança do padrão de resposta da freqüência cardíaca (FC) frente a um exercício físico incremental. O objetivo deste estudo foi comparar e correlacionar o LA determinado por métodos não-invasivos de análise visual gráfica das variáveis ventilatórias e metabólicas, considerado como padrão-ouro neste estudo, e pelo modelo matemático de regressão linear bi-segmentado utilizando o algoritmo de Hinkley, aplicado a série de dados de FC (Hinkley – FC) e da produção de dióxido de carbono ( CO2) (Hinkley – CO2). Metodologia: Treze mulheres jovens (24 ± 2,63 anos) e dezesseis mulheres na pós-menopausa (57 ± 4,79 anos), saudáveis e sedentárias realizaram teste ergoespirométrico continuo do tipo rampa em cicloergômetro (Quinton Corival 400), com incrementos de 10 a 20 Watts/min até a exaustão física. As variáveis ventilatórias e metabólicas foram captadas respiração a respiração (CPX-D, Medical Graphics), e a FC batimento a batimento (ECAFIX, ACTIVE-E). Os dados foram analisados por testes não paramétricos de Friedman, Mann-Whitney e correlação de Spearman. Nível de significância de ? = 5%. 
Resultados: Os valores das variáveis potência (W), FC (bpm), consumo de oxigênio relativo ( O2) (mL/kg/min), O2 absoluto (mL/min), CO2 (mL/min) e ventilação pulmonar ( E) (L/min) no LA não apresentaram diferenças significativas entre as metodologias (p > 0,05) nos dois grupos de mulheres estudadas. A análise de correlação dos valores de potência em W, FC em bpm, O2 em mL/kg/min, O2 em mL/min, CO2 em mL/min e E em L/min, entre o método padrão-ouro com o Hinkley – CO2 foram respectivamente: rs=0,75; rs=0,57; rs=0,48; rs=0,66; rs=0,47 e rs=0,46 no grupo jovem, e rs=-0,013; rs=0,77; rs=0,88; rs=0,60; rs=0,76 e rs=0,80 no grupo pós-menopausa. Os valores de correlação do método padrão-ouro com Hinkley – FC para as variáveis potência em W, FC em bpm, O2 em mL/kg/min, O2 em mL/min, CO2 em mL/min e E em L/min, obtidas no LA foram respectivamente: rs=0,58; rs=0,42; rs=0,61; rs=0,57; rs=0,33 e rs=0,39 no grupo de jovens, e rs=0,14; rs=0,87; rs=0,76; rs=0,52; rs=0,33 e rs=0,65 no grupo pós-menopausa. O grupo pós-menopausa apresentou melhores valores de correlação em relação ao grupo de jovens, exceto para as variáveis potência e consumo de oxigênio absoluto (mL/min). Este fato pode estar relacionado a uma maior taxa de variação e magnitude das variáveis analisadas em indivíduos jovens em relação aos de meia-idade, sendo, desta forma, obtida melhor adequação do modelo matemático estudado em mulheres de meia idade. Conclusão: O algoritmo matemático de Hinkley proposto para detectar a mudança no padrão de resposta da CO2 e da FC foi eficiente nos indivíduos de meia-idade, portanto, a metodologia matemática utilizada no presente estudo constitui-se de uma ferramenta promissora para detectar o LA em mulheres saudáveis, por ser um método semi-automatizado, não invasivo e objetivo na determinação do LA.
The anaerobic threshold (AT) is defined as the intensity level of physical exercise at which energy production by aerobic metabolism is supplemented by anaerobic metabolism. This index provides a physiological delimitation of great importance, supplying information on the biological systems of the organism involved in physical exercise performance. The AT is a most important determinant of an individual's functional aerobic capacity. Several methods are used for estimating the AT during exercise. There are invasive methods, such as repeated measurement of blood lactate concentration, and non-invasive methods based on the analysis of biological variables, such as continuous measurement of respiratory gases and analysis of changes in the response patterns of ventilatory, metabolic and heart rate (HR) variables. The aim of the present study was to compare the AT obtained by graphic visual analysis of ventilatory and metabolic variables, considered the gold standard method in the present study, with the bi-segmental linear regression mathematical model of Hinkley's algorithm applied to HR (Hinkley – HR) and carbon dioxide output (VCO2) (Hinkley – VCO2) data. Methodology: Thirteen young women (24 ± 2.63 years old) and sixteen postmenopausal women (57 ± 4.79 years old), healthy and with a sedentary lifestyle, were submitted to an incremental ramp test on an electromagnetically braked cycle ergometer (Quinton Corival 400), with increments of 10 to 20 W/min up to physical exhaustion. The ventilatory variables were registered breath by breath (CPX-D, Medical Graphics) and HR was obtained beat by beat (ECAFIX, ACTIVE-E) in real time. The data were analyzed by Friedman's test and Spearman's correlation test, with the level of significance set at 5%. Results: The power output (W), HR (bpm), oxygen uptake (VO2) (mL/kg/min), VO2 (mL/min), VCO2 (mL/min) and pulmonary ventilation (VE) (L/min) values at the AT showed no significant differences (p > 0.05) between the methods in both groups of women. 
The correlations between the values of power output (W), HR (bpm), VO2 (mL/kg/min), VO2 (mL/min), VCO2 (mL/min) and VE (L/min) determined by the gold standard method and by Hinkley – VCO2 were, respectively: rs=0.75, rs=0.57, rs=0.48, rs=0.66, rs=0.47 and rs=0.46 in the young group, and rs=-0.013, rs=0.77, rs=0.88, rs=0.60, rs=0.76 and rs=0.80 in the postmenopausal group. The correlations between the gold standard method and Hinkley – HR at the AT for power output (W), HR (bpm), VO2 (mL/kg/min), VO2 (mL/min), VCO2 (mL/min) and VE (L/min) were, respectively: rs=0.58, rs=0.42, rs=0.61, rs=0.57, rs=0.33 and rs=0.39 in the young group, and rs=0.14, rs=0.87, rs=0.76, rs=0.52, rs=0.33 and rs=0.65 in the postmenopausal group. The postmenopausal group presented better correlation values than the young group, except for power output and absolute VO2 (mL/min). This may be related to the higher variability and faster kinetics of the variables studied in the young group compared with the postmenopausal group; accordingly, a better fit of the mathematical model was obtained in middle-aged women. Conclusion: Hinkley's mathematical algorithm, proposed to detect changes in the response patterns of VCO2 and HR, was efficient in detecting the AT in the healthy postmenopausal group; therefore, the mathematical methodology used in the present study proved to be a promising tool, being a semi-automated, non-invasive and objective method of AT determination.
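The bi-segmental linear regression at the core of this kind of threshold detection can be sketched by exhaustively trying each candidate breakpoint, fitting one straight line to each side, and keeping the split that minimises the combined residual sum of squares (an illustrative simplification, not Hinkley's exact algorithm as used in the thesis):

```python
import numpy as np

def bisegmental_breakpoint(x, y):
    """Exhaustive two-segment linear fit: the returned index is the
    breakpoint where the response pattern changes (e.g. the AT on a
    VCO2 or HR series from an incremental test)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    best_i, best_sse = None, np.inf
    for i in range(2, len(x) - 1):          # each segment needs >= 2 points
        sse = 0.0
        for xs, ys in ((x[:i], y[:i]), (x[i:], y[i:])):
            coeffs = np.polyfit(xs, ys, 1)              # slope, intercept
            sse += float(np.sum((np.polyval(coeffs, xs) - ys) ** 2))
        if sse < best_sse:
            best_i, best_sse = i, sse
    return best_i
```

On a series whose slope changes at a known point, the minimiser lands at that change point.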
APA, Harvard, Vancouver, ISO, and other styles
40

Herzig, Sebastian J. I. "A Bayesian learning approach to inconsistency identification in model-based systems engineering." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53576.

Full text
Abstract:
Designing and developing complex engineering systems is a collaborative effort. In Model-Based Systems Engineering (MBSE), this collaboration is supported through the use of formal, computer-interpretable models, allowing stakeholders to address concerns using well-defined modeling languages. However, because concerns cannot be separated completely, implicit relationships and dependencies among the various models describing a system are unavoidable. Given that models are typically co-evolved and only weakly integrated, inconsistencies in the agglomeration of the information and knowledge encoded in the various models are frequently observed. The challenge is to identify such inconsistencies in an automated fashion. In this research, a probabilistic (Bayesian) approach to abductive reasoning about the existence of specific types of inconsistencies and, in the process, semantic overlaps (relationships and dependencies) in sets of heterogeneous models is presented. A prior belief about the manifestation of a particular type of inconsistency is updated with evidence, which is collected by extracting specific features from the models by means of pattern matching. Inference results are then utilized to improve future predictions by means of automated learning. The effectiveness and efficiency of the approach is evaluated through a theoretical complexity analysis of the underlying algorithms, and through application to a case study. Insights gained from the experiments conducted, as well as the results from a comparison to the state-of-the-art have demonstrated that the proposed method is a significant improvement over the status quo of inconsistency identification in MBSE.
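The core inference step this abstract describes, updating a prior belief about the existence of an inconsistency with evidence extracted by pattern matching, is ordinary Bayesian updating. A minimal sketch with generic likelihood pairs (not Herzig's actual feature model):

```python
def bayes_update(prior, p_e_given_incons, p_e_given_cons):
    """One application of Bayes' rule: revise the belief that a specific
    inconsistency exists after observing a single piece of evidence."""
    num = p_e_given_incons * prior
    return num / (num + p_e_given_cons * (1.0 - prior))

def belief_after(prior, evidence):
    """Fold a sequence of (P(e | inconsistent), P(e | consistent))
    likelihood pairs, one per extracted feature, into the prior."""
    for p1, p0 in evidence:
        prior = bayes_update(prior, p1, p0)
    return prior
```

Repeated evidence that is more likely under the inconsistency hypothesis drives the belief toward 1; the learning component of the approach would then adjust the priors and likelihoods from confirmed outcomes.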
APA, Harvard, Vancouver, ISO, and other styles
41

Roymoulik, Santanu. "Incremental recovery of volumetric models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ29627.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Brendel, Marc Levin. "Incremental identification of complex reaction systems /." Düsseldorf : VDI-Verl, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=015009980&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Florez-Larrahondo, German. "Incremental learning of discrete hidden Markov models." Diss., Mississippi State : Mississippi State University, 2005. http://library.msstate.edu/etd/show.asp?etd=etd-05312005-141645.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

BENATTOU, MOHAMMED. "Heritage incremental : modele methode et validation formelle." Clermont-Ferrand 2, 1997. http://www.theses.fr/1997CLF21931.

Full text
Abstract:
The semantics of inheritance is based on the subtype relation. Several authors have shown that the subtype relation is insufficient to explain the inheritance mechanism used in the object-oriented approach. Inheritance is an incremental mechanism for constructing programs. This approach was introduced by W. R. Cook to model single inheritance. The objective of this work is to propose a model formalising the incremental mechanism of single and multiple inheritance. This model, based on an intuitive explanation of the proper use of inheritance, is essentially dedicated to the dynamic inheritance of properties for object-oriented DBMSs. The denotational method using the incremental inheritance mechanism allows dynamic message evaluation. Finally, we propose a formal validation showing that, when the O2 type system is used, the constraint induced by inheritance (Cook's constraint) is respected.
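Cook's view of inheritance as incremental program construction, which this work builds on, can be sketched in a few lines: a class is a generator parameterised by `self`, inheritance composes generators, and taking a fixpoint gives late binding (a toy dictionary-based sketch, not the thesis's formal model):

```python
def fix(gen):
    """Fixpoint of a class generator: methods in the table see the
    finished table as 'self', which gives late binding (dynamic dispatch)."""
    methods = {}
    methods.update(gen(methods))
    return methods

def inherit(base_gen, delta_gen):
    """Inheritance as incremental modification (after W. R. Cook): the
    derived generator extends the base's method table, and the delta can
    consult the base table ('super') while overriding entries."""
    def derived(self):
        table = dict(base_gen(self))
        table.update(delta_gen(self, table))
        return table
    return derived
```

Because `self` is resolved at call time, a method inherited from the base picks up an override introduced by the delta, which is exactly the behaviour the subtype relation alone fails to explain.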
APA, Harvard, Vancouver, ISO, and other styles
45

Vuaden, Elisabete. "MORFOMETRIA E INCREMENTO DE Cordia trichotoma (Vell.) Arráb. ex Steud. NA REGIÃO CENTRAL DO RIO GRANDE DO SUL." Universidade Federal de Santa Maria, 2013. http://repositorio.ufsm.br/handle/1/3762.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work aimed to evaluate the morphometry and describe the growth of competition-free and competing individual trees of Cordia trichotoma (Vell.) Arráb. ex Steud. The study of competition-free trees was held in the Central region of the state of Rio Grande do Sul, in the municipalities of Santa Maria and Silveira Martins, and trees under competition were measured in the Campo de Instrução do Ministério do Exército of Santa Maria - CISM and also in Silveira Martins. Competing and free trees with dbh of at least 5 cm were numbered and their dendrometric, morphometric and qualitative variables were measured. The increment data for the last 4 years were obtained from two increment cores collected with a Pressler borer. Competition between trees in the forest was calculated based on the number of trees per hectare obtained from the methodologies of Spurr, Bitterlich and Prodan. Louro-pardo trees growing free of competition have diameter at breast height (dbh), crown diameter and salience index similar to those developed under competition. Under competition, this species invests more in total height, commercial height and crown insertion height; however, it has smaller crown length, lower crown percentage and a lower scope index. The trees growing free of competition have periodic annual increments in diameter (IPAd) and basal area (IPAg) significantly higher than those under competition. The IPAg of competition-free trees of this species can be predicted from dbh, crown factor (fac) and crown density (dec) by two different models; the model that best fitted the data was IPAg = 0.6665 · e^(0.0725·(fac·dec)) · dbh, which treats fac and dec as discrete variables to determine the slope. The IPAg of louro-pardo under competition can be predicted from the increment estimated for competition-free trees minus the estimated difference between the increments of trees under and free of competition: IPAg = [0.6665 · e^(0.0725·(fac·dec)) · dbh] − [562.28 · N(GBit)^(−0.585)]. 
The model ln IPAg = 0.5456 · ln dbh + 0.1412 · (fac · dec) − 0.00008905 · N(GBit), which does not depend on the free-growth increment, can be used as well to estimate the increment of louro-pardo under competition, with some advantages over the previous one.
Este trabalho teve como objetivo avaliar a morfometria e descrever o incremento de árvores individuais livres e em competição de Cordia Trichotoma (Vell.) Arráb. ex Steud. O estudo das árvores livres de competição foi realizado na região Central do estado do Rio Grande do Sul, nas cidades de Santa Maria e Silveira Martins e as árvores sob concorrência foram mensuradas no Campo de Instrução do Ministério do Exército de Santa Maria CISM e também em Silveira Martins. Para cada árvore livre e sob competição, foram numeradas as que possuíam dap igual ou superior a 5 cm, e medidas suas variáveis dendrométricas, morfométricas e qualitativas. Os dados de incremento dos últimos 4 anos foram obtidos pela análise de duas baguetas, coletados com a utilização do trado de Pressler. Para a determinação da concorrência entre as árvores na floresta, foi calculado o número de árvores por hectare baseados nas metodologias de Spurr, Bitterlich e Prodan. O louro-pardo quando cresceu livre de competição, apresentou diâmetro a altura do peito (dap), diâmetro de copa e índice de saliência semelhante ao que se desenvolveu em competição. Quando sob competição, esta espécie investiu mais em altura total, altura comercial, altura de início da copa, porém, apresentou menor comprimento de copa, percentagem de copa e índice de abrangência. Os louros quando cresceram livres de competição apresentaram incremento periódico anual em diâmetro (IPAd) e em área basal (IPAg) significativamente superiores quando comparados aos sob competição. O IPAg desta espécie livre de competição pode ser predito pelo dap, pelo fator de copa (fac) e densidade de copa (dec) a partir de dois modelos distintos sendo que o modelo que apresentou melhores ajustes foi: IPAg = 0,6665 . e 0,0725.(fac.dec) . dap pelo qual considera o fac e dec como variáveis discretas para determinar o coeficiente angular. 
O IPAg do louro-pardo sob competição pode ser predito pela estimativa de incremento que ele atinge quando cresce livre de competição subtraído pela estimativa da diferença de incrementos entre os louros livres e sob competição: IPAg = [(0,6665 e 0,0725. (fac.dec) . dap)] [562,28. (N(GBit))-0,585]. O modelo ln IPAg = 0,5456 . ln dap + 0,1412 . (fac . dec) - 0,00008905 . N(GBit) pelo qual não tem relação com o incremento das árvores livres também pode ser utilizado para a estimativa do incremento dos louros sob competição, com algumas vantagens em relação ao anterior.
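Taking the fitted coefficients reported in this abstract at face value, the two basal-area increment equations can be written directly as small functions (variable meanings as in the thesis; a sketch for illustration, not a validated growth model):

```python
import math

def ipag_free(dbh, fac, dec):
    """Periodic annual basal-area increment of a competition-free tree:
    IPAg = 0.6665 * e^(0.0725 * (fac * dec)) * dbh
    (coefficients as reported in the abstract)."""
    return 0.6665 * math.exp(0.0725 * (fac * dec)) * dbh

def ipag_competing(dbh, fac, dec, n_gbit):
    """Free-growth increment minus the estimated free-vs-competing gap,
    driven by the stand density N(GBit):
    IPAg = [free model] - 562.28 * N(GBit)^(-0.585)."""
    return ipag_free(dbh, fac, dec) - 562.28 * n_gbit ** -0.585
```

For any positive stand density the predicted increment under competition is lower than the free-growth prediction, which matches the qualitative finding of the study.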
APA, Harvard, Vancouver, ISO, and other styles
46

Largenton, Rodrigue. "Modélisation du comportement effectif du combustible MOX : par une analyse micro-mécanique en champs de transformation non uniformes." Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4773/document.

Full text
Abstract:
Parmi les combustibles nucléaires irradiés dans les Réacteurs à Eau Pressurisée d'Électricité de France, on trouve le combustible MOX, acronyme anglais de Mixed Oxide car il combine du dioxyde de plutonium et d'uranium. On y distingue trois phases, correspondant à des teneurs massiques en plutonium différentes. La teneur en matière fissile y étant différente, ces phases évoluent différemment sous irradiation, tant du point de vue mécanique que du point de vue chimique. Pour modéliser correctement le comportement macroscopique du combustible MOX dans un code de calcul industriel, les modèles ont besoin d'être alimentés de façon pertinente en propriétés effectives, mais il est aussi intéressant de disposer d'informations sur les champs locaux afin d'établir des couplages entre les mécanismes (couplage mécanique physico-chimie). L'objectif de la thèse fut donc de développer une modélisation par changement d'échelles, basée sur l'approche NTFA (Michel et Suquet 2003). Ces développements ont été réalisés sur des microstructures tridimensionnelles (3D) représentatives du combustible MOX et pour un comportement local visco-élastique vieillissant avec déformations libres. Dans un premier temps, pour représenter le combustible MOX en 3D nous avons utilisé des méthodes existantes pour traiter et segmenter les images expérimentales 2D, puis nous avons remonté les informations 2D indispensables (fuseau diamétral des inclusions et fractions surfaciques respectives) en 3D par la méthode stéréologique de Saltykov (Saltykov 1967) et enfin nous avons développé des outils pour représenter et discrétiser un composite multiphasé particulaire, type MOX
Among the nuclear fuels irradiated in the Pressurized Water Reactors of Électricité de France, MOX fuel is used, a Mixed OXide of plutonium and uranium. In this fuel, three phases with different plutonium contents can be observed. The different fissile plutonium content in each phase leads to different mechanical and physico-chemical evolutions under irradiation. To predict correctly the macroscopic behavior of MOX nuclear fuels in industrial fuel codes, models need to be fed with effective properties. But it is also interesting to obtain the local fields, to establish coupling between mechanisms (mechanical and physico-chemical coupling). The aim of the PhD was to develop a homogenisation method based on the Nonuniform Transformation Field Analysis (NTFA; Michel and Suquet 2003). This work was carried out on three-dimensional MOX microstructures and for a local ageing visco-elastic behavior with free strains. The first part of the PhD was the numerical representation of the MOX microstructure in 3D, in three steps. The first consisted in the acquisition and treatment of experimental pictures with two soft-wares already developed. The second used the stereological model of Saltykov (1967) to extrapolate the two-dimensional statistical information to three dimensions. The last step was to develop tools able to build a numerical representation of the MOX microstructure. The second part of the PhD was to develop the NTFA model. Theoretical questions (the three-dimensional case, free strains and ageing had never been studied) and numerical ones (choice and reduction of plastic modes, impact of the microstructures) were addressed.
APA, Harvard, Vancouver, ISO, and other styles
47

Losing, Viktor [Verfasser]. "Memory Models for Incremental Learning Architectures / Viktor Losing." Bielefeld : Universitätsbibliothek Bielefeld, 2019. http://d-nb.info/1191896420/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.

Full text
Abstract:
A contribução original desta tese é um novo algoritmo que integra um aproximador de funções com alta eficiência amostral com aprendizagem por reforço em espaços de estados contínuos. A pesquisa completa inclui o desenvolvimento de um algoritmo online e incremental capaz de aprender por meio de uma única passada sobre os dados. Este algoritmo, chamado de Fast Incremental Gaussian Mixture Network (FIGMN) foi empregado como um aproximador de funções eficiente para o espaço de estados de tarefas contínuas de aprendizagem por reforço, que, combinado com Q-learning linear, resulta em performance competitiva. Então, este mesmo aproximador de funções foi empregado para modelar o espaço conjunto de estados e valores Q, todos em uma única FIGMN, resultando em um algoritmo conciso e com alta eficiência amostral, i.e., um algoritmo de aprendizagem por reforço capaz de aprender por meio de pouquíssimas interações com o ambiente. Um único episódio é suficiente para aprender as tarefas investigadas na maioria dos experimentos. Os resultados são analisados a fim de explicar as propriedades do algoritmo obtido, e é observado que o uso da FIGMN como aproximador de funções oferece algumas importantes vantagens para aprendizagem por reforço em relação a redes neurais convencionais.
This thesis’ original contribution is a novel algorithm which integrates a data-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of a scalable online and incremental algorithm capable of learning from a single pass through data. This algorithm, called Fast Incremental Gaussian Mixture Network (FIGMN), was employed as a sample-efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in competitive performance. Then, this same function approximator was employed to model the joint state and Q-values space, all in a single FIGMN, resulting in a concise and data-efficient algorithm, i.e., a reinforcement learning algorithm that learns from very few interactions with the environment. A single episode is enough to learn the investigated tasks in most trials. Results are analysed in order to explain the properties of the obtained algorithm, and it is observed that the use of the FIGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks.
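The incremental, single-pass estimation that FIGMN relies on can be illustrated with the standard online (Welford-style) update of one Gaussian component's sufficient statistics, a generic building block of online mixture learners, not Pinto's FIGMN itself:

```python
import numpy as np

class OnlineGaussian:
    """One mixture component updated sample by sample: a single pass over
    the data yields the exact population mean and covariance."""

    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # accumulated outer-product deviations

    def update(self, x):
        x = np.asarray(x, float)
        self.n += 1
        delta = x - self.mean            # deviation from the old mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x - self.mean)

    @property
    def cov(self):
        """Population (ddof=0) covariance seen so far."""
        return self.M2 / self.n if self.n else self.M2
```

No sample needs to be stored or revisited, which is what makes this style of estimator suitable for learning from a single pass through data.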
APA, Harvard, Vancouver, ISO, and other styles
49

Suhaib, Syed Mohammed. "XFM: An Incremental Methodology for Developing Formal Models." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9905.

Full text
Abstract:
We present a methodology for an agile formal method named eXtreme Formal Modeling (XFM), recently developed by us based on Extreme Programming concepts, to construct abstract models from a natural language specification of a complex system. In particular, we focus on Prescriptive Formal Models (PFMs) that capture the specification of the system under design in a mathematically precise manner. Such models can be used as golden reference models for formal verification, test generation, etc. This methodology for incrementally building PFMs works by adding user stories (expressed as LTL formulae), gleaned from the natural language specifications, one by one into the model. XFM builds the models, retaining correctness with respect to incrementally added properties, by regressively model checking all the LTL properties captured theretofore in the model. We illustrate XFM with a graded set of examples including a traffic light controller, a DLX pipeline and a Smart Building control system. To make the regressive model checking steps feasible with current model checking tools, we need to keep the model size increments under control. We therefore analyze the effects of ordering LTL properties in XFM. We compare three different property-ordering methodologies: 'arbitrary ordering', 'property based ordering' and 'predicate based ordering'. We experiment on the models of the ISA bus monitor and the arbitration phase of the Pentium Pro bus. We experimentally show and mathematically reason that predicate based ordering is the best among these orderings. Finally, we present a GUI based toolbox for users to build PFMs using XFM.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
50

Cunha, Thiago Augusto da. "MODELAGEM DO INCREMENTO DE ÁRVORES INDIVIDUAIS DE Cedrela odorata L. NA FLORESTA AMAZÔNICA." Universidade Federal de Santa Maria, 2009. http://repositorio.ufsm.br/handle/1/8654.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The periodic basal-area growth of 62 cedro trees (Cedrela odorata L.) was reconstructed for the period 2005 to 2008 using dendrochronological techniques in Porto Acre, AC, in order to quantify and describe the growth rate through its relationships with morphometric variables, competition indices, sociological position, crown shape, and the occurrence of lianas on the crown. A significant difference in basal-area growth was found between DBH classes (Pr < 0.0001): the 70 to 90 cm class grew an average of 222.1 cm², while the 10 to 30 cm class grew 27.8 cm². The largest variation in growth rate (CV = 71.5%) occurred in the 10 to 30 cm DBH class, possibly caused by poor sociological position and the presence of lianas on the crown. The morphometric indices (slenderness degree, vital space index, and crown weight) are significantly correlated with periodic growth (r = -0.647, Pr < 0.0001; r = 0.592, Pr < 0.0001; r = 0.366, Pr = 0.0034, respectively). Competitive status, measured by the Hegyi, Glover & Holl, and vertical competition indices, showed a negative influence on basal-area growth; their average values of 0.96, 0.39, and 84.16, respectively, indicate high competition among the sampled trees. Light, described by sociological position, and crown size are decisive in producing high rates of periodic basal-area growth. The periodic basal-area increment model showed good fit and precision (adj. R² = 0.928; CV = 5.8%) when tree size (total height, slenderness degree, crown length, and crown weight) and competition were used as predictor variables. Tree size accounted for 87.2% of the variation in basal-area growth, and the competition index explained a further 5.6%. Through growth-ring analysis using dendrochronological techniques, it is possible to quantify the periodic basal-area increment rate of cedro trees.
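The increment model described in this abstract regresses periodic basal-area growth on tree-size variables and a competition index. The sketch below is purely illustrative: the data are synthetic and every coefficient, unit, and variable scale is invented, so only the modelling pattern (ordinary least squares with size and competition predictors) reflects the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 62  # same number of trees as the sampled cedro

# invented predictor scales, for illustration only
height = rng.uniform(10, 35, n)          # total height (m)
slenderness = rng.uniform(0.4, 1.2, n)   # slenderness degree (h/d)
crown_len = rng.uniform(3, 12, n)        # crown length (m)
crown_weight = rng.uniform(0.2, 0.8, n)  # crown weight (crown length / height)
competition = rng.uniform(0.2, 1.6, n)   # Hegyi-style competition index

# synthetic "true" increment: size increases it, competition reduces it
increment = (5.0 * height - 40.0 * slenderness + 8.0 * crown_len
             + 30.0 * crown_weight - 25.0 * competition
             + rng.normal(0, 2.0, n))

# ordinary least-squares fit with an intercept column
X = np.column_stack([np.ones(n), height, slenderness,
                     crown_len, crown_weight, competition])
beta, *_ = np.linalg.lstsq(X, increment, rcond=None)

# coefficient of determination of the fitted model
pred = X @ beta
ss_res = np.sum((increment - pred) ** 2)
ss_tot = np.sum((increment - increment.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

On this synthetic data the recovered competition coefficient is negative, matching the abstract's finding that competition depresses basal-area increment.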
APA, Harvard, Vancouver, ISO, and other styles