
Dissertations / Theses on the topic 'Consistence'

Consult the top 50 dissertations / theses for your research on the topic 'Consistence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

ROSSI, LUIS FILIPE. "CONSISTENCE OF PERFORMANCE IN STOCK FUNDS IN BRAZIL." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2004. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=5411@1.

Abstract:
The present dissertation investigates consistency of performance in stock funds in Brazil from July 1994 to June 2001, seeking an answer to the following question: by analysing the time series of returns of Brazilian stock funds, is it possible to identify the funds most likely to become the top performers in the future? In other words, is the performance of stock funds consistent over time, allowing models with predictive power to be built? The methodology was exclusively quantitative, under the hypothesis that prices reflect complete information. The performance consistency tests applied were 2x2 contingency tables (without risk adjustment and with risk adjustment, via ranking by Jensen's alphas) and the Portfolio Change Measure (PCM). The PCM tests use portfolio weightings, a methodology applied here to Brazilian stock funds for the first time. The results show evidence of performance consistency between the first and second periods analysed; this evidence does not hold between the second and third periods. The consistency pattern is stronger among the worst-performing funds. Tests were also applied to detect and measure the impact of survivorship bias and transaction costs on fund returns. The results, obtained from portfolio weightings and the funds' statements of sources and uses of cash, detect neither survivorship bias nor any influence of transaction costs on the returns of the stock funds analysed. The performance of the stock funds analysed must be considered unsatisfactory.
2

Shams, M. A. "The use of ternary blended binders in high-consistence concrete." Thesis, University College London (University of London), 2014. http://discovery.ucl.ac.uk/1437231/.

Abstract:
This study has investigated the feasibility and advantages of using ternary blended binders containing limestone powder (LP), i.e. Portland-limestone cement (PLC), with fly ash (FA) or ground granulated blastfurnace slag (GGBS) in three types of high-consistence concrete, i.e. self-compacting concrete (SCC), flowing concrete (FC) and underwater concrete (UWC), concentrating on the hardened mechanical and durability properties. Initially, mix design methods, tests, target fresh properties and constituent materials were selected for each concrete type. In the first stage of the study SCC mixes were formulated with binary and ternary binder blends with up to 80% cement replacement (by volume). The hardened properties of these, i.e. compressive and tensile strength, sorptivity and rapid chloride penetration resistance, were measured and the relationships between these investigated. Optimum replacement levels of GGBS and FA were estimated (40 and 20% respectively), and were used in the subsequent stages of the study on FC and UWC. The main outcomes were:
- It is feasible to produce high-consistence concrete using ternary blended binders with LP and GGBS or FA.
- It is possible to achieve similar or higher long-term compressive strengths with ternary binder mixes than with binary binder mixes for concrete with low water/cement ratio (<0.4).
- A good relationship was obtained between the sorptivity results and the compressive strength, which was independent of the concrete type, age and powder composition.
- No relationship between the rapid chloride penetration test results and the compressive strength was obtained; the results had a high degree of scatter.
There were reductions in the total embodied carbon contents of the concrete mixes with the incorporation of additions. There is scope for further investigating the synergistic effect between limestone powder and GGBS and fly ash to further reduce the Portland cement content, leading to greater potential economic and environmental advantages.
3

Leclere, Vincent. "Contributions to decomposition methods in stochastic optimization." Thesis, Paris Est, 2014. http://www.theses.fr/2014PEST1092/document.

Abstract:
Stochastic optimal control addresses sequential decision-making under uncertainty. As applications lead to large-scale optimization problems, we rely on decomposition methods to tackle their mathematical analysis and their numerical resolution. We distinguish two forms of decomposition. In chained decomposition, like Dynamic Programming, the original problem is solved by means of successive smaller subproblems, solved one after the other. In parallel decomposition, like Progressive Hedging, the original problem is solved by means of parallel smaller subproblems, coordinated and updated by a master algorithm. In the first part of this manuscript, Dynamic Programming: Risk and Convexity, we focus on chained decomposition, in particular the time decomposition that constitutes Dynamic Programming. In Chapter 2, we extend the traditional setting, additive in time and risk neutral, to more general ones for which we establish time-consistency results. In Chapter 3, we prove a convergence result for the Stochastic Dual Dynamic Programming (SDDP) algorithm in the case where the (convex) cost functions are no longer polyhedral. Then, we turn to parallel decomposition, especially decomposition methods obtained by dualizing constraints (almost sure spatial constraints, or non-anticipativity constraints). In the second part of this manuscript, Duality in Stochastic Optimization, we first point out that such constraints lead to delicate duality issues (Chapter 4). We establish a duality result in the paired spaces $(\mathrm{L}^\infty, \mathrm{L}^1)$ in Chapter 5. Finally, in Chapter 6, we prove the convergence of the Uzawa algorithm in $\mathrm{L}^\infty(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^n)$, which requires the existence of an optimal multiplier. The third part of this manuscript, Stochastic Spatial Decomposition Methods, is devoted to the so-called Dual Approximate Dynamic Programming (DADP) algorithm. In Chapter 7, we prove that a sequence of optimization problems, in which an almost sure constraint is relaxed into a conditional expectation constraint, epi-converges to the original problem if the sequence of $\sigma$-fields converges to the global $\sigma$-field. Finally, in Chapter 8, we present the DADP algorithm, its interpretations, and convergence results based on the second part of the manuscript.
4

Lautner, Erik, and Daniel Körner. "An integrated System Development Approach for Mobile Machinery in consistence with Functional Safety Requirements." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-200666.

Abstract:
The article identifies the challenges during the system development process, and specifically the software development process, for safety-critical electro-hydraulic control systems, using the example of the hydrostatic driveline with a four-speed transmission of a feeder mixer. An optimized development approach for mobile machinery has to fulfill all the requirements of the Machinery Directive 2006/42/EC, considering functional safety, documentation and testing requirements from the beginning and throughout the entire machine life cycle. The functionality of the driveline control could be verified ahead of the availability of a prototype by using a “software-in-the-loop” development approach, based on a MATLAB/SIMULINK model of the driveline in connection with the embedded software.
5

Aurichi, Leandro Fiorini. "A influência dos subespaços discretos sobre os espaços topológicos." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/45/45131/tde-24092010-150623/.

Abstract:
Results involving discrete subspaces in several kinds of problems in General Topology are presented. Constructions of counterexamples, both in ZFC and under extra axioms, are also presented.
6

Seabra, Margarida Maria Ortigão Ramos. "A coerência gráfica do design de comunicação da identidade visual." Master's thesis, Universidade de Lisboa, Faculdade de Arquitetura, 2017. http://hdl.handle.net/10400.5/13876.

Abstract:
Upon completion of the master's degree in Communication Design came the need and the desire to carry out an internship in the field. The main objective of the internship was to reach results on the topic of visual identity that may be useful to design professionals. It was also a way to apply the knowledge acquired during the master's programme and to gain real contact with the labour market, with the opportunity to learn new techniques and methodologies. This work studies graphical consistency in communicating the visual identity of brands, using as an example the work developed during the internship. A designer should always take into account how the visual identity of a brand has been used previously in order to remain consistent in subsequent projects, and should also seek this consistency across the various media and elements used. The company chosen for this active investigation was J.W.Thompson Lisboa, responsible for several brands such as Montepio, Bongo, IKEA and Galp; the graduate student, however, worked mostly on projects related to the Vodafone brand. The main interest in this agency arose from the loyalty these brands have to it and from the nature of the work it produces. Through this investigation, drawing on books such as "Design de Identidade da Marca" by Alina R. Wheeler and "Design de Identidade e Imagem Corporativa" by Daniel Raposo, together with the assessment of the selected case studies, the graduate student aims to achieve the ability to use a brand's visual identity consistently.
7

Baltrušis, Audrius. "Žemsiurbių panaudojimas Lietuvos ežerų valymui." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090603_095254-09070.

Abstract:
The number of lakes in Lithuania that are deteriorating and must be cleaned is growing rapidly; at present it already reaches about three thousand. Silted lakes have a negative influence on the natural environment, are close to disappearing and no longer serve their purpose. In order to stop this process of decline, the greatest attention is paid to cleaning the lakes with special machines called suction dredges, a method that is popular although quite expensive. However, not all of the dredges produced are suitable for cleaning the lakes of Lithuania. The main problem of the work is the revival of the silting Lithuanian lakes using suction dredges. The aim of the work is to evaluate the suitability of the suction dredges used in Lithuania for cleaning lakes. The objectives of the work: 1. Evaluate the suitability of the dredges produced for cleaning the lakes of Lithuania. 2. Analyse the operating efficiency of the dredges used in the country, the consistency of the pulp and its transportation distance. 3. Evaluate the evenness of the silt removal. 4. Evaluate the suitability of the suction dredges operated in Lithuania. The structure of the work: the final work consists of an introduction, 3 sections, conclusions, recommendations, a list of literature sources and appendixes. There are 38 pictures, 1 table and 4 appendixes in the final work. In the first part of the... [to full text]
8

El, Heda Khadijetou. "Choix optimal du paramètre de lissage dans l'estimation non paramétrique de la fonction de densité pour des processus stationnaires à temps continu." Thesis, Littoral, 2018. http://www.theses.fr/2018DUNK0484/document.

Abstract:
This thesis focuses on the choice of the smoothing parameter in the context of non-parametric estimation of the density function for stationary ergodic continuous-time processes. The accuracy of the estimation depends greatly on the choice of this parameter. The main goal of this work is to build an automatic bandwidth selection procedure and to establish its asymptotic properties under a general dependency framework that can be easily used in practice. The manuscript is divided into three parts. The first part reviews the literature on the subject, sets out the state of the art and situates our contribution within it. In the second part, we design an automatic method for selecting the smoothing parameter when the density is estimated by the kernel method. This choice, stemming from the cross-validation method, is asymptotically optimal. In the third part, we establish asymptotic properties of the cross-validation bandwidth, given by almost sure convergence results.
9

Vieira, Mafalda Sofia Carreira. "Divulgação de informação sobre partes relacionadas e sua influência na valorização das empresas cotadas portuguesas." Master's thesis, Instituto Superior de Economia e Gestão, 2017. http://hdl.handle.net/10400.5/15083.

Abstract:
The present study has three aims: (1) to assess the level of disclosure of information related to transactions with related parties by Portuguese listed companies, through the creation of an unweighted relative Disclosure Index; (2) to identify the main determinants of companies' disclosure levels; and (3) to verify whether the market values the disclosure of information on transactions with related parties. Regarding the level of disclosure and compliance with the requirements of IAS 24, Portuguese listed companies show an average disclosure level of 78%. The main transactions with related parties are operational in nature (about 71%), namely sales and services. In turn, transactions classified as financing activities carry the greatest weight in terms of monetary value (about 65%). Regarding the determinants of disclosure, the results suggest that larger and more indebted Portuguese listed companies show higher levels of disclosure. Finally, no evidence was found that the market values companies with higher levels of disclosure, since the relationship between the disclosure of transactions with related parties and the valuation of companies was not statistically significant.
10

Dʹavila, David Michael. "The consistency bias and categorization : the effects of consistent contrast and hierarchical organization on category learning and transfer." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/28919.

11

Hatia, Saalik. "Leveraging formal specification to implement a database backend." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS137.

Abstract:
Conceptually, a database storage backend is just a map of keys to values. However, to provide performance and reliability, a modern store is a complex, concurrent software system, opening many opportunities for bugs. This thesis reports on our journey from the formal specification of a store to its implementation. The specification is terse and unambiguous, and helps reason about correctness. Read as pseudocode, the specification provides a rigorous grounding for implementation. The specification describes a store as a simple transactional shared memory, with two (behaviourally equivalent) variants, map-based and journal-based. We implement these two basic variants verbatim in Java. We specify the features of a modern store, such as a write-ahead log with checkpointing and truncation, as a dynamic composition of instances of the two basic variants. The specification of correct composition is particularly simple. Our experimental evaluation shows that the implementation has acceptable performance, while our rigorous methodology increases confidence in its correctness.
12

Šohajek, Jiří. "Analýza vyvrtávacího procesu automatické horizontální vyvrtávačky." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-228141.

Abstract:
The object of this dissertation is to analyse the drilling process of the automatic horizontal boring machine SVD2 used for drilling bearing holes. Vibrations during the drilling process caused extreme noise, and this dissertation addresses that problem. First of all it was necessary to analyse the vibrations, to examine the machining process and to measure the machine's dynamics and statics. Another task was to compare the measured results with mathematical models and, after their analysis, to design structural and other solutions. The SVD2 machine was designed on the basis of a previous type, the SVD. Its conversion was based on adding six spindles; as a result, the number of bodies drilled at once was increased from one to four, so performance was increased four times. For this reason there was an original intention to rebuild the other two spindle machines in the same way. Leaving the basic parts of the machine unmodified (without reinforcing the mount or increasing the spacing of the side linear spindle guideway and the cross linear guideway of the support for clamping devices) would, however, be an unsuitable solution, as the measurements and subsequent analysis confirmed.
13

Liu, Kwok-tai Teddy, and 廖國泰. "Do they know the sources of Hong Kong's advantage?: How well and to what extent do the Hong Kong companies leverage on Hong Kong's advantages? : a study of the consistence of the corporate strategies of these companies vs the Hong Kong business environment." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1998. http://hub.hku.hk/bib/B31269060.

14

Jordan, Patrick William. "Consistency and usability." Thesis, University of Glasgow, 1993. http://theses.gla.ac.uk/5577/.

Abstract:
This thesis is divided into two main parts. In the first a multi-component framework for usability is outlined. The components - guessability, learnability, experienced user performance, system potential, and re-usability - describe how users' performance on a task changes with experience, and each is associated with a different part of a learning curve. Definitions of the components are offered, as are examples of where each may be particularly important. The potential advantages of the framework are then discussed. An experimental methodology and criteria based analysis method for quantifying the components of usability is proposed. This is demonstrated in the evaluation of a video cassette recorder. Non-experimental alternatives to this methodology are also considered. The second part of the thesis is about the issue of consistency. A distinction between two types of consistency - set compatibility and rule compatibility - is proposed. Set compatibility is concerned with inter-task associations, whilst rule compatibility concerns the prior association of tasks and action-rules. Predictions are made about the effects of each type of consistency on the various components of usability, and these are tested in the context of a study of the invocation of menu commands. Results indicated that rule compatibility had the greater effect on early interactions, whilst set compatibility was more salient later on. A series of further studies is then reported, the aim of which was to investigate whether these effects were general across types and levels of interface, and other levels of task. Results mainly, but not entirely, indicated that they were. Data from a more `ecologically valid' usability evaluation was re-analysed, to investigate whether the effects of consistency are important outside of artificial and tightly controlled experiments. Apparently they are - almost half of the difficulties encountered during users' early interactions with a commercially available word processor could be attributed to either set or rule incompatibilities.
15

Chang, Sharon Elaine. "Interdatabase consistency management." Thesis, Massachusetts Institute of Technology, 1990. https://hdl.handle.net/1721.1/128795.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1990. Includes bibliographical references (p. 77-78). By Sharon Elaine Chang.
16

Race, David M. (David Michael). "Consistency in Lattices." Thesis, North Texas State University, 1986. https://digital.library.unt.edu/ark:/67531/metadc331688/.

Abstract:
Let L be a lattice. For x ∈ L, we say x is a consistent join-irreducible if x ∨ y is a join-irreducible of the lattice [y,1] for all y in L. We say L is consistent if every join-irreducible of L is consistent. In this dissertation, we study the notion of consistent elements in semimodular lattices.
17

Simonovic, Marko. "Cosmological Consistency Relations." Doctoral thesis, SISSA, 2014. http://hdl.handle.net/20.500.11767/3879.

18

Liu, Kwok-tai Teddy. "Do they know the sources of Hong Kong's advantage? : How well and to what extent do the Hong Kong companies leverage on Hong Kong's advantages? : a study of the consistence of the corporate strategies of these companies vs the Hong Kong business environment." Hong Kong : University of Hong Kong, 1998. http://sunzi.lib.hku.hk/hkuto/record.jsp?B19874005.

19

Luu, Duy tung. "Exponential weighted aggregation : oracle inequalities and algorithms." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC234/document.

Abstract:
In many areas of statistics, including signal and image processing, high-dimensional estimation is an important task for recovering an object of interest. However, in the overwhelming majority of cases, the recovery problem is ill-posed. Fortunately, even if the ambient dimension of the object to be restored (signal, image, video) is very large, its intrinsic "complexity" is generally small. This prior information can be introduced through two approaches: (i) penalization (very popular) and (ii) aggregation by exponential weighting (EWA). The penalized approach seeks an estimator that minimizes a data loss function penalized by a term promoting objects of low (simple) complexity. EWA combines a family of pre-estimators, each associated with a weight exponentially promoting the same objects of low complexity. This manuscript consists of two parts: a theoretical part and an algorithmic part. In the theoretical part, we first propose EWA with a new family of priors promoting analysis-group sparse signals, whose performance is guaranteed by oracle inequalities. Next, we analyse the penalized estimator and EWA, with a general prior promoting simple objects, in a unified framework for establishing theoretical guarantees. Two types of guarantees are established: (i) prediction oracle inequalities, and (ii) estimation bounds. We then instantiate them for particular cases, some of which have been studied in the literature. In the algorithmic part, we propose an implementation of these estimators that combines Monte-Carlo simulation (Langevin diffusion process) with proximal splitting algorithms, and show its convergence guarantees. Several numerical experiments illustrate our theoretical guarantees and our algorithms.
20

Verwaal, Nathaly. "Ambiguous memory consistency models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ35005.pdf.

21

Viger, Christopher David. "Investigations into mathematical consistency." Thesis, McGill University, 1989. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=61255.

Abstract:
Two distinct proofs of Gödel's first incompleteness theorem are presented and compared. A presentation of Gödel's second incompleteness theorem is also given. There follows a detailed analysis of Gentzen's consistency proof of elementary number theory; in particular, the success of the reduction procedure is rigorously demonstrated. Finally, the epistemic value of any such consistency proof is considered.
22

Michael, L. A. "Consistency results concerning subtlety." Thesis, University of Oxford, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339050.

23

Karvinen, T. (Tuulikki). "Ultra high consistency forming." Doctoral thesis, Oulun yliopisto, 2019. http://urn.fi/urn:isbn:9789526222639.

Abstract:
This study focused on web forming in the 5–10% consistency range, termed Ultra High Consistency (UHC). The study continued the work done by Gullichsen and his research groups (1981–2007) and combined it with the HC forming research done by Valmet (1999–2004). The hypothesis was that by utilizing a rotor to fluidize the suspension and a wedge to eliminate the free jet, and thus prevent reflocculation, web forming at UHC is feasible at commercial speeds. The research method was experimental, and the bulk of the research was conducted at pilot scale. A new UHC headbox was designed and mounted on a pilot former; its key elements are the rotor and the wedge. As fluidization forms the basis of UHC forming, it was evaluated at the pilot former using image analysis and additionally studied with a laboratory-scale device. Besides basic paper analysis, X-ray microtomography and sheet splitting methods were utilized to analyze the sheet structure. The results show that forming is possible within the focus area, 5–10% consistency and machine speeds of 150–600 m/min, although the operating potential of the UHC former is even wider. The results demonstrate that the wedge is needed for successful UHC forming, but the rotor is not required, provided the flow rate is sufficiently high. This indicates that the forces induced by the flow itself can be adequate to fluidize the suspension for forming. The critical Reynolds number of full fluidization was found to be 200–250. The Reynolds numbers were estimated utilizing the linear dependencies found between apparent viscosity and consistency, using the maximum mean flow velocities inside the headbox and neglecting the possible rotation of the rotor. The corresponding critical flow velocities at 10% consistency are 12 and 19 m/s for a eucalyptus and a pine pulp, on average 70 and 60% lower than those given in the literature (40–50 m/s). The results reveal that the fiber orientation of UHC sheets is planar, the floc size of the web increases with consistency, the internal bond increases linearly with floc size, and the tensile strength appears to decrease with increasing floc size. It is therefore postulated that the increase in out-of-plane strength at the expense of in-plane strength with increasing consistency results from a more flocculated structure.
24

Hupfeld, Felix. "Causal weak-consistency replication." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät II, 2009. http://dx.doi.org/10.18452/15971.

Abstract:
Data replication techniques introduce redundancy into a distributed system architecture that can help solve several of its persistent problems. In wide-area or mobile systems, a replication system must be able to deal with the presence of unreliable, high-latency links. Only asynchronous replication algorithms with weak-consistency guarantees can be deployed in these environments, as these algorithms decouple the local acceptance of changes to the replicated data from coordination with remote replicas. This dissertation proposes a framework for building weak-consistency replication systems that provides the application developer with causal consistency guarantees and mechanisms for handling concurrency. By presenting an integrated set of mechanisms, algorithms and protocols for capturing and disseminating changes to the replicated data, we show that causal consistency and concurrency handling can be implemented in an efficient and versatile manner. The framework is founded on a log of changes, which both acts as the core data structure for its distributed algorithms and protocols and serves as the database log that ensures the consistency of the local data replica. The causal consistency guarantees are complemented with two distributed algorithms that handle concurrent operations. Both algorithms are based on the observation that uncoordinated concurrent operations introduce a divergence of state in a replication system that can be modeled as the creation of version branches. Distributed Consistent Branching (DCB) recreates these branches on all participating processes in a consistent manner. Distributed Consistent Cutting (DCC) selects one of the possible branches in a consistent and application-controllable manner and enforces a total causal order for all its operations. The contributed algorithms and protocols were validated in a database system implementation, and several experiments assess their behavior under varying conditions.
25

Santos, Kionna Oliveira Bernardes. "Estresse ocupacional e sa?de mental: desempenho de instrumento de avalia??o em popula??o de trabalhadores na Bahia, Brasil." UNIVERSIDADE ESTADUAL DE FEIRA DE SANTANA, 2006. http://localhost:8080/tede/handle/tede/36.

Abstract:
Assessing the performance of the research instruments used in occupational health is of fundamental importance in order to establish the trustworthiness of measurements in situations commonly found in work environments. The validation of an instrument aims to verify whether the measurement consistently captures what the instrument intends to measure. Nevertheless, to be regarded as valid, an instrument must also be reliable: reliability is required for, but not sufficient to ensure, validity. This validation study assessed the performance of two questionnaires, the Job Content Questionnaire (JCQ) and the Self Reporting Questionnaire (SRQ-20), used to evaluate psychosocial aspects of work and workers' mental health respectively. The research involved five studies that differed in the occupations selected. The analysis was carried out in two stages. In the first, the SRQ-20 was evaluated by examining the ROC curve, factor analysis of tetrachoric correlations and multiple correspondence analysis, with internal consistency assessed by the Kuder-Richardson formula (KR-20). In the second stage, the performance of the JCQ was assessed using factor analysis with principal components extraction and VARIMAX rotation, and by computing Cronbach's alpha. The results showed satisfactory performance indicators for both instruments. The SRQ-20 evaluation revealed the extraction of four factors, confirmed by the multiple correspondence analysis plot; the instrument showed reasonable internal consistency among the scales, and its performance was satisfactory, since it identified symptoms and correctly classified subjects with respect to suspected common mental disorders. The JCQ, in general, revealed adequacy of its scales to the theoretical model proposed by Karasek and acceptable internal consistency among the scales. The good performance of the instruments evaluated in this study shows that they satisfactorily measure what they are intended to measure and can contribute to building occupational health indicators and effective policies for the working class.
26

Caparica, Alvaro de Almeida. "Correlações em sistemas de bósons carregados." Universidade de São Paulo, 1985. http://www.teses.usp.br/teses/disponiveis/54/54131/tde-03092010-142342/.

Abstract:
The two- and three-dimensional charged Bose gas has been studied; in the two-dimensional case two different types of interaction were considered: 1/r and ln(r). We have applied to these systems the self-consistent-field method, which takes into account the short-range correlations between the bosons through a local-field correction. By means of self-consistent numerical calculations we determined the structure factor S(k) over a wide range of densities. From S(k) we obtained the pair-correlation function, the ground-state energy (which is essentially the correlation energy), the pressure of the gas and the spectrum of elementary excitations. In addition we calculated the screening density induced by a fixed charged impurity. In the high-density limit our calculations reproduce the results given by Bogoliubov's perturbation theory. In the intermediate-density region, corresponding to strongly coupled systems, our results are in very good agreement with calculations based on the HNC approximation as well as the Monte Carlo method. Our results are compared in several situations with RPA results, showing that the self-consistent method is much more accurate. The two-dimensional systems proved to be more correlated than the three-dimensional one; the gas with 1/r interaction is more correlated than the logarithmic one at high densities, but the situation is reversed in the low-density region. Finally we calculated the thermodynamic functions of the two- and three-dimensional systems at finite temperatures near absolute zero, based upon the excitation spectra of the gas at zero temperature.
27

Uldall, Brian Robert. "Counterfactual thinking and cognitive consistency." Columbus, Ohio : Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1132685877.

28

Cummings, James William Radford. "Consistency results on cardinal exponentiation." Thesis, University of Cambridge, 1988. https://www.repository.cam.ac.uk/handle/1810/250931.

29

Asplund, Mikael. "Restoring Consistency after Network Partitions." Licentiate thesis, Linköping : Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-9913.

30

Walpole, J. "Maintaining consistency in distributed IPSEs." Thesis, Lancaster University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383572.

31

Xiang, Fei. "Bayesian consistency for regression models." Thesis, University of Kent, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.587522.

Abstract:
Bayesian consistency is an important issue in the context of nonparametric problems. Posterior consistency is a validation of a Bayesian approach and guarantees that the posterior mass accumulates around the true density, which is unknown in most circumstances, as the number of observations goes to infinity. This thesis mainly considers consistency for nonparametric regression models over both random and non-random covariates. The techniques used to achieve consistency under random covariates are similar to those derived in Walker (2003, 2004), which were designed for the consistency of independent and identically distributed variables. We contribute a new idea for dealing with the supremum metric over covariates when the regression model has non-random covariates. That is, if a regression density is away from the true density in the Hellinger sense, then there is a covariate, whose value is picked from a specific design, such that the density indexed by this value is also away from the true density. As a result, the posterior concentrates in the supremum Hellinger neighbourhood of the true model under conditions on the prior such as the Kullback-Leibler property and the summability of the square-rooted prior mass on Hellinger covering balls. Furthermore, the predictive is also shown to be consistent. We illustrate our results on a normal mean regression function and demonstrate the usefulness of a model based on piecewise constant functions. We also investigate conditions under which a piecewise density model is consistent.
APA, Harvard, Vancouver, ISO, and other styles
32

Hancock, Alexander J. "The consistency of outcome effect." Thesis, University of York, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.489195.

Full text
Abstract:
The aim of the experiments reported in this thesis was to establish whether the US is susceptible to learned changes in stimulus effectiveness (a notion well established for the CS). In Chapter Two a shock was preexposed, followed by either consistent or inconsistent post-US events (or outcomes). It was found that a shock was less effective as a US in a subsequent conditioning procedure when it had been followed by an inconsistent, rather than a consistent, outcome in preexposure. This effect was observed whether the outcome was varying (that is, sometimes a clicker and sometimes a noise stimulus) or partial (a clicker or noise stimulus occurring on only half of the trials).
APA, Harvard, Vancouver, ISO, and other styles
33

Outten, Juliet Leigh 1977. "Consistency checks in decision analysis." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/27020.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Engineering Systems Division, Technology and Policy Program, 2004.
Includes bibliographical references (leaves 81-82).
Given the range of events that can occur at a nuclear power plant throughout its lifetime, the appropriate course of action for addressing these events is equally variant. While the issues concerning quantification of the frequency associated with these events can be addressed using reliability models or PSA, the actual path and decisions made concerning the appropriate outcome are not readily obvious. Decisions are made on the basis of the importance of cost, health and safety to the decision maker. These decisions include external influences, such as regulation and media attention, as well as internal influences. Using a formalized decision-making process, the importance of these factors to the decision maker can be determined using sets of weights and values for cost, health and safety (performance measures). Having assigned weights and values to each performance measure, an integral part of the decision-making process is comparing the assigned values to ensure consistency. It is useful to examine the "value of a life" in order to perform these consistency or "sanity" checks. A case study on decision making for a nuclear power plant is performed and presented, reflecting the data collection process for formal decision making with the aid of consistency checks. The formal decision methodology is completed by the consistency checks. Having established consistency checks and completed the analysis through their use, the consistency checks may be applied toward standing policy. Specifically, the Reactor Oversight Process, developed by the Nuclear Regulatory Commission to establish a more transparent procedure for assessing licensee performance, is paralleled to a formal decision-making process. A proposal for the use of consistency checks in this established policy is presented to promote a well-rounded and less subjective procedure.
by Juliet Leigh Outten.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
34

Hedden, Brian (Brian Robert). "Consistency in choice and credence." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/77798.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 2012.
"September 2012." Cataloged from PDF version of thesis.
Includes bibliographical references (p. 75-78).
This thesis concerns epistemic and practical rationality; that is, it is about what to believe and what to do. In Chapter 1, I argue that theories of practical rationality should be understood as evaluating decisions as opposed to ordinary sorts of non-mental actions. In Chapter 2, I use the machinery developed in Chapter 1 to rebut 'Money Pump' or 'Diachronic Dutch Book' arguments, which draw conclusions about rational beliefs and preferences from premises about how rational agents will behave over time. In Chapter 3, I develop a new objection to the Synchronic Dutch Book Argument, which concludes that rational agents must have probabilistic degrees of belief in order to avoid predictable exploitation in betting scenarios.
by Brian Hedden.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
35

Wieweg, William. "Towards Arc Consistency in PLAS." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232081.

Full text
Abstract:
The Planning And Scheduling (PLAS) module of ICE (Intelligent Control Environment) is responsible for planning and scheduling a large fleet of vehicles. This process involves the creation of tasks to be executed by the vehicles. Using this information, PLAS decides which vehicles should execute which tasks, an assignment that is modelled as constraint satisfaction problems. Solving the constraint satisfaction problems is slow. To improve efficiency, a number of different techniques exist. One of these is arc consistency, which entails taking a constraint satisfaction problem and evaluating its variables pairwise by applying the constraints among them. Using arc consistency, we can discern the candidate solutions to constraint satisfaction problems faster than with pure search. In addition, arc consistency allows us to detect and act early on inconsistencies in constraint satisfaction problems. The work in this master thesis includes the implementation of a constraint solver for symbolic constraints, containing the arc consistency algorithm AC3. Furthermore, it encompasses the implementation of a constraint satisfaction problem generator, based on the Erdős-Rényi graph model and inspired by the quasigroup completion problem with holes, which allows the evaluation of the constraint solver on large-sized problems. Using the constraint satisfaction problem generator, a set of experiments was performed to evaluate the constraint solver, augmented by a set of complementary scenarios using manually created constraint satisfaction problems. The results show that the performance scales up well.

Schemaläggningsmodulen PLAS som är en del av ICE (Intelligent Control Environment) är ansvarig för planering och schemaläggning av stora mängder fordonsflottor. Denna process involverar skapandet av uppgifter som behöver utföras av fordonen. Utifrån denna information bestämmer PLAS vilka fordon som ska utföra vilka uppgifter, vilket är modellerat som villkorsuppfyllelseproblem. Att lösa villkorsuppfyllelseproblem är långsamt. För att förbättra prestandan, så finns det en mängd olika tekniker. En av dessa är bågkonsekvens, vilket involverar att betrakta ett villkorsuppfyllelseproblem och utvärdera dess variabler parvis genom att tillämpa villkoren mellan dem. Med hjälp av bågkonsekvens kan vi utröna kandidatlösningar för villkorsuppfyllelseproblemen snabbare, jämfört med ren sökning. Vidare, bågkonsekvens möjliggör upptäckande och bearbetning av inkonsekvenser i villkorsuppfyllelseproblem. Arbetet i denna masteruppsats omfattar genomförandet av en villkorslösare för symboliska villkor, innehållandes bågkonsekvensalgoritmen AC3. Vidare, så innefattar det genomförandet av en villkorsuppfyllelseproblemgenerator, baserad på grafmodellen Erdős-Rényi, inspirerad av kvasigruppkompletteringsproblem med hål, vilket möjliggör utvärdering av villkorslösaren på stora problem. Med hjälp av villkorsuppfyllelseproblemgeneratorn så utfördes en mängd experiment för att utvärdera villkorslösaren. Vidare så kompletterades experimenten av en mängd scenarion utförda på manuellt skapade villkorsuppfyllelseproblem. Resultaten visar att prestandan skalar upp bra.
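As an illustration of the arc-consistency idea described in this abstract, here is a minimal AC-3 sketch in Python. It is the generic textbook formulation; the variable and constraint names are hypothetical and this is not the PLAS solver itself.

from collections import deque

def ac3(domains, constraints):
    """Prune domains until every arc (x, y) is consistent.

    domains: dict mapping variable -> set of values
    constraints: dict mapping (x, y) -> predicate(vx, vy) -> bool
    Returns False if some domain becomes empty (inconsistency detected).
    """
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        # Remove values of x that have no support in y's domain.
        removed = {vx for vx in domains[x]
                   if not any(constraints[(x, y)](vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False          # early inconsistency detection
            # Re-examine arcs pointing at x, since its domain shrank.
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

# Tiny usage example with two variables and a "different tasks" constraint.
doms = {"vehicle1": {1, 2}, "vehicle2": {2}}
cons = {("vehicle1", "vehicle2"): lambda a, b: a != b,
        ("vehicle2", "vehicle1"): lambda a, b: a != b}
print(ac3(doms, cons), doms)   # True {'vehicle1': {1}, 'vehicle2': {2}}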
APA, Harvard, Vancouver, ISO, and other styles
36

Lobb, Sarah Beverley. "Lagrangian structures and multidimensional consistency." Thesis, University of Leeds, 2010. http://etheses.whiterose.ac.uk/1921/.

Full text
Abstract:
The conventional point of view is that the Lagrangian is a scalar object (or equivalently a volume form), which through the Euler-Lagrange equations provides us with one single equation (i.e., one per component of the dependent variable). Multidimensional consistency is a key integrability property of certain discrete systems; it implies that we are dealing with infinite hierarchies of compatible equations rather than with one single equation. Requiring the property of multidimensional consistency to be reflected also in the Lagrangian formulation of such systems, we arrive naturally at the construction of Lagrangian multiforms, i.e., Lagrangians which are the components of a form and satisfy a closure relation. We demonstrate that the Lagrangians of many important examples fit into this framework: the so-called ABS list of systems on quad graphs, which includes the discrete Korteweg-de Vries equation; the Gel'fand-Dikii hierarchy, which includes the discrete Boussinesq equation; and the bilinear discrete Kadomtsev-Petviashvili equation. On the basis of this we propose a new variational principle for integrable systems which brings in the geometry of the space of independent variables, and from this principle we can then derive any equation in the hierarchy. We also extend the notion of Lagrangian forms, and the corresponding new variational principle, to continuous systems, using the example of the generating partial differential equation for the Korteweg-de Vries hierarchy.
APA, Harvard, Vancouver, ISO, and other styles
37

Chen, Xu. "Indexing consistency between online catalogues." Doctoral thesis, Humboldt-Universität zu Berlin, Philosophische Fakultät I, 2008. http://dx.doi.org/10.18452/15777.

Full text
Abstract:
In der globalen Online-Umgebung stellen viele bibliographische Dienstleistungen integrierten Zugang zu unterschiedlichen internetbasierten OPACs zur Verfügung. In solch einer Umgebung erwarten Benutzer mehr Übereinstimmungen innerhalb und zwischen den Systemen zu sehen. Zweck dieser Studie ist, die Indexierungskonsistenz zwischen Systemen zu untersuchen. Währenddessen werden einige Faktoren, die die Indexierungskonsistenz beeinflussen können, untersucht. Wichtigstes Ziel dieser Studie ist, die Gründe für die Inkonsistenzen herauszufinden, damit sinnvolle Vorschläge gemacht werden können, um die Indexierungskonsistenz zu verbessern. Eine Auswahl von 3307 Monographien wurde aus zwei chinesischen bibliographischen Katalogen gewählt. Nach Hooper’s Formel war die durchschnittliche Indexierungskonsistenz für Indexterme 64,2% und für Klassennummern 61,6%. Nach Rolling’s Formel war sie für Indexterme 70,7% und für Klassennummern 63,4%. Mehrere Faktoren, die die Indexierungskonsistenz beeinflussen, wurden untersucht: (1) Indexierungsbreite; (2) Indexierungsspezifizität; (3) Länge der Monographien; (4) Kategorie der Indexierungssprache; (5) Sachgebiet der Monographien; (6) Entwicklung von Disziplinen; (7) Struktur des Thesaurus oder der Klassifikation; (8) Erscheinungsjahr. Gründe für die Inkonsistenzen wurden ebenfalls analysiert. Die Analyse ergab: (1) den Indexierern mangelt es an Fachwissen, Vertrautheit mit den Indexierungssprachen und den Indexierungsregeln, so dass viele Inkonsistenzen verursacht wurden; (2) der Mangel an vereinheitlichten oder präzisen Regeln brachte ebenfalls Inkonsistenzen hervor; (3) verzögerte Überarbeitungen der Indexierungssprachen, Mangel an terminologischer Kontrolle, zu wenige Erläuterungen und "siehe auch" Referenzen, sowie die hohe semantische Freiheit bei der Auswahl von Deskriptoren oder Klassen, verursachten Inkonsistenzen.

In the global online environment, many bibliographic services provide integrated access to different web-based OPACs. In such an environment, users expect to see more consistency within and between systems. In practice, indexers are not always consistent with each other, because subject indexing is essentially a subjective process. The purpose of this study is to investigate the indexing consistency between systems and to find out whether it remains a problem in the new networked environment. Meanwhile, some factors which may influence indexing consistency are examined. The most important aim of this study is to find out the reasons for inconsistencies, so that reasonable suggestions can be made to improve indexing consistency. A sample of 3,307 monographs, i.e. 6,614 records, was drawn from two Chinese bibliographic catalogues. According to Hooper's formula, the average consistency for index terms was 64.2% and for class numbers 61.6%. According to Rolling's formula, it was 70.7% for index terms and 63.4% for class numbers. Several factors affecting indexing consistency were examined: (1) exhaustivity of indexing; (2) specificity; (3) length of the monographs indexed; (4) category of indexing languages; (5) subject area of the monographs indexed; (6) development of disciplines; (7) structure of vocabulary; (8) year of publication. The reasons for inconsistencies were also analyzed. The analysis revealed that: (1) indexers' lack of subject knowledge and their unfamiliarity with indexing languages and indexing rules led to many inconsistencies; (2) the lack of unified or detailed indexing policies brought about inconsistencies as well; (3) delayed revision of indexing languages, lack of vocabulary control, a shortage of scope notes and "see also" reference notes, and the high semantic freedom in choosing terms or classes also caused inconsistencies.
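The two consistency measures cited in this abstract are usually given as follows (standard formulations, quoted here only as a reminder), where C is the number of index terms two indexers assign in common and A, B are the total numbers of terms each indexer assigns:

\[ \text{Hooper} = \frac{C}{A + B - C}, \qquad \text{Rolling} = \frac{2C}{A + B}. \]

Rolling's measure is always at least as large as Hooper's, which is consistent with the figures reported above (70.7% versus 64.2% for index terms).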
APA, Harvard, Vancouver, ISO, and other styles
38

Martin, Benoît. "TTCC : Transactional-Turn Causal Consistency." Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS114.

Full text
Abstract:
Les applications serverless sont construites à l'aide de frameworks asynchrones basés sur des messages qui permettent aux utilisateurs de composer de manière abstraite des fonctions dans le cloud. Très souvent, ces applications ont besoin de stocker leur état et de gérer la logique métier, comme l'authentification des utilisateurs, le traitement des flux en temps réel ou les espaces de travail collaboratifs. Le serverless avec état stocke l'état dans une base de données distribuée telle que DynamoDB. Ce scénario architectural courant est fragile car les garanties de cohérence des données pour la composition de la couche de messages et de la couche de base de données ne sont pas bien définies. Cela peut entraîner des incohérences, des pannes et des pertes de données. La cohérence est l'ensemble des règles qui contraignent l'ordre dans lequel les mises à jour (par exemple, l'envoi d'un message ou l'affectation d'une variable partagée) deviennent visibles (pour les récepteurs ou les lecteurs). Il existe de nombreux modèles de cohérence, allant de la sérialisation ou de la linéarisation stricte, où les mises à jour sont visibles instantanément par tous les processus, aux modèles de cohérence les plus faibles, où les mises à jour sont transmises aux nœuds distants dans un ordre imprévisible. La plupart des systèmes existants sont conçus pour communiquer soit par message, soit par mémoire partagée, mais pas les deux. Par conséquent, la littérature définit séparément les modèles de cohérence basés sur les messages et ceux basés sur les données. Cependant, le serverless combine les deux paradigmes. Maintenir des garanties de cohérence séparées n'est pas intuitif et peut conduire à des incohérences entre les garanties de transmission de messages et d'objets partagés. Un modèle de cohérence unifié pour le passage de messages et la mémoire partagée est nécessaire pour éviter de telles erreurs. Ce modèle doit garantir que plusieurs éléments de données restent cohérents, que les données soient envoyées par messages ou partagées dans une mémoire distribuée. Le serverless est asynchrone et donc incompatible avec une cohérence forte (par exemple, linéarisation ou sérialisation) car elle impose des exigences de synchronisation fortes. À l'autre extrémité du spectre, la cohérence éventuelle viole l'intuition. La cohérence causale est un modèle intermédiaire utile, parce qu'il est asynchrone, et en même temps fournit des garanties utiles qui facilitent le raisonnement. Notre modèle TTCC est un modèle transactionnel, causalement cohérent, mémoire-message, qui unifie le passage de messages et la mémoire partagée dans un environnement asynchrone et isolé.

Serverless computing applications are built using asynchronous message-based frameworks which enable users to abstractly compose functions in the cloud. Very often, applications need to store state and to handle business logic, such as user authentication, real-time stream processing or collaborative workspaces. Stateful serverless computing stores state in a distributed database such as DynamoDB. This common architectural scenario is brittle because the data consistency guarantees for the composition of the message layer and of the database layer are not well defined. This can result in inconsistency, crashes and data loss. Consistency is the set of rules that constrain the order in which updates (e.g. sending a message or assigning a shared variable) become visible (to receivers or to readers). There are many consistency models, ranging from strict serializability or linearizability, where updates become visible instantaneously to all processes, to the weakest consistency models, where updates are delivered to remote nodes in no predictable order. Most existing systems are designed to communicate either by message or by shared memory, but not both. Therefore, the literature defines message-based and data-based consistency models separately. However, serverless computing combines both paradigms. Maintaining separate consistency guarantees is non-intuitive and can lead to inconsistencies between message-delivery and shared-object guarantees. A unified consistency model for message passing and shared memory is required to avoid such errors. The model should ensure that multiple pieces of data remain mutually consistent, whether data is sent using messages or shared in a distributed memory. Serverless computing is asynchronous and thus incompatible with strong consistency (e.g. linearizability or serializability), as it imposes strong synchronicity requirements. On the other end of the spectrum, eventual consistency violates intuition. Causal consistency is a useful intermediate model, because it is asynchronous, and at the same time it provides useful guarantees that ease reasoning. Our TTCC model is a transactional, causally consistent, memory-message model that unifies message passing and shared memory in an asynchronous and isolated environment.
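As a generic illustration of the causal-consistency guarantee discussed in this abstract (a standard vector-clock delivery check, not the TTCC implementation; all names are hypothetical), a receiver delays a message until all of its causal predecessors have been delivered:

def causally_ready(msg_clock, sender, local_clock):
    """Return True if a message stamped with vector clock `msg_clock`,
    sent by `sender`, may be delivered given the receiver's `local_clock`.

    Delivery rule: the message must be the next one expected from the
    sender, and every update it causally depends on from other nodes
    must already have been seen locally.
    """
    for node, count in msg_clock.items():
        if node == sender:
            if count != local_clock.get(node, 0) + 1:
                return False
        elif count > local_clock.get(node, 0):
            return False
    return True

# Hypothetical example: B's message depends on an update from A
# that the receiver has not yet delivered, so it must wait.
local = {"A": 0, "B": 0}
print(causally_ready({"A": 1, "B": 1}, "B", local))  # False: still waiting on A's update
print(causally_ready({"A": 1, "B": 0}, "A", local))  # True: A's first update can be delivered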
APA, Harvard, Vancouver, ISO, and other styles
39

Shah, Nikhil Jeevanlal. "A simulation framework to ensure data consistency in sensor networks." Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/541.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Skladanek, Yan. "Formulation d’un élément fini de poutre pour la dynamique des pales d’hélicoptère de géométrie complexe." Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0122.

Full text
Abstract:
L'optimisation des rotors d'hélicoptère, tant en termes de forme, de structure interne, ou de performance aérodynamique conduit à explorer de nouveaux types de design pour les pales. L'emploi massif de matériaux composites, le recours à des formes courbes et non plus simplement droites ou encore l'ajustement du vrillage aérodynamique font partie des pistes explorées. Ces nouveaux concepts de pales font apparaître des comportements élastiques complexes où la torsion, la flexion et l'allongement axial viennent se coupler entre eux. L'étude de ces couplages est réalisée dans le repère tournant afin de pouvoir y intégrer tous les effets inhérents à la rotation des pales. Un élément fini de poutre droite non-linéaire et haute précision est formulé dans ce mémoire afin de répondre aux besoins de modélisation tant pour la prédiction des déformations quasi-statiques sous charge aérodynamique et centrifuge que pour la réalisation d'études dynamiques et de stabilité sur les pales. Le modèle a pour but d'être implémenté dans un code de calcul global de simulation d'hélicoptère et se doit donc de proposer un compromis acceptable entre la précision, la robustesse et le temps de calcul. La validation du modèle proposé s'appuie sur des études analytiques, numériques et expérimentales. La grande précision de l'élément fini proposé est démontrée sur des pales de dernière génération. Il est maintenant attendu que le couplage de ce modèle élastique avec les modèles aérodynamiques les plus avancés permette d'améliorer sensiblement la précision des outils de simulation, en particulier lors de l'étude de phénomènes instables dont la maîtrise est indispensable au vol de l'hélicoptère.

Structural, shape and performance optimization of helicopter rotors leads to the design of composite blades that are initially curved and twisted. This design yields a highly coupled behavior between the torsional, longitudinal and bending motions of the blades. Besides, dynamic studies of blades have to be performed in the rotating frame so that all rotatory effects can be captured by the model. A highly accurate non-linear straight beam finite element is proposed to predict the static deformation under aerodynamic and centrifugal loads and to carry out dynamic and stability analyses. This elastic model is to be implemented in a comprehensive rotorcraft analysis code, which requires a compromise between accuracy, reliability and computation time. Model validation is based on analytical, numerical and experimental investigations. The developed model proves to be very accurate for new blade designs with large twist angles and initially curved shapes. It is expected to improve the prediction quality of full helicopter simulation tools once strongly coupled with advanced aerodynamic models.
APA, Harvard, Vancouver, ISO, and other styles
41

Fuad, Mohammad Naser Mohammad. "On the consistency of hysteresis models." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/353622.

Full text
Abstract:
Hysteresis is a nonlinear behavior encountered in a wide variety of processes including biology, optics, electronics, ferroelectricity, magnetism, mechanics, and structures, among other areas. One of the main features of hysteresis processes is the property of consistency formalized in [52]. The class of operators considered in [52] consists of the causal ones, with the additional condition that a constant input leads to a constant output. For this class of systems, consistency has been defined formally. This property is useful in system modeling and identification as it limits the search for the system's parameters to those regions where consistency holds. The thesis applies the concepts introduced in [52] to some hysteresis models, namely the LuGre model and the Duhem model. The aim of the thesis is to derive necessary conditions and sufficient ones for consistency (and/or strong consistency) to hold. For the LuGre model, consistency and strong consistency are studied under minimal conditions in Chapter 2. As a by-product of this study, explicit expressions are derived for the hysteresis loop. Such expressions may be useful for identification purposes, as shown in [53]. A classification of the possible Duhem models in terms of their consistency is carried out in Chapter 3. This study shows that a certain parameter must be equal to one for the Duhem model to be compatible with a hysteresis behavior.

La histéresis es un fenómeno no lineal encontrado en varios procesos como biología, óptica, electrónica, ferroelectricidad, magnetismo, mecánica, estructuras, así como en otras áreas. Una de las características de los sistemas con histéresis es la propiedad de consistencia formalizada en [52]. La clase de operadores considerados en [52] consiste en aquellos que son causales, con la condición adicional de que a una entrada constante corresponda una salida constante. Para esta clase de sistemas, la consistencia ha sido definida formalmente. Esta propiedad es útil en modelado e identificación dado que limita la búsqueda de parámetros a aquellas regiones donde la consistencia es válida.

Esta tesis aplica los conceptos introducidos en [52] a algunos modelos de histéresis, más precisamente al modelo de LuGre y al modelo de Duhem. El objetivo de esta tesis es encontrar condiciones necesarias y condiciones suficientes para que se satisfaga la consistencia (o/y la consistencia fuerte).

Para el modelo de LuGre, la consistencia "fuerte" se estudia en el capítulo 2 bajo condiciones mínimas. Como resultado de este estudio, se hallan fórmulas explícitas del lazo de histéresis. Tales fórmulas podrían ser de utilidad para tareas de identificación como se demuestra en [53].

El capítulo 3 de la tesis presenta una clasificación de los modelos de Duhem posibles en términos de su consistencia. Este estudio muestra que hay un parámetro que tiene que valer uno para que el modelo sea compatible con un comportamiento histerético.
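For context, the LuGre model referred to in this abstract is commonly written as follows (standard textbook form, quoted only as a reminder; the consistency analysis itself is the thesis's contribution), where z is the internal (bristle) state, v the relative velocity, F the friction force, and g(v) > 0 captures the Stribeck effect:

\[ \dot z = v - \frac{\sigma_0 \lvert v \rvert}{g(v)}\, z, \qquad F = \sigma_0 z + \sigma_1 \dot z + \sigma_2 v. \]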
APA, Harvard, Vancouver, ISO, and other styles
42

Otth, Daniel. "Types and consistency in combinatory algebras /." [S.l.] : [s.n.], 1992. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=9800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Tran, Sy Nguyen. "Consistency techniques for test data generation." Université catholique de Louvain, 2005. http://edoc.bib.ucl.ac.be:81/ETD-db/collection/available/BelnUcetd-05272005-173308/.

Full text
Abstract:
This thesis presents a new approach for automated test data generation of imperative programs containing integer, boolean and/or float variables. A test program (with procedure calls) is represented by an Interprocedural Control Flow Graph (ICFG). The classical testing criteria (statement, branch, and path coverage), widely used in unit testing, are extended to the ICFG. Path coverage is the core of our approach. Given a specified path of the ICFG, a path constraint is derived and solved to obtain a test case. The constraint solving is carried out based on a consistency notion. For statement (and branch) coverage, paths reaching a specified node or branch are dynamically constructed. The search for suitable paths is guided by the interprocedural control dependences of the program. The search is also pruned by our consistency filter. Finally, test data are generated by the application of the proposed path coverage algorithm. A prototype system implements our approach for C programs. Experimental results, including complex numerical programs, demonstrate the feasibility of the method and the efficiency of the system, as well as its versatility and flexibility to different classes of problems (integer and/or float variables; arrays, procedures, path coverage, statement coverage).
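A minimal sketch of the path-constraint idea described in this abstract (illustrative only; the variable names and the simple interval filter are hypothetical, and the thesis's consistency techniques are considerably more general):

def interval_filter(path_constraints, lo=-10**9, hi=10**9):
    """Narrow an integer interval for a single variable x by applying
    simple relational constraints of the form (op, constant).
    Returns None if the path is detected to be infeasible."""
    for op, c in path_constraints:
        if op == ">":
            lo = max(lo, c + 1)
        elif op == "<":
            hi = min(hi, c - 1)
        elif op == ">=":
            lo = max(lo, c)
        elif op == "<=":
            hi = min(hi, c)
        if lo > hi:
            return None          # inconsistent path: no test datum exists
    return (lo, hi)

# Path taking the true branch of `if (x > 10)` and then of `if (x < 5)`:
print(interval_filter([(">", 10), ("<", 5)]))    # None -> path infeasible
# Path taking `if (x > 10)` then `if (x <= 20)`:
print(interval_filter([(">", 10), ("<=", 20)]))  # (11, 20) -> pick e.g. x = 11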
APA, Harvard, Vancouver, ISO, and other styles
44

Stüber, Torsten. "Consistency of Probabilistic Context-Free Grammars." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-86943.

Full text
Abstract:
We present an algorithm for deciding whether an arbitrary proper probabilistic context-free grammar is consistent, i.e., whether the probability that a derivation terminates is one. Our procedure has time complexity O(n^3) in the unit-cost model of computation. Moreover, we develop a novel characterization of consistent probabilistic context-free grammars. A simple corollary of our result is that training methods for probabilistic context-free grammars that are based on maximum-likelihood estimation always yield consistent grammars.
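A common baseline check for the property defined in this abstract is the classical spectral-radius criterion (often attributed to Booth and Thompson), not the algorithm developed in the thesis: build the matrix of expected nonterminal counts per rewrite and test whether its spectral radius is below one. A hedged sketch with a hypothetical two-nonterminal grammar:

import numpy as np

# Hypothetical proper PCFG over nonterminals S, A:
#   S -> A A  (prob 0.4)    S -> a  (prob 0.6)
#   A -> S    (prob 0.3)    A -> b  (prob 0.7)
# M[i, j] = expected number of occurrences of nonterminal j
#           produced by one rewrite of nonterminal i.
M = np.array([[0.0, 0.4 * 2],   # S produces on average 0.8 A's
              [0.3, 0.0]])      # A produces on average 0.3 S's

spectral_radius = max(abs(np.linalg.eigvals(M)))
print(spectral_radius)                                   # ~0.49
print("consistent" if spectral_radius < 1 else "possibly inconsistent")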
APA, Harvard, Vancouver, ISO, and other styles
45

Sancar, Yilmaz Aysun. "Edge Preserving Smoothing With Directional Consistency." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608511/index.pdf.

Full text
Abstract:
Images may be degraded by random error, which is called noise. Noise may occur during image capture, transmission or processing, and its elimination is achieved by smoothing filters. Linear smoothing filters blur edges, yet edges are important characteristics of images and must be preserved. Various edge-preserving smoothing filters have been proposed in the literature. In this thesis, the most common smoothing and edge-preserving smoothing filters are discussed, and a new method is proposed by modifying the Ambrosio-Tortorelli approximation of the Mumford-Shah model. The new method takes edge-direction consistency into account and produces sharper results at comparable scales.
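For reference, the Ambrosio-Tortorelli approximation mentioned in this abstract is usually written as the following functional (one standard form, quoted only as a reminder; the directional-consistency modification is the thesis's own contribution), where g is the noisy image, u the smoothed image and v an edge-indicator field that drops towards zero near edges:

\[ AT_\varepsilon(u,v) = \int_\Omega \Big( \alpha\, (u-g)^2 + v^2\, \lvert \nabla u \rvert^2 + \beta \Big( \varepsilon\, \lvert \nabla v \rvert^2 + \frac{(1-v)^2}{4\varepsilon} \Big) \Big)\, dx . \]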
APA, Harvard, Vancouver, ISO, and other styles
46

Ciftci, Serdar. "Improving Edge Detection Using Intersection Consistency." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613846/index.pdf.

Full text
Abstract:
Edge detection is an important step in computer vision, since edges are utilized by subsequent visual processing stages in many tasks such as motion estimation, stereopsis, and shape representation and matching. In this study, we test whether a local consistency measure based on image orientation (which we call Intersection Consistency, IC), previously shown to improve the detection of junctions, can be used to improve the quality of edge detection for seven different detectors, namely Canny, Roberts, Prewitt, Sobel, Laplacian of Gaussian (LoG), Intrinsic Dimensionality, and the Line Segment Detector (LSD). IC works well on images that contain prominent objects which differ in color from their surroundings, and it gives good results on natural images, especially those with a cluttered background. On images involving human-made objects, IC leads to good results as well, although, depending on the amount of clutter, the loss of true positives may become more significant. Through our comprehensive investigation, we show that an approximately 21% increase in F-score is obtained, whereas some important edges are lost. We conclude from our experiments that IC is suitable for improving the quality of edge detection in some detectors such as Canny, LoG and LSD.
APA, Harvard, Vancouver, ISO, and other styles
47

Mokal, Rizwaan Jameel. "Consistency of principle in corporate liquidation." Thesis, University College London (University of London), 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270410.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Welch, Lorrie V. S. "Low-consistency refining of mechanical pulps." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape8/PQDD_0014/NQ38999.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Dijkman, Remco Matthijs. "Consistency in multi-viewpoint architectural design." Enschede : Centre for Telematics and Information Technology, 2006. http://doc.utwente.nl/51104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Qin, Wenjuan. "High consistency enzymatic hydrolysis of lignocellulose." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/24374.

Full text
Abstract:
The work described in this thesis focused on the development of practical, high-consistency hydrolysis and fermentation processes utilizing existing pulp mill equipment. Carrying out enzymatic hydrolysis at high substrate loading provides a practical means of reducing the overall cost of a lignocellulose-to-ethanol bioconversion process. A laboratory peg mixer was used to carry out high-consistency hydrolysis of several lignocellulosic substrates, including an unbleached hardwood pulp (UBHW), an unbleached softwood pulp (UBSW), and an organosolv-pretreated poplar (OPP) pulp. Enzymatic hydrolysis of OPP for 48 hours resulted in a hydrolysate with a glucose content of 158 g/L, which is among the highest glucose concentrations reported for the enzymatic hydrolysis of lignocellulosic substrates. The fermentation of UBHW and OPP hydrolysates with high glucose content led to high ethanol concentrations in the final fermentation broth (50.4 and 63.1 g/L, respectively); these values are again as high as any reported previously in the literature. To overcome end-product inhibition caused by the high glucose concentration resulting from hydrolysis at high substrate concentration, a new hydrolysis and fermentation configuration, liquefaction followed by simultaneous saccharification and fermentation (LSSF), was developed and evaluated using the OPP substrate. Applying LSSF led to the production of 63 g/L ethanol from OPP. The influence of enzyme loading and β-glucosidase addition on the ethanol yield of the LSSF process was also investigated. It was found that, at higher enzyme loadings (10 FPU or higher), the ethanol production from LSSF was superior to that of the separate hydrolysis and fermentation (SHF) process. It was apparent that the LSSF process could significantly reduce end-product inhibition when compared to the SHF process, and that β-glucosidase addition was necessary to achieve efficient ethanol production when using the LSSF process. A 10 CBU β-glucosidase supplement was sufficient for the effective conversion of the 20% consistency OPP by LSSF. The change in rheological properties of the different substrates at the liquefaction stage was also examined using a rheometer. The use of a fed-batch hydrolysis process to further improve high-consistency hydrolysis efficiency was also assessed.
APA, Harvard, Vancouver, ISO, and other styles