
Dissertations / Theses on the topic 'Generative Reasoning'



Consult the top 43 dissertations / theses for your research on the topic 'Generative Reasoning.'


You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Griffith, Todd W. "A computational theory of generative modeling in scientific reasoning." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/8177.

2

Zhang, Yu. "Topological reasoning using a generative representation and a genetic algorithm." Thesis, Cardiff University, 2009. http://orca.cf.ac.uk/54999/.

Abstract:
This thesis studies the use of a generative representation with a genetic algorithm (GA) to solve topological reasoning problems. The literature indicates that generative representations outperform non-generative ones for certain design optimisation and automation problems. However, it also indicates a lack of understanding of this relatively new class of representations, and many questions about their implementation remain open. The results and findings presented in this thesis contribute to the knowledge of generative representations by:

1. explaining why genotype formatting is important for the representation and how it influences the performance of both the representation and the algorithm;
2. providing different crossover and mutation methods, both existing and newly developed, that are available to a GA used with the representation and, more importantly, revealing their different properties in generating new individuals;
3. providing alternative ways to map turtle graphs into the design space to form the actual designs, and showing how these different mapping methods influence the outcome of the search.

In general, this thesis examines the key issues in setting up and implementing generative representations with genetic algorithms. It improves the understanding of generative representations and contributes to the knowledge required to develop them further for real-world use. Based on the results and findings of this study, directions for future work are also provided.
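The generative-representation setup the abstract describes can be illustrated with a toy sketch, not the thesis's actual implementation: the genotype is a string of turtle commands, the phenotype is the set of grid cells the turtle visits, and a simple GA with elitism, one-point crossover and point mutation searches the genotype space. The command set, fitness objective and all parameters below are illustrative assumptions.

```python
import random

# Hypothetical turtle command set: each symbol is one unit move.
MOVES = {"F": (0, 1), "L": (-1, 0), "R": (1, 0)}

def decode(genotype):
    """Map a turtle-command string (generative genotype) to the set of
    grid cells the turtle visits (the phenotype/design)."""
    x, y = 0, 0
    cells = {(0, 0)}
    for g in genotype:
        dx, dy = MOVES[g]
        x, y = x + dx, y + dy
        cells.add((x, y))
    return cells

def fitness(genotype):
    # Toy objective: favour designs that cover many distinct cells.
    return len(decode(genotype))

def crossover(a, b):
    cut = random.randrange(1, len(a))        # one-point crossover
    return a[:cut] + b[cut:]

def mutate(g, rate=0.1):
    return "".join(random.choice("FLR") if random.random() < rate else c
                   for c in g)

def evolve(pop_size=30, length=20, generations=40):
    pop = ["".join(random.choice("FLR") for _ in range(length))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]         # elitism: keep the best fifth
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)
```

With a 20-command genotype the fitness of any individual lies between 1 and 21 distinct cells; the thesis's point is that the choice of genotype format, operators and mapping in a setup like this strongly affects search performance.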
3

Misino, Eleonora. "Deep Generative Models with Probabilistic Logic Priors." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/24058/.

Abstract:
Many different extensions of the VAE framework have been introduced in the past. However, the vast majority have focused on purely sub-symbolic approaches that are not sufficient for solving generative tasks that require a form of reasoning. In this thesis, we propose the probabilistic logic VAE (PLVAE), a neuro-symbolic deep generative model that combines the representational power of VAEs with the reasoning ability of probabilistic logic programming. The strength of PLVAE resides in its probabilistic logic prior, which provides an interpretable structure to the latent space that can be easily changed in order to apply the model to different scenarios. We provide empirical results for our approach by training PLVAE on a base task and then using the same model to generalize to novel tasks that involve reasoning with the same set of symbols.
4

Southard, Katelyn M. "Exploring Features of Expertise and Knowledge Building among Undergraduate Students in Molecular and Cellular Biology." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/612137.

Abstract:
Experts in the field of molecular and cellular biology (MCB) use domain-specific reasoning strategies to navigate the unique complexities of the phenomena they study and creatively explore problems in their fields. One primary goal of instruction in undergraduate MCB is to foster the development of these domain-specific reasoning strategies among students. However, decades of evidence-based research and many national calls for undergraduate instructional reform have demonstrated that teaching and learning complex fields like MCB is difficult for instructors and learners alike. How, then, do students develop rich understandings of biological mechanisms? The aim of this dissertation is to explore features of expertise and knowledge building in undergraduate MCB by investigating knowledge organization and problem-solving strategies. Semi-structured clinical think-aloud interviews were conducted with introductory and upper-division students in MCB. Results suggest that students must sort ideas about molecular mechanism into appropriate mental categories, create connections using function-driven and mechanistic rather than associative reasoning, and create nested and overlapping ideas in order to build a nuanced network of biological ideas. Additionally, I characterize the observable components of generative multi-level mechanistic reasoning among undergraduate MCB students constructing explanations in two novel problem-solving contexts. Results indicate that, like MCB experts, students functionally subdivide the overarching mechanism into functional modules, hypothesize and instantiate plausible schemas, and even flexibly consider the impact of mutations across ontological and biophysical levels. However, "filling in" these more abstract schemas with molecular mechanisms remains problematic for many students, who instead employ a range of developing mechanistic strategies.
Through this investigation of expertise and knowledge building, I characterize several of the ways in which knowledge integration and generative explanation building are productively constrained by domain-specific features, expand on several discovered barriers to productive knowledge organization and mechanistic explanation building, and suggest instructional implications for undergraduate learning.
5

Langer, Tomáš. "Metafory a analogie v ekonomické vědě a vzdělávání" [Metaphors and Analogies in Economic Science and Education]. Doctoral thesis, Vysoká škola ekonomická v Praze, 2016. http://www.nusl.cz/ntk/nusl-264278.

Abstract:
This thesis explores the ground at the intersection of three topics: education, relational thinking and economics. Within the sphere of economic education it investigates the use of concepts known from (i) conceptual metaphor theory, (ii) the psychological model of analogical reasoning, (iii) the model of generative learning from educational psychology and (iv) existing research on the use of metaphors and analogies in natural science education. The thesis shows the potential of the metaphorical origin of economic terminology for teaching economic concepts and for the educational use of economic media content. At the same time it proposes a notation for visualizing metaphorical mappings between domains. It addresses economic interpretation from the viewpoint of the relationship between theoretical economic models and actual economic situations, as well as from the viewpoint of the relationship between the mathematical structure of a model and its economic meaning. In the first case it presents interpretative skill, framed within the revised Bloom taxonomy, as a complex cognitive task; in the second case it develops a model of economic interpretation of mathematical structures on the basis of the psychological model of analogical reasoning. In both cases it formulates key points students should know about the analogical nature of economic models. On the basis of the model of generative learning it develops a set of visual metaphors applicable to the introductory topics of microeconomics and macroeconomics and examines the effects of their use in economics classes. By undertaking such research it initiates the exploration of paths leading in the directions suggested by the theoretical analysis.
6

Nyrup, Rune. "Hypothesis generation and pursuit in scientific reasoning." Thesis, Durham University, 2017. http://etheses.dur.ac.uk/12200/.

Abstract:
This thesis draws a distinction between (i) reasoning about which scientific hypothesis to accept, (ii) reasoning concerned with generating new hypotheses and (iii) reasoning about which hypothesis to pursue. I argue that (ii) and (iii) should be evaluated according to the same normative standard, namely whether the hypotheses generated/selected are pursuit worthy. A consequentialist account of pursuit worthiness is defended, based on C. S. Peirce’s notion of ‘abduction’ and the ‘economy of research’, and developed as a family of formal, decision-theoretic models. This account is then deployed to discuss four more specific topics concerning scientific reasoning. First, I defend an account according to which explanatory reasoning (including the ‘inference to the best explanation’) mainly provides reasons for pursuing hypotheses, and criticise empirical arguments for the view that it also provides reasons for acceptance. Second, I discuss a number of pursuit worthiness accounts of analogical reasoning in science, arguing that, in some cases, analogies allow scientists to transfer an already well-understood modelling framework to a new domain. Third, I discuss the use of analogies within archaeological theorising, arguing that the distinction between using analogies for acceptance, generation and pursuit is implicit in methodological discussions in archaeology. A philosophical analysis of these uses is presented. Fourth, diagnostic reasoning in medicine is analysed from the perspective of Peircean abduction, where the conception of abduction as strategic reasoning is shown to be particularly important.
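Peirce's "economy of research" idea, which the thesis develops into decision-theoretic models, can be given a minimal illustrative gloss: score each candidate hypothesis by the expected payoff of pursuing it net of the cost of the investigation, and pursue the highest-scoring one. The scoring rule and the numbers below are toy assumptions, not Nyrup's actual models.

```python
def pursuit_worthiness(p_success, value_if_true, cost):
    """Toy economy-of-research score: expected epistemic payoff of
    pursuing a hypothesis, minus the cost of investigating it."""
    return p_success * value_if_true - cost

# Hypothetical candidates: (probability of panning out, value if true, cost).
candidates = {
    "H1": (0.8, 5.0, 1.0),   # safe but modest
    "H2": (0.2, 40.0, 3.0),  # long shot, high reward
    "H3": (0.5, 6.0, 4.0),   # expensive middle ground
}

def best_to_pursue(candidates):
    """Pick the hypothesis with the highest pursuit-worthiness score."""
    return max(candidates, key=lambda h: pursuit_worthiness(*candidates[h]))
```

On these illustrative numbers the long-shot hypothesis H2 wins (0.2 × 40 − 3 = 5), which captures the consequentialist point: pursuit worthiness is about expected research payoff, not about which hypothesis is most likely true.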
7

Townsend, Joseph Paul. "Artificial development of neural-symbolic networks." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/15162.

Abstract:
Artificial neural networks (ANNs) and logic programs have both been suggested as means of modelling human cognition. While ANNs are adaptable and relatively noise resistant, the information they represent is distributed across various neurons and is therefore difficult to interpret. In contrast, symbolic systems such as logic programs are interpretable but less adaptable. Human cognition is performed in a network of biological neurons and yet is capable of representing symbols, and therefore an ideal model would combine the strengths of the two approaches. This is the goal of Neural-Symbolic Integration [4, 16, 21, 40], in which ANNs are used to produce interpretable, adaptable representations of logic programs and other symbolic models. One neural-symbolic model of reasoning is SHRUTI [89, 95], argued to exhibit biological plausibility in that it captures some aspects of real biological processes. SHRUTI's original developers also suggest that further biological plausibility can be ascribed to the fact that SHRUTI networks can be represented by a model of genetic development [96, 120]. The aims of this thesis are to support the claims of SHRUTI's developers by producing the first such genetic representation for SHRUTI networks and to explore biological plausibility further by investigating the evolvability of the proposed SHRUTI genome. The SHRUTI genome is developed and evolved using principles from Generative and Developmental Systems and Artificial Development [13, 105], in which genomes use indirect encoding to provide a set of instructions for the gradual development of the phenotype just as DNA does for biological organisms. This thesis presents genomes that develop SHRUTI representations of logical relations and episodic facts so that they are able to correctly answer questions on the knowledge they represent.
The evolvability of the SHRUTI genomes is limited in that an evolutionary search was able to discover genomes for simple relational structures that did not include conjunction, but could not discover structures that enabled conjunctive relations or episodic facts to be learned. Experiments were performed to understand the SHRUTI fitness landscape and demonstrated that this landscape is unsuitable for navigation using an evolutionary search. Complex SHRUTI structures require that necessary substructures must be discovered in unison and not individually in order to yield a positive change in objective fitness that informs the evolutionary search of their discovery. The requirement for multiple substructures to be in place before fitness can be improved is probably owed to the localist representation of concepts and relations in SHRUTI. Therefore this thesis concludes by making a case for switching to more distributed representations as a possible means of improving evolvability in the future.
8

Papacchini, Fabio. "Minimal model reasoning for modal logic." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/minimal-model-reasoning-for-modal-logic(dbfeb158-f719-4640-9cc9-92abd26bd83e).html.

Abstract:
Model generation and minimal model generation are useful for tasks such as model checking, query answering and the debugging of logical specifications. Due to this variety of applications, several minimality criteria and model generation methods for classical logics have been studied. Minimal model generation for modal logics, however, has not received the same attention from the research community. This thesis aims to fill this gap by investigating minimality criteria and designing minimal model generation procedures for all the sublogics of the multi-modal logic S5(m) and their extensions with universal modalities. All the procedures are minimal model sound and complete, in the sense that they generate all and only minimal models. The starting point of the investigation is the definition of a Herbrand semantics for modal logics, on which a syntactic minimality criterion is devised. The syntactic nature of the minimality criterion allows for an efficient minimal model generation procedure, but, on the other hand, the resulting minimal models can be redundant or semantically non-minimal with respect to each other. To overcome the syntactic limitations of the first minimality criterion, the thesis moves from minimal modal Herbrand models to semantic minimality criteria based on subset-simulation. First, theoretical procedures for the generation of models minimal modulo subset-simulation are presented. These procedures are minimal model sound and complete, but they might not terminate. The minimality criterion and the procedures are then refined in such a way that termination can be ensured while preserving minimal model soundness and completeness.
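For the classical propositional case, subset-minimality of models is easy to state concretely. The brute-force sketch below enumerates models of a CNF formula and keeps those whose set of true atoms is subset-minimal; it makes no claim to match the thesis's tableau-style procedures for modal logic and is purely illustrative.

```python
from itertools import product

def models(clauses, atoms):
    """Yield all models (as sets of true atoms) of a CNF formula.
    Each clause is a list of (atom, polarity) literals."""
    for bits in product([False, True], repeat=len(atoms)):
        val = dict(zip(atoms, bits))
        if all(any(val[a] == pol for a, pol in cl) for cl in clauses):
            yield {a for a in atoms if val[a]}

def minimal_models(clauses, atoms):
    """Keep only models whose set of true atoms is subset-minimal."""
    ms = list(models(clauses, atoms))
    return [m for m in ms if not any(other < m for other in ms)]
```

For example, the formula p ∨ q has models {p}, {q} and {p, q}, of which only {p} and {q} are minimal; the thesis's "all and only minimal models" soundness/completeness property is about generating exactly such sets, but for modal models under subset-simulation rather than plain subsets.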
9

de Leng, Daniel. "Spatio-Temporal Stream Reasoning with Adaptive State Stream Generation." Licentiate thesis, Linköpings universitet, Artificiell intelligens och integrerade datorsystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-138645.

Abstract:
A lot of today's data is generated incrementally over time by a large variety of producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, making sense of these streams of data through reasoning is challenging. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in a physical environment. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and its refinement an important problem. Many contemporary approaches to stream reasoning focus on the issue of querying data streams in order to generate higher-level information by relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this thesis, we integrate techniques for logic-based spatio-temporal stream reasoning with the adaptive generation of the state streams needed to do the reasoning over. This combination deals with both the challenge of reasoning over streaming data and the problem of robustly managing streaming data and its refinement. The main contributions of this thesis are (1) a logic-based spatio-temporal reasoning technique that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining a data stream required to perform spatio-temporal stream reasoning over; and (3) integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt in situations in which the set of available streaming resources changes. 
Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in the context of a case study on run-time adaptive reconfiguration. The results show that the proposed system, by combining reasoning over and reasoning about streams, can robustly perform spatio-temporal stream reasoning, even when the availability of streaming resources changes. (Note: the series name Linköping Studies in Science and Technology Licentiate Thesis is incorrect; the correct series name is Linköping Studies in Science and Technology Thesis. Funding: NFFP6, CENIIT.)
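To give a flavour of what "reasoning over streams" means at its simplest, here is a toy sliding-window operator: a qualitative "the property held at some point within the last n samples". DyKnow's actual formalism (metric temporal logic over adaptively generated state streams) is far richer, so this is only an illustrative sketch under assumed names.

```python
from collections import deque

def holds_within(stream, predicate, window):
    """For each incoming sample, report whether `predicate` held for any
    of the last `window` observations (a windowed 'eventually')."""
    recent = deque(maxlen=window)   # drops the oldest sample automatically
    for sample in stream:
        recent.append(predicate(sample))
        yield any(recent)
```

Feeding it the sensor readings [1, 5, 1, 1, 1, 1] with the test x > 3 and a window of 3 yields True exactly while the spike at 5 is still inside the window.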
10

Ma, Liangjun, and Shouchuan Zhang. "Generating Fuzzy Rules For Case-based Classification." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-16444.

Abstract:
As a technique for solving new problems based on previous successful cases, CBR represents significant prospects for improving the accuracy and effectiveness of unstructured decision-making problems. The main assumption is that similar problems have similar solutions. Utility-oriented similarity modeling is gradually becoming an important direction for case-based reasoning research. In this thesis, we propose a new way to represent the utility of a case by using fuzzy rules. Our method can be considered a new way to estimate case utility based on fuzzy rule-based reasoning. We use a modified version of Wang's algorithm to generate a fuzzy if-then rule from a case pair instead of a single case. Fuzzy if-then rules have been identified as a more powerful means of capturing domain information for case utility approximation than traditional similarity measures based on feature weighting. The reason we choose Wang's algorithm as the foundation is that it is a simple and fast algorithm for generating if-then rules from examples. The generated fuzzy rules are utilized as a case-matching mechanism to estimate the utility of the cases for a given problem. The given problem is paired with each case in the case library, and these pairs are treated as the inputs of the fuzzy rules to determine whether, or to what extent, a known case is useful for the problem. Each case receives an estimated utility score for the given problem to help the system make a decision. Experiments on several data sets have shown the superiority of our method over traditional schemes, as well as the feasibility of learning fuzzy if-then rules from a small number of cases while still achieving good performance.
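The Wang-style rule generation the authors build on can be sketched as follows. This is the plain single-example version (one candidate rule per training example, conflicts resolved by keeping the highest-degree rule), not the thesis's case-pair modification; the region count and triangular membership shapes are illustrative choices.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def make_regions(lo, hi, n=3):
    """n overlapping triangular fuzzy regions covering [lo, hi]."""
    step = (hi - lo) / (n - 1)
    centres = [lo + i * step for i in range(n)]
    return [(c - step, c, c + step) for c in centres]

def best_region(x, regions):
    """Index and membership degree of the region where x fits best."""
    degrees = [tri(x, *r) for r in regions]
    i = max(range(len(regions)), key=lambda j: degrees[j])
    return i, degrees[i]

def wang_mendel(samples, regions_per_feature):
    """One candidate rule per example; conflicting antecedents keep the
    rule with the highest degree (product of memberships)."""
    rules = {}
    for features, label in samples:
        antecedent, degree = [], 1.0
        for x, regions in zip(features, regions_per_feature):
            i, d = best_region(x, regions)
            antecedent.append(i)
            degree *= d
        key = tuple(antecedent)
        if key not in rules or degree > rules[key][1]:
            rules[key] = (label, degree)
    return {k: v[0] for k, v in rules.items()}
```

In the thesis's setting, the "example" would be a problem/case pair and the rule's consequent an estimated utility, but the generation mechanics are of this one-rule-per-example kind.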
11

Tomizawa, Hajime. "AUTOMATED SCENARIO GENERATION SYSTEM IN A SIMULATION." Master's thesis, University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2637.

Abstract:
Developing training scenarios that induce a trainee to utilize specific skills is one facet of simulation-based training that requires significant effort. Simulation-based training systems have become more complex in recent years, and with this added complexity the effort required to generate and maintain training scenarios has increased. This thesis describes an investigation into automating the scenario generation process. The Automated Scenario Generation System (ASGS) generates the expected action flow as contexts in chronological order from several events and tasks, with estimated times for the entire training mission. When the training objectives and conditions are defined, the ASGS automatically generates a scenario, with some randomization to ensure that no two equivalent scenarios are identical. This makes it possible to sequentially train different groups of trainees who may have the same level or training objectives without repeatedly using a single scenario. The thesis describes the prototype ASGS, and the evaluation results are described and discussed. SVS™ Desktop is used as the development infrastructure for the ASGS prototype training system. (M.S.Cp.E., School of Electrical Engineering and Computer Science, Modeling and Simulation)
12

Ware, Stephen G. "A Plan-Based Model of Conflict for Narrative Reasoning and Generation." Thesis, North Carolina State University, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3647582.

Abstract:
Narrative is one of the fundamental cognitive tools that we use to understand the world around us. When interacting with other humans we rely on a shared knowledge of narrative structure, but in order to enable this kind of communication with digital artifacts we must first formalize these narrative conventions. Narratology, computer science, and cognitive science have formed a symbiotic relationship around this endeavor to create computational models of narrative. These models provide us a deeper understanding of story structure and will enable us to create a fundamentally new kind of interactive narrative experience in which the author, the audience, and the machine all participate in the story composition process.

This document presents a computational model of narrative conflict, its empirical evaluation, and its deployment in an interactive narrative experience. Narratologists have described conflict in terms of the difficulties that an intelligent agent encounters while executing a plan to achieve a goal. This definition is inherently plan-based, and has been integrated into an existing model of narrative based on the data structures and algorithms of artificial intelligence planning: the process of constructing a sequence of actions to achieve a goal. The conflict Partial Order Causal Link (CPOCL) model of narrative represents the events of a story along with their causal structure and temporal constraints. It extends previous models by representing non-executed actions which describe how an agent intended to complete its plans even if those plans failed, thus enabling an explicit representation of thwarted plans and conflict. The model also includes seven dimensions which can distinguish one conflict from another and provide authors with greater control over story generation: participants, topic, duration, balance, directness, intensity, and resolution.

One valuable aspect of plan-based models is that they can be generated and modified automatically. Two story creation methods are discussed: the plan-space CPOCL algorithm, which works directly with the rich CPOCL knowledge representation, and the state-space Glaive algorithm, which is significantly faster. Glaive achieves its speedup by incorporating research from fast forward-chaining state-space heuristic search planning and by using the constraints that a valid narrative plan must obey to calculate a more accurate heuristic. Glaive is fast enough to solve certain non-trivial narrative planning problems in real time.

This computational model of narrative conflict has been evaluated in a series of empirical experiments. The first validates the three discrete dimensions of conflict: participants, topic, and duration. It demonstrates that a human audience recognizes thwarted plans in static text stories in the same places that the CPOCL model defines them to exist. The second experiment validates the four continuous dimensions (balance, directness, intensity, and resolution) by showing that a human audience ranks static text stories in the same order defined by the formulas for those dimensions.

The final experiment is an evaluation of an interactive narrative video game called The Best Laid Plans, which uses Glaive to generate a story at run time from atomic actions and without recourse to pre-scripted behaviors or story fragments. In this game, the player first acts out a plan to achieve a goal, and then Glaive coordinates all the non-player characters in the game to thwart the player's plan. The game is evaluated relative to two other versions: a control in which the other characters do nothing and a scripted version in which the other characters are controlled by programs written by a human author. Players recognize intentionality and conflict in the stories Glaive produces more so than in the control, and comparably to the human-scripted version.

In summary, this document describes how a narratological definition of conflict as thwarted plans has been operationalized in plan data structures and incorporated into a narrative planning algorithm. The knowledge representation is rich enough that a human audience recognizes thwarted plans where the model defines them to exist. The algorithm is fast enough to be used in a real-time interactive context for certain non-trivial story domains. This work represents one small advancement toward understanding human storytelling and leveraging that understanding in interactive systems.
13

Vianna, Regina Ferreira. "Qualitative reasoning methodology for the generation of process plant operating procedures." Thesis, University of Leeds, 1995. http://etheses.whiterose.ac.uk/745/.

Abstract:
The analysis of operating procedures in the early stages of design can lead to safer and higher-performance plants. Qualitative reasoning techniques hold considerable promise in supporting the generation of operating procedures, since they are able to describe the possible trajectories of a system based on non-quantitative information and provide explanations of process behaviour in a way that gives insight into the underlying physical processes. Despite this potential, existing techniques still present limitations related to a tendency to generate non-real behaviour patterns and an inability to describe distributed parameter systems. This study presents a qualitative reasoning methodology, the weighted digraph (WDG) approach, for describing the dynamics of complex chemical processes, and in particular of distributed parameter systems, with a considerable reduction in the generation of spurious solutions. It is based on a generalisation of the signed digraph approach and retains its main advantages, such as the ability to easily represent intuitive and causal knowledge and a graph structure which makes apparent the flow of information between variables. In addition, it incorporates several new features, making use of functional weighting, differential nodes and temporal edges, which enable the procedure to qualitatively describe complex patterns of behaviour. The effectiveness of the approach is demonstrated by considering the qualitative modelling and simulation of the dynamic behaviour of several chemical processes: a heat-exchanger, a CSTR with and without temperature control, and a distillation column. The proposed weighted digraph approach is used to support the generation of start-up procedures with reference to two case studies: a network of heat-exchangers and an integrated system composed of a CSTR and a feed/effluent heat-exchanger.
It is shown that the digraph based strategy has the ability to generate feasible operating procedures in the presence of operational constraints and identify the need for modifications of the process topology in order to allow the start-up of the system. Results also indicate that work is still needed in order to further improve the methodology and create an interactive computer based interface to help with reasoning about complex patterns of behaviour.
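The signed-digraph idea that the WDG approach generalises can be illustrated with a small sign-propagation sketch: edges carry +1 or -1 influences, and a qualitative disturbance at one variable is propagated to the rest, with conflicting predictions marked ambiguous. This is illustrative only; the thesis's weighted digraphs add functional weights, differential nodes and temporal edges on top of this basic mechanism.

```python
def propagate(graph, start, disturbance):
    """Propagate a qualitative disturbance (+1 or -1) through a signed
    digraph given as {node: [(neighbour, sign), ...]}.  Returns the
    predicted sign per node; conflicting predictions become 0 (ambiguous)."""
    signs = {start: disturbance}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for nxt, sign in graph.get(node, []):
            predicted = signs[node] * sign
            if nxt not in signs:
                signs[nxt] = predicted
                frontier.append(nxt)
            elif signs[nxt] != predicted:
                signs[nxt] = 0  # conflicting influences: ambiguous
    return signs
```

Spurious ambiguous (0) predictions from competing paths are exactly the kind of behaviour the thesis's functional weighting is meant to reduce.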
14

Nilsson, Linnéa, Peter Enhörning, and Christoffer Lindgren. "Thoughts and reasoning in family businesses : Founders' thoughts and reasoning behind decisions during the expansion phase in a first generation family business with few owners." Thesis, Linnéuniversitetet, Ekonomihögskolan, ELNU, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-20364.

Abstract:
This thesis focuses on decision making in the most common business form: family businesses. A well-established theoretical model within the family business field is the three-circle model, which is based on three different dimensions: family, ownership and business. Most family businesses stay small, but those expanding face the dilemma of balancing the best development of the dimensions. However, these three dimensions can contradict each other, and as a result the founders are forced to choose which of the dimensions to prioritize when taking decisions.

The purpose of this thesis is to create an understanding of how the family, ownership and business dimensions affect founders' thoughts and reasoning behind decisions in the expansion phase in first-generation family firms with few owners.

We have reached our conclusions with a qualitative approach using case studies. We gathered the empirical data by using life story and critical incident methods to define expansion decisions in two companies. Furthermore, we used semi-structured interviews with the aim of creating an understanding of the founders' thoughts and reasoning behind the decisions taken.

Our conclusion shows that business opportunities and the objective to remain in control of the family business highly influence decision making during the expansion phase. Another conclusion is that the family has been affected far more by the decisions than it has had an impact on them. The thesis gives insight into an area within the family business field which has previously been neglected by researchers.
15

Morak, Michael. "The impact of disjunction on reasoning under existential rules." Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:b8f012c4-0210-41f6-a0d3-a9d1ea5f8fac.

Abstract:
Ontological database management systems are a powerful tool that combines traditional database techniques with ontological reasoning methods. In this setting, a classical extensional database is enriched with an ontology, a set of logical assertions that describes how new, intensional knowledge can be derived from the extensional data. Conjunctive queries are then answered against this combined knowledge base of extensional and intensional data. Many languages for representing ontologies have been introduced in the literature. In this thesis we focus on existential rules (also called tuple-generating dependencies or Datalog± rules), and on three established languages in this area, namely guarded-based rules, sticky rules and weakly-acyclic rules. The main goal of the thesis is to enrich these languages with non-deterministic constructs (i.e. disjunctions) and to investigate the complexity of answering conjunctive queries under these extended languages. As is common in the literature, we distinguish between combined complexity, where the database, the ontology and the query are all considered as input, and data complexity, where only the database is considered as input. The latter case is relevant in practice, as the ontology and the query can usually be considered fixed and are usually much smaller than the database itself. After giving appropriate definitions to extend the considered languages to disjunctive existential rules, we establish a series of complexity results, completing the complexity picture for each of the above languages and four different query languages: arbitrary conjunctive queries, bounded (hyper-)treewidth queries, acyclic queries and atomic queries. For the guarded-based languages, we show a strong 2EXPTIME lower bound for general queries that holds even for fixed ontologies, and establish 2EXPTIME-completeness of the query answering problem in this case.
For acyclic queries, the complexity can be reduced to EXPTIME, if the predicate arity is bounded, and the problem even becomes tractable for certain restricted languages, if only atomic queries are used. For ontologies represented by sticky disjunctive rules, we show that the problem becomes undecidable, even in the case of data complexity and atomic queries. Finally, for weakly-acyclic rules, we show that the complexity increases from 2EXPTIME to coN2EXPTIME in general, and from tractable to coNP in case of the data complexity, independent of which query language is used. After answering the open complexity questions, we investigate applications and relevant consequences of our results for description logics and give two generic complexity statements, respectively, for acyclic and general conjunctive query answering over description logic knowledge bases. These generic results allow for an easy determination of the complexity of this reasoning task, based on the expressivity of the considered description logic.
APA, Harvard, Vancouver, ISO, and other styles
16

Salama, Mohamed Ahmed Said. "Automatic test data generation from formal specification using genetic algorithms and case based reasoning." Thesis, University of the West of England, Bristol, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252562.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Pritchard, Jane Cynthia. "Inter-group communication between baby boomer leaders and generation Y followers: a cultural reasoning perspective." Thesis, Curtin University, 2014. http://hdl.handle.net/20.500.11937/44.

Full text
Abstract:
This study investigated the inter-group communication between Baby Boomer Leaders and Generation Y Followers. This qualitative study, set in the Energy Utilities industry in Australia, pioneered the use of two models, cultural reasoning and ServQual, as ways of increasing understanding of how these very different generations prefer to communicate. The findings showed that Baby Boomer Leaders were more formal, compliant and rational in their communication methods, whereas Generation Y Followers were more informal, questioning, intuitive and socio-centric.
APA, Harvard, Vancouver, ISO, and other styles
18

Mekni, Mehdi. "Automated Generation of Geometrically-Precise and Semantically-Informed Virtual Geographic Environnements Populated with Spatially-Reasoning Agents." Thesis, Université Laval, 2010. http://www.theses.ulaval.ca/2010/27351/27351.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Dufour-Lussier, Valmi. "Reasoning with qualitative spatial and temporal textual cases." Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0182/document.

Full text
Abstract:
This thesis proposes a practical model making it possible to implement a case-based reasoning system that adapts processes represented as natural language text in response to user queries. While the cases and the solutions are in textual form, the adaptation itself is performed on networks of temporal constraints expressed with a qualitative algebra, using a belief revision operator. Natural language processing methods are used to acquire case representations and to regenerate text based on the adaptation result.
APA, Harvard, Vancouver, ISO, and other styles
20

Bordes, Patrick. "Deep Multimodal Learning for Joint Textual and Visual Reasoning." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS370.

Full text
Abstract:
In the last decade, the evolution of Deep Learning techniques to learn meaningful data representations for text and images, combined with an important increase of multimodal data, mainly from social networks and e-commerce websites, has triggered a growing interest in the research community about the joint understanding of language and vision. The challenge at the heart of Multimodal Machine Learning is the intrinsic difference in semantics between language and vision: while vision faithfully represents reality and conveys low-level semantics, language is a human construction carrying high-level reasoning. On the one hand, language can enhance the performance of vision models. The underlying hypothesis is that textual representations contain visual information. We apply this principle to two Zero-Shot Learning tasks. In the first contribution on ZSL, we extend a common assumption in ZSL, which states that textual representations encode information about the visual appearance of objects, by showing that they also encode information about their visual surroundings and their real-world frequency. In a second contribution, we consider the transductive setting in ZSL. We propose a solution to the limitations of current transductive approaches, which assume that the visual space is well-clustered, which does not hold true when the number of unknown classes is high. On the other hand, vision can expand the capacities of language models. We demonstrate this by tackling Visual Question Generation (VQG), which extends the standard Question Generation task by using an image as complementary input, using visual representations derived from Computer Vision.
APA, Harvard, Vancouver, ISO, and other styles
21

Ertoptamis, Ozge. "Enhancing Creativity In The Concept Generation Phase: Implementation Of Black Box As A Tool For Analogical Reasoning." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608219/index.pdf.

Full text
Abstract:
In recent years, the field of design has reached new ground with the growing awareness among design researchers of the potential relationships between cognitive studies of creativity and computational modeling. This turn in the studies has given rise to the emergence of a new paradigm of modeling and understanding mental processes in creative design. This study tries to gain further insight into the creative occurrence by blending virtual experiences with designer actions in a model of creative thinking in the concept generation phase, based on the Geneplore Model by Finke et al. (1995) and supported by analogy construction incorporating the implementation of a computer-based tool (Black Box) running on a PC platform as a potential immanent part of the concept generation phase. Black Box is devised in such a way that the core of the constructive process of the analogy relies on the designer's expressional, perceptual and conceptual actions, which are presented in the traditional methods of sketching and writing, whereas the change and expansion of the design space is realized through the virtual worlds the tool offers via the computer screen. The research method is based on the development of the Black Box tool and its subsequent implementation in a study with eight experienced design consultants, utilizing a procedure composed of a preliminary interview, observational protocol analysis, a questionnaire and a retrospective interview. Through encoding the actions of individual designers by means of their maps in the computational tool, the study yields significant results in revealing the differing thinking maps of different designers, which have been used to propose a general creative thinking map of concept generation in Black Box, presented in a way that can be adapted for further studies.
Moreover, the study provided insight on the methods used to assist creativity in concept generation by different designers, on the selection of inspirational material and on the integration of analogies as knowledge transformers to evoke design concepts.
APA, Harvard, Vancouver, ISO, and other styles
22

Dufour-Lussier, Valmi. "Reasoning with qualitative spatial and temporal textual cases." Electronic Thesis or Diss., Université de Lorraine, 2014. http://www.theses.fr/2014LORR0182.

Full text
Abstract:
This thesis proposes a practical model making it possible to implement a case-based reasoning system that adapts processes represented as natural language text in response to user queries. While the cases and the solutions are in textual form, the adaptation itself is performed on networks of temporal constraints expressed with a qualitative algebra, using a belief revision operator. Natural language processing methods are used to acquire case representations and to regenerate text based on the adaptation result.
APA, Harvard, Vancouver, ISO, and other styles
23

Nguyen, Duc Minh Chau. "Affordance learning for visual-semantic perception." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2021. https://ro.ecu.edu.au/theses/2443.

Full text
Abstract:
Affordance Learning is linked to the study of interactions between robots and objects, including how robots perceive objects through scene understanding. This area has long been popular in Psychology, which has recently come to influence Computer Vision. In this way, Computer Vision has borrowed the concept of affordance from Psychology in order to develop Visual-Semantic recognition systems, and to develop the capabilities of robots to interact with objects, in particular. However, existing systems of Affordance Learning are still limited to detecting and segmenting object affordances, which is called Affordance Segmentation. Further, these systems are not designed to develop specific abilities to reason about affordances. For example, a Visual-Semantic system for captioning a scene can extract information from an image, such as "a person holds a chocolate bar and eats it", but does not highlight the affordances: "hold" and "eat". Indeed, these affordances and others commonly appear within all aspects of life, since affordances usually connect to actions (from a linguistic view, affordances are generally known as verbs in sentences). Due to the above-mentioned limitations, this thesis aims to develop systems of Affordance Learning for Visual-Semantic Perception. These systems can be built using Deep Learning, which has been empirically shown to be efficient for performing Computer Vision tasks. There are two goals of the thesis: (1) study the key factors that contribute to the performance of Affordance Segmentation and (2) reason about affordances (Affordance Reasoning) based on parts of objects for Visual-Semantic Perception. In terms of the first goal, the thesis mainly investigates the feature extraction module, as this is one of the earliest steps in learning to segment affordances. The thesis finds that the quality of feature extraction from images plays a vital role in improved performance of Affordance Segmentation.
With regard to the second goal, the thesis infers affordances from object parts to reason about part-affordance relationships. Based on this approach, the thesis devises an Object Affordance Reasoning Network that can learn to construct relationships between affordances and object parts. As a result, reasoning about affordance becomes achievable in the generation of scene graphs of affordances and object parts. Empirical results, obtained from extensive experiments, show the potential of the system (that the thesis developed) towards Affordance Reasoning from Scene Graph Generation.
APA, Harvard, Vancouver, ISO, and other styles
24

Robertson, Laura, Andrea Lowery, Lindsay Lester, and Renee Rice Moran. "The Intersection of 5Es Instruction, and the Claims, Evidence, and Reasoning Framework: A Hands-on Approach Supporting the NGSS in Upper Elementary Classrooms." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/etsu-works/1308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Michel, Frank [Verfasser], Carsten [Akademischer Betreuer] Rother, Stefan [Akademischer Betreuer] Gumhold, Carsten [Gutachter] Rother, and Carsten [Gutachter] Steger. "Hypothesis Generation for Object Pose Estimation From local sampling to global reasoning / Frank Michel ; Gutachter: Carsten Rother, Carsten Steger ; Carsten Rother, Stefan Gumhold." Dresden : Technische Universität Dresden, 2019. http://d-nb.info/1226897592/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Manthey, Norbert. "Towards Next Generation Sequential and Parallel SAT Solvers." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-158672.

Full text
Abstract:
This thesis focuses on improving the SAT solving technology. The improvements focus on two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict driven clause learning algorithm is extended with the three techniques local look-ahead, local probing and all UIP learning that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When using these three techniques in the formula simplification tool Coprocessor before using Riss to solve a formula, the performance can be improved further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is the extended clause sharing level based clause tagging, which builds the basis for conflict driven node killing. The latter allows to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead as a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture. Hence iterative partitioning is interesting for future parallel SAT solvers. 
The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed a good performance. Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt-Gödel-Medals. Hence, the introduced algorithms improved modern SAT solving technology.
APA, Harvard, Vancouver, ISO, and other styles
27

Howarth, Stephanie. "Believe it or not : examining the case for intuitive logic and effortful beliefs." Thesis, University of Plymouth, 2015. http://hdl.handle.net/10026.1/3322.

Full text
Abstract:
The overall objective of this thesis was to test the Default Interventionist (DI) account of belief-bias in human reasoning using the novel methodology introduced by Handley, Newstead & Trippas (2011). DI accounts focus on how our prior beliefs are the intuitive output that bias our reasoning process (Evans, 2006), whilst judgments based on logical validity require effortful processing. However, recent research has suggested that reasoning on the basis of beliefs may not be as fast and automatic as previous accounts claim. In order to investigate whether belief based judgments are resource demanding we instructed participants to reason on the basis of both the validity and believability of a conclusion whilst simultaneously engaging in a secondary task (Experiment 1 - 5). We used both a within and between subjects design (Experiment 5) examining both simple and complex arguments (Experiment 4 – 9). We also analysed the effect of incorporating additional instructional conditions (Experiment 7 – 9) and tested the relationships between various individual differences (ID) measures under belief and logic instruction (Experiment 4, 5, 7, 8, & 9). In line with Handley et al.’s findings we found that belief based judgments were more prone to error and that the logical structure of a problem interfered with judging the believability of its conclusion, contrary to the DI account of reasoning. However, logical outputs sometimes took longer to complete and were more affected by random number generation (RNG) (Experiment 5). To reconcile these findings we examined the role of Working Memory (WM) and Inhibition in Experiments 7 – 9 and found, contrary to Experiment 5, belief judgments were more demanding of executive resources and correlated with ID measures of WM and inhibition. Given that belief based judgments resulted in more errors and were more impacted on by the validity of an argument the behavioural data does not fit with the DI account of reasoning. 
Consequently, we propose that there are two routes to a logical solution and present an alternative Parallel Competitive model to explain the data. We conjecture that when instructed to reason on the basis of belief an automatic logical output completes and provides the reasoner with an intuitive logical cue which requires inhibiting in order for the belief based response to be generated. This creates a Type 1/Type 2 conflict, explaining the impact of logic on belief based judgments. When explicitly instructed to reason logically, it takes deliberate Type 2 processing to arrive at the logical solution. The engagement in Type 2 processing in order to produce an explicit logical output is impacted on by demanding secondary tasks (RNG) and any task that interferes with the integration of premise information (Experiments 8 and 9) leading to increased latencies. However the relatively simple nature of the problems means that accuracy is less affected. We conclude that the type of instructions provided along with the complexity of the problem and the inhibitory demands of the task all play key roles in determining the difficulty and time course of logical and belief based responses.
APA, Harvard, Vancouver, ISO, and other styles
28

Lundberg, Didrik. "Provably Sound and Secure Automatic Proving and Generation of Verification Conditions." Thesis, KTH, Teoretisk datalogi, TCS, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-239441.

Full text
Abstract:
Formal verification of programs can be done with the aid of an interactive theorem prover. The program to be verified is represented in an intermediate language representation inside the interactive theorem prover, after which statements and their proofs can be constructed. This is a process that can be automated to a high degree. This thesis presents a proof procedure to efficiently generate a theorem stating the weakest precondition for a program to terminate successfully in a state upon which a certain postcondition is placed. Specifically, the Poly/ML implementation of the SML metalanguage is used to generate a theorem in the HOL4 interactive theorem prover regarding the properties of a program written in BIR, an abstract intermediate representation of machine code used in the PROSPER project.
APA, Harvard, Vancouver, ISO, and other styles
29

Bughio, Kulsoom Saima. "IoMT security: A semantic framework for vulnerability detection in remote patient monitoring." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2024. https://ro.ecu.edu.au/theses/2841.

Full text
Abstract:
The increasing need to safeguard patient data in Internet of Medical Things (IoMT) devices highlights the critical importance of reducing vulnerabilities within these systems. The widespread adoption of IoMT has transformed healthcare by enabling continuous remote patient monitoring (RPM), which enhances patient outcomes and optimizes healthcare delivery. However, the integration of IoMT devices into healthcare systems presents significant security challenges, particularly in protecting sensitive patient data and ensuring the reliability of medical devices. The diversity of data formats used by various vendors in RPM complicates data aggregation and fusion, thereby hindering overall cybersecurity efforts. This thesis proposes a novel semantic framework for vulnerability detection in RPM settings within the IoMT system. The framework addresses interoperability, heterogeneity, and integration challenges through meaningful data aggregation. The core of this framework is a domain ontology that captures the semantics of concepts and properties related to the primary security aspects of IoT medical devices. This ontology is supported by a comprehensive ruleset and complex queries over aggregated knowledge. Additionally, the implementation integrates medical device data with the National Vulnerability Database (NVD) via an API, enabling real-time detection of vulnerabilities and improving the security of RPM systems. By capturing the semantics of medical devices and network components, the proposed semantic model facilitates partial automation in detecting network anomalies and vulnerabilities. A logic-based ruleset enhances the system’s robustness and efficiency, while its reasoning capabilities enable the identification of potential vulnerabilities and anomalies in IoMT systems, thereby improving security measures in remote monitoring settings. The semantic framework also supports knowledge graph visualization and efficient querying through SPARQL. 
The knowledge graph provides a structured representation of interconnected data and stores Cyber Threat Intelligence (CTI) to enhance data integration, visualization, and semantic enrichment. The query mechanism enables healthcare providers to extract valuable insights from IoMT data, notifying them about new system vulnerabilities or vulnerable medical devices. This demonstrates the impact of vulnerabilities on cybersecurity requirements (Confidentiality, Integrity, and Availability) and facilitates countermeasures based on severity. Consequently, the framework promotes timely decision-making, enhancing the overall efficiency and effectiveness of IoMT systems. The semantic framework is validated through various use cases and existing frameworks, demonstrating its effectiveness and robustness in vulnerability detection within the domain of IoMT security.
APA, Harvard, Vancouver, ISO, and other styles
30

Steyn, Paul Stephanes. "Validating reasoning heuristics using next generation theorem provers." Thesis, 2009. http://hdl.handle.net/10500/2793.

Full text
Abstract:
The specification of enterprise information systems using formal specification languages enables the formal verification of these systems. Reasoning about the properties of a formal specification is a tedious task that can be facilitated much through the use of an automated reasoner. However, set theory is a cornerstone of many formal specification languages and poses demanding challenges to automated reasoners. To this end a number of heuristics have been developed to aid the Otter theorem prover in finding short proofs for set-theoretic problems. This dissertation investigates the applicability of these heuristics to next generation theorem provers.
Computing
M.Sc. (Computer Science)
APA, Harvard, Vancouver, ISO, and other styles
31

Liaw, Jyh-Guann, and 廖志冠. "Applying Case-Based Reasoning to Generate Floor Plans." Thesis, 1996. http://ndltd.ncl.edu.tw/handle/73042548589606642989.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Civil Engineering, academic year 84. Existing computational models for floor plan layout generation rely on the generate-and-test method, and require knowledge engineers as human translators between designers and computers. To extend the problem solving abilities and to avoid the knowledge acquisition bottleneck of current design computation tools, systems allowing the problem-solver to learn from experience are necessary. One possibility of knowledge acquisition is to learn from design cases supplied by designers. Case-based reasoning provides a model for applying past experience directly to new problems, which emphasizes memory retrieval rather than computing solutions. The retrieved case may either match the current situation exactly or need modification. This leads to the fundamental assumption of this research that case-based reasoning and artificial neural networks can be applied to knowledge-based design systems. In order to investigate this assumption, this research proposes computational models for layout design which draw on the traditional symbolic approach of case-based reasoning in machine learning together with a numerical approach of artificial neural networks. In addition, computer systems are developed to demonstrate different approaches of the proposed model. The results show that applying case-based reasoning and artificial neural networks for building knowledge-based design systems is a feasible approach.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Kung, and 楊洸. "An Approximation Reasoning Approach for Generating Fuzzy Decision Rules." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/86110227192570464543.

Full text
Abstract:
Master's thesis, National Taiwan Normal University, Graduate Institute of Information and Computer Education, academic year 87. Most fuzzy classification systems proposed previously apply a crisp-cut approach to the fuzzy degrees of the fuzzy attributes and conclusions to generate decision rules. Although decision rules in conjunction-disjunction form can be derived from training samples by the crisp-cut approach, the membership functions of the conclusions cannot be generated. In this paper, a learning method named the Fuzzy Approximation Reasoning Method is proposed. Two requirements are satisfied by the method: (1) deriving fuzzy decision rules in conjunction-disjunction form from training samples, and (2) generating the membership functions for the conclusions. In the Fuzzy Approximation Reasoning Method, a dependency-degree function is designed for estimating the relationship between a conclusion and the fuzzy attributes. For the fuzzy attributes related to the conclusion, their membership functions are combined to construct the membership function of the conclusion, such that the associated fuzzy decision rule is derived. Moreover, the Fuzzy Approximation Reasoning Method can also be used to mine fuzzy association rules. In this paper, the Approximation Inducing Method is proposed to demonstrate how to mine fuzzy association rules by applying the Fuzzy Approximation Reasoning Method.
APA, Harvard, Vancouver, ISO, and other styles
33

Lin, Hsien-ta, and 林顯達. "Case-Based Reasoning for the Generation of Function Structures." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/03649886085400149681.

Full text
Abstract:
Master's thesis, National Taiwan University, Department of Mechanical Engineering, academic year 86. Based on the design methodology introduced by Pahl & Beitz and on Case-Based Reasoning (CBR), a systematic approach for generating the function structures of mechanical products is proposed. Generation of function structures is the main process of the conceptual design stage, which determines the principle solution of a design. In this stage, designers transform design specifications into a more concrete concept represented by an assembly of functions which can achieve the purpose of the design requirements. CBR, as a design process, aims at assisting engineers by recalling and reusing previous design experience. It includes the subtasks of representing, indexing, retrieving and adapting design cases. Under the structure of CBR, a system is proposed that receives design specifications of problems as input, retrieves similar design cases using structure mapping, and returns the generated function structures as output. Some examples have been tested to verify the feasibility of this approach.
APA, Harvard, Vancouver, ISO, and other styles
34

Striegnitz, Kristina [Verfasser]. "Generating anaphoric expressions : contextual reasoning in sentence planning / vorgelegt von Kristina Striegnitz." 2005. http://d-nb.info/977077519/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Michel, Frank. "Hypothesis Generation for Object Pose Estimation From local sampling to global reasoning." Doctoral thesis, 2017. https://tud.qucosa.de/id/qucosa%3A33169.

Full text
Abstract:
Pose estimation has been studied since the early days of computer vision. The task of object pose estimation is to determine the transformation that maps an object from its inherent coordinate system into the camera-centric coordinate system. This transformation describes the translation of the object relative to the camera and the orientation of the object in three-dimensional space. Knowledge of an object's pose is a key ingredient in many application scenarios such as robotic grasping, augmented reality, autonomous navigation, and surveillance. A general estimation pipeline consists of four steps: extraction of distinctive points, creation of a hypothesis pool, hypothesis verification, and, finally, hypothesis refinement. In this work, we focus on the hypothesis generation process and show that it is beneficial to utilize geometric knowledge in this process. We address the problem of hypothesis generation for articulated objects. Instead of considering each object part individually, we model the object as a kinematic chain. This enables us to use the inter-part relationships when sampling pose hypotheses, so that we need only K correspondences for objects consisting of K parts. We show that applying geometric knowledge about part relationships improves estimation accuracy under severe self-occlusion and low-quality correspondence predictions. In an extension, we employ global reasoning within the hypothesis generation process instead of sampling 6D pose hypotheses locally. We formulate a Conditional Random Field (CRF) operating on the image as a whole, inferring those pixels that are consistent with the 6D pose. Within the CRF we use a strong geometric check that can assess the quality of correspondence pairs. We show that our global geometric check improves the accuracy of pose estimation under heavy occlusion.
APA, Harvard, Vancouver, ISO, and other styles
36

Huang, Ming-Hung, and 黃名宏. "Automatically Generating the Chinese News Summary Based on Fuzzy Reasoning and Domain Ontology." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/42v5c3.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Computer Science and Information Engineering, 2011. In this thesis, we present a new method for automatically generating Chinese weather news summaries based on fuzzy reasoning and domain ontologies, where the weather ontology, the time ontology, and the geography ontology are predefined by domain experts. We slice the original Chinese weather news articles into a set of terms. We then use two ontological features (the degree of depth and the degree of width of the ontology) and one statistical feature (term frequency) as inputs to the system; the values of these features are represented by fuzzy sets. The fuzzy reasoning mechanism then calculates a score for each sentence, and the summary is composed of the candidate sentences with the highest scores. The experimental data are taken from a Chinese weather news website in Taiwan. The experimental results show that the proposed method outperforms the methods presented in [14] and [15] for automatically generating Chinese news summaries.
APA, Harvard, Vancouver, ISO, and other styles
37

Li, Yulin Ph D. "The diagrammatic specification and automatic generation of geometry subroutines." Thesis, 2010. http://hdl.handle.net/2152/ETD-UT-2010-05-1287.

Full text
Abstract:
Programming has advanced a great deal since the appearance of the stored-program architecture. Through the successive generations of machine codes, assembly languages, high-level languages, and object-oriented languages, the drive has been toward program descriptions that express more meaning in a shorter space. This trend continues today with domain-specific languages. However, conventional languages rely on a textual formalism (commands, statements, lines of code) to capture the programmer's intent, which, regardless of its level of abstraction, imposes inevitable overheads. Before successful programming activities can take place, the syntax has to be mastered, names and keywords memorized, the library routines mastered, etc. Existing visual programming languages avoid some of these overheads, but do not release the programmer from the task of specifying the program logic, which consumes the main portion of programming time and also is the major source of difficult bugs. Our work aims to minimize the demands a formalism imposes on the programmer of geometric subroutines other than what is inherent in the problem itself. Our approach frees the programmer from syntactic constraints and generates logically correct programs automatically from program descriptions in the form of diagrams. To write a program, the programmer simply draws a few diagrams to depict the problem context and specifies all the necessary parameters through menu operations. Diagrams are succinct, easy to learn, and intuitive to use. They are much easier to modify than code, and they help the user visualize and analyze the problem, in addition to providing information to the computer. Furthermore, diagrams describe a situation rather than a task and thus are reusable for different tasks—in general, a single diagram can generate many programs. For these reasons, we have chosen diagrams as the main specification mechanism. 
In addition, we leverage the power of automatic inference to reason about diagrams and generic components—the building blocks of our programs—and discover the logic for assembling these components into correct programs. To facilitate inference, symbolic facts encode entities present in the diagrams, their spatial relationships, and the preconditions and effects of reusable components. We have developed a reference implementation and tested it on a number of real-world examples to demonstrate the feasibility and efficacy of our approach.
APA, Harvard, Vancouver, ISO, and other styles
38

Alves, Paulo. "E-generation: especificação de uma arquitectura para intranets educacionais baseada em agentes." Doctoral thesis, 2007. http://hdl.handle.net/10198/1118.

Full text
Abstract:
The rapid evolution of information and communication technologies (ICT) across many sectors has led to the emergence of a set of terms such as e-learning, e-commerce, e-government, and e-business. To address the need to integrate ICT into higher education institutions, the E-generation architecture is proposed, with the goal of improving teaching, management, and research processes through the adoption of educational intranets. This approach centres on the change of educational paradigm arising from the Bologna Process, which places the student at the centre. Since most e-learning platforms are used as simple content repositories, the aim is, through an approach based on learning activities, to encourage a change in the educational process so that it becomes centred on activities supported by structured resources. Beyond this change in the educational process, the work also studies the impact that tutor agents supported by case-based reasoning can have in supporting the student, namely by adapting the learning environment, detecting difficulties, and suggesting resources that reinforce knowledge, playing the role of the student's "guardian angel". To study the impact of ICT on changing teaching, management, and research processes, a study was carried out on the use of the Domus educational intranet and the iDomus prototype, which made it possible to assess to what extent an educational approach based on learning activities and supported by tutor agents can contribute to the change of educational paradigm, in line with the objectives of the Bologna Process and the knowledge-based society.
In implementing the service-oriented E-generation architecture, it was possible to verify that the development of environments based on educational intranets contributes to more effective teaching, management, and research processes in higher education institutions, building on e-learning, e-management, and e-research technologies supported by tutor agents. (Supported by PRODEP.)
APA, Harvard, Vancouver, ISO, and other styles
39

Weiser, Gary. "Developing NGSS-Aligned Assessments to Measure Crosscutting Concepts in Student Reasoning of Earth Structures and Systems." Thesis, 2019. https://doi.org/10.7916/d8-h5d3-5j83.

Full text
Abstract:
The past two decades of research on how students develop their science understandings as they make sense of phenomena that occur in the natural world has culminated in a movement to redefine science educational standards. The so-called Next Generation Science Standards (or NGSS) codify this new definition into a set of distinct performance expectations, which outline how students might reveal to what extent they have sufficient understanding of disciplinary core ideas (DCIs), science practices (SEPs), and crosscutting concepts (CCCs). The latter of these three dimensions is unique both in being the most recent to the field and in being the least supported by prior science education research. More crucially, as a policy document, the NGSS alone does not provide the supports teachers need to bring reforms to their classrooms, particularly not summative assessments. This dissertation addresses both of these gaps using a combination of quantitative and qualitative techniques. First, I analyze differential categorization of problems that require respondents to engage with their CCC understandings via confirmatory factor analysis inference. Second, I use a set of Rasch models to measure preliminary learning progressions for CCCs evident in student activity within a computer-assisted assessment experience. Third, I analyze student artifacts, think-aloud interviews, and post-task reflective interviews via activity theory to adapt the progression into a task model in which students explain and predict aspects of Earth systems. The culmination of these three endeavors not only sets forth a methodology for researching CCCs in a way that is more integrative to the other dimensions of the NGSS, but also provides a framework for developing assessments that are aligned to the goals of these new standards.
APA, Harvard, Vancouver, ISO, and other styles
40

Chih-Chung, Lai, and 賴志忠. "Junior High School Students’ Informal Reasoning ability Analysis on Socio-scientific Issue: 「Generate electric by wind」 and 「Eco-alamedas」." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/70222743851665836779.

Full text
Abstract:
Master's thesis, National Taichung University of Education, Master's Program in Science Education, Department of Science Application and Dissemination, 2007. This study investigates 8th-grade students' informal reasoning ability on socio-scientific issues, and how reasoning patterns and informal reasoning ability are influenced by biodiversity knowledge and biodiversity attitudes. The formal questionnaire yielded 84 valid samples. Research tools include a biodiversity knowledge questionnaire, a biodiversity attitudes questionnaire, and an open-ended socio-scientific issues questionnaire. By analysing students' informal reasoning content, quantifying that content as scores, and conducting individual interviews, we sought to identify the informal reasoning patterns and the dividing lines in biodiversity knowledge and biodiversity attitude scores between the high-, middle-, and low-score groups. The results of this study are as follows: (1) Analysis of the students' informal reasoning processes shows that students usually use rational reasoning patterns on both the 「Generate electric by wind」 issue (90.25%) and the 「Eco-alamedas」 issue (90.84%). (2) Informal reasoning scores differ little across individual background variables such as gender, weekly time spent reading science books, and participation in science activities in the previous year, and the differences are not statistically significant. (3) In linear regression analysis, biodiversity knowledge score and school subjects (Chinese, science, and school term average achievement) significantly predict informal reasoning score. This shows that these variables are critical to the level and quality of students' informal reasoning ability, with biodiversity knowledge score being the most influential factor.
Within the biodiversity knowledge score groups, there is a threshold between groups: between the middle- and low-score groups for the 「Generate electric by wind」 issue, and between the high- and middle-score groups for the 「Eco-alamedas」 issue. This shows that there is indeed a knowledge dividing line separating students' reasoning ability. (4) In one-way analysis of variance on the two issues, only the 「Generate electric by wind」 issue showed significant differences among the high-, middle-, and low-score biodiversity attitude groups; the threshold lies between the high and low groups, while the attitude groups showed no significant difference on the 「Eco-alamedas」 issue. This implies that biodiversity knowledge has more influence on students' informal reasoning quality than biodiversity attitudes.
APA, Harvard, Vancouver, ISO, and other styles
41

Manthey, Norbert. "Towards Next Generation Sequential and Parallel SAT Solvers." Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A28471.

Full text
Abstract:
This thesis focuses on improving the SAT solving technology. The improvements focus on two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict driven clause learning algorithm is extended with the three techniques local look-ahead, local probing and all UIP learning that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When using these three techniques in the formula simplification tool Coprocessor before using Riss to solve a formula, the performance can be improved further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is the extended clause sharing level based clause tagging, which builds the basis for conflict driven node killing. The latter allows to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead as a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture. Hence iterative partitioning is interesting for future parallel SAT solvers. 
The implemented solvers participated in international SAT competitions. In 2013 and 2014, Pcasso showed good performance. Riss in combination with Coprocessor won several first, second, and third prizes, including two Kurt-Gödel-Medals. Hence, the introduced algorithms improved modern SAT solving technology.
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Hsuan-Ting active 2013. "Capturing the nature of issue publics : selectivity, deliberation, and activeness in the new media environment." Thesis, 2013. http://hdl.handle.net/2152/21372.

Full text
Abstract:
This dissertation seeks to understand how issue publics contribute to citizen competence and the functioning of democracy. In the first part of the dissertation, a new measurement was constructed by theoretically and empirically analyzing the attributes of issue public members. Through hypothesis testing, the new measure proved more reliable in identifying issue public members than previous measurement strategies. Employing the new measure, results show that issue public members, concerned about a specific issue, exercised their issue-specificity in seeking information (i.e., issue-based selectivity), with exposure to both attitude-consistent and counter-attitudinal perspectives. Issue public membership also had significant effects on issue-specific knowledge and on generating rationales for their own and others' oppositional viewpoints. These direct effects were mediated by issue-based selectivity. The relationships highlight the importance of issue publics in contributing to deliberative democracy. In addition, issue publics play a significant role in contributing to participatory democracy in that issue public members have greater intentions to participate in issue-related activities than nonmembers. However, while issue publics come close to solving the deliberative-participatory paradox, their information selectivity and argument generation were unbalanced, favoring pro-attitudinal perspectives over counter-attitudinal ones. The second part of the dissertation examined conditional factors--accuracy and directional goals--in affecting information selectivity and processing. The findings show that directional goals influenced participants to apply either selective approach or selective avoidance in seeking information, depending on the issue. Accuracy goals exerted a main effect on the issue that is relatively less controversial and less obtrusive.
They also interacted with issue public membership in influencing the less controversial and less obtrusive issue. Argument generation was not affected by accuracy or directional goals. Overall, by conceptualizing citizens as members of different issue publics, individuals appear more competent than we thought. Their intrinsic interest in an issue serves as a strong factor affecting their information selectivity, information processing, and political actions. Despite finding an optimistic role for issue publics in the democratic process, their limitations also should be recognized. The implications for deliberative and participatory democracy are discussed.
APA, Harvard, Vancouver, ISO, and other styles
43

Perrier, Charlotte. "Une activité d'élaboration d'hypothèses pour soutenir le développement du RCI d'étudiantes en sciences infirmières." Thèse, 2013. http://hdl.handle.net/1866/10998.

Full text
Abstract:
Teaching and learning clinical nursing reasoning has been a major concern among nurse educators for many years. Hypothesis generation, namely finding the explanations that can account for the coexistence of a combination of clinical data, is a critical milestone in clinical nursing reasoning that students still struggle with at the end of their program. In this qualitative exploratory study, we tested a short clinical vignette-based learning activity that gives students an opportunity to specifically practice generating clinical hypotheses. The study aimed to document the capacity of third-year baccalaureate nursing students to formulate clinical hypotheses during the activity. Seventeen nursing students were recruited by convenience and grouped according to their availability; in total, four sessions were held. Participants were asked to reflect on a brief clinical vignette and to build an algorithm that included (1) their hypotheses regarding the nature of the clinical problem, (2) the essential pieces of information to collect in order to verify each hypothesis, and (3) the way that information was to be found. Data were collected through participant observation, audio-video recording of the activity, and a self-administered questionnaire completed immediately afterwards. Data were then classified in matrices in the form of verbatim transcripts and field notes, using the clinical nursing reasoning strategies described by Fonteyn (1998) as the theoretical framework. Results suggest that the vignette-based activity stimulates the formulation of clinical hypotheses and the reactivation of prior knowledge, as well as the sharing of knowledge among students. This type of activity could therefore be useful, as a complement to other educational activities, in promoting the development of clinical reasoning in nursing education programs.
APA, Harvard, Vancouver, ISO, and other styles