Theses on the topic "Partially Observable Markov Decision Processes (POMDPs)"

Cite a source in APA, MLA, Chicago, Harvard, and many other citation styles

Consult the top 36 dissertations (master's and doctoral theses) for research on the topic "Partially Observable Markov Decision Processes (POMDPs)".

Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, when one is available in the metadata.

Browse theses from many scientific fields and compile an accurate bibliography.

1

Aberdeen, Douglas Alexander. "Policy-Gradient Algorithms for Partially Observable Markov Decision Processes." The Australian National University, Research School of Information Sciences and Engineering, 2003. http://thesis.anu.edu.au./public/adt-ANU20030410.111006.

Full text
Abstract:
Partially observable Markov decision processes are interesting because of their ability to model most conceivable real-world learning problems, for example, robot navigation, driving a car, speech recognition, stock trading, and playing games. The downside of this generality is that exact algorithms are computationally intractable. Such computational complexity motivates approximate approaches. One such class of algorithms are the so-called policy-gradient methods from reinforcement learning. They seek to adjust the parameters of an agent in the direction that maximises the long-term average …
APA, Harvard, Vancouver, ISO, and other styles
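
The closing sentence of this abstract is the core of every policy-gradient method: nudge the policy parameters along an estimate of the gradient of long-term reward. As an illustration only (a minimal REINFORCE-style sketch with hypothetical names, not the estimator developed in the thesis):

```python
# Minimal sketch: one REINFORCE-style gradient-ascent step for a memoryless
# softmax policy pi(a | o) = softmax(theta[o])[a]. Names and shapes are
# assumptions for illustration, not the thesis's notation.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def policy_gradient_step(theta, trajectory, alpha=0.01):
    """theta: (n_obs, n_actions); trajectory: list of (obs, action, reward)."""
    grad = np.zeros_like(theta)
    reward_to_go = 0.0
    # Walk backwards, accumulating rewards-to-go and the score-function
    # gradient  grad log pi(a|o) = e_a - pi(.|o)  for a softmax policy.
    for obs, action, reward in reversed(trajectory):
        reward_to_go += reward
        score = -softmax(theta[obs])
        score[action] += 1.0
        grad[obs] += reward_to_go * score
    return theta + alpha * grad  # ascend the estimated gradient

# Example: two observations, two actions, one short synthetic trajectory.
theta = policy_gradient_step(np.zeros((2, 2)), [(0, 1, 0.0), (1, 0, 1.0)])
```

Policy-gradient methods for the long-term average-reward setting, such as those this thesis studies, typically replace the episodic reward-to-go above with a discounted eligibility trace maintained online from a single continuing trajectory.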
2

Olafsson, Björgvin. "Partially Observable Markov Decision Processes for Faster Object Recognition." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-198632.

Full text
Abstract:
Object recognition in the real world is a big challenge in the field of computer vision. Given the potentially enormous size of the search space, it is essential to be able to make intelligent decisions about where in the visual field to obtain information from, so as to reduce the computational resources needed. In this report a POMDP (Partially Observable Markov Decision Process) learning framework, using a policy gradient method and information rewards as a training signal, has been implemented and used to train fixation policies that aim to maximize the information gathered in each fixation. …
APA, Harvard, Vancouver, ISO, and other styles
3

Lusena, Christopher. "Finite Memory Policies for Partially Observable Markov Decision Processes." UKnowledge, 2001. http://uknowledge.uky.edu/gradschool_diss/323.

Full text
Abstract:
This dissertation makes contributions to areas of research on planning with POMDPs: complexity-theoretic results and heuristic techniques. The most important contributions are probably the complexity of approximating the optimal history-dependent finite-horizon policy for a POMDP, and the idea of heuristic search over the space of FFTs.
APA, Harvard, Vancouver, ISO, and other styles
4

Torre Tresols, Juan Jesús. "The partially observable brain : An exploratory study on the use of partially observable Markov decision processes as a general framework for brain-computer interfaces." Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0038.

Full text
Abstract:
Despite recent progress, brain-computer interfaces (BCIs) still face several limitations that hinder their transition from the laboratory to real-world applications. The persistent problems of false positives and fixed decoding times are particularly concerning. This thesis proposes integrating a structured decision-making framework, specifically the partially observable Markov decision process (POMDP), into BCI technology. Our goal is to design a complete POMDP model adaptable to various BCI modalities, serving as the decision-making component in …
APA, Harvard, Vancouver, ISO, and other styles
5

Skoglund, Caroline. "Risk-aware Autonomous Driving Using POMDPs and Responsibility-Sensitive Safety." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300909.

Full text
Abstract:
Autonomous vehicles promise to play an important role in increasing efficiency and safety in road transportation. Although we have seen several examples of autonomous vehicles on the road over the past years, how to ensure the safety of an autonomous vehicle in an uncertain and dynamic environment is still a challenging problem. This thesis studies this problem by developing a risk-aware decision-making framework. The system that integrates the dynamics of an autonomous vehicle and the uncertain environment is modelled as a Partially Observable Markov Decision Process (POMDP). …
APA, Harvard, Vancouver, ISO, and other styles
6

You, Yang. "Probabilistic Decision-Making Models for Multi-Agent Systems and Human-Robot Collaboration." Electronic Thesis or Diss., Université de Lorraine, 2023. http://www.theses.fr/2023LORR0014.

Full text
Abstract:
In this thesis, we study high-level decision making (task planning) for robotics using Markov decision-making models, from two angles: robot-robot collaboration and human-robot collaboration. In the robot-robot collaboration (RRC) setting, we study decision problems in which several robots must reach a common goal collaboratively, and we use the framework of decentralized partially observable Markov decision processes (Dec-POMDPs) to model such problems. We propose …
APA, Harvard, Vancouver, ISO, and other styles
7

Cheng, Hsien-Te. "Algorithms for partially observable Markov decision processes." Thesis, University of British Columbia, 1988. http://hdl.handle.net/2429/29073.

Full text
Abstract:
The thesis develops methods to solve discrete-time finite-state partially observable Markov decision processes. For the infinite horizon problem, only the discounted reward case is considered. Several new algorithms for the finite horizon and the infinite horizon problems are developed. For the finite horizon problem, two new algorithms are developed. The first algorithm is called the relaxed region algorithm. For each support in the value function, this algorithm determines a region not smaller than its support region and modifies it implicitly in later steps until the exact support region is found …
APA, Harvard, Vancouver, ISO, and other styles
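
For readers unfamiliar with this abstract's terminology: a finite-horizon POMDP value function is piecewise linear and convex, so it can be represented by a finite set of supporting vectors ("supports"), and each vector's support region is the set of beliefs where it attains the maximum. In common notation (an aid for the reader, not necessarily the thesis's own symbols):

```latex
V_t(b) = \max_{\alpha \in \Gamma_t} \sum_{s \in S} \alpha(s)\, b(s),
\qquad
R(\alpha) = \bigl\{\, b \in \Delta(S) : \alpha \cdot b \ge \alpha' \cdot b
\ \text{ for all } \alpha' \in \Gamma_t \,\bigr\}.
```

The relaxed region algorithm, as the abstract describes it, starts from a region no smaller than \(R(\alpha)\) and tightens it over later steps until the exact support region is recovered.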
8

Jaulmes, Robin. "Active learning in partially observable Markov decision processes." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98733.

Full text
Abstract:
People are efficient when they make decisions under uncertainty, even when their decisions have long-term ramifications, or when their knowledge and their perception of the environment are uncertain. We are able to experiment with the environment and learn, improving our behavior as experience is gathered. Most of the problems we face in real life are of that kind, as are most of the problems that an automated agent would face in robotics. Our goal is to build Artificial Intelligence algorithms able to reproduce the reasoning of humans for these complex problems. We use the Reinforcement Learning …
APA, Harvard, Vancouver, ISO, and other styles
9

Aberdeen, Douglas Alexander. "Policy-gradient algorithms for partially observable Markov decision processes." Australian Digital Theses Program, 2003. http://thesis.anu.edu.au/public/adt-ANU20030410.111006/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zawaideh, Zaid. "Eliciting preferences sequentially using partially observable Markov decision processes." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=18794.

Full text
Abstract:
Decision support systems have been gaining in importance recently. Yet one of the bottlenecks in designing such systems lies in understanding how the user values different decision outcomes, or more simply, what the user's preferences are. Preference elicitation promises to remove the guesswork of designing decision-making agents by providing more formal methods for measuring the 'goodness' of outcomes. This thesis aims to address some of the challenges of preference elicitation, such as the high dimensionality of the underlying problem. The problem is formulated as a partially observable Markov decision process …
APA, Harvard, Vancouver, ISO, and other styles
11

Williams, Jason Douglas. "Partially observable Markov decision processes for spoken dialogue management." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Lusena, Christopher. "Finite memory policies for partially observable Markov decision processes." Lexington, Ky. : [University of Kentucky Libraries], 2001. http://lib.uky.edu/ETD/ukycosc2001d00021/lusena01.pdf.

Full text
Abstract:
Thesis (Ph. D.)--University of Kentucky, 2001. Title from document title page. Document formatted into pages; contains viii, 89 p. : ill. Includes abstract. Includes bibliographical references (p. 81-86).
APA, Harvard, Vancouver, ISO, and other styles
13

Yu, Huizhen. "Approximate solution methods for partially observable Markov and semi-Markov decision processes." Thesis, Massachusetts Institute of Technology, 2006. http://hdl.handle.net/1721.1/35299.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 165-169). We consider approximation methods for discrete-time infinite-horizon partially observable Markov and semi-Markov decision processes (POMDP and POSMDP). One of the main contributions of this thesis is a lower cost approximation method for finite-space POMDPs with the average cost …
APA, Harvard, Vancouver, ISO, and other styles
14

Tobin, Ludovic. "A Stochastic Point-Based Algorithm for Partially Observable Markov Decision Processes." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25194/25194.pdf.

Full text
Abstract:
Decision making in a partially observable environment is a topical subject in artificial intelligence. One way to tackle this kind of problem is to use a mathematical model. In particular, POMDPs (Partially Observable Markov Decision Processes) have been the subject of much research in recent years. However, solving a POMDP is a very complex problem, and for this reason the model has not been used widely. Our objective was to continue the progress made in recent years, in the hope that our research …
APA, Harvard, Vancouver, ISO, and other styles
15

Olsen, Alan. "Pond-Hindsight: Applying Hindsight Optimization to Partially-Observable Markov Decision Processes." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1035.

Full text
Abstract:
Partially-observable Markov decision processes (POMDPs) are especially good at modeling real-world problems because they allow for sensor and effector uncertainty. Unfortunately, such uncertainty makes solving a POMDP computationally challenging. Traditional approaches, which are based on value iteration, can be slow because they find optimal actions for every possible situation. With the help of the Fast Forward (FF) planner, FF-Replan and FF-Hindsight have shown success in quickly solving fully-observable Markov decision processes (MDPs) by solving classical planning translations of the problem …
APA, Harvard, Vancouver, ISO, and other styles
16

Hudson, Joshua. "A Partially Observable Markov Decision Process for Breast Cancer Screening." Thesis, Linköpings universitet, Statistik och maskininlärning, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-154437.

Full text
Abstract:
In the US, breast cancer is one of the most common forms of cancer and the most lethal. There are many decisions that must be made by the doctor and/or the patient when dealing with a potential breast cancer. Many of these decisions are made under uncertainty, whether it is the uncertainty related to the progression of the patient's health or that related to the accuracy of the doctor's tests. Each possible action under consideration can have positive effects, such as a surgery successfully removing a tumour, and negative effects: a post-surgery infection, for example. The human mind simply cannot …
APA, Harvard, Vancouver, ISO, and other styles
17

Castro, Rivadeneira Pablo Samuel. "On planning, prediction and knowledge transfer in fully and partially observable Markov decision processes." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=104525.

Full text
Abstract:
This dissertation addresses the problem of sequential decision making under uncertainty in large systems. The formalisms used to study this problem are fully and partially observable Markov decision processes (MDPs and POMDPs, respectively). The first contribution of this dissertation is a theoretical analysis of the behavior of POMDPs when only subsets of the observation set are used. One of these subsets is used to update the agent's state estimate, while the other subset contains observations the agent is interested in predicting and/or optimizing. The behaviors are formalized as three types …
APA, Harvard, Vancouver, ISO, and other styles
18

Horgan, Casey Vi. "Dealing with uncertainty : a comparison of robust optimization and partially observable Markov decision processes." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112410.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 131-132). Uncertainty is often present in real-life problems. Deciding how to deal with this uncertainty can be difficult. The proper formulation of a problem can be the larger part of the work required to solve it. This thesis is intended to be used by a decision maker to determine how best to formulate a problem. Robust optimization and partially observable Markov decision processes (POMDPs) are two methods …
APA, Harvard, Vancouver, ISO, and other styles
19

Crook, Paul A. "Learning in a state of confusion : employing active perception and reinforcement learning in partially observable worlds." Thesis, University of Edinburgh, 2007. http://hdl.handle.net/1842/1471.

Full text
Abstract:
In applying reinforcement learning to agents acting in the real world we are often faced with tasks that are non-Markovian in nature. Much work has been done using state estimation algorithms to try to uncover Markovian models of tasks in order to allow the learning of optimal solutions using reinforcement learning. Unfortunately these algorithms, which attempt to simultaneously learn a Markov model of the world and how to act, have proved very brittle. Our focus differs. In considering embodied, embedded and situated agents we have a preference for simple learning algorithms which reliably learn …
APA, Harvard, Vancouver, ISO, and other styles
20

Omidshafiei, Shayegan. "Decentralized control of multi-robot systems using partially observable Markov Decision Processes and belief space macro-actions." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101447.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2015. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (pages 129-139). Planning, control, perception, and learning for multi-robot systems present significant challenges. Transition dynamics of the robots may be stochastic, making it difficult to select the best action each robot should take at a given …
APA, Harvard, Vancouver, ISO, and other styles
21

Folsom-Kovarik, Jeremiah. "Leveraging Help Requests in POMDP Intelligent Tutors." Doctoral diss., University of Central Florida, 2012. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5210.

Full text
Abstract:
Intelligent tutoring systems (ITSs) are computer programs that model individual learners and adapt instruction to help each learner differently. One way ITSs differ from human tutors is that few ITSs give learners a way to ask questions. When learners can ask for help, their questions have the potential to improve learning directly and also act as a new source of model data to help the ITS personalize instruction. Inquiry modeling gives ITSs the ability to answer learner questions and refine their learner models with an inexpensive new input channel. …
APA, Harvard, Vancouver, ISO, and other styles
22

Pradhan, Neil. "Deep Reinforcement Learning for Autonomous Highway Driving Scenario." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-289444.

Full text
Abstract:
We present an autonomous driving agent on a simulated highway driving scenario with vehicles such as cars and trucks moving with stochastically variable velocity profiles. The focus of the simulated environment is to test tactical decision making in highway driving scenarios. When an agent (vehicle) maintains an optimal range of velocity, it is beneficial both in terms of energy efficiency and a greener environment. In order to maintain an optimal range of velocity, in this thesis work I proposed two novel reward structures: (a) a Gaussian reward structure and (b) an exponential rise-and-fall reward structure …
APA, Harvard, Vancouver, ISO, and other styles
23

Murugesan, Sugumar. "Opportunistic Scheduling Using Channel Memory in Markov-modeled Wireless Networks." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1282065836.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Ibrahim, Rita. "Utilisation des communications Device-to-Device pour améliorer l'efficacité des réseaux cellulaires." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLC002/document.

Full text
Abstract:
This thesis studies direct communications between mobile devices, known as D2D communications, as a promising technique for improving future cellular networks. This technology allows two mobile terminals to communicate directly without going through the base station. Modeling, evaluating, and optimizing the various aspects of D2D communications are the fundamental objectives of this thesis, pursued mainly with the following mathematical tools: queueing theory, Lyapunov optimization, and Markov decision processes …
APA, Harvard, Vancouver, ISO, and other styles
25

Gonçalves, Luciano Vargas. "Uma arquitetura de Agentes BDI para auto-regulação de Trocas Sociais em Sistemas Multiagentes Abertos." Universidade Catolica de Pelotas, 2009. http://tede.ucpel.edu.br:8080/jspui/handle/tede/105.

Full text
Abstract:
The study and development of systems to control interactions in multiagent systems is an open problem in Artificial Intelligence. The system of social exchange values of Piaget is a social approach that allows for the foundations of the modeling of interactions between agents, where the interactions are seen as service exchanges between pairs of agents, with the evaluation of the realized or received …
APA, Harvard, Vancouver, ISO, and other styles
26

Sachan, Mohit. "Learning in Partially Observable Markov Decision Processes." 2013. http://hdl.handle.net/1805/3451.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI). Learning in partially observable Markov decision processes (POMDPs) is motivated by the essential need to address a number of realistic problems. A number of methods exist for learning in POMDPs, but learning with a limited amount of information about the model of the POMDP remains a highly anticipated feature. Learning with minimal information is desirable in complex systems, as methods requiring complete information among decision makers are impractical in complex systems due to the increase in problem dimensionality. In this thesis we address …
APA, Harvard, Vancouver, ISO, and other styles
27

Koltunova, Veronika. "Active Sensing for Partially Observable Markov Decision Processes." Thesis, 2013. http://hdl.handle.net/10012/7222.

Full text
Abstract:
Context information on a smart phone can be used to tailor applications for specific situations (e.g. provide tailored routing advice based on location, gas prices and traffic). However, typical context-aware smart phone applications use very limited context information, such as user identity, location and time. In the future, smart phones will need to decide which of a wide range of sensors to gather information from in order to best accommodate user needs and preferences in a given context. In this thesis, we present a model for active sensor selection within decision-making processes, in which …
APA, Harvard, Vancouver, ISO, and other styles
28

Aberdeen, Douglas. "Policy-Gradient Algorithms for Partially Observable Markov Decision Processes." PhD thesis, 2003. http://hdl.handle.net/1885/48180.

Full text
Abstract:
Partially observable Markov decision processes are interesting because of their ability to model most conceivable real-world learning problems, for example, robot navigation, driving a car, speech recognition, stock trading, and playing games. The downside of this generality is that exact algorithms are computationally intractable. Such computational complexity motivates approximate approaches. One such class of algorithms are the so-called policy-gradient methods from reinforcement learning. They seek to adjust the parameters of an agent in the direction that maximises the long-term average …
APA, Harvard, Vancouver, ISO, and other styles
29

Kinathil, Shamin. "Closed-form Solutions to Sequential Decision Making within Markets." PhD thesis, 2018. http://hdl.handle.net/1885/186490.

Full text
Abstract:
Sequential decision making is a pervasive and inescapable requirement of everyday life. Deciding upon which sequence of actions to take is complicated by incomplete information about the environment, the effects of each decision upon the future state of the environment, ill-defined objectives and our own cognitive limitations. These challenges are exacerbated in financial markets, which are in a constant state of flux, with prices adjusting to new information, winning traders replacing losing traders and the introduction of new technologies. Decision theoretic planning provides powerful and flexible …
APA, Harvard, Vancouver, ISO, and other styles
30

Daswani, Mayank. "Generic Reinforcement Learning Beyond Small MDPs." PhD thesis, 2015. http://hdl.handle.net/1885/110545.

Full text
Abstract:
Feature reinforcement learning (FRL) is a framework within which an agent can automatically reduce a complex environment to a Markov Decision Process (MDP) by finding a map which aggregates similar histories into the states of an MDP. The primary motivation behind this thesis is to build FRL agents that work in practice, both for larger environments and larger classes of environments. We focus on empirical work targeted at practitioners in the field of general reinforcement learning, with theoretical results wherever necessary.
APA, Harvard, Vancouver, ISO, and other styles
31

Poupart, Pascal. "Exploiting structure to efficiently solve large scale partially observable Markov decision processes." 2005. http://link.library.utoronto.ca/eir/EIRdetail.cfm?Resources__ID=232732&T=F.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Leung, Siu-Ki. "Exploring partially observable Markov decision processes by exploiting structure and heuristic information." Thesis, 1996. http://hdl.handle.net/2429/5772.

Full text
Abstract:
This thesis is about chance and choice, or decisions under uncertainty. The desire to create an intelligent agent performing rewarding tasks in a realistic world calls for working models for sequential decision making and planning. In response to this grand wish, decision-theoretic planning (DTP) has evolved from decision theory and control theory, and has been applied to planning in artificial intelligence. Recent interest has been directed toward Markov Decision Processes (MDPs) introduced from operations research. While fruitful results have been tapped from research in fully observable …
APA, Harvard, Vancouver, ISO, and other styles
33

Poupart, Pascal. "Approximate value-directed belief state monitoring for partially observable Markov decision processes." Thesis, 2000. http://hdl.handle.net/2429/11462.

Full text
Abstract:
Partially observable Markov decision processes (POMDPs) provide a principled approach to planning under uncertainty. Unfortunately, several sources of intractability currently limit the application of POMDPs to simple problems. This thesis is concerned with one source of intractability in particular, namely the belief state monitoring task. As an agent executes a plan, it must track the state of the world by updating its beliefs with respect to the current state. Then, based on its current beliefs, the agent can look up the next action to execute in its plan. In many situations …
APA, Harvard, Vancouver, ISO, and other styles
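
The belief-monitoring task this abstract refers to is the standard Bayesian filter over hidden states, applied once per action-observation pair. As a point of reference (a minimal sketch under assumed array conventions, not the approximation scheme the thesis proposes):

```python
# Exact Bayesian belief monitoring for a finite POMDP. Array conventions
# below are assumptions for illustration.
import numpy as np

def belief_update(b, a, o, T, Z):
    """Update belief b after taking action a and receiving observation o.

    b : (n_states,)                      current belief over hidden states
    T : (n_actions, n_states, n_states)  T[a, s, s'] = P(s' | s, a)
    Z : (n_actions, n_states, n_obs)     Z[a, s', o] = P(o | s', a)
    """
    predicted = T[a].T @ b                 # P(s' | b, a): propagate dynamics
    unnormalized = Z[a][:, o] * predicted  # weight by observation likelihood
    return unnormalized / unnormalized.sum()

# Example: a toy 2-state, 2-action, 2-observation POMDP.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
Z = np.array([[[0.8, 0.2], [0.3, 0.7]],
              [[0.8, 0.2], [0.3, 0.7]]])
b = belief_update(np.array([0.5, 0.5]), a=0, o=1, T=T, Z=Z)
```

The cost of this update grows quadratically with the number of states, which is why large state spaces motivate the approximate, value-directed monitoring the thesis studies.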
34

Amato, Christopher. "Increasing scalability in algorithms for centralized and decentralized partially observable Markov decision processes: Efficient decision-making and coordination in uncertain environments." 2010. https://scholarworks.umass.edu/dissertations/AAI3427492.

Full text
Abstract:
As agents are built for ever more complex environments, methods that consider the uncertainty in the system have strong advantages. This uncertainty is common in domains such as robot navigation, medical diagnosis and treatment, inventory management, sensor networks and e-commerce. When a single decision maker is present, the partially observable Markov decision process (POMDP) model is a popular and powerful choice. When choices are made in a decentralized manner by a set of decision makers, the problem can be modeled as a decentralized partially observable Markov decision process (DEC-POMDP) …
APA, Harvard, Vancouver, ISO, and other styles
35

Goswami, Anindya. "Semi-Markov Processes In Dynamic Games And Finance." Thesis, 2008. https://etd.iisc.ac.in/handle/2005/727.

Full text
Abstract:
Two different sets of problems are addressed in this thesis. The first one is on partially observed semi-Markov games (POSMG) and the second one is on a semi-Markov modulated financial market model. In this thesis we study a partially observable semi-Markov game in the infinite time horizon. The study of a partially observable game (POG) involves three major steps: (i) construct an equivalent completely observable game (COG), (ii) establish the equivalence between POG and COG by showing that if COG admits an equilibrium, POG does so, (iii) study the equilibrium of COG and find the corresponding …
APA, Harvard, Vancouver, ISO, and other styles
36

Goswami, Anindya. "Semi-Markov Processes In Dynamic Games And Finance." Thesis, 2008. http://hdl.handle.net/2005/727.

Full text
Abstract:
Two different sets of problems are addressed in this thesis. The first one is on partially observed semi-Markov games (POSMG) and the second one is on a semi-Markov modulated financial market model. In this thesis we study a partially observable semi-Markov game in the infinite time horizon. The study of a partially observable game (POG) involves three major steps: (i) construct an equivalent completely observable game (COG), (ii) establish the equivalence between POG and COG by showing that if COG admits an equilibrium, POG does so, (iii) study the equilibrium of COG and find the corresponding …
APA, Harvard, Vancouver, ISO, and other styles