To see the other types of publications on this topic, follow the link: Computer-based mathematical model.

Dissertations / Theses on the topic 'Computer-based mathematical model'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 33 dissertations / theses for your research on the topic 'Computer-based mathematical model.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Vadeby, Anna. "Computer based statistical treatment in models with incidental parameters : inspired by car crash data." Doctoral thesis, Linköping University, 2003. http://www.bibl.liu.se/liupubl/disp/disp2003/tek814s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Moseley, Charles Warren. "A Timescale Estimating Model for Rule-Based Systems." Thesis, North Texas State University, 1987. https://digital.library.unt.edu/ark:/67531/metadc332089/.

Full text
Abstract:
The purpose of this study was to explore the subject of timescale estimating for rule-based systems. A model for estimating the timescale necessary to build rule-based systems was built and then tested in a controlled environment.
APA, Harvard, Vancouver, ISO, and other styles
3

Patrick-Aldaco, Romano. "A Model Based Framework for Fault Diagnosis and Prognosis of Dynamical Systems with an Application to Helicopter Transmissions." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16266.

Full text
Abstract:
The thesis presents a framework for integrating models, simulation, and experimental data to diagnose incipient failure modes and prognosticate the remaining useful life of critical components, with an application to the main transmission of a helicopter. Although the helicopter example is used to illustrate the methodology presented, by appropriately adapting modules, the architecture can be applied to a variety of similar engineering systems. Models of the kind referenced are commonly referred to in the literature as physical or physics-based models. Such models utilize a mathematical description of some of the natural laws that govern system behaviors. The methodology presented considers separately the aspects of diagnosis and prognosis of engineering systems, but a similar generic framework is proposed for both. The methodology is tested and validated through comparison of results to data from experiments carried out on helicopters in operation and a test cell employing a prototypical helicopter gearbox. Two kinds of experiments have been used. The first one retrieved vibration data from several healthy and faulted aircraft transmissions in operation. The second is a seeded-fault damage-progression test providing gearbox vibration data and ground truth data of increasing crack lengths. For both kinds of experiments, vibration data were collected through a number of accelerometers mounted on the frame of the transmission gearbox. The applied architecture consists of modules with such key elements as the modeling of vibration signatures, extraction of descriptive vibratory features, finite element analysis of a gearbox component, and characterization of fracture progression. Contributions of the thesis include: (1) generic model-based fault diagnosis and failure prognosis methodologies, readily applicable to a dynamic large-scale mechanical system; (2) the characterization of the vibration signals of a class of complex rotary systems through model-based techniques; (3) a reverse engineering approach for fault identification using simulated vibration data; (4) the utilization of models of a faulted planetary gear transmission to classify descriptive system parameters either as fault-sensitive or fault-insensitive; and (5) guidelines for the integration of the model-based diagnosis and prognosis architectures into prognostic algorithms aimed at determining the remaining useful life of failing components.
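As a pointer to what "descriptive vibratory features" can look like in practice, here is a small sketch of common time-domain condition indicators computed from an accelerometer record. The feature set and the synthetic signals are illustrative choices, not the thesis's actual features or data:

    import numpy as np
    from scipy.stats import kurtosis

    def vibration_features(signal):
        """Common time-domain condition indicators for one vibration record."""
        rms = np.sqrt(np.mean(signal ** 2))      # overall energy
        peak = np.max(np.abs(signal))            # largest excursion
        crest_factor = peak / rms                # impulsiveness relative to energy
        kurt = kurtosis(signal, fisher=False)    # heavy tails suggest impacting faults
        return {"rms": rms, "peak": peak, "crest": crest_factor, "kurtosis": kurt}

    # Example: a noise-like record vs. one with sparse periodic impacts
    t = np.arange(0, 1, 1e-4)                    # 10 kHz for one second
    healthy = np.random.randn(t.size)
    faulty = healthy + 5 * (np.sin(2 * np.pi * 30 * t) > 0.999)
    print(vibration_features(healthy))
    print(vibration_features(faulty))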
APA, Harvard, Vancouver, ISO, and other styles
4

Chen, Zebin. "Framework-based model construction with AOP assistance /." Connect to title online (ProQuest), 2008. http://proquest.umi.com/pqdweb?did=1588418351&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Full text
Abstract:
Thesis (Ph. D.)--University of Oregon, 2008.
Typescript. Includes vita and abstract. Includes bibliographical references (leaves 123-127). Also available online in ProQuest, free to University of Oregon users.
APA, Harvard, Vancouver, ISO, and other styles
5

Prost, Jean-Philippe. "Modelling Syntactic Gradience with Loose Constraint-based Parsing." PhD thesis, Université de Provence - Aix-Marseille I, 2008. http://tel.archives-ouvertes.fr/tel-00352828.

Full text
Abstract:
The grammaticality of a sentence is usually conceived as a binary notion: a sentence is either grammatical or ungrammatical. However, a growing body of work is concerned with intermediate degrees of acceptability, to which the term gradience sometimes refers. To date, most of this work has concentrated on the study of human judgements of syntactic gradience. This study explores the possibility of building a robust model that accords with those human judgements.
We suggest broadening the concepts of Intersective Gradience and Subsective Gradience, proposed by Aarts for modelling graded judgements, to ill-formed language. Under this new model, the problem raised by gradience is to classify an utterance into a particular category, according to criteria based on the utterance's syntactic characteristics. We extend the notion of Intersective Gradience (IG) so that it concerns the choice of the best solution among a set of candidates, and that of Subsective Gradience (SG) so that it concerns computing the degree of typicality of that structure within its category. IG is then modelled by an optimality criterion, while SG is modelled by computing a degree of grammatical acceptability. As for the syntactic characteristics needed to classify an utterance, our study of different representational frameworks for natural language syntax shows that they can readily be represented in Model-Theoretic Syntax. We opt for Property Grammars (PG), which offer precisely the means to model the characterisation of an utterance. We present a fully automated solution for modelling syntactic gradience that characterises a well-formed or ill-formed sentence, generates an optimal parse tree, and computes a degree of grammatical acceptability for the utterance.
Through the development of this new model, this work makes three contributions.
First, we specify a logical system for PG that revises its formalisation from a model-theoretic perspective. In particular, it formalises the constraint satisfaction and constraint relaxation mechanisms at work in PG, and the way they license the projection of a category during parsing. This new system introduces the notion of loose satisfaction, along with a first-order-logic formulation for reasoning about an utterance.
Second, we present our implementation of Loose Satisfaction Chart Parsing (LSCP), which we prove always generates a complete and optimal parse. The approach rests on dynamic programming together with the mechanisms described above. Although of high complexity, this algorithmic solution performs well enough to let us experiment with our gradience model.
Third, having postulated that the prediction of human acceptability judgements can rely on factors derived from LSCP, we present a numerical model for estimating the degree of grammatical acceptability of an utterance. We measure a good correlation between its scores and human judgements of grammatical acceptability. Moreover, our model turns out to perform better than a pre-existing model that we use as a baseline, and which, for its part, was evaluated on manually generated parses.
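To make the two notions concrete, here is a toy illustration (invented constraint names and weights, not Prost's actual scoring function) of IG as an optimality choice among candidate parses and SG as a weighted constraint-satisfaction score:

    # Toy gradience scoring over constraint evaluations; all weights are invented.
    def acceptability(satisfied, violated, weights):
        """Subsective-gradience-style score: weighted share of satisfied properties."""
        w_sat = sum(weights[c] for c in satisfied)
        w_all = w_sat + sum(weights[c] for c in violated)
        return w_sat / w_all if w_all else 1.0

    def best_parse(candidates, weights):
        """Intersective-gradience-style choice: keep the optimal candidate."""
        return max(candidates, key=lambda c: acceptability(c["sat"], c["viol"], weights))

    weights = {"agreement": 3.0, "linearity": 2.0, "uniqueness": 1.0}
    candidates = [
        {"name": "parse A", "sat": ["agreement", "linearity"], "viol": ["uniqueness"]},
        {"name": "parse B", "sat": ["linearity"], "viol": ["agreement", "uniqueness"]},
    ]
    winner = best_parse(candidates, weights)
    print(winner["name"], acceptability(winner["sat"], winner["viol"], weights))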
APA, Harvard, Vancouver, ISO, and other styles
6

Hensley, Kiersten Kenning. "Examining the effects of paper-based and computer-based modes of assessment on mathematics curriculum-based measurement." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1627.

Full text
Abstract:
The computer to pupil ratio has changed drastically in the past decades, from 125:1 in 1983 to less than 2:1 in 2009 (Gray, Thomas, and Lewis, 2010), allowing teachers and students to integrate technology throughout the educational experience. The area of educational assessment has adapted to the increased use of technology. Trends in assessment and technology include a movement from paper-based to computer-based testing for all types of assessments, from large-scale assessments to teacher-created classroom tests. Computer-based testing comes with many benefits when compared to paper-based testing, but it is necessary to determine if results are comparable, especially in situations where computer-based and paper-based tests can be used interchangeably. The main purpose of this study was to expand upon the base of research comparing paper-based and computer-based testing, specifically with elementary students and mathematical fluency. The study was designed to answer the following research questions: (1) Are there differences in fluency-based performance on math computation problems presented on paper versus on the computer? (2) Are there differential mode effects on computer-based tests based on sex, grade level, or ability level? A mixed-factorial design with both within- and between-subject variables was used to investigate the differences between performance on paper-based and computer-based tests of mathematical fluency. Participants completed both paper- and computer-based tests, as well as the Group Math Assessment and Diagnostic Evaluation as a measure of general math ability. Overall findings indicate that performance on paper- and computer-based tests of mathematical fluency is not comparable, and student grade level may be a contributing factor in that difference.
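For readers who want the within-subject contrast in computational terms, here is a minimal sketch of a paired mode comparison on invented scores (the study itself used a mixed-factorial design, which adds between-subject factors such as sex and grade):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 40
    paper = rng.normal(30, 6, n)               # digits-correct scores on paper
    computer = paper - rng.normal(2, 3, n)     # hypothetical mode effect

    t, p = stats.ttest_rel(paper, computer)    # within-subject (paired) comparison
    d = (paper - computer).mean() / (paper - computer).std(ddof=1)  # Cohen's d_z
    print(f"t={t:.2f}, p={p:.4f}, d_z={d:.2f}")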
APA, Harvard, Vancouver, ISO, and other styles
7

Sun, Ximeng 1979. "A model-driven approach to scenario-based requirements engineering /." Thesis, McGill University, 2007. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=101655.

Full text
Abstract:
A model-driven approach to scenario-based requirements engineering is proposed. The approach, which is based on Computer Automated Multi-Paradigm Modeling (CAMPaM), aims to improve the software process. A framework is given and implemented to reason about models of systems at multiple levels of abstraction, to transform between models in different formalisms, and to provide and evolve modeling formalisms.
The model-driven approach starts with modeling requirements of a system in scenario models and the subsequent automatic transformation to state-based behavior models. Then, either code can be synthesized or models can be further transformed into models with additional information such as explicit timing information or interactions between components. These models, together with the inputs (e.g., queries, performance metrics, test cases, etc.) generated directly from the scenario models, can be used for a variety of purposes, such as verification, analysis, simulation, animation and so on.
A visual modeling environment is built in AToM3 using Meta-Modeling and Model Transformation. It supports modeling in Sequence Diagrams, automatic transformation to Statecharts, and automatic generation of requirements text from Sequence Diagrams.
An application of the model-driven approach to the assessment of use cases for dependable systems is shown.
APA, Harvard, Vancouver, ISO, and other styles
8

Haidar, Imad. "Short-term forecasting model for crude oil price based on artificial neural networks /." Access document online, 2008. http://archimedes.ballarat.edu.au:8080/vital/access/HandleResolver/1959.17/5946.

Full text
Abstract:
Thesis (Masters) -- University of Ballarat, 2008.
Submitted in total fulfillment of the requirements for Masters of Computing, School of Information Technology and Mathematical Sciences. Bibliography: leaves cxxii-cxxvii.
APA, Harvard, Vancouver, ISO, and other styles
9

Al-Aboodi, Maher. "Enhanced receiver architectures for processing multi GNSS signals in a single chain : based on partial differential equations mathematical model." Thesis, University of Buckingham, 2016. http://bear.buckingham.ac.uk/136/.

Full text
Abstract:
The focus of our research is on designing a new architecture (RF front-end and digital) for processing multiple GNSS signals in a single receiver chain. The motivation is to save on the overhead cost (size, processing time, and power consumption) of implementing multiple signal receivers side by side on board Smartphones. This thesis documents the new multi-signal receiver architecture that we have designed. Based on this architecture, we have achieved and published eight novel contributions. Six of these implementations focus on multi-GNSS signal receivers, and the last two are for multiplexing Bluetooth and GPS received signals in a single processing chain. We believe our work, in terms of the new innovative techniques achieved, is a major contribution to the commercial world, especially that of Smartphones. Savings in both silicon size and processing time will be highly beneficial for reducing costs, but more importantly for conserving battery energy. The first part of the work focuses on two GNSS signal detection front-end approaches designed to explore the availability of the L1 band of GPS, Galileo, and GLONASS at an early stage, so that the receiver devotes appropriate resources to acquiring them. The first approach is based on folding the carrier frequencies of all three GNSS signals, with their harmonics, into the First Nyquist Zone (FNZ), as described by the BandPass Sampling Receiver (BPSR) technique. Consequently, there is a unique power distribution of these folded signals, based on the signals actually present, that can be detected to alert the digital processing parts to acquire them. A Volterra series model is used to estimate the power present in the FNZ by extracting the kernels of these folded GNSS signals, if available. The second approach filters out the right-side lobe of the GLONASS signal and the left-side lobe of the Galileo signal prior to the folding process in our BPSR implementation. This filtering is important to enable non-overlapping folding of these two signals with the GPS signal in the FNZ. The simulation results show that adopting these two approaches can save much valuable acquisition processing time. Our Orthogonal BandPass Sampling Receiver and Orthogonal Complex BandPass Sampling Receiver are two methods designed to capture any two wireless signals simultaneously and use a single channel in the digital domain to process them, including tracking and decoding, concurrently. The novelty of the two receivers centres on the Orthogonal Integrated Function (OIF), which continuously harmonises the two received signals to form a single orthogonal signal, allowing the tracking and decoding to be carried out by a single digital channel. These receivers employ a Hilbert transform to shift one of the input signals by 90 degrees. Then the BPSR technique is used to fold the two received signals back to the same reference frequency in the FNZ. Results show that these methods also reduce the sampling frequency to a rate proportional to the maximum bandwidth of the input signals, instead of the sum of their bandwidths. Two combined GPS L1CA and L2C signal acquisition channels are designed based on the idea of the OIF, to reduce the power consumption and implementation complexity of existing combination methods and to enhance acquisition sensitivity.
This is achieved by removing the Doppler frequency of the two signals; our methods add the in-phase component of the L2C signal to the in-phase component of the L1CA signal, which is then shifted by 90 degrees before being added to the remaining components of these two signals, resulting in an orthogonal form of the combined signals. This orthogonal signal is then fed to our developed version of the parallel-code-phase-search engine. Our simulation results illustrate that the acquisition sensitivity of these signals is successfully improved by 5.0 dB, which is necessary for acquiring weak signals in harsh environments. The last part of this work focuses on the tracking stage, specifically multiplexing Bluetooth and L1CA GPS signals in a single channel using the concept of the OIF, where the tracking channel can be shared between the two signals without losing lock or degrading performance. Two approaches are designed for integrating the two signals, based on mathematical analysis of the main function of the tracking channel, which is the Phase-Locked Loop (PLL). A mathematical model consisting of a set of differential equations has been developed to evaluate the PLL when it is used to track and demodulate two signals simultaneously. The simulation results show that implementing our approaches reduces size and processing time by almost half.
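The availability-detection idea in the first approach depends on where each carrier folds inside the first Nyquist zone after bandpass sampling. Here is a small sketch of the standard folding computation; the 60 MHz sampling rate is an assumption for illustration, not the thesis's design value:

    def folded_frequency(f_carrier_hz, fs_hz):
        """Alias of a carrier after bandpass sampling, folded into [0, fs/2]."""
        f = f_carrier_hz % fs_hz                   # alias into [0, fs)
        return fs_hz - f if f > fs_hz / 2 else f   # mirror the upper half into the FNZ

    fs = 60e6  # assumed sampling rate for illustration
    for name, fc in [("GPS L1", 1575.42e6), ("Galileo E1", 1575.42e6), ("GLONASS L1", 1602.0e6)]:
        print(f"{name}: folds to {folded_frequency(fc, fs) / 1e6:.2f} MHz")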
APA, Harvard, Vancouver, ISO, and other styles
10

Meesumrarn, Thiraphat. "Simulation of Dengue Outbreak in Thailand." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1248484/.

Full text
Abstract:
The dengue virus has become widespread worldwide in recent decades. It has no specific treatment and affects more than 40% of the world's population. In Thailand, dengue has been a health concern for more than half a century. The highest number of cases in one year was 174,285 in 1987, leading to 1,007 deaths. In the present day, dengue is distributed throughout the entire country. Therefore, dengue has become a major challenge for public health in terms of both prevention and control of outbreaks. Different methodologies and ways of dealing with dengue outbreaks have been put forward by researchers. Computational models and simulations play an important role, as they can help researchers and public health officials gain a greater understanding of the virus's epidemic activities. In this context, this dissertation presents a new framework, Modified Agent-Based Modeling (mABM), a hybrid platform between a mathematical model and a computational model, to simulate a dengue outbreak in human and mosquito populations. This framework improves on the realism of former models by utilizing reported data from several Thai government organizations, such as the Thai Ministry of Public Health (MoPH), the National Statistical Office, and others. Additionally, its implementation takes into account the geography of Thailand, as well as synthetic mosquito and synthetic human populations. mABM can be used to represent human behavior in a large population across varying distances by specifying demographic factors and assigning mobility patterns for weekdays, weekends, and holidays for the synthetic human population. The mosquito dynamic population model (MDP), a component of the mABM framework, represents the synthetic mosquito population dynamics and ecology, integrating a regional model to capture the effects of dengue outbreaks. The two synthetic populations can be linked to each other to represent their interactions, for which the Local Stochastic Contact Model for Dengue (LSCM-DEN) is utilized. For validation, the number of cases from the experiment is compared to cases reported by the Thailand Vector Borne Disease Bureau for the selected years. This framework facilitates model configuration for sensitivity analysis by changing parameters such as travel routes and seasonal temperatures. The effects of these parameters were studied and analyzed for an improved understanding of dengue outbreak dynamics.
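For orientation, the host-vector coupling that agent-based frameworks like mABM resolve at the individual level is often summarized at the population level by Ross-Macdonald-style compartments. Here is a compact sketch of that textbook baseline, with invented parameter values; this is not the mABM itself:

    import numpy as np

    def step(state, params, dt=1.0):
        """One Euler step of a minimal host-vector dengue model (illustrative only)."""
        Sh, Ih, Rh, Sv, Iv = state
        Nh = Sh + Ih + Rh
        a, b, c, gamma, mu = params       # bite rate, v->h and h->v infectivity, recovery, vector mortality
        new_h = a * b * Sh * Iv / Nh      # human infections from infectious vectors
        new_v = a * c * Sv * Ih / Nh      # vector infections from infectious humans
        dSh, dIh, dRh = -new_h, new_h - gamma * Ih, gamma * Ih
        dSv = mu * (Sv + Iv) - new_v - mu * Sv   # vector births balance deaths
        dIv = new_v - mu * Iv
        return np.array(state) + dt * np.array([dSh, dIh, dRh, dSv, dIv])

    state = (99_000, 1_000, 0, 200_000, 100)
    for _ in range(120):                  # 120 days
        state = step(state, params=(0.5, 0.4, 0.4, 1 / 7, 1 / 14))
    print(f"infectious humans after 120 days: {state[1]:.0f}")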
APA, Harvard, Vancouver, ISO, and other styles
11

Sakita, Saori. "Development and Use of a Physiologically Based Mathematical Model Describing the Relationships and Contributions of Macronutrients to Weight and Body Composition Changes." BYU ScholarsArchive, 2010. https://scholarsarchive.byu.edu/etd/2552.

Full text
Abstract:
The effect of dietary macronutrient composition on weight loss has been a controversial issue for decades. During that time, a high-protein, high-fat, low-carbohydrate diet has been one of the more popular weight loss diets with the public. We hypothesized that a computer simulation model built with STELLA software could help to better understand the effect of dietary macronutrient composition on weight loss. We calculated daily total oxidation instead of total energy expenditure, as others have done, based on the fact that carbohydrate, fat, and protein intake influence carbohydrate, fat, and protein oxidation. In order to create a simple and accurate model comparing dietary macronutrient composition effects, we eliminated exercise as a factor and focused on a sedentary population. The model was validated against five sets of published human data. Following model validation, simulations were carried out to compare the traditional high-carbohydrate diet recommended by the American Dietetic Association and two well-known high-protein diets (the Atkins and Zone diets). The simulation results suggested a lean-tissue-retention advantage for a high-protein diet, especially a lower-fat one, compared with a traditional high-carbohydrate diet over 6 months.
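The modelling choice described, tracking daily oxidation of each macronutrient separately rather than a single total energy expenditure term, can be sketched as a daily balance loop. All coefficients below are invented for illustration and are not the calibrated STELLA model:

    # Toy daily macronutrient balance: intake minus oxidation updates body stores (kcal).
    def simulate(days, intake, stores, ox_fraction):
        """intake/stores/ox_fraction are dicts keyed by 'carb', 'fat', 'protein'."""
        for _ in range(days):
            for m in stores:
                oxidized = ox_fraction[m] * (stores[m] + intake[m])  # oxidation tracks availability
                stores[m] += intake[m] - oxidized
        return stores

    stores = {"carb": 2_000, "fat": 120_000, "protein": 24_000}     # body energy stores
    high_protein = {"carb": 400, "fat": 600, "protein": 800}        # 1800 kcal/day diet
    print(simulate(180, high_protein, dict(stores), {"carb": 0.5, "fat": 0.01, "protein": 0.05}))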
APA, Harvard, Vancouver, ISO, and other styles
12

Wang, Ying. "High volume conveyor sortation system analysis." Diss., Available online, Georgia Institute of Technology, 2006, 2006. http://etd.gatech.edu/theses/available/etd-05122006-110242/.

Full text
Abstract:
Thesis (Ph. D.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2007.
Yorai Wardi, Committee Member ; Gunter Sharp, Committee Member ; Spiridon Reveliotis, Committee Member ; Leon F. McGinnis, Committee Member ; Chen Zhou, Committee Chair.
APA, Harvard, Vancouver, ISO, and other styles
13

Bajaj, Manas. "Knowledge composition methodology for effective analysis problem formulation in simulation-based design." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26639.

Full text
Abstract:
Thesis (Ph.D)--Mechanical Engineering, Georgia Institute of Technology, 2009.
Committee Co-Chair: Dr. Christiaan J. J. Paredis; Committee Co-Chair: Dr. Russell S. Peak; Committee Member: Dr. Charles Eastman; Committee Member: Dr. David McDowell; Committee Member: Dr. David Rosen; Committee Member: Dr. Steven J. Fenves. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
14

Elahi, Behin. "Integrated Optimization Models and Strategies for Green Supply Chain Planning." University of Toledo / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1467266039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Li, Yunming. "Machine vision algorithms for mining equipment automation." Thesis, Queensland University of Technology, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
16

Panchal, Jitesh H. "A framework for simulation-based integrated design of multiscale products and design processes." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-11232005-112626/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006.
Eastman, Chuck, Committee Member ; Paredis, Chris, Committee Co-Chair ; Allen, Janet, Committee Member ; Rosen, David, Committee Member ; Tsui, Kwok, Committee Member ; McDowell, David, Committee Member ; Mistree, Farrokh, Committee Chair. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
17

Kono, Frank Augusto Micheletto. "Um modelo de representação computacional baseado em conceitos de crescimento urbano associados a alvarás e primitivas em banco de dados espacial." Universidade Tecnológica Federal do Paraná, 2016. http://repositorio.utfpr.edu.br/jspui/handle/1/2035.

Full text
Abstract:
The urban expansion resulting from the rapid growth of cities is a major challenge for sustainable development. The design of appropriate computational models enabling the simulation, spatialised visualisation, and analysis of the urban growth process is therefore fundamental. Neighbourhood management policies and types of urban growth involve facilities represented by different types of permits or concessions for opening and operating businesses, as well as road systems, transport systems, political and administrative boundaries, zoning, and street layout. These mechanisms can be represented in a spatial database by (a) georeferenced open data, a term covering human elements and demographic, socio-economic, infrastructure, environmental, and historical information; (b) different geometries (point, line, and polygon); and (c) spatial functions expressing topological, directional, or metric relations between facilities. To build and implement the model proposed in this work, the items described above (a, b, c) are used together with a set of questions prepared by experts in urban planning, identified as core concepts in the study of urban growth. With respect to the objective and the modelling, the most relevant contributions are: (1) representation by means of a small set of primitives in a database with a spatial extension; (2) the development of a vocabulary, that is, the assignment of semantics to the model; (3) the interaction between different concepts associated with the urban growth process; (4) the possibility of extending and integrating other domains of open georeferenced data; and (5) execution times under 10 seconds for 70% of the spatial queries. Regarding the experiment with users of the web interface developed in this work, the contributions are: (1) the tool meets the needs of 4 out of 5 users for generating and visualising spatialised data; (2) interaction with georeferenced data on business permits and on neighbourhood and street boundaries; and (3) visualisation of the data from a historical and spatial point of view.
APA, Harvard, Vancouver, ISO, and other styles
18

Filippi, Sarah. "Stratégies optimistes en apprentissage par renforcement." Phd thesis, Ecole nationale supérieure des telecommunications - ENST, 2010. http://tel.archives-ouvertes.fr/tel-00551401.

Full text
Abstract:
This thesis deals with model-based methods for solving reinforcement learning problems. We consider an agent facing a sequence of decisions in an environment whose state varies according to the decisions the agent takes. Throughout the interaction, the agent receives rewards that depend both on the action taken and on the state of the environment. The agent does not know the interaction model and aims to maximise the sum of rewards received in the long run. We consider different interaction models: Markov decision processes, partially observed Markov decision processes, and bandit models. For these models, we propose algorithms that, at each time step, build a set of models that best explain the interaction between the agent and the environment. The model-based methods we develop are designed to perform well both in practice and from a theoretical point of view. Theoretical performance is measured in terms of regret, which quantifies the difference between the rewards accumulated by an agent that knows the interaction model in advance and those accumulated by the algorithm. In particular, these algorithms guarantee a good balance between acquiring new knowledge about the environment's response (exploration) and choosing actions that appear to lead to high rewards (exploitation). We propose two different kinds of methods for controlling this trade-off between exploration and exploitation. The first algorithm proposed in this thesis follows an exploration strategy, during which the interaction model is estimated, and then an exploitation strategy. The duration of the exploration phase is controlled adaptively, which yields logarithmic regret in a parametric Markov decision process even when the state of the environment is only partially observed. This class of model is motivated by an application of interest in cognitive radio: opportunistic access to a communication network by a secondary user. The two other proposed algorithms follow optimistic strategies: the agent chooses the optimal actions for the best of the possible models among the set of plausible models. We construct and analyse such an algorithm for a parametric bandit model in the case of generalised linear models, which allows applications such as online advertisement management. We also propose using the Kullback-Leibler divergence to build the set of plausible models in optimistic algorithms for Markov decision processes with finite state and action spaces. Using this metric significantly improves the practical behaviour of optimistic algorithms. Moreover, a regret analysis of each algorithm guarantees theoretical performance comparable to the best state-of-the-art algorithms.
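The optimistic principle described, choosing the best action under the best plausible model, is easiest to see in the Bernoulli bandit case, where the Kullback-Leibler-based index (standard KL-UCB) is computed by bisection. A minimal sketch follows; the thesis's algorithms for MDPs are more involved:

    import math

    def kl_bernoulli(p, q, eps=1e-12):
        p = min(max(p, eps), 1 - eps); q = min(max(q, eps), 1 - eps)
        return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

    def kl_ucb_index(mean, n_pulls, t, c=0.0):
        """Largest q with n * KL(mean, q) <= log t + c * log log t, by bisection."""
        target = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / n_pulls
        lo, hi = mean, 1.0
        for _ in range(50):            # q -> KL(mean, q) is increasing on [mean, 1]
            mid = (lo + hi) / 2
            if kl_bernoulli(mean, mid) > target:
                hi = mid
            else:
                lo = mid
        return lo

    # An arm with empirical mean 0.4 after 25 pulls, at time step 1000:
    print(f"optimistic index: {kl_ucb_index(0.4, 25, 1000):.3f}")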
APA, Harvard, Vancouver, ISO, and other styles
19

Liang, Yu-Tsung (梁裕宗). "Development of Personal Computer Based Flight Simulator With Distributed Interactive Simulator Protocol -- Development of Mathematical Model." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/57883413703920173088.

Full text
Abstract:
Master's thesis
Tamkang University
Department of Mechanical Engineering
Academic year 85 (ROC calendar)
Most flight simulators are based on expensive workstations or other high-end computing equipment. To provide a high-end yet low-cost flight simulator, an experiment using networked personal computers (PCs) for fairly sophisticated flight simulation with the DIS protocol is presented. Two Pentium PCs connected with the TCP/IP or IPX local area network protocol are used as a flight simulator. One of them is used as the work platform for the computation of the flight dynamics and Dead Reckoning models and the implementation of the pilot interface. The other is used as the work platform for visual effects and the DIS system. In this study, the flight dynamic model of the flight simulator is derived. The aircraft is assumed to be a rigid body, so the behavior of flight can be described by Six-Degree-of-Freedom (6DOF) equations of motion. The real-time configured numerical integration method, and the coordinate systems and earth model compatible with the DIS protocol, are also developed. The Dead Reckoning (DR) algorithm is an important technique that is widely used in DIS. The purpose of DR is to reduce the updates required by each simulator on the network, to better utilize the available bandwidth. Extrapolation formulas are discussed based on network communication traffic and the amount of computation performed by the simulators. The smoothing method used during the data update process is also discussed. This study shows that a low-cost, efficient, high-fidelity networked PC flight simulator is feasible.
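The dead reckoning used by DIS extrapolates each entity's state between network updates, so packets need only be sent when the true state drifts past a threshold. Here is a minimal sketch of the second-order extrapolation and the update test; the threshold and state values are illustrative:

    import numpy as np

    def dead_reckon(p0, v0, a0, dt):
        """Second-order DIS-style extrapolation: p(t+dt) = p + v*dt + 0.5*a*dt^2."""
        return p0 + v0 * dt + 0.5 * a0 * dt * dt

    def needs_update(p_true, p_dr, threshold=1.0):
        """Send a new state packet only when extrapolation error exceeds the threshold."""
        return np.linalg.norm(p_true - p_dr) > threshold

    p = np.array([0.0, 0.0, 1000.0])           # position (m)
    v = np.array([120.0, 0.0, 0.0])            # velocity (m/s)
    a = np.array([0.0, 0.0, -1.5])             # acceleration (m/s^2)
    p_dr = dead_reckon(p, v, a, dt=0.5)        # remote simulators extrapolate
    p_true = p + np.array([61.0, 0.2, -0.4])   # local simulator's actual state
    print(p_dr, needs_update(p_true, p_dr))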
APA, Harvard, Vancouver, ISO, and other styles
20

Huang, B. M. "Computer model of the shaft kiln process at Queensland Magnesia (Operations) Pty. Ltd." Thesis, 1999. https://figshare.com/articles/thesis/Computer_model_of_the_shaft_kiln_process_at_Queensland_Magnesia_Operations_Pty_Ltd_/13459442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

"Example-based interpolation for correspondence-based computer vision problems." Thesis, 2006. http://library.cuhk.edu.hk/record=b6074147.

Full text
Abstract:
Example-Based Interpolation (EBI) is a powerful method for interpolating a function from a set of input-output examples. The first part of the dissertation examines EBI in detail and proposes a new, enhanced EBI: indexed-function Example-Based Interpolation (iEBI). The second part demonstrates the application of both EBI and iEBI to three well-defined problems of computer vision.
First, the dissertation analyzes the EBI solution in detail. It argues and demonstrates that there are three desired properties for any EBI solution. To satisfy all three desirable properties, the EBI solution must have adequate degrees of freedom. The dissertation shows in detail that, for the EBI solution to have enough degrees of freedom, it need only take a simple form: the sum of a basis function plus a linear function. It also presents a particular EBI solution that, in a certain least-squares-error sense, satisfies exactly all three desirable properties.
Moreover, the dissertation points out EBI's restriction and describes a new interpolation mechanism that overcomes it by constructing a general indexed function from examples. The new mechanism, referred to as the general indexed-function Example-Based Interpolation (iEBI) mechanism, first applies EBI to establish initial correspondences over all input examples, and then interpolates the general indexed function from those initial correspondences.
The EBI and iEBI mechanisms have all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth, with minimum oscillation between the examples.
The second part of the dissertation focuses on applying the EBI and iEBI methods to three correspondence-based problems in computer vision: (1) stereo matching, (2) novel view synthesis, and (3) viewpoint determination.
Stereo matching, or the determination of corresponding image points projected by the same 3-D feature, is one of the fundamental and long-studied problems in computer vision. Yet few have tried to solve it using interpolation. This dissertation presents an interpolation approach, Interpolation-based Iterative Stereo Matching (IISM), that constructs dense correspondences in a stereo image pair from sparse initial correspondences. IISM improves the existing EBI to ensure that the established correspondences satisfy exactly the epipolar constraint of the image pair and, to a certain extent, preserve discontinuities in the stereo disparity space of the imaged scene. IISM applies the improved EBI algorithm iteratively in a coarse-to-fine refinement, and eventually produces a dense disparity map for the stereo image pair.
Novel View Synthesis (NVS) is an important problem in image rendering. It aims to synthesize an image of a scene at any specified (novel) viewpoint using only a few images of that scene at some sample viewpoints. To avoid explicit 3-D reconstruction of the scene, this dissertation formulates NVS as an indexed-function interpolation problem by treating viewpoint and image as the input and output of a function. The interpolation formulation has at least two advantages. First, it allows certain imaging details, such as camera intrinsic parameters, to be unknown. Second, the viewpoint specification need not be physical; for example, it could consist of any set of values that adequately describe the viewpoint space, and need not be measured in metric units. The dissertation solves the NVS problem using the iEBI formulation and shows how the iEBI mechanism can synthesize images at novel viewpoints and acquire quality novel views even from only a few example views.
Viewpoint determination is the problem of determining, given an image, the viewpoint from which the image was taken. This dissertation demonstrates how to solve this problem without referencing or estimating any explicit 3-D structure of the imaged scene. Used for reference are a small number of sample snapshots of the scene, each with its associated viewpoint. By treating image and associated viewpoint as the input and output of a function, and the given snapshot-viewpoint pairs as examples of that function, the problem has a natural interpolation formulation. As in NVS, this formulation allows the given images to be uncalibrated and the viewpoint specification to be unmeasured. The dissertation presents an interpolation-based solution using the iEBI mechanism that guarantees all given sample data are satisfied exactly, with the least complexity in the interpolated function.
For all three problems, the dissertation reports experimental results on a number of real and benchmark image datasets, showing that interpolation-based methods can be effective in arriving at good solutions even with sparse input examples.
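The "basis function plus linear function" form argued for above is, in standard terms, a kernel (RBF) interpolant with an affine part, fitted so every example is reproduced exactly. Here is a minimal sketch under that reading; the Gaussian kernel and the usual augmented linear system are illustrative choices, not necessarily the dissertation's:

    import numpy as np

    def fit_ebi(X, Y, sigma=1.0):
        """Interpolant f(x) = sum_i w_i k(x, x_i) + A x + b, exact on all examples."""
        n, d = X.shape
        K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, -1) / (2 * sigma**2))
        P = np.hstack([X, np.ones((n, 1))])             # linear + constant part
        A = np.vstack([np.hstack([K, P]),
                       np.hstack([P.T, np.zeros((d + 1, d + 1))])])
        rhs = np.vstack([Y, np.zeros((d + 1, Y.shape[1]))])
        coef = np.linalg.solve(A, rhs)                  # standard augmented system
        W, L = coef[:n], coef[n:]
        return lambda x: np.exp(-np.sum((x - X) ** 2, -1) / (2 * sigma**2)) @ W \
                         + np.append(x, 1.0) @ L

    X = np.array([[0.0], [1.0], [2.0]]); Y = np.array([[0.0], [1.0], [4.0]])
    f = fit_ebi(X, Y)
    print(f(np.array([1.0])), f(np.array([1.5])))       # exact at examples, smooth between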
Liang Bodong.
"February 2006."
Adviser: Ronald Chi-kit Chung.
Source: Dissertation Abstracts International, Volume: 67-11, Section: B, page: 6516.
Thesis (Ph.D.)--Chinese University of Hong Kong, 2006.
Includes bibliographical references (p. 127-145).
Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO, and other styles
22

"Segmentation based variational model for accurate optical flow estimation." 2009. http://library.cuhk.edu.hk/record=b5894018.

Full text
Abstract:
Chen, Jianing.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2009.
Includes bibliographical references (leaves 47-54).
Abstract also in Chinese.
Contents: 1. Introduction (Background; Related Work; Thesis Organization) -- 2. Review on Optical Flow Estimation (Variational Model: Basic Assumptions and Constraints, More General Energy Functional; Discontinuity Preserving Techniques: Data Term Robustification, Diffusion Based Regularization, Segmentation) -- 3. Segmentation Based Optical Flow Estimation (Initial Flow; Color-Motion Segmentation; Parametric Flow Estimation Incorporating Segmentation; Confidence Map Construction: Occlusion Detection, Pixel-wise Motion Coherence, Segment-wise Model Confidence; Final Combined Variational Model) -- 4. Experiment Results (Quantitative Evaluation; Warping Results) -- 5. Application: Single Image Animation (Pre-Process Stage; Coordinate Transform; Motion Field Transfer; Motion Editing and Apply; Gradient-domain Composition; Experiments) -- 6. Conclusion -- Bibliography.
APA, Harvard, Vancouver, ISO, and other styles
23

"Computer simulations of microwave circuit discontinuities using the edge-based finite element method." 2000. http://library.cuhk.edu.hk/record=b5890261.

Full text
Abstract:
by Cheng Yat Man.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2000.
Includes bibliographical references (leaves 1-6 (2nd gp.)).
Abstracts in English and Chinese.
Front matter: Acknowledgements; Abstract; a CD containing the simulator and results; List of Figures; List of Tables.
Contents: 1. Introduction -- 2. Background Theory (Empirical Design Formulas for Some Passive Microwave Structures: Short Dipole and Monopole, Slot Antenna, Stripline, Microstrip; Edge-Based Finite Element Method and the Generalized Variational Principle: Variational Formulation, Advantages of the Total Field Formulation, Galerkin's Weighted-Residual Formulation, Vector Bases for BRICK, PRISM, and TETRA Elements, Mesh Generation for 3D/2D/1D Geometrical Entities in the Cartesian Domain; Construction of the Functional with Total Field Formulation: Vector Wave Equation, Boundary Conditions (perfect magnetic and electric walls, anisotropic perfectly matched layer, 2nd-order absorbing boundary conditions, plane wave incidence, magnetic aperture, passive lumped load, current feed, voltage feed, resistive sheet); Visualization and Post-Processing of the Solution Field: Field Pattern Plot, Input-Port Impedance, Y-Parameter Extraction) -- 3. Simulation Results and Discussion (Radiating Structures: Short Dipole and Monopole, Slot Antenna with Magnetic Aperture, Current Feed, and Plane Wave Excitations; Striplines: A Straight 50Ω Stripline, PML Thickness and Layer Optimization, BRICK/PRISM Mesh Combinations, A Cross Junction, Squared and Chamfered 90° Corners, Slot-Coupled Stripline Pairs with Open, Short, Shorted-Through-Slot, and 50Ω-Loaded Terminations; Calculating the Input Impedance Vport/Iport) -- 4. Conclusion (Minor Problems Encountered; To Probe Further) -- Appendix: Implementation of the Edge-Based Finite Element Method (Mesh Generation Scheme; Assembly of the Global System of Equations, including Volume and Surface Integration of Constant Tangential Brick and Pyramidal Elements; Solution of the Final System by Diagonalization, Blockwise Partitioning, and Direct Methods for Complex-Valued Systems; Visualization and Post-Processing; Simulation Setup with BRICK, PRISM, and Higher-Order TETRA Elements) -- References (books; journals and papers on hierarchical edge bases and FEM formulations, ABC and PML, mesh generation, and free FEM source code matrix solvers; miscellaneous).
APA, Harvard, Vancouver, ISO, and other styles
24

Ghane, Parisa. "Silent speech recognition in EEG-based brain computer interface." Thesis, 2015. http://hdl.handle.net/1805/9886.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
A Brain Computer Interface (BCI) is a hardware and software system that establishes direct communication between the human brain and the environment. In a BCI system, brain messages pass through wires and external computers instead of the normal pathway of nerves and muscles. The general workflow in all BCIs is to measure brain activities, process them, and then convert them into an output readable by a computer. The measurement of electrical activities in different parts of the brain is called electroencephalography (EEG). There are many sensor technologies, with different numbers of electrodes, for recording brain activities along the scalp. Each of these electrodes captures a weighted sum of the activities of all neurons in the area around that electrode. To establish a BCI system, a set of electrodes is placed on the scalp, together with a tool to send the signals to a computer, where a system is trained to find the important information, extract it from the raw signal, and use it to recognize the user's intention. Finally, a control signal is generated according to the application. This thesis describes the step-by-step training and testing of a BCI system that can be used by a person who has lost speaking skills through an accident or surgery but still has healthy brain tissue. The goal is to establish an algorithm that recognizes different vowels from EEG signals. It uses a bandpass filter to remove noise and artifacts from the signals, a periodogram for feature extraction, and a Support Vector Machine (SVM) for classification.
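The processing chain described (bandpass filter, periodogram features, SVM classifier) can be sketched end-to-end with standard libraries. The sampling rate, band edges, and synthetic "vowel" signatures below are assumptions for illustration, not the thesis's settings:

    import numpy as np
    from scipy.signal import butter, filtfilt, periodogram
    from sklearn.svm import SVC

    fs = 256  # assumed EEG sampling rate (Hz)
    rng = np.random.default_rng(1)

    def features(epoch):
        """Bandpass one EEG epoch, then use periodogram power as the feature vector."""
        b, a = butter(4, [1.0, 40.0], btype="bandpass", fs=fs)
        clean = filtfilt(b, a, epoch)
        freqs, pxx = periodogram(clean, fs=fs)
        return pxx[(freqs >= 1) & (freqs <= 40)]    # keep in-band power only

    def make_epoch(tone_hz):
        """Synthetic 1-second epoch: an oscillation standing in for a vowel's signature."""
        t = np.arange(fs) / fs
        return np.sin(2 * np.pi * tone_hz * t) + rng.standard_normal(fs)

    epochs = [make_epoch(10) for _ in range(30)] + [make_epoch(20) for _ in range(30)]
    labels = np.array([0] * 30 + [1] * 30)
    idx = rng.permutation(60)
    X = np.array([features(e) for e in epochs])[idx]; y = labels[idx]

    clf = SVC(kernel="rbf").fit(X[:40], y[:40])     # train on 40 epochs
    print("held-out accuracy:", clf.score(X[40:], y[40:]))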
APA, Harvard, Vancouver, ISO, and other styles
25

Mukherjee, Prateep. "Active geometric model : multi-compartment model-based segmentation & registration." Thesis, 2014. http://hdl.handle.net/1805/4908.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
We present a novel variational and statistical approach for model-based segmentation. Our model generalizes the Chan-Vese model proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, namely the Multi-Compartment Distance Function (mcdf). Our proposed framework for segmentation is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then we use a variational method similar to Active Shape Models (ASMs) to generate an average shape model, and use the latter to partition new images. The key advantages of such a framework are: (i) landmark-free automated shape training; (ii) a strictly shape-constrained model to fit test data. Our model can naturally deal with shapes of arbitrary dimension and topology (closed/open curves). We term our model the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: one for morphology estimation of 3D motor neuron compartments, another for thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with that of several other state-of-the-art segmentation methods.
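Since the model generalizes Chan-Vese, the two-phase Chan-Vese fitting step is the natural reference point: each region is summarized by its mean intensity, and the level set moves to reduce the squared deviation from those means. Here is a minimal sketch of that baseline step (regularization omitted; this is plain Chan-Vese, not the multi-compartment generalization):

    import numpy as np

    def chan_vese_step(image, phi, dt=0.5, lam1=1.0, lam2=1.0):
        """One descent step on the Chan-Vese fitting term for level set phi."""
        inside, outside = phi > 0, phi <= 0
        c1 = image[inside].mean() if inside.any() else 0.0    # mean inside the curve
        c2 = image[outside].mean() if outside.any() else 0.0  # mean outside the curve
        force = lam2 * (image - c2) ** 2 - lam1 * (image - c1) ** 2
        return phi + dt * force                               # curvature term omitted for brevity

    rng = np.random.default_rng(2)
    img = rng.normal(0.2, 0.05, (64, 64)); img[20:44, 20:44] += 0.6  # bright square
    phi = rng.normal(0, 0.1, (64, 64))                               # random initialization
    for _ in range(50):
        phi = chan_vese_step(img, phi)
    print("segmented pixels:", int((phi > 0).sum()))                 # roughly the square's area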
APA, Harvard, Vancouver, ISO, and other styles
26

Xu, Lin. "A Study of Efficiency, Accuracy, and Robustness in Intensity-Based Rigid Image Registration." Thesis, 2008. http://hdl.handle.net/10012/4077.

Full text
Abstract:
Image registration is widely used in different areas nowadays. Applications are usually concerned with the efficiency, accuracy, and robustness of the registration process. This thesis studies these issues by presenting an efficient intensity-based mono-modality rigid 2D-3D image registration method and constructing a novel mathematical model for intensity-based multi-modality rigid image registration. For mono-modality image registration, an algorithm is developed using the RapidMind Multi-core Development Platform (RapidMind) to exploit the highly parallel multi-core architecture of graphics processing units (GPUs). A parallel ray casting algorithm is used to generate the digitally reconstructed radiographs (DRRs), efficiently reducing the complexity of DRR construction. The optimization problem in the registration process is solved by the Gauss-Newton method. To fully exploit the multi-core parallelism, almost the entire registration process is implemented in parallel with RapidMind on GPUs. The implementation of the major computation steps is discussed, and numerical results are presented to demonstrate the efficiency of the new method. For multi-modality image registration, a new model for computing mutual information functions is devised to remove the artifacts in the functions, and in turn smooth them, so that optimization methods can converge to the optimal solutions accurately and efficiently. Motivated by the aim of harmonizing the discrepancy between the image representation and the mutual information definition in previous models, the new model computes the mutual information function using both the continuous image function representation and the mutual information definition for continuous random variables. Its implementation and complexity are discussed and compared with other models. The mutual information computed using the new model appears quite smooth compared with the functions computed by others. Numerical experiments demonstrate the accuracy and efficiency of optimization methods when the new model is used. Furthermore, the robustness of the new model is also verified.
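As a baseline against which the thesis's continuous model is positioned, the conventional histogram-based mutual information between two images can be sketched as follows (the standard formulation whose binning artifacts the new model is designed to avoid):

    import numpy as np

    def mutual_information(img_a, img_b, bins=32):
        """Histogram-based MI: sum over p(a,b) * log(p(a,b) / (p(a) p(b)))."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                       # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(3)
    fixed = rng.random((128, 128))
    moving_aligned = fixed + 0.05 * rng.standard_normal(fixed.shape)
    moving_shifted = np.roll(moving_aligned, 15, axis=1)   # misregistered copy
    print(mutual_information(fixed, moving_aligned))       # high
    print(mutual_information(fixed, moving_shifted))       # lower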
APA, Harvard, Vancouver, ISO, and other styles
27

Sithole, Zola. "Parameterisation of the 3-PG process-based model in predicting the growth and water use of Pinus elliottii in South Africa." Thesis, 2011. http://hdl.handle.net/10413/9880.

Full text
Abstract:
A simplified process-based model simulating growth and water use in forest plantations was used to predict the growth of Pinus elliottii in South African plantations. The model, 3-PG (Physiological Principles in Predicting Growth), predicts tree growth by simulating the physiological processes that determine growth and water use, and the way trees are affected by, and react to, the physical conditions to which they are subjected. Pinus elliottii growth data recorded in 301 sample stands around South Africa were sourced from forestry companies. A selection procedure reduced the number of stands to 44, of which 32 were used to parameterise 3-PG and 12 were reserved for testing the final model parameters. This was accomplished by matching model output to observed data. All stand simulations were initialised at age four years and continued to the maximum age of recorded growth. A provisional set of parameter values provided a good fit to most stands, and minor adjustments to the specific leaf area (σ), which was assigned a value of 5 m².kg⁻¹, brought about an improved fit. The predictions of mean DBH, height, and TPH were relatively good, achieving R² of 0.8036, 0.8975, and 0.661 respectively, while predictions of stem volumes were worse (R² = 0.5922, n = 32). The 3-PG model over-predicted DBH in 20 stands, while modelled volume predictions improved substantially in thinned stands (R² = 0.8582, n = 14) compared to unthinned stands (R² = 0.3456, n = 18). The height predictions were generally good (R² = 0.8975). The final set of 3-PG parameter values was then validated against growth data from the 12 independent stands. The predictions of mean DBH, height, and TPH were relatively good, achieving R² of 0.8467, 0.7649, and 0.9916 respectively, while predictions of stem volumes were worse (R² = 0.5766, n = 12). The results of this study demonstrated the potential for 3-PG to respond to many growth factors and to predict growth and water use by trees with encouraging realism. Patterns of changing leaf area index (L) over time, responses to drought, and annual evaporation patterns all look realistic. Consequently, 3-PG is judged to have potential as a strategic forestry tool.
Thesis (M.Sc.)-University of KwaZulu-Natal, Pietermaritzburg, 2011.
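3-PG's core step is a light-use-efficiency calculation: absorbed radiation times canopy quantum efficiency, scaled down by environmental modifiers. Here is a schematic sketch with invented parameter values; the thesis's point is precisely that such parameters must be estimated per species and site, and the real model applies its modifiers in a more differentiated way than the single minimum used here:

    import math

    def monthly_npp(par, lai, alpha=0.05, k=0.5, modifiers=(0.9, 0.7, 0.8)):
        """Schematic 3-PG-style step: NPP from absorbed PAR, quantum efficiency, modifiers."""
        apar = par * (1 - math.exp(-k * lai))   # Beer's law canopy light absorption
        gpp = alpha * apar * min(modifiers)     # most limiting environmental modifier
        return 0.47 * gpp                       # fixed NPP:GPP ratio, as in 3-PG

    # PAR in mol/m^2/month for a leaf area index of 3:
    print(f"NPP: {monthly_npp(par=800, lai=3.0):.1f} (units follow alpha)")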
APA, Harvard, Vancouver, ISO, and other styles
28

Stamps, Kenyon. "A steady-state visually evoked potential based brain-computer interface system for control of electric wheelchairs." 2012. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001343.

Full text
Abstract:
M. Tech. Electrical Engineering
Determines whether Hidden Markov Models (HMMs) can be used to classify steady-state visually evoked electroencephalogram signals in a BCI system, for the purpose of helping disabled people drive a wheelchair.
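The classification question posed is typically set up as one HMM per stimulus class, assigning a test epoch to the class whose model scores it highest. Here is a sketch using the hmmlearn package on synthetic sequences; the feature choice and model sizes are assumptions, not the thesis's configuration:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(4)

    def make_epochs(freq, n=20, length=128):
        """Synthetic single-feature SSVEP-like sequences for one stimulus class."""
        t = np.arange(length) / 128.0
        return [np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(length) for _ in range(n)]

    classes = {10: make_epochs(10), 15: make_epochs(15)}     # two flicker frequencies
    models = {}
    for label, epochs in classes.items():
        X = np.concatenate(epochs).reshape(-1, 1)            # stack epochs for training
        lengths = [len(e) for e in epochs]
        models[label] = GaussianHMM(n_components=3, n_iter=50).fit(X, lengths)

    test = make_epochs(15, n=1)[0].reshape(-1, 1)            # unseen 15 Hz epoch
    pred = max(models, key=lambda c: models[c].score(test))  # max-likelihood class
    print("predicted flicker frequency:", pred, "Hz")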
APA, Harvard, Vancouver, ISO, and other styles
29

Werner, Edith Benedicta Maria. "Learning Finite State Machine Specifications from Test Cases." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B3D7-E.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Zeiß, Benjamin. "Quality Assurance of Test Specifications for Reactive Systems." Doctoral thesis, 2010. http://hdl.handle.net/11858/00-1735-0000-0006-B3DA-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Azzam, Ibrahim Ahmed Aref. "Implicit Concept-based Image Indexing and Retrieval for Visual Information Systems." Thesis, 2006. https://vuir.vu.edu.au/479/.

Full text
Abstract:
This thesis focuses on Implicit Concept-based Image Indexing and Retrieval (ICIIR), and the development of a novel method for the indexing and retrieval of images. Image indexing and retrieval using a concept-based approach involves extraction, modelling and indexing of image content information. Computer vision offers a variety of techniques for searching images in large collections. We propose a method, which involves the development of techniques to enable components of an image to be categorised on the basis of their relative importance within the image in combination with filtered representations. Our method concentrates on matching subparts of images, defined in a variety of ways, in order to find particular objects. The storage of images involves an implicit, rather than an explicit, indexing scheme. Retrieval of images will then be achieved by application of an algorithm based on this categorisation, which will allow relevant images to be identified and retrieved accurately and efficiently. We focus on Implicit Concept-based Image Indexing and Retrieval, using fuzzy expert systems, density measure, supporting factors, weights and other attributes of image components to identify and retrieve images.
APA, Harvard, Vancouver, ISO, and other styles
32

von, Sydow Momme. "Towards a Flexible Bayesian and Deontic Logic of Testing Descriptive and Prescriptive Rules." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-AC29-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Gorrín, Manzuli Arnélida. "Wissensgestütztes Beobachtungs- und Evaluierungssystem der Landnutzung." Doctoral thesis, 2005. http://hdl.handle.net/11858/00-1735-0000-0006-B337-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles