To see the other types of publications on this topic, follow the link: Decision algorithm.

Dissertations / Theses on the topic 'Decision algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Decision algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Gato, Gonçalo. "Algorithm and decision in musical composition." Thesis, Guildhall School of Music and Drama, 2016. http://openaccess.city.ac.uk/17292/.

Full text
Abstract:
Through a series of creative projects this doctorate set out to research how computer-assisted composition (CAC) of music affects decision-making in my compositional practice. By reporting on the creative research journey, this doctorate is a contribution towards a better understanding of the implications of CAC by offering new insights into the composing process. It is also a contribution to the composition discipline as new techniques were devised, together with new applications of existing techniques. Using OpenMusic as the sole programming environment, the manual/machine interface was explored through different balances between manual and algorithmic composition and through aesthetic reflection guiding the composing process. This helped clarify the purpose, adequacy and nature of each method as decisions were constantly being taken towards completing the artistic projects. The most suitable use of algorithms was as an environment for developing, testing, refining and assessing compositional techniques and the music materials they generate: a kind of musical laboratory. As far as a technique can be described by a set of rules, algorithms can help formulate and refine it. Also capable of incorporating indeterminism, they can act as powerful devices in discovering unforeseen musical implications and results. Algorithms alone were found to be insufficient to simulate human creative thought because aspects such as (but not limited to) imagination, judgement and personal bias could only, and hypothetically, be properly simulated by the most sophisticated forms of artificial intelligence. Furthermore, important aspects of composition such as instrumentation, articulation and orchestration were not subjected to algorithmic treatment because, not being sufficiently integrated in OpenMusic currently, they would involve a great deal of knowledge to be specified and adapted to computer language. 
These shortcomings of algorithms, therefore, implied varying degrees of manual interventions to be carried out on raw materials coming out of their evaluations. A combination of manual and algorithmic composition was frequently employed so as to properly handle musical aspects such as cadence, discourse, monotony, mechanicalness, surprise, and layering, among others. The following commentary illustrates this varying dialogue between automation and intervention, placing it in the wider context of other explorations at automating aspects of musical composition.
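The manual/algorithmic balance described above can be caricatured in a few lines: an algorithm generates raw pitch material from a rule, and the composer then intervenes by hand on the result. This is only an illustrative sketch in Python (the thesis uses OpenMusic, and the pitch set, random-walk rule and function names here are invented):

```python
import random

def generate_material(seed, length, pitch_set):
    """Algorithm as 'musical laboratory': deterministically generate raw
    pitch material from a rule (here, a seeded random walk over a pitch
    set) that the composer can then audit, test and refine."""
    rng = random.Random(seed)
    idx = 0
    line = []
    for _ in range(length):
        # Step up or down the pitch set, clamped to its range.
        idx = max(0, min(len(pitch_set) - 1, idx + rng.choice([-2, -1, 1, 2])))
        line.append(pitch_set[idx])
    return line

def manual_intervention(line, edits):
    """Manual stage: the composer overrides algorithm output at chosen
    positions, mirroring the manual/machine balance discussed above."""
    out = list(line)
    for pos, pitch in edits.items():
        out[pos] = pitch
    return out
```

Because the generator is seeded, the same rule can be re-evaluated repeatedly while techniques are refined, which is the "laboratory" use the abstract describes.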
APA, Harvard, Vancouver, ISO, and other styles
2

Kassim, M. E. "Elliptical cost-sensitive decision tree algorithm (ECSDT)." Thesis, University of Salford, 2018. http://usir.salford.ac.uk/47191/.

Full text
Abstract:
Cost-sensitive multiclass classification, in which the task is to assess the impact of the costs associated with different misclassification errors, continues to be one of the major challenging areas for data mining and machine learning. The literature reviews in this area show that most of the cost-sensitive algorithms that have been developed during the last decade were developed to solve binary classification problems, where an example from the dataset is classified into only one of two available classes. Much of the research on cost-sensitive learning has focused on inducing decision trees, which are one of the most common and widely used classification methods, due to the simplicity of constructing them, their transparency and comprehensibility. A review of the literature shows that inducing nonlinear multiclass cost-sensitive decision trees is still in its early stages and further research could result in improvements over the current state of the art. Hence, this research aims to address the following question: 'How can non-linear regions be identified for multiclass problems and utilized to construct decision trees so as to maximize the accuracy of classification, and minimize misclassification costs?' This research addresses this problem by developing a new algorithm called the Elliptical Cost-Sensitive Decision Tree algorithm (ECSDT) that induces cost-sensitive non-linear (elliptical) decision trees for multiclass classification problems using evolutionary optimization methods such as particle swarm optimization (PSO) and Genetic Algorithms (GAs). In this research, ellipses are used as non-linear separators because of their simplicity and flexibility in drawing non-linear boundaries by modifying and adjusting their size, location and rotation towards achieving optimal results. The new algorithm was developed, tested, and evaluated in three different settings, each with a different objective function.
The first considered maximizing the accuracy of classification only; the second focused on minimizing misclassification costs only, while the third considered both accuracy and misclassification cost together. ECSDT was applied to fourteen different binary-class and multiclass data sets and the results have been compared with those obtained by applying some common algorithms from Weka to the same datasets such as J48, NBTree, MetaCost, and the CostSensitiveClassifier. The primary contribution of this research is the development of a new algorithm that shows the benefits of utilizing elliptical boundaries for cost-sensitive decision tree learning. The new algorithm is capable of handling multiclass problems and an empirical evaluation shows good results. More specifically, when considering accuracy only, ECSDT performs better in terms of maximizing accuracy on 10 out of the 14 datasets, and when considering minimizing misclassification costs only, ECSDT performs better on 10 out of the 14 datasets, while when considering both accuracy and misclassification costs, ECSDT was able to obtain higher accuracy on 10 out of the 14 datasets and minimize misclassification costs on 5 out of the 14 datasets. The ECSDT also was able to produce smaller trees when compared with J48, LADTree and ADTree.
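As a rough illustration of the core idea, an ellipse used as a non-linear separator scored against a misclassification-cost matrix, here is a minimal sketch (the 2-D restriction, function names and cost-matrix encoding are assumptions for illustration, not ECSDT itself, which evolves many ellipses with PSO/GAs):

```python
import math

def inside_ellipse(point, center, axes, angle):
    """True if `point` lies inside an ellipse with center (cx, cy),
    semi-axes (a, b) and rotation `angle` in radians."""
    x, y = point[0] - center[0], point[1] - center[1]
    cos_t, sin_t = math.cos(angle), math.sin(angle)
    # Rotate the point into the ellipse's own coordinate frame.
    xr = x * cos_t + y * sin_t
    yr = -x * sin_t + y * cos_t
    return (xr / axes[0]) ** 2 + (yr / axes[1]) ** 2 <= 1.0

def misclassification_cost(points, labels, ellipse, cost_matrix):
    """Total cost when the ellipse predicts class 1 inside, class 0 outside.
    cost_matrix[actual][predicted] gives the cost of each outcome; an
    optimizer would adjust the ellipse to minimize this value."""
    total = 0.0
    for p, actual in zip(points, labels):
        predicted = 1 if inside_ellipse(p, *ellipse) else 0
        total += cost_matrix[actual][predicted]
    return total
```

An evolutionary optimizer such as PSO would treat the five ellipse parameters (cx, cy, a, b, angle) as the particle position and this cost as the fitness.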
APA, Harvard, Vancouver, ISO, and other styles
3

Ogunsanya, Oluwole Victor. "Decision support using Bayesian networks for clinical decision making." Thesis, Queen Mary, University of London, 2012. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8688.

Full text
Abstract:
This thesis investigates the use of Bayesian Networks (BNs), augmented by the Dynamic Discretization Algorithm, to model a variety of clinical problems. In particular, the thesis demonstrates four novel applications of BN and dynamic discretization to clinical problems. Firstly, it demonstrates the flexibility of the Dynamic Discretization Algorithm in modeling existing medical knowledge using appropriate statistical distributions. Many practical applications of BNs use the relative frequency approach while translating existing medical knowledge to a prior distribution in a BN model. This approach does not capture the full uncertainty surrounding the prior knowledge. Secondly, it demonstrates a novel use of the multinomial BN formulation in learning parameters of categorical variables. The traditional approach requires fixed number of parameters during the learning process but this framework allows an analyst to generate a multinomial BN model based on the number of parameters required. Thirdly, it presents a novel application of the multinomial BN formulation and dynamic discretization to learning causal relations between variables. The idea is to consider competing causal relations between variables as hypotheses and use data to identify the best hypothesis. The result shows that BN models can provide an alternative to the conventional causal learning techniques. The fourth novel application is the use of Hierarchical Bayesian Network (HBN) models, augmented by dynamic discretization technique, to meta-analysis of clinical data. The result shows that BN models can provide an alternative to classical meta analysis techniques. The thesis presents two clinical case studies to demonstrate these novel applications of BN models. The first case study uses data from a multi-disciplinary team at the Royal London hospital to demonstrate the flexibility of the multinomial BN framework in learning parameters of a clinical model. 
The second case study demonstrates the use of BNs and dynamic discretization in solving a decision problem. In summary, the combination of the Junction Tree Algorithm and the Dynamic Discretization Algorithm provides a unified modeling framework for solving interesting clinical problems.
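The abstract's point that a plain relative-frequency translation of prior knowledge understates uncertainty can be illustrated with a Dirichlet-multinomial posterior mean, a standard Bayesian construction (this is a generic illustration, not the thesis's multinomial BN formulation):

```python
def multinomial_posterior_mean(counts, alpha=1.0):
    """Parameter estimate for a categorical variable with a symmetric
    Dirichlet(alpha) prior: posterior mean (c_i + alpha) / (N + K*alpha).
    Unlike the raw relative frequency c_i / N, it never assigns zero
    probability to an unobserved category."""
    k = len(counts)
    n = sum(counts)
    return [(c + alpha) / (n + k * alpha) for c in counts]
```

With counts [3, 1] the relative frequencies are [0.75, 0.25], while the posterior mean [2/3, 1/3] is pulled toward uniform, reflecting residual uncertainty in a small sample.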
APA, Harvard, Vancouver, ISO, and other styles
4

Bacak, Hikmet Ozge. "Decision Making System Algorithm On Menopause Data Set." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12612471/index.pdf.

Full text
Abstract:
A multiple-centered clustering method and a decision making system algorithm on a menopause data set, depending on multiple-centered clustering, are described in this study. This method consists of two stages. At the first stage, the fuzzy C-means (FCM) clustering algorithm is applied on the data set under consideration with a high number of cluster centers. As the output of FCM, cluster centers and membership function values for each data member are calculated. At the second stage, the original cluster centers obtained in the first stage are merged until the new number of clusters is reached. The merging process relies upon a “similarity measure” between clusters defined in the thesis. During the merging process, the cluster center coordinates do not change, but the data members in these clusters are merged into a new cluster. As the output of this method, therefore, one obtains clusters which include many cluster centers. In the final part of this study, as an application of the clustering algorithms (including the multiple-centered clustering method) a decision making system is constructed using special data on menopause treatment. The decisions are based on the clusterings created by the algorithms already discussed in the previous chapters of the thesis. A verification of the decision making system / decision aid system was done by a team of experts from the Department of Obstetrics and Gynecology of Hacettepe University under the guidance of Prof. Sinan Beksaç.
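The second, merging stage can be sketched roughly as follows, with plain Euclidean distance standing in for the thesis's own similarity measure (the greedy grouping strategy and the threshold are assumptions; note how member lists are pooled while center coordinates are kept unchanged, as the abstract describes):

```python
import math

def merge_similar_clusters(centers, members, threshold):
    """Greedily group cluster centers whose pairwise distance is below
    `threshold`; member lists are pooled but the center coordinates are
    kept.  Returns a list of (center_group, pooled_members) pairs, so a
    merged cluster can contain many of the original FCM centers."""
    merged = []
    used = set()
    for i, ci in enumerate(centers):
        if i in used:
            continue
        group, pool = [ci], list(members[i])
        for j in range(i + 1, len(centers)):
            if j in used:
                continue
            if math.dist(ci, centers[j]) < threshold:  # "similar enough"
                group.append(centers[j])
                pool.extend(members[j])
                used.add(j)
        merged.append((group, pool))
    return merged
```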
APA, Harvard, Vancouver, ISO, and other styles
5

Shi, Haijian. "Best-first Decision Tree Learning." The University of Waikato, 2007. http://hdl.handle.net/10289/2317.

Full text
Abstract:
In best-first top-down induction of decision trees, the best split is added at each step (e.g. the split that maximally reduces the Gini index), in contrast to the standard depth-first order of construction. The fully grown tree is the same; only the order in which it is built differs. The objective of this project is to investigate whether it is possible to determine an appropriate tree size on practical datasets by combining best-first decision tree growth with cross-validation-based selection of the number of expansions that are performed. Pre-pruning, post-pruning, and CART pruning can all be performed in this framework and compared.
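Best-first growth amounts to expanding frontier nodes from a priority queue keyed on impurity reduction, stopping after a budget of expansions chosen by cross-validation. A minimal sketch (the node encoding and gain values are invented; a real learner would compute gains from the data):

```python
import heapq

def best_first_expand(root, child_gains, max_expansions):
    """Toy best-first growth: repeatedly expand the frontier node with the
    largest impurity reduction (e.g. Gini gain) until the expansion budget
    is spent.  `root` is an (id, gain) pair and `child_gains[node_id]`
    lists the (child_id, gain) splits a node exposes once expanded."""
    frontier = [(-root[1], root[0])]   # max-heap via negated gain
    expanded = []
    while frontier and len(expanded) < max_expansions:
        _, node = heapq.heappop(frontier)
        expanded.append(node)
        for child, gain in child_gains.get(node, []):
            heapq.heappush(frontier, (-gain, child))
    return expanded
```

With a budget of 2 the low-gain branch is simply never expanded, which is exactly the pre-pruning effect the thesis exploits.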
APA, Harvard, Vancouver, ISO, and other styles
6

Bai, Ming. "Optimization decision maker algorithm for infrastructure interdependencies with I2Sim applications." Thesis, University of British Columbia, 2012. http://hdl.handle.net/2429/42809.

Full text
Abstract:
The study of complex interdependent systems is an important research area. In recent years, it has been applied to disaster response management and building energy systems. I2Sim (Infrastructures Interdependencies Simulator) is a software simulation toolbox developed by the Power Lab at the University of BC. It has a wide range of capabilities, including simulation of disaster scenarios and energy system optimization. The user needs to provide Human Readable Tables (HRTs) as inputs for the program. The basic ontology of the I2Sim Resource Layer includes cells, channels and tokens, which are abstractions from real life objects. Initially, the intent of this thesis was to examine the energy usage pattern of the Kaiser Building, perform energy optimization modeling and examine how it relates to energy policies. After some initial research, it was not possible to proceed further due to a lack of metered data. The research focus was changed to disaster scenario simulation. This thesis proposes a new optimization algorithm named Lagrange Based Optimization (LBO). The main objective is to maximize the number of discharged patients from the hospitals simulated in this study. The first scenario modeled is a three-hospital scenario with no transportation, to illustrate the principles of the algorithm. Then a three-venue three-hospital scenario with transportation was modeled to maximize both the number of patients transported to the hospitals and the number of patients discharged from the hospitals. After that, the first scenario is compared against the performance of a Reinforcement Learning (RL) agent method concurrently developed in the same research group. Overall, the LBO algorithm demonstrates optimal results in the various I2Sim modeling scenarios.
APA, Harvard, Vancouver, ISO, and other styles
7

Manongga, D. H. F. "Using genetic algorithm-based methods for financial analysis." Thesis, University of East Anglia, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320950.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Chi. "A constrained MDP-based vertical handoff decision algorithm for wireless networks." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/1243.

Full text
Abstract:
The 4th generation wireless communication systems aim to provide users with the convenience of seamless roaming among heterogeneous wireless access networks. To achieve this goal, the support of vertical handoff is important in mobility management. This thesis focuses on the vertical handoff decision algorithm, which determines the criteria under which vertical handoff should be performed. The problem is formulated as a constrained Markov decision process. The objective is to maximize the expected total reward of a connection subject to the expected total access cost constraint. In our model, a benefit function is used to assess the quality of the connection, and a penalty function is used to model the signaling incurred and call dropping. The user's velocity and location information are also considered when making the handoff decisions. The policy iteration and Q-learning algorithms are employed to determine the optimal policy. Structural results on the optimal vertical handoff policy are derived by using the concept of supermodularity. We show that the optimal policy is a threshold policy in bandwidth, delay, and velocity. Numerical results show that our proposed vertical handoff decision algorithm outperforms other decision schemes in a wide range of conditions such as variations on connection duration, user's velocity, user's budget, traffic type, signaling cost, and monetary access cost.
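The structural result above, that the optimal vertical handoff policy is a threshold policy in bandwidth, delay, and velocity, can be illustrated as a simple rule (the thresholds here are invented; the thesis computes the optimal policy via policy iteration and Q-learning rather than fixing thresholds by hand):

```python
def handoff_decision(bandwidth, delay, velocity,
                     bw_threshold=2.0, delay_threshold=80.0, v_threshold=15.0):
    """Threshold-style vertical handoff rule: switch to the candidate
    network only when it offers enough bandwidth (Mbps), low enough delay
    (ms), and the user moves slowly enough that the signaling cost of the
    handoff can be amortised over the stay."""
    return (bandwidth >= bw_threshold
            and delay <= delay_threshold
            and velocity <= v_threshold)
```

The supermodularity argument in the thesis guarantees that some such thresholds exist; the learning algorithms find where they lie.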
APA, Harvard, Vancouver, ISO, and other styles
9

Nkansah-Gyekye, Yaw. "An intelligent vertical handoff decision algorithm in next generation wireless networks." Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_2726_1307443785.

Full text
Abstract:

The objective of the thesis research is to design vertical handoff decision algorithms that allow mobile field workers and other mobile users equipped with contemporary multimode mobile devices to communicate seamlessly in the NGWN. In order to tackle this research objective, we used fuzzy logic and fuzzy inference systems to design a suitable handoff initiation algorithm that can handle imprecision and uncertainties in data and process multiple vertical handoff initiation parameters (criteria); used the fuzzy multiple attribute decision making method and context awareness to design a suitable access network selection function that can handle a tradeoff among many handoff metrics, including quality of service requirements (such as network conditions and system performance), mobile terminal conditions, power requirements, application types, user preferences, and a price model; used genetic algorithms and simulated annealing to optimise the access network selection function in order to dynamically select the optimal available access network for handoff; and focused in particular on an interesting use case: vertical handoff decision between mobile WiMAX and UMTS access networks. The implementation of our handoff decision algorithm will provide a network selection mechanism to help mobile users select the best wireless access network among all available wireless access networks, that is, one that provides always-best-connected services to users.
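A crude stand-in for an access network selection function is a weighted score over normalised attributes, with cost attributes such as price and power inverted (the attribute names and weights below are invented; the thesis uses fuzzy multiple attribute decision making, with GA/simulated-annealing tuning, rather than a plain weighted sum):

```python
def select_network(candidates, weights):
    """Score each candidate access network by a weighted sum of attributes
    normalised to [0, 1] and return the best.  Benefit attributes (e.g.
    signal quality) score as-is; cost attributes (price, power) are
    inverted so that lower is better."""
    cost_attrs = {"price", "power"}

    def score(attrs):
        total = 0.0
        for name, w in weights.items():
            v = attrs[name]
            total += w * ((1.0 - v) if name in cost_attrs else v)
        return total

    return max(candidates, key=lambda n: score(candidates[n]))
```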

APA, Harvard, Vancouver, ISO, and other styles
10

Ahmed, Mansoor, and Ali Murtaza. "Decision algorithm and procedure for fast handover between 3G and WLAN." Thesis, Halmstad University, School of Information Science, Computer and Electrical Engineering (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-5146.

Full text
Abstract:

Different types of wireless network systems have been developed to offer Internet access to mobile end users. Nowadays, a pressing issue is to provide network connectivity to end users on an anywhere, anytime basis. Typical examples of wireless networks are 3G and WLAN (wireless local area network). An important issue is to integrate these heterogeneous networks and manage the mobile nodes while they move across heterogeneous networks, with session continuity, low latency for handover between networks that are based on different technologies (vertical handover), and minimum packet loss. To achieve this, it is important to find the right point in time to perform a handover between two networks and to find a new network that, in fact, improves the connectivity over a reasonable time span. In this thesis, we propose vertical handover techniques for mobile users, based on predictive and adaptive schemes for the selection of the next network. The handover decision between the technologies takes the velocity of the mobile node, the battery condition of the mobile node, the signal to interference ratio (SIR), application requirements and the received signal strength (RSS) as decision parameters. In order to reduce unnecessary handovers, the concept of a dwell timer is used; to reduce handover latency, the predictive mode of Fast Mobile IPv6 (FMIPv6) is employed; and for minimum packet loss, tunnelling and buffering of packets on the new access router in advance are proposed.
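The dwell-timer idea for suppressing unnecessary handovers can be sketched as follows (the sample encoding, threshold and dwell length are illustrative assumptions): the new network's RSS must stay above a threshold for a sustained run of samples before the handover is committed, which filters out transient signal peaks.

```python
def dwell_timer_handover(rss_samples, threshold, dwell):
    """Return the sample index at which to hand over: the first moment the
    candidate network's RSS has stayed >= `threshold` for `dwell`
    consecutive samples.  Returns None if the condition is never met."""
    run = 0
    for i, rss in enumerate(rss_samples):
        run = run + 1 if rss >= threshold else 0   # reset on any dip
        if run >= dwell:
            return i
    return None
```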

APA, Harvard, Vancouver, ISO, and other styles
11

Tobin, Ludovic. "A Stochastic Point-Based Algorithm for Partially Observable Markov Decision Processes." Thesis, Université Laval, 2008. http://www.theses.ulaval.ca/2008/25194/25194.pdf.

Full text
Abstract:
Decision making under uncertainty is a popular topic in the field of artificial intelligence. One popular way to attack such problems is by using a sound mathematical model. Notably, Partially Observable Markov Decision Processes (POMDPs) have been the subject of extensive research over the last ten years or so. However, solving a POMDP is a very time-consuming task and for this reason, the model has not been used extensively. Our objective was to continue the tremendous progress that has been made over the last couple of years, with the hope that our work will be a step toward applying POMDPs in large-scale problems. To do so, we combined different ideas in order to produce a new algorithm called SSVI (Stochastic Search Value Iteration). Three major accomplishments were achieved throughout this research work. Firstly, we developed a new offline POMDP algorithm which, on benchmark problems, proved to be more efficient than state-of-the-art algorithms. The originality of our method comes from the fact that it is a stochastic algorithm, in comparison with the usual deterministic algorithms. Secondly, the algorithm we developed can also be applied in a particular type of online environment, in which it outperforms the competition by a significant margin. Finally, we also applied a basic version of our algorithm in a complex military simulation in the context of the Combat Identification project from DRDC-Valcartier.
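At the core of any point-based POMDP solver, stochastic ones like SSVI included, is the Bayesian belief update b'(s') ∝ O(o | a, s') · Σ_s T(s' | s, a) · b(s). A toy version with dict-encoded transition and observation models (the nested-dict encoding is an assumption for readability):

```python
def belief_update(belief, action, observation, T, O):
    """One Bayesian belief update.  `belief` maps state -> probability,
    T[s][a][s2] is the transition probability, O[a][s2][o] the observation
    probability.  Returns the normalised posterior belief."""
    states = list(belief)
    new_b = {}
    for s2 in states:
        new_b[s2] = O[action][s2][observation] * sum(
            T[s][action][s2] * belief[s] for s in states)
    norm = sum(new_b.values())
    if norm == 0:
        raise ValueError("observation impossible under this belief")
    return {s: p / norm for s, p in new_b.items()}
```

Point-based methods maintain a finite set of such belief points and back up a value function only at those points, which is what makes them tractable.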
APA, Harvard, Vancouver, ISO, and other styles
12

Trivedi, Ankit P. "Decision tree-based machine learning algorithm for in-node vehicle classification." Thesis, California State University, Long Beach, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10196455.

Full text
Abstract:

This paper proposes an in-node, microprocessor-based vehicle classification approach to analyze and determine the types of vehicles passing over a 3-axis magnetometer sensor. The approach utilizes the J48 classification algorithm implemented in Weka (a machine learning software suite). J48 is Quinlan's C4.5 algorithm, an extension of decision tree machine learning based on the ID3 algorithm. The decision tree model is generated from a set of features extracted from vehicles passing over the 3-axis sensor. The features are attributes provided with correct classifications to the J48 training algorithm to generate a decision tree model, with varying degrees of classification rates based on cross-validation. Ideally, using fewer attributes to generate the model allows for the highest computational efficiency, since fewer features need to be calculated, while also minimizing the tree to fewer branches. The generated tree model can then be easily implemented using nested if statements in any language on a multitude of microprocessors. Also, setting an adaptive baseline to negate the effects of the background magnetic field allows reuse of the same tree model in multiple environments. The results of the experiment show that the vehicle classification system is effective and efficient.
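The two implementation ideas above, an adaptive baseline against the background field and a tree exported as nested if statements, can be sketched as follows (the thresholds, class names and the exponentially-weighted form of the baseline are invented for illustration, not taken from the trained J48 model):

```python
def adaptive_baseline(readings, alpha=0.01):
    """Slowly-tracking baseline for a 3-axis magnetometer: subtract an
    exponentially-weighted estimate of the background field so the same
    tree model can be reused in different magnetic environments."""
    base = list(readings[0])
    deltas = []
    for r in readings:
        deltas.append(tuple(v - b for v, b in zip(r, base)))
        base = [b + alpha * (v - b) for v, b in zip(r, base)]  # drift slowly
    return deltas

def classify(peak_z, signal_length):
    """A decision tree exported as nested if statements, the form the
    paper suggests for deployment on a microcontroller.  Hypothetical
    features and thresholds."""
    if signal_length > 40:
        return "truck"
    if peak_z > 120:
        return "suv"
    return "car"
```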

APA, Harvard, Vancouver, ISO, and other styles
13

Lubbe, Hendrik Gideon. "Intelligent automated guided vehicle (AGV) with genetic algorithm decision making capabilities." Thesis, [Bloemfontein?] : Central University of Technology, Free State, 2007. http://hdl.handle.net/11462/85.

Full text
Abstract:
Thesis (M.Tech.) - Central University of Technology, Free State, 2006
The ultimate goal of this research was to make an intelligent learning machine, so a new method had to be developed. This was made possible by creating a programme that generates another programme. By constantly changing the generated programme to improve itself, the machines are given the ability to adapt to their surroundings and, thus, learn from experience. This generated programme had to perform a specific task. For this experiment the programme was generated for a simulated PIC microcontroller aboard a simulated robot. The goal was to get the robot as close as possible to a specific position inside a simulated maze. The robot therefore had to show the ability to avoid obstacles, although only the distance to the destination was given as an indication of how well the generated programme was performing. The programme performed experiments by randomly changing a number of instructions in the current generated programme. The generated programme was evaluated by simulating the reactions of the robot. If the change to the generated programme resulted in getting the robot closer to the destination, then the changed generated programme was kept for future use. If the change resulted in a less desired reaction, then the newly generated programme was removed and the unchanged programme was kept for future use. This process was repeated a total of one hundred thousand times before the generated programme was considered valid. Because there was a very slim chance that a chosen instruction would be advantageous to the programme, it took many changes to arrive at the desired instruction and, thus, the desired result. After each change an evaluation was made through simulation. The number of necessary changes to the programme is greatly reduced by giving seemingly desirable instructions a higher chance of being chosen than other, seemingly unsatisfactory instructions.
Due to the extensive use of the random function in this experiment, the results differ from one another. To overcome this barrier, many individual programmes had to be generated by simulating and changing an instruction in the generated programme a hundred thousand times. This method was compared against Genetic Algorithms, which were used to generate a programme for the same simulated robot. The new method made the robot adapt much faster to its surroundings than the Genetic Algorithms. A physical robot, similar to the virtual one, was built to prove that the programmes generated could be used on a physical robot. There were quite a number of differences between the generated programmes and the way in which a human would generally construct the programme. Therefore, this method not only gives programmers a new perspective, but could also possibly do what human programmers have not been able to achieve in the past.
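The keep-if-better loop the abstract describes is essentially random-mutation hill climbing; here is a minimal sketch (the programme encoding, mutation operator and fitness function are placeholders for the simulated PIC instruction list and the robot-in-maze simulation):

```python
import random

def hill_climb(programme, mutate, fitness, iterations, seed=0):
    """Randomly change the generated programme, re-evaluate it by
    simulation, and keep the change only when fitness improves (e.g. the
    robot ends up closer to the goal); otherwise discard it."""
    rng = random.Random(seed)
    best, best_fit = programme, fitness(programme)
    for _ in range(iterations):
        candidate = mutate(best, rng)
        fit = fitness(candidate)
        if fit > best_fit:          # keep the change, else throw it away
            best, best_fit = candidate, fit
    return best, best_fit

def flip_one(programme, rng):
    """Example mutation: overwrite one random instruction with a fixed
    'good' instruction (a toy stand-in for the real instruction set)."""
    q = list(programme)
    q[rng.randrange(len(q))] = 1
    return q
```

The abstract's refinement, biasing the choice toward seemingly desirable instructions, corresponds to replacing the uniform `randrange` with a weighted draw.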
APA, Harvard, Vancouver, ISO, and other styles
14

Chung, Sai-ho, and 鍾世豪. "A multi-criterion genetic algorithm for supply chain collaboration." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B29357275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Krook, Jonatan. "Predicting low airfares with time series features and a decision tree algorithm." Thesis, Uppsala universitet, Statistiska institutionen, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-353274.

Full text
Abstract:
Airlines try to maximize revenue by letting prices of tickets vary over time. This fluctuation contains patterns that can be exploited to predict price lows. In this study, we create an algorithm that daily decides whether to buy a certain ticket or wait for the price to go down. For creation and evaluation, we have used data from searches made online for flights on the route Stockholm – New York during 2017 and 2018. The algorithm is based on time series features selected by a decision tree and clearly outperforms the selected benchmarks.
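A toy version of such a daily buy-or-wait rule, using a running mean of recent fares instead of the time-series features the thesis feeds to a decision tree (the logic and parameters below are invented for illustration):

```python
def buy_or_wait(price_history, days_left, min_history=3):
    """Decide each day whether to buy a ticket or wait for a lower fare.
    Buy when the current fare sits below the mean of earlier observations,
    or when departure is imminent and there is no time left to gamble."""
    if days_left <= 1:
        return "buy"                     # last chance: take the price
    if len(price_history) < min_history:
        return "wait"                    # not enough evidence yet
    current = price_history[-1]
    mean = sum(price_history[:-1]) / (len(price_history) - 1)
    return "buy" if current < mean else "wait"
```

Evaluating such a rule against the realised price minimum over the search window is how the benchmarks in the study are framed.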
APA, Harvard, Vancouver, ISO, and other styles
16

Saleh, Areej. "A Location-aided Decision Algorithm for Handoff Across Heterogeneous Wireless Overlay Networks." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/10043.

Full text
Abstract:
Internetworking third generation (3G) and wireless LAN (WLAN) technologies, such as Universal Mobile Telecommunication Systems (UMTS) and IEEE 802.11 respectively, is an emerging trend in the wireless domain. Its development was aimed at increasing the UMTS network's capacity and optimizing performance. The increase in the number of wireless users requires an increase in the number of smaller WLAN cells in order to maintain an acceptable level of QoS. Deploying smaller cells in areas of higher mobility (e.g., campuses, subway stations, city blocks, malls, etc.) results in the user only spending a short period of time in each cell, which significantly increases the rate of handoff. If the user does not spend sufficient time in the discovered WLAN's coverage area, the application cannot benefit from the higher data rates. Therefore, the data interruption and performance degradation associated with the handoff cannot be compensated for. This counters the initial objective of integrating heterogeneous technologies; thus only handoffs that are followed by a sufficient visit to the discovered WLAN should be triggered. The conventional RF-based handoff decision method does not have the necessary means for making an accurate decision in the type of environments described above. Therefore, a location-aided handoff decision algorithm was developed to prevent the triggering of handoffs that result from short visits to a discovered WLAN's coverage area. The algorithm includes a location-based evaluation that runs on the network side and utilizes a user's location, speed, and direction as well as handoff-delay values to compute the minimum required visit duration and the user's trajectory. A WLAN coverage database is queried to determine whether the trajectory's end point falls within the boundaries of the discovered WLAN's coverage area. If so, the mobile node is notified by the UMTS network to trigger the handoff.
Otherwise, the location-based evaluation reiterates until the estimated trajectory falls within the boundaries of the discovered WLAN's coverage area, or the user exits the coverage area. By taking into consideration more than merely RF measurements, the proposed algorithm is able to predict whether the user's visit to the WLAN will exceed the minimum requirements and make the decision accordingly. This allows the algorithm to prevent the performance degradation and cost associated with unbeneficial or unnecessary handoffs.
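The location-based evaluation can be approximated as a straight-line trajectory projection combined with a circular coverage model (the real system queries a WLAN coverage database of actual cell boundaries rather than assuming a circle; all names and parameters here are illustrative):

```python
import math

def handoff_beneficial(position, speed, heading, handoff_delay,
                       min_visit_time, wlan_center, wlan_radius):
    """Project the user's trajectory over the minimum required visit
    duration plus the handoff delay, and trigger the handoff only if the
    projected end point still falls inside the WLAN's coverage area."""
    t = min_visit_time + handoff_delay
    end = (position[0] + speed * t * math.cos(heading),
           position[1] + speed * t * math.sin(heading))
    return math.dist(end, wlan_center) <= wlan_radius
```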
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
17

Cannon, Stephen J. "Analysis of the relationship between partially dynamic Bayesian network architecture and inference algorithm effectiveness." Fairfax, VA : George Mason University, 2007. http://hdl.handle.net/1920/3181.

Full text
Abstract:
Thesis (M.S.)--George Mason University, 2007.
Vita: p. 192. Thesis director: Kathryn Blackmond Laskey. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Systems Engineering. Title from PDF t.p. (viewed Aug. 13, 2008). Additional zip folders contain software, thesis defense powerpoint and analysis documents. Includes bibliographical references (p. 190-191). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
18

Skipper, Lee R. "Development of a microcomputer-based capital budgeting algorithm for the dynamic decision environment." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/101266.

Full text
Abstract:
The capital budgeting process is conducted in a dynamic, uncertain environment. In each period of the process, a manager has only estimated values for system parameters, project costs, and project returns. The manager must consider project interdependencies in allocating the firm's available capital among the projects. After completing the allocation process in one period, the chosen projects are funded until the end of the next period. These projects are then considered along with new projects and the process is repeated. The capital budgeting decision in one period is therefore only one of a long sequence of such decisions, all of which are made in a dynamic, uncertain environment. The algorithm presented in this study models this dynamic environment of uncertainty. The algorithm utilizes a future-worth-of-net-return criterion in making the decision. Available projects may be estimated as discrete point estimates or as combinations of continuous functions. All projects under consideration need not have the same life; unequal-lived projects may be considered. After the optimal combination of projects is identified, four sensitivity analyses may be run to analyze the effect of any uncertainty in that period. The dynamic environment may then be analyzed by simulating the environment which would be faced when the decision is made again at the end of the next period. Any of the system parameters and estimates of the continuing projects may be altered in that period to reflect the changes in the last period's estimates. An example is provided to illustrate the workings of the algorithm.
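As a small illustration of the future-worth-of-net-return criterion the abstract mentions (the compounding convention and figures below are invented for the example, not taken from the thesis):

```python
def future_worth(net_returns, rate):
    """Compound each period's net return forward to the end of the
    planning horizon and sum: FW = sum_t R_t * (1 + i)^(T - t)."""
    T = len(net_returns)
    return sum(r * (1.0 + rate) ** (T - t)
               for t, r in enumerate(net_returns, start=1))
```

Projects (or combinations of projects) would then be ranked by this future worth when allocating the period's capital.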
M.S.
APA, Harvard, Vancouver, ISO, and other styles
19

Yang, Chin-Min, and 楊欽閔. "Hybrid Fast Mode Decision Algorithm." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/vx5488.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Hahn, Sung Chu. "Analysis of a distribution decision algorithm." Thesis, 1985. http://hdl.handle.net/10945/21143.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lai, Ting-Chao, and 賴廷昭. "Hybrid Fast Inter-mode Decision Algorithm Research." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/44451629986485861660.

Full text
Abstract:
Master's
National Central University
Graduate Institute of Communication Engineering (in-service master's program)
95
H.264 is the latest video coding standard. In order to achieve the highest coding efficiency, H.264 adopts complicated coding schemes, employing variable block-size motion estimation and compensation, multiple reference frame motion estimation, a de-blocking filter, an integer transform, etc. Unfortunately, these features incur a considerable increase in encoder complexity, mainly in mode decision and motion estimation. In this thesis, we use the property of variance to determine the motion characteristic of the subject, such as stationary and non-stationary, and then further use merging and splitting methods to reduce the number of block types to examine. The experimental results show that our proposed algorithms can save coding time with negligible peak-signal-to-noise-ratio loss.
APA, Harvard, Vancouver, ISO, and other styles
22

Chiu, Chun-Chieh, and 邱俊傑. "CUDT: A CUDA Based Decision Tree Algorithm." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/88189185018035112843.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Computer Science and Engineering
99
Classification is an important issue in both machine learning and data mining, and the decision tree is one of the best-known classification models. In real cases the dimension of the data is high and the data size is huge, so building a decision tree on a large database costs much computation time; it is a computationally expensive problem. A GPU is a processor specially designed for graphics, and the highly parallel nature of graphics processing shaped today's GPU architecture. GPGPU means using the GPU to solve non-graphics problems that need large amounts of computation power. Because of the GPU's high performance and capacity/price ratio, much research uses GPUs to carry out heavy computation. Compute Unified Device Architecture (CUDA) is a GPGPU solution provided by NVIDIA. This thesis provides a new parallel decision tree algorithm based on CUDA. The algorithm parallelizes the building phase of the decision tree: in our system, the CPU is responsible for flow control and the GPU is responsible for computation. We compare our system to the Weka-j48 algorithm; the results show our system is 5~6 times faster than Weka-j48. Compared with SPRINT on a large data set, our CUDT achieves about an 18-times speedup.
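The building phase that CUDT parallelizes centers on evaluating candidate splits. A minimal sequential sketch of that kernel (a Gini-based threshold scan; function names are our own, and the per-threshold work is what a CUDA version would map onto GPU threads) might look like:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

def best_split(values, labels):
    """Scan candidate thresholds on one attribute and return the
    (threshold, weighted impurity) pair with the lowest impurity.
    In a GPU version this scan is distributed across parallel threads."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (None, float("inf"))
    for i in range(1, n):
        left = [c for _, c in pairs[:i]]
        right = [c for _, c in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            thr = (pairs[i - 1][0] + pairs[i][0]) / 2.0
            best = (thr, score)
    return best
```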
APA, Harvard, Vancouver, ISO, and other styles
23

Liou, Ji-Yang, and 柳奇洋. "Fast Coding Unit Decision Algorithm for HEVC." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/mkes64.

Full text
Abstract:
Master's
National Dong Hwa University
Department of Electrical Engineering
100
The latest video coding standard, called High Efficiency Video Coding (HEVC), is currently in its standardization process and is expected to be the next-generation video coding standard due to its significant bit-rate reduction compared to the state-of-the-art video coding standard H.264/AVC. However, the high coding performance comes mainly at the expense of massive computational complexity, especially in mode decision, which occupies more than 80% of the total coding time. To address this problem, this thesis proposes a fast coding unit determination algorithm for HEVC that uses temporal depth correlation. Early termination on the current CU and PU can split the current CU into 4 sub-CUs directly, skipping the PU operations of the current CU. Simulation results show that our proposed algorithm can achieve a 37% to 65% total coding-time reduction with negligible rate-distortion performance degradation.
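One plausible reading of the temporal-depth idea is to restrict the CU depth search to a window around the depth of the co-located CU in the previous frame. The ±1 window below is an illustrative assumption, not the thesis's actual rule:

```python
def candidate_depths(colocated_depth, max_depth=3):
    """Restrict the CU depth search range using the depth of the
    co-located CU in the previous frame: only depths within one level
    of the temporal neighbour are evaluated, skipping the rest."""
    lo = max(0, colocated_depth - 1)
    hi = min(max_depth, colocated_depth + 1)
    return list(range(lo, hi + 1))
```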
APA, Harvard, Vancouver, ISO, and other styles
24

Chang, Chih-Hao, and 張智皓. "Fast Mode Decision Algorithm for Scalable Video Coding." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/30556415238674225334.

Full text
Abstract:
Master's
Yunlin University of Science and Technology
Graduate School of Electronic and Information Engineering
98
SVC (Scalable Video Coding) is an extension of the H.264 standard that provides better encoding efficiency and quality along with temporal, spatial, and SNR scalability; frame rate, resolution, and quality in video coding become flexible. To cope with bandwidth limitations or client demands, SVC offers high scalability and efficiency, which makes it suitable for wireless communication, internet transmission, and other variable-bandwidth applications. Network conditions and bandwidth vary across PCs, notebooks, PDAs, smart phones, etc., and so do the corresponding demands on resolution, quality, and frame rate. To reach a UMA (Universal Multimedia Access) bit-stream, SVC was launched. Three scalabilities are built into SVC on top of the H.264/AVC coding method. Spatial scalability exploits the similarity between different resolutions: the enhancement layer refers to the base-layer data to improve coding efficiency. Temporal scalability, through the hierarchical-B structure, allows different frame rates to be transmitted separately. SNR scalability controls the coded image quality to cope with the demands of different clients. In the whole SVC coding process, the most complex and time-consuming part is mode decision. Apart from the traditional H.264 inter and intra predictions, SVC includes three types of inter-layer prediction: inter-layer intra, inter-layer motion, and inter-layer residual. Hence, the computational complexity is increased, and fast mode decision algorithms have been proposed frequently. Mode decision determines the best mode of the current MB: each mode has a different chance of being selected and is evaluated in the traditional H.264 and SVC methods, and the mode with the smallest RD cost is selected as the best mode. As fast mode decision has developed, whether by compiling statistics or exploiting coded data, computational complexity has been reduced. 
Moreover, another development direction emphasizes pixel-search methods in motion estimation, which can also lower the computational complexity. We propose a fast mode decision algorithm that includes four new methods and effectively reduces the full encoding time by 83.48% in D1 (720x480) resolution while PSNR drops only 0.255 dB. In HD720 (1280x720) resolution the algorithm reduces the full encoding time by 79.17% with a PSNR drop of only 0.123 dB, and in Full-HD (1920x1080) resolution it saves 72.22% of the full encoding time with a PSNR drop of only 0.198 dB.
APA, Harvard, Vancouver, ISO, and other styles
25

Wu, Tzu-han, and 吳姿函. "A Fast Inter-mode Decision Algorithm for HEVC." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/16543766282215502250.

Full text
Abstract:
Master's
National Central University
Department of Communication Engineering
101
Multimedia devices are constantly replacing the old with the new, so demands on image quality keep increasing. The commonly used video compression standard H.264/AVC has become inadequate, and the new-generation video compression standard HEVC substantially improves on H.264: it improves video quality and compression ratio, but the coding complexity also increases greatly. In this thesis, we propose a fast inter-mode decision algorithm for HEVC. We utilize the number and distribution of zero-blocks in the CU/PU to prune candidate modes early based on the mode probability distribution, and we reduce computational complexity by exploiting the probability distribution of TU sizes. Finally, we combine the two algorithms to accelerate the encoding procedure. According to the experimental results, our proposed algorithm achieves 62.22% time saving on average while maintaining coding performance.
APA, Harvard, Vancouver, ISO, and other styles
26

Hsieh, Cheng-Hao, and 謝正豪. "An Approach with Bat Algorithm and Decision Tree." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/38211713696274285313.

Full text
Abstract:
Master's
Huafan University
Department of Information Management (master's program)
102
The amount of information is rapidly increasing and data mining is widely used. A decision tree can process data and provide rules in a tree structure, so that decision-makers can rapidly obtain the information hidden behind the data; decision trees can therefore be used to solve problems in many domains. Parameters must be set before using a decision tree, and how they are set affects the results, so adjusting the parameters of a decision tree is an important issue. Different problems have different optimal decision-tree parameters, and adjusting them manually takes a lot of time. Therefore, this thesis uses the bat algorithm to adjust the parameters of the decision tree. It can improve the original accuracy of the decision tree and find the best combination of parameters. The combined algorithm finds the best parameter combination for different datasets and generates the results. Comparing the results with a decision tree and a support vector machine, the proposed algorithm improves the accuracy of the original decision tree and obtains better results than the support vector machine. The bat algorithm can thus find the most appropriate decision-tree parameters for different problems.
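A minimal sketch of how a bat algorithm can search a decision tree's parameter space. The update rules are simplified (fixed loudness and pulse rate), and a toy error surface stands in for cross-validated accuracy; everything here is illustrative, not the thesis's implementation:

```python
import random

def bat_search(objective, bounds, n_bats=15, n_iter=60, seed=1):
    """Simplified bat algorithm: each bat carries a position (a candidate
    parameter setting), a velocity, and a random pulse frequency; bats
    fly toward the best known setting and occasionally do a small local
    random walk around it."""
    rng = random.Random(seed)
    dim = len(bounds)
    clamp = lambda v, d: min(max(v, bounds[d][0]), bounds[d][1])
    pos = [[rng.uniform(*bounds[d]) for d in range(dim)]
           for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=objective)[:]
    for _ in range(n_iter):
        for i in range(n_bats):
            f = rng.random()                       # pulse frequency in [0, 1]
            for d in range(dim):
                vel[i][d] += (best[d] - pos[i][d]) * f
                pos[i][d] = clamp(pos[i][d] + vel[i][d], d)
            if rng.random() < 0.5:                 # local walk around the best
                cand = [clamp(best[d] + 0.1 * rng.uniform(-1, 1), d)
                        for d in range(dim)]
                if objective(cand) < objective(best):
                    best = cand
            if objective(pos[i]) < objective(best):
                best = pos[i][:]
    return best
```

In use, `objective` would be the cross-validated error of a tree built with the candidate parameters (e.g. maximum depth and minimum samples per split).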
APA, Harvard, Vancouver, ISO, and other styles
27

Wei, C. J., and 魏君任. "Improved Algorithm for Binary Decision Diagram Minimization Problem." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/46565537111016330931.

Full text
Abstract:
Master's
National Taiwan Normal University
Graduate Institute of Information and Computer Education
90
Binary Decision Diagram (BDD) is a data structure for the representation and manipulation of Boolean functions, applied in many areas. However, finding the optimal variable ordering of a BDD appears to be intractable. Though numerous algorithms for BDD minimization were proposed in the last decade, no feasible one is able to find a good variable ordering for Boolean functions with up to hundreds of variables. In this thesis we present a randomized algorithm to minimize BDDs. With the help of our new approach, the Sifting algorithm works as well as an exact algorithm. The new approach can be parallelized easily to meet the needs of complex function minimization. In the thesis, the LGSynth91 circuits with fewer than 500 variables are all minimized with very good results, with the single exception of C6288.blif. The improved variable orderings of these benchmark circuits are listed in the Appendix. Experimental results show that the BDD sizes are smaller than previously known results, and the computing time is small and very stable. It turns out that this randomized algorithm is a robust method for BDD minimization.
APA, Harvard, Vancouver, ISO, and other styles
28

Chang, Cheng-Yu, and 張哲瑜. "Fast Mode Decision Algorithm in H.264/AVC." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/17252852566565039281.

Full text
Abstract:
Master's
National Taiwan University
Graduate Institute of Communication Engineering
95
Multimedia applications play an important role in our daily life. We see them in the home, such as DVD players and digital TVs, in portable devices such as mobile phones and digital cameras, and even in the subway and on the bus; multimedia applications are everywhere. The technological core of these different applications is the codec. The new generation of codec standards offers multi-mode decision, which increases the complexity of the whole system, so real-time applications are not easily realized. This thesis focuses on the probabilities of the different modes and the mode correlations of neighboring blocks. With different early-termination conditions and a fast motion estimation algorithm, the new algorithm shortens the total encoding time. The fast mode decision algorithm achieves up to a 2.5-times speedup and the joint algorithm achieves up to a 13-times speedup in total encoding time, making real-time applications possible.
APA, Harvard, Vancouver, ISO, and other styles
29

Blazquez, Carola A. "Decision-rule algorithm for map-matching in transportation." 2005. http://catalog.hathitrust.org/api/volumes/oclc/71242453.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Shih-Hsin, and 陳世昕. "Soft-decision Reed Decoding Algorithm Using Factor Graph." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/16769438787069669132.

Full text
Abstract:
Master's
National Chung Hsing University
Department of Electrical Engineering
103
In this study, we propose a modified Reed decoding algorithm for Reed-Muller codes in channel coding.[1][2][3] The objective of this study is to reduce the bit error rate of Reed-Muller codes.[4][5][6] We use factor graphs[9][10] to explain the Reed decoding algorithm, and propose an algorithm to reduce the bit error rate, which we call the modified Reed-Muller decoding algorithm. Because the problem with the majority-logic Reed decoding algorithm is that decoding errors at the highest order affect the lower orders, we make improvements to the high-order decoding stage. Unlike traditional Reed decoding algorithms, which use hard-decision decoding, we use soft-decision decoding[11][14], which in general performs better than hard decision. In this thesis, we use factor graphs to study the soft-decision decoding algorithm and improve the bit error rate. Finally, we simulate and analyze the performance of this modified Reed-Muller decoding algorithm.
APA, Harvard, Vancouver, ISO, and other styles
31

Ho-MingKuo and 郭和明. "Fast Direction Decision Algorithm for HEVC Intra Prediction." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/18921679922074832975.

Full text
Abstract:
Master's
National Cheng Kung University
Department of Electrical Engineering
102
High Efficiency Video Coding (HEVC) is a new video coding standard whose goal is to achieve higher compression efficiency than H.264/AVC; its supported resolutions range from HD (720p or 1080p) to Ultra HD (7680x4320). Furthermore, the coding unit (CU) is introduced in HEVC, with sizes ranging from 8x8 to 64x64, replacing the fixed-size macroblock of H.264/AVC. The new technique decreases the bitrate but increases the computational complexity. The modes of HEVC intra prediction were increased to 35: 33 angular modes, DC mode, and planar mode. In HEVC, a frame is divided into Largest Coding Units (LCUs) of uniform size, and each LCU is then divided into several coding units of various sizes; the coding unit partition corresponds to a quad-tree structure. Since HEVC needs to calculate rate-distortion optimization (RDO) to determine the prediction unit mode selection, the RDO calculation usually consumes a lot of coding time. In order to reduce the RDO calculation, the proposed algorithm uses a gradient operator to pre-select the probable directions in each prediction unit, so that unnecessary RDO calculations are avoided. Experimental results show that the proposed algorithm speeds up intra-prediction coding time by 31% on average with negligible PSNR loss and a slight bitrate increase.
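The gradient-based pre-selection can be sketched as follows. The 8-bin orientation histogram is a simplification of HEVC's 33 angular modes, and the angle-to-bin mapping is an illustrative assumption:

```python
import math

def preselect_modes(block, n_candidates=3):
    """Build a magnitude-weighted histogram of gradient orientations over
    a luma block; the dominant orientations pick a small candidate subset
    of directional modes, so full RDO only runs on those."""
    h, w = len(block), len(block[0])
    hist = [0.0] * 8
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = block[y][x + 1] - block[y][x - 1]   # horizontal gradient
            gy = block[y + 1][x] - block[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            ang = math.atan2(gy, gx) % math.pi       # orientation in [0, pi)
            hist[int(ang / math.pi * 8) % 8] += mag
    order = sorted(range(8), key=lambda b: -hist[b])
    return order[:n_candidates]
```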
APA, Harvard, Vancouver, ISO, and other styles
32

Wu, Chang-Wei, and 吳昶緯. "Fast Soft-Decision Decoding Algorithm Based on Error Weights." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/82842691690366908029.

Full text
Abstract:
Master's
Chang Gung University
Graduate Institute of Electrical Engineering
96
A new fast soft-decision decoding algorithm for binary linear block codes is proposed in this thesis. Unlike other algorithms that use a threshold based on the signal-to-noise ratio to stop the decoding process, the proposed algorithm flips the least reliable bits according to the error weights to reduce the computational complexity. This lowers the complexity compared to the Chase algorithm, which needs to test a large number of error patterns. The purpose of the thesis is to present a new approach that removes a significant amount of decoding complexity while sacrificing little coding gain. In terms of test patterns, the decoding complexity is reduced on average to 20% of that of the Chase algorithm with dmin-1 tested positions. At a block-error rate of 10^-5, the coding gain is reduced by only about 0.1 dB to 0.5 dB.
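A toy version of the least-reliable-bit flipping idea, shown on the (7,4) Hamming code. The code choice and the single-bit search order are illustrative; the thesis's error-weight rule is more elaborate:

```python
# Parity-check matrix of the (7,4) Hamming code (columns are 1..7 in binary).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def decode(soft):
    """Soft-decision sketch: hard-decide each BPSK sample, and while the
    syndrome is non-zero, try flipping bits one at a time in order of
    increasing reliability |soft[i]| (cheapest-first, instead of testing
    all Chase error patterns)."""
    bits = [1 if s < 0 else 0 for s in soft]       # BPSK: +1 -> 0, -1 -> 1
    order = sorted(range(len(soft)), key=lambda i: abs(soft[i]))
    tried = 0
    while any(syndrome(bits)) and tried < len(order):
        i = order[tried]
        tried += 1
        bits[i] ^= 1
        if not any(syndrome(bits)):
            break
        bits[i] ^= 1                               # undo and try the next bit
    return bits
```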
APA, Harvard, Vancouver, ISO, and other styles
33

Su, Je-hon, and 蘇哲弘. "Adapted fast block mode decision algorithm for H.264." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/80778584693161584090.

Full text
Abstract:
Master's
National Taiwan University of Science and Technology
Department of Electronic Engineering
95
H.264/AVC is the latest video coding standard developed jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Compared with previous video coding standards (MPEG-2, MPEG-4, H.263, etc.), the H.264 encoder can compress videos more efficiently. H.264 adopts many coding schemes, such as motion compensation with variable block sizes, quarter-pixel motion vector accuracy, multiple reference frames, intra coding, the deblocking filter, the integer transform, and CAVLC and CABAC entropy coding. The encoding time increases because of these complicated coding schemes, especially motion estimation and macroblock mode decision. In this thesis, we discuss research on fast mode decision for H.264 and propose an adaptive algorithm for fast mode decision in H.264. The algorithm considers block stationarity, neighboring blocks, and RDCost comparison. In the experiments, we adopted the fast motion estimation algorithm (JVT-F017) [32] used in JM98. The results show that our proposed algorithm can save 63% of the coding time compared with full mode decision in JM98, while the quality and bit-rate approach the results of full mode decision.
APA, Harvard, Vancouver, ISO, and other styles
34

Teng, Su-Wei, and 鄧書緯. "Fast Mode Decision Algorithm for HEVC Residual Quadtree Coding." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/88631656238371565232.

Full text
Abstract:
Master's
National Chiao Tung University
Institute of Electronics
100
With the demand for high-resolution video applications and the prevalence of video streaming over wireless networks, ITU VCEG and MPEG recently formed the Joint Collaborative Team on Video Coding (JCT-VC) to develop the next-generation video coding standard, High Efficiency Video Coding (HEVC). The target is to achieve a 50% bit-rate reduction at about the same video quality, or to keep the same compression ratio with less computing power, compared with the state-of-the-art ITU/MPEG H.264/AVC standard. Taking advantage of the good performance of conventional hybrid coding, HEVC keeps a basic structure similar to the H.264/AVC coder but further enhances each coding tool to increase compression efficiency. Among these tools, a Residual Quadtree (RQT) coding scheme is added to the conventional transform coding procedure; the compression efficiency is increased, but at the cost of additional computational complexity compared to the traditional fixed-size transform. In this thesis, we design a fast algorithm for deciding the Residual Quadtree mode. After evaluating the rate-distortion efficiency of transform unit (TU) sizes, we replace the original depth-first mode decision process with a merge-and-split decision process. Furthermore, because a substantial number of zero-blocks are produced after quantization and mode decision, we develop a termination condition that eliminates unnecessary computation by using the inheritance property of zero-blocks. In addition, for nonzero blocks, two early-termination schemes are developed for the TU merge and TU split procedures, respectively. The early zero-block detection algorithm also saves computing power. Compared to HM 2.0, our method saves 43% to 65% of the RQT encoding time on a number of test videos with negligible coding loss.
APA, Harvard, Vancouver, ISO, and other styles
35

Wu, Tsai-Ju, and 吳彩如. "The Optimization of Capacity Allocation Decision using Genetic Algorithm." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/88420298682203070322.

Full text
Abstract:
Master's
National Cheng Kung University
Engineering Management Program, College of Engineering
96
In the current electronic products market, as consumers' expectations for products keep rising, product life cycles are getting shorter. Taking mobile phones as an example, consumers demand not only versatility but also light weight, and new processes must be developed to meet such requirements; the bumping process of assembly was developed out of this demand. Electronic products undergo continuous innovation, and every brand enters the price war. From the manufacturing point of view, the process flow has to be shortened in the production lines, and all machines have to be flexible enough to meet the small-quantity, diverse-product demand of the market. A manager has to face a dynamic market, small-quantity/diverse-product orders, and huge market demand. The purpose of this research is to find, under various loadings, how to assign orders to the production lines and coordinate raw materials with machines so as to maximize profit. This research applies linear programming to formulate the goal and constraints, and then searches for the best solution with a genetic algorithm. Many previous studies used linear programming to find the maximum or minimum, but that approach is inflexible. The genetic algorithm accommodates different possibilities to meet enterprise requirements, so it can be used flexibly to calculate the maximum profit and enhance machine efficiency. The results show that the genetic algorithm gives higher performance for capacity allocation.
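A genetic algorithm for this kind of capacity-constrained order selection can be sketched as follows. The chromosome encoding, operators, and data are illustrative assumptions, not the thesis's model:

```python
import random

def ga_allocate(profit, hours, capacity, pop=30, gens=80, seed=7):
    """GA sketch for order selection: a chromosome is a 0/1 vector of
    accepted orders; fitness is total profit, with infeasible plans
    (machine-hour capacity exceeded) scored zero. Tournament selection,
    one-point crossover, and bit-flip mutation."""
    rng = random.Random(seed)
    n = len(profit)

    def fitness(ch):
        h = sum(a * b for a, b in zip(ch, hours))
        return sum(a * b for a, b in zip(ch, profit)) if h <= capacity else 0

    popu = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            # Tournament selection of two parents.
            a, b = (max(rng.sample(popu, 3), key=fitness) for _ in range(2))
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                 # bit-flip mutation
                child[rng.randrange(n)] ^= 1
            nxt.append(child)
        popu = nxt
    return max(popu, key=fitness)
```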
APA, Harvard, Vancouver, ISO, and other styles
36

Lanning, James Michael. "A kernelized genetic algorithm decision tree with information criteria." 2008. http://etd.utk.edu/2008/August2008Dissertations/LanningJamesMichael.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Yao, Shan-Ru, and 姚劭儒. "Visual Perception for Fast Coding Decision Algorithm for HEVC." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/7q4mdt.

Full text
Abstract:
Master's
National Chi Nan University
Department of Electrical Engineering
106
In recent years the video coding technology High Efficiency Video Coding (HEVC), also known as H.265, has flourished. As the successor to H.264/MPEG-4 AVC, it is considered not only to improve image quality but also to achieve double the compression rate of H.264/MPEG-4 AVC (equivalent to reducing the bit rate to 50% at the same picture quality); these advantages, however, require more computation and a more powerful computing environment. To address the computation problem, most research methods look for correlations within HEVC coding, but seldom make fast decisions based on subjective and objective human perception. In this thesis, we analyze each process in the latest-generation video compression standard HEVC and optimize the most time-consuming calculations. Coding unit (CU) partitioning and prediction unit (PU) mode decision are optimized using visual perception: methods based on motion estimation, luminance-histogram support, and edge detection are used to modify CTU partitioning and PU mode decision, reducing unnecessary computation time and speeding up HEVC coding in place of the original complex calculations. The experimental results show that, subjectively, the proposed method avoids over-segmentation and partitions the blocks the human eye is relatively interested in. Objectively, the average bit rate increases by 2.866% while the PSNR loss is only 0.1234 dB on average; BDBR rises slightly but stays below 3% on average, BDPSNR decreases by only 0.1127 dB on average, and the overall coding time is reduced by 36.25% on average.
APA, Harvard, Vancouver, ISO, and other styles
38

Chen, Guan-Hon, and 陳冠宏. "Reinsurance Retention Decision-using Genetic Algorithm as a Tool." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/52019417711041723356.

Full text
Abstract:
Master's
National Kaohsiung First University of Science and Technology
Graduate Institute of Risk Management and Insurance
93
Reinsurance is a crucial part of the insurance system, especially in view of rapidly changing technology, where greater productivity brings greater hazards. Insurance companies should therefore rely on reinsurance to spread their risk, which makes setting the retention level on their own particularly important. In this research, the evolutionary mechanism of a genetic algorithm is used to search for the best tactic that balances the interests of the ceding company and the company it cedes to, and then provides the ceding company with information for its retention decision. Keywords: reinsurance retention, genetic algorithms
APA, Harvard, Vancouver, ISO, and other styles
39

Tsai, Yu-Ju, and 蔡育儒. "Paralleled CHAID Decision Tree Algorithm with Big-Data Capability." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/08393327566226485549.

Full text
Abstract:
Master's
Tamkang University
Department of Statistics (master's program)
102
As technology advances, the era of Big Data has finally arrived. As the amount of data increases, improving computing speed becomes an important technology to develop: if data training and analysis time are reduced, we can make predictions or decisions much earlier than expected. Parallel computation is one method that can reduce analysis time. In this paper, we rewrite the CHAID decision tree algorithm for parallel computation and Big-Data capability. Our simulation results show that, when the CPU has more than one core, the computation time of our improved CHAID tree is significantly reduced; with a huge amount of data, the difference in computation times is even more significant.
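The split score at the heart of CHAID is a chi-square test of independence between a categorical predictor and the target. A minimal sketch (the per-predictor scoring loop is the natural unit to parallelize across CPU cores, which is the kind of work the thesis distributes):

```python
def chi_square(xs, ys):
    """Chi-square statistic of independence between one categorical
    predictor and the target - the core score CHAID uses to pick splits."""
    n = len(xs)
    rows, cols, joint = {}, {}, {}
    for x, y in zip(xs, ys):
        rows[x] = rows.get(x, 0) + 1
        cols[y] = cols.get(y, 0) + 1
        joint[(x, y)] = joint.get((x, y), 0) + 1
    stat = 0.0
    for x, rx in rows.items():
        for y, cy in cols.items():
            expected = rx * cy / n
            observed = joint.get((x, y), 0)
            stat += (observed - expected) ** 2 / expected
    return stat

def best_predictor(predictors, ys):
    """Score every predictor and return the most associated one; in a
    parallel version, each predictor's score would go to its own worker."""
    return max(predictors, key=lambda name: chi_square(predictors[name], ys))
```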
APA, Harvard, Vancouver, ISO, and other styles
40

HUANG, JIA-RONG, and 黃家榮. "RID (robust interactive decision-analysis) algorithm and application exploration." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/75796811877350058930.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Šurín, Lukáš. "Využitie genetických algoritmov pri tvorbe rozhodovacích stromov." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-350916.

Full text
Abstract:
Decision trees are a recognized and widely used technique for processing and analyzing data. These trees are built with typical and generally known inductive techniques (such as ID3, C4.5, C5.0, CART, CHAID, MARS). The predictive power of the created trees is not always perfect, and they often leave room for improvement; inducing trees under difficult criteria is hard and sometimes impossible. In this thesis we deal with decision trees, namely their creation, and exploit the mentioned room for improvement with a metaheuristic: genetic algorithms, which are used in all types of optimization. The work also includes an implementation of the newly proposed algorithm as a plug-in for the Weka environment. A comparison of the proposed method for inducing decision trees with the well-known C4.5 algorithm is an integral part of this thesis.
APA, Harvard, Vancouver, ISO, and other styles
42

Jian-Rong, Tzeng. "Semi-Soft Decision VBLAST Detection Algorithm for Spatial Multiplexing System." 2006. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1507200614322000.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Yi-Kun, and 林義坤. "Apply Decision Tree Algorithm to the Analysis of Web Log." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/17894208644016211291.

Full text
Abstract:
Master's
Huafan University
Department of Information Management (master's program)
99
With the rapid development of the Internet and the growth of network services, the network has imperceptibly become an important tool in our lives, and network security incidents have become endless. In many of these incidents, websites are attacked by hackers because most web application programmers have no awareness of network security, so vulnerabilities exist in websites, such as Cross-Site Request Forgery (CSRF), SQL Injection, Script Insertion, and Cross-Site Scripting (XSS). Attacked websites can suffer serious damage through leakage of personal information or confidential business information. From this viewpoint, it is important to find an effective security defense strategy against attacks. This research simulates a hacker-attack environment on a virtual machine and implements the attack methods, establishes a dataset from the web logs collected during the attacks, and then uses a decision tree algorithm to analyze and determine whether a web page is subject to malicious attack. Seven rules are obtained from the experimental results; they can be used to analyze web logs and provide decision support for network administrators. Keywords: SQL Injection, Cross-Site Scripting, Web Log, Decision Tree Algorithm
APA, Harvard, Vancouver, ISO, and other styles
44

Shiau, Fang-Jr, and 蕭方智. "Developing a Hierarchical Particle Swarm based Fuzzy Decision Tree Algorithm." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/38520035342800327252.

Full text
Abstract:
Master's
Yuan Ze University
Department of Industrial Engineering and Management
93
The decision tree is one of the most common techniques for classification problems in data mining. Recently, fuzzy set theory has been applied to decision tree construction to improve its performance. However, how to design flexible fuzzy membership functions for each attribute, and how to reduce the total number of rules to improve classification interpretability, are two major concerns. To solve these problems, this research proposes a hierarchical particle swarm optimization to develop a fuzzy decision tree algorithm (HPS-FDT). In the proposed HPS-FDT algorithm, all particles are encoded using a hierarchical approach to improve the efficiency of the solution search. The developed HPS-FDT builds a decision tree so as to (1) maximize the classification accuracy, (2) minimize the number of rules, and (3) minimize the number of attributes and membership functions. Through a series of benchmark data validations, the proposed HPS-FDT algorithm shows high performance on several classification problems. In addition, the algorithm is tested on a mutual fund dataset provided by an internet bank to show the possibility of real-world implementation. With the results, managers can devise better marketing strategies for specific target customers.
APA, Harvard, Vancouver, ISO, and other styles
45

Tzeng, Jian-Rong, and 曾建榮. "Semi-Soft Decision VBLAST Detection Algorithm for Spatial Multiplexing System." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/78033023682793944377.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Communication Engineering
94
For a general MIMO system, vertically co-polarized antenna arrays are usually used at both the transmitter and the receiver. If a cross-polarized antenna scheme is used symmetrically at both ends, the capacity under line-of-sight (LOS) conditions is higher than with the conventional polarization scheme, owing to the high isolation between orthogonal polarizations. Under non-line-of-sight (NLOS) conditions, however, the result is the opposite because of the loss of power gain. VBLAST is an efficient detection algorithm for MIMO systems, but its performance is still far from that of the optimal ML algorithm. Several modified algorithms have therefore been proposed, and we implement them to compare their performance. Finally, we propose a new modified algorithm called semi-soft decision VBLAST. Simulation results show that the proposed algorithm not only outperforms conventional VBLAST considerably but can also be easily combined with any VBLAST variant to achieve further performance gains. VBLAST-OFDM is introduced to convert a frequency-selective fading channel into a set of Rayleigh fading subchannels suitable for VBLAST. The channel model adopted by the next-generation WLAN standard, 802.11n, is used in our simulations, and a channel estimation algorithm is also considered. With more antennas there is more interference, and the nulling operation does not work well in a correlated channel; consequently, in a correlated channel, using more antennas cannot guarantee higher performance, as it does in an i.i.d. channel.
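For reference, the conventional hard-decision VBLAST baseline that the thesis improves on can be sketched as zero-forcing detection with ordered successive interference cancellation. This is a noise-free toy with a real-valued channel and BPSK symbols, not the semi-soft variant proposed in the thesis.

```python
import numpy as np

def vblast_zf_sic(H, y, constellation=(-1.0, 1.0)):
    """ZF-VBLAST: detect the stream with the smallest noise
    amplification first, slice to the nearest constellation point,
    cancel its contribution, and repeat (hard decisions only)."""
    H = np.asarray(H, dtype=float)
    y = np.asarray(y, dtype=float).copy()
    n = H.shape[1]
    remaining = list(range(n))
    s_hat = np.zeros(n)
    for _ in range(n):
        Hp = np.linalg.pinv(H[:, remaining])   # nulling (pseudoinverse)
        norms = np.sum(Hp ** 2, axis=1)        # post-detection noise gain
        k = int(np.argmin(norms))              # best stream first
        idx = remaining[k]
        z = Hp[k] @ y                          # nulled estimate
        s_hat[idx] = min(constellation, key=lambda c: abs(z - c))  # slice
        y = y - H[:, idx] * s_hat[idx]         # cancel detected stream
        remaining.pop(k)
    return s_hat

# Toy 2x2 example: with y = H s and no noise, s is recovered exactly.
H = np.array([[2.0, 0.0], [0.0, 1.0]])
s_hat = vblast_zf_sic(H, H @ np.array([1.0, -1.0]))
```

A semi-soft variant would replace the hard slicing step with a reliability-weighted decision for the least reliable streams; the exact rule used in the thesis is not reproduced here.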
46

Guo, Jin-Jhong, and 郭金忠. "Medical and Biological Datasets Using Decision Tree and Genetic Algorithm." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/16671173991554268590.

Full text
Abstract:
Master's thesis
National Yunlin University of Science and Technology
Master's program, Graduate School of Industrial Engineering and Management
94
This study applies a decision tree and a genetic algorithm (GA) to evolve a useful subset of discriminatory features for medical and biological data. We show that a GA combined with a decision tree performs well in extracting information about the relationships between biological variables. The results suggest that a subset of six features, Aspartate aminotransferase (GOT), HBV Surface Antigen (HBs-Ag), Blood urea nitrogen (BUN), Globulin (Glo), White blood cell count (WBC), and Electrolyte Calcium (Ca), is appropriate for our large dataset. This feature set represents the relationships between biological variables and provides useful information for clinical diagnosis.
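GA-based feature-subset selection of the kind used above can be sketched with bit-mask individuals. The fitness function here is a toy stand-in; in the study it would be the accuracy of a decision tree trained on the selected features.

```python
import random

def ga_select(num_features, fitness, pop_size=20, generations=40, seed=1):
    """Minimal GA for feature-subset selection: individuals are bit
    masks, evolved by truncation selection, one-point crossover, and
    single-bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(num_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, num_features)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(num_features)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: reward agreement with a known informative mask
# (a stand-in for wrapper accuracy of a decision tree).
TARGET = [1, 0, 1, 1, 0, 1]
best = ga_select(6, lambda m: sum(a == b for a, b in zip(m, TARGET)))
```

Because the best half of the population survives unchanged each generation, a good subset, once found, is never lost (implicit elitism).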
47

Lin, Chih-ming, and 林志明. "Fast Mode Decision Algorithm for Compressing Depth-Maps in HEVC." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/tx3cum.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
100
In 3-D video sequences, color maps plus depth maps are used to generate virtual views. Each color map and its generated virtual view constitute a stereo pair, which the stereoscopic video system uses to create a 3D scene. This thesis presents a fast mode decision algorithm for compressing depth maps in HEVC (High Efficiency Video Coding). The proposed algorithm determines the minimum coding-unit size in the HEVC quadtree structure as early as possible while preserving acceptable quality, using Zhao et al.'s depth no-synthesis-error model. Experimental results demonstrate that the proposed mode decision algorithm significantly reduces computational time compared with HEVC while incurring negligible quality and bitrate degradation.
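The early-termination idea above can be sketched as a quadtree recursion: if one representative value lies within a tolerance of every depth sample in a block, coding the block as a single unit causes no synthesis error, so no further split is needed. The single scalar `tolerance` is a simplification of the per-pixel allowable depth-distortion interval in the no-synthesis-error model.

```python
def can_stop_splitting(block, tolerance):
    """True if a single representative depth value fits every sample
    to within +/- tolerance (no-synthesis-error-style test)."""
    lo = max(d - tolerance for row in block for d in row)
    hi = min(d + tolerance for row in block for d in row)
    return lo <= hi  # a common representative value exists

def min_cu_size(block, tolerance, size, min_size=8):
    """Recursively find the minimum CU size for a square depth block,
    terminating early wherever the tolerance test passes."""
    if size <= min_size or can_stop_splitting(block, tolerance):
        return size
    half = size // 2
    quads = [
        [row[:half] for row in block[:half]],  # top-left
        [row[half:] for row in block[:half]],  # top-right
        [row[:half] for row in block[half:]],  # bottom-left
        [row[half:] for row in block[half:]],  # bottom-right
    ]
    return min(min_cu_size(q, tolerance, half, min_size) for q in quads)
```

Flat depth regions thus stop at the largest CU size, while blocks straddling a depth edge recurse down to the minimum size, which is where the time saving comes from.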
48

Chang, Wei-Hsiang, and 張維翔. "Fast Coding Unit Decision Algorithm for Multi-View Video Coding." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/3ee6pb.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
102
To reduce the huge computational complexity and coding time of multi-view video coding based on High Efficiency Video Coding, this thesis presents a fast coding-unit decision method that utilizes information from the independent view. We accelerate the coding of the dependent view by determining the depth of the largest coding unit early; this depth is predicted from the depths in the independent view, located via the global disparity vector, together with spatial correlation. We also propose a prediction unit (PU) decision strategy to preserve image quality. Experimental results demonstrate that our algorithm achieves 57% time saving while maintaining good quality and bit-rate performance compared with HTM8.0.
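The inter-view depth prediction step can be sketched as follows: the CU-depth search limit for a dependent-view CTU is taken from the disparity-compensated CTU in the already-coded independent view and its spatial neighbours. The names and the max-aggregation are illustrative assumptions, not the thesis's exact formula.

```python
def predict_max_depth(indep_depths, ctu_x, ctu_y, gdv_in_ctus):
    """Predict a CU-depth search limit for a dependent-view CTU from
    the co-located CTU in the independent view, shifted by a global
    disparity vector (in CTU units).  `indep_depths` is a 2-D grid of
    the maximum CU depth used per CTU in the independent view."""
    h, w = len(indep_depths), len(indep_depths[0])
    y = min(max(ctu_y, 0), h - 1)
    x = min(max(ctu_x + gdv_in_ctus, 0), w - 1)  # disparity-compensated column
    candidates = [indep_depths[y][x]]
    if x > 0:
        candidates.append(indep_depths[y][x - 1])  # left neighbour
    if x < w - 1:
        candidates.append(indep_depths[y][x + 1])  # right neighbour
    return max(candidates)  # search only depths up to this limit

grid = [[1, 2, 3, 1],
        [0, 1, 2, 2]]
limit = predict_max_depth(grid, ctu_x=1, ctu_y=0, gdv_in_ctus=1)
```

The encoder then skips quadtree depths beyond `limit` for that CTU, which is where the reported 57% time saving would come from.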
49

Lin, Jie-Ru, and 林杰儒. "Vision-oriented Algorithm for Fast Decision in 3D Video Coding." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/e7ac69.

Full text
Abstract:
Master's thesis
National Dong Hwa University
Department of Electrical Engineering
104
With the growing demand for high-resolution, high-quality video streams, the newest international video coding standard, H.265/HEVC, not only provides about 50% bit-rate saving over H.264/AVC but also supports compression of super-high-resolution video. Recently, a standard for 3D video coding was established to enrich multimedia experiences and applications. The input data format of 3D-HEVC is multiview color texture with associated depth maps. 3D-HEVC exploits inter-view and inter-component correlation to reach better coding efficiency, but this increases coding complexity, which is unfavorable for real-time applications. The goal of this thesis is therefore to reduce the complexity of the 3D-HEVC encoder. The thesis applies human vision models to detect edges in the color and depth information and classifies each coding tree unit (CTU) into one of several types. We analyze the properties of the color texture, the depth map, and the CTU types to perform various mode decision algorithms. Experimental results show that the proposed algorithm provides much greater time saving and better video quality than previous work.
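The edge-based CTU classification above can be sketched with a simple gradient measure over the texture and depth blocks. The gradient detector and the thresholds are illustrative stand-ins for the thesis's human-vision-model-based classifier.

```python
def mean_gradient(block):
    """Average horizontal + vertical gradient magnitude of a 2-D block."""
    h, w = len(block), len(block[0])
    total, count = 0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                total += abs(block[y][x + 1] - block[y][x]); count += 1
            if y + 1 < h:
                total += abs(block[y + 1][x] - block[y][x]); count += 1
    return total / count

def classify_ctu(texture, depth, t_edge=8.0, d_edge=4.0):
    """Classify a CTU by whether its texture and depth blocks contain
    edges, then pick a mode-search budget per class (toy thresholds)."""
    tex_edge = mean_gradient(texture) > t_edge
    dep_edge = mean_gradient(depth) > d_edge
    if tex_edge and dep_edge:
        return "complex"   # full mode search
    if tex_edge or dep_edge:
        return "edge"      # reduced mode search
    return "smooth"        # early skip / largest CU
```

Smooth CTUs, which dominate typical depth maps, can then skip most candidate modes, while only "complex" CTUs pay for the full search.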
50

Chiu, Wei-yao, and 邱韋堯. "Advanced Zero-block Mode Decision Algorithm for H.264/AVC." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/gqcsu9.

Full text
Abstract:
Master's thesis
National Central University
Graduate Institute of Communication Engineering
97
In previous work we presented a zero-block mode decision algorithm. Although it reduces computation time based on the number of zero blocks, its time saving is still limited at high bit-rates. We therefore propose several methods to further reduce the computational complexity of inter/intra mode decision. First, we find an improvement by using different motion vectors to compute the number of zero blocks (ZBs). Then, we utilize both the 8×8 and 4×4 zero-block distributions as another zero-block mode decision algorithm. Finally, we integrate these ideas into a hybrid algorithm together with SATD-based I4MB intra mode prediction. Experimental results show that the proposed algorithm achieves 73% time saving on average while maintaining coding performance.
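The zero-block idea can be sketched with the common SAD-threshold test: a 4×4 residual sub-block whose sum of absolute differences falls below a quantization-dependent threshold is predicted to be all-zero after transform and quantization. The threshold value and the skip rule here are illustrative, not the thesis's tuned parameters.

```python
def count_zero_blocks(residual, qp_threshold, block=4):
    """Count 4x4 sub-blocks of a residual macroblock that are likely
    all-zero after transform and quantization (SAD-threshold test)."""
    h, w = len(residual), len(residual[0])
    zeros = 0
    for by in range(0, h, block):
        for bx in range(0, w, block):
            sad = sum(abs(residual[by + y][bx + x])
                      for y in range(block) for x in range(block))
            if sad < qp_threshold:
                zeros += 1
    return zeros

def skip_small_partition_search(num_zero_blocks, total_blocks):
    """Zero-block mode decision: when almost every sub-block is zero,
    the macroblock is likely SKIP or a large partition, so the search
    over small-partition modes can be skipped."""
    return num_zero_blocks >= total_blocks - 1
```

Counting zero blocks is far cheaper than evaluating every partition mode, which is how this class of algorithms trades a small rate-distortion loss for a large time saving.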
