
Dissertations / Theses on the topic 'Algorithmic exchange'

Consult the top 50 dissertations / theses for your research on the topic 'Algorithmic exchange.'


1

Idvall, Patrik, and Conny Jonsson. "Algorithmic Trading : Hidden Markov Models on Foreign Exchange Data." Thesis, Linköping University, Department of Mathematics, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-10719.

Abstract:
In this master's thesis, hidden Markov models (HMMs) are evaluated as a tool for forecasting movements in a currency cross. With an ever-increasing electronic market making way for more automated trading, so-called algorithmic trading, there is a constant need for new trading strategies that try to find alpha, the excess return, in the market.

HMMs are based on the well-known theory of Markov chains, but the states are assumed hidden, governing some observable output. HMMs have mainly been used for speech recognition and communication systems, but have lately also been applied to financial time series with encouraging results. Both discrete and continuous versions of the model are tested, as well as single- and multivariate input data.

In addition to the basic framework, two extensions are implemented in the belief that they will further improve the prediction capabilities of the HMM. The first is a Gaussian mixture model (GMM), in which each state is assigned a set of single Gaussians that are weighted together to replicate the density function of the stochastic process. This opens up the modeling of non-normal distributions, which are often assumed for foreign exchange data. The second is an exponentially weighted expectation maximization (EWEM) algorithm, which takes time attenuation into consideration when re-estimating the parameters of the model. This allows old trends to be kept in mind while more recent patterns are given more attention.

Empirical results show that the HMM using continuous emission probabilities can, for some model settings, generate acceptable returns with Sharpe ratios well over one, whilst the discrete version in general performs poorly. The GMM therefore seems to be a highly needed complement to the HMM for functionality. The EWEM, however, does not improve results as one might have expected. Our general impression is that the predictor using HMMs that we have developed and tested is too unstable to be adopted as a trading tool on foreign exchange data, with too many factors influencing the results. More research and development is called for.
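As a rough illustration of the approach this abstract describes, here is a minimal sketch of fitting a mixture-emission HMM to FX-style returns. It uses the third-party hmmlearn library; the two-state setup and the synthetic data are assumptions for the example, not the thesis' configuration.

```python
# Illustrative sketch (not the authors' code): a Gaussian-mixture HMM on
# synthetic FX log-returns, with Viterbi-decoded hidden regimes.
import numpy as np
from hmmlearn.hmm import GMMHMM

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.001, size=(1000, 1))  # stand-in for currency-cross log-returns

# Each hidden state emits from a 3-component Gaussian mixture, which allows
# the fat-tailed, non-normal densities often assumed for FX data.
model = GMMHMM(n_components=2, n_mix=3, covariance_type="diag", n_iter=200, random_state=0)
model.fit(returns)

states = model.predict(returns)    # decoded regime per observation
print(model.means_.squeeze())      # per-state, per-mixture emission means
```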
2

Song, Yupu. "A Forex Trading System Using Evolutionary Reinforcement Learning." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/1240.

Abstract:
Building automated trading systems has long been one of the most cutting-edge and exciting fields in the financial industry. In this research project, we built a trading system based on machine learning methods. We used the Recurrent Reinforcement Learning (RRL) algorithm as our fundamental algorithm, and by introducing Genetic Algorithms (GA) into the optimization procedure, we tackled the problems of picking good initial parameter values and dynamically updating the learning speed in the original RRL algorithm. We call this optimization algorithm the Evolutionary Recurrent Reinforcement Learning (ERRL) algorithm, or the GA-RRL algorithm. ERRL allows us to find many locally optimal solutions more easily and quickly than the original RRL algorithm. Finally, we implemented the GA-RRL system on EUR/USD at the 5-minute level, and the backtest performance showed that our GA-RRL system has potentially promising profitability. In future research we plan to introduce a risk control mechanism, implement the system on different markets and assets, and perform backtests at higher frequencies.
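For orientation, a minimal sketch of the RRL core the abstract builds on (a Moody-style recurrent trader scored by a Sharpe objective); the GA layer that evolves the initial weights is omitted, and the window length, cost parameter and data are illustrative assumptions.

```python
# Sketch of an RRL trader: position is a tanh of recent returns plus the
# previous position; the objective a GA/gradient step would maximise is the
# Sharpe ratio of net trading returns.
import numpy as np

def rrl_positions(r, w, window=8):
    """Map the last `window` returns (plus the previous position) to a position in [-1, 1]."""
    F = np.zeros(len(r))
    for t in range(window, len(r)):
        x = np.concatenate(([1.0], r[t - window:t], [F[t - 1]]))
        F[t] = np.tanh(w @ x)
    return F

def sharpe_of(w, r, delta=1e-4):
    """Sharpe of P&L net of transaction costs proportional to position changes."""
    F = rrl_positions(r, w)
    pnl = F[:-1] * r[1:] - delta * np.abs(np.diff(F))
    return pnl.mean() / (pnl.std() + 1e-12)

rng = np.random.default_rng(1)
r = rng.normal(0, 1e-3, 2000)     # stand-in for 5-minute EUR/USD returns
w = rng.normal(0, 0.1, 10)        # 1 bias + 8 lags + 1 recurrent term
print(sharpe_of(w, r))
```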
3

Mozayyan, Esfahani Sina. "Algorithmic Trading and Prediction of Foreign Exchange Rates Based on the Option Expiration Effect." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252297.

Abstract:
The equity option expiration effect is a well-observed phenomenon, explained by delta-hedge rebalancing and pinning risk, which make the strike price of an option work as a magnet for the underlying price. The FX option expiration effect has not previously been explored to the same extent. In this paper the FX option expiration effect is investigated with the aim of finding out whether it provides valuable information for predicting FX rate movements. New models are created based on the concept of the option relevance coefficient, which determines which options are at higher risk of being in the money or out of the money at a specified future time and thus have an attraction effect. An algorithmic trading strategy is created to evaluate these models. The new models based on the FX option expiration effect strongly outperform the time series models used as benchmarks. The best results are obtained when the information about the FX option expiration effect is included as an exogenous variable in a GARCH-X model. However, despite promising and consistent results, more scientific research is required before significant conclusions can be drawn.
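The GARCH-X idea can be made concrete with a small sketch: the conditional variance recursion below adds an exogenous term standing in for the option-relevance signal. All coefficients are invented for illustration, not estimates from the thesis.

```python
# GARCH-X variance filter:
#   sigma2[t] = omega + alpha*eps[t-1]^2 + beta*sigma2[t-1] + gamma*x[t-1]
import numpy as np

def garchx_variance(eps, x, omega=1e-6, alpha=0.05, beta=0.90, gamma=0.10):
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()                      # simple initialisation
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1] + gamma * x[t - 1]
    return sigma2

rng = np.random.default_rng(2)
eps = rng.normal(0, 0.01, 500)    # FX return innovations (synthetic)
x = rng.uniform(0, 1e-4, 500)     # exogenous option-expiration signal (synthetic)
print(garchx_variance(eps, x)[:5])
```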
4

Gomolka, Johannes. "Algorithmic Trading : Analyse von computergesteuerten Prozessen im Wertpapierhandel unter Verwendung der Multifaktorenregression." PhD thesis, Universität Potsdam, 2011. http://opus.kobv.de/ubp/volltexte/2011/5100/.

Abstract:
During the last decade, electronic trading on stock exchanges advanced rapidly; today almost every exchange runs an electronic trading system. In this context, the term algorithmic trading describes a phenomenon where computer programs replace the human trader in making investment decisions or facilitating transactions. Algorithmic trading itself stands in a long line of innovations that helped develop the financial markets technologically (see, for example, telegraphy, the telephone, fax, or electronic settlement). Today the question is no longer whether computer programs are used, but rather where the border between automatic, computer-driven trading and human trading can be drawn. Research on algorithmic trading always confronts scientists with the problem of limited availability of information about these computer programs. The idea of this dissertation is to circumvent this problem and to extract information about algorithmic trading indirectly from an analysis of time series of fund-return data. The research question is: is it possible to draw conclusions about computer-driven securities trading (in short: algorithmic trading) from an analysis of fund returns?
To answer this question, the author develops a complete definition of algorithmic trading. He differentiates between buy-side and sell-side algorithmic trading, depending on the functions of the computer programs (supporting investment decisions or transaction management). Further, the author applies the multifactor model of style analysis originally introduced by Fung and Hsieh (1997). The multifactor model allows the time series of fund returns to be decomposed into interpretable components, assigning a substantive meaning to the individual regression factors. The results of this dissertation show that it is possible to draw conclusions about algorithmic trading from the analysis of fund returns. Yet these conclusions cannot be of a technical nature; they are limited to the analysis of trading strategies (investment styles).
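A minimal sketch of the style-analysis step, assuming synthetic factor and fund returns: fund returns are regressed on factor returns, and the fitted exposures are the interpretable "style" components the dissertation works with.

```python
# Multifactor style regression in the spirit of Fung & Hsieh (1997):
# decompose a fund's return series onto factor returns by least squares.
import numpy as np

rng = np.random.default_rng(3)
T, k = 120, 5
factors = rng.normal(0, 0.02, (T, k))             # monthly factor returns (synthetic)
true_beta = np.array([0.4, 0.2, 0.0, 0.3, 0.1])
fund = factors @ true_beta + rng.normal(0, 0.005, T)

X = np.column_stack([np.ones(T), factors])        # intercept + factors
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
alpha, betas = coef[0], coef[1:]                  # style exposures to interpret
print(alpha, betas.round(2))
```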
5

Xu, Siyao. "Bi-Objective Optimization of Kidney Exchanges." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/62.

Abstract:
Matching people to their preferences is an algorithmic topic with real-world applications. One such application is the kidney exchange. The best "cure" for patients whose kidneys are failing is to replace them with healthy ones. Unfortunately, biological factors (e.g., blood type) constrain the number of possible replacements. Kidney exchanges seek to alleviate some of this pressure by allowing a donor to give a kidney to a patient other than the one they most care about, while in turn that patient's donor gives her kidney to the patient the first donor most cares about. Roth et al. first discussed the classic kidney exchange problem. Freedman et al. expanded upon this work by optimizing an additional objective alongside maximal matching. In this work, I implement the traditional kidney exchange algorithm and expand upon more recent work by considering multi-objective optimization of the exchange. In addition, I compare the use of 2-cycles to 3-cycles. I offer two hypotheses regarding the results of my implementation, and I end with a summary and a discussion of potential future work.
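A toy sketch of the 2-cycle/3-cycle machinery compared in the thesis, on an invented compatibility graph; production systems formulate the cycle selection as an integer program rather than the greedy pass used here.

```python
# Enumerate 2- and 3-cycles in a donor->patient compatibility digraph, then
# greedily pick vertex-disjoint cycles, preferring longer ones.
from itertools import permutations

compat = {1: {2}, 2: {1, 3}, 3: {4}, 4: {2}}   # arc i->j: pair i's donor can give to patient j

def cycles(compat, max_len=3):
    found = set()
    for size in (2, 3):
        if size > max_len:
            break
        for combo in permutations(compat, size):
            if all(combo[(i + 1) % size] in compat.get(combo[i], ()) for i in range(size)):
                found.add(frozenset(combo))    # vertex set identifies the toy cycle
    return [tuple(c) for c in found]

matched, used = [], set()
for cyc in sorted(cycles(compat), key=len, reverse=True):
    if used.isdisjoint(cyc):                   # cycles must not share pairs
        matched.append(cyc)
        used.update(cyc)
print(matched)
```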
6

Uematsu, Akira Arice de Moura Galvão. "Algoritmos de negociação com dados de alta frequência." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-28042012-114138/.

Abstract:
In this work we analyze data from BM&F Bovespa, the São Paulo stock exchange. The dataset covers January 2011 and relates to the BOVESPA index (IND), the mini BOVESPA index (WIN) and the exchange rate (DOL). These are high-frequency data representing various aspects of the dynamics of negotiations. The records include the dates and times of trades, prices, offered volumes and other trade characteristics. The first stage of the thesis was to extract the information needed for the analysis from a file in the FIX protocol; a program in R was developed for this purpose. Afterwards, we studied the character of temporal dependence in the data, testing Markov properties of fixed and variable memory length. The results of this application show great variability in the character of dependence, which requires further analysis. We believe this work will be of value in future academic studies, in particular the treatment of the specific character of the FIX protocol used by Bovespa. This has been an obstacle in a number of academic studies, which is clearly undesirable, since Bovespa is one of the largest trading markets in the modern financial world.
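One generic way to test a fixed memory length, sketched with invented symbols: fit order-1 and order-2 Markov chains to a discretised trade series and compare them with a likelihood-ratio statistic. This is a standard construction for illustration, not necessarily the thesis' exact test.

```python
# Likelihood-ratio test of Markov order 1 vs order 2 on a 3-symbol series
# (e.g. up/flat/down price moves).
import numpy as np
from collections import Counter
from scipy.stats import chi2

rng = np.random.default_rng(4)
s = rng.integers(0, 3, 5000)                 # discretised price moves (synthetic)

def log_lik(s, order):
    trans, ctx = Counter(), Counter()
    for t in range(order, len(s)):
        key = tuple(s[t - order:t])
        trans[key + (s[t],)] += 1            # context -> next-symbol counts
        ctx[key] += 1
    return sum(n * np.log(n / ctx[k[:-1]]) for k, n in trans.items())

lr = 2 * (log_lik(s, 2) - log_lik(s, 1))
dof = (3 ** 2 - 3 ** 1) * (3 - 1)            # extra free parameters of the order-2 model
print(lr, chi2.sf(lr, dof))                  # small p-value -> reject memory length 1
```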
7

Stehlik, Milan. "Some Properties of Exchange Design Algorithms Under Correlation." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2006. http://epub.wu.ac.at/994/1/document.pdf.

Abstract:
In this paper we discuss an algorithm for the construction of D-optimal experimental designs for the parameters in a regression model when the errors have a correlation structure. We show that design points can collapse in the presence of some covariance structures, and that a so-called nugget can be employed in a natural way. We also show that the information of an equidistant design on the covariance parameter increases with the number of design points under an exponential variogram; however, these designs are not D-optimal. In higher dimensions, too, the exponential structure without a nugget leads to collapsing of the D-optimal design when the parameters of the covariance structure are also of interest. However, if only trend parameters are of interest, designs covering the whole design space uniformly are very efficient. Some numerical examples are included for illustration. (author's abstract)
Series: Research Report Series / Department of Statistics and Mathematics
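A minimal sketch of the exchange idea under correlated errors, with an assumed variogram parameter and candidate grid: design points are swapped one at a time whenever the determinant of the information matrix increases, and the nugget keeps points from collapsing onto each other.

```python
# One-dimensional exchange algorithm for D-optimality under an exponential
# covariance with nugget; model is a linear trend.
import numpy as np

def info_det(pts, theta=1.0, nugget=1e-3):
    X = np.column_stack([np.ones(len(pts)), pts])    # design matrix for the trend
    C = np.exp(-theta * np.abs(pts[:, None] - pts[None, :])) + nugget * np.eye(len(pts))
    return np.linalg.det(X.T @ np.linalg.solve(C, X))

cand = np.linspace(0, 1, 41)                         # candidate design points
design = np.array([0.0, 0.5, 1.0])
improved = True
while improved:
    improved = False
    for i in range(len(design)):
        for c in cand:
            trial = design.copy(); trial[i] = c
            if len(set(trial)) == len(trial) and info_det(trial) > info_det(design) + 1e-12:
                design, improved = trial, True       # accept the improving swap
print(np.sort(design))
```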
8

Rambaldi, Marcello. "Some applications of Hawkes point processes to high frequency finance." Doctoral thesis, Scuola Normale Superiore, 2017. http://hdl.handle.net/11384/85718.

9

Gotta, Nancy C. (Nancy Colleen) 1975. "A translation algorithm to solve semantic conflicts in information exchange." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47524.

10

Müller, Johannes Christian [Verfasser]. "Auctions in Exchange Trading Systems: Modeling Techniques and Algorithms / Johannes Müller." Berlin : epubli GmbH, 2014. http://d-nb.info/106322747X/34.

11

Rahman, Md Anisur. "Tabular Representation of Schema Mappings: Semantics and Algorithms." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20032.

Abstract:
Our thesis investigates a mechanism for representing schema mappings in tabular form and assesses the utility of the new representation. A schema mapping is a high-level specification that describes the relationship between two database schemas. Schema mappings constitute essential building blocks of data integration, data exchange and peer-to-peer data sharing systems. Global-and-local-as-view (GLAV) is one of the approaches for specifying schema mappings. Tableaux are used for expressing queries and functional dependencies on a single database in tabular form. In our thesis, we first introduce a tabular representation of GLAV mappings. We find that this tabular representation helps to solve many mapping-related algorithmic and semantic problems. For example, a well-known problem is to find the minimal instance of the target schema for a given instance of the source schema and a set of mappings between the source and the target schema. Second, we show that our proposed tabular mapping can be used as an operator on an instance of the source schema to produce an instance of the target schema that is 'minimal' and 'most general' in nature. There exists a tableaux-based mechanism for establishing the equivalence of two queries; third, we extend that mechanism to deduce equivalence between two schema mappings using their corresponding tabular representations. Sometimes there are redundant conjuncts in a schema mapping, which makes data exchange, data integration and data sharing operations more time-consuming. Fourth, we present an algorithm that utilizes the tabular representations to reduce the number of constraints in schema mappings. At present, either schema-level mappings or data-level mappings are used for data sharing purposes. Fifth, we introduce and give the semantics of bi-level mappings, which combine schema-level and data-level mappings. We also show that bi-level mappings are more effective for data sharing systems. Finally, we implemented our algorithms and developed a software prototype to evaluate our proposed strategies.
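A toy illustration, with an invented two-relation schema, of what applying a mapping to a source instance looks like: each source tuple forces exactly the corresponding target facts, which is what makes the produced target instance minimal.

```python
# Applying a single source-to-target rule
#   Emp(name, dept) -> Person(name), WorksIn(name, dept)
# to a source instance, producing a minimal target instance.
source = {"Emp": [("ann", "sales"), ("bob", "it")]}

def apply_mapping(source):
    target = {"Person": set(), "WorksIn": set()}
    for name, dept in source["Emp"]:
        target["Person"].add((name,))          # each source tuple forces these
        target["WorksIn"].add((name, dept))    # target facts and nothing more
    return target

print(apply_mapping(source))
```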
12

Rouhi, Mohammad. "Topology optimization of continuum structures using element exchange method." Master's thesis, Mississippi State : Mississippi State University, 2009. http://library.msstate.edu/etd/show.asp?etd=etd-04032009-063432.

13

Elidrissi, Rayane. "Tensions paradoxales et échange médié par l'intelligence artificielle : l'expérience de travail au cœur de la GRH algorithmique." Electronic Thesis or Diss., Université Côte d'Azur, 2024. http://www.theses.fr/2024COAZ0025.

Abstract:
The introduction of artificial intelligence (AI) into the Human Resources (HR) function is the subject of growing interest in both the academic and socio-professional worlds. In this context, the aim of this thesis is to understand, on the one hand, the effects of introducing AI into the HR function and, on the other, the emergence of new ways of working for employees. From a theoretical perspective, the analytical framework of paradox theory is used to identify individuals' responses to the paradoxical tensions that emerge in a context of algorithmic Human Resource Management (HRM). It is complemented by the theory of AI-mediated social exchange, which considers AI as a mediator that reconfigures exchange relationships. The mixed methodology is based on an initial exploratory qualitative study of HR directors and managers, vendors of AI solutions for HR, and AI experts, followed by a second exploratory quantitative study conducted among employees to test the model through structural equation modeling. Together, the two studies provide unprecedented coverage of all parties involved in the introduction of AI into the HR function and the new ways of working. Our qualitative results reveal a dual perception of AI, owing to respondents' disparate levels of awareness. Paradoxical tensions emerge, requiring a renewed representation of the HR function and the implementation of managerial levers to accept them. Needs for transparency, algorithmic fairness and professional autonomy are expressed as conditions for an augmented relationship with AI. Our quantitative results reveal that the perceived transparency of AI exerts two influences: a direct influence on work engagement, and an indirect influence on this variable via two partial mediators, perceived algorithmic fairness and perceived professional autonomy. Our research contributes to better identifying the paradoxical tensions that emerge during AI-mediated social exchange and the new ways of working with the HR function for a better employee experience. The AI mural we produced for HR managers and employees contributes to explainable and understandable AI.
14

Tai, Kam Fong. "Trend following algorithms in automated stock market trading." Thesis, University of Macau, 2011. http://umaclib3.umac.mo/record=b2492841.

15

Chahine, Firas Safwan. "A Genetic Algorithm that Exchanges Neighboring Centers for Fuzzy c-Means Clustering." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/116.

Abstract:
Clustering algorithms are widely used in pattern recognition and data mining applications. Due to their computational efficiency, partitional clustering algorithms are better suited for applications with large datasets than hierarchical clustering algorithms. K-means is among the most popular partitional clustering algorithms, but has a major shortcoming: it is extremely sensitive to the choice of initial centers used to seed the algorithm. Unless k-means is carefully initialized, it converges to an inferior local optimum and results in poor-quality partitions. Developing improved methods for selecting initial centers for k-means is an active area of research. Genetic algorithms (GAs) have been successfully used to evolve a good set of initial centers; among the most promising GA-based methods are those that exchange neighboring centers between candidate partitions in their crossover operations. K-means works best when datasets have well-separated, non-overlapping clusters. Fuzzy c-means (FCM) is a popular variant of k-means designed for applications where clusters are less well defined. Rather than assigning each point to a unique cluster, FCM determines the degree to which each point belongs to each cluster. Like k-means, FCM is also extremely sensitive to the choice of initial centers. Building on GA-based methods for initial center selection for k-means, this dissertation developed an evolutionary program for center selection in FCM called FCMGA. The proposed algorithm utilized region-based crossover and other mechanisms to improve the GA. To evaluate the effectiveness of FCMGA, three independent experiments were conducted using real and simulated datasets. The results demonstrate the effectiveness and consistency of the proposed algorithm in identifying better-quality solutions than extant methods. Moreover, the results confirmed the effectiveness of region-based crossover in enhancing the search process of the GA and the convergence speed of FCM. Taken together, the findings illustrate that FCMGA was successful in solving the problem of initial center selection in partitional clustering algorithms.
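For reference, a sketch of the fuzzy c-means updates whose seeding FCMGA evolves; the GA itself (with its region-based crossover of neighboring centers) is not shown, and the data, number of clusters c and fuzzifier m are assumptions.

```python
# Standard FCM alternation: membership update, then weighted center update.
import numpy as np

def fcm(X, centers, m=2.0, iters=50):
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u[i, k] = 1 / sum_j (d[i, k] / d[i, j])**(2 / (m - 1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        centers = (U.T ** m @ X) / np.sum(U.T ** m, axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
seeds = X[rng.choice(len(X), 2, replace=False)]   # the GA would evolve these seeds
centers, U = fcm(X, seeds)
print(centers.round(2))
```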
16

Murphy, Nicholas John. "An online learning algorithm for technical trading." Master's thesis, Faculty of Science, 2019. http://hdl.handle.net/11427/31048.

Abstract:
We use an adversarial, expert-based online learning algorithm to learn the optimal parameters required to maximise wealth when trading zero-cost portfolio strategies. The learning algorithm is used to determine the relative population dynamics of technical trading strategies that can survive historical back-testing, as well as to form an overall aggregated portfolio trading strategy from the set of underlying trading strategies implemented on daily and intraday Johannesburg Stock Exchange data. The resulting population time series are investigated using unsupervised learning for dimensionality reduction and visualisation. A key contribution is that the overall aggregated trading strategies are tested for statistical arbitrage using a novel hypothesis test proposed by Jarrow et al. [31] on both daily-sampled and intraday time-scales. The (low-frequency) daily-sampled strategies fail the arbitrage tests after costs, while the (high-frequency) intraday-sampled strategies are not falsified as statistical arbitrages after costs. The estimates of trading strategy success, cost of trading and slippage are considered along with an offline benchmark portfolio algorithm for performance comparison. In addition, the algorithm's generalisation error is analysed by recovering a probability of back-test overfitting estimate using a nonparametric procedure introduced by Bailey et al. [19]. The work aims to explore and better understand the interplay between different technical trading strategies from a data-informed perspective.
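A minimal sketch of the expert-weighting idea, assuming synthetic per-strategy P&L: an exponentially weighted (Hedge-style) aggregator shifts weight toward strategies that perform, which is the classic adversarial online learning construction.

```python
# Exponentially weighted aggregation over technical trading "experts".
import numpy as np

rng = np.random.default_rng(6)
T, n_experts = 250, 4
expert_pnl = rng.normal(1e-4, 1e-2, (T, n_experts))   # daily P&L per strategy (synthetic)

eta = 0.5                                             # learning rate (assumption)
w = np.ones(n_experts) / n_experts
portfolio = []
for t in range(T):
    portfolio.append(w @ expert_pnl[t])               # trade the weighted mix
    w *= np.exp(eta * expert_pnl[t])                  # reward gains, punish losses
    w /= w.sum()
print(sum(portfolio), w.round(3))
```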
17

Chaigusin, Suchira. "An investigation into the use of neural networks for the prediction of the stock exchange of Thailand." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2011. https://ro.ecu.edu.au/theses/386.

Abstract:
Stock markets are affected by many interrelated factors, such as economics and politics at both national and international levels. Predicting stock indices and determining the set of relevant factors for making accurate predictions are complicated tasks. Neural networks are one of the popular approaches used in research on stock market forecasting. This study developed neural networks to predict the movement direction of the next trading day of the Stock Exchange of Thailand (SET) index. The SET has yet to be studied extensively; research focused on the SET will contribute to understanding its unique characteristics and will lead to identifying relevant information to assist investment in this stock market. Experiments were carried out to determine the best network architecture, training method, and input data to use for this task. With regard to network architecture, feedforward networks with three layers were used - an input layer, a hidden layer and an output layer - and networks with different numbers of nodes in the hidden layer were tested and compared. With regard to training method, neural networks were trained with back-propagation and with genetic algorithms. With regard to input data, three sets of inputs, namely internal indicators, external indicators and a combination of both, were used. The internal indicators are based on calculations derived from the SET, while the external indicators are deemed to be factors beyond the control of Thailand, such as the Dow Jones Index.
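As a sketch of the three-layer feedforward setup, using scikit-learn in place of the thesis' own training code; the features are synthetic stand-ins for the internal/external indicators described above.

```python
# Three-layer feedforward network classifying next-day up/down moves.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 6))          # e.g. SET lags, Dow Jones, FX rate, ... (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X[:400], y[:400])
print("directional accuracy:", net.score(X[400:], y[400:]))
```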
18

Pathirana, Vindya Kumari. "Nearest Neighbor Foreign Exchange Rate Forecasting with Mahalanobis Distance." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5757.

Abstract:
Foreign exchange (FX) rate forecasting has long been a challenging area of study. Various linear and nonlinear methods have been used to forecast FX rates. As currency data are nonlinear and highly correlated, forecasting through nonlinear dynamical systems is becoming more relevant. The nearest neighbor (NN) algorithm is one of the most commonly used nonlinear pattern recognition and forecasting methods, and it outperforms the available linear forecasting methods for high-frequency foreign exchange data. The basic idea behind the NN is to capture the local behavior of the data by selecting instances with similar dynamic behavior. Only the k histories most relevant to the present dynamical structure are used to predict the future; for this reason, the NN algorithm is also known as the k-nearest neighbor (k-NN) algorithm, where k represents the number of chosen neighbors. In the k-nearest neighbor forecasting procedure, similar instances are captured through a distance function. Since the forecasts depend entirely on the chosen nearest neighbors, the distance plays a key role in the k-NN algorithm, and choosing an appropriate distance can improve the performance of the algorithm significantly. The distance most commonly used for k-NN forecasting in the past was the Euclidean distance. Due to possible correlation among vectors at different time frames, distances based on deterministic vectors, such as the Euclidean distance, are not very appropriate for foreign exchange data. Since the Mahalanobis distance captures these correlations, we suggest using it in the selection of neighbors. In the present study, we used five foreign currencies, which are among the most traded, to compare the performance of the k-NN algorithm under the traditional Euclidean and absolute distances with its performance under the proposed Mahalanobis distance. The performances were compared in two ways: (i) forecast accuracy and (ii) transformation of the forecasts into a more effective technical trading rule. The results were obtained with real FX trading data, and they show that the method introduced in this work outperforms the other popular methods. Furthermore, we conducted a thorough investigation of optimal parameter choice with different distance measures. We adopted the concept of distance-based weighting in the NN and compared the performance with forecasting based on the traditional unweighted NN algorithm. Time series forecasting methods, such as the autoregressive integrated moving average (ARIMA) process, are widely used in many areas of time series analysis as a forecasting technique. We compared the performance of the proposed Mahalanobis distance based k-NN forecasting procedure with the traditional ARIMA-based forecasting algorithm; in this case the forecasts were also transformed into a technical trading strategy to create buy and sell signals, and the two methods were evaluated for their forecasting accuracy and trading performance. Multi-step-ahead forecasting is an important aspect of time series forecasting. Many researchers claim that the k-nearest neighbor forecasting procedure outperforms linear forecasting methods for financial time series data, but the available work in the literature supports this claim only for one-step-ahead forecasting. One of our goals in this work was to improve FX trading with multi-step-ahead forecasting.
A popular multi-step-ahead forecasting strategy was adopted in our work to obtain forecasts more than one day ahead. We performed a comparative study of the performance of single-step-ahead and multi-step-ahead trading strategies using five foreign currency datasets with the Mahalanobis distance based k-nearest neighbor algorithm.
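A compact sketch of the core proposal, under an assumed embedding length and k: the nearest delay-embedded histories are selected by Mahalanobis distance, whose inverse-covariance weighting captures the cross-lag correlation the abstract highlights, and their next moves are averaged into a one-step forecast.

```python
# k-NN forecast of the next value of a series using the Mahalanobis distance.
import numpy as np

def knn_forecast(series, d=5, k=10):
    H = np.array([series[i:i + d] for i in range(len(series) - d)])  # past patterns
    nxt = series[d:]                                                 # value following each
    VI = np.linalg.inv(np.cov(H.T) + 1e-9 * np.eye(d))               # inverse covariance
    q = series[-d:]                                                  # present pattern
    diff = H - q
    dist = np.einsum("ij,jk,ik->i", diff, VI, diff)                  # squared Mahalanobis
    return nxt[np.argsort(dist)[:k]].mean()

rng = np.random.default_rng(8)
fx = np.cumsum(rng.normal(0, 1e-3, 2000)) + 1.1                      # synthetic FX rate
print(knn_forecast(fx))
```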
19

Alston, Katherine Yvette. "A heuristic on the rearrangeability of shuffle-exchange networks." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2521.

Abstract:
The algorithms that control network routing are specific to the network, because they are designed to take advantage of that network's topology. The "goodness" of a network includes such criteria as a simple routing algorithm, and a simple routing algorithm would increase the use of the shuffle-exchange network.
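For concreteness, the two primitive moves that define an N = 2^n shuffle-exchange network can be written down directly; routing algorithms compose sequences of them, which is why topology makes the routing rule simple.

```python
# The shuffle permutation (cyclic left rotation of the n address bits) and
# the exchange (flip of the lowest bit) on an 8-node network.
def shuffle(addr, n):
    """b_{n-1} b_{n-2} ... b_0  ->  b_{n-2} ... b_0 b_{n-1}"""
    return ((addr << 1) | (addr >> (n - 1))) & ((1 << n) - 1)

def exchange(addr):
    return addr ^ 1

n = 3
for src in range(1 << n):
    print(src, "-> shuffle:", shuffle(src, n), " exchange:", exchange(src))
```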
20

Bevilacqua, Sara. "New biologically inspired models towards understanding the Italian Power Exchange market." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15036/.

Abstract:
This thesis extends an agent-based model for the simulation and study of the Italian electricity market. In the model, the agents, which represent the energy companies operating in the Italian market, compete with each other with the goal of maximizing profit. Each agent determines its own strategy through a genetic algorithm operating on a population of strategies. This model has two limitations: (1) an agent's generation plants located in the same geographic zone and sharing the same production technology are all assigned the same strategy; (2) for the private populations to evolve correctly, the agents must exchange information about their strategic choices. In this thesis we first show that relaxing the assumption in point (1) does not degrade the quality of the results. We then test the effectiveness of the genetic algorithm by applying and evaluating alternative algorithms, namely Monte Carlo optimization and the Particle Swarm Optimization algorithm. The second part of the work focuses on introducing intelligent agents capable of maintaining two populations, one composed of competitors' strategies and the other of their own strategies. In this way, the agents no longer need to share their strategic choices; instead, as in the real world, they try to predict their competitors and react accordingly, drawing on adversarial reasoning techniques. Finally, we present the results obtained with this last extension.
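A generic sketch of the Particle Swarm Optimization alternative tested in the thesis, with a stand-in profit function in place of the market simulator; all hyperparameters are illustrative assumptions.

```python
# Standard PSO loop over candidate bidding strategies in [0, 1]^dim.
import numpy as np

def profit(strategy):                        # placeholder for simulated market profit
    return -np.sum((strategy - 0.7) ** 2)

rng = np.random.default_rng(9)
n_particles, dim = 20, 4
x = rng.uniform(0, 1, (n_particles, dim))    # candidate strategies
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([profit(p) for p in x])
gbest = pbest[pbest_val.argmax()]

for _ in range(100):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    vals = np.array([profit(p) for p in x])
    better = vals > pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[pbest_val.argmax()]
print(gbest.round(3))                         # converges near the assumed optimum 0.7
```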
21

Karim, Fazal. "Parallel Heart Analysis Algorithms Utilizing Multi-core for Optimized Medical Data Exchange over Voice and Data Networks." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-51087.

Abstract:
In today's research and market, IT applications for health care are gaining huge interest from both IT and medical researchers. Cardiovascular diseases (CVDs) are considered the largest cause of death for both men and women, regardless of ethnic background. More efficient treatments and, most importantly, efficient methods of cardiac diagnosis are desired. Electrocardiography (ECG) is an essential method used to diagnose heart diseases. However, diagnosing a cardiovascular disease from the 12-lead ECG printout of an ECG machine by the human eye might seriously impair analysis accuracy. To meet this challenge of today's ECG analysis methodology, a more reliable solution that can analyze huge amounts of patient data in real time is desired. The software solution presented in this work aims to reduce the risk of diagnosing cardiovascular diseases by the human eye, by computing large-scale patient data in real time at the patient's location and sending the required results or a summary to the doctor or nurse. Keeping in mind the importance of real-time analysis of patient data, the software system is built upon small individual algorithms/modules designed for a multi-core architecture, where each module is intended to be processed by an individual core/processor in parallel. All input and output processes of the analysis system are automated, which reduces operator interaction with the system and thus the cost. The outputs of the processing are summarized into smaller files in both ASCII and binary formats to meet the requirement of exchanging the data over voice and data networks.
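The parallelisation pattern described can be sketched with a process pool, one task per ECG lead; the analysis function is a placeholder for the real heart-analysis modules.

```python
# Map independent analysis modules onto separate cores with a process pool.
import numpy as np
from multiprocessing import Pool

def analyse_lead(lead):
    """Stand-in module: summarise one ECG lead (e.g. amplitude statistics)."""
    signal = np.asarray(lead)
    return {"mean": float(signal.mean()), "peak": float(signal.max())}

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    leads = [rng.normal(0, 1, 5000) for _ in range(12)]   # a 12-lead recording (synthetic)
    with Pool() as pool:                                  # one worker per core by default
        summaries = pool.map(analyse_lead, leads)         # modules run in parallel
    print(summaries[0])
```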
22

Liu, Qinyan. "Optimal coordinate sensor placements for estimating mean and variance components of variation sources." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/2238.

Abstract:
In-process Optical Coordinate Measuring Machines (OCMMs) offer the potential of diagnosing, in a timely manner, the variation sources that are responsible for product quality defects. Such a sensor system can help manufacturers improve product quality and reduce process downtime. Effective use of sensory data in diagnosing variation sources depends on the optimal design of the sensor system, a problem often known as sensor placement. This thesis addresses coordinate sensor placement in diagnosing dimensional variation sources in assembly processes. Sensitivity indices for detecting process mean and variance components are defined as the design criteria and are derived in terms of process layout and sensor deployment information. Exchange algorithms, originally developed in research on optimal experimental design, are employed and revised to maximize the detection sensitivity. A sort-and-cut procedure is used, which remarkably improves the efficiency of the current exchange routine. The resulting optimal sensor layouts and their implications are illustrated in the specific context of a panel assembly process.
23

Obayopo, S. O. (Surajudeen Olanrewaju). "Performance enhancement in proton exchange membrane cell - numerical modeling and optimisation." Thesis, University of Pretoria, 2012. http://hdl.handle.net/2263/26247.

Abstract:
Sustainable growth and development in a society require an energy supply that is efficient, affordable, readily available and, in the long term, sustainable without causing negative societal impacts, such as environmental pollution and its attendant consequences. In this respect, proton exchange membrane (PEM) fuel cells offer a promising alternative to existing conventional fossil fuel sources for transport and stationary applications due to their high efficiency, low-temperature operation, high power density, fast start-up and portability for mobile applications. However, to fully harness the potential of PEM fuel cells, there is a need to improve their operational performance, durability and reliability, and to reduce production costs in order to achieve commercialisation and thus compete with existing energy sources. The present study has therefore focused on developing novel approaches aimed at improving the output performance of this class of fuel cell. In this study, an innovative combination of numerical computation and optimisation techniques, which could serve as an alternative to the laborious and time-consuming trial-and-error approach to fuel cell design, is presented. In this novel approach, the limitation on the optimal design of a fuel cell was overcome by the search algorithm (Dynamic-Q), which is robust at finding optimal design parameters. The methodology involves integrating the computational fluid dynamics equations with a gradient-based optimiser (Dynamic-Q), which uses successive objective and constraint function approximations to obtain the optimum design parameters. Specifically, using this methodology, we optimised the PEM fuel cell's internal structures - the gas channels, the gas diffusion layer (GDL; its relative thickness and porosity) and the reactant gas transport - with the aim of maximising the net power output. A thermal-cooling modelling study was also conducted to maximise system performance at elevated working temperatures. The study started with a steady-state three-dimensional computational model of the performance of a single-channel proton exchange membrane fuel cell under varying operating conditions, and the combined effect of these operating conditions was also investigated. The results show that temperature, gas diffusion layer porosity, cathode gas mass flow rate and species flow orientation significantly affect the performance of the fuel cell. The effect of the operating and design parameters on PEM fuel cell performance is also more dominant at low operating cell voltages than at higher ones. In addition, this study establishes the need to match PEM fuel cell parameters such as porosity, reactant mass flow rates and fuel gas channel geometry in the system design for maximum power output. This study also presents a novel design, using pin fins, to enhance the performance of the PEM fuel cell through optimised reactant gas transport at a reduced pumping power requirement for the reactant gases. The results indicate that the flow Reynolds number has a significant effect on the flow field and on the diffusion of the reactant gas through the GDL medium. In addition, an enhanced fuel cell performance was achieved using pin fins in a fuel cell gas channel, which ensured high performance and a low fuel channel pressure drop.
It should be noted that this study is the first attempt at enhancing oxygen mass transfer through the PEM fuel cell GDL at a reduced pressure drop using pin fins. Finally, the impact of the cooling channel geometric configuration (in combination with stoichiometry ratio, relative humidity and coolant Reynolds number) on effective thermal heat transfer and performance in the fuel cell system was investigated, with a view to determining effective thermal management designs for this class of fuel cell. Numerical results show that operating parameters such as the stoichiometry ratio, relative humidity and cooling channel aspect ratio have a significant effect on fuel cell performance, primarily by determining the level of membrane dehydration of the PEM fuel cell. The results show the possibility of operating a PEM fuel cell beyond the critical temperature (80 °C) using the combined optimised stoichiometry ratio, relative humidity and cooling channel geometry, without the need for the very expensive special temperature-resistant materials. In summary, the results from this study demonstrate the potential of optimisation techniques in improving PEM fuel cell design. Overall, this study will add to the knowledge base needed to produce generic design information for fuel cell systems, which can be applied to better designs of fuel cell stacks.
24

Müller, Johannes Christian [Verfasser], Alexander [Akademischer Betreuer] Martin, and Vyve Mathieu [Akademischer Betreuer] Van. "Auctions in Exchange Trading Systems: Modeling Techniques and Algorithms / Johannes Christian Müller. Gutachter: Alexander Martin ; Mathieu Van Vyve." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2014. http://d-nb.info/1075833981/34.

25

Lippold, Georg. "Encryption schemes and key exchange protocols in the certificateless setting." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41697/1/Georg_Lippold_Thesis.pdf.

Abstract:
The contributions of this thesis fall into three areas of certificateless cryptography. The first area is encryption, where we propose new constructions for both identity-based and certificateless cryptography. We construct an n-out-of-n group encryption scheme for identity-based cryptography that does not require any special means to generate the keys of the participating trusted authorities. We also introduce a new security definition for chosen-ciphertext-secure multi-key encryption. We prove that our construction is secure as long as at least one authority is uncompromised, and show that the existing constructions for chosen-ciphertext security from identity-based encryption also hold in the group encryption case. We then consider certificateless encryption as the special case of 2-out-of-2 group encryption and give constructions for highly efficient certificateless schemes in the standard model. Among these is the first construction of a lattice-based certificateless encryption scheme. Our next contribution is a highly efficient certificateless key encapsulation mechanism (KEM), which we prove secure in the standard model. We introduce a new way of proving the security of certificateless schemes that are based on identity-based schemes: we leave the identity-based part of the proof intact and extend it to cover the part introduced by the certificateless scheme. We show that our construction is more efficient than any instantiation of generic constructions for certificateless key encapsulation in the standard model. The third area where the thesis contributes to the advancement of certificateless cryptography is key agreement. Swanson showed that many certificateless key agreement schemes are insecure if considered in a reasonable security model. We propose the first provably secure certificateless key agreement schemes in the strongest model for certificateless key agreement. We extend Swanson's definition for certificateless key agreement and give more power to the adversary. Our new schemes are secure as long as each party has at least one uncompromised secret. Our first construction is in the random oracle model and gives the adversary slightly more capabilities than our second construction in the standard model. Interestingly, our standard model construction is as efficient as the random oracle model construction.
26

Kim, Pansoo. "Near optimal design of fixture layouts in multi-station assembly processes." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/1076.

Abstract:
This dissertation presents a methodology for the near-optimal design of fixture layouts in multi-station assembly processes. An optimal fixture layout improves the robustness of a fixture system, reduces product variability and leads to manufacturing cost reduction. Three key aspects of multi-station fixture layout design are addressed: a multi-station variation propagation model, a quantitative measure of fixture design, and an effective and efficient optimization algorithm. Multi-station design may involve a high-dimensional design space, which can contain many local optima. In this dissertation, I investigated two algorithms for optimal fixture layout design. The first is an exchange algorithm, originally developed in research on optimal experimental design; I revised the exchange routine so that it reduces computing time remarkably without sacrificing the optimal values. The second algorithm uses data-mining methods such as clustering and classification. It appears that the data-mining method can find valuable design selection rules that in turn help to locate the optimal design efficiently. Compared with other nonlinear optimization algorithms, such as the simplex search method, simulated annealing and the genetic algorithm, the data-mining method performs best, and the revised exchange algorithm performs comparably to simulated annealing but better than the others. A four-station assembly process for a sport utility vehicle (SUV) side frame is used throughout the dissertation to illustrate the relevant concepts and the resulting methodology.
27

Kamenieva, Iryna. "Research Ontology Data Models for Data and Metadata Exchange Repository." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-6351.

Abstract:
For research in the fields of data mining and machine learning, a necessary condition is the availability of various input data sets, and researchers now create databases of such sets. Examples of such systems are the UCI Machine Learning Repository, the Data Envelopment Analysis Dataset Repository, the XMLData Repository and the Frequent Itemset Mining Dataset Repository. Along with these statistical repositories, a whole range of stores, from simple file stores to specialized repositories, can be used by researchers when solving applied tasks and investigating their own algorithms and scientific problems. It might seem that the only difficulty for the user would be searching such scattered information stores and understanding their structure. However, a detailed study of such repositories reveals deeper problems in the use of the data: in particular, a complete mismatch and rigidity of data file structures with respect to SDMX (Statistical Data and Metadata Exchange), the standard and structure used by many European organizations; the impossibility of preparing the data in advance for a concrete applied task; and the lack of a history of data usage for particular scientific and applied tasks.

There are now many data mining (DM) methods, as well as large quantities of data stored in various repositories. The repositories, however, contain no DM methods, and, moreover, the methods are not linked to application areas. An essential problem is linking the subject domain (problem domain), the DM methods, and the datasets appropriate for each method. In this work we therefore consider the problem of building ontological models of DM methods, describing the interaction of the methods with the corresponding data from repositories, and designing intelligent agents that allow the user of a statistical repository to choose the method and data appropriate to the task being solved. The system structure is proposed, and an intelligent search agent over the ontological model of DM methods, taking the user's personal queries into account, is implemented.

For the implementation of an intelligent data and metadata exchange repository, an agent-oriented approach was selected. The model uses a service-oriented architecture. The cross-platform programming language Java, the multi-agent platform Jadex, the database server Oracle Spatial 10g, and the ontology development environment Protégé version 3.4 are used.
28

Zhang, Xiaoqin. "Thermal-Economic Optimization and Structural Evaluation for an Advanced Intermediate Heat Exchanger Design." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1462891005.

29

Křesťan, Zdeněk. "Automatizované obchodování na kryptoměnových burzách." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385948.

Abstract:
This thesis focuses on automated trading on cryptocurrency exchanges. Cryptocurrencies are now widespread, and the possibility of buying and selling them automatically is a topic of growing interest. The main part of the thesis is the design of an algorithm for processing data from the exchanges, evaluating it, and subsequently executing cryptocurrency trades. The thesis also describes the algorithm's implementation, testing and possible further extensions.
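A minimal sketch of the kind of rule such a system evaluates, on synthetic candle closes (a live system would pull OHLCV data from the exchange API instead): a moving-average crossover generating entry and exit signals.

```python
# Moving-average crossover signal on a synthetic BTC close-price series.
import numpy as np

rng = np.random.default_rng(11)
price = 30000 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))   # synthetic closes

fast = np.convolve(price, np.ones(10) / 10, mode="valid")     # 10-bar average
slow = np.convolve(price, np.ones(50) / 50, mode="valid")     # 50-bar average
fast = fast[-len(slow):]                                      # align the two windows

signal = np.sign(fast - slow)                                 # +1 long, -1 flat/short
trades = np.diff(signal) != 0                                 # signal flips trigger orders
print("crossover trades:", trades.sum())
```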
30

Halton, Christopher H. "An effectiveness study for prioritization algorithms in a communications node model for the Copernicus Tactical Data Information Exchange System (TADIXS)." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA337403.

Abstract:
Thesis (M.S. in Systems Technology (Joint Command, Control, and Communications)), Naval Postgraduate School, September 1997. Thesis advisors: Michael G. Sovereign, Orin E. Marvel. Includes bibliographical references (p. 131). Also available online.
31

Gendre, Victor Hugues. "Predicting short term exchange rates with Bayesian autoregressive state space models: an investigation of the Metropolis Hastings algorithm forecasting efficiency." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1437399395.

32

Sokolovski, Valeri. "Analysis of the predictive ability and profitability of an analytically derived trading algorithm in the intra-day spot foreign exchange market." Master's thesis, University of Cape Town, 2011. http://hdl.handle.net/11427/11470.

Abstract:
This paper examines the predictive power and profitability of an analytically derived technical trading algorithm in the intraday spot foreign exchange market, using over nine years of hourly data. The trading rule, the reservation price policy (RPP), stems from the computer science literature and, under certain assumptions, is shown to be efficient under the worst-case scenario criterion. The results indicate the existence of significant information content in the trading rule, which is robust to the parameter choice and consistent across the eleven currencies examined. However, the nonparametric bootstrap analysis shows that the rule does not capture any incremental information beyond what is accounted for by the seasonal GARCH(1,1)-MA(1) model.
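The RPP itself is compact enough to sketch, assuming prices are known to stay within bounds [m, M]: convert at the first price at or above sqrt(M*m), the worst-case-optimal reservation price for one-way trading; the bounds and quotes below are invented.

```python
# Reservation price policy for one-way currency conversion.
import numpy as np

def rpp_trade(prices, m, M):
    p_star = np.sqrt(M * m)                  # reservation price
    for t, p in enumerate(prices):
        if p >= p_star:
            return t, p                      # convert everything here
    return len(prices) - 1, prices[-1]       # forced conversion at the horizon

rng = np.random.default_rng(12)
prices = rng.uniform(1.20, 1.40, 24)         # hourly spot quotes within known bounds
print(rpp_trade(prices, 1.20, 1.40))
```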
APA, Harvard, Vancouver, ISO, and other styles
33

Thouillez, Thomas. "Anatomie des marchés financiers à haute fréquence : analyse de l'Influence de l'automatisation sur la microstructure des marchés financiers." Thesis, Paris 1, 2020. http://www.theses.fr/2020PA01E049.

Full text
Abstract:
This thesis studies the major transformations of market microstructure since the automation of financial markets. Today, the structural modification of financial markets, combined with advances in information and communication technology, has led to important shifts both in market practices and in measures of market quality. The cost of liquidity continued to improve between 2010 and 2019, with quoted spreads narrowing especially for the smaller capitalizations of the SBF 120. Effective spreads, however, decreased significantly less than quoted spreads for those small caps, revealing the shallow depth of the order book at the best limits. The work presents the transformation of trading venues and the technological evolutions that enabled the deployment of high-frequency trading. The research team built a library for replicating financial markets, VirteK; it was used in a simulation reproducing the stylized facts of the May 6, 2010 flash crash, illustrating limit order book imbalances with the VPIN measure.
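The two liquidity measures contrasted in this abstract can be made concrete in a few lines; the quote and trade values below are invented.

```python
# Quoted spread is taken from the order book; effective spread compares the
# trade price with the prevailing midpoint, so it reflects the depth the
# trade actually found at the best limits.
def quoted_spread(bid, ask):
    return ask - bid

def effective_spread(trade_price, bid, ask):
    mid = (bid + ask) / 2
    return 2 * abs(trade_price - mid)

bid, ask = 99.98, 100.02
print(quoted_spread(bid, ask))              # 0.04
print(effective_spread(100.03, bid, ask))   # 0.06: the trade walked the book
```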
APA, Harvard, Vancouver, ISO, and other styles
34

Pettersson, Ruiz Eric. "Combating money laundering with machine learning : A study on different supervised-learning algorithms and their applicability at Swedish cryptocurrency exchanges." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300375.

Full text
Abstract:
In 2018, Europol (2018) estimated that more than $22 billion was laundered in Europe using cryptocurrencies. The Financial Action Task Force explains that money launderers may exchange their illicitly gained fiat money for crypto, launder that crypto by distributing the funds to multiple accounts, and then re-exchange the crypto back to fiat currency. This process of exchanging currencies is done through a cryptocurrency exchange, giving the exchange an ideal position to prevent money laundering, as it acts as a middleman (FATF, 2021). However, current AML efforts at these exchanges have proven outdated and need to be improved, and Weber et al. (2019) argue that machine learning could be used for this endeavor. The study's purpose is to investigate how machine learning can be used to combat money laundering activities performed using cryptocurrency. This is done by exploring which machine learning algorithms are suitable for this purpose, and by further examining the applicability of the investigated algorithms at cryptocurrency exchanges. To answer the research question, four supervised-learning algorithms are compared on the Bitcoin Elliptic Dataset, using three key evaluation metrics to quantify the differences in algorithmic performance: F1-score, precision and recall. Two complementary qualitative interviews are then performed at Swedish cryptocurrency exchanges to assess the algorithms' applicability. The study cannot conclude that there is a single most suitable algorithm for detecting transactions related to money laundering; however, the applicability of the decision tree algorithm appears more promising at Swedish cryptocurrency exchanges than that of the other three algorithms.
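A hedged sketch of the study's quantitative setup — comparing supervised classifiers by F1-score, precision and recall — might look as follows. Synthetic data stands in for the Bitcoin Elliptic Dataset, and only two of the four algorithms are shown.

```python
# Compare supervised classifiers on imbalanced labelled transactions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, f1_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)   # the illicit class is the minority
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          f"precision={precision_score(y_te, pred):.2f}",
          f"recall={recall_score(y_te, pred):.2f}",
          f"f1={f1_score(y_te, pred):.2f}")
```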
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Qi. "Application of Improved Feature Selection Algorithm in SVM Based Market Trend Prediction Model." Thesis, Portland State University, 2019. http://pqdtopen.proquest.com/#viewpdf?dispub=10979352.

Full text
Abstract:
In this study, a Prediction Accuracy Based Hill Climbing Feature Selection Algorithm (AHCFS) is created and compared with an Error Rate Based Sequential Feature Selection Algorithm (ERFS), an existing MATLAB algorithm. The goal of the study is to create an algorithm with the potential to outperform the existing MATLAB sequential feature selection algorithm in predicting the movement of S&P 500 (^GSPC) prices under certain circumstances. The two algorithms are tested on historical ^GSPC data, and a Support Vector Machine (SVM) is employed by both as the classifier. A prediction without any feature selection algorithm is carried out and used as a baseline for comparison between the two algorithms. The prediction horizon set in this study for both algorithms varies from one to 60 days. The results show that AHCFS reaches higher prediction accuracy than ERFS in the majority of cases.
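The hill-climbing idea can be sketched as below. The details of AHCFS itself are not given in the abstract, so the greedy forward search and the stopping rule here are assumptions; an SVM is used as the classifier, as in the study.

```python
# Prediction-accuracy-based hill climbing (assumed variant): greedily add
# the feature that most improves cross-validated SVM accuracy, and stop at
# a local optimum.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=12, n_informative=4,
                           random_state=1)

def accuracy(cols):
    return cross_val_score(SVC(), X[:, cols], y, cv=5).mean()

selected, best = [], 0.0
while True:
    candidates = [c for c in range(X.shape[1]) if c not in selected]
    if not candidates:                 # every feature already selected
        break
    scored = [(accuracy(selected + [c]), c) for c in candidates]
    top_score, top_col = max(scored)
    if top_score <= best:              # hill climbing: stop at a local optimum
        break
    selected, best = selected + [top_col], top_score
print(selected, round(best, 3))
```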
APA, Harvard, Vancouver, ISO, and other styles
36

Repka, Martin. "Investiční modely v prostředí finančních trhů." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2013. http://www.nusl.cz/ntk/nusl-224014.

Full text
Abstract:
This thesis focuses on automated trading systems for financial markets. It describes the theoretical background of financial markets, different technical analysis approaches, and theoretical knowledge about automated trading systems. The output of the present paper is a diversified portfolio comprising four different investment models aimed at trading futures contracts on cocoa and gold. The portfolio, tested on market data from the first quarter of 2013, achieved a 46.74% increase on the initial equity. The systems were designed in the Adaptrade Builder software using genetic algorithms, subsequently tested in the MetaTrader trading platform, and finally optimized using sensitivity analysis.
APA, Harvard, Vancouver, ISO, and other styles
37

Ondo, Ondrej. "Návrh a optimalizace automatického obchodního systému." Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2014. http://www.nusl.cz/ntk/nusl-224709.

Full text
Abstract:
This thesis focuses on automated trading systems for foreign exchange markets. It describes the theoretical background of financial markets, technical analysis approaches, and theoretical knowledge about automated trading systems. The output of the thesis is a set of two automated trading systems built for trading the most liquid currency pairs. The process of developing the automated trading systems, as well as their practical start-up in Spartacus Company Ltd., is documented in the form of project documentation, which covers choosing the necessary hardware components, installing them and ensuring smooth operation, as well as selecting and installing the necessary software resources. The process of developing the strategies in the Adaptrade Builder environment is shown, together with their characteristics, their performance, and a graph of the evolution of the account over time. The selected portfolio of strategies was tested in the MetaTrader platform, and the thesis closes by assessing the achievements and drawing an overall conclusion.
APA, Harvard, Vancouver, ISO, and other styles
38

Allaya, Mouhamad M. "Méthodes de Monte-Carlo EM et approximations particulaires : application à la calibration d'un modèle de volatilité stochastique." Thesis, Paris 1, 2013. http://www.theses.fr/2013PA010072/document.

Full text
Abstract:
This thesis pursues a double perspective in the joint use of sequential Monte Carlo (SMC) methods and the Expectation-Maximization (EM) algorithm for hidden Markov models whose unobserved component has a Markov dependence structure of order greater than one. We begin with a brief account of the theoretical basis of the two statistical concepts in Chapters 1 and 2, which are devoted to them. We then focus, in Chapter 3, on their simultaneous implementation in the usual setting where the dependence structure is of order 1. The contribution of SMC methods in this work lies in their ability to efficiently approximate bounded conditional functionals, in particular filtering and smoothing quantities, in non-linear and non-Gaussian settings. The EM algorithm is itself motivated by the presence of both observable and unobservable (or partially observed) variables in hidden Markov models, and in particular in the stochastic volatility models under study. Having presented the EM algorithm and the SMC methods, together with some of their properties, in Chapters 1 and 2 respectively, we illustrate these two statistical tools through the calibration of a stochastic volatility model; this application is carried out for exchange rates and for some stock indexes in Chapter 3. We conclude that chapter with a slight departure from the canonical stochastic volatility model and with Monte Carlo simulations of the resulting model. Finally, in Chapters 4 and 5 we provide the theoretical and practical foundations for extending sequential Monte Carlo methods, notably particle filtering and smoothing, when the Markov structure is more pronounced. As an illustration, we give the example of a degenerate stochastic volatility model, an approximation of which has such a dependence property.
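The SMC building block mentioned above can be illustrated with a bootstrap particle filter for the canonical stochastic volatility model x_t = φ·x_{t-1} + σ·η_t, y_t = exp(x_t/2)·ε_t. This is a sketch only; the parameter values are assumptions, not estimates from the thesis.

```python
# Bootstrap particle filter for a basic stochastic volatility model.
import numpy as np

rng = np.random.default_rng(0)
phi, sigma = 0.95, 0.3
T, N = 200, 1000

# simulate latent log-volatility and observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t-1] + sigma * rng.normal()
y = np.exp(x / 2) * rng.normal(size=T)

particles = rng.normal(0, sigma / np.sqrt(1 - phi**2), N)   # stationary init
filt_mean = np.zeros(T)
for t in range(T):
    particles = phi * particles + sigma * rng.normal(size=N)  # propagate
    var = np.exp(particles)
    logw = -0.5 * (np.log(2 * np.pi * var) + y[t]**2 / var)   # weight by obs
    w = np.exp(logw - logw.max()); w /= w.sum()
    filt_mean[t] = w @ particles                              # filtered mean
    particles = particles[rng.choice(N, N, p=w)]              # resample
print(filt_mean[-5:])
```

Inside an EM loop, smoothed versions of such particle approximations would supply the expectations needed in the E-step.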
APA, Harvard, Vancouver, ISO, and other styles
39

Fettaka, Salim. "Application of Multiobjective Optimization in Chemical Engineering Design and Operation." Thèse, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23209.

Full text
Abstract:
The purpose of this research project is the design and optimization of complex chemical engineering problems, by employing evolutionary algorithms (EAs). EAs are optimization techniques which mimic the principles of genetics and natural selection. Given their population-based approach, EAs are well suited for solving multiobjective optimization problems (MOOPs) to determine Pareto-optimal solutions. The Pareto front refers to the set of non-dominated solutions which highlight trade-offs among the different objectives. A broad range of applications have been studied, all of which are drawn from the chemical engineering field. The design of an industrial packed bed styrene reactor is initially studied with the goal of maximizing the productivity, yield and selectivity of styrene. The dual population evolutionary algorithm (DPEA) was used to circumscribe the Pareto domain of two and three objective optimization case studies for three different configurations of the reactor: adiabatic, steam-injected and isothermal. The Pareto domains were then ranked using the net flow method (NFM), a ranking algorithm that incorporates the knowledge and preferences of an expert into the optimization routine. Next, a multiobjective optimization of the heat transfer area and pumping power of a shell-and-tube heat exchanger is considered to provide the designer with multiple Pareto-optimal solutions which capture the trade-off between the two objectives. The optimization was performed using the fast and elitist non-dominated sorting genetic algorithm (NSGA-II) on two case studies from the open literature. The algorithm was also used to determine the impact of using discrete standard values of the tube length, diameter and thickness rather than using continuous values to obtain the optimal heat transfer area and pumping power. In addition, a new hybrid algorithm called the FP-NSGA-II, is developed in this thesis by combining a front prediction algorithm with the fast and elitist non-dominated sorting genetic algorithm-II (NSGA-II). Due to the significant computational time of evaluating objective functions in real life engineering problems, the aim of this hybrid approach is to better approximate the Pareto front of difficult constrained and unconstrained problems while keeping the computational cost similar to NSGA-II. The new algorithm is tested on benchmark problems from the literature and on a heat exchanger network problem.
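The core mechanism shared by NSGA-II and the other Pareto-based algorithms named above — extracting the non-dominated solutions that expose the trade-off between objectives — can be sketched in a few lines. The (heat transfer area, pumping power) values below are invented.

```python
# Extract the Pareto front from a population scored on two objectives,
# both to be minimized.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

population = [(120.0, 8.1), (95.0, 9.7), (150.0, 6.2), (110.0, 8.0), (98.0, 10.5)]
print(pareto_front(population))   # -> [(95.0, 9.7), (150.0, 6.2), (110.0, 8.0)]
```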
APA, Harvard, Vancouver, ISO, and other styles
40

Pansart, Lucie. "Algorithmes de chemin élémentaire : application aux échanges de reins." Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM039.

Full text
Abstract:
This thesis deals with elementary path problems and their application to the kidney exchange problem. We focus on kidney exchange programs that include altruistic donors, which are crucial for patients with renal disease and challenging for operations research methods. The goal of this work is to develop an efficient algorithm that can be used to solve future instances, which are likely to involve a large number of donors and patients. While progressing on this topic, we encounter closely related problems in packing, vehicle routing and stable sets. For this last problem, we introduce a new extended formulation and prove that it is ideal and compact for claw-free perfect graphs by characterizing its polytope. We then concentrate on the design of a column generation procedure dedicated to the kidney exchange problem and confront its NP-hard pricing problem. The specific problem that we address is the elementary path problem with a length constraint, which models the search for interesting chains of donation to add during the pricing step. We investigate dynamic programming approaches, in particular the NG-route relaxation and the color coding heuristic, and improve them by exploiting the length constraint and the sparsity of the graphs. Finally, we study color coding in a more general context, proposing new randomized strategies with a guaranteed improvement: they order the graph before coloring it and introduce a bias in the probability distribution to increase the probability of finding an optimal solution.
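For orientation, here is a bare-bones version of the color coding heuristic the thesis improves on: color the vertices uniformly at random with k colors and keep only "colorful" paths in a dynamic program, repeating trials to boost the success probability. The biased, order-based coloring strategies the thesis proposes would replace the uniform choice below.

```python
# Color coding for deciding whether an elementary (simple) path on k
# vertices exists. Color subsets are tracked as bitmasks.
import random

def colorful_path_exists(adj, k, trials=300):
    n = len(adj)
    for _ in range(trials):
        color = [random.randrange(k) for _ in range(n)]        # uniform coloring
        # dp[v] = set of color masks of colorful paths ending at vertex v
        dp = [{1 << color[v]} for v in range(n)]
        for _ in range(k - 1):                                 # grow to k vertices
            new = [set() for _ in range(n)]
            for u in range(n):
                for mask in dp[u]:
                    for v in adj[u]:
                        if not mask & (1 << color[v]):         # keep it colorful
                            new[v].add(mask | (1 << color[v]))
            dp = new
        if any(dp[v] for v in range(n)):
            return True
    return False

# path graph 0-1-2-3: a simple path on 4 vertices exists
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(colorful_path_exists(adj, k=4))
```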
APA, Harvard, Vancouver, ISO, and other styles
41

Rozsnyó, Tomáš. "Modular Multiple Liquidity Source Price Streams Aggregator." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236492.

Full text
Abstract:
This MSc thesis was written during a study stay at Hochschule Furtwangen University, Furtwangen, Germany. The project provides the theoretical background for understanding financial market principles, focusing on the foreign exchange market, for which it describes the fundamentals and price analysis. It further covers the principles of high-frequency trading, including strategy, development and cost, and discusses the FIX protocol, the communication protocol of financial markets, in detail. The core part of the project is sorting algorithms, which are covered on both a theoretical and a practical level. The aggregator design includes the implementation environment, the specification, and the individual parts of the aggregator application represented as objects; an implementation overview can be found in the last chapter.
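A minimal sketch of the aggregation step the sorting algorithms serve — merging several liquidity providers' price streams into one book — follows; the provider names and quotes are invented.

```python
# Aggregate quotes from multiple liquidity sources into a single book:
# bids sorted best (highest) first, asks sorted best (lowest) first.
bids = [("LP_A", 1.1012, 1_000_000), ("LP_B", 1.1013, 500_000),
        ("LP_C", 1.1011, 2_000_000)]
asks = [("LP_A", 1.1015, 1_000_000), ("LP_B", 1.1014, 750_000),
        ("LP_C", 1.1016, 2_000_000)]

book_bids = sorted(bids, key=lambda q: q[1], reverse=True)
book_asks = sorted(asks, key=lambda q: q[1])

best_bid, best_ask = book_bids[0], book_asks[0]
print("top of book:", best_bid, best_ask,
      "spread:", round(best_ask[1] - best_bid[1], 4))
```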
APA, Harvard, Vancouver, ISO, and other styles
42

Cordeiro, Jelson Andre. "Meta-heurísticas aplicadas ao problema de projeção do preço de ações na bolsa de valores." Universidade Tecnológica Federal do Paraná, 2013. http://repositorio.utfpr.edu.br/jspui/handle/1/733.

Full text
Abstract:
The prediction of stock prices on the stock exchange is an attractive field of research due to its commercial applications and the financial benefits it offers. The objective of this work is to analyze the performance of two meta-heuristic algorithms, the Bat Algorithm and the Genetic Algorithm, on the stock price prediction problem. The individuals in the algorithms' populations were modeled using the parameters of 7 technical indicators, and the profit at the end of a period is maximized by choosing the right moments to buy and sell stocks. To evaluate the proposed methodology, experiments were performed using real historical data (2006-2012) of 92 stocks listed on the Brazilian stock exchange. Cross-validation was applied in the experiments to avoid overfitting, using 3 periods for training and 4 for testing. The results of the algorithms were compared with each other and with the Buy and Hold (B&H) performance indicator. For 91.30% of the stocks the algorithms obtained a profit higher than B&H; for 79.35% of them the Bat Algorithm performed best, while for 11.95% the Genetic Algorithm was better. The results indicate that applying meta-heuristics with the proposed model to the stock price prediction problem is promising.
APA, Harvard, Vancouver, ISO, and other styles
43

Greenland, Christopher. "Flux Measurements at Lake Erken." Thesis, Uppsala universitet, Luft-, vatten- och landskapslära, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-440331.

Full text
Abstract:
Turbulent fluxes govern the exchange of momentum, heat and moisture between the Earth's surface and the overlying air. Computing these fluxes is crucial, particularly over lakes and seas, because most of the Earth's surface consists of water. One of the most common methods of calculating turbulent fluxes is the bulk method, where the fluxes are expressed with exchange coefficients. With more knowledge of these coefficients, the fluxes can be determined with higher accuracy, and the turbulence structure and the exchange of moisture, momentum and heat between the surface and the overlying air can be better understood. The goal of this study was to compute the neutral exchange coefficients for drag (CDN), heat (CHN) and moisture (CEN) and investigate their dependency on various atmospheric conditions, based on four years of measurements from Lake Erken, located about 70 km east of Uppsala. The coefficients were evaluated against wind speed, stratification and time over water, TOW (the time the air spends above the water before it reaches the tower). A special analysis studied the variation of the coefficients with wind speed during the UVCN regime, and another examined whether the coefficients may have been influenced by non-local processes, e.g. advection from the surroundings. Additionally, normalized standard deviations for temperature and humidity were evaluated for different stabilities. The results were compared with estimations by the COARE3.0 algorithm (for the dependency on wind speed and stability) in a previous report and with other earlier studies. The results indicated that the neutral exchange coefficients were higher and more dispersed during near-neutral stratification and low TOWs, and the normalized standard deviations also increased during neutral conditions. The explanation could be related to the presence of the UVCN regime or to non-local effects such as advection or entrainment from the surroundings. Wind speed had no obvious impact on the coefficients; the drag coefficient, however, was larger and more spread out in the wind speed range 1-3 m/s. In comparison to earlier studies, the exchange coefficients were higher and more scattered, possibly because of a strong UVCN regime, sustained non-local influences, waves relatively steeper than in open-sea conditions, or outliers in the data.
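The bulk method referred to above expresses the fluxes through the exchange coefficients; a short sketch with the textbook bulk formulas follows. The coefficient and constant values are illustrative, not results from the study.

```python
# Bulk formulas for momentum, sensible heat and latent heat fluxes.
rho, cp, Lv = 1.2, 1004.0, 2.5e6      # air density, heat capacity, latent heat

def bulk_fluxes(U, dtheta, dq, CD=1.3e-3, CH=1.1e-3, CE=1.1e-3):
    tau = rho * CD * U**2              # momentum flux (N m-2)
    H   = rho * cp * CH * U * dtheta   # sensible heat flux (W m-2)
    LE  = rho * Lv * CE * U * dq       # latent heat flux (W m-2)
    return tau, H, LE

# 5 m/s wind, water 2 K warmer than the air, 1 g/kg humidity difference
print(bulk_fluxes(U=5.0, dtheta=2.0, dq=1e-3))
```

Conversely, measuring the fluxes directly (e.g. by eddy covariance) and inverting these formulas is what yields the exchange coefficients the study evaluates.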
APA, Harvard, Vancouver, ISO, and other styles
44

Nascimento, Thiago Pinheiro do. "UM SERVIÇO BASEADO EM ALGORITMOS GENÉTICOS PARA PREDIÇÃO DA BOLSA DE VALORES." Universidade Federal do Maranhão, 2015. http://tedebc.ufma.br:8080/jspui/handle/tede/290.

Full text
Abstract:
Anticipating future pricing on the stock exchange is not a simple task, because it involves many obscure variables that must be able to represent the real market situation. This is a fundamental reason why investors lose money and end up giving up on investing in the capital market. In an attempt to address this issue, this work proposes the development of a service capable of estimating the future price of assets on the stock exchange. For this, it uses a genetic algorithm, which makes it possible to extract from the market the features needed to estimate the future behavior of an asset.
APA, Harvard, Vancouver, ISO, and other styles
45

Soundararajan, Ranjith. "Energy system analysis." Thesis, Högskolan i Borås, Akademin för textil, teknik och ekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-12001.

Full text
Abstract:
The purpose of this thesis is to use a model to optimize the heat exchanger network of a process industry and to estimate the minimum cost of the network without compromising the energy demand of each stream, with the help of the MATLAB programming software. The optimization is done without considering stream splitting or stream combining. The first phase derives a simple heat exchanger network consisting of four streams (two hot and two cold) using the traditional pinch analysis method. The second phase randomly places heat exchangers between the hot and cold streams and calculates the minimum cost of the network using genetic coding, in which thousands of randomly created heat exchanger networks are evolved over a series of populations.
APA, Harvard, Vancouver, ISO, and other styles
46

Lei, Jiansheng. "Using graph theory to resolve state estimator issues faced by deregulated power systems." [College Station, Tex. : Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1292.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Salviano, Leandro Oliveira. "Optimization of vortex generators positions and angles in fin-tube compact heat exchanger at low Reynolds number." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3150/tde-26122014-120408/.

Full text
Abstract:
In the last few decades, augmentation of heat transfer has emerged as an important research topic. Although many promising heat transfer enhancement techniques have been proposed, such as the use of longitudinal vortex generators, little research deals with thermal optimization. In the present work, an optimization of the positions and angles of delta winglet vortex generators was conducted in a fin-tube compact heat exchanger with two rows of tubes in a staggered arrangement. Two approaches were evaluated: response surface methodology (neural networks) and direct optimization. Finite-volume-based commercial software (Fluent) was used to analyze heat transfer, flow structure and pressure loss in the presence of longitudinal vortex generators (LVG). The delta winglet aspect ratio was 2 and the Reynolds numbers, based on fin pitch, were 250 and 1400. Four vortex generator parameters that impact heat exchanger performance were analyzed: the LVG position in the x-y plane, the attack angle (θ) and the roll angle (φ). The present work is the first to study the influence of the LVG roll angle on heat transfer enhancement. In total, eight independent LVG parameters were considered: (x₁, y₁, θ₁, φ₁) for the first tube and (x₂, y₂, θ₂, φ₂) for the second tube. The factor analysis method (software ModeFrontier) was used to study the influence of these LVG parameters on heat exchanger performance, and the effect of each parameter on heat transfer and pressure loss was evaluated in terms of the Colburn factor (j) and the friction factor (f), respectively. The optimized LVG configurations led to heat transfer enhancement rates much higher than those reported in the literature, and direct optimization gave better results than response surface methodology for all objective functions. Important interactions were found between VG1 and VG2, which influenced the Colburn (j) and friction (f) factors for each Reynolds number. In particular, the asymmetry of the LVG, in which the VG2 parameters strongly depend on the VG1 parameters, plays a key role in enhancing heat transfer. Moreover, for each Reynolds number and each objective function there is an optimal LVG arrangement. If the objective is to mitigate pressure drop, VG1 may be suppressed, because its main role is to increase heat transfer downstream; VG2, on the other hand, was relevant both for increasing heat transfer and for decreasing pressure drop. The roll angle had a strong influence on the friction factor (f), especially for VG1 and at the low Reynolds number.
APA, Harvard, Vancouver, ISO, and other styles
48

Biruk, David D. "Neural Network Based Control of Integrated Recycle Heat Exchanger Superheaters in Circulating Fluidized Bed Boilers." UNF Digital Commons, 2013. http://digitalcommons.unf.edu/etd/470.

Full text
Abstract:
The focus of this thesis is the development and implementation of a neural network model predictive controller to be used for controlling the integrated recycle heat exchanger (Intrex) in a 300MW circulating fluidized bed (CFB) boiler. Discussion of the development of the controller will include data collection and preprocessing, controller design and controller tuning. The controller will be programmed directly into the plant distributed control system (DCS) and does not require the continuous use of any third party software. The intrexes serve as the loop seal in the CFB as well as intermediate and finishing superheaters. Heat is transferred to the steam in the intrex superheaters from the circulating ash which can vary in consistency, quantity and quality. Fuel composition can have a large impact on the ash quality and in turn, on intrex performance. Variations in MW load and airflow settings will also impact intrex performance due to their impact on the quantity of ash circulating in the CFB. Insufficient intrex heat transfer will result in low main steam temperature while excessive heat transfer will result in high superheat attemperator sprays and/or loss of unit efficiency. This controller will automatically adjust to optimize intrex ash flow to compensate for changes in the other ash properties by controlling intrex air flows. The controller will allow the operator to enter a target intrex steam temperature increase which will cause all of the intrex air flows to adjust simultaneously to achieve the target temperature. The result will be stable main steam temperature and in turn stable and reliable operation of the CFB.
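A highly simplified sketch of the control idea follows: a learned plant model predicts the intrex steam temperature rise for candidate air-flow settings, and the controller picks the setting whose prediction is closest to the operator's target. The linear "model" below merely stands in for the trained neural network, and all numbers are illustrative.

```python
# One step of a model-predictive loop over a learned plant model (sketch).
def predicted_temp_rise(air_flow):
    return 4.0 + 0.8 * air_flow          # stand-in for the trained network

def mpc_step(target_rise, candidates):
    # choose the admissible air flow whose predicted rise best matches target
    return min(candidates,
               key=lambda a: abs(predicted_temp_rise(a) - target_rise))

candidates = [x * 0.5 for x in range(0, 21)]   # admissible air-flow settings
print(mpc_step(target_rise=9.2, candidates=candidates))   # -> 6.5
```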
APA, Harvard, Vancouver, ISO, and other styles
49

Cao, Phuong Thao. "Approximation of OLAP queries on data warehouses." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00905292.

Full text
Abstract:
We study approximate answers to OLAP queries on data warehouses. We consider the relative answers to OLAP queries on a schema as distributions with the L1 distance, and approximate the answers without storing the entire data warehouse. We first introduce three specific methods: uniform sampling, measure-based sampling and the statistical model. We also introduce an edit distance between data warehouses, with edit operations adapted to data warehouses. Then, in OLAP data exchange, we study how to sample each source and combine the samples to approximate any OLAP query. We next consider a streaming context, where a data warehouse is built from the streams of different sources; we show a lower bound on the size of the memory necessary to approximate queries, and in this case approximate OLAP queries with a finite memory. We also describe a method to discover statistical dependencies, a new notion we introduce, searching for them with decision trees. We apply the method to two data warehouses: the first simulates sensor data providing weather parameters over time and location from different sources; the second is a collection of RSS feeds from web sites on the Internet.
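The first of the three methods, uniform sampling, can be sketched directly: approximate a relative GROUP BY answer from a sample and score it with the L1 distance. The fact table below is invented.

```python
# Approximate a relative OLAP answer (a GROUP BY seen as a distribution)
# from a uniform sample, and measure the error with the L1 distance.
import random
from collections import Counter

random.seed(0)
facts = [(random.choice("NESW"), random.randint(1, 100)) for _ in range(100_000)]

def relative_answer(rows):
    totals = Counter()
    for region, sales in rows:
        totals[region] += sales
    grand = sum(totals.values())
    return {r: v / grand for r, v in totals.items()}

exact = relative_answer(facts)
approx = relative_answer(random.sample(facts, 2_000))   # uniform sample
l1 = sum(abs(exact[r] - approx.get(r, 0.0)) for r in exact)
print(f"L1 distance: {l1:.4f}")
```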
APA, Harvard, Vancouver, ISO, and other styles
50

Havlů, Michal. "Algoritmus automatického výběru vhodného typu zařízení z databáze výměníků tepla." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2009. http://www.nusl.cz/ntk/nusl-228730.

Full text
Abstract:
The thesis is devoted to the development of a database algorithm for selecting (or narrowing the selection of) a suitable type of heat exchanger for a given industrial application. The database forms part of a multipurpose calculation system containing three individual modules: (i) a module for selecting (or narrowing the selection of) the heat exchanger type for a given application, (ii) a module for the thermal-hydraulic design or rating of a heat exchanger, and (iii) a module for calculating investment and operating costs. The thesis describes in detail the method for selecting a suitable heat exchanger type and presents and discusses the individual criteria of the selection process, which determine the values in the tables of priorities for the given equipment; these tables are an unavoidable part of the selection algorithm. Details of the software implementation of the selection algorithm are also presented, and a description of the behaviour of the individual heat exchanger types forms an important part of the thesis. The practical application of the developed selection algorithm is demonstrated on several industrial examples.
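A hedged sketch of such a selection mechanism — scoring exchanger types through tables of priorities over the criteria of the application — might look as follows. The types, criteria and numbers are illustrative, not the thesis's actual tables.

```python
# Rank heat exchanger types by summed priority over the criteria that
# matter for the duty at hand, and shortlist the best candidates.
PRIORITIES = {                      # exchanger type -> priority per criterion
    "shell_and_tube": {"high_pressure": 3, "fouling": 3, "compactness": 1},
    "plate":          {"high_pressure": 1, "fouling": 2, "compactness": 3},
    "spiral":         {"high_pressure": 2, "fouling": 3, "compactness": 2},
}

def shortlist(required, top=2):
    scores = {t: sum(p[c] for c in required) for t, p in PRIORITIES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

print(shortlist(["high_pressure", "fouling"]))
# -> [('shell_and_tube', 6), ('spiral', 5)]
```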
APA, Harvard, Vancouver, ISO, and other styles