
Dissertations / Theses on the topic 'Random forest regression'

Consult the top 50 dissertations / theses for your research on the topic 'Random forest regression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Linusson, Henrik, Robin Rudenwall, and Andreas Olausson. "Random forest och glesa datarespresentationer." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16672.

Full text
Abstract:
In silico experimentation is the process of using computational and statistical models to predict medicinal properties of chemicals; as a means of reducing lab work and increasing success rates, this process has become an important part of modern drug development. There are various ways of representing molecules; the problem that motivated this paper derives from collecting substructures of the chemical into what are known as fractional representations. Assembling large sets of molecules represented in this way results in sparse data, where a large portion of the set is null values. This consumes an excessive amount of computer memory, which limits the size of the data sets that can be used when constructing predictive models. In this study, we suggest a set of criteria for evaluating random forest implementations to be used for in silico predictive modeling on sparse data sets, with regard to computer memory usage, model construction time and predictive accuracy. A novel random forest system was implemented to meet the suggested criteria, and experiments were made to compare our implementation with existing machine learning algorithms to establish its correctness. Experimental results show that our random forest implementation can create accurate prediction models on sparse data sets, with lower memory overhead than implementations using a common matrix representation, and in less time than the existing random forest implementations it was evaluated against. We highlight design choices made to accommodate sparse data structures and data sets in the random forest ensemble technique, and therein present potential improvements to feature selection in sparse data sets.
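As a minimal sketch of the sparse-data setting this abstract describes (synthetic data and scikit-learn's stock random forest, not the authors' custom implementation), a forest can be trained directly on a compressed sparse row matrix:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
# 200 "molecules" x 500 substructure features, ~2% of entries non-zero
X = sparse_random(200, 500, density=0.02, format="csr", random_state=rng)
row_sums = np.asarray(X.sum(axis=1)).ravel()
y = (row_sums > row_sums.mean()).astype(int)   # synthetic binary property

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)   # tree learners accept the CSR matrix without densifying it
```

Keeping X in CSR form avoids storing the null values densely, which is the memory bottleneck the abstract identifies.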
Program: Systemarkitekturutbildningen
APA, Harvard, Vancouver, ISO, and other styles
2

Linusson, Henrik. "Multi-Output Random Forests." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-17167.

Full text
Abstract:
The Random Forests ensemble predictor has proven to be well-suited for solving a multitude of different prediction problems. In this thesis, we propose an extension to the Random Forest framework that allows Random Forests to be constructed for multi-output decision problems with arbitrary combinations of classification and regression responses, with the goal of increasing predictive performance for such multi-output problems. We show that our method for combining decision tasks within the same decision tree reduces prediction error for most tasks compared to single-output decision trees based on the same node impurity metrics, and provide a comparison of different methods for combining such metrics.
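For the all-regression special case of the multi-output setting above, scikit-learn's standard random forest already fits one forest to several targets jointly (the mixed classification/regression case is the thesis's own extension); a sketch on synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.RandomState(0)
X = rng.rand(300, 5)
# Two regression responses predicted jointly by a single forest
Y = np.column_stack([X[:, 0] + X[:, 1], np.sin(3 * X[:, 2])])

forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Y)
pred = forest.predict(X[:5])   # one column per output
```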
Program: Magisterutbildning i informatik
APA, Harvard, Vancouver, ISO, and other styles
3

Adriansson, Nils, and Ingrid Mattsson. "Forecasting GDP Growth, or How Can Random Forests Improve Predictions in Economics?" Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-243028.

Full text
Abstract:
GDP is used to measure the economic state of a country, and accurate forecasts of it are therefore important. Using the Economic Tendency Survey, we investigate forecasting quarterly GDP growth with the data mining technique Random Forest. Comparisons are made with a benchmark AR(1) model and an ad hoc linear model built on the most important variables suggested by the Random Forest. Evaluation by forecasting shows that the Random Forest makes the most accurate forecasts, supporting the theory that there are benefits to using Random Forests on economic time series.
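The AR(1) benchmark mentioned above can be fit by ordinary least squares on lagged pairs; a self-contained sketch on a simulated growth series (illustrative only, not the thesis data):

```python
import numpy as np

rng = np.random.RandomState(1)
phi_true, c_true = 0.6, 0.5
g = np.zeros(400)
for t in range(1, 400):            # simulate g_t = c + phi * g_{t-1} + noise
    g[t] = c_true + phi_true * g[t - 1] + rng.normal(scale=0.2)

A = np.column_stack([np.ones(399), g[:-1]])     # regress g_t on (1, g_{t-1})
c_hat, phi_hat = np.linalg.lstsq(A, g[1:], rcond=None)[0]
forecast = c_hat + phi_hat * g[-1]              # one-step-ahead forecast
```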
APA, Harvard, Vancouver, ISO, and other styles
4

Asritha, Kotha Sri Lakshmi Kamakshi. "Comparing Random forest and Kriging Methods for Surrogate Modeling." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20230.

Full text
Abstract:
The issue with conducting real experiments in design engineering is the cost of finding an optimal design that fulfills all design requirements and constraints. An alternative to real experiments is computer-aided design modeling and computer-simulated experiments. These simulations are conducted to understand functional behavior and to predict possible failure modes in design concepts. However, these simulations may take minutes, hours or days to finish. In order to reduce the time and number of simulations required for design space exploration, surrogate modeling is used. The motive of surrogate modeling is to replace the original system with an approximation of the simulations that can be computed quickly. The process of surrogate model generation includes sample selection, model generation and model evaluation. Using surrogate models in design engineering can help reduce design cycle times and cost by enabling rapid analysis of alternative designs. Selecting a suitable surrogate modeling method for a given function with specific requirements is possible by comparing different surrogate modeling methods, using different application problems and evaluation metrics. In this thesis, we compare the random forest model and the kriging model based on prediction accuracy. The comparison is performed using mathematical test functions, in quantitative experiments that investigate the performance of the two methods. The experimental analysis found that the kriging models have higher accuracy than the random forests, while the random forest models have shorter execution times than kriging for the studied mathematical test problems.
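A rough sketch of the comparison described above, with scikit-learn's Gaussian process regressor standing in for kriging and a one-dimensional function standing in for the thesis's benchmark suite:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
X = rng.uniform(-2, 2, size=(60, 1))            # sampled design points
y = np.sin(3 * X).ravel() + X.ravel() ** 2      # smooth test function

X_test = np.linspace(-2, 2, 200).reshape(-1, 1)
y_test = np.sin(3 * X_test).ravel() + X_test.ravel() ** 2

gp = GaussianProcessRegressor(kernel=RBF(), random_state=0).fit(X, y)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

mse_gp = mean_squared_error(y_test, gp.predict(X_test))
mse_rf = mean_squared_error(y_test, rf.predict(X_test))
```

On a smooth noiseless function like this one, the interpolating kriging surrogate is typically the more accurate of the two, consistent with the thesis's finding.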
APA, Harvard, Vancouver, ISO, and other styles
5

Wålinder, Andreas. "Evaluation of logistic regression and random forest classification based on prediction accuracy and metadata analysis." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-35126.

Full text
Abstract:
Model selection is an important part of classification. In this thesis we study two classification models, logistic regression and random forest. They are compared and evaluated based on prediction accuracy and metadata analysis. The models were trained on 25 diverse datasets. We calculated the prediction accuracy of both models using RapidMiner. We also collected metadata for the datasets concerning the number of observations, number of predictor variables and number of classes in the response variable. There is a correlation between the performance of logistic regression and random forest, with a significant correlation of 0.60 and confidence interval [0.29, 0.79]. The models appear to perform similarly across the datasets, with performance influenced more by the choice of dataset than by model selection. Random forest, with an average prediction accuracy of 81.66%, performed better on these datasets than logistic regression, with an average prediction accuracy of 73.07%. The difference is however not statistically significant, with a p-value of 0.088 for Student's t-test. Multiple linear regression analysis reveals that none of the analysed metadata have a significant linear relationship with logistic regression performance; the regression of logistic regression performance on metadata has a p-value of 0.66. We get similar results with random forest performance: the regression of random forest performance on metadata has a p-value of 0.89, so none of the analysed metadata have a significant linear relationship with random forest performance. We conclude that the prediction accuracies of logistic regression and random forest are correlated. Random forest performed slightly better on the studied datasets, but the difference is not statistically significant. The studied metadata do not appear to have a significant effect on the prediction accuracy of either model.
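The paired comparison above (correlation plus Student's t-test across datasets) can be sketched with SciPy; the per-dataset accuracies below are synthetic stand-ins for the RapidMiner results, not the thesis's numbers:

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.RandomState(0)
base = rng.uniform(0.6, 0.95, size=25)                 # dataset difficulty
acc_logreg = np.clip(base + rng.normal(0, 0.03, 25), 0, 1)
acc_forest = np.clip(base + 0.02 + rng.normal(0, 0.03, 25), 0, 1)

r, _ = pearsonr(acc_logreg, acc_forest)                # model correlation
t_stat, p_value = ttest_rel(acc_forest, acc_logreg)    # paired difference test
```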
APA, Harvard, Vancouver, ISO, and other styles
6

Björk, Gustaf, and Carlsson Tobias. "Klassificeringsmetoder med medicinska tillämpningar : En jämförande studie mellan logistisk regression, elastic net och random forest." Thesis, Umeå universitet, Statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-122698.

Full text
Abstract:
Medical research today generates very large amounts of data of varying character, which has made statistical classification methods increasingly popular as decision support in medical research and practice. This thesis tries to determine whether any of the classification methods logistic regression, elastic net or random forest performs better than the others when the ratio between observations and explanatory variables in the data varies. The classification performance of the methods is evaluated using cross-validation. The results show that the methods perform similarly both when the data contain more observations than explanatory variables and when the data contain more explanatory variables than observations. However, elastic net performs clearly better than the other methods on data where the number of observations is roughly equal to the number of explanatory variables. Furthermore, the results indicate that all three methods can usefully be applied to data with more variables than observations, which is common for data concerning genetics, provided that a manual variable selection is carried out for logistic regression so that the method can be applied in this situation.
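A hedged sketch of the p > n situation the abstract ends on, using scikit-learn's elastic-net-penalised logistic regression on synthetic data (not the medical material of the thesis):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# p > n setting: 100 observations, 300 explanatory variables
X, y = make_classification(n_samples=100, n_features=300, n_informative=10,
                           random_state=0)

enet = LogisticRegression(penalty="elasticnet", solver="saga",
                          l1_ratio=0.5, C=1.0, max_iter=5000).fit(X, y)
n_used = int((enet.coef_ != 0).sum())   # variables kept by the penalty
```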
APA, Harvard, Vancouver, ISO, and other styles
7

Ankaräng, Marcus, and Jakob Kristiansson. "Comparison of Logistic Regression and an Explained Random Forest in the Domain of Creditworthiness Assessment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301907.

Full text
Abstract:
As the use of AI in society develops, the demand for explainable algorithms has increased. A challenge with many modern machine learning algorithms is that, due to their often complex structures, they lack the ability to produce human-interpretable explanations. Research within explainable AI has resulted in methods that can be applied on top of non-interpretable models to motivate their decisions. The aim of this thesis is to compare an unexplained machine learning model used in combination with an explanatory method, and a model that is explainable through its inherent structure. Random forest was the unexplained model in question and the explanatory method was SHAP. The explainable model was logistic regression, which is explanatory through its feature weights. The comparison was conducted within the area of creditworthiness and was based on predictive performance and explainability. Furthermore, the thesis uses these models to investigate what characterizes loan applicants who are likely to default. The comparison showed that neither model performed significantly better than the other in terms of predictive performance. The characteristics of bad loan applicants differed between the two algorithms; three important aspects were the applicant's age, where they lived and whether they had a residential phone. Regarding explainability, several advantages of SHAP were observed. With SHAP, explanations can be produced on both a local and a global level. SHAP also offers a way to take advantage of the high performance of many modern machine learning algorithms while fulfilling today's increased requirement of transparency.
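As a simplified stand-in for the comparison above (using the forest's built-in impurity importances instead of the third-party SHAP library), the two notions of explanation can be put side by side:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic credit-style data; features and labels are hypothetical.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)

logreg = LogisticRegression(max_iter=1000).fit(X, y)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

coefs = logreg.coef_.ravel()               # signed weights: global explanation
importances = forest.feature_importances_  # unsigned, impurity-based
```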
APA, Harvard, Vancouver, ISO, and other styles
8

Jonsson, Estrid, and Sara Fredrikson. "An Investigation of How Well Random Forest Regression Can Predict Demand : Is Random Forest Regression better at predicting the sell-through of close to date products at different discount levels than a basic linear model?" Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302025.

Full text
Abstract:
As the climate crisis continues to evolve, many companies focus their development on becoming more sustainable. With greenhouse gases highlighted as the main problem, food waste has received a great deal of attention after being named the third largest contributor to global emissions. One way retailers have attempted to improve is by offering close-to-date produce at a discount, thereby decreasing the amount of food thrown away. To minimize waste the level of discount must be optimized, and since the products can be seen as flawed, the known price-to-demand relation of the products may be insufficient. The optimization process has historically involved generalized linear regression models; demand, however, is a complex concept influenced by many factors. This report investigates whether a machine learning model, Random Forest Regression, is better at estimating the demand for close-to-date products at different discount levels than a basic linear regression model. The discussion also includes an analysis of whether discounts always increase the willingness to buy and whether this depends on product type. The results show that Random Forest considers the many factors influencing demand to a greater extent and is the superior predictor in this case. Furthermore, it was concluded that there is generally no clear linear relation, but that this depends on product type, as certain categories showed some linearity.
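A toy version of the model comparison above: a random forest against a straight-line fit on a hypothetical nonlinear discount-to-demand curve (synthetic data, in-sample error only):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.RandomState(0)
discount = rng.uniform(0, 0.5, size=(500, 1))              # discount level
# Hypothetical nonlinear demand response to discount, plus noise
demand = 20 + 80 * discount.ravel() ** 2 + rng.normal(0, 2, 500)

linear = LinearRegression().fit(discount, demand)
forest = RandomForestRegressor(n_estimators=200,
                               random_state=0).fit(discount, demand)

mse_lin = mean_squared_error(demand, linear.predict(discount))
mse_rf = mean_squared_error(demand, forest.predict(discount))
```

The forest's flexibility lets it track the curvature the linear model misses, mirroring the report's conclusion, though a fair comparison would of course use held-out data.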
APA, Harvard, Vancouver, ISO, and other styles
9

Maginnity, Joseph D. "Comparing the Uses and Classification Accuracy of Logistic and Random Forest Models on an Adolescent Tobacco Use Dataset." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586997693789325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Braff, Pamela Hope. "Not All Biomass is Created Equal: An Assessment of Social and Biophysical Factors Constraining Wood Availability in Virginia." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/63997.

Full text
Abstract:
Most estimates of wood supply do not reflect the true availability of wood resources. The availability of wood resources ultimately depends on collective wood harvesting decisions across the landscape. Both social and biophysical constraints impact harvesting decisions and thus the availability of wood resources. While most constraints do not completely inhibit harvesting, they may significantly reduce the probability of harvest. Realistic assessments of wood availability and distribution are needed for effective forest management and planning. This study focuses on predicting the probability of harvest at forested FIA plot locations in Virginia. Classification and regression trees, conditional inference trees, random forest, balanced random forest, conditional random forest, and logistic regression models were built to predict harvest as a function of social and biophysical availability constraints. All of the models were evaluated and compared to identify important variables constraining harvest, predict future harvests, and estimate the available wood supply. Variables related to population and resource quality appear to be the best predictors of future harvest. The balanced random forest and logistic regression models are recommended for predicting future harvests: the balanced random forest model is the best predictor, while the logistic regression model can be most easily shared and replicated. Both models were applied to predict harvest at recently measured FIA plots. Based on the probability of harvest, we estimate that between 2012 and 2017, 10-21 percent of total wood volume on timberland will be available for harvesting.
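A balanced random forest in the spirit of the models above can be approximated in scikit-learn via class weighting (the thesis's exact balancing procedure may differ); harvested plots are the rare class in this synthetic stand-in:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Imbalanced two-class problem: ~10% of plots are "harvested"
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           random_state=0)

clf = RandomForestClassifier(n_estimators=100,
                             class_weight="balanced_subsample",
                             random_state=0).fit(X, y)
p_harvest = clf.predict_proba(X)[:, 1]   # per-plot harvest probability
```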
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
11

Bandreddy, Neel Kamal. "Estimation of Unmeasured Radon Concentrations in Ohio Using Quantile Regression Forest." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1418311498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Pettersson, Anders. "High-Dimensional Classification Models with Applications to Email Targeting." Thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168203.

Full text
Abstract:
Email communication is valuable for any modern company, since it offers an easy means of spreading important information or advertising new products, features or offers. Being able to identify which customers would be interested in certain information would make it possible to significantly improve a company's email communication, avoiding that customers start ignoring messages and creating unnecessary badwill. This thesis focuses on targeting customers by applying statistical learning methods to historical data provided by the music streaming company Spotify. An important aspect was the high dimensionality of the data, which places certain demands on the applied methods. A binary classification model was created, where the target was whether a customer would open the email or not. Two approaches were used to target the customers, logistic regression, both with and without regularization, and a random forest classifier, chosen for their ability to handle high-dimensional data. The predictive accuracy of the suggested models was then evaluated on both a training set and a test set using statistical validation methods such as cross-validation, ROC curves and lift charts. The models were studied under both large-sample and high-dimensional scenarios. The high-dimensional scenario represents the case where the number of observations, N, is of the same order as the number of features, p, and the large-sample scenario represents N ≫ p. Lasso-based variable selection was performed for both scenarios, to study the informative value of the features. This study demonstrates that it is possible to greatly improve the opening rate of emails by targeting users, even in the high-dimensional scenario. The results show that increasing the amount of training data over a thousandfold only improves performance marginally; rather, efficient customer targeting can be achieved by using a few highly informative variables selected by the Lasso regularization.
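The Lasso-based variable selection in the high-dimensional scenario (N of the same order as p) can be sketched with an L1-penalised logistic regression on synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# High-dimensional setting: p comparable to N, few informative features
X, y = make_classification(n_samples=300, n_features=300, n_informative=10,
                           random_state=0)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
n_selected = int(np.sum(lasso.coef_ != 0))   # features with non-zero weight
```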
APA, Harvard, Vancouver, ISO, and other styles
13

Lind, Sebastian. "Ensemble approach to prediction of initial velocity centered around random forest regression and feed forward deep neural networks." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-79956.

Full text
Abstract:
Prediction of the initial velocity of an artillery system is a quantity that is hard to determine with statistical and analytical models. Machine learning is therefore tested, in order to achieve higher accuracy than the current method (the baseline). An ensemble approach is explored in this paper, centered around a feed-forward deep neural network and random forest regression. Furthermore, collinearity of features and their importance is investigated. The impact of the measured error on the range of the projectile is also derived by finding a numerical solution with the Newton-Raphson method. For the five systems that test data was used on, the mean absolute errors were 26, 9.33, 8.72 and 9.06 for deep neural networks, random forest regression, ensemble learning and the conventional method, respectively. For future work, more models should be tested with ensemble learning, and the feature space of the input data should be investigated.
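A minimal version of such an ensemble, assuming simple prediction averaging (the paper's exact combination rule is not reproduced here), with scikit-learn stand-ins for the two base models:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=400, n_features=8, noise=5.0, random_state=0)

# Average the random forest and feed-forward network predictions
ensemble = VotingRegressor([
    ("forest", RandomForestRegressor(n_estimators=100, random_state=0)),
    ("net", MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0)),
])
ensemble.fit(X, y)
pred = ensemble.predict(X[:3])
```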
APA, Harvard, Vancouver, ISO, and other styles
14

Jovanovic, Filip, and Paul Singh. "Modelling default probabilities: The classical vs. machine learning approach." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273570.

Full text
Abstract:
Fintech companies that offer Buy Now, Pay Later products are heavily dependent on accurate default probability models, since these companies bear the risk of customers not fulfilling their obligations. In order to minimize the losses incurred when customers default, several machine learning algorithms can be applied, but in an era in which machine learning is gaining popularity, there is a vast number of algorithms to select from. This thesis aims to address this issue by applying three fundamentally different machine learning algorithms in order to find the best algorithm according to a selection of chosen metrics such as ROC AUC and precision-recall AUC. The algorithms compared are Logistic Regression, Random Forest and CatBoost. All of these algorithms were benchmarked against Klarna's current XGBoost model. The results indicate that the CatBoost model is the optimal one according to the main metric of comparison, the ROC AUC score: the CatBoost model outperformed the Logistic Regression model by seven percentage points, the Random Forest model by three percentage points and the XGBoost model by one percentage point.
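The metric-driven comparison above can be sketched with scikit-learn alone, using gradient boosting as a stand-in for the CatBoost/XGBoost models (synthetic imbalanced data, ROC AUC as the headline metric):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced default data: ~15% of customers default
X, y = make_classification(n_samples=2000, weights=[0.85, 0.15],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

aucs = {}
for name, model in [("logreg", LogisticRegression(max_iter=1000)),
                    ("forest", RandomForestClassifier(random_state=0)),
                    ("boosting", GradientBoostingClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```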
APA, Harvard, Vancouver, ISO, and other styles
15

Barr, Kajsa, and Hampus Pettersson. "Predicting and Explaining Customer Churn for an Audio/e-book Subscription Service using Statistical Analysis and Machine Learning." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252723.

Full text
Abstract:
The current technology shift has contributed to increased consumption of media and entertainment through various mobile devices, and especially through subscription-based services. Storytel is a company offering a subscription-based streaming service for audiobooks and e-books, and has grown rapidly in the last couple of years. When operating in a competitive market, however, it is of great importance to understand the behavior and demands of the customer base. It has been shown that it is more profitable to retain existing customers than to acquire new ones, which is why a large focus should be directed towards preventing customers from leaving the service, that is, preventing customer churn. One way to approach this problem is to apply statistical analysis and machine learning in order to identify patterns and customer behavior in data. In this thesis, the models logistic regression and random forest are used with the aim of both predicting and explaining churn in the early stages of a customer's subscription. The models are tested together with the feature selection methods Elastic Net, RFE and PCA, as well as with the oversampling method SMOTE. One main finding is that the best predictive model is obtained by using random forest together with RFE, producing a precision score of 0.2427 and a recall score of 0.7699. The other main finding is that the explanatory model is given by logistic regression together with Elastic Net, where significant regression coefficient estimates can be used to explain patterns associated with churn and give useful findings from a business perspective.
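Recursive feature elimination (RFE), one of the selection methods named above, is available directly in scikit-learn; a sketch on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, n_informative=5,
                           random_state=0)

# Repeatedly drop the weakest feature until 5 remain
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=5)
selector.fit(X, y)
mask = selector.support_        # boolean mask of the retained features
```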
APA, Harvard, Vancouver, ISO, and other styles
16

Wilhelmsson, Mikael, and Laban Ögren. "Jämförelse av Ordinal regression och Random Forest för att prediktera utfall efter stroke : En studie baserat på data från Riksstroke." Thesis, Umeå universitet, Statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184935.

Full text
Abstract:
The aim of the study is to examine outcomes after stroke using two statistical models. More specifically, it is of interest to predict the risk of dying or suffering impaired functional ability after a stroke, and also to compare the predictive performance of the two models. To investigate this, an ordinal regression model and a random forest model were applied to data from the Swedish stroke register Riksstroke. Both models produce good predictions, with a test error of about 25 percent. The results show no major differences between the models in terms of predictive performance. Since the results are similar, other aspects are also weighed into the comparison. Ordinal regression offers highly interpretable model coefficients, whereas random forest is harder to interpret. The modeling process is also taken into account, where ordinal regression requires more manual work than random forest.
APA, Harvard, Vancouver, ISO, and other styles
17

Wagner, Christopher. "Regression Model to Project and Mitigate Vehicular Emissions in Cochabamba, Bolivia." University of Dayton / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1501719312999566.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Kerek, Hanna. "Product Similarity Matching for Food Retail using Machine Learning." Thesis, KTH, Matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-273606.

Full text
Abstract:
Product similarity matching for food retail is studied in this thesis. The goal is to find products that are similar but not necessarily of the same brand, which can be used as replacements for a product that is out of stock or does not exist in a specific store. The aim of the thesis is to examine which machine learning model is best suited to perform the product similarity matching. The product data used for training the models were name, description, nutrients, weight and filters (labels, for example organic). Product similarity matching was performed pairwise, and the similarity between the products was measured by Jaccard distance for text attributes and relative difference for numeric values. Random Forest, Logistic Regression and Support Vector Machines were tested and compared to a baseline. The baseline computed the Jaccard distance for the product names only and did the classification based on a threshold value of that distance. The result was measured by accuracy, F-measure and AUC score. Random Forest performed best in terms of all evaluation metrics, and Logistic Regression, Random Forest and Support Vector Machines all performed better than the baseline.
This report studies product similarity matching for food retail. The goal is to find products that are similar but do not necessarily share the same brand, which can serve as replacement products for a product that is sold out or not sold in a specific store. The aim of this report is to examine which machine learning model is best suited for product similarity matching. The product data used to train the models were name, description, nutrients, weight and labels (for example organic). Product matching was done pairwise, and similarity between products was computed using the Jaccard index for text attributes and relative difference for numeric values. Random Forest, logistic regression and Support Vector Machines were tested and compared against a baseline. In the baseline, the Jaccard index was computed for the product names only, and classification was done using a threshold value on the Jaccard index. The result was measured by accuracy, F-measure and AUC. Random Forest performed best across all performance measures, and logistic regression, Random Forest and Support Vector Machines all gave better results than the baseline.
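The pairwise similarity features described above (Jaccard distance over text attributes, relative difference over numeric attributes) can be sketched in a few lines; the product names and weights below are invented examples, not data from the thesis:

```python
# Toy sketch of the two pairwise similarity features used in the abstract.
def jaccard_distance(a: str, b: str) -> float:
    """1 - |A ∩ B| / |A ∪ B| over the word sets of two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 0.0
    return 1.0 - len(sa & sb) / len(sa | sb)

def relative_difference(x: float, y: float) -> float:
    """|x - y| scaled by the larger magnitude (0 when both are 0)."""
    denom = max(abs(x), abs(y))
    return abs(x - y) / denom if denom else 0.0

# One feature vector for a candidate replacement pair (hypothetical products):
features = [
    jaccard_distance("organic whole milk 3%", "organic milk 1.5%"),  # name
    relative_difference(1000.0, 1500.0),                             # weight in grams
]
print(features)
```

Vectors like `features` (extended with description, nutrients and label comparisons) would then be fed to the Random Forest, logistic regression or SVM classifier, while the baseline thresholds the name distance alone.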
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, Xiaoyang. "Machine Learning Models in Fullerene/Metallofullerene Chromatography Studies." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/93737.

Full text
Abstract:
Machine learning methods are now extensively applied in various scientific research areas to build models. Unlike conventional models, machine learning based models take a data-driven approach: the algorithms can learn, from available data, knowledge that is otherwise hard to recognize. Data-driven approaches enhance the role of algorithms and computers and accelerate computation by offering alternative views of a problem. In this thesis, we explore the possibility of applying machine learning models to the prediction of chromatographic retention behaviors. Chromatographic separation is a key technique for the discovery and analysis of fullerenes. In previous studies, differential equation models have achieved great success in predicting chromatographic retention. However, most differential equation models require experimental measurements or theoretical computations for many parameters, which are not easy to obtain. Fullerenes/metallofullerenes are rigid, spherical molecules containing only carbon atoms, which makes predicting their chromatographic retention behavior, as well as other properties, much simpler than for more flexible molecules with greater conformational variation. In this thesis, I propose that the polarizability of a fullerene molecule can be estimated directly from its structure. Structural motifs are used to simplify the model, and the models with motifs provide satisfactory predictions. The data set contains 31947 isomers and their polarizability data and is split into a training set with 90% of the data points and a complementary testing set. In addition, a second testing set of large fullerene isomers is also prepared; it is used to test whether a model can be trained on small fullerenes and still give accurate predictions on large fullerenes.
Machine learning models can be applied in a wide range of areas, including scientific research. In this thesis, machine learning models are applied to predict the chromatography behavior of fullerenes based on their molecular structures. Chromatography is a common technique for separating mixtures; the separation arises from differences in the interactions between molecules and a stationary phase. In real experiments, a mixture usually contains a large family of different compounds, and it requires a lot of work and resources to isolate the target compound. Therefore, models are extremely important for studies of chromatography. Traditional models are built on physical principles and involve several parameters. These physical parameters are measured experimentally or computed theoretically; however, both approaches are time-consuming and not easy to carry out. For fullerenes, my previous studies have shown that the chromatography model can be simplified so that only one parameter, polarizability, is required. A machine learning approach is introduced to enhance the model by predicting the molecular polarizabilities of fullerenes from their structures. The structure of a fullerene is represented by several local structures. Several types of machine learning models are built and tested on our data set, and the results show that a neural network gives the best predictions.
APA, Harvard, Vancouver, ISO, and other styles
20

Lundström, Love, and Oscar Öhman. "Machine Learning in credit risk : Evaluation of supervised machine learning models predicting credit risk in the financial sector." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-164101.

Full text
Abstract:
When banks lend money to another party, they face a risk that the borrower will not fulfill its obligation towards the bank. This risk is called credit risk, and it is the largest risk banks face. According to the Basel accords, banks need to hold a certain amount of capital to protect themselves against future financial crises. This amount is calculated for each loan with an attached risk-weighted asset, RWA. The main parameters in RWA are the probability of default and the loss given default. Banks are today allowed to use their own internal models to calculate these parameters. Since holding capital that earns no interest is a great cost, banks seek tools to better predict the probability of default and thereby lower the capital requirement. Machine learning with supervised algorithms such as logistic regression, neural networks, decision trees and random forest can be used to assess credit risk. By training algorithms on historical data with known outcomes, the parameter probability of default (PD) can be determined with a higher degree of certainty than with traditional models, leading to a lower capital requirement. On the data set used in this thesis, logistic regression is the algorithm with the highest accuracy in classifying customers into the right category. However, it produces many false positives, meaning the model predicts that a customer will honour its obligation when in fact the customer defaults, which comes at a great cost for the banks. By implementing a cost function to minimize this error, we found that the neural network has the lowest false positive rate and is therefore the model best suited for this specific classification task.
When banks lend money to another party, a risk arises that the borrower will not fulfill their obligation to the bank. This risk is called credit risk and is the largest risk a bank faces. Under the Basel regulations, a bank must set aside a certain amount of capital for every loan it issues in order to protect itself against future financial crises. This amount is calculated for each individual loan with an associated risk weight, RWA. The main parameters in RWA are the probability that a customer cannot repay the loan and the amount the bank then loses. Today, banks may use internal models to estimate these parameters. Since tied-up capital entails large costs for banks, they strive to find better tools for estimating the probability that a customer defaults, and thereby reduce their capital requirement. Banks have therefore begun to look at the possibility of using machine learning algorithms to estimate these parameters. Machine learning algorithms such as logistic regression, neural networks, decision trees and random forest can be used to determine credit risk. By training algorithms on historical data with known outcomes, the probability that a customer does not repay the loan (PD) can be determined with higher certainty than with traditional methods. On the data this thesis is based on, logistic regression turns out to be the algorithm with the highest accuracy in classifying customers into the right category. However, this algorithm classifies many customers as false positives, meaning that it predicts that many customers will repay their loans when in fact they do not, which entails a large cost for the banks.
By instead evaluating the models with a cost function introduced to reduce this error, we find that the neural network has the lowest false positive rate and is thus the model best suited to this specific classification task.
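The cost-based evaluation mentioned above can be illustrated with a toy sketch; the labels, predictions and cost weights are assumptions, not figures from the thesis:

```python
# Sketch of scoring classifiers by misclassification cost instead of accuracy.
# Label convention: 1 = customer defaults, 0 = customer repays.
COST_FP = 10.0  # cost of granting a loan that defaults (assumed weight)
COST_FN = 1.0   # cost of rejecting a loan that would have been repaid (assumed)

def expected_cost(y_true, y_pred):
    """Average misclassification cost over all customers."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed default
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # lost customer
    return (COST_FP * fp + COST_FN * fn) / len(y_true)

y_true      = [1, 1, 0, 0, 0, 0, 0, 0]
pred_logreg = [0, 0, 0, 0, 0, 0, 0, 1]  # misses both defaults
pred_nn     = [1, 0, 1, 1, 0, 0, 0, 0]  # same raw accuracy, fewer missed defaults
print(expected_cost(y_true, pred_logreg))  # 2.625
print(expected_cost(y_true, pred_nn))      # 1.5
```

Both hypothetical models make three errors, so plain accuracy cannot separate them; the cost function prefers the one that misses fewer defaults, mirroring the thesis's preference for the neural network.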
APA, Harvard, Vancouver, ISO, and other styles
21

Aghi, Nawar, and Ahmad Abdulal. "House Price Prediction." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-20945.

Full text
Abstract:
This study proposes a performance comparison between machine learning regression algorithms and an Artificial Neural Network (ANN). The regression algorithms used in this study are multiple linear regression, the Least Absolute Shrinkage and Selection Operator (Lasso), Ridge regression and Random Forest. Moreover, this study attempts to analyse the correlation between variables to determine the most important factors that affect house prices in Malmö, Sweden. Two datasets are used in this study, called public and local. They contain house prices from Ames, Iowa, United States and Malmö, Sweden, respectively. The accuracy of the prediction is evaluated by checking the R-squared and root mean square error scores of the training model. The test is performed after applying the required pre-processing methods and splitting the data into two parts; one part is used for training and the other in the test phase. We have also presented a binning strategy that improved the accuracy of the models. This thesis shows that Lasso gives the best score among the algorithms when the public dataset is used in training. The correlation graphs show the variables' level of dependency. In addition, the empirical results show that the crime, deposit, lending, and repo rates influence house prices negatively, whereas inflation, year, and the unemployment rate impact house prices positively.
APA, Harvard, Vancouver, ISO, and other styles
22

Amlathe, Prakhar. "Standard Machine Learning Techniques in Audio Beehive Monitoring: Classification of Audio Samples with Logistic Regression, K-Nearest Neighbor, Random Forest and Support Vector Machine." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7050.

Full text
Abstract:
Honeybees are one of the most important pollinating species in agriculture; three out of every four crops have the honeybee as their sole pollinator. Since 2006 there has been a drastic decrease in the bee population, which is attributed to Colony Collapse Disorder (CCD). Bee colonies fail or die without showing any traditional health symptoms that could otherwise alert beekeepers to their situation in advance. An electronic beehive monitoring system has various sensors embedded in it to extract video, audio and temperature data that can provide critical information on colony behavior and health without invasive beehive inspections. Previously, significant patterns and information have been extracted by processing the video/image data, but no work has been done using the audio data. This research takes the first step towards the use of audio data in the electronic beehive monitoring system BeePi by enabling a path towards the automatic classification of audio samples into different classes and categories. The experimental results give initial support to the claim that monitoring bee buzzing signals from the hive is feasible, that it can be a good indicator for estimating hive health, and that it can help differentiate normal honeybee behavior from deviations.
APA, Harvard, Vancouver, ISO, and other styles
23

Hedén, William. "Predicting Hourly Residential Energy Consumption using Random Forest and Support Vector Regression : An Analysis of the Impact of Household Clustering on the Performance Accuracy." Thesis, KTH, Matematisk statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-187873.

Full text
Abstract:
The recent increase of smart meters in the residential sector has led to large available datasets. The electricity consumption of individual households can be accessed in close to real time, and allows both the demand and supply side to extract valuable information for efficient energy management. Predicting electricity consumption should help utilities improve generation planning and demand side management; however, this is not a trivial task, as consumption at the individual household level is highly irregular. In this thesis the problem of improving load forecasting is addressed using two machine learning methods, Support Vector Machines for regression (SVR) and Random Forest. For a customer base consisting of 187 households in Austin, Texas, predictions are made on three spatial scales: (1) individual household level, (2) aggregate level, (3) clusters of similar households according to their daily consumption profile. Results indicate that using Random Forest with K = 32 clusters yields the most accurate results in terms of the coefficient of variation. In an attempt to improve the aggregate model, it was shown that by adding features describing the clusters' historic load, the performance of the aggregate model was improved using Random Forest with information added based on the grouping into K = 3 clusters. The extended aggregate model did not outperform the cluster-based models. The work has been carried out at the Swedish company Watty. Watty performs energy disaggregation and management, allowing the energy usage of entire homes to be diagnosed in detail.
The recent increase in smart electricity meters in the residential sector means that we have access to large amounts of data. A household's total electricity consumption is available in near real time, allowing both the supply side and the demand side to use the information for efficient energy management. Predicting electricity consumption should help utilities improve generation planning and demand-side management. However, this is not a trivial task, as electricity consumption at the individual household level is highly irregular. This master's thesis proposes using two well-known machine learning algorithms to address the problem of improving load forecasts: Support Vector Machines for regression (SVR) and Random Forest. For a customer base of 187 households in Austin, Texas, we make forecasts using three approaches: (1) individual households, (2) the aggregate level, (3) clusters of similar households according to their daily consumption profile. The results show that Random Forest with K = 32 clusters gives the most accurate results in terms of the coefficient of variation. In an attempt to improve the aggregate model, it turned out that by adding further predictor variables describing the clusters' historical load, precision could be improved using Random Forest with information from K = 3 clusters. The improved aggregate model did not outperform the cluster-based models. The work was carried out at the Swedish company Watty. Watty performs energy disaggregation and energy management, allowing the energy use of homes to be analyzed in detail.
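The coefficient of variation used as the accuracy measure above is commonly defined in load forecasting as the prediction RMSE normalised by the mean load; a minimal sketch with made-up numbers:

```python
# Sketch of CV(RMSE): root-mean-squared forecast error divided by mean load.
import math

def cv_rmse(actual, predicted):
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))
    mean_load = sum(actual) / len(actual)
    return rmse / mean_load

# Two hourly loads of 2 kW, forecast as 1 kW and 3 kW -> RMSE 1, mean 2.
print(cv_rmse([2.0, 2.0], [1.0, 3.0]))  # 0.5
```

Normalising by the mean makes forecasts comparable across the three spatial scales (household, cluster, aggregate), whose absolute loads differ by orders of magnitude.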
APA, Harvard, Vancouver, ISO, and other styles
24

Bitara, Matúš. "Srovnání heuristických a konvenčních statistických metod v data miningu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-400833.

Full text
Abstract:
The thesis deals with the comparison of conventional and heuristic methods in data mining used for binary classification. In the theoretical part, four different models are described, and model classification is demonstrated on simple examples. In the practical part, the models are compared on real data. This part also covers data cleaning, outlier removal, two different transformations and dimensionality reduction. The last part describes the methods used to test the quality of the models.
APA, Harvard, Vancouver, ISO, and other styles
25

Varatharajah, Thujeepan, and Eriksson Victor. "A comparative study on artificial neural networks and random forests for stock market prediction." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186452.

Full text
Abstract:
This study investigates the predictive performance of two different machine learning (ML) models on the stock market and compares the results. The chosen models are based on artificial neural networks (ANN) and random forests (RF). The models are trained on two separate data sets and the predictions are made on the next day's closing price. The input vectors of the models consist of 6 different financial indicators which are based on the closing prices of the past 5, 10 and 20 days. The performance evaluation is done by analyzing and comparing values such as the root mean squared error (RMSE) and mean absolute percentage error (MAPE) for the test period. Specific behavior in subsets of the test period is also analyzed to evaluate the consistency of the models. The results showed that the ANN model performed better than the RF model, as it had lower errors relative to the actual prices throughout the test period and thus made more accurate predictions overall.
This study investigates how well two different machine learning (ML) models can predict the stock market and then compares their results. The chosen models are based on artificial neural networks (ANN) and random forests (RF). The models are trained on two separate data sets, and forecasts are made for the next day's closing price. The input to the models consists of 6 financial indicators based on the closing prices of the past 5, 10 and 20 days. Performance is evaluated by analyzing and comparing values such as the root mean squared error (RMSE) and the mean absolute percentage error (MAPE) over the test period. Specific trends in subsets of the test period are also examined to evaluate the consistency of the models. The results showed that the ANN model performed better than the RF model since, over the whole test period, it showed smaller errors relative to the actual values and thus made more accurate forecasts.
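The two error measures used above, RMSE and MAPE, are straightforward to compute; a minimal sketch with made-up prices:

```python
# Sketch of the two forecast-error metrics named in the abstract.
import math

def rmse(actual, predicted):
    """Root mean squared error of next-day price forecasts."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(actual) * sum(abs((a - p) / a)
                                     for a, p in zip(actual, predicted))

# Hypothetical closing prices vs. model forecasts:
actual, forecast = [100.0, 200.0], [110.0, 190.0]
print(rmse(actual, forecast))  # 10.0
print(mape(actual, forecast))  # 7.5
```

RMSE is in price units and penalises large misses, while MAPE is scale-free, which is why studies like this one usually report both.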
APA, Harvard, Vancouver, ISO, and other styles
26

Lood, Olof. "Prediktering av grundvattennivåi område utan grundvattenrör : Modellering i ArcGIS Pro och undersökningav olika miljövariablers betydelse." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-448020.

Full text
Abstract:
The authority Geological Survey of Sweden (SGU) has a national responsibility for monitoring Sweden's groundwater levels. Since it is not possible to achieve full coverage with measurement stations, the groundwater level must be computed at certain locations. It is therefore of interest to investigate the relationship between the groundwater level and selected geographical information, so-called environmental variables. In the long run, SGU may use machine learning to compute groundwater levels, and a pilot study can then be of great help. The purpose of this thesis is to carry out such a pilot study by examining which environmental variables matter most for the groundwater level and by mapping model uncertainties in groundwater prediction. The pilot study covers seven areas of SGU's groundwater network where the measurement stations are grouped in clusters. The pilot study uses supervised machine learning, which in this thesis means that median groundwater levels and the environmental variables are used to train the models. Using statistical output from the models, performance can be evaluated and adjustments made. The algorithm used is called Random Forest; it creates classification and regression trees that teach the model to make human-like decisions from the given input data. The models are set up in the ArcGIS Pro tool Forest-based Classification and Regression. Because of the geographical spread of the areas, several separate models are set up. The results show that it is possible to predict the groundwater level, but the importance of the different environmental variables varies among the seven areas examined. The cause is likely geographical differences. Most often, the absolute elevation and the slope direction of the ground are very important. Elevation and distance differences to soils of low and high permeability matter more than elevation and distance differences to soils of medium permeability. Elevation and distance differences matter more for larger watercourses than for smaller ones.
The models' r2 values are somewhat low but within reasonable limits for hydrological models. The standard errors are mostly within reasonable limits. Uncertainty is reported as a 90 percent confidence interval. The uncertainties increase with increasing distance to the measurement stations and are highest at high altitude. The cause is likely too few input observations and too few observations at high altitude. Close to measurement stations, in built-up areas and in valleys, the uncertainties are in most cases within reasonable limits.
The Swedish authority Geological Survey of Sweden (SGU) has a national responsibility to oversee the country's groundwater levels. A national network of measurement stations has been established to facilitate this, but the density of measurement stations varies considerably. Since it will never be feasible to cover the entire country with measurement stations, groundwater levels need to be computed in areas that are not in the near vicinity of a measurement station. For that reason, it is of interest to investigate the correlation between groundwater levels and selected geographical information, so-called environmental variables. In the future, SGU may use machine learning to compute groundwater levels. The focus of this master's thesis is to study the importance of the environmental variables and the model uncertainties in order to determine whether this is a feasible option for nationwide implementation. The study uses data from seven areas of the groundwater network of SGU, where the measuring stations lie in clusters. The pilot study uses supervised machine learning, which in this case means that the median groundwater levels and the environmental variables train the models. By evaluating the models' statistical output, performance can gradually be improved. The algorithm used is called Random Forest; it uses classification and regression trees that learn to make decisions through a network of nodes, branches and leaves based on the input data. The models are set up with the prediction tool Forest-based Classification and Regression in ArcGIS Pro. Because the areas are geographically spread out, eight separate models are set up. The results show that it is possible to predict groundwater levels with this method, but that the importance of the environmental variables varies between the areas used in this study. This may be due to geographical and topographical differences.
Most often, the absolute elevation above mean sea level and the slope direction are the most important variables. Planar and height distance differences to low- and high-permeability soils have medium-high importance, while distance differences to medium-permeability soils have lower importance. Planar and height distance differences to lakes and large watercourses are more important than those to small watercourses and ditches. The models' r2 values are somewhat low in theory but within reasonable limits for a hydrological model. The standard errors are also in most cases within reasonable limits. The uncertainty is displayed as a 90% confidence interval. The uncertainties increase with increasing distance to the measuring stations and are greatest at high altitude. This may be due to having too few observations, especially in areas at high altitude. The uncertainties are smaller close to the stations and in valleys.
SGUs grundvattennät
APA, Harvard, Vancouver, ISO, and other styles
27

Kamal, Adib, and Kenan Sabani. "Modeling Success Factors for Start-ups in Western Europe through a Statistical Learning Approach." Thesis, KTH, Industriell ekonomi och organisation (Inst.), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296527.

Full text
Abstract:
The purpose of this thesis was to use a quantitative method to expand on previous research in the field of start-up success prediction. This was accomplished by including more criteria in the study, which was made possible by the Crunchbase database, the largest available information source for start-ups. Furthermore, the data used in this thesis was limited to Western European start-ups only, in order to study the effects on the prediction models of limiting the data to a certain geographical region, which to our knowledge has not been done before in this type of research. The quantitative method used was machine learning, and specifically the three machine learning predictors used in this thesis were Logistic Regression, Random Forest and K-Nearest Neighbor (KNN). All three models proposed and evaluated have better prediction accuracy than guessing the outcome at random. When tested on data previously unknown to the model, Random Forest produced the best results, predicting a successful company as a success and a failed company as a failure with 79 percent accuracy. With accuracies of 65 percent and 59 percent, respectively, Logistic Regression and K-Nearest Neighbor (KNN) were close behind.
The purpose of this thesis was to use a quantitative method to expand previous research on modeling success factors for start-ups through machine learning. This was accomplished by including more criteria in the study than has been done before, made possible by the Crunchbase database, the largest available information source for start-up companies. Furthermore, the data used in this thesis is limited to Western European start-ups only, in order to study the effects of restricting the data to a certain geographical region on the prediction models, which has not been done before in this type of research. The quantitative method used was machine learning, and specifically the three machine learning models used in this thesis were Logistic Regression, Random Forest and K-Nearest Neighbor (KNN). All three models included and evaluated have better prediction accuracy than guessing the outcome at random. When the models were tested on data previously unknown to them, Random Forest gave the best result, correctly predicting successful companies and failed companies with 79 percent accuracy. Close behind came K-Nearest Neighbor (KNN) and Logistic Regression with accuracies of 65 and 59 percent, respectively.
APA, Harvard, Vancouver, ISO, and other styles
28

GALLI, FABIAN. "Predicting PV self-consumption in villas with machine learning." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-300433.

Full text
Abstract:
In Sweden, there is a strong and growing interest in solar power. In recent years, photovoltaic (PV) system installations have increased dramatically and a large part are distributed grid connected PV systems i.e. rooftop installations. Currently the electricity export rate is significantly lower than the import rate which has made the amount of self-consumed PV electricity a critical factor when assessing the system profitability. Self-consumption (SC) is calculated using hourly or sub-hourly timesteps and is highly dependent on the solar patterns of the location of interest, the PV system configuration and the building load. As this varies for all potential installations it is difficult to make estimations without having historical data of both load and local irradiance, which is often hard to acquire or not available. A method to predict SC using commonly available information at the planning phase is therefore preferred.  There is a scarcity of documented SC data and only a few reports treating the subject of mapping or predicting SC. Therefore, this thesis is investigating the possibility of utilizing machine learning to create models able to predict the SC using the inputs: Annual load, annual PV production, tilt angle and azimuth angle of the modules, and the latitude. With the programming language Python, seven models are created using regression techniques, using real load data and simulated PV data from the south of Sweden, and evaluated using coefficient of determination (R2) and mean absolute error (MAE). The techniques are Linear Regression, Polynomial regression, Ridge Regression, Lasso regression, K-Nearest Neighbors (kNN), Random Forest, Multi-Layer Perceptron (MLP), as well as the only other SC prediction model found in the literature. A parametric analysis of the models is conducted, removing one variable at a time to assess the model’s dependence on each variable.  
The results are promising, with five out of eight models achieving an R2 value above 0.9, which can be considered good for predicting SC. The best performing model, random forest, has an R2 of 0.985 and an MAE of 0.0148. The parametric analysis also shows that while more input data is helpful, using only annual load and PV production is sufficient to make good predictions. However, this holds only for the southern region of Sweden and is not applicable to areas outside the latitudes or country tested.
I Sverige finns ett starkt och växande intresse för solenergi. De senaste åren har antalet solcellsanläggningar ökat dramatiskt och en stor del är distribuerade nätanslutna solcellssystem, dvs takinstallationer. För närvarande är elexportpriset betydligt lägre än importpriset, vilket har gjort mängden egenanvänd solel till en kritisk faktor vid bedömningen av systemets lönsamhet. Egenanvändning (EA) beräknas med tidssteg upp till en timmes längd och är i hög grad beroende av solstrålningsmönstret för platsen av intresse, PV-systemkonfigurationen och byggnadens energibehov. Eftersom detta varierar för alla potentiella installationer är det svårt att göra uppskattningar utan att ha historiska data om både energibehov och lokal solstrålning, vilket ofta inte är tillgängligt. En metod för att förutsäga EA med allmän tillgänglig information är därför att föredra.  Det finns en brist på dokumenterad EA-data och endast ett fåtal rapporter som behandlar kartläggning och prediktion av EA. I denna uppsats undersöks möjligheten att använda maskininlärning för att skapa modeller som kan förutsäga EA. De variabler som ingår är årlig energiförbrukning, årlig solcellsproduktion, lutningsvinkel och azimutvinkel för modulerna och latitud. Med programmeringsspråket Python skapas sju modeller med hjälp av olika regressionstekniker, där energiförbruknings- och simulerad solelproduktionsdata från södra Sverige används. Modellerna utvärderas med hjälp av determinationskoefficienten (R2) och mean absolute error (MAE). Teknikerna som används är linjär regression, polynomregression, Ridge regression, Lasso regression, K-nearest neighbor regression, Random Forest regression, Multi-Layer Perceptron regression. En additionell linjär regressions-modell skapas även med samma metodik som används i en tidigare publicerad rapport. En parametrisk analys av modellerna genomförs, där en variabel exkluderas åt gången för att bedöma modellens beroende av varje enskild variabel.  
Resultaten är mycket lovande, där fem av de åtta undersökta modeller uppnår ett R2-värde över 0,9. Den bästa modellen, Random Forest, har ett R2 på 0,985 och ett MAE på 0,0148. Den parametriska analysen visar också att även om ingångsdata är till hjälp, är det tillräckligt att använda årlig energiförbrukning och årlig solcellsproduktion för att göra bra förutsägelser. Det måste dock påpekas att modellprestandan endast är tillförlitlig för södra Sverige, från var beräkningsdata är hämtad, och inte tillämplig för områden utanför de valda latituderna eller land.
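The random-forest approach the abstract describes can be sketched in a few lines. The sketch below is a minimal illustration on synthetic data: the five input features match those named in the abstract, but their value ranges and the toy SC target are assumptions, not the thesis's dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-ins for the five inputs named in the abstract:
# annual load (kWh), annual PV production (kWh), tilt (deg),
# azimuth (deg, 0 = south), latitude (deg, southern Sweden).
X = np.column_stack([
    rng.uniform(3000, 20000, n),
    rng.uniform(2000, 15000, n),
    rng.uniform(0, 60, n),
    rng.uniform(-90, 90, n),
    rng.uniform(55, 58, n),
])
# Toy target: SC falls as the production/load ratio grows (plus noise).
sc = np.clip(1.0 / (1.0 + X[:, 1] / X[:, 0]) + rng.normal(0, 0.02, n), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, sc, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2  = {r2_score(y_te, pred):.3f}")
print(f"MAE = {mean_absolute_error(y_te, pred):.3f}")
```

Scoring with both R2 and MAE mirrors the evaluation used in the thesis; on real data the parametric analysis would drop one input column at a time and refit.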
APA, Harvard, Vancouver, ISO, and other styles
29

Svensson, William. "CAN STATISTICAL MODELS BEAT BENCHMARK PREDICTIONS BASED ON RANKINGS IN TENNIS?" Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447384.

Full text
Abstract:
The aim of this thesis is to beat a benchmark prediction accuracy of 64.58 percent based on player rankings on the ATP tour in tennis, i.e. the prediction that the better-ranked player in a match wins. Three statistical models are used: logistic regression, random forest and XGBoost. The data cover the years 2000-2010 and comprise over 60 000 observations with 49 variables each. After the data were prepared, new variables were created from the differences between the two players at hand, and all three statistical models outperformed the benchmark prediction. All three models had an accuracy of around 66 percent, with logistic regression performing best at 66.45 percent. The most important variables overall for the models were the total win rate on different surfaces, the total win rate and the rank.
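The benchmark-versus-model comparison described above can be illustrated as follows. The sketch is a toy version with simulated matches: the difference features (rank, overall win rate, surface win rate) and the outcome model are assumptions chosen only to mimic the setup, not the thesis's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20000

# Hypothetical per-match difference features (player A minus player B).
rank_diff = rng.normal(0, 50, n)       # negative = A is better ranked
winrate_diff = rng.normal(0, 0.1, n)   # A's total win rate minus B's
surface_diff = rng.normal(0, 0.1, n)   # win-rate difference on today's surface

# Toy outcome: all three differences influence who wins.
logit = -0.01 * rank_diff + 4 * winrate_diff + 3 * surface_diff
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # 1 = A wins

X = np.column_stack([rank_diff, winrate_diff, surface_diff])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Benchmark: the better-ranked player is deemed the winner.
benchmark = accuracy_score(y_te, (X_te[:, 0] < 0).astype(int))
model_acc = accuracy_score(y_te,
                           LogisticRegression().fit(X_tr, y_tr).predict(X_te))
print(f"rank benchmark: {benchmark:.3f}  logistic regression: {model_acc:.3f}")
```

Because the model sees win-rate information the rank-only benchmark ignores, it should come out ahead, which is the pattern the thesis reports.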
APA, Harvard, Vancouver, ISO, and other styles
30

Ekman, Björn. "Machine Learning for Beam Based Mobility Optimization in NR." Thesis, Linköpings universitet, Kommunikationssystem, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-136489.

Full text
Abstract:
One option for enabling mobility between 5G nodes is to use a set of area-fixed reference beams in the downlink direction from each node. To save power, these reference beams should be turned on only on demand, i.e. only if a mobile needs them. A User Equipment (UE) moving out of a beam's coverage will require a switch from one beam to another, preferably without having to turn on all possible beams to find out which one is the best. This thesis investigates how to transform the beam selection problem into a format suitable for machine learning and how good such solutions are compared to baseline models. The baseline models considered were beam overlap and average Reference Signal Received Power (RSRP), both building beam-to-beam maps. Emphasis in the thesis was on handovers between nodes and finding the beam with the highest RSRP. Beam-hit-rate and RSRP-difference (selected minus best) were key performance indicators and were compared for different numbers of activated beams. The problem was modeled as a Multiple Output Regression (MOR) problem and as a Multi-Class Classification (MCC) problem. Both problems can be solved with the random forest model, which was the learning model of choice during this work. An Ericsson simulator was used to simulate and collect data from a seven-site scenario with 40 UEs. Primary features available were the current serving beam index and its RSRP. Additional features, like position and distance, were suggested, though many ended up being limited either by the simulated scenario or by the cost of acquiring the feature in a real-world scenario. Using primary features only, the learned models' performance was equal to or worse than that of the baseline models. Adding distance improved the performance considerably, beating the baseline models, but still leaving room for more improvements.
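The MOR formulation above, predicting the RSRP of every candidate beam and selecting the argmax, can be sketched with a multi-output random forest. Everything below is synthetic: the beam layout, feature set (serving beam, its RSRP, a distance proxy) and propagation model are assumptions standing in for the simulator data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, n_beams = 5000, 8

# Hypothetical features: serving beam index, its RSRP, and a distance proxy.
serving = rng.integers(0, n_beams, n)
serving_rsrp = rng.uniform(-110, -70, n)
distance = rng.uniform(50, 500, n)
X = np.column_stack([serving, serving_rsrp, distance])

# Toy targets: RSRP of every candidate beam; each beam "covers" a distance band.
centers = np.linspace(100, 450, n_beams)
Y = (-80 - 0.05 * np.abs(distance[:, None] - centers[None, :])
     + 0.2 * serving_rsrp[:, None] + rng.normal(0, 1, (n, n_beams)))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=2)
rf = RandomForestRegressor(n_estimators=100, random_state=2).fit(X_tr, Y_tr)
pred = rf.predict(X_te)

best_pred, best_true = pred.argmax(axis=1), Y_te.argmax(axis=1)
hit_rate = (best_pred == best_true).mean()
# RSRP difference KPI: selected beam minus truly best beam (0 is ideal).
rsrp_diff = (Y_te[np.arange(len(Y_te)), best_pred] - Y_te.max(axis=1)).mean()
print(f"beam hit rate: {hit_rate:.2f}, mean RSRP difference: {rsrp_diff:.2f} dB")
```

scikit-learn's `RandomForestRegressor` accepts a 2-D target directly, which makes the MOR variant a one-line change from single-output regression; the MCC variant would instead train a classifier on `Y.argmax(axis=1)`.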
APA, Harvard, Vancouver, ISO, and other styles
31

Olešová, Kristína. "Klasifikace stupně gliomů v MR datech mozku." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2020. http://www.nusl.cz/ntk/nusl-413113.

Full text
Abstract:
This thesis deals with the classification of glioma grade into high- and low-aggressiveness tumours and with overall survival prediction based on magnetic resonance imaging. The data used in this work come from the BraTS 2019 challenge, and each set contains information from four MRI weighting sequences. The thesis is implemented in the Python programming language and the Jupyter Notebook environment, and the PyRadiomics software is used to calculate image features. The goal of this work is to determine the best tumour region and weighting sequence for the calculation of image features, and consequently to select the set of features best suited for classifying tumour grade and predicting survival. Part of the thesis is dedicated to survival prediction using a set of statistical tests, specifically Cox regression.
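A radiomics-style grade classification pipeline like the one described, with many image features per patient, a feature-selection step and cross-validated scoring, could be sketched as follows. The feature matrix here is synthetic noise with a few informative columns; the counts and the choice of selector/classifier are illustrative assumptions, not the thesis's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n, n_features = 300, 100   # e.g. PyRadiomics texture/shape features per patient

X = rng.normal(size=(n, n_features))
y = rng.integers(0, 2, n)          # 0 = low-grade, 1 = high-grade glioma
X[y == 1, :10] += 0.8              # make the first 10 features informative

# Select a small feature subset, then classify; putting the selector inside
# the pipeline keeps selection inside each CV fold and avoids leakage.
clf = make_pipeline(SelectKBest(f_classif, k=10),
                    RandomForestClassifier(n_estimators=200, random_state=3))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```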
APA, Harvard, Vancouver, ISO, and other styles
32

Wirgen, Isak, and Douglas Rube. "Supervised fraud detection of mobile money transactions on different distributions of imbalanced data : A comparative study of the classification methods logistic regression, random forest, and support vector machine." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-446108.

Full text
Abstract:
The purpose of this paper is to compare the performance of the classification methods logistic regression, random forest, and support vector machine in detecting mobile money transaction fraud. Their performance is evaluated on different distributions of imbalanced data in a supervised framework, using a variety of metrics to capture full model performance. The results show that random forest attained the highest overall performance, followed by logistic regression. Support vector machine attained the worst overall performance and produced no useful classification of fraudulent transactions. In conclusion, the study suggests that better results could be achieved with actions such as improvements to the classification algorithms and better feature selection, among others.
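The need for "a variety of metrics" on imbalanced fraud data can be made concrete: at a 1% fraud rate, plain accuracy is near 0.99 even for useless classifiers, so precision, recall, F1 and AUC carry the real signal. The sketch below uses a synthetic imbalanced dataset as a stand-in for the transaction data and compares two of the three methods; the class ratio and features are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Synthetic stand-in for mobile money data: ~1% of transactions are fraud.
X, y = make_classification(n_samples=20000, n_features=10, weights=[0.99],
                           random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=4)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(random_state=4))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    proba = clf.predict_proba(X_te)[:, 1]
    # Accuracy alone is misleading here: predicting "legit" always scores ~0.99.
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"precision={precision_score(y_te, pred, zero_division=0):.3f} "
          f"recall={recall_score(y_te, pred, zero_division=0):.3f} "
          f"F1={f1_score(y_te, pred, zero_division=0):.3f} "
          f"AUC={roc_auc_score(y_te, proba):.3f}")
```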
APA, Harvard, Vancouver, ISO, and other styles
33

Thorén, Daniel. "Radar based tank level measurement using machine learning : Agricultural machines." Thesis, Linköpings universitet, Programvara och system, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176259.

Full text
Abstract:
Agriculture is becoming more dependent on computerized solutions to make the farmer's job easier. The big step that many companies are working towards is fully autonomous vehicles that work the fields. To that end, the equipment fitted to said vehicles must also adapt and become autonomous. Making this equipment autonomous takes many incremental steps, one of which is developing an accurate and reliable tank level measurement system. In this thesis, a system for tank level measurement in a seed planting machine is evaluated. Traditional systems use load cells to measure the weight of the tank; however, these types of systems are expensive to build and cumbersome to repair. They also add a lot of weight to the equipment, which increases the fuel consumption of the tractor. Thus, this thesis investigates the use of radar sensors together with a number of machine learning algorithms. Fourteen radar sensors are fitted to a tank at different positions, data is collected, and a preprocessing method is developed. Then, the data is used to test the following machine learning algorithms: Bagged Regression Trees (BG), Random Forest Regression (RF), Boosted Regression Trees (BRT), Linear Regression (LR), Linear Support Vector Machine (L-SVM), and Multi-Layer Perceptron Regressor (MLPR). The model with the best 5-fold cross-validation scores was Random Forest, closely followed by Boosted Regression Trees. A robustness test, using 5 previously unseen scenarios, revealed that the Boosted Regression Trees model was the most robust. The radar position analysis showed that 6 sensors together with the MLPR model gave the best RMSE scores. In conclusion, the models performed well on this type of system, which shows that they might be a competitive alternative to load cell based systems.
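The model shoot-out described above, six regressor families scored by 5-fold cross-validated RMSE, can be sketched with scikit-learn. The "sensor" data here is synthetic (14 noisy readings that shrink linearly as the fill level rises), a made-up stand-in for the real radar measurements and preprocessing.

```python
import numpy as np
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import LinearSVR

rng = np.random.default_rng(5)
n, n_sensors = 1000, 14

# Synthetic stand-in: 14 radar distance readings, fill level as the target.
level = rng.uniform(0, 1, n)
X = (2.0 - 1.5 * level[:, None]) + rng.normal(0, 0.05, (n, n_sensors))

models = {
    "Bagged trees":      BaggingRegressor(random_state=5),
    "Random forest":     RandomForestRegressor(random_state=5),
    "Boosted trees":     GradientBoostingRegressor(random_state=5),
    "Linear regression": LinearRegression(),
    "Linear SVM":        LinearSVR(max_iter=10000),
    "MLP":               MLPRegressor(hidden_layer_sizes=(32,),
                                      max_iter=1000, random_state=5),
}
for name, model in models.items():
    rmse = -cross_val_score(model, X, level, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name:18s} 5-fold RMSE: {rmse:.4f}")
```

On real data a separate robustness check against held-out scenarios, as the thesis does, would be needed on top of the cross-validation.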
APA, Harvard, Vancouver, ISO, and other styles
34

Machado, Gustavo. "Presente e futuro da análise de dados de fatores associados à soroprevalência da diarreia viral bovina." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/135515.

Full text
Abstract:
O vírus da diarreia viral bovina (BVDV) causa uma das doenças mais importantes de bovinos em termos de custos econômicos e sociais, uma vez que é largamente disseminado na população de gado leiteiro. Os objetivos do trabalho foram estimar a prevalência em nível de rebanho e investigar fatores associados aos níveis de anticorpos em leite de tanque através de um estudo transversal, bem como discutir e comparar diferentes técnicas de modelagem, as tradicionais como regressão e as menos usuais para este fim, como as de Machine learning (ML) como Random Forest. O estudo transversal foi realizado no estado do Rio Grande do Sul para a estimação da prevalência de doenças reprodutivas baseados em amostras de tanque de leite, partindo de uma população total de 81.307 rebanhos. Foram coletadas 388 amostras de tanque de leite, e nas propriedades selecionadas foi aplicado um questionário epidemiológico. Como resultados se identificou uma prevalência de 23,9% (IC95% = 19,8 - 28,1) de propriedades positivas. Através de análise de regressão de Poisson se identificou como fatores associados o BVDV: o exame retal como rotina para o diagnóstico de prenhes, Razão de Prevalência [PR] = 2,73 (IC 95%: 1.87-3.98), contato direto entre animais (contato via cerca de propriedades lindeiras) (PR=1,63, IC 95%: 1.13-2.95) e propriedades que não utilizavam inseminação artificial (PR=2.07, IC 95%: 1.38-3.09) Na técnica de Random Forest pôde-se identificar uma dependência na ocorrência de BVDV devido a: inseminação artificial quando realizada pelo proprietário da propriedade ou capataz, o número de vizinhos que também possuem criação de bovinos, e em concordância com os resultados da regressão quanto a dependência da ocorrência de BVDV devido a palpação retal. Como resultado, pôde-se perceber que o BVDV está distribuído no estado do RS e caso seja de interesse do poder público, o desenvolvimento de um programa de controle da doença pode ser baseado nos resultados encontrados. 
Por outro lado, a contribuição deste estudo vai além das tradicionais análises realizadas em epidemiologia veterinária, principalmente devido os bons resultados obtidos com a abordagem por ML neste estudo transversal. Por fim, a utilização de técnicas estatísticas mais avançadas contribuiu para elucidar melhor os fatores possivelmente envolvidos com a ocorrência de BVDV no rebanho leiteiro gaúcho.
The bovine viral diarrhea virus (BVDV) causes one of the most important diseases of cattle in terms of economic and social costs, since it is widely disseminated in the dairy cattle population. The objectives were to estimate the herd-level prevalence and investigate factors associated with antibody levels in bulk tank milk through a cross-sectional study, and to discuss and compare different modeling techniques, i.e. traditional regression against techniques less commonly used for this purpose, such as machine learning (ML). The cross-sectional study was conducted in Rio Grande do Sul state to estimate the prevalence of reproductive diseases based on bulk tank milk samples, from a total population of 81,307 herds. Milk samples from 388 bulk tanks were collected, and an epidemiological questionnaire was applied on each farm. The prevalence was 23.9% (95% CI 19.8-28.1). Through Poisson regression analysis, the following factors associated with BVDV were found: routine use of rectal examination for pregnancy diagnosis (Prevalence Ratio [PR] = 2.73, 95% CI: 1.87-3.98), direct contact between animals (contact over the fence of neighboring farms) (PR = 1.63, 95% CI: 1.13-2.95) and farms that did not use artificial insemination (PR = 2.07, 95% CI: 1.38-3.09). On the other hand, using ML techniques, a dependency of BVDV occurrence was identified on artificial insemination when carried out by the owner of the farm or a foreman, and on the number of neighbors who also keep cattle; in accordance with the regression results, a dependency on routine rectal examination for pregnancy was also found. BVDV is spread across the state, and should the government be interested in launching a disease control program, measures should focus mainly on better conditions and care in reproduction. 
On the other hand, the contribution of this study goes beyond traditional analyses in veterinary epidemiology, mainly due to the good results obtained with the ML approach in this cross-sectional study. Finally, the use of advanced statistical techniques helped to better elucidate the factors possibly involved in the occurrence of BVDV in the state's dairy herds.
APA, Harvard, Vancouver, ISO, and other styles
35

Zhao, Zhenyu. "Factors Affecting the Preference of Buying Hybrid and Electric Vehicles." Thesis, Uppsala universitet, Statistiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447231.

Full text
Abstract:
Electric vehicles (EVs) are regarded as an important means of emission reduction, but their adoption is still a problem in many countries. Using survey data containing demographic and attitude factors of respondents, this paper proposes two classification models, logistic regression and random forest, with Multiple Correspondence Analysis (MCA) as an intermediate step, to identify the factors affecting the willingness to purchase electric vehicles. The analysis shows that the addition of MCA enhances explanatory power at a low cost in prediction performance, and the results reveal that characteristics such as frequency of using modern transport services, car-sharing subscription, living place and mode of frequent trip have a significant impact on EV purchases.
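The "MCA as an intermediate step" pipeline can be approximated in scikit-learn. True MCA applies correspondence-analysis weighting (dedicated implementations exist, e.g. in the `prince` library); the sketch below uses a rough stand-in, an SVD of the one-hot indicator matrix, feeding low-dimensional coordinates to a classifier. The survey answers and outcome are simulated assumptions.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(7)
n = 3000

# Hypothetical categorical survey answers (e.g. living place, car-sharing
# subscription, trip mode); the category counts are made up.
X = np.column_stack([rng.integers(0, k, n).astype(str)
                     for k in (4, 2, 3, 5, 2)])
# Toy outcome: willingness to buy an EV driven mainly by the second answer.
y = (rng.random(n) < np.where(X[:, 1] == "1", 0.9, 0.1)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# MCA-like intermediate step: SVD of the one-hot indicator matrix, then a
# classifier on the low-dimensional coordinates.
clf = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                    TruncatedSVD(n_components=8, random_state=7),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy with MCA-like reduction: {acc:.3f}")
```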
APA, Harvard, Vancouver, ISO, and other styles
36

Hauser, Andrea. "Building a risk map for hurricane-force tropical cyclones in continental Portugal." Master's thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/23306.

Full text
Abstract:
Mestrado Bolonha em Actuarial Science
Tropical cyclones have enormous destructive potential. In 2018 continental Portugal was affected by hurricane Leslie, the weather-related event with the highest impact ever on the property portfolio of the Portuguese insurance company Fidelidade, causing several million euros of losses. The fear is that, in the near future, this type of event will increase in intensity and frequency as a consequence of climate change due to the warming of the planet. Quantifying the potential loss to which the property portfolio of Fidelidade could be subject helps in approximately determining premiums and capital reserves, as well as in defining the coverage to be provided. In this work, an approach to model the costs caused by an extreme tropical cyclone event is presented. The model is based on the losses incurred by the property portfolio of Fidelidade due to hurricane Leslie. Using the estimated models, it is possible to produce cost estimates for different scenarios of interest for the company. The estimated models are also used to build a risk map for the councils of continental Portugal. The results obtained indicate that the councils with the highest estimated average cost ratio are all located along the coast of the country.
Ciclones tropicais têm um enorme potencial de destruição. Em 2018, Portugal continental foi atingido pelo furacão Leslie, que constituiu o fenómeno meteorológico de maior impacto, até à data, no portfolio da companhia de seguros Fidelidade, causando milhões de euros em perdas. De facto, os ciclones tropicais têm um enorme potencial de destruição. A preocupação é que, em breve, a ocorrência deste tipo de fenómenos aumente em intensidade e frequência, como consequência das mudanças climáticas provocadas pelo aquecimento global. Quantificar a potencial perda à qual a companhia Fidelidade pode estar sujeita ajuda a determinar aproximadamente os prémios e provisões, assim como a definir a cobertura a ser providenciada. Neste trabalho, é apresentada uma abordagem para modelar os custos causados por um ciclone tropical extremo. O modelo é baseado nas perdas provocadas ao portfolio da Fidelidade pelo furacão Leslie. Ao usar os modelos, é possível produzir custos estimados para diferentes cenários de interesse da companhia. Os modelos estimados são também utilizados para construir um mapa de risco para os concelhos de Portugal continental. Os resultados obtidos indicam que os concelhos com a maior taxa média de custos estimada estão localizados ao longo da costa do país.
APA, Harvard, Vancouver, ISO, and other styles
37

Esler, William Kevin. "On the development and application of indirect site indexes based on edaphoclimatic variables for commercial forestry in South Africa." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20145.

Full text
Abstract:
Thesis (MScFor)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: Site Index is used extensively in modern commercial forestry, both as an indicator of current and future site potential and as a means of site comparison. The concept is deeply embedded in current forest planning processes, and without it empirical growth and yield modelling would not function in its present form. Most commercial forestry companies in South Africa currently spend hundreds of thousands of Rand annually collecting growth stock data via inventory, but spend little or no money on the default compartment data (specifically Site Index) which is used to estimate over 90% of the product volumes in their long term plans. A need exists to construct reliable methods to determine Site Index for sites which have not been physically measured (the so-called "default", or indirect Site Index). Most previous attempts to model Site Index have used multiple linear regression as the model; alternative methods are explored in this thesis: regression tree analysis, random forest analysis, hybrid or model trees, multiple linear regression, and multiple linear regression using regression trees to identify the variables. Regression tree analysis proves to be ideally suited to this type of data, and a generic model with only three site variables was able to capture 49.44% of the variation in Site Index. Further localisation of the model could prove to be commercially useful. One of the key assumptions associated with Site Index, that it is unaffected by initial planting density, was tested using linear mixed effects modelling. The results show that there may well be a role played by initial stocking in some species (notably E. dunnii and E. nitens), and that further work may be warranted. It was also shown that early measurement of dominant height results in poor estimates of Site Index, which will have a direct impact on inventory policies and on data to be included in Site Index modelling studies. 
This thesis is divided into six chapters: Chapter 1 contains a description of the concept of Site Index and its origins, as well as how the concept is used within the current forest planning processes. Chapter 2 contains an analysis of the influence of initial planted density on the estimate of Site Index. Chapter 3 explores the question of whether the age at which dominant height is measured has any effect on the quality of Site Index estimates. Chapter 4 looks at various modelling methodologies and compares the resultant models. Chapter 5 contains conclusions and recommendations for further study, and finally Chapter 6 discusses how any new Site Index model will affect the current planning protocol.
AFRIKAANSE OPSOMMING: Hedendaagse kommersiële bosbou gebruik groeiplek indeks (Site Index) as 'n aanduiding van huidige en toekomstige groeiplek moontlikhede, asook 'n metode om groeiplekke te vergelyk. Hierdie beginsel is diep gewortel in bestaande beplanningsprosesse en daarsonder kan empiriese groei- en opbrengsmodelle nie in hul huidige vorm funksioneer nie. Suid-Afrikaanse bosboumaatskappye bestee jaarliks groot bedrae geld aan die versameling van groeivoorraad data deur middel van opnames, maar weinig of geen geld word aangewend vir die insameling van ongemete vak data (veral groeiplek indeks) nie. Ongemete vak data word gebruik om meer as 90% van die produksie volume te beraam in langtermyn beplaning. 'n Behoefte bestaan om betroubare metodes te ontwikkel om groeiplek indeks te bereken vir groeiplekke wat nog nie opgemeet is nie. Die meeste vorige pogings om groeiplek indeks te beraam het meervoudige lineêre regressie as model gebruik. Alternatiewe metodes is ondersoek; naamlik regressieboom analise, ewekansige woud analise, hibriede- of modelbome, meervoudige lineêre regressie en meervoudige lineêre regressie waarin die veranderlike faktore bepaal is deur regressiebome. Regressieboom analise blyk geskik te wees vir hierdie tipe data en 'n veralgemeende model met slegs drie groeiplek veranderlikes dek 49.44% van die variasie in groeiplek indeks. Verdere lokalisering van die model kan dus van kommersiële waarde wees. 'n Sleutel aanname is gemaak dat aanvanklike plantdigtheid nie 'n invloed op groeiplek indeks het nie. Hierdie aanname is getoets deur lineêre gemengde uitwerkings modelle. Die toetsuitslag dui op 'n moontlikheid dat plantdigtheid wel 'n invloed het op sommige spesies (vernaamlik E. dunnii en E. nitens) en verdere navorsing kan daarom geregverdig word. Dit is ook bewys dat metings van jonger bome vir dominante hoogtes aanleiding gee tot swak beramings van groeiplek indekse. 
Gevolglik sal hierdie toetsuitslag groeivoorraad opname beleid, asook die data wat vir groeiplek indeks modellering gebruik word, beïnvloed. Hierdie tesis word in ses hoofstukke onderverdeel. Hoofstuk een bevat 'n beskrywing van die beginsel van groeiplek indeks, die oorsprong daarvan, asook hoe die beginsel tans in huidige bosbou beplannings prosesse toegepas word. Hoofstuk twee bestaan uit 'n ontleding van die invloed van aanvanklike plantdigtheid op die beraming van groeiplek indeks. In hoofstuk drie word ondersoek wat die moontlike invloed is van die ouderdom waarop metings vir dominante hoogte geneem word, op die kwaliteit van groeiplek indeks beramings. Hoofstuk vier verken verskeie modelle metodologieë en vergelyk die uitslaggewende modelle. Hoofstuk vyf bevat gevolgtrekkings en voorstelle vir verdere studies. Afsluitend is hoofstuk ses 'n bespreking van hoe enige nuwe groeiplek indeks modelle die huidige beplannings protokol kan beïnvloed.
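The core result above, a regression tree on three site variables explaining roughly half the variation in Site Index, can be sketched as follows. The three edaphoclimatic variables, their ranges and the toy response surface are illustrative assumptions, not the thesis's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(8)
n = 1500

# Three hypothetical edaphoclimatic site variables.
map_mm = rng.uniform(600, 1400, n)   # mean annual precipitation (mm)
mat_c = rng.uniform(12, 22, n)       # mean annual temperature (deg C)
depth = rng.uniform(0.3, 1.5, n)     # effective soil depth (m)
X = np.column_stack([map_mm, mat_c, depth])

# Toy Site Index (dominant height at a reference age, m): rises with
# rainfall and soil depth, peaks at moderate temperature.
si = (10 + 0.01 * map_mm + 5 * depth - 0.15 * (mat_c - 17) ** 2
      + rng.normal(0, 2, n))

tree = DecisionTreeRegressor(max_depth=3, random_state=8).fit(X, si)
r2 = cross_val_score(DecisionTreeRegressor(max_depth=3, random_state=8),
                     X, si, cv=5).mean()
print(f"5-fold CV R2: {r2:.2f}")
print(export_text(tree, feature_names=["MAP_mm", "MAT_C", "depth_m"]))
```

The printed tree is the practical appeal of this method for default Site Index: each leaf is a readable rule ("if MAP below x and soil depth below y, assign SI z") that planners can apply to unmeasured sites.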
APA, Harvard, Vancouver, ISO, and other styles
38

Almér, Henrik. "Machine learning and statistical analysis in fuel consumption prediction for heavy vehicles." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-172306.

Full text
Abstract:
I investigate how to use machine learning to predict fuel consumption in heavy vehicles. I examine data from several different sources describing road, vehicle, driver and weather characteristics, and fit a regression to fuel consumption measured in liters per distance. The thesis is done for Scania and uses data sources available to Scania. I evaluate which machine learning methods are most successful, how data collection frequency affects the prediction, and which features are most influential for fuel consumption. I find that a lower collection frequency of 10 minutes is preferable to a higher collection frequency of 1 minute. I also find that the evaluated models are comparable in their performance and that the most important features for fuel consumption are related to the road slope, vehicle speed and vehicle weight.
Jag undersöker hur maskininlärning kan användas för att förutsäga bränsleförbrukning i tunga fordon. Jag undersöker data från flera olika källor som beskriver väg-, fordons-, förar- och väderkaraktäristiker. Det insamlade datat används för att hitta en regression till en bränsleförbrukning mätt i liter per sträcka. Studien utförs på uppdrag av Scania och jag använder mig av datakällor som är tillgängliga för Scania. Jag utvärderar vilka maskininlärningsmetoder som är bäst lämpade för problemet, hur insamlingsfrekvensen påverkar resultatet av förutsägelsen samt vilka attribut i datat som är mest inflytelserika för bränsleförbrukning. Jag finner att en lägre insamlingsfrekvens av 10 minuter är att föredra framför en högre frekvens av 1 minut. Jag finner även att de utvärderade modellerna ger likvärdiga resultat samt att de viktigaste attributen har att göra med vägens lutning, fordonets hastighet och fordonets vikt.
APA, Harvard, Vancouver, ISO, and other styles
39

Falk, Anton, and Daniel Holmgren. "Sales Forecasting by Assembly of Multiple Machine Learning Methods : A stacking approach to supervised machine learning." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184317.

Full text
Abstract:
Today, digitalization is a key factor for businesses to enhance growth and gain advantages and insight into their operations. Both in planning operations and in understanding customers, digitalization processes have key roles, and companies are spending more and more resources in these fields to gain critical insights and enhance growth. The fast-food industry is no exception: restaurants need to be highly flexible and agile in their work. With this comes an immense demand for knowledge and insights to help restaurants plan their daily operations, and a great need for organizations to continuously adapt new technological solutions into their existing processes. Well-implemented machine learning solutions in combination with feature engineering are likely to bring value to the existing processes. Sales forecasting, the main field of study in this thesis work, has a vital role in planning a fast-food restaurant's operations, both for budgeting and for staffing. The term fast food describes itself; with it comes a commitment to provide high-quality food and rapid service to the customers. Understaffing risks compromising either the quality of the food or the service, while overstaffing leads to low overall productivity. Generating highly reliable sales forecasts is thus vital to maximize profits and minimize operational risk. SARIMA, XGBoost and Random Forest were evaluated on training data consisting of sales numbers, business hours and categorical variables describing date and month. These models worked as base learners, whose sales predictions on a specific dataset were used as training data for a Support Vector Regression (SVR) model. A stacking approach to this type of project shows sufficient results, with a significant gain in prediction accuracy for all investigated restaurants on a 6-week aggregated timeline compared to the existing solution.
Digitalisering har idag en nyckelroll för att skapa tillväxt och insikter för företag, dessa insikter ger fördelar både inom planering och i förståelsen om deras kunder. Det här är ett område som företag lägger mer och mer resurser på för att skapa större förståelse om sin verksamhet och på så sätt öka tillväxten. Snabbmatsindustrin är inget undantag då restauranger behöver en hög grad av flexibilitet i sina arbetssätt för att möta kundbehovet. Det här skapar en stor efterfrågan av kunskap och insikter för att hjälpa dem i planeringen av deras dagliga arbete och det finns ett stort behov från företagen att kontinuerligt implementera nya tekniska lösningar i befintliga processer. Med väl implementerade maskininlärningslösningar i kombination med att skapa mer informativa variabler från befintlig data kan aktörer skapa mervärde till redan existerande processer. Försäljningsprognostisering, som är huvudområdet för den här studien, har en viktig roll för verksamhetsplaneringen inom snabbmatsindustrin, både inom budgetering och bemanning. Namnet snabbmat beskriver sig själv, med det följer ett löfte gentemot kunden att tillhandahålla hög kvalitet på maten samt att kunna tillhandahålla snabb service. Underbemanning kan riskera att bryta någon av dessa löften, antingen i undermålig kvalitet på maten eller att inte kunna leverera snabb service. Överbemanning riskerar i stället att leda till ineffektivitet i användandet av resurser. Att generera högst tillförlitliga prognoser är därför avgörande för att kunna maximera vinsten och minimera operativ risk. SARIMA, XGBoost och Random Forest utvärderades på ett träningsset bestående av försäljningssiffror, timme på dygnet och kategoriska variabler som beskriver dag och månad. Dessa modeller fungerar som basmodeller vars prediktioner från ett specifikt testset används som träningsdata till en Stödvektorsreggresionsmodell (SVR). 
Att använda stapling av maskininlärningsmodeller till den här typen av problem visade tillfredställande resultat där det påvisades en signifikant förbättring i prediktionssäkerhet under en 6 veckors aggregerad period gentemot den redan existerande modellen.
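The stacking scheme the abstract describes, base learners whose out-of-fold predictions train an SVR meta-model, maps directly onto scikit-learn's `StackingRegressor`. The sketch below uses synthetic data and gradient boosting as a stand-in for XGBoost; SARIMA is omitted since it is not a scikit-learn estimator, so this is an analogous setup rather than the thesis's exact ensemble.

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(9)
n = 1500

# Synthetic stand-in for engineered sales features (calendar dummies, hours).
X = rng.normal(size=(n, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + X[:, 2] + rng.normal(0, 0.2, n)

base_learners = [
    ("rf", RandomForestRegressor(n_estimators=100, random_state=9)),
    ("gb", GradientBoostingRegressor(random_state=9)),  # stand-in for XGBoost
]
# StackingRegressor trains the SVR meta-model on the base learners'
# out-of-fold predictions, mirroring the stacking setup described above.
stack = StackingRegressor(estimators=base_learners, final_estimator=SVR())
score = cross_val_score(stack, X, y, cv=3).mean()
print(f"stacked model, 3-fold CV R2: {score:.2f}")
```

Using out-of-fold predictions (which `StackingRegressor` does internally via cross-validation) is the key detail: fitting the meta-model on in-sample base predictions would let it learn the base learners' overfitting instead of their genuine skill.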
APA, Harvard, Vancouver, ISO, and other styles
40

Das, Abhishek. "Analyses of Crash Occurence and Injury Severities on Multi Lane Highways Using Machine Learning Algorithms." Master's thesis, University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2576.

Full text
Abstract:
Reduction of crash occurrence on the various roadway locations (mid-block segments; signalized intersections; un-signalized intersections) and the mitigation of injury severity in the event of a crash are the major concerns of transportation safety engineers. Multi lane arterial roadways (excluding freeways and expressways) account for forty-three percent of fatal crashes in the state of Florida. Significant contributing causes fall under the broad categories of aggressive driver behavior; adverse weather and environmental conditions; and roadway geometric and traffic factors. The objective of this research was the implementation of innovative, state-of-the-art analytical methods to identify the contributing factors for crashes and injury severity. Advances in computational methods enable the use of modern statistical and machine learning algorithms. Even though most of the contributing factors are known a-priori, advanced methods unearth changing trends. Heuristic evolutionary processes such as genetic programming; sophisticated data mining methods like conditional inference tree; and mathematical treatments in the form of sensitivity analyses outline the major contributions in this research. Application of traditional statistical methods like simultaneous ordered probit models, identification and resolution of crash data problems are also key aspects of this study. In order to eliminate the use of an unrealistic uniform intersection influence radius of 250 ft, heuristic rules were developed for assigning crashes to roadway segments, signalized intersections and access points using parameters such as 'site location', 'traffic control' and node information. Use of Conditional Inference Forest instead of Classification and Regression Tree to identify variables of significance for injury severity analysis removed the bias towards the selection of continuous variables or variables with a large number of categories.
For the injury severity analysis of crashes on highways, the corridors were clustered into four optimum groups. The optimum number of clusters was found using the Partitioning around Medoids algorithm. Concepts of evolutionary biology like crossover and mutation were implemented to develop models for classification and regression analyses based on the highest hit rate and minimum error rate, respectively. A low crossover rate and higher mutation reduce the chances of genetic drift and bring novelty to the model development process. Annual daily traffic; friction coefficient of pavements; on-street parking; curbed medians; surface and shoulder widths; alcohol / drug usage are some of the significant factors that played a role in both crash occurrence and injury severities. Relative sensitivity analyses were used to identify the effect of continuous variables on the variation of crash counts. This study improved the understanding of the significant factors that could play an important role in designing better safety countermeasures on multi lane highways, and hence enhance their safety by reducing the frequency of crashes and severity of injuries. Educating young people about the abuse of alcohol and drugs, specifically at high schools and colleges, could potentially lead to lower driver aggression. Removal of on-street parking from high speed arterials could result in a likely drop in the number of crashes. Widening of shoulders could give greater maneuvering space for the drivers. Improving pavement conditions for a better friction coefficient will lead to improved crash recovery. Addition of lanes to alleviate problems arising out of increased ADT and restriction of trucks to the slower right lanes on the highways would not only reduce the crash occurrences but also result in lower injury severity levels.
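The crossover and mutation operators mentioned above can be sketched on bit-string genomes. This is a generic illustration of the two operators, not the thesis's actual genetic-programming setup:

```python
import random

def crossover(parent_a, parent_b, point):
    # Single-point crossover: swap the tails of two parent genomes.
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

def mutate(genome, rate, rng):
    # Flip each bit independently with probability `rate`.
    return [1 - g if rng.random() < rate else g for g in genome]

a, b = [0, 0, 0, 0], [1, 1, 1, 1]
c1, c2 = crossover(a, b, 2)      # children mix both parents' genes
rng = random.Random(0)
m = mutate(c1, 0.0, rng)         # rate 0: genome passes through unchanged
```

A low crossover rate combined with a nonzero mutation rate, as the abstract notes, keeps injecting variation that pure recombination of a shrinking gene pool cannot provide.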
Ph.D.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering PhD
APA, Harvard, Vancouver, ISO, and other styles
41

Säfström, Stella. "Predicting the Unobserved : A statistical analysis of missing data techniques for binary classification." Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388581.

Full text
Abstract:
The aim of the thesis is to investigate how the classification performance of random forest and logistic regression differ, given an imbalanced data set with MCAR missing data. The performance is measured in terms of accuracy and sensitivity. Two analyses are performed: one with a simulated data set and one application using data from the Swedish population registries. The simulation study is created to have the same class imbalance at 1:5. The missing values are handled using three different techniques: complete case analysis, predictive mean matching and mean imputation. The thesis concludes that logistic regression and random forest are on average equally accurate, with some instances of random forest outperforming logistic regression. Logistic regression consistently outperforms random forest with regards to sensitivity. This implies that logistic regression may be the best option for studies where the goal is to accurately predict outcomes in the minority class. None of the missing data techniques stood out in terms of performance.
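Two of the three missing-data techniques compared in the abstract can be sketched directly (the toy table below is hypothetical; `None` marks an MCAR missing value):

```python
# Complete case analysis drops any row containing a missing value;
# mean imputation fills each missing cell with its column mean,
# computed over the observed values only.

def complete_cases(rows):
    return [r for r in rows if all(v is not None for v in r)]

def mean_impute(rows):
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             sum(1 for v in c if v is not None) for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(r)]
            for r in rows]

data = [[1.0, 2.0], [None, 4.0], [3.0, None]]
cc = complete_cases(data)    # only the fully observed row survives
imp = mean_impute(data)      # missing cells become column means
```

The trade-off the thesis weighs is visible even here: complete case analysis discards two thirds of this toy sample, while mean imputation keeps every row at the cost of shrinking the column variance.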
APA, Harvard, Vancouver, ISO, and other styles
42

Lind, Nilsson Rasmus. "Machine learning in logistics : Increasing the performance of machine learning algorithms on two specific logistic problems." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-64761.

Full text
Abstract:
Data Ductus, a multinational IT consulting company, wants to develop an AI that monitors a logistic system and looks for errors. Once trained enough, this AI will suggest a correction and automatically correct issues if they arise. This project presents how one works with machine learning problems and provides a deeper insight into how cross-validation and regularisation, among other techniques, are used to improve the performance of machine learning algorithms on the defined problem. Three techniques are tested and evaluated in our logistic system on three different machine learning algorithms, namely Naïve Bayes, Logistic Regression and Random Forest. The evaluation of the algorithms leads us to conclude that Random Forest, using cross-validated parameters, gives the best performance on our specific problems, with the other two falling behind in each tested category. It became clear to us that cross-validation is a simple, yet powerful tool for increasing the performance of machine learning algorithms.
Data Ductus, a multinational IT consulting company, wants to develop an AI that monitors a logistics system and flags errors. Once sufficiently trained, this AI is to suggest corrections or automatically correct problems that arise. This project presents how one works with machine learning problems and gives a deeper insight into how cross-validation and regularisation, among other techniques, are used to improve the performance of machine learning algorithms on the defined problem. These techniques are tested and evaluated in our logistics system on three different machine learning algorithms, namely Naïve Bayes, Logistic Regression and Random Forest. The evaluation of the algorithms leads us to the conclusion that Random Forest, using cross-validated parameters, gives the best performance on our specific problems, with the other two falling behind in every tested category. It became clear to us that cross-validation is a simple yet powerful tool for increasing the performance of machine learning algorithms.
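The cross-validation procedure the thesis credits can be sketched in a minimal form. This is a generic illustration with a deliberately trivial "shrunken mean" predictor and made-up data, not the thesis's models:

```python
# k-fold cross-validation: hold out each fold in turn, fit on the rest,
# and average the validation error; the hyperparameter with the lowest
# cross-validated error is selected.

def kfold_indices(n, k):
    # Split indices 0..n-1 into k contiguous folds (sizes differ by <= 1).
    size, folds, start = n // k, [], 0
    for i in range(k):
        extra = 1 if i < n % k else 0
        folds.append(list(range(start, start + size + extra)))
        start += size + extra
    return folds

def cv_error(y, k, shrink):
    folds, total = kfold_indices(len(y), k), 0.0
    for fold in folds:
        train = [y[i] for i in range(len(y)) if i not in fold]
        mean = sum(train) / len(train)
        pred = shrink * mean           # predictor shrunk toward zero
        total += sum((y[i] - pred) ** 2 for i in fold)
    return total / len(y)

y = [2.0, 2.0, 2.0, 2.0]
errs = {s: cv_error(y, 2, s) for s in (0.5, 1.0)}
best = min(errs, key=errs.get)         # hyperparameter with lowest CV error
```

The same loop scales to real hyperparameters (tree depth, number of trees, regularisation strength): only the inner "fit and predict" step changes.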
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Julia, and Linnéa Lindahl. "Prediktion av efterfrågan i filmbranschen baserat på maskininlärning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235719.

Full text
Abstract:
Machine learning is a central technology in data-driven decision making. In this study, machine learning in the context of demand forecasting in the motion picture industry from film exhibitors' perspective is investigated. More specifically, it is investigated to what extent the technology can assist estimation of public interest in terms of revenue levels of unreleased movies. Three machine learning models are implemented with the aim to forecast cumulative revenue levels during the opening weekend of various movies which were released in 2010-2017 in Sweden. The forecast is based on ten attributes which range from public online user-generated data to specific movie characteristics such as production budget and cast. The results indicate that the choice of attributes as well as models in this study was not optimal on the Swedish market, as the retrieved values from relevant precision metrics were inadequate, albeit for valid underlying reasons.
Machine learning is a central technique in data-driven decision making. This report investigates machine learning in the context of demand forecasting in the motion picture industry from the perspective of cinema exhibitors. More specifically, it examines to what extent the technique can assist in estimating public interest, in terms of revenue, for unreleased films. Three machine learning models are implemented with the aim of forecasting cumulative revenue levels during the opening weekend for films that premiered in Sweden in 2010-2017. The forecast is based on a range of attributes, from public user-generated data online to film-specific variables such as production budget and cast. The results obtained show that the choices of attributes and models were not optimal for the Swedish market, as the precision metrics obtained from the models took low values, for relevant underlying reasons.
APA, Harvard, Vancouver, ISO, and other styles
44

Kwame, Osei Eric. "Machine Learning-based Quality Prediction in the Froth Flotation Process of Mining : Master’s Degree Thesis in Microdata Analysis." Thesis, Högskolan Dalarna, Mikrodataanalys, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:du-31643.

Full text
Abstract:
In the iron ore mining fraternity, in order to achieve the desired quality in the froth flotation processing plant, stakeholders rely on a conventional laboratory test technique which usually takes more than two hours to ascertain the two variables of interest. Such a substantial dead time makes it difficult to put the inherently stochastic nature of the plant system in steady-state. Thus, the present study aims to evaluate the feasibility of using machine learning algorithms to predict the percentage of silica concentrate (SiO2) in the froth flotation processing plant in real-time. The predictive model has been constructed using an iron ore mining froth flotation system dataset obtained from Kaggle. Different feature selection methods, including Random Forest and the backward elimination technique, were applied to the dataset to extract significant features. The selected features were then used in Multiple Linear Regression, Random Forest and Artificial Neural Network models, and the prediction accuracy of all the models has been evaluated and compared. The results show that the Artificial Neural Network has the ability to generalize better, with predictions off by 0.38% mean square error (mse) on average, which is significant considering that SiO2 ranges from 0.77% to 5.53% (mse 1.1%). These results have been obtained within a real-time processing window of 12s in the worst case scenario on Intel i7 hardware. The experimental results also suggest that the reagent variables have the most significant influence on SiO2 prediction, and the least important variable is the Flotation Column.02.air.Flow. The experimental results have also indicated a promising prospect for both the Multiple Linear Regression and Random Forest models in the field of SiO2 prediction in iron ore mining froth flotation systems in general.
Meanwhile, this study provides management, metallurgists and operators with a better choice for real-time SiO2 prediction, per the accuracy demanded, as opposed to the long-dead-time laboratory test analysis that causes incessant loss of iron ore discharged to tailings.
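The backward elimination technique used for feature selection above can be sketched as follows. The per-feature "utility" scores and variable names below are invented for illustration; in a real run each step would refit the model and re-score the candidate subsets:

```python
# Backward elimination sketch: start from all features and repeatedly
# drop the feature whose removal hurts the score the least, stopping
# when no removal improves the score.

def backward_eliminate(features, utility):
    # utility: dict feature -> contribution to the score (can be negative).
    selected = set(features)
    improved = True
    while improved and len(selected) > 1:
        improved = False
        worst = min(selected, key=lambda f: utility[f])
        if utility[worst] < 0:      # dropping it raises the overall score
            selected.remove(worst)
            improved = True
    return selected

# Hypothetical flotation-plant features and contributions.
utility = {"reagent_flow": 3.0, "air_flow": -1.5, "ore_pulp_ph": 0.5}
kept = backward_eliminate(utility.keys(), utility)
```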
APA, Harvard, Vancouver, ISO, and other styles
45

Yang, Kaolee. "A Statistical Analysis of Medical Data for Breast Cancer and Chronic Kidney Disease." Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1587052897029939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Spagnoli, Lorenzo. "COVID-19 prognosis estimation from CAT scan radiomics: comparison of different machine learning approaches for predicting patients survival and ICU Admission." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23926/.

Full text
Abstract:
Since the start of 2020 Sars-COVID19 has given rise to a world-wide pandemic. In an attempt to slow down the spreading of this disease, various prevention and diagnostic methods have been developed. In this thesis the attention has been put on Machine Learning to predict prognosis based on data originating from radiological images. Radiomics has been used to extract information from images segmented using a software from the hospital which provided both the clinical data and images. The usefulness of different families of variables has then been evaluated through their performance in the methods used, i.e. Lasso regularized regression and Random Forest. The first chapter is introductory in nature; the second will contain a theoretical overview of the necessary concepts that will be needed throughout this whole work. The focus will then be shifted to the methods and instruments used in the development of this thesis. The third chapter will report the results, and finally some conclusions will be derived from the previously presented results. It will be concluded that the segmentation and feature extraction step is of pivotal importance in driving the performance of the predictions. In fact, in this thesis, it seems that the information from the images achieves the same predictive power that can be derived from the clinical data. This can be interpreted in three ways: first, it can be taken as a symptom of the fact that even the more complex Sars-COVID19 cases can be segmented automatically, or semi-automatically by untrained personnel, leading to results competing with other methodologies. Secondly, it can be taken to show that the performance of clinical variables can be reached by radiomic features alone in a semi-automatic pipeline, which could aid in reducing the workload imposed on medical professionals in case of pandemic. Finally, it can be taken as proof that the method implemented has room to improve by more carefully investing in the segmentation phase.
APA, Harvard, Vancouver, ISO, and other styles
47

Åkerblom, Thea, and Tobias Thor. "Fraud or Not?" Thesis, Uppsala universitet, Statistiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388695.

Full text
Abstract:
This paper uses statistical learning to examine and compare three different statistical methods with the aim to predict credit card fraud. The methods compared are Logistic Regression, K-Nearest Neighbour and Random Forest. They are applied and estimated on a data set consisting of nearly 300,000 credit card transactions to determine their performance, using classification of fraud as the outcome variable. The three models all have different properties and advantages. The K-NN model performed the best in this paper but has some disadvantages, since it does not explain the data but rather predicts the outcome accurately. Random Forest explains the variables but performs less precisely. The Logistic Regression model seems to be unfit for this specific data set.
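The K-Nearest Neighbour classifier compared above can be written in a few lines. This is a generic sketch on made-up two-feature transactions, not the paper's estimated model:

```python
# Minimal k-nearest-neighbour classifier: predict the majority label
# among the k training points closest to the query (squared Euclidean
# distance; no tie-breaking refinements).

def knn_predict(train, query, k):
    # train: list of (features, label) pairs; query: feature list.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

train = [([0.0, 0.0], "legit"), ([0.1, 0.2], "legit"),
         ([5.0, 5.0], "fraud"), ([5.1, 4.9], "fraud")]
label = knn_predict(train, [4.8, 5.2], k=3)
```

The sketch also makes the paper's point concrete: K-NN yields a prediction but no fitted coefficients or importances, so it predicts without explaining.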
APA, Harvard, Vancouver, ISO, and other styles
48

Adok, Claudia. "Retrieval of Cloud Top Pressure." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-129805.

Full text
Abstract:
In this thesis, two predictive models, the multilayer perceptron and the random forest, are evaluated to predict cloud top pressure. The dataset used in this thesis contains brightness temperatures, reflectances and other useful variables to determine the cloud top pressure from the Advanced Very High Resolution Radiometer (AVHRR) instrument on the two satellites NOAA-17 and NOAA-18 during the time period 2006-2009. The dataset also contains numerical weather prediction (NWP) variables calculated using mathematical models. In the dataset there are also observed cloud top pressure and cloud top height estimates from the more accurate instrument on the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) satellite. The predicted cloud top pressure is converted into an interpolated cloud top height. The predicted pressure and interpolated height are then evaluated against the more accurate and observed cloud top pressure and cloud top height from the instrument on the satellite CALIPSO. The predictive models have been applied to the data using different sampling strategies to take into account the performance of individual cloud classes prevalent in the data. The multilayer perceptron is trained using both the original cloud top pressure response and a log-transformed response, the latter avoiding the negative output values that are prevalent when using the original response. Results show that overall the random forest model performs better than the multilayer perceptron in terms of root mean squared error and mean absolute error.
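The conversion of a predicted cloud top pressure into an interpolated height can be sketched with linear interpolation in a pressure/height profile. The profile values below are invented for illustration; in the thesis setting such a profile would come from the NWP fields:

```python
# Convert a pressure to a height by linear interpolation between the
# bracketing levels of a profile sorted by descending pressure.

def pressure_to_height(p, profile):
    # profile: list of (pressure_hPa, height_m) pairs, pressure
    # decreasing with height.
    for (p1, h1), (p2, h2) in zip(profile, profile[1:]):
        if p2 <= p <= p1:
            frac = (p1 - p) / (p1 - p2)
            return h1 + frac * (h2 - h1)
    raise ValueError("pressure outside profile")

profile = [(1000.0, 100.0), (850.0, 1500.0), (700.0, 3000.0)]
height = pressure_to_height(775.0, profile)   # halfway between two levels
```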
APA, Harvard, Vancouver, ISO, and other styles
49

Ekeberg, Lukas, and Alexander Fahnehjelm. "Maskininlärning som verktyg för att extrahera information om attribut kring bostadsannonser i syfte att maximera försäljningspris." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-240401.

Full text
Abstract:
The Swedish real estate market has been digitalized over the past decade with the current practice being to post your real estate advertisement online. A question that has arisen is how a seller can optimize their public listing to maximize the selling premium. This paper analyzes the use of three machine learning methods to solve this problem: Linear Regression, Decision Tree Regressor and Random Forest Regressor. The aim is to retrieve information regarding how certain attributes contribute to the premium value. The dataset used contains apartments sold within the years of 2014-2018 in the Östermalm / Djurgården district in Stockholm, Sweden. The resulting models returned an R2-value of approx. 0.26 and Mean Absolute Error of approx. 0.06. While the models were not accurate regarding prediction of premium, information was still able to be extracted from the models. In conclusion, a high amount of views and a publication made in April provide the best conditions for an advertisement to reach a high selling premium. The seller should try to keep the amount of days since publication lower than 15.5 days and avoid publishing on a Tuesday.
The Swedish housing market has become increasingly digitalised over the past decade, with the current practice being for the seller to publish their listing online. A question that arises is how a seller can optimise their advertisement to maximise the bidding premium. This study analyses three machine learning methods to solve this problem: Linear Regression, Decision Tree Regressor and Random Forest Regressor. The aim is to extract information about the significant attributes that affect the bidding premium. The dataset used contains apartments sold during the years 2014-2018 in the Östermalm / Djurgården district of Stockholm. The models produced achieved an R²-value of approximately 0.26 and a Mean Absolute Error of approximately 0.06. Significant information could be extracted from the models even though they were not accurate in predicting the premium. In summary, a large number of viewings and a publication in April create the best conditions for achieving a high bidding premium. The seller should try to keep the number of days since publication below 15.5 and avoid publishing on Tuesdays.
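The two evaluation metrics reported above, R² and Mean Absolute Error, are straightforward to compute from scratch (toy values below for illustration):

```python
# R-squared and mean absolute error, the two metrics used to evaluate
# the regression models above.

def mae(y_true, y_pred):
    # Mean absolute error: average absolute deviation.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.0, 2.0, 3.0, 2.0]
```

An R² near 0.26, as reported, means the models explain only about a quarter of the variance in the premium, which is consistent with the authors using them for attribute insight rather than point prediction.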
APA, Harvard, Vancouver, ISO, and other styles
50

Granström, Daria, and Johan Abrahamsson. "Loan Default Prediction using Supervised Machine Learning Algorithms." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252312.

Full text
Abstract:
It is essential for a bank to estimate the credit risk it carries and the magnitude of exposure it has in the case of non-performing customers. Estimation of this kind of risk has been done by statistical methods for decades, and with respect to recent developments in the field of machine learning, there has been an interest in investigating whether machine learning techniques can perform better quantification of the risk. The aim of this thesis is to examine which method from a chosen set of machine learning techniques exhibits the best performance in default prediction with regards to chosen model evaluation parameters. The investigated techniques were Logistic Regression, Random Forest, Decision Tree, AdaBoost, XGBoost, Artificial Neural Network and Support Vector Machine. An oversampling technique called SMOTE was implemented in order to treat the imbalance between classes for the response variable. The results showed that XGBoost without implementation of SMOTE obtained the best result with respect to the chosen model evaluation metric.
It is necessary for a bank to have a good estimate of how much risk it carries with respect to customer defaults. Various statistical methods have been used to estimate this risk, but with the current development in the field of machine learning, there has been an interest in exploring whether machine learning methods can improve the quality of the risk estimation. The purpose of this thesis is to investigate which of the implemented machine learning methods performs best for modelling default prediction with respect to chosen model validation parameters. The implemented methods were Logistic Regression, Random Forest, Decision Tree, AdaBoost, XGBoost, Artificial Neural Networks and Support Vector Machine. An oversampling technique, SMOTE, was used to treat the imbalance in the class distribution of the response variable. The result was as follows: XGBoost without the implementation of SMOTE showed the best results with respect to the chosen metric.
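The core idea of SMOTE, as used above, can be sketched in a few lines: a synthetic minority sample is placed on the line segment between a minority point and one of its nearest minority neighbours. This is a stripped-down illustration on made-up points, not the full SMOTE algorithm (which samples among the k nearest neighbours and generates many points):

```python
import random

def smote_sample(minority, rng):
    # Pick a minority point, find its nearest minority neighbour,
    # and interpolate a synthetic point between them.
    base = rng.choice(minority)
    others = [m for m in minority if m is not base]
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    neighbour = min(others, key=lambda m: dist(m, base))
    t = rng.random()                  # interpolation factor in [0, 1)
    return [b + t * (n - b) for b, n in zip(base, neighbour)]

rng = random.Random(42)
minority = [[0.0, 0.0], [1.0, 1.0], [10.0, 10.0]]
new = smote_sample(minority, rng)     # lies between two minority points
```

Because the synthetic points interpolate rather than duplicate, the minority class is enlarged without exact copies, which is what distinguishes SMOTE from plain random oversampling.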
APA, Harvard, Vancouver, ISO, and other styles
