Academic literature on the topic 'Random forest regression'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Random forest regression.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Random forest regression"

1

Rigatti, Steven J. "Random Forest." Journal of Insurance Medicine 47, no. 1 (January 1, 2017): 31–39. http://dx.doi.org/10.17849/insm-47-01-31-39.1.

Abstract:
For the task of analyzing survival data to derive risk factors associated with mortality, physicians, researchers, and biostatisticians have typically relied on certain types of regression techniques, most notably the Cox model. With the advent of more widely distributed computing power, methods which require more complex mathematics have become increasingly common. Particularly in this era of “big data” and machine learning, survival analysis has become methodologically broader. This paper aims to explore one technique known as Random Forest. The Random Forest technique is a regression tree technique which uses bootstrap aggregation and randomization of predictors to achieve a high degree of predictive accuracy. The various input parameters of the random forest are explored. Colon cancer data (n = 66,807) from the SEER database is then used to construct both a Cox model and a random forest model to determine how well the models perform on the same data. Both models perform well, achieving a concordance error rate of approximately 18%.
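The two ingredients the abstract names, bootstrap aggregation and randomization of predictors, map directly onto scikit-learn's `RandomForestRegressor`. The sketch below uses synthetic data and arbitrary parameter values, not the paper's SEER analysis:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a regression dataset (the paper uses SEER colon cancer data).
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(
    n_estimators=100,  # number of trees, each grown on a bootstrap sample
    max_features=2,    # predictors randomly considered at each split
    bootstrap=True,    # bootstrap aggregation ("bagging")
    oob_score=True,    # out-of-bag estimate of generalization R^2
    random_state=0,
)
model.fit(X, y)
```

After fitting, `model.oob_score_` reports an out-of-bag R², a built-in check on predictive accuracy that needs no separate test set.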
2

Kaymak, Sertan, and Ioannis Patras. "Multimodal random forest based tensor regression." IET Computer Vision 8, no. 6 (December 2014): 650–57. http://dx.doi.org/10.1049/iet-cvi.2013.0320.

3

Costa, Iago Sousa Lima, Isabelle Cavalcanti Corrêa de Oliveira Serafim, Felipe Mattos Tavares, and Hugo José de Oliveira Polo. "Uranium anomalies detection through Random Forest regression." Exploration Geophysics 51, no. 5 (February 23, 2020): 555–69. http://dx.doi.org/10.1080/08123985.2020.1725387.

4

Tsagkrasoulis, Dimosthenis, and Giovanni Montana. "Random forest regression for manifold-valued responses." Pattern Recognition Letters 101 (January 2018): 6–13. http://dx.doi.org/10.1016/j.patrec.2017.11.008.

5

Pal, Mahesh, N. K. Singh, and N. K. Tiwari. "Pier scour modelling using random forest regression." ISH Journal of Hydraulic Engineering 19, no. 2 (June 2013): 69–75. http://dx.doi.org/10.1080/09715010.2013.772763.

6

Mendez, Guillermo, and Sharon Lohr. "Estimating residual variance in random forest regression." Computational Statistics & Data Analysis 55, no. 11 (November 2011): 2937–50. http://dx.doi.org/10.1016/j.csda.2011.04.022.

7

Grömping, Ulrike. "Variable Importance Assessment in Regression: Linear Regression versus Random Forest." American Statistician 63, no. 4 (November 2009): 308–19. http://dx.doi.org/10.1198/tast.2009.08199.

8

A, Dr Akila, and Ms Padma R. "Breast Cancer Tumor Categorization using Logistic Regression, Decision Tree and Random Forest Classification Techniques." International Journal of Research in Arts and Science 5, Special Issue (August 30, 2019): 282–89. http://dx.doi.org/10.9756/bp2019.1002/27.

9

Milanović, Slobodan, Nenad Marković, Dragan Pamučar, Ljubomir Gigović, Pavle Kostić, and Sladjan D. Milanović. "Forest Fire Probability Mapping in Eastern Serbia: Logistic Regression versus Random Forest Method." Forests 12, no. 1 (December 22, 2020): 5. http://dx.doi.org/10.3390/f12010005.

Abstract:
Forest fire risk has increased globally during the previous decades. The Mediterranean region is traditionally the most at risk in Europe, but continental countries like Serbia have experienced significant economic and ecological losses due to forest fires. To prevent damage to forests and infrastructure, alongside other societal losses, it is necessary to create an effective protection system against fire, which minimizes the harmful effects. Forest fire probability mapping, as one of the basic tools in risk management, allows the allocation of resources for fire suppression, within a fire season, from zones with a lower risk to those under higher threat. Logistic regression (LR) has been used as a standard procedure in forest fire probability mapping, but in the last decade, machine learning methods such as random forest (RF) have become more frequent. The main goals in this study were to (i) determine the main explanatory variables for forest fire occurrence for both models, LR and RF, and (ii) map the probability of forest fire occurrence in Eastern Serbia based on LR and RF. The most important variable was drought code, followed by different anthropogenic features depending on the type of the model. The RF models demonstrated better overall predictive ability than LR models. The map produced may increase firefighting efficiency due to the early detection of forest fires and enable resources to be allocated in the eastern part of Serbia, which covers more than one-third of the country’s area.
10

Sekulić, Aleksandar, Milan Kilibarda, Gerard B. M. Heuvelink, Mladen Nikolić, and Branislav Bajat. "Random Forest Spatial Interpolation." Remote Sensing 12, no. 10 (May 25, 2020): 1687. http://dx.doi.org/10.3390/rs12101687.

Abstract:
For many decades, kriging and deterministic interpolation techniques, such as inverse distance weighting and nearest neighbour interpolation, have been the most popular spatial interpolation techniques. Kriging with external drift and regression kriging have become basic techniques that benefit both from spatial autocorrelation and covariate information. More recently, machine learning techniques, such as random forest and gradient boosting, have become increasingly popular and are now often used for spatial interpolation. Some attempts have been made to explicitly take the spatial component into account in machine learning, but so far, none of these approaches have taken the natural route of incorporating the nearest observations and their distances to the prediction location as covariates. In this research, we explored the value of including observations at the nearest locations and their distances from the prediction location by introducing Random Forest Spatial Interpolation (RFSI). We compared RFSI with deterministic interpolation methods, ordinary kriging, regression kriging, Random Forest and Random Forest for spatial prediction (RFsp) in three case studies. The first case study made use of synthetic data, i.e., simulations from normally distributed stationary random fields with a known semivariogram, for which ordinary kriging is known to be optimal. The second and third case studies evaluated the performance of the various interpolation methods using daily precipitation data for the 2016–2018 period in Catalonia, Spain, and mean daily temperature for the year 2008 in Croatia. Results of the synthetic case study showed that RFSI outperformed most simple deterministic interpolation techniques and had similar performance as inverse distance weighting and RFsp. As expected, kriging was the most accurate technique in the synthetic case study. 
In the precipitation and temperature case studies, RFSI mostly outperformed regression kriging, inverse distance weighting, random forest, and RFsp. Moreover, RFSI was substantially faster than RFsp, particularly when the training dataset was large and high-resolution prediction maps were made.
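The abstract's core idea, using the nearest observations and their distances to the prediction location as covariates, can be sketched as follows. This is a simplified illustration on a synthetic smooth field; the helper name `rfsi_covariates` and all sizes are ours, not the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rfsi_covariates(targets, obs_coords, obs_values, k=3, exclude_self=False):
    """Append the k nearest observed values and their distances, as RFSI
    proposes; exclude_self avoids leakage when building training rows at
    the observation locations themselves."""
    d = np.linalg.norm(targets[:, None, :] - obs_coords[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    start = 1 if exclude_self else 0
    idx = order[:, start:start + k]
    nearest_vals = obs_values[idx]                     # (n, k) observed values
    nearest_dist = np.take_along_axis(d, idx, axis=1)  # (n, k) distances
    return np.hstack([nearest_vals, nearest_dist])

rng = np.random.default_rng(1)
obs_coords = rng.uniform(0, 10, size=(150, 2))
obs_values = np.sin(obs_coords[:, 0]) + 0.1 * obs_coords[:, 1]

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(rfsi_covariates(obs_coords, obs_coords, obs_values, exclude_self=True),
       obs_values)

# Interpolate the field at new, unobserved locations.
new_coords = rng.uniform(0, 10, size=(50, 2))
pred = rf.predict(rfsi_covariates(new_coords, obs_coords, obs_values))
```

A full RFSI model would add further covariates (e.g. elevation or remote-sensing layers) alongside the neighbour values and distances.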

Dissertations / Theses on the topic "Random forest regression"

1

Linusson, Henrik, Robin Rudenwall, and Andreas Olausson. "Random forest och glesa datarepresentationer." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16672.

Abstract:
In silico experimentation is the process of using computational and statistical models to predict medicinal properties in chemicals; as a means of reducing lab work and increasing success rate, this process has become an important part of modern drug development. There are various ways of representing molecules - the problem that motivated this paper derives from collecting substructures of the chemical into what is known as fractional representations. Assembling large sets of molecules represented in this way will result in sparse data, where a large portion of the set is null values. This consumes an excessive amount of computer memory, which inhibits the size of data sets that can be used when constructing predictive models. In this study, we suggest a set of criteria for evaluation of random forest implementations to be used for in silico predictive modeling on sparse data sets, with regard to computer memory usage, model construction time and predictive accuracy. A novel random forest system was implemented to meet the suggested criteria, and experiments were made to compare our implementation to existing machine learning algorithms to establish our implementation's correctness. Experimental results show that our random forest implementation can create accurate prediction models on sparse datasets, with lower memory usage overhead than implementations using a common matrix representation, and in less time than the existing random forest implementations it was evaluated against. We highlight design choices made to accommodate sparse data structures and data sets in the random forest ensemble technique, and therein present potential improvements to feature selection in sparse data sets.
Program: Systemarkitekturutbildningen
2

Linusson, Henrik. "Multi-Output Random Forests." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-17167.

Abstract:
The Random Forests ensemble predictor has proven to be well-suited for solving a multitude of different prediction problems. In this thesis, we propose an extension to the Random Forest framework that allows Random Forests to be constructed for multi-output decision problems with arbitrary combinations of classification and regression responses, with the goal of increasing predictive performance for such multi-output problems. We show that our method for combining decision tasks within the same decision tree reduces prediction error for most tasks compared to single-output decision trees based on the same node impurity metrics, and provide a comparison of different methods for combining such metrics.
Program: Magisterutbildning i informatik
3

Adriansson, Nils, and Ingrid Mattsson. "Forecasting GDP Growth, or How Can Random Forests Improve Predictions in Economics?" Thesis, Uppsala universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-243028.

Abstract:
GDP is used to measure the economic state of a country, and accurate forecasts of it are therefore important. Using the Economic Tendency Survey, we investigate forecasting quarterly GDP growth using the data mining technique Random Forest. Comparisons are made with a benchmark AR(1) model and an ad hoc linear model built on the most important variables suggested by the Random Forest. Evaluation by forecasting shows that the Random Forest makes the most accurate forecasts, supporting the theory that there are benefits to using Random Forests on economic time series.
4

Asritha, Kotha Sri Lakshmi Kamakshi. "Comparing Random forest and Kriging Methods for Surrogate Modeling." Thesis, Blekinge Tekniska Högskola, Fakulteten för datavetenskaper, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20230.

Abstract:
The issue with conducting real experiments in design engineering is the cost of finding an optimal design that fulfills all design requirements and constraints. An alternative to real experiments used by engineers is computer-aided design modeling with computer-simulated experiments. These simulations are conducted to understand functional behavior and to predict possible failure modes in design concepts. However, these simulations may take minutes, hours, or even days to finish. In order to reduce the time consumption and the number of simulations required for design space exploration, surrogate modeling is used. The motive of surrogate modeling is to replace the original system with an approximation function of the simulations that can be computed quickly. The process of surrogate model generation includes sample selection, model generation, and model evaluation. Using surrogate models in design engineering can help reduce design cycle times and cost by enabling rapid analysis of alternative designs. Selecting a suitable surrogate modeling method for a given function with specific requirements is possible by comparing different surrogate modeling methods on different application problems and evaluation metrics. In this thesis, we compare the random forest model and the kriging model based on prediction accuracy, using mathematical test functions. Quantitative experiments were conducted to investigate the performance of the two methods. The experimental analysis found that the kriging models have higher accuracy compared to random forests, while the random forest models have less execution time compared to kriging for the studied mathematical test problems.
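A comparison of the kind the thesis describes can be sketched with scikit-learn, where a Gaussian process regressor plays the role of the kriging model. The 1-D test function and sample sizes below are arbitrary stand-ins for the thesis's mathematical test problems:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def f(x):
    # A smooth, cheap test function standing in for an expensive simulation.
    return np.sin(3 * x) + 0.5 * x

X_train = rng.uniform(0, 4, size=(40, 1))
y_train = f(X_train[:, 0])
X_test = np.linspace(0, 4, 100)[:, None]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
gp.fit(X_train, y_train)
rf.fit(X_train, y_train)

# Compare prediction accuracy of the two surrogates on a dense grid.
gp_rmse = np.sqrt(np.mean((gp.predict(X_test) - f(X_test[:, 0])) ** 2))
rf_rmse = np.sqrt(np.mean((rf.predict(X_test) - f(X_test[:, 0])) ** 2))
```

On a smooth, noiseless function like this, the interpolating Gaussian process is typically far more accurate, consistent with the thesis's finding, while the forest's piecewise-constant predictions can be cheaper to train on larger problems.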
5

Wålinder, Andreas. "Evaluation of logistic regression and random forest classification based on prediction accuracy and metadata analysis." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-35126.

Abstract:
Model selection is an important part of classification. In this thesis we study the two classification models logistic regression and random forest. They are compared and evaluated based on prediction accuracy and metadata analysis. The models were trained on 25 diverse datasets. We calculated the prediction accuracy of both models using RapidMiner. We also collected metadata for the datasets concerning number of observations, number of predictor variables and number of classes in the response variable. There is a correlation between the performance of logistic regression and random forest, with a significant correlation of 0.60 and confidence interval [0.29, 0.79]. The models appear to perform similarly across the datasets, with performance more influenced by choice of dataset than by model selection. Random forest, with an average prediction accuracy of 81.66%, performed better on these datasets than logistic regression, with an average prediction accuracy of 73.07%. The difference is however not statistically significant, with a p-value of 0.088 for Student's t-test. Multiple linear regression analysis reveals that none of the analysed metadata have a significant linear relationship with logistic regression performance; the regression of logistic regression performance on metadata has a p-value of 0.66. We get similar results with random forest performance: the regression of random forest performance on metadata has a p-value of 0.89, and none of the analysed metadata have a significant linear relationship with random forest performance. We conclude that the prediction accuracies of logistic regression and random forest are correlated. Random forest performed slightly better on the studied datasets but the difference is not statistically significant. The studied metadata do not appear to have a significant effect on the prediction accuracy of either model.
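The fold-paired accuracy comparison and significance test the thesis performs can be sketched on a single public dataset (an illustration only; the thesis uses 25 datasets and RapidMiner rather than scikit-learn):

```python
from scipy.stats import ttest_rel
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

lr = LogisticRegression(max_iter=5000)  # logistic regression baseline
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Accuracy on the same 10 folds for both models, then a paired t-test
# on the fold-wise differences.
lr_acc = cross_val_score(lr, X, y, cv=10)
rf_acc = cross_val_score(rf, X, y, cv=10)
t_stat, p_value = ttest_rel(rf_acc, lr_acc)
```

Pairing by fold (or, as in the thesis, by dataset) controls for the fact that both models see the same data splits, making the t-test on the differences more powerful than comparing two independent means.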
6

Björk, Gustaf, and Carlsson Tobias. "Klassificeringsmetoder med medicinska tillämpningar : En jämförande studie mellan logistisk regression, elastic net och random forest." Thesis, Umeå universitet, Statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-122698.

Abstract:
Medical research today generates very large amounts of data of varied character. As a result, statistical classification methods have become increasingly popular as decision support in medical research and practice. This thesis investigates whether any of the classification methods logistic regression, elastic net, or random forest performs better than the others when the ratio of observations to explanatory variables in the data varies. The classification performance of the methods is evaluated using cross-validation. The results show that the methods perform equally well when the data contain more observations than explanatory variables, and also when the data contain more explanatory variables than observations. However, elastic net performs clearly better than the other methods on data where the number of observations is roughly equal to the number of explanatory variables. Furthermore, the results indicate that all three methods can advantageously be used on data with more variables than observations, which is common for genetic data, provided that manual variable selection is performed for logistic regression so that the method can be applied in this situation.
7

Ankaräng, Marcus, and Jakob Kristiansson. "Comparison of Logistic Regression and an Explained Random Forest in the Domain of Creditworthiness Assessment." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-301907.

Abstract:
As the use of AI in society is developing, the requirement of explainable algorithms has increased. A challenge with many modern machine learning algorithms is that they, due to their often complex structures, lack the ability to produce human-interpretable explanations. Research within explainable AI has resulted in methods that can be applied on top of non-interpretable models to motivate their decision bases. The aim of this thesis is to compare an unexplained machine learning model used in combination with an explanatory method, and a model that is explainable through its inherent structure. Random forest was the unexplained model in question and the explanatory method was SHAP. The explainable model was logistic regression, which is explanatory through its feature weights. The comparison was conducted within the area of creditworthiness and was based on predictive performance and explainability. Furthermore, the thesis intends to use these models to investigate what characterizes loan applicants who are likely to default. The comparison showed that no model performed significantly better than the other in terms of predictive performance. Characteristics of bad loan applicants differed between the two algorithms. Three important aspects were the applicant’s age, where they lived and whether they had a residential phone. Regarding explainability, several advantages with SHAP were observed. With SHAP, explanations on both a local and a global level can be produced. Also, SHAP offers a way to take advantage of the high performance in many modern machine learning algorithms, and at the same time fulfil today’s increased requirement of transparency.
8

Jonsson, Estrid, and Sara Fredrikson. "An Investigation of How Well Random Forest Regression Can Predict Demand : Is Random Forest Regression better at predicting the sell-through of close to date products at different discount levels than a basic linear model?" Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302025.

Abstract:
As the climate crisis continues to evolve many companies focus their development on becoming more sustainable. With greenhouse gases being highlighted as the main problem, food waste has obtained a great deal of attention after being named the third largest contributor to global emissions. One way retailers have attempted to improve is through offering close-to-date produce at discount, hence decreasing levels of food being thrown away. To minimize waste the level of discount must be optimized, and as the products can be seen as flawed the known price-to-demand relation of the products may be insufficient. The optimization process historically involves generalized linear regression models, however demand is a complex concept influenced by many factors. This report investigates whether a Machine Learning model, Random Forest Regression, is better at estimating the demand of close-to-date products at different discount levels than a basic linear regression model. The discussion also includes an analysis on whether discounts always increase the will to buy and whether this depends on product type. The results show that Random Forest to a greater extent considers the many factors influencing demand and is superior as a predictor in this case. Furthermore it was concluded that there is generally not a clear linear relation however this does depend on product type as certain categories showed some linearity.
9

Maginnity, Joseph D. "Comparing the Uses and Classification Accuracy of Logistic and Random Forest Models on an Adolescent Tobacco Use Dataset." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586997693789325.

10

Braff, Pamela Hope. "Not All Biomass is Created Equal: An Assessment of Social and Biophysical Factors Constraining Wood Availability in Virginia." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/63997.

Abstract:
Most estimates of wood supply do not reflect the true availability of wood resources. The availability of wood resources ultimately depends on collective wood harvesting decisions across the landscape. Both social and biophysical constraints impact harvesting decisions and thus the availability of wood resources. While most constraints do not completely inhibit harvesting, they may significantly reduce the probability of harvest. Realistic assessments of wood availability and distribution are needed for effective forest management and planning. This study focuses on predicting the probability of harvest at forested FIA plot locations in Virginia. Classification and regression trees, conditional inference trees, random forest, balanced random forest, conditional random forest, and logistic regression models were built to predict harvest as a function of social and biophysical availability constraints. All of the models were evaluated and compared to identify important variables constraining harvest, predict future harvests, and estimate the available wood supply. Variables related to population and resource quality seem to be the best predictors of future harvest. The balanced random forest and logistic regression models are recommended for predicting future harvests. The balanced random forest model is the best predictor, while the logistic regression model can be most easily shared and replicated. Both models were applied to predict harvest at recently measured FIA plots. Based on the probability of harvest, we estimate that between 2012 and 2017, 10 – 21 percent of total wood volume on timberland will be available for harvesting.
Master of Science

Books on the topic "Random forest regression"

1

Elwood, Mark. Combining results from several studies: systematic reviews and meta-analyses. Oxford University Press, 2017. http://dx.doi.org/10.1093/med/9780199682898.003.0009.

Abstract:
This chapter explains systematic reviews, the PRISMA format, and meta-analysis. It discusses publication bias, outcome reporting bias, funnel plots, the issue of false positive results in small studies, along with search strategies, electronic databases, PubMed, and the Cochrane collaboration. It discusses the assessment of quality, risks of bias, limitations of meta-analysis, heterogeneity testing, effect modification, and meta-regression methods. In part two, statistical methods for meta-analyses are presented, including the Mantel-Haenszel and Peto methods for individual patient data, the inverse variance weighted method using final results, and random effects methods. Forest plots and tests of heterogeneity are explained.
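The inverse-variance weighting and heterogeneity testing the chapter covers amount to a few lines of arithmetic. The effect sizes below are invented for illustration:

```python
import numpy as np

# Hypothetical per-study effect estimates (e.g., log odds ratios) and standard errors.
effects = np.array([0.10, 0.25, 0.18, 0.40, 0.05])
se = np.array([0.12, 0.20, 0.15, 0.25, 0.10])

# Fixed-effect pooling: weight each study by the inverse of its variance.
w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

# Cochran's Q tests heterogeneity (compare to chi-square with k - 1 df).
Q = np.sum(w * (effects - pooled) ** 2)

# DerSimonian-Laird between-study variance for the random-effects model.
k = len(effects)
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / C)
```

When `tau2` is zero the random-effects estimate collapses to the fixed-effect one; otherwise each weight becomes `1 / (se**2 + tau2)` and the pooling is repeated.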

Book chapters on the topic "Random forest regression"

1

Vadlamani, Ravi, and Anurag Sharma. "Support Vector–Quantile Regression Random Forest Hybrid for Regression Problems." In Lecture Notes in Computer Science, 149–60. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13365-2_14.

2

Kacete, Amine, Renaud Séguier, Michel Collobert, and Jérôme Royan. "Unconstrained Gaze Estimation Using Random Forest Regression Voting." In Computer Vision – ACCV 2016, 419–32. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54187-7_28.

3

Sumit, Shahriar Shakir, Junzo Watada, Fatema Nasrin, Nafiz Ishtiaque Ahmed, and D. R. A. Rambli. "Imputing Missing Values: Reinforcement Bayesian Regression and Random Forest." In Studies in Computational Intelligence, 81–88. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-49536-7_8.

4

Asad, Muhammad, and Greg Slabaugh. "Hand Orientation Regression Using Random Forest for Augmented Reality." In Lecture Notes in Computer Science, 159–74. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-13969-2_13.

5

Wlaszczyk, Agata, Agnieszka Kaminska, Agnieszka Pietraszek, Jakub Dabrowski, Mikolaj A. Pawlak, and Hanna Nowicka. "Predicting Fluid Intelligence from Structural MRI Using Random Forest regression." In Adolescent Brain Cognitive Development Neurocognitive Prediction, 83–91. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31901-4_10.

6

Aliyeva, Aysel. "Predicting Stock Prices Using Random Forest and Logistic Regression Algorithms." In 11th International Conference on Theory and Application of Soft Computing, Computing with Words and Perceptions and Artificial Intelligence - ICSCCW-2021, 95–101. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-92127-9_16.

7

Lasota, Tadeusz, Zbigniew Telec, Bogdan Trawiński, and Grzegorz Trawiński. "Investigation of Random Subspace and Random Forest Regression Models Using Data with Injected Noise." In Lecture Notes in Computer Science, 1–10. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-37343-5_1.

8

Roberts, M. G., Timothy F. Cootes, and J. E. Adams. "Automatic Location of Vertebrae on DXA Images Using Random Forest Regression." In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012, 361–68. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33454-2_45.

9

Yadav, Chandra Shekhar, and Aditi Sharan. "Feature Learning Using Random Forest and Binary Logistic Regression for ATDS." In Algorithms for Intelligent Systems, 341–52. Singapore: Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-3357-0_22.

10

Cootes, Tim F., Mircea C. Ionita, Claudia Lindner, and Patrick Sauer. "Robust and Accurate Shape Model Fitting Using Random Forest Regression Voting." In Computer Vision – ECCV 2012, 278–91. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33786-4_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Random forest regression"

1

Lee, Sangkyu, Sarah Kerns, Barry Rosenstein, Harry Ostrer, Joseph O. Deasy, and Jung Hun Oh. "Preconditioned Random Forest Regression." In BCB '17: 8th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3107411.3108201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Zhu, Lin, Jiaxing Lu, and Yihong Chen. "HDI-Forest: Highest Density Interval Regression Forest." In Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19). California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/621.

Full text
Abstract:
By seeking the narrowest prediction intervals (PIs) that satisfy a specified coverage probability, the recently proposed quality-based PI learning principle can extract high-quality PIs that better summarize predictive uncertainty in regression tasks, and it has been widely applied to practical problems. The current state-of-the-art quality-based PI estimation methods are based on deep neural networks or linear models. In this paper, we propose Highest Density Interval Regression Forest (HDI-Forest), a novel quality-based PI estimation method based instead on Random Forest. HDI-Forest requires no additional model training and directly reuses the trees learned in a standard Random Forest model. By exploiting the special properties of Random Forest, HDI-Forest can efficiently and more directly optimize the PI quality metrics. Extensive experiments on benchmark datasets show that HDI-Forest significantly outperforms previous approaches, reducing the average PI width by over 20% while achieving the same or better coverage probability.
APA, Harvard, Vancouver, ISO, and other styles
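The core idea in the HDI-Forest abstract above, reusing the trees of an already-trained Random Forest to form prediction intervals, can be illustrated with a simplified sketch: treat the per-tree predictions at a query point as an empirical predictive distribution and take the narrowest window of sorted predictions that covers the desired fraction of trees. This is only an approximation of what the paper does (HDI-Forest optimizes PI quality metrics more directly); the function name and synthetic data below are our own illustrative choices.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def hdi_from_trees(rf, x, coverage=0.9):
    # Per-tree predictions at x approximate a predictive distribution.
    preds = np.sort([tree.predict(x.reshape(1, -1))[0] for tree in rf.estimators_])
    n = len(preds)
    k = int(np.ceil(coverage * n))  # number of trees the interval must cover
    # Slide a window of k consecutive sorted predictions; the narrowest
    # such window is the empirical highest density interval.
    widths = preds[k - 1:] - preds[:n - k + 1]
    i = int(np.argmin(widths))
    return preds[i], preds[i + k - 1]

lo, hi = hdi_from_trees(rf, X[0])
```

The window search is what distinguishes a highest density interval from a plain central quantile interval: both cover the same fraction of tree predictions, but the HDI is the narrowest such interval.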
3

Rodrigues, Nigel, Nelson Sequeira, Stephen Rodrigues, and Varsha Shrivastava. "Cricket Squad Analysis Using Multiple Random Forest Regression." In 2019 1st International Conference on Advances in Information Technology (ICAIT). IEEE, 2019. http://dx.doi.org/10.1109/icait47043.2019.8987367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kurniawati, Nazmia, Dianing Novita Nurmala Putri, and Yuli Kurnia Ningsih. "Random Forest Regression for Predicting Metamaterial Antenna Parameters." In 2020 2nd International Conference on Industrial Electrical and Electronics (ICIEE). IEEE, 2020. http://dx.doi.org/10.1109/iciee49813.2020.9276899.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yadav, Manish, and Vadlamani Ravi. "Quantile Regression Random Forest Hybrids Based Data Imputation." In 2018 IEEE 17th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC). IEEE, 2018. http://dx.doi.org/10.1109/icci-cc.2018.8482040.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gao, Junning, and Xiaochun Lu. "Forecast of China Railway Freight Volume by Random Forest Regression Model." In 2015 International Conference on Logistics, Informatics and Service Sciences (LISS). IEEE, 2015. http://dx.doi.org/10.1109/liss.2015.7369654.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Hao, Zhulin, Jianqiang Du, Bin Nie, Fang Yu, Riyue Yu, and Wangping Xiong. "Random Forest Regression Based on Partial Least Squares Connect Partial Least Squares and Random Forest." In 2016 International Conference on Artificial Intelligence: Technologies and Applications. Paris, France: Atlantis Press, 2016. http://dx.doi.org/10.2991/icaita-16.2016.48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Luo, Hanwu, Xiubao Pan, Qingshun Wang, Shasha Ye, and Ying Qian. "Logistic Regression and Random Forest for Effective Imbalanced Classification." In 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC). IEEE, 2019. http://dx.doi.org/10.1109/compsac.2019.00139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Shreyas, R., D. M. Akshata, B. S. Mahanand, B. Shagun, and C. M. Abhishek. "Predicting popularity of online articles using Random Forest regression." In 2016 Second International Conference on Cognitive Computing and Information Processing (CCIP). IEEE, 2016. http://dx.doi.org/10.1109/ccip.2016.7802890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lutfi, Moch, Sheilla Putri Agustin, and Intan Nurma Yulita. "LQ45 Stock Price Prediction Using Linear Regression Algorithm, SMO Regression, and Random Forest." In 2021 International Conference on Artificial Intelligence and Big Data Analytics (ICAIBDA). IEEE, 2021. http://dx.doi.org/10.1109/icaibda53487.2021.9689749.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Random forest regression"

1

Puttanapong, Nattapong, Arturo M. Martinez Jr, Mildred Addawe, Joseph Bulan, Ron Lester Durante, and Marymell Martillan. Predicting Poverty Using Geospatial Data in Thailand. Asian Development Bank, December 2020. http://dx.doi.org/10.22617/wps200434-2.

Full text
Abstract:
This study examines an alternative approach in estimating poverty by investigating whether readily available geospatial data can accurately predict the spatial distribution of poverty in Thailand. It also compares the predictive performance of various econometric and machine learning methods such as generalized least squares, neural network, random forest, and support vector regression. Results suggest that intensity of night lights and other variables that approximate population density are highly associated with the proportion of population living in poverty. The random forest technique yielded the highest level of prediction accuracy among the methods considered, perhaps due to its capability to fit complex association structures even with small and medium-sized datasets.
APA, Harvard, Vancouver, ISO, and other styles
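The comparison described in the abstract above, a random forest against linear baselines on geospatial predictors of poverty, can be sketched with scikit-learn on synthetic data. The predictor names and data-generating process below are illustrative stand-ins (the study's actual features, generalized least squares setup, and results are not reproduced here); the sketch only demonstrates why a random forest can outperform a linear model when the association is nonlinear.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
night_lights = rng.gamma(2.0, 2.0, size=n)    # stand-in: night-light intensity
road_density = rng.uniform(0.0, 1.0, size=n)  # stand-in: population-density proxy
built_up = rng.uniform(0.0, 1.0, size=n)
X = np.column_stack([night_lights, road_density, built_up])
# Poverty rate declines nonlinearly with night-light intensity (illustrative only).
y = 1.0 / (1.0 + night_lights) + 0.05 * rng.normal(size=n)

scores = {}
for name, model in [("linear baseline (OLS)", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    scores[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print(scores)
```

On this synthetic nonlinear relation the forest's cross-validated R² exceeds the linear baseline's, mirroring the study's finding that the random forest fit complex association structures best.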
2

Liu, Hongrui, and Rahul Ramachandra Shetty. Analytical Models for Traffic Congestion and Accident Analysis. Mineta Transportation Institute, November 2021. http://dx.doi.org/10.31979/mti.2021.2102.

Full text
Abstract:
In the US, over 38,000 people die in road crashes each year, and 2.35 million are injured or disabled, according to a 2020 statistics report from the Association for Safe International Road Travel (ASIRT). In addition, traffic congestion keeps Americans stuck on the road, wasting millions of hours and billions of dollars each year. Using statistical techniques and machine learning algorithms, this research developed accurate predictive models for traffic congestion and road accidents to increase understanding of the complex causes of these challenging issues. The research used US Accidents data consisting of 49 variables describing 4.2 million accident records from February 2016 to December 2020, along with logistic regression, tree-based techniques such as the Decision Tree Classifier and Random Forest Classifier (RF), and Extreme Gradient Boosting (XGBoost), to process the data and train the models. These models will assist people in making smart real-time transportation decisions to improve mobility and reduce accidents.
APA, Harvard, Vancouver, ISO, and other styles
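The modeling setup in the abstract above, logistic regression versus tree-based classifiers on a rare outcome, can be sketched with scikit-learn. The snippet uses a synthetic imbalanced dataset in place of the US Accidents data and omits gradient boosting for brevity; nothing here reproduces the report's actual features or numbers.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the accident data: a 10% minority "severe" class.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    # class_weight="balanced" reweights the minority class during training,
    # a common adjustment when one outcome is rare.
    "random forest": RandomForestClassifier(n_estimators=200,
                                            class_weight="balanced",
                                            random_state=0),
}
f1 = {name: f1_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
      for name, m in models.items()}
print(f1)
```

F1 on the minority class is used instead of accuracy because, with a 90/10 split, always predicting the majority class already scores 90% accuracy while catching no severe accidents.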