To see the other types of publications on this topic, follow the link: Deep neural decision forest.

Dissertations / Theses on the topic 'Deep neural decision forest'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 37 dissertations / theses for your research on the topic 'Deep neural decision forest.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Granström, Daria, and Johan Abrahamsson. "Loan Default Prediction using Supervised Machine Learning Algorithms." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252312.

Full text
Abstract:
It is essential for a bank to estimate the credit risk it carries and the magnitude of exposure it has in case of non-performing customers. Estimation of this kind of risk has been done with statistical methods for decades, and given recent developments in the field of machine learning, there has been interest in investigating whether machine learning techniques can quantify the risk better. The aim of this thesis is to examine which method from a chosen set of machine learning techniques exhibits the best performance in default prediction with regard to chosen model evaluation parameters. The investigated techniques were Logistic Regression, Random Forest, Decision Tree, AdaBoost, XGBoost, Artificial Neural Network and Support Vector Machine. An oversampling technique called SMOTE was implemented in order to treat the imbalance between classes for the response variable. The results showed that XGBoost without implementation of SMOTE obtained the best result with respect to the chosen model evaluation metric.
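As a hedged illustration of the pipeline this abstract describes, here is a minimal sketch assuming the scikit-learn, imbalanced-learn and xgboost libraries, with synthetic stand-in data rather than the thesis's credit dataset:

```python
# Sketch: SMOTE oversampling + XGBoost for imbalanced default prediction.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

# Synthetic stand-in: roughly 5% defaulters, mimicking the class imbalance.
X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split, never the test set.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

model = XGBClassifier(n_estimators=200, eval_metric="logloss")
model.fit(X_res, y_res)
print("F1 on held-out data:", f1_score(y_te, model.predict(X_te)))
```

Note that the thesis found XGBoost without SMOTE best on its metric; whether oversampling helps is dataset-dependent.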
APA, Harvard, Vancouver, ISO, and other styles
2

Bengana, M. (Mohamed). "Land cover and forest segmentation using deep neural networks." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201905101715.

Full text
Abstract:
Land Use and Land Cover (LULC) information is important for a variety of applications, notably ones related to forestry. The segmentation of remotely sensed images has attracted considerable research interest. However, this is no easy task, with various challenges to face, including the complexity of satellite images, the difficulty of obtaining them, and the lack of ready datasets. It has become clear that classification over multiple classes requires more elaborate methods such as Deep Learning (DL). Deep Neural Networks (DNNs) have promising potential to be good candidates for the task. However, DNNs require a huge amount of training data, including the Ground Truth (GT) data. In this thesis, a DL pixel-based approach backed by state-of-the-art semantic segmentation methods is followed to tackle the problem of LULC mapping. The DNN used is based on the DeepLabv3 network with an encoder-decoder architecture. To tackle the lack of data, the Sentinel-2 satellite, whose data is provided for free by Copernicus, was used with the GT mapping from Corine Land Cover (CLC), provided by Copernicus and modified by Tyke to a higher resolution. From the multispectral Sentinel-2 images, the Red-Green-Blue (RGB) and Near-Infrared (NIR) channels were extracted, the fourth channel being extremely useful for the detection of vegetation. This achieved quite good accuracy with a DNN based on ResNet-50, measured using the Mean Intersection over Union (MIoU) metric and reaching 0.53 MIoU. It was possible to use this data to transfer the learning to data from the Pleiades-1 satellite with much better, in fact Very High Resolution (VHR), imagery. The results were excellent, especially when compared with training directly on that data, reaching an accuracy of 0.98 and 0.85 MIoU.
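The accuracy above is reported as Mean Intersection over Union (MIoU). A small NumPy sketch of that metric, computed from a per-pixel confusion matrix (toy labels, not the actual Sentinel-2 data):

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes):
    """MIoU over flattened per-pixel label arrays."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)  # confusion matrix
    intersection = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - intersection
    iou = intersection / np.maximum(union, 1)  # guard against empty classes
    return iou.mean()

# Toy example with 3 land-cover classes on a 4-pixel "image".
print(mean_iou(np.array([0, 1, 2, 2]), np.array([0, 1, 2, 1]), n_classes=3))
```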
APA, Harvard, Vancouver, ISO, and other styles
3

Kunz, Jenny. "Neural Language Models with Explicit Coreference Decision." Thesis, Uppsala universitet, Institutionen för lingvistik och filologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-371827.

Full text
Abstract:
Coreference is an important and frequent concept in any form of discourse, and Coreference Resolution (CR) a widely used task in Natural Language Understanding (NLU). In this thesis, we implement and explore two recent models that include the concept of coreference in Recurrent Neural Network (RNN)-based Language Models (LMs). Entity and reference decisions are modeled explicitly in these models using attention mechanisms. Both models learn to store previously observed entities in a set and to decide whether the next token created by the LM is a mention of one of the entities in the set, an entity that has not been observed yet, or not an entity. After a theoretical analysis in which we compare the two LMs to each other and to a state-of-the-art Coreference Resolution system, we perform an extensive quantitative and qualitative analysis. For this purpose, we train the two models and a classical RNN-LM as the baseline model on the OntoNotes 5.0 corpus with coreference annotation. While we do not reach the baseline in the perplexity metric, we show that the models' relative performance on entity tokens has the potential to improve when the explicit entity modeling is included. We show that the most challenging point in the systems is the decision of whether the next token is an entity token, while the decision of which entity the next token refers to performs comparatively well. Our analysis in the context of a text generation task shows that a widespread error source for the mention creation process is the confusion of tokens that refer to related but different entities in the real world, presumably a result of the context-based word representations in the models. Our re-implementation of the DeepMind model by Yang et al. (2016) performs notably better than the re-implementation of the EntityNLM model by Ji et al. (2017), with a perplexity of 107 compared to a perplexity of 131.
APA, Harvard, Vancouver, ISO, and other styles
4

Lind, Sebastian. "Ensemble approach to prediction of initial velocity centered around random forest regression and feed forward deep neural networks." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-79956.

Full text
Abstract:
The initial velocity of an artillery system is hard to determine with statistical and analytical models. Machine learning is therefore tested in order to achieve a higher accuracy than the current method (baseline). An ensemble approach is explored in this paper, centered around a feed-forward deep neural network and random forest regression. Furthermore, the collinearity of features and their importance is investigated. The impact of the measured error on the range of the projectile is also derived by finding a numerical solution with the Newton-Raphson method. For the five systems on which test data was used, the mean absolute errors were 26, 9.33, 8.72 and 9.06 for the deep neural network, random forest regression, ensemble learning and the conventional method, respectively. For future work, more models should be tested with ensemble learning, as well as an investigation of the feature space for the input data.
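As a hedged sketch of the Newton-Raphson step mentioned above, under a simplified drag-free ballistic model (the thesis's actual trajectory model is not public): solve range(v) = target for the initial velocity v, then probe how a small velocity error shifts the range.

```python
import math

G = 9.81                  # gravity, m/s^2
THETA = math.radians(45)  # launch angle (assumed)

def projectile_range(v):
    # Drag-free range; a stand-in for the real ballistic model.
    return v * v * math.sin(2 * THETA) / G

def newton_raphson(f, x0, tol=1e-9, max_iter=50):
    x, h = x0, 1e-6
    for _ in range(max_iter):
        dfdx = (f(x + h) - f(x - h)) / (2 * h)  # central-difference derivative
        step = f(x) / dfdx
        x -= step
        if abs(step) < tol:
            break
    return x

# Velocity needed to reach 10 km, and range shift from a 1 m/s velocity error.
v0 = newton_raphson(lambda v: projectile_range(v) - 10_000.0, x0=300.0)
print(v0, projectile_range(v0 + 1.0) - projectile_range(v0))
```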
APA, Harvard, Vancouver, ISO, and other styles
5

Nylund, Andreas. "To be, or not to be Melanoma : Convolutional neural networks in skin lesion classification." Thesis, KTH, Medicinsk teknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-190000.

Full text
Abstract:
Machine learning methods provide an opportunity to improve the classification of skin lesions and the early diagnosis of melanoma by providing decision support for general practitioners. So far most studies have been looking at the creation of features that best indicate melanoma. Representation learning methods such as neural networks have outperformed hand-crafted features in many areas. This work aims to evaluate the performance of convolutional neural networks in relation to earlier machine learning algorithms and expert diagnosis. In this work, convolutional neural networks were trained on datasets of dermoscopy images using weights initialized from a random distribution, a network trained on the ImageNet dataset, and a network trained on Dermnet, a skin disease atlas. The ensemble sum prediction of the networks achieved an accuracy of 89.3% with a sensitivity of 77.1% and a specificity of 93.0% when based on the weights learned from the ImageNet dataset and the Dermnet skin disease atlas and trained on non-polarized light dermoscopy images. The results from the different networks trained on little or no prior data confirm the idea that certain features are transferable between different data. Classification accuracies similar to that of the highest-scoring network are achieved by expert dermatologists, and slightly higher results are achieved by referenced hand-crafted classifiers. The trained networks are found to be comparable to practicing dermatologists and state-of-the-art machine learning methods in binary classification accuracy, benign versus melanoma, with only little pre-processing and tuning.
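The reported sensitivity and specificity follow directly from the binary confusion matrix. A short sketch of those metrics for the benign-versus-melanoma setting (illustrative labels only):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on melanoma=1), specificity (recall on benign=0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

print(binary_metrics([1, 1, 0, 0, 0], [1, 0, 0, 0, 1]))
```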
APA, Harvard, Vancouver, ISO, and other styles
6

Landmér, Pedersen Jesper. "Weighing Machine Learning Algorithms for Accounting RWISs Characteristics in METRo : A comparison of Random Forest, Deep Learning & kNN." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-85586.

Full text
Abstract:
The numerical model for forecasting road conditions, the Model of the Environment and Temperature of Roads (METRo), laid the foundation for solving the energy balance and calculating the temperature evolution of roads. METRo does this by providing a numerical modelling system making use of Road Weather Information Stations (RWIS) and meteorological projections. While METRo accommodates tools for correcting errors at each station, such as regional differences or microclimates, this thesis proposes machine learning as a supplement to the METRo forecasts for accounting for station characteristics. Controlled experiments were conducted comparing four regression algorithms, namely a recurrent and a dense neural network, random forest and k-nearest neighbour, to predict the squared deviation of METRo-forecasted road surface temperatures. The results reveal that the models utilising the random forest algorithm yielded the most reliable predictions of METRo deviations. However, the study also presents the promise of neural networks and the ability, and possible advantage, of seasonal adjustments that the networks could offer.
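A minimal sketch of the best-performing setup described above, assuming scikit-learn and random stand-in data (the real RWIS features are not public): a random forest regressor trained on the squared deviation of the physical forecast.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))  # stand-in station/weather features
metro_error = X[:, 0] * 1.5 + rng.normal(scale=0.5, size=2000)  # toy deviation

# Target is the squared deviation of METRo's surface-temperature forecast.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:1500], metro_error[:1500] ** 2)
print(rf.predict(X[1500:1505]))
```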
APA, Harvard, Vancouver, ISO, and other styles
7

Hammarström, Tobias. "Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks : An exploration into explainable AI and potential applications within cancer detection." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-424779.

Full text
Abstract:
The influence of Artificial Intelligence (AI) on society is increasing, with applications in highly sensitive and complicated areas. Examples include using Deep Convolutional Neural Networks within healthcare for diagnosing cancer. However, the inner workings of such models are often unknown, limiting the much-needed trust in the models. To combat this, Explainable AI (XAI) methods aim to provide explanations of the models' decision-making. Two such methods, Spectral Relevance Analysis (SpRAy) and Testing with Concept Activation Vectors (TCAV), were evaluated on a deep learning model classifying cat and dog images that contained introduced artificial noise. The task was to assess the methods' capabilities to explain the importance of the introduced noise for the learnt model. The task was constructed as an exploratory step, with the future aim of using the methods on models diagnosing oral cancer. In addition to using the TCAV method as introduced by its authors, this study also utilizes the CAV-sensitivity to introduce and perform a sensitivity magnitude analysis. Both methods proved useful in discerning between the model's two decision-making strategies, based on either the animal or the noise. However, greater insight into the intricacies of said strategies is desired. Additionally, the methods provided a deeper understanding of the model's learning, as the model did not seem to properly distinguish between the noise and the animal conceptually. The methods thus accentuated the limitations of the model, thereby increasing our trust in its abilities. In conclusion, the methods show promise regarding the task of detecting visually distinctive noise in images, which could extend to other distinctive features present in more complex problems. Consequently, more research should be conducted on applying these methods to more complex areas with specialized models and tasks, e.g. oral cancer.
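A heavily hedged sketch of the CAV idea underlying TCAV: fit a linear classifier separating concept activations from random activations, take its normal vector as the Concept Activation Vector, and measure sensitivity as a directional derivative along it. Everything here is a stand-in (random "activations" and a random gradient), not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(0.5, 1.0, size=(100, 64))  # layer activations, concept images
random_acts = rng.normal(0.0, 1.0, size=(100, 64))   # layer activations, random images

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)  # CAV: direction of the concept in activation space

# CAV-sensitivity: gradient of the class logit w.r.t. the layer, dotted with the CAV.
grad_logit = rng.normal(size=64)  # stand-in gradient for one input
print("sensitivity:", float(grad_logit @ cav))
```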
APA, Harvard, Vancouver, ISO, and other styles
8

Huatuco, Santos Gustavo. "Soccer Coach Decision Support System." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/15136/.

Full text
Abstract:
The competitive essence of sport means that those who work in it hunt for the win. The sport enterprise is undergoing a gigantic digital transformation focused on imaging, real-time processing and data analysis of competitions. Conventional process methods in sports management, such as fitness and health establishments, training, development and match realisation, are all being revolutionised by sport digitisation. In team sports it is well known that a simple yet adequate digital methodology is needed to organise and construct a feasible strategy. Digitisation in sports is perpetually evolving and poses pervasive challenges. Success in digitising sports and athletics rests on what is done with the growing volume of collected data; competitive advantage goes to those who build powerful operations using that data and act on it in real time. The potential impact of these capabilities on sport team operations is considerable: data does not drive all decisions, but it empowers informed decisions. In these circumstances, our vision for this system was born from a dream of helping soccer management systems embrace and improve competitive success. The problem we address is how a decision support system for soccer coaches can help them make better improvement decisions. To face this problem we created a soccer coach decision support system, organised in two joined components. The first simulates the prediction of the soccer match winner through a data-driven neural network; its output activates the second, which performs logic-rule learning and provides stats, analysis, decision making and, additionally, plans improvements such as drills and training procedures. This helps in the preparation for upcoming matches while staying aligned with the team's style and playing concepts. Future development will analyse the mental and moral features of teams by virtue of their athletes' behaviour changes.
APA, Harvard, Vancouver, ISO, and other styles
9

Mohamed, Abdelhack. "Top-down Modulation in Human Visual Cortex." Kyoto University, 2019. http://hdl.handle.net/2433/242434.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Varatharajah, Thujeepan, and Eriksson Victor. "A comparative study on artificial neural networks and random forests for stock market prediction." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-186452.

Full text
Abstract:
This study investigates the predictive performance of two different machine learning (ML) models on the stock market and compares the results. The chosen models are based on artificial neural networks (ANN) and random forests (RF). The models are trained on two separate data sets and the predictions are made on the next-day closing price. The input vectors of the models consist of 6 different financial indicators based on the closing prices of the past 5, 10 and 20 days. The performance evaluation is done by analyzing and comparing values such as the root mean squared error (RMSE) and mean absolute percentage error (MAPE) for the test period. Specific behavior in subsets of the test period is also analyzed to evaluate the consistency of the models. The results showed that the ANN model performed better than the RF model, as it had lower errors relative to the actual prices throughout the test period and thus made more accurate predictions overall.
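The two evaluation metrics used above can be sketched in a few lines (toy prices, not the thesis data):

```python
import numpy as np

def rmse(actual, predicted):
    return np.sqrt(np.mean((np.asarray(actual) - np.asarray(predicted)) ** 2))

def mape(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100  # percent

closing = [101.0, 102.5, 100.8, 103.1]   # actual next-day closing prices (toy)
forecast = [100.5, 103.0, 101.5, 102.0]  # model predictions (toy)
print(f"RMSE={rmse(closing, forecast):.3f}  MAPE={mape(closing, forecast):.2f}%")
```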
APA, Harvard, Vancouver, ISO, and other styles
11

Lundström, Love, and Oscar Öhman. "Machine Learning in credit risk : Evaluation of supervised machine learning models predicting credit risk in the financial sector." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-164101.

Full text
Abstract:
When banks lend money to another party, they face the risk that the borrower will not fulfill its obligation towards the bank. This risk is called credit risk, and it is the largest risk banks face. According to the Basel Accords, banks need to hold a certain amount of capital to protect themselves against future financial crises. This amount is calculated for each loan with an attached risk-weighted asset, RWA. The main parameters in RWA are the probability of default and the loss given default. Banks are today allowed to use their own internal models to calculate these parameters. Since holding capital that earns no interest is a great cost, banks seek tools to better predict the probability of default and thereby lower the capital requirement. Machine learning and supervised algorithms such as logistic regression, neural networks, decision trees and random forests can be used to assess credit risk. By training algorithms on historical data with known outcomes, the parameter probability of default (PD) can be determined with a higher degree of certainty than with traditional models, leading to a lower capital requirement. On the given data set in this article, logistic regression seems to be the algorithm with the highest accuracy in classifying customers into the right category. However, it classifies many customers as false positives, meaning the model thinks a customer will honour its obligation when in fact the customer defaults, which comes at a great cost for the banks. By implementing a cost function to minimize this error, we found that the neural network has the lowest false positive rate and is therefore the model best suited for this specific classification task.
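A hedged sketch of the cost-function idea above: rather than the default 0.5 cutoff, choose the probability threshold minimising an asymmetric cost in which a false positive (predicting a customer will repay when they default) is far more expensive than a false negative. The weights and data are illustrative, not the thesis's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FP_COST, FN_COST = 10.0, 1.0  # assumed: a missed defaulter costs 10x

# Class 1 = honours the obligation (majority), class 0 = defaults.
X, y = make_classification(n_samples=4000, weights=[0.1], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

def total_cost(threshold):
    pred = (proba >= threshold).astype(int)
    fp = np.sum((pred == 1) & (y_te == 0))  # predicted repay, actually defaulted
    fn = np.sum((pred == 0) & (y_te == 1))
    return FP_COST * fp + FN_COST * fn

print("cost-minimising threshold:", min(np.linspace(0.05, 0.95, 19), key=total_cost))
```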
APA, Harvard, Vancouver, ISO, and other styles
12

Park, Samuel M. "A Comparison of Machine Learning Techniques to Predict University Rates." University of Toledo / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1564790014887692.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Yang, Kaolee. "A Statistical Analysis of Medical Data for Breast Cancer and Chronic Kidney Disease." Bowling Green State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1587052897029939.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Bono, Guillaume. "Deep multi-agent reinforcement learning for dynamic and stochastic vehicle routing problems." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI096.

Full text
Abstract:
Routing delivery vehicles in dynamic and uncertain environments like dense city centers is a challenging task which requires robustness and flexibility. Such logistic problems are usually formalized as Dynamic and Stochastic Vehicle Routing Problems (DS-VRPs) with a variety of additional operational constraints, such as capacitated vehicles or time windows (DS-CVRPTWs). The main heuristic approaches to dynamic and stochastic problems simply consist of restarting the optimization process on a frozen (static and deterministic) version of the problem given the new information. Instead, Reinforcement Learning (RL) offers models such as Markov Decision Processes (MDPs) which naturally describe the evolution of stochastic and dynamic systems. Their application to more complex problems has been facilitated by recent progress in Deep Neural Networks, which can learn to represent a large class of functions in high-dimensional spaces to approximate solutions with high performance. Finding a compact and sufficiently expressive state representation is the key challenge in applying RL to VRPs. Recent work exploring this novel approach demonstrated the capability of attention mechanisms to represent sets of customers and learn policies generalizing to different configurations of customers. However, all existing work using DNNs reframes the VRP as a single-vehicle problem and cannot provide online decision rules for a fleet of vehicles. In this thesis, we study how to apply Deep RL methods to rich DS-VRPs as multi-agent systems. We first explore the class of policy-based approaches in Multi-Agent RL and Actor-Critic methods for Decentralized, Partially Observable MDPs in the Centralized Training for Decentralized Control (CTDC) paradigm. To address DS-VRPs, we then introduce a new sequential multi-agent model we call sMMDP. This fully observable model is designed to capture the fact that the consequences of decisions can be predicted in isolation. We then use it to model a rich DS-VRP and propose a new modular policy network, called MARDAM, to represent the state of the customers and the vehicles in this new model. It provides online decision rules adapted to the information contained in the state and takes advantage of the structural properties of the model. Finally, we develop a set of artificial benchmarks to evaluate the flexibility, robustness and generalization capabilities of MARDAM. We report promising results in the dynamic and stochastic case, which demonstrate MARDAM's capacity to address varying scenarios with no re-optimization, adapting to new customers and unexpected delays caused by stochastic travel times. We also implement an additional benchmark based on micro-traffic simulation to better capture the dynamics of a real city and its road infrastructure. We report preliminary results as a proof of concept that MARDAM can learn to represent different scenarios and handle varying traffic conditions and customer configurations.
APA, Harvard, Vancouver, ISO, and other styles
15

Lannge, Jakob, and Ali Majed. "Classifying human activities through machine learning." Thesis, Malmö universitet, Fakulteten för teknik och samhälle (TS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-20115.

Full text
Abstract:
Classifying activities of daily living (ADL) can be used in systems that monitor people's activities for different purposes, for example in emergency systems. Machine learning is a way to classify ADL with high accuracy, using wearable sensors as input. In this paper, a proof-of-concept system consisting of three different machine learning algorithms is evaluated and compared across three different datasets: one publicly available (Ugulino et al., 2012) and two collected for this paper using an Android device's accelerometer and gyroscope sensors. The algorithms are Multiclass Decision Forest, Multiclass Decision Jungle and Multiclass Neural Network. The result shows how a system can be implemented using Azure Machine Learning Studio and how the three algorithms perform when classifying the three datasets. One algorithm achieves a higher accuracy on the Ugulino dataset than the machine learning model initially used with it.
APA, Harvard, Vancouver, ISO, and other styles
16

Nordqvist, My. "Identifiera löv i skogar – Att lära en dator känna igen löv med ImageAI." Thesis, Mittuniversitetet, Institutionen för informationssystem och –teknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-36454.

Full text
Abstract:
A current field of research today is machine learning, because it can simplify everyday life for human beings. A working system that has learned specific tasks can save companies both cost and time. A company that wants to use machine learning is SCA, which owns and manages forests to produce products and has a need to automate forest classification. In order to evaluate forests and to plan forestry measures, the proportion of deciduous (leafy) trees not used in production must be determined. Today, manual work is required of people who have to examine aerial photos to classify the tree types. This study investigates whether it is possible, through machine learning, to teach a computer to determine whether leaves are present in orthophotos. A program is built with the ImageAI library, which provides methods for training on and predicting information in images. The study examines how the choice of neural network and the number of images affect the accuracy of the models and how reliable the models can become. Training time and hardware are also two factors that are investigated. The results show that the ResNet neural network delivers the most accurate results, and the more images the computer trains on, the more accurate the result. The final model is a ResNet model trained on 20,000 images with 79.0 percent accuracy. Based on 50 samples, the mean accuracy is 90.5 percent and the median is 99.6 percent.
APA, Harvard, Vancouver, ISO, and other styles
17

Кичигіна, Анастасія Юріївна. "Прогнозування ІМТ за допомогою методів машинного навчання". Bachelor's thesis, КПІ ім. Ігоря Сікорського, 2020. https://ela.kpi.ua/handle/123456789/37413.

Full text
Abstract:
Thesis: 100 pages, 17 tables, 16 figures, 2 appendices and 24 references. The object of the study is the human body mass index (BMI). The subject of the research is machine learning methods: regression models, the random forest ensemble model and a neural network. This paper studies the dependence of the human body mass index and the presence of excess body weight on eating and lifestyle habits. Machine learning and data analysis methods were used to build the study, work was done to identify opportunities to improve the performance of standard models, and the best model for prediction and classification based on the given data was identified. The work focuses on reducing the dimensionality of the feature space, selecting the best observations with valid data for better model performance, and combining different learning methods to obtain more effective ensemble models.
APA, Harvard, Vancouver, ISO, and other styles
18

Rahman, Md Abdur. "Statistical and Machine Learning for assessment of Traumatic Brain Injury Severity and Patient Outcomes." Thesis, Högskolan Dalarna, Institutionen för information och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:du-37710.

Full text
Abstract:
Traumatic brain injury (TBI) is a leading cause of death in all age groups, causing society to be concerned. However, TBI diagnostics and patient outcome prediction are still lacking in medical science. In this thesis, I used a subset of TBIcare data from Turku University Hospital in Finland to classify the severity, patient outcomes, and CT (computerized tomography) results as positive/negative. The dataset was derived from the comprehensive metabolic profiling of serum samples from TBI patients. The study included 96 TBI patients who were diagnosed as 7 severe (sTBI=7), 10 moderate (moTBI=10), and 79 mild (mTBI=79). Among them, there were 85 good recoveries (Good_Recovery=85) and 11 bad recoveries (Bad_Recovery=11), as well as 49 CT positive (CT.Positive=49) and 47 CT negative (CT.Negative=47). There was a total of 455 metabolites (features), excluding three response variables. Feature selection techniques were applied to retain the most important features while discarding the rest. Subsequently, four classifiers were used: Ridge regression, Lasso regression, a neural network, and deep learning. Ridge regression yielded the best results for binary classifications such as patient outcomes and CT positive/negative. The accuracy for CT positive/negative was 74% (AUC of 0.74), while the accuracy for patient outcomes was 91% (AUC of 0.91). For severity classification (multi-class classification), neural networks performed well, with a total accuracy of 90%. Despite the limited number of data points, the overall result was satisfactory.
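For the binary cases above, a minimal sketch of an L2-penalised (ridge) classifier evaluated with AUC, using scikit-learn on synthetic stand-in data shaped like the study (96 patients by 455 metabolites; the real data are not public):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=96, n_features=455, n_informative=20,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RidgeClassifier(alpha=1.0).fit(X_tr, y_tr)
# RidgeClassifier has no predict_proba; rank test cases by its decision function.
print("AUC:", roc_auc_score(y_te, clf.decision_function(X_te)))
```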
APA, Harvard, Vancouver, ISO, and other styles
19

Kherroubi, Zine el abidine. "Novel off-board decision-making strategy for connected and autonomous vehicles (Use case highway : on-ramp merging)." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSE1331.

Full text
Abstract:
Merging at a highway on-ramp is a significant challenge toward realizing fully automated driving (level 4 of autonomous driving). The combination of communication technology and autonomous driving technology, which underpins the notion of Connected Autonomous Vehicles (CAVs), may greatly improve safety performance when performing highway on-ramp merging. However, even with the emergence of CAVs, some key constraints should be considered to achieve safe on-ramp merging. First, human-driven vehicles will still be present on the road, and it may take decades before all commercialized vehicles are fully autonomous and connected. Also, on-board vehicle sensors may provide inaccurate or incomplete data due to sensor limitations and blind spots, especially in such critical situations. To resolve these issues, the present thesis introduces a novel solution that uses an off-board Road-Side Unit (RSU) to realize fully automated highway on-ramp merging for connected and automated vehicles. Our proposed approach is based on an Artificial Neural Network (ANN) that predicts drivers' intentions. This prediction is used as an input state to a Deep Reinforcement Learning (DRL) agent that outputs the longitudinal acceleration for the merging vehicle. To achieve this, we first show how the road-side unit may be used to enhance perception in the on-ramp zone. We then propose a driver intention model that can predict the behavior of human-driven vehicles in the main highway lane with 99% accuracy. We use the output of this model as an input state to train a Twin Delayed Deep Deterministic Policy Gradients (TD3) agent that learns a "safe" and "cooperative" driving policy for highway on-ramp merging. We show that our proposed decision-making strategy improves performance compared to previously proposed solutions.
APA, Harvard, Vancouver, ISO, and other styles
20

Lantz, Robin. "Time series monitoring and prediction of data deviations in a manufacturing industry." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-100181.

Full text
Abstract:
An automated manufacturing industry makes use of many interacting moving parts and sensors. Data from these sensors generate complex multidimensional data in the production environment. This data is difficult to interpret and difficult to find patterns in. This project provides tools for a deeper understanding of the production data of Swedsafe, a company in the automated manufacturing business. The project is based on, and will show the potential of, this multidimensional production data. The project mainly consists of predicting deviations from predefined threshold values in Swedsafe's production data. Machine learning is a good method for finding relationships in complex datasets, and supervised machine learning classification is used here to predict deviations from threshold values in the data. An investigation is conducted to identify the classifier that performs best on Swedsafe's production data. The sliding-window technique is used for managing the time series data used in this project. Apart from predicting deviations, this project also includes an implementation of live graphs to easily get an overview of the production data. A steady production with stable process values is important, so being able to monitor and predict events in the production environment can provide the same benefit for other manufacturing companies and is therefore suitable not only for Swedsafe. The best-performing machine learning classifier tested in this project was the random forest classifier. The multilayer perceptron did not perform well on Swedsafe's data, but further investigation of recurrent neural networks using LSTM neurons is recommended. During the project, a web-based application displaying the sensor data in live graphs was also developed.
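A short sketch of the sliding-window step mentioned above, which turns a sensor series into fixed-length rows that a supervised classifier can consume (window length and values are illustrative):

```python
import numpy as np

def sliding_windows(series, window, step=1):
    """Stack overlapping windows of a 1-D series as rows of a feature matrix."""
    series = np.asarray(series)
    starts = range(0, len(series) - window + 1, step)
    return np.stack([series[s:s + window] for s in starts])

sensor = np.array([0.1, 0.2, 0.4, 0.9, 1.3, 1.1, 0.8])
print(sliding_windows(sensor, window=3))
# Each row is then paired with a label such as "threshold deviation ahead or not".
```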
APA, Harvard, Vancouver, ISO, and other styles
21

Elkin, Colin P. "Development of Adaptive Computational Algorithms for Manned and Unmanned Flight Safety." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544640516618623.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Stehlík, Petr. "Analýza provozních dat a detekce anomálií při běhu úloh na superpočítači." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-412564.

Full text
Abstract:
In recent years, supercomputers have grown ever larger and more complex, which brings the problem of exploiting the system's full potential. This problem is amplified by the lack of monitoring tools tailored specifically to the users of these systems. The goal of this work is to create a tool, called Examon Web, for analysing and visualising the operational data of a supercomputer, and to perform a deep analysis of these data using neural networks. The networks determine whether a given job ran correctly or showed signs of suspicious and undesirable behaviour, such as unaligned memory accesses or low utilisation of the allocated resources. The user is informed of these findings through the GUI. Examon Web is built on the Examon framework, which collects and processes metric data from the supercomputer and stores them in the KairosDB database. The implementation spans disciplines from GUI design and implementation, through data analysis, data mining and neural networks, to the implementation of the server-side interface. Examon Web is aimed mainly at users but can also be used by administrators. The GUI is built with the Angular framework and the Dygraphs and Bootstrap libraries. It lets users analyse time series of various metrics of their jobs and, like administrators, inspect the current state of the supercomputer. This state is displayed as several globally aggregated metrics over the last 30 minutes, or as a 3D (or 2D) model of the supercomputer that receives data from the nodes themselves via the MQTT protocol. Continuous data acquisition uses the WebSocket interface with a custom mechanism for subscribing and unsubscribing to the specific metrics displayed in the model. When analysing a submitted job, the user has three different views of it. The first offers an overall summary of the job and reports the resources used, the runtime and the load of the part of the supercomputer the job occupied, together with information from the neural networks on how suspicious the job is. The other two views show metrics from the performance and energy perspectives. To train the neural networks, a new dataset had to be created from the Galileo supercomputer. It contains over 1,100 jobs monitored on this supercomputer, of which 500 were manually annotated and then used to train the networks. The neural networks use the back-propagation model, suitable for annotating fixed-length time series. In total, 12 networks were created for metrics covering CPU and memory utilisation and other components, as well as, for example, the share of total CPU time spent in the C6 power-saving state. These networks are independent of each other, and after experiments their final configuration of 80-20-4-3-1 (80 input neurons down to 1 output neuron) gave the best results. A final network (with configuration 12-4-3-1) annotated the outputs of the previous networks. The overall accuracy of the two-class classification system is 84%, which is very good for the model used. This work delivers two products. The first is the user interface and its server side, Examon Web, which, as an extension layer of the Examon system, will help spread the system to further users or directly to other supercomputing centres. The second is a partially annotated dataset, which may help other people in their research and is the result of a collaboration between VUT, UNIBO and CINECA. Both products will be released as open source. Examon Web was presented at the 1st Users' Conference in Ostrava organised by IT4Innovations. Future extensions of the work may include annotating the rest of the dataset and extending Examon Web with decision trees that determine the exact reason for a job's bad behaviour.
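As a hedged illustration of the per-metric 80-20-4-3-1 networks described above: a back-propagation MLP with 80 inputs and hidden layers of 20, 4 and 3 neurons, here via scikit-learn on random stand-in data (the Galileo job dataset is not reproduced):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 80))    # 500 jobs, each a fixed-length series of 80 values
y = rng.integers(0, 2, size=500)  # toy labels: 1 = suspicious job, 0 = correct job

# 80 inputs -> 20 -> 4 -> 3 -> 1 output, trained with back-propagation.
net = MLPClassifier(hidden_layer_sizes=(20, 4, 3), max_iter=500).fit(X, y)
print(net.predict(X[:5]))
```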
APA, Harvard, Vancouver, ISO, and other styles
23

Birindwa, Fleury. "Prestandajämförelse mellan Xception, InceptionV3 och MobileNetV2 för bildklassificering på nätpaneler." Thesis, Jönköping University, JTH, Datateknik och informatik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-51351.

Full text
Abstract:
In recent years, deep learning models have been used in almost all areas, from industry to academia, specifically for image classification. However, these models are huge in size, with millions of parameters, making them difficult to deploy to smaller devices with limited resources such as mobile phones. This study addresses lightweight pre-trained convolutional neural network models that are state of the art in deep learning and whose size makes them suitable as base models for mobile application development. The purpose of this study is to evaluate the performance of Xception, InceptionV3 and MobileNetV2 in order to facilitate the selection of a lightweight convolutional network as a base for the development of mobile applications in image classification. To achieve this purpose, the models were implemented using transfer learning and designed to distinguish images of mesh panels from the company Troax. The study covers the method that allows transfer of knowledge from an existing model to a new model, explains how the training and test processes were conducted, and analyses the results. Results showed that Xception had 86% accuracy and a processing time of 10 minutes on 2000 training images and 1000 test images. Xception's performance was the best among these models. The difference between Xception and InceptionV3 was 10 percentage points in accuracy and 2 minutes in processing time. Between Xception and MobileNetV2 the difference was 23 percentage points in accuracy and 3 minutes in processing time. Experiments showed that these models performed less well with training sets smaller than 800 images; above 800 images, each model began to predict with over 70% accuracy.
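A minimal sketch of the transfer-learning setup compared above, using Keras with frozen ImageNet weights and MobileNetV2 as the base (class count and input size are assumptions; the Troax mesh-panel images are not public):

```python
import tensorflow as tf

NUM_CLASSES = 2  # assumed number of mesh-panel classes

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the ImageNet features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```

Swapping the base for tf.keras.applications.Xception or InceptionV3 reproduces the comparison's other two candidates.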
APA, Harvard, Vancouver, ISO, and other styles
24

Haris, Daniel. "Optimalizace strojového učení pro predikci KPI." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-385922.

Full text
Abstract:
This thesis aims to optimize machine learning algorithms for predicting KPI metrics for an organization. The organization uses machine learning to predict whether projects will meet the planned deadlines of the last phase of the development process. The work focuses on the analysis of prediction models and sets the goal of selecting new candidate models for the prediction system. We implemented a system that automatically selects the best feature variables for learning. Trained models were evaluated with several performance metrics, and the best candidates were chosen for prediction. The candidate models achieved higher accuracy, which means that the prediction system provides more reliable responses. We suggested further improvements that could increase the accuracy of the forecast.
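A hedged sketch of the automatic feature-selection step described above, assuming a scikit-learn pipeline (the organization's KPI data and actual selection criterion are not public):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=40, n_informative=8,
                           random_state=0)
# Keep the k most informative features, then fit the classifier on them.
pipe = make_pipeline(SelectKBest(f_classif, k=8),
                     LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```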
APA, Harvard, Vancouver, ISO, and other styles
25

Masetti, Masha. "Product Clustering e Machine Learning per il miglioramento dell'accuratezza della previsione della domanda: il caso Comer Industries S.p.A." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The long lead times of Comer Industries S.p.A.'s Chinese supply chain force the company to order materials six months in advance, at which point customers are often not yet aware of the quantities of material they will need. In order to respond to customers while maintaining the high service level historically guaranteed by Comer Industries, it is essential to order material based on demand forecasts. However, the current forecasts are not sufficiently accurate. The goal of this research is to identify a possible method to increase the accuracy of demand forecasts. Could the use of Artificial Intelligence have a positive impact on forecast accuracy? To answer the research question, the K-Means and hierarchical clustering algorithms were implemented in Visual Basic for Applications to divide products into clusters based on common components. The demand patterns were then analysed. By implementing different machine learning algorithms on Google Colaboratory, the resulting accuracies were compared and an optimal forecasting algorithm was identified for each demand profile. Finally, with the resulting forecasts, K-Means yielded an improvement in accuracy of about 54.62% over the initial accuracy and a 47% saving in safety stock holding costs, while hierarchical clustering yielded an 11.15% improvement in accuracy and a 45% saving on current costs. It was therefore concluded that product clustering can contribute positively to forecast accuracy. Moreover, machine learning proved to be an ideal tool for identifying optimal solutions both within the clustering algorithms and within the forecasting methods.
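A hedged sketch of the clustering step described above: products encoded as binary component-membership vectors and grouped with K-Means. The thesis implemented this in Visual Basic for Applications; scikit-learn is substituted here, and the bill-of-materials matrix is invented.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows = products, columns = components (1 if the product uses the component).
bom = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 1],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(bom)
print(km.labels_)  # products sharing components land in the same cluster
# Demand can then be aggregated per cluster and forecast per demand profile.
```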
APA, Harvard, Vancouver, ISO, and other styles
26

Teng, Sin Yong. "Intelligent Energy-Savings and Process Improvement Strategies in Energy-Intensive Industries." Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-433427.

Full text
Abstract:
As new technologies for energy-intensive industries continue to develop, existing plants gradually fall behind in efficiency and productivity. Tough market competition and environmental legislation force these traditional plants to cease operation and shut down. Process improvement and retrofit projects are essential for maintaining the operational performance of these plants. Current approaches to process improvement are mainly process integration, process optimization and process intensification. These fields generally rely on mathematical optimization, the practitioner's experience and operational heuristics, and they serve as the foundation for process improvement. However, their performance can be further enhanced by modern computational intelligence. The purpose of this work is therefore to apply advanced artificial intelligence and machine learning techniques to improve energy-intensive industrial processes. The work approaches the problem by simulating industrial systems and makes the following contributions: (i) application of machine learning techniques, including one-shot learning and neuro-evolution, for data-driven modelling and optimization of individual units; (ii) application of dimensionality reduction (e.g. principal component analysis, autoencoders) for multi-objective optimization of multi-unit processes; (iii) design of a new tool, bottleneck tree analysis (BOTA), for analysing bottlenecked parts of a system in order to remove them, together with an extension that handles multi-dimensional problems using a data-driven approach; (iv) demonstration of the effectiveness of Monte Carlo simulation, neural networks and decision trees for decision-making when integrating a new process technology into existing processes; (v) comparison of Hierarchical Temporal Memory (HTM) and dual optimization with several predictive tools for supporting real-time operations management; (vi) implementation of an artificial neural network within an interface for the conventional process graph (P-graph); (vii) highlighting the future of artificial intelligence and process engineering in biosystems through a commercially based multi-omics paradigm.
APA, Harvard, Vancouver, ISO, and other styles
27

Huang, Zi-Yu, and 黃子毓. "Early Warning Analysis of Financial Crisis by Using Decision Tree and Deep Neural Network." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/hpbj6c.

Full text
Abstract:
Master's thesis, Chaoyang University of Technology, Department of Finance, 107. Recently, the rapid growth of financial frauds, such as fraudulent financial reporting and the depletion of company assets, has caused investors to lose confidence in the securities market. The Securities and Futures Institute therefore implemented the Information Disclosure and Transparency Ranking System to make information more transparent and to let companies disclose more information in their financial statements before encountering a financial crisis. This study used electronics companies listed on the stock exchange and over-the-counter markets from 2010 to 2018 as the research objects, and applied decision tree and deep neural network approaches to build a financial crisis prediction model, adding information disclosure as a control variable. The empirical results show that the overall accuracy of the decision tree reached 88.83%, rising to 89.39% with the information disclosure variable, while the overall accuracy of the deep neural network reached 78.81%, rising to 79.80% with the information disclosure variable. The true positive rate of the decision tree was 89.93% (91.18% with information disclosure), and that of the deep neural network was 92.54% (92.79% with information disclosure). Both the decision tree and the deep neural network could effectively predict corporate financial crises, with the decision tree achieving the best prediction results. When seven important financial ratios were selected from the thirteen variables through correlation coefficients, accuracy was affected: it dropped to 87.15% for the decision tree and to 75.98% for the deep neural network, but the number of rules was halved. The results therefore show that investors can give priority to these seven ratios, identify crisis companies early, and adjust their investment strategies.
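For illustration, a hedged sketch of the study's comparison: a decision tree and a feed-forward network trained on a synthetic imbalanced dataset, reporting overall accuracy and the true positive rate. All dataset parameters are invented stand-ins for the real financial ratios.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in for the financial-ratio dataset: 13 predictors,
# imbalanced binary crisis label (the real study uses listed companies).
X, y = make_classification(n_samples=2000, n_features=13, weights=[0.85], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7, stratify=y)

for name, model in [("decision tree", DecisionTreeClassifier(max_depth=5, random_state=7)),
                    ("deep neural network", MLPClassifier(hidden_layer_sizes=(64, 32),
                                                          max_iter=500, random_state=7))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 4),
          "true positive rate:", round(tp / (tp + fn), 4))
```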
APA, Harvard, Vancouver, ISO, and other styles
28

Chou, Chia-Yu, and 周佳瑜. "Prediction of the Chemical Mechanical Polishing Removal Rate by Using a Combination of Deep Neural Network and Random Forest." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/9xwhbx.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Statistics, 105. In recent years the semiconductor industry has flourished, but its intricate processes require sophisticated control to achieve high yield rates while keeping costs down. Since wafer yield is related to the removal rate in the Chemical Mechanical Planarization (CMP) process, predicting the CMP removal rate is an important issue. In this paper, we combine a deep neural network with a random forest to predict the removal rate; the combination is more stable and incurs less loss than using either method alone.
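The abstract does not state how the two models are combined; a simple equal-weight average of their predictions is one plausible reading, sketched below on synthetic data.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for CMP process data; the real removal-rate dataset is
# proprietary, and the 50/50 averaging below is an assumed combination scheme.
X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)

dnn = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=1000, random_state=3).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=300, random_state=3).fit(X_tr, y_tr)

combined = 0.5 * dnn.predict(X_te) + 0.5 * rf.predict(X_te)
for name, pred in [("DNN", dnn.predict(X_te)), ("RF", rf.predict(X_te)), ("combined", combined)]:
    print(name, "MSE:", round(mean_squared_error(y_te, pred), 2))
```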
APA, Harvard, Vancouver, ISO, and other styles
29

Yahi, Alexandre. "Simulating drug responses in laboratory test time series with deep generative modeling." Thesis, 2019. https://doi.org/10.7916/d8-arta-jt32.

Full text
Abstract:
Drug effects can be unpredictable and vary widely among patients depending on environmental, genetic, and clinical factors. Randomized control trials (RCTs) are not sufficient to identify adverse drug reactions (ADRs), and the electronic health record (EHR), along with medical claims, has become an important resource for pharmacovigilance. Among all the data collected in hospitals, laboratory tests represent the most documented and reliable data type in the EHR. Laboratory tests are at the core of the clinical decision process and are used for diagnosis, monitoring, screening, and research by physicians. They can be linked to drug effects either directly, with therapeutic drug monitoring (TDM), or indirectly using drug laboratory effects (DLEs) that affect surrogate tests. Unfortunately, very few automated methods use laboratory tests to inform clinical decision making and predict drug effects, partly due to the complexity of these time series, which are irregularly sampled, highly dependent on other clinical covariates, and non-stationary. Deep learning, the branch of machine learning that relies on high-capacity artificial neural networks, has enjoyed renewed popularity over the past decade and has transformed fields such as computer vision and natural language processing. Deep learning holds the promise of better performance compared to established machine learning models, although it requires larger training datasets due to the models' higher degrees of freedom. These models are more flexible with multi-modal inputs and can make sense of large numbers of features without extensive engineering. Both qualities make deep learning models ideal candidates for complex, multi-modal, noisy healthcare datasets. With the development of novel deep learning methods such as generative adversarial networks (GANs), there is an unprecedented opportunity to learn how to augment existing clinical datasets with realistic synthetic data and increase predictive performance. Moreover, GANs have the potential to simulate the effects of individual covariates, such as drug exposures, by leveraging the properties of implicit generative models. In this dissertation, I present a body of work that aims at paving the way for next-generation laboratory-test-based clinical decision support systems powered by deep learning. To this end, I organized my experiments around three building blocks: (1) the evaluation of various deep learning architectures on laboratory test time series and their covariates with a forecasting task; (2) the development of implicit generative models of laboratory test time series using the Wasserstein GAN framework; (3) the inference properties of these models for the simulation of drug effects in laboratory test time series, and their application to data augmentation. Each component has its own evaluation: the forecasting task enabled me to explore the properties and performances of different learning architectures; the Wasserstein GAN models are evaluated with both intrinsic metrics and extrinsic tasks, and I always set baselines to avoid providing results in a "neural-network only" frame of reference. Applied machine learning, and more so with deep learning, is an empirical science. While the datasets used in this dissertation are not publicly available due to patient privacy regulation, I described pre-processing steps, hyper-parameter selection, and training processes with reproducibility and transparency in mind.
In the specific context of these studies involving laboratory test time series and their clinical covariates, I found that for supervised tasks, machine learning holds up well against deep learning methods. Complex recurrent architectures like long short-term memory (LSTM) networks do not perform well on these short time series, while convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) provide the best performances, at the cost of extensive hyper-parameter tuning. Generative adversarial networks, enabled by deep learning models, were able to generate high-fidelity laboratory test time series, and the quality of the generated samples increased with conditional models using drug exposures as auxiliary information. Interestingly, forecasting models trained exclusively on synthetic data still retain good performances, confirming the potential of GANs in privacy-oriented applications. Finally, conditional GANs demonstrated an ability to interpolate samples from drug-exposure combinations not seen during training, opening the way for laboratory test simulation with larger auxiliary information spaces. In specific cases, augmenting real training sets with synthetic data improved performances in the forecasting tasks, and this could be extended to other applications where rare cases present a high prediction error.
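As a hedged illustration of building block (2), a minimal conditional Wasserstein GAN in PyTorch for short series with a drug-exposure flag as auxiliary information; the layer sizes, weight-clipping variant, and placeholder data are assumptions, not the dissertation's actual models.

```python
import torch
import torch.nn as nn

# Conditional WGAN sketch for laboratory-test-like series of length 8,
# conditioned on a binary drug-exposure flag.
SERIES_LEN, NOISE_DIM, COND_DIM = 8, 16, 1

generator = nn.Sequential(nn.Linear(NOISE_DIM + COND_DIM, 64), nn.ReLU(),
                          nn.Linear(64, SERIES_LEN))
critic = nn.Sequential(nn.Linear(SERIES_LEN + COND_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))
g_opt = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

def fake_batch(n):
    cond = torch.randint(0, 2, (n, COND_DIM)).float()   # drug-exposure flag
    noise = torch.randn(n, NOISE_DIM)
    return generator(torch.cat([noise, cond], dim=1)), cond

for step in range(200):
    real = torch.randn(32, SERIES_LEN)                  # placeholder real data
    real_cond = torch.randint(0, 2, (32, COND_DIM)).float()
    fake, cond = fake_batch(32)
    # Critic maximizes the Wasserstein estimate E[f(real)] - E[f(fake)],
    # so we minimize its negation.
    c_loss = critic(torch.cat([fake.detach(), cond], 1)).mean() \
           - critic(torch.cat([real, real_cond], 1)).mean()
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    for p in critic.parameters():                       # weight clipping
        p.data.clamp_(-0.01, 0.01)
    # Generator minimizes -E[f(fake)].
    fake, cond = fake_batch(32)
    g_loss = -critic(torch.cat([fake, cond], 1)).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```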
APA, Harvard, Vancouver, ISO, and other styles
30

Roy, Bhupendra. "Identifying Deception in Online Reviews: Application of Machine Learning, Deep Learning and Natural Language Processing." Master's thesis, 2020. http://hdl.handle.net/10362/101187.

Full text
Abstract:
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics. Customers increasingly rate, review, and research products online (Jansen 2010). Consequently, websites containing consumer reviews are becoming targets of opinion spam. Nowadays, people are paid to write fake positive reviews online to misguide customers and boost sales revenue; alternatively, people are paid to pose as customers and post fake negative reviews with the objective of harming competitors. These practices have caused havoc on social media and often leave customers baffled. In this study, we explored multiple aspects of deception classification. We explored four kinds of treatment of the input reviews using Natural Language Processing: lemmatization, stemming, POS tagging, and a mix of lemmatization and POS tagging. We also explored how each of these inputs responds to different machine learning models: Logistic Regression, Naïve Bayes, Support Vector Machine, Random Forest, Extreme Gradient Boosting, and a Deep Learning Neural Network. We used the gold-standard hotel reviews dataset created by Ott et al. (Ott, Choi, et al. 2011; Ott, Cardie, and Hancock 2013), as well as the restaurant and doctor reviews datasets used by Li et al. (2014). We explored the usability of these models both within the same domain and across different domains: we trained our models on 75% of the hotel reviews dataset and checked the classification accuracy on similar data (the remaining 25% of unseen hotel reviews) and on different domains (unseen restaurant and doctor reviews), with the aim of creating a robust model applicable both within and across domains. Our best accuracy on the hotel test set was 91%, achieved with the Deep Learning Neural Network. Logistic Regression, Support Vector Machine, and Random Forest produced similar results; Naïve Bayes also had similar accuracy but was more volatile in cross-domain performance, and Extreme Gradient Boosting was the weakest of all the models we explored. Our results are comparable to, and at times exceed, the performance reported in other researchers' work.
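A minimal sketch of one cell of the study's grid: stemming as the input treatment and logistic regression as the model, on invented placeholder reviews rather than the gold-standard datasets.

```python
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stem each token before vectorization, one of the four NLP treatments.
stemmer = PorterStemmer()

def stem_text(text):
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

# Placeholder reviews; labels 0 = truthful, 1 = deceptive (illustrative).
reviews = ["the room was spacious and the staff were wonderful",
           "we absolutely loved every single moment of our luxurious stay",
           "clean lobby but the elevator was slow during checkout",
           "my family and i had the most amazing experience imaginable"]
labels = [0, 1, 0, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit([stem_text(r) for r in reviews], labels)
print(model.predict([stem_text("an absolutely unforgettable magical stay")]))
```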
APA, Harvard, Vancouver, ISO, and other styles
31

Mylnikova, Ekaterina. "Multiclass Classification of Motor Insurance Customers in Portugal." Master's thesis, 2021. http://hdl.handle.net/10362/127584.

Full text
Abstract:
The insurance market is highly competitive. To stay in line with other companies in today's world, it is not enough for a company to have the best price; the most important move now is to make a personalized offer to each client. Insurance companies have an enormous amount of data that can be used to understand their customers better. What do they want? What offer would attract new clients, and what offer would keep existing customers from leaving? The project aims to classify customers' profiles based on their individual preferences in motor insurance. Internship Report presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics.
APA, Harvard, Vancouver, ISO, and other styles
32

(8921381), Ali Lenjani. "Developing Artificial Intelligence-Based Decision Support for Resilient Socio-Technical Systems." Thesis, 2020.

Find full text
Abstract:
During 2017 and 2018, two of the costliest years on record for natural disasters, the U.S. experienced 30 events with total losses of $400 billion. These exorbitant costs arise primarily from the lack of adequate planning, spanning the breadth from pre-event preparedness to post-event response. It is imperative to start thinking about ways to make our built environment more resilient. However, empirically calibrated and structure-specific vulnerability models, a critical input required to formulate decision-making problems, are not currently available. Here, the research objective is to improve the resilience of the built environment through an automated vision-based system that generates actionable information in the form of probabilistic pre-event prediction and post-event assessment of damage. The central hypothesis is that pre-event data, e.g., street-view images, along with a post-event image database, contain sufficient information to construct pre-event probabilistic vulnerability models for assets in the built environment. The rationale for this research stems from the fact that probabilistic damage prediction is the most critical input for formulating decision-making problems under uncertainty targeting mitigation, preparedness, response, and recovery efforts. The following tasks are completed towards this goal.
First, planning for one of the bottleneck processes of post-event recovery is formulated as a decision-making problem considering the consequences imposed on the community (module 1). Second, a technique is developed to automate the process of extracting multiple street-view images of a given built asset, thereby creating a dataset that illustrates its pre-event state (module 2). Third, a system is developed that automatically characterizes the pre-event state of the built asset and quantifies the probability that it is damaged by fusing information from deep neural network (DNN) classifiers acting on pre-event and post-event images (module 3). To complete the work, a methodology is developed to associate each asset of the built environment with a structural probabilistic vulnerability model by correlating the pre-event structural characterization with the post-event damage state (module 4). The method is demonstrated and validated using field data collected from recent hurricanes within the US.
The vision of this research is to enable the automatic extraction of information about exposure and risk, enabling smarter and more resilient communities around the world.
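Module 3 fuses pre-event and post-event classifier outputs into a damage probability. One simple, hypothetical fusion rule (a product-of-odds combination, not necessarily the thesis's) can be sketched as:

```python
# Illustrative fusion of two image classifiers' outputs for one building:
# p_prior comes from a classifier of pre-event street-view images (a
# vulnerability estimate), p_post from a post-event image classifier.
def fuse(p_prior, p_post):
    # Multiply the odds implied by each probability, then convert back.
    odds = (p_prior / (1 - p_prior)) * (p_post / (1 - p_post))
    return odds / (1 + odds)

p_damage_given_pre = 0.30   # hypothetical pre-event vulnerability estimate
p_damage_post = 0.85        # hypothetical post-event classifier output
print("fused damage probability:", round(fuse(p_damage_given_pre, p_damage_post), 3))
```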
APA, Harvard, Vancouver, ISO, and other styles
33

"Novel Learning-Based Task Schedulers for Domain-Specific SoCs." Master's thesis, 2020. http://hdl.handle.net/2286/R.I.62834.

Full text
Abstract:
This Master's thesis includes the design, on-chip integration, and evaluation of a set of imitation learning (IL)-based scheduling policies: a deep neural network (DNN) and a decision tree (DT). We first developed IL-based scheduling policies for heterogeneous systems-on-chips (SoCs). We then tested these policies using a system-level domain-specific system-on-chip simulation framework [11]. Finally, we transformed them into efficient code using a cloud engine [1] and implemented them on a user-space emulation framework [61] on a Unix-based SoC. IL is one area of machine learning (ML) and a useful method of training artificial intelligence (AI) models by imitating the decisions of an expert, or Oracle, that knows the optimal solution. This thesis's primary focus is to adapt an ML model to work on-chip and to optimize resource allocation for a set of domain-specific wireless and radar applications. Evaluation results with four streaming applications from the wireless communications and radar domains show how the proposed IL-based scheduler approximates an offline Oracle expert with more than 97% accuracy and 1.20× faster execution time. The models have been implemented as an add-on, making them easy to port to other SoCs. Master's Thesis, Computer Engineering, 2020.
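The core imitation-learning step, training a policy to reproduce an Oracle's decisions, can be sketched as follows; the state features and the toy Oracle rule are invented for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Imitation learning sketch: fit a decision tree to reproduce an Oracle
# scheduler's processing-element choice from task/system state features.
rng = np.random.default_rng(42)
states = rng.random((5000, 6))          # e.g., task size, queue depths, utilization
# Toy Oracle: route to PE 1 when the system is loaded (purely illustrative).
oracle_choice = (states[:, 0] + states[:, 3] > 1.0).astype(int)

scheduler = DecisionTreeClassifier(max_depth=8, random_state=42)
scheduler.fit(states, oracle_choice)
print("imitation accuracy:", scheduler.score(states, oracle_choice))
```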
APA, Harvard, Vancouver, ISO, and other styles
34

Silvestre, Martinho de Matos. "Three-stage ensemble model : reinforce predictive capacity without compromising interpretability." Master's thesis, 2019. http://hdl.handle.net/10362/71588.

Full text
Abstract:
Thesis proposal presented as partial requirement for obtaining the Master's degree in Statistics and Information Management, with specialization in Risk Analysis and Management. Over the last decade, several banks have developed models to quantify credit risk. In addition to monitoring the credit portfolio, these models also help decide whether to accept new contracts, assess customers' profitability, and define pricing strategy. The objective of this paper is to improve the approach to credit risk modeling, namely in scoring models that predict default events. To this end, we propose the development of a three-stage ensemble model that combines the interpretability of the Scorecard with the predictive power of machine learning algorithms. The results show that, with the Scorecard as baseline, the ROC index improves by 0.5%-0.7% and Accuracy by 0%-1%.
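One plausible reading of such an ensemble, sketched below, feeds the interpretable scorecard's output into a boosting model as an extra feature and compares AUC against the scorecard baseline; the actual three-stage design is not public, so this is only an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic imbalanced default data standing in for a real credit portfolio.
X, y = make_classification(n_samples=4000, n_features=12, weights=[0.9], random_state=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=5, stratify=y)

# Stage 1: an interpretable logistic "scorecard" produces a default score.
scorecard = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
score_tr = scorecard.predict_proba(X_tr)[:, [1]]
score_te = scorecard.predict_proba(X_te)[:, [1]]

# Stage 2: a boosting model refines the prediction using the score as a feature.
booster = GradientBoostingClassifier(random_state=5)
booster.fit(np.hstack([X_tr, score_tr]), y_tr)

print("scorecard AUC:", round(roc_auc_score(y_te, score_te.ravel()), 4))
print("ensemble AUC:", round(roc_auc_score(
    y_te, booster.predict_proba(np.hstack([X_te, score_te]))[:, 1]), 4))
```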
APA, Harvard, Vancouver, ISO, and other styles
35

Eusébio, Pedro Lopes. "Applicability of Multispectral Imagery for Detection of Prescribed Fires and Rekindling." Master's thesis, 2021. http://hdl.handle.net/10362/120564.

Full text
Abstract:
Forest fires are an increasingly relevant problem nowadays as the most severe consequences of global warming worsen. These fires, which can cause immense damage to forest ecosystems and have a great negative impact on people's lives, often begin with rekindles. The problem can be very difficult to tackle, requiring many people to surveil the areas at risk. A system capable of executing this surveillance protocol and alerting the firefighting authorities to fires and possible rekindles would be extremely beneficial in these scenarios. A system aiming to achieve this goal is being implemented, composed of a UAV equipped with a multispectral camera capturing aerial images of these areas. This dissertation presents a fire detection model to be used in prescribed fires and rekindling situations, identifying fire instances within the captured images. It makes use of the camera's various spectral bands to highlight the areas at greatest risk and of deep learning technology to autonomously recognise these areas.
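One standard way to exploit the spectral bands mentioned in the abstract is a normalized difference index such as the NDVI, (NIR − Red) / (NIR + Red); the sketch below flags low-NDVI pixels as candidates for attention, though the thesis's actual pre-processing may differ.

```python
import numpy as np

# NDVI highlights live vegetation; low values over previously vegetated
# pixels can indicate burned or burning areas. The threshold is hypothetical.
def ndvi(nir, red, eps=1e-6):
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

red_band = np.random.randint(0, 255, (64, 64))   # placeholder image bands
nir_band = np.random.randint(0, 255, (64, 64))
risk_mask = ndvi(nir_band, red_band) < 0.1       # hypothetical threshold
print("flagged pixels:", int(risk_mask.sum()))
```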
APA, Harvard, Vancouver, ISO, and other styles
36

Léonard, Nicholas. "Distributed conditional computation." Thèse, 2014. http://hdl.handle.net/1866/11954.

Full text
Abstract:
The objective of this thesis is to present different applications of the distributed conditional computation research program. It is hoped that these applications, along with the theory presented here, will lead to a general solution of the problem of artificial intelligence, especially with regard to the need for efficiency. The vision of distributed conditional computation is to accelerate the evaluation and training of deep models, which is very different from the usual objective of improving their generalization and optimization capacity. The work presented here has close ties with mixture-of-experts models. In Chapter 2, we present a new deep learning algorithm that uses a form of reinforcement learning on a novel neural network decision tree model. We demonstrate the need for a balancing constraint to keep the distribution of examples to experts uniform and to prevent monopolies. To make the computation efficient, training and evaluation are constrained to be sparse by using a gater that samples experts from a multinomial distribution given each example. In Chapter 3, we present a new deep model consisting of a sparse representation divided into segments of experts. A neural network language model is constructed from blocks of sparse transformations between these expert segments. The block-sparse operation is implemented for use on graphics cards, and its speed is compared with two dense operations of the same caliber to demonstrate and measure the actual efficiency gain that can be obtained. A deep model using these block-sparse operations, controlled by a gater distinct from the experts, is trained on a dataset of one billion words. A new data-partitioning (clustering) algorithm is applied to a set of words to organize the output layer of a language model into a conditional hierarchy, thereby making it much more efficient. The work presented in this thesis is central to the vision of distributed conditional computation as issued by Yoshua Bengio. It attempts to apply research in the area of mixture of experts to deep models to improve their speed and their optimization capacity. We believe that the theory and experiments of this thesis are an important step on the path to distributed conditional computation because they frame the problem well, especially concerning the competitiveness inherent to systems of experts.
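The gater described in Chapter 2, sampling experts from a multinomial distribution per example so that only the sampled expert computes, can be sketched in PyTorch as follows; sizes and single-sample routing are illustrative simplifications, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

# Sparse routing sketch: a gater outputs a multinomial distribution over
# experts, and only the sampled expert runs for each example.
NUM_EXPERTS, IN_DIM, OUT_DIM = 4, 32, 16

gater = nn.Sequential(nn.Linear(IN_DIM, NUM_EXPERTS), nn.Softmax(dim=1))
experts = nn.ModuleList(nn.Linear(IN_DIM, OUT_DIM) for _ in range(NUM_EXPERTS))

x = torch.randn(8, IN_DIM)
probs = gater(x)                                   # per-example expert distribution
chosen = torch.multinomial(probs, num_samples=1).squeeze(1)

out = torch.empty(x.size(0), OUT_DIM)
for i in range(NUM_EXPERTS):                       # only selected experts compute
    mask = chosen == i
    if mask.any():
        out[mask] = experts[i](x[mask])
print(out.shape, chosen.tolist())
```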
APA, Harvard, Vancouver, ISO, and other styles
37

Muwawa, Jean Nestor Dahj. "Data mining and predictive analytics application on cellular networks to monitor and optimize quality of service and customer experience." Diss., 2018. http://hdl.handle.net/10500/25875.

Full text
Abstract:
This research study focuses on applying Data Mining and Machine Learning models to cellular network traffic, with the objective of arming Mobile Network Operators with a full view of the performance branches (services, devices, subscribers). The purpose is to optimize and minimize the time needed to detect service and subscriber behaviour patterns. Different data mining techniques and predictive algorithms have been applied to real cellular network datasets to uncover data usage patterns using specific Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs). The following tools are used to develop the concept: RStudio for Machine Learning and process visualization, Apache Spark and SparkSQL for data and big data processing, and clicData for service visualization. Two use cases were studied during this research. In the first study, data and predictive analytics are applied in the field of telecommunications to efficiently address user experience, with the goal of increasing customer loyalty and decreasing churn (customer attrition). Using real cellular network transactions, predictive analytics are used to identify customers who are likely to churn, which can result in revenue loss. Prediction algorithms and models including Classification Tree, Random Forest, Neural Networks, and Gradient Boosting are used together with an exploratory data analysis that determines the relationships between the predictor variables. The data are segmented into two parts: a training set to train the model and a testing set to test it. The evaluation of the best-performing model is based on the prediction accuracy, sensitivity, specificity, and the confusion matrix on the test set. The second use case analyses Service Quality Management (SQM) using modern data mining techniques and the advantages of in-memory big data processing with Apache Spark and SparkSQL to save cost on tool investment; thus, a low-cost SQM model is proposed and analysed. With the increase in smartphone adoption and access to mobile internet services, applications such as streaming and interactive chats require a certain service level to ensure customer satisfaction. As a result, an SQM framework is developed with a Service Quality Index (SQI) and Key Performance Index (KPI). The research concludes with recommendations and future studies around modern technology applications in telecommunications, including the Internet of Things (IoT), cloud computing, and recommender systems.
Cellular networks have evolved and are still evolving: from traditional circuit-switched GSM (Global System for Mobile Communication), which supported only voice services and extremely low data rates, to all-packet LTE networks accommodating high-speed data for applications such as video streaming, video conferencing, and heavy torrent downloads, and, in the near future, the roll-out of fifth-generation (5G) cellular networks, intended to support complex technologies such as IoT and high-definition video streaming and projected to carry massive amounts of data. With the high demand for network services and easy access to mobile phones, subscribers perform billions of transactions, appearing in the form of SMSs, handovers, voice calls, web browsing activities, video and audio streaming, and heavy downloads and uploads. Nevertheless, the stormy growth in data traffic and the high requirements of new services pose bigger challenges to Mobile Network Operators (MNOs) in analysing the big data traffic flowing through the network; Quality of Service (QoS) and Quality of Experience (QoE) therefore become a challenge. Inefficiency in mining and analysing data and in applying predictive intelligence to network traffic can produce a high rate of unhappy subscribers, revenue loss, and a negative perception of services. Researchers and service providers are consequently investing in data mining, machine learning, and AI (Artificial Intelligence) methods to manage services and experience. Electrical and Mining Engineering. M. Tech (Electrical Engineering).
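A minimal sketch in the spirit of the first use case: a random forest churn classifier on synthetic subscriber features, evaluated with the confusion matrix, sensitivity, and specificity on a held-out test set. The features and class balance are invented stand-ins; the study's real KPIs and KQIs are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Synthetic subscriber data: 10 usage features, imbalanced churn label.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.8], random_state=9)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=9, stratify=y)

rf = RandomForestClassifier(n_estimators=200, random_state=9).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, rf.predict(X_te)).ravel()
print("sensitivity:", round(tp / (tp + fn), 4))   # churners correctly caught
print("specificity:", round(tn / (tn + fp), 4))   # loyal customers correctly kept
```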
APA, Harvard, Vancouver, ISO, and other styles