Doctoral dissertations on the topic "Intelligence artificielle – Apprentissage profond"
Consult the top 50 doctoral dissertations on the topic "Intelligence artificielle – Apprentissage profond".
Vialatte, Jean-Charles. "Convolution et apprentissage profond sur graphes". Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2018. http://www.theses.fr/2018IMTA0118/document.
Convolutional neural networks have proven to be the deep learning model that performs best on regularly structured datasets such as images or sounds. However, they cannot be applied to datasets with an irregular structure (e.g. sensor networks, citation networks, MRIs). In this thesis, we develop an algebraic theory of convolutions on irregular domains. We construct a family of convolutions based on group actions (or, more generally, groupoid actions) that act on the vertex domain and whose properties depend on the edges. With the help of these convolutions, we propose extensions of convolutional neural networks to graph domains. Our research leads us to propose a generic formulation of the propagation between layers, which we call the neural contraction. From this formulation, we derive many novel neural network models that can be applied to irregular domains. Through benchmarks and experiments, we show that they attain state-of-the-art performance, and surpass it in some cases.
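As a rough sketch of the simplest member of this family of graph convolutions (propagation through a degree-normalized adjacency matrix with a shared weight matrix, rather than the thesis' groupoid-based construction), a layer might look like:

```python
import numpy as np

def graph_conv(X, A, W):
    """One simple graph-convolution layer: aggregate neighbour
    features through a degree-normalized adjacency with self-loops,
    then apply a shared linear map W (illustrative sketch only)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation

# Tiny 3-node path graph, 2 input features, 2 output features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3, 2)
W = np.eye(2)
H = graph_conv(X, A, W)
print(H.shape)  # (3, 2)
```

Each node's output mixes its own features with those of its neighbours, which is the regular-grid convolution idea transported to an irregular vertex domain.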
Mollaret, Sébastien. "Artificial intelligence algorithms in quantitative finance". Thesis, Paris Est, 2021. http://www.theses.fr/2021PESC2002.
Artificial intelligence has become more and more popular in quantitative finance, given the increase in computing capacity as well as the complexity of models, and has led to many financial applications. In this thesis, we explore three different applications to solve financial-derivatives challenges, from model selection to model calibration and pricing. In Part I, we focus on a regime-switching model to price equity derivatives. The model parameters are estimated using the Expectation-Maximization (EM) algorithm, and a local volatility component is added to fit vanilla option prices using the particle method. In Part II, we use deep neural networks to calibrate a stochastic volatility model, where the volatility is modelled as the exponential of an Ornstein-Uhlenbeck process, by approximating offline the mapping between model parameters and the corresponding implied volatilities. Once this expensive approximation has been performed offline, the calibration reduces to a standard, fast optimization problem. In Part III, we use deep neural networks to price American options on large baskets, to address the curse of dimensionality. Different methods are studied: a Longstaff-Schwartz approach, where we approximate the continuation values, and a stochastic control approach, where we solve the pricing partial differential equation by reformulating the problem as a stochastic control problem using the non-linear Feynman-Kac formula.
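The Longstaff-Schwartz approach can be sketched in its classic low-dimensional textbook form, where continuation values are approximated by polynomial regression rather than by the deep networks the thesis applies to large baskets; all parameters below are illustrative:

```python
import numpy as np

def longstaff_schwartz_put(S0=100., K=100., r=0.05, sigma=0.2,
                           T=1.0, steps=50, paths=20000, seed=0):
    """Classic Longstaff-Schwartz pricing of a Bermudan put:
    regress continuation values on polynomials of the spot price
    over in-the-money paths, exercising when immediate payoff
    exceeds the estimated continuation value."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    # Simulate geometric Brownian motion paths
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)   # value at maturity
    for t in range(steps - 2, -1, -1):
        payoff *= np.exp(-r * dt)            # discount one step back
        itm = K - S[:, t] > 0                # regress on in-the-money paths
        if itm.sum() > 0:
            coeffs = np.polyfit(S[itm, t], payoff[itm], 3)
            continuation = np.polyval(coeffs, S[itm, t])
            exercise = K - S[itm, t]
            ex_now = exercise > continuation
            idx = np.where(itm)[0][ex_now]
            payoff[idx] = exercise[ex_now]   # exercise dominates: stop here
    return float(np.exp(-r * dt) * payoff.mean())

price = longstaff_schwartz_put()
print(price)  # slightly above the ~5.57 European Black-Scholes put value
```

In high dimensions, the polynomial regression at each date is the step that breaks down, which is where the thesis substitutes neural networks for the basis-function regression.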
Carrara, Nicolas. "Reinforcement learning for dialogue systems optimization with user adaptation". Thesis, Lille 1, 2019. http://www.theses.fr/2019LIL1I071/document.
The most powerful artificial intelligence systems are now based on learned statistical models. In order to build efficient models, these systems must collect a huge amount of data on their environment. Personal assistants, smart homes, voice servers and other dialogue applications are no exception. A specificity of these systems is that they are designed to interact with humans; as a consequence, their training data has to be collected from interactions with these humans. As the number of interactions with a single person is often too scarce to train a proper model, the usual approach to maximise the amount of data consists in mixing data collected from different users into a single corpus. However, one limitation of this approach is that, by construction, the trained models are only efficient with an "average" human and do not include any sort of adaptation; this lack of adaptation makes the service unusable for some specific groups of people and leads to a restricted customer base and inclusiveness problems. This thesis proposes solutions for constructing dialogue systems that are robust to this problem by combining transfer learning and reinforcement learning. It explores two main ideas. The first idea consists in incorporating adaptation in the very first dialogues with a new user. To that end, we use the knowledge gathered with previous users. But how can such systems scale with a growing database of user interactions? The first proposed approach involves clustering of dialogue systems (each tailored to its respective user) based on their behaviours. We demonstrate, through experiments with handcrafted and real user models, how this method improves dialogue quality for new and unknown users.
The second approach extends the Deep Q-learning algorithm with a continuous transfer process. The second idea states that before using a dedicated dialogue system, the first interactions with a user should be handled carefully by a safe dialogue system common to all users. The underlying approach is divided into two steps. The first step consists in learning a safe strategy through reinforcement learning. To that end, we introduce a budgeted reinforcement learning framework for continuous state spaces and the corresponding extensions of classic reinforcement learning algorithms. In particular, the safe version of the Fitted-Q algorithm has been validated, in terms of safety and efficiency, on a dialogue system task and an autonomous driving problem. The second step consists in using these safe strategies when facing new users; this method is an extension of the classic ε-greedy algorithm.
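The abstract does not spell out the ε-greedy extension; one plausible, hedged reading is that exploration defers to the pre-trained safe strategy instead of a uniformly random action. A minimal sketch under that assumption (action names and the mixing rule are hypothetical):

```python
import random

def safe_epsilon_greedy(q_values, safe_action, epsilon=0.1, rng=random):
    """Sketch of an epsilon-greedy variant with a safe fallback:
    instead of exploring uniformly at random (which may produce
    unsafe dialogue acts with a new user), exploration defers to
    the action proposed by a user-independent safe policy.
    q_values: dict mapping action -> estimated return for this user."""
    if rng.random() < epsilon:
        return safe_action                   # explore via the safe policy
    return max(q_values, key=q_values.get)   # exploit the adapted model

rng = random.Random(0)
q = {"ask_slot": 0.4, "confirm": 0.9, "end_call": 0.1}
actions = [safe_epsilon_greedy(q, "confirm", 0.2, rng) for _ in range(100)]
print(set(actions))  # {'confirm'}: here the greedy and safe actions coincide
```

When the adapted model's greedy action diverges from the safe one, ε controls how often the system falls back on the conservative behaviour.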
Levy, Abitbol Jacobo. "Computational detection of socioeconomic inequalities". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN001.
Machine and deep learning advances have come to permeate the modern sciences and have unlocked the study of numerous issues many deemed intractable. The social sciences have accordingly benefited from these advances: neural language models have been used extensively to analyze social and linguistic phenomena, such as the quantification of semantic change or the detection of ideological bias in news articles, while convolutional neural networks have been used in urban settings to explore the dynamics of urban change, by determining which characteristics predict neighborhood improvement or by examining how the perception of safety affects the liveliness of neighborhoods. In light of this, this dissertation argues that one particular social phenomenon, socioeconomic inequality, can be gainfully studied by these means. We set out to collect and combine large datasets enabling 1) the study of the spatial, temporal, linguistic and network dependencies of socioeconomic inequalities and 2) the inference of socioeconomic status (SES) from these multimodal signals. This task is worthy of study, as previous research has fallen short of providing a complete picture of how these multiple factors are intertwined with individual socioeconomic status and how the former can fuel better inference methodologies for the latter. The study of these questions is important, as much is still unclear about the root causes of SES inequalities, and the deployment of ML/DL solutions to pinpoint them is still very much in its infancy.
Tamaazousti, Youssef. "Vers l’universalité des représentations visuelle et multimodales". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC038/document.
Because of its key societal, economic and cultural stakes, Artificial Intelligence (AI) is a hot topic. One of its main goals is to develop systems that facilitate the daily life of humans, with applications such as household robots, industrial robots, autonomous vehicles and much more. The rise of AI is largely due to the emergence of tools based on deep neural networks, which make it possible to simultaneously learn the representation of the data (traditionally hand-crafted) and the task to solve (traditionally learned with statistical models). This resulted from the conjunction of theoretical advances, growing computational capacity and the availability of abundant annotated data. A long-standing goal of AI is to design machines, inspired by humans, capable of perceiving the world and interacting with humans in an evolutionary way. In this thesis, we categorize work on AI into the two following learning approaches: (i) specialization: learn representations from a few specific tasks with the goal of carrying out very specific tasks (specialized in a certain field) with a very good level of performance; (ii) universality: learn representations from several general tasks with the goal of performing as many tasks as possible in different contexts. While specialization has been extensively explored by the deep-learning community, only a few implicit attempts have been made towards universality. The goal of this thesis is thus to explicitly address the problem of improving universality with deep-learning methods, for image and text data. We address this topic of universality in two different forms: through the implementation of methods to improve universality ("universalizing methods"), and through the establishment of a protocol to quantify universality.
Concerning universalizing methods, we propose three technical contributions: (i) in the context of large semantic representations, a method to reduce redundancy between detectors through adaptive thresholding and the relations between concepts; (ii) in the context of neural-network representations, an approach that increases the number of detectors without increasing the amount of annotated data; (iii) in the context of multimodal representations, a method to preserve the semantics of unimodal representations in multimodal ones. Regarding the quantification of universality, we propose to evaluate universalizing methods in a transfer-learning scheme. Indeed, this scheme is relevant for assessing the universal ability of representations. This also leads us to propose a new framework as well as new quantitative evaluation criteria for universalizing methods.
Wallis, David. "A study of machine learning and deep learning methods and their application to medical imaging". Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST057.
We first use Convolutional Neural Networks (CNNs) to automate mediastinal lymph node detection using FDG-PET/CT scans. We build a fully automated model to go directly from whole-body FDG-PET/CT scans to node localisation. The results show performance comparable to that of an experienced physician. In the second half of the thesis we experimentally test the performance, interpretability, and stability of radiomic and CNN models on three datasets (2D brain MRI scans, 3D CT lung scans, 3D FDG-PET/CT mediastinal scans). We compare how the models improve as more data become available and examine whether there are patterns common to the different problems. We question whether current methods for model interpretation are satisfactory. We also investigate how precise segmentation affects the performance of the models.
Pierrard, Régis. "Explainable Classification and Annotation through Relation Learning and Reasoning". Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPAST008.
With the recent successes of deep learning and the growing interactions between humans and AIs, explainability issues have arisen. Indeed, it is difficult to understand the behaviour of deep neural networks, and such opaque models are thus not suited for high-stakes applications. In this thesis, we propose an approach for performing classification or annotation while providing explanations. It is based on a transparent model, whose reasoning is clear, and on interpretable fuzzy relations that make it possible to express the vagueness of natural language. Instead of learning on training instances annotated with relations, we rely on a set of relations defined beforehand. We present two heuristics that make the process of evaluating relations faster. The most relevant relations can then be extracted using a new fuzzy frequent-itemset mining algorithm. These relations are used to build rules, for classification, and constraints, for annotation. Since the strengths of our approach are the transparency of the model and the interpretability of the relations, an explanation in natural language can be generated. We present experiments on images and time series that show the genericity of the approach. In particular, the application to explainable organ annotation was received positively by a set of participants, who judged the explanations consistent and convincing.
Etienne, Caroline. "Apprentissage profond appliqué à la reconnaissance des émotions dans la voix". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS517.
This thesis deals with the application of artificial intelligence to the automatic classification of audio sequences according to the emotional state of the customer during a commercial phone call. The goal is to improve on existing data preprocessing and machine learning models, and to suggest a model that is as efficient as possible on the reference IEMOCAP audio dataset. We draw from previous work on deep neural networks for automatic speech recognition and extend it to the speech emotion recognition task. We are therefore interested in end-to-end neural architectures that perform the classification task, including autonomous extraction of acoustic features from the audio signal. Traditionally, the audio signal is preprocessed using paralinguistic features, as part of an expert approach. We choose a naive approach to data preprocessing that does not rely on specialized paralinguistic knowledge, and compare it with the expert approach: the raw audio signal is transformed into a time-frequency spectrogram using a short-term Fourier transform. In order to apply a neural network to a prediction task, a number of aspects need to be considered. On the one hand, the best possible hyperparameters must be identified. On the other hand, biases present in the database should be minimized (non-discrimination), for example by adding data and taking into account the characteristics of the chosen dataset. We study these aspects in order to develop an end-to-end neural architecture that combines convolutional layers, specialized in modeling visual information, with recurrent layers, specialized in modeling temporal information. We propose a deep supervised learning model, competitive with the current state of the art when trained on the IEMOCAP dataset, justifying its use for the rest of the experiments.
This classification model consists of a four-layer convolutional neural network and a bidirectional long short-term memory recurrent neural network (BLSTM). Our model is evaluated on two English audio databases proposed by the scientific community: IEMOCAP and MSP-IMPROV. A first contribution is to show that, with a deep neural network, we obtain high performance on IEMOCAP and promising results on MSP-IMPROV. Another contribution of this thesis is a comparative study of the output values of the layers of the convolutional module and the recurrent module according to the data preprocessing method used: spectrograms (naive approach) or paralinguistic indices (expert approach). We analyze the data according to their emotion class using the Euclidean distance, a deterministic proximity measure. We try to understand the characteristics of the emotional information extracted autonomously by the network. The idea is to contribute to research focused on understanding the deep neural networks used in speech emotion recognition, and to bring more transparency and explainability to these systems, whose decision-making mechanisms are still largely misunderstood.
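The Euclidean-distance analysis of layer outputs by emotion class can be illustrated with a small sketch comparing per-class centroids of hidden activations (toy data, not the thesis' actual features):

```python
import numpy as np

def class_centroid_distances(features, labels):
    """Compute Euclidean distances between per-class centroids of
    hidden-layer activations, as a simple deterministic proxy for
    how well a layer separates emotion classes (a sketch of this
    kind of analysis, not the thesis code)."""
    classes = sorted(set(labels))
    labels = np.array(labels)
    centroids = {c: features[labels == c].mean(axis=0) for c in classes}
    dists = {}
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            dists[(a, b)] = float(np.linalg.norm(centroids[a] - centroids[b]))
    return dists

# Four 2-D activation vectors, two emotion classes
feats = np.array([[0., 0.], [0., 2.], [4., 0.], [4., 2.]])
labels = ["angry", "angry", "happy", "happy"]
print(class_centroid_distances(feats, labels))  # {('angry', 'happy'): 4.0}
```

Larger inter-centroid distances at a given layer suggest that layer encodes more class-discriminative emotional information.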
Duran, Audrey. "Intelligence artificielle pour la caractérisation du cancer de la prostate par agressivité en IRM multiparamétrique". Thesis, Lyon, 2022. http://theses.insa-lyon.fr/publication/2022LYSEI008/these.pdf.
Prostate cancer (PCa) is the most frequently diagnosed cancer in men in more than half the countries in the world, and was the fifth leading cause of cancer death among men in 2020. Diagnosis of PCa includes multiparametric magnetic resonance imaging (mp-MRI) acquisition - which combines T2-weighted (T2-w), diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) sequences - prior to any biopsy. The joint analysis of these multimodal images is time-consuming and challenging, especially when individual MR sequences yield conflicting findings. In addition, the sensitivity of MRI is low for less aggressive cancers, and inter-reader reproducibility remains moderate at best. Moreover, visual analysis does not currently allow the cancer's aggressiveness, characterized by the Gleason score (GS), to be determined. This is why computer-aided diagnosis (CAD) systems based on statistical learning models have been proposed in recent years to assist radiologists in their diagnostic task, but the vast majority of these models focus on the binary detection of clinically significant (CS) lesions. The objective of this thesis is to develop a CAD system to detect and segment PCa on mp-MRI images, and also to characterize lesion aggressiveness by predicting the associated GS. In a first part, we present a supervised CAD system to segment PCa by aggressiveness from T2-w and ADC maps. This end-to-end multi-class neural network jointly segments the prostate gland and cancer lesions with GS group grading. The model was trained and validated with 5-fold cross-validation on a heterogeneous series of 219 MRI exams acquired on three different scanners prior to prostatectomy. Regarding the automatic GS group grading, Cohen's quadratic weighted kappa coefficient (κ) is 0.418 ± 0.138, which is, to our knowledge, the best reported lesion-wise kappa for GS segmentation. The model also shows encouraging generalization capacity on the PROSTATEx-2 public dataset.
In a second part, we focus on a weakly supervised model that allows the inclusion of partly annotated data, where lesions are identified by points only, yielding substantial time savings and enabling the inclusion of biopsy-based databases. Regarding the automatic GS group grading on our private dataset, we show that we can approach the performance of the baseline fully supervised model while using only 6% of annotated voxels for training. In the last part, we study the contribution of DCE MRI, a sequence often omitted as input to deep models, to the detection and characterization of PCa. We evaluate several ways to encode the perfusion information from DCE MRI in a U-Net-like architecture. Parametric maps derived from DCE MR exams are shown to positively impact the segmentation and grading performance for PCa lesions.
Carvalho, Micael. "Deep representation spaces". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS292.
In recent years, deep learning techniques have swept the state of the art in many applications of machine learning, becoming the new standard approach. The architectures issued from these techniques have been used for transfer learning, which extended the power of deep models to tasks that did not have enough data to train them from scratch. This thesis studies the representation spaces created by deep architectures. First, we study properties inherent to them, with particular interest in the dimensionality, redundancy and precision of their features. Our findings reveal a strong degree of robustness, pointing the way to simple and powerful compression schemes. Then, we focus on refining these representations. We adopt a cross-modal multi-task problem and design a loss function capable of taking advantage of data coming from multiple modalities, while also taking into account different tasks associated with the same dataset. In order to correctly balance these losses, we also develop a new sampling scheme that only takes into account examples contributing to the learning phase, i.e. those having a positive loss. Finally, we test our approach on a large-scale dataset of cooking recipes and associated pictures. Our method achieves a 5-fold improvement over the state of the art, and we show that the multi-task aspect of our approach promotes a semantically meaningful organization of the representation space, allowing it to perform subtasks never seen during training, such as ingredient exclusion and selection. The results we present in this thesis open many possibilities, including feature compression for remote applications, robust multi-modal and multi-task learning, and feature-space refinement.
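The positive-loss sampling idea admits a very short sketch: only examples with a strictly positive loss enter the average, so already-solved examples (e.g. margin-based pairs at zero loss) do not dilute the gradient. Function names here are ours, not the authors':

```python
import numpy as np

def positive_loss_mask(losses):
    """Keep only the examples that still contribute to learning,
    i.e. those with a strictly positive loss (a margin or triplet
    loss already at zero carries no gradient)."""
    return np.asarray(losses) > 0

def mean_positive_loss(losses):
    """Average the loss over contributing examples only, so the
    many solved examples do not dilute the training signal."""
    losses = np.asarray(losses, dtype=float)
    mask = positive_loss_mask(losses)
    if not mask.any():
        return 0.0
    return float(losses[mask].mean())

batch = [0.0, 0.0, 0.5, 1.5]      # two solved examples, two active ones
print(mean_positive_loss(batch))  # 1.0, not the diluted naive mean of 0.5
```

Averaging over the full batch instead would shrink the effective learning rate as training progresses and more examples reach zero loss.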
For the cooking application in particular, many of our findings are directly applicable in a real-world context, especially for the detection of allergens, the search for alternative recipes under dietary restrictions, and menu planning.
Corbat, Lisa. "Fusion de segmentations complémentaires d'images médicales par Intelligence Artificielle et autres méthodes de gestion de conflits". Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCD029.
Nephroblastoma is the most common kidney tumour in children, and its diagnosis is based exclusively on imaging. This work is part of a larger project: the European project SAIAD (Automated Segmentation of Medical Images Using Distributed Artificial Intelligence). The aim of the project is to design a platform capable of performing different automatic segmentations from source images using Artificial Intelligence (AI) methods, and thus obtain a faithful three-dimensional reconstruction. Work carried out in a previous thesis of the research team led to the creation of a segmentation platform. It allows several structures to be segmented individually, by methods such as deep learning, more particularly Convolutional Neural Networks (CNNs), as well as Case-Based Reasoning (CBR). However, it is then necessary to automatically fuse the segmentations of these different structures in order to obtain a complete, relevant segmentation. When aggregating these structures, contradictory pixels may appear. These conflicts can be resolved by various methods, AI-based or not, and are the subject of our research. First, we propose a non-AI fusion approach combining six different methods, based on different imaging and segmentation criteria. In parallel, two other fusion methods are proposed: one using a CNN coupled with CBR, the other a CNN trained with a specific existing segmentation learning method. These approaches were tested on a set of 14 nephroblastoma patients and demonstrated their effectiveness in resolving conflicting pixels and their ability to improve the resulting segmentations.
De, Bois Maxime. "Apprentissage profond sous contraintes biomédicales pour la prédiction de la glycémie future de patients diabétiques". Electronic Thesis or Diss., université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG065.
Despite its recent successes in computer vision and machine translation, the use of deep learning in the biomedical field faces many challenges. Among them are difficult access to data in sufficient quantity and quality, as well as the need for interoperable and interpretable models. In this thesis, we are interested in these issues from the perspective of creating models that predict the future glucose values of diabetic patients. Such models would allow patients to anticipate daily glucose variations, helping to regulate glucose in order to avoid states of hypoglycemia or hyperglycemia. To this end, we use three datasets. While the first was collected during this thesis on several type-2 diabetic patients, the other two are composed of type-1 diabetic patients, both real and virtual. Across the studies, we use each patient's past glucose, insulin, and carbohydrate data to build personalized models that predict the patient's glucose values 30 minutes into the future. First, we perform a detailed state-of-the-art analysis by building an open-source benchmark of glucose-predictive models. While the results are promising, we highlight the difficulty deep models have in making predictions that are at the same time accurate and safe for the patient. To improve the clinical acceptability of the models, we investigate the integration of clinical constraints into model training. We propose new cost functions enhancing the coherence of successive predictions; in addition, they enable the training to focus on clinically dangerous errors. We explore their practical use through an algorithm that trains a model maximizing the precision of the predictions while respecting clinical constraints set beforehand. Then, we study the use of transfer learning to improve the performance of glucose-predictive models. It eases the learning of personalized models by reusing knowledge learned on other patients.
In particular, we propose an adversarial multi-source transfer learning framework. It significantly improves the performance of the models by allowing the learning of a priori knowledge that is more general, being agnostic to the patients that are the source of the transfer. We investigate different transfer scenarios through the use of our three datasets. We show that it is possible to transfer knowledge using data coming from different experimental devices, from patients with different types of diabetes, and even from virtual patients. Finally, we are interested in improving the interpretability of deep models through the attention mechanism. In particular, we explore the use of a deep and interpretable model for glucose prediction. It implements a double attention mechanism enabling the estimation of the contribution of each input variable to the final prediction. We empirically show the value of such a model for glucose prediction by analyzing its behavior in the computation of its predictions.
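As a hedged illustration of how attention weights can expose per-variable contributions (a toy single-step mechanism with made-up weights, not the thesis' trained double-attention model):

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def attentive_prediction(x, value_weights, score_weights):
    """Minimal sketch of variable-level attention: each input variable
    receives a softmax weight, the prediction is the weighted sum of
    per-variable contributions, and the weights themselves can be read
    as each variable's contribution to the prediction."""
    attn = softmax(score_weights * x)        # attention over variables
    contributions = attn * value_weights * x
    return float(contributions.sum()), attn

# Hypothetical scaled inputs: glucose, insulin, carbohydrates
x = np.array([1.2, 0.3, 0.5])
pred, attn = attentive_prediction(x, np.ones(3), np.array([2.0, 1.0, 1.0]))
print(attn.argmax())  # 0: the glucose variable dominates this prediction
```

Reading off `attn` after a prediction is what makes such a model interpretable: the patient (or clinician) can see which input drove the forecast.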
Brenon, Alexis. "Modèle profond pour le contrôle vocal adaptatif d'un habitat intelligent". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM057/document.
Smart homes, resulting from the merger of home automation, ubiquitous computing and artificial intelligence, support inhabitants in their activities of daily living to improve their quality of life. By allowing dependent and aged people to live at home longer, these homes provide a first answer to societal problems such as the dependency tied to the aging population. In a voice-controlled home, the system has to answer users' requests covering a range of automated actions (lights, blinds, multimedia control, etc.). To achieve this, the control system of the home needs to be aware of the context in which a request has been made, and also to know the user's habits and preferences. Thus, the system must be able to aggregate information from a heterogeneous network of home-automation sensors and take the (variable) user behavior into account. The development of smart home control systems is hard due to the huge variability in home topology and user habits. Furthermore, the whole set of contextual information needs to be represented in a common space in order to reason about it and make decisions. To address these problems, we propose a system which continuously updates its model to adapt itself to the user, and which uses raw data from the sensors through a graphical representation. This method is particularly interesting because it does not require any prior inference step to extract the context. Our system uses deep reinforcement learning: a convolutional neural network to extract contextual information, and reinforcement learning for decision-making. This thesis then presents two systems: a first one based only on reinforcement learning, showing the limits of this approach in a real environment with thousands of possible states; and a second one, ARCADES, made possible by the introduction of deep learning, whose good performance proves that this approach is relevant and opens many avenues for improvement.
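The first system's plain reinforcement learning component can be illustrated with the classic tabular Q-learning update (the states, actions and rewards below are hypothetical smart-home examples, not the thesis' setup):

```python
ACTIONS = ("light_on", "light_off", "noop")

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.9):
    """Classic tabular Q-learning update, the kind of approach the
    first system relies on before deep learning is introduced.
    Q is a dict mapping (state, action) -> estimated value."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    td_target = reward + gamma * best_next       # temporal-difference target
    key = (state, action)
    Q[key] = Q.get(key, 0.0) + alpha * (td_target - Q.get(key, 0.0))
    return Q

# Hypothetical interaction: in state 'dark_evening', turning the
# light on is repeatedly rewarded by the user.
Q = {}
for _ in range(200):
    q_learning_update(Q, "dark_evening", "light_on", 1.0, "lit_evening")
print(Q[("dark_evening", "light_on")])  # converges towards 1.0
```

The table's size grows with the number of (state, action) pairs, which is exactly the scaling limit the abstract points to: with thousands of contextual states, a learned representation (the CNN) replaces the explicit table.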
Mercadier, Yves. "Classification automatique de textes par réseaux de neurones profonds : application au domaine de la santé". Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS068.
This thesis focuses on the analysis of textual data in the health domain, and in particular on the supervised multi-class classification of data from the biomedical literature and social media. One of the major difficulties when exploring such data with supervised learning methods is obtaining enough data for model training. Indeed, it is generally necessary to label the data manually before performing the learning step. The large size of the datasets makes this labelling task very expensive, a cost that should be reduced with semi-automatic systems. In this context, active learning, in which an oracle intervenes to choose the best examples to label, is promising. The intuition is as follows: by choosing the examples smartly rather than randomly, the models should improve with less effort from the oracle and therefore at lower cost (i.e. with fewer annotated examples). In this thesis, we evaluate different active learning approaches combined with recent deep learning models. In addition, when only a small annotated dataset is available, one possibility for improvement is to artificially increase the quantity of data during the training phase, by automatically creating new data from existing data. More precisely, we inject knowledge by taking into account the invariant properties of the data with respect to certain transformations. The augmented data can thus cover an unexplored part of the input space, avoid overfitting and improve the generalization of the model. In this thesis, we propose and evaluate a new approach to textual data augmentation. These two contributions are evaluated on different textual datasets in the medical domain.
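One simple instance of invariance-based textual augmentation (a generic adjacent-token swap, shown here as an illustration and not the specific method proposed in the thesis) can be sketched as:

```python
import random

def augment_by_swap(tokens, n_swaps=1, rng=random):
    """Simple textual data-augmentation sketch: randomly swap adjacent
    tokens, exploiting the (approximate) invariance of the class label
    to small word-order perturbations. One of many possible label-
    preserving transformations (synonym replacement, deletion, etc.)."""
    tokens = list(tokens)
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i = rng.randrange(len(tokens) - 1)
        tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
    return tokens

rng = random.Random(42)
sentence = "patient reports mild chest pain".split()
print(augment_by_swap(sentence, n_swaps=1, rng=rng))
```

Each augmented copy keeps the original class label (here, whatever label the sentence carries), so the training set grows without any extra annotation effort.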
Bilodeau, Anthony. "Apprentissage faiblement supervisé appliqué à la segmentation d'images de protéines neuronales". Master's thesis, Université Laval, 2020. http://hdl.handle.net/20.500.11794/39752.
Thesis or dissertation with inserted articles
Honour roll (Tableau d'honneur) of the Faculté des études supérieures et postdoctorales, 2020-2021
In cell biology, optical microscopy is commonly used to visualize and characterize the presence and morphology of biological structures. Following the acquisition, an expert has to annotate the structures for quantification. This is a difficult task, requiring many hours of sometimes repetitive work, which can result in annotation errors caused by labelling fatigue. Machine learning promises to automate complex tasks from a large set of annotated sample data. My master's project consists in using weakly supervised techniques, where the annotations required for training are reduced and/or less precise, for the segmentation of neural structures. I first tested the use of polygons delimiting the structure of interest for the complex task of segmenting the neuronal protein F-actin in super-resolution microscopy images. The complexity of the task stems from the heterogeneous morphology of neurons, the high number of instances to segment in an image, and the presence of many distractors. Despite these difficulties, the use of weak annotations made it possible to quantify a novel change in the conformation of the F-actin protein as a function of neuronal activity. I further simplified the annotation task by requiring only binary labels that indicate the presence of structures in the image, reducing annotation time by a factor of 30. In this way, the algorithm is trained to predict the content of an image and then extracts the semantic features important for recognizing the structure of interest using attention mechanisms. The segmentation accuracy obtained on F-actin images is higher than that of the polygonal annotations and equivalent to that of an expert's precise annotations. This new approach should facilitate the quantification of dynamic changes that occur under the microscope in living cells and reduce errors caused by inattention or selection bias in the choice of regions of interest in microscopy images.
Léon, Aurélia. "Apprentissage séquentiel budgétisé pour la classification extrême et la découverte de hiérarchie en apprentissage par renforcement". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS226.
This thesis uses the notion of budget to study problems of complexity (be it computational complexity, a complex task for an agent, or complexity due to a small amount of data). Indeed, the main goal of current techniques in machine learning is usually to obtain the best accuracy, without worrying about the cost of the task. The concept of budget makes it possible to take this parameter into account while maintaining good performance. We first focus on classification problems with a large number of classes: the complexity of those algorithms can be reduced through the use of decision trees (here learned with budgeted reinforcement learning techniques) or by associating each class with a (binary) code. We then deal with reinforcement learning problems and the discovery of a hierarchy that breaks a (complex) task down into simpler tasks to facilitate learning and generalization. Here, this discovery is achieved by reducing the cognitive effort of the agent (considered in this work as equivalent to the use of an additional observation). Finally, we address problems of understanding and generating instructions in natural language, where data are available only in small quantities: to this end, we test the simultaneous use of an agent that understands instructions and an agent that generates them.
Feutry, Clément. "Two sides of relevant information : anonymized representation through deep learning and predictor monitoring". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS479.
The work presented here lies, for its first part, at the intersection of deep learning and anonymization. A full framework was developed to identify and remove, to a certain extent and in an automated manner, the features linked to an identity in the context of image data. Two different kinds of data processing were explored. Both share the same Y-shaped network architecture, although the components of this network vary according to the final purpose. The first one built, from the ground up, an anonymized representation that allows a trade-off between keeping relevant features and tampering with private features. This framework led to a new loss. The second kind of data processing specified no relevant information about the data, only private information, meaning that everything not related to private features is assumed relevant. The anonymized representation therefore shares the same nature as the initial data (e.g. an image is transformed into an anonymized image). This task led to another type of architecture (still Y-shaped) and provided results strongly dependent on the type of data. The second part of the work concerns another kind of relevant information: it focuses on monitoring predictor behavior. In the context of black-box analysis, we only have access to the probabilities output by the predictor (without any knowledge of the structure/architecture producing these probabilities). This monitoring is done in order to detect abnormal behavior, an indicator of a potential mismatch between the data statistics and the model statistics. Two methods are presented, using different tools. The first is based on comparing the empirical cumulative distributions of known data and of the data to be tested. The second introduces two tools: one relying on the classifier's uncertainty and the other on the confusion matrix. These methods produce conclusive results.
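The first monitoring method, comparing empirical cumulative distributions, can be sketched with a Kolmogorov-Smirnov-style statistic (a plausible instantiation for illustration; the thesis's exact statistic may differ):

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Largest gap between two empirical CDFs. A large value flags a
    mismatch between the reference-data statistics and the statistics
    of the data currently fed to the black-box predictor."""
    a, b = sorted(sample_a), sorted(sample_b)
    ecdf = lambda xs, x: bisect.bisect_right(xs, x) / len(xs)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

reference = [0.1, 0.2, 0.3, 0.4, 0.5]  # predictor scores seen during validation
drifted = [0.6, 0.7, 0.8, 0.9, 1.0]    # scores on shifted incoming data
print(ks_statistic(reference, drifted))  # → 1.0
```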
Durand, Thibaut. "Weakly supervised learning for visual recognition". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066142/document.
This thesis studies the problem of image classification, where the goal is to predict whether a semantic category is present in the image, based on its visual content. To analyze complex scenes, it is important to learn localized representations. To limit the cost of annotation during training, we have focused on weakly supervised learning approaches. In this thesis, we propose several models that simultaneously classify and localize objects, using only global labels during training. The weak supervision significantly reduces the cost of full annotation, but it makes learning more challenging. The key issue is how to aggregate local scores (e.g. over regions) into a global score (e.g. over the image). The main contribution of this thesis is the design of new pooling functions for weakly supervised learning. In particular, we propose a "max + min" pooling function, which unifies many existing pooling functions. We describe how to use this pooling in the Latent Structured SVM framework as well as in convolutional networks. To solve the optimization problems, we present several solvers, some of which allow optimizing a ranking metric such as Average Precision. We experimentally show the interest of our models with respect to state-of-the-art methods on ten standard image classification datasets, including the large-scale dataset ImageNet.
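A minimal sketch of a "max + min"-style pooling function (the weighting scheme here is simplified; the thesis's exact parametrisation may differ): the global image score combines the top-scoring regions, acting as positive evidence, with the lowest-scoring ones, acting as negative evidence.

```python
def max_min_pooling(region_scores, k=1, m=1, alpha=1.0):
    """Aggregate per-region scores into a global image score: the mean
    of the k highest scores (positive evidence for the class) plus
    alpha times the mean of the m lowest scores (negative evidence)."""
    s = sorted(region_scores, reverse=True)
    return sum(s[:k]) / k + alpha * sum(s[-m:]) / m

# one strongly positive region and one strongly negative region
print(max_min_pooling([0.9, 0.2, -0.5, 0.1]))  # → 0.4
```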
Chen, Hao. "Vers la ré-identification de personnes non-supervisée". Thesis, Université Côte d'Azur, 2022. http://www.theses.fr/2022COAZ4014.
As a core component of intelligent video-surveillance systems, person re-identification (ReID) aims to retrieve a person of interest across non-overlapping cameras. Despite significant improvements in supervised ReID, the cumbersome annotation process makes it less scalable in real-world deployments. Moreover, as appearance representations can be affected by noisy factors that differ between domains, such as illumination level and camera properties, person ReID models suffer a large performance drop in the presence of domain gaps. We are particularly interested in designing algorithms that can adapt a person ReID model to a target domain without human supervision. In this context, we mainly focus on designing unsupervised domain adaptation and unsupervised representation learning methods for person ReID. In this thesis, we first explore how to build robust representations by combining both global and local features under the supervised condition. Then, towards an unsupervised domain-adaptive ReID system, we propose three unsupervised methods for person ReID: 1) teacher-student knowledge distillation with asymmetric network structures to encourage feature diversity; 2) a joint generative and contrastive learning framework that generates augmented views with a generative adversarial network for contrastive learning; and 3) exploring inter-instance relations and designing relation-aware loss functions for better contrastive-learning-based person ReID. Our methods have been extensively evaluated on mainstream ReID datasets, such as Market-1501, DukeMTMC-reID and MSMT17. The proposed methods significantly outperform previous ones on these datasets, pushing person ReID towards real-world deployments.
Wilson, Dennis G. "Évolution des principes de la conception des réseaux de neurones artificiels". Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30075.
The biological brain is an ensemble of individual components which have evolved over millions of years. Neurons and other cells interact in a complex network from which intelligence emerges. Many of the neural designs found in the biological brain have been used in computational models to power artificial intelligence, with modern deep neural networks spurring a revolution in computer vision, machine translation, natural language processing, and many more domains. However, artificial neural networks are based on only a small subset of biological functionality of the brain, and often focus on global, homogeneous changes to a system that is complex and locally heterogeneous. In this work, we examine the biological brain, from single neurons to networks capable of learning. We examine individually the neural cell, the formation of connections between cells, and how a network learns over time. For each component, we use artificial evolution to find the principles of neural design that are optimized for artificial neural networks. We then propose a functional model of the brain which can be used to further study select components of the brain, with all functions designed for automatic optimization such as evolution. Our goal, ultimately, is to improve the performance of artificial neural networks through inspiration from modern neuroscience. However, through evaluating the biological brain in the context of an artificial agent, we hope to also provide models of the brain which can serve biologists.
Chandra, Siddhartha. "Apprentissage Profond pour des Prédictions Structurées Efficaces appliqué à la Classification Dense en Vision par Ordinateur". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC033/document.
In this thesis we propose a structured prediction technique that combines the virtues of Gaussian Conditional Random Fields (G-CRFs) with Convolutional Neural Networks (CNNs). The starting point of this thesis is the observation that, while being of a limited form, G-CRFs allow us to perform exact Maximum-A-Posteriori (MAP) inference efficiently. We prefer exactness and simplicity over generality and advocate G-CRF-based structured prediction in deep learning pipelines. Our proposed structured prediction methods accommodate (i) exact inference, (ii) both short- and long-term pairwise interactions, (iii) rich CNN-based expressions for the pairwise terms, and (iv) end-to-end training alongside CNNs. We devise novel implementation strategies which allow us to overcome memory and computational challenges.
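The key property, exact MAP inference in a G-CRF, reduces to solving a linear system, which a small sketch makes concrete (Gaussian elimination stands in for the large-scale solvers actually needed alongside CNNs):

```python
def gcrf_map(A, b):
    """Exact MAP inference in a Gaussian CRF: the minimiser of the
    quadratic energy (1/2) x^T A x - b^T x solves the linear system
    A x = b, here by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))  # pivot row
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back-substitution
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

# two coupled variables: unary terms in b, pairwise coupling off-diagonal in A
print(gcrf_map([[2.0, -1.0], [-1.0, 2.0]], [1.0, 1.0]))  # → [1.0, 1.0]
```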
Chen, Xing. "Modeling and simulations of skyrmionic neuromorphic applications". Thesis, université Paris-Saclay, 2022. http://www.theses.fr/2022UPAST083.
Spintronic nanodevices, which exploit both the magnetic and electrical properties of electrons, offer various exciting characteristics promising for neuromorphic computing. Magnetic textures, such as domain walls and skyrmions, are particularly intriguing as neuromorphic components because they can support different functionalities thanks to their rich physical mechanisms. How skyrmion dynamics can be used to build energy-efficient neuromorphic hardware, and how deep learning can help achieve fast and accurate tests and validations of the proposals, form the central topics of this thesis. The major contributions and innovations of this thesis can be summarized as follows. 1. Numerical and theoretical studies of skyrmion dynamics in confined nanostructures. We explore skyrmion dynamics in terms of size, velocity, energy and stability in a width-varying nanotrack. We found that nanoscale skyrmions with small sizes can be obtained by employing this asymmetric structure. We also identify a trade-off between the nanotrack width (storage density) and the skyrmion motion velocity (data access speed). We study skyrmion dynamics under voltage excitation, through the voltage-controlled magnetic anisotropy effect, in a circular thin film, and find that the breathing skyrmion can be regarded as a modulator. These findings can help us design efficient neuromorphic devices. 2. Skyrmion-based device applications for neuromorphic computing. We present a compact leaky-integrate-fire spiking-neuron device exploiting the current-driven skyrmion dynamics in a wedge-shaped nanotrack. We propose a true random number generator based on continuous skyrmion thermal Brownian motion in a confined geometry at room temperature. Our designs are promising for emerging low-power neuromorphic computing systems, such as spiking neural networks and stochastic/probabilistic computing. 3.
A data-driven approach for modeling dynamical physical systems based on Neural Ordinary Differential Equations (Neural ODEs). We show that the adapted formalisms of Neural ODEs, designed for spintronics, can accurately predict the behavior of a non-ideal nanodevice, including noise, after training on a minimal set of micromagnetic simulations or experimental data, with new inputs and material parameters not belonging to the training data. With this modeling strategy, we can perform more complicated computational tasks, such as Mackey-Glass time-series prediction and spoken-digit recognition, using the trained models of spintronic systems, with high accuracy and fast speed compared to conventional micromagnetic simulations.
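The forward pass of a Neural ODE can be sketched with a fixed-step integrator (the toy dynamics below are illustrative; the thesis's models are trained on spintronic data, not this relaxation example):

```python
import math

def ode_forward(state, f, t0, t1, steps):
    """Fixed-step Euler integration of dx/dt = f(x, t). In a Neural ODE
    the vector field f is a small trainable network, fitted to
    micromagnetic simulations or experimental data."""
    h = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        state = [x + h * dx for x, dx in zip(state, f(state, t))]
        t += h
    return state

# toy "learned" dynamics: exponential relaxation dx/dt = -x
relax = lambda state, t: [-x for x in state]
x1 = ode_forward([1.0], relax, 0.0, 1.0, steps=1000)
print(round(x1[0], 3))  # ≈ exp(-1) ≈ 0.368
```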
Zimmer, Matthieu. "Apprentissage par renforcement développemental". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0008/document.
Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the different consequences of its actions through interaction: it learns from its own experience, without pre-established knowledge of the goals or effects of its actions. This thesis tackles how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom, in order to solve problems closer to reality. Indeed, neural networks scale well and have good representational power: they make it possible to approximate functions on continuous spaces and allow a developmental approach, because they require little a priori knowledge of the domain. We seek to reduce the amount of interaction the agent needs to achieve acceptable behavior. To do so, we proposed the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by proceeding with a gradual increase in the dimensionality of its sensorimotor space.
Martinez, Coralie. "Classification précoce de séquences temporelles par de l'apprentissage par renforcement profond". Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT123.
Early classification (EC) of time series is a recent research topic in the field of sequential data analysis. It consists in assigning a label to data that is collected sequentially, with new points arriving over time, where the prediction has to be made using as few data points as possible. The EC problem is of paramount importance for supporting decision makers in many real-world applications, ranging from process control to fraud detection. It is particularly interesting for applications concerned with the costs induced by the acquisition of data points, or for applications which seek rapid label prediction in order to take early actions. This is for example the case in the field of health, where it is necessary to provide a medical diagnosis as soon as possible from the sequence of medical observations collected over time. Another example is predictive maintenance, with the objective of anticipating the breakdown of a machine from its sensor signals. In this doctoral work, we developed a new approach to this problem based on the formulation of a sequential decision-making problem: the EC model has to decide between classifying an incomplete sequence or delaying the prediction to collect additional data points. Specifically, we describe this problem as a Partially Observable Markov Decision Process, denoted EC-POMDP. The approach consists in training an EC agent with Deep Reinforcement Learning (DRL) in an environment characterized by the EC-POMDP. The main motivation for this approach was to offer an end-to-end model for EC able to simultaneously learn optimal patterns in the sequences for classification and optimal strategic decisions for the time of prediction. The method also allows setting the importance of time against classification accuracy in the definition of the rewards, according to the application and its willingness to make this compromise.
In order to solve the EC-POMDP and model the policy of the EC agent, we applied an existing DRL algorithm, Double Deep-Q-Network, whose general principle is to update the policy of the agent during training episodes using a replay memory of past experiences. We showed that the application of the original algorithm to the EC problem leads to imbalanced-memory issues which can weaken the training of the agent. Consequently, to cope with these issues and offer more robust training, we adapted the algorithm to the specificities of the EC-POMDP and introduced strategies for memory management and episode management. In experiments, we showed that these contributions improve the performance of the agent over the original algorithm, and that we were able to train an EC agent which compromises between speed and accuracy on each sequence individually. We were also able to train EC agents on public datasets for which we have no expertise, showing that the method is applicable to various domains. Finally, we proposed strategies to interpret the decisions of the agent, and to validate or reject them. In experiments, we showed how these solutions can help gain insight into the agent's choice of action.
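The agent's decision loop, wait for another data point or commit to a class, can be sketched as follows (the Q-function here is a hand-made toy standing in for the trained Double Deep-Q-Network):

```python
def early_classify(sequence, q_values, n_classes):
    """Early classification as sequential decision making: after each
    new point the agent picks the action with the highest Q-value.
    Actions 0..n_classes-1 commit to a class; action n_classes waits."""
    prefix = []
    for x in sequence:
        prefix.append(x)
        q = q_values(prefix)  # one Q-value per action
        action = max(range(len(q)), key=q.__getitem__)
        if action < n_classes or len(prefix) == len(sequence):
            # commit now, or force a decision at the end of the sequence
            best_class = max(range(n_classes), key=q.__getitem__)
            label = action if action < n_classes else best_class
            return label, len(prefix)  # predicted label, points consumed

# toy Q-function: waits until three points are seen, then prefers class 1
toy_q = lambda p: [0.1, 0.9, 0.2] if len(p) >= 3 else [0.1, 0.2, 0.9]
print(early_classify([5, 3, 8, 1], toy_q, n_classes=2))  # → (1, 3)
```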
Ben-Younes, Hedi. "Multi-modal representation learning towards visual reasoning". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS173.
The quantity of images on the Internet is dramatically increasing, and it is becoming critically important to develop the technology for a precise and automatic understanding of visual content. As image recognition systems become more and more relevant, researchers in artificial intelligence now seek the next generation of vision systems, able to perform high-level scene understanding. In this thesis, we are interested in Visual Question Answering (VQA), which consists in building models that answer any natural-language question about any image. Because of its nature and complexity, VQA is often considered a proxy for visual reasoning. Classically, VQA architectures are designed as trainable systems that are provided with images, questions about them, and their answers. To tackle this problem, typical approaches involve modern Deep Learning (DL) techniques. In the first part, we focus on developing multimodal fusion strategies to model the interactions between image and question representations. More specifically, we explore bilinear fusion models and exploit concepts from tensor analysis to provide tractable and expressive factorizations of the parameters. These fusion mechanisms are studied under the widely used visual attention framework: the answer to the question is produced by focusing only on the relevant image regions. In the last part, we move away from the attention mechanism and build a more advanced scene-understanding architecture in which we consider objects and their spatial and semantic relations. All models are thoroughly evaluated on standard datasets, and the results are competitive with the literature.
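The idea of a tractable bilinear fusion can be sketched with a low-rank factorization (a generic rank-R scheme for illustration; the thesis's tensor factorizations are richer):

```python
def bilinear_fusion(q, v, U, V):
    """Low-rank bilinear fusion of a question embedding q and a visual
    embedding v: the full bilinear form q^T W v would need |q|*|v|
    parameters per output, so W is factorized into R rank-1 terms,
    score = sum_r (U[r].q) * (V[r].v)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return sum(dot(u, q) * dot(w, v) for u, w in zip(U, V))

# rank-2 factorization on toy 2-d embeddings
q, v = [1.0, 0.0], [0.0, 1.0]
U = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.0, 1.0], [1.0, 0.0]]
print(bilinear_fusion(q, v, U, V))  # → 1.0
```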
Abou, Bakr Nachwa. "Reconnaissance et modélisation des actions de manipulation". Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALM010.
This thesis addresses the problem of recognition, modelling and description of human activities. We describe results on three problems: (1) the use of transfer learning for the simultaneous visual recognition of objects and object states, (2) the recognition of manipulation actions from state transitions, and (3) the interpretation of a series of actions and states as events in a predefined story, to construct a narrative description. These results have been developed using food-preparation activities as an experimental domain. We start by recognising food classes, such as tomatoes and lettuce, and food states, such as sliced and diced, during meal preparation. We adapt the VGG network architecture to jointly learn the representations of food items and food states using transfer learning. We model actions as transformations of object states, and use the recognised object properties (state and type) to detect the corresponding manipulation actions by tracking object transformations in the video. An experimental performance evaluation of this approach is provided using the 50 Salads and EPIC-Kitchens datasets. We use the resulting action descriptions to construct narrative descriptions for the complex activities observed in videos of the 50 Salads dataset.
Hocquet, Guillaume. "Class Incremental Continual Learning in Deep Neural Networks". Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.
We are interested in the problem of continual learning in artificial neural networks in the case where the data are available for only one class at a time. To address the problem of catastrophic forgetting, which restrains learning performance in these conditions, we propose an approach based on representing the data of a class by a normal distribution. The transformations associated with these representations are performed by invertible neural networks, which can be trained with the data of a single class. Each class is assigned a network that models its features. In this setting, predicting the class of a sample amounts to identifying the network that best fits the sample. The advantage of such an approach is that, once a network is trained, it is no longer necessary to update it later, as each network is independent of the others. It is this particularly advantageous property that sets our method apart from previous work in this area. We support our demonstration with experiments performed on various datasets and show that our approach compares favorably with the state of the art. Subsequently, we propose to optimize our approach by reducing its memory footprint through a factorization of the network parameters. It is then possible to significantly reduce the storage cost of these networks with a limited performance loss. Finally, we also study strategies to produce efficient feature-extractor models for continual learning and show their relevance compared to the networks traditionally used for continual learning.
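The prediction rule, pick the class whose density model likes the sample best, can be sketched with one 1-d Gaussian per class standing in for the invertible networks (toy data; the `ClassDensityModel` name below is hypothetical, not the thesis's implementation):

```python
import math

class ClassDensityModel:
    """Stand-in for the per-class invertible network: each class is
    modelled independently (here by a 1-d normal distribution), so
    adding a new class never requires updating previous models."""
    def fit(self, xs):
        n = len(xs)
        self.mu = sum(xs) / n
        self.var = sum((x - self.mu) ** 2 for x in xs) / n
        return self

    def log_likelihood(self, x):
        return -0.5 * (math.log(2 * math.pi * self.var)
                       + (x - self.mu) ** 2 / self.var)

def predict(models, x):
    """Class-incremental prediction: the label of x is the index of the
    model under which x is most likely."""
    return max(range(len(models)), key=lambda i: models[i].log_likelihood(x))

# classes arrive one at a time; each gets its own independent model
models = [ClassDensityModel().fit([0.0, 0.5, -0.5]),
          ClassDensityModel().fit([5.0, 5.5, 4.5])]
print(predict(models, 4.8))  # → 1
```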
Othmani-Guibourg, Mehdi. "Supervised learning for distribution of centralised multiagent patrolling strategies". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS534.
For nearly two decades, patrolling has received significant attention from the multiagent community. Multiagent patrolling (MAP) consists in modelling a patrol task to optimise as a multiagent system. Optimising a patrol task means distributing agents as efficiently as possible over the area to patrol, in space and time, which constitutes a decision-making problem. A range of algorithms based on reactive, cognitive, reinforcement-learning, centralised and decentralised strategies, amongst others, have been developed to make such a task ever more efficient. However, the existing patrolling-specific approaches based on supervised learning are still at a preliminary stage, although a few works have addressed this issue. Central to supervised learning, which is a set of methods and tools for inferring new knowledge, is the idea of learning a function mapping any input to an output from a sample of input-output pairs; learning, in this case, enables the system to generalise to new data never observed before. Until now, the best online MAP strategy, namely one without precalculation, has turned out to be a centralised strategy with a coordinator. However, as for any centralised decision process, such a strategy is hardly scalable. The purpose of this work is then to develop and implement a new methodology aiming at turning any high-performance centralised strategy into a distributed strategy. Indeed, distributed strategies are by design resilient, more adaptive to changes in the environment, and scalable. In doing so, the centralised decision process, generally represented in MAP by a coordinator, is distributed into the patrolling agents by means of supervised learning, so that each agent of the resulting distributed strategy tends to capture a part of the algorithm executed by the centralised decision process. The outcome is a new distributed decision-making algorithm based on machine learning.
In this dissertation, such a procedure for distributing a centralised strategy is therefore established, and then concretely implemented using several artificial neural network architectures. After exposing the context and motivations of this work, we state the problem that guided our study. The main multiagent strategies devised so far for MAP are then described, in particular a high-performance coordinated strategy, which is the centralised strategy studied in this work, as well as a simple decentralised strategy used as a reference for decentralised strategies. Some existing strategies based on supervised learning are also described. Thereafter, the model as well as certain key concepts of MAP are defined. We also define the methodology laid down to address this problem, which comes in the form of a procedure for decentralising any centralised strategy by means of supervised learning. The software ecosystem developed for the needs of this work is then described: in particular PyTrol, a discrete-time simulator dedicated to MAP, developed with the aim of performing MAP simulations, assessing strategies and generating data; and MAPTrainer, a framework hinging on the PyTorch machine learning library, dedicated to machine learning research in the context of MAP.
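The distribution procedure can be sketched in miniature (a 1-d toy coordinator and a nearest-neighbour policy standing in for the neural networks; all names are illustrative, not taken from PyTrol or MAPTrainer):

```python
def distil_coordinator(coordinator, observations):
    """Record (local observation, coordinator decision) pairs: the
    supervised dataset from which each agent learns its own policy."""
    return [(obs, coordinator(obs)) for obs in observations]

def fit_lookup_policy(dataset):
    """Stand-in for the per-agent neural network: a 1-nearest-neighbour
    policy over the recorded observations."""
    return lambda obs: min(dataset, key=lambda pair: abs(pair[0] - obs))[1]

# toy coordinator on a 1-d patrol line: go left (0) below 5, right (1) above
coordinator = lambda pos: 0 if pos < 5 else 1
data = distil_coordinator(coordinator, [1, 2, 3, 7, 8, 9])
agent = fit_lookup_policy(data)
print(agent(2.4), agent(8.2))  # → 0 1
```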
Chen, Mickaël. "Learning with weak supervision using deep generative networks". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS024.
Many successes of deep learning rely on the availability of massive annotated datasets that can be exploited by supervised algorithms. Obtaining those labels at a large scale, however, can be difficult, or even impossible, in many situations. Designing methods that are less dependent on annotations is therefore a major research topic, and many semi-supervised and weakly supervised methods have been proposed. Meanwhile, the recent introduction of deep generative networks provided deep learning methods with the ability to manipulate complex distributions, allowing for breakthroughs in tasks such as image editing and domain adaptation. In this thesis, we explore how these new tools can help further alleviate the need for annotations. First, we tackle the task of performing stochastic predictions: designing systems for structured prediction that take into account the variability of possible outputs. We propose two models in this context: the first performs predictions on multi-view data with missing views, and the second predicts possible futures of a video sequence. Then, we study adversarial methods for learning a factorized latent space in a setting with two explanatory factors, only one of which is annotated. We propose models that aim to uncover semantically consistent latent representations for these factors; one model is applied to the conditional generation of motion-capture data, and another to multi-view data. Finally, we focus on the task of image segmentation, which is of crucial importance in computer vision. Building on the previously explored ideas, we propose a model for object segmentation that is entirely unsupervised.
Tardy, Mickael. "Deep learning for computer-aided early diagnosis of breast cancer". Thesis, Ecole centrale de Nantes, 2021. http://www.theses.fr/2021ECDN0035.
Breast cancer has the highest incidence amongst women. Regular screening helps reduce the mortality rate, but creates a heavy workload for clinicians. To reduce it, computer-aided diagnosis tools are designed, but a high level of performance is expected. Deep learning techniques have the potential to overcome the limitations of traditional image processing algorithms. However, several challenges come with deep learning applied to breast imaging, including heterogeneous and unbalanced data, a limited amount of annotations, and high resolution. Facing these challenges, we approach the problem from multiple angles and propose several methods integrated into a complete solution. Hence, we propose two methods for assessing breast density, one of the risk factors for cancer development, a method for abnormality detection, a method for estimating the uncertainty of a classifier, and a method for transferring knowledge from mammography to tomosynthesis. Our methods contribute to the state of the art in weakly supervised learning and open new paths for further research.
Desir, Chesner. "Classification Automatique d'Images, Application à l'Imagerie du Poumon Profond". Phd thesis, Université de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00879356.
Francis, Danny. "Représentations sémantiques d'images et de vidéos". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS605.
Recent research in Deep Learning has sent the quality of results in multimedia tasks rocketing: thanks to new big datasets of annotated images and videos, Deep Neural Networks (DNN) have outperformed other models in most cases. In this thesis, we aim at developing DNN models for automatically deriving semantic representations of images and videos. In particular, we focus on two main tasks: vision-text matching and image/video automatic captioning. Addressing the matching task can be done by comparing visual objects and texts in a visual space, a textual space or a multimodal space. Based on recent works on capsule networks, we define two novel models to address the vision-text matching problem: Recurrent Capsule Networks and Gated Recurrent Capsules. In image and video captioning, we have to tackle a challenging task where a visual object has to be analyzed and translated into a textual description in natural language. For that purpose, we propose two novel curriculum learning methods. Moreover, regarding video captioning, analyzing videos requires not only parsing still images, but also drawing correspondences through time. We propose a novel Learned Spatio-Temporal Adaptive Pooling method for video captioning that combines spatial and temporal analysis. Extensive experiments on standard datasets assess the interest of our models and methods with respect to existing works.
Pageaud, Simon. "SmartGov : architecture générique pour la co-construction de politiques urbaines basée sur l'apprentissage par renforcement multi-agent". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1128.
In this thesis, we propose the SmartGov model, coupling multi-agent simulation and multi-agent deep reinforcement learning, to help co-construct urban policies and integrate all stakeholders in the decision process. Smart Cities provide sensor data from urban areas that increase the realism of the simulation in SmartGov. Our first contribution is a generic architecture for multi-agent simulation of the city, to study the emergence of global behavior with realistic agents reacting to political decisions. With multi-level modeling and a coupling of different dynamics, our tool learns environment specificities and suggests relevant policies. Our second contribution improves the autonomy and adaptation of the decision function with multi-agent, multi-level reinforcement learning. A set of clustered agents is distributed over the studied area to learn local specificities without any prior knowledge of the environment. Trust score assignment and individual rewards help reduce the impact of non-stationarity on experience replay in deep reinforcement learning. These contributions bring forth a complete system to co-construct urban policies in the Smart City. We compare our model with different approaches from the literature on a parking fee policy to show the benefits and limits of our contributions.
Cabanes, Quentin. "New hardware platform-based deep learning co-design methodology for CPS prototyping : Objects recognition in autonomous vehicle case-study". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG042.
Cyber-Physical Systems (CPSs) are a mature research topic at the intersection of Artificial Intelligence (AI) and Embedded Systems (ES). A CPS can be defined as a networked ES that analyzes a physical environment via sensors and makes decisions from its current state to steer it toward a desired outcome via actuators. CPSs deal with data analysis, which needs powerful algorithms combined with robust hardware architectures. On the one hand, Deep Learning (DL) is proposed as the main solution algorithm. On the other hand, the standard design and prototyping methodologies for ES are not adapted to modern DL-based CPSs. In this thesis, we investigate AI design for CPSs around embedded DL using a hybrid CPU/FPGA platform. We propose a methodology to develop DL applications for CPSs based on the use of a neural network accelerator and automation software to speed up prototyping. We present the design and prototyping of our hardware neural network accelerator. Finally, we validate our work on a smart LIDAR (LIght Detection And Ranging) use case with several algorithms for pedestrian detection using a 3D point cloud from a LIDAR.
Mehr, Éloi. "Unsupervised Learning of 3D Shape Spaces for 3D Modeling". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS566.
Even though 3D data is becoming increasingly popular, especially with the democratization of virtual and augmented experiences, it remains very difficult to manipulate a 3D shape, even for designers or experts. Given a database containing 3D instances of one or several categories of objects, we want to learn the manifold of plausible shapes in order to develop new intelligent 3D modeling and editing tools. However, this manifold is often much more complex than in the 2D domain. Indeed, 3D surfaces can be represented using various embeddings, and may also exhibit different alignments and topologies. In this thesis we study the manifold of plausible shapes in the light of the aforementioned challenges, through three different points of view. First of all, we consider the manifold as a quotient space, in order to learn the shapes' intrinsic geometry from a dataset where the 3D models are not co-aligned. Then, we assume that the manifold is disconnected, which leads to a new deep learning model that is able to automatically cluster and learn the shapes according to their topology. Finally, we study the conversion of an unstructured 3D input to an exact geometry, represented as a structured tree of continuous solid primitives.
Blot, Michaël. "Étude de l'apprentissage et de la généralisation des réseaux profonds en classification d'images". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS412.
Artificial intelligence has experienced a resurgence in recent years, due to the growing ability to collect and store considerable amounts of digitized data. These huge databases allow machine learning algorithms to address certain tasks through supervised learning. Among digitized data, images remain predominant in the modern environment, and huge datasets have been created. Image classification, moreover, has allowed the development of previously neglected models: deep neural networks, or deep learning. This family of algorithms shows a great facility to fit datasets perfectly, even very large ones. Their ability to generalize remains largely misunderstood, but convolutional networks are today the undisputed state of the art. From both a research and an application point of view, the demands on deep learning will keep growing, requiring an effort to push the performance of neural networks to the maximum of their capacity. This is the purpose of our research, whose contributions are presented in this thesis. We first looked at the issue of training and considered accelerating it through distributed methods. We then studied network architectures in order to improve them without increasing their complexity. Finally, we focused on the regularization of network training, studying a regularization criterion based on information theory that we deployed in two different ways.
Neverova, Natalia. "Deep learning for human motion analysis". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI029/document.
The research goal of this work is to develop learning methods advancing the automatic analysis and interpretation of human motion, from different perspectives and based on various sources of information, such as images, video, depth, mocap data, audio and inertial sensors. For this purpose, we propose several deep neural models and associated training algorithms for supervised classification and semi-supervised feature learning, as well as modelling of temporal dependencies, and show their efficiency on a set of fundamental tasks, including detection, classification, parameter estimation and user verification. First, we present a method for human action and gesture spotting and classification based on multi-scale and multi-modal deep learning from visual signals (such as video, depth and mocap data). Key to our technique is a training strategy which exploits, first, careful initialization of individual modalities and, second, gradual fusion involving random dropping of separate channels (dubbed ModDrop) for learning cross-modality correlations while preserving the uniqueness of each modality-specific representation. Moving from one-to-N mapping to continuous evaluation of gesture parameters, we then address the problem of hand pose estimation and present a new method for regression on depth images, based on semi-supervised learning with convolutional deep neural networks, where raw depth data is fused with an intermediate representation in the form of a segmentation of the hand into parts. In separate but related work, we explore convolutional temporal models for human authentication based on motion patterns, where the data is captured by inertial sensors (such as accelerometers and gyroscopes) built into mobile devices. We propose an optimized shift-invariant dense convolutional mechanism and incorporate the discriminatively trained dynamic features in a probabilistic generative framework taking into account temporal characteristics.
Our results demonstrate that human kinematics convey important information about user identity and can serve as a valuable component of multi-modal authentication systems.
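The ModDrop training strategy mentioned in the abstract above can be illustrated with a minimal pure-Python sketch of modality dropping during fusion. All names here (`moddrop`, `p_drop`) and the keep-at-least-one rule are illustrative assumptions, not details from the thesis.

```python
import random

def moddrop(modality_features, p_drop=0.3, rng=random):
    """Randomly zero out whole modality channels during training (ModDrop-style).

    modality_features: dict mapping modality name -> feature vector (list of floats).
    At least one modality is always kept so the fused input is never empty
    (an assumed safeguard, not from the thesis).
    """
    names = list(modality_features)
    kept = {m: (rng.random() >= p_drop) for m in names}
    if not any(kept.values()):                 # guarantee one surviving modality
        kept[rng.choice(names)] = True
    return {
        m: feats if kept[m] else [0.0] * len(feats)
        for m, feats in modality_features.items()
    }

feats = {"video": [0.2, 0.8], "depth": [1.0, 0.5], "mocap": [0.3, 0.9]}
random.seed(0)
dropped = moddrop(feats, p_drop=0.5)
```

Forcing the network to predict from random subsets of modalities is what encourages cross-modal correlations while keeping each stream useful on its own.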
Aklil, Nassim. "Apprentissage actif sous contrainte de budget en robotique et en neurosciences computationnelles. Localisation robotique et modélisation comportementale en environnement non stationnaire". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066225/document.
Decision-making is a highly researched field in science, be it in neuroscience, to understand the processes underlying animal decision-making, or in robotics, to model efficient and rapid decision-making processes in real environments. In neuroscience, this problem is addressed online with sequential decision-making models based on reinforcement learning. In robotics, the primary objective is efficiency, in order to be deployed in real environments. However, what can be called the budget, i.e. the limitations inherent to the hardware, such as computation time, the limited actions available to the robot or the lifetime of the robot's battery, is often not taken into account in robotics at present. In this thesis we propose to introduce the notion of budget as an explicit constraint in robotic learning processes applied to a localization task, by implementing a model based on work in statistical learning that processes data under explicit constraints, limiting the input of data or imposing a more explicit time constraint. In order to discuss the online operation of this type of budgeted learning algorithm, we also discuss some possible inspirations from computational neuroscience. In this context, the alternation between information retrieval for localization and the decision to move may, for a robot, be indirectly linked to the exploration-exploitation trade-off. We present our contribution to the modeling of this trade-off in animals in a non-stationary task involving different levels of uncertainty, and we make the link with multi-armed bandit methods.
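The exploration-exploitation trade-off and the multi-armed bandit methods mentioned above are classically illustrated by an epsilon-greedy agent. A self-contained sketch follows; all parameter values are illustrative and not taken from the thesis.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, rng=random):
    """Run an epsilon-greedy agent on a Bernoulli multi-armed bandit."""
    n_arms = len(true_means)
    counts = [0] * n_arms
    values = [0.0] * n_arms           # running estimate of each arm's mean reward
    for _ in range(steps):
        if rng.random() < epsilon:    # explore: pick a random arm
            arm = rng.randrange(n_arms)
        else:                         # exploit: pick the current best estimate
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

random.seed(1)
values, counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

A budget constraint in the sense of the thesis would cap `steps` (or charge a cost per pull), forcing the agent to balance information gathering against acting on what it already knows.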
Pascal, Lucas. "Optimization of deep multi-task networks". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS535.
Multi-task learning (MTL) is a learning paradigm involving the joint optimization of parameters with respect to multiple tasks. By learning multiple related tasks, a learner receives more complete and complementary information about the input domain from which the tasks are drawn. This makes it possible to gain a better understanding of the domain by building a more accurate set of assumptions about it. In practice, however, the broader use of MTL is hindered by the lack of consistent performance gains observed in deep multi-task networks, which often suffer from performance degradation caused by task interference. This thesis addresses the problem of task interference in multi-task learning, in order to improve the generalization capabilities of deep neural networks.
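Task interference, as described above, can be pictured with a toy example: one shared parameter receives the summed gradients of several quadratic task losses, so related tasks reinforce each other while conflicting tasks cancel out. This sketch is only illustrative and is not taken from the thesis.

```python
def mtl_gradient_step(w, task_targets, lr=0.1, task_weights=None):
    """One gradient step on a shared parameter w for several quadratic task losses.

    Each task t has loss 0.5 * (w - target_t)^2, so its gradient is (w - target_t).
    The shared parameter receives the (weighted) sum of all task gradients:
    conflicting targets pull w in opposite directions, a toy picture of
    task interference.
    """
    if task_weights is None:
        task_weights = [1.0] * len(task_targets)
    grad = sum(a * (w - t) for a, t in zip(task_weights, task_targets))
    return w - lr * grad

# Related tasks (targets 1.0 and 1.2): w settles near their mean, 1.1.
w = 0.0
for _ in range(200):
    w = mtl_gradient_step(w, [1.0, 1.2])

# Conflicting tasks (targets +1 and -1): w is stuck at the compromise 0,
# satisfying neither task.
v = 0.5
for _ in range(200):
    v = mtl_gradient_step(v, [1.0, -1.0])
```

In deep multi-task networks the same cancellation happens per shared weight, which is why mitigating interference is the focus of the thesis.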
Sendi, Naziha. "Transparent approach based on deep learning and multiagent argumentation for hypertension management". Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASG036.
Hypertension is known to be one of the leading causes of heart disease and stroke, killing around 7.5 million people worldwide every year, mostly because of its late diagnosis. In order to confirm a diagnosis of hypertension, it is necessary to collect repeated medical measurements. One solution is to exploit these measurements and integrate them into Electronic Health Records processed by machine learning algorithms. In this work, we focused on ensemble learning methods, which combine several machine learning algorithms for classification. These models have been widely used to improve the classification performance of a single classifier, through methods such as bagging and boosting, which mainly rely on majority or weighted voting to integrate the results of the classifiers. However, one major drawback of these approaches is their opacity, as they neither explain their results nor allow the integration of prior knowledge. When machine learning is used for healthcare, the explanation of classification results and the ability to introduce domain and clinical knowledge into the learned model become a necessity. In order to overcome these weaknesses, we introduce a new ensemble method based on multiagent argumentation. The integration of argumentation and machine learning has been proven fruitful, and argumentation is a relevant way of combining classifiers: indeed, argumentation can imitate the human decision-making process to resolve conflicts. Our idea is to automatically extract the arguments from ML models and combine them using argumentation. This makes it possible to exploit the internal knowledge of each classifier, to provide an explanation for the decisions, and to facilitate the integration of domain and clinical knowledge. The objectives of this thesis were multiple. From the medical application point of view, the goal was to predict the treatment of hypertension and the date of the next doctor visit.
From the scientific point of view, the objective was to add transparency to ensemble methods and to inject domain and clinical knowledge. The contributions of the thesis are various: explaining predictions; integrating internal classification knowledge; injecting domain and clinical knowledge; and improving prediction accuracy. The results demonstrate that our method effectively provides explanations and transparency for the predictions of ensemble methods and is able to integrate domain and clinical knowledge into the system. Moreover, it improves the performance of existing machine learning algorithms.
Strub, Florian. "Développement de modèles multimodaux interactifs pour l'apprentissage du langage dans des environnements visuels". Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I030.
While our representation of the world is shaped by our perceptions, our languages, and our interactions, they have traditionally been distinct fields of study in machine learning. Fortunately, this partitioning started opening up with the recent advent of deep learning methods, which standardized raw feature extraction across communities. However, multimodal neural architectures are still in their infancy, and deep reinforcement learning is often limited to constrained environments. Yet, we ideally aim to develop large-scale multimodal and interactive models that correctly apprehend the complexity of the world. As a first milestone, this thesis focuses on visually grounded language learning, for three reasons: (i) vision and language are both well-studied modalities across different scientific fields; (ii) this work builds upon deep learning breakthroughs in natural language processing and computer vision; (iii) the interplay between language and vision has been acknowledged in cognitive science. More precisely, we first designed the GuessWhat?! game for assessing the visually grounded language understanding of models: two players collaborate to locate a hidden object in an image by asking a sequence of questions. We then introduce modulation as a novel deep multimodal mechanism, and we show that it successfully fuses visual and linguistic representations by taking advantage of the hierarchical structure of neural networks. Finally, we investigate how reinforcement learning can support visually grounded language learning and cement the underlying multimodal representation. We show that such interactive learning leads to consistent language strategies, but gives rise to new research issues.
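The "modulation" mechanism introduced above can be sketched as feature-wise scaling and shifting of visual features by parameters predicted from the linguistic input, in the spirit of FiLM-style conditioning. All weights and names below are toy assumptions, not taken from the thesis.

```python
def linear(vec, weights, bias):
    """Dense layer: weights is a list of rows, one row per output unit."""
    return [sum(w * v for w, v in zip(row, vec)) + b
            for row, b in zip(weights, bias)]

def film_layer(visual, question_emb, w_gamma, b_gamma, w_beta, b_beta):
    """Modulate each visual feature by a scale (gamma) and a shift (beta)
    predicted from the question embedding."""
    gammas = linear(question_emb, w_gamma, b_gamma)
    betas = linear(question_emb, w_beta, b_beta)
    return [g * x + b for g, x, b in zip(gammas, visual, betas)]

# Toy values: a 2-d visual feature modulated by a 2-d question embedding.
visual = [1.0, 2.0]
question_emb = [1.0, 0.0]
modulated = film_layer(
    visual, question_emb,
    w_gamma=[[2.0, 0.0], [0.0, 1.0]], b_gamma=[0.0, 1.0],
    w_beta=[[0.5, 0.0], [0.0, 0.0]], b_beta=[0.0, 0.0],
)
```

Because each layer of the visual hierarchy can receive its own gammas and betas, the language signal can influence early as well as late visual processing, which is the hierarchical advantage the abstract alludes to.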
Shahid, Mustafizur Rahman. "Deep learning for Internet of Things (IoT) network security". Electronic Thesis or Diss., Institut polytechnique de Paris, 2021. http://www.theses.fr/2021IPPAS003.
The growing Internet of Things (IoT) introduces new security challenges for network activity monitoring. Most IoT devices are vulnerable because of a lack of security awareness from device manufacturers and end users. As a consequence, they have become prime targets for malware developers who want to turn them into bots. Contrary to general-purpose devices, an IoT device is designed to perform very specific tasks; hence, its networking behavior is very stable and predictable, making it well suited to data analysis techniques. The first part of this thesis therefore focuses on leveraging recent advances in deep learning to develop network monitoring tools for the IoT. Two types of tools are explored: IoT device type recognition systems and IoT network Intrusion Detection Systems (NIDS). For IoT device type recognition, supervised machine learning algorithms are trained to classify network traffic and determine which IoT device the traffic belongs to. The IoT NIDS consists of a set of autoencoders, each trained for a different IoT device type. The autoencoders learn the legitimate networking behavior profile and detect any deviation from it. Experiments using network traffic data produced by a smart home show that the proposed models achieve high performance. Despite these promising results, training and testing machine-learning-based network monitoring systems requires a tremendous amount of IoT network traffic data, yet very few IoT network traffic datasets are publicly available, and physically operating thousands of real IoT devices would be very costly and would raise privacy concerns. In the second part of this thesis, we propose to leverage Generative Adversarial Networks (GAN) to generate bidirectional flows that look as if they were produced by a real IoT device. A bidirectional flow consists of the sequence of the sizes of individual packets along with a duration.
Hence, in addition to generating packet-level features, which are the sizes of individual packets, our generator implicitly learns to comply with flow-level characteristics, such as the total number of packets and bytes in a bidirectional flow or its total duration. Experimental results using data produced by a smart speaker show that our method can generate high-quality, realistic-looking synthetic bidirectional flows.
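The reconstruction-error principle behind the autoencoder-based NIDS described above can be sketched with a tiny linear autoencoder: traffic that matches the learned profile is reconstructed well, while deviating traffic yields a high error. Everything below (the one-unit bottleneck, initialization, data, and error comparison) is an illustrative assumption, not the thesis's actual architecture.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_autoencoder(data, dim, lr=0.05, epochs=2000):
    """Train a tiny linear autoencoder x -> w_out * (w_in . x) by batch
    gradient descent; it learns the dominant direction of the data."""
    w_in, w_out = [0.5] * dim, [0.4] * dim     # fixed toy initialization
    n = len(data)
    for _ in range(epochs):
        g_in, g_out = [0.0] * dim, [0.0] * dim
        for x in data:
            h = dot(w_in, x)                   # 1-unit bottleneck code
            err = [h * w_out[i] - x[i] for i in range(dim)]
            for i in range(dim):
                g_out[i] += 2 * err[i] * h / n
                g_in[i] += 2 * dot(err, w_out) * x[i] / n
        w_in = [w - lr * g for w, g in zip(w_in, g_in)]
        w_out = [w - lr * g for w, g in zip(w_out, g_out)]
    return w_in, w_out

def reconstruction_error(x, w_in, w_out):
    h = dot(w_in, x)
    return sum((h * w_out[i] - x[i]) ** 2 for i in range(len(x)))

# Legitimate traffic: flows lying along the device's usual profile (a line).
normal = [[-1.0, -1.0], [-0.5, -0.5], [0.5, 0.5], [1.0, 1.0]]
w_in, w_out = train_autoencoder(normal, dim=2)
ok_err = max(reconstruction_error(x, w_in, w_out) for x in normal)
anomaly_err = reconstruction_error([1.0, -1.0], w_in, w_out)  # off-profile flow
```

Flagging an intrusion then amounts to thresholding `reconstruction_error` on incoming flows, one trained model per device type.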
Medrouk, Indira Lisa. "Réseaux profonds pour la classification des opinions multilingue". Electronic Thesis or Diss., Paris 8, 2018. http://www.theses.fr/2018PA080081.
In the era of social networks, where everyone can claim to be a content producer, the growing interest in research and industry is an indisputable fact for the opinion mining domain. This thesis mainly addresses an inherent characteristic of the Web, reflecting its globalized and multilingual character. To address the multilingual opinion mining issue, the proposed model is inspired by the process of acquiring simultaneous languages with equal intensity among young children. The corpus-based input is raw, used without any pre-processing, translation, annotation or additional knowledge features. For the machine learning approach, we use two different deep neural networks. The evaluation of the proposed model was carried out on corpora composed of four different languages, namely French, English, Greek and Arabic, to emphasize the ability of a deep learning model to establish the sentiment polarity of reviews and perform topic classification in a multilingual environment. The various experiments, combining corpus size variations for bilingual and quadrilingual groupings of languages presented to our models without additional modules, have shown that, just as the development of bilingual competence in children is linked to the quality and quantity of their immersion in the linguistic context, the network learns better in a rich and varied environment. As part of the opinion classification problem, the second part of the thesis presents a comparative study of two deep network models: convolutional networks and recurrent networks. Our contribution consists in demonstrating their complementarity according to their combinations in a multilingual context.
De, La Bourdonnaye François. "Learning sensori-motor mappings using little knowledge : application to manipulation robotics". Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC037/document.
This thesis focuses on learning a complex manipulation robotics task using little knowledge. More precisely, the task consists in reaching an object with a serial arm, and the objective is to learn it without camera calibration parameters, forward kinematics, handcrafted features, or expert demonstrations. Deep reinforcement learning algorithms suit this objective well: reinforcement learning allows sensori-motor mappings to be learned while dispensing with dynamics, and deep learning dispenses with handcrafted features for the state space representation. However, it is difficult to specify the objectives of the learned task without requiring human supervision. Some solutions involve expert demonstrations or shaping rewards to guide robots towards their objective; the latter are generally computed using forward kinematics and handcrafted visual modules. Another class of solutions consists in decomposing the complex task. Learning from easy missions can be used, but this requires knowledge of a goal state. Decomposing the whole complex task into simpler sub-tasks can also be used (hierarchical learning), but does not necessarily imply a lack of human supervision. Alternative approaches that use several agents in parallel to increase the probability of success exist, but are costly. In our approach, we decompose the whole reaching task into three simpler sub-tasks, taking inspiration from human behavior: humans first look at an object before reaching for it. The first learned task is an object fixation task aimed at localizing the object in 3D space; it is learned using deep reinforcement learning and a weakly supervised reward function. The second task consists in jointly learning end-effector binocular fixations and a hand-eye coordination function, using a similar set-up, and is aimed at localizing the end-effector in 3D space.
The third task uses the two previously learned skills to learn to reach an object, with the same requirements as the first two tasks: it hardly requires any supervision. In addition, without using additional priors, an object reachability predictor is learned in parallel. The main contribution of this thesis is the learning of a complex robotic task with weak supervision.
Sun-Hosoya, Lisheng. "Meta-Learning as a Markov Decision Process". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS588/document.
Machine Learning (ML) has enjoyed huge successes in recent years, and an ever-growing number of real-world applications rely on it. However, designing promising algorithms for a specific problem still requires huge human effort. Automated Machine Learning (AutoML) aims at taking the human out of the loop and developing machines that generate or recommend good algorithms for a given ML task. AutoML is usually treated as an algorithm / hyper-parameter selection problem; existing approaches include Bayesian optimization, evolutionary algorithms and reinforcement learning. Among them, auto-sklearn, which incorporates meta-learning techniques in its search initialization, ranks consistently well in AutoML challenges. This observation oriented my research towards the meta-learning domain, and ultimately led me to develop a novel framework based on Markov Decision Processes (MDP) and reinforcement learning (RL). After a general introduction (Chapter 1), my thesis work starts with an in-depth analysis of the results of the AutoML challenge (Chapter 2). This analysis oriented my work towards meta-learning, leading me first to propose a formulation of AutoML as a recommendation problem, and ultimately to formulate a novel conceptualisation of the problem as an MDP (Chapter 3). In the MDP setting, the problem comes down to filling up, as quickly and efficiently as possible, a meta-learning matrix S, in which lines correspond to ML tasks and columns to ML algorithms; the matrix element S(i, j) is the performance of algorithm j applied to task i. Searching efficiently for the best values in S allows us to quickly identify the algorithms best suited to given tasks. In Chapter 4 the classical hyper-parameter optimization framework (HyperOpt) is reviewed. In Chapter 5 a first meta-learning approach is introduced, along the lines of our paper ActivMetaL, that combines active learning and collaborative filtering techniques to predict the missing values in S.
Our latest research applies RL to the MDP problem we defined, to learn an efficient policy for exploring S. We call this approach REVEAL and propose an analogy with a series of toy games to help visualize agents' strategies for revealing information progressively, e.g. masked areas of images to be classified, or ship positions in a battleship game. This line of research is developed in Chapter 6. The main results of my PhD project are: 1) HP / model selection: I explored the Freeze-Thaw method and optimized the algorithm to enter the first AutoML challenge, achieving 3rd place in the final round (Chapter 3). 2) ActivMetaL: I designed a new algorithm for active meta-learning (ActivMetaL) and compared it with other baseline methods on real-world and artificial data; this study demonstrated that ActivMetaL is generally able to discover the best algorithm faster than baseline methods. 3) REVEAL: I developed a new conceptualization of meta-learning as a Markov Decision Process and put it into the more general framework of REVEAL games. With a master's student intern, I developed agents that learn (with reinforcement learning) to predict the next best algorithm to try. To develop these agents, we used surrogate toy tasks from REVEAL games, and then applied our methods to AutoML problems. The work presented in this thesis is empirical in nature: several real-world meta-datasets were used, along with artificial and semi-artificial ones. The results indicate that RL is a viable approach to this problem, although much work remains to be done to make the algorithms scale to larger meta-learning problems.
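The meta-learning matrix S described above can be explored under a budget: a warm-start ranking derived from past tasks decides which entries of a new task's row to reveal first. The sketch below uses a simple greedy strategy with invented toy numbers; it is a minimal illustration of the matrix-filling idea, not the REVEAL agent itself.

```python
def reveal_best_algorithm(evaluate, ranking, budget):
    """Spend a limited evaluation budget filling one row of the meta-learning
    matrix S: try algorithms in the order given by a meta-learned ranking,
    and keep the best one found."""
    best_algo, best_score = None, float("-inf")
    for j in ranking[:budget]:
        score = evaluate(j)       # expensive: reveals entry S(new_task, j)
        if score > best_score:
            best_algo, best_score = j, score
    return best_algo, best_score

# Toy meta-matrix: rows are past tasks, columns are algorithms.
S = [
    [0.6, 0.9, 0.3, 0.5],
    [0.5, 0.8, 0.4, 0.4],
]
new_task_scores = [0.55, 0.85, 0.20, 0.50]   # hidden row, revealed on demand

# Warm-start ranking: average performance of each algorithm on past tasks.
avg = [sum(col) / len(S) for col in zip(*S)]
ranking = sorted(range(len(avg)), key=lambda j: -avg[j])
algo, score = reveal_best_algorithm(lambda j: new_task_scores[j], ranking, budget=2)
```

An RL agent in the REVEAL setting would replace the fixed ranking with a learned policy that adapts its next query to the scores already revealed.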
Hamis, Sébastien. "Compression de contenus visuels pour transmission mobile sur réseaux de très bas débit". Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAS020.
The field of visual content compression (image, video, 2D/3D graphics elements) has seen spectacular achievements for more than twenty years, with the emergence of numerous international standards such as JPEG and JPEG2000 for still image compression, or MPEG-1/2/4 for video and 3D graphics content coding. The advent of smartphones and their applications has also benefited from these advances, the image being today ubiquitous in a context of mobility. Nevertheless, image transmission requires reliable and available networks, since such visual data are inherently bandwidth-intensive. While developed countries benefit today from high-performance mobile networks (3G, 4G...), this is not the case in a number of regions of the world, particularly in emerging countries, where communications still rely on 2G SMS networks. Transmitting visual content in such a context becomes a highly ambitious challenge, requiring the elaboration of new compression algorithms for very low bitrates. The challenge is to ensure image transmission over a narrow bandwidth corresponding to a relatively small set (10 to 20) of SMS messages (140 bytes per SMS). To meet such constraints, multiple axes of development have been considered. After a state of the art of traditional image compression techniques, we oriented our research towards deep learning methods, aiming to apply post-processing to strongly compressed data in order to improve the quality of the decoded content. Our contributions are structured around the creation of a new compression scheme, including existing codecs and a panel of post-processing bricks aimed at enhancing highly compressed content. These bricks are dedicated deep neural networks that perform super-resolution and/or compression artifact reduction, specifically trained to meet the targeted objectives.
These operations are carried out on the decoder side and can be interpreted as image reconstruction algorithms operating on heavily compressed versions. This approach offers the advantage of being able to rely on existing codecs, which are particularly light and resource-efficient. In our work, we retained the BPG format, which represents the state of the art in the field, but other compression schemes can also be considered. Regarding the type of neural networks, we adopted Generative Adversarial Networks (GANs), which are particularly well suited to reconstruction from incomplete data. Specifically, the two architectures retained and adapted to our objectives are the SRGAN and ESRGAN networks. The impact of the various elements and parameters involved, such as the super-resolution factors and the loss functions, is analyzed in detail. A final contribution concerns the experimental evaluation performed. After showing the limitations of objective metrics, which fail to take into account the visual quality of the image, we put in place a subjective evaluation protocol. The results obtained in terms of MOS (Mean Opinion Score) fully demonstrate the relevance of the proposed reconstruction approaches. Finally, we open our work to different, more general use cases, in particular high-resolution image processing and video compression.
Debard, Quentin. "Automatic learning of next generation human-computer interactions". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI036.
Full text source
Artificial Intelligence (AI) and Human-Computer Interaction (HCI) are two research fields with relatively little joint work. HCI specialists usually design the way we interact with devices directly from observations and measurements of human feedback, manually optimizing the user interface to better fit users' expectations. This process is hard to optimize: ergonomics, intuitiveness and ease of use are key features of a User Interface (UI) that are too complex to be simply modelled from interaction data. This drastically restricts the possible uses of Machine Learning (ML) in this design process. Currently, ML in HCI is mostly applied to gesture recognition and automatic display, e.g. advertisement or item suggestion. It is also used to fine-tune an existing UI, but as of now it does not participate in designing new ways to interact with computers. Our main focus in this thesis is to use ML to develop new design strategies for overall better UIs. We want to use ML to build intelligent (that is, precise, intuitive and adaptive) user interfaces with minimal handcrafting. We propose a novel approach to UI design: instead of letting the user adapt to the interface, we want the interface and the user to adapt mutually to each other. The goal is to reduce human bias in protocol definition while building co-adaptive interfaces able to further fit individual preferences. In order to do so, we will put to use the different mechanisms available in ML to automatically learn behaviors, build representations and take decisions. We will experiment on touch interfaces, as these are widely used and provide easily interpretable problems. The first part of our work will focus on processing touch data and using supervised learning to build accurate classifiers of touch gestures. The second part will detail how Reinforcement Learning (RL) can be used to model and learn interaction protocols given user actions.
Lastly, we will combine these RL models with unsupervised learning to build a setup allowing for the design of new interaction protocols without the need for real user data.
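The idea of an interface that learns an interaction protocol from user feedback can be sketched with a toy bandit-style agent. This is not the thesis implementation: the gestures, commands, and simulated "user preference" below are all hypothetical, standing in for real touch data and real feedback.

```python
import random

# Toy sketch: a tabular agent learns which command each touch gesture
# should trigger, from a scalar reward simulating user satisfaction.
GESTURES = ["tap", "swipe", "pinch"]
COMMANDS = ["select", "scroll", "zoom"]
PREFERRED = {"tap": "select", "swipe": "scroll", "pinch": "zoom"}  # hidden user intent

def train(episodes=2000, alpha=0.5, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(g, c): 0.0 for g in GESTURES for c in COMMANDS}
    for _ in range(episodes):
        g = rng.choice(GESTURES)
        # Epsilon-greedy exploration over candidate commands.
        if rng.random() < eps:
            c = rng.choice(COMMANDS)
        else:
            c = max(COMMANDS, key=lambda a: q[(g, a)])
        reward = 1.0 if PREFERRED[g] == c else -1.0   # simulated feedback
        q[(g, c)] += alpha * (reward - q[(g, c)])     # one-step value update
    return q

q = train()
policy = {g: max(COMMANDS, key=lambda a: q[(g, a)]) for g in GESTURES}
```

The co-adaptive setting described in the abstract is of course richer (the protocol itself is learned, not just a fixed mapping), but the reward-driven loop is the common mechanism.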
Baccouche, Moez. "Apprentissage neuronal de caractéristiques spatio-temporelles pour la classification automatique de séquences vidéo". Phd thesis, INSA de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00932662.
Full text source
Pajot, Arthur. "Incorporating physical knowledge into deep neural network". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS290.
Full text source
A physical process is a sustained phenomenon marked by gradual changes through a series of states occurring in the physical world. Physicists and environmental scientists attempt to model these processes in a principled way through analytic descriptions of the scientist's prior knowledge of the underlying processes. Despite the undeniable success of Deep Learning, a fully data-driven approach is not yet ready to challenge the classical approach to modeling dynamical systems. We will try to demonstrate in this thesis that the knowledge and techniques accumulated for modeling dynamical processes in well-developed fields such as mathematics or physics can serve as a guideline for designing efficient learning systems and, conversely, that the ML paradigm can open new directions for modeling such complex phenomena. We describe three tasks relevant to the study and modeling of dynamical systems with Deep Learning: forecasting, hidden state discovery, and unsupervised signal recovery.
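The forecasting task mentioned above can be illustrated in miniature: learn a transition operator from observed trajectories of a known dynamical system, then roll it forward. This is a deliberately simple linear stand-in (least squares instead of a neural network), not the thesis's models; the damped-rotation dynamics are an assumed example.

```python
import numpy as np

# Learn a forecasting operator for x_{t+1} = A x_t from trajectory data.
rng = np.random.default_rng(0)
theta = 0.1
A_true = 0.99 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])  # damped rotation

# Simulate the "physical process" observations.
x = np.zeros((200, 2))
x[0] = [1.0, 0.0]
for t in range(199):
    x[t + 1] = A_true @ x[t]

# Data-driven estimate of the dynamics: solve X A^T ~= X_next by least squares.
X, X_next = x[:-1], x[1:]
A_hat = np.linalg.lstsq(X, X_next, rcond=None)[0].T

# Forecast 10 steps ahead with the learned operator.
pred = x[100]
for _ in range(10):
    pred = A_hat @ pred
```

Replacing the least-squares operator with a learned neural transition model, while constraining it with physical priors, is the direction the thesis explores.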
Dahmane, Khouloud. "Analyse d'images par méthode de Deep Learning appliquée au contexte routier en conditions météorologiques dégradées". Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAC020.
Full text source
Nowadays, vision systems are increasingly used in the road context. They ensure safety and facilitate mobility. These vision systems are generally affected by degraded weather conditions, like heavy fog or strong rain, phenomena that limit visibility and thus reduce image quality. In order to optimize the performance of vision systems, it is necessary to have a reliable detection system for these adverse weather conditions. There are meteorological sensors dedicated to physical measurement, but they are expensive. Since cameras are already installed on the road, they can simultaneously perform two functions: image acquisition for surveillance applications, and physical measurement of weather conditions in place of dedicated sensors. Following the great success of convolutional neural networks (CNNs) in classification and image recognition, we used a deep learning method to study the problem of meteorological classification. The objective of our study is first to develop a weather classifier that discriminates between "normal" conditions, fog and rain. In a second step, once the class is known, we seek to develop a model for measuring meteorological visibility. The use of CNNs requires training and test databases. For this, two databases were used: the "Cerema-AWP database" (https://ceremadlcfmds.wixsite.com/cerema-databases) and the "Cerema-AWH database", which has been acquired since 2017 at the Fageole site on the A75 highway. Each image in the two databases is labeled automatically thanks to meteorological data collected on site, characterizing various levels of precipitation for rain and fog. The Cerema-AWH database, which was set up as part of our work, contains five subsets: normal daytime conditions, heavy fog, light fog, heavy rain and light rain. Rainfall intensities range from 0 mm/h to 70 mm/h and fog visibilities range from 50 m to 1800 m.
Among the known neural networks that have demonstrated their performance in recognition and classification, we can cite LeNet, ResNet-152, Inception-v4 and DenseNet-121. We applied these networks in our adverse weather classification system. We begin by studying the use of convolutional neural networks, the nature of the input data, and the optimal hyperparameters needed to achieve the best results. An analysis of the different components of a neural network is carried out by constructing an instrumented neural network architecture. The conclusions drawn from this analysis show that deep neural networks must be used. This type of network is able to classify the five meteorological classes of the Cerema-AWH database with a classification score of 83%, and three meteorological classes with a score of 99%. Then, an analysis of the input and output data was made to study the impact of scene changes, of the input data and of the number of meteorological classes on the classification result. Finally, a database transfer method is developed: we study the portability of our adverse weather classification system from one site to another. A classification score of 63% is obtained when transferring between a public database and the Cerema-AWH database. After classification, the second step of our study is to measure the meteorological visibility of fog. For this, we use a neural network that outputs continuous values. Two fog variants were tested: light and heavy fog combined, and heavy (road) fog only. The result is evaluated using a correlation coefficient R² between the real values and the predicted values. We compare this coefficient with the correlation coefficient between the two sensors used to measure weather visibility on site. Among the results obtained, and more specifically for road fog, the correlation coefficient reaches 0.74, which is close to the physical sensors' value (0.76).
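The R² criterion used above to evaluate visibility regression can be sketched directly. The visibility values below are synthetic (drawn from the 50 m to 1800 m range the abstract reports), not the thesis data; only the metric itself is being illustrated.

```python
import numpy as np

# R² agreement between predicted and ground-truth visibility values,
# the evaluation criterion described in the abstract.
def r_squared(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Synthetic fog visibilities in metres (50 m to 1800 m, per the abstract)
# with an imperfect simulated predictor.
rng = np.random.default_rng(1)
vis_true = rng.uniform(50, 1800, size=100)
vis_pred = vis_true + rng.normal(0, 100, size=100)
score = r_squared(vis_true, vis_pred)
```

An R² of 1.0 means perfect prediction; the thesis's reported 0.74 for road fog is then directly comparable to the 0.76 agreement between the two physical sensors.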