Dissertations on the topic "Document automation"
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Consult the top 50 dissertations for your research on the topic "Document automation".
Next to every work in the bibliography there is an "Add to bibliography" option. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are present in the work's metadata.
Browse dissertations from a wide range of disciplines and compile your bibliography correctly.
Seira, Argyri. „Low rate automation in manufacturing and assembly - A framework based om improved methods towards higher automation level : A case study at AIRBUS HELICOPTERS“. Thesis, KTH, Industriell produktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-265567.
McCarty, George E. „Integrating XML and Rdf concepts to achieve automation within a tactical knowledge management environment /“. Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FMcCarty.pdf.
Zuluaga, Valencia Maria Alejandra. „Methods for automation of vascular lesions detection in computed tomography images“. Thesis, Lyon 1, 2011. http://www.theses.fr/2011LYO10010/document.
This thesis presents a framework for the detection and diagnosis of vascular lesions, with a special emphasis on coronary heart disease, which remains the leading cause of mortality worldwide. Typically, the problem of vascular lesion identification has been addressed by trying to model the abnormalities (lesions). The main drawback of this approach is that lesions are highly heterogeneous, which makes the detection of previously unseen abnormalities difficult. We chose not to model lesions directly, but to treat them as anomalies, seen as points of low probability density. We propose two classification frameworks based on support vector machines (SVM) for the density-level detection problem. The main advantage of these two methods is that the learning stage does not require labeled data representing lesions, which are always difficult to obtain. The first method is completely unsupervised, whereas the second one only requires a limited number of labels for normality. These anomaly detection algorithms require features under which anomalies appear as points of low probability density. For this purpose, we developed an intensity-based metric, denoted concentric rings, designed to capture the nearly symmetric intensity profiles of healthy vessels as well as discrepancies with respect to normal behavior. Moreover, we selected a large set of alternative candidate features to use as input to the classifiers. Experiments on synthetic data and cardiac CT data demonstrated that our metric performs well in the detection of anomalies when used with the selected classifiers. Combining other features with the concentric rings metric has the potential to improve classification performance. We defined an unsupervised feature selection scheme that allows the definition of an optimal subset of features, and compared it with existing supervised feature selection methods. These experiments showed that, in general, the combination of features improves classifier performance, and that the best results are achieved with the combination selected by our scheme, associated with the proposed anomaly detection algorithms. Finally, we propose to use image registration to compare the classification results at different cardiac phases. The objective here is to match the regions detected as anomalous in different time frames. In this way, rather than merely drawing the physician's attention to the anomaly detected as a potential lesion, we aim to aid in validating the diagnosis by automatically displaying the same suspected region reconstructed in different time frames.
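As a concrete illustration of the density-level detection idea described above, here is a minimal sketch of one-class SVM anomaly detection, assuming scikit-learn; the synthetic features and parameter values are invented stand-ins, not those of the thesis:

```python
# Minimal sketch of unsupervised anomaly detection with a one-class SVM:
# only "healthy" samples are used for training, and anomalies are flagged
# as points falling in low-density regions. Features are illustrative.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
healthy = rng.normal(loc=0.0, scale=1.0, size=(500, 8))  # e.g., ring-profile features of healthy sections
suspect = rng.normal(loc=4.0, scale=1.0, size=(10, 8))   # rare, off-distribution sections

# nu bounds the fraction of training points treated as outliers;
# no lesion labels are needed at training time.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(healthy)

print(detector.predict(suspect))  # -1 = anomaly (low probability density), +1 = normal
```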
Katz, Jerry A. „An Introduction of Office Automation and A Document Management System Within a Multiple Plant Manufacturing Organization“. NSUWorks, 1990. http://nsuworks.nova.edu/gscis_etd/626.
Roch, Eduard. „Automatizace dokumentů jako nástroj minimalizace rizik“. Master's thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2020. http://www.nusl.cz/ntk/nusl-414147.
Lovegrove, Will. „Advanced document analysis and automatic classification of PDF documents“. Thesis, University of Nottingham, 1996. http://eprints.nottingham.ac.uk/13967/.
Pokam, Meguia Raïssa. „Conception d'une interface avec réalité augmentée pour la conduite automobile autonome“. Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0029/document.
This doctoral thesis was conducted within the framework of the Localization and Augmented Reality (LAR) project, which focused on railway yards and autonomous vehicles. The thesis provides answers to three main questions about Human-Machine Interface design in autonomous vehicles: which information should be conveyed to the human agent? In which form? And when? The answers enable an appropriate calibration of the human agent's trust in the autonomous vehicle and improve the driver's experience by making automation "transparent". We focus especially on the lane-changing task performed entirely by the autonomous vehicle. The aim and objectives were achieved through a five-step methodology. Some general principles of transparency were first redefined based on the model of LYONS (2013). These principles were then operationalized by means of Cognitive Work Analysis. Graphical representations of useful or potentially useful information were defined during creative sessions, using Augmented Reality, which lies at the heart of the LAR project. This information was categorized according to the functions from which it results: information acquisition, information analysis, decision making and action execution. Five interfaces were designed, each presenting information from some of these functions; they therefore corresponded to transparency configurations of varying degree. The validity of the transparency principles was tested in an experiment on a driving simulator with a sample of 45 participants, in which indicators of cognitive activities and User Experience were measured. Data analysis revealed differences between the five interfaces. Indeed, the interface with information related to the "information acquisition" and "action execution" functions improves the cognitive activities of the human agent. Regarding User Experience, it is the interface with information related to all four functions that provides the best User Experience.
Seraphin-Thibon, Laurence. „Etude de l'automatisation des mouvements d'écriture chez l'enfant de 6 à 10 ans“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS040/document.
Written production is an automated motor activity for adults. Writing is smooth and fast because letter production relies on the prior activation of a procedural memory known as a "motor program" or "sensori-motor map". Our investigation focused on how motor programs develop during writing acquisition. We examined how writing evolves from stroke-by-stroke programming to letter-by-letter programming. In all our studies we recorded the children's writing movements with a digitizer. In the first experiment, 98 children aged 6 to 9 had to write letters with varying numbers of strokes. The results indicated that at ages 6-7, movement duration, dysfluency and trajectory length increased with the letter's number of strokes. The letters were produced by activating the first stroke, then the second stroke, and so on until the completion of the whole letter. The number of strokes affected the productions of the older children much less: they assembled the strokes into chunks, which gradually increased in size, until, at the end of the automation process, they could write with a letter-by-letter programming strategy. The analysis revealed that the first automatisms stabilize at age 8. However, some letters remained represented in chunks even among the older children, and specific types of strokes affected the stabilization of letter automation. We therefore carried out another experiment to examine the impact of the rotation strokes needed to produce curved lines (e.g., letter o) and of the pointing movements that position the pen after a lift (e.g., the dot on letter i). In this experiment, 108 children aged 6 to 10 wrote sequences of upper-case letters varying in pointing and rotation movements. The results indicated that the production of rotation movements required a speed trade-off to decrease the differences between maximum and minimum velocities, while pointing movements required a duration trade-off between the movements executed on the sheet and in the air. There seems to be a sort of tempo that modulates letter production, requiring compensatory strategies that are cognitively demanding. At the developmental level, the kinematic data suggest that most of the learning process takes place between ages 6 and 8, followed by a stabilization phase that marks the beginning of writing automation and evolves between ages 9 and 10. Our work thus revealed that as the child practices writing, the motor programs code increasingly large chunks of information. This quantitative increase in procedural memory is accompanied by qualitative information for certain types of strokes that require specific processing. The content of motor programs is therefore not limited to information about letter shape, stroke order and direction: they also code compensatory kinematic strategies for rotation and pointing movements. These motor programs are elaborated during the learning process at ages 6 to 7. At around age 8, with practice and growing cognitive, attentional and memory skills, they start to stabilize and become automated. At ages 9-10, writing is automated for most letters and becomes a linguistic communication tool. The implications of these results are directly applicable in schools for the improvement of pedagogical tools for teaching writing.
Duffau, Clément. „Justification Factory : de l'élicitation d'exigences de justification jusqu'à leur production en continu“. Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4094/document.
In many areas where human risks exist, such as medicine, nuclear energy or avionics, it is necessary to go through a certification stage to ensure the proper functioning of a system or product. Certification is based on normative documents that express the justification requirements to which the product and the development process must conform. A certification audit then consists of producing documentation certifying compliance with this regulatory framework. To cope with this need for justifications, ensuring compliance with the standards in force and the completeness of the justifications provided, one must be able to target the justification requirements to be claimed for a project and to produce justifications during its development. In this context, eliciting the justification requirements from the standards and producing the necessary and sufficient justifications are key issues for ensuring compliance with standards and avoiding over-justification. In this work we seek to structure the justification requirements and then help produce the associated justifications, while remaining attentive to the confidence that can be placed in them. To address these challenges, we defined a formal semantics for an existing model of justifications: Justification Diagrams. From this semantics, we defined a set of operations to control the life cycle of justifications, ensuring that the justifications satisfy the justification requirements. Through this semantics, we were also able to guide, and even automate in some cases, the production of justifications and the verification of conformance. These contributions were applied in the context of medical technologies for the company AXONIC, the industrial partner of this work. This made it possible to i) elicit the justification requirements of the medical standards and the company's internal practices, ii) automatically produce the justifications associated with the IEC 62304 standard for medical software, and iii) automate the verification and validation of the justifications as well as the production of documents that can be used during the audit.
Hussein, Ali Dina. „A social Internet of Things application architecture : applying semantic web technologies for achieving interoperability and automation between the cyber, physical and social worlds“. Thesis, Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0024/document.
The paradigm of the Social Internet of Things (SIoT) is being promoted in the literature to boost a new trend wherein the benefits of social network services are exhibited within the network of connected objects, i.e., the Internet of Things (IoT). The novel user-friendly interaction framework of the SIoT opens the door to enhancing the intelligence required to stimulate a shift in the IoT from a heterogeneous network of independently connected objects towards a manageable network of everything. In practice, achieving scalability within the large-scale, heterogeneous paradigm of the IoT, while maintaining user-friendly and intuitive services that bridge human-to-machine perceptions and encourage the technology's adoption, is a major challenge hindering the realization and deployment of IoT technologies and applications in people's daily lives. To handle these IoT challenges and to improve the adaptability of smart services to users' situational needs, this thesis proposes a novel SIoT-based application architecture in which Semantic Web technologies are envisaged as a means to develop automated, value-added services for the SIoT. Since interoperability and automation are essential requirements for seamlessly integrating such services into users' lives, ontologies are used to semantically describe Web services, with the aim of enabling the automatic invocation and composition of these services and supporting interactions across the cyber, physical and social worlds. On the other hand, handling the variety of contextual data in the SIoT for intelligent decision making is another big challenge, still in the very early stages of research. We therefore propose a cognitive reasoning approach aimed at achieving situational awareness (SA) in the SIoT. This reasoning approach is deployed within two application domains, where results show an improved level of service adaptability compared to the location-aware services previously proposed in the literature.
Brada, Jan. „Aplikace nástrojů řízení a automatizace administrativních procesů“. Master's thesis, Vysoké učení technické v Brně. Fakulta podnikatelská, 2008. http://www.nusl.cz/ntk/nusl-221799.
Darwish, Molham. „Architecture et déploiement de services d'aide à la personne“. Thesis, Lorient, 2016. http://www.theses.fr/2016LORIS411/document.
The ageing of the European population has encouraged the community to search for solutions to support this evolution. In this context, several issues related to expensive and limited healthcare services and health facility capacities need to be addressed. Thus, several projects, research efforts and industrial solutions have been proposed to tackle these issues. Most of them rely on the latest ICT developments to provide solutions that improve the well-being of the targeted ageing group and guarantee their independence in their own living spaces. The proposed technological solutions need to be guarded against potential faults which may lead to system failure and impact the users' needs and independence. In this work, we propose a home automation system representation, based on the user's needs, that provides continuous and viable solutions meeting the users' expectations and ensuring the availability of the system's services. To this end, we develop an integrated modeling framework for representing the reconfigurable home automation system together with a fault-tolerance approach (based on the definition of alternative scenarios of system service delivery). In the proposed workflow, we describe the system's structural elements (services and components) in the design modeling view, and we define model transformation rules that generate an analysis model and a behavior model. The analysis model supports the decision about selecting alternative elements to substitute for the faulty ones. Its definition is based on Fault Tree Analysis, adopting the probability of event failures to evaluate a given system status. The behavior model is in charge of simulating the execution of the system services, thus ensuring that the proposed scenarios lead to the delivery of the system services. Moreover, we define an expert-based feature measuring the importance of a system component within the service context. In this framework, we propose a new approach based on integrating this importance factor into the Fault Tree Analysis in order to study the criticality, in case of failure, of the component for service continuity. We finally propose an experimental validation framework, based on several validation objectives, to evaluate the work proposed in this research.
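For readers unfamiliar with the fault-tree arithmetic the abstract relies on, the following minimal sketch computes a top-event probability from basic-event probabilities and approximates a Birnbaum-style importance factor; the gate structure and the numbers are illustrative assumptions, not taken from the thesis:

```python
# Fault-tree basics: top-event probability from AND/OR gates over
# independent basic events, plus an importance factor for one component.

def p_and(probs):            # AND gate: all inputs must fail
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(probs):             # OR gate: at least one input fails
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

def service_failure(p_sensor, p_network, p_actuator):
    # Hypothetical tree: the service fails if the actuator fails,
    # or if both the sensor and the network link fail together.
    return p_or([p_actuator, p_and([p_sensor, p_network])])

base = service_failure(0.01, 0.05, 0.002)
# Birnbaum importance of the actuator: dP(top)/dP(actuator),
# approximated here by a finite difference.
eps = 1e-6
importance = (service_failure(0.01, 0.05, 0.002 + eps) - base) / eps
print(f"P(service failure) = {base:.6f}, actuator importance = {importance:.4f}")
```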
Salmi, Anas. „Aide à la Décision pour l’Optimisation du Niveau d’Automatisation lors de la Conception des Systèmes d’Assemblage Industriels“. Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAI090/document.
This work was performed in the context of the PhD dissertation of Anas Salmi at Grenoble INP – School of Industrial Engineering. The thesis was supervised by Dr. Eric Blanco (Grenoble INP – G-SCOP laboratory) and co-supervised by Dr. Pierre David (Grenoble INP – G-SCOP laboratory) and Pr. Joshua Summers (Clemson University – CEDAR laboratory). The work aims at defining a procedure and tool to help assembly manufacturers, particularly deciders, managers and systems designers, in the decision about automation for their assembly process design. The purpose is to orient them towards the optimal Level of Automation (LoA) of the process from the early conceptual design phase, providing the most appropriate and most profitable solution while considering the production requirements, the product design features and characteristics, the assembly sequence, and the manufacturer's exigencies, preferences and prior decision criteria from different points of view such as quality level, ergonomics, reliability, best practices and historical data. Different manufacturer constraints also have to be taken into account in the decision, such as social, financial and investment potentials, as well as location and labor rate. A state of the art of the topic was compiled and showed that the literature on LoA deciding is not abundant; the need to support LoA decisions, and the delicacy of such a process, were also recognized by several assembly manufacturers and researchers. A first main contribution consists in a multi-criteria LoA decision methodology involving several identified decision criteria to be considered in the decision process. The approach generated a need for an adequate modelling language. A new graphic Assembly Sequences Modeling Language (ASML) was therefore defined, allowing conceptual assembly processes to be modelled from the early phase as assembly operations with different architectural possibilities. A standard model can then be defined, introducing an intuitive and generic way to specify systems with various automation-level alternatives. Rules and time-standard databases were also developed, allowing the time of ASML-modelled assembly systems to be estimated based on standardized motions, corresponding time standards, and process architectures. To ease the generation of alternatives and improve standardization, a high-level vocabulary of 20 standardized assembly tasks, associated with the modelling language, was defined. These developments allow quick modelling and time estimation when automatically linked to the motion vocabulary. As the economic criterion represents the major concern of every manufacturer in such heavy investments, an early-phase cost model was developed after an exhaustive review of the field. The cost model, combined with the previous developments, allows the cost of assembly system alternatives to be predicted with consideration of the selected automation options. To computerize the automation decision approach and the exhaustive generation of assembly system alternatives in search of the optimal configuration, a mathematical integer formulation was developed and validated. The model is implemented in CPLEX OPL and converges to the optimal configuration with consideration of the different constraints and the manufacturer's preliminary preferences entered as input matrices. This work includes theoretical and industrial validations.
It opens multiple perspectives in the field of assembly system design, particularly the rationalization of integrated product and process development through the automatic generation of the assembly system directly from the product's CAD design tool. The generation and standardization of work instructions also represent promising directions.
El, Mernissi Karim. „Une étude de la génération d'explication dans un système à base de règles“. Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066332/document.
The concept of a "Business Rule Management System" (BRMS) was introduced to facilitate the design, management and execution of company-specific business policies. Based on a symbolic approach, the main idea behind these tools is to enable business users to manage business rule changes in the system without requiring programming skills, by providing them with tools that let them formulate their business policies in near-natural language and automate their processing. Nowadays, with the expansion of intelligent systems, we have to cope with increasingly complex decision logic and large volumes of data, and it is not straightforward to identify the causes leading to a decision. There is a growing need to justify and optimize automated decisions in a short time frame, which motivates the integration of advanced explanatory components into these systems. Thus, the main challenge of this research is to provide an industrializable approach for explaining the decision-making processes of business rule applications and, more broadly, of rule-based systems. This approach should provide the information necessary for a general understanding of the decision, serve as a justification for internal and external entities, and enable the improvement of existing rule engines. To this end, the focus is on the generation of the explanations themselves as well as on the manner and form in which they are delivered.
McCarty, George E. Jr. „Integrating XML and RDF concepts to achieve automation within a tactical knowledge management environment“. Thesis, Monterey, California. Naval Postgraduate School, 2004. http://hdl.handle.net/10945/1648.
Since the advent of naval warfare, tactical Knowledge Management (KM) has been critical to the success of the On-Scene Commander. Today's tactical knowledge manager typically operates in a high-stress environment with a multitude of knowledge sources, including detailed sensor deployment plans, rules-of-engagement contingencies, and weapon delivery assignments. However, the WarFighter has placed a heavy reliance on delivering this data with traditional messaging processes while focusing on information organization vice knowledge management. This information-oriented paradigm results in continued data overload due to the manual intervention of human resources. Focusing on the data-archiving aspect of information management overlooks the advantages of computational processing while delaying the empowerment of the processor as an automated decision-making tool. Resource Description Framework (RDF) and XML provide the potential for increased machine reasoning within a KM design, allowing the WarFighter to migrate from dependency on manual information systems to a more computationally intensive knowledge management environment. However, the unique environment of a tactical platform requires innovative solutions to automate the existing naval message architecture while improving the knowledge management process. This thesis captures the key aspects of building a prototype Knowledge Management Model and provides an implementation example for evaluation. The model developed for this analysis was instantiated to evaluate the use of RDF and XML technologies in the knowledge management domain. The goals for the prototype included: 1. processing the required technical links in RDF/XML to feed the KM model from multiple information sources; 2. experimenting with the visualization of knowledge management processing vice traditional information resource display techniques. The results from working with the prototype KM model demonstrated the flexibility of processing all information data in an XML context. Furthermore, the RDF attribute format provided a convenient structure for automated decision making based on multiple information sources. Additional research utilizing RDF/XML technologies will eventually enable the WarFighter to effectively make decisions in a knowledge management environment.
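To make the RDF/XML machinery concrete, here is a small sketch using the rdflib Python library; the tactical vocabulary below is hypothetical and only illustrates how assertions become machine-queryable triples:

```python
# Sketch: representing tactical facts as RDF triples and querying them.
# The namespace and terms are invented for illustration.
from rdflib import Graph, Namespace, Literal

TAC = Namespace("http://example.org/tactical#")  # hypothetical vocabulary
g = Graph()
g.add((TAC.Sensor42, TAC.reports, TAC.ContactAlpha))
g.add((TAC.ContactAlpha, TAC.bearing, Literal(270)))

print(g.serialize(format="xml"))                 # RDF/XML, as exchanged between nodes

# Automated decision making then reduces to graph queries:
for contact in g.subjects(TAC.bearing, None):
    print("contact with a known bearing:", contact)
```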
Oudji, Salma. „Analyse de la robustesse et des améliorations potentielles du protocole RadioFréquences Sub-GHz KNX utilisé pour l’IoT domotique“. Thesis, Limoges, 2016. http://www.theses.fr/2016LIMO0121/document.
This thesis addresses the performance of the KNX-RF protocol used for home automation applications, in terms of radio-frequency robustness in a multi-protocol environment that is potentially subject to interference. The aim of this work is to assess the interference problems encountered by KNX-RF using simulation models that can increase its RF reliability. A first model was developed in MATLAB/Simulink and made it possible to investigate the performance and limitations of the protocol at its physical layer in an interference scenario occurring inside a multi-protocol home and building automation gateway. These simulations were followed by experimental field tests in an indoor environment (a house) to verify the results. A second model was developed to evaluate the MAC-layer mechanisms of KNX-RF with the discrete-event simulator OMNeT++/MiXiM. This model includes all the channel-access and frequency-agility mechanisms specified by the KNX-RF standard. A frame-collision scenario was simulated, and several improvement proposals are discussed in this manuscript. The developed models can be used to analyze and predict, at an early design phase, the behavior of KNX-RF in a radio-constrained environment.
Dainat, Jacques. „Étude du processus de perte de gènes et de pseudogénisation. Intégration et informatisation des concepts de l’évolution biologique. Application à la lignée humaine depuis l'origine des Eucaryotes“. Thesis, Aix-Marseille, 2012. http://www.theses.fr/2012AIXM4742/document.
Biology has undergone an extraordinary revolution with the sequencing of numerous whole genomes. Analyzing the amount of information now available requires the creation and use of automated tools, and the interpretation of biological data becomes meaningful in the light of evolution. Evolutionary studies are therefore necessary to make sense of biological data. In this context, the laboratory develops tools to study the evolution of genomes (and proteomes) through all the mutations they have undergone. This thesis project focuses specifically on events of unitary gene loss. Such events may reveal losses of function that are very instructive for understanding the evolution of species. First, I developed the GLADX tool, which mimics human expertise to automatically and accurately investigate unitary gene loss events. These investigations are based on the creation and interpretation of phylogenetic data, BLAST results, protein predictions, etc., in an automated environment. Second, I developed a strategy using GLADX to study, at large scale, unitary gene losses during the evolution of the human proteome. As a first step, the strategy analyzes orthologous groups produced by a clustering tool from the complete proteomes of numerous species. This analysis, used as a filter, detected 6237 putative losses in the human lineage. The study of these unitary gene loss cases was then deepened with GLADX and highlighted many problems with the quality of the data available in databases.
Kuijpers, Nicola. „Système autonome de sécurité lors de la préparation d'un repas pour les personnes cognitivement déficientes dans un habitat intelligent pour la santé“. Thesis, Lorient, 2017. http://www.theses.fr/2017LORIS436/document.
In developed countries such as Canada or France, the population is ageing and the number of people with disabilities is increasing. These disabilities have an impact on their activities of daily living. Depending on the severity of the disability and the independence of the person, placement in a specialized institution may be considered. Such institutions often represent huge financial costs for the individuals as well as for society. In order to reduce these costs, smart homes are an alternative solution: through a set of technologies, they make it possible for people to compensate for their disabilities and increase their independence. Preparing a meal is a complex activity that can present various risks for these people. They rarely live alone, so it must be taken into account that a varied public may use the system. Homes are usually already equipped with appliances, so the system must adapt itself to these devices. This work aims at implementing a prototype ensuring the safety of people with Alzheimer's disease during meal preparation, as well as that of their caregivers (family or professional). The prototype must adapt itself to the users' profiles, its environment and the appliances on which it is deployed. To do this, the system, based on a multi-agent architecture, applies safety rules that are customizable through the users' medical profiles. This work was carried out in two laboratories, each with distinct kitchen appliances in their smart home. The system was tested in both environments, and its adaptation to different users and to several safety rules was evaluated through use cases. The results of these experiments showed that the prototype meets the objectives.
Beeman, Jai Chowdhry. „Le rôle des gaz à effet de serre dans les variations climatiques passées : une approche basée sur des chronologies précises des forages polaires profonds“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAU023/document.
Deep polar ice cores contain records of both past climate and trapped air that reflects past atmospheric compositions, notably of greenhouse gases. This record allows us to investigate the role of greenhouse gases in climate variations over eight glacial-interglacial cycles. The ice core record, like all paleoclimate records, contains uncertainties associated both with the relationships between proxies and climate variables, and with the chronologies of the records contained in the ice and trapped air bubbles. In this thesis, we develop a framework, based on Bayesian inverse modeling and the evaluation of complex probability densities, to accurately treat uncertainty in the ice core paleoclimate record. Using this framework, we develop two studies, the first on Antarctic temperature and CO2 during the last deglaciation, and the second developing a Bayesian synchronization method for ice cores. In the first study, we use inverse modeling to identify the probabilities of piecewise-linear fits to CO2 and to a stack of Antarctic temperature records from five ice cores, along with the individual temperature records from each core, over the last deglacial warming, known as Termination 1. Using the nodes, or change points, in the piecewise-linear fits accepted during stochastic sampling of the posterior probability density, we discuss the timings of millennial-scale changes in trend in the series, and calculate the phasings between coherent changes. We find that the phasing between Antarctic temperature and CO2 likely varied, though the response times remain within a range of about 500 years from synchrony, both between events during the deglaciation and across the individual ice core records. This result indicates both regional-scale complexity and modulations or variations in the mechanisms linking Antarctic temperature and CO2 across the deglaciation. In the second study, we develop a Bayesian method to synchronize ice cores using corresponding time series in the IceChrono inverse chronological model. Tests show that this method can accurately synchronize CH4 series, and that it can incorporate external chronological observations and prior information about the glaciological characteristics at the coring site. The method is continuous and objective, bringing a new degree of accuracy and precision to the use of synchronization in ice core chronologies.
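As a toy illustration of locating a change point with a piecewise-linear fit: the thesis samples full posterior densities over many nodes, whereas this sketch merely grid-searches a single breakpoint by least squares on synthetic data; all values are invented:

```python
# Brute-force single change-point detection: try every breakpoint,
# fit a line on each side, keep the split with the smallest total error.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)
y = np.where(t < 4, 0.2 * t, 0.8 + 1.1 * (t - 4)) + rng.normal(0, 0.1, t.size)

def sse(tt, yy):
    slope, intercept = np.polyfit(tt, yy, 1)   # least-squares line
    return np.sum((yy - (slope * tt + intercept)) ** 2)

scores = [(sse(t[:k], y[:k]) + sse(t[k:], y[k:]), k)
          for k in range(10, t.size - 10)]     # keep segments non-trivial
best_k = min(scores)[1]
print("estimated change point near t =", t[best_k])  # should be close to 4
```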
Quereilhac, Alina. „Une approche générique pour l'automatisation des expériences sur les réseaux informatiques“. Thesis, Nice, 2015. http://www.theses.fr/2015NICE4036/document.
This thesis proposes a generic approach to automating network experiments for scenarios involving any networking technology on any type of network evaluation platform. The proposed approach is based on abstracting the experiment life cycle of the evaluation platforms into generic steps, from which a generic experiment model and experimentation primitives are derived. A generic experimentation architecture is proposed, composed of an experiment model, a programmable experiment interface and an orchestration algorithm that can be adapted to network simulators, emulators and testbeds alike. The feasibility of the approach is demonstrated through the implementation of a framework capable of automating experiments using any combination of these platforms. Three main aspects of the framework are evaluated: its extensibility to support any type of platform, its efficiency in orchestrating experiments, and its flexibility to support diverse use cases, including education, platform management and experimentation with multiple platforms. The results show that the proposed approach can be used to efficiently automate experimentation on diverse platforms for a wide range of scenarios.
Hyafil, Jean-Éric. „Revenu universel : pertinence pour accompagner les métamorphoses du travail, rôle dans la politique fiscale et macroéconomique, modalités de mise en oeuvre et effets redistributifs“. Thesis, Paris 1, 2017. http://www.theses.fr/2017PA01E035/document.
This thesis focuses on proposals for a basic income in lieu of means-tested cash transfers in the French welfare system. The first part asks whether job automation and the call for unalienating work can justify establishing a basic income. The second part presents the benefits of a basic income for fiscal and macroeconomic policy, notably to compensate for the anti-redistributive consequences of consumption or ecological taxes, and to reconcile wage competitiveness with demand-side policies. The third part presents the characteristics of fiscal reforms that include a basic income and examines the specific case of the French socio-fiscal system: the consequences of removing fiscal expenses from the income tax, the individualization of the social and fiscal system, the replacement of tax expenditures, the consequences for the beneficiaries of means-tested transfers, tax deduction at source, etc. In the fourth part, we formulate a proposal for a fiscal reform that introduces a basic income, and we simulate its redistributive consequences on a sample of 821,815 individuals representative of the population of France. We also investigate to what extent a basic income could replace subsidies on low-paid jobs. In the last part, we present key elements of the more sociological debate on the impact of a basic income on work incentives and social exclusion.
Gastebois, Jérémy. „Contribution à la commande temps réel des robots marcheurs. Application aux stratégies d'évitement des chutes“. Thesis, Poitiers, 2017. http://www.theses.fr/2017POIT2315/document.
Large walking robots are complex multi-joint mechanical systems which crystallize the human will to confer human capabilities on artefacts, one of these capabilities being bipedal locomotion and, more especially, keeping balance against external disturbances. This thesis proposes a balance stabilizer under operating conditions, demonstrated on the locomotor system BIP 2000. This anthropomorphic robot has fifteen electrically actuated degrees of freedom and an industrial controller. New software was developed with an object-oriented programming approach in order to provide the modularity required by the natural human symmetry that the robot emulates. This consideration led to the development of a mathematical tool that computes the model of any serial robot composed of multiple sub-robots whose models are already known. The implemented software also enables the robot to run dynamic walking trajectories generated offline and to test the balance stabilizer. In this thesis we explore the feasibility of controlling the center of gravity of a multibody robotic system with electrostatic fields acting on its virtual counterpart in order to guarantee its balance. Experimental results confirm the potential of the proposed approach.
Barbosa, Haniel. „Nouvelles techniques pour l'instanciation et la production des preuves dans SMT“. Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0091/document.
In many formal-methods applications it is common to rely on SMT solvers to automatically discharge conditions that need to be checked, and to provide certificates of their results. In this thesis we aim both to improve their efficiency and to increase their reliability. Our first contribution is a uniform framework for reasoning with quantified formulas in SMT solvers, in which various instantiation techniques are generally employed. We show that the major instantiation techniques can all be cast in this unifying framework. Its basis is the problem of E-ground (dis)unification, a variation of the classic rigid E-unification problem. We introduce a decision procedure to solve this problem in practice: Congruence Closure with Free Variables (CCFV). We measure the impact of optimizations and instantiation techniques based on CCFV in the SMT solvers veriT and CVC4, showing that our implementations improve on state-of-the-art approaches in several benchmark libraries stemming from real-world applications. Our second contribution is a framework for processing formulas while producing detailed proofs. The main components of our proof-producing framework are a generic contextual recursion algorithm and an extensible set of inference rules. With suitable data structures, proof generation creates only a linear-time overhead, and proofs can be checked in linear time. We also implemented this approach in veriT, which allowed us to dramatically simplify the code base while increasing the number of problems for which detailed proofs can be produced.
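To give a flavor of the machinery behind congruence closure, here is a minimal union-find over ground terms, the core data structure that CCFV extends to terms with free variables; this is a simplified sketch, not the veriT or CVC4 implementation, and the congruence rule itself (deriving f(a) = f(b) from a = b) is only asserted manually in the comments:

```python
# Union-find with path halving: the backbone of congruence closure,
# which maintains equivalence classes of terms under asserted equalities.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving keeps trees shallow
        x = parent[x]
    return x

def union(x, y):                        # assert the equality x = y
    parent[find(x)] = find(y)

union("a", "b")
union("f(a)", "f(b)")   # in full congruence closure this equality is *derived*
                        # from a = b by the congruence rule; here we assert it
print(find("f(a)") == find("f(b)"))     # True: both terms share one class
```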
Gouraud, Jonas. „Mind wandering dynamic in automated environments and its influence on out-of-the-loop situations“. Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30269/document.
Higher levels of automation are progressively being integrated into critical environments to satisfy the increasing demand for safer systems. This philosophy moves operators to a supervisory role, creating so-called out-of-the-loop (OOTL) situations. Unfortunately, OOTL situations also create a new kind of human-machine interaction issue, called the OOTL performance problem. Its dramatic consequences stress the need to identify which mechanisms influence its appearance. The emergence of thoughts unrelated to the here and now, labeled mind wandering (MW), could affect operators in OOTL situations through the perceptual decoupling it induces. This thesis investigates MW dynamics in OOTL situations and their influence on operators. We first reviewed the evidence in the literature underlining a link between the OOTL performance problem and MW. We completed these theoretical insights by reporting pilots' tendency (collected with a questionnaire) to encounter more problems with autopilots when experiencing more task-unrelated MW. We then conducted three experiments in OOTL conditions using an obstacle-avoidance task. With a non-expert population and sessions longer than 45 minutes, we observed a significant increase of MW in OOTL situations compared to manual conditions, independently of system reliability. MW episodes were also accompanied by a perceptual decoupling from the task induced by task-unrelated MW. This decoupling was visible in reports of mental demand as well as in oculometric (pupil size, blinks) and encephalographic (N1 component, alpha activity) signals. Overall, our results demonstrate the possibility of using physiological markers of MW in complex OOTL environments. We discuss new perspectives on the use of MW markers to characterize the OOTL performance problem. Instead of blindly suppressing MW episodes, which can have benefits for operators, future research should focus on designing systems able to cope with MW and on identifying the information needed to facilitate re-entry into the control loop when needed.
Sèdes, Florence. „Contribution au developpement des systemes bureautiques integres : gestion de donnees, repertoires, formulaires, documents“. Toulouse 3, 1987. http://www.theses.fr/1987TOU30134.
McElroy, Jonathan David. „Automatic Document Classification in Small Environments“. DigitalCommons@CalPoly, 2012. https://digitalcommons.calpoly.edu/theses/682.
Hamoui, Mohamad Fady. „Un système multi-agents à base de composants pour l’adaptation autonomique au contexte – Application à la domotique“. Thesis, Montpellier 2, 2010. http://www.theses.fr/2010MON20088/document.
Home automation environments are ubiquitous environments where domestic devices, scattered throughout a home, provide services that can be used remotely over a network. Home automation systems enable users to control the devices according to their needs. Ideally, these systems orchestrate the execution of the services provided by the devices to deliver complex services. Moreover, they must adapt to the variety of environments in terms of devices and user needs. They must also be able to adapt dynamically, if possible autonomously, to changes in their execution context (appearance or disappearance of a device, changing needs). In this thesis, we provide an answer to this problem with SAASHA, a multi-agent home automation system based on components. The combination of these two paradigms enables adaptation to be managed on three levels: presentation (user interface), organization (system architecture) and behavior (internal architecture of agents). The agents perceive their context and its changes. Users are offered a dynamic view of the context, allowing them to define custom scenarios as rules. The agents divide the roles among themselves to realize the scenarios. They dynamically modify their internal architecture through the generation, deployment and assembly of components in order to adopt new device-control behaviors and scenarios. The agents collaborate to execute the scenarios. In case of a change, these three levels of adaptation are updated dynamically and autonomously to maintain service continuity. A SAASHA prototype, based on the UPnP and OSGi industry standards, has been developed to assess the feasibility of our proposal.
Macher, Hélène. „Du nuage de points à la maquette numérique de bâtiment : reconstruction 3D semi-automatique de bâtiments existants“. Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAD006/document.
The creation of an as-built BIM requires the acquisition of the as-is conditions of existing buildings. Terrestrial laser scanning (TLS) is widely used to achieve this goal, since laser scanners can collect information about object geometry in the form of point clouds. They provide a large amount of accurate data very quickly and with a high level of detail. Unfortunately, the scan-to-BIM process currently remains largely manual because of the huge amount of data and because of processes that are difficult to automate; it is time-consuming and error-prone. A key challenge today is thus to automate the process leading to the 3D reconstruction of existing buildings from point clouds. The aim of this thesis is to develop a processing chain that extracts the maximum amount of information from a building point cloud in order to integrate the result into BIM software.
Leon, Diego, and Viktor Meyer. „Efficiency evaluation of digitalization“. Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-258134.
The concept of digitalization has become increasingly widespread owing to the enormous computing power of today's digital machines. Swedish healthcare has focused on digitalization in the form of e-health over the past decade, and the number of digital services is growing rapidly. By using modern technology to create smarter and more efficient processes, it is possible to improve quality for patients and users. There is a need for more reliable knowledge about digitalization and its potential for efficiency. This project aims to investigate that efficiency; its purpose is to contribute directly with more data on the subject of digitalization. The project is based on a qualitative research method and a case study in which efficiency tests are carried out to compare manual with digitalized work processes. The research methodology consists of a literature study, the construction of a digitalized process, and two tests, each of which evaluates both a manual and a digital process. The processes involve document generation and communication within healthcare. The test results show that digitalization achieves a total efficiency increase of 333% when all results are aggregated. Overall, the results make it evident that digitalization leads to increased efficiency, although the efficiency achieved varies from process to process: in the general case increased efficiency is observed, but occasionally also decreased efficiency.
Christophe, François. „Semantics and Knowledge Engineering for Requirements and Synthesis in Conceptual Design: Towards the Automation of Requirements Clarification and the Synthesis of Conceptual Design Solutions“. Phd thesis, Ecole centrale de nantes - ECN, 2012. http://tel.archives-ouvertes.fr/tel-00977676.
Бушуева, К. С., and K. S. Bushueva. „Внедрение машинного обучения в системы электронного документооборота : магистерская диссертация“. Master's thesis, б. и, 2020. http://hdl.handle.net/10995/93436.
The relevance of the chosen topic lies in the fact that introducing machine learning into electronic document management systems is a new and powerful tool for simplifying work with documents. Many organizations have already moved from paper to electronic document flow, so it is worth considering how the introduction of machine learning into an electronic document management system can increase the productivity of each employee and of the enterprise as a whole. The aim of the research is to automate the document preparation process using machine learning algorithms. The materials of this work, summarized from the results of the analysis, can be applied in practice at an enterprise to increase the efficiency of all departments and, as a result, to obtain economic benefits from introducing machine learning into the electronic document management system.
Désage, Simon-Frédéric. „Contraintes et opportunités pour l'automatisation de l'inspection visuelle au regard du processus humain“. Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAA028/document.
This research aims to contribute to the automation of visual inspection in the quality control of metal parts with complex geometry. Many optical techniques for scanning, photorealistic rendering, image or data classification, and pattern recognition are already highly developed and applied in their particular areas, but they are rarely, and only in special cases, combined into a complete chain from appearance scanning to the effective, perceptual recognition of objects and aesthetic anomalies. This work benefited from the advances of previous theses on the formalization of quality control, as well as from an agile surface-appearance scanning system that highlights the diversity of aesthetic surface anomalies. The major contribution thus lies in adapting image-processing methods to the formal structure of quality control, to the rich appearance data format, and to classification methods, in order to achieve recognition comparable to that of the human controller. In this sense, the thesis deciphers the different methodologies related to quality control, the human controller's processes, surface appearance defects, and the management and processing of visual information, and combines all these constraints into a system that partially substitutes for the human controller. The aim of the thesis is to identify and reduce sources of variability in order to obtain better quality control, notably through the intelligent and structured automation of visual inspection. Using a selected computer-vision device, the proposed solution analyzes visual texture, regarded as a global signature of visual appearance carrying more information than a single image containing image textures. The analysis is performed with pattern-recognition and machine-learning mechanisms in order to develop automatic detection and evaluation of appearance defects.
Moens, Marie-Francine. „Automatic indexing and abstracting of document texts /“. Boston, Mass. [u.a.] : Kluwer Academic Publ, 2000. http://www.loc.gov/catdir/enhancements/fy0820/00020394-d.html.
Blein, Florent. „Automatic Document Classification Applied to Swedish News“. Thesis, Linköping University, Department of Computer and Information Science, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-3065.
The first part of this paper briefly presents the ELIN[1] system, an electronic newspaper project. ELIN is a framework that stores news and displays them to the end user. The news items are formatted using the xml[2] format. The project partner Corren[3] provided ELIN with xml articles; however, the format used was not the same. My first task was to develop software that converts the news from one xml format (Corren) to another (ELIN).
The second and main part addresses the problem of automatic document classification and tries to find a solution for a specific issue. The goal is to automatically classify news articles from a Swedish newspaper company (Corren) into the IPTC[4] news categories.
This work has been carried out by implementing several classification algorithms, testing them and comparing their accuracy with existing software. The training and test documents were 3 weeks of the Corren newspaper that had to be classified into 2 categories.
The last tests were run with only one algorithm (Naïve Bayes) over a larger amount of data (7, then 10 weeks) and more categories (12) to simulate a more realistic environment.
The results show that the Naïve Bayes algorithm, although the oldest, was the most accurate in this particular case. An issue raised by the results is that feature selection improves speed but may, in rare cases, reduce accuracy by removing too many features.
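For readers who want to reproduce the general setup, a minimal Naïve Bayes text-classification sketch using scikit-learn is shown below; the tiny corpus and the two categories are invented stand-ins for the Corren articles and the IPTC categories:

```python
# Bag-of-words Naive Bayes classifier: count word occurrences per document,
# then apply the multinomial Naive Bayes model over those counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = ["match ended in a draw", "parliament passed the budget",
               "striker scored twice", "minister resigned today"]
train_labels = ["sport", "politics", "sport", "politics"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)

print(model.predict(["the budget vote was close"]))  # -> ['politics']
```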
Ou, Shiyan, Christopher S. G. Khoo, and Dion H. Goh. „Automatic multi-document summarization for digital libraries“. School of Communication & Information, Nanyang Technological University, 2006. http://hdl.handle.net/10150/106042.
Greening, Christopher. „Automatic writer identification for forensic document analysis“. Thesis, University of Essex, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.520166.
Wilson, Irene Meredith 1976. „An empirical study of automatic document extraction“. Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80137.
Der volle Inhalt der QuelleIncludes bibliographical references (leaves 104-105).
by Irene M. Wilson.
S.B. and M.Eng.
Somon, Bertille. „Corrélats neuro-fonctionnels du phénomène de sortie de boucle : impacts sur le monitoring des performances“. Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAS042/document.
The ongoing technological changes in aeronautics have profoundly altered the interactions between humans and machines. Systems are more and more complex, automated and opaque. Several tragedies have reminded us that the supervision of these systems by human operators remains a challenge. In particular, there is evidence that automation has driven operators away from the control loop of the system, creating an out-of-the-loop (OOTL) phenomenon. This phenomenon is characterized by decreased situation awareness and vigilance, as well as complacency and over-reliance on automated systems. These difficulties have been shown to degrade operators' performance. The OOTL phenomenon is thus a major issue for improving human-machine interactions. Even though it has been studied for several decades, it is still difficult to characterize, and even more so to predict. The aim of this thesis is to define how theories from cognitive neuroscience, such as performance monitoring, can be used to better characterize the OOTL phenomenon and the operator's state, particularly through physiological measures. Consequently, we used electroencephalography (EEG) to try to identify markers and/or precursors of the supervision activity during system monitoring. As a first step, we evaluated the error-detection or performance-monitoring activity through standard laboratory tasks with varying levels of difficulty. We performed two EEG studies showing that: (i) performance-monitoring activity emerges both when detecting our own errors and when supervising another agent, be it a human or an automated system; and (ii) performance-monitoring activity is significantly decreased by increasing task difficulty. These results led us to develop another experiment to assess the brain activity associated with system supervision in an ecological environment resembling everyday aeronautical system monitoring. Thanks to adapted signal-processing techniques (e.g., trial-by-trial time-frequency decomposition), we were able to show: (i) a fronto-central theta activity time-locked to the system's decision, similar to the one obtained in laboratory conditions; (ii) a decrease in overall supervision activity time-locked to the system's decision; and (iii) a specific decrease of monitoring activity for errors. In this thesis, several EEG measures were used in order to adapt to the context at hand. As a perspective, we developed a final study aiming to characterize the evolution of the monitoring activity during the OOTL phenomenon. Finding markers of this degradation would allow its emergence to be monitored, or better, predicted.
Pizziol, Sergio. „Prédiction des conflits dans des systèmes homme-machine“. Thesis, Toulouse, ISAE, 2013. http://www.theses.fr/2013ESAE0039/document.
Der volle Inhalt der Quelle
Conflict prediction in human-machine systems. This work is part of research devoted to problems of human-machine interaction and to the conflicts between human and machine that may arise in such systems.
Tang, Bo. „WEBDOC: AN AUTOMATED WEB DOCUMENT INDEXING SYSTEM“. MSSTATE, 2002. http://sun.library.msstate.edu/ETD-db/theses/available/etd-11052002-213723/.
Der volle Inhalt der Quelle
Rosati, Elise. „Outils d'aide à la conception pour l'ingénierie de systèmes biologiques“. Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAD008/document.
Der volle Inhalt der Quelle
In synthetic biology, Gene Regulatory Networks (GRN) are one of the main ways to create new biological functions to solve problems in various areas (therapeutics, biofuels, biomaterials, biosensing). However, the complexity of the designed networks has reached a limit, thereby restricting the variety of problems they can address. How can biologists overcome this limit and further increase the complexity of their systems? The goal of this thesis is to provide biologists with tools that assist them in the design and simulation of complex GRNs. To this end, the current state of the art was examined and it was decided to adapt tools from microelectronics to biology, as well as to create a Genetic Programming algorithm for GRN design. On the one hand, models of diffusion and of various other systems (band-pass, prey-predator, repressilator, XOR) were created and written in Verilog-A; they are implemented and run correctly on the Spectre solver as well as on the free NgSpice solver. On the other hand, the first steps towards automatic GRN design were achieved: an algorithm able to optimize the parameters of a given GRN according to a specification was developed, and Genetic Programming was applied to GRN design, allowing the optimization of both the topology and the parameters of a GRN. These tools proved their usefulness for the biologists' community by efficiently answering relevant biological questions arising in the development of a system. With this work, we were able to show that adapting microelectronics and Genetic Programming tools to biology is doable and useful. By assisting design and simulation, such tools should promote the emergence of more complex systems.
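To make the parameter-optimization step concrete, here is a minimal Python sketch of an evolutionary algorithm tuning a single Hill-type regulation stage against a hypothetical specification. The parameter names (beta, K, n), their ranges and the target curve are assumptions for illustration, not the actual tool from the thesis.

```python
# Minimal sketch: evolutionary tuning of one Hill-repression stage
# against a made-up target dose-response specification.
import random
import math

def hill_repression(x, beta, K, n):
    """Steady-state output of a gene repressed by input x (Hill kinetics)."""
    return beta / (1.0 + (x / K) ** n)

# Hypothetical specification: desired outputs for a few input levels.
inputs = [0.1, 0.5, 1.0, 2.0, 5.0]
target = [9.9, 8.0, 5.0, 2.0, 0.4]

def fitness(params):
    beta, K, n = params
    err = sum((hill_repression(x, beta, K, n) - t) ** 2
              for x, t in zip(inputs, target))
    return -err  # higher is better

def random_individual():
    return [random.uniform(0.1, 20),   # beta: max expression rate (assumed range)
            random.uniform(0.01, 10),  # K: repression threshold (assumed range)
            random.uniform(1, 4)]      # n: Hill coefficient (assumed range)

def mutate(ind, rate=0.3):
    # Log-normal perturbation keeps parameters positive.
    return [g * math.exp(random.gauss(0, 0.2)) if random.random() < rate else g
            for g in ind]

pop = [random_individual() for _ in range(50)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                      # elitist selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(pop, key=fitness)
print("best (beta, K, n):", best, "squared error:", -fitness(best))
```

Genetic Programming as described in the abstract goes further than this sketch: it also evolves the network topology, not just the numeric parameters of a fixed circuit.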
Addad, Boussad. „Evaluation analytique du temps de réponse des systèmes de commande en réseau en utilisant l’algèbre (max,+)“. Thesis, Cachan, Ecole normale supérieure, 2011. http://www.theses.fr/2011DENS0023/document.
Der volle Inhalt der Quelle
Networked automation systems (NAS) are more and more used in industry, given the several advantages they provide, such as flexibility, low cost and ease of maintenance. However, using a communication network in a NAS means, in essence, sharing resources, which strongly impacts its time performance. For instance, a control signal reaches its destination (an actuator) only after a non-zero delay. To guarantee that such a delay stays below a given threshold, and that other time constraints are respected, an a priori evaluation is necessary before operating the NAS. In our research, we are interested in the reactivity of client/server NAS and in the evaluation of their response time. Our contribution is the introduction of a (Max,+)-algebra-based analytic approach that avoids problems faced by existing methods, such as the state explosion of model checking or the non-exhaustiveness of simulation. After deriving Timed Event Graph (TEG) models of the NAS and their linear (Max,+) state representation, we obtain formulae that allow the NAS response times to be calculated directly: formulae for the bounds of the response time under a deterministic analysis, and formulae for the probability density of the response time under a stochastic analysis. Moreover, we take into account every elementary delay involved in the response time, including the end-to-end delays due exclusively to crossing the communication network. Since the network is built from shared resources, which makes the direct use of TEGs and (Max,+) algebra impossible, we introduce a novel approach to model it. This approach gives rise to a new class of Petri nets, called Conflicting Timed Event Graphs (CTEG), which solves the problem of shared resources. We also represent the CTEG dynamics using recurrent (Max,+) equations and thereby calculate the end-to-end delays. An Ethernet-based network is studied as an example of this approach; note that its field of application extends well beyond communication networks. Finally, to validate the results and the related hypotheses, especially the formula for the maximal bound of the response time, we carry out extensive experimental measurements on a laboratory facility and compare them with the formula's predictions under different conditions.
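For readers unfamiliar with the formalism, the following Python sketch shows the elementary computation underlying such an analysis: iterating a (Max,+)-linear state recurrence x(k+1) = A ⊗ x(k) to propagate event dates through a Timed Event Graph. The delay matrix below is a made-up example, not taken from the thesis.

```python
# Minimal sketch of a (Max,+)-linear recurrence x(k+1) = A (x) x(k),
# which propagates event occurrence dates through a Timed Event Graph.
NEG_INF = float("-inf")  # the (max,+) "zero" element: no arc between events

def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: y[i] = max_j (A[i][j] + x[j])."""
    return [max(a_ij + x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[2.0, NEG_INF],   # transition delays (e.g. in ms); made-up values
     [1.0, 3.0]]
x = [0.0, 0.0]         # dates of the first occurrence of each event

for k in range(5):
    x = maxplus_matvec(A, x)
    print(f"event dates after cycle {k + 1}: {x}")
```

In this linear form, bounds on response times follow from the matrix entries, which is what makes the closed-form formulae of the deterministic analysis possible; the thesis's CTEG extension handles the shared-resource conflicts that break this plain TEG picture.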
Liu, Shibo. „Numerical and experimental study on residual stresses in laser beam welding of dual phase DP600 steel plates“. Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0003/document.
Der volle Inhalt der Quelle
Laser welding is widely used in assembly work in the automobile industry. DP600 is a high-strength dual-phase steel used to reduce automobile weight. Residual stresses are produced during laser welding of DP600. Continuum mechanics is used to analyze these residual stresses by finite element simulation.
Based on experimental tensile tests, a constitutive model of DP600 steel is identified. The hardening term according to the Ludwik law, the Voce law and a proposed synthesis model is studied. The temperature sensitivity of the Johnson-Cook, Khan and Chen models and of a proposed temperature sensitivity model is investigated. The strain rate sensitivity model proposed by A. Gavrus and the planar anisotropy defined by Hill theory are also used.
A 2D Cellular Automaton (CA) method is programmed for the simulation of solidification microstructure evolution during the laser welding process. The temperature field of the CA is imported from the finite element analysis model. The analysis functions for nucleation, solid fraction, interface concentration, surface tension anisotropy, diffusion, interface growth velocity and the conservation equations are presented in detail. Comparison of simulation and experimental results shows good agreement.
Modelling of the laser welding process by a finite element method is presented. The specimen geometry, heat source, boundary conditions, and DP600 dual-phase steel material properties such as conductivity, density, specific heat, expansion, elasticity and plasticity are introduced. Models analyzing the hardening term, strain rate sensitivity, temperature sensitivity, plastic anisotropy and elastic anisotropy are simulated.
The numerical results of the laser welding of DP600 steel are presented. The influence of the hardening term, strain rate sensitivity, temperature sensitivity and anisotropy on residual stresses is analyzed. Comparison with experimental data shows good numerical accuracy.
Keywords: Laser Welding, DP600, Residual Stress, Cellular Automaton, Hardening, Temperature Sensitivity, Strain Rate Sensitivity, Anisotropy, Mixture dual phase law
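To make the cellular-automaton idea concrete, here is a deliberately simplified Python sketch of 2D nucleation-and-growth on a grid. The grid size, nucleation sites and capture probability are illustrative assumptions and omit the thermal and solute physics of the actual model.

```python
# Toy 2D cellular automaton: solid cells capture liquid von Neumann
# neighbours with a fixed probability per step (no temperature coupling).
import random

N = 20                                 # grid size (assumed)
LIQUID, SOLID = 0, 1
grid = [[LIQUID] * N for _ in range(N)]
for i, j in [(5, 5), (14, 12)]:        # initial nucleation sites (assumed)
    grid[i][j] = SOLID

P_CAPTURE = 0.4                        # growth probability per neighbour per step

def step(grid):
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            if grid[i][j] == SOLID:
                for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < N and 0 <= nj < N and grid[ni][nj] == LIQUID:
                        if random.random() < P_CAPTURE:
                            new[ni][nj] = SOLID
    return new

for _ in range(15):
    grid = step(grid)
solid_fraction = sum(map(sum, grid)) / (N * N)
print(f"solid fraction after 15 steps: {solid_fraction:.2f}")
```

In the thesis model the capture rule is driven by the imported finite element temperature field and the interface growth velocity rather than a constant probability, but the grid-update structure is of this kind.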
Yu, Xinyao. „Filter-based automatic document content creation in hypermedia“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq22433.pdf.
Der volle Inhalt der Quelle
Latif, Seemab. „Automatic summarisation as pre-processing for document clustering“. Thesis, University of Manchester, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.521783.
Der volle Inhalt der QuelleIonita, Mihaela-Izabela. „Contribution to the study of synchronized differential oscillators used to controm antenna arrays“. Thesis, Poitiers, 2012. http://www.theses.fr/2012POIT2271/document.
Der volle Inhalt der Quelle
The work presented in this thesis deals with the study of coupled differential oscillators and voltage-controlled oscillators (VCOs) used to control antenna arrays. After recalling the concepts of antenna arrays and oscillators, an overview of R. York's theory, giving the dynamics of two Van der Pol oscillators coupled through a resonant network, is presented. Then, after showing the limitation of this approach regarding the prediction of the oscillators' amplitudes, a new formulation of the nonlinear equations describing the oscillators' locked states is proposed. Nevertheless, due to the trigonometric and strongly nonlinear nature of these equations, mathematical manipulations were applied in order to obtain a new system that is easier to solve numerically. This allowed the elaboration of a Computer-Aided Design (CAD) tool that provides a map of the frequency-locking region of two coupled differential Van der Pol oscillators. This map can help the designer rapidly find the free-running frequencies of the two outermost differential oscillators or VCOs of the array required to achieve the desired phase shift. To do so, the two coupled differential oscillators and VCOs were modeled as two coupled differential Van der Pol oscillators with a resistive coupling network. Then, in order to validate the results provided by our CAD tool, we compared them with the simulation results of two coupled differential oscillators and VCOs obtained with Agilent's ADS software. Good agreement between the circuit simulations, the models and the theoretical results from our CAD tool was found.
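As a toy counterpart to the models discussed, the sketch below numerically integrates two resistively coupled Van der Pol oscillators with a fixed-step RK4 scheme and reads off the steady-state phase difference. The detuning, nonlinearity and coupling values are made up for illustration and are not taken from the thesis or from York's theory.

```python
# Two resistively coupled Van der Pol oscillators:
#   x'' = eps*(1 - x^2)*x' - w^2*x + lam*(v_other - v_self)
import math

EPS = 0.5            # Van der Pol nonlinearity (assumed)
LAM = 0.15           # resistive coupling strength (assumed)
W1, W2 = 1.00, 1.02  # slightly detuned free-running frequencies (assumed)

def deriv(s):
    x1, v1, x2, v2 = s
    a1 = EPS * (1 - x1 * x1) * v1 - W1 * W1 * x1 + LAM * (v2 - v1)
    a2 = EPS * (1 - x2 * x2) * v2 - W2 * W2 * x2 + LAM * (v1 - v2)
    return [v1, a1, v2, a2]

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k1)])
    k3 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k2)])
    k4 = deriv([si + dt * ki for si, ki in zip(s, k3)])
    return [si + dt / 6 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

state, dt = [1.0, 0.0, 0.5, 0.0], 0.01
for _ in range(100000):                       # let transients die out
    state = rk4_step(state, dt)
x1, v1, x2, v2 = state
# Instantaneous phases (approximation valid for w close to 1)
dphi = math.atan2(v1, x1) - math.atan2(v2, x2)
dphi = (dphi + math.pi) % (2 * math.pi) - math.pi
print(f"steady-state phase difference: {dphi:.3f} rad")
```

Sweeping the detuning W2 - W1 in such a simulation traces out the locking region numerically, which is exactly the information the thesis's CAD tool provides analytically as a map.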
Ding, Jie. „Automatic classification of multi-lingual documents“. Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39111.pdf.
Der volle Inhalt der Quelle
Chan, Wai-man. „Medical document management system using XML“. Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23273203.
Der volle Inhalt der Quelle
Matsubara, Shigeki, Tomohiro Ohno und Masashi Ito. „Automatic Editing of Spoken Document for Intelligent Speech Archive“. 日本知能情報ファジイ学会, 2008. http://hdl.handle.net/2237/15111.
Der volle Inhalt der Quelle
Chen, Hsinchun, und K. J. Lynch. „Automatic Construction of Networks of Concepts Characterizing Document Databases“. IEEE, 1992. http://hdl.handle.net/10150/105175.
Der volle Inhalt der Quelle
The results of a study that involved the creation of knowledge bases of concepts from large, operational textual databases are reported. Two East-bloc computing knowledge bases, both based on a semantic network structure, were created automatically using two statistical algorithms. With the help of four East-bloc computing experts, we evaluated the two knowledge bases in detail in a concept-association experiment based on recall and recognition tests. In the experiment, one of the knowledge bases, which exhibited the asymmetric link property, outperformed all four experts in recalling relevant concepts in East-bloc computing. The knowledge base, which contained about 20,000 concepts (nodes) and 280,000 weighted relationships (links), was incorporated as a thesaurus-like component into an intelligent retrieval system. The system allowed users to perform semantics-based information management and information retrieval via interactive, conceptual relevance feedback.
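The abstract does not spell out the two statistical algorithms, but an asymmetric co-occurrence weighting of the general kind it alludes to (where the link i → j can carry a different weight than j → i) can be sketched as follows; this is a generic formulation with toy data, not the authors' exact method.

```python
# Generic asymmetric co-occurrence weighting between index terms,
# yielding directed, thesaurus-like links from a document collection.
from collections import Counter
from itertools import permutations

docs = [                     # toy "database" of indexed documents (assumed)
    ["ryad", "mainframe", "es-1035"],
    ["ryad", "mainframe", "minsk"],
    ["minsk", "mainframe"],
]

occur = Counter()            # n(i): number of documents containing term i
cooccur = Counter()          # n(i, j): documents containing both i and j
for terms in docs:
    unique = set(terms)
    occur.update(unique)
    cooccur.update(permutations(unique, 2))

# Asymmetric weight: w(i -> j) = n(i, j) / n(i). In general
# w(i -> j) != w(j -> i), so rare terms point strongly to common ones
# while common terms point only weakly back.
links = {(i, j): cooccur[(i, j)] / occur[i] for (i, j) in cooccur}
for (i, j), w in sorted(links.items(), key=lambda kv: -kv[1]):
    print(f"{i} -> {j}: {w:.2f}")
```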