Dissertations / Theses on the topic 'Analysis of sensibility'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Analysis of sensibility.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Le, Van-Thao. "Development of a new device and statistical analysis for characterizing soil sensibility face suffusion process." Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4109/document.
Full text
Internal erosion is one of the main causes of instability within hydraulic earth structures. The literature distinguishes four internal erosion processes. This study deals with suffusion, the coupled detachment-transport-filtration of the soil's fine fraction within its coarse fraction. Suffusion susceptibility has mainly been characterized through grain-size-based criteria for the initiation of the process, but the influence of other physical parameters must also be considered. A statistical analysis is performed to identify the main parameters and to focus on those that can easily be measured on existing structures. By distinguishing gap-graded and widely graded soils, two correlations are proposed to estimate soil suffusion susceptibility. Suffusion tests performed with two devices of different sizes show a significant effect of specimen size on the critical hydraulic gradient and on the erosion coefficient; interpreting all tests with an energy-based method avoids this spatial scale effect. To investigate the influence of flow direction, a new device is designed and built for industrial use. The new device and the associated experimental methodology are validated by comparing test results with those of triaxial erodimeter and oedopermeameter tests. Finally, tests performed on heterogeneous specimens highlight the influence of flow direction. Moreover, under horizontal flow without a downstream filter, the development of suffusion in clayey sand can lead to piping.
Gayrard, Emeline. "Analyse bayésienne de la gerbe d'éclats provoquée par l'explosion d'une bombe à fragmentation naturelle." Thesis, Université Clermont Auvergne (2017-2020), 2019. http://www.theses.fr/2019CLFAC039/document.
Full text
During this thesis, a method for the statistical analysis of the sheaf of fragments produced by a bomb, in particular of their masses, was developed. Three samples of incomplete experimental data and a mechanical model simulating the explosion of a ring were available. First, a statistical model based on the mechanical model was designed to generate data similar to those of an experiment. Then the distribution of the masses was studied. As the classical methods of analysis were not accurate enough, a new method was developed: the mass is represented by a random variable built from a basis of chaos polynomials. This method gives good results but does not account for the dependence between fragments. We therefore modelled the masses by a stochastic process rather than a random variable. The range of the fragments, which depends on their masses, was also modelled by a process. Finally, a sensitivity analysis was carried out on this range with Sobol indices. Since these indices apply to random variables, they had to be adapted to stochastic processes in a way that accounts for the dependence between fragments. The last part shows how the results of this analysis could be improved: the indices presented there are adapted to dependent variables and could therefore suit processes with non-independent increments.
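Sobol indices of the kind used for the fragment-range analysis above are commonly estimated by Monte Carlo. A minimal sketch on a toy additive model, not the thesis's fragmentation model (the model, sample size, and pick-freeze estimator below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy additive model: the analytic first-order indices are
    # S1 = 4/(4+1) = 0.8 and S2 = 1/(4+1) = 0.2.
    return 2.0 * x[:, 0] + 1.0 * x[:, 1]

n = 200_000
A = rng.standard_normal((n, 2))
B = rng.standard_normal((n, 2))

def first_order_sobol(i):
    # Pick-freeze estimator: column i comes from A, the rest from B,
    # so correlation between model(A) and model(C) isolates factor i.
    C = B.copy()
    C[:, i] = A[:, i]
    yA, yC = model(A), model(C)
    return (np.mean(yA * yC) - np.mean(yA) * np.mean(yC)) / np.var(yA)

S = [first_order_sobol(i) for i in range(2)]
```

With 200,000 samples the estimates land close to the analytic values 0.8 and 0.2; adapting such indices to stochastic processes with dependent increments, as the thesis does, requires further machinery not shown here.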
Vitória, Letícia da Silva. "The mirror of a writer's sensibility : an analysis of Truman Capote's narrator in Other voices, other rooms." Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/150319.
Full text
American novelist, screenwriter and playwright Truman Capote was one of the leading American fiction authors of the post-war period, best known for his 1965 best seller In Cold Blood and for a style of writing that mixed literature and journalism. However, Capote's works extend beyond that novel. The author, who would eventually become famous for his personality as well, revealed great talent as a writer from a very young age, working with themes closely related to his personal life. During my readings of his works, I perceived that the narrator Capote creates brings the reader much closer to the story. The purpose of this thesis is to analyse Capote's narrator in order to discuss his particular techniques. To do so, I work within the theory of narratology, which is the study not only of narrative and the narrative structure of a text, but also of how it affects our perceptions as readers. Through an analysis of aspects such as focalization and the narrator's discourse, my intention is to trace a relation between the narrator and Capote's implied author, in order to understand how this affects the reading experience and the relationship with the reader. For this analysis, I chose Capote's first published novel, Other Voices, Other Rooms (1948), because I believe it tells a story that seems to come from the author's highly suppressed emotions about his childhood and growing up. I also attempt to identify where biographical elements might have inspired some of the events in the story, establishing connections to his real life and assessing how much it interfered in his fiction. As for the theory that underlies this work, I chose the works of Mieke Bal (2009) and Herman & Vervaeck (2005) to shed light on terms that further the discussion.
By the end of this analysis, I hope to show what lies beneath a carefully constructed narrator, so that the reader can perceive Truman Capote as more than his famous personality: a careful, focused writer who was passionate about his craft.
Guerrero, Gustavo. "Analyse à base de modèles des interactions cardiorespiratoires chez l'adulte et chez le nouveau-né." Thesis, Rennes 1, 2020. http://www.theses.fr/2020REN1S019.
Full text
The physiological mechanisms behind apnea episodes in adults and premature infants are not yet fully elucidated. The main objective of this thesis is to propose an approach based on computational models in order to better understand the acute cardio-respiratory response to an episode of apnea. An original model of cardio-respiratory interactions has been proposed and adapted in three versions: adult, term newborn and preterm infant. Sensitivity analyses performed on these models have highlighted the importance of certain physiological variables: the fraction of inspired oxygen, metabolic rates, the chemoreflex and lung volume. From these results, a subset of parameters was selected to perform the first patient-specific identification of an adult model, studying the dynamics of SaO2 during obstructive apnea on a clinical database of 107 obstructive apneas distributed over 10 patients. From the identified parameters, a phenotyping of the patients was obtained, distinguishing patients with an increased risk of respiratory instability and periodic breathing. The results of the thesis open new perspectives for the management and optimization of certain therapies (CPAP, PEEP, oxygen therapy, etc.) in neonatal and adult intensive care units.
Spitz, Clara. "Analyse de la fiabilité des outils de simulation et des incertitudes de métrologie appliquée à l'efficacité énergétique des bâtiments." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00768506.
Full textToledo, Cristian EpifÃnio de. "Hydrological connectivity in semi-arid environment: case study of the OrÃs reservoir." Universidade Federal do CearÃ, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=11002.
Full text
Attempting to solve the drought problem, political decisions prioritized the construction of reservoirs, eventually producing a 'high-density network of reservoirs' in the Brazilian Northeast. As a rule, a reservoir interrupts the natural river flow, thus interfering in the water dynamics downstream. This work studied the processes involved in hydrological connectivity, as well as the interference of a network of multiple reservoirs in the hydrological connectivity of a large semiarid basin. The case study is the catchment area of the Orós reservoir - BHAO (24,211 km2), located in the semiarid region of Ceará. The research began with a survey of the topology of the BHAO's dense reservoir network, conducted using remote sensing (RS), GIS tools and a satellite image from the end of the 2011 rainy season. The hydrological connectivity analysis was performed with the Reservoir Network Model (ResNetM), which simulated the hydrological processes and considered the hydrological connectivity between reservoirs according to the criteria established in this research. To identify the key natural and anthropogenic factors affecting the basin's hydrological connectivity, a sensitivity analysis (SI) of some input parameters of the model was performed, which made it possible to evaluate the impact of the reservoir network on the volume stored in the Orós reservoir. The survey of the reservoir network with RS and automatic GIS tools showed two shortcomings: the misinterpretation of shadows as reservoirs and the misidentification of the actual water surface due to the presence of macrophytes in the reservoirs. Thus, of the 6,002 automatically generated polygons, only 4,717 (79%) were confirmed as reservoirs after manual adjustment.
The survey found that in the last decade there was a 17.5% increase in the number of BHAO reservoirs and that, in regions with crystalline geology, the density of reservoirs is 80% higher than in regions of sedimentary geology. The sensitivity analysis indicated that the number of reservoirs in the network was the variable to which the system showed the highest sensitivity (SI = 1.07) with respect to hydrological connectivity. In contrast, variations in evaporation (SI = 0.19) and in transit losses (SI = 0.01) did not induce significant changes in the BHAO's hydrological connectivity. The volume stored in the Orós reservoir also showed no significant changes (SI = 0.21) when the reservoir network topology was modified: for example, when the removal of the small and medium reservoirs of the network (4,664, or 98.9% of the reservoirs) was simulated, the Orós reservoir showed an increase of only 14% in its average stored volume. Based on these observations, it was concluded that the rate of annual reservoir increment in the BHAO has decreased over the past 10 years, marking the beginning of the stabilization phase of the network. Among the natural elements evaluated, the (natural) runoff coefficient was the most significant for hydrological connectivity; its importance is due to the fact that significant baseflow is rarely observed in the BHAO's natural system. Of the anthropogenic elements analysed, the dense reservoir network had the highest importance for hydrological connectivity. The reason is that the reservoirs attenuate the flood wave, increasing the number of days with river flow and, consequently, the frequency of hydrological connectivity. In addition, new reservoirs decrease the length of the reaches to be connected, reducing transit losses and favouring hydrological connectivity.
The variation of the reservoir network demonstrated that decreasing the number of reservoirs in the network reduces the BHAO's hydrological connectivity but does not significantly change the inflow to the Orós reservoir, the outlet of the basin studied here. The dense reservoir network proved to act, at the beginning of the rainy season, as a barrier to river flow, breaking hydrological connectivity; over time and with continued rainfall, the thousands of reservoirs promote hydrological connectivity by attenuating the flood wave.
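Sensitivity indices (SI) of the kind reported above compare the relative change in a model output to the relative change in an input parameter. A minimal sketch of one common definition (the thesis's exact formula is not given here, so this form and the numbers are illustrative assumptions):

```python
def sensitivity_index(base_input, new_input, base_output, new_output):
    """Relative sensitivity index: ratio of the relative change in the
    model output to the relative change in the input parameter."""
    rel_in = (new_input - base_input) / base_input
    rel_out = (new_output - base_output) / base_output
    return abs(rel_out / rel_in)

# Hypothetical example: doubling the number of reservoirs (+100%)
# raises the connectivity frequency from 0.20 to 0.30 (+50%), so SI = 0.5.
si = sensitivity_index(4717, 9434, 0.20, 0.30)
```

An SI near 1 (like the 1.07 found for the number of reservoirs) means the output responds roughly proportionally to the input; values near 0 (like 0.01 for transit losses) mean the output is nearly insensitive.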
Ferber, De Vieira Lessa Moisés. "Metodologias para análise de incertezas paramétricas em conversores de potência." Phd thesis, Ecole Centrale de Lyon, 2013. http://tel.archives-ouvertes.fr/tel-00999588.
Full textRossi, Lubianka Ferrari Russo. "Acoplamento entre os métodos diferencial e da teoria da perturbação para o cálculo dos coeficientes de sensibilidade em problemas de transmutação nuclear." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/85/85133/tde-12022015-154545/.
Full text
The main goal of this study is to introduce a new method for calculating sensitivity coefficients by combining the differential method and generalized perturbation theory, the two methods generally used in reactor physics to obtain such quantities. Used separately, each method has drawbacks that make the calculation of sensitivity coefficients slow or computationally exhausting. Combined, however, these drawbacks can be overcome, yielding a new equation for the sensitivity coefficient. The method introduced in this study was applied to a PWR reactor, performing a sensitivity analysis for the production and conversion rate of 239Pu during 120 days (one cycle) of burnup. The computational code used for both the burnup and the sensitivity analysis, CINEW, was developed in this study, and all results were compared with codes widely used in reactor physics, such as CINDER and SERPENT. The new mathematical method for calculating sensitivity coefficients and the CINEW code provide good numerical agility as well as efficiency and reliability, since the new method gives satisfactory results when compared with traditional ones, even when those methods use different mathematical approaches. The burnup analysis performed with CINEW was compared with CINDER, showing acceptable variation, although CINDER presents some computational issues due to the period in which it was built. The originality of this study is the application of the method to problems involving temporal dependence and, not least, the elaboration of the first Brazilian code for burnup and sensitivity analysis.
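The differential method mentioned above obtains a sensitivity coefficient by directly perturbing a parameter and observing the change in the response. A minimal sketch on a single-nuclide depletion problem, not the thesis's coupled method (the nuclide model, cross section and flux values are illustrative assumptions):

```python
import math

def burnup(N0, sigma, phi, t):
    # Single-nuclide depletion: dN/dt = -sigma*phi*N, so
    # N(t) = N0 * exp(-sigma*phi*t).
    return N0 * math.exp(-sigma * phi * t)

def rel_sensitivity(sigma, phi, t, h=1e-6):
    # Differential-method estimate of the relative sensitivity
    # coefficient S = (dR/R) / (dsigma/sigma) by finite difference.
    base = burnup(1.0, sigma, phi, t)
    pert = burnup(1.0, sigma * (1.0 + h), phi, t)
    return ((pert - base) / base) / h

sigma, phi, t = 2.0e-24, 1.0e14, 1.0e7   # illustrative values only
S_num = rel_sensitivity(sigma, phi, t)
S_exact = -sigma * phi * t               # analytic result for this model
```

For this toy model the finite-difference estimate matches the analytic coefficient; the appeal of perturbation-theory approaches is precisely to avoid one such model re-evaluation per parameter.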
ROSSI, LUBIANKA F. R. "Acoplamento entre os métodos diferencial e da teoria da perturbação para o cálculo dos coeficientes de sensibilidade em problemas de transmutação nuclear." Repositório Institucional do IPEN, 2014. http://repositorio.ipen.br:8080/xmlui/handle/123456789/23594.
Full text
Thesis (Doctorate in Nuclear Technology)
IPEN/T
Instituto de Pesquisas Energeticas e Nucleares - IPEN-CNEN/SP
Santos, Junior James Dean Oliveira dos. "Considerações sobre a relação entre distribuições de cauda pesada e conflitos de informação em inferencia bayesiana." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/306673.
Full textDissertação (mestrado) - Universidade Estadual de Campinas, Instituto de Matematica, Estatistica e Computação Cientifica
Abstract: In Bayesian inference we deal with information coming from the data and with prior information. Occasionally, one or more outliers can cause a conflict between the sources of information. Basically, resolving a conflict between sources of information means finding a set of restrictions under which one of the sources dominates, in a certain sense, the others. Distributions widely accepted as heavy-tailed have been used in the literature for this purpose. In this work, we show the relations between some results of the theory of conflicts and heavy-tailed distributions. We also show how conflicts can be resolved in the location case using subexponential models, and how the credence measure can be used to resolve problems in the scale case.
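The conflict-resolution behaviour described above can be seen numerically: with a light-tailed likelihood, the posterior compromises between a discordant observation and the prior, whereas a heavy-tailed (here Cauchy) likelihood lets the prior dominate. A grid-based sketch (the prior, observation and grid are illustrative assumptions, not the thesis's models):

```python
import numpy as np

theta = np.linspace(-20.0, 20.0, 40001)   # uniform grid, so plain sums suffice
prior = np.exp(-0.5 * theta**2)           # N(0, 1) prior, up to a constant
y = 10.0                                  # observation in conflict with the prior

def posterior_mean(likelihood):
    post = prior * likelihood
    post /= post.sum()                    # normalize on the grid
    return float((theta * post).sum())

normal_lik = np.exp(-0.5 * (y - theta)**2)   # light-tailed model
cauchy_lik = 1.0 / (1.0 + (y - theta)**2)    # heavy-tailed model

m_normal = posterior_mean(normal_lik)   # compromises near (0 + y)/2 = 5
m_cauchy = posterior_mean(cauchy_lik)   # stays near the prior mean 0
```

The heavy-tailed model effectively discounts the outlying observation, which is the qualitative phenomenon the theory of conflicts makes precise.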
Master's program
Bayesian Inference
Master in Statistics
Lara, Jerusa Petróvna Resende 1980. "Análise cinemática tridimensional do salto em distância de atletas de alto nível em competição." [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/274719.
Full textDissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Educação Física
Abstract: The aim of this study is to carry out a three-dimensional kinematic analysis of the long jump of elite athletes in competition, through four independent papers. The first study analysed the relationships among the kinematic variables of the long jump of elite athletes in competition, using multiple linear regression over the approach, takeoff and flight phases. The results showed that the centre-of-mass speed at touchdown, the maximum centre-of-mass height during the flight phase and the distance from the athlete's support foot to the takeoff board were the variables included in the prediction model of the jumped distance, together accounting for 68% of the variability in distance. In a second model, the scalar speeds were replaced by the velocity components of the centre of mass; with that change, the horizontal and vertical centre-of-mass velocities at touchdown and the angle of attack accounted for 88% of the variability in distance. The second study examined the repeatability and reproducibility of three-dimensional kinematic variables in the long jump and tested the sensitivity of the repeatability and reproducibility values in predicting the jumped distance. Ten observers of both sexes each performed five measurements of the same long jump under the conditions that define reproducibility and repeatability. From the observers' measurements, we calculated the kinematic variables of the long jump during the takeoff. We conclude that the centre-of-mass velocity variables are reproducible and repeatable above 0.09 m/s, while angular variables are reproducible and repeatable above 0.67°. The sensitivity analysis found that a variation of 1 m in the jumped distance would require variations of 0.77, 200.00 and 1.80 m/s in the horizontal, lateral and vertical velocities of the centre of mass, respectively, and a variation of 5.75° in the angle between the centre-of-mass velocity vector and the horizontal plane.
The third study investigated the angular kinematics of the long jump of elite athletes in competition through three analyses of the joint angles during the takeoff, suggesting that an analysis performed during an official competition provides valuable information for technical analysis and research. Furthermore, the statistical analysis revealed that the maximum flexion angles occur sequentially, from hip to ankle, in terms of the percentage of the takeoff phase. The last study analysed the variability of the three-dimensional kinematic variables of two Olympic medallists; the results suggest that athletes should attempt to control the board-exit variables, seeking to reduce their variability and reach the optimum values of the kinematic variables.
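Multiple linear regression of the kind used in the first study can be sketched with ordinary least squares on synthetic data (the velocities, coefficients and noise level below are invented for illustration, not the athletes' measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical takeoff data: horizontal and vertical centre-of-mass
# velocities at touchdown (m/s), and the resulting jumped distance (m).
n = 200
vx = rng.normal(9.5, 0.4, n)
vz = rng.normal(3.3, 0.3, n)
dist = 0.9 * vx + 0.8 * vz - 3.0 + rng.normal(0.0, 0.1, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), vx, vz])
beta, *_ = np.linalg.lstsq(X, dist, rcond=None)

# Coefficient of determination: the share of distance variability
# explained by the predictors, analogous to the 68% / 88% figures above.
pred = X @ beta
r2 = 1.0 - np.sum((dist - pred) ** 2) / np.sum((dist - dist.mean()) ** 2)
```

Comparing the R² of competing predictor sets, as the study does with scalar speeds versus velocity components, is what identifies which variables best explain the jumped distance.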
Master's program
Biodynamics of Human Movement
Master in Physical Education
Vorger, Éric. "Étude de l'influence du comportement des habitants sur la performance énergétique du bâtiment." Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0066/document.
Full text
Human behaviour is modelled in a simplistic manner in building energy simulation programs. Yet it has a considerable impact and is identified as a major explanatory factor of the discrepancy between simulation results and in situ measurements. Occupants influence buildings' energy consumption through their presence and activities, the opening and closing of windows, actions on blinds, the use of artificial lighting and electrical appliances, the choice of temperature setpoints, and water consumption. This thesis proposes a stochastic model of occupant behaviour covering all these aspects, for residential and office buildings. The models are built on numerous data from measurement campaigns, sociological surveys and the scientific literature. The proposed occupant behaviour model is coupled to the simulation tool Pléiades+COMFIE. By propagating the uncertainties of factors from the occupant behaviour model and the thermal model (envelope, climate, systems), the confidence interval of the simulation results can be estimated, opening the way to an energy performance guarantee process.
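Stochastic occupant models of this kind are often built from Markov chains. A minimal sketch of a two-state presence model, not the thesis's calibrated model (the transition probabilities are invented for illustration):

```python
import random

random.seed(42)

# Hypothetical hourly transition probabilities for a dwelling:
# P(arrive | absent) and P(leave | present).
P_ARRIVE = 0.3
P_LEAVE = 0.1

def simulate_presence(hours):
    """First-order Markov chain of occupant presence (1) / absence (0)."""
    state, trace = 0, []
    for _ in range(hours):
        p_present = P_ARRIVE if state == 0 else 1.0 - P_LEAVE
        state = 1 if random.random() < p_present else 0
        trace.append(state)
    return trace

trace = simulate_presence(100_000)
occupancy = sum(trace) / len(trace)
# Stationary occupancy = P_ARRIVE / (P_ARRIVE + P_LEAVE) = 0.75
```

In a full model, such transition probabilities would vary by hour of day and household type, and presence would then drive window, blind, appliance and setpoint sub-models.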
Pham, Tu Quoc Sang. "Caractérisation des propriétés d’un matériau par radiométrie photothermique modulée." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112377/document.
Full text
Modulated photothermal radiometry, a remote, non-intrusive technique, was used to measure the thickness and thermal diffusivity of a metal plate and to characterize a layer on a substrate. A thermal model of 3D heating was developed, taking into account the convective heat exchange for a plate and the thermal resistance of the interface for a layer on a substrate. The sensitivity analysis and the multi-parameter studies of the phase shift were performed with code developed in Matlab. Simple formulas were obtained to determine the thickness and thermal diffusivity of a plate and the ratio of thermal effusivities for a layer on a substrate. These formulas were validated experimentally for plate thicknesses of 100 μm to 500 μm in various metals (stainless steel 304L, nickel, titanium, tungsten, molybdenum, zinc and iron). The measurement uncertainty was below 10% for thickness and below 15% for thermal diffusivity. The same technique was applied to Zircaloy-4 cladding, which may be of particular interest for the nuclear industry. The presence of an oxide layer a few μm thick was found to have practically no effect on the thickness and thermal diffusivity measurements of the Zircaloy-4 cladding. However, the phase-shift effect observed at high frequency (> 1 kHz) may open new perspectives and widen the field of application of the method to semi-transparent and very thin (sub-μm) layers.
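Phase-shift formulas in modulated photothermal methods build on the thermal diffusion length μ = sqrt(α / (π f)). A sketch of this textbook relation (the one-dimensional "phase lag ≈ L/μ" approximation and the material values below are illustrative assumptions, not the thesis's calibrated formulas):

```python
import math

def diffusion_length(alpha, f):
    """Thermal diffusion length mu = sqrt(alpha / (pi * f)), in metres,
    for diffusivity alpha (m^2/s) and modulation frequency f (Hz)."""
    return math.sqrt(alpha / (math.pi * f))

def rear_face_phase_lag(thickness, alpha, f):
    # Thermally thick 1-D approximation: the thermal wave crossing a
    # plate of thickness L lags by about L / mu radians.
    return thickness / diffusion_length(alpha, f)

alpha_steel = 4.0e-6      # m^2/s, stainless steel 304L (approximate)
L = 300e-6                # 300 um plate, within the 100-500 um range studied
lag = rear_face_phase_lag(L, alpha_steel, 10.0)   # phase lag at 10 Hz
```

Inverting such a relation (measured phase lag versus frequency, with known thickness) is one route to the thermal diffusivity, which is the kind of simple formula the thesis derives and validates.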
Douinot, Audrey. "Analyse des processus d'écoulement lors de crues à cinétique rapide sur l'arc méditerranéen." Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30265/document.
Full text
The purpose of this thesis is to improve the knowledge of hydrological processes during flash flood events using rainfall-runoff modelling. The project focuses on hydrological processes occurring in the soil and subsoil horizons. A preliminary data analysis corroborates the activity of the weathered bedrock during flash floods. The hydrological response simulated by the MARINE model is then investigated to assess the sensitivity of subsurface flow processes to model assumptions. This leads to several modifications of the model structure to make it more robust. Moreover, a two-layered soil column is implemented to explicitly integrate the activity of the weathered bedrock into the model. Assuming preferential flow paths at the soil-bedrock interface, the model performs well on sedimentary watersheds, but underestimates recession curves and second flood peaks on granitic ones, showing the need to also simulate a significant contribution from the weathered bedrock.
Holá, Lucie. "Matematický model rozpočtu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2008. http://www.nusl.cz/ntk/nusl-227938.
Full text
Martraire, Diane. "Étude du pouvoir de discrimination des primaires initiant les grandes gerbes atmosphériques avec des réseaux de détecteurs au sol : analyse des rayons cosmiques de ultra haute énergie détectés à l’observatoire Pierre Auger, estimation des performances pour la détection de gamma de très haute énergie du futur observatoire LHAASO." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112276/document.
Full textDuring the past century, ultra-high-energy cosmic rays (UHECR), those with an energy larger than 10^18 eV, have remained a mystery: What are cosmic rays? Where do they come from? How do they attain their huge energies? When these charged particles strike the Earth's atmosphere, they dissipate their energy by generating a shower of secondary particles whose development differs significantly depending on the nature of the primary. The Pierre Auger Observatory, with its hybrid structure and huge network of ground detectors, can shed some light on these questions. The study of the composition of UHECR was performed with the Pierre Auger apparatus. This is crucial both to understand the hadronic interactions, which govern the evolution of showers, and to identify their sources. It can help to understand the origin of the energy-spectrum cut-off: is it the GZK cut-off or the exhaustion of the sources? These reasons motivate the first part of this thesis: the development of a method to extract the muonic component of air showers and deduce the implications for the composition of UHECR at the Pierre Auger Observatory. The results of this method show a dependence of the composition on the distance to the shower axis, which could help to improve the hadronic models. The determination of the muon component is limited by the surface-detector setup. The second part is devoted to the new observatory in China, LHAASO. This project focuses on the study of gamma rays with energies above 30 TeV, which probe the acceleration of protons in the galaxy, providing indirect information on cosmic rays. The observatory also studies cosmic rays between 10 TeV and 1 EeV, one of the regions where the energy spectrum presents a break and where the ability to discriminate gamma rays from cosmic rays is required. For this reason, one of the LHAASO detectors, the KM2A, was simulated and its gamma/hadron discrimination power evaluated.
Lemaitre, Paul. "Analyse de sensibilité en fiabilité des structures." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0061/document.
Full textThis thesis deals with sensitivity analysis in a structural-reliability context. The general framework is the study of a deterministic numerical model that reproduces a complex physical phenomenon. The aim of a reliability study is to estimate the failure probability of the system from the numerical model and the uncertainties of the inputs. In this context, quantifying the impact of the uncertainty of each input parameter on the output is of interest; this step is called sensitivity analysis. Many scientific works deal with this topic, but not in the reliability context. This thesis aims to test existing sensitivity analysis methods and to propose more efficient original ones. A bibliographical review of sensitivity analysis on the one hand, and of the estimation of small failure probabilities on the other, is first presented; it highlights the need to develop appropriate techniques. Two variable-ranking methods are then explored. The first makes use of binary classifiers (random forests). The second measures, at each step of a subset method, the departure between each input's original density and its density conditional on the subset reached. A more general and original methodology reflecting the impact of modifying an input density on the failure probability is then explored. The proposed methods are finally applied to the CWNR case, which motivates this thesis.
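The two ingredients above, estimating a failure probability and ranking the inputs by how far their distribution shifts when the system fails, can be sketched with a toy linear limit state. This is only a crude Monte Carlo stand-in for the subset-based density-departure idea, with hypothetical coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Toy limit state g(X) = 3 - (2*X1 + 0.5*X2); failure when g < 0.
x = rng.standard_normal((n, 2))
g = 3.0 - (2.0 * x[:, 0] + 0.5 * x[:, 1])
fail = g < 0.0

pf = fail.mean()                      # crude Monte Carlo failure probability

# Rank inputs by how far their density shifts given failure (a crude
# proxy for the density-departure measure): compare the conditional
# mean given failure with the original zero mean.
shift = np.abs(x[fail].mean(axis=0))
print(pf, shift)                      # X1 shifts much more than X2
```

The input with the larger coefficient dominates the failure event, and its conditional density departs further from the nominal one, which is exactly the signal the ranking method exploits.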
Silva, Giovana Oliveira. "Modelos de regressão quando a função de taxa de falha não é monótona e o modelo probabilístico beta Weibull modificada." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-10032009-094918/.
Full textIn survival-analysis applications, the failure rate function is frequently unimodal or bathtub-shaped, i.e., non-monotone. The regression models commonly used for survival studies are the log-Weibull model, whose failure rate is monotone, and the log-logistic model, whose failure rate is decreasing or unimodal. In the first part of this thesis, we propose location-scale regression models based on an extended Weibull distribution for modeling data with a bathtub-shaped failure rate, and on a Burr XII distribution as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed models. For these models we derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes, and we also present some ways to perform global influence analysis. Additionally, we develop residual analysis based on the martingale-type residual. For different parameter settings, sample sizes and censoring percentages, various simulation studies were performed, and the empirical distribution of the martingale-type residual was displayed and compared with the standard normal distribution. These studies suggest that, for the log-extended Weibull regression model with censored data, the empirical distribution of the martingale-type residual agrees closely with the standard normal distribution when compared with the other residuals considered. For the log-Burr XII regression model, a modification of the martingale-type residual was proposed, based on simulation studies, in order to obtain agreement with the standard normal distribution. Applications to real data illustrate the usefulness of the developed methodology.
In some applications the assumption of independence of the survival times may not hold, so random effects were added to the log-Burr XII regression model, and an estimation method for its parameters based on the Monte Carlo EM algorithm was proposed. Finally, a five-parameter distribution, the so-called beta modified Weibull distribution, is defined and studied. The advantage of this new distribution is its flexibility in accommodating several forms of the failure rate function, for instance bathtub and unimodal shapes, and it is also suitable for testing the goodness of fit of some special sub-models. The method of maximum likelihood is used for estimating the model parameters, and the observed information matrix is calculated. A real data set illustrates the application of the new distribution.
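The hazard shapes discussed above can be sketched numerically: the plain Weibull hazard is always monotone, while even a simple additive combination of a decreasing and an increasing hazard already produces a bathtub shape. This is only an illustration of the shapes; the beta modified Weibull density itself is more involved.

```python
import numpy as np

t = np.linspace(0.05, 5.0, 200)

def weibull_hazard(t, k, lam=1.0):
    # h(t) = (k/lam) * (t/lam)**(k-1): monotone for the plain Weibull.
    return (k / lam) * (t / lam) ** (k - 1)

h_inc = weibull_hazard(t, k=2.0)      # increasing hazard (k > 1)
h_dec = weibull_hazard(t, k=0.5)      # decreasing hazard (k < 1)

# A simple additive combination already yields a bathtub shape; the
# beta modified Weibull accommodates such shapes within one model.
h_bathtub = h_dec + 0.2 * h_inc
```

Plotting `h_bathtub` shows the early-life decrease, a flat bottom near t ≈ 0.7 and the wear-out increase that motivate non-monotone failure-rate models.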
Mazzilli, Naomi. "Sensibilité et incertitude de modélisation sur les bassins méditerranéens à forte composante karstique." Thesis, Montpellier 2, 2011. http://www.theses.fr/2011MON20188/document.
Full textKarst aquifers raise key issues for water-resource management and for flood-risk mitigation. These systems are characterized by a highly heterogeneous structure and non-linear functioning. This thesis addresses the sensitivity and uncertainty associated with the numerical modelling of groundwater flow in karst systems. Sensitivity analysis is used systematically to answer the following questions: (i) is it possible to calibrate the model? (ii) is the calibration robust? (iii) is it possible to reduce equifinality through multi-objective or multi-variable calibration? This contribution stresses the potential of local sensitivity analyses: despite their inherent limitation (local approximation), they prove to bring valuable insight into the general behaviour of complex, non-linear flow models at little computational cost. It also stresses the value of multi-variable calibration, as compared with multi-objective calibration, for reducing equifinality.
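The low cost of local sensitivity analysis can be sketched on a deliberately tiny stand-in for one compartment of a karst model: a single linear-reservoir recession, where the finite-difference sensitivity of the discharge to the recession coefficient can be checked against the analytic derivative. All parameter values are hypothetical.

```python
import numpy as np

# Toy linear-reservoir recession Q(t) = k * S0 * exp(-k t): a (very)
# simplified stand-in for one compartment of a karst flow model.
S0, k = 100.0, 0.3
t = np.linspace(0.0, 10.0, 50)

def discharge(k):
    return k * S0 * np.exp(-k * t)

# Local sensitivity of the discharge to k: finite difference vs analytic.
eps = 1e-6
sens_fd = (discharge(k + eps) - discharge(k)) / eps
sens_an = S0 * np.exp(-k * t) * (1.0 - k * t)
```

One extra model run gives the whole sensitivity trajectory; note the sign change at t = 1/k, where increasing k stops raising and starts lowering the discharge.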
Sarlon, Emmanuelle. "Stratégies palliatives à la non-randomisation en santé mentale : score de propension et techniques d’ajustement apparentées. Méthodologie appliquée à la prise en compte des facteurs de confusion dans le cas de la schizophrénie." Thesis, Paris 11, 2014. http://www.theses.fr/2013PA11T103/document.
Full textObjective: To evaluate adjustment methods for measured or unmeasured confounding bias in observational studies of psychotic or schizophrenic patients. Methods: Propensity-score methods (for measured confounding) and sensitivity analyses (for unmeasured confounding) were applied in the field of psychiatric epidemiology, specifically schizophrenia. First, the question of residual bias was highlighted by the results of a cross-sectional study in which exposure to a contextual factor (prison) was studied in relation to psychotic disorders (DSM-IV) using a classical adjustment method. Second, to obtain an unbiased estimate of the treatment effect, we compared a classical adjustment method with a method based on the propensity score. These approaches were applied to a cohort of French schizophrenic patients in which we studied the event (relapse) as a function of treatment exposure (polypharmacy or not). Third, we developed a synthesis on the modelling of uncertainty and unmeasured confounding bias; theories and methods were described and then applied to the results of the previous studies. Results: The cross-sectional study allowed us to address the question of adjustment quality for exposure to a factor in an observational setting. The cohort study permitted a comparison of a classical adjustment method with the propensity score (PS); results differed according to the adjustment method, and stratification on the PS appeared to be the best method for predicting relapse according to treatment exposure. Methods for controlling unmeasured bias were then described, and a combination of probabilistic methods was applied to the previous studies. Conclusion: For observational studies, the objective was to study, describe and apply modelling methods that take into account baseline differences, a potential source of confounding bias. This research lies at the crossroads of methodology, biostatistics and epidemiology.
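The benefit of stratification on the propensity score can be sketched on simulated confounded data. For simplicity the true propensity is used directly (in practice it would be estimated, e.g. by logistic regression); the data-generating model and the effect size of 2 are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulated confounded data: x drives both treatment and outcome.
x = rng.standard_normal(n)
p = 1.0 / (1.0 + np.exp(-x))               # true propensity score (known here)
treated = rng.random(n) < p                # treatment assignment
y = 2.0 * treated + x + rng.standard_normal(n)   # true treatment effect = 2

naive = y[treated].mean() - y[~treated].mean()   # biased upward by confounding

# Stratification on the propensity score (5 strata):
edges = np.quantile(p, np.linspace(0.0, 1.0, 6))
strata = np.clip(np.searchsorted(edges, p, side="right") - 1, 0, 4)
effects = [y[(strata == s) & treated].mean() - y[(strata == s) & ~treated].mean()
           for s in range(5)]
stratified = np.mean(effects)
print(naive, stratified)                   # naive ~3, stratified close to 2
```

Five quantile strata remove most (not all) of the confounding, which mirrors why PS stratification outperformed the naive comparison in the cohort study.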
Janon, Alexandre. "Analyse de sensibilité et réduction de dimension. Application à l'océanographie." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00757101.
Full text
Barrera Gallegos, Noé. "Sensibilité Paramétrique pour l’Analyse Dynamique des Réseaux à Courant Continu." Thesis, Ecole centrale de Lille, 2016. http://www.theses.fr/2016ECLI0021/document.
Full textThis thesis presents different methodologies for the parametric sensitivity analysis of high-voltage DC (HVDC) networks. The fundamental theory of modal analysis has been applied to the analysis of electrical power systems at their different stages of energy production and transmission, and tools derived from these fundamentals have become popular. Among the tools used in dynamic analysis, participation factors have long been employed. Proposed by (Perez-Arriaga et al., 1982), they give a metric relating the states and eigenvalues of a system. Participation factors are intended for the analysis of systems with particular dynamics, such as electromechanical systems, and they also help in model reduction. Firstly, we present the fundamentals of the sensitivity analysis upon which participation factors are based; the principle is illustrated with several examples. We then propose a new formulation for sensitivity analysis using parametric sensitivity (Barrera Gallegos et al., 2016). Finally, participation factors and parametric sensitivity analysis are applied to HVDC networks; this comparison exposes the limitations of participation factors for the general analysis of HVDC grids. In conclusion, the new methodology is a better and more general alternative to the traditional participation factors employed for the analysis of HVDC grids. In addition, the new parametric-sensitivity technique yields novel information on the dynamic characteristics of the HVDC grid.
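The participation factors mentioned above can be sketched in a few lines: with right eigenvectors R and left eigenvectors (rows of R⁻¹), the participation of state k in mode i is the product of the corresponding entries. The 2-state system below is a hypothetical, weakly coupled example, not an HVDC grid model.

```python
import numpy as np

# Participation factor p_ki = r_ki * l_ik relates state k to mode i
# (right eigenvector column r_i, left eigenvector row l_i, normalized
# so that l_i r_i = 1). Toy, weakly coupled 2-state system:
A = np.array([[-1.0, 0.1],
              [0.1, -5.0]])

eigvals, R = np.linalg.eig(A)
W = np.linalg.inv(R)                  # rows are left eigenvectors, W @ R = I
P = R * W.T                           # elementwise: P[k, i] = R[k, i] * W[i, k]

# Each column (mode) sums to 1; here each mode is dominated by one state.
print(eigvals)
print(P)
```

For this nearly diagonal A each mode is dominated by a single state, which is the regime where participation factors work well; the thesis's point is that strongly coupled HVDC dynamics break this clean association.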
Fiorini, Camilla. "Analyse de sensibilité pour systèmes hyperboliques non linéaires." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLV034/document.
Full textSensitivity analysis (SA) concerns the quantification of changes in the solution of Partial Differential Equations (PDEs) due to perturbations in the model input. Standard SA techniques for PDEs, such as the continuous sensitivity equation method, rely on differentiation of the state variable. However, if the governing equations are hyperbolic PDEs, the state can exhibit discontinuities, yielding Dirac delta functions in the sensitivity. We aim at modifying the sensitivity equations to obtain a solution without delta functions. This is motivated by several reasons: firstly, a Dirac delta function cannot be captured numerically, leading to an incorrect sensitivity solution in the neighbourhood of the state discontinuity; secondly, the spikes appearing in the numerical solution of the original sensitivity equations make such sensitivities unusable for some applications. Therefore, we add a correction term to the sensitivity equations. We do this for a hierarchy of models of increasing complexity, from the inviscid Burgers equation to the quasi-1D Euler system. We show the influence of this correction term on an optimization algorithm and on an uncertainty quantification problem.
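The Dirac delta appearing in the sensitivity can be seen directly on the exact Riemann shock solution of the inviscid Burgers equation: perturbing the left state moves the shock, so the finite-difference sensitivity is a spike whose height blows up like 1/ε as the perturbation shrinks. The states and grid below are hypothetical.

```python
import numpy as np

# Exact Riemann solution of inviscid Burgers for a shock: u = uL for
# x < s*t, u = uR otherwise, with shock speed s = (uL + uR) / 2.
def riemann(x, t, uL, uR):
    s = 0.5 * (uL + uR)
    return np.where(x < s * t, uL, uR)

x = np.linspace(0.0, 2.0, 4001)
t, uL, uR = 1.0, 2.0, 0.0

def fd_sensitivity(eps):
    # Finite-difference sensitivity of u with respect to uL.
    return (riemann(x, t, uL + eps, uR) - riemann(x, t, uL, uR)) / eps

# The spike height scales like 1/eps: a Dirac delta forms at the shock.
s1 = fd_sensitivity(0.1)
s2 = fd_sensitivity(0.01)
```

Away from the shock the sensitivity is the clean value 1 (left) or 0 (right); only near x = st does the unbounded spike appear, which is precisely what the corrected sensitivity equations are designed to remove.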
Paucar, Casas Walter Jesus. "Concepção otima de sistemas elasto-acusticos interiores acoplados." [s.n.], 1998. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265126.
Full textDoctoral thesis (Tese de doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Abstract: In this research, methodologies for obtaining optimal shapes in coupled vibroacoustic problems are developed through geometric parameter changes, using sensitivity analysis and nonlinear programming tools. The matrix equations of the problem are determined with the finite element method and then put in a form that makes them functions of the structural parameters. A non-symmetric formulation in structural displacement and fluid pressure is used to describe the system. Once the natural frequencies and modes for a set of parameters are found, the optimization is conducted using modal sensitivity analysis. The objective is either to maximize the gap between adjacent natural frequencies or to minimize the frequency response in a specific region of the system for a set of excitation frequencies; this is done by modifying the shape parameters. The effect of proportional damping is included in the model. The results are validated against numerical solutions available in the literature, and additional results using modal prediction in the optimization are also analyzed. The implemented methodology can be applied, for example, to the improvement of vibroacoustic comfort.
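The modal sensitivity used in the optimization above can be sketched with the classical eigenvalue-derivative formula: for a symmetric system K x = λ M x with M-normalized mode x, dλ/dp = xᵀ(∂K/∂p − λ ∂M/∂p)x. The 2-DOF stiffness matrix below is a hypothetical toy, checked against finite differences.

```python
import numpy as np

# Eigenvalue (modal) sensitivity for a symmetric system K x = lam M x:
# with M-normalized mode x (x' M x = 1), d(lam)/dp = x' (dK/dp - lam dM/dp) x.
def modes(p):
    K = np.array([[2.0 + p, -1.0],
                  [-1.0, 2.0]])
    lam, X = np.linalg.eigh(K)        # M = identity here, so eigh(K) suffices
    return lam, X

p0 = 0.5
lam, X = modes(p0)
dK = np.array([[1.0, 0.0], [0.0, 0.0]])   # dK/dp (dM/dp = 0)

sens_analytic = np.array([X[:, i] @ dK @ X[:, i] for i in range(2)])

eps = 1e-6
sens_fd = (modes(p0 + eps)[0] - lam) / eps
```

These derivatives are exactly what a gradient-based optimizer needs to push adjacent natural frequencies apart without re-solving the eigenproblem at every trial design.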
Figueiredo, António José Pereira de. "Energy efficiency and comfort strategies for Southern European climate : optimization of passive housing and PCM solutions." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17291.
Full textPursuing holistic sustainable solutions towards the targets defined by the United Nations Framework Convention on Climate Change (UNFCCC) is a stimulating goal. Exploring this task leads to a broad number of possible combinations of energy-saving strategies, which can be bridged by the Passive House (PH) concept and, in this context, by the use of advanced materials such as Phase Change Materials (PCM). Acknowledging that the PH concept is well established and practiced mainly in the cold-climate countries of Northern and Central Europe, the present research investigates how the construction technology and energy demand levels can be adapted to Southern Europe, in particular to the climate of mainland Portugal. In Southern Europe, in addition to meeting heating requirements fairly easily, it is crucial to provide comfortable conditions during summer, owing to a high risk of overheating. The incorporation of PCMs into building solutions, using solar energy to drive their phase-change process, is a potential route to an overall reduction of energy consumption and overheating in buildings. The PH concept and the use of PCM need to be adapted and optimised to work together with other active and passive systems, improving the overall thermal behaviour of the building and reducing energy consumption. Thus, a hybrid evolutionary algorithm was used to optimise the application of the PH concept to the Portuguese climate, studying combinations of several building features as well as constructive solutions incorporating PCMs, minimising multi-objective benchmark functions to attain the defined goals.
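The evolutionary search driving the optimisation above can be sketched with a minimal (μ+λ) evolution strategy. The sphere function stands in for the building/PCM objective, and all strategy parameters (population sizes, step size, decay) are hypothetical, not those of the thesis's hybrid algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def objective(v):
    # Stand-in benchmark (sphere); the thesis instead scores candidate
    # building/PCM designs against comfort and energy objectives.
    return np.sum(v ** 2, axis=-1)

# Minimal (mu + lambda) evolution strategy with a decaying step size.
mu, lam, dim, sigma = 5, 20, 4, 0.3
pop = rng.uniform(-1.0, 1.0, size=(mu, dim))
for _ in range(200):
    parents = pop[rng.integers(0, mu, size=lam)]
    children = parents + sigma * rng.standard_normal((lam, dim))
    both = np.vstack([pop, children])
    pop = both[np.argsort(objective(both))][:mu]   # elitist survivor selection
    sigma *= 0.98

best = objective(pop[0])
print(best)
```

The elitist selection makes the best score non-increasing across generations, which is why such algorithms are robust on the black-box, multi-modal objectives typical of building simulation.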
Resmini, Andrea. "Analyse de sensibilité pour la simulation numérique des écoulements compressibles en aérodynamique externe." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066529.
Full textSensitivity analysis for the numerical simulation of compressible external-aerodynamics flows, with respect to mesh discretization and to parametric uncertainty in the model inputs, has been addressed 1) through adjoint-based gradient computation techniques and 2) through non-intrusive stochastic approximation methods based on sparse grids. 1) An enhanced goal-oriented mesh adaptation method is introduced, based on the total derivatives of aerodynamic functionals with respect to mesh coordinates, in a RANS finite-volume framework with mono-block and non-matching multi-block structured grids. Applications to 2D RANS flow about an airfoil in transonic and detached subsonic conditions for drag-coefficient estimation are presented; the benefit of the proposed method is clear. 2) The generalized Polynomial Chaos in its sparse pseudospectral form and stochastic collocation methods are considered, based on both isotropic and dimension-adapted sparse grids obtained through an improved dimension-adaptivity method driven by global sensitivity analysis. The efficiency of the stochastic approximations is assessed on multivariate test functions and on viscous airfoil aerodynamics simulations in the presence of geometrical and operational uncertainties. Integrating achievements 1) and 2) into a coupled approach in future work will pave the way for a well-balanced goal-oriented deterministic/stochastic error control.
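The non-intrusive pseudospectral construction in point 2) can be sketched in one dimension: project a model output onto Legendre polynomials by Gauss quadrature and read off the chaos coefficients and variance. Here f(ξ) = ξ² on a uniform input is a deliberately simple stand-in for an expensive aerodynamic code.

```python
import numpy as np

# Non-intrusive pseudospectral Legendre chaos for f(xi) = xi^2,
# xi ~ U(-1, 1): coefficients c_n = (2n+1)/2 * integral f(x) P_n(x) dx,
# computed by Gauss-Legendre quadrature (exact here for 8 nodes).
nodes, weights = np.polynomial.legendre.leggauss(8)
f = nodes ** 2

coeffs = []
for n in range(4):
    Pn = np.polynomial.legendre.Legendre.basis(n)(nodes)
    coeffs.append((2 * n + 1) / 2.0 * np.sum(weights * f * Pn))

# Exact expansion: x^2 = (1/3) P0 + (2/3) P2, and the output variance
# follows from the coefficients: Var = sum_{n>=1} c_n^2 / (2n+1).
var_pce = sum(c ** 2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
print(coeffs, var_pce)
```

In several dimensions the same projection is evaluated on a sparse grid instead of a tensor grid, and the coefficient magnitudes drive the dimension-adaptive refinement.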
Ngodock, Hans Emmanuel. "Assimilation de données et analyse de sensibilité : une application à la circulation océanique." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00005006.
Full text
Chabridon, Vincent. "Analyse de sensibilité fiabiliste avec prise en compte d'incertitudes sur le modèle probabiliste - Application aux systèmes aérospatiaux." Thesis, Université Clermont Auvergne (2017-2020), 2018. http://www.theses.fr/2018CLFAC054/document.
Full textAerospace systems are complex engineering systems whose reliability has to be guaranteed from an early design phase, especially given the tremendous damage and costs that any failure could induce. Moreover, the management of the various sources of uncertainty, whether impacting the behavior of the systems (“aleatory” uncertainty, due to the natural variability of physical phenomena) or their modeling and simulation (“epistemic” uncertainty, due to lack of knowledge and modeling choices), is a cornerstone of the reliability assessment of such systems. The uncertainty quantification methodology consists of several phases. Firstly, one needs to model and propagate uncertainties through the computer model, which is considered a “black box”. Secondly, a quantity of interest relevant to the goal of the study, here a failure probability, has to be estimated; for highly safe systems the sought failure probability is very low and may be costly to estimate. Thirdly, a sensitivity analysis of the quantity of interest can be set up in order to better identify and rank the influential sources of input uncertainty. The probabilistic modeling of the input variables (epistemic uncertainty) may strongly influence the failure probability estimate obtained during the reliability analysis, so the robustness of the estimate with respect to this type of uncertainty has to be investigated. This thesis addresses the problem of taking the probabilistic modeling uncertainty of the stochastic inputs into account: within the probabilistic framework, a “bi-level” input uncertainty has to be modeled and propagated along the different steps of the uncertainty quantification methodology.
In this thesis, the uncertainties are modeled within a Bayesian framework in which the lack of knowledge about the distribution parameters is characterized by the choice of a prior probability density function. In a first phase, after propagation of the bi-level input uncertainty, the predictive failure probability is estimated and used as the reliability measure in place of the standard failure probability. In a second phase, a local reliability-oriented sensitivity analysis based on score functions is carried out to study the impact of the prior's hyperparameters on the predictive failure probability estimate. Finally, a global reliability-oriented sensitivity analysis, based on Sobol indices of the indicator function adapted to the bi-level input uncertainty, is proposed. All the proposed methodologies are tested and challenged on a representative industrial aerospace test case simulating the fallout of an expendable space launcher.
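The predictive failure probability can be sketched with a two-level Monte Carlo loop: sample the distribution parameter from its prior, then the input given that parameter. The limit state, prior and variances below are hypothetical toy choices, not the launcher test case.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Bi-level input uncertainty: X ~ N(mu, 1) with an epistemic prior
# mu ~ N(0, 0.5^2). Toy limit state g(X) = 3 - X, failure when g < 0.
mu = 0.5 * rng.standard_normal(n)     # sample the hyper-parameter (prior)
x = mu + rng.standard_normal(n)       # then the input, given mu

pf_predictive = np.mean(3.0 - x < 0.0)

# A fixed-parameter analysis (mu = 0) ignores the epistemic layer:
x0 = rng.standard_normal(n)
pf_fixed = np.mean(3.0 - x0 < 0.0)
print(pf_predictive, pf_fixed)
```

Averaging over the prior inflates the tail, so the predictive probability exceeds the fixed-parameter one (here Φ(−3/√1.25) ≈ 3.7e-3 versus Φ(−3) ≈ 1.3e-3), which is exactly why hyperparameter sensitivity matters for rare-event estimates.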
Loubiere, Peio. "Amélioration des métaheuristiques d'optimisation à l'aide de l'analyse de sensibilité." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1051/document.
Full textHard optimization refers to a class of problems whose solutions cannot be found by an exact method of polynomial complexity; finding a solution in acceptable time requires compromises on its accuracy. Metaheuristics are high-level algorithms that solve this kind of problem. They are generic and efficient, i.e. they find an acceptable solution according to defined criteria such as time, error, etc. The first chapter of this thesis is partially dedicated to the state of the art on these issues, especially the study of two families of population-based metaheuristics: evolutionary algorithms and swarm-intelligence algorithms. In order to propose an innovative approach in the metaheuristics research field, sensitivity analysis is presented in the second part of this chapter. Sensitivity analysis aims at evaluating the influence of parameters on a function's response; it characterises the global behaviour of an objective function (linearity, non-linearity, influence, etc.) over its search space. Including a sensitivity analysis method in a metaheuristic enhances its search capability along the most promising dimensions. Two algorithms binding these two concepts are proposed in the second and third parts. In the first, ABC-Morris, the Morris method is included in the artificial bee colony algorithm; this integration is natural because of the similarity of their basic equations. With the aim of generalizing the approach, a new method is then developed and its generic integration is illustrated on two metaheuristics. The efficiency of the two methods is tested on the CEC 2013 benchmark. The study contains two steps: a usual performance analysis of the method on this benchmark against several state-of-the-art algorithms, and a comparison with the original version when influences are uneven, deactivating a subset of dimensions.
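The screening idea behind Morris-style sensitivity analysis can be sketched with one-at-a-time elementary effects: perturb one input at a time from random base points, then summarize each input by the mean absolute effect μ* (overall influence) and its spread σ (nonlinearity/interaction). The toy objective below is hypothetical, and the sampling is a simplified stand-in for full Morris trajectories.

```python
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    # Toy objective: x0 strong linear, x1 nonlinear, x2 inactive.
    return 2.0 * x[..., 0] + x[..., 1] ** 2 + 0.0 * x[..., 2]

# One-at-a-time elementary effects (simplified Morris screening).
r, delta, dim = 100, 0.1, 3
base = rng.random((r, dim)) * (1.0 - delta)   # keep base + delta inside [0, 1]
ee = np.empty((r, dim))
for i in range(dim):
    step = np.zeros(dim)
    step[i] = delta
    ee[:, i] = (f(base + step) - f(base)) / delta

mu_star = np.abs(ee).mean(axis=0)     # mean absolute elementary effect
sigma = ee.std(axis=0)                # spread flags nonlinearity/interaction
print(mu_star, sigma)
```

A metaheuristic can then concentrate its search moves on the dimensions with large μ*, which is the mechanism ABC-Morris exploits.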
Nanty, Simon. "Quantification des incertitudes et analyse de sensibilité pour codes de calcul à entrées fonctionnelles et dépendantes." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM043/document.
Full textThis work relates to uncertainty quantification for numerical simulators and, more precisely, studies two industrial applications linked to safety studies of nuclear plants. These two applications share several features. The first is that the computer-code inputs are functional and scalar variables, the functional ones being dependent. The second is that the probability distribution of the functional variables is known only through a sample of their realizations. The third, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work is to propose a complete methodology for the uncertainty analysis of numerical simulators in the two cases considered. First, we propose a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between the variables and their link to another variable, called a covariate, which could be, for instance, the output of the code under consideration. We then develop an adaptation of a visualization tool for functional data, which enables the uncertainties and features of dependent functional variables to be visualized simultaneously. Second, a method for the global sensitivity analysis of the codes used in the two studied cases is proposed. For a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable; the retained solution consists in building a surrogate model, or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables is developed to build the learning basis for the metamodel.
Finally, a new approximation approach for expensive codes with functional outputs is explored. In this approach, the code is seen as a stochastic code whose randomness is due to the functional variables, which are assumed uncontrollable. In this framework, several metamodels are developed and compared. All the methods proposed in this work are applied to the two nuclear-safety applications.
Grandjacques, Mathilde. "Analyse de sensibilité pour des modèles stochastiques à entrées dépendantes : application en énergétique du bâtiment." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT109/document.
Full textBuildings represent one of the main levers of action for optimizing energy efficiency and reducing CO2 emissions. To understand the energy consumption of a building, various studies have been conducted on thermal performance, from the point of view of design and model calibration as well as of climate-change impact. Following these studies, energy performance can be optimized by evaluating the degree of uncertainty due to each of the variables or parameters that may influence it; this stage is called sensitivity analysis. Most building studies in the literature are placed in a static framework that does not represent the evolution of the system: the variables whose sensitivity is studied are either considered at a given time, or the input-output models are not dynamic. It became necessary to develop methods that take into account both the dependence of the inputs and the temporal dimension, which itself always involves dependence. Among the different methods of sensitivity analysis, we have focused on the global method based on the calculation of Sobol sensitivity indices. The Sobol index of a parameter (or group of parameters) is an easily interpreted statistical indicator: it measures the importance of this parameter (or group) for the variability of a scalar quantity of interest derived from the model output. Sensitivity indices thus allow input parameters to be ranked according to their influence on the output. Sobol indices can be calculated in different ways; we focused on the sampling-based Pick-and-Freeze method. This method rests on a fundamental assumption that is often unverified in practice: the independence of the inputs. This led us to develop new statistical techniques that take into account inputs that are both dynamic and dependent, in time and at each instant. Our work focuses on methods that reduce the problem to the case of independent inputs.
Our concern was to model the inputs in a flexible way, easily transferable to other concrete situations and allowing relatively easy simulation. The input-output relationships are not important, the only constraint (of course not trivial) being that simulation be possible. In order to reproduce the temporal relationship between the variables, we chose to consider an index that depends, in the non-stationary case (especially if there are seasonal phenomena), on the time of calculation, and to quantify the variability of the output not only with respect to the variability of the input at time t, but also with respect to the same variability at previous times. This point of view allows the introduction of the concept of usable memory for the calculation of the sensitivity. The second method that we have developed is an estimation method for Sobol indices for static inputs that are dependent a priori. It may nevertheless be implemented for dynamic inputs with short memory, but the calculations become very heavy when the number of inputs is large or the memories are long. This method allows dependent variables of any law to be transformed into independent, uniformly distributed variables. Easy to implement, these estimation methods are not based on assumptions of input independence, which allows a wide range of applications. Applied to an existing building, this method can help improve energy management and can be useful at the design stage for the implementation of scenarios. We were able to exhibit different situations by analysing the ordering of the variables according to their sensitivities, from measurements on a test building. Two criteria were studied: a comfort criterion (the study of indoor temperature) and a performance criterion (the heating energy)
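The Pick-and-Freeze sampling scheme discussed in this abstract can be sketched in a few lines for the independent-input case it assumes; the toy additive model, uniform inputs and sample size below are illustrative choices of ours, not taken from the thesis:

```python
import numpy as np

def pick_freeze_sobol(model, n, d, rng):
    """First-order Sobol indices via the Pick-and-Freeze scheme.

    For each input i, a second sample 'freezes' column i and redraws
    ('picks') all other columns; the index is then estimated as
    Cov(Y, Y_i) / Var(Y).  Inputs are assumed independent U(0, 1).
    """
    a = rng.random((n, d))   # base sample
    b = rng.random((n, d))   # independent resample
    y = model(a)
    indices = np.empty(d)
    for i in range(d):
        mixed = b.copy()
        mixed[:, i] = a[:, i]          # freeze input i, pick the others
        y_i = model(mixed)
        cov = np.mean(y * y_i) - np.mean(y) * np.mean(y_i)
        indices[i] = cov / np.var(y)
    return indices

# Toy additive model: Y = X1 + 2*X2 + 3*X3; for independent U(0, 1)
# inputs the exact first-order indices are 1/14, 4/14 and 9/14.
model = lambda x: x[:, 0] + 2 * x[:, 1] + 3 * x[:, 2]
rng = np.random.default_rng(0)
s = pick_freeze_sobol(model, 200_000, 3, rng)
print(np.round(s, 2))
```

The scheme costs n*(d+1) model evaluations, which is what makes the input-independence assumption, and the thesis's relaxation of it, matter in practice.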
Vasconcelos, Fillipe Matos de. "Estudo de reativos em sistemas de distribuição de energia elétrica." Universidade de São Paulo, 2012. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-25042012-163719/.
Full textThis work uses nonlinear optimization methods to develop an efficient methodology for allocating capacitor banks so as to eliminate voltage violations in distribution networks. Capacitors connected in parallel to electric power systems are commonly employed to obtain better control of power flow, voltage profile management, power factor correction and loss minimization. To achieve these benefits, the methodology of this work solves a nonlinear programming problem associated with a linear approximation of voltage variations versus reactive power variations, calculating the optimal number, location and size of capacitor banks along distribution lines. The objective is thus to minimize reactive power injection and reduce losses, subject to meeting the operating and loading constraints. The results are evaluated with GAMS (General Algebraic Modeling System), with Matlab and with a program written in FORTRAN, making it possible to analyze and describe the contributions of this work, which addresses a topic of great relevance to the operation and expansion planning of electric power systems.
Mensi, Amira. "Analyse des pointeurs pour le langage C." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2013. http://pastel.archives-ouvertes.fr/pastel-00944703.
Full textDerennes, Pierre. "Mesures de sensibilité de Borgonovo : estimation des indices d'ordre un et supérieur, et application à l'analyse de fiabilité." Thesis, Toulouse 3, 2019. http://www.theses.fr/2019TOU30039.
Full textIn many disciplines, a complex system is modeled by a black-box function whose purpose is to mimic the real system's behavior. The system is then represented by an input-output model, i.e., a relationship between the output Y (the observation made on the system) and a set of external parameters Xi (typically representing physical variables). These parameters are usually assumed to be random in order to take phenomenological uncertainties into account. Global sensitivity analysis (GSA) then plays a crucial role in the handling of these uncertainties and in the understanding of the system's behavior. This study is based on the estimation of importance measures, which aim at identifying and ranking the different inputs with respect to their influence on the model output. Variance-based sensitivity indices are among the most widely used GSA measures. They are based on Sobol indices, which express the share of the variance of the output that is due to a given input or input combination. However, by definition they only study the impact on the second-order moment of the output, which may be a restrictive representation of the whole output distribution. The central subject of this thesis is an alternative method, introduced by Emanuele Borgonovo, which is based on the analysis of the whole output distribution. Borgonovo's importance measures have very convenient properties that justify their recent gain of interest, but their estimation is a challenging task. Indeed, the initial definition of Borgonovo's indices involves the unconditional and conditional densities of the model output, which are unfortunately unknown in practice. Thus, the first proposed methods led to a high computational burden, especially since the black-box function may be very costly to evaluate. 
The first contribution of this thesis consists in proposing new methodologies for estimating first-order Borgonovo importance measures, which quantify the influence of a scalar input Xi on the output Y. First, we adopt the reinterpretation of the Borgonovo indices in terms of a measure of dependence, i.e., as a distance between the joint density of Xi and Y and the product of the marginal densities. In addition, we develop an estimation procedure combining importance sampling and Gaussian kernel approximation of the output density and the joint density. This approach allows the computation of all first-order Borgonovo indices with a low simulation budget, independent of the model dimension. However, Gaussian kernel estimation may provide inaccurate estimates for heavy-tailed distributions. To overcome this problem, we consider an alternative definition of the Borgonovo indices based on the copula formalism
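A crude version of the kernel-based estimation of a first-order Borgonovo index described above might look as follows. The equal-probability slicing of the input, the toy linear model and the sample sizes are our own illustrative choices, not the thesis's estimator:

```python
import numpy as np
from scipy.stats import gaussian_kde

def borgonovo_delta(x, y, n_slices=20, grid_size=200):
    """Crude first-order Borgonovo delta estimate for one input.

    Partitions the input into equal-probability slices, fits Gaussian
    kernel densities to the full output sample and to each conditional
    slice, and averages half the L1 distance between them.
    """
    grid = np.linspace(y.min(), y.max(), grid_size)
    dy = grid[1] - grid[0]
    f_y = gaussian_kde(y)(grid)           # unconditional output density
    edges = np.quantile(x, np.linspace(0, 1, n_slices + 1))
    delta = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        f_cond = gaussian_kde(y[mask])(grid)   # conditional density
        delta += 0.5 * np.sum(np.abs(f_y - f_cond)) * dy / n_slices
    return delta

# Toy model: Y depends strongly on X1 and not at all on X2, so the
# delta index of X1 should be large and that of X2 close to zero.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=50_000), rng.normal(size=50_000)
y = 4 * x1 + rng.normal(scale=0.5, size=50_000)
d1, d2 = borgonovo_delta(x1, y), borgonovo_delta(x2, y)
print(d1, d2)
```

Note the Gaussian-kernel weakness mentioned in the abstract: on heavy-tailed outputs this kind of estimator degrades, which motivates the copula-based alternative.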
Machala, Dawid. "Comportement d'un projectile en vol libre : modélisation LPV et analyse de sensibilité." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0165.
Full textFree-flight experiments can be considered a realistic source of aerodynamic knowledge, since the behaviour of a projectile is observed under real flight conditions. However, the experimental framework introduces several sources of uncertainty: the initial flight conditions are unknown, and only an initial guess of the projectile's aerodynamics is available. Additionally, the nonlinear model structure used in the identification of aerodynamic behaviour differs from the structures typically used in guidance and navigation activities, which necessitates switching between model structures depending on the task at hand. This setting lays the groundwork for the main contributions developed in the thesis: • A novel model structure is proposed: the nonlinear model equations are transformed into a quasi-LPV structure in a non-rolling reference frame. The new structure resembles those used in guidance and navigation activities while preserving the projectile's nonlinear behaviour. It is also much faster in computation time for spin-stabilised projectiles, making it more efficient for simulations. • The influence of the aforementioned uncertainties on the quasi-LPV model was assessed using global sensitivity analysis: it allowed us to determine which parameters of the model can be deemed non-identifiable, and to gain further insight into the physical behaviour of the quasi-LPV model. These developments could be used in future safety analyses preceding flight tests
Picherit, Marie-Lou. "Evaluation environnementale du véhicule électrique : méthodologies et application." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2010. http://tel.archives-ouvertes.fr/tel-00666955.
Full textCaniou, Yann. "Analyse de sensibilité globale pour les modèles de simulation imbriqués et multiéchelles." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00864175.
Full textDobre, Simona. "Analyses de sensibilité et d'identifiabilité globales : application à l'estimation de paramètres photophysiques en thérapie photodynamique." Phd thesis, Université Henri Poincaré - Nancy I, 2010. http://tel.archives-ouvertes.fr/tel-00550527.
Full textKouassi, Attibaud. "Propagation d'incertitudes en CEM. Application à l'analyse de fiabilité et de sensibilité de lignes de transmission et d'antennes." Thesis, Université Clermont Auvergne (2017-2020), 2017. http://www.theses.fr/2017CLFAC067/document.
Full textNowadays, most EMC analyses of electronic or electrical devices are based on deterministic approaches, in which the parameters of the internal and external models are supposed to be known and the uncertainties on those parameters are taken into account on the outputs by defining very large safety margins. The disadvantage of such approaches is their conservative character and their limitation: dealing with the parameters' uncertainties through appropriate stochastic modeling (via random variables, processes or fields) is required, in agreement with the goal of the study. In recent years, this probabilistic approach has been the subject of several research efforts in the EMC community. The work presented here is a contribution to this research and has a dual purpose: (1) develop a probabilistic methodology and implement the associated numerical tools for the reliability and sensitivity analyses of electronic devices and systems, assuming stochastic modeling via random variables; (2) extend this study to stochastic modeling using random processes and random fields through a prospective analysis based on the resolution of the telegrapher's equations (partial differential equations) with random coefficients. The first probabilistic approach consists in computing the failure probability of an electronic device or system according to a given criterion and in determining the relative importance of each random parameter considered. The methods chosen for this purpose are adaptations to the EMC framework of methods developed in the structural mechanics community for uncertainty propagation studies. The failure probability computation is performed using two types of methods: those based on an approximation of the limit-state function associated with the failure criterion, and Monte Carlo methods based on the simulation of the model's random variables and the statistical estimation of the target failure probabilities. 
For the sensitivity analysis, a local approach and a global approach are retained. All these methods are first applied to academic EMC problems in order to illustrate their interest in the EMC field. Next, they are applied to transmission-line and antenna problems closer to reality. In the prospective analysis, more advanced resolution methods are proposed. They are based on spectral approaches requiring polynomial chaos expansions and Karhunen-Loève expansions of the random processes and random fields considered in the models. Although the first numerical tests of these methods have been promising, they are not presented here for lack of time for a complete analysis
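The crude Monte Carlo estimation of a failure probability mentioned in this abstract can be sketched as follows; the limit-state function, threshold and input distributions below are invented for illustration and are not from the thesis:

```python
import numpy as np

def mc_failure_probability(limit_state, sampler, n):
    """Crude Monte Carlo estimate of P(g(X) <= 0) with its standard error.

    Failure is the event g(X) <= 0; the estimator is the empirical
    failure frequency, with a binomial standard error.
    """
    x = sampler(n)
    fail = limit_state(x) <= 0.0
    p = fail.mean()
    se = np.sqrt(p * (1 - p) / n)
    return p, se

# Hypothetical limit state: failure when an induced disturbance
# 3*X1 + 2*X2 exceeds a susceptibility threshold of 12 (illustrative
# coefficients; X1, X2 ~ N(1, 1) independent).
g = lambda x: 12.0 - (3 * x[:, 0] + 2 * x[:, 1])
sampler = lambda n: np.random.default_rng(2).normal(1.0, 1.0, size=(n, 2))
p, se = mc_failure_probability(g, sampler, 1_000_000)
print(p, se)
```

For this Gaussian case the exact value is P(Z > 7/sqrt(13)) ≈ 0.026, which is why the limit-state approximation methods cited above matter: crude Monte Carlo needs very large samples once failure probabilities become small.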
Raillon, Loic. "Experimental identification of physical thermal models for demand response and performance evaluation." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEI039.
Full textThe European Union's strategy for achieving the climate targets is to progressively increase the share of renewable energy in the energy mix and to use energy more efficiently from production to final consumption. This requires measuring the energy performance of buildings and associated systems, independently of weather conditions and user behavior, to provide efficient and adapted retrofitting solutions. It also requires knowing the energy demand in order to anticipate energy production and storage (demand response). The estimation of building energy demand and the estimation of the energy performance of buildings share a common scientific challenge: the experimental identification of a physical model of the building's intrinsic behavior. Grey-box models, determined from first principles, and black-box models, determined heuristically, can describe the same physical process. Relations between the physical and mathematical parameters exist if the black-box structure is chosen so that it matches the physical one. To find the best model representation, we propose to use Monte Carlo simulations for analyzing the propagation of errors in the different model transformations, and factor prioritization for ranking the parameters according to their influence. The results obtained show that identifying the parameters on the state-space representation is the better choice. Nonetheless, physical information determined from the estimated parameters is reliable only if the model structure is invertible and the data are informative enough. We show how an identifiable model structure can be chosen, notably thanks to the profile likelihood. Experimental identification consists of three phases: model selection, identification and validation. These three phases are detailed on a real house experiment using frequentist and Bayesian frameworks. 
More specifically, we propose an efficient Bayesian calibration to estimate the parameter posterior distributions, which allows simulation that takes all the uncertainties into account, as is suitable for model predictive control. We have also studied the capabilities of sequential Monte Carlo methods for estimating the states and parameters simultaneously. An adaptation of the recursive prediction error method to a sequential Monte Carlo framework is proposed and compared to a method from the literature. Sequential methods can be used to provide a first model fit and insights on the selected model structure while the data are being collected. Afterwards, the first model fit can be refined if necessary by using iterative methods on the batch of data
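A minimal example of the kind of grey-box thermal model structure discussed in these two abstracts is a first-order R-C network for a single zone; the parameter values below are illustrative, not identified from data:

```python
import numpy as np

def simulate_rc(r, c, t_out, q_heat, t0, dt=3600.0):
    """Simulate a first-order R-C grey-box zone model.

    State equation:  C * dT/dt = (T_out - T)/R + Q_heat,
    discretised here with an explicit Euler step of length dt seconds.
    """
    temps = [t0]
    for t_o, q in zip(t_out, q_heat):
        t = temps[-1]
        temps.append(t + dt / c * ((t_o - t) / r + q))
    return np.array(temps)

# Illustrative parameters: R in K/W, C in J/K (not identified values).
r, c = 0.01, 1.0e7
t_out = np.full(48, 0.0)      # 48 h of 0 degC outdoor temperature
q_heat = np.full(48, 1500.0)  # constant 1.5 kW heating power
t_in = simulate_rc(r, c, t_out, q_heat, t0=20.0)
print(t_in[-1])
```

With these values the indoor temperature relaxes toward T_out + R*Q = 15 degC with time constant R*C ≈ 28 h. In a grey-box setting R and C would be the physical parameters to be identified from measured data, which is where identifiability and informative data become decisive.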
Louboutin, Etienne. "Sensibilité de logiciels au détournement de flot de contrôle." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2021. http://www.theses.fr/2021IMTA0230.
Full textThe security of software can be taken into account right from the design stage. This approach, called security by design, makes it possible to influence the architecture of a software program as early as possible. Protections against control-flow hijacking, such as return-oriented programming, are not designed to change the way software is designed: they often aim to protect a program either during its compilation or by working directly on the binary produced. In this thesis, we propose metrics allowing a developer to evaluate the sensitivity of a software program to control-flow hijacking attacks. To ease development, the metrics defined make it possible to identify the parameters used in the production of a software binary that result in increased sensitivity to these attacks. The use of these metrics is illustrated in this thesis by studying the influence of compilers and their options, languages and hardware architectures
Zhu, Yueying. "Investigation on uncertainty and sensitivity analysis of complex systems." Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA1021/document.
Full textBy means of a Taylor series expansion, a general analytic formula is derived to characterise the propagation of uncertainty from the input variables to the model response, assuming input independence. Using power-law and exponential functions, it is shown that the widely used approximation considering only the first-order contribution of input uncertainty is sufficiently good only when the input uncertainty is negligible or the underlying model is almost linear. This method is then applied to a power grid system and to the EOQ model. The method is also extended to the correlated case. With the extended method, it is straightforward to identify the importance of input correlations in the model response. This allows one to determine whether or not the input correlations should be considered in practical applications. Numerical examples suggest the effectiveness and validity of our method for general models, as well as for specific ones such as the deterministic HIV model. The method is then compared to Sobol's, implemented with a sampling-based strategy. Results show that, compared to our method, it may overvalue the roles of individual input factors but underestimate those of their interaction effects when there are nonlinear coupling terms between input factors. A modification is then introduced, helping understand the difference between our method and Sobol's. Finally, a numerical model is designed based on a virtual gambling mechanism, regarding the formation of opinion dynamics. A theoretical analysis is proposed using the one-at-a-time method, while a sampling-based method provides a global analysis of output uncertainty and sensitivity
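The gap between first-order and higher-order Taylor propagation described above can be illustrated on a power-law model; the second-order variance formula below assumes a Gaussian input, and the specific model y = x^3 is our own example, not one from the thesis:

```python
import numpy as np

def taylor_variance(df, d2f, mu, sigma):
    """First- and second-order Taylor estimates of Var[f(X)], X ~ N(mu, sigma^2).

    First order:   Var ≈ f'(mu)^2 * sigma^2
    Second order:  Var ≈ f'(mu)^2 * sigma^2 + 0.5 * f''(mu)^2 * sigma^4
    (the sigma^4 term is the leading correction for a Gaussian input).
    """
    v1 = df(mu) ** 2 * sigma ** 2
    v2 = v1 + 0.5 * d2f(mu) ** 2 * sigma ** 4
    return v1, v2

# Power-law model y = x^3 around mu = 1: strongly nonlinear at this scale.
f = lambda x: x ** 3
df = lambda x: 3 * x ** 2
d2f = lambda x: 6 * x
mu, sigma = 1.0, 0.3

v1, v2 = taylor_variance(df, d2f, mu, sigma)
x = np.random.default_rng(3).normal(mu, sigma, 2_000_000)
v_mc = np.var(f(x))   # Monte Carlo reference
print(v1, v2, v_mc)   # the first-order value underestimates the reference
```

Here the first-order estimate (0.81) falls well short of the true variance (≈ 1.11), and even the second-order correction only closes part of the gap, consistent with the abstract's point that the first-order approximation is adequate only for near-linear models or small input uncertainty.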
Benoumechiara, Nazih. "Traitement de la dépendance en analyse de sensibilité pour la fiabilité industrielle." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS047.
Full textStructural reliability studies use probabilistic approaches to quantify the risk of an accidental event occurring. The dependence between the random input variables of a model can have a significant impact on the results of the reliability study. This thesis contributes to the treatment of dependence in structural reliability studies. The two main topics covered in this document are sensitivity analysis for dependent variables when the dependence is known, and the assessment of a reliability risk when the dependence is unknown. First, we propose an extension of the permutation-based importance measures of the random forest algorithm to the case of dependent data. We also adapt the Shapley index estimation algorithm, borrowed from game theory, to take the index estimation error into account. Second, when the dependence structure is unknown, we propose a conservative estimate of the reliability risk based on dependence modelling to determine the most penalizing dependence structure. The proposed methodology is applied to a structural reliability example to obtain a conservative estimate of the risk
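A bare-bones version of the permutation-based importance measure that the thesis extends can be sketched as follows; here a known regression function stands in for a fitted random forest, and the inputs are independent, which is precisely the setting the thesis moves beyond:

```python
import numpy as np

def permutation_importance(predict, x, y, rng, n_repeats=10):
    """Permutation importance: increase in MSE when one column is shuffled.

    With dependent inputs this measure can mislead, because shuffling one
    column breaks its correlation with the others -- the issue addressed
    by the thesis.
    """
    base = np.mean((predict(x) - y) ** 2)
    imps = np.zeros(x.shape[1])
    for i in range(x.shape[1]):
        for _ in range(n_repeats):
            xp = x.copy()
            xp[:, i] = rng.permutation(xp[:, i])
            imps[i] += np.mean((predict(xp) - y) ** 2) - base
    return imps / n_repeats

# Toy data: y depends on columns 0 and 1, not on column 2.
rng = np.random.default_rng(4)
x = rng.normal(size=(20_000, 3))
y = 2 * x[:, 0] + x[:, 1] + rng.normal(scale=0.1, size=20_000)
predict = lambda x: 2 * x[:, 0] + x[:, 1]   # the true regression function
imp = permutation_importance(predict, x, y, rng)
print(np.round(imp, 2))
```

For this linear model the expected importances are about 8, 2 and 0 (twice the variance each term contributes); note the measure says nothing about how to attribute importance fairly when the columns are correlated, which is where Shapley-type indices come in.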
Alhossen, Iman. "Méthode d'analyse de sensibilité et propagation inverse d'incertitude appliquées sur les modèles mathématiques dans les applications d'ingénierie." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30314/document.
Full textApproaches for studying uncertainty are of great necessity in all disciplines. While the forward propagation of uncertainty has been investigated extensively, backward propagation is still understudied. In this thesis, a new method for the backward propagation of uncertainty is presented. The aim of this method is to determine the input uncertainty starting from the given data of the uncertain output. In parallel, sensitivity analysis methods are also of great necessity for revealing the influence of the inputs on the output in any modeling process. This helps in identifying the most significant inputs to be carried forward into an uncertainty study. In this work, the Sobol sensitivity analysis method, one of the most efficient global sensitivity analysis methods, is considered and its application framework is developed. This method relies on the computation of sensitivity indices, called Sobol indices, which quantify the effect of the inputs on the output. Usually, inputs in the Sobol method are considered to vary as continuous random variables in order to compute the corresponding indices. In this work, the Sobol method is demonstrated to give reliable results even when applied in the discrete case. In addition, a further advancement in the application of the Sobol method is made by studying the variation of these indices with respect to some factors of the model or some experimental conditions. The consequences and conclusions derived from the study of this variation help in determining different characteristics of, and information about, the inputs. Moreover, these inferences make it possible to indicate the best experimental conditions under which estimation of the inputs can be done
Rousseau, Marie. "Propagation d'incertitudes et analyse de sensibilité pour la modélisation de l'infiltration et de l'érosion." Phd thesis, Université Paris-Est, 2012. http://pastel.archives-ouvertes.fr/pastel-00788360.
Full textAmstutz, Samuel. "Analyse de sensibilité topologique et applications en optimisation de formes." Habilitation à diriger des recherches, Université d'Avignon, 2011. http://tel.archives-ouvertes.fr/tel-00736647.
Full textGurevsky, Evgeny. "Conception de lignes de fabrication sous incertitudes : analyse de sensibilité et approche robuste." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00820619.
Full textLiu, Yuan. "Analyse de sensibilité et estimation de l'humidité du sol à partir de données radar." Thesis, Strasbourg, 2016. http://www.theses.fr/2016STRAD032/document.
Full textElectromagnetic wave scattering from a randomly rough surface is of palpable importance in many disciplines and appears in various applications spanning from surface treatment to remote sensing of terrain and sea. By knowing the backscattering patterns, one may detect the presence of undesired random roughness of a reflecting surface, such as an antenna reflector, and accordingly devise a means to correct or compensate the phase errors. It is therefore both theoretically and practically necessary to study electromagnetic wave scattering from random surfaces. This dissertation focuses on the retrieval of surface soil moisture from radar measurements. The description of the randomly rough surface is presented, followed by the electromagnetic wave interactions with the media. In particular, an advanced integral equation model (AIEM) is introduced. The validity of the AIEM model, which is adopted as the working model, is established by extensive comparison with numerical simulations and experimental data. The dissertation also analyzes the characteristics of bistatic radar configurations and dissects the sensitivity of bistatic scattering to soil moisture and surface roughness. It then presents a framework for soil moisture retrieval from radar measurements using a recurrent Kalman filter-based neural network. The network training and data inversion are described in detail
Liu, Xing. "Modélisation, analyse et optimisation de la résilience des infrastructures critiques interdépendantes." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC080.
Full textResilience is the ability of a system to resist and recover from disruptive events. The objective of this thesis is to build a framework for the analysis and optimization of the resilience of interconnected critical infrastructures (ICIs). The original scientific contributions of this work include: 1) a generic modeling approach to describe the dynamic behavior and the physical cascading failure processes in ICIs; 2) on the basis of this model, a quantitative resilience assessment approach for ICIs, in which both the mitigation and recovery aspects of system resilience are evaluated; 3) in order to reduce the computational cost for large-scale systems, three different global sensitivity analysis methods (ANN estimation, ensemble-based, given-data estimation) implemented to identify the most relevant model parameters affecting system resilience, together with a comparison of their performance; 4) a hierarchical model developed to characterize the factors of resilience improvement strategies. A multi-objective optimization problem is formulated and solved by the NSGA-II algorithm to provide the optimal plan for system resilience improvement. The proposed methods are applied to case studies, e.g., a gas supply network and an electrical power grid
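A common area-based resilience metric, consistent with the mitigation-and-recovery view above, can be sketched in a few lines; the performance trajectory below is invented for illustration:

```python
import numpy as np

def resilience(t, performance, target=1.0):
    """Area-based resilience: observed performance integrated over the
    disruption window (trapezoidal rule), normalised by the area the
    system would have delivered at the target performance level."""
    dt = np.diff(t)
    avg = 0.5 * (performance[:-1] + performance[1:])
    return np.sum(avg * dt) / (target * (t[-1] - t[0]))

# Illustrative event: full service, a sudden 40% drop at t = 10
# (duplicated time point), then linear recovery from t = 20 to t = 40.
t = np.array([0.0, 10, 10, 20, 40])
p = np.array([1.0, 1.0, 0.6, 0.6, 1.0])
r_val = resilience(t, p)
print(r_val)  # -> 0.8: the system delivered 80% of target service
```

Mitigation strategies raise the depth of the trough, recovery strategies shorten it; both show up in this single scalar, which is what makes it a convenient objective for the optimization stage.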
Saint-Geours, Nathalie. "Analyse de sensibilité de modèles spatialisés : application à l'analyse coût-bénéfice de projets de prévention du risque d'inondation." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20203/document.
Full textVariance-based global sensitivity analysis is used to study how the variability of the output of a numerical model can be apportioned to the different sources of uncertainty in its inputs. It is an essential component of model building, as it helps to identify the model inputs that account for most of the model output variance. However, this approach is seldom applied in the Earth and Environmental Sciences, partly because most of the numerical models developed in this field include spatially distributed inputs or outputs. Our research work aims to show how global sensitivity analysis can be adapted to such spatial models, and more precisely how to cope with the following two issues: i) the presence of spatial auto-correlation in the model inputs, and ii) scaling issues. We base our research on a detailed study of the numerical code NOE, a spatial model for cost-benefit analysis of flood risk management plans. We first investigate how variance-based sensitivity indices can be computed for spatially distributed model inputs. We focus on the "map labelling" approach, which can handle any complex spatial structure of uncertainty in the model inputs and assess its effect on the model output. Next, we explore how scaling issues interact with the sensitivity analysis of a spatial model. We define "block sensitivity indices" and "site sensitivity indices" to account for the role of the spatial support of the model output. We establish the properties of these sensitivity indices under specific conditions. In particular, we show that the relative contribution of an uncertain spatially distributed model input to the variance of the model output increases with its correlation length and decreases with the size of the spatial support considered for model output aggregation. 
By applying our results to the NOE modelling chain, we also draw a number of lessons to better deal with uncertainties in flood damage modelling and in the cost-benefit analysis of flood risk management plans