
Dissertations / Theses on the topic 'Uncertain structural processes'



Consult the top 16 dissertations / theses for your research on the topic 'Uncertain structural processes.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Freitag, Steffen, Wolfgang Graf, and Michael Kaliske. "Prognose des Langzeitverhaltens von Textilbeton-Tragwerken mit rekurrenten neuronalen Netzen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1244048026002-79164.

Full text
Abstract:
To predict the long-term behaviour of structures strengthened with textile-reinforced concrete, a model-free approach based on recurrent neural networks is presented. The approach enables the prediction of time-dependent structural responses while taking the entire loading history into account. Recurrent neural networks are trained with uncertain (fuzzy) quantities obtained from measurements on test structures. The uncertain prediction of the load-bearing behaviour is then possible.
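The abstract describes a model-free prediction of time-dependent structural responses from the full loading history using recurrent neural networks trained on uncertain measured data. The following is a minimal illustrative sketch, not the authors' implementation: an Elman-style recurrent forward pass in NumPy with made-up weights, where interval bounds on the load stand in for the fuzzy input quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: scalar load input, small hidden state, scalar response.
n_in, n_hidden, n_out = 1, 8, 1

# Randomly initialised weights stand in for weights learned from fuzzy test data.
W_xh = rng.normal(scale=0.3, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.3, size=(n_out, n_hidden))

def predict_response(load_history):
    """Run an Elman-style recurrent network over the full load history."""
    h = np.zeros(n_hidden)
    outputs = []
    for u in load_history:                      # the whole loading history is fed in sequence
        h = np.tanh(W_xh @ np.atleast_1d(u) + W_hh @ h)
        outputs.append(W_hy @ h)
    return np.array(outputs).ravel()

# Interval-style "fuzzy" input: propagate lower and upper load bounds separately.
t = np.linspace(0.0, 10.0, 50)
load_mid = np.sin(t)
for bound in (load_mid - 0.1, load_mid + 0.1):
    print(predict_response(bound)[-1])
```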
2

Zager, Laura (Laura A. ). "Infection processes on networks with structural uncertainty." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45616.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (p. 167-175).
Over the last ten years, the interest in network phenomena and the potential for a global pandemic have produced a tremendous volume of research exploring the consequences of human interaction patterns for disease propagation. The research often focuses on a single question: will an emerging infection become an epidemic? This thesis clarifies the relationships among different epidemic threshold criteria in deterministic disease models, and discusses the role and meaning of the basic reproductive ratio, R0. We quantify the incorporation of population structure into this general framework, and identify conditions under which interaction topology and infection characteristics can be decoupled in the computation of threshold functions, which generalizes many existing results in the literature. This decoupling allows us to focus on the impact of network topology via the spectral radius of the adjacency matrix of the network. It is rare, however, that one has complete information about every potential disease-transmitting interaction; this uncertainty in the network structure is often ignored in deterministic models. Neglecting this uncertainty can lead to an underestimate of R0, an unacceptable outcome for public health planning. Is it possible to make guarantees and approximations regarding disease spread when only partial information about the routes of transmission is known? We present methods for making predictions about disease spread over uncertain networks, including approximation techniques and bounding results obtained via spectral graph theory, and illustrate these results on several data sets. We also approach this problem by using simulation and analytical work to characterize the spectral radii that arise from members of the exponential random graph family, commonly used to model empirical networks in quantitative sociology. Finally, we explore several issues in the spatiotemporal patterns of epidemic propagation through a network, focusing on the behavior of the contact process and the influence model.
by Laura A. Zager.
Ph.D.
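The abstract notes that, once topology and infection characteristics are decoupled, the network enters the threshold condition only through the spectral radius of its adjacency matrix. The sketch below illustrates that idea under an assumed SIS-type criterion, (beta/delta)*rho(A) < 1, on a randomly generated network; the rates and the network are illustrative, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical contact network: Erdos-Renyi adjacency matrix on 50 nodes.
n, p = 50, 0.08
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric, no self-loops

spectral_radius = max(abs(np.linalg.eigvalsh(A)))

# Standard SIS-type condition: the infection dies out if (beta/delta) * rho(A) < 1.
beta, delta = 0.05, 0.4                        # assumed per-contact infection and recovery rates
threshold_quantity = (beta / delta) * spectral_radius
print(f"rho(A) = {spectral_radius:.3f}, (beta/delta)*rho(A) = {threshold_quantity:.3f}")
print("epidemic expected" if threshold_quantity > 1 else "infection dies out")
```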
3

Reimer, Jody. "Effective design of marine reserves : incorporating alongshore currents, size structure, and uncertainty." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:8a5e72cb-6bc9-4ef3-a991-2cc934b228fb.

Full text
Abstract:
Marine populations worldwide are in decline due to anthropogenic effects. Spatial management via marine reserves may be an effective conservation method for many species, but the requisite theory is still underdeveloped. Integrodifference equation (IDE) models can be used to determine the critical domain size required for persistence and provide a modelling framework suitable for many marine populations. Here, we develop a novel spatially implicit approximation for the proportion of individuals lost outside the reserve areas, which consistently outperforms the most common approximation. We examine how results using this approximation compare to the existing IDE results on the critical domain size for populations in a single reserve, in a network of reserves, in the presence of alongshore currents, and in structured populations. We find that the approximation consistently provides results which are in close agreement with those of an IDE model with the advantage of being simpler to convey to a biological audience while providing insights into the significance of certain model components. We also design a stochastic individual-based model (IBM) to explore the probability of extinction for a population within a reserve area. We use our spatially implicit approximation to estimate the proportion of individuals which disperse outside the reserve area. We then use this approximation to obtain results on extinction using two different approaches, which we can compare to the baseline IBM; the first approach is based on the Central Limit Theorem and provides efficient simulation results, and the second modifies a simple Galton-Watson branching process to include loss outside the reserve area. We find that this spatially implicit approximation is also effective in obtaining results similar to those produced by the IBM in the presence of both demographic and environmental variability. Overall, this provides a set of complementary methods for predicting the reserve area required to sustain a population in the presence of strong fishing pressure in the surrounding waters.
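One of the approaches described above modifies a Galton-Watson branching process to include loss of dispersers outside the reserve. A minimal sketch of that idea, with made-up offspring and retention parameters, is given below: each offspring is kept only with the probability of settling inside the reserve, and extinction is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)

def extinction_probability(mean_offspring, p_retained, generations=50, trials=20_000):
    """Monte Carlo estimate of extinction for a Galton-Watson process with dispersal loss.

    Each individual produces a Poisson number of offspring; each offspring stays
    inside the reserve (and survives) with probability p_retained.
    """
    extinct = 0
    for _ in range(trials):
        n = 1                                          # start from a single settler
        for _ in range(generations):
            if n == 0:
                break
            offspring = rng.poisson(mean_offspring, size=n).sum()
            n = rng.binomial(offspring, p_retained)    # loss outside the reserve
        extinct += (n == 0)
    return extinct / trials

# Illustrative values: mean 1.6 recruits per adult, 75% of dispersers retained.
print(extinction_probability(mean_offspring=1.6, p_retained=0.75))
```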
4

Herman, Joseph L. "Multiple sequence analysis in the presence of alignment uncertainty." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:88a56d9f-a96e-48e3-b8dc-a73f3efc8472.

Full text
Abstract:
Sequence alignment is one of the most intensely studied problems in bioinformatics, and is an important step in a wide range of analyses. An issue that has gained much attention in recent years is the fact that downstream analyses are often highly sensitive to the specific choice of alignment. One way to address this is to jointly sample alignments along with other parameters of interest. In order to extend the range of applicability of this approach, the first chapter of this thesis introduces a probabilistic evolutionary model for protein structures on a phylogenetic tree; since protein structures typically diverge much more slowly than sequences, this allows for more reliable detection of remote homologies, improving the accuracy of the resulting alignments and trees, and reducing sensitivity of the results to the choice of dataset. In order to carry out inference under such a model, a number of new Markov chain Monte Carlo approaches are developed, allowing for more efficient convergence and mixing on the high-dimensional parameter space. The second part of the thesis presents a directed acyclic graph (DAG)-based approach for representing a collection of sampled alignments. This DAG representation allows the initial collection of samples to be used to generate a larger set of alignments under the same approximate distribution, enabling posterior alignment probabilities to be estimated reliably from a reasonable number of samples. If desired, summary alignments can then be generated as maximum-weight paths through the DAG, under various types of loss or scoring functions. The acyclic nature of the graph also permits various other types of algorithms to be easily adapted to operate on the entire set of alignments in the DAG. In the final part of this work, methodology is introduced for alignment-DAG-based sequence annotation using hidden Markov models, and RNA secondary structure prediction using stochastic context-free grammars. Results on test datasets indicate that the additional information contained within the DAG allows for improved predictions, resulting in substantial gains over simply analysing a set of alignments one by one.
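The abstract describes extracting summary alignments as maximum-weight paths through a DAG of sampled alignment columns. A minimal sketch of that step, on a toy hand-built DAG with assumed edge weights, is shown below using dynamic programming over a topological order.

```python
from collections import defaultdict

# Hypothetical alignment DAG: nodes are alignment columns, edge weights are
# assumed posterior probabilities of adjacent column pairs.
edges = {
    "start": [("A", 0.9), ("B", 0.6)],
    "A": [("C", 0.8)],
    "B": [("C", 0.7), ("D", 0.5)],
    "C": [("end", 1.0)],
    "D": [("end", 1.0)],
    "end": [],
}
topo_order = ["start", "A", "B", "C", "D", "end"]   # assumed topological order

# Dynamic programming: best path weight and predecessor for each node.
best = defaultdict(lambda: float("-inf"))
pred = {}
best["start"] = 0.0
for u in topo_order:
    for v, w in edges[u]:
        if best[u] + w > best[v]:
            best[v] = best[u] + w
            pred[v] = u

# Trace back the maximum-weight path from "end".
path, node = [], "end"
while node != "start":
    path.append(node)
    node = pred[node]
path.append("start")
print(list(reversed(path)), best["end"])
```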
5

Ezvan, Olivier. "Multilevel model reduction for uncertainty quantification in computational structural dynamics." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1109/document.

Full text
Abstract:
This work deals with an extension of the classical construction of reduced-order models (ROMs) obtained through modal analysis in computational linear structural dynamics. It is based on a multilevel projection strategy and devoted to complex structures with uncertainties. Nowadays, it is well recognized that predictions in structural dynamics over a broad frequency band using a finite element model must be improved by taking into account the model uncertainties induced by modeling errors, whose role increases with frequency. In such a framework, the nonparametric probabilistic approach of uncertainties is used, which requires the introduction of a ROM. Consequently, these two aspects, the frequency evolution of the uncertainties and reduced-order modeling, lead us to consider the development of a multilevel ROM in computational structural dynamics, which has the capability to adapt the level of uncertainties to each part of the frequency band. In this thesis, we are interested in the dynamical analysis of complex structures in a broad frequency band. By complex structure we mean a structure with complex geometry, constituted of heterogeneous materials and, more specifically, characterized by the presence of several structural levels, for instance a structure made up of a stiff main part embedding various flexible sub-parts. For such structures, in addition to the usual global-displacement elastic modes associated with the stiff skeleton, numerous local elastic modes may appear, which correspond to predominant vibrations of the flexible sub-parts. For such complex structures, the modal density may increase substantially from the low frequencies onward, leading to high-dimension ROMs with the modal analysis method (with potentially thousands of elastic modes at low frequencies). In addition, such ROMs may suffer from a lack of robustness with respect to uncertainty, because of the numerous local displacements, which are known to be very sensitive to uncertainties. It should be noted that, in contrast to the usual long-wavelength global displacements of the low-frequency (LF) band, the local displacements associated with the structural sub-levels, which can also appear in the LF band, are characterized by short wavelengths, similarly to high-frequency (HF) displacements. As a result, for the complex structures considered, the three vibration regimes, LF, MF, and HF, overlap, and numerous local elastic modes are intertwined with the usual global elastic modes. This implies two major difficulties, pertaining to uncertainty quantification and to computational efficiency. The objective of this thesis is thus twofold. First, to provide a multilevel stochastic ROM that is able to take into account the heterogeneous variability introduced by the overlap of the three vibration regimes. Second, to provide a predictive ROM whose dimension is decreased with respect to the classical ROM of the modal analysis method. A general method is presented for the construction of a multilevel ROM, based on three orthogonal reduced-order bases (ROBs) whose displacements are either LF-, MF-, or HF-type displacements (associated with the overlapping LF, MF, and HF vibration regimes). The construction of these ROBs relies on a filtering strategy based on the introduction of global shape functions for the kinetic energy (in contrast to the local shape functions of the finite elements).
Implementing the nonparametric probabilistic approach in the multilevel ROM allows a specific level of uncertainty to be assigned to each type of displacement. The method is applied to a car, for which the multilevel stochastic ROM is identified with respect to experimental measurements by solving a statistical inverse problem. The proposed ROM yields a reduced dimension as well as improved predictions compared with a classical stochastic ROM.
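For orientation, the sketch below shows plain single-level modal reduction on a small spring-mass chain: solve the generalized eigenproblem, keep a few modes as a reduced-order basis (ROB), and solve the projected harmonic problem. It is not the multilevel or stochastic construction of the thesis; the system and excitation are made up.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 10-DOF spring-mass chain.
n, k, m = 10, 1.0e4, 1.0
K = 2 * k * np.eye(n) - k * np.eye(n, k=1) - k * np.eye(n, k=-1)
M = m * np.eye(n)

# Generalized eigenproblem K phi = lambda M phi; keep the first few elastic modes.
eigvals, Phi = eigh(K, M)
n_modes = 4
Phi_r = Phi[:, :n_modes]                       # reduced-order basis (ROB)

# Project onto the ROB and solve a harmonic response at one excitation frequency.
K_r = Phi_r.T @ K @ Phi_r
M_r = Phi_r.T @ M @ Phi_r
f = np.zeros(n); f[-1] = 1.0                   # unit force at the last DOF
omega = 30.0                                   # rad/s, arbitrary
q = np.linalg.solve(K_r - omega**2 * M_r, Phi_r.T @ f)
u = Phi_r @ q                                  # back-projected physical displacements
print(u)
```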
6

Tavares, Ivo Alberto Valente. "Uncertainty quantification with a Gaussian Process Prior : an example from macroeconomics." Doctoral thesis, Instituto Superior de Economia e Gestão, 2021. http://hdl.handle.net/10400.5/21444.

Full text
Abstract:
Doctorate in Applied Mathematics for Economics and Management
This thesis may be broadly divided into four parts. In the first part, we review the state of the art in misspecification in macroeconomics and the contributions that a relatively new research area, Uncertainty Quantification, has so far made to macroeconomics. These reviews are essential to contextualize the contribution of this thesis to research dedicated to correcting non-linear misspecifications and to accounting for several other sources of uncertainty when modelling from an economic perspective. In the next three parts, using the same simple DSGE model from macroeconomic theory, we give an example of how researchers may quantify uncertainty in a state-space model using a discrepancy term with a Gaussian Process prior. In the second part of the thesis, we use a full Gaussian Process (GP) prior on the discrepancy term. Our experiments showed that, despite the heavy computational constraints of the full GP method, we still obtained a very interesting forecasting performance with such a restricted sample size, when compared with similar uncorrected DSGE models, or with DSGE models corrected using state-of-the-art methods for time series, such as imposing a VAR on the observation error of the state-space model. In the third part, we improved the computational performance of the previous method, using what has been referred to in the literature as the Hilbert Reduced Rank GP. This method has close links to functional analysis, the spectral theorem for normal operators, and partial differential equations. It indeed improved the computational processing time, albeit only slightly, and was accompanied by a similarly slight decrease in forecasting performance. The fourth part investigates how our method accounts for model uncertainty just before, and during, the great financial crisis of 2007-2009. Our technique allowed us to capture the crisis, albeit with reduced applicability, possibly due to computational constraints. This latter part was also used to deepen the understanding of our GP-based model uncertainty quantification technique. Identifiability issues were also studied. One of our overall conclusions is that more research is needed, especially regarding the computational performance of either method, before this uncertainty quantification technique can be used as part of the toolbox of central bankers and researchers for forecasting economic fluctuations.
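A minimal sketch of the core ingredient, a Gaussian Process prior on a model discrepancy term, is given below: GP regression with a squared-exponential kernel recovers the discrepancy between a deliberately misspecified model and noisy synthetic observations. The data and hyperparameters are invented; this is not the DSGE state-space setting of the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

def sq_exp_kernel(x1, x2, variance=0.5, lengthscale=1.0):
    """Squared-exponential covariance between two sets of inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# Synthetic setting: the "model" misses a smooth nonlinear term (the discrepancy).
x = np.linspace(0, 5, 30)
model_output = 0.5 * x
true_discrepancy = 0.3 * np.sin(1.5 * x)
y_obs = model_output + true_discrepancy + rng.normal(scale=0.05, size=x.size)

# GP posterior mean of the discrepancy at the observed inputs.
residual = y_obs - model_output
K = sq_exp_kernel(x, x)
noise_var = 0.05 ** 2
alpha = np.linalg.solve(K + noise_var * np.eye(x.size), residual)
discrepancy_mean = K @ alpha
print(np.max(np.abs(discrepancy_mean - true_discrepancy)))
```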
7

Ola, Abdel Malik. "L’identification des opportunités d’investissement en incertitude : le jugement intuitif des Business Angels dans le financement des firmes entrepreneuriales." Thesis, Angers, 2016. http://www.theses.fr/2016ANGE0032/document.

Full text
Abstract:
We analyze the identification of investment opportunities in the specific case of seed-stage financing of innovative firms. The absence of relevant and objective information at the start-up stage calls into question investors' postulated ability to objectively estimate the profitability of entrepreneurial firms. We therefore study the actual psycho-cognitive strategy underlying the sensemaking process around the potential of projects by focusing on a specific actor, the Business Angel (BA). We argue that this investment follows a process of intuitive judgment. The research design is a qualitative inductive approach, with data collected through observation notes and interviews. We build a model of how the BA cognitively interprets the innovative firm's potential in order to invest, and we also highlight reflexive practices that reduce bias and error during the sensemaking process. The BA's intuition at the early stage must thus be viewed as a genuine situational transformation of cues through labelling and speech articulation. This thesis contributes to a better understanding of venture capitalist behavior at the early stage, as well as of how meaning can be created intuitively in uncertain contexts; our results also generalize to settings where intuitive ability is a source of decision-making efficiency. Theoretical propositions are made to guide future research.
8

Nwankwo, Cosmas Chidozie. "Smart offshore structure for reliability prediction process." Thesis, Cranfield University, 2013. http://dspace.lib.cranfield.ac.uk/handle/1826/9335.

Full text
Abstract:
A review of developments in the field of structural reliability theory shows that gaps still exist in the reliability prediction process, and hence there is an urgent need for improvements so that the estimated structural reliability is capable of expressing a physical property of the given structure. The current reliability prediction process involves the continuous estimation and use of a reliability index as a way of estimating the safety of a structure. The reliability index β depends on the probability density function (PDF) of the wave force and the corresponding PDF of resistance of the respective structural members. The PDF of the applied wave force in turn depends on the PDFs of water depth, wave angular velocity and wave direction; hence the reliability index, as currently practised, is a statistical way of managing uncertainties based on a general probabilistic model. This research on Smart Offshore Structures for Reliability Prediction proposes the design of a measurement-based reliability prediction process as a way of closing this gap. Structural deflection and damping are among the measurable properties of an offshore structure, and this study suggests using these measurable properties to improve the structural reliability prediction process. A design case study has shown that a typical offshore structure can deflect by only a few fractions of a millimetre. This implies that, if we have a way of monitoring this level of deflection, the results of such measurement could be used to detect the failure of a structural member. The advocated concept is based on the hypothesis that, if the original dynamic characteristics of a structure are known, measurement-based modified dynamic properties can be used to determine the onset of failure or failure propagation. This technology could reveal the location and magnitude of internal cracks or corrosion effects on a structure, which lie outside the scope of the current probability-based approach. A simple economic analysis shows that the recommended process has a positive net present value, and that the value of information of any life-extension technology that could reveal the possibility of extending the life of a given 10,000 bopd production platform from 2025 to 2028 is some $74 million.
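The load-resistance reliability calculation referred to in the abstract can be illustrated with a small Monte Carlo sketch: sample a load and a resistance distribution, estimate the failure probability P(R < S), and convert it to beta = -Phi^{-1}(Pf). The distributions below are assumed for illustration only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1_000_000

# Illustrative distributions: lognormal member resistance, Gumbel-type wave load.
resistance = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)   # kN
load = rng.gumbel(loc=300.0, scale=30.0, size=n)                      # kN

p_failure = np.mean(resistance < load)
beta = -norm.ppf(p_failure)          # reliability index from the failure probability
print(f"Pf = {p_failure:.2e}, beta = {beta:.2f}")
```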
9

Yin, Qi. "Prise en compte de la variabilité dans les calculs par éléments finis des structures composites en régime statique ou vibratoire." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2304/document.

Full text
Abstract:
The manufacture of composite structures leads to a high variability of mechanical parameters. The objective of this work is to develop economical and robust methods to study the variability of the static or dynamic response of composite structures modeled by finite elements, taking into account uncertain material properties (elastic moduli, Poisson's ratios, densities...) and physical properties (thicknesses and fiber orientations). Two methods are developed: the Certain Generalized Stresses Method (CGSM) and the Modal Stability Procedure (MSP). The CGSM relies on a mechanical assumption: the generalized stresses are assumed to be independent of the uncertain parameters. It allows the variability of the static response to be evaluated. The MSP, proposed to study the variability of structures in dynamics, is based on the assumption that the mode shapes are insensitive to the uncertain parameters. These mechanical assumptions and a single finite element analysis allow a metamodel to be constructed and used in a Monte Carlo simulation. As a result, the computational cost is reduced considerably. Moreover, the methods are not limited by the number of uncertain parameters or the level of input variability, and they are compatible with standard finite element software. Four academic examples of composite plates and shells are treated with the CGSM, while two academic examples of composite square plates and an example of a stiffened plate are studied with the MSP. The variability of the static response (displacement and failure criterion) and of the dynamic response (natural frequency), namely the mean value, standard deviation, coefficient of variation and distribution, is evaluated. The results obtained by the proposed methods are compared with those obtained by a direct Monte Carlo simulation, considered as the reference method. The comparison shows that the proposed methods provide accurate results and highlights their high computational efficiency. An error indicator is also proposed, which provides an estimate of the error level of the results obtained by the CGSM or MSP with respect to the reference method, without performing a large number of finite element analyses.
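The general pattern described above, one deterministic finite element result turned into a cheap surrogate that is then sampled by Monte Carlo, can be sketched as follows. The surrogate here is an assumed closed-form scaling of a plate deflection with bending stiffness E*t^3, not the actual CGSM or MSP equations.

```python
import numpy as np

rng = np.random.default_rng(5)

# Nominal finite element result (assumed): deflection w0 for nominal E0, t0.
E0, t0, w0 = 60.0e9, 2.0e-3, 1.2e-3   # Pa, m, m

def surrogate_deflection(E, t):
    """Cheap surrogate: plate bending stiffness scales as E*t**3,
    so the deflection is rescaled from the single nominal FE result."""
    return w0 * (E0 * t0**3) / (E * t**3)

# Monte Carlo on the surrogate with uncertain modulus and thickness.
n = 100_000
E = rng.normal(E0, 0.08 * E0, n)       # 8% coefficient of variation
t = rng.normal(t0, 0.03 * t0, n)       # 3% coefficient of variation
w = surrogate_deflection(E, t)

print(f"mean = {w.mean():.3e} m, std = {w.std():.3e} m, CoV = {w.std()/w.mean():.1%}")
```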
10

Cherpeau, Nicolas. "Incertitudes structurales en géomodélisation : échantillonnage et approche inverse." Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0141/document.

Full text
Abstract:
Subsurface modeling is a key tool to describe, understand and quantify geological processes. As the subsurface is inaccessible and its observation is limited by acquisition methods, 3D models of the subsurface are usually built from the interpretation of sparse data with limited resolution. Therefore, uncertainties occur during the model building process, due to possible cognitive human bias, the natural variability of geological objects and the intrinsic uncertainties of the data. In such a context, the predictability of models is limited by uncertainties, which must be assessed in order to reduce the economic and human risks linked to the use of the models. This thesis focuses more specifically on uncertainties about geological structures. Our contributions are: (1) a stochastic method for generating structural models with various fault and horizon geometries as well as fault connections, combining prior information and interpreted data, has been developed; (2) realistic geological objects are obtained using implicit modeling, which represents a surface as an equipotential of a volumetric scalar field; (3) faults have been described by a reduced set of uncertain parameters, which opens the way to the inversion of structural objects using geophysical or fluid flow data through Bayesian methods.
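Contribution (2) represents a surface implicitly as an equipotential of a volumetric scalar field. A minimal sketch of that representation is given below: a planar fault is the zero level set of a signed scalar field, and perturbing its dip and position stands in for one stochastic realization. The geometry and uncertainty ranges are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

def fault_field(points, dip_deg, offset):
    """Signed scalar field whose zero level set is a planar fault of given dip."""
    dip = np.radians(dip_deg)
    normal = np.array([np.sin(dip), 0.0, np.cos(dip)])   # fault dipping in the x-z plane
    return points @ normal - offset

# Regular grid of evaluation points in a 1 km cube.
x = np.linspace(0, 1000, 21)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

# One stochastic realization: perturb the dip and position of the prior fault.
dip = 60.0 + rng.normal(scale=5.0)       # uncertain dip (degrees)
offset = 500.0 + rng.normal(scale=30.0)  # uncertain position (m)
phi = fault_field(pts, dip, offset)

# Points are classified by the sign of the field; the surface is where phi = 0.
print("fraction of volume on the positive side:", np.mean(phi > 0.0))
```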
11

Daley, Marcia. "Exploring the Relationship between Supply Network Configuration, Interorganizational Information Sharing and Performance." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/managerialsci_diss/16.

Full text
Abstract:
Committee Chair: Dr. Subhashish Samaddar. Major Department: Decision Science.
Critical to the success of a firm is the ability of managers to coordinate the complex network of business relationships that can exist between business partners in the supply network. However, many managers are unsure how best to leverage their resources to capitalize on the information sharing opportunities available in such networks. Although there is significant research on information sharing, the area of inter-organizational information sharing (IIS) is still evolving, and there is limited research on IIS in relation to systemic factors within supply networks. To help fill this gap in the literature, a primary focus of this dissertation is the relationship between the design of the supply network and IIS. The design of the supply network is characterized by the supply network configuration, which comprises (1) the network pattern, (2) the number of stages in the supply network, and (3) where the firm is located in that supply network. Four different types of IIS are investigated herein. These types of IIS are a function of the frequency with which information is shared and the scope of the information shared. Type 1 (Type 2) IIS is the low (high) frequency state where only operational information is shared. Similarly, Type 3 (Type 4) is the low (high) frequency state where strategic information is shared. The argument is that the type of IIS varies depending on the configuration of the supply network, and that this relationship is influenced by the coordination structure established between firms in the network. The second focus of this dissertation deals with the relationship between IIS and performance. Research findings on the benefits to be gained from IIS have been ambiguous, with some researchers claiming reduced cost in the supply network with IIS, and others finding minimal or no benefits. To add clarity to these findings, the role that uncertainty plays in the relationship between IIS and performance is examined. The thesis presented is that the positive relationship between IIS types and the performance of the supply network is impacted by process uncertainty (i.e., the variability in process outcomes and production times) and partner uncertainty. Social network theory and transaction cost economics provide the theoretical lens for this dissertation. A model is developed and will be empirically validated in a cross-sectional setting, utilizing a randomly selected sampling frame of supply management executives from various industries within the United States.
12

Turner, Lyle Robert. "Production structure models and applications within a Statistical Activity Cost Theory (SACT) Framework." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16310/.

Full text
Abstract:
Statistical Activity Cost Theory (SACT) is an axiomatic and statistical theory of basic accounting measurement practice. The aim of the SACT analysis, among others, is to determine the statistical nature of both the physical production system of an accounting entity and its related costs, which can then be examined and applied to various decision-making problems. A central proposition of SACT is that the physical system of the entity, and the costs related to this system, are separate structures which can be modelled as such. To date, however, minimal progress has been made in describing production process structures within the SACT framework, nor have there been any advances made in applying common statistical techniques to such an analysis. This thesis, therefore, moves to extend the basic theory that has already been developed, presenting a novel method for representing and examining the physical processes that make up an entity's production system. It also examines the costing of these physical models, such that transactional data can be examined and related back to the underlying production processes. The thesis concludes by giving an example of such an application in a case study. The analysis developed in this thesis has been applied in a larger project which aims to produce generic modelling and decision tools, based upon SACT, to support return and risk management.
13

Beck, Andre Teofilo. "Reliability Analysis of Degrading Uncertain Structures - with Applications to Fatigue and Fracture under Random Loading." 2003. http://hdl.handle.net/1959.13/24731.

Full text
Abstract:
In the thesis, the reliability analysis of structural components and structural details subject to random loading and random resistance degradation is addressed. The study concerns evaluation of the probability of failure due to an overload of a component or structural detail, in consideration of random (environmental) loads and their combination, uncertain resistance parameters, statistical and phenomenological uncertainty and random resistance degradation mechanisms. Special attention is devoted to resistance degradation, as it introduces an additional level of difficulty in the solution of time-variant reliability problems. The importance of this study arises from the ageing of existing infrastructure on a worldwide scale and from the lack of standards and codes for the ongoing safety management of general structures past their original design lives. In this context, probabilistic-based risk assessment and reliability analysis provide a framework for the safety management of ageing structures in consideration of inherent load and resistance uncertainty, current state of the structure, further resistance degradation, periodic inspections, in the absence of past experience and on an individual basis. In particular, the critical problem of resistance degradation due to fatigue is addressed. The formal solution of time-variant reliability problems involves integration of local crossing rates over a conditional failure domain boundary, over time and over random resistance variables. This solution becomes very difficult in the presence of resistance degradation, as crossing rates become time dependent, and the innermost integration over the failure domain boundary has to be repeated over time. Significant simplification is achieved when the order of integrations is changed, and crossing rates are first integrated over the random failure domain boundary and then over time. In the so-called ensemble crossing rate or Ensemble Up-crossing Rate (EUR) approximation, the arrival rate of the first crossing over a random barrier is approximated by the ensemble average of crossings. This approximation conflicts with the Poisson assumption of independence implied in the first passage failure model, making results unreliable and highly conservative. Despite significant simplification of the solution, little was known to date about the quality of the EUR approximation. In this thesis, a simulation procedure to obtain Poissonian estimates of the arrival rate of the first up-crossing over a random barrier is introduced. The procedure is used to predict the error of the EUR approximation. An error parameter is identified and error functions are constructed. Error estimates are used to correct original EUR failure probability results and to compare the EUR with other common simplifications of time-variant reliability problems. It is found that EUR errors can be quite large even when failure probabilities are small, a result that goes against previous ideas. A barrier failure dominance concept is introduced, to characterize those problems where an up-crossing or overload failure is more likely to be caused by a small outcome of the resistance than by a large outcome of the load process. It is shown that large EUR errors are associated with barrier failure dominance, and that solutions which simplify the load part of the problem are more likely to be appropriate in this case.
It is suggested that the notion of barrier failure dominance be used to identify the proper (simplified) solution method for a given problem. In this context, the EUR approximation is compared with Turkstra's load combination rule and with the point-crossing formula. It is noted that in many practical structural engineering applications involving environmental loads like wind, waves or earthquakes, load process uncertainty is larger than resistance uncertainty. In these applications, barrier failure dominance is unlikely and EUR errors can be expected to be small. The reliability problem of fatigue and fracture under random loading is addressed in the thesis. A solution to the problem, based on the EUR approximation, is constructed. The problem is formulated by combining stochastic models of crack propagation with the first passage failure model. The solution involves evaluation of the evolution in time of crack size and resistance distributions, and provides a fresh random process-based approach to the problem. It also simplifies the optimization and planning of non-destructive periodic inspection strategies, which play a major role in the ongoing safety management of fatigue-affected structures. It is shown how sensitivity coefficients of a simplified preliminary First Order Reliability solution can be used to characterize barrier failure dominance. In the fatigue and fracture reliability problem, barrier failure dominance can be caused by large variances of resistance or crack growth parameters. Barrier failure dominance caused by resistance parameters leads to problems where overload failure is an issue and where the simplified preliminary solution is likely to be accurate enough. Barrier failure dominance caused by crack growth parameters leads to highly non-linear problems, where critical crack growth dominates failure probabilities. Finally, in the absence of barrier failure dominance, overload failure is again the issue and the EUR approximation becomes not just appropriate but also accurate. The random process-based EUR solution of time-variant reliability problems developed and the concept of barrier failure dominance introduced in the thesis have broad applications in problems involving general forms of resistance degradation as well as in problems of random vibration of uncertain structures.
PhD Doctorate
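The Ensemble Up-crossing Rate (EUR) approximation discussed in the abstract can be sketched as follows: average the level up-crossing rate of the load process over the random (and degrading) barrier, then apply the Poisson first-passage assumption Pf(T) = 1 - exp(-integral of nu(t) dt). The sketch assumes a stationary Gaussian load with Rice's formula and an illustrative degrading resistance; it is not the fatigue model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stationary Gaussian load process (zero mean); Rice's formula gives the
# up-crossing rate of a fixed level a:  nu(a) = (s_dot / (2*pi*s)) * exp(-a^2 / (2*s^2)).
sigma_x, sigma_xdot = 1.0, 5.0

def upcrossing_rate(a):
    return (sigma_xdot / (2 * np.pi * sigma_x)) * np.exp(-a**2 / (2 * sigma_x**2))

# Random resistance with linear degradation over the service life (illustrative).
T = 20.0                                          # years
t = np.linspace(0.0, T, 200)
R = rng.normal(5.0, 0.5, size=10_000)             # random initial barrier
barrier = R[:, None] * (1.0 - 0.01 * t[None, :])  # degrading barrier r(t)

# Ensemble up-crossing rate: average the crossing rate over the random barrier,
# then apply the Poisson first-passage assumption.
nu_ensemble = upcrossing_rate(barrier).mean(axis=0)
integral = np.sum(0.5 * (nu_ensemble[1:] + nu_ensemble[:-1]) * np.diff(t))
pf_eur = 1.0 - np.exp(-integral)
print(f"EUR failure probability over {T:.0f} years: {pf_eur:.3e}")
```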
14

ZHANG, FU-CUN, and 張富村. "R & D/production interface: a congruency perspective on structure, processes, and task uncertainty." Thesis, 1988. http://ndltd.ncl.edu.tw/handle/66965644286789828537.

Full text
15

Suryawanshi, Anup Arvind. "Uncertainty Quantification in Flow and Flow Induced Structural Response." Thesis, 2015. http://etd.iisc.ernet.in/2005/3875.

Full text
Abstract:
Response of flexible structures, such as cable-supported bridges and aircraft wings, is associated with a number of uncertainties in structural and flow parameters. This thesis is aimed at efficient uncertainty quantification in a few such flow and flow-induced structural response problems. First, the uncertainty quantification in the lift force exerted on a submerged body in a potential flow is considered. To this end, a new method, termed here semi-intrusive stochastic perturbation (SISP), is proposed. A sensitivity analysis is also performed, where for the global sensitivity analysis (GSA) the Sobol' indices are used. The polynomial chaos expansion (PCE) is used for estimating these indices. Next, two stability problems in aeroelasticity, divergence and flutter, are studied in the context of reliability-based design optimization (RBDO). Two modifications are proposed to an existing PCE-based metamodel to reduce the computational cost: the chaos coefficients are estimated using Gauss quadrature to gain computational speed, and GSA is used to create a nonuniform grid to reduce the cost even further. The proposed method is applied to a rectangular unswept cantilever wing model. Next, reliability computation in limit cycle oscillations (LCOs) is considered. Since the metamodel performs poorly in this case due to bimodality in the distribution, a new simulation-based scheme is proposed to this end. Accordingly, a reduced-order model (ROM) is first used to identify the critical region in the random parameter space, and then the full-scale expensive model is run only over this critical region. This is applied to the rectangular unswept cantilever wing with cubic and fifth-order stiffness terms in its equation of motion. Next, the wind speed is modeled as a spatio-temporal process, and accordingly new representations of spatio-temporal random processes are proposed based on tensor decompositions of the covariance kernel. These are applied to three problems: a heat equation, a vibration problem, and a readily available covariance model for wind speed. Finally, to assimilate available field measurement data on wind speed and to predict based on this assimilation, a new framework based on the tensor decompositions is proposed. The framework is successfully applied to a set of measured data on wind speed in Ireland, where the prediction based on simulation is found to be consistent with the observed data.
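The abstract mentions estimating chaos coefficients by Gauss quadrature. The sketch below does this for a single standard normal input: probabilists' Hermite polynomials, Gauss-Hermite nodes, and the mean and variance recovered from the coefficients. The response function and truncation order are assumed for illustration; in higher dimensions, Sobol' indices follow by summing squared coefficients over the relevant index sets.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Model response as a function of a single standard normal input (illustrative).
def response(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Probabilists' Gauss-Hermite quadrature: weight exp(-x^2/2), weights sum to sqrt(2*pi).
nodes, weights = hermegauss(20)
weights = weights / sqrt(2 * pi)      # normalize so the rule integrates against the N(0,1) density

# Chaos coefficients c_k = E[f(xi) He_k(xi)] / k!
order = 5
coeffs = []
for k in range(order + 1):
    e_k = np.zeros(k + 1); e_k[k] = 1.0          # coefficient vector selecting He_k
    num = np.sum(weights * response(nodes) * hermeval(nodes, e_k))
    coeffs.append(num / factorial(k))

# Mean and variance follow directly from the chaos coefficients.
mean = coeffs[0]
variance = sum(factorial(k) * coeffs[k]**2 for k in range(1, order + 1))
print(f"PCE mean = {mean:.4f}, PCE variance = {variance:.4f}")
```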
16

Israel, Joshua James. "Shape optimization of lightweight structures under blast loading." 2013. http://hdl.handle.net/1805/3743.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Structural optimization of vehicle components for blast mitigation seeks to counteract the damaging effects of an impulsive threat on occupants and critical components. The strong and urgent need for improved protection from blast events has made blast-mitigating component design an active research subject. Standard up-armoring of ground vehicles can significantly increase the mass of the vehicle. Without concurrent modifications to the power train, suspension, braking and steering components, the up-armored vehicles suffer from degraded stability and mobility. For these reasons, there is a critical need for effective methods to generate lightweight components for blast mitigation. The overall objective of this research is to make advances in structural design methods for the optimization of lightweight blast-mitigating systems. This thesis investigates the automated design process of isotropic plates to mitigate the effects of blast loading by addressing the design of blast-protective structures from a design optimization perspective. The general design problem is stated as finding the optimum shape of a protective shell of minimum mass satisfying deformation and envelope constraints. This research was conducted in terms of three primary research projects. The first project investigated the design of lightweight structures under deterministic loading conditions, subject to the same objective function and constraints, comparing feasible design methodologies by expanding the problem dimension to reach the limits of performance. The second research project involved applying recently developed uncertainty quantification methods, the univariate dimensional reduction method and the performance moment integration method, to structures under stochastic loading conditions. The third research project involved application of these uncertainty quantification methods to problems of design optimization under uncertainty, in order to develop a methodology for the generation of lightweight reliable structures. This research has resulted in the construction of a computational framework, incorporating uncertainty quantification methods and various optimization techniques, which can be used for the generation of lightweight structures for blast mitigation under uncertainty. Applied to practical structural design problems, the results demonstrate that the methodologies provide a practical tool to aid the design engineer in generating design concepts for blast-mitigating structures. These methods can be used to advance research into the generation of reliable structures under uncertain loading conditions inherent to blast events.
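The univariate dimensional reduction method referred to above approximates multidimensional moment integrals by sums of one-dimensional integrals around the mean point. The sketch below applies that idea, with Gauss-Hermite quadrature per dimension, to an invented two-variable performance function with independent normal inputs; it is an illustrative variant, not the exact formulation used in the thesis.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss
from math import sqrt, pi

# Performance function of two random inputs (illustrative, not the blast model).
def g(x):
    return x[0] ** 2 + 3.0 * np.sin(x[1]) + 0.5 * x[0] * x[1]

mu = np.array([1.0, 0.5])        # means of the (assumed independent normal) inputs
sigma = np.array([0.2, 0.3])     # standard deviations

nodes, weights = hermegauss(10)
weights = weights / sqrt(2 * pi) # quadrature against the standard normal density

def univariate_mean(i, func):
    """E[func(mu with component i replaced by mu_i + sigma_i * xi)]."""
    total = 0.0
    for xi, w in zip(nodes, weights):
        x = mu.copy()
        x[i] = mu[i] + sigma[i] * xi
        total += w * func(x)
    return total

# Univariate dimensional reduction: E[g(X)] ~ sum_i E[g(.., X_i, ..)] - (n-1) * g(mu)
n_dim = len(mu)
mean_udr = sum(univariate_mean(i, g) for i in range(n_dim)) - (n_dim - 1) * g(mu)

# The same univariate decomposition applied to g**2 approximates the second raw moment.
second_udr = sum(univariate_mean(i, lambda x: g(x) ** 2) for i in range(n_dim)) - (n_dim - 1) * g(mu) ** 2
print(f"UDR mean = {mean_udr:.4f}, UDR variance = {second_udr - mean_udr**2:.4f}")
```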
