Academic literature on the topic 'Three and four layers models'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Three and four layers models.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Three and four layers models"

1

Abowaly, Mohamed E., Abdel-Aziz A. Belal, Enas E. Abd Elkhalek, et al. "Assessment of Soil Pollution Levels in North Nile Delta, by Integrating Contamination Indices, GIS, and Multivariate Modeling." Sustainability 13, no. 14 (2021): 8027. http://dx.doi.org/10.3390/su13148027.

Full text
Abstract:
The proper assessment of trace element concentrations in the north Nile Delta of Egypt is needed in order to reduce the high levels of toxic elements in contaminated soils. The objectives of this study were to assess the risks of contamination for four trace elements (nickel (Ni), cobalt (Co), chromium (Cr), and boron (B)) in three different layers of the soil using the geoaccumulation index (I-geo) and pollution load index (PLI) supported by GIS, as well as to evaluate the performance of partial least-square regression (PLSR) and multiple linear regression (MLR) in estimating the PLI based on data for the four trace elements in the three different soil layers. The results show a widespread contamination of I-geo Ni, Co, Cr, and B in the three different layers of the soil. The I-geo values varied from 0 to 4.74 for Ni, 0 to 6.56 for Co, 0 to 4.11 for Cr, and 0 to 4.57 for B. According to I-geo classification, the status of Ni, Cr, and B ranged from uncontaminated/moderately contaminated to strongly/extremely contaminated. Co ranged from uncontaminated/moderately contaminated to extremely contaminated. There were no significant differences in the values of I-geo for Ni, Co, Cr, and B in the three different layers of the soil. According to the PLI classification, the majority of the samples were very highly polluted. For example, 4.76% and 95.24% of the samples were unpolluted and very highly polluted, respectively, in the surface layer of the soil profiles. Additionally, 14.29% and 85.71% of the samples were unpolluted and very highly polluted, respectively, in the subsurface layer of the soil profiles. Both calibration (Cal.) and validation (Val.) models of the PLSR and MLR showed the highest performance in predicting the PLI based on data for the four studied trace elements, as an alternative method. The validation (Val.) models performed the best in predicting the PLI, with R2 = 0.89–0.93 in the surface layer, 0.91–0.96 in the subsurface layer, 0.89–0.94 in the lowest layers, and 0.92–0.94 across the three different layers. In conclusion, the integration of the I-geo, PLI, GIS technique, and multivariate models is a valuable and applicable approach for the assessment of the risk of contamination for trace elements, and the PLSR and MLR models could be used through applying chemometric techniques to evaluate the PLI in different layers of the soil.
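For readers who want to see what the PLSR/MLR comparison described above looks like in practice, here is a minimal, self-contained sketch using scikit-learn on synthetic data; the variable names, data, and settings are illustrative assumptions, not the study's actual code or dataset.

    # Illustrative sketch only: synthetic data standing in for the Ni/Co/Cr/B
    # concentrations and PLI values described in the abstract (not the authors' data or code).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.lognormal(mean=2.0, sigma=0.5, size=(100, 4))   # hypothetical Ni, Co, Cr, B concentrations
    pli = 0.02 * X.sum(axis=1) + rng.normal(0, 0.1, 100)    # hypothetical pollution load index

    X_cal, X_val, y_cal, y_val = train_test_split(X, pli, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=2).fit(X_cal, y_cal)   # calibration (Cal.) model
    mlr = LinearRegression().fit(X_cal, y_cal)

    for name, model in [("PLSR", pls), ("MLR", mlr)]:
        r2 = r2_score(y_val, model.predict(X_val))           # validation (Val.) performance
        print(f"{name} validation R^2 = {r2:.2f}")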
APA, Harvard, Vancouver, ISO, and other styles
2

Zhang, Penghao, Li Zhang, Zhongyu Wang, Shuang Chen, and Zhendong Shang. "A Strain-Transfer Model of Surface-Bonded Sapphire-Derived Fiber Bragg Grating Sensors." Applied Sciences 10, no. 12 (2020): 4399. http://dx.doi.org/10.3390/app10124399.

Full text
Abstract:
An improved strain-transfer model was developed for surface-bonded sapphire-derived fiber Bragg grating sensors. In the model, the core and cladding of the fiber are separated into individual layers, unlike in conventional treatment that regards the fiber as a unitive structure. The separation is because large shear deformation occurs in the cladding when the core of the sapphire-derived fiber is heavily doped with alumina, a material with a high Young’s modulus. Thus, the model was established to have four layers, namely, a core, a cladding, an adhesive, and a host material. A three-layer model could also be obtained from the regressed four-layer model when the core’s radius increased to that of the cladding, which treated the fiber as if it were still homogeneous material. The accuracy of both the four- and three-layer models was verified using a finite-element model and a tensile-strain experiment. Experiment results indicated that a larger core diameter and a higher alumina content resulted in a lower average strain-transfer rate. Error percentages were less than 1.8% when the four- and three-layer models were used to predict the transfer rates of sensors with high and low alumina content, respectively.
APA, Harvard, Vancouver, ISO, and other styles
3

Alhassan, Seiba, Gaddafi Abdul-Salaam, Michael Asante, Yaw Missah, and Ernest Ganaa. "Analyzing Autoencoder-Based Intrusion Detection System Performance." Journal of Information Security and Cybercrimes Research 6, no. 2 (2023): 105–15. http://dx.doi.org/10.26735/ylxb6430.

Full text
Abstract:
The rise in cyberattacks targeting critical network infrastructure has spurred an increased emphasis on the development of robust cybersecurity measures. In this context, there is a growing exploration of effective Intrusion Detection Systems (IDS) that leverage Machine Learning (ML) and Deep Learning (DL), with a particular emphasis on autoencoders. Recognizing the pressing need to mitigate cyber threats, our study underscores the crucial importance of advancing these methodologies. Our study aims to identify the optimal architecture for an Intrusion Detection System (IDS) based on autoencoders, with a specific focus on configuring the number of hidden layers. To achieve this objective, we designed four distinct sub-models, each featuring a different number of hidden layers: Test 1 (one hidden layer), Test 2 (two hidden layers), Test 3 (three hidden layers), and Test 4 (four hidden layers). We subjected our models to rigorous training and testing, maintaining consistent neuron counts of 30 and 60. The outcomes of our experimental study reveal that the model with a single hidden layer consistently outperformed its counterparts, achieving an accuracy of 95.11% for NSL-KDD and an impressive 98.6% for CIC-IDS2017. The findings of our study indicate that our proposed system is viable for implementation on critical network infrastructure as a proactive measure against cyber-attacks.
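As a rough illustration of how autoencoders with one to four hidden layers (the Test 1 to Test 4 configurations described above) might be set up and compared, here is a minimal Keras sketch; the layer widths, activations, training settings, and stand-in data are assumptions, not the study's implementation.

    # Illustrative sketch: build autoencoders with 1-4 hidden layers in the encoder,
    # mirroring the Test 1-Test 4 comparison described in the abstract (assumed sizes/settings).
    import numpy as np
    from tensorflow import keras

    def build_autoencoder(n_features, hidden_layers, units=30):
        """Symmetric autoencoder with `hidden_layers` Dense layers in the encoder."""
        inputs = keras.Input(shape=(n_features,))
        x = inputs
        for _ in range(hidden_layers):
            x = keras.layers.Dense(units, activation="relu")(x)      # encoder
        for _ in range(hidden_layers - 1):
            x = keras.layers.Dense(units, activation="relu")(x)      # decoder (mirrored)
        outputs = keras.layers.Dense(n_features, activation="sigmoid")(x)
        model = keras.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="mse")
        return model

    X = np.random.rand(1000, 41).astype("float32")   # stand-in for scaled intrusion-detection features
    for n in (1, 2, 3, 4):                           # Test 1 .. Test 4
        ae = build_autoencoder(X.shape[1], n)
        ae.fit(X, X, epochs=5, batch_size=64, verbose=0)
        print(f"{n} hidden layer(s): reconstruction MSE = {ae.evaluate(X, X, verbose=0):.4f}")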
APA, Harvard, Vancouver, ISO, and other styles
4

Schubert, Wayne, Richard Taft, and Christopher Slocum. "A Simple Family of Tropical Cyclone Models." Meteorology 2, no. 2 (2023): 149–70. http://dx.doi.org/10.3390/meteorology2020011.

Full text
Abstract:
This review discusses a simple family of models capable of simulating tropical cyclone life cycles, including intensification, the formation of the axisymmetric version of boundary layer shocks, and the development of an eyewall. Four models are discussed, all of which are axisymmetric, f-plane, three-layer models. All four models have the same parameterizations of convective mass flux and air–sea interaction, but differ in their formulations of the radial and tangential equations of motion, i.e., they have different dry dynamical cores. The most complete model is the primitive equation (PE) model, which uses the unapproximated momentum equations for each of the three layers. The simplest is the gradient balanced (GB) model, which replaces the three radial momentum equations with gradient balance relations and replaces the boundary layer tangential wind equation with a diagnostic equation that is essentially a high Rossby number version of the local Ekman balance. Numerical integrations of the boundary layer equations confirm that the PE model can produce boundary layer shocks, while the GB model cannot. To better understand these differences in GB and PE dynamics, we also consider two hybrid balanced models (HB1 and HB2), which differ from GB only in their treatment of the boundary layer momentum equations. Because their boundary layer dynamics is more accurate than GB, both HB1 and HB2 can produce results more similar to the PE model, if they are solved in an appropriate manner.
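For context, the gradient balance relation that the GB model substitutes for the radial momentum equations is the standard axisymmetric gradient wind balance; the form below is the common textbook expression, not a formula quoted from the paper.

    % Standard f-plane gradient wind balance (textbook form):
    %   v = tangential wind, r = radius, f = Coriolis parameter, \phi = geopotential of the layer.
    \[
      \frac{v^{2}}{r} + f\,v = \frac{\partial \phi}{\partial r}
    \]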
APA, Harvard, Vancouver, ISO, and other styles
5

Rowe, Avery. "Effect of drainage layers on water retention of potting media in containers." PLOS ONE 20, no. 2 (2025): e0318716. https://doi.org/10.1371/journal.pone.0318716.

Full text
Abstract:
Excess water retention in the potting medium can be a significant problem for plants grown in containers due to the volume of saturated medium which forms above the drainage hole. Adding a layer of coarse material like gravel or sand at the bottom is a common practice among gardeners with the aim of improving drainage, but some researchers have argued that such layers will raise the saturated area and in fact increase water retention. Two different depths and four different materials of drainage layer were tested with three different potting media to determine the water retention in the container after saturating and draining freely. For loamless organic media, almost all types of drainage layer reduced overall water retention in the container compared to controls. For loam-based media, most drainage layers had no effect on the overall water retention. Two simple models were also used to estimate the water retention in the media alone, excluding the drainage layer itself. All drainage layers reduced water retention of loamless organic media, according to both models. There was disagreement between the two models applied to loam-based media, and further study is required to determine the most accurate. Both models showed that some drainage layers with smaller particle sizes reduced water retention in loam-based media, but disagreed on the effect of drainage layers with larger particle sizes. Overall, any drainage layer was likely to reduce water retention of any medium, and almost never increased it. Thicker drainage layers were more effective than thinner layers, with the most effective substrate depending on the potting media used. A 60 mm layer of coarse sand was the most universally effective drainage layer with all potting media tested.
APA, Harvard, Vancouver, ISO, and other styles
6

Tsepav, Matthew Tersoo, Azeh Yakubu, Kumar Niranjan, et al. "Geophysical Characterisation of Native Clay Deposits in Some Parts of Niger State, Nigeria." Journal of Physics: Theories and Applications 6, no. 1 (2022): 43. http://dx.doi.org/10.20961/jphystheor-appl.v6i1.56457.

Full text
Abstract:
Clay minerals are among the world’s most important and useful industrial minerals. Conductance, transmissivity and corrosivity are some physical parameters for determining quality clay. Four (4) clay deposit sites in Kaffin-Koro, Dutse, Dogon-Ruwa and Kushikoko were investigated to evaluate the corrosivity, longitudinal conductance and transmissivity and so determine the clay quality. The electrical resistivity method employing the Schlumberger electrode array was used to determine the thicknesses and depths of the subsurface strata, while Interpex 1xD software was used to interpret the data. Three (3) to four (4) layer earth models were delineated: Kaffin-Koro and Dutse showed three-layer models, while Dogon-Ruwa and Kushikoko revealed four layers. Moderate clay content was found in Kaffin-Koro in the second layer, with a longitudinal conductance of 0.4780 siemens and a thickness of 0.770 m at a depth of 1.17 m. Dogon-Ruwa also had moderate clay content in the third layer, with a conductance of 0.237 siemens, a depth of 2.43 m and a thickness of 1.76 m. Kushikoko had a low clay deposit in the second layer, with a conductance of 0.1810 siemens and a thickness of 2.73 m at a depth of 4.37 m, while the clay deposit in Dutse appeared to be generally poor, as the longitudinal conductance values of the top two layers were less than 0.1 siemens.
APA, Harvard, Vancouver, ISO, and other styles
7

Simms, J. E., and F. D. Morgan. "Comparison of four least‐squares inversion schemes for studying equivalence in one‐dimensional resistivity interpretation." GEOPHYSICS 57, no. 10 (1992): 1282–93. http://dx.doi.org/10.1190/1.1443196.

Full text
Abstract:
The problem of equivalence in dc resistivity inversion is well known. The ability to invert resistivity data successfully depends on the uniqueness of the model as well as the robustness of the inversion algorithm. To study the problems of model uniqueness and resolution, theoretical data are inverted using variations of a nonlinear least‐squares inversion. It is only through model studies such as this one, where the true solutions are known, that realistic and meaningful comparisons of inversion methods can be undertaken. The data are inverted using three schemes of fixed‐layer thickness where only the resistivity varies, and the results are compared to the variable parameter inversion where both the layer resistivities and thicknesses are allowed to vary. The purpose of fixing the layer thicknesses is to reduce the number of parameters solved for during the inversion process. By doing this, nonuniqueness may be reduced. The fixed‐layer thickness schemes are uniform thickness, geometrical progression of thickness, and logarithmic progression of thickness. By applying each inversion scheme to different models, the layer thickness that minimizes the data rms error for various numbers of layers is determined. The curve of data rms error versus model rms error consists of three general regions: unique, nonunique, and no resolution. A good inversion routine simultaneously minimizes the data rms and model rms errors. The variable parameter scheme is best at simultaneously minimizing the data rms and model rms errors for models that can be resolved through the inversion process. The optimum number of layers in the model can be determined by using a modified F‐test.
APA, Harvard, Vancouver, ISO, and other styles
8

Lawrence, E., E. J. Garba, Y. M. Malgwi, and M. A. Hambali. "An Application of Artificial Neural Network for Wind Speeds and Directions Forecasts in Airports." European Journal of Electrical Engineering and Computer Science 6, no. 1 (2022): 53–59. http://dx.doi.org/10.24018/ejece.2022.6.1.407.

Full text
Abstract:
Wind speed patterns are highly dynamic and non-linear and thus cannot be accurately forecast using conventional linear regression models. In this work, the Artificial Neural Network (ANN) technique was applied to forecast wind speeds and directions at airports. Monthly data on maximum temperature, minimum temperature, wind speed, wind direction, relative humidity and wind run for Yola International Airport from 1995 to 2021 were collected from the Nigerian Meteorological Agency (NIMET), Abuja, Nigeria. Six neural network models were built: an ANN with no hidden layers, an ANN with one hidden layer and two dropout layers, an ANN with four hidden layers and three dropout layers, an ANN with eight hidden layers, an ANN with nine hidden layers and, finally, an ANN with ten hidden layers. The back-propagation training algorithm was implemented in Python. Each model was trained on the training dataset and validated on the validation dataset, and its forecasting ability was then tested on unseen data, the test dataset. The results from each model were organized and assessed in terms of the magnitude of the statistical error between the forecast and the observed data, measured as the average Mean Square Error (MSE) and Mean Absolute Error (MAE) for both wind speed and wind direction forecasts. The results show that the multilayer perceptron with ten hidden layers (MSE = 0.92, MAE = 0.73) emerged as the preferred model for wind speed forecasts, while the multilayer perceptron with four hidden layers (MSE = 1,858, MAE = 35) emerged as the preferred model for wind direction forecasts. Future research can be carried out to improve the accuracy of the model for wind direction forecasts.
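A minimal sketch of the kind of comparison described above, varying the number of hidden layers and scoring with MSE and MAE, might look as follows; the synthetic features, layer widths, and settings are illustrative assumptions, not the NIMET data or the study's code.

    # Illustrative sketch: compare MLPs with different numbers of hidden layers on a
    # toy regression task and score them with MSE and MAE, as in the abstract's comparison.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, mean_absolute_error

    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 6))                 # stand-in for temperature, humidity, wind run, etc.
    y = X @ rng.normal(size=6) + 0.3 * np.sin(X[:, 0]) + rng.normal(0, 0.2, 600)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

    for n_layers in (1, 4, 10):                   # e.g. 1, 4 and 10 hidden layers
        model = MLPRegressor(hidden_layer_sizes=(32,) * n_layers, max_iter=2000, random_state=1)
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(f"{n_layers:2d} hidden layers: "
              f"MSE={mean_squared_error(y_te, pred):.3f}, MAE={mean_absolute_error(y_te, pred):.3f}")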
APA, Harvard, Vancouver, ISO, and other styles
9

Mehmood, Maryam, Farhan Hussain, Ahsan Shahzad, and Nouman Ali. "Classification of Remote Sensing Datasets with Different Deep Learning Architectures." Earth Sciences Research Journal 28, no. 4 (2025): 409–19. https://doi.org/10.15446/esrj.v28n4.113518.

Full text
Abstract:
Remote sensing image classification has great advantages in the areas of environmental monitoring, urban planning, disaster management and many others. Unmanned Aerial Vehicles (UAVs) have revolutionized remote sensing by providing high-resolution imagery. In this context, effective image classification is crucial for extracting meaningful information from UAV-captured images. This study presents a comparison of different deep learning-based approaches for supervised image classification of UAV images. We experimented with four models (the CNNs VGG-16, AlexNet and ResNet50, and the deep neural network EfficientNet-B0) on two remote sensing datasets, AID and AIDER. Multiple combinations were tried to find out which model performs better on which type of dataset. We used the pre-trained initial layers of the four models (AlexNet, VGG-16, ResNet50 and EfficientNet-B0); the last three layers of each selected model were removed and new layers were added with better-tuned parameters. Two different schemes were analyzed. In Scheme-1, the original AlexNet, VGG-16, ResNet50 and EfficientNet-B0 were tested without changing or tuning their parameters, while in Scheme-2, transfer learning was applied to the pre-trained models: after removing the last three layers, new layers were added with better-tuned hyper-parameters. The evaluation of the above schemes was ensured through comprehensive metrics across diverse land-cover classes, using four performance evaluation metrics, namely F1 score, precision, accuracy and recall. The main focus of this research is on transfer learning and adding new layers to pre-trained models to obtain better classification accuracy.
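The Scheme-2 procedure described above (reusing pre-trained layers and replacing the final layers with a newly tuned head) can be sketched roughly as follows in Keras; the class count, head layers, and hyper-parameters are illustrative assumptions, not the study's configuration.

    # Illustrative sketch of Scheme-2 style transfer learning: keep the pre-trained
    # convolutional base, drop the original classification head, and add new tuned layers.
    # Class count, head layers and hyper-parameters here are assumptions for illustration.
    from tensorflow import keras

    num_classes = 30   # e.g. a scene-classification dataset with 30 classes (replace as appropriate)

    base = keras.applications.ResNet50(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False                           # reuse the pre-trained initial layers

    model = keras.Sequential([
        base,
        keras.layers.GlobalAveragePooling2D(),
        keras.layers.Dense(256, activation="relu"),  # newly added, tunable layers
        keras.layers.Dropout(0.3),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets prepared elsewhere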
APA, Harvard, Vancouver, ISO, and other styles
10

Deng, Qi. "Blockchain Economical Models, Delegated Proof of Economic Value and Delegated Adaptive Byzantine Fault Tolerance and their implementation in Artificial Intelligence BlockCloud." Journal of Risk and Financial Management 12, no. 4 (2019): 177. http://dx.doi.org/10.3390/jrfm12040177.

Full text
Abstract:
The Artificial Intelligence BlockCloud (AIBC) is an artificial intelligence and blockchain technology based large-scale decentralized ecosystem that allows system-wide low-cost sharing of computing and storage resources. The AIBC consists of four layers: a fundamental layer, a resource layer, an application layer, and an ecosystem layer (the latter three are the collective “upper-layers”). The AIBC layers have distinguished responsibilities and thus performance and robustness requirements. The upper layers need to follow a set of economic policies strictly and run on a deterministic and robust protocol, while the fundamental layer needs to follow a protocol with high throughput without sacrificing robustness. As such, the AIBC implements a two-consensus scheme to enforce economic policies and achieve performance and robustness: Delegated Proof of Economic Value (DPoEV) incentive consensus on the upper layers, and Delegated Adaptive Byzantine Fault Tolerance (DABFT) distributed consensus on the fundamental layer. The DPoEV uses the knowledge map algorithm to accurately assess the economic value of digital assets. The DABFT uses deep learning techniques to predict and select the most suitable BFT algorithm in order to enforce the DPoEV, as well as to achieve the best balance of performance, robustness, and security. The DPoEV-DABFT dual-consensus architecture, by design, makes the AIBC attack-proof against risks such as double-spending, short-range and 51% attacks; it has a built-in dynamic sharding feature that allows scalability and eliminates the single-shard takeover. Our contribution is four-fold: that we develop a set of innovative economic models governing the monetary, trading and supply-demand policies in the AIBC; that we establish an upper-layer DPoEV incentive consensus algorithm that implements the economic policies; that we provide a fundamental layer DABFT distributed consensus algorithm that executes the DPoEV with adaptability; and that we prove the economic models can be effectively enforced by AIBC’s DPoEV-DABFT dual-consensus architecture.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Three and four layers models"

1

Stewart, I. J. "A model for transition by attachment line contamination and an examination of cross-flow instability in three-dimensional boundary layers." Thesis, Cranfield University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ramachandran, Tanisha. "Three tellings, four models and differing perceptions, the construction of female sexuality in the Rāmāyaṇa." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ54259.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Öztek, Muzaffer Tonguç. "The study of three different layered structures as model systems for hydrogen storage materials." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5001.

Full text
Abstract:
The strength and success of the hydrogen economy relies heavily on the storage of hydrogen. Storage systems in which hydrogen is sequestered in a solid material have been shown to be advantageous over storage of hydrogen as a liquid or compressed gas. Many different types of materials have been investigated, yet the desired capacity and uptake/release characteristics required for implementation have not been reached. In this work, porphyrin aggregates were investigated as a new type of material for hydrogen storage. The building blocks of the aggregates are porphyrin molecules that are planar and can assume a face to face arrangement that is also known as H-aggregation. The H-aggregates were formed in solution, upon mixing of aqueous solutions of two different porphyrins, one carrying positively charged and the other one carrying negatively charged functional groups. The cationic porphyrin used was meso-tetra(4-N,N,N-trimethylanilinium) porphine (TAP) and it was combined with four different anionic porphyrins, meso-tetra(4-sulfonatophenyl)porphine (TPPS), meso-tetra(4-carboxyphenyl) porphine (TCPP), Cu(II) meso-tetra(4-carboxyphenyl) porphine, and Fe(III) meso-tetra(4-carboxyphenyl) porphine. The force of attraction that held two oppositely charged porphyrin molecules together was electrostatic attraction between the peripheral groups. Solid state aggregates were successfully isolated either by solvent evaporation or by centrifuging and freeze drying. TCPP-TAP and Cu(II)TCPP-TAP aggregates were shown to interact with hydrogen starting from 150 °C up to 250 °C. The uptake capacity was about 1 weight %. Although this value is very low, this is the first observation of porphyrin aggregates absorbing hydrogen. This opened the way for further research to improve hydrogen absorption properties of these materials, as well as other materials based on this model. Two other materials that are also based on planar building blocks were selected to serve as a comparison to the porphyrin aggregates. The first of those materials was metal intercalated graphite compounds. In such compounds, a metal atom is placed between the layers of graphene that make up the graphite. Lithium, calcium and lanthanum were selected in this study. Theoretical hydrogen capacity was calculated for each material based on the hydriding of the metal atoms only. The fraction of that theoretical hydrogen capacity actually displayed by each material increased from La to Ca to Li containing graphite. The weight % hydrogen observed for these materials varied between 0.60 and 2.0 %. The other material tested for comparison was KₓMnO₂, a layered structure of MnO₂ that contained the K atoms in between oxygen layers. The hydrogen capacity of the KₓMnO₂ samples was similar to the other materials tested in the study, slightly above 1 weight %. This work has shown that porphyrin aggregates, carbon based and manganese dioxide based materials are excellent model materials for hydrogen storage. All three materials absorb hydrogen. Porphyrin aggregates have the potential to exhibit adjustable hydrogen uptake and release temperatures owing to their structure that could interact with an external electric or magnetic field. In the layered materials, it is possible to alter interlayer spacing and the particular intercalates to potentially produce a material with an exceptionally large hydrogen capacity.
As a result, these materials can have significant impact on the use of hydrogen as an energy carrier.
APA, Harvard, Vancouver, ISO, and other styles
4

Fulford, Will. "Spatial characteristics that create & sustain functional encounters : a new three-layered model for unpacking how street markets support urbanity." Thesis, University of Westminster, 2017. https://westminsterresearch.westminster.ac.uk/item/q40x6/spatial-characteristics-that-create-sustain-functional-encounters-a-new-three-layered-model-for-unpacking-how-street-markets-support-urbanity.

Full text
Abstract:
This dissertation explores the role of street markets in supporting urbanity as defined by Sennett (1974) to mean the ability for people to ‘act together without the compulsion to be the same’. The study draws together and builds on three strands of literature – public space, difference and social encounters – to propose a new model of urbanity that provides a conceptual link between the physical characteristics of space, its ability to support differences, and the encounters that take place within it. Previous writings on urbanity have explored a variety of urban spaces but this study is the first to focus on street markets. Using qualitative semi-structured interviews, informal participant observations and a quantitative structured survey, the study explores the attitudes of market traders and customers towards difference and diversity within two ‘ordinary’ case-study London street markets in ethnically diverse and comparatively deprived urban areas. The core finding is that there are seven characteristics of street markets, presented over a three-layered model, that make them highly effective in creating and sustaining functional encounters that support urbanity. Layer I consists of three spatial characteristics – (1) micro-borders, (2) precarity and (3) proximity – that generate moments of mutual solidarity through functional encounters based on cooperation and trust. Layer II identifies two characteristics of functional encounters – (4) adaptable content and (5) familiar form – that seed ‘sociabilities of emplacement’ through mundane rituals of civility that can satisfy both established residents and newcomers. Layer III extends the conventional definition of functional encounters to include sustaining contact between people: this generates two types of conviviality – (6) ‘inconsequential’ and (7) consequential intimacy – supporting deeper-rooted sociabilities of emplacement that are more resistant to challenge. There are additional findings for conflict and competition that cut across the above and are presented separately. The seven characteristics found in the study combine to replace third-hand stereotypes of what someone will be like based on appearances alone with first-hand knowledge of what someone is like based on shared experience. The compulsion to be the same is thus reduced and urbanity is supported.
APA, Harvard, Vancouver, ISO, and other styles
5

Jämtander, Jämtander. "Models explaining the average return on the Stockholm Stock Exchange." Thesis, Högskolan i Jönköping, Internationella Handelshögskolan, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-40360.

Full text
Abstract:
Using three different models, we examine the determinants of average stock returns on the Stockholm Stock Exchange during 2012-2016. By using time-series data, we find that a Fama-French three-factor model (directed at capturing size and book-to-market ratio) functions quite well in the Swedish stock market and is able to explain the variation in returns better than the traditional CAPM. Additionally, we investigated whether the addition of a Price/Earnings variable to the Fama-French model would increase the explanatory power of the expected returns of the different dependent-variable portfolios. We conclude that the P/E ratio does not influence the expected returns in the sample we used.
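As a rough illustration of the time-series regressions compared in this thesis, the CAPM, the Fama-French three-factor model, and a three-factor model augmented with a P/E term can be estimated by ordinary least squares; the file name and column names below are placeholders, not the thesis' actual data.

    # Illustrative sketch: CAPM, Fama-French three-factor, and a three-factor + P/E
    # specification estimated by OLS on monthly excess returns. Column names and the
    # data source are placeholders, not the thesis' actual dataset.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("portfolio_factors.csv")          # hypothetical file with monthly data
    y = df["portfolio_excess_return"]

    specs = {
        "CAPM":      ["mkt_excess"],
        "FF3":       ["mkt_excess", "smb", "hml"],
        "FF3 + P/E": ["mkt_excess", "smb", "hml", "pe_ratio"],
    }
    for name, cols in specs.items():
        res = sm.OLS(y, sm.add_constant(df[cols])).fit()
        print(f"{name}: adj. R^2 = {res.rsquared_adj:.3f}")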
APA, Harvard, Vancouver, ISO, and other styles
6

Javerzat, Nina. "New conformal bootstrap solutions and percolation models on the torus Two-point connectivity of two- dimensional critical Q-Potts random clusters on the torus Three- and four-point connectivities of two-dimensional critical Q-Potts random clusters on the torus Topological effects and conformal invariance in long-range correlated random surfaces Notes on the solutions of Zamolodchikov- type recursion relations in Virasoro minimal models." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP062.

Full text
Abstract:
The geometric properties of critical phenomena have generated an increasing interest in theoretical physics and mathematics over the last thirty years. Percolation-type systems are a paradigm of such geometric phenomena, their phase transition being characterised by the behaviour of non-local degrees of freedom: the percolation clusters. At criticality, such clusters are examples of random objects with a conformally invariant measure, namely invariant under all local rescalings. Even in the simplest percolation model, pure percolation, we do not know how to fully characterise these clusters. In two dimensions, the presence of conformal symmetry has especially important implications. In this thesis we investigate the consequences of this symmetry on the universal properties of two-dimensional critical statistical models, by using a conformal bootstrap approach. The first part details the general implications of conformal invariance, by examining its consequences on correlation functions. The effects induced by the torus topology are addressed in particular and applied, in the second part, to the study of specific statistical models. We also examine the analytic properties of correlation functions and present results on technical questions related to the implementation of numerical conformal bootstrap methods in two dimensions. The second part is devoted to the study of two specific families of critical long-range correlated percolation models: the random cluster Q-state Potts model and the percolation of random surfaces. We investigate the percolative properties of these models by studying the clusters' connectivity properties, namely the probability that points belong to the same cluster. We find that the connectivities on a torus represent particularly interesting observables. By describing them as correlation functions of quantum fields in a conformal field theory, we obtain new results on the universality classes of these models.
APA, Harvard, Vancouver, ISO, and other styles
7

Onyeako, Isidore. "Resolution-aware Slicing of CAD Data for 3D Printing." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34303.

Full text
Abstract:
3D printing applications have achieved increased success as an additive manufacturing (AM) process. Micro-structure of mechanical/biological materials present design challenges owing to the resolution of 3D printers and material properties/composition. Biological materials are complex in structure and composition. Efforts have been made by 3D printer manufacturers to provide materials with varying physical, mechanical and chemical properties, to handle simple to complex applications. As 3D printing is finding more medical applications, we expect future uses in areas such as hip replacement - where smoothness of the femoral head is important to reduce friction that can cause a lot of pain to a patient. The issue of print resolution plays a vital role due to staircase effect. In some practical applications where 3D printing is intended to produce replacement parts with joints with movable parts, low resolution printing results in fused joints when the joint clearance is intended to be very small. Various 3D printers are capable of print resolutions of up to 600dpi (dots per inch) as quoted in their datasheets. Although the above quoted level of detail can satisfy the micro-structure needs of a large set of biological/mechanical models under investigation, it is important to include the ability of a 3D slicing application to check that the printer can properly produce the feature with the smallest detail in a model. A way to perform this check would be the physical measurement of printed parts and comparison to expected results. Our work includes a method for using ray casting to detect features in the 3D CAD models whose sizes are below the minimum allowed by the printer resolution. The resolution validation method is tested using a few simple and complex 3D models. Our proposed method serves two purposes: (a) to assist CAD model designers in developing models whose printability is assured. This is achieved by warning or preventing the designer when they are about to perform shape operations that will lead to regions/features with sizes lower than that of the printer resolution; (b) to validate slicing outputs before generation of G-Codes to identify regions/features with sizes lower than the printer resolution.
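As a back-of-the-envelope illustration of the resolution check discussed above, a quoted print resolution in dots per inch translates into a minimum printable feature size, and any model feature below that size is at risk; the threshold logic below is a simplified sketch, not the thesis' ray-casting implementation.

    # Simplified sketch: convert a printer's quoted resolution (dpi) into a minimum
    # feature size and flag model features below it. This is illustrative only and
    # does not reproduce the ray-casting check described in the thesis.
    MM_PER_INCH = 25.4

    def min_feature_size_mm(dpi: float) -> float:
        """Smallest feature the printer can nominally resolve, in millimetres."""
        return MM_PER_INCH / dpi

    def check_features(feature_sizes_mm, dpi=600, safety_factor=2.0):
        """Flag features smaller than safety_factor times the printer's dot pitch."""
        threshold = safety_factor * min_feature_size_mm(dpi)
        return [(size, size < threshold) for size in feature_sizes_mm]

    if __name__ == "__main__":
        # Hypothetical minimum feature sizes (mm) extracted from a sliced CAD model.
        for size, too_small in check_features([0.03, 0.05, 0.2], dpi=600):
            status = "below printable threshold" if too_small else "ok"
            print(f"feature {size:.3f} mm: {status}")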
APA, Harvard, Vancouver, ISO, and other styles
8

Purutcuoglu, Vilda. "Unit Root Problems In Time Series Analysis." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12604701/index.pdf.

Full text
Abstract:
In time series models, autoregressive processes are among the most popular stochastic processes, and they are stationary under certain conditions. In this study we consider nonstationary autoregressive models of order one, which have iid random errors. One of the important nonstationary time series models is the unit root process in AR(1), which simply implies that a shock to the system has a permanent effect through time. Therefore, testing for a unit root is a very important problem. However, under nonstationarity, no estimator of the autoregressive coefficient has a known exact distribution, and the usual t-statistic is not accurate even if the sample size is very large. Hence, the Wiener process is invoked to obtain the asymptotic distribution of the LSE under normality. The first four moments of the LSE under normality have been worked out for large n. In 1998, Tiku and Wong proposed new test statistics whose type I error and power values are calculated by using three-moment chi-square or four-moment F approximations. The test statistics are based on the modified maximum likelihood estimators and the least squares estimators, respectively. They evaluated the type I errors and the power of these tests for a family of symmetric distributions (scaled Student's t). In this thesis, we have extended this work to skewed distributions, namely, gamma and generalized logistic.
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Yi-Chen, and 林奕辰. "Star-Routing Algorithm For Three-Layers Channel Routing Using Manhattan-Diagonal Model." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/15776886691018725408.

Full text
Abstract:
Master's thesis, Tamkang University, Master's Program in Electrical Engineering, academic year 96. Very Large Scale Integration (VLSI) manufacturing technology has advanced rapidly in recent years, and Integrated Circuit (IC) design can no longer be accomplished successfully by manpower alone; the complexity of modern ICs is the major cause. To deal with this, Electronic Design Automation (EDA) has greatly reduced both the time from establishing the system specifications to tape-out and the manpower required. In nanometer-scale processes, a VLSI chip may contain several million transistors. As a result, it is highly probable that tens of millions of connecting nets have to be routed completely and successfully in the layout step, and the routing problem becomes more and more complicated. All nets must be routed within a finite routing region; otherwise the design must go back to the placement step and re-place the components to obtain a more suitable routing region, which costs considerable time in placement and routing. In short, a good router is very important for IC design. In this thesis, we use a grid model to deal with the routing problem. The advantage of the grid model is that it does not require an overly complex data structure and keeps the complexity of routing low. In addition, we use three metal layers as the main routing resources, together with positive and negative 45-degree routing paths, which reduces the routing length. In the proposed algorithm we adopt a finer grid model and, in order to avoid violating the Design Rule Check (DRC), the algorithm restricts the use of the third metal layer during routing. Although three metal layers are used, this approach not only reduces the signal length but also decreases the antenna effect between signals compared with other multilayer routing algorithms. No extra space is needed and no pins have to be moved to complete the routing, which makes it easy for hard blocks to be routed without relocating them because of incomplete routing; therefore the height of the entire routing channel can be reduced considerably. In the test cases, we use several channel-routing benchmarks, including YK3a, YK3b, YK3c, Deutsch's Difficult Example (DDE) and the ISCAS 85 benchmarks. In the simulation results, YK3c requires more routing tracks than the others; even so, our routing channel height after conversion is still lower than the others, because the unit of our grid model is smaller. The height of the routing channel decreases by 30% on average, and 100% routing completion is guaranteed.
APA, Harvard, Vancouver, ISO, and other styles
10

Ramachandran, Tanisha. "Three tellings, four models and differing perceptions: the construction of female sexuality in the Rāmāyaṇa." Thesis, 2000. http://spectrum.library.concordia.ca/1198/1/MQ54259.pdf.

Full text
Abstract:
The Ramayaṇa has been used as a model for appropriate behavior for Hindus throughout the world. Hinduism, like many other religions, uses examples of morality derived from divine sources. The two epics, the Mahabharata and the Ramayaṇa, occupy a unique position in the lives of Hindus. They serve, in effect, as a guide to appropriate conduct. The Ramayaṇa is filled with tales, which depict, above all, dharma. Rama, the king of Ayodhya, is presented as the quintessence of ethical action. Although Sita, his wife, has been singled out as the representation of the pure woman, the Ramayaṇa contains many characters that conform to or reject this notion of gender and sexuality. In this manner, ideal sexuality is not represented solely by Sita. To illustrate (speaking only in a prescriptive manner) a pan-"Hindu" concept of sexuality, the focus of this research will be limited to four characters, Sita, Sabari, Surpaṇakha and Ayomukhi, in the Araṇyakaṇḍa of Valmiki's Ramayaṇa, Tulsidas' Ramacaritmanas and Kampan's Iramavataram. The purpose is not simply to point out the subjugated position of women in this epic; rather, it is also to illustrate the dependent and fluid nature of female sexuality.
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Three and four layers models"

1

Sommer, T. P. A near-wall four-equation turbulence model for compressible boundary layers. National Aeronautics and Space Administration, Scientific and Technical Information Program, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sommer, T. P. A near-wall four-equation turbulence model for compressible boundary layers. Langley Research Center, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sommer, T. P. A near-wall four-equation turbulence model for compressible boundary layers. National Aeronautics and Space Administration, Scientific and Technical Information Program, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

United States. National Aeronautics and Space Administration., ed. Physics of magnetospheric boundary layers. National Aeronautics and Space Administration, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lewis Research Center. Institute for Computational Mechanics in Propulsion., ed. On the behavior of three-dimensional wave packets in viscously spreading mixing layers. National Aeronautics and Space Administration, Lewis Research Center, Institute for Computational Mechanics in Propulsion, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lewis Research Center. Institute for Computational Mechanics in Propulsion, ed. On the behavior of three-dimensional wave packets in viscously spreading mixing layers. National Aeronautics and Space Administration, Lewis Research Center, Institute for Computational Mechanics in Propulsion, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lewis Research Center. Institute for Computational Mechanics in Propulsion., ed. On the behavior of three-dimensional wave packets in viscously spreading mixing layers. National Aeronautics and Space Administration, Lewis Research Center, Institute for Computational Mechanics in Propulsion, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lewis Research Center. Institute for Computational Mechanics in Propulsion., ed. On the behavior of three-dimensional wave packets in viscously spreading mixing layers. National Aeronautics and Space Administration, Lewis Research Center, Institute for Computational Mechanics in Propulsion, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Houck, Jacob A., and United States. National Aeronautics and Space Administration. Scientific and Technical Information Branch., eds. Theoretical three- and four-axis gimbal robot wrists. National Aeronautics and Space Administration, Scientific and Technical Information Branch, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Barker, L. Keith. Theoretical three- and four-axis gimbal robot wrists. National Aeronautics and Space Administration Scientific and Technical Information Branch, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Three and four layers models"

1

Hannezo, Edouard, and Colinda L. G. J. Scheele. "A Guide Toward Multi-scale and Quantitative Branching Analysis in the Mammary Gland." In Cell Migration in Three Dimensions. Springer US, 2023. http://dx.doi.org/10.1007/978-1-0716-2887-4_12.

Full text
Abstract:
The mammary gland consists of a bilayered epithelial structure with an extensively branched morphology. The majority of this epithelial tree is laid down during puberty, during which actively proliferating terminal end buds repeatedly elongate and bifurcate to form the basic structure of the ductal tree. Mammary ducts consist of a basal and luminal cell layer with a multitude of identified sub-lineages within both layers. The understanding of how these different cell lineages are cooperatively driving branching morphogenesis is a problem of crossing multiple scales, as this requires information on the macroscopic branched structure of the gland, as well as data on single-cell dynamics driving the morphogenic program. Here we describe a method to combine genetic lineage tracing with whole-gland branching analysis. Quantitative data on the global organ structure can be used to derive a model for mammary gland branching morphogenesis and provide a backbone on which the dynamics of individual cell lineages can be simulated and compared to lineage-tracing approaches. Eventually, these quantitative models and experiments allow to understand the couplings between the macroscopic shape of the mammary gland and the underlying single-cell dynamics driving branching morphogenesis.
APA, Harvard, Vancouver, ISO, and other styles
2

Holfelder, Wieland, Andreas Mayer, and Thomas Baumgart. "Sovereign Cloud Technologies for Scalable Data Spaces." In Designing Data Spaces. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-93975-5_25.

Full text
Abstract:
The cloud has changed the way we consume technology either as individual users or in a business context. However, cloud computing can only transform organizations, create innovation, or provide the ability to scale digital business models if there is trust in the cloud and if the data that is being generated, processed, exchanged, and stored in the cloud has the appropriate safeguards. Therefore, sovereignty and control over data and its protection are paramount. Data spaces provide organizations with additional capabilities to govern strict data usage rules over the whole life cycle of information sharing with others and enable new use cases and new business models where data can be securely shared among a defined set of collaborators and with clear and enforceable usage rights attached to create new value. Open and sovereign cloud technologies will provide the necessary transparency, control, and the highest levels of privacy and security that are required to fully leverage the potential of such data spaces. Digital sovereignty, however, still means many things to many people. So to make it more concrete, in this article, we will look at digital sovereignty across three layers: data sovereignty, operational sovereignty, and software sovereignty. With these layers, we will create a spectrum of solutions that enable scalable data spaces that will be critical for the digital transformation of the European economy.
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Di, Jian Wu, Lingyan Deng, and Jianjian Wu. "Discrete Element Simulation Study of Multi-Layered Reinforced Geotextile Treatment of Karst Collapse." In Lecture Notes in Civil Engineering. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-2417-8_12.

Full text
Abstract:
The use of multi-layered reinforced geotextile to treat karst collapse can reduce the buried range of geotextiles, so as to achieve the purpose of saving project costs. In order to investigate the impact of varying layers of geotextile on the mitigation of collapses in karst regions, this study establishes a Discrete Element Particle Flow Code 2D (PFC2D) model for geotextile treatment with different layer configurations. The analysis in this research encompasses several critical aspects, including the top vertical settlement of soil, variations in tensile forces experienced by the first layer (bottom layer) of geotextile with changes in the position of the settlement plate, and the distribution of tensile forces across different horizontal positions within each layer of geotextile. The findings indicate the following trends: as the number of reinforced geotextile layers increases, there is an overall reduction in the vertical settlement of the soil. When employing multiple layers of geotextile, the first layer (bottom layer) experiences the highest tensile forces. Furthermore, as the number of reinforced geotextile layers increases, there is a general decrease in the tensile forces acting on the first layer (bottom layer).
APA, Harvard, Vancouver, ISO, and other styles
4

Kruse, Otto, and Chris M. Anson. "Writing and Thinking: What Changes with Digital Technologies?" In Digital Writing Technologies in Higher Education. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-36033-6_29.

Full text
Abstract:
The relationship between writing and thinking explicitly or implicitly runs through all the contributions to this book. There is no writing without thinking and there is no new writing technology that does not alter the way thinking in writing happens. Many layers of the relationship between thinking and writing await conceptualization. Four of them that seem most widely affected by the currently unfolding transformational processes are described in more detail in this chapter: (1) the connection of inscription and linearization to thinking; (2) the relation of sub-actions of the writing processes to thinking; (3) the influence of digital technology on connected thought, networked thinking, and collaborative writing; and (4) the challenges of higher-order support for writing, including automatic text generation for the conceptualization of the writing-thinking interplay. We close with a short statement on the necessity to adopt human-machine models to conceptualize thinking in writing.
APA, Harvard, Vancouver, ISO, and other styles
5

Henze, Dominic. "Dynamically Scalable Fog Architectures." In Ernst Denert Award for Software Engineering 2020. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-83128-8_6.

Full text
Abstract:
Recent advances in mobile connectivity as well as increased computational power and storage in sensor devices have given rise to a new family of software architectures with challenges for data and communication paths as well as architectural reconfigurability at runtime. Established in 2012, Fog Computing describes one of these software architectures. It lacks a commonly accepted definition, which manifests itself in the missing support for mobile applications as well as dynamically changing runtime configurations. The dissertation “Dynamically Scalable Fog Architectures” provides a framework that formalizes Fog Computing and adds support for dynamic and scalable Fog Architectures. The framework called xFog (Extension for Fog Computing) models Fog Architectures based on set theory and graphs. It consists of three parts: xFogCore, xFogPlus, and xFogStar. xFogCore establishes the set theoretical foundations. xFogPlus enables dynamic and scalable Fog Architectures to dynamically add new components or layers. Additionally, xFogPlus provides a View concept which allows stakeholders to focus on different levels of abstraction. These formalizations establish the foundation for new concepts in the area of Fog Computing. One such concept, xFogStar, provides a workflow to find the best service configuration based on quality of service parameters. The xFog framework has been applied in eight case studies to investigate the applicability of dynamic Fog Components, scalable Fog Architectures, and the service provider selection at runtime. The case studies, covering different application domains—ranging from smart environments, health, and metrology to gaming—successfully demonstrated the feasibility of the formalizations provided by xFog, the dynamic change of Fog Architectures by adding new components and layers at runtime, as well as the applicability of a workflow to establish the best service configuration.
APA, Harvard, Vancouver, ISO, and other styles
6

Poliakov, A. N. B., P. A. Cundall, Y. Y. Podladchikov, and V. A. Lyakhovsky. "An Explicit Inertial Method for the Simulation of Viscoelastic Flow: An Evaluation of Elastic Effects on Diapiric Flow in Two- and Three- Layers Models." In Flow and Creep in the Solar System: Observations, Modeling and Theory. Springer Netherlands, 1993. http://dx.doi.org/10.1007/978-94-015-8206-3_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Vahdati, Farkhondeh, Mia Tedjosaputro, Asterios Agkathidis, and Charles K. S. Moy. "Exploring the Integration of Holographic Construction in Design for Disassembly." In Lecture Notes in Civil Engineering. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-4749-1_16.

Full text
Abstract:
Design for Disassembly (DfD) aims to enhance the recyclability and reusability of building materials by considering their ease of disassembly at the end of their lifecycle. This paper explores the innovative integration of holographic construction technology within the DfD context. The six DfD principles are evaluated by integrating Fologram© and physical modelling, which has yet to be explored in previous studies. The research question the authors seek to answer is, “How can holographic construction methods facilitate the DfD approach of no-glue, no-nail timber structure?”. The research is carried out by conducting a prototyping experiment with three participants to assemble and disassemble a structure using Fologram©. The mobile app simplifies viewing and editing Rhinoceros© 3D models in real-time through a live link with the device, enabling easy visualization and modifications. Six DfD principles are analysed, and from this preliminary study, it is observed that Fologram© as a holographic construction method can facilitate disassembly processes if it is improved in some steps. The future design options will be more organized using discrete layers in the defined mixed-reality fabrication steps.
APA, Harvard, Vancouver, ISO, and other styles
8

McLean, J. D., and T. K. Matoi. "Shock/Boundary-Layer Interaction Model for Three-Dimensional Transonic Flow Calculations." In Turbulent Shear-Layer/Shock-Wave Interactions. Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/978-3-642-82770-9_25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Schirmeier, Horst, Christoph Borchert, Martin Hoffmann, et al. "Dependability Aspects in Configurable Embedded Operating Systems." In Dependable Embedded Systems. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-52017-5_4.

Full text
Abstract:
Abstract As all conceptual layers in the software stack depend on the operating system (OS) to reliably provide resource-management services and isolation, it can be considered the “reliable computing base” that must be hardened for correct operation under fault models such as transient hardware faults in the memory hierarchy. In this chapter, we approach the problem of system-software hardening in three complementary scenarios. (1) We address the following research question: Where do the general reliability limits of static system-software stacks lie, if designed from scratch with reliability as a first-class design goal? In order to reduce the proverbial “attack surface” as far as possible, we harness static application knowledge from an AUTOSAR-compliant task set, and protect the whole OS kernel with AN-encoding. This static approach yields an extremely reliable software system, but is constrained to specific application domains. (2) We investigate how reliable a dynamic COTS embedded OS can become if hardened with programming-language and compiler-based fault-tolerance techniques. We show that aspect-oriented programming is an appropriate means to encapsulate generic software-implemented hardware fault tolerance mechanisms that can be application-specifically applied to a selection of OS components. (3) We examine how system-software stacks can survive even more adverse fault models like whole-system outages, using emerging persistent memory (PM) technology as a vehicle for state conservation. Our findings include that software transactional memory facilitates maintaining consistent state within PM and allows fast recovery.
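The AN-encoding mentioned for scenario (1) is a well-known arithmetic code: a value x is stored as A*x for a fixed constant A, so any corruption that leaves a word no longer divisible by A is detected before the value is used. A minimal sketch of that idea (illustrative only, not the chapter's kernel implementation) follows.

# Hedged sketch of AN-encoding (not the chapter's implementation): a value x is
# stored as A*x for a fixed constant A. A single bit flip changes the stored
# word by a power of two, which is never a multiple of an odd A > 1, so every
# single-bit fault is detected when the word is decoded.

A = 58659  # example constant; the chapter's actual choice of A is not given here

def encode(x: int) -> int:
    return A * x

def decode(coded: int) -> int:
    if coded % A != 0:
        raise RuntimeError("silent data corruption detected")
    return coded // A

stored = encode(42)
corrupted = stored ^ (1 << 7)   # simulate a bit flip in the memory hierarchy
try:
    decode(corrupted)
except RuntimeError as err:
    print(err)                  # fault caught before the value is used
print(decode(stored))           # prints 42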
APA, Harvard, Vancouver, ISO, and other styles
10

Kaewunruen, Sakdirat, Charalampos Baniotopoulos, Yunlong Guo, Pasakorn Sengsri, Patrick Teuffel, and Diana Bajare. "6D-BIM Applications to Enrich Circular Value Chains and Stakeholder Engagement Within Built Environments." In Lecture Notes in Civil Engineering. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-57800-7_32.

Full text
Abstract:
Abstract Building Information Modelling (BIM) is a digitalisation tool that is widely adopted in the construction industry. It is a three-dimensional digital replica of asset(s) such as buildings, which contains architectural information and building details (e.g. dimensions, materials, parts, and components). It has evolved from 2D CAD models (or blueprints) in the past to 3D CAD models embedded with information layers (e.g., construction time sequence or 4D-BIM), resulting in automation in construction. BIM has now become essential in various countries; for example, new UK BIM standards require asset owners to keep and maintain building information. BIM adopts an interoperable concept that can benefit the whole life-cycle assessment (LCA) and circularity of the built environments. Its applications extend to six dimensions (6D) where time sequence, cost and carbon footprint can now be reported in real time. These attributes are essential to stakeholders and critically help reduce any unexpected consumption and waste over the life cycle of a project. This study builds on the development of 6D BIM of an existing building to enrich circular value chains and stakeholder engagement. This paper highlights the development of 6D BIM and, subsequently, the stakeholder interviews to address challenges, barriers, benefits, and effectiveness of 6D-BIM applications for stakeholder engagements across circular value chains. A snowball sampling method has been used to identify stakeholder interviewees and obtain new insights into digital valorisation for stakeholder engagement. The outcome of this study will exhibit new insights and practical paradigms for BIM applications in built environments.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Three and four layers models"

1

Shi, Hua, Hongbin Wang, W. Paul Jepson, and Lee D. Rhyne. "Predicting of Water Film Thickness and Velocity for Corrosion Rate Calculation in Oil-Water Flows." In CORROSION 2002. NACE International, 2002. https://doi.org/10.5006/c2002-02500.

Full text
Abstract:
Abstract Experiments studying oil-water flows were conducted in a 10-cm diameter, 40-m long, horizontal pipeline. Oil (viscosity 3 cP at 25°C) and ASTM substitute seawater were used at superficial mixture velocities ranging from 0.4 to 3.0 m/s. In situ water cut and in situ velocity across the pipe cross-section were measured at a temperature of 25°C and a carbon dioxide partial pressure of 0.13 MPa over the whole range of water cuts. A novel four-layer/phase mathematical segregated flow model was then developed for the intermediate oil-water flow patterns of semi-segregated, semi-mixed and mixed flow, extending the three-layer/phase model by incorporating experimental data. The mixed layer in the three-layer/phase model is further divided into water-in-oil (oil-continuous) and oil-in-water (water-continuous) layers at the phase inversion point. The experimental data are in good agreement with the water film height predicted by the model.
APA, Harvard, Vancouver, ISO, and other styles
2

Zhao, Aihong, Ian Owens Pericevic, Kennerly Digges, Cing-Dao Kan, Moji Moatamedi, and Jeffrey S. Augenstein. "FE Modeling of the Orthotropic and Three-Layered Human Thoracic Aorta." In ASME 2006 Pressure Vessels and Piping/ICPVT-11 Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/pvp2006-icpvt-11-93573.

Full text
Abstract:
The human aorta consists of three layers: intima, media and adventitia from the inner to outer layer. Since aortic rupture of victims in vehicle crashes frequently occurs in the intima and the media, latent aortic injuries are difficult to detect at the crash scene or in the emergency room. It is necessary to develop a multi-layer aorta finite element (FE) model to identify and describe the potential mechanisms of injury in various impact modes. In this paper, a novel three-layer FE aortic model was created to study aortic ruptures under impact loading. The orthotropic material model [1] has been implemented into a user-defined material subroutine in the commercial dynamic finite element software LS-DYNA version 970 [2], which was adopted in the aorta FE model. The Arbitrary-Lagrangian Eulerian (ALE) approach was adopted to simulate the interaction between the fluid (blood) and the structure (aorta). Single element verifications for the user-defined subroutine were performed. The mechanical behaviors of aortic tissues under impact loading were simulated by the aorta FE model. The models successfully predicted the rupture of the layers separately. The results provide a basis for a more in-depth investigation of blunt traumatic aortic rupture (BTAR) in vehicle crashes.
APA, Harvard, Vancouver, ISO, and other styles
3

Ayub, Mohammed, and SanLinn Ismail Kaka. "Automated Hyperparameter Optimization of Convolutional Neural Network (CNN) for First-Break (FB) Arrival Picking." In Gas & Oil Technology Showcase and Conference. SPE, 2023. http://dx.doi.org/10.2118/214253-ms.

Full text
Abstract:
Abstract The Convolutional Neural Network (CNN) has been used successfully to enhance automated first-break (FB) arrival picking of seismic data. Determining an optimized FB model is challenging because it must consider many hyperparameter (HP) combinations, and tuning the most important HPs manually is infeasible given the large number of combinations to be tested. Three state-of-the-art automated hyperparameter optimization (HPO) techniques are applied to a CNN model for robust FB arrival-picking classification. A CNN model with 4 convolutional (Conv) layers followed by one fully connected (FC) layer and one output layer is designed to classify a seismic event as FB or non-FB. To control overfitting, dropout (DO) and batch normalization are used after every two Conv layers, in addition to a DO layer after the FC layer. The number and size of kernels, the DO rate, the learning rate (Lr), and the number of neurons in the FC layer are fine-tuned using random search, Bayesian, and Hyperband HPO techniques. The findings are experimentally evaluated and compared in terms of four performance metrics with respect to classification performance. The five hyperparameters mentioned above are fine-tuned over 13 search spaces for each of the three HPO techniques. From the experimental results, applying random search HPO to the CNN yields the best accuracy and F1-score of 96.26%, with the best HP combination of 16, 16, 32, and 64 for the numbers of kernels in the four Conv layers, respectively; 2, 2, 2, 5 for the kernel sizes in each Conv layer; 0, 0.45, 0.25 for the DO rates in the DO layers; 240 for the number of neurons in the FC layer; and 0.000675 for Lr. In terms of loss on the test data, this HP combination gives the lowest test loss of 0.1191 among all techniques, making it a robust model. This model also outperforms all the other models in terms of precision (96.27%) and recall. Moreover, all HPO models outperform the baseline in terms of all metrics. The use of DO after Conv layers and FC layers is highly recommended, and a relatively small kernel size (i.e., 2) produces the best classification performance. According to the best HP combination, there is also no harm in using more neurons in the FC layer than in the Conv layers for FB arrival-picking classification. The optimal values of Lr range from 0.0001 to 0.000675 depending on the HPO technique. The model developed in this study improves the accuracy of automatic picking of FB arrivals in seismic data, and we anticipate it will be used more widely in future studies in seismic data processing.
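The best hyperparameter combination quoted above is concrete enough to sketch directly. The following Keras reconstruction assumes 1D convolutions over fixed-length trace windows and a single sigmoid output for the FB/non-FB decision; the input length and these structural assumptions are placeholders that are not stated in the abstract.

# Hedged reconstruction of the best hyperparameter combination reported in the
# abstract: kernels 16/16/32/64, kernel sizes 2/2/2/5, dropout rates 0/0.45/0.25,
# 240 FC neurons, learning rate 0.000675. The 1D input of length 128 and the
# sigmoid FB/non-FB output are assumptions, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fb_picker(window_length=128):
    model = models.Sequential([
        layers.Input(shape=(window_length, 1)),
        layers.Conv1D(16, kernel_size=2, activation="relu"),
        layers.Conv1D(16, kernel_size=2, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.0),
        layers.Conv1D(32, kernel_size=2, activation="relu"),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.BatchNormalization(),
        layers.Dropout(0.45),
        layers.Flatten(),
        layers.Dense(240, activation="relu"),
        layers.Dropout(0.25),
        layers.Dense(1, activation="sigmoid"),  # FB vs. non-FB
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.000675),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

build_fb_picker().summary()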
APA, Harvard, Vancouver, ISO, and other styles
4

Ferrer-Gómez, M., A. Gámez-Pozo, G. Prado-Vázquez, et al. "PO-460 Gene expression-based probabilistic graphical models identify three independent biological layers in colorrectal cancer." In Abstracts of the 25th Biennial Congress of the European Association for Cancer Research, Amsterdam, The Netherlands, 30 June – 3 July 2018. BMJ Publishing Group Ltd, 2018. http://dx.doi.org/10.1136/esmoopen-2018-eacr25.967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peng, Zhaopeng, Xiaoliang Fan, Yufan Chen, et al. "FedPFT: Federated Proxy Fine-Tuning of Foundation Models." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/531.

Full text
Abstract:
Adapting Foundation Models (FMs) for downstream tasks through Federated Learning (FL) emerges as a promising strategy for protecting data privacy and valuable FMs. Existing methods fine-tune FMs by allocating sub-FMs to clients in FL, however, leading to suboptimal performance due to insufficient tuning and inevitable error accumulations of gradients. In this paper, we propose Federated Proxy Fine-Tuning (FedPFT), a novel method enhancing FM adaptation in downstream tasks through FL by two key modules. First, the sub-FM construction module employs a layer-wise compression approach, facilitating comprehensive FM fine-tuning across all layers by emphasizing those crucial neurons. Second, the sub-FM alignment module conducts a two-step distillation (layer-level and neuron-level) before and during FL fine-tuning, respectively, to reduce the error of gradients by accurately aligning the sub-FM with the FM under theoretical guarantees. Experimental results on seven commonly used datasets (i.e., four text and three vision) demonstrate the superiority of FedPFT. Our code is available at https://github.com/pzp-dzd/FedPFT.
APA, Harvard, Vancouver, ISO, and other styles
6

Roshankhah, S., and J. McLennan. "Hydraulic Fractures in Reservoirs Bounded by Layers of Other Rocks." In 56th U.S. Rock Mechanics/Geomechanics Symposium. ARMA, 2022. http://dx.doi.org/10.56952/arma-2022-2062.

Full text
Abstract:
ABSTRACT: This study investigates the characteristics of hydraulic fractures (HFs) formed in low permeability reservoirs that are bounded by salt layers. Three layered systems are modeled, where the thickness of the bounding salt layers differs with respect to the thickness of the shale layer (same thickness, thinner salt, and thicker salt). The width and total height of the models are the same. The interface properties match the properties of the weaker material, which is the salt. Both the shale and salt zones are modeled as homogeneous and impermeable materials, and water injection is modeled in the center of the middle shale layer. An additional model of hydraulic fracturing in the middle of a homogeneous and isotropic shale is included. All models are subjected to the maximum (major) principal stress in the vertical direction and the minimum (minor) principal stress in the horizontal direction with fixed boundary conditions. The hybrid finite-discrete element modeling technique is used for these analyses. Results show that the contrast between the mechanical properties and thickness of layers influences the state of stress in the layers. Specifically, the orientations of the major and minor principal stresses switch in the target shale layer. This leads to the creation of inclined HFs in the bounded shale as opposed to vertical HFs that would form in a thick shale layer under normal anisotropic stress conditions. The thicker the bounding salt layers, the more horizontally inclined the HFs in the shale. These analyses inform us that the design of hydraulic stimulations is influenced by the properties and thickness contrast between the reservoir and bounding layers. 1. INTRODUCTION Unconventional fossil energy reservoirs usually comprise relatively thin layers of shale or mudstone (consider a thickness of about 10 m) bounded by other lithologies such as sandstone, salt, or limestone. It is well known that deformational characteristics and type, intensity, and height of fractures in multi-lithological layered formations are controlled by the contrast in the mechanical properties and thickness of the layers, type of the interfaces, and the in-situ confining stress (Narr and Suppe, 1991; Gross et al., 1995; Rijken and Cooke, 2001; Lorenz, et al., 2002; Underwood et al., 2003; Laubach, et al., 2009; Ferrill, et al., 2014; McGinnis, et al., 2017).
APA, Harvard, Vancouver, ISO, and other styles
7

Cai, Zhong, Ana Widyanita, Prasanna Chidambaram, and Ernest A. Jones. "Reservoir Architecture Modeling at Sub-Seismic Scale for a Depleted Carbonate Reef Reservoir for CO2 Storage in Sarawak Basin, Offshore Malaysia." In SPE Middle East Oil & Gas Show and Conference. SPE, 2021. http://dx.doi.org/10.2118/204689-ms.

Full text
Abstract:
Abstract It is still a challenge to build a numerical static reservoir model, based on limited data, that characterizes reservoir architecture in a way that corresponds to the geological concept models. The numerical static reef reservoir model has evolved from oversimplified tank-like models and simple multi-layer models to complex multi-layer models that are more realistic representations of complex reservoirs. A simple multi-layer model for the reef reservoir with a proportional layering scheme was applied in the CO2 Storage Development Plan (SDP) study, as the most-likely scenario to match the geological complexity. Model refinement can be conducted during the CO2 injection phase with Measurement, Monitoring and Verification (MMV) technologies for CO2 plume distribution tracking. The selected reservoir is a Middle to Late Miocene carbonate reef complex, with three phases of reef growth: 1) basal transgressive phase, 2) lower buildup phase, and 3) upper buildup phase. Three chronostratigraphic surfaces were identified on 3D seismic reflection data as the zone boundaries, which were then divided into sub-zones and layers. Four layering methods were compared: ‘proportional’, ‘follow top’, ‘follow base’ and ‘follow top with reference surface’. The proportional layering method was selected for the base case of the 3D static reservoir model and the others were used in the uncertainty analysis; a generic sketch of two of these schemes is given below. Based on the results of uncertainty and risk assessment, risk mitigation for the CO2 injection operation was modeled and three CO2 injection well locations were optimized. The reservoir architecture model would be updated and refined in the future using the difference between the modeled CO2 plume patterns and the MMV results.
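Of the four layering schemes compared, the ‘proportional’ scheme chosen for the base case simply splits the interval between a zone's top and base surfaces into equal fractions at each grid location, whereas ‘follow top’ hangs constant-thickness layers from the top surface. The sketch below is a generic illustration of these two schemes at a single grid column, with placeholder depths; it is not the modelling software's implementation.

import numpy as np

# Hedged sketch of two of the layering schemes named in the abstract, evaluated
# for one grid column with zone top and base depths in metres (illustrative
# values only).

def proportional_layers(top, base, n_layers):
    """'Proportional': split the top-to-base interval into equal fractions."""
    return np.linspace(top, base, n_layers + 1)

def follow_top_layers(top, base, layer_thickness):
    """'Follow top': constant-thickness layers hung from the top, cut at base."""
    boundaries = np.arange(top, base, layer_thickness)
    return np.append(boundaries, base)

print(proportional_layers(top=1500.0, base=1650.0, n_layers=5))
print(follow_top_layers(top=1500.0, base=1650.0, layer_thickness=40.0))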
APA, Harvard, Vancouver, ISO, and other styles
8

Damia´n Ascencio, C. E., A. Herna´ndez Guerrero, J. A. Escobar Vargas, C. Rubio-Arana, and F. Elizalde Blancas F. "Three-Dimensional Numerical Prediction of Current Density for a Constructal Theory-Based Flow Field Pattern." In ASME 2007 International Mechanical Engineering Congress and Exposition. ASMEDC, 2007. http://dx.doi.org/10.1115/imece2007-42449.

Full text
Abstract:
This work presents a three-dimensional numerical prediction of the current density for a Proton Exchange Membrane Fuel Cell (PEMFC) with a tree-like pattern, which is observed in nature, for the flow field channels. The numerical model considers a complete solution of the Navier-Stokes equations, the species transport equation and two potential field equations; the model is solved using a finite volume technique assuming isothermal and steady state conditions. The three-dimensional model includes the analysis of: current collectors, flow channels, gas diffusion layers, catalyst layers on both sides of the PEMFC (anode and cathode) and a membrane between the two catalyst layers. The contours of the current density are compared to other models found in the technical literature. The results of the model presented here show that the average current density is larger than for conventional models (such as serpentine flow paths). This suggests that more efficient flow field paths could be built with this constructal theory-based pattern for the flow field channels.
APA, Harvard, Vancouver, ISO, and other styles
9

Vieira, João Marcos Bastos, and José Renato Mendes de Sousa. "Gas Diffusion in Flexible Pipes: A Comparison Between Two- and Three-Dimensional FE Models to Predict Annulus Composition." In ASME 2022 41st International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2022. http://dx.doi.org/10.1115/omae2022-78325.

Full text
Abstract:
Abstract The Brazilian offshore oil and gas industry uses flexible pipes to transport water, oil, and/or gas. Currently, the high concentration of acid gases, such as carbon dioxide (CO2) and hydrogen sulfide (H2S), in Brazilian pre-salt wells has been introducing new challenges for these pipes. The acid gases may migrate from the bore to the annulus of these structures, forming a corrosive environment that can induce failure of the steel armors by SCC (Stress Corrosion Cracking) or HIC (Hydrogen Induced Cracking). Hence, predicting the gas composition in the annulus is of fundamental importance to ensure the safe operation of flexible pipes. However, this prediction involves complex gas permeation analyses through the layers of these pipes. For instance, the permeation rate depends on temperature, the partial pressures of the gases, and the free volume distribution. Therefore, new tools are required to better understand fluid permeation between the flexible pipes' layers. To that end, this paper presents and compares two finite element (FE) models to predict the annulus composition of flexible pipes. Both models consider the temperature gradient effects on the layers' material properties. On the one hand, the first approach deals with a two-dimensional model that considers helical layers as rings, so the shielding effect is simplified. On the other hand, the second develops a complete three-dimensional model of the cross-section geometry. The results indicate that, while being faster, the two-dimensional approach shows higher concentration results than the three-dimensional approach. Furthermore, the difference between the two approaches suggests that the shielding provided by the helicoidal wires is relevant.
APA, Harvard, Vancouver, ISO, and other styles
10

De Palma, P. "Numerical Analysis of Turbomachinery Flows With Transitional Boundary Layers." In ASME Turbo Expo 2002: Power for Land, Sea, and Air. ASMEDC, 2002. http://dx.doi.org/10.1115/gt2002-30223.

Full text
Abstract:
This paper provides a numerical study of the flow through two turbomachinery cascades with transitional boundary layers. The aim of the present work is to validate some state-of-the-art turbulence and transition models in complex flow configurations. Therefore, the compressible Reynolds-averaged Navier–Stokes equations, with an Explicit Algebraic Stress Model (EASM) and k − ω turbulence closure, are considered. Such a turbulence model is combined with the transition model of Mayle for separated flow. The space discretization is based on a finite volume method with Roe’s approximate Riemann solver and formally second-order-accurate MUSCL extrapolation with minmod limiter. Time integration is performed employing an explicit Runge–Kutta scheme with multigrid acceleration. Firstly, the computations of the two- and three-dimensional subsonic flow through the T106 low-pressure turbine cascade are briefly discussed. Then, a more severe test case, involving shock-induced boundary-layer separation and corner stall is considered, namely, the three-dimensional transonic flow through a linear compressor cascade. In the present paper, calculations of such a transonic flow are presented, employing the standard k − ω model and the EASM, without transition model, and a comparison with the experimental data available in the literature is provided.
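The minmod limiter used with the second-order MUSCL extrapolation mentioned above is a standard one-line function; the sketch below shows the generic textbook form, not the authors' code.

# Generic minmod limiter as used in MUSCL extrapolation: keep the candidate
# slope of smaller magnitude when both share a sign, and return zero at local
# extrema so that the scheme drops to first order and avoids spurious
# oscillations.

def minmod(a: float, b: float) -> float:
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slope(u_left: float, u_center: float, u_right: float) -> float:
    """Limited slope for a cell given the neighbouring cell averages."""
    return minmod(u_center - u_left, u_right - u_center)

print(limited_slope(1.0, 1.4, 1.5))  # 0.1: smooth region, limited slope kept
print(limited_slope(1.0, 1.6, 1.2))  # 0.0: local extremum, slope suppressed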
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Three and four layers models"

1

Aiello-Lammens, Matthew E., Robert Anderson, Mary E. Blair, et al. Species Distribution Modeling for Conservation Educators and Practitioners. American Museum of Natural History, 2023. http://dx.doi.org/10.5531/cbc.ncep.0184.

Full text
Abstract:
Models that predict distributions of species by combining known occurrence records with digital layers of environmental variables have much potential for application in conservation. Through using this module, teachers will enable students to develop species distribution models, to apply the models across a series of analyses, and to interpret predictions accurately. In addition to its original components, this module features an updated and condensed synthesis document ("A Brief Introduction to Species Distribution Modeling for Conservation Educators and Practitioners"), which provides theoretical and practical guidance for the expanding field of species distribution modeling. The synthesis is supplemented by a new exercise where learners create and optimize species distribution models using Wallace, an R-based GUI (Graphical User Interface) application for ecological modeling that currently focuses on building, evaluating, and visualizing models of species niches and distributions. Additionally, there are four new PowerPoint presentations on species distribution models (the history and theory, data and algorithms, and evaluating SDMs), as well as a presentation on how to use Wallace. The original Synthesis, "Species' Distribution Modeling for Conservation Educators and Practitioners," introduces learners to the modeling approach, outlines key concepts and terminology, and describes questions that may be addressed using the approach. A theoretical framework that is fundamental to ensuring that students understand the uses and limitations of the models is then described. Additionally, it details the main steps in building and testing a distribution model, and describes three case studies that illustrate applications of the models. This module is targeted at a level suitable for teaching graduate students and conservation professionals.
APA, Harvard, Vancouver, ISO, and other styles
2

Chapman, Ray, Phu Luong, Sung-Chan Kim, and Earl Hayter. Development of three-dimensional wetting and drying algorithm for the Geophysical Scale Transport Multi-Block Hydrodynamic Sediment and Water Quality Transport Modeling System (GSMB). Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/41085.

Full text
Abstract:
The Environmental Laboratory (EL) and the Coastal and Hydraulics Laboratory (CHL) have jointly completed a number of large-scale hydrodynamic, sediment and water quality transport studies. EL and CHL have successfully executed these studies utilizing the Geophysical Scale Transport Modeling System (GSMB). The model framework of GSMB is composed of multiple process models as shown in Figure 1. Figure 1 shows that the United States Army Corps of Engineers (USACE) accepted wave, hydrodynamic, sediment and water quality transport models are directly and indirectly linked within the GSMB framework. The components of GSMB are the two-dimensional (2D) deep-water wave action model (WAM) (Komen et al. 1994, Jensen et al. 2012), data from meteorological model (MET) (e.g., Saha et al. 2010 - http://journals.ametsoc.org/doi/pdf/10.1175/2010BAMS3001.1), shallow water wave models (STWAVE) (Smith et al. 1999), Coastal Modeling System wave (CMS-WAVE) (Lin et al. 2008), the large-scale, unstructured two-dimensional Advanced Circulation (2D ADCIRC) hydrodynamic model (http://www.adcirc.org), and the regional scale models, Curvilinear Hydrodynamics in three dimensions-Multi-Block (CH3D-MB) (Luong and Chapman 2009), which is the multi-block (MB) version of Curvilinear Hydrodynamics in three-dimensions-Waterways Experiments Station (CH3D-WES) (Chapman et al. 1996, Chapman et al. 2009), MB CH3D-SEDZLJ sediment transport model (Hayter et al. 2012), and CE-QUAL Management - ICM water quality model (Bunch et al. 2003, Cerco and Cole 1994). Task 1 of the DOER project, “Modeling Transport in Wetting/Drying and Vegetated Regions,” is to implement and test three-dimensional (3D) wetting and drying (W/D) within GSMB. This technical note describes the methods and results of Task 1. The original W/D routines were restricted to a single vertical layer or depth-averaged simulations. In order to retain the required 3D or multi-layer capability of MB-CH3D, a multi-block version with variable block layers was developed (Chapman and Luong 2009). This approach requires a combination of grid decomposition, MB, and Message Passing Interface (MPI) communication (Snir et al. 1998). The MB single layer W/D has demonstrated itself as an effective tool in hyper-tide environments, such as Cook Inlet, Alaska (Hayter et al. 2012). The code modifications, implementation, and testing of a fully 3D W/D are described in the following sections of this technical note.
APA, Harvard, Vancouver, ISO, and other styles
3

Carter, T. R., C. E. Logan, and H. A. J. Russell. Three-dimensional model of dolomitization patterns in the Salina Group A-1 Carbonate and A-2 Carbonate units, Sombra Township, Lambton County, southern Ontario. Natural Resources Canada/CMSS/Information Management, 2024. http://dx.doi.org/10.4095/332363.

Full text
Abstract:
Dolomitization of carbonate rocks is a subject of considerable interest due to association with oil and gas reservoirs and Mississippi Valley Type ore deposits. Conceptual two-dimensional models of dolomitization are common in the literature, however numeric models supported by high quality data are rare to nonexistent. This paper presents three-dimensional (3-D) dolomitization patterns in the Salina Group A-1 Carbonate Unit and A-2 Carbonate Unit located in Sombra Township, Lambton County. The source data consists of percent dolomite measurements collected from 9727 drill cutting samples, stained with alizarin red, from 409 petroleum wells. Numerical interpolants of the percentage of dolomite versus limestone in the two formations are developed within the boundaries of lithostratigraphic formation layers derived from a 3-D geologic model of southern Ontario, published as GSC Open File 8795 (Carter et al. 2021b). The model was developed using Leapfrog© Works software with a 400 m grid resolution. Results show that increased proportions of dolomite vs limestone in both formations are spatially associated with the flanks and crests of pinnacles in the underlying Lockport Group carbonates, over which the B Salt has been dissolved, and the downthrown side of the Dawn Fault and Becher faults. In the A-1 Carbonate there is an increase in dolomite content over a minority of incipient reefs in the Lockport, and in the A-2 Carbonate Unit there is a gradational increase in dolomite content upwards from a basal limestone to 100% dolomite. The cross-cutting relationships of dolomite occurrence in the A-1 Carbonate on the flanks and crests of some pinnacles support a post-depositional burial diagenesis mechanism, consistent with previous interpretations. The pathway for the dolomitizing fluid was laterally through porous and permeable regional paleokarst in the underlying Lockport Group, uppermost Goat Island and Guelph formations, and upwards through the porous reefal carbonates of the pinnacles. Association of dolomitization haloes with dissolution features in halite of the overlying B Salt Unit further suggest that the dolomitizing fluids were also responsible for salt dissolution. The preferential association of dolomite with the Dawn and Becher faults suggest that movement of the dolomitizing fluid was also fault controlled. This project demonstrates the feasibility and merit of assignment and interpolation of attribute values constrained by lithostratigraphic layers in the regional 3-D geologic model of southern Ontario. Spatial associations of dolomite with other geological features are more clearly resolved than in a 2-D study.
APA, Harvard, Vancouver, ISO, and other styles
4

Carter, T. R., C. E. Logan, J. K. Clark, H. A. J. Russell, E. H. Priebe, and S. Sun. A three-dimensional bedrock hydrostratigraphic model of southern Ontario. Natural Resources Canada/CMSS/Information Management, 2022. http://dx.doi.org/10.4095/331098.

Full text
Abstract:
A hydrostratigraphic framework has been developed for southern Ontario consisting of 15 hydrostratigraphic units and 3 regional hydrochemical regimes. Using this framework, the 54 layer 3-D lithostratigraphic model has been converted into a 15 layer 3-D hydrostratigraphic model. Layers are expressed as either aquifer or aquitard based principally on hydrogeologic characteristics, in particular the permeability and the occurrence/absence of groundwater when intersected by a water well or petroleum well. Hydrostratigraphic aquifer units are sub-divided into up to three distinct hydrochemical regimes: brines (deep), brackish-saline sulphur water (intermediate), and fresh (shallow). The hydrostratigraphic unit assignment provides a standard nomenclature and definition for regional flow modelling of potable water and deeper fluids. Included in the model are: 1) 3-D hydrostratigraphic units, 2) 3-D hydrochemical fluid zones within aquifers, 3) 3-D representations of oil and natural gas reservoirs which form an integral part of the intermediate to deep groundwater regimes, 4) 3-D fluid level surfaces for deep Cambrian brines, for brines and fresh to sulphurous groundwater in the Guelph Aquifer, and the fresh to sulphurous groundwater of the Bass Islands Aquifer and Lucas-Dundee Aquifer, 5) inferred shallow karst, 6) base of fresh water, 7) Lockport Group TDS, and 8) the 3-D lithostratigraphy. The 3-D hydrostratigraphic model is derived from the lithostratigraphic layers of the published 3-D geological model. It is constructed using Leapfrog Works at 400 m grid scale and is distributed in a proprietary format with free viewer software as well as industry standard formats.
APA, Harvard, Vancouver, ISO, and other styles
5

Rigotti, Christophe, and Mohand-Saïd Hacid. Representing and Reasoning on Conceptual Queries Over Image Databases. Aachen University of Technology, 1999. http://dx.doi.org/10.25368/2022.89.

Full text
Abstract:
The problem of content management of multimedia data types (e.g., image, video, graphics) is becoming increasingly important with the development of advanced multimedia applications. Traditional database management systems are inadequate for the handling of such data types. They require new techniques for query formulation, retrieval, evaluation, and navigation. In this paper we develop a knowledge-based framework for modeling and retrieving image data by content. To represent the various aspects of an image object's characteristics, we propose a model which consists of three layers: (1) Feature and Content Layer, intended to contain image visual features such as contours, shapes, etc.; (2) Object Layer, which provides the (conceptual) content dimension of images; and (3) Schema Layer, which contains the structured abstractions of images, i.e., a general schema about the classes of objects represented in the object layer. We propose two abstract languages on the basis of description logics: one for describing knowledge of the object and schema layers, and the other, more expressive, for making queries. Queries can refer to the form dimension (i.e., information of the Feature and Content Layer) or to the content dimension (i.e., information of the Object Layer). These languages employ a variable-free notation, and they are well suited for the design, verification and complexity analysis of algorithms. As the amount of information contained in the previous layers may be huge and operations performed at the Feature and Content Layer are time-consuming, resorting to the use of materialized views to process and optimize queries may be extremely useful. For that, we propose a formal framework for testing containment of a query in a view expressed in our query language. The algorithm we propose is sound and complete and relatively efficient.
APA, Harvard, Vancouver, ISO, and other styles
6

Steudlein, Armin, Besrat Alemu, T. Matthew Evans, et al. PEER Workshop on Liquefaction Susceptibility. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2023. http://dx.doi.org/10.55461/bpsk6314.

Full text
Abstract:
Assessment of seismic ground failure potential from liquefaction is generally undertaken in three steps. First, a susceptibility evaluation determines if the soil in a particular layer is in a condition where liquefaction triggering could potentially occur. This is followed by a triggering evaluation to estimate the likelihood of triggering given anticipated seismic demands, environmental conditions pertaining to the soil layer (e.g., its depth relative to the ground water table), and the soil state. For soils where triggering can be anticipated, the final step involves assessments of the potential for ground failure and its impact on infrastructure systems. This workshop was dedicated to the first of these steps, which often plays a critical role in delineating risk for soil deposits with high fines contents and clay-silt-sand mixtures of negligible to moderate plasticity. The workshop was hosted at Oregon State University on September 8-9, 2022 and was attended by 49 participants from the research, practice, and regulatory communities. Through pre-workshop polls, extended abstracts, workshop presentations, and workshop breakout discussions, it was demonstrated that leaders in the liquefaction community do not share a common understanding of the term “susceptibility” as applied to liquefaction problems. The primary distinction between alternate views concerns whether environmental conditions and soil state provide relevant information for a susceptibility evaluation, or if susceptibility is a material characteristic. For example, a clean, dry, dense sand in a region of low seismicity is very unlikely to experience triggering of liquefaction and would be considered not susceptible by adherents of a definition that considers environmental conditions and state. The alternative, and recommended, definition focusing on material susceptibility would consider the material as susceptible and would defer consideration of saturation, state, and loading effects to a separate triggering analysis. This material susceptibility definition has the advantage of maintaining a high degree of independence between the parameters considered in the susceptibility and triggering phases of the ground failure analysis. There exist differences between current methods for assessing material susceptibility – the databases include varying amounts of test data, the materials considered are distinct (from different regions) and have been tested using different procedures, and the models can be interpreted as providing different outcomes in some cases. The workshop reached a clear consensus that new procedures are needed that are developed using a new research approach. The recommended approach involves assembling a database of information from sites for which in situ test data are available (borings with samples, CPTs), cyclic test data are available from high-quality specimens, and a range of index tests are available for important layers. It is not necessary that the sites have experienced earthquake shaking for which field performance is known, although such information is of interest where available. A considerable amount of data of this type are available from prior research studies and detailed geotechnical investigations for project sites by leading geotechnical consultants.
Once assembled and made available, these data would allow for the development of models to predict the probability of material susceptibility given various independent variables (e.g., in situ test indices, laboratory index parameters) and the epistemic uncertainty of the predictions. Such studies should be conducted in an open, transparent manner utilizing a shared database, which is a hallmark of the Next Generation Liquefaction (NGL) project.
APA, Harvard, Vancouver, ISO, and other styles
7

Berger, Rutherford C. Foundational Principles in the Development of AdH-SW3, the Three-Dimensional Shallow Water Hydrodynamics and Transport Module within the Adaptive Hydraulics/Hydrology Model. U.S. Army Engineer Research and Development Center, 2022. http://dx.doi.org/10.21079/11681/44560.

Full text
Abstract:
This report details the design and development of the three-dimensional shallow water hydrodynamics formulation within the Adaptive Hydraulics/Hydrology model (AdH-SW3) for simulation of flow and transport in rivers, estuaries, reservoirs, and other similar hydrologic environments. The report is intended to communicate principles of the model design for the interested and diligent user. The design relies upon several layers of consistency to produce a stable, accurate, and conservative model. The mesh design can handle rapid changes in bathymetry (e.g., steep-sided navigation channels in estuaries) and maintain accuracy in density-driven transport phenomena (e.g., thermal, or saline stratification and intrusion of salinity).
APA, Harvard, Vancouver, ISO, and other styles
8

Chapman and Toema. PR-266-09211-R01 Physics-Based Characterization of Lambda Sensor from Natural Gas Fueled Engines. Pipeline Research Council International, Inc. (PRCI), 2012. http://dx.doi.org/10.55274/r0010022.

Full text
Abstract:
The increasingly strict air emission regulations may require implementing Non-Selective Catalytic Reduction (NSCR) systems as a promising emission control technology for stationary rich burn spark ignition engines. Many recent experimental investigations that used NSCR systems for stationary natural gas fueled engines showed that NSCR systems were unable to consistently control the exhaust emissions level below the compliance limits. Modeling of NSCR components to better understand, and then exploit, the underlying physical processes that occur in the lambda sensor and the catalyst media is now considered an essential step toward improving NSCR system performance. This report focuses on modeling the lambda sensor that provides feedback to the air-to-fuel ratio controller. Correct interpretation of the sensor output signal is necessary to achieve consistently low emissions level. The goal of this modeling study is to improve the understanding of the physical processes that occur within the sensor, investigate the cross-sensitivity of various exhaust gas species on the sensor performance, and finally this model serves as a tool to improve NSCR control strategies. This model simulates the output from a planar switch type lambda sensor. The model consists of three modules. The first module models the multi-component mass transport through the sensor protective layer. The second module includes all the surface catalytic reactions that take place on the sensor platinum electrodes. The third module is responsible for simulating the reactions that occur on the electrolyte material and determine the sensor output voltage.
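For the third module, switch-type zirconia sensors are commonly described by a Nernst relation between the electrolyte EMF and the oxygen partial pressures on the reference and exhaust sides. The sketch below shows only that textbook relation, not the report's full three-module model with protective-layer transport and electrode kinetics.

import math

# Hedged sketch: textbook Nernst relation for a zirconia (switch-type) lambda
# sensor, mapping reference- and exhaust-side oxygen partial pressures to an
# EMF. The report's mass-transport and surface-reaction modules are not
# reproduced here.

R = 8.314    # J/(mol*K), gas constant
F = 96485.0  # C/mol, Faraday constant

def nernst_voltage(p_o2_reference, p_o2_exhaust, temperature_k):
    """EMF across the electrolyte for the four-electron oxygen reaction."""
    return (R * temperature_k) / (4.0 * F) * math.log(p_o2_reference / p_o2_exhaust)

print(nernst_voltage(0.21, 1e-18, 1000.0))  # rich exhaust: roughly 0.86 V
print(nernst_voltage(0.21, 1e-2, 1000.0))   # lean exhaust: roughly 0.07 V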
APA, Harvard, Vancouver, ISO, and other styles
9

Brenan, J. M., K. Woods, J. E. Mungall, and R. Weston. Origin of chromitites in the Esker Intrusive Complex, Ring of Fire Intrusive Suite, as revealed by chromite trace element chemistry and simple crystallization models. Natural Resources Canada/CMSS/Information Management, 2021. http://dx.doi.org/10.4095/328981.

Full text
Abstract:
To better constrain the origin of the chromitites associated with the Esker Intrusive Complex (EIC) of the Ring of Fire Intrusive Suite (RoFIS), a total of 50 chromite-bearing samples from the Black Thor, Big Daddy, Blackbird, and Black Label chromite deposits have been analysed for major and trace elements. The samples represent three textural groups, as defined by the relative abundance of cumulate silicate phases and chromite. To provide deposit-specific partition coefficients for modeling, we also report on the results of laboratory experiments to measure olivine- and chromite-melt partitioning of V and Ga, which are two elements readily detectable in the chromites analysed. Comparison of the Cr/Cr+Al and Fe/Fe+Mg of the EIC chromites and compositions from previous experimental studies indicates overlap in Cr/Cr+Al between the natural samples and experiments done at >1400°C, but significant offset of the natural samples to higher Fe/Fe+Mg. This is interpreted to be the result of subsolidus Fe-Mg exchange between chromite and the silicate matrix. However, Cr/Cr+Al shows little change from magmatic values, owing to the lack of an exchangeable reservoir for these elements. A comparison of the composition of the EIC chromites and a subset of samples from other tectonic settings reveals a strong similarity to chromites from the similarly-aged Munro Township komatiites. Partition coefficients for V and Ga are consistent with past results in that both elements are compatible in chromite (DV = 2-4; DGa ~ 3), and incompatible in olivine (DV = 0.01-0.14; DGa ~ 0.02), with values for V increasing with decreasing fO2. Simple fractional crystallization models that use these partition coefficients are developed that monitor the change in element behaviour based on the relative proportions of olivine to chromite in the crystallizing assemblage, from 'normal' cotectic proportions involving predominantly olivine, to chromite-only crystallization. Comparison of models to the natural chromite V-Ga array suggests that the overall positive correlation between these two elements is consistent with chromite formed from a Munro Township-like komatiitic magma crystallizing olivine and chromite in 'normal' cotectic proportions, with no evidence of the strong depletion in these elements expected for chromite-only crystallization. The V-Ga array can be explained if the initial magma responsible for chromite formation is slightly reduced with respect to the FMQ oxygen buffer (~FMQ-0.5), and has assimilated up to ~20% of wall-rock banded iron formation or granodiorite. Despite the evidence for contamination, results indicate that the EIC chromitites crystallized from 'normal' cotectic proportions of olivine to chromite, and therefore no specific causative link is made between contamination and chromitite formation. Instead, the development of near-monomineralic chromite layers likely involves the preferential removal of olivine relative to chromite by physical segregation during magma flow. As suggested for some other chromitite-forming systems, the specific fluid dynamic regime during magma emplacement may therefore be responsible for crystal sorting and chromite accumulation.
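The 'simple crystallization models' referred to above track melt composition as olivine and chromite are removed together. Under a standard Rayleigh fractionation assumption, with a bulk partition coefficient weighted by the olivine-to-chromite proportion, the calculation reduces to a few lines. The sketch below uses D values from the ranges quoted in the abstract, while the initial melt concentrations and the 2% chromite 'cotectic' proportion are illustrative placeholders, not values from the report.

import numpy as np

# Hedged sketch of a Rayleigh fractional-crystallization calculation for V and
# Ga in the melt, with a bulk partition coefficient weighted by the proportion
# of chromite in an olivine + chromite crystallizing assemblage.

def bulk_d(d_olivine, d_chromite, chromite_fraction):
    """Weighted bulk D for an olivine + chromite crystallizing assemblage."""
    return (1.0 - chromite_fraction) * d_olivine + chromite_fraction * d_chromite

def rayleigh_melt(c0, d_bulk, f_melt):
    """Melt concentration after fractionation: C = C0 * F**(D - 1)."""
    return c0 * f_melt ** (d_bulk - 1.0)

f = np.linspace(1.0, 0.5, 6)                      # fraction of melt remaining
d_v = bulk_d(0.08, 3.0, chromite_fraction=0.02)   # V: olivine 0.01-0.14, chromite 2-4
d_ga = bulk_d(0.02, 3.0, chromite_fraction=0.02)  # Ga: olivine ~0.02, chromite ~3
print(rayleigh_melt(200.0, d_v, f))   # V in the evolving melt (ppm, placeholder c0)
print(rayleigh_melt(10.0, d_ga, f))   # Ga in the evolving melt (ppm, placeholder c0)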
APA, Harvard, Vancouver, ISO, and other styles