Academic literature on the topic 'Source scaling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Source scaling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Source scaling"

1

Bakulin, Andrey, Dmitry Alexandrov, Christos Saragiotis, Abdullah Al Ramadan, and Boris Kashtan. "Correcting source and receiver scaling for virtual source imaging and monitoring." Geophysics 83, no. 3 (2018): Q15–Q24. http://dx.doi.org/10.1190/geo2017-0163.1.

Full text
Abstract:
Virtual source redatuming is a data-driven interferometric approach that relies on constructive and destructive interference, and as a result it is quite sensitive to input seismic trace amplitudes. Land surveys are prone to amplitude changes that are unrelated to subsurface geology (source/receiver coupling, etc.). We have determined that such variations may be particularly damaging when constructing a virtual-source signal for imaging and seismic monitoring applications, and that they need to be correctly compensated for before satisfactory images, repeatability, and proper relative amplitudes are achieved. We examine two methods to correct for these variations: a redatuming approach based on multidimensional deconvolution, and multisurvey surface-consistent (SC) scaling. Using synthetic data, we discover that the first approach can only balance time-dependent variations between repeat surveys, e.g., compensate for variable shot scaling. In contrast, a multisurvey SC approach can compensate for shot and receiver scaling within each survey and among the surveys. As a result, it eliminates redatuming artifacts and brings repeat surveys to a common amplitude level, while preserving the relative amplitudes required for quantitative interpretation of 4D amplitude differences. Applying an SC approach to a land time-lapse field data set with buried receivers from Saudi Arabia, we additionally conclude that separate SC scaling of early arrivals and deep reflections may produce a better image and better repeatability. This is likely due to the significantly different frequency content of early arrivals and deep reflections.
APA, Harvard, Vancouver, ISO, and other styles
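
As an aside to entry 1: the surface-consistent (SC) scaling discussed in this abstract is, at heart, a least-squares decomposition of log-amplitudes into source and receiver terms. Below is a minimal sketch of that idea under invented sizes and noise levels; it is not the paper's implementation.

```python
import numpy as np

# Toy surface-consistent (SC) decomposition: observed log-amplitudes are
# modeled as log A_ij = s_i + r_j (+ noise); the geology term is omitted.
rng = np.random.default_rng(0)
n_src, n_rec = 12, 15

s_true = rng.normal(0.0, 0.3, n_src)   # hidden source coupling terms
r_true = rng.normal(0.0, 0.3, n_rec)   # hidden receiver coupling terms
A = s_true[:, None] + r_true[None, :] + rng.normal(0.0, 0.05, (n_src, n_rec))

# One equation per source-receiver pair; unknowns are all s_i and r_j.
G = np.zeros((n_src * n_rec, n_src + n_rec))
obs = np.zeros(n_src * n_rec)
for i in range(n_src):
    for j in range(n_rec):
        k = i * n_rec + j
        G[k, i] = 1.0
        G[k, n_src + j] = 1.0
        obs[k] = A[i, j]

# The system has a one-dimensional null space (adding a constant to all s
# and subtracting it from all r changes nothing), so lstsq returns the
# minimum-norm solution; compare terms after removing their means.
m, *_ = np.linalg.lstsq(G, obs, rcond=None)
s_est = m[:n_src] - m[:n_src].mean()
print(np.allclose(s_est, s_true - s_true.mean(), atol=0.05))  # True
```
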
2

Liao, Lele, Guoliang Cheng, Zhaoyi Gu, and Jing Lu. "Efficient independent vector extraction of dominant source (L)." Journal of the Acoustical Society of America 151, no. 6 (2022): 4126–30. http://dx.doi.org/10.1121/10.0011746.

Full text
Abstract:
The complete decomposition performed by blind source separation is computationally demanding and superfluous when only the speech of one specific target speaker is desired. This letter proposes a computationally efficient blind source extraction method based on a fast fixed-point optimization algorithm, under the mild assumption that the average power of the source of interest outweighs that of the interfering sources. Moreover, a one-unit scaling operation is designed to solve the scaling ambiguity for source extraction. Experiments validate the efficacy of the proposed method in extracting the dominant source.
APA, Harvard, Vancouver, ISO, and other styles
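
As an aside to entry 2: the sketch below runs a generic one-unit fixed-point extraction (FastICA-style, tanh nonlinearity) on a toy two-channel mixture in which one source dominates in power. It is a stand-in for the kind of method the letter describes, not a reimplementation of it; all signals and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

# Two independent sources; s1 has clearly higher average power (the
# dominant-source assumption of the letter).
s1 = 3.0 * np.sign(rng.normal(size=n)) * rng.exponential(1.0, n)
s2 = rng.laplace(0.0, 1.0, n)
X = rng.normal(size=(2, 2)) @ np.vstack([s1, s2])   # observed mixtures

# Whiten the mixtures.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Generic one-unit fixed-point iteration (FastICA-style):
# w <- E{z g(w.z)} - E{g'(w.z)} w, then renormalize, with g = tanh.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    wz = w @ Z
    w_new = (Z * np.tanh(wz)).mean(axis=1) - (1.0 - np.tanh(wz) ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    done = abs(abs(w_new @ w) - 1.0) < 1e-12
    w = w_new
    if done:
        break

# The extracted component matches one source up to sign and scale; that
# sign/scale indeterminacy is the ambiguity a one-unit scaling step
# like the letter's is designed to resolve.
y = w @ Z
print(f"|corr with s1| = {abs(np.corrcoef(y, s1)[0, 1]):.3f}, "
      f"|corr with s2| = {abs(np.corrcoef(y, s2)[0, 1]):.3f}")
```
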
3

Ide, Satoshi. "Scaling Relations for Earthquake Source Process." Zisin (Journal of the Seismological Society of Japan. 2nd ser.) 61, Supplement (2009): 329–38. http://dx.doi.org/10.4294/zisin.61.329.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Thingbaijam, Kiran Kumar S., P. Martin Mai, and Katsuichiro Goda. "New Empirical Earthquake Source‐Scaling Laws." Bulletin of the Seismological Society of America 107, no. 5 (2017): 2225–46. http://dx.doi.org/10.1785/0120170017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Popescu, Emilia, Anica Otilia Placinta, Felix Borleanu, and Mircea Radulian. "Repeated Earthquakes in the Vrancea Subcrustal Source and Source Scaling." IOP Conference Series: Earth and Environmental Science 95 (December 2017): 032005. http://dx.doi.org/10.1088/1755-1315/95/3/032005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Somerville, P. G., J. P. McLaren, L. V. LeFevre, R. W. Burger, and D. V. Helmberger. "Comparison of source scaling relations of eastern and western North American earthquakes." Bulletin of the Seismological Society of America 77, no. 2 (1987): 322–46. http://dx.doi.org/10.1785/bssa0770020322.

Full text
Abstract:
Source scaling relations have been obtained for earthquakes in eastern North America and other continental interiors, and compared with a relation obtained for earthquakes in western North America. The scaling relation for eastern North American earthquakes was constructed from measurements of seismic moment and source duration obtained by the waveform modeling of seismic body waves. The events used include nine events of mbLg magnitude 4.7 to 5.8 that occurred after 1960, and four earlier events with magnitudes between 5.5 and 6.6. The scaling relation for events in other continental interiors was used for comparative purposes and to provide constraints for large magnitudes. Detailed analysis of the uncertainties in the scaling relations has allowed the resolution of two important issues concerning the source scaling of earthquakes in eastern North America. First, the source characteristics of earthquakes in eastern North America and other continental interiors are consistent with constant stress drop scaling, and are inconsistent with nonconstant scaling models such as that of Nuttli (1983). Second, the stress drops of earthquakes in eastern North America and other continental interiors are not significantly different from those of earthquakes in western North America, and have median values of approximately 100 bars. The source parameters of earthquakes in eastern North America are consistent with a single constant stress drop scaling relation, whereas the source parameters of earthquakes in western North America are much more variable and show significant departures from an average scaling relation in which stress drop decreases slightly with seismic moment.
APA, Harvard, Vancouver, ISO, and other styles
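
As an aside to entry 6: the constant-stress-drop scaling discussed in this abstract can be made concrete with the standard Brune (1970) relations, under which M0 scales as fc^-3 at fixed stress drop. The shear-wave speed and the moment/corner-frequency pairs below are assumed values chosen to land near the ~100 bar median quoted above.

```python
import numpy as np

BETA = 3500.0  # assumed crustal shear-wave speed, m/s

def stress_drop_pa(m0_nm: float, fc_hz: float) -> float:
    """Brune (1970): r = 0.37*beta/fc, stress drop = 7*M0/(16*r**3)."""
    r = 0.37 * BETA / fc_hz
    return 7.0 * m0_nm / (16.0 * r ** 3)

# Constant stress drop implies M0 ~ fc**-3: each 1000x increase in moment
# lowers the corner frequency by 10x, leaving the stress drop unchanged.
for m0, fc in [(1e13, 17.0), (1e16, 1.7), (1e19, 0.17)]:
    bars = stress_drop_pa(m0, fc) / 1e5
    print(f"M0 = {m0:.0e} N*m, fc = {fc:5.2f} Hz -> {bars:.0f} bars")
```
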
7

Ji, Chen, and Ralph J. Archuleta. "A Source Physics Interpretation of Nonself-Similar Double-Corner-Frequency Source Spectral Model JA19_2S." Seismological Research Letters 93, no. 2A (2022): 777–86. http://dx.doi.org/10.1785/0220210098.

Full text
Abstract:
We investigate the relation between the kinematic double-corner-frequency source spectral model JA19_2S (Ji and Archuleta, 2020) and the static fault geometry scaling relations proposed by Leonard (2010). We find that the nonself-similar low-corner-frequency scaling relation of the JA19_2S model can be explained using the fault length scaling relation of Leonard's model combined with an average rupture velocity ~70% of shear-wave speed for earthquakes with 5.3 < M < 6.9. Earthquakes consistent with both models have magnitude-independent average static stress drop and average dynamic stress drop around 3 MPa. Their scaled energy ẽ is not a constant. The decrease of ẽ with magnitude can be fully explained by the magnitude dependence of the fault aspect ratio. The high-frequency source radiation is generally controlled by seismic moment, static stress drop, and dynamic stress drop but is further modulated by the fault aspect ratio and the relative location of the hypocenter. Based on these two models, the commonly quoted average rupture velocity of 70%–80% of shear-wave speed implies predominantly unilateral rupture.
APA, Harvard, Vancouver, ISO, and other styles
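
As an aside to entry 7: a rough numeric illustration of the duration argument above, combining a generic length-magnitude relation (Leonard/Wells-Coppersmith type, with an assumed intercept) with a rupture velocity of 70% of the shear-wave speed to get a low corner frequency fc1 ~ Vr/L. The coefficients are illustrative, not the papers' calibrated values.

```python
# Assumed generic length-magnitude relation: log10 L[km] = 0.5*Mw + C,
# with C chosen here only for illustration, and rupture velocity 0.7*beta.
BETA = 3.5   # shear-wave speed, km/s
C = -1.88    # assumed intercept

for mw in (5.5, 6.0, 6.5):
    L = 10.0 ** (0.5 * mw + C)      # rupture length, km
    T = L / (0.7 * BETA)            # rupture duration, s
    print(f"Mw {mw}: L = {L:5.1f} km, T = {T:4.1f} s, fc1 ~ 1/T = {1.0/T:.2f} Hz")
```
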
8

Smith, Chad M., and Thomas B. Gabrielson. "Scaling of a gas-combustion infrasound source." Journal of the Acoustical Society of America 143, no. 3 (2018): 1808. http://dx.doi.org/10.1121/1.5035922.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Gusev, A., M. Radulian, M. Rizescu, and G. F. Panza. "Source scaling of intermediate-depth Vrancea earthquakes." Geophysical Journal International 151, no. 3 (2002): 879–89. http://dx.doi.org/10.1046/j.1365-246x.2002.01816.x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Vaze, Rahul, and Jayakrishnan Nair. "Network Speed Scaling." ACM SIGMETRICS Performance Evaluation Review 48, no. 3 (2021): 61–62. http://dx.doi.org/10.1145/3453953.3453967.

Full text
Abstract:
Speed scaling for a network of servers represented by a directed acyclic graph is considered. Jobs arrive at a source server, with a specified destination server, and are defined to be complete once they are processed by all servers on any feasible path between the source and the corresponding destination. Each server has variable speed, with power consumption function P, a convex increasing function of the speed. The objective is to minimize the sum of the flow time (summed across jobs) and the energy consumed by all the servers, which depends on how jobs are routed, as well as how server speeds are set. Algorithms are derived for both the worst case and stochastic job arrivals setting, whose competitive ratio depends only on the power functions and path diversity in the network, but is independent of the workload.
APA, Harvard, Vancouver, ISO, and other styles
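
As an aside to entry 10: for a single server with power P(s) = s^alpha, the classic speed-scaling rule is to run at the speed where power equals the number of unfinished jobs, s = n^(1/alpha). The toy simulation below demonstrates only that rule; it is not the paper's network algorithm, and the job list is invented.

```python
# Single-server toy with power P(s) = s**alpha: run at the speed where the
# energy cost rate equals the flow-time cost rate, P(s) = n, i.e.
# s = n**(1/alpha) -- the standard rule from the speed-scaling literature.
def speed(n_jobs: int, alpha: float = 3.0) -> float:
    return n_jobs ** (1.0 / alpha) if n_jobs else 0.0

def simulate(jobs, alpha=3.0, dt=1e-3):
    """jobs: list of (arrival_time, size). Returns (flow_time, energy)."""
    t, flow, energy = 0.0, 0.0, 0.0
    pending = sorted(jobs)      # future arrivals, served FIFO
    active = []                 # remaining sizes of jobs in the system
    while pending or active:
        while pending and pending[0][0] <= t:
            active.append(pending.pop(0)[1])
        n = len(active)
        s = speed(n, alpha)
        if active:
            active[0] -= s * dt             # serve the head-of-line job
            if active[0] <= 0.0:
                active.pop(0)
        flow += n * dt                      # each job in system accrues delay
        energy += s ** alpha * dt
        t += dt
    return flow, energy

flow, energy = simulate([(0.0, 1.0), (0.1, 0.5), (0.2, 2.0)])
print(f"flow time = {flow:.2f}, energy = {energy:.2f}")
```
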

Dissertations / Theses on the topic "Source scaling"

1

Edmundsson, Niklas. "Scaling a Content Delivery system for Open Source Software." Thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-109779.

Full text
Abstract:
This master’s thesis addresses scaling of content distribution sites. In a case study, the thesis investigates issues encountered on ftp.acc.umu.se, a content distribution site run by the Academic Computer Club (ACC) of Umeå University. This site is characterized by the unusual situation of the external network connectivity having higher bandwidth than the components of the system, which differs from the norm of the external connectivity being the limiting factor. To address this imbalance, a caching approach is proposed to architect a system that is able to fully utilize the available network capacity, while still providing a homogeneous resource to the end user. A set of modifications are made to standard open source solutions to make caching perform as required, and results from production deployment of the system are evaluated. In addition, time series analysis and forecasting techniques are introduced as tools to improve the system further, resulting in the implementation of a method to automatically detect bursts and handle load distribution of unusually popular files.
APA, Harvard, Vancouver, ISO, and other styles
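
As an aside to this thesis entry: the burst detection mentioned at the end of the abstract can be sketched as a per-file rate baseline that flags files whose request rate jumps well above their smoothed history. Thresholds, smoothing factor, and file names below are illustrative assumptions, not the thesis' implementation.

```python
from collections import defaultdict

# Toy burst detector: flag a file when its request rate in the current
# window exceeds K times its smoothed historical rate.
ALPHA = 0.3   # EWMA smoothing factor
K = 4.0       # burst threshold multiplier
MIN_RATE = 5  # ignore files with fewer requests than this per window

ewma = defaultdict(float)

def process_window(counts: dict[str, int]) -> list[str]:
    """counts: requests per file in the last window. Returns bursting files."""
    bursting = []
    for f, c in counts.items():
        baseline = ewma[f]
        if c >= MIN_RATE and baseline > 0 and c > K * baseline:
            bursting.append(f)
        ewma[f] = ALPHA * c + (1 - ALPHA) * baseline
    return bursting

print(process_window({"ubuntu.iso": 10, "readme.txt": 1}))   # [] (no baseline yet)
print(process_window({"ubuntu.iso": 12, "readme.txt": 1}))   # []
print(process_window({"ubuntu.iso": 300, "readme.txt": 2}))  # ['ubuntu.iso']
```
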
2

Orefice, Antonella. "Refined Estimation of Earthquake Source Parameters: Methods, Applications and Scaling Relationships." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4286/1/orefice_antonella_tesi.pdf.

Full text
Abstract:
The objective of this thesis is the refined estimation of earthquake source parameters. For this purpose we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate spectral parameters, that is, corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach which is combined with a multi-step, non-linear inversion strategy and includes the correction for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11–10^14 N·m, recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when we are not able to model the propagation accurately. In this case the empirical Green function approach is a very useful tool to study seismic source properties, because Empirical Green Functions (EGFs) allow the contribution of propagation and site effects to the signal to be represented without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw = 6.3 2009 L'Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L'Aquila sequence with moment magnitudes ranging between 3 and 5.6; and 3) a small event, the Mw = 2.9 Laviano mainshock (Southern Italy).
APA, Harvard, Vancouver, ISO, and other styles
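
As an aside to this thesis entry: the frequency-domain step described above (estimating corner frequency and low-frequency spectral amplitude) can be sketched by least-squares fitting a Brune omega-square model with a whole-path attenuation term to a displacement spectrum. The sketch fits synthetic data and omits the thesis' multi-step inversion and site corrections.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc, t_star):
    """Omega-square spectrum with whole-path attenuation exp(-pi*f*t*)."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * t_star)

# Synthetic "observed" displacement spectrum with multiplicative noise.
rng = np.random.default_rng(2)
f = np.logspace(-1, 1.5, 200)                 # 0.1 to ~31.6 Hz
spec = brune(f, 1e-6, 4.0, 0.02) * rng.lognormal(0.0, 0.1, f.size)

popt, _ = curve_fit(brune, f, spec, p0=[spec.max(), 1.0, 0.01],
                    bounds=([0.0, 0.1, 0.0], [np.inf, 30.0, 0.2]))
print(f"Omega0 = {popt[0]:.2e}, fc = {popt[1]:.2f} Hz, t* = {popt[2]:.3f} s")
```
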
3

Mrowczynski, Piotr. "Scaling cloud-native Apache Spark on Kubernetes for workloads in external storages." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237455.

Full text
Abstract:
CERN Scalable Analytics Section currently offers shared YARN clusters to its users for monitoring, security and experiment operations. YARN clusters with data in HDFS are difficult to provision, complex to manage and resize. This imposes new data and operational challenges to satisfy future physics data processing requirements. As of 2018, there were over 250 PB of physics data stored in CERN's mass storage called EOS. The Hadoop-XRootD Connector allows reading, over the network, data stored in CERN EOS. CERN's on-premise private cloud based on OpenStack allows provisioning of on-demand compute resources. The emergence of technologies such as Containers-as-a-Service in OpenStack Magnum and support for Kubernetes as a native resource scheduler for Apache Spark gives the opportunity to increase workflow reproducibility on different compute infrastructures with the use of containers, reduce the operational effort of maintaining computing clusters and increase resource utilization via elastic cloud resource provisioning. This trades off these operational features against the data locality known from traditional systems such as Spark/YARN with data in HDFS. In the proposed architecture of cloud-managed Spark/Kubernetes with data stored in external storage systems such as EOS, Ceph S3 or Kafka, physicists and other CERN communities can spawn and resize Spark/Kubernetes clusters on demand, with fine-grained control of Spark applications. This work focuses on the Kubernetes CRD Operator for idiomatically defining and running Apache Spark applications on Kubernetes, with automated scheduling and on-failure resubmission of long-running applications. The Spark Operator was introduced with the design principle of allowing Spark on Kubernetes to be easy to deploy, scale and maintain, with usability similar to Spark/YARN. An analysis of concerns related to non-cluster-local persistent storage and memory handling has been performed. The scalability of the architecture has been evaluated on the use case of a sustained workload, physics data reduction, with files in ROOT format stored in CERN's mass storage called EOS. A series of microbenchmarks has been performed to evaluate the architecture's properties compared to a state-of-the-art Spark/YARN cluster at CERN. Finally, Spark on Kubernetes workload use cases have been classified, and possible bottlenecks and requirements identified.
APA, Harvard, Vancouver, ISO, and other styles
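
As an aside to this thesis entry: a minimal sketch of the compute-storage separation the thesis evaluates, i.e., a Spark session reading from an external object store instead of cluster-local HDFS. It assumes pyspark and the hadoop-aws S3A connector are installed; the bucket path is hypothetical, and a real deployment would use a k8s:// master with container images rather than local mode.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")  # k8s://https://<apiserver> in a Kubernetes deployment
    .appName("external-storage-demo")
    # Anonymous access for illustration only; real jobs would configure
    # proper credentials for the object store.
    .config("spark.hadoop.fs.s3a.aws.credentials.provider",
            "org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider")
    .getOrCreate()
)

# Reading from an external store decouples compute scaling from storage,
# the trade-off against HDFS data locality discussed in the abstract.
df = spark.read.parquet("s3a://example-bucket/events/")  # hypothetical path
print(df.count())
spark.stop()
```
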
4

Zhou, Gengji. "Power scaling of ultrafast mid-IR source enabled by high-power fiber laser technology." Doctoral thesis, supervised by Franz X. Kärtner. Hamburg: Staats- und Universitätsbibliothek Hamburg, 2017. http://d-nb.info/1143868781/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Noda, Shunta. "Relation of Earthquake Growth and Final Size with Applications to Magnitude Determination for Early Warning." Kyoto University, 2020. http://hdl.handle.net/2433/259707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Huijts, Julius. "Broadband Coherent X-ray Diffractive Imaging and Developments towards a High Repetition Rate mid-IR Driven keV High Harmonic Source." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS154/document.

Full text
Abstract:
Soft X-ray sources based on high harmonic generation are up to now unique tools to probe dynamics in matter on femto- to attosecond timescales. High harmonic generation is a process in which an intense femtosecond laser pulse is frequency-upconverted to the UV and soft X-ray region through a highly nonlinear interaction in a gas. Thanks to their excellent spatial coherence, these sources can be used for lensless imaging, which has already led to impressive results. To use these sources to the fullest of their potential, a number of challenges need to be met: their brightness and maximum photon energy need to be increased, and the lensless imaging techniques need to be modified to cope with the large bandwidth of these sources. For the latter, a novel approach is presented, in which broadband diffraction patterns are rendered monochromatic through a numerical treatment based solely on the spectrum and the assumption of a spatially non-dispersive sample. This approach is validated through a broadband lensless imaging experiment on a supercontinuum source in the visible, in which a binary sample was properly reconstructed through phase retrieval for a source bandwidth of 11%. Through simulations, the numerical monochromatization method is shown to work for hard X-rays as well, with a simplified semiconductor lithography mask as the sample. A potential application to lithography-mask inspection on an inverse Compton scattering source is proposed, although the analysis concludes that the current source lacks the brightness for the proposal to be realistic. Simulations with sufficient brightness show that the sample is well reconstructed up to 10% spectral bandwidth at 8 keV. In an extension of these simulations, an extended lithography mask sample is reconstructed through ptychography, showing that the monochromatization method can be applied in combination with different lensless imaging techniques. An experimental validation with hard X-rays was attempted through two synchrotron experiments, whose resulting diffraction patterns after numerical monochromatization look promising; the phase retrieval process and data treatment, however, require additional effort. An important part of the thesis is dedicated to the extension of high harmonic sources to higher photon energies and increased brightness. This exploratory work is performed towards the realization of a compact high harmonic source on a high repetition rate mid-IR OPCPA laser system, which sustains higher average power and longer wavelengths compared to ubiquitous Ti:Sapphire laser systems. High repetition rates are desirable for numerous applications involving the study of rare events. The use of mid-IR wavelengths (3.1 μm in this work) promises extension of the generated photon energies to the kilo-electronvolt level, allowing shorter pulses, covering more X-ray absorption edges and improving the attainable spatial resolution for imaging. However, high repetition rates come with low pulse energies, which constrains the generation process. Generation with longer wavelengths is challenging due to the significantly lower dipole response of the gas. To cope with these challenges, a number of experimental configurations are explored theoretically and experimentally: free-focusing in a gas jet; free-focusing in a gas cell; soliton compression and high harmonic generation combined in a photonic crystal fiber; and separated soliton compression in a photonic crystal fiber with high harmonic generation in a gas cell. First results on soliton compression down to 26 fs and lower harmonics up to the seventh order are presented. Together, these results represent a step towards ultrafast lensless X-ray imaging on table-top sources and towards an extension of the capabilities of these sources.
APA, Harvard, Vancouver, ISO, and other styles
7

Perrin, Clément. "Relations entre propriétés des failles et propriétés des forts séismes." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4045/document.

Full text
Abstract:
I examine the relations between the properties of long-term geological faults and the properties of the large earthquakes these faults produce. I have gathered available seismological information on large historical earthquakes worldwide and mapped in detail, on satellite images, both the long-term fault and the rupture traces. The combined analysis of the data shows that: i) long-term faults have a number of generic properties (arrangement of overall fault networks, lateral segmentation of fault traces, form of cumulative slip distribution, etc.); ii) large earthquakes also have generic properties (similarity of the envelope shape of coseismic slip-length profiles, of the decrease in rupture width along rupture length, of the number of broken segments, of the stress drop on broken segments, of the relative distance between hypocenter and zone of maximum slip, etc.); iii) the structural maturity of the faults is the tectonic property that most impacts the behavior of large earthquakes. Maturity likely acts by reducing both the static friction and the geometric complexity of the fault plane. It partly governs the location of the earthquake initiation, the location and amplitude of the maximum coseismic slip, the direction of the coseismic slip decrease, the rupture propagation efficiency and speed, and the number of major fault segments that are broken, and hence the rupture length and its overall stress drop. To understand the physics of earthquakes, it thus seems necessary to analyze jointly the tectonic properties of the broken faults and the seismological properties of the earthquakes they produce.
APA, Harvard, Vancouver, ISO, and other styles
8

Cieslak, Rafal. "Power scaling of novel fibre sources." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/351186/.

Full text
Abstract:
This thesis explores novel fibre-based coherent light source architectures and strategies for scaling output power. The research focuses on fibre-based amplified spontaneous emission (ASE) sources with broadband output in the ~1 μm spectral region and on internally-frequency-doubled fibre lasers emitting in the visible (green) wavelength range. Part I, Spectrum-controllable fibre-based amplified spontaneous emission sources, presents the modelling, development and characterisation of a versatile ASE source based on Yb-doped fibre gain stages, using power-efficient means for spectrum control. The experiments culminated in a versatile seed source with polarized output and a reasonable degree of spectral control. In its final configuration, the ASE source was capable of producing either a broad spectrum in the 1-1.1 μm band with a full-width at half-maximum of 15-40 nm and output power > 1 W, or single/multiple narrow lines with a full-width at half-maximum ranging from several nanometres to < 0.05 nm and output power spectral densities of up to 100 mW/nm. The output power was temporally stable, with fluctuations at the level of < 0.3-0.8% of the total output power. Very high spectral stability was obtained, limited mostly by the mechanical stability of the external cavity. The output beam was nearly diffraction limited with M2 ≈ 1.1. Part II, Efficient intracavity frequency doubling schemes for continuous-wave fibre lasers, introduces a novel concept for resonant enhancement of the intracavity power in high-power continuous-wave fibre lasers that is suitable for a wide range of applications. Using this concept, efficient frequency doubling in continuous-wave Yb-doped fibre lasers has been demonstrated, and techniques for using it in devices based on both robustly single-mode and multi-mode fibres have been developed. Finally, this thesis presents wavelength tuning of continuous-wave Yb-doped fibre lasers over ~19 nm in the green spectral region and scaling of the generated second-harmonic power up to ~19 W, with more than 21% pump to second-harmonic conversion efficiency.
APA, Harvard, Vancouver, ISO, and other styles
9

Erez, Giacomo. "Modélisation du terme source d'incendie : montée en échelle à partir d'essais de comportement au feu vers l'échelle réelle : approche "modèle", "numérique" et "expérimentale"." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0189.

Full text
Abstract:
Numerical simulations can provide valuable information to fire investigators, but only if the fire source is precisely defined. This can be done through full- or small-scale testing. The latter is often preferred because these tests are easier to perform, but their results have to be extrapolated in order to represent full-scale fire behaviour. Various approaches have been proposed to perform this upscaling. An example is pyrolysis models, which involve a detailed description of condensed-phase reactions. However, these models are not ready yet for investigation applications. This is why another approach was chosen for the work presented here, employing a heat transfer model: the prediction of the mass loss rate for a material is determined based on a heat balance. This principle explains the two-part structure of this study: first, a detailed characterisation of heat transfers is performed; then, the influence of these heat transfers on thermal decomposition is studied. The first part focuses on thermal radiation because it is the leading mechanism of flame spread. Flame radiation was characterised for several fuels (kerosene, diesel, heptane, polyurethane foam and wood) and many fire sizes (from 0.3 m up to 3.5 m wide). Measurements included visible video recordings, multispectral opacimetry and infrared spectrometry, which allowed the determination of a simplified flame shape as well as its emissive power. These data were then used in a model (Monte Carlo method) to predict incident heat fluxes at various locations. These values were compared to the measurements and showed good agreement, thus proving that the main phenomena governing flame radiation were captured and reproduced, for all fire sizes. Because the final objective of this work is to provide a comprehensive fire simulation tool, software already available, namely Fire Dynamics Simulator (FDS), was evaluated regarding its ability to model radiative heat transfers. This was done using the data and knowledge gathered before, and showed that the code could predict incident heat fluxes reasonably well. It was thus chosen to use FDS and its radiation model for the rest of this work. The second part aims at correlating thermal decomposition to thermal radiation. This was done by performing cone calorimeter tests on polyurethane foam and using the results to build a model that allows the prediction of the MLR (mass loss rate) as a function of time and incident heat flux. Larger tests were also performed to study flame spread on top of and inside foam samples, through various measurements: video processing, temperature analysis, photogrammetry. The results suggest that using small-scale data to predict full-scale fire behaviour is a reasonable approach for the scenarios being investigated. It was thus put into practice using FDS, by modifying the source code to allow for the use of a thermal model, in other words defining the fire source based on the model predicting MLR as a function of time and incident heat flux. The results of the first simulations are promising, and predictions for more complex geometries will be evaluated to validate this method.
APA, Harvard, Vancouver, ISO, and other styles
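
As an aside to this thesis entry: the "thermal model" route described above, predicting mass loss rate (MLR) from time and incident heat flux, can be sketched as interpolation over a table of cone-calorimeter MLR curves. Every curve and number below is a synthetic placeholder, not measured data from the thesis.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

times = np.linspace(0.0, 300.0, 61)       # s
fluxes = np.array([25.0, 35.0, 50.0])     # kW/m2, typical cone settings

def fake_mlr(t, q):
    """Synthetic MLR curve: earlier, stronger peak at higher flux (invented)."""
    peak = 0.1 + 0.004 * q
    t_peak = 120.0 - 1.2 * q
    return peak * np.exp(-((t - t_peak) / 40.0) ** 2)

# Tabulate "measured" curves, then interpolate in (flux, time).
table = np.array([[fake_mlr(t, q) for t in times] for q in fluxes])
mlr = RegularGridInterpolator((fluxes, times), table)

# Predict the MLR history for an intermediate incident heat flux.
q_query = 42.0
history = mlr(np.column_stack([np.full_like(times, q_query), times]))
print(f"peak MLR at {q_query} kW/m2: {history.max():.3f} (arbitrary units)")
```
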

Books on the topic "Source scaling"

1

Xie, J., S. Baqer, U.S. Nuclear Regulatory Commission Office of Nuclear Regulatory Research Division of Engineering Technology, and St. Louis University Department of Earth and Atmospheric Sciences, eds. Lg Excitation, Attenuation, and Source Spectral Scaling in Central and Eastern North America. Division of Engineering Technology, Office of Nuclear Regulatory Research, U.S. Nuclear Regulatory Commission, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gitonga, Stephen, Elisabeth Clemens, and United Nations Development Programme, eds. Expanding Access to Modern Energy Services: Replicating, Scaling Up and Mainstreaming at the Local Level: Lessons from Community-Based Energy Initiatives. United Nations Development Programme, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Alternative Energy Promotion Centre (Nepal) and Centre for Energy Studies, Institute of Engineering, Tribhuvan University, eds. Proceedings of the Third International Conference on Addressing Climate Change for Sustainable Development through Up-scaling Renewable Energy Technologies (RETRUD-11): October 12-14, 2011, Kathmandu, Nepal. RETRUD-11 Conference Secretariat, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bekunda, Mateete, Irmgard Hoeschle-Zeledon, and Jonathan Odhong, eds. Sustainable agricultural intensification: a handbook for practitioners in East and Southern Africa. CABI, 2022. http://dx.doi.org/10.1079/9781800621602.0000.

Full text
Abstract:
This book presents some of the improved agricultural technologies that were validated by the Africa RISING Project in East and Southern Africa (ESA), focusing on smallholder farmers in Malawi, Tanzania, and Zambia, and working in partnership with development (scaling) institutions. It consists of 11 chapters. Chapter 1 shows how gender concerns must be woven into all sustainable intensification (SI) interventions to produce equitable outcomes. It describes activities to enhance women's participation, measure the benefits, and transform gender relations. Chapter 2 describes the performance of new cereal and legume crop varieties introduced by Africa RISING into agroecosystems in which they had not been tested before. Chapter 3 presents technologies to diversify the common maize-dominated cropping systems and address human nutrition, improve soil organic matter, and maximize the benefits of applying fertilizer. Chapter 4 presents technologies for replacing the nutrients lost from cropped fields with external fertilizer sources in a manner that minimizes the consequences of too little or too much application. Chapter 5 is about soil conservation. Chapter 6 presents conservation agriculture, which can help smallholder farmers build better resilience to the consequences of climate change and variable weather. Improved technologies for drying, shelling, and hermetic storage of grain are presented in Chapter 7. Chapter 8 provides information to help farmers use outputs from crop production systems to formulate supplementary feed. Chapter 9 follows with technologies that allow well-planned nutrition-specific interventions (recipes) to utilize various livestock and crop products to enhance family nutrition, with specific attention paid to diets for children. Chapter 10 presents examples from the preceding chapters to illustrate the potential impacts of interconnected technologies. Lastly, Chapter 11 presents experiences and lessons learned from using these approaches to transfer and scale the technologies.
APA, Harvard, Vancouver, ISO, and other styles
5

Chodorow, Kristina. Scaling MongoDB. O'Reilly Media, Incorporated, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chodorow, Kristina. Scaling MongoDB: Sharding, Cluster Setup, and Administration. O'Reilly Media, Incorporated, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lg Excitation, Attenuation, and Source Spectral Scaling in Central and Eastern North America. United States Government Printing Office, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Karambelkar, Hrishikesh. Scaling Big Data with Hadoop and Solr - Second Edition. Packt Publishing, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Kimmins, Mika. Scaling Python with Dask: From Data Science to Machine Learning. O'Reilly Media, Incorporated, 2023.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Source scaling"

1

Boore, D. M. "The Effect of Finite Bandwidth on Seismic Scaling Relationships." In Earthquake Source Mechanics. American Geophysical Union, 2013. http://dx.doi.org/10.1029/gm037p0275.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

McGarr, A. "Some Observations Indicating Complications in the Nature of Earthquake Scaling." In Earthquake Source Mechanics. American Geophysical Union, 2013. http://dx.doi.org/10.1029/gm037p0217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Denny, Marvin D., and Lane R. Johnson. "The explosion seismic source function: Models and scaling laws reviewed." In Explosion Source Phenomenology. American Geophysical Union, 1991. http://dx.doi.org/10.1029/gm065p0001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Patton, Howard J. "Seismic moment estimation and the scaling of the long-period explosion source spectrum." In Explosion Source Phenomenology. American Geophysical Union, 1991. http://dx.doi.org/10.1029/gm065p0171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fletcher, J. B., L. C. Haar, F. L. Vernon, J. N. Brune, T. C. Hanks, and J. Berger. "The Effects of Attenuation on the Scaling of Source Parameters for Earthquakes at Anza, California." In Earthquake Source Mechanics. American Geophysical Union, 2013. http://dx.doi.org/10.1029/gm037p0331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fellhofer, Stephan, Annemarie Harzl, and Wolfgang Slany. "Scaling and Internationalizing an Agile FOSS Project: Lessons Learned." In Open Source Systems: Adoption and Impact. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17837-0_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Cris, C., E. Born, D. Frey, R. Mini, H. Neuenschwander, and W. Volken. "A Scaling Method for Multiple Source Models (SMSM)." In The Use of Computers in Radiation Therapy. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-59758-9_156.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dobrynina, Anna A., Vyacheslav V. Cheptsov, Vladimir A. Sankov, Vladimir V. Chechelnitsky, and Rajib Biswas. "Source parameters and scaling relation of the Baikal rift's earthquakes." In Recent Developments in Using Seismic Waves as a Probe for Subsurface Investigations. CRC Press, 2022. http://dx.doi.org/10.1201/9781003177692-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Martínez-Mekler, G., G. F. Al-Noaimi, and A. Robledo. "New Source of Corrections to Scaling for Micellar Solution Critical Behavior." In Phase Transitions in Soft Condensed Matter. Springer US, 1989. http://dx.doi.org/10.1007/978-1-4613-0551-4_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Patton, Howard J. "Regional Magnitude Scaling, Transportability, and Ms:mb Discrimination at Small Magnitudes." In Monitoring the Comprehensive Nuclear-Test-Ban Treaty: Source Processes and Explosion Yield Estimation. Birkhäuser Basel, 2001. http://dx.doi.org/10.1007/978-3-0348-8310-8_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Source scaling"

1

Lotfimarangloo, S., R. D. Badawi, and S. L. Bowen. "Absolute Scatter Scaling for Transmission Source Enhanced Attenuation Correction." In 2024 IEEE Nuclear Science Symposium (NSS), Medical Imaging Conference (MIC) and Room Temperature Semiconductor Detector Conference (RTSD). IEEE, 2024. http://dx.doi.org/10.1109/nss/mic/rtsd57108.2024.10656003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Han, Shujie, Zirui Ou, Qun Huang, and Patrick P. C. Lee. "Scaling Disk Failure Prediction via Multi-Source Stream Mining." In 2024 IEEE International Conference on Data Mining (ICDM). IEEE, 2024. https://doi.org/10.1109/icdm59182.2024.00020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Shaw, L. Brandon, Rafael R. Gattass, Augustus X. Carlson, et al. "Power Scaling of a Four-Wave Mixing Source at 730 nm." In Advanced Solid State Lasers. Optica Publishing Group, 2024. https://doi.org/10.1364/assl.2024.jw2a.7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Smyth, Frank. "Integrated Comb Lasers for Coherent Transceiver Scaling." In Signal Processing in Photonic Communications. Optica Publishing Group, 2024. https://doi.org/10.1364/sppcom.2024.spth1h.5.

Full text
Abstract:
An integrated comb laser assembly (iCLA) based on a monolithic InP gain switched comb source and demultiplexer PIC is presented. Its design, characterisation, and demonstration in coherent optical communication and mmWave generation applications are presented. Full-text article not available; see video presentation
APA, Harvard, Vancouver, ISO, and other styles
5

Das, Gaurav, Jerzy Kosinski, Ronald D. Springer, and Andre Anderko. "Scaling Risk Assessment and Remediation in Geothermal Operations Using a Novel Theoretical Approach." In CONFERENCE 2024. AMPP, 2024. https://doi.org/10.5006/c2024-20701.

Full text
Abstract:
Geothermal power holds immense potential as a renewable energy source with low emissions, utilizing the Earth's natural heat to generate electricity. With growing concerns over climate change and the need for sustainable energy alternatives, geothermal power can provide energy independence, economic benefits, and versatility. Mineral scaling has been recognized as a major hindrance to seamless geothermal operations due to the harsh and diverse operating conditions, which can cause significant issues resulting in higher operating costs while reducing the efficiency and overall economic feasibility of energy production. Therefore, there is a growing need for a tool that can help in designing preventive and remedial strategies against mineral scaling and, in effect, ensure seamless operation while reducing costs associated with equipment failure. A few of the most commonly occurring scales in geothermal operations across different regions are amorphous silica (SiO2), metal silicates, and calcite (CaCO3). Formulating an effective theoretical framework to identify the critical conditions and characteristics of scaling solids is imperative in devising preventive and/or remedial measures. This multi-faceted problem requires the simultaneous modeling of solution thermodynamics and kinetics. In this work, we propose a novel modeling scheme through the incorporation of classical nucleation theory (CNT) with the Mixed-Solvent Electrolyte (MSE) thermodynamic model. While MSE assesses scaling risk based on the effective evaluation of the solution chemistry, CNT provides kinetic information, i.e., an estimate of induction time, based on the continuum thermodynamics treatment of clusters. This work focuses on applying the novel theoretical approach to provide accurate thermodynamic modeling of the scales, and on subsequent applications of the kinetic modeling in deriving remedial techniques. The theoretical framework aims to provide a consistent approach for testing various what-if scenarios and to aid in making the best operational decisions in the development of flow assurance.
APA, Harvard, Vancouver, ISO, and other styles
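
As an aside to entry 5: the classical nucleation theory (CNT) ingredient described in this abstract can be sketched with the textbook homogeneous nucleation rate, taking the induction time as the reciprocal of rate times sampled volume. The interfacial energy, molecular volume, and prefactor are generic placeholders, not the paper's calibrated MSE/CNT model; the point is only the steep dependence on supersaturation.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_rate(S, T, gamma, v_mol, A=1e30):
    """Homogeneous CNT rate J = A*exp(-16*pi*gamma^3*v^2/(3*(kB*T)^3*(ln S)^2)).
    S: supersaturation ratio, gamma: interfacial energy (J/m2),
    v_mol: molecular volume (m3), A: kinetic prefactor (1/(m3*s))."""
    if S <= 1.0:
        return 0.0
    barrier = 16.0 * np.pi * gamma ** 3 * v_mol ** 2 / (3.0 * (KB * T) ** 3)
    return A * np.exp(-barrier / np.log(S) ** 2)

T = 373.15                  # ~100 C brine
gamma, v_mol = 0.04, 6e-29  # generic placeholder values
for S in (1.5, 2.0, 3.0, 5.0):
    J = nucleation_rate(S, T, gamma, v_mol)
    t_ind = float("inf") if J == 0.0 else 1.0 / (J * 1e-6)  # per cm3 of fluid
    print(f"S = {S}: J = {J:.2e} 1/(m3*s), induction time ~ {t_ind:.2e} s")
```
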
6

Tan, Xin, Minghui Zhou, and Brian Fitzgerald. "Scaling open source communities." In ICSE '20: 42nd International Conference on Software Engineering. ACM, 2020. http://dx.doi.org/10.1145/3377811.3380920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Luo, Hongyu. "Scaling from an Unexpected Source: Proppants." In SPE Eastern Regional Meeting. Society of Petroleum Engineers, 2014. http://dx.doi.org/10.2118/171013-ms.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lietzke, A. F., and C. A. Hauck. "H− ion source scaling studies at LBL." In AIP Conference Proceedings Volume 158. AIP, 1987. http://dx.doi.org/10.1063/1.36538.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bokhorst, K. N., and A. M. Ziolkowski. "Determination of the source signature of a dynamite source using the source scaling law." In 54th EAEG Meeting. European Association of Geoscientists & Engineers, 1992. http://dx.doi.org/10.3997/2214-4609.201410726.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Dimri, V. P. "Scaling Spectral Method for Potential Field Due to Scaling Behaviour of Source Distribution." In 7th International Congress of the Brazilian Geophysical Society. European Association of Geoscientists & Engineers, 2001. http://dx.doi.org/10.3997/2214-4609-pdb.217.201.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Source scaling"

1

Costley, D., Luis De Jesús Díaz, Sarah McComas, Christopher Simpson, James Johnson, and Mihan McKenna. Multi-objective source scaling experiment. Engineer Research and Development Center (U.S.), 2021. http://dx.doi.org/10.21079/11681/40824.

Full text
Abstract:
The U.S. Army Engineer Research and Development Center (ERDC) performed an experiment at a site near Vicksburg, MS, during May 2014. Explosive charges were detonated, and the shock and acoustic waves were detected with pressure and infrasound sensors stationed at various distances from the source, i.e., from 3 m to 14.5 km. One objective of the experiment was to investigate the evolution of the shock wave produced by the explosion to the acoustic wavefront detected several kilometers from the detonation site. Another objective was to compare the effectiveness of different wind filter strategies. Toward this end, several sensors were deployed near each other, approximately 8 km from the site of the explosion. These sensors used different types of wind filters, including the different lengths of porous hoses, a bag of rocks, a foam pillow, and no filter. In addition, seismic and acoustic waves produced by the explosions were recorded with seismometers located at various distances from the source. The suitability of these sensors for measuring low-frequency acoustic waves was investigated.
APA, Harvard, Vancouver, ISO, and other styles
2

Mayeda, K., S. Felker, R. Gok, J. O'Boyle, W. Walter, and S. Ruppert. LDRD LW Project Final Report: Resolving the Earthquake Source Scaling Problem. Office of Scientific and Technical Information (OSTI), 2004. http://dx.doi.org/10.2172/15013992.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Norman, Bruce R., Anne L. Sallaska, and Leticia S. Pibida. Scaling of Testing Speed for Different Source-to-Detector Distances. National Institute of Standards and Technology, 2015. http://dx.doi.org/10.6028/nist.tn.1864.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Xie, J., L. Cong, and B. J. Mitchell. Lg Excitation, Attenuation and Source Spectral Scaling In Central Asia and China. Defense Technical Information Center, 1996. http://dx.doi.org/10.21236/ada305459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Mitchell, B. J., J. Xie, and S. Baqer. Lg excitation, attenuation, and source spectral scaling in central and eastern North America. Office of Scientific and Technical Information (OSTI), 1997. http://dx.doi.org/10.2172/560831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mayeda, Kevin M., William R. Walter, Rengin M. Gok, and Luca Malagnini. Improved High Frequency Discrimination: A New Approach to Correct for Regional Source Scaling Variations (POSTPRINT) Annual Report 2. Defense Technical Information Center, 2012. http://dx.doi.org/10.21236/ada565322.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mazzoni, Silvia, Nicholas Gregor, Linda Al Atik, Yousef Bozorgnia, David Welch, and Gregory Deierlein. Probabilistic Seismic Hazard Analysis and Selecting and Scaling of Ground-Motion Records (PEER-CEA Project). Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2020. http://dx.doi.org/10.55461/zjdn7385.

Full text
Abstract:
This report is one of a series of reports documenting the methods and findings of a multi-year, multi-disciplinary project coordinated by the Pacific Earthquake Engineering Research Center (PEER) and funded by the California Earthquake Authority (CEA). The overall project is titled “Quantifying the Performance of Retrofit of Cripple Walls and Sill Anchorage in Single-Family Wood-Frame Buildings,” henceforth referred to as the “PEER–CEA Project.” The overall objective of the PEER–CEA Project is to provide scientifically based information (e.g., testing, analysis, and resulting loss models) that measures and assesses the effectiveness of seismic retrofit to reduce the risk of damage and associated losses (repair costs) of wood-frame houses with cripple wall and sill anchorage deficiencies as well as retrofitted conditions that address those deficiencies. Tasks that support and inform the loss-modeling effort are: (1) collecting and summarizing existing information and results of previous research on the performance of wood-frame houses; (2) identifying construction features to characterize alternative variants of wood-frame houses; (3) characterizing earthquake hazard and ground motions at representative sites in California; (4) developing cyclic loading protocols and conducting laboratory tests of cripple wall panels, wood-frame wall subassemblies, and sill anchorages to measure and document their response (strength and stiffness) under cyclic loading; and (5) computer modeling, simulations, and the development of loss models as informed by a workshop with claims adjustors. This report is a product of Working Group 3 (WG3), Task 3.1: Selecting and Scaling Ground-Motion Records. The objective of Task 3.1 is to provide suites of ground motions to be used by other working groups (WGs), especially Working Group 5: Analytical Modeling (WG5) for Simulation Studies. The ground motions used in the numerical simulations are intended to represent seismic hazard at the building site. The seismic hazard depends on the location of the site relative to seismic sources, the characteristics of the seismic sources in the region, and the local soil conditions at the site. To achieve a proper representation of hazard across the State of California, ten sites were selected, and a site-specific probabilistic seismic hazard analysis (PSHA) was performed at each of these sites for both a soft soil (Vs30 = 270 m/sec) and a stiff soil (Vs30 = 760 m/sec). The PSHA used the UCERF3 seismic source model, which represents the latest seismic source model adopted by the USGS [2013], together with the NGA-West2 ground-motion models. The PSHA was carried out for structural periods ranging from 0.01 to 10 sec. At each site and soil class, the results from the PSHA—hazard curves, hazard deaggregation, and uniform-hazard spectra (UHS)—were extracted for a series of ten return periods, prescribed by WG5 and WG6, ranging from 15.5 to 2500 years. For each case (site, soil class, and return period), the UHS was used as the target spectrum for selection and modification of a suite of ground motions. Additionally, another set of target spectra based on “Conditional Spectra” (CS), which are more realistic than UHS, was developed [Baker and Lee 2018]. The Conditional Spectra are defined by the median (Conditional Mean Spectrum) and a period-dependent variance. A suite of at least 40 record pairs (horizontal) was selected and modified for each return period and target-spectrum type.
Thus, for each ground-motion suite, 40 or more record pairs were selected using the deaggregation of the hazard, resulting in more than 200 record pairs per target-spectrum type at each site. The suites contained more than 40 records in case some were rejected by the modelers due to secondary characteristics; however, none were rejected, and the complete set was used. For the case of UHS as the target spectrum, the selected motions were modified (scaled) such that the average of the median (RotD50) spectra [Boore 2010] of the ground-motion pairs follows the target spectrum closely within the period range of interest to the analysts. In communication with WG5 researchers, a period range of 0.01–2.0 sec was selected for ground-motion (time-history) selection and modification for this specific application of the project. The duration metrics and pulse characteristics of the records were also used in the final selection of ground motions. The damping ratio for the PSHA and ground-motion target spectra was set to 5%, which is standard practice in engineering applications. For the cases where the CS was used as the target spectrum, the ground-motion suites were selected and scaled using a modified version of the conditional spectrum ground-motion selection tool (CS-GMS tool) developed by Baker and Lee [2018]. This tool selects and scales a suite of ground motions to meet both the median and the user-defined variability. This variability is defined by the relationship developed by Baker and Jayaram [2008]. The computation of CS requires a structural period for the conditioning model. In collaboration with WG5 researchers, a conditioning period of 0.25 sec was selected as representative of the fundamental mode of vibration of the buildings of interest in this study. Working Group 5 carried out a sensitivity analysis using other conditioning periods, and the results and discussion of the selection of the conditioning period are reported in Section 4 of the WG5 PEER report entitled Technical Background Report for Structural Analysis and Performance Assessment. The WG3.1 report presents a summary of the selected sites, the seismic-source characterization model, and the ground-motion characterization model used in the PSHA, followed by the selection and modification of suites of ground motions. The Record Sequence Numbers (RSN) and the associated scale factors are tabulated in the Appendices of this report, and the actual time-series files can be downloaded from the PEER Ground-Motion Database Portal (https://ngawest2.berkeley.edu/).
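For the UHS-targeted suites, the scaling step described above amounts to finding one amplitude factor per record that brings its RotD50 spectrum close to the target over the period range of interest. Below is a minimal sketch of that idea, assuming a least-squares match of log spectral ordinates; it illustrates the concept only and is not the CS-GMS tool or the project's actual selection code, and all input values are made up.

```python
import numpy as np


def spectral_scale_factor(periods, record_sa, target_sa, t_min=0.01, t_max=2.0):
    """Single amplitude scale factor matching a record's response spectrum
    to a target spectrum in a least-squares sense over log spectral ordinates.

    periods   : array of spectral periods, sec
    record_sa : RotD50 spectral accelerations of the record, g
    target_sa : target (e.g., UHS) spectral accelerations at the same periods, g
    """
    periods = np.asarray(periods, dtype=float)
    mask = (periods >= t_min) & (periods <= t_max)  # period range of interest
    # The log-space misfit is minimized by the geometric mean of the ordinate ratios
    log_ratio = np.log(np.asarray(target_sa)[mask]) - np.log(np.asarray(record_sa)[mask])
    return float(np.exp(log_ratio.mean()))


# Toy usage with invented spectra; the factor (~1.55 here) would be applied
# uniformly to the record's acceleration time series.
T = np.array([0.01, 0.1, 0.25, 0.5, 1.0, 2.0])
rec = np.array([0.30, 0.60, 0.80, 0.50, 0.25, 0.10])
uhs = np.array([0.45, 0.95, 1.20, 0.80, 0.40, 0.15])
print(spectral_scale_factor(T, rec, uhs))
```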
APA, Harvard, Vancouver, ISO, and other styles
8

Si, Hongjun, Saburoh Midorikawa, and Tadahiro Kishida. Development of NGA-Sub Ground-Motion Model of 5%-Damped Pseudo-Spectral Acceleration Based on Database for Subduction Earthquakes in Japan. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2020. http://dx.doi.org/10.55461/lien3652.

Full text
Abstract:
Presented within is an empirical ground-motion model (GMM) for subduction-zone earthquakes in Japan. The model is based on the extensive and comprehensive subduction database of Japanese earthquakes compiled by the Pacific Earthquake Engineering Research Center (PEER). It considers RotD50 horizontal components of peak ground acceleration (PGA), peak ground velocity (PGV), and 5%-damped elastic pseudo-absolute acceleration response spectral ordinates (PSA) at selected periods ranging from 0.01 to 10 sec. The model includes terms and predictor variables accounting for tectonic setting (i.e., interplate and intraslab), hypocentral depth (D), magnitude scaling, distance attenuation, and site response. The magnitude scaling derived in this study is well constrained by the data observed during the large-magnitude interface events in Japan (i.e., the 2003 Tokachi-Oki and 2011 Tohoku earthquakes) across different periods. The developed ground-motion prediction equation (GMPE) covers subduction-zone earthquakes that have occurred in Japan, with magnitudes ranging from 5.5 to as large as 9.1 and distances less than 300 km from the source.
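The abstract lists the model's predictor variables without giving its functional form. Purely as an illustration of how such ingredients typically combine in a subduction-zone GMM, a generic skeleton (not the published equation or its coefficients) might read:

```latex
\ln \mathrm{PSA}(T) = c_0 + c_1 M + c_2 (M - M_h)^2
                    + c_3 \ln R + c_4 R + c_5 D
                    + c_6 \ln\!\left(V_{S30}/V_{\mathrm{ref}}\right)
                    + c_7 F_{\mathrm{slab}} + \varepsilon
```

Here all symbols are illustrative: c_0 through c_7 are period-dependent fitted coefficients, M_h a break magnitude in the magnitude-scaling term, R a source-to-site distance measure combining geometric spreading (ln R) and anelastic attenuation (R), D the hypocentral depth, V_S30 the site-response predictor relative to a reference V_ref, F_slab a flag separating intraslab from interface events, and epsilon the residual.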
APA, Harvard, Vancouver, ISO, and other styles
9

Andresen, Jens-Bjørn R., and Søren M. Kristiansen. Historic maps as source for hydrological reconstruction of pre-industrial landscape wetness in Denmark: a methodological study. Det Kgl. Bibliotek, 2023. http://dx.doi.org/10.7146/aul.491.

Full text
Abstract:
Historic maps are an important primary source that can be utilized in the reconstruction of environmental variables of the pre-industrial landscape. However, methodological constraints have hitherto prevented large-scale and systematic approaches. In this paper a novel methodology is presented, which documents the usefulness of the maps in the study of paleo-hydrology and thus supports a better understanding of the conditions for agricultural production under pre-drainage conditions. The methodology is developed based on eighteenth- and nineteenth-century maps from a 100 km2 study area in one stream catchment in East Jutland, Denmark. It combines information from two types of historic maps in order to correlate computed soil hydrology (wetness index) and recorded historic land use. The calculated wetness indexes are derived from contour lines on topographic (military) maps (in Danish: Høje Maalebordsblade), whereas the spatial overlays are land-use classes from economic maps (in Danish: Matrikelkort - Original 1). This study demonstrates, for the first time, that the wetness index is explanatory for the agriculturally suitable/non-suitable dichotomy (tilled land versus “wetland”: meadows, fens, and peat bogs) on the historic economic maps. Furthermore, the study shows that pre-industrial arable areas were stretched to their limits with respect to cropping wet soils in this agriculturally dominated landscape. The study confirms the existing belief that the historic economic maps constitute the best available source on these mosaic landscapes for the period before intense subsurface tile drainage began. This finding opens the way for further methodological development and up-scaling using automatic feature detection, contour-line extraction, and text recognition of historical maps.
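The abstract does not spell out which wetness-index formulation is used; the standard topographic wetness index, TWI = ln(a / tan beta), computed from elevation data such as the contour-derived DEMs described above, is the usual choice. Below is a minimal sketch under that assumption; the flow-accumulation grid is taken as precomputed, and all function and variable names are hypothetical.

```python
import numpy as np


def wetness_index(flow_acc, slope_rad, cell_size):
    """Topographic wetness index, TWI = ln(a / tan(beta)).

    flow_acc  : upslope contributing cells per grid cell (assumed precomputed)
    slope_rad : local slope in radians (e.g., from DEM gradients)
    cell_size : DEM resolution, m
    """
    # Specific catchment area: upslope area per unit contour width
    a = (flow_acc + 1) * cell_size  # the +1 counts the cell itself
    tan_beta = np.maximum(np.tan(slope_rad), 1e-6)  # avoid division by zero on flats
    return np.log(a / tan_beta)


# Toy usage: a cell with a large contributing area and gentle slope (right)
# scores much wetter than a steep, poorly fed cell (left).
acc = np.array([[5.0, 200.0]])
slope = np.radians(np.array([[8.0, 0.5]]))
print(wetness_index(acc, slope, cell_size=10.0))
```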
APA, Harvard, Vancouver, ISO, and other styles
10

Serrano, Jason Dimitri, Alexander S. Chuvatin, M. C. Jones, et al. Compact wire array sources: power scaling and implosion physics. Office of Scientific and Technical Information (OSTI), 2008. http://dx.doi.org/10.2172/941403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
