
Dissertations / Theses on the topic 'Source scaling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 23 dissertations / theses for your research on the topic 'Source scaling.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF, or read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Edmundsson, Niklas. "Scaling a Content Delivery system for Open Source Software." Thesis, Umeå universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-109779.

Full text
Abstract:
This master’s thesis addresses scaling of content distribution sites. In a case study, the thesis investigates issues encountered on ftp.acc.umu.se, a content distribution site run by the Academic Computer Club (ACC) of Umeå University. This site is characterized by the unusual situation of the external network connectivity having higher bandwidth than the components of the system, which differs from the norm of the external connectivity being the limiting factor. To address this imbalance, a caching approach is proposed to architect a system that is able to fully utilize the available network capacity, while still presenting a homogeneous resource to the end user. A set of modifications is made to standard open source solutions to make caching perform as required, and results from the production deployment of the system are evaluated. In addition, time series analysis and forecasting techniques are introduced as tools to improve the system further, resulting in the implementation of a method to automatically detect bursts and handle load distribution of unusually popular files.
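The burst detection described in this abstract, flagging unusually popular files when observed traffic departs sharply from a forecast, can be sketched with a simple moving-average detector. The window, factor and minimum-hit threshold below are illustrative assumptions, not parameters from the thesis:

```python
from collections import deque

def detect_bursts(counts, window=5, factor=3.0, min_hits=10):
    """Flag time steps whose request count far exceeds a moving-average
    forecast of recent traffic (an illustrative stand-in for the
    time-series forecasting described in the abstract)."""
    history = deque(maxlen=window)
    bursts = []
    for t, c in enumerate(counts):
        if len(history) == window:
            forecast = sum(history) / window
            if c >= min_hits and c > factor * forecast:
                bursts.append(t)
        history.append(c)
    return bursts

# a file whose request rate suddenly spikes at step 5
bursts = detect_bursts([2, 3, 2, 3, 2, 40, 3, 2, 3, 2])  # -> [5]
```

In a real deployment the forecast would come from a proper time-series model and the flagged files would be redistributed across cache servers; the detector above only illustrates the thresholding idea.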
APA, Harvard, Vancouver, ISO, and other styles
2

Orefice, Antonella <1983>. "Refined Estimation of Earthquake Source Parameters: Methods, Applications and Scaling Relationships." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4286/1/orefice_antonella_tesi.pdf.

Full text
Abstract:
The objective of this thesis is the refined estimation of earthquake source parameters. To this end we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, i.e. corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when the propagation cannot be modeled accurately. In this case the empirical Green function approach is a very useful tool for studying seismic source properties. The Empirical Green Functions (EGFs) make it possible to represent the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L’Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L’Aquila sequence with moment magnitudes ranging between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
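Spectral-parameter estimation of this kind is commonly performed by fitting an omega-square (Brune-type) model, Omega(f) = Omega0 / (1 + (f/fc)^2), to the displacement spectrum. A minimal sketch using a grid search on a noise-free synthetic spectrum; the values used (Omega0 = 2e-6, fc = 8 Hz) are illustrative, and this is not the thesis's actual multi-step inversion with attenuation and site corrections:

```python
import math

def brune(f, omega0, fc):
    # Omega(f) = Omega0 / (1 + (f/fc)^2): far-field displacement
    # amplitude spectrum of the standard omega-square source model
    return omega0 / (1.0 + (f / fc) ** 2)

def fit_brune(freqs, spec, fc_grid):
    """Grid search over corner frequency; for each trial fc the
    best-fitting Omega0 has a closed form in log-amplitude space."""
    best = None
    for fc in fc_grid:
        shape = [1.0 / (1.0 + (f / fc) ** 2) for f in freqs]
        logr = [math.log(s) - math.log(sh) for s, sh in zip(spec, shape)]
        omega0 = math.exp(sum(logr) / len(logr))
        err = sum((math.log(s) - math.log(omega0 * sh)) ** 2
                  for s, sh in zip(spec, shape))
        if best is None or err < best[0]:
            best = (err, omega0, fc)
    return best[1], best[2]

# synthetic noise-free spectrum: Omega0 = 2e-6, fc = 8 Hz (illustrative)
freqs = [0.1 * 1.07 ** i for i in range(100)]   # ~0.1 to ~80 Hz
spec = [brune(f, 2.0e-6, 8.0) for f in freqs]
omega0, fc = fit_brune(freqs, spec, fc_grid=[0.5 * k for k in range(1, 41)])
# on this clean input the grid search recovers fc = 8.0 and Omega0 = 2e-6
```

The low-frequency plateau Omega0 is proportional to the seismic moment, which is why these two parameters are the targets of the fit.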
APA, Harvard, Vancouver, ISO, and other styles
3

Orefice, Antonella <1983>. "Refined Estimation of Earthquake Source Parameters: Methods, Applications and Scaling Relationships." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4286/.

Full text
Abstract:
The objective of this thesis is the refined estimation of earthquake source parameters. To this end we used two different approaches, one in the frequency domain and the other in the time domain. In the frequency domain, we analyzed the P- and S-wave displacement spectra to estimate the spectral parameters, i.e. corner frequencies and low-frequency spectral amplitudes. We used a parametric modeling approach combined with a multi-step, non-linear inversion strategy that includes corrections for attenuation and site effects. The iterative multi-step procedure was applied to about 700 microearthquakes in the moment range 10^11-10^14 N·m recorded at the dense, wide-dynamic-range seismic networks operating in the Southern Apennines (Italy). The analysis of source parameters is often complicated when the propagation cannot be modeled accurately. In this case the empirical Green function approach is a very useful tool for studying seismic source properties. The Empirical Green Functions (EGFs) make it possible to represent the contribution of propagation and site effects to the signal without using approximate velocity models. An EGF is a recorded three-component set of time histories of a small earthquake whose source mechanism and propagation path are similar to those of the master event. Thus, in the time domain, the deconvolution method of Vallée (2004) was applied to calculate the relative source time functions (RSTFs) and to accurately estimate source size and rupture velocity. This technique was applied to 1) a large event, the Mw 6.3 2009 L’Aquila mainshock (Central Italy); 2) moderate events, a cluster of earthquakes of the 2009 L’Aquila sequence with moment magnitudes ranging between 3 and 5.6; and 3) a small event, the Mw 2.9 Laviano mainshock (Southern Italy).
APA, Harvard, Vancouver, ISO, and other styles
4

Mrowczynski, Piotr. "Scaling cloud-native Apache Spark on Kubernetes for workloads in external storages." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-237455.

Full text
Abstract:
The CERN Scalable Analytics Section currently offers shared YARN clusters to its users for monitoring, security and experiment operations. YARN clusters with data in HDFS are difficult to provision and complex to manage and resize, which imposes new data and operational challenges in satisfying future physics data processing requirements. As of 2018, over 250 PB of physics data were stored in CERN's mass storage system, EOS. The Hadoop-XRootD Connector allows data stored in CERN EOS to be read over the network, and CERN's on-premise private cloud, based on OpenStack, allows compute resources to be provisioned on demand. The emergence of technologies such as Containers-as-a-Service in OpenStack Magnum, together with support for Kubernetes as a native resource scheduler for Apache Spark, offers the opportunity to increase workflow reproducibility on different compute infrastructures through the use of containers, to reduce the operational effort of maintaining computing clusters, and to increase resource utilization via elastic cloud provisioning. This trades off the data locality known from traditional systems such as Spark/YARN with data in HDFS against these operational features. In the proposed architecture of cloud-managed Spark/Kubernetes with data stored in external storage systems such as EOS, Ceph S3 or Kafka, physicists and other CERN communities can spawn and resize Spark/Kubernetes clusters on demand, with fine-grained control of Spark applications. This work focuses on a Kubernetes CRD Operator for idiomatically defining and running Apache Spark applications on Kubernetes, with automated scheduling and on-failure resubmission of long-running applications. The Spark Operator was introduced with the design principle that Spark on Kubernetes should be easy to deploy, scale and maintain, with usability similar to Spark/YARN. An analysis of concerns related to non-cluster-local persistent storage and memory handling has been performed. The scalability of the architecture has been evaluated on a sustained-workload use case, physics data reduction, with files in ROOT format stored in CERN's mass storage, EOS. A series of microbenchmarks has been performed to evaluate the properties of the architecture compared to the state-of-the-art Spark/YARN cluster at CERN. Finally, Spark on Kubernetes workload use cases have been classified, and possible bottlenecks and requirements identified.
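The on-failure resubmission behaviour attributed to the Spark Operator can be illustrated, independently of Kubernetes, as a generic retry loop with exponential backoff. The callable, retry budget and backoff values are illustrative assumptions, not details from the thesis:

```python
import time

def run_with_resubmission(submit, max_retries=3, backoff_s=1.0):
    """Resubmit a failing long-running job until it succeeds or the
    retry budget is spent; returns the number of resubmissions used."""
    attempt = 0
    while True:
        if submit():
            return attempt
        attempt += 1
        if attempt > max_retries:
            raise RuntimeError("job failed after %d resubmissions" % max_retries)
        time.sleep(backoff_s * 2 ** (attempt - 1))  # exponential backoff

# illustrative flaky job: fails twice, then succeeds
state = {"calls": 0}
def flaky_job():
    state["calls"] += 1
    return state["calls"] >= 3

retries_used = run_with_resubmission(flaky_job, backoff_s=0.0)  # -> 2
```

In the actual operator the "submit" step would create a Spark application custom resource and watch its status; the loop above only illustrates the resubmission policy.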
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Gengji [Author], and Franz X. [Academic supervisor] Kärtner. "Power scaling of ultrafast mid-IR source enabled by high-power fiber laser technology / Gengji Zhou ; Betreuer: Franz Kärtner." Hamburg : Staats- und Universitätsbibliothek Hamburg, 2017. http://d-nb.info/1143868781/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Noda, Shunta. "Relation of Earthquake Growth and Final Size with Applications to Magnitude Determination for Early Warning." Kyoto University, 2020. http://hdl.handle.net/2433/259707.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Huijts, Julius. "Broadband Coherent X-ray Diffractive Imaging and Developments towards a High Repetition Rate mid-IR Driven keV High Harmonic Source." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS154/document.

Full text
Abstract:
Soft X-ray sources based on high harmonic generation are, up to now, unique tools for probing dynamics in matter on femto- to attosecond timescales. High harmonic generation is a process in which an intense femtosecond laser pulse is frequency-upconverted to the UV and soft X-ray region through a highly nonlinear interaction in a gas. Thanks to their excellent spatial coherence, these sources can be used for lensless imaging, which has already led to impressive results. To use these sources to the fullest of their potential, a number of challenges need to be met: their brightness and maximum photon energy need to be increased, and the lensless imaging techniques need to be modified to cope with the large bandwidth of these sources. For the latter, a novel approach is presented in which broadband diffraction patterns are rendered monochromatic through a numerical treatment based solely on the measured spectrum and the assumption of a spatially non-dispersive sample. This approach is validated through a broadband lensless imaging experiment on a supercontinuum source in the visible, in which a binary sample was properly reconstructed through phase retrieval for a source bandwidth of 11%, where standard algorithms fail. Through simulations, the numerical monochromatization method is shown to work for hard X-rays as well, with a simplified semiconductor lithography mask as the sample. A potential application to lithography mask inspection on an inverse Compton scattering source is proposed, although the analysis concludes that the current source lacks the brightness for the proposal to be realistic. Simulations with sufficient brightness show that the sample is well reconstructed up to 10% spectral bandwidth at 8 keV. In an extension of these simulations, an extended lithography mask sample is reconstructed through ptychography, showing that the monochromatization method can be applied in combination with different lensless imaging techniques. An experimental validation with hard X-rays was attempted in two synchrotron experiments; the resulting diffraction patterns after numerical monochromatization look promising, but the phase retrieval process and data treatment require additional effort.
An important part of the thesis is dedicated to the extension of high harmonic sources to higher photon energies and increased brightness. This exploratory work is performed towards the realization of a compact high harmonic source on a high-repetition-rate mid-IR OPCPA laser system, which sustains higher average power and longer wavelengths than ubiquitous Ti:Sapphire laser systems. High repetition rates are desirable for numerous applications involving the study of rare events, and the use of mid-IR wavelengths (3.1 μm in this work) promises extension of the generated photon energies to the kilo-electronvolt level, allowing shorter pulses, covering more X-ray absorption edges and improving the attainable spatial resolution for imaging. However, high repetition rates come with low pulse energies, which constrains the generation process, and generation at longer wavelengths is challenging due to the significantly lower dipole response of the gas. To cope with these challenges, a number of experimental configurations are explored theoretically and experimentally: free focusing in a gas jet; free focusing in a gas cell; soliton compression and high harmonic generation combined in a photonic crystal fiber; and separated soliton compression in a photonic crystal fiber with high harmonic generation in a gas cell. First results on soliton compression down to 26 fs and on lower harmonics up to the seventh order are presented. Together, these results represent a step towards ultrafast lensless X-ray imaging on table-top sources and towards an extension of the capabilities of these sources.
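The numerical monochromatization idea, treating the broadband pattern as a known spectrum-weighted superposition of rescaled monochromatic patterns and inverting that linear map, can be sketched in 1D. The spectrum, grid size and interpolation below are illustrative assumptions; the thesis works with 2D diffraction patterns and phase retrieval on top of this step:

```python
def scale_matrix(n, ratios, weights):
    """Matrix of the broadband forward model b[i] = sum_k w_k * m[i / r_k]:
    each wavelength contributes a radially rescaled copy of the
    monochromatic pattern m (linear interpolation between samples)."""
    A = [[0.0] * n for _ in range(n)]
    for r, w in zip(ratios, weights):
        for i in range(n):
            x = i / r
            j = int(x)
            if j + 1 < n:
                t = x - j
                A[i][j] += w * (1.0 - t)
                A[i][j + 1] += w * t
            elif j < n:
                A[i][j] += w
    return A

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# three spectral lines spanning roughly 11% bandwidth (illustrative spectrum)
ratios, weights = [0.95, 1.0, 1.06], [0.25, 0.5, 0.25]
n = 32
mono = [1.0 / (1.0 + 0.2 * (i - 10) ** 2) for i in range(n)]  # single peak
A = scale_matrix(n, ratios, weights)
broad = [sum(A[i][j] * mono[j] for j in range(n)) for i in range(n)]
recovered = solve(A, broad)  # monochromatized pattern matches mono
```

Knowing only the spectrum (the ratios and weights), the inversion recovers the monochromatic pattern from the blurred broadband one, which is what makes the subsequent phase retrieval tractable.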
APA, Harvard, Vancouver, ISO, and other styles
8

Perrin, Clément. "Relations entre propriétés des failles et propriétés des forts séismes." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4045/document.

Full text
Abstract:
I examine the relations between the properties of long-term geological faults and the properties of the large earthquakes these faults produce. I have gathered the available seismological information on large historical earthquakes worldwide and mapped in detail, on satellite images, both the long-term faults and the rupture traces. The combined analysis of the data shows that: i) long-term faults have a number of generic properties (arrangement of the overall fault networks, lateral segmentation of fault traces, form of the cumulative slip distribution, etc.); ii) large earthquakes also have generic properties (similarity of the envelope shape of coseismic slip-length profiles, of the decrease in rupture width along rupture length, of the number of broken segments, of the stress drop on broken segments, of the relative distance between hypocenter and zone of maximum slip, etc.); iii) the structural maturity of the faults is the tectonic property that most impacts the behavior of large earthquakes. Maturity likely acts by reducing both the static friction and the geometric complexity of the fault plane. It partly governs the location of the earthquake initiation, the location and amplitude of the maximum coseismic slip, the direction of the coseismic slip decrease, the rupture propagation efficiency and speed, and the number of major fault segments that are broken, and hence the rupture length and its overall stress drop. To understand the physics of earthquakes, it thus seems necessary to analyze jointly the tectonic properties of the broken faults and the seismological properties of the earthquakes they produce.
APA, Harvard, Vancouver, ISO, and other styles
9

Cieslak, Rafal. "Power scaling of novel fibre sources." Thesis, University of Southampton, 2012. https://eprints.soton.ac.uk/351186/.

Full text
Abstract:
This thesis explores novel fibre-based coherent light source architectures and strategies for scaling output power. The research focuses on fibre-based amplified spontaneous emission (ASE) sources with broadband output in the ~1 μm spectral region and on internally frequency-doubled fibre lasers emitting in the visible (green) wavelength range. Part I, 'Spectrum-controllable fibre-based amplified spontaneous emission sources', presents the modelling, development and characterisation of a versatile ASE source based on Yb-doped fibre gain stages, using power-efficient means for spectrum control. The experiments culminated in a versatile seed source with polarized output and a reasonable degree of spectral control. In its final configuration, the ASE source was capable of producing either a broad spectrum in the 1-1.1 μm band with a full width at half maximum of 15-40 nm and output power > 1 W, or single/multiple narrow lines with a full width at half maximum ranging from several nanometres down to < 0.05 nm and output power spectral densities of up to 100 mW/nm. The output power was temporally stable, with fluctuations below 0.3-0.8% of the total output power. Very high spectral stability was obtained, limited mostly by the mechanical stability of the external cavity. The output beam was nearly diffraction-limited, with M2 ≈ 1.1. Part II, 'Efficient intracavity frequency doubling schemes for continuous-wave fibre lasers', introduces a novel concept for resonant enhancement of the intracavity power in high-power continuous-wave fibre lasers that is suitable for a wide range of applications. Using this concept, efficient frequency doubling in continuous-wave Yb-doped fibre lasers has been demonstrated, and techniques for applying it to devices based on both robustly single-mode and multi-mode fibres have been developed. Finally, this thesis presents wavelength tuning of continuous-wave Yb-doped fibre lasers over ~19 nm in the green spectral region and scaling of the generated second-harmonic power up to ~19 W with more than 21% pump-to-second-harmonic conversion efficiency.
APA, Harvard, Vancouver, ISO, and other styles
10

Erez, Giacomo. "Modélisation du terme source d'incendie : montée en échelle à partir d'essais de comportement au feu vers l'échelle réelle : approche "modèle", "numérique" et "expérimentale"." Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0189.

Full text
Abstract:
Numerical simulations can provide valuable information to fire investigators, but only if the fire source is precisely defined, i.e. the amount of energy or combustible gases the fire releases over time. This can be done through full- or small-scale testing. The latter is often preferred because such tests are easier to perform and less costly, but their results then have to be extrapolated to represent full-scale fire behaviour. Various approaches have been proposed for this upscaling. The most complete are pyrolysis models, which involve a detailed description of the condensed-phase reactions; however, these models are not yet ready for investigation applications. This is why another family of models, so-called thermal models, was chosen for the work presented here: the mass loss rate of a material is predicted from a heat balance driven by the thermal exposure alone. This principle explains the two-part structure of the study: first, a detailed characterisation of heat transfers within a flame is performed; then, the influence of these heat transfers on the thermal decomposition of materials is studied. The first part focuses on thermal radiation, because it is the leading mechanism of flame spread and sustained combustion. Flame radiation was characterised for several fuels (kerosene, diesel, heptane, polyurethane foam and wood) and many fire sizes (from 0.3 m up to 3.5 m wide). Measurements included visible video recordings, multispectral opacimetry and infrared spectrometry, which allowed the determination of a simplified flame shape as well as its emissive power. These data were then used in a model (Monte-Carlo method) to predict incident heat fluxes at various locations. The predictions agree well with the values measured during the tests, showing that the main phenomena governing flame radiation were identified and accounted for, for all fire sizes. Because the final objective of this work is to provide a comprehensive fire simulation tool, an existing code, Fire Dynamics Simulator (FDS), was evaluated with regard to its ability to model these radiative heat transfers. This evaluation, based on the data and knowledge gathered before, showed that the code predicts incident heat fluxes reasonably well; the radiation model already built into FDS was therefore retained for the rest of this work, taking advantage of its coupling with the other models needed for fire simulation. The second part aims at correlating thermal decomposition with thermal radiation. Cone calorimeter tests were performed on polyurethane foam, and the results were used to build a model predicting the mass loss rate (MLR) as a function of time and incident heat flux, which embodies the thermal-model approach described above. Larger-scale tests were also performed to characterise flame spread on top of and inside the foam samples, through various measurements: video processing, temperature analysis and photogrammetry. The results suggest that using small-scale data to predict full-scale fire behaviour is a reasonable approach for the scenarios investigated. It was then put into practice by modifying the FDS source code to allow the use of a thermal model, in other words defining the fire source from the model predicting MLR as a function of time and incident heat flux. The first simulations show encouraging results, and predictions for more complex geometries will be evaluated to validate this method.
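The thermal model described above, MLR as a function of time and incident heat flux built from cone calorimeter curves, can be sketched as bilinear interpolation in a small lookup table. All numbers below are invented placeholders, not data from the thesis:

```python
def mlr_model(table, fluxes, times):
    """mlr(t, q): bilinear interpolation in a table of cone-calorimeter
    mass-loss-rate curves measured at a few incident heat fluxes q."""
    def interp1(xs, ys, x):
        # piecewise-linear interpolation, clamped at the table edges
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for a, b, ya, yb in zip(xs, xs[1:], ys, ys[1:]):
            if a <= x <= b:
                return ya + (x - a) / (b - a) * (yb - ya)
    def mlr(t, q):
        curves = [interp1(times, row, t) for row in table]
        return interp1(fluxes, curves, q)
    return mlr

# invented placeholder data: MLR (g/s/m^2) at 25 and 50 kW/m^2
times = [0.0, 30.0, 60.0, 120.0]
fluxes = [25.0, 50.0]
table = [[0.0, 8.0, 6.0, 2.0],    # 25 kW/m^2
         [0.0, 14.0, 10.0, 3.0]]  # 50 kW/m^2
mlr = mlr_model(table, fluxes, times)
# e.g. mlr(30.0, 37.5) interpolates between the two curves -> 11.0
```

A fire code using such a model queries mlr(t, q) each time step with the locally computed incident flux, which is the coupling the modified FDS source code has to provide.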
APA, Harvard, Vancouver, ISO, and other styles
11

Billaud, Antonin. "Power-scaling of wavelength-flexible two-micron fibre sources." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/425928/.

Full text
Abstract:
In this thesis we explore thulium-doped silica fibre based sources, focusing on laser and amplified spontaneous emission behaviours. We analyse new ways of improving fibre cavity performance, first demonstrating a novel way of manufacturing doped fibres that show high pump absorption whilst retaining ease of use for cleaving and splicing. This new process offers a trade-off between circular and non-circular fibre geometries by maintaining the advantages of both configurations. An implementation process on fibre drawing towers is detailed for future large-scale production of highly circular active fibres with high mode scrambling, resulting in pump absorption comparable to, and potentially higher than, that of octagonal fibre. We then introduce a new way of improving fibre tip movement insensitivity in free-space feedback arms by utilising corner-cubes as reflective elements, with results showing transverse fibre tip movement of more than a millimetre in specific configurations whilst maintaining high feedback efficiency. Output power variation of less than 35% was demonstrated over a translation window of ±1.2mm in some cases. Exploiting the movement-insensitive properties offered by a corner-cube, a tunable ring laser based on fibre tip movement and a Fabry-Perot etalon is demonstrated. Up to 5nm of quasi-continuous fine tuning is proven with a theoretical accuracy of 8pm per μm of fibre tip movement, and a linewidth lower than 1.5GHz is demonstrated. Potential for rapid wavelength scanning and for much broader tuning over a window of a few tens of nm is proposed, with further modifications of the experimental setup to allow wider fibre tip movement without feedback losses appearing. Focus is then centred on the broad tuning capabilities of thulium-doped silica fibres, and a CW laser source allowing tuning over more than 130nm in the 2μm band is described.
Tuning is achieved by the use of a digital micro-mirror device (DMD) coupled with a diffraction grating, allowing further spectral shaping. Up to 8.5W of output power is displayed, pump-power limited, with capabilities for multi-wavelength emission and spectral power density shaping by adjustment of the micro-mirror matrix reflective pattern. Modifications of the system to meet different requirements are explained, improving either tuning range, accuracy or minimum linewidth. Utilising the fast dynamics of a doped fibre cavity, a cavity is built around an acousto-optic modulator to frustrate lasing in order to create a feedback-tolerant pulsed amplified spontaneous emission (ASE) source. This source was designed to allow generation of pulsed wavelength-controllable ASE via a DMD coupled with a diffraction grating. A core-pumped setup is demonstrated, reaching tuning from 1860 to 1950nm, and a cladding-pumped architecture is built for longer-wavelength generation to improve compatibility with amplifier stages. This source displayed tuning from 1940 to 2020nm with peak power of up to 1.5kW and pulses shorter than 100ns. Multi-waveband behaviour is demonstrated and the output bandwidth is controlled through the DMD. A cladding-pumped amplification stage is described, and amplification of the ASE output by 15dB, reaching up to 72W (pump limited) and corresponding to peak powers of more than 5kW, was demonstrated. Prospects for pumping a ZGP OPO cavity with an ASE source are discussed, detailing the potential benefits of utilising a bandwidth-adjustable ASE source for mid-infrared generation.
APA, Harvard, Vancouver, ISO, and other styles
12

Pearson, Lee. "Novel power scaling architectures for fibre and solid-state sources." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/194931/.

Full text
Abstract:
This thesis explores approaches for scaling the output power of rare-earth-ion doped fibre lasers and amplifiers, fibre amplified spontaneous emission sources, and solid-state laser oscillators. Scaling output power from laser sources has been a topic of interest ever since the first laser was demonstrated. The development of new geometries and novel techniques for reducing effects that limit the maximum output power is particularly important. Three approaches for power scaling are demonstrated here. The first is an all-fibre geometry for producing predominantly single-ended operation. By exploiting the high available gain in rare-earth-ion doped fibres, predominantly single-ended laser output can be achieved in a high-loss cavity with feedback at one end considerably lower than at the other. This was demonstrated with an Yb-doped fibre laser using a low-loss end termination scheme to produce 29W and 2W in the forward and backward directions, respectively, for a launched pump power of 48W. This corresponds to a slope efficiency of 77% in the forward direction. The single-ended scheme was also applied to a Tm-doped fibre ASE system, producing a maximum output of 11W for 43W of launched pump, with an emission bandwidth of 36nm centred at 1958nm. Secondly, a Tm-doped fibre distributed feedback laser with 875mW of single-frequency output at 1943nm was used in a master oscillator power amplifier configuration. Using three amplifier stages, the output was scaled to 100W with a final polarisation extinction ratio of >94% and a beam propagation factor of M2 < 1.25. The last laser architecture was a cryogenically cooled Ho:YAG laser in-band pumped by a diode-pumped Tm-doped fibre laser. After determining the absorption bandwidth as a function of temperature at the desired pump wavelength of 1932nm in Ho:YAG, the fibre laser was constructed to have an emission linewidth of <0.2nm to achieve efficient overlap with the absorption peak.
This fibre laser was used to pump two different Ho:YAG laser configurations. The first was a free-running laser based on a simple two-mirror cavity design, which showed a factor of 1.7 increase in the laser slope efficiency and a factor of 10 decrease in threshold pump power when the crystal temperature was reduced from 300K to 77K. The second cavity configuration discussed was for low-quantum-defect operation, which was demonstrated at 1970nm, corresponding to a quantum defect of just 2%. Lastly, further power scaling and other applications for all three approaches are discussed.
APA, Harvard, Vancouver, ISO, and other styles
13

HUANG, KUAN-YU. "Fractal or Scaling Analysis of Natural Cities Extracted from Open Geographic Data Sources." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-19386.

Full text
Abstract:
A city consists of many elements such as humans, buildings, and roads. The complexity of cities is difficult to measure using Euclidean geometry. In this study, we use fractal geometry (scaling analysis) to measure the complexity of urban areas. We observe urban development from different perspectives using a bottom-up approach, moving from a basic to a higher level, from our daily-life perspective to an overall view. Furthermore, an urban environment is not constant but complex; cities with greater complexity are more prosperous. Many disciplines analyze changes in the Earth's surface, such as urban planning, detection of melting ice, and deforestation management, and these disciplines can take advantage of remote sensing for research. This study not only uses satellite imaging to analyze urban areas but also uses check-in and points of interest (POI) data. It uses straightforward means to observe an urban environment with a bottom-up approach and to measure its complexity using fractal geometry. Web 2.0, with its many volunteers who share information on different platforms, was one of the most important tools in this study. We can easily obtain raw data from various platforms such as the Stanford Large Network Dataset Collection (SLNDC), the Earth Observation Group (EOG), and CloudMade. The check-in data in this thesis were downloaded from SLNDC, the POI data were obtained from CloudMade, and the nighttime lights imaging data were collected from EOG. We used these three types of data to derive natural cities representing city regions with a bottom-up approach. Natural cities were derived from open geographic data without human manipulation. After refining the data, we derived natural cities from the raw data, using a triangulated irregular network for the check-in and POI data.
In this study, we focus on the four largest US natural-city regions: Chicago, New York, San Francisco, and Los Angeles. The result is that the New York City region is the most complex area in the United States, as shown by the box-counting fractal dimension, lacunarity, and ht-index (head/tail breaks index). The box-counting fractal dimension indicates that the New York City region is the most prosperous of the four city regions. Lacunarity indicates that the New York City region is the most compact area in the United States. The ht-index shows the New York City region having the highest hierarchy of the four city regions. This conforms to central place theory: higher-level cities provide better service than lower-level cities. In addition, the ht-index cannot represent hierarchy clearly when the data distribution does not fit a long-tail distribution exactly. However, the ht-index is the only one of these methods that can analyze the complexity of natural cities without using images.
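The ht-index used in this abstract can be sketched with a recursive head/tail breaks split. This is a minimal illustration of the idea (values above the mean form the "head", and the split recurses while the head remains a minority), not the code used in the thesis:

```python
def ht_index(values):
    """ht-index via recursive head/tail breaks (after Jiang's head/tail breaks).

    Starts at 1; each time the values above the mean (the 'head') form
    a minority of the data, the hierarchy gains a level and the split
    recurses on the head.
    """
    ht = 1
    data = list(values)
    while len(data) > 1:
        mean = sum(data) / len(data)
        head = [v for v in data if v > mean]
        # stop when the head is empty or no longer a minority
        if not head or len(head) / len(data) >= 0.5:
            break
        ht += 1
        data = head
    return ht

# A heavy-tailed set of hypothetical natural-city sizes has more
# hierarchical levels than a uniform one.
print(ht_index([1, 1, 1, 1, 2, 2, 3, 5, 8, 13, 21, 55]))  # 3 levels
print(ht_index([4, 4, 4, 4]))                              # 1 level
```

A long-tail distribution keeps producing minority heads, so the ht-index grows with the depth of the hierarchy, which is why it can rank city regions without image data.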
APA, Harvard, Vancouver, ISO, and other styles
14

Forrest, Adam F. "Novel approaches to power scaling in mode-locked laser diode based ultrashort pulse sources." Thesis, University of Dundee, 2018. https://discovery.dundee.ac.uk/en/studentTheses/74ed1b92-a07c-4bd1-af9d-8bfd8a241eed.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Cruise, Richard James Randon. "A scaling framework for the many sources limit of queueing systems through large deviations." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608658.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Simakov, Nikita. "Development of components and fibres for the power scaling of pulsed holmium-doped fibre sources." Thesis, University of Southampton, 2017. https://eprints.soton.ac.uk/415254/.

Full text
Abstract:
In this thesis the optimisation and peak power scaling of pulsed holmium-doped fibre lasers were investigated, with the aim of demonstrating a fibre gain medium able to address the requirements of applications that currently rely on bulk crystalline Ho:YAG or Ho:YLF solutions. Conventional fibre processing techniques such as cleaving, end-capping and component fabrication were improved upon using CO2 laser processing. The resulting components and processes were also characterised under high-power operating conditions and enabled the subsequent experiments and demonstrations. Holmium-doped silica fibres were fabricated and characterised with the aim of reducing impurity contamination, improving composition and achieving efficient operation at 2.1 μm. These fibres were characterised passively using transmission spectroscopy and actively in a laser configuration. The most efficient of these compositions operated with a 77% slope efficiency in a core-pumped laser up to average powers of 5 W and was then processed into a double-clad geometry. The cladding-pumped fibre was operated at 70 W output power with a slope efficiency of 67% and represents one of the highest-power and most efficient cladding-pumped holmium-doped fibres demonstrated to date. Small-signal amplifiers utilising both thulium-doped and holmium-doped silica fibres were demonstrated. These amplifiers offered broad wavelength coverage spanning 490 nm at 15 dB gain, from 1660 nm to 2150 nm. This remarkably broad wavelength coverage is attractive for a large number of disciplines looking to exploit this previously difficult-to-reach wavelength range. In addition to these devices, the average power and peak power scaling of 2 μm fibre sources was investigated.
A thulium-doped fibre laser operating at 1950 nm with > 170 W of output power, a tuneable holmium-doped fibre laser producing > 15 W over the wavelength span from 2040 nm to 2171 nm, and a pulsed holmium-doped fibre amplifier with > 100 kW peak power at 2090 nm are reported. Finally, we review the requirements for efficient scaling of mid-infrared optical parametric oscillators and analyse the non-linear effects that arise when attempting to scale the peak power in silica fibres in the 2 μm spectral region. We implement a range of strategies to delay the onset of nonlinear effects and demonstrate a holmium-doped fibre amplifier with peak power levels exceeding 36 kW in a 5 ns pulse with a spectral width of < 1 nm. This represents the highest spectral density achieved for nanosecond pulse durations from pulsed holmium-doped fibre sources. This preliminary result provides an excellent platform for further peak power scaling and for replacing conventional Q-switched Ho:YAG lasers.
APA, Harvard, Vancouver, ISO, and other styles
17

Cândido Júnior, Irenaldo Pessoa. "Parâmetros de fonte de microterremotos em Cascavel-CE." Universidade Federal do Rio Grande do Norte, 2009. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18768.

Full text
Abstract:
In this dissertation the rupture characteristics of earthquakes in the town of Cascavel, CE, Northeastern Brazil, were studied. Located on the border of the Potiguar Basin, Cascavel is one of the most seismically active intraplate areas in the country. On November 20th, 1980, a 5.2 mb earthquake occurred there, the largest ever reported in Northeast Brazil. Instrumental studies of this region became possible after 1989, with several campaigns carried out using seismographic networks. From the beginning of the monitoring to April 2008, more than 55,000 events were recorded. With the data collected by a network of six three-component digital seismographic stations during the campaign from September 29th, 1997 to March 5th, 1998, source parameters were estimated by fitting the displacement spectrum of each event in the frequency domain. From the fitting of the displacement spectra it was possible to obtain the corner frequency (fc) and the long-period amplitude (Ω0). Source parameters were determined following the Brune (1970) and Madariaga (1976) models. Twenty-one seismic events were analyzed (0.7 ≤ mb ≤ 2.1) in order to estimate the source dimension (r), seismic moment (M0), static stress drop (Δσ), apparent stress (σa), radiated seismic energy (ES) and moment magnitude (MW) of each event. It was observed that the ratio between radiated seismic energy and seismic moment (the apparent stress) increases with increasing moment, and hence magnitude, over the observed range.
As suggested by Abercrombie (1995), in this work there also appears to be a breakdown in the scaling for earthquakes with magnitudes smaller than three (MW < 3.0), so that the rupture physics is different for larger events. If this assumption is valid, the earthquakes analyzed in this work are not self-similar. Thus, larger events tend to radiate more energy per unit area than smaller ones.
APA, Harvard, Vancouver, ISO, and other styles
18

Musgrave, Ian. "Study of the physics of the power-scaling of end-pumped solid-state laser sources based on Nd:YVO4." Thesis, University of Southampton, 2003. https://eprints.soton.ac.uk/46103/.

Full text
Abstract:
Using a modified Mach-Zehnder interferometer, the thermal lensing in Nd:YVO4 was measured for several different operating conditions. The thermal lens focal length can be determined from the measured transverse phase profile. It was found that the thermal lensing was weakest for p-polarised light and that bulk expansion plays a part in modifying the power of the thermal lenses. By comparing the thermal lensing with cooling direction, it was found that providing cooling along the a-axis generated the weakest thermal lensing. Comparing the thermal lensing under lasing and non-lasing conditions demonstrated that the heating in the laser crystal under non-lasing conditions is significantly greater than under lasing conditions: the thermal lenses are almost 5 times stronger under non-lasing conditions than under lasing conditions for the 1% doped crystal. By comparing the effect of dopant concentration on thermal lensing, the effect of energy-transfer upconversion (ETU) could be seen, with the thermal lensing for the 0.3% doped crystal being much lower than that of the 1% doped crystal under non-lasing conditions. An amplitude-modulated mode-locked laser was built based on Nd:YVO4, generating 600mW of diffraction-limited output and 100ps pulses. Multipass amplification was then investigated as a means to increase the average power of the source, with 5W of output achieved while the beam remained diffraction limited. The prospects for further power scaling are investigated, and it is shown that the limit to power scaling via amplifiers is the eventual beam quality degradation suffered as the signal beam passes through the thermal lenses in the laser crystal. An equation is finally presented that analyses the limitations of scaling via amplifiers, finding that when stress fracture and beam quality degradation are considered, Nd:YVO4 represents an excellent choice for further power scaling.
APA, Harvard, Vancouver, ISO, and other styles
19

Kapnoula, Efthymia Evangelia. "Individual differences in speech perception: sources, functions, and consequences of phoneme categorization gradiency." Diss., University of Iowa, 2016. https://ir.uiowa.edu/etd/3115.

Full text
Abstract:
During spoken language comprehension, listeners transform continuous acoustic cues into categories (e.g. /b/ and /p/). While longstanding research suggests that phoneme categories are activated in a gradient way, there are also clear individual differences, with more gradient categorization being linked to various communication impairments such as dyslexia and specific language impairment (Joanisse, Manis, Keating, & Seidenberg, 2000; López-Zamora, Luque, Álvarez, & Cobos, 2012; Serniclaes, Van Heghe, Mousty, Carré, & Sprenger-Charolles, 2004; Werker & Tees, 1987). Crucially, most studies have used two-alternative forced choice (2AFC) tasks to measure the sharpness of between-category boundaries. Here we propose an alternative paradigm that allows us to measure categorization gradiency in a more direct way. We then use this measure in an individual differences paradigm to: (a) examine the nature of categorization gradiency, (b) explore its links to different aspects of speech perception and other cognitive processes, (c) test different hypotheses about its sources, (d) evaluate its (positive/negative) role in spoken language comprehension, and (e) assess whether it can be modified via training. Our results provide validation for this new method of assessing phoneme categorization gradiency and offer valuable insights into the mechanisms that underlie speech perception.
APA, Harvard, Vancouver, ISO, and other styles
20

Steinke, Michael [Verfasser]. "Fiber amplifiers at 1.5 µm for gravitational wave detectors : power scaling, gain dynamics, and pump sources / Michael Steinke." Hannover : Technische Informationsbibliothek (TIB), 2015. http://d-nb.info/1095506102/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Chiang, Chia-Hao, and 江嘉豪. "Source scaling model in Taiwan area." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/52017391733253250770.

Full text
Abstract:
Master's thesis<br>National Central University<br>Department of Geophysics<br>82<br>Lg spectral analyses of short-period and strong-motion records from 865 earthquakes that occurred in the Taiwan area are used to study the source scaling model. Based on the omega-square model and the circular crack model, the long-period level and corner frequency of the Lg amplitude spectrum are picked to determine the seismic moment, source radius, and stress drop of each earthquake. The relationships of seismic moment, magnitude, and corner frequency are logMo = (1.21±0.022)ML + (16.74±1.17) and logMo = (-5.17±0.25)logfc + (22.20±1.23) in the magnitude range 1.28 ≤ ML ≤ 5.88. From the distribution of stress drops, the stress drop is nearly constant at 10 to 100 bars for seismic moments greater than 10^22 dyne-cm, and shows an increasing tendency with seismic moment for seismic moments smaller than 10^22 dyne-cm. Adding the seismic moments of 54 earthquakes listed in the PDE catalog to the results of this study, the relationship of seismic moment to magnitude becomes logMo = 1.21ML + 16.72±1.86 for 1.28 ≤ ML ≤ 5.04 and logMo = 1.75ML + 14.00±1.99 for 5.04 ≤ ML ≤ 6.82.
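To illustrate how relations of this kind are used, the sketch below combines the fitted moment-magnitude relation with the Brune (1970) circular-crack formulas to estimate a source radius and stress drop. The shear-wave velocity is an assumed value, not taken from the thesis:

```python
import math

BETA_CM_S = 3.5e5  # assumed shear-wave velocity (3.5 km/s), in cm/s

def moment_from_ml(ml):
    """Seismic moment (dyne-cm) from the fitted relation logMo = 1.21*ML + 16.74."""
    return 10 ** (1.21 * ml + 16.74)

def brune_radius_cm(fc):
    """Brune (1970) source radius: r = 2.34 * beta / (2 * pi * fc)."""
    return 2.34 * BETA_CM_S / (2.0 * math.pi * fc)

def stress_drop_bars(m0, fc):
    """Static stress drop for a circular crack: dsigma = 7*Mo / (16*r^3).

    m0 in dyne-cm; the result is converted from dyne/cm^2 to bars
    (1 bar = 1e6 dyne/cm^2).
    """
    r = brune_radius_cm(fc)
    return 7.0 * m0 / (16.0 * r ** 3) / 1e6

# An ML 4.0 event with a 2 Hz corner frequency: Mo ~ 3.8e21 dyne-cm
# and a stress drop of a few bars.
m0 = moment_from_ml(4.0)
print(f"Mo = {m0:.2e} dyne-cm, stress drop = {stress_drop_bars(m0, 2.0):.1f} bars")
```

The hypothetical ML and fc values above simply demonstrate that a lower corner frequency at fixed moment yields a larger source radius and a smaller stress drop.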
APA, Harvard, Vancouver, ISO, and other styles
22

Yen, Yin-Tung, and 顏銀桐. "Simulation and Source Scaling of Finite-Fault Slip Distribution for Taiwan Region." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/00736886949058887210.

Full text
Abstract:
PhD thesis<br>National Central University<br>Institute of Geophysics<br>99<br>Source scaling, the relation between source parameters and earthquake size, has been explored since the beginning of the 1970s, both through tests of self-similarity and through a series of empirical relationships. In general, an important requirement for probabilistic and deterministic seismic hazard analyses in engineering seismology is to estimate the potential of future earthquakes in a specific region. The fault parameters (e.g. fault length, width, and mean slip) that a particular fault or earthquake source can generate, related to the size of the largest earthquakes, are often necessary. In addition, pre-set source dimensions and slip over the fault are necessary for numerical prediction of ground-motion time histories. Such studies therefore not only serve scientific purposes in seismology but also have implications and applications for engineering seismology. Source parameters, including fault length, width, and mean slip, can be extracted from finite-fault slip distributions inferred from waveform inversion. In this study, we therefore compiled and analysed slip distribution models resolved for Taiwan earthquakes from 1993 to 2009. We investigated the source scaling of earthquakes (Mw 4.6 to 7.7) from the Taiwan orogenic belt and made a global compilation of source parameters to discuss scaling self-similarity. Finite-fault slip models (13 dip-slip and 7 strike-slip), derived mainly from Taiwan's dense strong-motion network and teleseismic data, were utilized. Seven additional earthquakes (M > 7) were included for further discussion of scaling for large events. Considering the effective length and width, we found M0~L^2 and M0~L^3 for events smaller and larger than a seismic moment of 10^20 Nm, respectively, regardless of fault type, suggesting non-self-similar scaling for small to moderate events and self-similar scaling for large events.
Although the events showed variation in stress drop, most had stress drops of 10-100 bars, with the exception of three high-stress-drop events. The bilinear relation was well explained by the magnitude-area equation derived by Shaw (2009) when we considered only the events with stress drops of 10-100 bars and a seismogenic thickness of 35 km. The bilinear feature of the regressed magnitude-area scaling appears at a ruptured area of about 1000 km^2 for our seismogenic thickness of 35 km. For events with ruptured areas larger than that, the average slip becomes proportional to the ruptured length. The distinctly high-stress-drop events from blind faults in the western foothills of Taiwan yield locally high Peak Ground Acceleration (PGA), as shown by comparison with the Next Generation Attenuation (NGA) model. Further, we performed strong-motion waveform modeling with the empirical Green's function method to assess the area of strong-motion generation and to verify that two earthquakes of similar magnitude can have significantly different stress drops. Two such earthquakes had stress drops of ~180 and ~610 bars, respectively, confirming that the difference is real. Despite the relatively small magnitudes of these events, their high PGA implies high regional seismic hazard potential and thus requires special attention in seismic hazard mitigation.
APA, Harvard, Vancouver, ISO, and other styles
23

Lin, Hsin-I., and 林欣儀. "Earthquake Source Scaling of Moderate to Large Earthquakes in Taiwan: Study of 2003 Mw>6 Taiwan Earthquakes." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/61918575413742340049.

Full text
Abstract:
Master's thesis<br>National Central University<br>Institute of Geophysics<br>92<br>We inverted velocity data from the Broadband Array in Taiwan for Seismology (BATS) to study the source parameters of three Mw > 6 earthquakes in eastern Taiwan in 2003, which occurred on June 9th, June 10th, and December 10th, respectively. We used a least-squares method to invert for the distribution of slip and other source parameters, and used the slip distribution and aftershocks to estimate the fault geometry. For each event, we derived a preferred model by testing different focal mechanisms. In order to have better azimuthal coverage, one Japanese station from F-net was used in addition to the BATS stations. All three earthquakes have thrust faulting mechanisms. The hypocenter of the June 9th earthquake is 24.4°N, 121.99°E at a depth of 21.3 km. The focal mechanism has a strike, dip, and rake of 225°, 26°, and 121°, respectively. The moment is 8.65×10^24 dyne-cm, which yields an Mw of 5.89. The main rupture is around the hypocenter and propagated toward the northeast. The June 10th earthquake occurred at 23.52°N, 121.67°E, at a depth of 5.7 km. The focal mechanism has a strike, dip, and rake of 217°, 39°, and 110°, respectively. The moment is 2.03×10^25 dyne-cm, which yields an Mw of 6.13. It ruptured toward the northeast in the downdip direction. The December 10th earthquake occurred at 23.07°N, 121.40°E at a focal depth of 17.7 km. The focal mechanism has a strike, dip, and rake of 3°, 42°, and 104°, respectively. The moment is 2.68×10^25 dyne-cm, which yields an Mw of 6.22. According to the slip and aftershock distributions, this earthquake is believed to be associated with the Chihshang fault.
We collected these results, together with other recent research on Taiwan earthquakes, to obtain the relationships of moment magnitude (Mw) to fault length (L), fault area (A), and average slip (D). The relations among the source parameters are as follows: Mw = 1.4 log(L) + 4.5; Mw = 0.8 log(A) + 4.5; Mw = (1.3 ± 0.45) log(D) + 3.7. For a better understanding of earthquake source characteristics in the Taiwan region, these three scaling relationships provide important information for seismic hazard analysis.
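The three regressed relations can be evaluated directly. The sketch below assumes length in km, area in km^2 and slip in cm; the abstract does not state the units, so treat these as assumptions for illustration:

```python
import math

def mw_from_length(length_km):
    """Mw from fault length: Mw = 1.4 * log10(L) + 4.5 (regression from the abstract)."""
    return 1.4 * math.log10(length_km) + 4.5

def mw_from_area(area_km2):
    """Mw from rupture area: Mw = 0.8 * log10(A) + 4.5."""
    return 0.8 * math.log10(area_km2) + 4.5

def mw_from_slip(slip_cm):
    """Mw from average slip: Mw = (1.3 +/- 0.45) * log10(D) + 3.7; central value used."""
    return 1.3 * math.log10(slip_cm) + 3.7

# A hypothetical 30 km x 15 km rupture with ~1 m (100 cm) of average slip
# gives broadly consistent magnitudes (roughly Mw 6.3-6.6) from all three relations.
print(mw_from_length(30.0), mw_from_area(30.0 * 15.0), mw_from_slip(100.0))
```

Cross-checking the three estimates like this is one simple consistency test of such regressions for a given fault scenario.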
APA, Harvard, Vancouver, ISO, and other styles
