Dissertations / Theses on the topic 'Earthquake prediction in art'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Earthquake prediction in art.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bramlet, John. "Earthquake prediction and earthquake damage prediction /." Connect to resource, 1996. http://hdl.handle.net/1811/31764.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Weatherley, Dion Kent. "Investigations of automaton earthquake models : implications for seismicity and earthquake forecasting /." St. Lucia, Qld, 2002. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe16401.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Neurohr, Theresa. "The seismic vulnerability of art objects /." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99782.

Full text
Abstract:
Throughout history, objects of art have been damaged and sometimes destroyed in earthquakes. Even though the importance of providing seismically adequate design for nonstructural components has received attention over the past decade, art objects in museums, either on display or in storage, require further research. The research reported in this study was undertaken to investigate the seismic vulnerability of art objects. Data for this research was gathered from three museums in Montreal.
The seismic behaviour of three unrestrained display cases, storage shelves, and a 6 m long dinosaur skeleton model structure was investigated according to the seismic hazard for Montreal, and representative museum floor motions were simulated for that purpose. Particular attention was paid to the support conditions, the effects of modified floor surface conditions, the sliding and rocking response of unrestrained display cases, the location (floor elevation) of the display case and/or storage shelves, art object mass, and the dynamic properties of the display cases/storage shelves. The seismic vulnerability of art objects was evaluated based on the seismic response of the display cases/storage shelves at the level of art object display. The display cases were investigated experimentally using shake table testing. Computer analyses were used to simulate the seismic behaviour of storage shelves, and the seismic sensitivity of the dinosaur structure was determined via free vibration acceleration measurements. The floor contact conditions and floor elevation had a crucial effect on the unrestrained display cases, causing them to slide or rock vigorously. The distribution of content mass had a large impact on the response of the shelving system. As a result of the experimental and analytical analyses, recommendations and/or simple mitigation techniques are provided to reduce the seismic vulnerability of objects of art.
APA, Harvard, Vancouver, ISO, and other styles
4

Malushte, Sanjeev R. "Prediction of seismic design response spectra using ground characteristics." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45802.

Full text
Abstract:

The available earthquake records are classified into five groups according to their site stiffness and epicentral distance as the grouping parameters. For the groups thus defined, normalized response spectra are obtained for single-degree-of-freedom and massless oscillators. The effectiveness of the grouping scheme is examined by studying the variance of response quantities within each group. The implicit parameters of average frequency and significant duration are obtained for each group and their effect on the response spectra is studied. Correlation analyses between various ground motion characteristics such as peak displacement, velocity, acceleration and root mean square acceleration are carried out for each group.

Smoothed design spectra for relative and pseudo velocities and relative acceleration responses of single degree of freedom oscillators and the velocity and acceleration responses of massless oscillators are proposed for each group. Methods to predict relative velocity and relative acceleration spectra directly from the pseudo velocity spectra are presented. It is shown that the relative spectra can be reliably estimated from the pseudo spectra. The site dependent design spectra are defined for a wide range of oscillator periods and damping ratios.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
5

Creamer, Frederic Harold. "The method to predict a large earthquake in an aftershock sequence." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/25985.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Stark, Colin Peter. "The influence of active extensional tectonics on patterns of fluid flow." Thesis, University of Leeds, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.305934.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Akbar, Siddiq-A. "Urban housing in seismic areas : a computerised methodology for evaluating strategies for risk mitigation." Thesis, University of Cambridge, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.306466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sheikh, Md Neaz. "Simplified analysis of earthquake site response with particular application to low and moderate seismicity regions." Thesis, Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2353008x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Bäckman, Erik. "Defining an Earthquake Intensity Based Method for a Rapid Earthquake Classification System." Thesis, Uppsala universitet, Institutionen för geovetenskaper, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-317406.

Full text
Abstract:
Ground motions caused by earthquakes may be strong enough to cause destruction of infrastructure and possibly casualties. If such past destructive earthquakes are analysed, the information gained can be used to develop earthquake warning systems that predict and possibly reduce the damage potential of future earthquakes. The Swedish National Seismic Network (SNSN) runs an automated early warning system that attempts to predict the damage of an earthquake that has just been recorded, and forwards the predictions to relevant government agencies. The predictions are based on, e.g., earthquake magnitude, source depth and an estimate of the size of the affected human population. The purpose of this thesis is to introduce an additional parameter: earthquake intensity, which is a measure of the intensity with which the ground shakes. Based on this, a new earthquake hazard scheme, the Intensity Based Earthquake Classification (IBEC) scheme, is created. This scheme suggests alternative methods, relative to SNSN, of how earthquake classifications can be made. These methods use an intensity database established by modelling scenario earthquakes in the open-source software ShakeMap by the U.S. Geological Survey. The database consists of scenarios over the intervals 4.0 ≤ Mw ≤ 9.0 and 10 ≤ depth ≤ 150 kilometres, and covers the whole intensity scale, Modified Mercalli Intensity, 1.0 ≤ Imm ≤ 10.0. The IBEC classification scheme also enabled the creation of the 'Population-to-Area' criterion. It improves the prediction of earthquakes that strike isolated cities, located in, e.g., valleys in large mountainous areas and deserts. Even though such earthquakes are relatively uncommon, once they occur they may cause great damage, as many cities in such regions around the world are often less developed regarding resistance to ground motions.
Ground motions caused by earthquakes can be strong enough to damage our infrastructure and cause fatalities. By analysing past destructive earthquakes and developing programs that attempt to predict their impact, the potential damage can be reduced. The Swedish National Seismic Network (SNSN) runs an automated early warning system that attempts to predict the damage following an earthquake that has just been recorded, and forwards this information to the relevant authorities. The predictions are based on, for example, earthquake magnitude and depth, as well as an estimate of the human population in the affected area. The purpose of this thesis is to introduce an additional parameter: earthquake intensity, which is a measure of the intensity of the ground motions. Based on this, an earthquake classification scheme called Intensity Based Earthquake Classification (IBEC) is created. This scheme proposes alternative methods, relative to SNSN, for how earthquake classification can be carried out. These methods use an intensity database established by modelling earthquake scenarios in the open-source software ShakeMap, created by the U.S. Geological Survey. The database consists of scenarios over the intervals 4.0 ≤ Mw ≤ 9.0 and 10 ≤ depth ≤ 150 kilometres, which cover the whole intensity scale, Modified Mercalli Intensity, 1.0 ≤ Imm ≤ 10.0. The IBEC classification scheme has also enabled the creation of the 'Population-to-Area' criterion. This improves the prediction of earthquakes that strike isolated cities located, for example, in valleys in large mountain ranges and in deserts. Although this type of earthquake is relatively uncommon, once they occur they often cause enormous damage, since such cities are often less developed with regard to the resistance of buildings to ground motion.
APA, Harvard, Vancouver, ISO, and other styles
10

Sugito, Masata. "EARTHQUAKE MOTION PREDICTION, MICROZONATION, AND BURIED PIPE RESPONSE FOR URBAN SEISMIC DAMAGE ASSESSMENT." Kyoto University, 1987. http://hdl.handle.net/2433/138405.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Baska, David A. "An analytical/empirical model for prediction of lateral spread displacements /." Thesis, Connect to this title online; UW restricted, 2002. http://hdl.handle.net/1773/10182.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Yin, Can. "Exploring the underlying mechanism of load/unload response ratio theory and its application to earthquake prediction /." [St. Lucia, Qld.], 2005. http://www.library.uq.edu.au/pdfserve.php?image=thesisabs/absthe19121.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Kumar, Senthil. "Earthquake size, recurrence and rupture mechanics of large surface-rupture earthquakes along the Himalayan Frontal Thrust of India /." abstract and full text PDF (free order & download UNR users only), 2005. http://0-wwwlib.umi.com.innopac.library.unr.edu/dissertations/fullcit/3209126.

Full text
Abstract:
Thesis (Ph. D.)--University of Nevada, Reno, 2005.
"August 2005." Includes bibliographical references. Online version available on the World Wide Web. Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2005]. 1 microfilm reel ; 35 mm.
APA, Harvard, Vancouver, ISO, and other styles
14

Alcazar, Pastrana Omar. "Operational modal analysis, model updating and response prediction bridge under the 2014 Napa Earthquake." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/59197.

Full text
Abstract:
Bridges constitute a critical and important part of the infrastructure of many cities' transportation networks. They are expensive to build and maintain, and the consequences of a sudden failure are very severe. Therefore, bridges are expected to have a high degree of reliability, which means that they have to perform above a life safety criterion under earthquake excitations. In a continuous effort to improve design guidelines, it is imperative to understand the behavior of existing bridges that are subjected to severe shaking. For this reason, continuous monitoring of bridges has become essential: not only to help determine if a bridge has been damaged but also to understand its response to strong earthquake motions. The work reported here includes an in-depth analysis of the behavior of the Vallejo-Hwy 37 Napa River Bridge during the 2014 Napa, California earthquake (M 6.0). The bridge, located in Vallejo, California, connects Sears Point Road and Mare Island to Vallejo. It was built in 1967. The bridge was instrumented with 12 accelerometers on the superstructure and 3 accelerometers at a free-field site. An analysis of the recorded data of the accelerometers on the superstructure was carried out to determine the maximum displacement at mid-span, and to obtain the fundamental frequencies of the bridge during the excitation. A finite element (FE) model was developed based on the as-built drawings and model updating was performed. Finally, the updated model was used with the recorded ground motion of the 2014 Napa Earthquake to perform a time history analysis. The results were compared to the recorded data of the sensors located on the bridge. The peak displacement at mid-span in the longitudinal and transverse directions of the FE model had a good match to the recorded peak displacement. It can be concluded that the updated FE model can capture the peak displacement at the bridge mid-span. It also shows that having a strong motion network can help engineers to better understand the behavior of structures under earthquake loading, by looking at the recorded data and identifying peak values of acceleration, velocity and displacement.
Applied Science, Faculty of
Civil Engineering, Department of
Graduate
APA, Harvard, Vancouver, ISO, and other styles
15

KAWABE, Iwao, Kazuya MIYAKAWA, and Tianshi YANG. "Enhanced dissolution of soda-lime glass under stressed conditions with small effective stress (0.05 MPa) at 35℃ to 55℃: Implication for seismogeochemical monitoring." Dept. of Earth and Planetary Sciences, Nagoya University, 2012. http://hdl.handle.net/2237/20537.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Goetz, Ryan P. Rosenblad Brent L. "Study of the horizontal-to-vertical spectral ratio (HVSR) method for characterization of deep soils in the Mississippi Embayment." Diss., Columbia, Mo. : University of Missouri--Columbia, 2009. http://hdl.handle.net/10355/5334.

Full text
Abstract:
The entire thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file; a non-technical public abstract appears in the public.pdf file. Title from PDF of title page (University of Missouri--Columbia, viewed on December 22, 2009). Thesis advisor: Dr. Brent L. Rosenblad. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
17

Van, T. Veen Lauren Hannah. "CPT Prediction of Soil Behaviour Type, Liquefaction Potential and Ground Settlement in North-West Christchurch." Thesis, University of Canterbury. Geological Sciences, 2015. http://hdl.handle.net/10092/10468.

Full text
Abstract:
As a consequence of the 2010 – 2011 Canterbury earthquake sequence, Christchurch experienced widespread liquefaction, vertical settlement and lateral spreading. These geological processes caused extensive damage to both housing and infrastructure, and increased the need for geotechnical investigation substantially. Cone Penetration Testing (CPT) has become the most common method for liquefaction assessment in Christchurch, and issues have been identified with the soil behaviour type, liquefaction potential and vertical settlement estimates, particularly in the north-western suburbs of Christchurch where soils consist mostly of silts, clayey silts and silty clays. The CPT soil behaviour type often appears to over-estimate the fines content within a soil, while the liquefaction potential and vertical settlement are often calculated higher than those measured after the Canterbury earthquake sequence. To investigate these issues, laboratory work was carried out on three adjacent CPT/borehole pairs from the Groynes Park subdivision in northern Christchurch. Boreholes were logged according to NZGS standards, separated into stratigraphic layers, and laboratory tests were conducted on representative samples. Comparison of these results with the CPT soil behaviour types provided valuable information, where 62% of soils on average were specified by the CPT at the Groynes Park subdivision as finer than what was actually present, 20% of soils on average were specified as coarser than what was actually present, and only 18% of soils on average were correctly classified by the CPT. Hence the CPT soil behaviour type is not accurately describing the stratigraphic profile at the Groynes Park subdivision, and it is understood that this is also the case in much of northwest Christchurch where similar soils are found. The computer software CLiq, by GeoLogismiki, uses assessment parameter constants which are able to be adjusted with each CPT file, in an attempt to make each more accurate. These parameter changes can in some cases substantially alter the results for liquefaction analysis. The sensitivity of the overall assessment method, raising and lowering the water table, lowering the soil behaviour type index, Ic, liquefaction cutoff value, the layer detection option, and the weighting factor option, were analysed by comparison with a set of ‘base settings’. The investigation confirmed that liquefaction analysis results can be very sensitive to the parameters selected, and demonstrated the dependency of the soil behaviour type on the soil behaviour type index, as the tested assessment parameters made very little to no changes to the soil behaviour type plots. The soil behaviour type index, Ic, developed by Robertson and Wride (1998) has been used to define a soil’s behaviour type, which is defined according to a set of numerical boundaries. In addition to this, the liquefaction cutoff point is defined as Ic > 2.6, whereby it is assumed that any soils with an Ic value above this will not liquefy due to clay-like tendencies (Robertson and Wride, 1998). The method has been identified in this thesis as being potentially unsuitable for some areas of Christchurch as it was developed for mostly sandy soils. 
An alternative methodology involving adjustment of the Robertson and Wride (1998) soil behaviour type boundaries is proposed as follows: Ic < 1.31 – Gravelly sand to dense sand; 1.31 < Ic < 1.90 – Sands: clean sand to silty sand; 1.90 < Ic < 2.50 – Sand mixtures: silty sand to sandy silt; 2.50 < Ic < 3.20 – Silt mixtures: clayey silt to silty clay; 3.20 < Ic < 3.60 – Clays: silty clay to clay; Ic > 3.60 – Organic soils: peats. When the soil behaviour type boundary changes were applied to 15 test sites throughout Christchurch, 67% showed an improved change of soil behaviour type, while the remaining 33% remained unchanged, because they consisted almost entirely of sand. Within these boundary changes, the liquefaction cutoff point was moved from Ic > 2.6 to Ic > 2.5, which altered the liquefaction potential and vertical settlement to more realistic values. This confirmed that the overall soil behaviour type boundary changes appear both to solve the soil behaviour type issues and to reduce the overestimation of liquefaction potential and vertical settlement. This thesis acts as a starting point towards researching the issues discussed. In particular, useful future work includes investigation of the CLiq assessment parameter adjustments, and of those which would be most suitable for use in clay-rich soils such as those in Christchurch, with particular consideration of how the water table can be better assessed when perched layers of water exist, given the limitation that only one elevation can be entered into CLiq. Additionally, a useful investigation would be a comparison of the known liquefaction and settlements from the Canterbury earthquake sequence with the liquefaction and settlement potentials calculated in CLiq for equivalent shaking conditions. This would enable the difference between the two to be accurately defined, and a suitable adjustment applied. Finally, inconsistencies between the Laser-Sizer and Hydrometer should be investigated, as the Laser-Sizer under-estimated the fines content by up to one third of the Hydrometer values.
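The adjusted Ic boundaries listed in this abstract amount to a simple lookup. As a minimal illustrative sketch (not taken from the thesis; the function names and exact boundary handling are assumptions), the proposed classification could be expressed in Python as:

```python
def soil_behaviour_type(ic: float) -> str:
    """Classify a CPT soil behaviour type index Ic using the adjusted
    Robertson and Wride (1998) boundaries proposed in the abstract above."""
    if ic < 1.31:
        return "Gravelly sand to dense sand"
    elif ic < 1.90:
        return "Sands: clean sand to silty sand"
    elif ic < 2.50:
        return "Sand mixtures: silty sand to sandy silt"
    elif ic < 3.20:
        return "Silt mixtures: clayey silt to silty clay"
    elif ic < 3.60:
        return "Clays: silty clay to clay"
    else:
        return "Organic soils: peats"


def may_liquefy(ic: float) -> bool:
    """Apply the liquefaction cutoff moved from Ic > 2.6 to Ic > 2.5 in the
    proposed methodology: soils above the cutoff are treated as non-liquefiable."""
    return ic <= 2.5
```

For example, soil_behaviour_type(2.7) would return the silt-mixtures class, which under the adjusted cutoff would also be treated as non-liquefiable.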
APA, Harvard, Vancouver, ISO, and other styles
18

Balal, Onur. "Probabilistic Seismic Hazard Assessment For Earthquake Induced Landslides." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615453/index.pdf.

Full text
Abstract:
Earthquake-induced slope instability is one of the major sources of earthquake hazards in near-fault regions. Simplified tools, such as Newmark's Sliding Block (NSB) Analysis, are widely used to represent the stability of a slope under earthquake shaking. The outcome of this analogy is the slope displacement, where larger displacement values indicate higher seismic slope instability risk. Recent studies in the literature propose empirical models between the slope displacement and single or multiple ground motion intensity measures such as peak ground acceleration or Arias intensity. These correlations are based on the analysis of large datasets from the global ground motion recording database (PEER NGA-W1 Database). Ground motions from earthquakes that occurred in Turkey are poorly represented in the NGA-W1 database since corrected and processed data from Turkey were not available until recently. The objective of this study is to evaluate the compatibility of available NSB displacement prediction models for Probabilistic Seismic Hazard Assessment (PSHA) applications in Turkey using a comprehensive dataset of ground motions recorded during earthquakes that occurred in Turkey. Then the application of the selected NSB displacement prediction model in a vector-valued PSHA framework is demonstrated, with explanations of seismic source characterization, ground motion prediction models and ground motion intensity measure correlation coefficients. The results of the study are presented in terms of hazard curves, and a comparison is made with a case history in the Asarsuyu Region where seismically induced landslides (the Bakacak Landslides) took place during the 1999 Düzce Earthquake.
APA, Harvard, Vancouver, ISO, and other styles
19

Clancey, Gregory K. "Foreign Knowledge or art nation, earthquake nation : architecture, seismology, carpentry, the West, and Japan, 1876-1923." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9389.

Full text
Abstract:
Thesis (Ph.D.)--Massachusetts Institute of Technology, Program in Science, Technology and Society, 1999.
Includes bibliographical references.
This dissertation follows British professors at Tokyo's late nineteenth century College of Technology (Kobudaigaku) and continues into the twentieth century with the Japanese students they trained. My first chapters map out an argument between British disciplines over Japanese 'adaptation' and/or 'resistance' to nature, a conflict driven by the development of the modern science of seismology in Tokyo. Seismology was a unique cross-cultural project - a 'Western' instrumental science invented and first institutionalized in a non-Western place. I discuss how artifacts as diverse as seismographs, five-story wooden pagodas, and Mt. Fuji became 'boundary objects' in a fierce dispute between spokesmen for science and art over the character of the Japanese landscape and people. The latter chapters explain how young Japanese architects and seismologists re-mapped the discursive and instrumental terrains of their British teachers, challenging foreign knowledge-production from inside colonizing disciplines. The text is framed around the story of the Great Nobi Earthquake of 1891. According to contemporary Japanese narratives, the great earthquake (the most powerful in modern Japanese history) was particularly damaging to the new 'foreign' infrastructure, and caused Japanese to seriously question, for the first time, the efficacy of foreign knowledge. 'Japan's earthquake problem' went from being one of how to import European resistance into a fragile nation, to one of how to make a uniquely fragile imported infrastructure resist the power of Japanese nature. I critically re-tell this Japanese story as a corrective to European and American images of Meiji Japan as a 'pupil country' and the West as a 'teacher culture'. "Foreign Knowledge" demonstrates in very concrete ways how science and technology, art and architecture, gender, race, and class co-constructed Meiji Japan. Distinctions between 'artistic' and 'scientific' representations of culture/nature were particularly fluid in late nineteenth century Tokyo. Architects in my text often speak in the name of science, and seismologists become art critics and even ethnographers. The narrative is also trans-national; centered in Tokyo, it follows Japanese architects, scientists, and carpenters to Britain, Italy, the United States, and Formosa.
by Gregory K. Clancey.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
20

Llenos, Andrea Lesley. "Controls on earthquake rupture and triggering mechanisms in subduction zones." Thesis, Massachusetts Institute of Technology, 2010. http://hdl.handle.net/1721.1/59743.

Full text
Abstract:
Thesis (Ph. D.)--Joint Program in Oceanography/Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences; and the Woods Hole Oceanographic Institution), 2010.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Large earthquake rupture and triggering mechanisms that drive seismicity in subduction zones are investigated in this thesis using a combination of earthquake observations, statistical and physical modeling. A comparison of the rupture characteristics of M 7.5 earthquakes with fore-arc geological structure suggests that long-lived frictional heterogeneities (asperities) are primary controls on the rupture extent of large earthquakes. To determine when and where stress is accumulating on the megathrust that could cause one of these asperities to rupture, this thesis develops a new method to invert earthquake catalogs to detect space-time variations in stressing rate. This algorithm is based on observations that strain transients due to aseismic processes such as fluid flow, slow slip, and afterslip trigger seismicity, often in the form of earthquake swarms. These swarms are modeled with two common approaches for investigating time-dependent driving mechanisms in earthquake catalogs: the stochastic Epidemic Type Aftershock Sequence model [Ogata, 1988] and the physically-based rate-state friction model [Dieterich, 1994]. These approaches are combined into a single model that accounts for both aftershock activity and variations in background seismicity rate due to aseismic processes, which is then implemented in a data assimilation algorithm to invert catalogs for space-time variations in stressing rate. The technique is evaluated with a synthetic test and applied to catalogs from the Salton Trough in southern California and the Hokkaido corner in northeastern Japan. The results demonstrate that the algorithm can successfully identify aseismic transients in a multi-decade earthquake catalog, and may also ultimately be useful for mapping spatial variations in frictional conditions on the plate interface.
by Andrea Lesley Llenos.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
21

Lee, Michael. "Rapid Prediction of Tsunamis and Storm Surges Using Machine Learning." Diss., Virginia Tech, 2021. http://hdl.handle.net/10919/103154.

Full text
Abstract:
Tsunami and storm surge are two of the main destructive and costly natural hazards faced by coastal communities around the world. To enhance coastal resilience and to develop effective risk management strategies, accurate and efficient tsunami and storm surge prediction models are needed. However, existing physics-based numerical models have the disadvantage of being difficult to satisfy both accuracy and efficiency at the same time. In this dissertation, several surrogate models are developed using statistical and machine learning techniques that can rapidly predict a tsunami and storm surge without substantial loss of accuracy, with respect to high-fidelity physics-based models. First, a tsunami run-up response function (TRRF) model is developed that can rapidly predict a tsunami run-up distribution from earthquake fault parameters. This new surrogate modeling approach reduces the number of simulations required to build a surrogate model by separately modeling the leading order contribution and the residual part of the tsunami run-up distribution. Secondly, a TRRF-based inversion (TRRF-INV) model is developed that can infer a tsunami source and its impact from tsunami run-up records. Since this new tsunami inversion model is based on the TRRF model, it can perform a large number of tsunami forward simulations in tsunami inversion modeling, which is impossible with physics-based models. And lastly, a one-dimensional convolutional neural network combined with principal component analysis and k-means clustering (C1PKNet) model is developed that can rapidly predict the peak storm surge from tropical cyclone track time series. Because the C1PKNet model uses the tropical cyclone track time series, it has the advantage of being able to predict more diverse tropical cyclone scenarios than the existing surrogate models that rely on a tropical cyclone condition at one moment (usually at or near landfall). The surrogate models developed in this dissertation have the potential to save lives, mitigate coastal hazard damage, and promote resilient coastal communities.
Doctor of Philosophy
Tsunami and storm surge can cause extensive damage to coastal communities; to reduce this damage, accurate and fast computer models are needed that can predict the water level change caused by these coastal hazards. The problem is that existing physics-based computer models are either accurate but slow or less accurate but fast. In this dissertation, three new computer models are developed using statistical and machine learning techniques that can rapidly predict a tsunami and storm surge without substantial loss of accuracy compared to the accurate physics-based computer models. Three computer models are as follows: (1) A computer model that can rapidly predict the maximum ground elevation wetted by the tsunami along the coastline from earthquake information, (2) A computer model that can reversely predict a tsunami source and its impact from the observations of the maximum ground elevation wetted by the tsunami, (3) A computer model that can rapidly predict peak storm surges across a wide range of coastal areas from the tropical cyclone's track position over time. These new computer models have the potential to improve forecasting capabilities, advance understanding of historical tsunami and storm surge events, and lead to better preparedness plans for possible future tsunamis and storm surges.
APA, Harvard, Vancouver, ISO, and other styles
22

Yu, Diming. "Investigations of the b-value and its variations on possible earthquake prediction in the North-South China Seismic Belt." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/104602.

Full text
Abstract:
Thesis: S.M. in Geophysics, Massachusetts Institute of Technology, Department of Earth, Atmospheric, and Planetary Sciences, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 113-118).
The Gutenberg-Richter b-value is studied extensively by researchers as a possible earthquake precursor. In this thesis, two different approaches to compute the b-value for the purpose of earthquake prediction are investigated and discussed. A new methodology, the b-value ratio method, inspired by a 1988 paper by Morgan et al., is also introduced in this thesis as a variation of the b-value. To calculate the b-value ratio, the event catalog is separated at a change-point magnitude into a group of larger events and a group of smaller events, which leads to two b-values for the catalog: the b-value of the smaller events, b-low, and the b-value of the larger events, b-high. The b-value ratio is then obtained by dividing b-high by b-low. Both b-value and b-value ratio methods are applied to a set of earthquakes occurring between 1983 and 2015 in the North-South China Seismic Belt. The dataset contains 4454 events for M ≥ 3.6. Within this dataset, there is the catastrophic 2008 M = 7.9 Wenchuan earthquake. The b-value time series are computed in two different ways, the time-based method and the event-based method. Moving windows and overlapping windows are used in both ways. Our results calculated with the event-based method show an initial increase in b-value followed by a constant-slope decrease prior to the 2008 Wenchuan event. After the 2008 large earthquake occurred, the b-value bounces back to about 1.0 and starts to decrease again. The b-value ratio shows a completely reversed trend. Both b-value and b-value ratio in this case could be used as post-prediction precursors of the 2008 M = 7.9 Wenchuan earthquake. Analysis of b-value versus depth in the North-South China Seismic Belt region shows a monotonic decrease in b-value between 8 km and 13 km depth, which reflects an increase in differential stress in the upper crust. It is observed that b-value increases between 13 km and 22 km depth and decreases below 22 km depth. These observations correspond to the changes in the stress regimes and indicate the inverse relationship between b-value and differential stress in the crust.
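For readers unfamiliar with these quantities, the following minimal sketch (not the thesis code; the function names, the binning width dm, and the treatment of the truncated smaller-event group are simplifying assumptions) shows the standard Aki-Utsu maximum-likelihood b-value estimate and the b-value ratio as described above:

```python
import numpy as np


def b_value(mags, m_min, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes at or above m_min.
    dm is the magnitude binning width used in the standard bias correction."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - (m_min - dm / 2.0))


def b_value_ratio(mags, m_change, m_min, dm=0.1):
    """b-value ratio as described in the abstract: the catalog is split at a
    change-point magnitude, b-high is computed from the larger events and
    b-low from the smaller events, and b-high / b-low is returned.
    (Applying the unbounded Aki-Utsu estimator to the truncated smaller-event
    group is a simplification.)"""
    m = np.asarray(mags, dtype=float)
    b_low = b_value(m[m < m_change], m_min, dm)
    b_high = b_value(m[m >= m_change], m_change, dm)
    return b_high / b_low
```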
by Diming Yu.
S.M. in Geophysics
APA, Harvard, Vancouver, ISO, and other styles
23

Neupane, Ganesh Prasad. "Comparison of Natural and Predicted Earthquake Occurrence in Seismologically Active Areas for Determination of Statistical Significance." Bowling Green, Ohio : Bowling Green State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=bgsu1213494761.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Carroll, Daniel P. "Development of a GIS extension for liquefaction hazard analysis." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/22960.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Copana, Paucara Julio. "Seismic Slope Stability: A Comparison Study of Empirical Predictive Methods with the Finite Element Method." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/100797.

Full text
Abstract:
This study evaluates the seismically induced displacements of a slope using the Finite Element Method (FEM) in comparison to the results of twelve empirical predictive approaches. First, the existing methods to analyze the stability of slopes subjected to seismic loads are presented and their capabilities to predict the onset of failure and post-failure behavior are discussed. These methods include the pseudostatic method, the Newmark method, and stress-deformation numerical methods. Whereas the pseudostatic method defines a seismic coefficient for the analysis and provides a safety factor, the Newmark method incorporates a yield coefficient and the actual acceleration time history to estimate permanent displacements. Numerical methods incorporate advanced constitutive models to simulate the coupled stress-strain soil behavior, making the process computationally more costly. In this study, a model slope previously studied at laboratory scale is selected and scaled up to prototype dimensions. Then, the slope is subjected to 88 different input motions, and the seismic displacements obtained from the numerical and empirical approaches are compared statistically. From correlation analyses between seven ground motion parameters and the numerical results, new empirical predictive equations are developed for slope displacements. The results show that overall the FEM displacements are generally in agreement with the numerically developed methods by Fotopoulou and Pitilakis (2015) labelled "Method 2" and "Method 3", and the Newmark-type Makdisi and Seed (1978) and Bray and Travasarou (2007) methods for rigid slopes. Finally, functional forms for seismic slope displacement are proposed as a function of peak ground acceleration (PGA), Arias intensity (Ia), and yield acceleration ratio (Ay/PGA). These functions are expected to be valid for granular slopes such as earth dams, embankments, or landfills built on a rigid base and with low fundamental periods (Ts<0.2).
Master of Science
A landslide is a displacement of sloping ground that can be triggered by earthquake shaking. Several authors have investigated the failure mechanisms that lead to landslide initiation and subsequent mass displacement and proposed methodologies to assess the stability of slopes subjected to seismic loads. The development of these methodologies has to rely on field data that in most cases are difficult to obtain, because identifying the location of future earthquakes involves too many uncertainties to justify investments in field instrumentation (Kutter, 1995). Nevertheless, the use of scale models and numerical techniques has helped in the investigation of these geotechnical hazards and has led to the development of equations that predict seismic displacements as functions of different ground motion parameters. In this study, the capabilities and limitations of the most recognized approaches to assess seismic slope stability are reviewed and explained. In addition, a previous shaking-table model is used for reference and scaled up to realistic proportions to calculate its seismic displacement using different methods, including a Finite Element model in the commercial software Plaxis2D. These displacements are compared statistically and used to develop new predictive equations. This study is relevant for understanding the capabilities of newer numerical approaches in comparison to classical empirical methods.
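As background to the Newmark sliding-block analogy summarized in this abstract, here is a minimal rigid-block sketch (illustrative only, not the study's implementation; downslope-only sliding and a constant yield acceleration are assumed):

```python
import numpy as np


def newmark_displacement(acc, dt, a_y):
    """Rigid-block Newmark sliding displacement.
    acc : ground acceleration time history (m/s^2)
    dt  : time step (s)
    a_y : yield acceleration of the slope (m/s^2)
    The block starts sliding whenever the ground acceleration exceeds the
    yield acceleration and keeps sliding until its relative velocity
    returns to zero; displacement accumulates only while sliding."""
    vel = 0.0   # relative sliding velocity (m/s)
    disp = 0.0  # accumulated permanent displacement (m)
    for a in np.asarray(acc, dtype=float):
        if vel > 0.0 or a > a_y:
            vel += (a - a_y) * dt
            if vel < 0.0:
                vel = 0.0   # sliding episode ends
            disp += vel * dt
    return disp
```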
APA, Harvard, Vancouver, ISO, and other styles
26

Dernbach, Rafael Karl. "Anticipatory realism : constructions of futures and regimes of prediction in contemporary post-cinematic art." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289021.

Full text
Abstract:
This thesis examines strategies of anticipation in contemporary post-cinematic art. In the Introduction and the first chapter, I make the case for anticipation as a cultural technique for the construction of and adjustment to future scenarios. This framing allows analysis of constructions of futures as culturally and media-historically specific operations. Via anticipation, constructions of futures become addressable as embedded in specific performative and material economies: as regimes of prediction. The hypothesis is that cultural techniques of anticipation do not only serve to construct particular future scenarios, but also futurity, the very condition for the construction of futures. Drawing upon the philosophical works of, in particular, Vilem Flusser, Jacques Derrida and Elena Esposito, and the theory of cultural techniques, I conceptualize anticipation through the analysis of post-cinematic strategies. I argue that post-cinematic art is particularly apt for the conceptualization of anticipation. The self-reflexive multi-media interventions of post-cinematic art can expose the realisms that govern regimes of prediction. Three cultural techniques of anticipation and their use as artistic strategies in post-cinematic art are theorized: enactment, soft montage and rendering. Each of these techniques is examined in its construction of futures through performative and material operations in art gallery spaces. The second chapter examines strategies of enactment in post-cinematic installations by Neïl Beloufa. My readings of Kempinski (2007), The Analyst, the Researcher, the Screenwriter, the CGI tech and the Lawyer (2011), World Domination (2012) and Data for Desire (2014) propose that enactment allows for an engagement with futures beyond extrapolation. With Karen Barad's theory of agential realism, the construction of futures becomes graspable as a political process in opposition to a mere prolonging of the present into the future. The third chapter focuses on the strategy of soft montage in works by Harun Farocki. I interpret Farocki's application of soft montage in the exhibition Serious Games I-IV (2009-2010) as a critical engagement with anticipatory forms of organizing power and distributing precarity. His work series Parallel I-IV (2012-2014) is then analyzed as a speculation on the future of image production technologies and their role in constructing futures. The final chapter analyses the self-referential use of computer-generated renderings in works by Hito Steyerl. The installations How Not To Be Seen (2013), Liquidity Inc. (2014), The Tower (2015) and ExtraSpaceCraft (2016) are read as interventions in the performative economies of contemporary image production. I argue that these works allow us to grasp the reality-producing and futurity-producing effects of rendering as anticipatory cultural technique. My thesis aims to contribute to the discussions on a 'turn towards the future' in contemporary philosophy and cultural criticism. My research thus focuses on the following set of questions. What can we learn about the operations of future construction through encounters with post-cinematic art? How are futures and future construction framed in such art? What realisms do future constructions rely on? And how can anticipation as a cultural technique be politicized and democratized?
APA, Harvard, Vancouver, ISO, and other styles
27

Ekstrom, Levi Thomas. "A Simplified Performance-Based Procedure for the Prediction of Lateral Spread Displacements." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/5466.

Full text
Abstract:
Characterization of the seismic hazard and ground-failure hazard of a site using traditional empirical lateral spread displacement models requires consideration of uncertainties in seismic loading, site conditions, and model prediction. Researchers have developed performance-based design methods to simultaneously account for these sources of uncertainty through the incorporation of a probabilistic analytical framework. While these methods can effectively handle the various sources of uncertainty associated with empirical lateral spread displacement prediction, they can be difficult for engineers to perform in a practical manner without the use of specialized numerical tools. To make the benefits of a performance-based approach accessible to a broader audience of geotechnical engineers, a simplified performance-based procedure is introduced in this paper. This map-based procedure utilizes a reference soil profile to provide hazard-targeted reference displacements across a geographic area. Equations are provided for engineers to correct those reference displacements for site-specific soil conditions and surface geometry to produce site-specific, hazard-targeted estimates of lateral spread displacement. The simplified performance-based procedure is validated through a comparative study assessing probabilistic lateral spread displacements across several cities in the United States. Results show that the simplified procedure closely approximates the results from the full performance-based model for all sites. Comparisons with deterministic analyses are presented, and the place of both in engineering practice is discussed.
APA, Harvard, Vancouver, ISO, and other styles
28

Williams, Nicole D. "Evaluation of Empirical Prediction Methods for Liquefaction-Induced Lateral Spread from the 2010 Maule, Chile, Mw 8.8 Earthquake in Port Coronel." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/6086.

Full text
Abstract:
Over the past several decades, empirical formulas have been developed and improved to predict liquefaction and lateral spread based on a database of case histories from observed earthquakes, such as Youd et al. (2002) and Rauch and Martin (2000). The 2010 Maule, Chile earthquake is unique first of all because it is recent and was not used to develop recent liquefaction and lateral spread evaluation methods, and therefore can be reasonably used to evaluate the effectiveness of such equations. Additionally, the 8.8 magnitude megathrust event fills a significant gap in the databases used to develop these empirical formulas, which tend to under-represent large-magnitude earthquakes and events which occur along subduction zones. Use of case histories from this event will therefore effectively test the robustness and accuracy of these methods. As part of this comparison, data will be collected from two piers in Port Coronel, Chile: Lo Rojas or Fisherman's Pier, and el Carbonero. Lo Rojas is a municipally owned pier which failed in the 2010 earthquake. Dr. Kyle Rollins gathered detailed engineering survey data defining lateral spread displacements along this pier in a reconnaissance visit with other GEER investigators after the earthquake. El Carbonero was under construction during the earthquake, but no known lateral displacements were observed. Collaboration with local universities and personnel contributed a great deal of knowledge about the soil profile. In early April 2014, collection of SPT and CPT data began in strategic locations to fill gaps of understanding about the stratigraphy near the two piers. Additional testing will provide necessary information to carry out predictions of displacements using current empirical models, which can then be compared with observed displacements collected after the earthquake. Collected data will also be compiled, and this alone will provide useful information as it represents a unique case history for future evaluation. The goals of this study are therefore: (1) Collect data for two piers (Lo Rojas and el Carbonero) in Port Coronel, Chile, to provide a useful case history of observed lateral displacements; (2) Conduct a liquefaction and lateral spread analysis to predict displacement of the two piers in question, considering lateral spread and slope stability; (3) Compare predicted values with observed displacements and draw conclusions on the predictive capabilities of the analyzed empirical equations for similar earthquakes; and (4) Make recommendations for improvement where possible.
APA, Harvard, Vancouver, ISO, and other styles
29

Ocak, Recai Soner. "Probabilistic Seismic Hazard Assessment Of Eastern Marmara And Evaluation Of Turkish Earthquake Code Requirements." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613902/index.pdf.

Full text
Abstract:
The primary objective of this study is to evaluate the seismic hazard in the Eastern Marmara Region using improved seismic source models and enhanced ground motion prediction models in a probabilistic approach. The geometry of the fault zones (length, width, dip angle, segmentation points, etc.) is determined with the help of available fault maps and source lines traced on satellite images. The state-of-the-art rupture model proposed by the USGS Working Group in 2002 is applied to the source system. A composite recurrence model is used for all seismic sources in the region to represent the characteristic behavior of the North Anatolian Fault. New and improved global ground motion models (NGA models) are used to model the ground motion variability for this study. Previous studies, in general, used regional models or older ground motion prediction models which were updated by their developers during the NGA project. The new NGA models were improved in terms of additional prediction parameters (such as depth of the source, basin effects, site-dependent standard deviations, etc.), statistical approach, and a very well constrained global database. The use of NGA models reduced the epistemic uncertainty in the total hazard incorporated by regional or older models using smaller datasets. The results of the study are presented in terms of hazard curves, deaggregation of the hazard and uniform hazard spectra for six main locations in the region (Adapazari, Duzce, Golcuk, Izmit, Iznik, and Sapanca city centers) to provide a basis for seismic design of special structures in the area. Hazard maps of the region for rock site conditions, at the levels of risk accepted by the Turkish Earthquake Code (TEC-2007), are provided to allow the user to perform site-specific hazard assessment for local site conditions and develop a site-specific design spectrum. A comparison of the TEC-2007 design spectrum with the uniform hazard spectrum developed for selected locations is also presented for future reference.
APA, Harvard, Vancouver, ISO, and other styles
30

Erdurmaz, Muammer Sercan. "Neural Network Prediction Of Tsunami Parameters In The Aegean And Marmara Seas." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12605134/index.pdf.

Full text
Abstract:
Tsunamis are characterized as shallow water waves with long periods and wavelengths. They are caused by a sudden displacement of a large volume of water, and earthquakes are one of the main causes of tsunami generation. Historical data for an observation period of 3500 years starting from 1500 B.C. indicate that approximately 100 tsunamis occurred in the seas neighboring Turkey. Historical earthquake and tsunami data were collected and used to develop two artificial neural network models to forecast tsunami characteristics for future occurrences and to estimate the tsunami return period. An Artificial Neural Network (ANN) is a system that simulates the learning and thinking behavior of the human brain by experiencing measured or observed data. A first set of artificial neural networks is used to estimate future earthquakes that may create a tsunami and their magnitudes. A second set is designed to estimate tsunami inundation in relation to the tsunami intensity, the earthquake depth and the earthquake magnitude predicted by the first set of neural networks. In the case study, the Marmara and Aegean regions are taken into consideration for the estimation process. Return periods, including the last earthquake that occurred in the Turkish seas, the 1999 Izmit (Kocaeli) Earthquake, were utilized together with the average earthquake depths calculated for the Marmara and Aegean regions to predict the earthquake magnitude that may create a tsunami in these regions for various return periods of 1-100 years starting from 2004. The obtained earthquake magnitudes were used together with tsunami intensities and earthquake depths to forecast the tsunami wave height at the coast. It is concluded that the neural network predictions are a satisfactory first step toward using earthquake parameters such as depth and magnitude in calculations of the average tsunami height at the shore.
APA, Harvard, Vancouver, ISO, and other styles
31

Luo, Yan. "Spatial and temporal variations of earthquake frequency-magnitude distribution at the subduction zone near the Nicoya Peninsula, Costa Rica." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/45963.

Full text
Abstract:
The Nicoya Peninsula of Costa Rica is unusually close to the Middle America Trench (MAT), such that interface locking along the megathrust is observable under land. Here, rapid convergence between the downgoing Cocos and the over-riding Caribbean plates at ~85 mm/yr allows for observable high strain rates, frequent large earthquakes and ongoing micro-seismicity. By taking advantage of this ideal location, a network of 20 on-land broadband seismometers was established in cooperation between UC Santa Cruz, Georgia Tech, and OVSICORI, with most stations operating since 2008. To evaluate what seismicity tells us about the ongoing state of coupling along the interface, we must consistently evaluate the location and magnitude of ongoing micro-seismicity. Because of large levels of anthropogenic, biologic, and coastal noise, automatic detection of earthquakes remains problematic in this region. Thus, we resorted to detailed manual investigation of earthquake phases. So far, we have detected nearly 7,000 earthquakes below or near Nicoya between February and August 2009. From these events we evaluate the fine-scale frequency-magnitude distribution (FMD) along the subduction megathrust. The results from this 'b-value mapping' are compared with an earlier study of the seismicity 9 years prior. In addition, we evaluate them relative to the latest geodetically derived locking. Preliminary comparisons of spatial and temporal variations of the b-values will be reported here. Because ongoing manual detection of earthquakes is extremely laborious and some events might be easily neglected, we are implementing a match-filter detection algorithm to search for new events from the continuous seismic data. This new approach has been previously successful in identifying aftershocks of the 2004 Parkfield earthquake. To do so, we use the waveforms of 858 analyst-detected events as templates to search for similarly repeating events during the same periods that have been manually detected. Preliminary results on the effectiveness of this technique are reported. The overall goal of this research is to evaluate the evolution of stress along the megathrust that may indicate the location and magnitude of potentially large future earthquakes. To do so, I make the comparison between the FMD and the interface locking. Only positive correlations are observed in the Nicoya region. The result is different from the one derived from the seismic data set that was recorded 9 years before our data. Therefore, to substantiate the causes for the different relationships between the b-value and the coupling degree, we need additional data with more reliable magnitudes.
APA, Harvard, Vancouver, ISO, and other styles
32

Shackleton, John Ryan. "Numerical Modeling of Fracturing in Non-Cylindrical Folds: Case Studies in Fracture Prediction Using Structural Restoration." Amherst, Mass. : University of Massachusetts Amherst, 2009. http://scholarworks.umass.edu/open_access_dissertations/82/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Arvola, Maja. "Deep Learning for Dose Prediction in Radiation Therapy : A comparison study of state-of-the-art U-net based architectures." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-447081.

Full text
Abstract:
Machine learning has shown great potential as a step in automating radiotherapy treatment planning. It can be used for dose prediction and a popular deep learning architecture for this purpose is the U-net. Since it was proposed in 2015, several modifications and extensions have been proposed in the literature. In this study, three promising modifications are reviewed and implemented for dose prediction on a prostate cancer data set and compared with a 3D U-net as a baseline. The tested modifications are residual blocks, densely connected layers and attention gates. The different models are compared in terms of voxel error, conformity, homogeneity, dose spillage and clinical goals. The results show that the performance was similar in many aspects for the models. The residual blocks model performed similar or better than the baseline in almost all evaluations. The attention gates model performed very similar to the baseline and the densely connected layers were uneven in the results, often with low dose values in comparison to the baseline. The study also shows the importance of consistent ground truth data and how inconsistencies affect metrics such as isodose Dice score and Hausdorff distance.
APA, Harvard, Vancouver, ISO, and other styles
34

Zöller, Gert. "Critical states of seismicity : modeling and data analysis." Thesis, Universität Potsdam, 2005. http://opus.kobv.de/ubp/volltexte/2006/742/.

Full text
Abstract:
The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. Therefore, the present study combines numerical modeling and analysis of real data in order to unveil the relationships between physical mechanisms and observational quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes large earthquakes to occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity, such as the frequency-size distribution and the temporal clustering of earthquakes, depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models for identifying the regions of parameter space that are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity in terms of a critical state which precedes a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution; furthermore, the spatial correlation length grows and the seismic energy release accelerates prior to large events. A large earthquake may occur as a single connected rupture or as an interrupted rupture (a foreshock followed by the mainshock). The predictive power of these precursors is, however, limited: rather than forecasting the time, location and magnitude of individual events, they are promising as a contribution to a broad multiparameter approach to improved time-dependent hazard assessment.
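One of the observables discussed above, the frequency-size (Gutenberg-Richter) distribution, can be quantified from a catalogue with the maximum-likelihood b-value estimator (Aki, 1965). The sketch below, with synthetic magnitudes, is purely illustrative and is not part of the thesis.

```python
# Illustrative sketch: maximum-likelihood Gutenberg-Richter b-value from a
# catalogue of magnitudes, one way to quantify the frequency-size distribution.
import numpy as np

def b_value(magnitudes, m_c, dm=0.1):
    """b-value for events at or above completeness magnitude m_c.
    dm is the magnitude binning width (Utsu's correction)."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic magnitudes drawn from an exponential (GR) law with b = 1:
rng = np.random.default_rng(0)
mags = 3.0 + rng.exponential(scale=1.0 / np.log(10), size=5000)
print(round(b_value(mags, m_c=3.0, dm=0.0), 2))   # should be close to 1.0
```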
APA, Harvard, Vancouver, ISO, and other styles
35

Kohlburn, Joseph Robert. "A History of Dissent: Utagawa Kuniyoshi (1797-1861) as Agent of the Edokko Chonin." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1242853913.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Mengfu. "Design By Accident." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1902.

Full text
Abstract:
Accident is a metaphor for life. From an arbitrary point in time, we potentially preview the entirety of existence. There is a Chinese idiom, “blessing or bane,” which implies that a misfortune may soon turn into a blessing. Focusing on accident as a design method implies making the best out of a bad situation. An accident reveals invisible circumstances and potentialities in the world, both familiar and unfamiliar. Looking into the unpredictable world, I can begin to release my control, take a breath, and see what might happen if I do not fight the situation. I am able to get out of my own way, and see what the work’s destiny will be. This sets up a context in which there are no faults, no mistakes, and no accidents: everything may contribute to a solution.
APA, Harvard, Vancouver, ISO, and other styles
37

Yunatci, Ali Anil. "Gis Based Seismic Hazard Mapping Of Turkey." Phd thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612688/index.pdf.

Full text
Abstract:
The efficiency of probabilistic seismic hazard analysis depends mainly on the individual successes of its complementing components, such as source characterization and ground motion intensity prediction. This study contributes to major components of the seismic hazard workflow, including magnitude-rupture dimension scaling relationships and ground motion intensity prediction. The study includes revised independent models for predicting rupture dimensions in shallow crustal zones, accompanied by proposals for geometrically compatible rupture area-length-width models which satisfy the rectangular rupture geometry assumption. The second main part of the study focuses on developing a new ground motion prediction model using data from the Turkish strong ground motion database. The series of efforts includes: i) compilation and processing of a strong motion dataset; ii) quantifying the uncertainties of predictive parameters such as magnitude and source-to-site distance, and of predicted accelerations due to uncertainty in site conditions and response, as well as uncertainty due to the random orientation of the sensor; iii) developing a ground response model as a continuous function of peak ground acceleration and shear wave velocity; and finally, iv) removing bias in predictions due to uneven sampling of the dataset. Auxiliary components of the study include a systematic approach to the source characterization problem, with products ranging from the description of systematically idealized and documented seismogenic faults in Anatolia to their delineation, magnitude-recurrence parameterization, and the selection of maximum magnitude earthquakes. The last stage of the study covers the development of a custom computer code for probabilistic seismic hazard assessment which meets the demands of the modern state of practice.
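The kind of computation such a hazard code must perform can be illustrated by the basic hazard integral for a single source. The sketch below combines a truncated Gutenberg-Richter magnitude distribution with a toy lognormal ground motion model; all coefficients and source parameters are invented for illustration and are not the models developed in the thesis.

```python
# Schematic PSHA kernel (placeholder numbers only): annual rate of exceeding a
# PGA level from one source, integrating a truncated Gutenberg-Richter
# magnitude distribution against a lognormal ground motion model.
import numpy as np
from scipy.stats import norm

def ln_gmpe(m, r_km):
    """Toy ground motion model: mean ln(PGA in g) and sigma (placeholders)."""
    return -3.5 + 0.9 * m - 1.2 * np.log(r_km + 10.0), 0.6

def rate_of_exceedance(pga, m_min=5.0, m_max=7.5, b=1.0, nu=0.05, r_km=20.0):
    """nu is the annual rate of events with M >= m_min on the source."""
    m = np.linspace(m_min, m_max, 200)
    beta = b * np.log(10.0)
    # truncated exponential magnitude pdf
    f_m = beta * np.exp(-beta * (m - m_min)) / (1 - np.exp(-beta * (m_max - m_min)))
    mu_ln, sigma = ln_gmpe(m, r_km)
    p_exceed = 1.0 - norm.cdf((np.log(pga) - mu_ln) / sigma)
    return nu * np.trapz(f_m * p_exceed, m)

for a in (0.05, 0.1, 0.2, 0.4):               # PGA levels in g
    print(a, rate_of_exceedance(a))           # points on a toy hazard curve
```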
APA, Harvard, Vancouver, ISO, and other styles
38

Tavano, Matteo. "Seismic response of tank-fluid systems: state of the art review and dynamic buckling analysis of a steel tank with the added mass method." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3006/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Levendoglu, Mert. "Probabilistic Seismic Hazard Assessment Of Ilgaz - Abant Segments Of North Anatolian Fault Using Improved Seismic Source Models." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615430/index.pdf.

Full text
Abstract:
The Bolu-Ilgaz region was damaged by several large earthquakes in the last century, and the structural damage was substantial, especially after the 1944 and 1999 earthquakes. The objective of this study is to build the seismic source characterization model for the rupture zone of the 1944 Bolu-Gerede earthquake and to perform probabilistic seismic hazard assessment (PSHA) in the region. One of the major improvements over previous PSHA practice accomplished in this study is the development of advanced seismic source models in terms of source geometry and recurrence relations. The geometry of the linear fault segments is determined and incorporated with the help of available fault maps. A composite magnitude distribution model is used to properly represent the characteristic behavior of the NAF without an additional background zone. Fault segments, rupture sources, rupture scenarios and fault rupture models are determined using the WG-2003 terminology. The Turkey-Adjusted NGAW1 (Gülerce et al., 2013) prediction models are employed for the first time on the NAF system. The results of the study are presented in terms of hazard curves, deaggregation of the hazard, and uniform hazard spectra for four main locations in the region, to provide a basis for evaluating the seismic design of special structures in the area. Hazard maps of the region for rock site conditions and for the proposed site characterization model are provided to allow the user to perform site-specific hazard assessment for local site conditions and to develop a site-specific design spectrum. The results of the study will be useful for managing future seismic hazard in the region.
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Zhiyi. "évaluation du risque sismique par approches neuronales." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC089/document.

Full text
Abstract:
Seismic probabilistic risk assessment (SPRA) is one of the most widely used methodologies to assess and ensure the performance of critical infrastructures, such as nuclear power plants (NPPs), under earthquake loading. SPRA adopts a probabilistic approach to estimate the frequency of occurrence of severe consequences for NPPs under seismic conditions. The thesis discusses the following aspects: (i) construction of meta-models with artificial neural networks (ANNs) to build the relations between seismic intensity measures and engineering demand parameters of the structures, in order to accelerate the fragility analysis; the uncertainty related to substituting finite element models with ANNs is investigated; (ii) proposal of a Bayesian framework with adaptive ANNs, to take into account different sources of information, including numerical simulation results, reference values provided in the literature and damage data obtained from post-earthquake observations, in the fragility analysis; (iii) computation of ground motion prediction equations (GMPEs) with ANNs, in which the epistemic uncertainties of the GMPE input parameters, such as the magnitude and the averaged thirty-meter shear wave velocity, are taken into account; and (iv) calculation of the annual failure rate by combining results from the fragility and hazard analyses, where the fragility curves are determined by the adaptive ANN and the hazard curves are obtained from the GMPEs calibrated with ANNs. The proposed methodologies are applied to several industrial case studies, such as the KARISMA benchmark and the SMART model.
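Step (iv) above amounts to convolving a fragility curve with the site hazard curve. The sketch below shows that convolution numerically; the hazard and fragility parameters are invented for illustration and do not come from the thesis.

```python
# Minimal numerical sketch of step (iv): convolving a lognormal fragility
# curve with a seismic hazard curve to obtain an annual failure rate.
# All parameters below are invented placeholders.
import numpy as np
from scipy.stats import lognorm

a = np.logspace(-2, 0.5, 400)                     # PGA grid (g)

# Toy hazard curve: annual rate of exceeding PGA level a
hazard = 1e-3 * (a / 0.1) ** -2.2

# Lognormal fragility: P(failure | PGA = a), median 0.6 g, log-std 0.4
fragility = lognorm(s=0.4, scale=0.6).cdf(a)

# lambda_f = integral of P_f(a) * |d(hazard)/da| da
annual_failure_rate = np.trapz(fragility * np.abs(np.gradient(hazard, a)), a)
print(f"annual failure rate ~ {annual_failure_rate:.2e}")
```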
APA, Harvard, Vancouver, ISO, and other styles
41

Bibault, Jean-Emmanuel. "Prédiction par Deep Learning de la réponse complète après radiochimiothérapie pré-opératoire du cancer du rectum localement avancé Labeling for big data in radiation oncology: the radiation oncology structures ontology Big data and machine learning in radiation oncology: state of the art and future prospects Deep learning and radiomics predict complete response after neo-adjuvant chemoradiation for locally advanced rectal cancer." Thesis, Sorbonne Paris Cité, 2018. https://wo.app.u-paris.fr/cgi-bin/WebObjects/TheseWeb.woa/wa/show?t=2388&f=17288.

Full text
Abstract:
The use of electronic health records to formalize, organize and plan patient treatment has generated vast amounts of data. These data include demographic, socio-economic, clinical, biological, imaging and, increasingly, genomic features. Medicine, whose practice is founded on semiology and physiopathology, will be profoundly transformed by this phenomenon: the complexity and volume of the information that must be integrated to reach a medical decision could soon exceed human cognitive abilities, and artificial intelligence methods could assist physicians and augment their predictive and decision-making capacities. The first part of this work presents the types of data now routinely available in radiation oncology and details the data needed to build a predictive model; it also describes how radiotherapy-specific data were homogenized and conceptualized, notably through the creation of an ontology, so that they can be integrated into a clinical data warehouse. The second part reviews several machine learning methods, namely k-NN, SVM, ANN and its variant deep learning, evaluates their respective advantages and pitfalls, and references the studies that have already used these methods in radiation oncology. The third part details the creation of a model predicting pathologic complete response after neoadjuvant chemoradiation for locally advanced rectal cancer. This proof-of-concept study uses heterogeneous data sources and a deep neural network to identify patients in complete response after chemoradiation who might not require radical surgical treatment, thereby significantly reducing the overall adverse effects of treatment. This example, which could be integrated into existing radiotherapy software, uses routinely collected health data and illustrates the potential of AI-based prediction for the personalization of care.
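The families of models compared in the second part can be illustrated on synthetic tabular data with off-the-shelf scikit-learn estimators; nothing below reproduces the thesis data, its features or its deep network.

```python
# Illustrative sketch only: comparing k-NN, SVM and a small neural network on
# synthetic "clinical" features with scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           random_state=0)       # stand-in for patient features

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=7),
    "SVM": SVC(kernel="rbf", C=1.0),
    "ANN": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                         random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC ~ {auc:.2f}")
```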
APA, Harvard, Vancouver, ISO, and other styles
42

Lin, Jia-Sheng, and 林家聖. "The Statistical Analysis of Earthquake Prediction." Thesis, 2001. http://ndltd.ncl.edu.tw/handle/23420377599575099017.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Chen, Yun-Chien, and 陳芸仟. "Earthquake Prediction Via Back Propagation Neural Network." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/87238218545102882108.

Full text
Abstract:
Master's thesis
National Taipei University of Education
Master's Program, Department of Computer Science
101
Taiwan is located between the Eurasian and Pacific plates, so seismicity is very frequent and active, including earthquakes with magnitudes greater than 6.0. Because the population density of Taiwan is high, earthquakes repeatedly cause losses of life and economic damage, which makes research on earthquake precursors and earthquake prediction an important issue. A common and relatively simple approach is to observe changes in the short-term seismicity rate against the long-term seismicity rate through statistical parameters such as the Z value, the cumulative magnitude and the number of earthquakes. These methods describe seismicity well but cannot completely predict the arrival of a main shock. Scientists still do not fully understand the mechanism by which earthquakes occur, but they believe these parameters are nonlinearly related to it, and no satisfactory physical model is available to describe the occurrence mechanism of earthquakes. A back-propagation neural network, which mimics biological neurons, performs well on nonlinear problems without requiring a predefined model. In this study, we feed these seismicity parameters, including the Z value and the number of earthquakes, into the input layer of a back-propagation neural network and predict the largest magnitude in the following month. After training and testing on data from 1994-2011, the forecasts of the largest magnitude in the next month succeed in 72% of cases when the magnitude is between 5.0 and 6.5, and in 39% of cases when the magnitude is greater than 6.5.
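A minimal sketch of the kind of back-propagation network described here is given below, using scikit-learn. The feature columns, target relation and monthly data are synthetic placeholders, not the catalogue or the parameters used in the thesis.

```python
# Minimal sketch, not the thesis code: a back-propagation network mapping a
# few monthly seismicity statistics (placeholder features) to the largest
# magnitude in the following month.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Columns stand in for monthly statistics such as a Z value, a rate-change
# statistic, the cumulative magnitude and the event count; rows are months.
X = rng.normal(size=(216, 4))                  # e.g. 1994-2011 monthly windows
y = 4.5 + 0.4 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 0.2, size=216)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000,
                 random_state=0),
)
model.fit(X[:-24], y[:-24])                    # train on the earlier months
pred = model.predict(X[-24:])                  # forecast the last two years
print(np.round(pred[:5], 2))
```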
APA, Harvard, Vancouver, ISO, and other styles
44

"A Hidden Markov Model for Earthquake Prediction." 2016. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1292344.

Full text
Abstract:
This thesis introduces a new model for earthquake prediction. Earthquake occurrence is associated with change-points in underlying underground dynamics such as the stress level and the strength of electromagnetic signals. Hence, earthquake prediction can be viewed as a problem of change-point prediction. The previous literature on change-point analysis focuses on testing, estimation, sequential detection and the forecasting of future observations under a change-point model; all of these research areas treat change-points as abrupt changes that cannot be predicted. We develop a novel model with hidden Markov structure which can be used to predict the change in state of the hidden Markov chain. The change in hidden state can be regarded as a change-point in the underlying underground dynamics, so the model can be applied to the prediction of earthquake occurrence. Simulation studies and applications to a real earthquake dataset indicate that the proposed model can successfully predict future change-points and observations.
Yip, Cheuk Fung.
Thesis M.Phil. Chinese University of Hong Kong 2016.
Includes bibliographical references (leaves ).
Abstracts also in Chinese.
Title from PDF title page (viewed on …).
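The filtering idea behind predicting a change in hidden state can be sketched with a two-state Gaussian hidden Markov model, where the filtered probability of the second state acts as a change-point indicator. The transition matrix, emission parameters and synthetic data below are illustrative only and are not the thesis model.

```python
# Minimal sketch: forward filtering in a two-state Gaussian HMM, with the
# filtered probability of state 1 serving as a change-point predictor.
import numpy as np
from scipy.stats import norm

A = np.array([[0.98, 0.02],      # transition matrix: state 0 -> {0, 1}
              [0.00, 1.00]])     # state 1 is absorbing ("after the change")
means, sds = np.array([0.0, 2.0]), np.array([1.0, 1.0])

rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(0, 1, 80), rng.normal(2, 1, 20)])  # change at t=80

p = np.array([1.0, 0.0])         # filtered state distribution
prob_changed = []
for x in obs:
    p = p @ A                                    # predict step
    p = p * norm.pdf(x, means, sds)              # update with the likelihood
    p = p / p.sum()
    prob_changed.append(p[1])                    # P(change has occurred | data)

print(np.round(prob_changed[75:85], 3))          # rises sharply after t = 80
```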
APA, Harvard, Vancouver, ISO, and other styles
45

Sawlan, Zaid A. "Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea." Thesis, 2012. http://hdl.handle.net/10754/255453.

Full text
Abstract:
Tsunami concerns have increased worldwide after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011), which is adaptive and consistent. Because of the different sources of uncertainty in the model, observations are needed to improve model prediction through a data assimilation framework. Model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines the tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother estimates the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters. Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model alone. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed.
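The analysis step of an ensemble Kalman filter of the kind referred to above can be sketched generically as follows; the dimensions, observation operator and numbers are illustrative assumptions and are unrelated to GeoClaw or the Red Sea set-up.

```python
# Generic sketch of a (stochastic) ensemble Kalman filter analysis step.
import numpy as np

def enkf_update(X, y, H, obs_std, rng):
    """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator."""
    n_obs, n_ens = len(y), X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    R = (obs_std ** 2) * np.eye(n_obs)
    P_yy = HA @ HA.T / (n_ens - 1) + R
    P_xy = A @ HA.T / (n_ens - 1)
    K = P_xy @ np.linalg.inv(P_yy)               # Kalman gain from the ensemble
    Y = y[:, None] + obs_std * rng.normal(size=(n_obs, n_ens))  # perturbed obs
    return X + K @ (Y - HX)

rng = np.random.default_rng(0)
X = rng.normal(1.0, 0.5, size=(50, 30))          # 50 state variables, 30 members
H = np.zeros((3, 50)); H[[0, 1, 2], [5, 20, 40]] = 1.0   # observe 3 "gauges"
y = np.array([1.2, 0.8, 1.1])
Xa = enkf_update(X, y, H, obs_std=0.1, rng=rng)
print(Xa.mean(axis=1)[[5, 20, 40]])              # analysis mean at observed points
```

Augmenting the state vector with the uncertain earthquake parameters turns the same update into a joint state-parameter estimate, which is the state-parameter EnKF idea mentioned in the abstract.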
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Ling-Shiang, and 楊凌翔. "Conditional probability prediction model for landslides induced by Chi-Chi earthquake." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/20195995026775436983.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Civil Engineering
93
Taiwan is located in the circum-Pacific seismic zone, where frequent earthquake activity can induce hazardous landslides. An effective landslide prediction map can therefore provide an important reference for policymaking on land use regulation and for drafting mitigation measures for potentially disastrous areas. A geographic information system database of the research area was constructed by collecting geology and geomorphology data together with the landslide scars triggered by the Chi-Chi earthquake. The Conditional Probability method was then used to construct a landslide potential model and a prediction model. Based on the results of the landslide potential analysis, the best factor combination for the landslide prediction analysis was determined. The results of the landslide potential and prediction analyses were verified against the landslide scars of the research area, so that the success rate of the analysis could be quantified. The results of the landslide prediction analysis indicate that using the aspect, slope and geology factors can properly build a discriminating landslide prediction model. The landslide scars in the landslide prediction map coincide well with the areas of high landslide probability, and the comparisons also confirm the suitability of the verification method used in this research.
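The core of the conditional probability method on gridded factor maps is to estimate, for each class of a causative factor, the fraction of its cells that contain landslide scars. The sketch below uses synthetic rasters and combines the two factor maps by a simple average of class probabilities; the factor names, data and combination rule are illustrative assumptions, not necessarily those of the thesis.

```python
# Illustrative sketch of the conditional probability idea on gridded data.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10_000
slope_class = rng.integers(0, 5, n_cells)        # e.g. 5 slope-angle classes
aspect_class = rng.integers(0, 8, n_cells)       # e.g. 8 aspect classes
landslide = rng.random(n_cells) < 0.05 * (slope_class + 1)  # synthetic scars

def class_probability(factor_class, landslide):
    """P(landslide | class) for every class of one factor map."""
    return {c: landslide[factor_class == c].mean()
            for c in np.unique(factor_class)}

p_slope = class_probability(slope_class, landslide)
p_aspect = class_probability(aspect_class, landslide)

# Landslide potential of each cell: here, the mean of its class probabilities.
potential = 0.5 * (np.vectorize(p_slope.get)(slope_class)
                   + np.vectorize(p_aspect.get)(aspect_class))
print(potential[:5])
```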
APA, Harvard, Vancouver, ISO, and other styles
47

Goldfinger, Chris. "Active deformation of the Cascadia forearc : implications for great earthquake potential in Oregon and Washington." Thesis, 1994. http://hdl.handle.net/1957/36664.

Full text
Abstract:
Nine west-northwest-trending faults on the continental margin of Oregon and Washington, between 43° 05'N and 47° 20'N latitude, have been mapped using seismic reflection, sidescan sonar, submersibles, and swath bathymetry. Five of these oblique faults are found on both the Juan de Fuca and North American plates, and offset abyssal plain sedimentary units left-laterally from 2.0 to 5.5 km. These five faults extend 8-18 km northwestward from the deformation front. The remaining four faults, found only on the North American plate, are also inferred to have a left-lateral slip sense. The Wecoma fault on the abyssal plain is 600 ± 50 ka old and has an average slip rate of 7-10 mm/year. Slip rates of the other four abyssal plain faults are 5.5 ± 2 to 6.7 ± 3 mm/yr. These faults are active, as indicated by offset of the youngest sedimentary units, surficial fault scarps, offsets of surficial channels, and deep fluid venting. All nine faults have been surveyed on the continental slope using SeaMARC 1A sidescan sonar, and three of them were surveyed with a high-resolution AMS 150 sidescan sonar on the continental shelf off central Oregon. On the continental slope, the faults are expressed as linear, high-angle WNW-trending scarps and WNW-trending fault-parallel folds that we interpret as flower structures. Active structures on the shelf include folds trending from NNE to WNW and associated flexural slip thrust faulting; NNW to N trending right-lateral strike-slip faults; and WNW trending left-lateral strike-slip faults. Some of these structures intersect the coast and can be correlated with onshore Quaternary faults and folds, and others are suspected to be deforming the coastal region. These structures may be contributing to the coastal marsh stratigraphic record of co-seismic subsidence events in the Holocene. We postulate that the set of nine WNW-trending left-lateral strike-slip faults extends and rotates the forearc clockwise, absorbing most or all of the arc-parallel component of plate convergence. The high rate of forearc deformation implies that the Cascadia forearc may lack the rigidity to generate M > 8.2 earthquakes. From a comparison of Cascadia seismogenic zone geometry to data from circum-Pacific great earthquakes of this century, the maximum Cascadia rupture is estimated to be 500 to 600 km in length, with a 150-400 km rupture length in best agreement with historical data.
Graduation date: 1994
APA, Harvard, Vancouver, ISO, and other styles
48

Momeni, Mehdi. "Feed-Forward Neural Network (FFNN) Based Optimization Of Air Handling Units: A State-Of-The-Art Data-Driven Demand-Controlled Ventilation Strategy." Thesis, 2020. http://hdl.handle.net/1805/23569.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
Heating, ventilation and air conditioning systems (HVAC) are the single largest consumer of energy in commercial and residential sectors. Minimizing its energy consumption without compromising indoor air quality (IAQ) and thermal comfort would result in environmental and financial benefits. Currently, most buildings still utilize constant air volume (CAV) systems with on/off control to meet the thermal loads. Such systems, without any consideration of occupancy, may ventilate a zone excessively and result in energy waste. Previous studies showed that CO2-based demand-controlled ventilation (DCV) methods are the most widely used strategies to determine the optimal level of supply air volume. However, conventional CO2 mass balanced models do not yield an optimal estimation accuracy. In this study, feed-forward neural network algorithm (FFNN) was proposed to estimate the zone occupancy using CO2 concentrations, observed occupancy data and the zone schedule. The occupancy prediction result was then utilized to optimize supply fan operation of the air handling unit (AHU) associated with the zone. IAQ and thermal comfort standards were also taken into consideration as the active constraints of this optimization. As for the validation, the experiment was carried out in an auditorium located on a university campus. The results revealed that utilizing neural network occupancy estimation model can reduce the daily ventilation energy by 74.2% when compared to the current on/off control.
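A minimal sketch of such a feed-forward occupancy estimator is given below, trained on synthetic CO2 data with scikit-learn rather than on the study's measurements; the feature set, network size and occupancy relation are illustrative assumptions.

```python
# Illustrative sketch (synthetic data, not the study's model): a small
# feed-forward network estimating zone occupancy from CO2 concentration,
# its recent trend, and a schedule flag.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 2000
co2 = rng.uniform(400, 1400, n)                  # ppm
co2_trend = rng.normal(0, 30, n)                 # ppm change over the last interval
scheduled = rng.integers(0, 2, n)                # 1 if the zone is scheduled in use
occupancy = np.clip(scheduled * (co2 - 420) / 12 + co2_trend / 15
                    + rng.normal(0, 3, n), 0, None)

X = np.column_stack([co2, co2_trend, scheduled])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                                   random_state=0))
model.fit(X[:1500], occupancy[:1500])
est = model.predict(X[1500:])
# The estimated occupancy would then set the supply-air flow of the AHU,
# subject to IAQ and thermal-comfort constraints.
print(est[:5].round(1))
```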
APA, Harvard, Vancouver, ISO, and other styles
49

(9187742), SAYEDMOHAMMADMA VAEZ MOMENI. "FEED-FORWARD NEURAL NETWORK (FFNN) BASED OPTIMIZATION OF AIR HANDLING UNITS: A STATE-OF-THE-ART DATA-DRIVEN DEMAND-CONTROLLED VENTILATION STRATEGY." Thesis, 2020.

Find full text
Abstract:
Heating, ventilation and air conditioning systems (HVAC) are the single largest consumer of energy in commercial and residential sectors. Minimizing its energy consumption without compromising indoor air quality (IAQ) and thermal comfort would result in environmental and financial benefits. Currently, most buildings still utilize constant air volume (CAV) systems with on/off control to meet the thermal loads. Such systems, without any consideration of occupancy, may ventilate a zone excessively and result in energy waste. Previous studies showed that CO2-based demand-controlled ventilation (DCV) methods are the most widely used strategies to determine the optimal level of supply air volume. However, conventional CO2 mass balanced models do not yield an optimal estimation accuracy. In this study, feed-forward neural network algorithm (FFNN) was proposed to estimate the zone occupancy using CO2 concentrations, observed occupancy data and the zone schedule. The occupancy prediction result was then utilized to optimize supply fan operation of the air handling unit (AHU) associated with the zone. IAQ and thermal comfort standards were also taken into consideration as the active constraints of this optimization. As for the validation, the experiment was carried out in an auditorium located on a university campus. The results revealed that utilizing neural network occupancy estimation model can reduce the daily ventilation energy by 74.2% when compared to the current on/off control.
APA, Harvard, Vancouver, ISO, and other styles
50

Vipin, K. S. "Assessment Of Seismic Hazard With Local Site Effects : Deterministic And Probabilistic Approaches." Thesis, 2009. http://etd.iisc.ernet.in/handle/2005/1973.

Full text
Abstract:
Many researchers have pointed out that the accumulation of strain energy in the Peninsular Indian Shield region may lead to earthquakes of significant magnitude (Srinivasan and Sreenivas, 1977; Valdiya, 1998; Purnachandra Rao, 1999; Seeber et al., 1999; Ramalingeswara Rao, 2000; Gangrade and Arora, 2000). However, very few studies have been carried out to quantify the seismic hazard of the entire Peninsular Indian region. In the present study the seismic hazard evaluation of the South Indian region (8.0° N - 20° N; 72° E - 88° E) was done using deterministic and probabilistic seismic hazard approaches. The effects of two of the important geotechnical aspects of seismic hazard, site response and liquefaction, have also been evaluated and the results are presented in this work. The peak ground acceleration (PGA) at ground surface level was evaluated by considering the local site effects. The liquefaction potential index (LPI) and the factor of safety against liquefaction were evaluated based on a performance-based liquefaction potential evaluation method. The first step in the seismic hazard analysis is to compile the earthquake catalogue. Since a comprehensive catalogue was not available for the region, it was compiled by collecting data from different national agencies (Gauribidanur Array, Indian Meteorological Department (IMD), National Geophysical Research Institute (NGRI) Hyderabad, Indira Gandhi Centre for Atomic Research (IGCAR) Kalpakkam, etc.) and international agencies (Incorporated Research Institutions for Seismology (IRIS), International Seismological Centre (ISC), United States Geological Survey (USGS), etc.). The collected data were in different magnitude scales and hence were converted to a single magnitude scale. The magnitude scale chosen in this study is the moment magnitude scale, since it is the most widely used and the most advanced scientific magnitude scale. The earthquake catalogue was declustered to remove related events, and the completeness of the catalogue was analysed using the method suggested by Stepp (1972). Based on the complete part of the catalogue, the seismicity parameters were evaluated for the study area. Another important step in the seismic hazard analysis is the identification of vulnerable seismic sources. The different types of seismic sources considered are (i) linear sources, (ii) point sources and (iii) areal sources. The linear seismic sources were identified based on the seismotectonic atlas published by the Geological Survey of India (SEISAT, 2000). The required pages of SEISAT (2000) were scanned and georeferenced. The declustered earthquake data were superimposed on this, and the sources associated with earthquake magnitudes of 4 and above were selected for further analysis. The point sources were selected using a method similar to the one adopted by Costa et al. (1993) and Panza et al. (1999), and the areal sources were identified based on the method proposed by Frankel et al. (1995). In order to map the attenuation properties of the region more precisely, three attenuation relations, viz. Toto et al. (1997), Atkinson and Boore (2006) and Raghu Kanth and Iyengar (2007), were used in this study. The two types of uncertainties encountered in seismic hazard analysis are aleatory and epistemic. The uncertainty of the data is the cause of aleatory variability, and it accounts for the randomness associated with the results given by a particular model.
The incomplete knowledge in the predictive models causes the epistemic uncertainty (modeling uncertainty). The aleatory variability of the attenuation relations is taken into account in the probabilistic seismic hazard analysis by considering the standard deviation of the model error. The epistemic uncertainty is considered by using multiple models for the evaluation of seismic hazard and combining them using a logic tree. Two different methodologies were used in the evaluation of seismic hazard, based on deterministic and probabilistic analysis. For the evaluation of peak horizontal acceleration (PHA) and spectral acceleration (Sa) values, a new set of programs was developed in MATLAB and the entire analysis was done using these programs. In the deterministic seismic hazard analysis (DSHA) two types of seismic sources, viz. linear and point sources, were considered and three attenuation relations were used. The study area was divided into small grids of size 0.1° x 0.1° (about 12000 grid points), and the mean and 84th percentile PHA and Sa values were evaluated at the centre of each grid point. A logic tree approach, using two types of sources and three attenuation relations, was adopted for the evaluation of PHA and Sa values. A logic tree permits the use of alternative models in the hazard evaluation, and appropriate weightages can be assigned to each model. By evaluating the 84th percentile values, the uncertainty in spectral acceleration values can also be considered (Krinitzky, 2002). The spatial variations of PHA and Sa values for the whole of South India are presented in this work. The DSHA method does not consider the uncertainties involved in the earthquake recurrence process, hypocentral distance and the attenuation properties. Hence the seismic hazard analysis was also done based on probabilistic seismic hazard analysis (PSHA), and the evaluation of PHA and Sa values was done by considering the uncertainties involved in the earthquake occurrence process. The uncertainties in earthquake recurrence rate, hypocentral location and attenuation characteristics were considered in this study. For evaluating the seismicity parameters and the maximum expected earthquake magnitude (mmax), the study area was divided into different source zones. The division of the study area was based on the spatial variation of the seismicity parameters ‘a’ and ‘b’; the mmax values were evaluated for each of these zones and used in the analysis. A logic tree approach was adopted in the analysis, which permits the use of multiple models. Twelve different models (2 sources x 2 zones x 3 attenuation relations) were used in the analysis and, based on the weightage assigned to each of them, the final PHA and Sa values at bedrock level were evaluated. These values were evaluated for a grid size of 0.1° x 0.1°, and the spatial variation of these values for return periods of 475 and 2500 years (10% and 2% probability of exceedance in 50 years) is presented in this work. Both the deterministic and probabilistic analyses highlighted that the seismic hazard is high in the Koyna region. The PHA values obtained for the Koyna, Bangalore and Ongole regions are higher than the values given by BIS-1893 (2002). The values obtained for the south-western part of the study area, especially parts of Kerala, show PHA values lower than those provided in BIS-1893 (2002). The 84th percentile values given by DSHA can be taken as the upper bound PHA and Sa values for South India.
The main geotechnical aspects of earthquake hazard are site response and seismic soil liquefaction. When the seismic waves travel from the bedrock through the overlying soil to the ground surface, the PHA and Sa values change. This amplification or de-amplification of the seismic waves depends on the type of the overlying soil. The assessment of site class can be done based on different site classification schemes. In the present work, the surface-level peak ground acceleration (PGA) values were evaluated for the four site classes suggested by NEHRP (BSSC, 2003), and the PGA values were developed for all four site classes based on a non-linear site amplification technique. Based on the geotechnical site investigation data, the site class can be determined and the appropriate PGA and Sa values can then be taken from the respective PGA maps. Response spectra were developed for the entire study area, and the results obtained for three major cities are discussed here. Different methods are suggested by various codes to smooth the response spectra. The smoothed design response spectra were developed for these cities based on the smoothing techniques given by NEHRP (BSSC, 2003), the IS code (BIS-1893, 2002) and Eurocode-8 (2003), and a comparison of the results obtained from these is also presented in this work. If the site class at any location in the study area is known, then the peak ground acceleration (PGA) values can be obtained from the respective map. This provides a simplified methodology for evaluating the PGA values for a vast area like South India. Since the surface-level PGA values were evaluated for different site classes, the effects of surface topography and basin effects were not taken into account. The analysis of response spectra clearly indicates the variation of peak spectral acceleration values for different site classes and the variation of the period of oscillation corresponding to the maximum Sa values. The comparison of the smoothed design response spectra obtained using different codal provisions suggests the use of the NEHRP (BSSC, 2003) provisions. The conventional liquefaction analysis method takes into account only one earthquake magnitude and ground acceleration value. In order to overcome this shortfall, a performance-based probabilistic approach (Kramer and Mayfield, 2007) was adopted for the liquefaction potential evaluation in the present work. Based on this method, the factor of safety against liquefaction and the SPT values required to prevent liquefaction for return periods of 475 and 2500 years were evaluated for Bangalore city. This analysis was done based on the SPT data obtained from 450 boreholes across Bangalore. A new method to evaluate the liquefaction return period based on CPT values is proposed in this work. To validate the new method, an analysis was done for Bangalore by converting the SPT values to CPT values, and the results were compared with those obtained using the SPT values. The factors of safety against liquefaction at different depths were integrated using the liquefaction potential index (LPI) method for Bangalore: the factor of safety values at different depths were calculated based on the performance-based method and the LPI values were then evaluated. The entire liquefaction potential analysis and the evaluation of LPI values were done using a set of newly developed programs in MATLAB.
Based on the above approaches, it is possible to evaluate the SPT and CPT values required to prevent liquefaction for any given return period. An analysis was done to evaluate the SPT and CPT values required to prevent liquefaction for the whole of South India for return periods of 475 and 2500 years, and the spatial variations of these values are presented in this work. The liquefaction potential analysis of Bangalore clearly indicates that the majority of the area is safe against liquefaction. The liquefaction potential map developed for South India, based on both SPT and CPT values, will help hazard mitigation authorities to identify liquefaction-vulnerable areas, which in turn will help in reducing the liquefaction hazard.
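The LPI integration mentioned above can be sketched with the common Iwasaki-type definition, a depth-weighted integral of the liquefaction severity over the top 20 m. The factor-of-safety profile below is invented, and the thesis' MATLAB implementation may differ in detail.

```python
# Minimal sketch of a liquefaction potential index (LPI) calculation:
# LPI = integral over 0-20 m of F(z) * (10 - 0.5 z) dz, where F = 1 - FS
# when FS < 1 and 0 otherwise. The FS profile is invented for illustration.
import numpy as np

def lpi(depths_m, fs):
    """Integrate the liquefaction severity over the top 20 m."""
    z = np.asarray(depths_m, dtype=float)
    fs = np.asarray(fs, dtype=float)
    F = np.where(fs < 1.0, 1.0 - fs, 0.0)
    w = np.clip(10.0 - 0.5 * z, 0.0, None)       # depth weighting, zero below 20 m
    return np.trapz(F * w, z)

depths = np.array([1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 12.0, 15.0, 18.0])
fs_475yr = np.array([0.8, 0.7, 0.9, 1.1, 1.3, 0.95, 1.2, 1.5, 1.8])
print(f"LPI ~ {lpi(depths, fs_475yr):.1f}")      # larger LPI means more severe potential
```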
APA, Harvard, Vancouver, ISO, and other styles