
Dissertations / Theses on the topic 'Event rate'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Event rate.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Harris, E. D. "Rate limiting in an event-driven BGP speaker." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603760.

Full text
Abstract:
The Border Gateway Protocol (BGP) is the Internet’s interdomain routing system. BGP is a global, distributed, peer-to-peer system consisting of tens of thousands of routers in the different networks which make up the Internet. Each BGP-speaking router, or speaker, is continually exchanging information with its neighbours about changes to the destinations to which it can send packets and the routes it uses to get there. However, BGP is also just another application, competing for processor time and resources with all the other applications running on the same router. The way a router’s BGP software is implemented can have an enormous impact not only on that router’s individual performance, but on the performance of the Internet as a whole. Rate limiting is an important optimisation which can improve BGP’s network-wide performance and reduce its demands on the router running it. Nevertheless, XORP, an interesting new software router aimed at both researchers and commercial users, does not support rate limiting. This is at least partly because XORP uses an innovative event-driven, pipelined route processing model which does not fit well with the traditional, timer-based way of implementing BGP with rate limiting. This dissertation argues that a serious, modern BGP speaker such as XORP must support rate limiting. It presents two different implementations of rate limiting in XORP’s route processing pipeline, representing different trade-offs between features and performance on the one hand and implementation impact on the other. The first implementation requires no changes to the pipeline architecture but sacrifices some of its run-time capabilities. The second implementation requires fundamental changes throughout the pipeline but retains much more of its power. In the best case, experiments with a single XORP router show that both rate-limiting implementations significantly reduce BGP processing requirements; in the worst case, they equal or outperform the standard XORP BGP speaker in all but one test. Further experiments, using several router instances in a virtualised testbed network, show that rate limiting also improves XORP’s network-wide behaviour.
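The rate limiting described above batches route updates behind a timer so that bursts of routing churn collapse into a single advertisement. The following is a minimal, hypothetical Python sketch of that general idea only; the class, names and interval are illustrative assumptions, and XORP's actual event-driven pipeline and the dissertation's two implementations work differently.

```python
import time
from collections import deque

# Minimal sketch of timer-based update rate limiting in the spirit of BGP's
# Minimum Route Advertisement Interval (MRAI): updates for the same prefix are
# batched, and only the most recent one is advertised when the interval expires.
# All names and values here are assumptions for illustration.

MRAI_SECONDS = 0.1  # illustrative; real per-peer values are typically tens of seconds

class RateLimitedSpeaker:
    def __init__(self, interval=MRAI_SECONDS):
        self.interval = interval
        self.pending = {}             # prefix -> latest queued route update
        self.last_sent = float("-inf")
        self.advertised = deque()

    def enqueue(self, prefix, route):
        # Later updates for the same prefix overwrite earlier, unsent ones.
        self.pending[prefix] = route

    def tick(self, now):
        # Flush the batch only when the rate-limiting interval has elapsed.
        if self.pending and now - self.last_sent >= self.interval:
            for prefix, route in self.pending.items():
                self.advertised.append((prefix, route))
            self.pending.clear()
            self.last_sent = now

speaker = RateLimitedSpeaker()
speaker.enqueue("10.0.0.0/8", "via peer A")
speaker.enqueue("10.0.0.0/8", "via peer B")   # supersedes the first, unsent update
speaker.tick(now=time.monotonic())
print(list(speaker.advertised))               # only the latest update is advertised
```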
APA, Harvard, Vancouver, ISO, and other styles
2

Mohney, Jack D. "Age and vigilance: The effects of event rate and task pacing." CSUSB ScholarWorks, 1986. https://scholarworks.lib.csusb.edu/etd-project/329.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, Junfeng, Qing-Shan Jia, Karl Henrik Johansson, and Ling Shi. "Event-Based Sensor Data Scheduling : Trade-Off Between Communication Rate and Estimation Quality." KTH, Reglerteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-111423.

Full text
Abstract:
We consider sensor data scheduling for remote state estimation. Due to constrained communication energy and bandwidth, a sensor needs to decide whether it should send the measurement to a remote estimator for further processing. We propose an event-based sensor data scheduler for linear systems and derive the corresponding minimum mean-squared error estimator. By selecting an appropriate event-triggering threshold, we illustrate how to achieve a desired balance between the sensor-to-estimator communication rate and the estimation quality. Simulation examples are provided to demonstrate the theory.
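To make the communication-rate/estimation-quality trade-off concrete, here is a minimal Python sketch of a send-on-delta style event trigger for a scalar linear system; the model, noise levels and threshold are illustrative assumptions, not the scheduler or estimator derived in the thesis.

```python
import numpy as np

# Minimal sketch of event-based sensor scheduling for a scalar linear system
#   x[k+1] = a*x[k] + w[k],   y[k] = x[k] + v[k].
# The sensor transmits y[k] only when the innovation exceeds a threshold delta;
# otherwise the remote estimator runs on its own prediction. The parameters
# (a, q, r, delta) are illustrative assumptions, not values from the thesis.

rng = np.random.default_rng(0)
a, q, r, delta, n = 0.95, 0.1, 0.2, 0.5, 200

x, x_hat = 0.0, 0.0
sent = 0
sq_errors = []
for k in range(n):
    x = a * x + rng.normal(scale=np.sqrt(q))   # true state evolution
    y = x + rng.normal(scale=np.sqrt(r))       # noisy measurement
    x_pred = a * x_hat                         # remote one-step prediction
    if abs(y - x_pred) > delta:                # event trigger fires
        x_hat = y                              # transmit: estimator adopts the measurement
        sent += 1
    else:
        x_hat = x_pred                         # no transmission: keep the prediction
    sq_errors.append((x - x_hat) ** 2)

print(f"communication rate: {sent / n:.2f}, empirical MSE: {np.mean(sq_errors):.3f}")
```

Sweeping the threshold up lowers the communication rate at the cost of a larger estimation error, which is the balance the abstract refers to.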


APA, Harvard, Vancouver, ISO, and other styles
4

Alcaina, Acosta José Joaquín. "Design and implementation of event-based multi-rate controllers for networked control systems." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/159884.

Full text
Abstract:
[EN] This thesis attempts to solve some of the most frequent issues that appear in Networked Control Systems (NCS), such as time-varying delays, packet losses and packet disorders, and bandwidth limitation. Other frequent problems are scarce computational and energy resources of the local system devices. Thus, it is proposed to integrate multirate control, packet-based control, predictor-based control and event-based control techniques. The control designs have been simulated using Matlab-Simulink and TrueTime, the stability has been analysed by LMIs and QFT, and the experimental validation has been done on an inverted pendulum, a 3D Cartesian robot and low-cost mobile robots. Paper 1 addresses event-based control, which minimizes the bandwidth consumed in NCS through a periodic event-triggered control, and presents a method to obtain the optimal parameters for the specific system used. Papers 2, 4 and 6 include packet-based control and multirate control, addressing problems such as network delays, packet dropouts and packet disorders, and the scarce data due to low sensor usage in order to save battery in sensing tasks and transmissions of the sensed data. Also addressed is how, despite the existence of measurement noise and disturbances, time-varying dual-rate Kalman-filter-based prediction techniques observe the complete state of the system. Paper 7 tackles a non-linear model that uses all the previous solutions together with an extended Kalman filter to present another type of structure for an autonomous vehicle that, due to future information obtained through these techniques, can remotely carry out high-level tasks such as decision making and monitoring of variables. Papers 3 and 5 present a method for obtaining and analyzing the SISO dual-rate frequency response and use QFT procedures to study its behavior when faced with specific uncertainties or network problems.
This work was supported by the Spanish Ministerio de Economía y Competitividad under Grant referenced TEC2012-31506.
Alcaina Acosta, JJ. (2020). Design and implementation of event-based multi-rate controllers for networked control systems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/159884
APA, Harvard, Vancouver, ISO, and other styles
5

Almeida, Antonio Felipe Costa de. "Investigating techniques to reduce soft error rate under single-event-induced charge sharing." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2014. http://hdl.handle.net/10183/169238.

Full text
Abstract:
The interaction of radiation with integrated circuits can provoke transient faults due to the deposit of charge in sensitive nodes of transistors. Because of the decreasing size of process technology, charge sharing between transistors placed close to each other has been observed more and more. This phenomenon can lead to multiple transient faults. Therefore, it is important to analyze the effect of multiple transient faults in integrated circuits and investigate mitigation techniques able to cope with multiple faults. This work investigates the effect known as single-event-induced charge sharing in integrated circuits. Two main techniques are analyzed to cope with this effect. First, a placement constraint methodology is proposed. This technique uses placement constraints in standard cell based circuits. The objective is to achieve a layout for which the Soft-Error Rate (SER) due to charge shared at adjacent cells is reduced. A set of fault injection experiments was performed and the results show that the SER due to single-event-induced charge sharing can be minimized according to the layout structure. Results show that by using placement constraints, it is possible to reduce the error rate from 12.85% to 10.63% for double faults. Second, Triple Modular Redundancy (TMR) schemes with different levels of granularity limited by majority voters are analyzed under multiple faults. The TMR versions are implemented using a standard design flow based on a traditional commercial standard cell library. An extensive fault injection campaign is then performed in order to verify the soft-error rate due to single-event-induced charge sharing in multiple nodes. Results show that the proposed methodology becomes crucial to find the best trade-off in area, performance and soft-error rate when TMR designs are considered under multiple upsets. Results have been evaluated in a case-study circuit, the Advanced Encryption Standard (AES), synthesized to a 90 nm Application-Specific Integrated Circuit (ASIC) library, and they show that, by combining the two techniques, the error rate resulting from multiple faults can be minimized or masked. By using TMR with different granularities and the placement constraint methodology, it is possible to reduce the error rate from 11.06% to 0.00% for double faults. A detailed study of triple, quadruple and quintuple faults combining both techniques is also described. We also tested the TMR with different granularities on an SRAM-based FPGA platform. Results show that the versions with a fine-grain scheme (FGTMR) were more effective in masking multiple faults, similar to the results observed in the ASICs. In summary, the main contribution of this master's thesis is the investigation of charge sharing effects in ASICs and the use of a combination of techniques based on TMR redundancy and placement to improve the tolerance under multiple faults.
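As a companion to the discussion of TMR granularity under charge sharing, the short Python sketch below shows a majority voter and two hypothetical double-fault scenarios; it illustrates the masking principle only and is not the thesis's ASIC or FPGA implementation or its fault-injection campaign.

```python
# Minimal sketch of triple modular redundancy (TMR) with a bitwise majority
# voter under injected bit-flip faults. It shows why a double fault that charge
# sharing spreads across two redundant copies can defeat the voter, while two
# faults confined to one copy are still masked. All values are illustrative.

def voter(a: int, b: int, c: int) -> int:
    """Bitwise majority vote of three redundant copies."""
    return (a & b) | (a & c) | (b & c)

def inject(value: int, bit: int) -> int:
    """Flip one bit to model a single-event transient."""
    return value ^ (1 << bit)

golden = 0b1011
copies = [golden, golden, golden]

# Double fault hitting the same bit in two different copies (e.g. adjacent cells):
copies[0] = inject(copies[0], 2)
copies[1] = inject(copies[1], 2)
print(f"voted={voter(*copies):04b} golden={golden:04b}")  # masking fails

# Double fault confined to a single copy is still masked:
copies = [inject(inject(golden, 0), 2), golden, golden]
print(f"voted={voter(*copies):04b} golden={golden:04b}")  # masking succeeds
```

This is the intuition behind combining TMR with placement constraints: keeping redundant copies physically apart makes the first scenario less likely.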
APA, Harvard, Vancouver, ISO, and other styles
6

Yu, Xiaomin. "Simulation Study of Sequential Probability Ratio Test (SPRT) in Monitoring an Event Rate." University of Cincinnati / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1244562576.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Siraj, Tazeen. "Event Rate as a Moderator Variable for Vigilance: Implications for Performance-Feedback and Stress." University of Cincinnati / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1191856419.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Fan Agrawal Vishwani D. "Soft error rate determination for nanometer CMOS VLSI circuits." Auburn, Ala, 2008. http://hdl.handle.net/10415/1517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Unal, Haluk. "A switching regression approach to event studies : the case of deposit-rate ceiling changes." The Ohio State University, 1985. http://rave.ohiolink.edu/etdc/view?acc_num=osu1261421717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Curtindale, Lori Marie. "DIFFERENTIAL EFFECTS OF EVENT RATE AND TEMPORAL EXPECTANCY ON SUSTAINED ATTENTION PERFORMANCE OF ADULTS AND CHILDREN." Bowling Green State University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1174691694.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Wilson, James Adams. "A New Volcanic Event Recurrence Rate Model and Code For Estimating Uncertainty in Recurrence Rate and Volume Flux Through Time With Selected Examples." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6435.

Full text
Abstract:
Recurrence rate is often used to describe volcanic activity. There are numerous documented examples of non-constant recurrence rate (e.g. Dohrenwend et al., 1984; Condit and Connor, 1996; Cronin et al., 2001; Bebbington and Cronin, 2011; Bevilacqua, 2015), but current techniques for calculating recurrence rate are unable to fully account for temporal changes in recurrence rate. A local-window recurrence rate model, which allows for non-constant recurrence rate, is used to calculate recurrence rate from an age model consisting of estimated ages of volcanic eruption from a Monte Carlo simulation. The Monte Carlo age assignment algorithm utilizes paleomagnetic and stratigraphic information to mask invalid ages from the radiometric date, represented as a Gaussian probability density function. To verify the age assignment algorithm, data from Heizler et al. (1999) for Lathrop Wells is modeled and compared. Synthetic data were compared with expected results and published data were used for cross comparison and verification of recurrence rate and volume flux calculations. The latest recurrence rate fully constrained by the data is reported, based upon data provided in the referenced paper: Cima Volcanic Field, 33 +55/-14 Events per Ma (Dohrenwend et al., 1984), Cerro Negro Volcano, 0.29 Events per Year (Hill et al., 1998), Southern Nevada Volcanic Field, 4.45 +1.84/-0.87 (Connor and Hill, 1995) and Arsia Mons, Mars, 0.09 +0.14/-0.06 Events per Ma (Richardson et al., 2015). The local-window approach is useful for 1) identifying trends in recurrence rate and 2) providing the user the ability to choose the best median recurrence rate and 90% confidence interval with respect to temporal clustering.
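To illustrate the combination of a Monte Carlo age model with a local-window recurrence rate, here is a small, hypothetical Python sketch; the radiometric dates, ordering constraint and window length are invented for illustration and are not the thesis's data or code.

```python
import numpy as np

# Minimal sketch of the Monte Carlo age-model idea: each eruption age is drawn
# from a Gaussian radiometric date, draws that violate the known stratigraphic
# order (older units must stay older) are rejected, and a local-window
# recurrence rate is computed from the resulting event ages.

rng = np.random.default_rng(2)
dates = [(100.0, 5.0), (80.0, 8.0), (60.0, 4.0), (55.0, 6.0)]  # (mean ka, sigma), oldest first

def sample_ages():
    """Draw one set of eruption ages consistent with stratigraphic ordering."""
    while True:
        ages = np.array([rng.normal(m, s) for m, s in dates])
        if np.all(np.diff(ages) < 0):      # strictly decreasing, i.e. oldest first
            return ages

def local_rate(ages, t, half_window=20.0):
    """Events per unit time inside a window centred on time t."""
    n = np.sum(np.abs(ages - t) <= half_window)
    return n / (2 * half_window)

sims = np.array([local_rate(sample_ages(), t=60.0) for _ in range(2000)])
lo, med, hi = np.percentile(sims, [5, 50, 95])
print(f"recurrence rate at 60 ka: {med:.3f} events/ka (90% interval {lo:.3f}-{hi:.3f})")
```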
APA, Harvard, Vancouver, ISO, and other styles
12

Gulliksson, Mats. "Studies of Secondary Prevention after Coronary Heart Disease with Special Reference to Determinants of Recurrent Event Rate." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis : Univ.-bibl. [distributör], 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-107347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Graur, Or, K. Decker French, H. Jabran Zahid, James Guillochon, Kaisey S. Mandel, Katie Auchettl, and Ann I. Zabludoff. "A Dependence of the Tidal Disruption Event Rate on Global Stellar Surface Mass Density and Stellar Velocity Dispersion." IOP PUBLISHING LTD, 2018. http://hdl.handle.net/10150/626532.

Full text
Abstract:
The rate of tidal disruption events (TDEs), R_TDE, is predicted to depend on stellar conditions near the super-massive black hole (SMBH), which are on difficult-to-measure sub-parsec scales. We test whether R_TDE depends on kpc-scale global galaxy properties, which are observable. We concentrate on stellar surface mass density, Σ_M*, and velocity dispersion, σ_v, which correlate with the stellar density and velocity dispersion of the stars around the SMBH. We consider 35 TDE candidates, with and without known X-ray emission. The hosts range from star-forming to quiescent to quiescent with strong Balmer absorption lines. The last (often with post-starburst spectra) are overrepresented in our sample by a factor of 35^{+21}_{-17} or 18^{+8}_{-7}, depending on the strength of the Hδ absorption line. For a subsample of hosts with homogeneous measurements, Σ_M* = 10^9-10^10 M_⊙/kpc^2, higher on average than for a volume-weighted control sample of Sloan Digital Sky Survey galaxies with similar redshifts and stellar masses. This is because (1) most of the TDE hosts here are quiescent galaxies, which tend to have higher Σ_M* than the star-forming galaxies that dominate the control, and (2) the star-forming hosts have higher average Σ_M* than the star-forming control. There is also a weak suggestion that TDE hosts have lower σ_v than for the quiescent control. Assuming that R_TDE ∝ Σ_M*^α × σ_v^β, and applying a statistical model to the TDE hosts and control sample, we estimate α̂ = 0.9 ± 0.2 and β̂ = -1.0 ± 0.6. This is broadly consistent with R_TDE being tied to the dynamical relaxation of stars surrounding the SMBH.
APA, Harvard, Vancouver, ISO, and other styles
14

Nilsson-Trygg, Kristina, and Anna Torstensson. "Sjuksköterskans inställning till att mäta och bedöma andningsfrekvens." Thesis, Uppsala universitet, Institutionen för folkhälso- och vårdvetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-254998.

Full text
Abstract:
ABSTRACT The nurse applies the nursing process by observing, evaluating, prioritising, documenting and, when necessary, managing changes in the condition of the patient, and by preventing complications associated with disease, care and treatment. Respiratory rate (RR) is the vital sign that changes first and signals changes in a patient's condition. In most cardiac arrests there are signs of deterioration of the patient a few hours up to a day before the event. The aim of this study was to investigate the nurse's attitude and adherence to measuring and assessing RR in acutely ill patients, for early detection of deterioration in the patient's state of health. Through a literature study, four themes emerged: the importance of guidelines, the nurses' attitude and why RR was not measured, the value of change management, and possible ways of working to avoid care injuries. Guidelines for the measurement of RR, different scoring systems and observation charts for the assessment of vital signs all affected the measuring and scoring of RR. The individual nurse's attitude affected the measurement and assessment of RR. Several reasons why RR was not measured were found. The studies showed that the process of change and implementation of new ways of working is complex, and efforts were needed in several areas and at different levels. Care injuries and sudden unexpected deaths decreased when new routines and working procedures were combined with training, monitoring and feedback to the staff. Research shows that RR is an important vital sign. If this knowledge is not used to find patients about to deteriorate, these patients risk suffering permanent health effects. There is a need for significant training in this area, and recent research has shown that a correct implementation of monitoring procedures and changed ways of working can provide a good outcome in a decreased number of medical injuries and unexpected deaths.
APA, Harvard, Vancouver, ISO, and other styles
15

Fairbairn, Roslyn Deidre. "An event study to investigate the impact of BEE announcements on share price." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/5859.

Full text
Abstract:
Thesis (MBA (Business Management))--University of Stellenbosch, 2009.
ENGLISH ABSTRACT: This event study examines the effect that Black Economic Empowerment (BEE) announcements have on a company's share price. The average mean return model is applied to study a sample of companies from the Financial Mail Top 200 Empowerment Companies list, 2007. The mean price change observed in a 7-day window around the event announcement is found to be significant relative to the calculated critical value. Results of the test statistic calculated relative to the probability show that, at a p-value of 0.00113, the result is significant and the null hypothesis is rejected at a 95% confidence level. The result of this study supports the fact that markets react positively to the announcements of BEE events.
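For readers unfamiliar with the mechanics, the following minimal Python sketch shows how abnormal returns and a cumulative abnormal return over a 7-day event window are typically computed under a mean-return model; the returns, window lengths and test are synthetic illustrations, not the study's data or exact procedure.

```python
import numpy as np
from scipy import stats

# Minimal sketch of a mean-return event study, assuming daily returns and a
# 7-day event window around the announcement (day 0). All numbers below are
# synthetic placeholders, not the study's data.

rng = np.random.default_rng(1)
returns = rng.normal(0.0005, 0.01, size=250)      # one company's daily returns
event_day = 200                                    # index of the announcement

est = returns[event_day - 120:event_day - 10]      # estimation window
mu, sigma = est.mean(), est.std(ddof=1)            # mean-return model parameters

window = returns[event_day - 3:event_day + 4]      # 7-day event window
abnormal = window - mu                             # abnormal returns
car = abnormal.sum()                               # cumulative abnormal return

t_stat = car / (sigma * np.sqrt(len(window)))      # simple CAR test statistic
p_value = 2 * (1 - stats.norm.cdf(abs(t_stat)))
print(f"CAR = {car:.4f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```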
APA, Harvard, Vancouver, ISO, and other styles
16

Joubert, Ilse. "The effects of an ultra-endurance event on heart rate variability and cognitive performance during induced stress in Ironman triathletes." Master's thesis, University of Cape Town, 2009. http://hdl.handle.net/11427/2754.

Full text
Abstract:
Includes abstract.
Includes bibliographical references (leaves 55-79).
The effects of long-term participation in ultra-endurance exercise on the cardiovascular system have recently been the subject of much interest. It is well known that HRV, a marker of autonomic activity, is enhanced with long-term aerobic exercise training. However, after acute exercise, HRV is reduced, but recovers over time depending on the intensity of the prior bout of exercise. A limitation of previous research is that exercise bouts of only up to 120 minutes have been studied. A modified Stroop Task is a laboratory stressor to assess executive cognitive function by means of reaction time and accuracy. The resting HRV is directly related to these prefrontal neural functions, but the effect of an altered HRV on cognitive function has never been investigated. We determined the effects of an ultra-duration (10–15 hours) exercise event on parameters of HRV and cognitive function during a Modified Stroop Task, 60–200 minutes after the 2007 South African Ironman Triathlon event (3.6 km swim; 180 km cycle; 42.2 km run). One female and 13 male competing triathletes (IRON; ages 33.7±7.9) and 7 control subjects (CON; 2 females and 5 males, aged 42±4.5) completed a Modified Stroop Task before and after the event. The individual HRV parameters, heart rate (HR), respiratory frequency (RF), reaction time (RT) and % of mistakes made were recorded via the Biopac MP150WSW System (Goleta, California, USA). Data were transformed by autoregressive analyses (Biomedical Signal Analysis Group, University of Kuopio, Finland) into LF (0.04–0.15 Hz) and HF (0.15–0.5 Hz) components. Additional calculations included %LF and %HF as well as the central or peak frequencies in both the LF and HF bands.
APA, Harvard, Vancouver, ISO, and other styles
17

Erich, Roger Alan. "Regression Modeling of Time to Event Data Using the Ornstein-Uhlenbeck Process." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1342796812.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Akin, Serdar. "Do Riksbanken produce unbiased forecast of the inflation rate? : and can it be improved?" Thesis, Stockholms universitet, Nationalekonomiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-58708.

Full text
Abstract:
The focus of this paper is to evaluate whether the forecasts produced by the Central Bank of Sweden (Riksbanken) for the 12-month change in the consumer price index are unbiased. Results show that for shorter horizons (h < 12) the mean forecast error is unbiased, but for longer horizons it is negatively biased when inference is done by the maximum entropy bootstrap technique. Can the unbiasedness be improved by strict application of econometric methodology? Forecasting with a linear univariate model (seasonal ARIMA) and a multivariate Vector Error Correction Model (VECM) shows that, when controlling for the presence of structural breaks, the VECM outperforms the predictions produced by both Riksbanken and ARIMA. However, Riksbanken had the best precision in their forecasts, estimated as MSFE.
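As a rough illustration of the two evaluation steps mentioned above (a bias test on the mean forecast error and a precision comparison by MSFE), here is a hypothetical Python sketch with synthetic data; note it uses a plain iid bootstrap rather than the maximum entropy bootstrap used in the thesis.

```python
import numpy as np

# Minimal sketch: test whether the mean forecast error is zero (unbiasedness)
# and compute the mean squared forecast error (MSFE) as a precision measure.
# The "actual" and "forecast" series are synthetic placeholders.

rng = np.random.default_rng(3)
actual = rng.normal(2.0, 0.5, size=60)               # realised 12-month CPI change
forecast = actual + rng.normal(-0.2, 0.4, size=60)   # a deliberately biased forecast

errors = actual - forecast
mean_error = errors.mean()
msfe = np.mean(errors ** 2)

# Simple iid bootstrap of the mean error (the thesis uses a maximum entropy bootstrap).
boot = np.array([rng.choice(errors, size=errors.size, replace=True).mean()
                 for _ in range(5000)])
ci = np.percentile(boot, [2.5, 97.5])
print(f"mean error = {mean_error:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}], MSFE = {msfe:.3f}")
```

If the confidence interval for the mean error excludes zero, unbiasedness is rejected; competing forecasts can then be ranked by MSFE.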
APA, Harvard, Vancouver, ISO, and other styles
19

Spriggs, M. P. "Quantification of acoustic emission from soils for predicting landslide failure." Thesis, Loughborough University, 2005. https://dspace.lboro.ac.uk/2134/10903.

Full text
Abstract:
Acoustic emission (AE) is a natural phenomenon that occurs when a solid is subjected to stress. These emissions are produced by all materials during pre-failure. In soil, AE results from the release of energy as particles undergo small strains. If these emissions can be detected, then it becomes possible to develop an early warning system to predict slope failure. International research has shown that AE can be used to detect ground deformations earlier than traditional techniques, and thus it has a role to play in reducing risk to humans and property and in mitigating such risks. This thesis researches the design of a system to quantify the AE and calculate the distance to the deformation zone, and hence information on the mechanism of movement. The quantification of AE is derived from measuring the AE event rate, the output of which takes the form of a displacement rate. This is accurate to an order of magnitude, in line with current standards for classifying slope movements. The system also demonstrates great sensitivity to changes within the displacement rate by an order of magnitude, making the technique suitable for remediation monitoring. Knowledge of the position of the shear surface is critical to the planning of cost-effective stabilisation measures. This thesis details the development of a single-sensor source location technique used to obtain the depth of a developing or existing shear surface within a slope. The active waveguide is used to reduce attenuation by taking advantage of the relatively low attenuation of metals such as steel. A method of source location based on the analysis of Lamb wave mode arrival times at a single sensor is summarised. An automatic approach to source location is demonstrated to locate a regular AE source to within one metre. Overall, consideration is also given to field trials and towards the production of monitoring protocols for data analysis, and the implementation of necessary emergency/remediation plans.
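The single-sensor source location described above rests on the different propagation speeds of guided-wave modes in the waveguide. The tiny Python sketch below shows the underlying arrival-time arithmetic with invented velocities and delay; it is not the thesis's calibrated values or its automatic location procedure.

```python
# Minimal sketch of single-sensor source location from the arrival-time
# difference between a faster and a slower guided-wave (Lamb) mode travelling
# along a steel waveguide. Velocities and the measured delay are illustrative
# assumptions only.

v_fast = 5200.0   # m/s, e.g. an extensional (S0-like) mode
v_slow = 3100.0   # m/s, e.g. a flexural (A0-like) mode
delta_t = 0.4e-3  # s, measured delay between the two mode arrivals

# Both modes travel the same distance d, so delta_t = d/v_slow - d/v_fast.
distance = delta_t / (1.0 / v_slow - 1.0 / v_fast)
print(f"estimated distance to AE source: {distance:.2f} m")
```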
APA, Harvard, Vancouver, ISO, and other styles
20

Montoya, Rodríguez María Isabel. "Contribution to the assessment of shelter-in-place effectiveness as a community protection measure in the event of a toxic gas release." Doctoral thesis, Universitat Politècnica de Catalunya, 2010. http://hdl.handle.net/10803/6485.

Full text
Abstract:
During the last decades the number of accidents in chemical industries and during transportation of hazardous substances has significantly increased, with most of them occurring in highly populated areas. One of the possible accidents is a toxic gas cloud, which, although less common than other major hazards, could affect larger areas, reaching populated zones and producing more severe consequences. This implies, then, a great challenge to emergency managers, who must plan and decide the areas where protection measures should be implemented: shelter in place and/or evacuation. The assessment of the effectiveness of shelter in place comprises three main stages: the calculation of the outdoor gas dispersion, the estimation of indoor concentration from outdoor concentration and the evaluation of human vulnerability. This thesis is mainly focused on the study of the second stage, which is primarily a function of building leakage.

Initially we performed a bibliographic survey with special interest in models to estimate indoor concentration from outdoor concentration, airtightness of dwellings and ventilation models. Then, through a sensitivity analysis, we found that the air exchange rate has a great influence on the effectiveness of shelter in place. Moreover, since this parameter is different for each building, knowledge of the distribution of this variable in the affected population would lead to a more accurate assessment of the effectiveness of shelter in place, because if we assume it as a fixed value, constant for all buildings, over- or underestimations of the evacuation radius may occur. Therefore, with the aim of making an estimation of the airtightness distribution in Catalunya, we applied the model developed by the Lawrence Berkeley National Laboratory (LBNL), a model based on data from North American dwellings, to Catalan dwellings. The results obtained were influenced by climate zones, due to the coefficients of the model, with the predictions for dwellings located in dry climates being more airtight than those for dwellings in humid zones. In the case of Catalunya, where construction techniques do not differ significantly from one zone to another and most of the dwellings consist of a heavy structure, a difference such as that predicted by the model of the LBNL is not expected. Consequently, we decided to develop a model for Catalan dwellings using the air leakage database from the CETE de Lyon, since French dwellings are more similar to Catalan dwellings than US dwellings. The model developed, named the UPC-CETE model, predicts the airtightness of single-family dwellings as a function of the floor area, the age, the number of stories and the structure type: light or heavy. The airtightness values predicted with this model were smaller than those predicted with the model of the LBNL, as was expected.

Finally, in order to validate and improve the model developed, we carried out a series of trials to measure the air exchange rate in some Catalan dwellings. Measurements in sealed rooms were also performed with the aim of assessing the reduction gained in the air exchange rate with regard to the air exchange rate of the whole dwelling. On average, we obtained reductions of 35% and found that larger reductions belonged to old dwellings with small floor areas and 1 or 2 stories. The improved model was incorporated into the methodology to assess shelter-in-place effectiveness at the stage concerning the estimation of the air exchange rate of the dwellings located in the affected zone; therefore, the assumption of a constant value is avoided. These measurements and the model therefore constitute the first proposal for estimating the airtightness distribution of single-family dwellings that could be used by Catalan authorities for emergency response planning.
APA, Harvard, Vancouver, ISO, and other styles
21

Medan, Lena, and Arturo Montoya. "Reporäntan och dess påverkan på svenska bankers aktiekurser : En eventstudie." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-27910.

Full text
Abstract:
Purpose: The purpose of this thesis is to clarify and analyze the changes in the discount rate and their impact on the stock prices of all Swedish listed banks in large cap on the Stockholm Stock Exchange. Methodology: Quantitative event studies have been done with a deductive research approach on four companies, all listed on the Stockholm Stock Exchange. The abnormal returns for the examined stock prices have been calculated from one day before to one day after all the realizations of the changes in the discount rate that occurred between 2004 and 2015. Theory: The theoretical framework in this study consists of the Efficient Market Hypothesis and the Overreaction Hypothesis. Conclusions: The study has shown that there is a significant correlation between the changes in the discount rate and the equity returns of the studied stocks.
APA, Harvard, Vancouver, ISO, and other styles
22

He, Jun. "THE APPLICATION OF LAST OBSERVATION CARRIED FORWARD (LOCF) IN THE PERSISTENT BINARY CASE." VCU Scholars Compass, 2014. http://scholarscompass.vcu.edu/etd/3621.

Full text
Abstract:
The main purpose of this research was to evaluate use of Last Observation Carried Forward (LOCF) as an imputation method when persistent binary outcomes are missing in a Randomized Controlled Trial. A simulation study was performed to see the effect of dropout rate and type of dropout (random or associated with treatment arm) on Type I error and power. Properties of estimated event rates, treatment effect, and bias were also assessed. LOCF was also compared to two versions of complete case analysis - Complete1 (excluding all observations with missing data), and Complete2 (only carrying forward observations if the event is observed to occur). LOCF was not recommended because of the bias. Type I error was increased, and power was decreased. The other two analyses also had poor properties. LOCF analysis was applied to a mammogram dataset, with results similar to the simulation study.
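To make the imputation concrete, here is a minimal pandas sketch of LOCF applied to a persistent binary outcome with dropout; the toy visits and values are illustrative only and do not reproduce the thesis's simulation design.

```python
import numpy as np
import pandas as pd

# Minimal sketch of Last Observation Carried Forward (LOCF) for a persistent
# binary outcome measured at repeated visits; NaN marks visits missed after
# dropout. The toy data below are invented for illustration.

data = pd.DataFrame(
    {
        "visit1": [0, 0, 1, 0],
        "visit2": [0, 1, np.nan, 0],
        "visit3": [np.nan, np.nan, np.nan, 1],
    },
    index=["subj1", "subj2", "subj3", "subj4"],
)

locf = data.ffill(axis=1)            # carry the last observed value forward
event_rate = locf["visit3"].mean()   # estimated event rate at the final visit
print(locf)
print(f"LOCF-estimated final-visit event rate: {event_rate:.2f}")
```

Because a once-observed event is carried forward for every later visit, dropout that differs between treatment arms can bias the estimated event rates, which is the effect the thesis quantifies.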
APA, Harvard, Vancouver, ISO, and other styles
23

Aboutaleb, Adam. "Empirical study of the effect of stochastic variability on the performance of human-dependent flexible flow lines." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/12103.

Full text
Abstract:
Manufacturing systems have developed both physically and technologically, allowing production of innovative new products in a shorter lead time, to meet 21st-century market demand. Flexible flow lines, for instance, use flexible entities to generate multiple product variants using the same routing. However, the variability within the flow line is asynchronous and stochastic, causing disruptions to the throughput rate. Current autonomous variability control approaches decentralise the autonomous decision, allowing quick response in a dynamic environment. However, they have limitations, e.g., uncertainty that the decision is globally optimal and applicability to limited decisions. This research presents a novel formula-based autonomous control method centred on an empirical study of the effect of stochastic variability on the performance of flexible human-dependent serial flow lines. At the process level, a normal distribution was used, and generic nonlinear terms were then derived to represent the asynchronous variability at the flow-line level. These terms were shortlisted based on their impact on the throughput rate and used to develop the formula using data mining techniques. The developed standalone formulas for the throughput rate of synchronous and asynchronous human-dependent flow lines gave steady and accurate results, higher than the closest rivals, across a wide range of test data sets. Validation with continuous data from a real-world case study gave a mean absolute percentage error of 5%. The formula-based autonomous control method quantifies the impact of changes in decision variables, e.g., routing, arrival rate, etc., on the global delivery performance target, i.e., throughput, and recommends the optimal decisions independent of the performance measures of the current state. This approach gives robust decisions using pre-identified relationships and targets a wider range of decision variables. The performance of the developed autonomous control method was successfully validated for process, routing and product decisions using a standard 3x3 flexible flow line model and the real-world case study. The method was able to consistently reach the optimal decisions that improve local and global performance targets, i.e., throughput, queues and utilisation efficiency, for static and dynamic situations. For the case of parallel processing, which the formula cannot handle, a hybrid autonomous control method, integrating the formula-based method and an existing autonomous control method, i.e., QLE, was developed and validated.
APA, Harvard, Vancouver, ISO, and other styles
24

Choi, Andréa Yoon. "Caracterização dos efeitos da estimulação elétrica no núcleo basalis magnocelular no potencial de campo local e na freqüência cardíaca no condicionamento comportamental de ratos Wistar." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/5/5160/tde-29012009-094621/.

Full text
Abstract:
Acetylcholine is related to learning and memory and to cortical activation. We studied the effects of electrically stimulating the basal forebrain - the main cholinergic afferent to the cortex - while presenting paired and unpaired pure tones. Mathematical techniques were used to analyze the electrophysiological data. The dynamics from the primary auditory cortex and related subcortical nuclei were correlated to the auditory conditioning. We also correlated brain activity to heart dynamics, considered a reliable measure of learning and conditioning, an interesting approach that uncovers the relevance of a stimulus that is not detectable through other behavioral variables.
APA, Harvard, Vancouver, ISO, and other styles
25

Huang, Wei. "A Population-Based Perspective on Clinically Recognized Venous Thromboembolism: Contemporary Trends in Clinical Epidemiology and Risk Assessment of Recurrent Events: A Dissertation." eScholarship@UMMS, 2011. http://escholarship.umassmed.edu/gsbs_diss/730.

Full text
Abstract:
Background: Venous thromboembolism (VTE), comprising the conditions of deep vein thrombosis (DVT) and pulmonary embolism (PE), is a common acute cardiovascular event associated with increased long-term morbidity, functional disability, all-cause mortality, and high rates of recurrence. Major advances in identification, prophylaxis, and treatment over the past 3 decades have likely changed its clinical epidemiology. However, there are little published data describing contemporary, population-based trends in VTE prevention and management. Objectives: To examine recent trends in the epidemiology of clinically recognized VTE and assess the risk of recurrence after a first acute episode of VTE. Methods: We used population-based surveillance to monitor trends in acute VTE among residents of the Worcester, Massachusetts, metropolitan statistical area (WMSA) from 1985 through 2009, including in-hospital and ambulatory settings. Results: Among 5,025 WMSA residents diagnosed with acute PE and/or lower-extremity DVT between 1985 and 2009 (mean age = 65 years), 46% were men and 95% were white. The age- and sex-adjusted annual event rate (per 100,000) of clinically recognized acute first-time and recurrent VTE was 142 overall, increasing from 112 in 1985/86 to 168 in 2009, due primarily to increases in PE occurrence. During this period, non-invasive diagnostic VTE testing increased, while treatment shifted from the in-hospital setting (chiefly with warfarin and unfractionated heparin) to the out-patient setting (chiefly with low-molecular-weight heparins and newer anticoagulants). Among those with community-presenting first-time VTE, subsequent 3-year cumulative event rates of key outcomes decreased from 1999 to 2009, including all-cause mortality (41% to 26%), major bleeding episodes (12% to 6%), and recurrent VTE (17% to 9%). Active cancer (with or without chemotherapy), a hypercoagulable state, varicose vein stripping, and inferior vena cava filter placement were independent predictors of recurrence during short-term (3-month) and long-term (3-year) follow-up after a first acute episode of VTE. We developed risk score calculators for VTE recurrence based on a 3-month prognostic model for all patients and separately for patients without active cancer. Conclusions: Despite advances in identification, prophylaxis, and treatment between 1985 and 2009, the disease burden from VTE in residents of central Massachusetts remains high, with increasing annual events. Declines in the frequency of major adverse outcomes between 1999 and 2009 were reassuring. Still, mortality, major bleeding, and recurrence rates remained high, suggesting opportunities for improved prevention and treatment. Clinicians may be able to use the identified predictors of recurrence and risk score calculators to estimate the risk of VTE recurrence and tailor outpatient treatments to individual patients.
APA, Harvard, Vancouver, ISO, and other styles
26

Huang, Wei. "A Population-Based Perspective on Clinically Recognized Venous Thromboembolism: Contemporary Trends in Clinical Epidemiology and Risk Assessment of Recurrent Events: A Dissertation." eScholarship@UMMS, 2014. https://escholarship.umassmed.edu/gsbs_diss/730.

Full text
Abstract:
Background: Venous thromboembolism (VTE), comprising the conditions of deep vein thrombosis (DVT) and pulmonary embolism (PE), is a common acute cardiovascular event associated with increased long-term morbidity, functional disability, all-cause mortality, and high rates of recurrence. Major advances in identification, prophylaxis, and treatment over the past 3 decades have likely changed its clinical epidemiology. However, there are little published data describing contemporary, population-based trends in VTE prevention and management. Objectives: To examine recent trends in the epidemiology of clinically recognized VTE and assess the risk of recurrence after a first acute episode of VTE. Methods: We used population-based surveillance to monitor trends in acute VTE among residents of the Worcester, Massachusetts, metropolitan statistical area (WMSA) from 1985 through 2009, including in-hospital and ambulatory settings. Results: Among 5,025 WMSA residents diagnosed with acute PE and/or lower-extremity DVT between 1985 and 2009 (mean age = 65 years), 46% were men and 95% were white. The age- and sex-adjusted annual event rate (per 100,000) of clinically recognized acute first-time and recurrent VTE was 142 overall, increasing from 112 in 1985/86 to 168 in 2009, due primarily to increases in PE occurrence. During this period, non-invasive diagnostic VTE testing increased, while treatment shifted from the in-hospital setting (chiefly with warfarin and unfractionated heparin) to the out-patient setting (chiefly with low-molecular-weight heparins and newer anticoagulants). Among those with community-presenting first-time VTE, subsequent 3-year cumulative event rates of key outcomes decreased from 1999 to 2009, including all-cause mortality (41% to 26%), major bleeding episodes (12% to 6%), and recurrent VTE (17% to 9%). Active cancer (with or without chemotherapy), a hypercoagulable state, varicose vein stripping, and inferior vena cava filter placement were independent predictors of recurrence during short-term (3-month) and long-term (3-year) follow-up after a first acute episode of VTE. We developed risk score calculators for VTE recurrence based on a 3-month prognostic model for all patients and separately for patients without active cancer. Conclusions: Despite advances in identification, prophylaxis, and treatment between 1985 and 2009, the disease burden from VTE in residents of central Massachusetts remains high, with increasing annual events. Declines in the frequency of major adverse outcomes between 1999 and 2009 were reassuring. Still, mortality, major bleeding, and recurrence rates remained high, suggesting opportunities for improved prevention and treatment. Clinicians may be able to use the identified predictors of recurrence and risk score calculators to estimate the risk of VTE recurrence and tailor outpatient treatments to individual patients.
APA, Harvard, Vancouver, ISO, and other styles
27

Leder, Alexander Friedrich Sebastian. "Rare-event searches with bolometers." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/119106.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Physics, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 173-185).
Rare-event searches have played an integral part in the pursuit of physics beyond the Standard Model, offering us the chance to bridge the disparity between our current understanding and observed phenomena such as Dark Matter (DM) or the nature of neutrino masses. Over the last 30 years, these experiments have grown larger and more sophisticated, allowing us to probe new and exciting theories of the universe. At the same time, we have started to apply the technologies and techniques used in rare-event searches to areas of applied physics, for example, reactor monitoring using Coherent Elastic Neutrino-Nucleus Scattering (CEvNS) with Ricochet. In this thesis, I will discuss the hardware and analysis techniques required to design, construct, and extract results from these low-background, rare-event searches. In particular, I will discuss the hardware and analysis related to the Cryogenic Dark Matter Search (CDMS), CEvNS detection with Ricochet and the measurement of the effective nuclear quenching factor g, via shape analysis of the highly forbidden In-115 beta spectrum. The latter measurement has far-reaching consequences for all neutrino-less double beta decay experiments, independent of isotope.
by Alexander Friedrich Sebastian Leder.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
28

Kabraiel, Matilda, and Sandra Yildirim. "Reporäntans påverkan på aktiekursen : En eventstudie om hur reporänteförändringar påverkar den svenska aktiemarknaden." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-28205.

Full text
Abstract:
Purpose: The purpose of this study is to examine whether the Riksbank's announcements of repo rate changes have an effect on the Swedish stock market, and whether there are differences between four sectors of the Stockholm Stock Exchange. The study also aims to investigate whether there is a difference in interest rate sensitivity between the sectors. Method: The study is based on an event study with an estimation window of 60 days prior to the announcement of the repo rate change and an event window of 11 days. The sample consists of all repo rate announcements between 2001 and 2015 and of the following sectors, obtained from the Stockholm Stock Exchange: Finance & Real Estate, Industrials, Healthcare, and Technology. Theory: The theoretical basis of the study is the efficient market hypothesis and theories about the repo rate. Theories about the discount rate and about price and income elasticity are also presented. Behavioural finance, an objection to the efficient market hypothesis, is also introduced together with the previous research on which the examination is based. Conclusion: The results show that there is no unambiguous relationship between the Riksbank's announcements of repo rate changes and Swedish stock prices. The results illustrate that there is a difference in interest rate sensitivity between the selected sectors. It cannot be directly established that the Swedish stock market is efficient.
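The event-study mechanics summarised above translate into only a few lines of code. The sketch below is a minimal Python illustration assuming a market-model regression over a 60-day estimation window and an 11-day event window; the return series, the event day and all parameters are hypothetical, not the thesis sample.

    import numpy as np

    def abnormal_returns(stock_ret, market_ret, event_idx, est_win=60, half_event=5):
        """Market-model event study: fit alpha/beta on the estimation window,
        then compute abnormal returns (AR) and their cumulative sum (CAR)
        over an 11-day event window centred on the announcement day."""
        est = slice(event_idx - half_event - est_win, event_idx - half_event)
        beta, alpha = np.polyfit(market_ret[est], stock_ret[est], 1)      # OLS fit
        ev = slice(event_idx - half_event, event_idx + half_event + 1)    # 11 days
        ar = stock_ret[ev] - (alpha + beta * market_ret[ev])              # AR_t
        return ar, ar.cumsum()                                            # AR, CAR

    # Toy illustration with simulated returns (hypothetical data).
    rng = np.random.default_rng(0)
    market = rng.normal(0.0003, 0.01, 300)
    stock = 0.0001 + 1.2 * market + rng.normal(0, 0.008, 300)
    stock[200] -= 0.03                      # pretend the announcement hits on day 200
    ar, car = abnormal_returns(stock, market, event_idx=200)
    print("CAR over the event window: %.4f" % car[-1])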
APA, Harvard, Vancouver, ISO, and other styles
29

Zeeshan, Muhammad Fazal. "Use of an Electronic Reporting System to Determine Adverse Event Rates, Adverse Event Costs, and the Relationship of Adverse Events with Patients’ Body Mass Index." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1372765526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ferreira, Pedro João Bem-Haja Gabriel. "Psychophysiology of eyewitness testimony." Doctoral thesis, Universidade de Aveiro, 2018. http://hdl.handle.net/10773/22797.

Full text
Abstract:
Doutoramento em Psicologia (Doctorate in Psychology)
As testemunhas oculares são muitas vezes o único meio que temos para aceder à autoria de um crime. Contudo, apesar dos 100 anos de evidência de erros no testemunho ocular, a consciência das suas limitações como meio de prova só ganhou força no advento do ADN. De facto os estudos de exoneração mostraram que 70 % das ilibações estavam associadas a erros de testemunho ocular. Estes erros têm um impacto social elevado principalmente os falsos positivos, por colocar inocentes na prisão. De acordo com a literatura, deverão ser utilizadas novas abordagens para tentar reduzir o numero de erros de identificação. Destas abordagens, destacam-se a análise dos padrões de movimentos oculares e os potenciais evocados. Nos nossos estudos utilizamos essas novas abordagens com o objetivo de examinar os padrões de acerto ou de identificação do criminoso, usando um paradigma de deteção de sinal. No que diz respeito aos movimentos oculares, não foram encontrados padrões robustos de acerto. No entanto, obtiveram-se evidências oculométricas de que a fusão de dois procedimentos (Alinhamento Simultâneo depois de um Alinhamento Sequencial com Regra de Paragem) aumenta a probabilidade de acerto. Em relação aos potenciais evocados, a P100 registou maior amplitude quando identificamos um inocente. Este efeito é concomitante com uma hiperactivação no córtex prefrontal ventromedial (CPFVM) identificada na análise de estimação de fontes. Esta hiperativação poderá estar relacionada com uma exacerbação emocional da informação proveniente da amígdala. A literatura relaciona a hiperativação no CPFVM com as falsas memorias, e estes resultados sugerem que a P100 poderá ser um promissor indicador de falsos positivos. Os resultados da N170 não nos permitem associar este componente ao acerto na identificação. Relativamente à P300, os resultados mostram uma maior amplitude deste componente quando identificamos corretamente um alvo, mas não diferiu significativamente de quando identificamos um inocente. Porém, a estimação de fontes mostrou que nessa janela temporal (300-600 ms) se verifica uma hipoativação dos Campos Oculares Frontais (COF) quando um distrator é identificado. Baixas ativações dos COF estão relacionadas com redução da eficiência de processamento e com a incapacidade para detetar alvos. Nas medidas periféricas, a eletromiografia facial mostrou que a maior ativação do corrugador e a menor ativação do zigomático são um bom indicador de quando estamos perante um criminoso. No que diz respeito ao ritmo cardíaco, a desaceleração esperada para os alvos devido à sua saliência emocional apenas foi obtida quando a visualização de um alvo foi acompanhada por um erro na identificação (i.e., um falso negativo). Neste trabalho de investigação parece que o sistema nervoso periférico está a responder corretamente, identificando o alvo, por ser emocionalmente mais saliente, enquanto que a modulação executiva efectuada pelo CPFVM conduz ao falso positivo. Os resultados obtidos são promissores e relevantes, principalmente quando o resultado de um erro poderá ser uma condenação indevida e, consequentemente, uma vida injustamente destruída.
Eyewitnesses are often the only way we can identify the perpetrator of a crime. However, despite 100 years of evidence of errors in eyewitness testimony, awareness of its limitations only gained strength with the advent of DNA evidence. In fact, 70% of exonerations have been associated with eyewitness errors. These errors have a high social impact, mainly false positives. According to the literature, new approaches should be used to try to reduce the number of identification errors. Of these, the study of oculometric patterns and event-related potentials (ERPs) stand out. In our studies, these new approaches were used with the objective of examining patterns of accuracy, using a signal detection paradigm. Regarding eye movements, no entirely clear patterns were found. However, there was oculometric evidence that the merging of two procedures (Simultaneous Lineup after a Sequential Lineup with Stopping Rule) increases performance accuracy. Regarding ERPs, the P100 registered a larger amplitude when an innocent was identified. This effect is concomitant with a hyperactivation in the ventromedial prefrontal cortex (VMPFC) identified by source estimation analysis. This hyperactivation might be related to an emotional exacerbation of the information coming from the amygdala. The literature relates hyperactivation in the VMPFC with false memories, and these results suggest that the P100 component might be a promising marker of false positive errors. The results of the N170 do not allow us to associate this component with identification accuracy. Regarding the P300, the results showed a greater amplitude of this component when a target was correctly identified, but the amplitude did not differ significantly from when an innocent was identified. However, source analysis in this time window (300-600 ms) showed a hypoactivation of the Frontal Eye Fields (FEF) when a distractor was identified. Reduced FEF activation is related to lower processing efficiency and to the inability to detect a target. Concerning the peripheral measures, facial electromyography showed that greater activation of the corrugator and lower activation of the zygomaticus are a good marker that the face being viewed is the perpetrator's. Regarding heart rate, the expected deceleration for the targets due to their emotional salience was only obtained when the visualization of a target was accompanied by an error in the identification (i.e., a miss). In this research it seems that the peripheral nervous system is responding correctly, identifying the target because it is emotionally more salient, while the executive modulation carried out by the VMPFC causes the false positive error. The results obtained are promising and relevant, especially since the result of an error might be the wrongful conviction of an innocent person and, consequently, a destroyed life.
APA, Harvard, Vancouver, ISO, and other styles
31

Walter, Clément. "Using Poisson processes for rare event simulation." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCC304/document.

Full text
Abstract:
Cette thèse est une contribution à la problématique de la simulation d'événements rares. A partir de l'étude des méthodes de Splitting, un nouveau cadre théorique est développé, indépendant de tout algorithme. Ce cadre, basé sur la définition d'un processus ponctuel associé à toute variable aléatoire réelle, permet de définir des estimateurs de probabilités, quantiles et moments sans aucune hypothèse sur la variable aléatoire. Le caractère artificiel du Splitting (sélection de seuils) disparaît et l'estimateur de la probabilité de dépasser un seuil est en fait un estimateur de la fonction de répartition jusqu'au seuil considéré. De plus, les estimateurs sont basés sur des processus ponctuels indépendants et identiquement distribués et permettent donc l'utilisation de machine de calcul massivement parallèle. Des algorithmes pratiques sont ainsi également proposés.Enfin l'utilisation de métamodèles est parfois nécessaire à cause d'un temps de calcul toujours trop important. Le cas de la modélisation par processus aléatoire est abordé. L'approche par processus ponctuel permet une estimation simplifiée de l'espérance et de la variance conditionnelles de la variable aléaoire résultante et définit un nouveau critère d'enrichissement SUR adapté aux événements rares
This thesis addresses the issue of extreme event simulation. Starting from an original understanding of Splitting methods, a new theoretical framework is proposed, independent of any particular algorithm. This framework is based on a point process associated with any real-valued random variable and allows probability, quantile and moment estimators to be defined without any hypothesis on this random variable. The artificial selection of thresholds in Splitting vanishes, and the estimator of the probability of exceeding a threshold is in fact an estimator of the whole cumulative distribution function up to the given threshold. These estimators are based on the simulation of independent and identically distributed replicas of the point process, so they allow the use of massively parallel computer clusters. Suitable practical algorithms are thus also proposed. Finally, it can happen that these advanced statistics still require too many samples. In this context the computer code is considered as a random process with known distribution. The point process framework makes it possible to handle this additional source of uncertainty and to estimate easily the conditional expectation and variance of the resulting random variable. It also defines new SUR enrichment criteria designed for extreme event probability estimation.
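The point-process idea behind this framework can be illustrated on a toy case: the number of increasing record levels of X that fall at or below a threshold q is Poisson with mean -log P(X > q), which yields an unbiased estimator of the exceedance probability. In the sketch below, conditional sampling above a level is done exactly (standard exponential, so memorylessness applies), whereas a general algorithm would use MCMC moves for that step; everything in the sketch is an illustrative assumption, not code from the thesis.

    import numpy as np

    def exceed_prob_point_process(sample_cond, q, n_replicas=200, rng=None):
        """Estimate p = P(X > q) via the Poisson/point-process view of Splitting:
        each replica simulates the increasing sequence of record levels of X and
        counts how many records fall at or below q; that count is Poisson(-log p).
        p is then estimated by the unbiased form (1 - 1/R)**(total count)."""
        rng = rng or np.random.default_rng()
        total = 0
        for _ in range(n_replicas):
            x = sample_cond(-np.inf, rng)          # X ~ F
            while x <= q:
                total += 1
                x = sample_cond(x, rng)            # X ~ F conditioned on X > current level
        return (1.0 - 1.0 / n_replicas) ** total

    # Toy case: X ~ Exp(1); conditional sampling above a level is exact (memorylessness).
    def sample_exp_above(level, rng):
        return max(level, 0.0) + rng.exponential(1.0)

    p_hat = exceed_prob_point_process(sample_exp_above, q=15.0)
    print(p_hat, "vs exact", np.exp(-15.0))   # ~3e-7, far beyond naive Monte Carlo reach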
APA, Harvard, Vancouver, ISO, and other styles
32

Jegourel, Cyrille. "Rare event simulation for statistical model checking." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S084/document.

Full text
Abstract:
Dans cette thèse, nous considérons deux problèmes auxquels le model checking statistique doit faire face. Le premier concerne les systèmes hétérogènes qui introduisent complexité et non-déterminisme dans l'analyse. Le second problème est celui des propriétés rares, difficiles à observer et donc à quantifier. Pour le premier point, nous présentons des contributions originales pour le formalisme des systèmes composites dans le langage BIP. Nous en proposons une extension stochastique, SBIP, qui permet le recours à l'abstraction stochastique de composants et d'éliminer le non-déterminisme. Ce double effet a pour avantage de réduire la taille du système initial en le remplaçant par un système dont la sémantique est purement stochastique sur lequel les algorithmes de model checking statistique sont définis. La deuxième partie de cette thèse est consacrée à la vérification de propriétés rares. Nous avons proposé le recours à un algorithme original d'échantillonnage préférentiel pour les modèles dont le comportement est décrit à travers un ensemble de commandes. Nous avons également introduit les méthodes multi-niveaux pour la vérification de propriétés rares et nous avons justifié et mis en place l'utilisation d'un algorithme multi-niveau optimal. Ces deux méthodes poursuivent le même objectif de réduire la variance de l'estimateur et le nombre de simulations. Néanmoins, elles sont fondamentalement différentes, la première attaquant le problème au travers du modèle et la seconde au travers des propriétés
In this thesis, we consider two problems with which statistical model checking must cope. The first problem concerns heterogeneous systems, which naturally introduce complexity and non-determinism into the analysis. The second problem concerns rare properties, which are difficult to observe and therefore to quantify. Regarding the first point, we present original contributions to the formalism of composite systems in the BIP language. We propose SBIP, a stochastic extension, and define its semantics. SBIP allows the recourse to stochastic abstraction of components and eliminates non-determinism. This double effect has the advantage of reducing the size of the initial system by replacing it with a system whose semantics is purely stochastic, a necessary requirement for standard statistical model checking algorithms to be applicable. The second part of this thesis is devoted to the verification of rare properties in statistical model checking. We propose an original importance sampling algorithm for models whose behaviour is described by a set of guarded commands. Lastly, we motivate the use of importance splitting for statistical model checking and set up an optimal splitting algorithm. Both methods pursue the common goal of reducing the variance of the estimator and the number of simulations. Nevertheless, they are fundamentally different, the first tackling the problem through the model and the second through the properties.
APA, Harvard, Vancouver, ISO, and other styles
33

Suzuki, Yuya. "Rare-event Simulation with Markov Chain Monte Carlo." Thesis, KTH, Matematisk statistik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-138950.

Full text
Abstract:
In this thesis, we consider random sums with heavy-tailed increments. By the term random sum, we mean a sum of random variables in which the number of summands is also random. Our interest is to analyse the tail behaviour of random sums and to construct an efficient method to calculate quantiles. For the sake of efficiency, we simulate rare events (tail events) using a Markov chain Monte Carlo (MCMC) method. The asymptotic tail behaviour of the sum and of the maximum of heavy-tailed random sums is identical. Therefore we compare the random sum and the maximum for various distributions, to investigate from which point on the asymptotic approximation can be used. Furthermore, we propose a new method to estimate quantiles, and the estimator is shown to be efficient.
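A quick simulation makes the sum-versus-maximum comparison concrete. The sketch below assumes Pareto increments and a geometric number of summands (illustrative choices, not necessarily those studied in the thesis) and compares the empirical tails of the random sum and of the maximum with the one-big-jump approximation E[N] P(Y1 > x).

    import numpy as np

    rng = np.random.default_rng(1)
    alpha, p_geo, n_sims = 1.5, 0.1, 100_000    # Pareto tail index, geometric parameter

    # Simulate random sums S = Y_1 + ... + Y_N and maxima M = max Y_i with
    # N ~ Geometric(p_geo) on {1, 2, ...} and Y_i ~ Pareto(alpha) on [1, inf).
    N = rng.geometric(p_geo, n_sims)
    S = np.empty(n_sims); M = np.empty(n_sims)
    for i, n in enumerate(N):
        y = (1.0 - rng.random(n)) ** (-1.0 / alpha)   # inverse-CDF Pareto samples
        S[i], M[i] = y.sum(), y.max()

    for x in (50.0, 100.0, 200.0):
        approx = N.mean() * x ** (-alpha)             # E[N] * P(Y_1 > x), single big jump
        print(x, (S > x).mean(), (M > x).mean(), approx)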
APA, Harvard, Vancouver, ISO, and other styles
34

Gudmundsson, Thorbjörn. "Rare-event simulation with Markov chain Monte Carlo." Doctoral thesis, KTH, Matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-157522.

Full text
Abstract:
Stochastic simulation is a popular method for computing probabilities or expectations where analytical answers are difficult to derive. It is well known that standard methods of simulation are inefficient for computing rare-event probabilities, and therefore more advanced methods are needed for those problems. This thesis presents a new method based on a Markov chain Monte Carlo (MCMC) algorithm to effectively compute the probability of a rare event. The conditional distribution of the underlying process given that the rare event occurs has the probability of the rare event as its normalising constant. Using the MCMC methodology, a Markov chain is simulated with that conditional distribution as its invariant distribution, and information about the normalising constant is extracted from its trajectory. In the first two papers of the thesis, the algorithm is described in full generality and applied to four problems of computing rare-event probabilities in the context of heavy-tailed distributions. The assumption of heavy tails allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and heavy-tailed. The second problem is an extension of the first one to a heavy-tailed random sum Y1 + · · · + YN exceeding a high threshold, where the number of increments N is random and independent of Y1, Y2, . . .. The third problem considers the solution Xm to a stochastic recurrence equation, Xm = AmXm−1 + Bm, exceeding a high threshold, where the innovations B are independent and identically distributed and heavy-tailed and the multipliers A satisfy a moment condition. The fourth problem is closely related to the third and considers the ruin probability for an insurance company with risky investments. In the last two papers of this thesis, the algorithm is extended to the context of light-tailed distributions and applied to four problems. The light-tail assumption ensures the existence of a large deviation principle or Laplace principle, which in turn allows us to propose distributions which approximate the conditional distribution conditioned on the rare event. The first problem considers a random walk Y1 + · · · + Yn exceeding a high threshold, where the increments Y are independent and identically distributed and light-tailed. The second problem considers discrete-time Markov chains and the computation of general expectations of sample-path functionals related to rare events. The third problem extends the discrete-time setting to Markov chains in continuous time. The fourth problem is closely related to the third and considers a birth-and-death process with spatial intensities and the computation of first-passage probabilities. An unbiased estimator of the reciprocal probability for each corresponding problem is constructed with efficient rare-event properties. The algorithms are illustrated numerically and compared to existing importance sampling algorithms.
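The reciprocal-probability idea for the first heavy-tailed problem can be sketched as follows, under assumptions chosen here purely for illustration: Pareto increments, a Gibbs-style move that redraws one increment and accepts only if the walk still exceeds the threshold, and an auxiliary density built from the conditional tail of the last increment. This toy version is consistent rather than exactly unbiased (the chain is started from an arbitrary point) and is not the thesis implementation.

    import numpy as np

    # Sketch: estimate p = P(Y1 + ... + Yn > c) for heavy-tailed (Pareto) increments.
    # Sample the conditional law given the rare event with MCMC, then average
    # u(Y)/f(Y) along the chain for a density u supported on the event; the
    # average estimates 1/p.
    alpha, n, c = 2.0, 5, 100.0
    rng = np.random.default_rng(0)
    pareto_tail = lambda x: 1.0 if x <= 1.0 else x ** (-alpha)   # P(Y > x), support [1, inf)
    sample_Y = lambda size=None: (1.0 - rng.random(size)) ** (-1.0 / alpha)

    y = sample_Y(n); y[-1] = c            # start inside the event with one big jump
    ratios, n_steps, burn = [], 100_000, 1_000
    for t in range(n_steps):
        i = rng.integers(n)
        prop = y.copy(); prop[i] = sample_Y()
        if prop.sum() > c:                # accept iff the walk still exceeds the threshold
            y = prop
        if t >= burn:
            # u(y)/f(y) = 1 / P(Y_n > c - sum of the first n-1 increments)
            ratios.append(1.0 / pareto_tail(c - y[:-1].sum()))
    p_hat = 1.0 / np.mean(ratios)
    print("p_hat =", p_hat, " one-big-jump approx =", n * c ** (-alpha))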


APA, Harvard, Vancouver, ISO, and other styles
35

Phinikettos, Ioannis. "Monitoring in survival analysis and rare event simulation." Thesis, Imperial College London, 2012. http://hdl.handle.net/10044/1/9518.

Full text
Abstract:
Monte Carlo methods are a fundamental tool in many areas of statistics. In this thesis, we will examine these methods, especially for rare event simulation. We are mainly interested in the computation of multivariate normal probabilities and in constructing hitting thresholds in survival analysis models. Firstly, we develop an algorithm for computing high dimensional normal probabilities. These kinds of probabilities are a fundamental tool in many statistical applications. The new algorithm exploits the diagonalisation of the covariance matrix and uses various variance reduction techniques. Its performance is evaluated via a simulation study. The new method is designed for computing small exceedance probabilities. Secondly, we introduce a new omnibus cumulative sum chart for monitoring in survival analysis models. By omnibus we mean that it is able to detect any change. This chart exploits the absolute differences between the Kaplan-Meier estimator and the in-control distribution over specific time intervals. A simulation study is presented that evaluates the performance of our proposed chart and compares it to existing methods. Thirdly, we apply the method of adaptive multilevel splitting for the estimation of hitting probabilities and hitting thresholds for the survival analysis cumulative sum charts. Simulation results are presented evaluating the benefits of adaptive multilevel splitting. Finally, we extend the idea of adaptive multilevel splitting by estimating not just a hitting probability, but the whole distribution function up to a certain point. A theoretical result is proved that is used to construct confidence bands for the distribution function conditioned on lying in a closed interval.
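The monitoring statistic described for the second contribution, the absolute difference between the Kaplan-Meier estimator and the in-control distribution over a time interval, can be sketched as below. The survival data, the in-control exponential model and the monitored interval are illustrative choices; the actual chart design (how differences are accumulated into a CUSUM, and its control limits) is not reproduced here.

    import numpy as np

    def kaplan_meier(times, events):
        """Kaplan-Meier survival estimate at each event time.
        times: observed times; events: 1 = failure observed, 0 = censored."""
        order = np.argsort(times)
        t, d = np.asarray(times)[order], np.asarray(events)[order]
        at_risk, surv, out_t, out_s = len(t), 1.0, [], []
        for ti, di in zip(t, d):
            if di == 1:
                surv *= 1.0 - 1.0 / at_risk
                out_t.append(ti); out_s.append(surv)
            at_risk -= 1
        return np.array(out_t), np.array(out_s)

    # Illustrative in-control model: exponential survival S0(t) = exp(-t).
    rng = np.random.default_rng(3)
    true_times = rng.exponential(0.7, 200)            # shifted hazard, i.e. out of control
    censor = rng.exponential(2.0, 200)
    obs = np.minimum(true_times, censor)
    evt = (true_times <= censor).astype(int)

    t_km, s_km = kaplan_meier(obs, evt)
    interval = (t_km >= 0.1) & (t_km <= 1.5)          # monitor over a fixed time interval
    stat = np.max(np.abs(s_km[interval] - np.exp(-t_km[interval])))
    print("sup |KM - S0| over the interval:", stat)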
APA, Harvard, Vancouver, ISO, and other styles
36

Savojardo, Antonino. "Rare events in optical fibers." Thesis, University of Warwick, 2018. http://wrap.warwick.ac.uk/110215/.

Full text
Abstract:
This thesis examines the topic of rogue waves and interacting quasi-solitons in optical fibers. We demonstrate a simple cascade mechanism that drives the formation and emergence of rogue waves in the generalized non-linear Schrödinger equation with third-order dispersion. This generation mechanism is based on inelastic collisions of quasi-solitons and is well described by a resonant-like scattering behavior for the energy transfer in pair-wise quasi-soliton collisions. Our theoretical and numerical results demonstrate a threshold for rogue wave emergence and the existence of a period of reduced amplitudes — a "calm before the storm" — preceding the arrival of a rogue wave event. Using simulations over long time windows, we observe the statistics of rogue waves in optical fibers with an unprecedented level of detail and accuracy, unambiguously establishing the long-ranged character of the rogue wave probability density function over seven orders of magnitude. The same cascade mechanism also generates rogue waves in the generalized non-linear Schrödinger equation with Raman term. To comprehend the physics governing rogue wave formation, we propose an experimental setup where soliton amplification is induced without third-order dispersion or Raman term. In an optical fiber with anomalous dispersion, we replace a small region of the fiber with a normal dispersion fiber. We show that solitons colliding in this region are able to exchange energy. Depending on the relative phase of the soliton pair, we find that the energy transfer can lead to an energy gain in excess of 20% for each collision. A sequence of such events can be used to enhance the energy gain even further, allowing the possibility of considerable soliton amplification.
APA, Harvard, Vancouver, ISO, and other styles
37

Ohannessian, Mesrob I. 1981. "On inference about rare events." Thesis, Massachusetts Institute of Technology, 2012. http://hdl.handle.net/1721.1/71278.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student submitted PDF version of thesis.
Includes bibliographical references (p. 75-77).
Despite the increasing volume of data in modern statistical applications, critical patterns and events have often little, if any, representation. This is not unreasonable, given that such variables are critical precisely because they are rare. We then have to raise the natural question: when can we infer something meaningful in such contexts? The focal point of this thesis is the archetypal problem of estimating the probability of symbols that have occurred very rarely, in samples drawn independently from an unknown discrete distribution. Our first contribution is to show that the classical Good-Turing estimator that is used in this problem has performance guarantees that are asymptotically non-trivial only in a heavy-tail setting. This explains the success of this method in natural language modeling, where one often has Zipf law behavior. We then study the strong consistency of estimators, in the sense of ratios converging to one. We first show that the Good-Turing estimator is not universally consistent. We then use Karamata's theory of regular variation to prove that regularly varying heavy tails are sufficient for consistency. At the core of this result is a multiplicative concentration that we establish both by extending the McAllester-Ortiz additive concentration for the missing mass to all rare probabilities and by exploiting regular variation. We also derive a family of estimators which, in addition to being strongly consistent, address some of the shortcomings of the Good-Turing estimator. For example, they perform smoothing implicitly. This framework is a close parallel to extreme value theory, and many of the techniques therein can be adopted into the model set forth in this thesis. Lastly, we consider a different model that captures situations of data scarcity and large alphabets, and which was recently suggested by Wagner, Viswanath and Kulkarni. In their rare-events regime, one scales the finite support of the distribution with the number of samples, in a manner akin to high-dimensional statistics. In that context, we propose an approach that allows us to easily establish consistent estimators for a large class of canonical estimation problems. These include estimating entropy, the size of the alphabet, and the range of the probabilities.
by Mesrob I. Ohannessian.
Ph.D.
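The Good-Turing estimator at the centre of this thesis has a very short closed form: the missing mass is estimated by N1/n, and the total probability of symbols seen exactly r times by (r+1)N_{r+1}/n. The sketch below checks the missing-mass estimate on a Zipf-like distribution; the distribution and sample size are illustrative choices consistent with the heavy-tail setting discussed above.

    import numpy as np
    from collections import Counter

    def good_turing_masses(sample):
        """Good-Turing estimates: M0 = N1/n (missing mass) and, for each count r,
        the total probability of all symbols seen exactly r times, (r+1) N_{r+1} / n."""
        n = len(sample)
        counts = Counter(sample)                       # symbol -> number of occurrences
        count_of_counts = Counter(counts.values())     # r -> N_r
        missing_mass = count_of_counts.get(1, 0) / n
        mass_r = {r: (r + 1) * count_of_counts.get(r + 1, 0) / n
                  for r in sorted(count_of_counts)}
        return missing_mass, mass_r

    # Toy check on a Zipf-like (power-law) distribution, where Good-Turing behaves well.
    rng = np.random.default_rng(7)
    K = 10_000
    probs = 1.0 / np.arange(1, K + 1); probs /= probs.sum()
    sample = rng.choice(K, size=5_000, p=probs)

    m0_hat, _ = good_turing_masses(sample)
    true_m0 = probs[np.setdiff1d(np.arange(K), np.unique(sample))].sum()
    print("Good-Turing missing mass:", m0_hat, " true missing mass:", true_m0)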
APA, Harvard, Vancouver, ISO, and other styles
38

Collingwood, Jesse. "Path Properties of Rare Events." Thesis, Université d'Ottawa / University of Ottawa, 2015. http://hdl.handle.net/10393/31948.

Full text
Abstract:
Simulation of rare events can be costly with respect to time and computational resources. For certain processes it may be more efficient to begin at the rare event and simulate a kind of reversal of the process. This approach is particularly well suited to reversible Markov processes, but holds much more generally. This more general result is formulated precisely in the language of stationary point processes, proven, and applied to some examples. An interesting question is whether this technique can be applied to Markov processes which are substochastic, i.e. processes which may die if a graveyard state is ever reached. First, some of the theory of substochastic processes is developed; in particular a slightly surprising result about the rate of convergence of the distribution pi(n) at time n of the process conditioned to stay alive to the quasi-stationary distribution, or Yaglom limit, is proved. This result is then verified with some illustrative examples. Next, it is demonstrated with an explicit example that on infinite state spaces the reversal approach to analyzing both the rate of convergence to the Yaglom limit and the likely path of rare events can fail due to transience.
APA, Harvard, Vancouver, ISO, and other styles
39

Shoda, Elizabeth Ann. "Impact of Binaural Beat Technology on Vigilance Task Performance, Psychological Stress and Mental Workload." Wright State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=wright1374240120.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Schaub, Katherine Elizabeth. "Rape as a Legitimate Medical event from 1800 - 1910." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1372382350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Landis, Margaret E., Shane Byrne, Ingrid J. Daubar, Kenneth E. Herkenhoff, and Colin M. Dundas. "A revised surface age for the North Polar Layered Deposits of Mars." AMER GEOPHYSICAL UNION, 2016. http://hdl.handle.net/10150/615108.

Full text
Abstract:
The North Polar Layered Deposits (NPLD) of Mars contain a complex stratigraphy that has been suggested to retain a record of past eccentricity- and obliquity-forced climate changes. The surface accumulation rate in the current climate can be constrained by the crater retention age. We scale NPLD crater diameters to account for icy target strength and compare surface age using a new production function for recent small impacts on Mars to the previously used model of Hartmann (2005). Our results indicate that ice is accumulating in these craters several times faster than previously thought, with a 100 m diameter crater being completely infilled within centuries. Craters appear to have a diameter-dependent lifetime, but the data also permit a complete resurfacing of the NPLD at ~1.5 ka.
APA, Harvard, Vancouver, ISO, and other styles
42

Blohm, Per, and Andreas Wagemann. "Gör kritiken någon skillnad? : En studie om filmlanseringars finansiella påverkan." Thesis, Södertörns högskola, Institutionen för samhällsvetenskaper, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-30413.

Full text
Abstract:
Purpose: To examine the relationship between a new movie release and the stock value of movie producers in America, to look for a connection between movie critics' reviews and the stock price, and to attempt to find similar patterns for Swedish movies and their financial performance. Theoretical Framework: Based on theories of efficient and inefficient markets, behavioural finance and previous research in the field. Method: The study has a quantitative and deductive approach. An event study method is used to examine five large movie studios in the USA, and the Swedish film producers are examined through the number of paying customers. Results: The results are shown in charts illustrating the abnormal rate of return (AR) and the relationship between movie release and AR. The movie reviews are also presented in charts, both for the American and the Swedish movies. Conclusion: The results show that an overall negative rate of return of -0.24% occurs at the time of a movie release. A connection between stock price and movie release was found, and positive reviews generate positive AR.
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Hong. "Rare events, heavy tails, and simulation." Diss., Connect to online resource, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3239435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Malsom, Patrick. "Rare Events and the Thermodynamic Action." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1439307406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Gedion, Michael. "Contamination des composants électroniques par des éléments radioactifs." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20267/document.

Full text
Abstract:
Cette thèse a pour objet l'étude des éléments radioactifs qui peuvent altérer le bon fonctionnement des composants électroniques au niveau terrestre. Ces éléments radioactifs sont appelés émetteurs alpha. Intrinsèques aux composants électroniques, ils se désintègrent et émettent des particules alpha qui ionisent la matière du dispositif électronique et déclenchent des SEU (Single Event Upset). Ces travaux visent à évaluer la fiabilité des circuits digitaux due à cette contrainte radiative interne aux composants électroniques. Dans ce but, tous les émetteurs alpha naturelles ou artificielles susceptibles de contaminer les matériaux des circuits digitaux ont été identifiés et classés en deux catégories : les impuretés naturelles et les radionucléides introduits. Les impuretés naturelles proviennent d'une contamination naturelle ou involontaire des matériaux utilisés. Afin d'évaluer leurs effets sur la fiabilité, le SER (Soft Error Rate) a été déterminé par simulations Monte-Carlo pour différents nœuds technologiques dans le cas de l'équilibre séculaire. Par ailleurs, avec la miniaturisation des circuits digitaux, de nouveaux éléments chimiques ont été suggérés ou employés dans la nanoélectronique. Les radionucléides introduits regroupent ce type d'élément naturellement constitué d'émetteurs alpha. Des études basées sur des simulations Monte-Carlo et des applications analytiques ont été effectués pour évaluer la fiabilité des dispositifs électroniques. Par la suite, des recommandations ont été proposées sur l'emploi de nouveaux éléments chimiques dans la nanotechnologie
This work studies radioactive elements that can affect the proper functioning of electronic components at ground level. These radioactive elements are called alpha emitters. Intrinsic to electronic components, they decay and emit alpha particles which ionize the material of the electronic device and trigger SEUs (Single Event Upsets). This thesis aims to assess the reliability of digital circuits under this radiative constraint internal to electronic components. To that end, all alpha-emitting natural or artificial isotopes that can contaminate digital circuits have been identified and classified into two categories: natural impurities and introduced radionuclides. Natural impurities result from natural or accidental contamination of the materials used in nanotechnology. To assess their effects on reliability, the SER (Soft Error Rate) was determined by Monte Carlo simulations for different technology nodes in the case of secular equilibrium. In addition, a new analytical approach was developed to determine the consequences of secular disequilibrium on the reliability of digital circuits. Moreover, with the miniaturization of digital circuits, new chemical elements have been suggested or used in nanoelectronics. Introduced radionuclides cover this type of element, which naturally contains alpha emitters. Studies based on Monte Carlo simulations and analytical approaches have been conducted to evaluate the reliability of electronic devices. Subsequently, recommendations were proposed on the use of new chemical elements in nanotechnology.
APA, Harvard, Vancouver, ISO, and other styles
46

Yuen, Wai-kee. "A historical event analysis of the variability in the empirical uncovered interest parity (UIP) coefficient." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B36424201.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Batagin, Armelin Fábio. "Stratégie d'estimation de la vulnérabilité aux erreurs `soft' basée sur la susceptibilité aux événements transitoires de chaque porte logique." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT035.

Full text
Abstract:
La vulnérabilité aux erreurs récupérables (SEV - Soft Error Vulnerability) est un paramètre estimé qui, associé aux caractéristiques de l’environnement de rayonnement, permet d’obtenir le SER (Soft-Error Rate), une métrique couramment utilisée pour prédire le comportement des systèmes électroniques numériques exposés au rayonnement de particules. Actuellement, la méthode la plus précise pour l’estimation de SER est le test de rayonnement, car elle présente l’interaction réelle des particules avec le dispositif électronique. Cependant, ce test est coûteux et requiert le circuit qui, lui, n’est disponible qu’à la fin du cycle de développement. Cela a motivé le développement d'autres méthodes d'estimation de SER et de SEV, notamment des méthodes analytiques, des simulations électriques et logiques, ainsi que des approches basées sur l'émulation. Ces techniques incorporent généralement des effets de masquage logique, électrique ou temporel. Néanmoins, la plupart de ces techniques ne prennent pas en compte la susceptibilité aux événements singuliers transitoires (SET - Single Event Transient). Ce facteur est intrinsèque au test de radiation et représente la probabilité que le rayonnement ionisant produise une erreur `soft' à la sortie des portes logiques du circuit. Dans ce contexte, cette thèse propose une stratégie d’estimation de SEV basée sur les susceptibilités aux SET. Deux versions de cette stratégie sont considérées : la version simplifiée, où les susceptibilités SET prennent en compte seulement les effets de la topologie des portes logiques et la version complète, où les susceptibilités prennent en compte la topologie et le fonctionnement du circuit. La stratégie proposée a été évaluée avec une approche basée sur la simulation, estimant la SEV de 38 circuits de référence. Les résultats montrent que les deux versions de la stratégie entraînent une amélioration de la précision de l'estimation, la version complète présentant l'erreur d'estimation la plus faible. Enfin, la faisabilité de l’adoption de la stratégie proposée est démontrée avec approche basée sur l’émulation
The Soft-Error Vulnerability (SEV) is an estimated parameter that, in conjunction with the characteristics of the radiation environment, is used to obtain the Soft-Error Rate (SER), a metric used to predict how digital systems will behave in this environment. Currently, the method with the highest confidence for SER estimation is the radiation test, since it involves the actual interaction of the radiation with the electronic device. However, this test is expensive and requires the real device, which only becomes available late in the design cycle. These restrictions motivated the development of other SER and SEV estimation methods, including analytical, electrical- and logic-simulation, and emulation-based approaches. These techniques usually incorporate the logical, electrical and latching-window masking effects into the estimation process. Nevertheless, most of them do not take into account a factor that is intrinsic to the radiation test: the probability that a radiation particle produces a Soft Error (SE) at the output of the gates of the circuit, referred to as the Single-Event Transient (SET) susceptibility. In this context, we propose a strategy for SEV estimation based on these SET susceptibilities, suitable for simulation- and emulation-based frameworks. In a simplified version of this strategy, the SET susceptibilities take into account only the effects of the gate topology, while in the complete version these susceptibilities consider both the topology and the operation of the circuit, which affects its input pattern distribution. The proposed strategy was evaluated with a simulation-based framework, estimating the SEV of 38 benchmark circuits. The results show that both versions of the strategy lead to an improvement in estimation accuracy, with the complete version presenting the lowest estimation error. Finally, we show the feasibility of adopting the proposed strategy with an emulation-based framework.
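The general idea of weighting fault-injection outcomes by per-gate SET susceptibilities can be sketched on a toy combinational circuit, as below. The netlist, the susceptibility values and the purely logical masking model are all illustrative assumptions and do not reproduce the strategy evaluated in the thesis.

    import random

    # Tiny hypothetical netlist. Each gate: (function, input names). Primary inputs: a, b, c.
    GATES = {
        "g1": (lambda v: v["a"] & v["b"], ("a", "b")),
        "g2": (lambda v: v["b"] | v["c"], ("b", "c")),
        "out": (lambda v: v["g1"] ^ v["g2"], ("g1", "g2")),
    }
    # Illustrative per-gate SET susceptibilities (probability a strike upsets the gate output).
    SET_SUSC = {"g1": 0.30, "g2": 0.50, "out": 0.20}

    def evaluate(inputs, flip_gate=None):
        """Evaluate the netlist; optionally invert one gate output to model an SET."""
        v = dict(inputs)
        for name, (fn, _) in GATES.items():    # insertion order is topological order here
            v[name] = fn(v) ^ (1 if name == flip_gate else 0)
        return v["out"]

    def estimate_sev(n_vectors=20_000):
        """SEV ~ sum over gates of SET susceptibility * probability of no logical masking,
        the latter estimated by random-vector fault injection."""
        sev = 0.0
        for g, susc in SET_SUSC.items():
            propagated = 0
            for _ in range(n_vectors):
                inp = {x: random.getrandbits(1) for x in ("a", "b", "c")}
                if evaluate(inp) != evaluate(inp, flip_gate=g):
                    propagated += 1
            sev += susc * propagated / n_vectors
        return sev

    print("estimated SEV (arbitrary units):", estimate_sev())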
APA, Harvard, Vancouver, ISO, and other styles
48

Drozdenko, Myroslav. "Weak Convergence of First-Rare-Event Times for Semi-Markov Processes." Doctoral thesis, Västerås : Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-394.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Wahl, David. "Optimisation of light collection in inorganic scintillators for rare event searches." Thesis, University of Oxford, 2005. http://ora.ox.ac.uk/objects/uuid:c41d6500-c513-405f-926f-547a588aa1da.

Full text
Abstract:
Inorganic scintillators are playing an ever increasing role in the search for rare events. Progress in the use of cryogenic phonon-scintillation detectors (CPSD) has allowed for a rapid increase in sensitivity and resolution of experiments using this technique. It is likely that CPSD will be used in future dark matter searches with multiple scintillator materials. Further improvements in the performance of CPSD can be expected if the amount of light collected is increased. In this thesis, two approaches are used to look at ways of maximising the amount of light collected in CPSD modules. The first approach is to obtain a detailed understanding of the spectroscopic properties in the crystal to identify ways of increasing their scintillation intensity. The second is to simulate the light collection properties using a Monte-Carlo simulation program. This requires a detailed understanding of the optical properties of inorganic scintillators and obtaining this information is the focus of the current work. Two new methods have been developed to evaluate the scintillation decay time and the intrinsic light yield of scintillators. These methods are tested on CRESST CaWO4 crystals so that all the input parameters necessary for the simulation of CRESST modules is available. These input parameters are used to successfully explain features of the light collection in CRESST CPSD modules and to suggest possible improvements to the design of the modules. In summary, the current work has contributed to the development of a standardised method to maximise the light yield that can be obtained from CPSD for application to rare event searches.
APA, Harvard, Vancouver, ISO, and other styles
50

Zhang, Benjamin Jiahong. "A coupling approach to rare event simulation via dynamic importance sampling." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112384.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 106-109).
Rare event simulation involves using Monte Carlo methods to estimate probabilities of unlikely events and to understand the dynamics of a system conditioned on a rare event. An established class of algorithms based on large deviations theory and control theory constructs provably asymptotically efficient importance sampling estimators. Dynamic importance sampling is one of these algorithms, in which the choice of biasing distribution adapts in the course of a simulation according to the solution of an Isaacs partial differential equation or by solving a sequence of variational problems. However, obtaining the solution of either problem may be expensive; the cost of solving these problems may even exceed that of performing simple Monte Carlo exhaustively. Deterministic couplings induced by transport maps allow one to relate a complex probability distribution of interest to a simple reference distribution (e.g. a standard Gaussian) through a monotone, invertible function. This diverts the complexity of the distribution of interest into a transport map. We extend the notion of transport maps between probability distributions on Euclidean space to probability distributions on path space following a similar procedure to Itô's coupling. The contraction principle is a key concept from large deviations theory that allows one to relate large deviations principles of different systems through deterministic couplings. We show that, with the ability to computationally construct transport maps, we can leverage the contraction principle to reformulate the sequence of variational problems required to implement dynamic importance sampling and make the computation more tractable. We apply this approach to simple rotorcraft models. We conclude by outlining future directions of research such as using the coupling interpretation to accelerate rare event simulation via particle splitting, using transport maps to learn large deviations principles, and accelerating inference of rare events.
by Benjamin Jiahong Zhang.
S.M.
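The baseline on which such constructions build, an asymptotically efficient importance sampling estimator obtained from a large-deviations change of measure, can be sketched for the simplest rare event of a random walk. The Gaussian increments, the mean-shift tilt and all parameters below are illustrative; the sketch does not implement the thesis's transport-map or coupling machinery.

    import numpy as np
    from math import erf, sqrt

    # Estimate p = P(X1 + ... + Xn > n*a) for iid N(0,1) increments and a > 0.
    n, a, n_sims = 50, 0.7, 100_000
    rng = np.random.default_rng(2)

    # Plain Monte Carlo almost never sees the event at these parameters.
    plain = (rng.normal(0.0, 1.0, (n_sims, n)).sum(axis=1) > n * a).mean()

    # Importance sampling: tilt the increments to mean a (the large-deviations change
    # of drift) and reweight by the likelihood ratio dP/dQ = exp(-a*S + n*a^2/2).
    s = rng.normal(a, 1.0, (n_sims, n)).sum(axis=1)
    est = np.mean(np.exp(-a * s + 0.5 * n * a * a) * (s > n * a))

    exact = 0.5 * (1.0 - erf(sqrt(n) * a / sqrt(2.0)))   # 1 - Phi(sqrt(n)*a)
    print("plain MC:", plain, " tilted IS:", est, " exact:", exact)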
APA, Harvard, Vancouver, ISO, and other styles