Dissertations / Theses on the topic 'Time limit for prosecution'

Consult the top 50 dissertations / theses for your research on the topic 'Time limit for prosecution.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Roth, Stéphanie. "Clandestinité et prescription de l'action publique." Phd thesis, Université de Strasbourg, 2013. http://tel.archives-ouvertes.fr/tel-01061930.

Abstract:
As a rule, the running of the limitation period for public prosecution does not depend on knowledge of the offence by the persons empowered to initiate criminal proceedings. The legislature takes as the starting point of the limitation period the day the acts were committed, not the day they were discovered. This rule admits an exception, however, when the offence is said to be clandestine. Because the public prosecutor and the victim could not have known of the existence of such an offence, the limitation period does not run until the facts have come to light and could be established under conditions allowing prosecution. The clandestinity exception thus prevents time from producing its destructive effect on the public prosecution; applying it keeps certain offences from going unpunished through the mere passage of time. While there is no doubt that the clandestine nature of an offence bars the limitation period from running, the very notion of clandestinity remains to be delimited. In positive law it covers so many different situations that it defies systematisation. This research concludes that the decisive criterion of clandestinity is the legitimate ignorance of the offence's existence on the part of those empowered to set the public prosecution in motion. Under the maxim contra non valentem agere non currit praescriptio, such established ignorance should justify postponing the starting point of the limitation period for any offence to the day the facts can be established by the public prosecutor or the injured party.
2

Neukirchen, Bernhard. "Continuous time limit of repeated quantum observations / Bernhard Neukirchen." Hannover : Technische Informationsbibliothek (TIB), 2016. http://d-nb.info/1122663315/34.

3

Wen, Quan. "Limit-order completion time in the London stock market." Thesis, Heriot-Watt University, 2009. http://hdl.handle.net/10399/2239.

Abstract:
This study develops an econometric model of limit-order completion time using survival analysis. Time-to-completion for both buy and sell limit orders is estimated using tick-by-tick UK order data. The study investigates the explanatory power of variables that measure order characteristics and market conditions, such as the limit-order price, limit-order size, best bid-offer spread, and market volatility. The generic results show that limit-order completion time depends more strongly on some variables than on others. This study also investigates how the dynamics of the market are incorporated into models of limit-order completion. The empirical results show that time-varying variables capture the state of an order book better than static ones. Moreover, this study examines the prediction accuracy of the proposed models. In addition, it investigates the intra-day pattern of order submission and time-of-day effects on limit-order completion time in the UK market. It provides evidence that limit orders placed in the afternoon period are expected to have the shortest completion times while orders placed in the mid-day period are expected to have the longest, and that the sensitivities of limit-order completion time to the explanatory variables vary over the trading day.
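The survival-analysis approach in this record can be made concrete with a small sketch. The following fits a Cox proportional hazards model to synthetic limit-order data; the column names (rel_price, order_size, spread) and the data-generating process are illustrative assumptions, not the thesis's actual UK dataset.

```python
# Minimal sketch of modeling limit-order completion time with survival
# analysis, assuming hypothetical covariates and synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "rel_price": rng.normal(0.0, 1.0, n),   # limit price relative to best quote
    "order_size": rng.lognormal(4.0, 1.0, n),
    "spread": rng.exponential(0.02, n),     # best bid-offer spread
})
# Synthetic completion times: worse-priced, larger orders take longer to fill.
hazard = np.exp(-0.8 * df["rel_price"] - 0.001 * df["order_size"] - 5.0 * df["spread"])
df["completion_time"] = rng.exponential(1.0 / hazard)
df["filled"] = (df["completion_time"] < 60.0).astype(int)  # censor unfilled orders
df.loc[df["filled"] == 0, "completion_time"] = 60.0

cph = CoxPHFitter()
cph.fit(df, duration_col="completion_time", event_col="filled")
cph.print_summary()  # hazard ratios for each order/market covariate
```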
4

Johnson, Charles A. "Common relevant operational picture : an analysis of effects on the prosecution of time-critical targets." Thesis, Monterey, California. Naval Postgraduate School, 2002. http://hdl.handle.net/10945/6048.

Abstract:
Approved for public release, distribution is unlimited
The conceptual template laid out in Joint Vision 2010 called for leveraging technological opportunities to achieve new and higher levels of effectiveness in a joint operating environment. Out of this template, U.S. Joint Forces Command developed the Common Relevant Operational Picture (CROP): a presentation of timely, fused, accurate, assured and relevant information. The CROP concept addresses battlespace awareness, information transport and processing, combat identification and joint command and control - four of the six high priority challenges identified by the Joint Staff for the 21st century. This thesis investigates CROP, comparing and contrasting it to uncoordinated separate service systems in a time-critical targeting setting. The Measures of Effectiveness (MOEs) used are the time to kill a target and the number of weapons expended. Previous work on this problem used an analytical model with some simplifying assumptions concerning processing time latency following target detection. In this thesis, a simulation is used to investigate the validity of some of the analytical model assumptions. The simulation also extends the model to more general command and control time distributions and adds a model of Battle Damage Assessment. The results provide distributional information about the MOEs, showing how improvements in information sharing and optimal weapons assignment due to CROP can improve system performance. However, this improvement is lost if processing time latency under CROP is too long.
5

Sera, Toru. "Functional limit theorem for occupation time processes of intermittent maps." Kyoto University, 2020. http://hdl.handle.net/2433/259719.

6

Klegrewe, Marc. "Strong Coupling Lattice QCD in the Continuous Time Limit / Marc Klegrewe." Bielefeld : Universitätsbibliothek Bielefeld, 2020. http://d-nb.info/1214806449/34.

7

Prasad, Rachit. "Time Spectral Adjoint Based Design for Flutter and Limit Cycle Oscillation Suppression." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98574.

Abstract:
When designing aircraft wing shapes, it is important to ensure that the flight envelope does not overlap with regions of flutter or Limit Cycle Oscillation (LCO). A quick assessment of these dynamic aeroelastic phenomena for various design candidates is key to successful design. Flutter-based design requires the sensitivity of the flutter parameters with respect to the design parameters to be known. Traditionally, frequency-domain methods have been used to predict flutter characteristics and their sensitivity. However, this approach is only applicable to linear or linearized models and cannot be applied to systems undergoing LCO or other nonlinear effects. Though the time accurate approach can overcome this problem, it is computationally expensive. Also, the unsteady adjoint formulation for sensitivity analysis requires the state and adjoint variables to be stored at every time step, which prohibitively increases the memory requirement. In this work, these problems have been overcome by implementing a time spectral method based approach to compute flutter onset, LCOs and their design sensitivities in a computationally efficient manner. The time spectral formulation approximates the solution as a discrete Fourier series and solves directly for the periodic steady state, leading to a steady formulation. This can make the time spectral approach faster than the time accurate approach. More importantly, the steady formulation of the time spectral method also eliminates the memory issues faced by the unsteady adjoint formulation. The time spectral based flutter/LCO prediction method was used to predict the flutter and LCO characteristics of the AGARD 445.6 wing and of a pitch/plunge airfoil section with the NACA 64A010 airfoil. Furthermore, the adjoint based sensitivity analysis was used to carry out aerodynamic shape optimization, with the objective of maximizing the flutter velocity with and without constraints on the drag coefficient. The resulting designs show a significant increase in the flutter velocity and in the corresponding LCO velocity profile. The resulting airfoils display a greater sensitivity to the transonic shock, which in turn leads to greater aerodynamic damping and hence to an increase in flutter velocity.
Doctor of Philosophy
When designing aircraft, dynamic aeroelastic effects such as flutter onset and Limit Cycle Oscillations need to be considered. At low enough flight speeds, any vibrations arising in the aircraft structure are damped out by the airflow. However, beyond a certain flight speed, instead of damping out the vibrations, the airflow accentuates them. This is known as flutter, and it can lead to catastrophic structural failure. Hence, during the aircraft design phase, it must be ensured that the aircraft will not experience flutter at its flight conditions. One of the contributions of this work is a fast and accurate method to predict flutter using computational modelling. Depending on the scenario, it is also possible that during flutter the vibrations in the structure increase to a certain amplitude before leveling off due to nonlinear physics. This condition is known as limit cycle oscillation. While limit cycle oscillations can arise from different kinds of nonlinearities, this work focuses on aerodynamic nonlinearities arising from shocks at transonic flight conditions. While limit cycle oscillations are undesirable because they can cause structural fatigue, they can also save the aircraft from imminent structural fracture, and hence it is important to predict them accurately as well. The main advantage of the method developed in this work is that the same method can be used to predict both the flutter onset condition and limit cycle oscillations. This is a novel development, as most traditional approaches in dynamic aeroelasticity cannot predict both effects. The developed flutter/LCO prediction method has then been used in design with the goal of achieving superior flutter characteristics. In this study, the shape of the baseline airfoil is changed with the goal of increasing the flutter velocity. This enables the designed system to fly faster without added weight. Since the design is carried out using a gradient based optimization approach, an efficient way to compute the gradient is needed. Traditional approaches to computing the gradient, such as the finite difference method, have a computational cost proportional to the number of design variables. This becomes a problem for shape design optimization, where a large number of design variables is required. This has been overcome by developing an adjoint based sensitivity analysis method, whose main advantage is that its computational cost is independent of the number of design variables, so a large number of design variables can be accommodated. The developed flutter/LCO prediction and adjoint based sensitivity analysis framework was used to carry out shape design for a pitch/plunge airfoil section. The objective of the design process was to maximize the flutter onset velocity with and without constraints on drag. The resulting optimized airfoils showed a significant increase in the flutter velocity.
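To make the time spectral idea in this record concrete, the sketch below builds the standard periodic (Fourier collocation) time-derivative operator for an odd number of time instances and checks it on a known periodic signal. This is a generic illustration of the method's core ingredient under textbook assumptions, not code from the thesis.

```python
# Minimal sketch of the time-spectral derivative operator D for N (odd) time
# instances over one period T: du/dt at the collocation points is D @ u.
import numpy as np

def time_spectral_operator(N: int, T: float) -> np.ndarray:
    """Fourier collocation derivative matrix for a periodic signal, N odd."""
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                D[i, j] = (np.pi / T) * (-1.0) ** (i - j) / np.sin(np.pi * (i - j) / N)
    return D

N, T = 9, 2.0 * np.pi
t = np.arange(N) * T / N
u = np.sin(t)                            # periodic steady-state test signal
dudt = time_spectral_operator(N, T) @ u
print(np.max(np.abs(dudt - np.cos(t))))  # ~1e-15: spectral accuracy
```

Because the periodic steady state is solved for directly, time-marching (and storing states at every time step for the adjoint) is avoided, which is the memory advantage the abstract describes.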
8

Hörnedal, Niklas. "Generalizations of the Mandelstam-Tamm Quantum Speed Limit." Thesis, Stockholms universitet, Fysikum, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-193265.

Abstract:
Quantum speed limits are lower bounds on the evolution time of quantum systems. In this thesis, we consider closed quantum systems. We investigate how different principal bundles offer a geometrical method for obtaining generalizations of the Mandelstam-Tamm quantum speed limit for mixed states. We look at three different principal bundles, from which we derive two already known quantum speed limits, the Uhlmann and Andersson QSLs, and one which is new, the Grassmann QSL. We also investigate the tightness of these quantum speed limits and how they compare with each other.
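For reference, the original Mandelstam-Tamm bound for pure states that these generalizations build on can be stated as follows (a standard result, quoted here for orientation rather than taken from the thesis):

```latex
% Mandelstam-Tamm quantum speed limit: the minimal time for a closed system
% to evolve between two orthogonal pure states under a Hamiltonian H.
\[
  \tau \;\ge\; \frac{\pi \hbar}{2\,\Delta E},
  \qquad
  \Delta E = \sqrt{\langle \psi | H^{2} | \psi \rangle
                   - \langle \psi | H | \psi \rangle^{2}} .
\]
```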
9

Caldwell, Thomas S. Jr. "First limit from a surface run of a 10 liter Dark Matter Time Projection Chamber." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/51605.

Abstract:
Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Physics, 2009.
Includes bibliographical references (leaves 35-37).
A 10 liter prototype Dark Matter Time Projection Chamber (DMTPC) is operated on the surface of the earth at 75 Torr using carbon tetrafluoride (CF4) as a target material to obtain a 24.57 gram-day exposure. A limit is set on a likely dark matter candidate, the weakly interacting massive particle. This is the first limit from the DMTPC detector, and the goal is to understand the sensitivity of the detector. In addition, this detector is used to measure the mean energy and attenuation coefficient of electrons propagating in CF4.
by Thomas S. Caldwell, Jr.
S.B.
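The 24.57 gram-day exposure quoted above is consistent with a quick ideal-gas estimate; the sketch below reproduces the arithmetic under assumed run conditions (the temperature is a guess for illustration, not a value from the thesis).

```python
# Rough ideal-gas estimate of the CF4 target mass in a 10 L chamber at 75 Torr,
# and the live-time implied by a 24.57 gram-day exposure. Temperature is an
# assumed value; the thesis's actual run conditions may differ.
P = 75.0 * 133.322      # pressure in Pa (75 Torr)
V = 10.0e-3             # volume in m^3 (10 liters)
T = 293.15              # assumed lab temperature in K
R = 8.314               # J/(mol K)
M_CF4 = 88.0            # g/mol (12 + 4*19)

n_mol = P * V / (R * T)         # moles of CF4
mass_g = n_mol * M_CF4          # target mass in grams (~3.6 g)
livetime_days = 24.57 / mass_g  # days needed for the quoted exposure (~7 days)
print(f"target mass ~ {mass_g:.2f} g, live-time ~ {livetime_days:.1f} days")
```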
10

Cunningham, Ryan. "EXAMINING DYNAMIC VARIABLE SPEED LIMIT STRATEGIES FOR THE REDUCTION OF REAL-TIME CRASH RISK ON FREEWAYS." Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3117.

Abstract:
Recent research at the University of Central Florida involving crashes on Interstate-4 in Orlando, Florida has led to the creation of new statistical models capable of determining the crash risk on the freeway (Abdel-Aty et al., 2004, 2005; Pande and Abdel-Aty, 2006). These models are able to calculate the rear-end and lane-change crash risks along the freeway in real-time through the use of static information at various locations along the freeway as well as the real-time traffic data obtained by loop detectors. Since these models use real-time traffic data, they are capable of calculating rear-end and lane-change crash risk values as the traffic flow conditions are changing on the freeway. The objective of this study is to examine the potential benefits of variable speed limit implementation techniques for reducing the crash risk along the freeway. Variable speed limits are an ITS strategy typically used upstream of a queue in order to reduce the effects of congestion. By lowering the speeds of the vehicles approaching a queue, more time is given for the queue to dissipate from the front before it continues to grow from the back. This study uses variable speed limit strategies in a corridor-wide attempt to reduce rear-end and lane-change crash risks where speed differences between upstream and downstream vehicles are high. The idea of homogeneous speed zones was also introduced in this study to determine the distance over which variable speed limits should be implemented from a station of interest. This is unique since it is the first time a dynamic distance has been considered for variable speed limit implementation. Several VSL strategies were found to successfully reduce the rear-end and lane-change crash risks at low-volume traffic conditions (60% and 80% loading conditions). In every case, the most successful treatments involved the lowering of upstream speed limits by 5 mph and the raising of downstream speed limits by 5 mph. In the free-flow condition (60% loading), the best treatments involved the more liberal threshold for defining homogeneous speed zones (5 mph) and the more liberal implementation distance (entire speed zone), as well as a minimum time period of 10 minutes. This treatment was actually shown to significantly reduce the network travel time by 0.8%. It was also shown that this particular implementation strategy (lowering upstream, raising downstream) is wholly resistant to the effects of crash migration in the 60% loading scenario. In the condition approaching congestion (80% loading), the best treatment again involved the more liberal threshold for homogeneous speed zones (5 mph), yet the more conservative implementation distance (half the speed zone), along with a minimum time period of 5 minutes. This particular treatment arose as the best due to its unique capability to resist the increasing effects of crash migration in the 80% loading scenario. It was shown that the treatments implementing over half the speed zone were more robust against crash migration than other treatments. The best treatment exemplified the greatest benefit in reduced sections and the greatest resistance to crash migration in other sections. In the 80% loading scenario, the best treatment increased the network travel time by less than 0.4%, which is deemed acceptable. No treatment was found to successfully reduce the rear-end and lane-change crash risks in the congested traffic condition (90% loading). This is attributed to the fact that, in the congested state, the speed of vehicles is subject to the surrounding traffic conditions and not to the posted speed limit. Therefore, changing the posted speed limit does not affect the speed of vehicles in a desirable manner. These conclusions agree with Dilmore (2005).
M.S.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering MS
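The corridor-wide treatment described in this record (lower upstream limits, raise downstream limits where speed differentials are high) can be sketched as a simple update rule. The 10 mph trigger threshold, the clamping band, and the station data layout below are illustrative assumptions, not the study's exact algorithm.

```python
# Illustrative variable speed limit update: where a downstream station is much
# slower than its upstream neighbor (a queue is forming), lower the upstream
# limit and raise the downstream limit to smooth the speed differential.
from dataclasses import dataclass

@dataclass
class Station:
    posted_limit: float   # mph
    avg_speed: float      # mph, from loop detectors

def update_limits(stations: list[Station], diff_threshold: float = 10.0,
                  step: float = 5.0, max_change: float = 10.0) -> None:
    base = [s.posted_limit for s in stations]
    for up, down in zip(stations, stations[1:]):
        if up.avg_speed - down.avg_speed > diff_threshold:
            up.posted_limit -= step      # slow traffic approaching the queue
            down.posted_limit += step    # help the queue discharge
    for s, b in zip(stations, base):     # clamp to a sane band around the base
        s.posted_limit = min(max(s.posted_limit, b - max_change), b + max_change)

corridor = [Station(55, 58), Station(55, 52), Station(55, 35), Station(55, 50)]
update_limits(corridor)
print([s.posted_limit for s in corridor])  # [55, 50, 60, 55]
```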
11

da Silveira Filho, Getulio Borges. "Contributions to strong approximations in time series with applications in nonparametric statistics and functional limit theorems." Thesis, London School of Economics and Political Science (University of London), 1991. http://etheses.lse.ac.uk/2813/.

Abstract:
This thesis is concerned with applications in probability and statistics of approximation theorems for weakly dependent random vectors. The basic approach is to approximate partial sums of weakly dependent random vectors by corresponding partial sums of independent ones. In chapter 2 we apply this general idea to obtain an almost sure invariance principle for partial sums of Rd-valued absolutely regular processes. In chapter 3 we apply the results of chapter 2 to obtain functional limit theorems for non-stationary fractionally differenced processes. Chapter 4 deals with applications of approximation theorems to nonparametric estimation of density and regression functions under weakly dependent samples. We consider L1-consistency of kernel and histogram density estimates. Universal consistency of the partition estimates of the regression function is also studied. Finally, in chapter 5 we consider necessary conditions for L1-consistency of kernel density estimates under weakly dependent samples as an application of a Poisson approximation theorem for sums of uniform mixing Bernoulli random variables.
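Schematically, the strong approximations underlying this work take the following form (a generic statement of an almost sure invariance principle, given for orientation; the thesis's precise rates and conditions differ):

```latex
% Schematic almost sure invariance principle: centered partial sums of a weakly
% dependent R^d-valued process are coupled with a Brownian motion W.
\[
  S_n \;=\; \sum_{k=1}^{n} X_k
  \;=\; \Sigma^{1/2}\, W(n) \;+\; O\!\big(n^{1/2-\lambda}\big)
  \quad \text{a.s., for some } \lambda > 0,
\]
% where \Sigma is the long-run covariance matrix of the process. Functional
% limit theorems then follow from the corresponding results for W.
```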
12

Azevedo, Rita Campanacho Bacelar. "O limite na óptica da Arquitetura paisagista. Caso de estudo das salinas de Molentargius." Master's thesis, ISA/UL, 2015. http://hdl.handle.net/10400.5/9204.

Abstract:
Master's degree in Landscape Architecture - Instituto Superior de Agronomia
A concept of great amplitude, the limit has been approached by different areas of knowledge over time. The concept of limit has an inseparable bond with landscape, and the contribution of several multidisciplinary studies has strengthened the activity of the landscape architect in its functions of study, formalization of the design and overall planning. In this sense, philosophy, anthropology, geography, land use and ecology are used as bases for the development of this thesis, which works as a theoretical reflection on the limit, its transversal meaning and its practical application. Additionally, the intersection between the theoretical content and the design work developed in office has enabled a new approach to the project, with more consistent and better defined concepts, associated with an equally unifying theme: time.
13

Ouedraogo, Nayabtigungu Hendrix. "The Safety Impact of Raising Trucks' Speed Limit on Rural Freeways in Ohio." University of Dayton / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1576248242725121.

14

Zigo, Stefan John. "Photoionization of isomeric molecules: from the weak-field to the strong-field limit." Diss., Kansas State University, 2017. http://hdl.handle.net/2097/35440.

Abstract:
Doctor of Philosophy
Department of Physics
Carlos A. Trallero
Ultra-fast spectroscopy has become a common tool for understanding the structure and dynamics of atoms and molecules, as evidenced by the award of the 1999 Nobel Prize in Chemistry to Ahmed H. Zewail for his pioneering work in femtochemistry. The use of shorter and more energetic laser pulses has given rise to high intensity table-top light sources in the visible and infrared which have pushed spectroscopic measurements of atomic and molecular systems into the strong-field limit. Within this limit, there are unique phenomena that are still not well understood. Many such phenomena involve a photoionization step. For three decades, there has been a steady investigation of the single ionization of atomic systems in the strong-field regime, both experimentally and theoretically. The investigation of the ionization of more complex molecular systems is of great current interest and will help with the understanding of ultra-fast spectroscopy as a whole. In this thesis, we explore the single ionization of molecules in the presence of a strong electric field. In particular, we study molecular isomer pairs, molecules that are the same elementally but different structurally. The main goal of this work is to compare the ionization yields of these similar molecular pairs as a function of intensity and to gain some insight into how the differences in their structure contribute to how they ionize in the strong-field limit. Through our studies we explore the wavelength dependence of the photoionization yield in order to move from the multi-photon regime of ionization to the tunneling regime with increasing wavelength. Also, in contrast to our strong-field studies, we investigate isomeric molecules in the weak-field limit through single photon absorption by measuring the total ionization yield as a function of photon energy. Our findings shed light on the complexities of photoionization in both the strong- and weak-field limits and will serve as examples for the continued understanding of single ionization both experimentally and theoretically.
15

Crawford, Jackie H. III. "Factors that limit control effectiveness in self-excited noise driven combustors." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/43647.

Abstract:
A full Strouhal number thermo-acoustic model is proposed for the feedback control of self-excited noise driven combustors. The inclusion of time delays in the volumetric heat release perturbation models creates unique behavioral characteristics which are not properly reproduced within current low Strouhal number thermo-acoustic models. New analysis tools using probability density functions are introduced which enable exact expressions for the statistics of a time delayed system. Additionally, preexisting tools from applied mathematics and control theory for the spectral analysis of time delay systems are introduced to the combustion community. These new analysis tools can be used to extend the sensitivity function analysis used in control theory to explain the limits to control effectiveness in self-excited combustors. The control effectiveness of self-excited combustors with actuator constraints is found to be most sensitive to the location of non-minimum phase zeros. Modeling the non-minimum phase zeros correctly requires accurate volumetric heat release perturbation models. Designs that remove non-minimum phase zeros are more likely to have poles in the right half of the complex plane. As a result, unstable combustors are inherently more responsive to feedback control.
16

Valen, Magnus. "Launch and recovery of ROV: Investigation of operational limit from DNV Recommended Practices and time domain simulations in SIMO." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2010. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-11626.

Abstract:
Offshore contractors seek to operate their remotely operated vehicles in the widest possible range of sea conditions, where launch and recovery through the splash zone in particular are critical phases of the offshore operation. The analytical methods for calculating the operational limit proposed by the guidelines in DNV Recommended Practices may lead to an over-estimation of the hydrodynamic forces and consequently to an unduly restrictive operational limit. Accurate predictions of the hydrodynamic forces are important, and there is an opening in the regulations which allows the use of other analysis tools to determine the forces on the ROV system during launch and recovery. The main objective of the master thesis was to carry out splash zone analyses for DOF Subsea's ROV system by use of DNV Recommended Practices and to compare the results with those found by modeling the marine operation in the time domain simulation program SIMO. This involved a broad study of SIMO and a complete modeling of the offshore operation, including calculation of the hydrodynamic data for the vessel Skandi Bergen and modeling of the ROV system. In SIMO, the sea state of 4.5 m significant wave height was investigated in particular, since this is the current operational limit for DOF Subsea's ROV system. The investigation of operational limits by use of the analytical method and SIMO has shown that DNV Recommended Practices over-estimates the hydrodynamic forces acting in the wave zone, leading to a restrictive operational limit in comparison to the time domain calculations in SIMO. The calculations by the analytical method have shown that the operational limit for launch and recovery of the ROV should be limited to 2.5 m significant wave height, while analyses in SIMO have shown that the current operational limit of 4.5 m can be justified. However, it is seen that the possibility of a slack umbilical is present in the sea state of 4.5 m for peak periods in the range of Tp = 6 - 9 s. It is also to be noted that the slack umbilical occurrences show a strong dependency on the vessel heading. Furthermore, the snap loads induced by the slack umbilical occurrences are not found to be critical in the irregular wave analyses. This can justify the operational limit of 4.5 m significant wave height as long as the weather is assessed by experienced personnel during deployment through the wave zone and Skandi Bergen is positioned in head seas.
17

Saad, Ramadan. "Characterizations of limit laws of residual life time distributions by generalizations of the lack of memory property / Ramadan Saad." Dortmund : Universitätsbibliothek Technische Universität Dortmund, 2004. http://d-nb.info/1011531747/34.

18

Abreu, Ana Rita Vicente Drumond de. "O som do espaço." Master's thesis, Universidade de Lisboa. Faculdade de Arquitetura, 2015. http://hdl.handle.net/10400.5/13586.

19

Elezovic, Suad. "Modeling financial volatility : A functional approach with applications to Swedish limit order book data." Doctoral thesis, Umeå universitet, Statistik, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-18757.

Abstract:
This thesis is designed to offer an approach to modeling volatility in the Swedish limit order market. Realized quadratic variation is used as an estimator of the integrated variance, which is a measure of the variability of a stochastic process in continuous time. Moreover, a functional time series model for the realized quadratic variation is introduced. A two-step estimation procedure for such a model is then proposed. Some properties of the proposed two-step estimator are discussed and illustrated through an application to high-frequency financial data and simulated experiments. In Paper I, the concept of realized quadratic variation, obtained from the bid and ask curves, is presented. In particular, an application to the Swedish limit order book data is performed using signature plots to determine an optimal sampling frequency for the computations. The paper is the first study that introduces realized quadratic variation in a functional context. Paper II introduces functional time series models and apply them to the modeling of volatility in the Swedish limit order book. More precisely, a functional approach to the estimation of volatility dynamics of the spreads (differences between the bid and ask prices) is presented through a case study. For that purpose, a two-step procedure for the estimation of functional linear models is adapted to the estimation of a functional dynamic time series model. Paper III studies a two-step estimation procedure for the functional models introduced in Paper II. For that purpose, data is simulated using the Heston stochastic volatility model, thereby obtaining time series of realized quadratic variations as functions of relative quantities of shares. In the first step, a dynamic time series model is fitted to each time series. This results in a set of inefficient raw estimates of the coefficient functions. In the second step, the raw estimates are smoothed. The second step improves on the first step since it yields both smooth and more efficient estimates. In this simulation, the smooth estimates are shown to perform better in terms of mean squared error. Paper IV introduces an alternative to the two-step estimation procedure mentioned above. This is achieved by taking into account the correlation structure of the error terms obtained in the first step. The proposed estimator is based on seemingly unrelated regression representation. Then, a multivariate generalized least squares estimator is used in a first step and its smooth version in a second step. Some of the asymptotic properties of the resulting two-step procedure are discussed. The new procedure is illustrated with functional high-frequency financial data.
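As a concrete illustration of the realized quadratic variation estimator this thesis builds on, the sketch below computes it from a simulated intraday price path; the one-second grid, the price process and the volatility level are assumptions for the example, not the Swedish order book data.

```python
# Minimal sketch: realized quadratic variation (realized variance) of a price
# process, computed as the sum of squared log-returns over an intraday grid.
import numpy as np

rng = np.random.default_rng(0)
n_ticks = 23400                        # one 6.5 h session sampled once per second
true_vol = 0.2 / np.sqrt(252)          # assumed daily volatility (~20%/yr)
log_price = np.cumsum(true_vol / np.sqrt(n_ticks) * rng.standard_normal(n_ticks))

def realized_variance(log_price: np.ndarray, every: int) -> float:
    """Sum of squared log-returns sampled every `every` ticks."""
    returns = np.diff(log_price[::every])
    return float(np.sum(returns ** 2))

# Sparser sampling (e.g. a 5-minute grid) trades efficiency for robustness to
# microstructure noise - the balance that signature plots are used to strike.
print(realized_variance(log_price, every=1))     # tick-by-tick
print(realized_variance(log_price, every=300))   # 5-minute grid
print(true_vol ** 2)                             # target integrated variance
```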
20

Yang, Taeyoung. "Fundamental Limits on Antenna Size for Frequency and Time Domain Applications." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/39334.

Abstract:
As ubiquitous wireless communication becomes part of life, the demand for antenna miniaturization and interference reduction becomes more extreme. However, antenna size and performance are limited by radiation physics, not technology. In order to understand antenna radiation and energy storage mechanisms, classical and alternative viewpoints of radiation are discussed. Contrary to the classical picture of antenna radiation, it is shown that the entire antenna fields contribute to both radiation and energy storage, with varying total energy velocity during the radiation process. These observations were obtained by investigating the impedance, power, Poynting vector, and energy velocity of a radiating antenna. Antenna transfer functions were investigated to understand the real-world challenges in antenna design and overall performance. An extended model, using both the singularity expansion method and spherical mode decomposition, is introduced to analyze the characteristics of various antenna types including resonant, frequency-independent, and ultra-wideband antennas. It is shown that the extended model is useful for understanding real-world antennas. Observations from antenna radiation physics and transfer function modeling lead to both corrections and extensions of the classical fundamental-limit theory on antenna size. Both field and circuit viewpoints of the corrected limit theory are presented. The corrected theory is extended to multi-mode excitation cases and also to ultra-wideband and frequency-independent antennas. Further investigation of the fundamental-limit theory provides new innovations, including a low-Q antenna design approach that reduces antenna interference issues and a generalized approach for designing an antenna close to the theoretical size limit. Design examples applying these new approaches, with simulations and measurements, are presented. The extended limit theory and the developed antenna design approaches will find many applications in optimizing compact antenna solutions with reduced near-field interactions.
Ph. D.
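The classical size limit this work corrects and extends is usually stated through the Chu bound on the radiation quality factor Q of an antenna enclosed in a sphere of radius a (quoted here in its common lowest-mode form for context; the thesis's corrected expressions differ):

```latex
% Chu limit (lowest TM mode): minimum radiation Q for an electrically small
% antenna fitting inside a sphere of radius a, with k the free-space wavenumber.
\[
  Q \;\ge\; \frac{1}{(ka)^{3}} + \frac{1}{ka},
\]
% so smaller electrical size ka forces higher Q, i.e. narrower bandwidth.
```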
21

Azizsoltani, Hamoon, and Hamoon Azizsoltani. "Risk Estimation of Nonlinear Time Domain Dynamic Analyses of Large Systems." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/624545.

Abstract:
A novel concept of multiple deterministic analyses is proposed to design safer and more damage-tolerant structures, particularly when excited by dynamic including seismic loading in time domain. Since the presence of numerous sources of uncertainty cannot be avoided or overlooked, the underlying risk is estimated to compare design alternatives. To generate the implicit performance functions explicitly, the basic response surface method is significantly improved. Then, several surrogate models are proposed. The advanced factorial design and Kriging method are used as the major building blocks. Using these basic schemes, seven alternatives are proposed. Accuracies of these schemes are verified using basic Monte Carlo simulations. After verifying all seven alternatives, the capabilities of the three most desirable schemes are compared using a case study. They correctly identified and correlated damaged states of structural elements in terms of probability of failure using only few hundreds of deterministic analyses. The modified Kriging method appears to be the best technique considering both efficiency and accuracy. Estimating the probability of failure, the post-Northridge seismic design criteria are found to be appropriate. After verifying the proposed method, a Site-Specific seismic safety assessment method for nonlinear structural systems is proposed to generate a suite of ground excitation time histories. The information of risk is used to design a structure more damage-tolerant. The proposed procedure is verified and showcased by estimating risks associated with three buildings designed by professional experts in the Los Angeles area satisfying the post-Northridge design criteria for the overall lateral deflection and inter-story drift. The accuracy of the estimated risk is again verified using the Monte Carlo simulation technique. In all cases, the probabilities of collapse are found to be less than 10% when excited by the risk-targeted maximum considered earthquake ground motion satisfying the intent of the code. The spread in the reliability indexes for each building for both limit states cannot be overlooked, indicating the significance of the frequency contents. The inter story drift is found to be more critical than the overall lateral displacement. The reliability indexes for both limit states are similar only for few cases. The author believes that the proposed methodology is an alternative to the classical random vibration and simulation approaches. The proposed site-specific seismic safety assessment procedure can be used by practicing engineers for routine applications. The proposed reliability methodology is not problem-specific. It is capable of handling systems with different levels of complexity and scalability, and it is robust enough for multi-disciplinary routine applications. In order to show the multi-disciplinary application of the proposed methodology, the probability of failure of lead-free solders in Ball Grid Array 225 surface-mount packaging for a given loading cycle is estimated. The accuracy of the proposed methodology is verified with the help of Monte Carlo simulation. After the verification, probability of failure versus loading cycles profile is calculated. Such a comprehensive study of its lifetime behavior and the corresponding reliability analyses can be useful for sensitive applications.
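The Monte Carlo verification used throughout this work can be sketched in a few lines; the limit state function below (drift capacity minus demand) and its lognormal distributions are invented for illustration, not taken from the thesis.

```python
# Minimal sketch of Monte Carlo reliability estimation: the probability of
# failure is the fraction of samples where the limit state g(X) < 0.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 1_000_000
capacity = rng.lognormal(mean=np.log(0.025), sigma=0.2, size=N)  # drift capacity
demand = rng.lognormal(mean=np.log(0.015), sigma=0.35, size=N)   # drift demand

g = capacity - demand                 # limit state: failure when g < 0
pf = np.mean(g < 0.0)
beta = -norm.ppf(pf)                  # corresponding reliability index
print(f"pf ~ {pf:.2e}, beta ~ {beta:.2f}")
```

Surrogates such as the Kriging schemes compared in the thesis aim to reproduce this estimate with a few hundred deterministic analyses instead of a million.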
22

Haleem, Kirolos Maged. "EXPLORING THE POTENTIAL OF COMBINING RAMP METERING AND VARIABLE SPEED LIMIT STRATEGIES FOR ALLEVIATING REAL-TIME CRASH RISK ON URBAN FREEWAYS." Master's thesis, University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3059.

Abstract:
Research recently conducted at the University of Central Florida involving crashes on Interstate-4 in Orlando, Florida has led to the creation of new statistical and neural network models that are capable of determining the crash risk on the freeway (Abdel-Aty et al., 2004, 2005; Pande and Abdel-Aty, 2006). These models are able to calculate rear-end and lane-change crash risks along the freeway in real-time through the use of static information at various locations along the freeway as well as real-time traffic data obtained by loop detectors. Since these models use real-time traffic data, they are capable of calculating rear-end and lane-change crash risk values as the traffic flow conditions are changing on the freeway. The objective of this study is to examine the potential benefits of combining two ITS strategies (Ramp Metering and Variable Speed Limits) for reducing the crash risk (both rear-end and lane-change crash risks) along the I-4 freeway. To this end, a 36.25-mile section of I-4 running through Orlando, FL was simulated using the PARAMICS micro-simulation program. Gayah (2006) used the same network to examine the potential benefits of two ITS strategies separately (Route Diversion and Ramp Metering) for reducing the crash risk along the freeway by changing traffic flow parameters. Cunningham (2007) also used the same network to examine the potential benefits of implementing the Variable Speed Limits strategy for reducing the crash risk along the freeway. Since the same network is used, the calibration and validation procedures used in this study are the same as in these previous two studies. This study simulates three volume loading scenarios on the I-4 freeway: 60, 80 and 90 percent loading. From the final experimental design for the 60% loading, it was concluded that implementing the VSL strategy alone was more beneficial to the network than either implementing Ramp Metering everywhere (through the whole network) in conjunction with VSL everywhere or implementing Ramp Metering downtown (in downtown areas only) in conjunction with VSL everywhere. This was concluded from the comparison of the results of this study with the results from Cunningham (2007). However, implementing Ramp Metering either everywhere or downtown in conjunction with VSL everywhere showed safety benefits across the simulated network as well as a reduction in the total travel time. The best case for implementing Ramp Metering everywhere in conjunction with VSL everywhere used a homogeneous speed zone threshold of 2.5 mph, a speed change distance of half the speed zone and a speed change time of 5 minutes in conjunction with a 60-second cycle length for the Zone algorithm, a critical occupancy of 0.17 and a 30-second cycle length for the ALINEA algorithm. The best case for implementing Ramp Metering downtown in conjunction with VSL everywhere used a homogeneous speed zone threshold of 2.5 mph, a speed change distance of half the speed zone and a speed change time of 10 minutes in conjunction with a 60-second cycle length for the Zone algorithm, a critical occupancy of 0.17 and a 30-second cycle length for the ALINEA algorithm. For the 80% loading, it was concluded that either implementing Ramp Metering everywhere in conjunction with VSL everywhere or implementing Ramp Metering downtown in conjunction with VSL everywhere was more beneficial to the network than implementing the VSL strategy alone. This was also concluded from the comparison of the results of this study with the results from Cunningham (2007). Moreover, it was concluded that implementing Ramp Metering everywhere in conjunction with VSL everywhere showed higher safety benefits across the simulated network than implementing Ramp Metering downtown in conjunction with VSL everywhere. Both of them increased the total travel time slightly, but this was deemed acceptable. Additionally, both of them showed successive fluctuations and variations in the average lane-change crash risk versus time step. The best case for implementing Ramp Metering everywhere in conjunction with VSL everywhere used a homogeneous speed zone threshold of 5 mph, a speed change distance of half the speed zone and a speed change time of 30 minutes in conjunction with a 60-second cycle length for the Zone algorithm, a critical occupancy of 0.17 and a 30-second cycle length for the ALINEA algorithm. The best case for implementing Ramp Metering downtown in conjunction with VSL everywhere used a homogeneous speed zone threshold of 5 mph, a speed change distance of half the speed zone and a speed change time of 30 minutes in conjunction with a 60-second cycle length for the Zone algorithm, a critical occupancy of 0.17 and a 30-second cycle length for the ALINEA algorithm. Searching for the best way to implement both Ramp Metering and VSL strategies in conjunction with each other, an in-depth investigation was conducted in order to remove the fluctuations and variations in the crash risk with time step (through the entire simulation period). The entire simulation period is 3 hours, and each time step is 5 minutes, so there are 36 time steps representing the entire simulation period. This in-depth investigation led to the idea of not implementing VSL at consecutive zones (using a gap of one zone or more). This idea was then applied to the best case of implementing Ramp Metering and VSL everywhere at the 80% loading, and the successive fluctuations and variations in the crash risk with time step were removed. Moreover, much better safety benefits were found, confirming that the idea was very beneficial to the network. For the 90% loading, it was concluded that implementing the Ramp Metering strategy alone (the Zone algorithm in downtown areas and the ALINEA algorithm in non-downtown areas) was more beneficial to the network than implementing Ramp Metering everywhere in conjunction with VSL everywhere. This was concluded from the comparison of the results of this study with the results from Gayah (2006). However, implementing Ramp Metering everywhere in conjunction with VSL everywhere showed safety benefits across the simulated network as well as a reduction in the total travel time. The best case used a homogeneous speed zone threshold of 2.5 mph, a speed change distance of the entire speed zone and a speed change time of 20 minutes in conjunction with a 60-second cycle length for the Zone algorithm, a critical occupancy of 0.17 and a 30-second cycle length for the ALINEA algorithm. In summary, Ramp Metering was more beneficial in congested situations, while Variable Speed Limits were more beneficial in free-flow conditions. In conditions approaching congestion, the combination of Ramp Metering and Variable Speed Limits produced the best benefits. These results illustrate the significant potential of ITS strategies to improve the safety and efficiency of urban freeways.
M.S.
Department of Civil and Environmental Engineering
Engineering and Computer Science
Civil Engineering MS
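The ALINEA ramp metering law referenced in this record (used with a critical occupancy of 0.17 and a 30-second cycle) has a simple integral-controller form. The sketch below is the textbook version with an assumed regulator gain and rate bounds, not the exact PARAMICS implementation.

```python
# Textbook ALINEA ramp metering update: the metering rate is nudged each cycle
# in proportion to the gap between a target (critical) occupancy and the
# occupancy measured downstream of the on-ramp.
def alinea_rate(prev_rate: float, occ_measured: float,
                occ_target: float = 0.17, K_R: float = 7000.0,
                r_min: float = 200.0, r_max: float = 1800.0) -> float:
    """New metering rate in veh/h for the next 30 s cycle. K_R is an assumed
    regulator gain (7000 veh/h per unit occupancy, i.e. the classic ~70 veh/h
    per percent occupancy); occupancy is expressed as a fraction."""
    rate = prev_rate + K_R * (occ_target - occ_measured)
    return min(max(rate, r_min), r_max)   # keep the rate physically meaningful

rate = 900.0
for occ in (0.12, 0.15, 0.19, 0.22):      # occupancy drifting past critical
    rate = alinea_rate(rate, occ)
    print(f"measured occ {occ:.2f} -> metering rate {rate:.0f} veh/h")
```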
23

Solders, Andreas. "Precision mass measurements : Final limit of SMILETRAP I and the developments of SMILETRAP II." Doctoral thesis, Stockholms universitet, Fysikum, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-56777.

Abstract:
The subject of this thesis is high-precision mass measurements performed with Penning trap mass spectrometers (PTMS). In particular, it describes the SMILETRAP I PTMS and the final results obtained with it: the mass of 40Ca and that of the proton. The mass of 40Ca is an indispensable input in the evaluation of measurements of the bound-electron g-factor, used to test quantum electrodynamical calculations in strong fields. The value obtained agrees with available literature values but has a ten times higher precision. The measurement of the proton mass, considered a fundamental physical constant, was performed with the aim of validating other Penning trap results and of testing the limit of SMILETRAP I. It was also anticipated that a measurement at a relative precision close to 10^-10 would give insight into how to treat certain systematic uncertainties. The result is a value of the proton mass in agreement with earlier measurements and with an unprecedented precision of 1.8×10^-10. Vital for the achieved precision of the proton mass measurement was the use of the Ramsey excitation technique. This technique, how it was implemented at SMILETRAP I and the benefits derived from it are discussed in the thesis and in one of the included papers. The second part of the thesis describes the improved SMILETRAP II setup at the S-EBIT laboratory, AlbaNova. All major changes and upgrades compared to SMILETRAP I are discussed. These include, apart from the Ramsey excitation technique, higher ionic charge states, improved temperature stabilization, longer run times, different reference ions, a stronger and more stable magnetic field and more efficient ion detection. Altogether these changes should reduce the uncertainty in future mass determinations by an order of magnitude, possibly down to 10^-11.
At the time of the doctoral defense, the following paper was unpublished and had a status as follows: Paper 9: Accepted.
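Penning trap mass spectrometry determines masses via the ion's cyclotron frequency in the trap's magnetic field; the underlying relation, which turns a mass measurement into a frequency-ratio measurement against a reference ion, is (standard physics, stated for context rather than quoted from the thesis):

```latex
% Cyclotron frequency of an ion with charge q and mass m in a magnetic field B.
% Measuring the frequency ratio against a well-known reference ion in the same
% field yields the mass ratio, independent of the exact value of B.
\[
  \nu_c = \frac{qB}{2\pi m}
  \qquad\Longrightarrow\qquad
  \frac{m}{m_{\mathrm{ref}}}
  = \frac{q\,\nu_{c,\mathrm{ref}}}{q_{\mathrm{ref}}\,\nu_{c}} .
\]
```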
24

Newbury, James. "Limit order books, diffusion approximations and reflected SPDEs : from microscopic to macroscopic models." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:825d9465-842b-424b-99d0-ff4dfa9ebfc5.

Abstract:
Motivated by a zero-intelligence approach, the aim of this thesis is to unify the microscopic (discrete price and volume), mesoscopic (discrete price and continuous volume) and macroscopic (continuous price and volume) frameworks of limit order books, with a view to providing a novel yet analytically tractable description of their behaviour in a high to ultra high-frequency setting. Starting with the canonical microscopic framework, the first part of the thesis examines the limiting behaviour of the order book process when order arrival and cancellation rates are sent to infinity and when volumes are considered to be of infinitesimal size. Mathematically speaking, this amounts to establishing the weak convergence of a discrete-space process to a mesoscopic diffusion limit. This step is initially carried out in a reduced-form context, in other words, by simply looking at the best bid and ask queues, before the procedure is extended to the whole book. This subsequently leads us to the second part of the thesis, which is devoted to the transition between mesoscopic and macroscopic models of limit order books, where the general idea is to send the tick size to zero, or equivalently, to consider infinitely many price levels. The macroscopic limit is then described in terms of reflected SPDEs which typically arise in stochastic interface models. Numerical applications are finally presented, notably via the simulation of the mesoscopic and macroscopic limits, which can be used as market simulators for short-term price prediction or optimal execution strategies.
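A minimal zero-intelligence caricature of the reduced-form setting (best bid queue only): limit orders arrive and cancellations/market orders deplete at Poisson rates, and under heavy-traffic rescaling the queue length behaves diffusively. The rates, speed-up factor and sample sizes below are illustrative assumptions, not the thesis's model.

```python
# Minimal zero-intelligence sketch: the best-bid queue as a birth-death process
# (arrivals at rate lam, depletions at rate mu when the queue is non-empty).
# Rescaling rates by n and volume by sqrt(n) hints at the diffusion limit.
import numpy as np

def simulate_queue(lam: float, mu: float, T: float, rng) -> int:
    """Queue length at time T, starting from an empty best-bid queue."""
    t, q = 0.0, 0
    while True:
        rate = lam + (mu if q > 0 else 0.0)
        t += rng.exponential(1.0 / rate)
        if t > T:
            return q
        q += 1 if rng.random() < lam / rate else -1

rng = np.random.default_rng(7)
n = 2_500                                   # order-arrival speed-up factor
samples = [simulate_queue(lam=n * 1.0, mu=n * 1.0, T=1.0, rng=rng) / np.sqrt(n)
           for _ in range(200)]
print(np.mean(samples), np.std(samples))    # rescaled queue: O(1) diffusive scale
```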
25

Stroh, Maximilian. "On continuous time trading of a small investor in a limit order market / Maximilian Stroh. Gutachter: Christoph Kühn ; Johannes Muhle-Karbe ; Götz Kersting." Frankfurt : Universitätsbibliothek Frankfurt am Main, 2012. http://d-nb.info/1042934894/34.

26

Grublyte, Ieva. "Modélisation de mémoire longue non linéaire." Thesis, Cergy-Pontoise, 2017. http://www.theses.fr/2017CERG0923.

Abstract:
The thesis introduces new nonlinear models with long memory which can be used for modelling financial returns and for statistical inference. Apart from long memory, these models are capable of exhibiting other stylized facts such as asymmetry and leverage. The processes studied in the thesis are defined as stationary solutions of certain nonlinear stochastic difference equations involving a given i.i.d. "noise". Apart from the solvability issues of these equations, which are not trivial in themselves, it is proved that their solutions exhibit long memory properties. Finally, for a particularly tractable nonlinear parametric model with long memory (GQARCH), we prove consistency and asymptotic normality of the quasi-maximum likelihood (QML) estimators.
27

ZANINELLI, MARTA. "DALLA PROFEZIA ALLA SCADENZA: L'EVOLUZIONE DELLA TEMPORALITA' NEL TEATRO DI SHAKESPEARE." Doctoral thesis, Università Cattolica del Sacro Cuore, 2020. http://hdl.handle.net/10280/87854.

Abstract:
Shakespearean theatre often presents a use of time which is undeniably modern and connected to new technologies, if we consider as modern the presence, in the Canon, of a realistic kind of temporality, marked by the rhythm of the clock, the revolutionary object that characterizes the new temporal model. Among the techniques that Shakespeare uses to manipulate the temporal element on a dramaturgical level, one seemed particularly innovative and relevant: the deadline. The presence of this element in about one fourth of Shakespearean production could represent, if only from a quantitative perspective, an interesting fact worthy of in-depth analysis. The imposition of a precise time limit, marked by the clock, was thought to be a testimony both of a new approach to temporality and of the audience's familiarity with a new way to conceive time. The connection between the deadline and a new conception of time was thought to emerge more clearly by comparing it to another element, more representative of a traditional way of thinking: prophecy. Prophecy was considered as a sort of precedent for the deadline, because of some formal aspects that they have in common; at the same time, however, the work focused on their differences, which occur on the temporal level, so much so that they almost become symbols of two cultural and theatrical traditions that are close, but fundamentally different.
28

Nüßgen, Ines. "Ordinal pattern analysis: limit theorems for multivariate long-range dependent Gaussian time series and a comparison to multivariate dependence measures / Ines Nüßgen ; Gutachter: Alexander Schnurr." Siegen : Universitätsbibliothek der Universität Siegen, 2021. http://nbn-resolving.de/urn:nbn:de:hbz:467-19650.

29

Smearcheck, Matthew A. "Robust Stationary Time and Frequency Synchronization with Integrity in Support of Alternative Position, Navigation, and Timing." Ohio University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1366794772.

30

Mayer, Ulrike. "Functional weak limit theorem for a local empirical process of non-stationary time series and its application to von Mises-statistics / Ulrike Mayer ; Betreuer: Henryk Zähle." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://d-nb.info/119175555X/34.

31

Mayer, Ulrike. "Functional weak limit theorem for a local empirical process of non-stationary time series and its application to von Mises-statistics / Ulrike Mayer ; Betreuer: Henryk Zähle." Saarbrücken : Saarländische Universitäts- und Landesbibliothek, 2019. http://nbn-resolving.de/urn:nbn:de:bsz:291--ds-281226.

32

Long, Zeyu. "Introduction of the Debye media to the filtered finite-difference time-domain method with complex-frequency-shifted perfectly matched layer absorbing boundary conditions." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/introduction-of-the-debye-media-to-the-filtered-finitedifference-timedomain-method-with-complexfrequencyshifted-perfectly-matched-layer-absorbing-boundary-conditions(441271dc-d4ea-4664-82e6-90bf93f5c2b7).html.

Full text
Abstract:
The finite-difference time-domain (FDTD) method is one of the most widely used computational electromagnetics (CEM) methods for solving Maxwell's equations in modern engineering problems. In biomedical applications, such as microwave imaging for early disease detection and treatment, human tissues are treated as lossy and dispersive materials, and the most popular model for describing the material properties of the human body is the Debye model. In order to simulate the computational domain as an open region for biomedical applications, complex-frequency-shifted perfectly matched layers (CFS-PML) are applied to absorb the outgoing waves; the CFS-PML is highly efficient at absorbing evanescent or very low-frequency waves. This thesis investigates the stability of the CFS-PML and presents conditions for determining the parameters of the one-dimensional and two-dimensional CFS-PML. The advantages of the FDTD method are its simplicity of implementation and its capability for various applications. However, the Courant-Friedrichs-Lewy (CFL) condition limits the time-step size for stable FDTD computations. Due to the CFL condition, the computational efficiency of the FDTD method is constrained by the fine spatial-temporal sampling, especially in simulations with electrically small objects or dispersive materials. Instead of modifying the explicit time-updating equations and the leapfrog integration of the conventional FDTD method, the spatially filtered FDTD method extends the CFL limit by filtering out the unstable components in the spatial frequency domain. This thesis implements the filtered FDTD method with the CFS-PML and a one-pole Debye medium, and then provides guidance on optimizing the spatial filter to improve computational speed at the desired accuracy.
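As background to the stability constraint mentioned in this abstract, the Courant-Friedrichs-Lewy bound for the standard Yee FDTD scheme on a uniform grid takes the textbook form below; this is the generic condition, not a formula quoted from the thesis itself:

\[ \Delta t \le \frac{1}{c\,\sqrt{\dfrac{1}{(\Delta x)^2}+\dfrac{1}{(\Delta y)^2}+\dfrac{1}{(\Delta z)^2}}}, \]

where c is the maximum wave speed in the domain and \Delta x, \Delta y, \Delta z are the spatial steps. Refining the mesh thus forces a proportionally smaller time-step, which is precisely the efficiency bottleneck that the spatial-filtering approach relaxes.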
APA, Harvard, Vancouver, ISO, and other styles
33

Downey, Matthew Blake. "Evaluating the Effects of a Congestion and Weather Responsive Advisory Variable Speed Limit System in Portland, Oregon." PDXScholar, 2015. https://pdxscholar.library.pdx.edu/open_access_etds/2397.

Full text
Abstract:
Safety and congestion are ever-present and increasingly severe transportation problems in urban areas throughout the nation and the world. These phenomena can have wide-ranging consequences for safety, the economy, and the environment. Adverse weather conditions represent another significant challenge to safety and mobility on highways. Oregon is not immune to either of these global issues. Oregon Route (OR) 217, southwest of downtown Portland, is one of the worst freeways for congestion in the state and is also subject to the Pacific Northwest's frequently inclement and unpredictable climate. High crash rates, severe recurrent bottlenecks and highly unreliable travel times continuously plague the corridor, making it a major headache for the thousands of commuters using it every day. In an effort to combat both congestion and adverse weather more effectively, transportation officials all over the world have been turning to increasingly technological strategies like Active Traffic Management (ATM). This can take many forms, but among the most common are variable speed limit (VSL) systems, which use real-time data to compute and display appropriate reduced speeds during congestion and/or adverse weather. After numerous studies and deliberations, the Oregon Department of Transportation (ODOT) selected OR 217 as one of the first locations in the state to be equipped with an advisory VSL system, and that system began operation in the summer of 2014. This thesis evaluates the effectiveness of the VSL system over the first eight months of its operation by means of an in-depth and wide-ranging "before and after" analysis. Analysis of traffic flow and safety data for OR 217 from before the VSL system was implemented made clear some of the most prevalent issues that convinced ODOT to pursue VSL. Using those issues as a basis, a framework of seven specific evaluation questions relating to both performance and safety, as well as both congestion and adverse weather, was established to guide the "before and after" comparisons. Hypotheses and measures of effectiveness for each question were developed, and data were obtained from a diverse array of sources including freeway detectors, ODOT's incident database, and the National Oceanic and Atmospheric Administration (NOAA). The results of the various "before and after" comparisons performed as part of this thesis indicate that conditions on OR 217 have changed in a number of ways since the VSL system was activated. Many, but not all, of the findings were consistent with the initial hypotheses and with the findings of other VSL studies in the literature. Certain locations along the corridor have seen significant declines in speed variability, supporting the common notion that VSL systems have a harmonizing effect on traffic flow. Crash rates have not decreased, but crashes have become less frequent in the immediate vicinity of VSL signs. Flow distribution between adjacent lanes has been more even since VSL implementation during midday hours and the evening peak, and travel time reliability has seen widespread improvement in three of the corridor's four primary travel lanes during those same times. The drops in flow that generally occur upstream of bottlenecks once they form have had diminished magnitudes, while the drops in flow downstream of the same bottlenecks have grown. Finally, the increase in travel times usually brought about by adverse weather has been smaller since VSL implementation, while the decline in travel time reliability has largely disappeared.
APA, Harvard, Vancouver, ISO, and other styles
34

Jeunesse, Paulien. "Estimation non paramétrique du taux de mort dans un modèle de population générale : Théorie et applications. A new inference strategy for general population mortality tables Nonparametric adaptive inference of birth and death models in a large population limit Nonparametric inference of age-structured models in a large population limit with interactions, immigration and characteristics Nonparametric test of time dependance of age-structured models in a large population limit." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLED013.

Full text
Abstract:
In this thesis, we study the mortality rate in different population models, with applications to demography and biology. The mathematical framework lies at the intersection of statistics of processes, nonparametric estimation and analysis. In the first part, which addresses an actuarial problem, an algorithm is proposed to estimate mortality tables, which are useful in insurance. This algorithm is based on a deterministic population model. The new estimates improve on current results by taking the global population dynamics into account; in particular, births are incorporated into the model to compute the mortality rate. These estimates are also linked to previous work, ensuring the theoretical continuity of the approach, which is a point of great importance in actuarial science. In the second part, we are interested in estimating the mortality rate in a stochastic population model, which requires tools from statistics of processes and nonparametric estimation. The mortality rate is a function of two parameters, time and age, and we propose minimax-optimal, adaptive nonparametric estimators in an anisotropic setting for the mortality rate and the population density, together with non-asymptotic concentration inequalities quantifying the distance between the stochastic model and the deterministic limit model used in the first part. We show that these estimators remain optimal in a model where the mortality rate depends on interactions, as in the case of the logistic population. In the third part, we consider a test to detect the presence of interactions in the mortality rate; this test in fact assesses the time dependence of that rate. Under the assumption that the time dependence of the mortality rate comes only from interactions, we show that the presence of interactions can be detected, and we propose a practical algorithm to carry out this test.
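For orientation, the deterministic age-structured population model underlying both parts is commonly written as the McKendrick-von Foerster transport equation; the display below is that standard equation in generic notation, not the thesis's own:

\[ \partial_t g(t,a) + \partial_a g(t,a) = -\mu(t,a)\, g(t,a), \]

where g(t,a) is the population density at time t and age a, \mu(t,a) is the mortality rate to be estimated, and the boundary value g(t,0) is given by the births at time t. Incorporating this birth term is what allows the whole population dynamics, and not only death counts, to inform the estimate of \mu.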
APA, Harvard, Vancouver, ISO, and other styles
35

Bulhões, Alexandre Magno Câncio. "Métodos de recuperação pós-exercício: efeitos sobre o desempenho, marcadores fisiológicos, psicológicos, bioquímicos, imunológicos e sentidos atribuídos por sujeitos treinados." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/39/39132/tde-03042013-095350/.

Full text
Abstract:
This study aimed to compare the acute effect of three post-exercise recovery methods (active recovery, passive recovery and cryotherapy) on physical performance and on physiological, psychological, biochemical and immunological markers, as well as the meanings attributed to them by trained subjects. Twelve male volunteer runners trained in middle-distance and long-distance events (age: 20.6 ± 1.7 years; body mass: 64.1 ± 5.6 kg; height: 1.74 ± 0.05 m; body fat: 6.8 ± 2.7%; VO2max: 57.0 ± 5.9 mL.kg-1.min-1; vVO2max: 15.7 ± 1.7 km/h; Tlim: 603 ± 243 s) completed three 30-minute runs on a treadmill at 80% of vVO2max, estimated through an incremental test. Afterwards, the active recovery (running at 40% of vVO2max), passive recovery (sitting on a chair) and cryotherapy (immersion up to the iliac crest in water with crushed ice at 5 °C [±1 °C]) methods were applied for 20 minutes, in counterbalanced order. The subjects then performed a running test (Tlim) at 100% of vVO2max. One week before the tests, a familiarization session with the recovery methods was carried out. Running time to exhaustion, total mood disturbance, fatigue/vigour ratio, rating of perceived exertion (RPE), heart rate, lactate, IL-6, TNF-α, leucocytes, neutrophils, monocytes and lymphocytes were measured before the run (M1), after the treadmill run (M2), immediately after the recovery methods (M3) and after the time-limit running test (M4), except for the RPE, which was measured at M1 and M4, and the Tlim and the interview (for the analysis of social representations), which took place at M4. At each collection point, 18 ml of venous blood were drawn for the blood analyses. From the results we conclude that the use of active recovery, passive recovery or cryotherapy for 20 minutes after a 30-minute run at 80% vVO2max did not affect the performance of a subsequent run at 100% vVO2max until exhaustion. Cryotherapy produced a larger drop in heart rate and less lactate removal after exercise at 80% vVO2max than the active and passive methods, leading to higher lactate production and a smaller chronotropic response during the subsequent run at 100% vVO2max until exhaustion; cryotherapy did not interfere with perceived exertion or with the psychological responses after effort, but it induced a greater disturbance of the immunological markers, specifically leucocytes and lymphocytes. From the qualitative perspective, a variety of discourses emerged about the choice of the best recovery method. The meanings that stood out most were: "a greater lightness of the body, calming the muscles down and feeling, like, more relaxed" for cryotherapy; "the natural action of the body" and "when you're tired, you stop to rest" for passive recovery; and "continuity of movement, operability, keeping the rhythm and normality" for active recovery.
APA, Harvard, Vancouver, ISO, and other styles
36

Harrison, Irving. "NEXT GENERATION DATA VISUALIZATION AND ANALYSIS FOR SATELLITE, NETWORK, AND GROUND STATION OPERATIONS." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/607303.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
Recent years have seen a sharp rise in the size of satellite constellations. The monitoring and analysis tools in use today, however, were developed for smaller constellations and are ill-equipped to handle the increased volume of telemetry data. A new technology that can accommodate vast quantities of data is 3-D visualization. Data is abstracted to show the degree to which it deviates from normal, allowing an analyst to absorb the status of thousands of parameters in a single glance. Trend alarms notify the user of dangerous trends before data exceeds normal limits. Used appropriately, 3-D visualization can extend the life of a satellite by ten to twenty percent.
APA, Harvard, Vancouver, ISO, and other styles
37

Fojtů, Jan. "Most na přeložce silnice I/57 přes místní potok." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2015. http://www.nusl.cz/ntk/nusl-227779.

Full text
Abstract:
The topic of this thesis is a safe and economical design of a load-bearing bridge structure with a variable cross-section, according to all valid regulations and standards. The structure is assessed by limit states, as sketched after this entry. The solution includes a time-dependent analysis of the structure, accounting for the influence of progressive construction.
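Where this and the following bridge theses speak of assessing a structure "by limit states", the underlying Eurocode verification format is the standard inequality below, given here as background rather than as a formula from any particular thesis:

\[ E_d \le R_d, \]

where E_d is the design value of the effect of actions (internal forces from the factored load combinations) and R_d is the design resistance of the cross-section. Serviceability checks take the analogous form E_d \le C_d, with C_d a limiting criterion such as an allowable stress, crack width or deflection.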
APA, Harvard, Vancouver, ISO, and other styles
38

Yeung, Deryck. "Maximally smooth transition: the Gluskabi raccordation." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42756.

Full text
Abstract:
The objective of this dissertation is to provide a framework for constructing a transitional behavior, connecting any two trajectories from a set with a particular characteristic, in such a way that the transition is as inconspicuous as possible. By this we mean that the connection is such that the characteristic behavior persists during the transition. These special classes include stationary solutions, limit cycles, etc. We call this framework the Gluskabi raccordation. This problem is motivated by physical applications where it is often desired to steer a system from one stationary solution or periodic orbit to another in a 'smooth' way. Examples include motion control in robotics, chemical process control and quasi-stationary processes in thermodynamics. Before discussing the Gluskabi raccordations of periodic behaviors, we first study several periodic phenomena. Specifically, we study the self-propulsion of a number of legless, toy creatures based on differential friction under periodic excitations. This friction model is based on viscous friction, which is predominant in a wet environment. We investigate the effects of periodic and optimal periodic control on locomotion. Subsequently, we consider a control problem for a stochastic system, under the basic constraint that the feedback control signal and the observations from the system cannot use the communication channel simultaneously. Hence, two modes of operation result: an observation mode and a control mode. We seek an optimal periodic regime in a statistical steady state by switching between the observation and the control mode. For this, the duty cycle and the optimal gains for the controller and observer in either mode are determined. We then investigate the simplest special case of the Gluskabi raccordation, namely the quasi-stationary optimal control problem. This forces us to revisit the classical terminal controller. We analyze the performance index as the control horizon increases to infinity. This problem gives a good example where the limiting operation and integration do not commute. Such a misinterpretation can lead to an apparent paradox. We use symmetrical components (the parity operator) to shed light on the correct solution. The main part of the thesis is the Gluskabi raccordation problem. We first use several simple examples to introduce the general framework. We then consider the signal Gluskabi raccordation, or the Gluskabi raccordation without a dynamical system. Specifically, we present the quasi-periodic raccordation, where we seek the maximally 'smooth' transitions between two periodic signals. We provide two methods, the direct and the indirect method, to construct these transitions. Detailed algorithms for generating the raccordations based on the direct method are also provided. Next, we extend the signal Gluskabi raccordation to the dynamic case by considering the dynamical system as a hard constraint. The behavioral modeling of dynamical systems pioneered by Willems provides the right language for this generalization. All algorithms of the signal Gluskabi raccordation are extended accordingly to produce these 'smooth' transition behaviors.
APA, Harvard, Vancouver, ISO, and other styles
39

Renlund, Henrik. "Recursive Methods in Urn Models and First-Passage Percolation." Doctoral thesis, Uppsala universitet, Matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-145430.

Full text
Abstract:
This PhD thesis consists of a summary and four papers which deal with stochastic approximation algorithms and first-passage percolation. Paper I deals with the a.s. limiting properties of bounded stochastic approximation algorithms in relation to the equilibrium points of the drift function. Applications are given to some generalized Pólya urn processes. Paper II continues the work of Paper I and investigates under what circumstances one gets asymptotic normality from a properly scaled algorithm. The algorithms are shown to converge in some other circumstances, although the limiting distribution is not identified. Paper III deals with the asymptotic speed of first-passage percolation on a graph called the ladder, when the times associated with the edges are independent and exponentially distributed with the same intensity. Paper IV generalizes the work of Paper III by allowing more edges in the graph as well as intensities that are not all equal.
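As background, stochastic approximation algorithms of the kind treated in Papers I and II are usually written in the generic Robbins-Monro form below; this is standard textbook notation, not notation taken from the thesis itself:

\[ X_{n+1} = X_n + \gamma_{n+1}\big( f(X_n) + U_{n+1} \big), \]

where f is the drift function whose equilibrium points govern the almost-sure limits, \gamma_n is a step-size sequence with \sum_n \gamma_n = \infty and \sum_n \gamma_n^2 < \infty, and U_n is a noise term, typically a martingale difference. Generalized Pólya urn processes fit this form with X_n the vector of colour proportions in the urn.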
APA, Harvard, Vancouver, ISO, and other styles
40

Corker, Lloyd A. "A test for Non-Gaussian distributions on the Johannesburg stock exchange and its implications on forecasting models based on historical growth rates." University of Western Cape, 2002. http://hdl.handle.net/11394/7447.

Full text
Abstract:
Masters of Commerce
If share price fluctuations follow a simple random walk, then forecasting models based on historical growth rates have little ability to forecast acceptable share price movements over a given period. The simple random walk description of share price dynamics is obtained when a large number of investors have equal probability to buy or sell based on their own opinion. This simple random walk description of the stock market is in essence the Efficient Market Hypothesis (EMT). The EMT is the central concept around which financial modelling is based, including the Black-Scholes model and other important theoretical underpinnings of capital market theory such as mean-variance portfolio selection, arbitrage pricing theory (APT), the security market line and the capital asset pricing model (CAPM). These theories, which postulate that risk can be reduced to zero, set the foundation for option pricing and are a key component in financial software packages used for pricing and forecasting in the financial industry. The models used by Black and Scholes and the others mentioned above are Gaussian, i.e. they exhibit a random nature. This Gaussian property, together with the existence of expected returns and continuous time paths (also Gaussian properties), allows the use of stochastic calculus to solve complex Black-Scholes models. However, if the markets are not Gaussian, then the idea that risk can be reduced to zero can lead to a misleading and potentially disastrous sense of security on the financial markets. This study tests the null hypothesis - that share prices on the JSE follow a random walk - by means of graphical techniques such as symmetry plots and Quantile-Quantile plots to analyse the test distributions. In both graphical techniques, evidence for the rejection of normality was found. Evidence leading to the rejection of the hypothesis was also found through nonparametric or distribution-free methods, at a 1% level of significance, for the Anderson-Darling and Runs tests.
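A minimal formal statement of the null hypothesis tested here, in standard notation rather than the author's own, is that log prices follow a random walk with Gaussian increments:

\[ \ln S_{t+1} = \ln S_t + \mu + \varepsilon_{t+1}, \qquad \varepsilon_t \overset{\mathrm{iid}}{\sim} \mathcal{N}(0,\sigma^2), \]

so that log returns are independent and identically normally distributed. The symmetry plots, Quantile-Quantile plots, Anderson-Darling and Runs tests reported above all probe the Gaussian assumption on \varepsilon_t.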
APA, Harvard, Vancouver, ISO, and other styles
41

Beran, Jakub. "Mostní nadjezd přes rychlostní komunikaci." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2019. http://www.nusl.cz/ntk/nusl-401476.

Full text
Abstract:
The topic of this thesis is the detailed design of the supporting structure of a bridge. The final design was chosen out of three variants: a single-beam structure consisting of 6 spans. The supporting structure is of prestressed concrete using post-tensioning technology, and the impact of phased construction is considered. The structure is assessed at the ultimate limit state as well as the serviceability limit state, according to the code. The thesis also contains drawings, a visualization and an engineering report of the structure.
APA, Harvard, Vancouver, ISO, and other styles
42

Ackaah, Williams [Verfasser], Klaus [Akademischer Betreuer] [Gutachter] Bogenberger, and Robert L. [Gutachter] Bertini. "Empirical Analysis of Real-time Traffic Information for Navigation and the Variable Speed Limit System / Williams Ackaah ; Gutachter: Klaus Bogenberger, Robert L. Bertini ; Akademischer Betreuer: Klaus Bogenberger ; Universität der Bundeswehr München, Fakultät für Bauingenieurwesen und Umweltwissenschaften." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2016. http://d-nb.info/1126974099/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ackaah, Williams [Verfasser], Klaus [Akademischer Betreuer] Bogenberger, and Robert L. [Gutachter] Bertini. "Empirical Analysis of Real-time Traffic Information for Navigation and the Variable Speed Limit System / Williams Ackaah ; Gutachter: Klaus Bogenberger, Robert L. Bertini ; Akademischer Betreuer: Klaus Bogenberger ; Universität der Bundeswehr München, Fakultät für Bauingenieurwesen und Umweltwissenschaften." Neubiberg : Universitätsbibliothek der Universität der Bundeswehr München, 2016. http://nbn-resolving.de/urn:nbn:de:bvb:706-4997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Bader, Philipp Karl-Heinz. "Geometric Integrators for Schrödinger Equations." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/38716.

Full text
Abstract:
The celebrated Schrödinger equation is the key to understanding the dynamics of quantum mechanical particles and comes in a variety of forms. Its numerical solution poses numerous challenges, some of which are addressed in this work. Arguably the most important problem in quantum mechanics is the so-called harmonic oscillator, due to its good approximation properties for trapping potentials. In Chapter 2, an algebraic correspondence technique is introduced and applied to construct efficient splitting algorithms, based solely on fast Fourier transforms, which solve quadratic potentials in any number of dimensions exactly, including the important case of rotating particles and non-autonomous trappings after averaging by Magnus expansions. The results are shown to transfer smoothly to the Gross-Pitaevskii equation in Chapter 3. Additionally, the notion of modified nonlinear potentials is introduced and it is shown how to compute them efficiently using Fourier transforms. It is shown how to apply complex-coefficient splittings to this nonlinear equation, and numerical results corroborate the findings. In the semiclassical limit, the evolution operator becomes highly oscillatory and standard splitting methods suffer from exponentially increasing complexity when raising the order of the method. Algorithms with only quadratic order-dependence of the computational cost are found using the Zassenhaus algorithm. In contrast to classical splittings, special commutators are allowed to appear in the exponents. By construction, they decrease rapidly in size with the semiclassical parameter and can be exponentiated using only a few Lanczos iterations. For completeness, an alternative technique based on Hagedorn wavepackets is revisited and interpreted in the light of Magnus expansions, and minor improvements are suggested. In the presence of explicit time-dependencies in the semiclassical Hamiltonian, the Zassenhaus algorithm requires a special initiation step. Distinguishing the cases of smooth and fast frequencies, it is shown how to adapt the mechanism to obtain an efficiently computable decomposition of an effective Hamiltonian obtained after Magnus expansion, without having to resolve the oscillations by taking a prohibitively small time-step. Chapter 5 considers the Schrödinger eigenvalue problem, which can be formulated as an initial value problem after Wick-rotating the Schrödinger equation to imaginary time. The elliptic nature of the evolution operator restricts standard splittings to orders below three, because of the unavoidable appearance of negative fractional time-steps that correspond to the ill-posed integration backwards in time. The inclusion of modified potentials lifts this order barrier to orders below five. Both restrictions can be circumvented using complex fractional time-steps with positive real part, and sixth-order methods optimized for near-integrable Hamiltonians are presented. Conclusions and pointers to further research are detailed in Chapter 6, with a special focus on optimal quantum control.
Bader, PK. (2014). Geometric Integrators for Schrödinger Equations [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/38716
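As orientation for the splitting methods this abstract refers to, the classical second-order Strang splitting can be written as follows; this is the standard textbook formula, not notation taken from the thesis itself:

\[ e^{\Delta t (A+B)} = e^{\frac{\Delta t}{2}A}\, e^{\Delta t B}\, e^{\frac{\Delta t}{2}A} + \mathcal{O}(\Delta t^{3}), \]

where, for the Schrödinger equation, A and B are typically the kinetic and potential parts of the Hamiltonian and each exponential is cheap to apply (the kinetic one via fast Fourier transforms). Higher-order compositions of such exponentials are exactly where the negative or complex fractional time-steps mentioned above arise.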
APA, Harvard, Vancouver, ISO, and other styles
45

Švancara, Marek. "Most nad místní komunikací a potokem." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-409780.

Full text
Abstract:
The topic of this thesis is the design of the structure of a bridge. The beam structure was chosen from three variants of the solution; it is formed by brackets above the pillars and prefabricated prestressed beams. Various construction procedures were verified, and the structure was assessed for the serviceability and ultimate limit states according to the applicable standards and regulations. Drawing documentation and a static calculation are included.
APA, Harvard, Vancouver, ISO, and other styles
46

Ondrůšková, Kamila. "Dálniční komorový most." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-372280.

Full text
Abstract:
The subject of this diploma thesis is the design of a bridge over a valley and the Doľanský potok on the D1 motorway, in the Jánovce - Jablonov section in Slovakia. Three bridging variants were designed and then compared. A post-tensioned box girder structure was chosen as the most suitable variant and was assessed in detail. The static calculation was drawn up according to European standards.
APA, Harvard, Vancouver, ISO, and other styles
47

Zatloukal, Bohuslav. "Dálniční estakáda." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2017. http://www.nusl.cz/ntk/nusl-265654.

Full text
Abstract:
The subject of this thesis is the design and assessment of a bridge across the expressway and a deep valley in the Hričovské Podhradie - Lietavská Lúčka section of highway D1 in Slovakia. A box girder structure with eleven spans was chosen from three variants. There is a separate structure for each carriageway. The design of the bridge is carried out according to limit states, including the effects of staged construction. The assessment of the structure is carried out on a beam model. The appendix contains the structural analysis, drawings and a visualization of the bridge.
APA, Harvard, Vancouver, ISO, and other styles
48

Sahebdivan, Sahar. "The enigma of imaging in the Maxwell fisheye medium." Thesis, University of St Andrews, 2016. http://hdl.handle.net/10023/9894.

Full text
Abstract:
The resolution of optical instruments is normally limited by the wave nature of light. Circumventing this limit, known as the diffraction limit of imaging, is of tremendous practical importance for modern science and technology. One method, super-resolved fluorescence microscopy, was recognized with the Nobel Prize in Chemistry in 2014, but there is plenty of room for alternative and complementary methods, such as the pioneering work of Prof. J. Pendry on the perfect lens based on negative refraction, which started the entire research area of metamaterials. In this thesis, we have used analytical techniques to solve several important challenges that have arisen in the discussion of the microwave experimental demonstration of absolute optical instruments and the controversy surrounding perfect imaging. Attempts to overcome or circumvent Abbe's diffraction limit of optical imaging have traditionally been greeted with controversy. In this thesis, we have investigated the role of interacting sources and detectors in perfect imaging, and we have established the limitations and prospects that arise from interactions and resonances inside the lens. The crucial role of detection becomes clear in Feynman's argument against the diffraction limit: "as Maxwell's electromagnetism is invariant upon time reversal, the electromagnetic wave emitted from a point source may be reversed and focused into a point with point-like precision, not limited by diffraction." However, for this, the entire emission process must be reversed, including the source: a point drain must sit at the focal position, in place of the point source; otherwise, without getting absorbed at the detector, the focused wave will rebound, and the superposition of the focusing and rebounding waves will produce a diffraction-limited spot. The time-reversed source, the drain, is the detector that takes the image of the source. In 2011-2012, experiments with microwaves confirmed the role of detection in perfect focusing. The emitted radiation was actively time-reversed and focused back at the point of emission, where the time-reversed counterpart of the source sits. Absorption in the drain localizes the radiation with a precision much better than the diffraction limit. Absolute optical instruments may perform the time reversal of the field with perfectly passive materials and send the reversed wave to a different spatial position than the source. Perfect imaging with absolute optical instruments suffers, however, from a restriction: so far it has only worked for a single-source single-drain configuration and near the resonance frequencies of the device. In Chapters 6 and 7 of the thesis, we have investigated the imaging properties of mutually interacting detectors. We found that an array of detectors can image a point source with arbitrary precision; however, for this, the radiation has to be at resonance. Our analysis became possible thanks to a theoretical model for mutually interacting sources and drains that we developed after considerable work and several failed attempts. Modelling such sources and drains analytically had been a major unsolved problem; full numerical simulations have been difficult due to the large difference in the scales involved (the field localization near the sources and drains versus the wave propagation in the device). In our opinion, nobody had been able to reproduce the experiments reliably, because of the numerical complexity involved.
Our analytic theory draws from a simple, 1-dimensional model we developed in collaboration with Tomas Tyc (Masaryk University) and Alex Kogan (Weizmann Institute). This model was the first to explain the experimental data, the characteristic dips in the transmission of displaced drains, and it establishes the grounds for realistic super-resolution with absolute optical instruments. As the next step, in Chapter 7 we developed a Lagrangian theory that agrees with this simple and successful model in 1 dimension. Inspired by the Lagrangian of the electromagnetic field interacting with a current, we constructed a Lagrangian that has the advantage of being extendable to higher dimensions, in our case two, where imaging takes place. Our Lagrangian theory represents a device-independent, idealized model that does not rely on numerical simulations. To conclude, Feynman objected to Abbe's diffraction limit, arguing that as Maxwell's electromagnetism is time-reversal invariant, the radiation from a point source may very well become focused in a point drain. Absolute optical instruments such as the Maxwell Fisheye can perform the time reversal and may image with perfect resolution. However, the sources and drains in previous experiments were interacting with each other, as if Feynman's drain would act back on the source in the past. Different ways of detection might circumvent this feature. The mutual interaction of sources and drains does ruin some of the promising features of perfect imaging: arrays of sources are not necessarily resolved with arrays of detectors. But it also opens interesting new prospects for scanning near-fields from far-field distances. To summarise the novel ideas of the thesis:
• We have discovered and understood the problems with the initial experimental demonstration of the Maxwell Fisheye.
• We have solved a long-standing challenge by modelling the theory for mutually interacting sources and drains.
• We understand the imaging properties of the Maxwell Fisheye in the wave regime.
Let us add one final thought. It has taken the scientific community a long time of investigation and discussion to understand the different ingredients of the diffraction limit. Abbe's limit was initially attributed to the optical device only, but in fact all three processes of imaging, namely illumination, transfer and detection, contribute equally to the total diffraction limit. Therefore, we think that violating the diffraction limit requires considering all three factors together. Of course, one might circumvent the limit and achieve a better resolution by focusing on one factor, but that does not necessarily imply the violation of a fundamental limit. One example is STED microscopy, which focuses on the illumination; another is near-field scanning microscopy, which circumvents the diffraction limit by focusing on detection. Other methods and strategies in sub-wavelength imaging, such as negative refraction, time-reversal imaging and absolute optical instruments, concentrate on the faithful transfer of the optical information. In our opinion, the most significant, and naturally the most controversial, part of our findings in the course of this study was elucidating the role of detection. Maxwell's Fisheye transmits the optical information faithfully, but this is not enough: to have a faithful image, it is also necessary to extract the information at the destination. In our last two papers, we report our new findings on the contribution of detection.
We find that, in absolute optical instruments such as the Maxwell Fisheye, embedded sources and detectors are not independent: they interact mutually, and this interaction influences the imaging properties of the system.
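For context, the diffraction limit discussed throughout this abstract is usually quoted in the textbook Abbe form below; this is a standard statement, not one reproduced from the thesis itself:

\[ d = \frac{\lambda}{2\,n\sin\theta} = \frac{\lambda}{2\,\mathrm{NA}}, \]

where d is the smallest resolvable feature, \lambda the wavelength, and NA = n sin\theta the numerical aperture of the instrument. Claims of "perfect imaging" amount to resolving detail finer than this d with purely passive optics, which is why the roles of illumination, transfer and detection listed above must each be scrutinized.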
APA, Harvard, Vancouver, ISO, and other styles
49

Pidima, Jan. "Estakáda přes řeku Bečva." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2017. http://www.nusl.cz/ntk/nusl-265405.

Full text
Abstract:
The thesis deals with the design of a multi-span highway bridge. The bridge carries highway D1 across the river Bečva close to the city of Přerov. Two different solutions were developed, from which a post-tensioned box girder was chosen. The static model and load actions were modelled in Scia Engineer 2016.0. Design checks were done manually according to the corresponding Eurocodes. Load actions from wind, uneven support settlement and horizontal traffic actions were neglected.
APA, Harvard, Vancouver, ISO, and other styles
50

Prekop, Michal. "Most přes železniční trať." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2020. http://www.nusl.cz/ntk/nusl-409776.

Full text
Abstract:
This diploma thesis focuses on the design of a road bridge of category S7,5 in the cadastral territory of the municipality of Polanka nad Odrou. Three preliminary studies were proposed, and a box girder with inclined webs was selected for more detailed treatment. The bridge is to be built on fixed scaffolding, and its construction is analysed with a time-dependent analysis. The load calculations were performed in Midas Civil 19 v2.1 and Scia Engineer 19.1. The selected variant was assessed according to current European standards.
APA, Harvard, Vancouver, ISO, and other styles