
Dissertations / Theses on the topic 'Distribution of the number of rejections'


Consult the top 50 dissertations / theses for your research on the topic 'Distribution of the number of rejections.'


1

Hörmann, Wolfgang, and Gerhard Derflinger. "Rejection-Inversion to Generate Variates from Monotone Discrete Distributions." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1996. http://epub.wu.ac.at/1176/1/document.pdf.

Abstract:
For discrete distributions, a variant of rejection from a continuous hat function is presented. The main advantage of the new method, called rejection-inversion, is that no extra uniform random number is required to decide between acceptance and rejection, so the expected number of uniform variates required is halved. Using rejection-inversion and a squeeze, a simple universal method for a large class of monotone discrete distributions is developed. It can be used to generate variates from the tails of most standard discrete distributions. Rejection-inversion applied to the Zipf (or zeta) distribution results in algorithms that are short and simple and at least twice as fast as the fastest methods suggested in the literature. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
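The core trick, reusing the inversion uniform for the accept/reject decision, is easy to see in code. Below is a minimal Python sketch for the Zipf case p(k) proportional to k^(-s) on {1, ..., n}, written from the abstract's description; it omits the paper's squeeze and numerical-stability refinements, and the helper names are mine.

```python
import random

def zipf_rejection_inversion(s, n, rng=random.random):
    """Sample K in {1..n} with P(K = k) proportional to k**(-s), s > 1.

    Rejection-inversion: invert the integral H of the continuous hat
    h(x) = x**(-s), then reuse the *same* uniform for acceptance.
    """
    H = lambda x: x ** (1.0 - s) / (1.0 - s)            # antiderivative of h
    H_inv = lambda y: ((1.0 - s) * y) ** (1.0 / (1.0 - s))
    lo, hi = H(0.5), H(n + 0.5)
    while True:
        u = lo + rng() * (hi - lo)          # uniform in H-scale = area scale
        k = min(max(int(H_inv(u) + 0.5), 1), n)         # inversion, rounded
        # Accept if u lies in the right-hand slice of [k-1/2, k+1/2] whose
        # hat area equals the pmf value k**(-s); convexity of h guarantees
        # H(k+1/2) - H(k-1/2) >= k**(-s), so the hat dominates the pmf.
        if u >= H(k + 0.5) - k ** (-s):
            return k
```

Note that each loop iteration consumes a single uniform for both inversion and the acceptance test, which is exactly the halving of uniforms the abstract claims.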
2

Hörmann, Wolfgang. "A Rejection Technique for Sampling from T-Concave Distributions." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1994. http://epub.wu.ac.at/1028/1/document.pdf.

Abstract:
A rejection algorithm - called transformed density rejection - is introduced; it uses a new method for constructing simple hat functions for a unimodal, bounded density $f$. It is based on the idea of transforming $f$ with a suitable transformation $T$ such that $T(f(x))$ is concave. $f$ is then called $T$-concave, and tangents to $T(f(x))$ at the mode and at one point on each side are used to construct a hat function with a table-mountain shape. Conditions for the optimal choice of these points of contact can be given. With $T = -1/\sqrt{x}$ the method can be used to construct a universal algorithm applicable to a large class of unimodal distributions, including the normal, beta, gamma and t-distributions. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
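To make the construction concrete, here is a compact Python sketch of transformed density rejection in its simplest member, T = log (i.e., for log-concave densities); the paper's T = -1/sqrt(x) variant follows the same pattern with different hat algebra. This is an illustration written against the abstract, not the authors' code.

```python
import math
import random

def tdr_sample(log_f, dlog_f, points, rng=random.random):
    """One draw from a density f (known up to a constant) via transformed
    density rejection with T = log: tangents to log f at the points of
    contact define a piecewise-exponential hat, sampled by inversion.

    `points` must be sorted, with positive slope at the first point and
    negative slope at the last, so the hat is integrable on (-inf, inf).
    """
    y = [log_f(p) for p in points]
    s = [dlog_f(p) for p in points]
    # Breakpoints where neighbouring tangents intersect.
    z = [-math.inf]
    for i in range(len(points) - 1):
        z.append((y[i + 1] - y[i] + s[i] * points[i] - s[i + 1] * points[i + 1])
                 / (s[i] - s[i + 1]))
    z.append(math.inf)

    def tangent(i, x):
        return y[i] + s[i] * (x - points[i])

    # Area under each exponential hat piece exp(tangent_i) on [z_i, z_{i+1}].
    areas = []
    for i in range(len(points)):
        if s[i] == 0.0:
            areas.append(math.exp(y[i]) * (z[i + 1] - z[i]))
        else:
            areas.append((math.exp(tangent(i, z[i + 1]))
                          - math.exp(tangent(i, z[i]))) / s[i])

    while True:
        u, i = rng() * sum(areas), 0
        while u > areas[i] and i < len(areas) - 1:      # pick a hat piece
            u -= areas[i]
            i += 1
        u = min(u, areas[i])                            # guard float drift
        if s[i] == 0.0:                                 # invert piece's CDF
            x = z[i] + u / math.exp(y[i])
        else:
            x = points[i] + (math.log(math.exp(tangent(i, z[i])) + u * s[i])
                             - y[i]) / s[i]
        if rng() <= math.exp(log_f(x) - tangent(i, x)):  # accept: f <= hat
            return x

# Example: standard normal (unnormalised), tangents at the mode and +/- 1.
draw = tdr_sample(lambda x: -0.5 * x * x, lambda x: -x, [-1.0, 0.0, 1.0])
```

The tangent at the mode has slope zero and yields the flat top of the "table mountain"; the side tangents give the exponentially decaying flanks.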
3

Leydold, Josef, and Wolfgang Hörmann. "The Automatic Generation of One- and Multi-dimensional Distributions with Transformed Density Rejection." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1997. http://epub.wu.ac.at/1328/1/document.pdf.

Abstract:
A rejection algorithm, called "transformed density rejection", is presented. It uses a new method for constructing simple hat functions for a unimodal density $f$. It is based on the idea of transforming $f$ with a suitable transformation $T$ such that $T(f(x))$ is concave. The hat function is then constructed by taking the pointwise minimum of tangents which are transformed back to the original scale. The resulting algorithm works very well for a large class of distributions and is fast. The method is also extended to the two- and multidimensional case. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
4

Hörmann, Wolfgang. "A universal generator for discrete log-concave distributions." Institut für Statistik und Mathematik, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1993. http://epub.wu.ac.at/1704/1/document.pdf.

Abstract:
We give an algorithm that can be used to sample from any discrete log-concave distribution (e.g. the binomial and hypergeometric distributions). It is based on rejection from a discrete dominating distribution that consists of parts of the geometric distribution. The algorithm is uniformly fast for all discrete log-concave distributions and not much slower than algorithms designed for a single distribution. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
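The abstract's geometric dominating distribution can be sketched directly: log-concavity makes successive pmf ratios non-increasing, so each tail is bounded by a geometric sequence anchored at the mode. The following Python illustration is a simplified construction in that spirit; it is not the paper's uniformly fast hat, and the helper names are mine.

```python
import math
import random

def sample_discrete_logconcave(p, mode, rng=random.random):
    """One draw from an (unnormalised) discrete log-concave pmf `p` with
    known `mode`, by rejection from a hat made of geometric pieces.
    Assumes 0 < p(mode-1)/p(mode) < 1 and 0 < p(mode+1)/p(mode) < 1.
    """
    w = p(mode)
    rho = p(mode + 1) / w                  # right-tail ratio
    lam = p(mode - 1) / w                  # left-tail ratio
    # Log-concavity gives p(mode + j) <= w * rho**j and
    # p(mode - j) <= w * lam**j, so these pieces dominate the pmf.
    masses = [w, w * rho / (1.0 - rho), w * lam / (1.0 - lam)]
    total = sum(masses)
    while True:
        u = rng() * total
        if u < masses[0]:
            k, hat = mode, w
        elif u < masses[0] + masses[1]:
            j = 1 + int(math.log(1.0 - rng()) / math.log(rho))  # geometric
            k, hat = mode + j, w * rho ** j
        else:
            j = 1 + int(math.log(1.0 - rng()) / math.log(lam))
            k, hat = mode - j, w * lam ** j
        if rng() * hat <= p(k):            # standard rejection step
            return k

# Example: Poisson(4.5), unnormalised pmf in log-safe form, mode at k = 4.
mu = 4.5
draw = sample_discrete_logconcave(
    lambda k: math.exp(k * math.log(mu) - math.lgamma(k + 1)) if k >= 0 else 0.0,
    mode=4)
```

Unlike the paper's construction, the acceptance rate of this sketch degrades when the tail ratios approach 1; the published hat is tuned so the expected number of iterations is bounded uniformly over all discrete log-concave distributions.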
5

Hörmann, Wolfgang. "A Universal Generator for Bivariate Log-Concave Distributions." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1995. http://epub.wu.ac.at/1044/1/document.pdf.

Abstract:
Several universal (also called automatic or black-box) methods have been suggested for sampling from univariate log-concave distributions, but no universal generator for bivariate distributions had been published up to now. The new algorithm for bivariate log-concave distributions is based on the method of transformed density rejection. To construct a hat function for a rejection algorithm, the bivariate density is transformed by the logarithm into a concave function. A dominating function is then constructed by taking the minimum of several tangent planes, which are transformed back into the original scale by exponentiation. The choice of the points of contact is automated using adaptive rejection sampling: a point rejected by the rejection algorithm is used as an additional point of contact until the maximal number of points of contact is reached. The paper describes in detail how this main idea can be used to construct Algorithm ULC2D, which generates random pairs from a bivariate log-concave distribution with a computable density. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
6

Scheer, Marsel [Verfasser], Helmut [Akademischer Betreuer] Finner, and Arnold [Akademischer Betreuer] Janssen. "Controlling the Number of False Rejections in Multiple Hypotheses Testing / Marsel Scheer. Gutachter: Helmut Finner ; Arnold Janssen." Düsseldorf : Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2013. http://d-nb.info/1031074996/34.

7

Benditkis, Julia [Verfasser], Arnold [Akademischer Betreuer] Janssen, and Helmut [Akademischer Betreuer] Finner. "Martingale Methods for Control of False Discovery Rate and Expected Number of False Rejections / Julia Benditkis. Gutachter: Arnold Janssen ; Helmut Finner." Düsseldorf : Universitäts- und Landesbibliothek der Heinrich-Heine-Universität Düsseldorf, 2015. http://d-nb.info/1077295170/34.

8

Seminario, Carlos (Carlos Manuel Seminario Velarde), and Emmanuel Marks. "Using real-time truck transportation information to predict customer rejections and refrigeration-system fuel efficiency in packaged salad distribution." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68900.

Abstract:
Thesis (M.Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2011. Companies that operate cold supply chains can benefit greatly from information availability and data generation. The abundance of information now available to cold chain operators, harvested from every echelon of the supply chain from procurement to sales and customer service, gives logistics organizations an opportunity to monitor and improve their operations; it is increasingly imperative to transform data into meaningful information that creates a competitive advantage for early adopters. This thesis attempts to determine how to make best use of, and effectively interpret, the information generated by trailer-mounted temperature sensors and geospatial data-collection devices during refrigerated transportation of packaged salads. The study covers only the transportation segment from the manufacturer's distribution center to the customer's (grocery retailer) distribution center. The thesis uses regression analysis to create a model that effectively uses real-time transportation information to identify the elements that can create a competitive advantage for cold chain operators. The main performance measurements analysed are reefer-unit fuel consumption and rejections of salad products at the customer's drop location. Regression yields a formula that can predict more than 70% of reefer fuel consumption; however, with the independent variables available in the data at our disposal, it is not possible to build a model that effectively predicts product rejections. The findings can help operators of transportation cold chains better manage fuel consumption by isolating and improving the independent variables identified.
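The regression step itself is conventional ordinary least squares. A minimal sketch of the kind of fit involved follows; the predictor set here is a stand-in of my own, not the thesis's actual variable list.

```python
import numpy as np

def fit_fuel_model(X, y):
    """Ordinary least squares for reefer-unit fuel consumption.

    X: (n_trips, n_features) predictors derived from telemetry
       (e.g. ambient temperature, set-point, trip duration) -- the
       column choice is illustrative, not the thesis's exact set.
    y: (n_trips,) observed fuel consumption.
    Returns coefficients, intercept and R^2.
    """
    A = np.column_stack([np.ones(len(X)), np.asarray(X)])  # intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta[1:], beta[0], r2
```

An R^2 above 0.7 for the fuel model, as the abstract reports, would mean the chosen telemetry variables explain most of the trip-to-trip variation in consumption.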
9

Hughes, Garry. "Distribution of additive functions in algebraic number fields." Title page, contents and summary only, 1987. http://web4.library.adelaide.edu.au/theses/09SM/09smh893.pdf.

10

Hörmann, Wolfgang. "The generation of binomial random variates." Institut für Statistik und Mathematik, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 1992. http://epub.wu.ac.at/1242/1/document.pdf.

Abstract:
The transformed rejection method, a combination of inversion and rejection that can be applied to various continuous distributions, is well suited to generating binomial random variates as well. The resulting algorithms are simple and fast, and need only a short set-up. Among the many possible variants, two algorithms are described and tested: BTRS, a short but nevertheless fast rejection algorithm, and BTRD, which is more complicated because the idea of decomposition is utilized. For BTRD the average number of uniforms required to return one binomial deviate lies between 2.5 and 1.4, considerably lower than for any of the known uniformly fast algorithms. Timings for a C implementation show that, when the parameters of the binomial distribution vary from call to call, BTRD is faster than the current state-of-the-art algorithms; depending on the computer, the speed of the uniform generator used and the binomial parameters, the savings are between 5 and 40 percent. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
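For reference, here is a Python sketch of the simpler of the two variants, BTRS, with setup constants as they appear in common implementations of the published algorithm (e.g. in TensorFlow and JAX); treat it as an illustration rather than the paper's exact listing. It assumes p <= 0.5 and n*p >= 10 (other cases are handled by symmetry or by inversion).

```python
import math
import random

def binomial_btrs(n, p, rng=random.random):
    """One binomial(n, p) draw via BTRS transformed rejection,
    for p <= 0.5 and n*p >= 10."""
    spq = math.sqrt(n * p * (1.0 - p))
    b = 1.15 + 2.53 * spq
    a = -0.0873 + 0.0248 * b + 0.01 * p
    c = n * p + 0.5
    v_r = 0.92 - 4.2 / b
    alpha = (2.83 + 5.1 / b) * spq
    lpq = math.log(p / (1.0 - p))
    m = math.floor((n + 1) * p)                     # mode of the binomial
    h = math.lgamma(m + 1) + math.lgamma(n - m + 1)
    while True:
        u = rng() - 0.5
        v = rng()
        us = 0.5 - abs(u)
        k = math.floor((2.0 * a / us + b) * u + c)  # transformed candidate
        if k < 0 or k > n:
            continue
        if us >= 0.07 and v <= v_r:                 # cheap squeeze accept
            return k
        # Exact acceptance test against the binomial pmf (in logs).
        if math.log(v * alpha / (a / (us * us) + b)) <= (
                h - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                + (k - m) * lpq):
            return k
```

The squeeze on the second `if` is what keeps the expected number of uniforms low: most candidates are accepted without ever evaluating the log-gamma terms.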
11

Zareikar, Gita. "The Distribution and Function of Number in Azeri." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/38044.

Abstract:
In this dissertation, I study the distribution of number in Azeri within the Exo-Skeletal model of Borer (2005a). I adopt the Exo-Skeletal model's assumption that number marking is a syntactic rather than a lexical process. Following Borer (2005a), I assume that, in order to be counted, nouns need to be individuated by means of a functional category Div. In Borer's model, plural markers and classifiers are argued to be generated in DivP. Unlike Borer, however, I propose that the plural marker in Azeri is not an individuator; it solely marks plurality. Under my proposal, individuation in Azeri is morphologically null. Moreover, I argue that classifiers do not belong to the category of individuators either: their function is to unitize the individuated object. I therefore consider classifiers in Azeri to be generated on a cluster head, where they contribute to a group-formation process. The generation of the plural marker and the classifier on heads other than division leads to the conclusion that individuation in Azeri is morphologically null. Furthermore, I investigate the interpretation of number in the verbal domain, i.e. in TP, in the presence of viewpoint aspect in both telic and atelic contexts. I argue that the singular interpretation of the Azeri bare noun is linked to the projection of AspQ, where the specific interpretation of the bare noun arises under the effect of the perfective aspect. The presence of AspQ yields a telic interpretation of the event structure, and the DP in the specifier of AspQ is the subject of quantity (Borer, 2005a). According to Borer, number-ambiguous nouns are generated in atelic structures where AspQ is absent; in this case the DP does not have to be the subject of quantity, and the availability of quantity on the DP remains optional. Finally, I argue that the (non-)specific interpretation of the noun in telic and atelic contexts in Azeri is due to the impact of viewpoint aspect.
12

Rozario, Rebecca. "The Distribution of the Irreducibles in an Algebraic Number Field." Fogler Library, University of Maine, 2003. http://www.library.umaine.edu/theses/pdf/RozarioR2003.pdf.

13

Wodzak, Michael A. "Entire functions and uniform distribution /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9823328.

14

Shahabi, Majid. "The distribution of the classical error terms of prime number theory." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Mathematics and Computer Science, c2012, 2012. http://hdl.handle.net/10133/3252.

15

Amirkhanyan, Gagik M. "Problems in combinatorial number theory." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51865.

Abstract:
The dissertation consists of two parts. The first part is devoted to results in Discrepancy Theory. We consider geometric discrepancy in higher dimensions (d > 2) and obtain estimates in Exponential Orlicz Spaces. We establish a series of dichotomy-type results for the discrepancy function which state that if the L¹ norm of the discrepancy function is too small (smaller than the conjectural bound), then the discrepancy function has to be very large in some other function space. The second part of the thesis is devoted to results in Additive Combinatorics. For a set with small doubling, an order-preserving Freiman 2-isomorphism is constructed which maps the set to a dense subset of an interval. We also present several applications.
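For orientation, the standard object here (my notation, not necessarily the thesis's) is the discrepancy function of an N-point set P_N in the unit cube:

```latex
\[
  D_N(x) \;=\; \#\bigl( P_N \cap [0, x) \bigr) \;-\; N\, x_1 x_2 \cdots x_d,
  \qquad x = (x_1, \dots, x_d) \in [0,1]^d,
\]
```

i.e. the actual point count in the axis-parallel box [0, x) minus its expected value; the dichotomy results compare the L¹ norm of D_N with its norms in exponential Orlicz spaces.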
16

Saha, Nilanjan. "Gap Size Effect on Low Reynolds Number Wind Tunnel Experiments." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/35938.

Abstract:
A system was designed to measure the effect of gap size on semi-span low Reynolds number wind tunnel experiments. The lift forces on NACA 1412, NACA 2412 and NACA 4412 half-wings were measured using a strain-gauge balance at chord Reynolds numbers of 100,000 and 200,000 and three different gap sizes, including a sealed gap. Pressure distributions on both the top and bottom airfoil surfaces in the chordwise direction near the gap were recorded for these airfoils, as was the spanwise pressure distribution on both surfaces at the quarter-chord section. The results revealed that the presence of the gap, however small, affects the measurements. These effects were mainly a drop in lift and changes in the zero-lift angle of attack and in the stall angle of the airfoil. The size of the gap is not linearly related to these changes, which also depend on the camber of the airfoil. The changes occur due to flow through the gap from the lower surface to the upper surface of the model. The wing/end-plate gap effect diminishes along the span but is not fully restricted to the base of the model, and the model behaves more like a full three-dimensional wing than a semi-span model. This study was made possible with the support of the Department of Aerospace and Ocean Engineering, Virginia Polytechnic Institute and State University, under the supervision of Dr. James Marchman.
17

Ritchie, Robert Peter. "Efficient Constructions for Deterministic Parallel Random Number Generators and Quantum Key Distribution." Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1619099112895031.

18

Mansour-Tehrani, Mehrdad. "Spacial distribution and scaling of bursting events in boundary layer turbulence over smooth and rough surfaces." Thesis, University College London (University of London), 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261297.

19

Alqurashi, Faris. "Extension of spray flow modelling using the drop number size distribution moments approach." Thesis, University of Manchester, 2015. https://www.research.manchester.ac.uk/portal/en/theses/extension-of-spray-flow-modelling-using-the-drop-number-size-distribution-moments-approach(9c11e7da-f583-492d-b6a9-29b6fee71438).html.

Abstract:
This work extends the spray model of Watkins and Jones (2010), in which the spray is characterized by evaluating three moments Q_2, Q_3 and Q_4 of a general gamma number size distribution from their transport equations, with sub-models for drop drag, drop breakup and drop collisions formulated in terms of gamma distributions. The original model is non-vaporising, was compared only against cases with low ambient gas temperature, and is restricted to the particular drop-drag and breakup sub-models that produce integrable functions. In this work the model is adjusted to allow a variety of sub-models to be implemented. Three breakup models (TAB, ETAB, DDB) are considered, which were originally introduced for use with the Discrete Droplet Method (DDM) approach. To implement them within the model of Watkins and Jones, the breakup source terms are calculated by grouping the droplets in each cell into parcels containing a number of droplets with similar physical properties (size, velocity, temperature, ...); the source terms of each parcel are calculated, multiplied by the number of droplets in the parcel, and numerically integrated to obtain the resultant effect of drop breakup in each cell, with the number of drops in each cell determined from the gamma size distribution. Three hybrid breakup models (KH-RT, Turb-KH-RT, Turb-TAB), comprising two distinct steps of primary and secondary breakup, are also implemented. The Kelvin-Helmholtz (KH) and turbulence-induced (Turb) breakup models predict the primary breakup of the intact liquid core of a liquid jet, while secondary breakup is modelled using the TAB model and competition between the KH and RT models; both may act simultaneously, but it is assumed that if disintegration occurs due to RT, KH breakup does not occur. For the drag sub-model, a dynamic drag model is introduced which accounts for the effects of drop distortion and oscillation caused by high relative velocity between the liquid and the surrounding gas; the drag coefficient is empirically related to the magnitude of drop deformation, calculated using the TAB model. The effects of mass and heat transfer on the spray are also modelled: an additional equation for the energy of the liquid is solved, the mass transfer rate is evaluated using the model of Godsave (1953) and Spalding (1953), and the Faeth correlation (1983) is used to model heat transfer between the two phases. For all equations of heat and mass transfer between phases, the drop Nusselt and Sherwood numbers are calculated using the correlation of Ranz and Marshall. The liquid surface-average temperature T_l2, calculated by Watkins (2007) assuming a parabolic temperature profile within individual drops, is used instead of the liquid volume-average temperature to determine heat and mass transfer between phases. All equations are treated in an Eulerian framework using the finite volume method. The model has been applied to a wide range of sprays and compared to a number of experiments with different operating conditions, including high liquid injection pressure and high ambient gas density and temperature. Reasonable agreement is found for the ETAB model with most of the data, while the TAB and DDB models consistently underestimate the penetration and drop sizes of the spray. The hybrid breakup models perform well and show better agreement with the available experimental data than the single breakup models. For high-temperature cases, the model correctly captures the effect of evaporation on the different spray properties, especially with the hybrid breakup models.
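For orientation, the moments approach can be written as follows (standard form; the notation is mine and may differ from the thesis's). With a gamma number size distribution over drop diameter D,

```latex
\[
  n(D) \;=\; \frac{N_0}{\Gamma(\alpha)\,\beta^{\alpha}}\; D^{\alpha-1} e^{-D/\beta},
  \qquad
  Q_k \;=\; \int_0^{\infty} D^{k}\, n(D)\, \mathrm{d}D
        \;=\; N_0\,\beta^{k}\,\frac{\Gamma(\alpha+k)}{\Gamma(\alpha)},
\]
```

so transporting Q_2, Q_3 and Q_4 fixes the distribution parameters; Q_3 is proportional to the liquid volume, and ratios of moments give mean diameters (e.g. the Sauter mean diameter D_32 = Q_3/Q_2).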
20

Marangon, Davide Giacomo. "Improving Quantum Key Distribution and Quantum Random Number Generation in presence of Noise." Doctoral thesis, Università degli studi di Padova, 2015. http://hdl.handle.net/11577/3424117.

Abstract:
The argument of this thesis might be summed up as the exploitation of noise to generate better noise: classical noise is exploited to transmit quantum information effectively, and quantum noise is measured to generate better quantum randomness. Chapters 1 and 2 concern sending quantum bits through the atmosphere to set up quantum key distribution (QKD) transmissions. In principle QKD offers unconditional security, but practical realizations must face all the limitations of the real world, one of the main ones being the losses introduced by real transmission channels. Losses cause errors, and errors make the protocol less secure, because an eavesdropper could try to hide her activity behind the losses. From a purely theoretical point of view, the effect of losses is modelled by unitary transforms which affect the qubits on average according to a fixed level of link attenuation. This approach is somehow limiting: with a high level of background noise and losses assumed constant on average, the protocol might abort or not even start, the predicted QBER being too high. To address this problem and generate key when it would normally not be possible, we propose an adaptive real-time selection (ARTS) scheme in which transmissivity peaks are detected instantaneously: an auxiliary classical laser beam, co-propagating with the qubits but conveniently interleaved in time, estimates the link transmissivity on its intrinsic time scale. The link scintillation is thus monitored in real time, and the time intervals of high channel transmissivity, corresponding to a QBER viable for positive key generation, are selected. Chapter 2 presents a demonstration of this protocol under losses equivalent to long-distance and satellite links, with a range of scintillation corresponding to moderate to severe weather; a useful criterion for preselecting the low-QBER intervals is given, employing a train of intense pulses propagating along the same path as the qubits, with parameters chosen so that its fluctuation in time reproduces that of the quantum communication. Chapter 3 describes a novel principle for a true random number generator (TRNG), based on the observation that a coherent beam of light crossing a very long path with atmospheric turbulence generates random, rapidly varying images. To implement the method in a proof-of-concept demonstrator, we chose the very long free-space channel at the Canary Islands used in recent years for quantum communications experiments: after a propagation of 143 km, with the terminals at an altitude of about 2400 m, the turbulence along the path is converted into a dynamical speckle pattern at the receiver, so the source of entropy is the atmospheric turbulence itself. Indeed, for such a long path a solution of the Navier-Stokes equations for the atmospheric flow in which the beam propagates is out of reach; models based on Kolmogorov's statistical theory, which parametrizes the repartition of kinetic energy as the interaction of eddies of decreasing size, provide only a statistical description of the beam spot and its wandering, never an instantaneous prediction of the irradiance distribution. The fluctuations are mainly ruled by temperature variations and by the wind, which cause fluctuations in the air refractive index; for this reason the atmosphere may be considered a dynamic volumetric scatterer that distorts the beam wavefront. We evaluate the experimental data to ensure that the images are uniform and independent, and we show that our method for randomness extraction, based on combinatorial analysis, is optimal in the context of Information Theory. Chapter 5 presents a new approach to the generation of random bits from quantum physical processes. Quantum mechanics has always been regarded as a possible and valuable source of randomness because of its intrinsically probabilistic nature, but the typical paradigm employed to extract random numbers from a quantum system assumes that the state of the system is pure; only under that assumption does the extraction yield fully unpredictable randomness. In real implementations, such as in a laboratory or in a commercial device, it is hardly possible to forge a pure quantum state; one must deal with quantum states featuring some degree of mixedness. A mixed state, however, might be correlated with some other system held by an adversary, a quantum eavesdropper; in the extreme case of a fully mixed state, one is in practice extracting random numbers from a classical state. We show why it is important to shift from a classical randomness estimator, such as the classical min-entropy H_min(Z) of a random variable Z, to a quantum one, the min-entropy conditioned on the quantum side information E. We devise an effective protocol, based on the entropic uncertainty principle, for the estimation of the conditional min-entropy. The entropic uncertainty principle takes into account the information shared between multiple parties holding a multipartite quantum system and, more importantly, bounds the information a party has on the state of the system after it has been measured. We adapt the principle to the bipartite case in which a user Alice (A) is supplied with a quantum system prepared by the provider Eve (E), who could be maliciously correlated with it; in principle Eve might then be able to predict all the outcomes of the measurements Alice performs in the basis Z to extract random numbers. We show that if Alice randomly switches from the measurement basis Z to a basis X mutually unbiased with respect to Z, she can lower-bound the min-entropy conditioned on Eve's side information; in this way Alice can expand a small initial random seed into a much larger amount of trusted numbers. We present the results of an experimental demonstration of the protocol, in which random numbers passing the most rigorous classical tests of randomness were produced. Chapter 6 provides a secure generation scheme for a continuous-variable (CV) QRNG. Since true random numbers are an invaluable resource for both classical information technology and the rising quantum one, sustaining the present and ever-growing fluxes of data to encrypt requires quantum random number generators able to produce numbers at gigabit or terabit-per-second rates. The literature gives several examples of QRNG protocols which could in theory reach such limits, typically based on the quadratures of the electromagnetic field, regarded as an infinite bosonic quantum system. The quadratures can be measured with the well-known homodyne detection scheme, which in principle yields noise of unlimited bandwidth, so the band of the random signal is limited only by the passband of the measuring devices; with photodiode detectors commonly working in the GHz band and a sufficiently fast ADC, gigabit or terabit rates can be reached. However, as in the discrete-variable case, the protocols found in the literature do not properly consider the purity of the quantum state being measured. The idea is to extend the discrete-variable protocol of the previous chapter to the continuous case; we show that in the CV framework one faces not only the problem of state purity but also the problem of the precision of the measurements used to extract the randomness.
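The quantitative heart of the Chapter 5 protocol is an entropic uncertainty relation. In the form commonly used for this kind of randomness certification (my rendering, not a quotation from the thesis), measuring in a basis X mutually unbiased to Z bounds the adversary's guessing power:

```latex
\[
  H_{\min}(Z \mid E) \;\ge\; \log_2 \frac{1}{c} \;-\; H_{\max}(X),
  \qquad
  c \;=\; \max_{z,\, x} \bigl|\langle z \mid x \rangle\bigr|^{2},
\]
```

so for mutually unbiased bases on a d-dimensional system, c = 1/d, and an estimate of H_max(X) from the check measurements lower-bounds the conditional min-entropy, hence the number of trusted bits extractable from the Z outcomes.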
21

Brickman, Larry A. "Numerical evaluation of the pair-distribution function of dilute suspensions at high Péclet number." Thesis, Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/11305.

22

Sharma, R. K. "The number, binding affinity and tissue distribution of autonomic receptors in healthy and diseased lung." Thesis, Imperial College London, 1990. http://hdl.handle.net/10044/1/46547.

23

Cutter, Matthew R. "Dispersion in Steady Pipe Flow with Reynolds Number Under 10,000." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1093008636.

24

Salimi, Farhad. "Characteristics of spatial variation, size distribution, formation and growth of particles in urban environments." Thesis, Queensland University of Technology, 2014. https://eprints.qut.edu.au/69332/1/Farhad_Salimi_Thesis.pdf.

Abstract:
This thesis is the first comprehensive study of important parameters relating to aerosols' impact on climate and human health, namely spatial variation, particle size distribution and new particle formation. We determined the importance of spatial variation of particle number concentration in microscale environments, developed a method for particle size parameterisation and provided knowledge about the chemistry of new particle formation. This is a significant contribution to our understanding of processes behind the transformation and dynamics of urban aerosols. This PhD project included extensive measurements of air quality parameters using state of the art instrumentation at each of the 25 sites within the Brisbane metropolitan area and advanced statistical analysis.
25

Scherer, Michael David. "Comparison of Retention and Stability of Implant-Retained Overdentures Based Upon Implant Location, Number, and Distribution." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1336664206.

26

Mander, Dario <1963>. "A method to reduce the number of parameters to be estimated in a distribution lag-model." Master's Degree Thesis, Università Ca' Foscari Venezia, 2018. http://hdl.handle.net/10579/12016.

Abstract:
Under the assumption of a particular agent response function to a variation of one or more exogenous variables, we transform a macro-economic model into a distributed lag-model. The response function we consider is a gamma distribution function. In this way we can estimate the distributed lag-model coefficients by estimating the two characteristic parameters of the gamma distribution function.
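A generic gamma distributed-lag specification of the kind described (my rendering; the thesis's exact parametrisation may differ) makes the parameter saving explicit:

```latex
\[
  y_t \;=\; \mu \;+\; \beta \sum_{j=0}^{J} w_j\, x_{t-j} \;+\; \varepsilon_t,
  \qquad
  w_j \;=\; \frac{(j+1)^{\alpha-1}\, e^{-(j+1)/\theta}}
                 {\sum_{i=0}^{J} (i+1)^{\alpha-1}\, e^{-(i+1)/\theta}},
\]
```

so only the gamma shape and scale (alpha, theta) and the overall multiplier beta are estimated, instead of J + 1 free lag coefficients.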
27

Segarra, Elan. "An Exploration of Riemann's Zeta Function and Its Application to the Theory of Prime Distribution." Scholarship @ Claremont, 2006. https://scholarship.claremont.edu/hmc_theses/189.

Abstract:
Identified as one of the 7 Millennium Problems, the Riemann zeta hypothesis has successfully evaded mathematicians for over 100 years. Simply stated, Riemann conjectured that all of the nontrivial zeroes of his zeta function have real part equal to 1/2. This thesis attempts to explore the theory behind Riemann’s zeta function by first starting with Euler’s zeta series and building up to Riemann’s function. Along the way we will develop the math required to handle this theory in hopes that by the end the reader will have immersed themselves enough to pursue their own exploration and research into this fascinating subject.
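For reference, the objects in question are standard: Euler's series and product, valid for Re(s) > 1,

```latex
\[
  \zeta(s) \;=\; \sum_{n=1}^{\infty} n^{-s}
           \;=\; \prod_{p\ \mathrm{prime}} \bigl(1 - p^{-s}\bigr)^{-1},
\]
```

with zeta extending meromorphically to the whole complex plane; the Riemann hypothesis asserts that every nontrivial zero lies on the critical line Re(s) = 1/2.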
28

Shebanits, Oleg. "Determination of Ion Number Density from Langmuir Probe Measurements in the Ionosphere of Titan." Thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-132567.

Abstract:
Saturn's largest moon, Titan, is a very interesting subject for study because of the complex organic chemistry of its atmosphere; processes taking place there might shed some light on the origins of organic compounds on Earth in its early days. The international spacecraft Cassini-Huygens was launched to Saturn in 1997 for a detailed study of the gas giant and its moons, specifically Titan. The Swedish Institute of Space Physics in Uppsala manufactured the Langmuir probe instrument for the Cassini spacecraft now orbiting Saturn and is responsible for its operation and data analysis. This project concerns the analysis of measurements of Titan's ionosphere from this instrument, from all "deep" flybys of the moon (<1400 km altitude) in the period October 2004 - April 2010. Using the Langmuir probe analysis tools, the ion flux is derived by compensating for the atmospheric EUV extinction (which varies with the photoelectron current from the probe). The photoelectron current emitted from the probe also produces an artifact in the data that must be subtracted before analysis; this factor had already been modelled, while the extinction of Titan's atmosphere had previously been taken into account only on an event basis, not systematically. The EUV-corrected ion flux data are then used to derive the ion number density in Titan's atmosphere, by setting up an average ion-mass altitude distribution (using Ion and Neutral Mass Spectrometer results for comparison) and deriving the spacecraft speed along the Cassini trajectory through Titan's ionosphere. The resulting ion number densities correlate very well with theoretical ionospheric profiles on the day side of Titan (see the graphical representation in the Results section). On the night side, a perturbation of the ion flux data was discovered by comparison with Ion and Neutral Mass Spectrometer data, supporting earlier measurements of negative ions reported by Coates et al. (2009). The project was carried out at the Swedish Institute of Space Physics (Institutet för Rymdfysik, IRF) in Uppsala.
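The density estimate rests on a simple relation (simplified ram-current form; symbols mine, inferred from the abstract's description rather than quoted from it): when the spacecraft speed greatly exceeds the ion thermal speed, the ion current collected by a probe of area A is approximately

```latex
\[
  I_i \;\approx\; n_i\, q\, A\, v_{\mathrm{sc}}
  \qquad\Longrightarrow\qquad
  n_i \;\approx\; \frac{I_i}{q\, A\, v_{\mathrm{sc}}},
\]
```

so the EUV-corrected current, the probe geometry and the trajectory speed yield the ion number density, with the assumed mean-ion-mass altitude profile entering the full analysis through the conversion of measured current to flux.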
29

Liu, Shaobin. "Characterization, geographic distribution, and number of upper Eocene impact ejecta layers and their correlations with source craters." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file 15.46 Mb., 308 p, 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:3220787.

30

Rodrigues, Jeffrey Collin. "Comparison of shear stability of mini and macroemulsion latexes with respect to particle size and number distribution." Thesis, Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/9136.

31

Cho, Jin Seo. "Three essays on testing hypotheses with irregular conditions /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2002. http://wwwlib.umi.com/cr/ucsd/fullcit?p3071015.

32

Gautham, Smitha. "An Efficient Implementation of an Exponential Random Number Generator in a Field Programmable Gate Array (FPGA)." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2173.

Abstract:
Many physical, biological, ecological and behavioral events occur at times and rates that are exponentially distributed. Modeling these systems requires simulators that can accurately generate a large quantity of exponentially distributed random numbers, which is a computationally intensive task. To improve the performance of these simulators, one approach is to move portions of the computationally inefficient simulation tasks from software to custom hardware implemented in Field Programmable Gate Arrays (FPGAs). In this work, we study efficient FPGA implementations of exponentially distributed random number generators to improve simulator performance. Our approach is to generate uniformly distributed random numbers using standard techniques and scale them using the inverse cumulative distribution function (CDF). Scaling is implemented by curve fitting piecewise linear, quadratic, cubic, and higher order functions to solve for the inverse CDF. As the complexity of the scaling function increases (in terms of order and the number of pieces), number accuracy increases and additional FPGA resources (logic cells and block RAMs) are consumed. We analyze these tradeoffs and show how a designer with particular accuracy requirements and FPGA resource constraints can implement an accurate and efficient exponentially distributed random number generator.
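To illustrate the inverse-CDF scaling and the accuracy/resource trade-off the abstract describes, here is a small software model of the piecewise-linear variant (assumptions mine: uniform knot spacing, illustrative names; this is not the thesis's HDL):

```python
import math

def build_piecewise_linear_icdf(rate, segments=64, u_max=0.999999):
    """Tabulated piecewise-linear fit to the exponential inverse CDF
    F^{-1}(u) = -ln(1 - u) / rate -- the kind of table an FPGA design
    would keep in block RAM."""
    icdf = lambda u: -math.log1p(-u) / rate
    step = u_max / segments
    knots = [i * step for i in range(segments + 1)]
    table = [(icdf(a), (icdf(b) - icdf(a)) / (b - a))   # (value, slope)
             for a, b in zip(knots[:-1], knots[1:])]

    def scale(u):
        """Map a uniform u in [0, u_max) to an approximate exponential."""
        i = min(int(u / step), segments - 1)
        y0, slope = table[i]
        return y0 + slope * (u - knots[i])

    return scale

# Accuracy vs. resources: doubling `segments` costs more block RAM but
# shrinks the worst-case error of the piecewise fit.
scale = build_piecewise_linear_icdf(rate=1.0, segments=64)
worst = max(abs(scale(k / 10000.0) - (-math.log1p(-k / 10000.0)))
            for k in range(9999))
```

Higher-order (quadratic, cubic) pieces reduce the error for the same table size at the cost of extra multipliers, which is exactly the trade-off the thesis analyses.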
33

Elizondo, Schmelkes Franz (Franz Mauricio) 1971. "An analysis of the supply chain management and distribution of refurbishable tools with a finite number of lives." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80495.

Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1999. Includes bibliographical references (p. 103).
34

Thomas, Rinu S. M. "Optimising the number and position of reclosers on a medium voltage distribution line to minimise damage on equipment." Diss., University of Pretoria, 2014. http://hdl.handle.net/2263/45905.

Abstract:
The optimal placement of reclosers on overhead lines in a medium-voltage distribution network is known to improve the reliability of a power system. Traditionally, recloser placement studies have not considered the effect of greater numbers of reclosers on network damage during faults, or the effect of positioning on protection settings; recloser positions that enhance the reliability of the system may not improve other problematic operational aspects, such as damage to equipment and the risk of incorrect tripping due to a sudden increase in loading. This research seeks to prove the hypothesis that recloser placement studies which additionally consider protection-related factors, such as equipment damage and the risk of false tripping, will result in different recloser positions than studies whose priority is only improving reliability indices and cost. A tool is developed that assesses reliability indices, cost, damage and the risk of false tripping, and determines the best recloser positioning based on the priority given to each factor. Using this tool, observations are made on how the added factors of damage and false-tripping risk affect recloser positioning. Adding the protection-related factors to the objective function is unique in its ability to recognise the value of recloser positions that minimise the damage factor and the possibility of tripping on load; in the absence of these factors, such positions would not be identified, as they do not improve reliability or cost. The importance of reliability and cost is not overruled by the addition of the protection-related factors. Considering protection-related factors in the planning process of optimising recloser placement ensures that the protection of the overhead line is optimal and is not compromised in any way, which inherently has a positive effect on the lifespan of the equipment on the feeder and the long-term reliability of the feeder. Dissertation (MEng)--University of Pretoria, 2014.
35

Xu, Wenjing [Verfasser], Andrij [Akademischer Betreuer] Pich, and Walter [Akademischer Betreuer] Richtering. "Polyelectrolyte microgels with controlled number and distribution of charges : from synthesis to application / Wenjing Xu ; Andrij Pich, Walter Richtering." Aachen : Universitätsbibliothek der RWTH Aachen, 2020. http://d-nb.info/1233315986/34.

36

Parsons, Donald Williams. "Intragenic SMN mutations : frequency, distribution, evidence of a founder effect, and modification of SMA phenotype by centromeric copy number /." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488190109869324.

37

Hennig, C., J. J. Mohr, A. Zenteno, et al. "Galaxy Populations in Massive Galaxy Clusters to z = 1.1: Color Distribution, Concentration, Halo Occupation Number and Red Sequence Fraction." OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/623801.

Abstract:
We study the galaxy populations in 74 Sunyaev-Zel'dovich effect selected clusters from the South Pole Telescope survey, which have been imaged in the science verification phase of the Dark Energy Survey. The sample extends up to z ≈ 1.1 with 4 × 10^14 M_⊙ ≤ M_200 ≤ 3 × 10^15 M_⊙. Using the band containing the 4000 Å break and its redward neighbour, we study the colour-magnitude distributions of cluster galaxies to ≈ m* + 2, finding that: (1) the intrinsic rest-frame g - r colour width of the red sequence (RS) population is ≈ 0.03 out to z ≈ 0.85, with a preference for an increase to ≈ 0.07 at z = 1, and (2) the prominence of the RS declines beyond z ≈ 0.6. The spatial distribution of cluster galaxies is well described by the NFW profile out to 4 R_200, with concentrations c_g = 3.59 (+0.20/-0.18), 5.37 (+0.27/-0.24) and 1.38 (+0.21/-0.19) for the full, RS and blue non-RS populations, respectively, but with ≈ 40 to 55 per cent cluster-to-cluster variation and no statistically significant redshift or mass trends. The number of galaxies within the virial region, N_200, exhibits a mass trend indicating that the number of galaxies per unit total mass is lower in the most massive clusters, and shows no significant redshift trend. The RS fraction within R_200 is (68 ± 3) per cent at z = 0.46, varies from ≈ 55 per cent at z = 1 to ≈ 80 per cent at z = 0.1, and exhibits intrinsic variation among clusters.
38

Jin, Kang (advisor: Amnon J. Meir). "The lattice gas model and Lattice Boltzmann model on hexagonal grids." Auburn, Ala., 2005. http://repo.lib.auburn.edu/2005%20Summer/master's/JIN_KANG_53.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Cabrita, Iris Bianca da Silva. "Análise das causas em ato de inspeção sanitária de rejeição e respetiva frequência de carcaças e vísceras de bovino no matadouro Santacarnes S.A." Master's thesis, Universidade de Lisboa. Faculdade de Medicina Veterinária. Instituto Superior de Agronomia, 2014. http://hdl.handle.net/10400.5/7207.

Full text
Abstract:
Master's dissertation in Animal Science Engineering / Animal Production. Growing concern about food safety, in particular the consumption of meat from diseased animals and the transmission of disease to humans through such meat, has led to the need for specific rules on the organisation of official controls of products of animal origin intended for human consumption. This official control, through sanitary inspection activities, aims to guarantee that all meat and derived food products reach the consumer in adequate sanitary condition. In this context, the main objective of this work was to determine the causes of complete rejection of bovine carcasses, of the sets of red viscera, of the sets of white viscera, and of lungs, heart, liver and kidneys, by accompanying the actions of the sanitary inspection team at the Santacarnes S.A. slaughterhouse. The most common causes of rejection were identified through observation and through photographic and documentary records. Data treatment focused on determining frequencies (%) in order to analyse the incidence of each cause and the number of rejections. During the traineeship period (February to July 2012), 7191 cattle were slaughtered and inspected. Based on these records, 28 bovine carcasses were rejected over the same period, and the highest incidence of rejections was observed in livers, kidneys and lungs (27.93%). The main cause of rejection of the whole carcass was purulent pneumonia (0.14%). Livers were rejected mainly because of abscesses (4.55%), and kidneys mostly for the presence of renal calculi (kidney stones) (1.59%). For lungs, the main cause of rejection was (i) a history of animals testing suspect-positive in the intradermal tuberculin test and/or (ii) suspicious lesions detected post mortem, associated with the isolation of Mycobacterium bovis or Mycobacterium tuberculosis on the farm of origin of those animals (5.55%).
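The data treatment described above reduces to simple proportions. As a worked check, on the assumption (not stated explicitly in the abstract) that the percentages are taken over the 7191 inspected animals:

$$f_{\text{carcass}} = \frac{28}{7191} \times 100\% \approx 0.39\%,$$

and on the same assumption the quoted 0.14% for purulent pneumonia would correspond to roughly $0.0014 \times 7191 \approx 10$ carcasses.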
APA, Harvard, Vancouver, ISO, and other styles
40

Mazzini, Martina. "Analysis of the aerosol number size distribution variability and characterization of new particle formation events at Monte Cimone GAW global station." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/23221/.

Full text
Abstract:
The aim of this thesis is to obtain long-term information on the aerosol number size distribution and new particle formation at the Mt. Cimone GAW global station (CMN, 2165 m a.s.l.), to better assess how aerosols impact the Earth system. The size distribution of particles from 9 nm to 500 nm was observed continuously with a DMPS from November 2005 to July 2013 in the framework of the EUSAAR and ACTRIS projects. Size distribution and number concentration are studied at different time scales, together with occurrences of new particle formation (NPF). The typical CMN aerosol number size distribution is bimodal, and the average total number concentration is $1534 \pm 1332\,\mathrm{cm^{-3}}$. The number concentration shows large seasonal variation, with values during warm months almost four times higher than those observed in winter. On a daily time scale, the maximum of total particles occurs in the afternoon. Classifying the aerosol number size distribution into nucleation, Aitken and accumulation modes, the Aitken mode is the main contributor to the total number concentration at about 53%, followed by the accumulation and nucleation modes at 31% and 16%, respectively. NPF events are identified by classifying each day following standardized criteria. CMN is characterized by an NPF event frequency of 26.7%, with the highest occurrence in May and August, while non-events are more frequent during winter. The growth of nucleation-mode particles and the rise of their number concentration both begin around local noon; the former lasts almost three hours, with a mean growth rate of $4.65 \pm 1.97\,\mathrm{nm\,h^{-1}}$, while the latter lasts more than an hour and a half, with a rate of $0.50 \pm 0.56\,\mathrm{cm^{-3}\,s^{-1}}$. The average condensation sink (CS) is $0.280 \times 10^{-3}\,\mathrm{s^{-1}}$ on a typical non-event day and $0.483 \times 10^{-3}\,\mathrm{s^{-1}}$ on a typical event day. However, the low CS observed before the nucleation onset time can be an important factor triggering NPF, except during the winter season.
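The growth rate quoted above follows the standard definition used in NPF studies, $\mathrm{GR} = \Delta d_p / \Delta t$ for the nucleation-mode diameter. As an illustrative example (the start and end diameters here are invented; only the roughly three-hour duration and the order of magnitude match the abstract):

$$\mathrm{GR} = \frac{\Delta d_p}{\Delta t} \approx \frac{23\,\mathrm{nm} - 9\,\mathrm{nm}}{3\,\mathrm{h}} \approx 4.7\,\mathrm{nm\,h^{-1}}.$$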
APA, Harvard, Vancouver, ISO, and other styles
41

Menke, Jan-Hendrik [author]. "A comprehensive approach to implement monitoring and state estimation in distribution grids with a low number of measurements / Jan-Hendrik Menke." Kassel : kassel university press c/o Universität Kassel - Universitätsbibliothek, 2020. http://d-nb.info/1225769213/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Tran, Ngoc Quang. "Optimisation of indoor environmental quality and energy consumption within office buildings." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/64114/1/Ngoc%20Quang_Tran_Thesis.pdf.

Full text
Abstract:
This research investigated airborne particle characteristics and their dynamics inside and around the envelope of mechanically ventilated office buildings, together with building thermal conditions and energy consumption. Based on these, a comprehensive model was developed to facilitate the optimisation of building heating, ventilation and air conditioning systems, in order to protect the health of their occupants and minimise the energy requirements of these buildings.
APA, Harvard, Vancouver, ISO, and other styles
43

Rahman, Md Mahmudur. "Investigations into source contributions and spatial distribution of airborne pollutants in an urban airshed by the development and application of advanced statistical models." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/108952/2/Md_Mahmudur_Rahman_Thesis.pdf.

Full text
Abstract:
This thesis developed novel statistical modelling methods to quantify airborne gaseous and particle concentrations and their source contributions in an urban area. In particular, it developed a novel land use regression (LUR) model for predicting daily average airborne gaseous concentrations, a novel Bayesian modelling approach for quantifying the source contributions of airborne ultrafine particles, and a geostatistical modelling approach for quantifying the spatial concentrations of airborne pollutants. In addition, the nighttime new particle formation mechanism and the physical properties of the resulting particles were investigated for the first time. The findings have applications in urban planning and management as well as in epidemiological studies.
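The land use regression (LUR) approach mentioned above can be sketched in a few lines: observed concentrations are regressed on land-use predictors, and the fitted coefficients then predict concentrations at unmonitored locations. The predictors and data below are synthetic, illustrative assumptions; they are not the variables used in the thesis.

```python
# Minimal land use regression (LUR) sketch on synthetic data: a daily
# pollutant concentration regressed on land-use predictors. The predictor
# names are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    np.ones(n),                   # intercept
    rng.uniform(0, 5, n),         # e.g. traffic intensity within 100 m (arbitrary units)
    rng.uniform(0, 1, n),         # e.g. fraction of industrial land within 500 m
])
true_beta = np.array([10.0, 3.0, 8.0])
y = X @ true_beta + rng.normal(0, 1, n)      # synthetic daily concentration

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # recovers approximately [10, 3, 8]
```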
APA, Harvard, Vancouver, ISO, and other styles
44

Mejia, Jaime F. "Long-term trends in fine particle number concentrations in the urban atmosphere of Brisbane : the relevance of traffic emissions and new particle formation." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/26283/1/Jaime_Mejia_Thesis.pdf.

Full text
Abstract:
The measurement of submicrometre (< 1.0 µm) and ultrafine (diameter < 0.1 µm) particle number concentrations has attracted attention since the last decade because the potential health impacts associated with exposure to these particles can be more significant than those of exposure to larger particles. At present, ultrafine particles are not regularly monitored and are yet to be incorporated into air quality monitoring programs. As a result, very few studies have analysed long-term and spatial variations in ultrafine particle concentration, and none have been conducted in Australia. To address this gap in scientific knowledge, the aim of this research was to investigate the long-term trends and seasonal variations in particle number concentrations in Brisbane, Australia. Data collected over a five-year period were analysed using weighted regression models: monthly mean concentrations in the morning (6:00-10:00) and the afternoon (16:00-19:00) were regressed against time in months, using the monthly variances as the weights. Over the five-year period, submicrometre and ultrafine particle concentrations in the morning increased by 105.7% and 81.5% respectively, whereas in the afternoon there was no significant trend. The morning concentrations were associated with fresh traffic emissions and the afternoon concentrations with the background. The statistical tests applied to the seasonal models, on the other hand, indicated no seasonal component. The spatial variation in size distribution across a large urban area was investigated using particle number size distribution data collected at nine different locations during different campaigns. The size distributions were represented by modal structures and cumulative size distributions. Particle number peaked at around 30 nm, except at an isolated site dominated by diesel trucks, where it peaked at around 60 nm. Ultrafine particles contributed 82%-90% of the total particle number. At sites dominated by petrol vehicles, nanoparticles (< 50 nm) contributed 60%-70% of the total particle number; at the site dominated by diesel trucks they contributed 50%. Although the sampling campaigns took place during different seasons and were of varying duration, these variations did not affect the particle size distributions; the results suggested that the distributions were instead shaped by differences in traffic composition and distance to the road. To investigate the occurrence of nucleation events, that is, secondary particle formation from gaseous precursors, particle size distribution data collected over a 13-month period during five different campaigns were analysed. The study area was a complex urban environment influenced by anthropogenic and natural sources. The study introduced a new application of time series differencing for the identification of nucleation events. To evaluate the conditions favourable to nucleation, the meteorological conditions and gaseous concentrations prior to and during nucleation events were recorded. Gaseous concentrations did not exhibit a clear pattern of change. It was also found that nucleation was associated with sea breeze and long-range transport. The implication of this finding is that, whilst vehicles are the most important source of ultrafine particles, sea breeze and aged gaseous emissions play a more important role in secondary particle formation in the study area.
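The weighted trend regression described above can be sketched as follows. The data are synthetic, and inverse-variance weighting is assumed as the conventional reading of "using the monthly variances as the weights"; the thesis's exact convention may differ.

```python
# Sketch of a weighted trend regression: monthly mean concentrations
# against time in months, weighted by inverse monthly variance (assumed).
# All data are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(60.0)                          # five years of monthly means
means = 5000 + 40 * months + rng.normal(0, 300, 60)
variances = rng.uniform(200, 800, 60) ** 2        # per-month sampling variances
w = 1.0 / variances                               # weight down noisy months

X = np.column_stack([np.ones_like(months), months])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ means)
print(beta)  # [intercept, slope per month]; a positive slope indicates an upward trend
```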
APA, Harvard, Vancouver, ISO, and other styles
45

Vagharshakyan, Armen. "Estimates for discrepancy and Calderon-Zygmund operators." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34706.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Linlin, Shiyin Shen, Jinliang Hou, et al. "GALACTIC EXTINCTION AND REDDENING FROM THE SOUTH GALACTIC CAP u-BAND SKY SURVEY: u-BAND GALAXY NUMBER COUNTS AND u − r COLOR DISTRIBUTION." IOP Publishing Ltd, 2017. http://hdl.handle.net/10150/623264.

Full text
Abstract:
We study the integral Galactic extinction and reddening based on the galaxy catalog of the South Galactic Cap u-band Sky Survey (SCUSS), where u-band galaxy number counts and $u-r$ color distribution are used to derive the Galactic extinction and reddening respectively. We compare these independent statistical measurements with the reddening map of Schlegel et al. (SFD) and find that both the extinction and reddening from the number counts and color distribution are in good agreement with the SFD results in low-extinction regions ($E(B-V)_{\rm SFD} < 0.12$ mag). However, in high-extinction regions ($E(B-V)_{\rm SFD} > 0.12$ mag), the SFD map systematically overestimates the Galactic reddening, an effect that can be approximated by the linear relation $\Delta E(B-V) = 0.43\,[E(B-V)_{\rm SFD} - 0.12]$. By combining the results from galaxy number counts and color distribution, we find that the shape of the Galactic extinction curve is in good agreement with the standard $R_V = 3.1$ extinction law of O'Donnell.
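Reading the quoted relation as the amount by which the SFD map overestimates reddening at high extinction, a corrected value follows by subtraction. For example, with an arbitrary input of $E(B-V)_{\rm SFD} = 0.30$ mag:

$$\Delta E(B-V) = 0.43 \times (0.30 - 0.12) \approx 0.077\ \mathrm{mag}, \qquad E(B-V)_{\rm corr} \approx 0.30 - 0.077 \approx 0.22\ \mathrm{mag}.$$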
APA, Harvard, Vancouver, ISO, and other styles
47

Howard, David M. "A study of discrepancy results in partially ordered sets." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34794.

Full text
Abstract:
In 2001, Fishburn, Tanenbaum, and Trenk published a pair of papers that introduced the notions of linear and weak discrepancy of a partially ordered set, or poset. The linear discrepancy of a poset is the least $k$ for which some linear extension places every pair of incomparable points within distance $k$ of each other in the ordering. Weak discrepancy is defined similarly, except that the distance is measured over weak labelings (i.e. two points may receive the same label if they are incomparable, but order is still preserved). My thesis gives a variety of results pertaining to these properties and other forms of discrepancy in posets. The first chapter partially answers a question of Fishburn, Tanenbaum, and Trenk, which was to characterize the posets with linear discrepancy two: it gives the characterization for posets of width two and references the paper where the full characterization appears. The second chapter introduces the notion of $t$-discrepancy, which is similar to weak discrepancy except that only weak labelings with at most $t$ copies of any label are considered. This chapter shows that determining a poset's $t$-discrepancy is NP-complete; it also gives the $t$-discrepancy of the disjoint sum of chains and provides a polynomial-time algorithm for determining the $t$-discrepancy of semiorders. The third chapter presents another notion of discrepancy, namely total discrepancy, which minimizes the average distance between incomparable elements; this chapter proves that, unlike linear discrepancy and $t$-discrepancy, this value can be found in polynomial time. The final chapter answers another question of Fishburn, Tanenbaum, and Trenk, which asked for a characterization of the posets that have equal linear and weak discrepancies. Although deciding whether the weak and linear discrepancies of a poset are equal is NP-complete, the set of minimal posets with this property is given. At the end of the thesis I discuss two further open problems, not covered in the previous chapters, that relate to linear discrepancy: the first asks whether there is a link between a poset's dimension and its linear discrepancy; the second concerns approximating linear discrepancy and possible ways to do it.
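Following the definition above, linear discrepancy can be checked by brute force on small posets: enumerate the linear extensions and minimise the worst distance between incomparable points. The four-point poset below is an arbitrary example; the enumeration is exponential and meant only as an illustration.

```python
# Brute-force linear discrepancy of a tiny poset: minimise, over all
# linear extensions, the maximum distance between incomparable points.
# Exponential-time; for illustration only.
from itertools import permutations

points = ["a", "b", "c", "d"]
less_than = {("a", "c"), ("b", "c"), ("a", "d"), ("b", "d")}  # antichain {a, b} below antichain {c, d}

def comparable(x, y):
    return (x, y) in less_than or (y, x) in less_than

def is_linear_extension(order):
    pos = {p: i for i, p in enumerate(order)}
    return all(pos[x] < pos[y] for (x, y) in less_than)

best = None
for order in permutations(points):
    if not is_linear_extension(order):
        continue
    pos = {p: i for i, p in enumerate(order)}
    worst = max(abs(pos[x] - pos[y])
                for x in points for y in points
                if x < y and not comparable(x, y))
    best = worst if best is None else min(best, worst)

print(best)  # 1: the extension (a, b, c, d) keeps both incomparable pairs adjacent
```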
APA, Harvard, Vancouver, ISO, and other styles
48

Mance, Bill. "Normal Numbers with Respect to the Cantor Series Expansion." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1274431587.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Peng, Haolei. "Effects of two-way left-turn lane on roadway safety." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000289.

Full text
APA, Harvard, Vancouver, ISO, and other styles