To see the other types of publications on this topic, follow the link: Neyron.

Dissertations / Theses on the topic 'Neyron'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Neyron.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Haiying. "Ranked set sampling for binary and ordered categorical variables with applications in health survey data." Connect to this title online, 2004. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1092770729.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2004. Title from first page of PDF file. Document formatted into pages; contains xiii, 109 p.; also includes graphics. Includes bibliographical references (p. 99-102). Available online via OhioLINK's ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
2

Sabbaghi, Arman. "Dilemmas in Design: From Neyman and Fisher to 3D Printing." Thesis, Harvard University, 2014. http://dissertations.umi.com/gsas.harvard:11367.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rudloff, Birgit. "A generalized Neyman-Pearson lemma for hedge problems in incomplete markets." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501316.

Full text
Abstract:
Some financial problems, such as minimizing the shortfall risk when hedging in incomplete markets, lead to problems belonging to test theory. This paper considers a generalization of the Neyman-Pearson lemma. Using methods of convex duality, we deduce the structure of an optimal randomized test when testing a compound hypothesis against a simple alternative. We give necessary and sufficient optimality conditions for the problem.
APA, Harvard, Vancouver, ISO, and other styles
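The generalized Neyman-Pearson lemma in the entry above extends the classical lemma for testing a simple null against a simple alternative. A minimal Monte Carlo sketch of that classical case (the Gaussian densities, the 5% level, and the sample sizes are arbitrary choices for illustration, not anything taken from the thesis) is:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simple null H0: X ~ N(0, 1); simple alternative H1: X ~ N(1, 1).
f0 = stats.norm(loc=0.0, scale=1.0)
f1 = stats.norm(loc=1.0, scale=1.0)
alpha = 0.05

def likelihood_ratio(x):
    return f1.pdf(x) / f0.pdf(x)

# Calibrate the threshold c so that P_H0(LR(X) > c) = alpha; for a continuous
# likelihood ratio no randomization is needed (the randomization weight is 0).
x0 = f0.rvs(size=200_000, random_state=rng)
c = np.quantile(likelihood_ratio(x0), 1.0 - alpha)

def phi(x):
    """Test function: probability of rejecting H0 given the observation x."""
    return (likelihood_ratio(x) > c).astype(float)

# Monte Carlo estimates of the size and the power of the test.
size = phi(f0.rvs(size=200_000, random_state=rng)).mean()
power = phi(f1.rvs(size=200_000, random_state=rng)).mean()
print(f"size ~ {size:.3f}, power ~ {power:.3f}")
```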
4

Soto, Fernández Carlos Federico. "Estrategia de expansión comercial a China de Viña Neyen." Tesis, Universidad de Chile, 2009. http://repositorio.uchile.cl/handle/2250/133817.

Full text
Abstract:
Magíster en Gestión para la Globalización. The main objective of this study is to analyze the attractiveness of the Chinese wine market, specifically the premium wine segment, so that Viña Neyen can pursue its commercial expansion into that country. Secondly, after assessing the attractiveness of the market, a specific proposal is made for Viña Neyen's commercial expansion strategy into China, including the changes and improvements the organization would need to make to achieve it, taking into account the particular features of this market. The study first presents a declarative process for Viña Neyen, formulating its statements of intent. An external analysis of the Chinese market is then carried out from a political, economic, social and technological point of view, as well as from the perspective of the competitive forces the winery faces in this new environment. Next, an internal analysis of the company establishes which competitive advantages Viña Neyen possesses and which opportunities it can actually exploit in its commercial expansion into the Chinese market. Finally, a strategy for that expansion is proposed, reviewing the issues that matter when deciding to enter a market as complex as China. The analysis of the Chinese market shows complexities such as the language, the size of the territory, the absence of national distribution networks and the way of doing business. Nevertheless, there are great opportunities: it is a market that is opening up to the rest of the world and that, in the premium red segment alone, is roughly 400 times larger than Viña Neyen's maximum production capacity, reaching 45 million liters per year. Chinese consumers are becoming more westernized and the consumption of high-quality wine is growing. In addition, the government's strong stimulus of consumption suggests that the current financial crisis will have relatively minor effects on this small luxury-goods market. As a strategy for commercial expansion into China, it is proposed that Viña Neyen contact a local partner, preferably a distributor, in order to focus the marketing of Neyen initially on Shanghai and Beijing. A set of possible distributors with a strong presence in these cities is presented, focused on the on-trade channel and, within it, on the upper-medium and high quality segment. Establishing a long-term relationship with a Chinese distributor will require a major marketing commitment and effort, including frequent visits and strong support in training the sales force. It is also suggested that the winery review its dependence on its head winemaker and its small organizational structure, since the latter could limit future growth. The incorporation of an export manager is recommended, either to handle existing markets or, alternatively, to build the relationship with distributors in this new market. It is concluded that the Chinese market is genuinely attractive and that Viña Neyen and Neyen de Apalta have the characteristics needed to position themselves well in the premium segment. The terroir, the vineyards and the winemaking team have given rise to a high-quality product, internationally recognized by the most prestigious authorities in the field.
APA, Harvard, Vancouver, ISO, and other styles
5

Favre, Anne-Catherine. "Single and multi-site modelling of rainfall based on the Neyman-Scott process /." Lausanne : EPFL, 2001. http://library.epfl.ch/theses/?nr=2320.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Neyra, Fernando. "Concierto de Feffo Neyra. Lanzamiento del disco "Hace 100 días"." Universidad Peruana de Ciencias Aplicadas (UPC), 2021. http://hdl.handle.net/10757/656500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Montoya, Amanda Kay. "Extending the Johnson-Neyman Procedure to Categorical Independent Variables: Mathematical Derivations and Computational Tools." The Ohio State University, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=osu1469104326.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Donnelly, Neysan [Verfasser], and Stefan [Akademischer Betreuer] Jentsch. "Aneuploidy impairs protein folding and genome integrity in human cells / Neysan Donnelly ; Betreuer: Stefan Jentsch." München : Universitätsbibliothek der Ludwig-Maximilians-Universität, 2016. http://d-nb.info/1122019084/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Castillo, Fung Luis Alberto, and Vásquez Victor Bazán. "Entrevista al Dr. Jose Neyra Flores: Reflexiones del Nuevo Código Procesal Penal." Derecho & Sociedad, 2012. http://repositorio.pucp.edu.pe/index/handle/123456789/119065.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Platoni, González Verónica Rocío. "Propuesta de Plan de Expansión y Penetración Comercial de Viña Neyen en Rusia." Tesis, Universidad de Chile, 2009. http://repositorio.uchile.cl/handle/2250/102251.

Full text
Abstract:
The main objective of this study is to analyze the attractiveness of the Russian wine market, specifically the premium wine segment, in order to subsequently develop a marketing plan for Viña Neyen. Alongside this general objective, the specific objectives are to provide Viña Neyen with the tools to find a distributor or agent for entering the Russian market and, secondly, to propose any adjustments to its strategy and structure that may be necessary to carry out the plan. The study begins with a review of the history and origins of Viña Neyen, stating the company's vision and mission. It then analyzes the Russian market at the macro level, covering political, economic, social and technological aspects, as well as the competitive forces the winery would face. An internal analysis of the organization follows. With all this information, the competitive advantages of Viña Neyen and the opportunities it can exploit in its commercial expansion are established. Finally, the strategy for that expansion is suggested, reviewing the issues that matter when deciding to enter the Russian market. The analysis shows that Viña Neyen and its product Neyen de Apalta have the characteristics needed to establish themselves in the premium segment. The terroir, the winemaking team and the vineyards have produced a high-quality product internationally recognized by the most prestigious authorities. For its part, the analysis of the Russian market shows certain complexities, such as the language, the size of the territory and the particular way of doing business. Nevertheless, there are great opportunities: it is a market that is opening up to the rest of the world, its consumers are becoming more westernized, and wine consumption, especially of high quality, is growing and expected to continue doing so in the future. As a strategy for commercial expansion into Russia, it is proposed that Viña Neyen contact a local partner, either an importer or a distributor, with whom to establish a relationship for marketing Neyen in that country. This partner must meet certain requirements, the most important being a strong presence in Moscow and Saint Petersburg and a focus on the on-trade channel and, within it, on the upper-medium and high quality segment. As for adjustments, Viña Neyen should review its organizational structure, which, being small, could limit future growth, as well as its dependence on its head winemaker. Finally, it must maintain the reputation of its product through the recognition of experts and not only through the presence of the current head winemaker. In conclusion, the Russian market is indeed attractive and, at the same time, Viña Neyen has the distinctive competencies needed to enter this market successfully.
APA, Harvard, Vancouver, ISO, and other styles
11

Nejad, K. S. "The geology and tectonic settings of ophiolites and associated rocks in the Neyriz area, south-eastern Iran." Thesis, Bucks New University, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.373604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Rajabzadeh, Mohammad Ali. "Minéralisation en chrome et éléments du groupe du platine dans les ophiolites d'Assemion et de Neyriz, ceinture du Zagros, Iran." Vandoeuvre-les-Nancy, INPL, 1998. http://docnum.univ-lorraine.fr/public/INPL_T_1998_RAJABZADEH_M_A.pdf.

Full text
Abstract:
This thesis studies the chromium and platinum-group element (PGE) mineralization associated with the chromitites of the Assemion and Neyriz ophiolitic massifs, in the Zagros belt of southern Iran. The mantle section of the ophiolitic massifs comprises homogeneous harzburgites overlain by a transition zone containing dunites, pyroxenites and most of the metallurgical chromite deposits. The chromium content of the deposits increases from the homogeneous harzburgites to the transition zone. The metallogenic potential of the massifs appears greater where the magnesiochromite component predominates in association with magnesioferrite. The platinum-group element mineralization associated with the chromitites is enriched in Ru, Os and Ir. The platinum-group minerals (PGM) are always included in chromite, which underscores the role of chromite in trapping the PGM. The PGM are either sulfides (laurite and erlichmanite), alloys (Os-Ir-Ru, Pd-Rh), sulfarsenides (irarsite and hollingworthite) or minerals forming solid solutions. A higher sulfur fugacity in the mineralizing system of Assemion, compared with that of Neyriz, is demonstrated by the nature and proportion of the PGM and of the base-metal sulfides. This mineralizing system would also have been hydrated, as indicated by the nature of the silicates included in the chromites accompanying the PGM. The distribution and cryptic compositional evolution of the PGM, as well as the evolution of their mineralogical assemblages, make it possible to characterize the evolution of the mineralizing system responsible for concentrating the PGE with respect to the successive emplacement of the chromitites. A common origin is proposed for the Cr and PGE mineralizations.
APA, Harvard, Vancouver, ISO, and other styles
13

Leone, Robert Matthew. "Machine learning multi-stage classification and regression in the search for vector-like quarks and the Neyman construction in signal searches." Thesis, The University of Arizona, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10247665.

Full text
Abstract:
A search for vector-like quarks (VLQs) decaying to a Z boson using multi-stage machine learning was compared to a search using a standard square cuts search strategy. VLQs are predicted by several new theories beyond the Standard Model. The searches used 20.3 inverse femtobarns of proton-proton collisions at a center-of-mass energy of 8 TeV collected with the ATLAS detector in 2012 at the CERN Large Hadron Collider. CLs upper limits on production cross sections of vector-like top and bottom quarks were computed for VLQs produced singly or in pairs, Tsingle, Bsingle, Tpair, and Bpair. The two stage machine learning classification search strategy did not provide any improvement over the standard square cuts strategy, but for Tpair, Bpair, and Tsingle, a third stage of machine learning regression was able to lower the upper limits of high signal masses by as much as 50%. Additionally, new test statistics were developed for use in the Neyman construction of confidence regions in order to address deficiencies in current frequentist methods, such as the generation of empty set confidence intervals. A new method for treating nuisance parameters was also developed that may provide better coverage properties than current methods used in particle searches. Finally, significance ratio functions were derived that allow a more nuanced interpretation of the evidence provided by measurements than is given by confidence intervals alone.
APA, Harvard, Vancouver, ISO, and other styles
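The Neyman construction used in the entry above builds a confidence region by collecting all parameter values whose acceptance interval contains the observation. A minimal sketch for the toy case of a Gaussian mean with known unit variance and central acceptance intervals (not the test statistics or CLs machinery of the thesis) is:

```python
import numpy as np
from scipy import stats

def neyman_interval(x_obs, sigma=1.0, alpha=0.05,
                    mu_grid=np.linspace(-10.0, 10.0, 4001)):
    """Confidence interval for a Gaussian mean by the Neyman construction.

    For every candidate mu we build a central acceptance interval
    [mu - z*sigma, mu + z*sigma] with coverage 1 - alpha; the confidence
    set is the collection of mu whose acceptance interval contains x_obs.
    """
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    accepted = np.abs(x_obs - mu_grid) <= z * sigma
    return mu_grid[accepted].min(), mu_grid[accepted].max()

print(neyman_interval(x_obs=1.3))   # roughly (-0.66, 3.26) for alpha = 0.05
```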
14

Leone, Robert Matthew, and Robert Matthew Leone. "Machine Learning Multi-Stage Classification and Regression in the Search for Vector-like Quarks and the Neyman Construction in Signal Searches." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621827.

Full text
Abstract:
A search for vector-like quarks (VLQs) decaying to a Z boson using multi-stage machine learning was compared to a search using a standard square cuts search strategy. VLQs are predicted by several new theories beyond the Standard Model. The searches used 20.3 inverse femtobarns of proton-proton collisions at a center-of-mass energy of 8 TeV collected with the ATLAS detector in 2012 at the CERN Large Hadron Collider. CLs upper limits on production cross sections of vector-like top and bottom quarks were computed for VLQs produced singly or in pairs, Tsingle, Bsingle, Tpair, and Bpair. The two stage machine learning classification search strategy did not provide any improvement over the standard square cuts strategy, but for Tpair, Bpair, and Tsingle, a third stage of machine learning regression was able to lower the upper limits of high signal masses by as much as 50%. Additionally, new test statistics were developed for use in the Neyman construction of confidence regions in order to address deficiencies in current frequentist methods, such as the generation of empty set confidence intervals. A new method for treating nuisance parameters was also developed that may provide better coverage properties than current methods used in particle searches. Finally, significance ratio functions were derived that allow a more nuanced interpretation of the evidence provided by measurements than is given by confidence intervals alone.
APA, Harvard, Vancouver, ISO, and other styles
15

Diniz, Érika Cristina [UNESP]. "Modelos de distribuição espacial de precipitações intensas." Universidade Estadual Paulista (UNESP), 2003. http://hdl.handle.net/11449/91935.

Full text
Abstract:
Precipitation generation models are extremely important nowadays because, with knowledge of the precipitation pattern in a given area, projects can be planned so as to minimize the effects of high-intensity precipitation. In the present work, the Neyman-Scott model and, in particular, the Poisson model are applied to the generation of high-intensity precipitation in the Upper Tietê Basin region, São Paulo state, Brazil. This region suffers annually from floods due to heavy precipitation and the high population density in the area. To apply the Neyman-Scott and Poisson spatial precipitation distribution models, data collected from 1980 to 1997 by a rain-gauge network of thirteen rain gauges were considered.
APA, Harvard, Vancouver, ISO, and other styles
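The Neyman-Scott model applied in the entry above describes rainfall as clusters of rectangular pulses: storm origins arrive as a Poisson process, each storm spawns a Poisson number of rain cells, and each cell has an exponentially distributed delay, duration, and intensity. A minimal single-site sketch (all parameter values are arbitrary placeholders, not values calibrated for the Upper Tietê Basin) is:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_nsrp(hours, lam=0.01, mu_cells=4.0, beta=0.5, eta=1.0, xi=2.0):
    """Single-site Neyman-Scott rectangular pulses model.

    lam      : storm-origin arrival rate (Poisson process, per hour)
    mu_cells : mean number of rain cells per storm (Poisson)
    beta     : rate of the exponential cell displacement from the storm origin
    eta      : rate of the exponential cell duration
    xi       : rate of the exponential cell intensity (mm/h)
    Returns hourly rainfall depths over `hours` hours.
    """
    rain = np.zeros(hours)
    n_storms = rng.poisson(lam * hours)
    storm_origins = rng.uniform(0, hours, size=n_storms)
    for t0 in storm_origins:
        for _ in range(rng.poisson(mu_cells)):
            start = t0 + rng.exponential(1.0 / beta)
            duration = rng.exponential(1.0 / eta)
            intensity = rng.exponential(1.0 / xi)
            # Spread the rectangular pulse over the hourly bins it overlaps.
            lo, hi = int(start), int(np.ceil(start + duration))
            for h in range(lo, min(hi, hours)):
                overlap = min(start + duration, h + 1) - max(start, h)
                rain[h] += intensity * max(overlap, 0.0)
    return rain

hourly = simulate_nsrp(hours=24 * 365)
print(f"mean hourly depth: {hourly.mean():.3f} mm, wet fraction: {(hourly > 0).mean():.2%}")
```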
16

Starvaggi, Patrick William. "Exact Distributions of Sequential Probability Ratio Tests." Kent State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=kent1397042380.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Urpeque, Parraguez Nancy Lucía, and Sánchez Fiorella Liset López. "Percepción del paciente del servicio de cirugía sobre su relación interpersonal con la enfermera. Hospital Agustín Arbulú Neyra, Ferreñafe." Bachelor's thesis, Chiclayo, 2014. http://tesis.usat.edu.pe/jspui/handle/123456789/377.

Full text
Abstract:
This qualitative research, with a case-study approach, aims to identify, characterize and understand the patient's perception of his or her interpersonal relationship with the nurse. The theoretical framework was grounded in Fabro (1982) for perception from the standpoint of philosophical psychology; Cibanal (2003) for communication and the helping relationship; and King (1984), whose nursing theory provided the foundation. The methodology was a case study following Lucke (1986). The research subjects were 5 nurses and 10 patients hospitalized in the surgery ward of Hospital Arbulú Neyra; the sample was obtained by saturation. Content analysis was performed, yielding the following categories: perceiving key elements of a therapeutic relationship during the interpersonal relationship; experiencing good treatment during the interpersonal relationship with the nurse; and perceiving differences in treatment according to the nurse's character. As final considerations, patients perceive good treatment based on trust, empathy and hope in accordance with their care demands, elements that allowed them to feel like family, above all because of the affection shown by the nurses, although they also see differences in treatment depending on the nurses' character and, in certain situations, demand good treatment because a payment is involved for their care. The research respected the scientific principles grounded in Morse (2003) and the ethical research criteria in Sgreccia (2001).
APA, Harvard, Vancouver, ISO, and other styles
18

Urpeque, Parraguez Nancy Lucía, and Sánchez Fiorella Liset López. "Percepción del paciente del servicio de cirugía sobre su relación interpersonal con la enfermera. Hospital Agustín Arbulú Neyra, Ferreñafe." Bachelor's thesis, Universidad Católica Santo Toribio de Mogrovejo, 2014. http://tesis.usat.edu.pe/handle/usat/364.

Full text
Abstract:
This qualitative research, with a case-study approach, aims to identify, characterize and understand the patient's perception of his or her interpersonal relationship with the nurse. The theoretical framework was grounded in Fabro (1982) for perception from the standpoint of philosophical psychology; Cibanal (2003) for communication and the helping relationship; and King (1984), whose nursing theory provided the foundation. The methodology was a case study following Lucke (1986). The research subjects were 5 nurses and 10 patients hospitalized in the surgery ward of Hospital Arbulú Neyra; the sample was obtained by saturation. Content analysis was performed, yielding the following categories: perceiving key elements of a therapeutic relationship during the interpersonal relationship; experiencing good treatment during the interpersonal relationship with the nurse; and perceiving differences in treatment according to the nurse's character. As final considerations, patients perceive good treatment based on trust, empathy and hope in accordance with their care demands, elements that allowed them to feel like family, above all because of the affection shown by the nurses, although they also see differences in treatment depending on the nurses' character and, in certain situations, demand good treatment because a payment is involved for their care. The research respected the scientific principles grounded in Morse (2003) and the ethical research criteria in Sgreccia (2001).
APA, Harvard, Vancouver, ISO, and other styles
19

Valdiviezo, Llontop Stefany Yoshelyn. "Capacidad de agencia de autocuidado de personas adultas con hipertensión arterial del Hospital I Agustín Arbulú Neyra – Ferreñafe. 2019." Bachelor's thesis, Universidad Católica Santo Toribio de Mogrovejo, 2020. http://hdl.handle.net/20.500.12423/2497.

Full text
Abstract:
Arterial hypertension is one of the main health problems worldwide, as it is in our country; people with this disease often do not have a good self-care agency capacity. This study, of a quantitative type, descriptive scope, cross-sectional and non-experimental design, had as its general objective to determine the level of self-care agency capacity in adults with arterial hypertension at Hospital I Agustín Arbulú Neyra - Ferreñafe, 2019. The population consisted of 303 patients with a medical diagnosis of arterial hypertension from the adult/older-adult program of that hospital, and the sample was 137 patients, to whom the questionnaire "Evaluation of self-care agency capacity in patients with arterial hypertension" was applied. To test the reliability of the instrument, a pilot test was carried out, obtaining a Cronbach's alpha of 0.79. Excel 2013 and SPSS version 25.0 were used for data processing and analysis. Results: the level of self-care agency capacity of the hypertensive adults surveyed was high (58.4%); likewise, the dimensions with a high level were the power components, with 78.8%, followed by the fundamental capabilities (64.2%), while 64.2% obtained a medium level in the dimension of capability to operationalize self-care. Conclusion: 58.4% of the hypertensive patients of Hospital I Agustín Arbulú Neyra have a high self-care agency capacity.
APA, Harvard, Vancouver, ISO, and other styles
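The reliability check reported in the entry above uses Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch of the computation on made-up item scores (not the thesis questionnaire data) is:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Toy example: 6 respondents answering a 4-item Likert-style questionnaire.
scores = np.array([[4, 5, 4, 4],
                   [3, 3, 2, 3],
                   [5, 5, 5, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [3, 2, 3, 3]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```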
20

Peres, David Johnny. "The hydrologic control on shallow landslide triggering: empirical and Monte Carlo physically-based approaches." Doctoral thesis, Università di Catania, 2013. http://hdl.handle.net/10761/1387.

Full text
Abstract:
In this dissertation, a Monte Carlo approach that combines stochastic and deterministic modeling is used to analyze the hydrological control on shallow landslide triggering. In particular, an integrated stochastic rainfall and deterministic landslide simulator has been developed for this purpose. The simulator is composed of the following components: (i) a seasonal Neyman-Scott Rectangular Pulses (NSRP) model to generate synthetic hourly point rainfall data; (ii) a module for rainfall event identification and separation from dry intervals; (iii) the Transient Rainfall Infiltration and Grid-Based Regional Slope-Stability (TRIGRS) model, version 2 (Baum et al., 2008, 2010), to simulate landslide triggering by rainfall infiltration, combined with a water table recession (WTR) model that computes the initial water table height to be used when simulating rainfall events with TRIGRS. The Monte Carlo simulator has been applied to the Loco catchment in the Peloritani Mountains of northeastern Sicily, Italy, an area of high landslide risk, as recently demonstrated by the regional debris-flow event of 1 October 2009, which caused 37 casualties and millions of euros of damage. The Monte Carlo approach has been used to estimate return periods of shallow landslide triggering and to evaluate the most commonly used types of empirical rainfall threshold. Using the Monte Carlo approach to estimate the return period of landsliding represents an advance over approaches based on rainfall Intensity-Duration-Frequency (IDF) curves, applied by several researchers, for two reasons. First, because the response of a hillslope to hyetographs of rectangular (or any other predefined) shape may differ significantly from its response to a realistic, stochastically variable hyetograph. Second, and more importantly, because the Monte Carlo approach, in which the water table depth at the beginning of each rainfall event is determined by the antecedent rainfall history, avoids the drawback of assuming an arbitrary initial water table depth (for instance, zero), which occurs with a probability that should be taken into account when estimating the return period. Indeed, IDF-based return period estimation is in principle flawed because the conditional probability of the rainfall event, given the assumed initial water table height, should be considered. Monte Carlo simulations have made it possible to map the return period of landslide triggering over the case-study catchment. The simulation results have been analyzed to evaluate, from a theoretical perspective, the Intensity-Duration empirical model paradigm, i.e., to understand whether the stochastic nature of rainfall, combined with the physical processes of soil-water movement, provides a theoretical justification for this most widely used empirical model. In fact, in spite of its consolidated use, no particular theoretical justification for the Intensity-Duration empirical model exists. The paradigm is that a rainfall threshold for landslide triggering is a straight line in the bi-logarithmic (mean) intensity-duration plane. The results obtained show that the stochastic structure of real rainfall events, combined with the infiltration response, does in a certain sense provide a theoretical justification for the I-D relationship. Iso-pore-pressure points in the bi-logarithmic (mean) intensity-duration plane lie, with relatively low scatter, around a straight line when the initial water table height is negligible. This means that the I-D model is a valid model for interpreting data when the memory of pore pressures is negligible. In the opposite, and more likely, case, the I-D model should be coupled with an antecedent rainfall model.
APA, Harvard, Vancouver, ISO, and other styles
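The Intensity-Duration threshold paradigm evaluated in the entry above is a power law, I = a * D^b, i.e. a straight line in log-log coordinates. A minimal sketch of fitting such a threshold by least squares in log space, with synthetic duration-intensity pairs standing in for simulator output, is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic (duration [h], mean intensity [mm/h]) pairs for triggering events,
# generated around a "true" power law I = a * D**b purely for illustration.
a_true, b_true = 20.0, -0.6
durations = rng.uniform(1.0, 72.0, size=200)
intensities = a_true * durations**b_true * rng.lognormal(0.0, 0.15, size=200)

# Least-squares fit of log I = log a + b log D.
X = np.column_stack([np.ones_like(durations), np.log(durations)])
coef, *_ = np.linalg.lstsq(X, np.log(intensities), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
print(f"fitted threshold: I = {a_hat:.1f} * D^{b_hat:.2f}")
```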
21

Sheikoleslami, Mohammad-Reza. "Évolution structurale et métamorphique de la marge sud de la microplaque de l'Iran central : les complexes métamorphiques de la région de Neyriz (zone de Sanandaj-Sirjan)." Brest, 2002. http://www.theses.fr/2002BRES2018.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Damiano, Rafael Gaspar. "Modelagem estocástica da demanda individualizada de água residencial." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/18/18138/tde-18032019-165703/.

Full text
Abstract:
The modelling of residential water demand provides important inputs for the design and management of water supply networks. The behavior of this demand can be described through stochastic processes, characterized by the occurrence of rectangular pulses of water demand over time. In this context, the objectives of this research were to monitor and model residential water demand using the Neyman-Scott Rectangular Pulse (NSRP) and Overall Pulse (OP) stochastic models. While the NSRP model attempts to simulate water demand by representing its elementary constituents, the OP model aims to represent directly the aggregate water demand of the end users, as observed at the water meters. The calibration and validation of the models were carried out by monitoring the water consumption of four residences located in the city of São Carlos, characterized by different supply profiles. To this end, datalogger devices were developed which, coupled with the pulse sensors/emitters of the water meters, allowed the water consumption of individual residential users to be monitored over time. During the research, negative effects on the modelling were observed, related to the influence of household storage tanks on the temporal pattern of water consumption of the residences. To mitigate these effects, modifications were proposed in the calibration and generation stages of the synthetic water demand series. In general, it was observed that the proposed modifications helped the synthetic series generated by the NSRP and OP models reproduce the statistics of the observed series more accurately, especially regarding the intensities and durations of the simulated demands. Although the modified versions of the NSRP and OP models showed similar performance in reproducing the means, variances and covariances of the observed series, the OP model reproduced the observed daily consumed volumes more consistently.
APA, Harvard, Vancouver, ISO, and other styles
23

Sheikhi, Farid. "Entropy Filter for Anomaly Detection with Eddy Current Remote Field Sensors." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31100.

Full text
Abstract:
We consider the problem of extracting a specific feature from a noisy signal generated by a multi-channel Remote Field Eddy Current Sensor. The sensor is installed on a mobile robot whose mission is the detection of anomalous regions in metal pipelines. Given the noise that characterizes the data series, anomaly signals can be masked and therefore difficult to identify in some instances. In order to enhance signal peaks that potentially identify anomalies, we consider an entropy filter built on a posteriori probability density functions associated with the data series. Thresholds based on the Neyman-Pearson criterion for hypothesis testing are derived. The algorithmic tool is applied to the analysis of data from a portion of pipeline with a set of anomalies introduced at predetermined locations. The critical areas identified as anomalous capture the set of damaged locations, demonstrating the effectiveness of the filter for detection with the Remote Field Eddy Current Sensor.
APA, Harvard, Vancouver, ISO, and other styles
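The thresholding idea in the entry above, fixing the false-alarm probability and then flagging exceedances (the Neyman-Pearson criterion), can be sketched by calibrating a threshold on the empirical distribution of the filter output under anomaly-free conditions. The sketch below uses synthetic scores, not the entropy filter of the thesis:

```python
import numpy as np

rng = np.random.default_rng(7)

# Filter outputs (e.g. local entropy values) recorded on anomaly-free pipe
# sections; replaced here by synthetic draws purely for illustration.
normal_scores = rng.normal(loc=1.0, scale=0.2, size=5000)

# Neyman-Pearson style calibration: pick the threshold that keeps the
# false-alarm probability at a prescribed level P_FA.
p_fa = 0.01
threshold = np.quantile(normal_scores, 1.0 - p_fa)

# New measurements are flagged as anomalous when they exceed the threshold.
new_scores = rng.normal(loc=1.0, scale=0.2, size=1000)
new_scores[::100] += 1.5          # inject a few artificial "anomalies"
alarms = new_scores > threshold
print(f"threshold = {threshold:.3f}, flagged {alarms.sum()} of {len(new_scores)} samples")
```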
24

Diniz, Érika Cristina. "Modelos de distribuição espacial de precipitações intensas /." Rio Claro : [s.n.], 2003. http://hdl.handle.net/11449/91935.

Full text
Abstract:
Precipitation generation models are extremely important nowadays because, with knowledge of the precipitation pattern in a given area, projects can be planned so as to minimize the effects of high-intensity precipitation. In the present work, the Neyman-Scott model and, in particular, the Poisson model are applied to the generation of high-intensity precipitation in the Upper Tietê Basin region, São Paulo state, Brazil. This region suffers annually from floods due to heavy precipitation and the high population density in the area. To apply the Neyman-Scott and Poisson spatial precipitation distribution models, data collected from 1980 to 1997 by a rain-gauge network of thirteen rain gauges were considered. Advisor: Roberto Naves Domingos. Co-advisor: José Silvio Govone. Examining committee: José Manoel Balthazar and Marco Aurélio Sicchiroli Lavrador. Master's thesis.
APA, Harvard, Vancouver, ISO, and other styles
25

Vandenbroucke, Maxence [Verfasser], Bernhard [Akademischer Betreuer] Ketzer, Damien [Akademischer Betreuer] Neyret, et al. "Development and Characterization of Micro-Pattern Gas Detectors for Intense Beams of Hadrons / Maxence Vandenbroucke. Gutachter: Silvia Dalla Torre ; Stephan Paul ; Werner Riegler ; Walter Henning. Betreuer: Bernhard Ketzer ; Damien Neyret." München : Universitätsbibliothek der TU München, 2012. http://d-nb.info/1029818746/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Jannessary, Mohammad Reza. "Les ophiolites de Neyriz (Sud de l'Iran) : Naissance d'une dorsale en pied de marge continentale (étude des structures internes, de la fabrique du manteau, et de l'évolution pétro-géochimique des magmas)." Université Louis Pasteur (Strasbourg) (1971-2008), 2003. http://www.theses.fr/2003STR1GE12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Xu, Yangyi. "Frequentist-Bayesian Hybrid Tests in Semi-parametric and Non-parametric Models with Low/High-Dimensional Covariate." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/71285.

Full text
Abstract:
We provide a Frequentist-Bayesian hybrid test statistic in this dissertation for two testing problems. The first is to design a test for significant differences between non-parametric functions, and the second is to design a test allowing any departure of the predictors of a high-dimensional X from constant. The construction of the proposed test statistics is given for both problems. For the first testing problem, the statistical difference among massive outcomes or signals is of interest in many diverse fields, including neurophysiology, imaging, engineering, and other related fields. However, such data often come from nonlinear systems, exhibit row/column patterns and non-normal distributions, and contain other hard-to-identify internal relationships, which makes testing the significance of differences between them difficult because of both the unknown relationships and the high dimensionality. In this dissertation, we propose an Adaptive Bayes Sum Test capable of testing the significance of the difference between two nonlinear systems based on universal non-parametric mathematical decomposition/smoothing components. Our approach is developed by adapting the Bayes sum test statistic of Hart (2009). Any internal pattern is treated through a Fourier transformation. Resampling techniques are applied to construct the empirical distribution of the test statistic in order to reduce the effect of non-normal distributions. A simulation study suggests that our approach performs better than the alternative method, the Adaptive Neyman Test of Fan and Lin (1998). The usefulness of our approach is demonstrated with an application to the identification of electronic chips as well as an application to testing changes in precipitation patterns. For the second testing problem, numerous statistical methods have been developed for analyzing high-dimensional data. These methods mainly focus on variable selection, are of limited use for testing with high-dimensional data, and often require explicit, differentiable likelihood functions. In this dissertation, we propose a "Hybrid Omnibus Test" for testing with high-dimensional data under much weaker requirements. Our Hybrid Omnibus Test is developed in a semi-parametric framework where a likelihood function is no longer necessary. It is a Frequentist-Bayesian hybrid score-type test for a functional generalized partial linear single index model, whose link is a functional of the predictors through a generalized partially linear single index. We propose an efficient score based on estimating equations to overcome the mathematical difficulty of likelihood derivation and use it to construct our Hybrid Omnibus Test. We compare our approach with an empirical likelihood ratio test and with Bayesian inference based on Bayes factors in a simulation study, in terms of false positive rate and true positive rate. Our simulation results suggest that our approach outperforms these alternatives in terms of false positive rate, true positive rate, and computation cost in both the high-dimensional and the low-dimensional case. The advantage of our approach is also demonstrated on published biological results, with an application to genetic pathway data for type II diabetes.
APA, Harvard, Vancouver, ISO, and other styles
28

Tvaranaviciute, Iveta. "Fisher Inference and Local Average Treatment Effect: A Simulation study." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412989.

Full text
Abstract:
This thesis studies inference for the complier average treatment effect, denoted LATE. The standard approach is to base the inference on the two-stage least squares (2SLS) estimator and asymptotic Neyman inference, i.e., the t-test. The thesis suggests a Fisher Randomization Test based on the t-test statistic as an alternative to the Neyman inference. Based on a setup with a randomized experiment with noncompliance, for which one can identify the LATE, I compare the two approaches in Monte Carlo (MC) simulations. The result of the MC simulations is that the Fisher randomization test is not a valid alternative to Neyman's test, as it has too low power.
APA, Harvard, Vancouver, ISO, and other styles
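The Fisher Randomization Test compared in the entry above re-randomizes the assignment and recomputes a test statistic under the sharp null of no effect for any unit. The sketch below uses the intention-to-treat t-statistic as the test statistic rather than the 2SLS t-statistic studied in the thesis, and all data are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def itt_tstat(z, y):
    """Two-sample t-statistic for the intention-to-treat contrast of Y by assignment z."""
    return stats.ttest_ind(y[z == 1], y[z == 0], equal_var=False).statistic

def fisher_randomization_pvalue(z, y, n_perm=2000):
    """FRT of the sharp null of no effect for any unit: under the null the
    outcomes are fixed, so we re-randomize z and recompute the statistic."""
    t_obs = abs(itt_tstat(z, y))
    t_perm = np.array([abs(itt_tstat(rng.permutation(z), y)) for _ in range(n_perm)])
    return (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)

# Toy randomized experiment with one-sided noncompliance.
n = 500
z = rng.integers(0, 2, size=n)          # random assignment (the instrument)
complier = rng.random(n) < 0.6          # 60% compliers
d = z * complier                        # treatment actually received
y = 1.0 * d + rng.normal(size=n)        # unit treatment effect of 1 for compliers

late_hat = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print(f"Wald/2SLS LATE estimate: {late_hat:.2f}")
print(f"FRT p-value (sharp null of no effect): {fisher_randomization_pvalue(z, y):.3f}")
```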
29

Poluru, Raja Rohit. "Random Access Procedures for Narrow-Band Internet of Things via Non-Terrestrial Networks." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The Internet of Things (IoT) is a fundamental pillar of the digital transformation, since it enables interaction with the physical world by means of sensing and actuation. 3GPP recognized the importance of IoT by introducing new features aimed at supporting this technology. In particular, in Rel. 13, 3GPP launched NB-IoT to provide support for LPWANs. Since these devices will also be deployed in areas where terrestrial networks are not feasible or commercially viable, satellite networks will play a complementary role thanks to their capability to provide worldwide connectivity through their large footprint and short service deployment time. In this context, the aim of this thesis is to analyze the feasibility of integrating NB-IoT technology with satellite communication systems, focusing on the Random Access (RA) procedure. Indeed, the RA is the most critical procedure, since it allows the UE to achieve uplink synchronization, obtain its permanent ID, and get the resources for uplink transmission. The objective of the thesis is the assessment of preamble detection in the SatCom environment. The thesis is divided into five chapters: the first highlights the main characteristics of NB-IoT technology. In the second chapter, a detailed description of the synchronization and RA procedures is presented, with a review of the state of the art on preamble detection; in the third chapter the main features of SatCom systems are described, with particular emphasis on the impairments that could hamper the RA procedure. In the fourth chapter, a thorough analysis of the detection process and of the implementation of the transmission and reception chain, carried out by means of a simulator built in MATLAB, is provided, together with the results. Finally, chapter five concludes the work.
APA, Harvard, Vancouver, ISO, and other styles
30

Salhi, Khaled. "Risques extrêmes en finance : analyse et modélisation." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0192/document.

Full text
Abstract:
This thesis studies risk management and hedging based on the Value-at-Risk (VaR) and the Conditional Value-at-Risk (CVaR) as risk measures. The first part proposes a stock price model that we confront with real data from the Paris stock exchange (Euronext Paris). Our model takes into account the probability of occurrence of extreme losses and the regime switching observed in the data. Our approach is to detect the different periods of each regime by constructing a hidden Markov chain and to estimate the tail of each regime's distribution by power laws. We show empirically that power laws are more suitable than the Gaussian law and stable laws. The estimated VaR is validated by several backtests and compared with the results of other conventional models on a set of 56 stock market assets. In the second part, we assume that stock prices are modeled by exponentials of a Lévy process. First, we develop a numerical method to compute the cumulative VaR and CVaR. This problem is solved by using the formalization of Rockafellar and Uryasev, which we evaluate numerically by Fourier inversion. Second, we are interested in minimizing the hedging risk of European options under a budget constraint on the initial capital. Measuring this risk by the CVaR, we establish an equivalence between this problem and a problem of Neyman-Pearson type, for which we propose a numerical approximation based on relaxing the constraint.
APA, Harvard, Vancouver, ISO, and other styles
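The Rockafellar-Uryasev formalization mentioned in the entry above expresses CVaR_alpha as the minimum over c of c + E[(L - c)^+] / (1 - alpha), with the minimizer equal to VaR_alpha. A minimal sample-based sketch (simulated heavy-tailed losses, not the Lévy/Fourier machinery of the thesis) is:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(11)
losses = rng.standard_t(df=4, size=100_000)   # heavy-tailed simulated losses
alpha = 0.99

def rockafellar_uryasev(c, losses, alpha):
    """F(c) = c + E[(L - c)^+] / (1 - alpha); its minimum over c equals CVaR_alpha,
    and the minimizing c is VaR_alpha."""
    return c + np.maximum(losses - c, 0.0).mean() / (1.0 - alpha)

res = minimize_scalar(rockafellar_uryasev, args=(losses, alpha),
                      bounds=(losses.min(), losses.max()), method="bounded")
var_hat, cvar_hat = res.x, res.fun

# Cross-check against the direct empirical estimates.
var_emp = np.quantile(losses, alpha)
cvar_emp = losses[losses >= var_emp].mean()
print(f"VaR  ~ {var_hat:.3f} (empirical {var_emp:.3f})")
print(f"CVaR ~ {cvar_hat:.3f} (empirical {cvar_emp:.3f})")
```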
31

Lounis, Tewfik. "Inférences dans les modèles ARCH : tests localement asymptotiquement optimaux." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0222/document.

Full text
Abstract:
The objective of this thesis is the construction of locally asymptotically optimal tests. The model considered in this testing problem contains a large class of time series models. The local asymptotic normality (LAN) property is the fundamental tool used in this research. An application of our results to finance is proposed.
APA, Harvard, Vancouver, ISO, and other styles
32

Toulemonde, Gwladys. "Estimation et tests en théorie des valeurs extrêmes." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00348589.

Full text
Abstract:
This thesis consists of three distinct parts, preceded by an introduction. First, we are interested in a smooth goodness-of-fit test for the Pareto family. To this end, we propose a test statistic motivated by LeCam's theory of local asymptotic normality (LAN). We establish its asymptotic behavior under the hypothesis that the sample comes from a Pareto distribution and under local alternatives, thus placing ourselves in the LAN framework. Simulations are presented in order to study the finite-sample behavior of the test statistic. In the following chapter, we place ourselves in the setting of randomly right-censored data. We then propose an estimator of the parameters of the generalized Pareto distribution based on a first step of the Newton-Raphson algorithm. We establish the asymptotic normality of this estimator. Through simulations, we illustrate its finite-sample behavior and compare it with that of the maximum likelihood estimator. Finally, in a last chapter, we propose a linear autoregressive model adapted to the Gumbel distribution to account for dependence in maxima. We establish theoretical properties of this model and illustrate its finite-sample behavior through simulations. Since concrete applications in atmospheric sciences motivated this model, we used it to model maxima of carbon dioxide and methane.
APA, Harvard, Vancouver, ISO, and other styles
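The one-step estimator in the entry above refines a preliminary estimate of the generalized Pareto parameters with a single Newton-Raphson step on the likelihood. The sketch below shows the mechanics on complete (uncensored) data with a crude starting value and finite-difference derivatives, whereas the thesis works with censored data and a consistent preliminary estimator:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)
# Sample from a generalized Pareto distribution (shape 0.2, scale 1) for illustration.
x = stats.genpareto.rvs(c=0.2, scale=1.0, size=500, random_state=rng)

def negloglik(theta, data):
    """Negative log-likelihood of the GPD, parameterized as theta = (shape, log-scale)."""
    shape, log_scale = theta
    return -stats.genpareto.logpdf(data, c=shape, scale=np.exp(log_scale)).sum()

def one_step_newton(theta0, data, eps=1e-5):
    """A single Newton-Raphson step from a preliminary estimate theta0, using
    finite-difference approximations of the gradient and Hessian."""
    grad = optimize.approx_fprime(theta0, negloglik, eps, data)
    hess = np.array([
        optimize.approx_fprime(
            theta0, lambda t, i=i: optimize.approx_fprime(t, negloglik, eps, data)[i], eps)
        for i in range(len(theta0))
    ])
    return theta0 - np.linalg.solve(hess, grad)

theta0 = np.array([0.0, 0.0])   # crude start: exponential fit (shape 0, scale 1)
shape_hat, log_scale_hat = one_step_newton(theta0, x)
print(f"one-step estimate: shape = {shape_hat:.3f}, scale = {np.exp(log_scale_hat):.3f}")
```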
33

Ignaccolo, Rosaria. "Tests d'ajustement fonctionnels pour des observations corrélées." Paris 6, 2002. http://www.theses.fr/2002PA066416.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Salhi, Khaled. "Risques extrêmes en finance : analyse et modélisation." Electronic Thesis or Diss., Université de Lorraine, 2016. http://www.theses.fr/2016LORR0192.

Full text
Abstract:
This thesis studies risk management and hedging based on the Value-at-Risk (VaR) and the Conditional Value-at-Risk (CVaR) as risk measures. The first part proposes a stock price model that we confront with real data from the Paris stock exchange (Euronext Paris). Our model takes into account the probability of occurrence of extreme losses and the regime switching observed in the data. Our approach is to detect the different periods of each regime by constructing a hidden Markov chain and to estimate the tail of each regime's distribution by power laws. We show empirically that power laws are more suitable than the Gaussian law and stable laws. The estimated VaR is validated by several backtests and compared with the results of other conventional models on a set of 56 stock market assets. In the second part, we assume that stock prices are modeled by exponentials of a Lévy process. First, we develop a numerical method to compute the cumulative VaR and CVaR. This problem is solved by using the formalization of Rockafellar and Uryasev, which we evaluate numerically by Fourier inversion. Second, we are interested in minimizing the hedging risk of European options under a budget constraint on the initial capital. Measuring this risk by the CVaR, we establish an equivalence between this problem and a problem of Neyman-Pearson type, for which we propose a numerical approximation based on relaxing the constraint.
APA, Harvard, Vancouver, ISO, and other styles
35

Lounis, Tewfik. "Inférences dans les modèles ARCH : tests localement asymptotiquement optimaux." Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0222.

Full text
Abstract:
The objective of this thesis is the construction of locally asymptotically optimal tests. The model considered in this testing problem contains a large class of time series models. The local asymptotic normality (LAN) property is the fundamental tool used in this research. An application of our results to finance is proposed.
APA, Harvard, Vancouver, ISO, and other styles
36

Jones, Gawain. "Modélisation d'images agronomiques - application a la reconnaissance d'adventices par imagerie pour une pulvérisation localisée." Phd thesis, Université de Bourgogne, 2009. http://tel.archives-ouvertes.fr/tel-00465118.

Full text
Abstract:
New regulations on the use of plant protection products and the consideration of the environment (pollution, biodiversity) in agriculture have led to the development of methods for identifying plants (crop and weeds) by imaging, for site-specific weed management. In order to have an effective tool for evaluating these identification methods, which rely on a spatial analysis of the photographed scene, a model for simulating agronomic scenes was developed. Taking into account certain agronomic characteristics of a cultivated field, this model makes it possible to simulate a ground truth whose parameters (the spatial arrangement of the crop, the infestation rate, the distribution of the weeds) are controlled. The agronomic scene thus created then undergoes a projective transformation in order to simulate the taking of the photograph and thus to account for all the parameters needed to create an image. This model was then validated by statistical comparison with real data. New spatial algorithms based on the Hough transform and exploiting the row alignment of the crop were also developed. Three methods, based on connected-component analysis, contour estimation, and a probabilistic approach, were implemented and exhaustively evaluated with the developed model. The results obtained are of very high quality, with correct classification of crop and weeds above 90% and reaching 98% in some cases. Finally, a spectral approach was also explored for this model in order to overcome the limitations imposed by spatial methods. A 3D extension was added to the model in order to simulate the bidirectional reflectance (BRDF) of plants and soil using the PROSPECT and SOILSPECT models. The transformation of spectral information into RGB color information, the use of optical filters, and the creation of multispectral data are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
37

Malmström, Magnus. "5G Positioning using Machine Learning." Thesis, Linköpings universitet, Reglerteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-149055.

Full text
Abstract:
Positioning is recognized as an important feature of fifth generation (5G) cellular networks due to the massive number of commercial use cases that would benefit from access to position information. Radio-based positioning has always been a challenging task in urban canyons, where buildings block and reflect the radio signal, causing multipath propagation and non-line-of-sight (NLOS) signal conditions. One approach to handling NLOS is to use data-driven methods such as machine learning algorithms on beam-based data, where a training data set with positioned measurements is used to train a model that transforms measurements into position estimates. The work is based on position and radio measurement data from a 5G testbed (a prototype network operated by Ericsson in Kista, Stockholm). The transmission point (TP) in the testbed has an antenna with 48 beams arranged in horizontal and vertical layers (five vertical layers). The measurements are the beam reference signal received power (BRSRP) from the beams and the direction of departure (DOD) from the set of beams with the highest received signal strength (RSS); the target values are GPS positions of the user equipment (UE). For modelling the relation between measurements and positions, two non-linear models have been considered, neural networks and random forests, referred to here as machine learning algorithms. The machine learning algorithms are able to position the UE in NLOS regions with a horizontal positioning error of less than 10 meters in 80 percent of the test cases. The results also show that it is essential to combine information from beams in the different vertical antenna layers to achieve high positioning accuracy under NLOS conditions. Further, the tests show that the data must be separated into line-of-sight (LOS) and NLOS data before training the machine learning algorithms to achieve good positioning performance under both LOS and NLOS conditions. A generalized likelihood ratio test (GLRT) has therefore been developed to classify data as originating from LOS or NLOS conditions; its probability of detection is about 90% at a probability of false alarm of only 5%. To boost the position accuracy of the machine learning algorithms, a Kalman filter has been developed that takes the output of the machine learning algorithms as input. Results show that this can improve the position accuracy in NLOS scenarios significantly.
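A minimal sketch of the beam-based fingerprinting idea described above (assuming scikit-learn; the synthetic data, array names and parameter values are illustrative and not those of the thesis):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for the testbed data: one BRSRP value per beam (48 beams)
# as features, and a 2D horizontal position as the regression target.
n_samples, n_beams = 2000, 48
X = rng.normal(-90.0, 10.0, size=(n_samples, n_beams))   # BRSRP in dBm
y = rng.uniform(0.0, 200.0, size=(n_samples, 2))         # (east, north) in metres

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random forest regression from beam powers to position (multi-output).
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
horizontal_error = np.linalg.norm(pred - y_test, axis=1)
print("80th percentile horizontal error [m]:", np.percentile(horizontal_error, 80))
```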
APA, Harvard, Vancouver, ISO, and other styles
38

Лещишин, Юрій Зіновійович, Юрий Зиновьевич Лещишин та Yu Z. Leschyshyn. "Математична модель та методи ефективного визначення розладки ритмокардіосигналу". Thesis, Тернопільський національний технічний університет ім. Івана Пулюя, 2011. http://elartu.tntu.edu.ua/handle/123456789/1481.

Full text
Abstract:
The work was carried out at the Ternopil Ivan Puluj National Technical University of the Ministry of Education and Science, Youth and Sports of Ukraine. The defence took place in 2011 at a meeting of specialized academic council K 58.052.01 at the Ternopil Ivan Puluj National Technical University (56 Ruska St., Ternopil, 46001, room 79); the dissertation is available in the university library at the same address.

The dissertation solves the scientific problem of developing a mathematical model of the rhythmocardiosignal and methods for reliable detection of its change-point, in order to reveal the transient behaviour of the rhythmocardiosignal and to increase the reliability of automatic estimation of rhythmocardiosignal variability parameters in Holter monitoring systems. A mathematical model of the rhythmocardiosignal that accounts for a change-point is constructed; unlike existing models, it captures the transition of the signal from a stationary to a periodically correlated random sequence. A change-point detection criterion is developed by adapting the Neyman-Pearson statistical hypothesis-testing criterion to the change-point problem, which makes it possible to detect the change-point and then determine the time of its occurrence. A spectral representation of the rhythmocardiosignal is justified and applied for computing the test statistic of the detection criterion, yielding a detection method that, unlike others, is invariant to the moment at which the signal is sampled. A criterion for adapting the parameters of the spectral representation, including the choice of window function based on the maximum variation of the spectral components, is developed to increase detection reliability when the signal parameters are unknown. A criterion of efficiency of change-point detection methods is constructed to select the more reliable method; the reliability of the synphase, Welch, Burg and periodogram methods for computing the detection statistic is assessed, which simplifies the equipment for estimating variability parameters by applying the more reliable of them. An algorithm and software for change-point detection are developed. Computer modelling of the detection methods is carried out to verify the mathematical model and estimate the reliability characteristics of change-point detection. Software for simulation of test rhythmocardio- and electrocardiosignals with a change-point is developed, together with an algorithm for determining the correlation period for optimal tuning of the synphase method and choice of the window function.
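For context (a textbook statement, not a result of the dissertation), the Neyman-Pearson criterion on which the change-point detector is built rejects the null hypothesis when the likelihood ratio of the test statistic exceeds a threshold fixed by the admissible false-alarm probability:

```latex
% Neyman–Pearson test of H_0: f_0 against H_1: f_1 at level \alpha
\Lambda(x) = \frac{f_1(x)}{f_0(x)} \;\gtrless\; k_\alpha,
\qquad \text{where } k_\alpha \text{ satisfies } \;
P_{H_0}\big(\Lambda(X) > k_\alpha\big) = \alpha .
```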
APA, Harvard, Vancouver, ISO, and other styles
39

Лещишин, Юрій Зіновійович, Юрий Зиновьевич Лещишин та Yu Z. Leschyshyn. "Математична модель та методи ефективного визначення розладки ритмокардіосигналу". Thesis, Тернопільський національний технічний університет ім. Івана Пулюя, 2013. http://elartu.tntu.edu.ua/handle/123456789/2410.

Full text
Abstract:
The work was carried out at the Ternopil Ivan Puluj National Technical University of the Ministry of Education and Science of Ukraine. The dissertation is available in the library of the Ternopil Ivan Puluj National Technical University (56 Ruska St., Ternopil, 46001).

The dissertation solves the scientific task of constructing a mathematical model of the rhythmocardiosignal and developing methods for reliable detection of its change-point, in order to increase the reliability of automatic estimation of rhythmocardiosignal variability parameters in Holter monitoring systems. A mathematical model of the rhythmocardiosignal accounting for a change-point is constructed; it describes the transition of the signal from a stationary to a periodically correlated random sequence. A change-point detection criterion is developed by adapting the Neyman-Pearson hypothesis-testing criterion to the change-point problem, allowing the change-point to be detected and the time of its occurrence determined. A spectral representation of the rhythmocardiosignal is justified and applied for computing the test statistic of the detection criterion, giving a detection method that is invariant to the moment at which the signal is sampled. A criterion for adapting the parameters of the spectral representation and choosing the window function, based on the maximum variation of the spectral components, is developed to increase detection reliability when the signal parameters are unknown. A criterion of efficiency of change-point detection methods is constructed to select the more reliable of them; the reliability of the synphase, Welch, Burg and periodogram methods for computing the detection statistic is determined. An algorithm and software for change-point detection are developed; computer modelling of the detection methods is carried out to verify the mathematical model and estimate the reliability characteristics of detection; software for simulation of test rhythmocardio- and electrocardiosignals with a change-point is developed; and an algorithm for determining the correlation period is developed for optimal tuning of the synphase method and choice of the window function.
APA, Harvard, Vancouver, ISO, and other styles
40

Roberts, Geoff. "Classification of non-stationary signals using time-frequency representations and multiple hypotheses tests : an application to humpback whale songs." Thesis, Queensland University of Technology, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
41

Saaidia, Noureddine. "Sur les familles des lois de fonction de hasard unimodale : applications en fiabilité et analyse de survie." Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR14794/document.

Full text
Abstract:
In reliability and survival analysis, distributions with a unimodal or $\cap$-shaped hazard rate function are not numerous; they include the inverse Gaussian, log-normal, log-logistic, Birnbaum-Saunders, exponentiated Weibull and power generalized Weibull distributions. In this thesis we develop modified chi-squared goodness-of-fit tests for these distributions and give a comparative study between the inverse Gaussian distribution and the others, supported by simulations. We also construct an accelerated failure time (AFT) model based on the inverse Gaussian distribution and redundant systems based on distributions having a unimodal hazard rate function.
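As a side illustration of the unimodal hazard shape these tests target (a generic sketch assuming SciPy, not code from the thesis; the parameter value is arbitrary), the hazard rate of the inverse Gaussian distribution can be computed as h(t) = f(t) / (1 - F(t)):

```python
import numpy as np
from scipy.stats import invgauss

# Inverse Gaussian with shape parameter mu (SciPy's parameterization).
mu = 0.5
t = np.linspace(0.01, 5.0, 500)

pdf = invgauss.pdf(t, mu)
sf = invgauss.sf(t, mu)      # survival function 1 - F(t)
hazard = pdf / sf

# The hazard rises to a single maximum and then decreases.
t_mode = t[np.argmax(hazard)]
print(f"hazard peaks near t = {t_mode:.2f}")
```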
APA, Harvard, Vancouver, ISO, and other styles
42

Iskander, D. R. "The Generalised Bessel function K distribution and its application to the detection of signals in the presence of non-Gaussian interference." Thesis, Queensland University of Technology, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
43

Jhan, Hao-yang, and 詹鎬陽. "Johnson-Neyman Nonsignificant Region Under short-tailed symmetric distribution." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/50945660377080178611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Chuang, Yang-Kai, and 莊揚凱. "Generalized Neyman-Rubin’s causal model for Regression Interaction Assessment." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/33022596574382238879.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Institute of Statistics, academic year 101. Although inserting product terms into analytical models to test for the presence of an interaction effect is very common in the economic, social and health sciences, the practice has long been criticized because the existence of interaction is model dependent (Greenland (2009) and Mauderly and Samet (2009)). Efforts to resolve this criticism have led to multiple but ambiguous definitions of statistical interaction, resulting in the assessment of various but unknown versions of effect (Greenland (2009)). We report that a systematic introduction of definitions, methods and theorems to fit the intercorrelation (association) parameter into a generalized Neyman-Rubin causal model brings interesting advantages: (a) this approach allows us to define and measure a clean effect of intercorrelation for statistical inference about unknown statistical interaction; (b) statistical inferences for statistical interaction can all be constructed from the estimation theory of the distributional parameters; (c) this causal model measures an unambiguous and model-independent effect of intercorrelation, avoiding the controversy over the insertion of product terms; (d) the theory of generalized Neyman-Rubin causality is extended to statistical interaction assessment for probit regression.
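The product-term practice criticized in this abstract is the familiar regression specification below (a textbook formula given only for orientation), in which interaction is declared present when the coefficient of the product term is nonzero, so the conclusion depends on the chosen model:

```latex
% Linear model with a product (interaction) term
Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_1 X_2 + \varepsilon ,
\qquad
\text{``interaction present''} \iff \beta_3 \neq 0 .
```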
APA, Harvard, Vancouver, ISO, and other styles
45

Neyzen, Svenja [Verfasser]. "Kryokonservierung von hepatischen Sternzellen der Ratte / vorgelegt von Svenja Neyzen." 2007. http://d-nb.info/98572515X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Yang, Mei-Ting, and 楊美婷. "Johnson-Neyman Nonsignificant Region Under Variance Heterogeneity and Unbalanced Design." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/86083516857768634737.

Full text
Abstract:
Master's thesis, Feng Chia University, Graduate Institute of Statistics and Actuarial Science, academic year 97. This research considers an ANCOVA with two treatments and one covariate. ANCOVA assumes that the slopes are equal across groups; when this assumption is violated, the Johnson-Neyman nonsignificance region is generally suggested instead. This region is in fact a confidence interval for the abscissa of the intersection of the two groups' regression lines. Following Larholt and Sampson (1995), this research provides two large-sample results under variance heterogeneity and/or unequal group sample sizes, and proposes two confidence intervals for the intersection abscissa, for the balanced and the unbalanced design, called the lambda-JN and L-JN nonsignificance regions respectively. These regions are based on the pooled variance estimate; in addition, individual variance estimates are also considered, giving the JNs, lambda-JNs and L-JNs nonsignificance regions. Using Monte Carlo simulation, the robustness of the six nonsignificance regions is examined under different combinations of group sample sizes, slope ratios (i.e. locations of the intersection abscissa) and ratios of standard deviations. The research finds that the traditional JN nonsignificance region is robust only for the balanced design, not for the unbalanced design; in the latter case, users are advised to use the JNs nonsignificance region with individual estimators of the error variance.
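For orientation (a standard textbook formulation, not the thesis's generalization), the classical Johnson-Neyman nonsignificance region collects the covariate values at which the two fitted lines do not differ significantly; its boundaries are the roots of a quadratic in the covariate:

```latex
% Difference between the two fitted lines at covariate value x
d(x) = (\hat{a}_1 - \hat{a}_2) + (\hat{b}_1 - \hat{b}_2)\,x ,
\qquad
\widehat{\mathrm{Var}}\big(d(x)\big) = v_{aa} + 2\,v_{ab}\,x + v_{bb}\,x^{2} ,
\qquad
\mathcal{R} = \Big\{\, x :\; d(x)^{2} \le t^{2}_{1-\alpha/2,\;\nu}\;
\widehat{\mathrm{Var}}\big(d(x)\big) \,\Big\},
```

where $v_{aa}$, $v_{ab}$, $v_{bb}$ are the estimated variances and covariance of the intercept and slope differences.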
APA, Harvard, Vancouver, ISO, and other styles
47

"Application of the generalized Neyman-Scott process in spatial sampling design." 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291530.

Full text
Abstract:
This thesis introduces a new algorithm to search for an optimal spatial sampling design. Previous studies by Zhu and Stein (2006) found that the optimal sampling design for spatial prediction with estimated parameters is nearly regular, with a few clustered points. The pattern is similar to the generalized Neyman-Scott (GNS) process introduced by Yau and Loh (2012), which allows for regularity in the parent process. This motivates using a realization of the GNS process as a spatial sampling design. The method translates the high-dimensional optimization problem of selecting sampling sites into a low-dimensional optimization problem of searching for the optimal parameter set of the GNS process. Simulation studies indicate that the proposed sampling design algorithm is more computationally efficient, while the result of criterion minimization is comparable to that of traditional methods. Lai, Sai Yu. Thesis (M.Phil.), Chinese University of Hong Kong, 2015. Includes bibliographical references (leaves 31-34). Abstracts also in Chinese. Title from PDF title page (viewed on 18 October 2016).
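A minimal sketch of a basic Neyman-Scott cluster process (a simplification: the generalized version studied in the thesis replaces the Poisson parents with a more regular parent process; all parameter values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def neyman_scott(region=1.0, parent_rate=20.0, mean_offspring=5, sigma=0.02):
    """Simulate a basic Neyman-Scott cluster process on [0, region]^2.

    Parents follow a homogeneous Poisson process; each parent produces a
    Poisson number of offspring displaced by isotropic Gaussian noise.
    """
    n_parents = rng.poisson(parent_rate * region**2)
    parents = rng.uniform(0.0, region, size=(n_parents, 2))

    points = []
    for p in parents:
        n_off = rng.poisson(mean_offspring)
        offspring = p + rng.normal(0.0, sigma, size=(n_off, 2))
        points.append(offspring)

    pts = np.vstack(points) if points else np.empty((0, 2))
    # Keep only offspring that fall inside the observation window.
    inside = np.all((pts >= 0.0) & (pts <= region), axis=1)
    return pts[inside]

design = neyman_scott()
print(f"{len(design)} candidate sampling sites generated")
```

Tuning the handful of process parameters (parent rate, offspring mean, cluster spread) is the low-dimensional search the abstract refers to.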
APA, Harvard, Vancouver, ISO, and other styles
48

Leonard, Michael. "A stochastic space-time rainfall model for engineering risk assessment." Thesis, 2010. http://hdl.handle.net/2440/62329.

Full text
Abstract:
The temporal and spatial variability of Australia's climate affects the quantity and quality of its water resources, the productivity of its agricultural systems, and the health of its ecosystems. This variability should be taken into account when assessing the risks associated with flooding. Continuous-simulation rainfall models are one means of doing this, whereby sequences of storms are generated for an arbitrarily long time period and over some region of interest. The simulated rainfall should reproduce observed statistics in time and space so that it can be used as a suitable input for hydrologic models at the catchment scale, with particular emphasis on extreme events. There is a variety of approaches to modelling rainfall, including a broad range of single-site and multi-site rainfall models. By contrast, few models aim to simulate rainfall across all points within a region at daily or sub-daily increments. This thesis focuses on models calibrated solely to rain gauges, and on a specific type known as Neyman-Scott Rectangular Pulse (NSRP) models. Existing NSRP models have a mature history of development, including calibration methodology and an ability to reproduce key statistics across a range of timescales. Nonetheless, these models (other space-time models notwithstanding) have several limitations that are addressed in this thesis. The developments include improvements to the conceptual representation of rainfall and to calibration and simulation techniques, specifically: (i) an efficient simulation technique; (ii) an assessment of the impact of monthly parameter changes on rainfall statistics; (iii) the use of simulated statistics within calibration to overcome reliance on derived model properties; (iv) a storm extent parameter to better match spatial correlations; (v) the incorporation of long-term climatic variability and a methodology to assess climatic and seasonal variability in simulated extremes; and (vi) the incorporation of inhomogeneity of rainfall occurrence across a region. Numerous case studies at various locations around Australia illustrate these improvements and highlight the applicability of the model under varied climatic conditions. Thesis (Ph.D.) -- University of Adelaide, School of Civil, Environmental and Mining Engineering, 2010.
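A minimal temporal-only sketch of the Neyman-Scott Rectangular Pulse mechanism (illustrative parameter values; the spatial, storm-extent and nonstationary extensions developed in the thesis are not shown):

```python
import numpy as np

rng = np.random.default_rng(42)

def nsrp_hourly(days=30, storm_rate=0.1, mean_cells=4,
                beta=0.5, eta=1.0, mean_intensity=2.0):
    """Simulate hourly rainfall depths [mm] from a temporal NSRP model.

    Storm origins arrive as a Poisson process (storm_rate per hour);
    each storm generates a Poisson number of rain cells whose start
    delays ~ Exp(beta), durations ~ Exp(eta) and intensities
    ~ Exp(1/mean_intensity). Overlapping cells add (partial-hour
    overlap is ignored for brevity).
    """
    hours = days * 24
    rain = np.zeros(hours)

    n_storms = rng.poisson(storm_rate * hours)
    storm_origins = rng.uniform(0, hours, size=n_storms)

    for origin in storm_origins:
        for _ in range(rng.poisson(mean_cells)):
            start = origin + rng.exponential(1.0 / beta)
            duration = rng.exponential(1.0 / eta)
            intensity = rng.exponential(mean_intensity)   # mm per hour
            lo, hi = int(start), int(min(start + duration, hours))
            rain[lo:hi] += intensity

    return rain

series = nsrp_hourly()
print(f"simulated {series.size} hours, total depth {series.sum():.1f} mm")
```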
APA, Harvard, Vancouver, ISO, and other styles
49

Chang, Chia-Yu, and 張家諭. "Compare the Johnson-Neyman nonsignificance region with least squares method and M-estimate." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/28124948468068401119.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Resende, David Meirelles Nunes. "Statistical assessment on Non-cooperative Target Recognition using the Neyman-Pearson statistical test." Master's thesis, 2017. http://hdl.handle.net/10400.6/7919.

Full text
Abstract:
Electromagnetic simulations of an X-target were performed in order to obtain its radar cross section (RCS) for several positions and frequencies, using CST MWS©. A 1:5 scale model of the proposed aircraft was created in CATIA© V5 R19 and imported directly into the CST MWS© environment. Simulations in the X-band were made with a variable mesh size due to considerable wavelength variation. The aim is to evaluate the performance of the Neyman-Pearson (NP) simple hypothesis test by analyzing its receiver operating characteristics (ROCs) for two different radar detection scenarios for recognition purposes: a radar absorbent material (RAM) coated model and a perfect electric conductor (PEC) model. In parallel, the radar range equation is used to estimate the maximum detection range for the simulated RAM-coated cases, in order to compare their shielding effectiveness (SE) and its consequent impact on recognition. The AN/APG-68(V)9 airborne radar specifications were used to compute these ranges and to simulate a hostile airborne interception in a non-cooperative target recognition (NCTR) environment. Statistical results showed weak recognition performance using the Neyman-Pearson statistical test. Nevertheless, good RCS reductions were obtained for most of the simulated positions, reflected in a 50.9% maximum detection range gain for the PAniCo RAM coating, in agreement with experimental results from the reviewed literature. The best SE was verified for the PAniCo and CFC-Fe RAMs.
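For background on the ROC analysis mentioned here, the sketch below evaluates the textbook Neyman-Pearson detector for a known signal in white Gaussian noise (a generic example assuming SciPy, not the thesis's radar simulation; the deflection values are arbitrary):

```python
import numpy as np
from scipy.stats import norm

def np_detection_probability(d, pfa):
    """Pd of the Neyman-Pearson detector for a known signal in white
    Gaussian noise: Pd = Q(Q^{-1}(pfa) - d), where d is the deflection
    coefficient (roughly sqrt(SNR)) and Q is the Gaussian tail function."""
    return norm.sf(norm.isf(pfa) - d)

pfa = np.logspace(-6, -1, 6)
for d in (1.0, 2.0, 3.0):
    pd = np_detection_probability(d, pfa)
    print(f"d = {d}: Pd over Pfa grid ->", np.round(pd, 3))
```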
APA, Harvard, Vancouver, ISO, and other styles
