Dissertations / Theses on the topic 'Sampling Techniques'

Consult the top 50 dissertations / theses for your research on the topic 'Sampling Techniques.'

1

Buljan, Matej. "Optimizing t-SNE using random sampling techniques." Thesis, Linnéuniversitetet, Institutionen för matematik (MA), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-88585.

Abstract:
The main topic of this thesis concerns t-SNE, a dimensionality reduction technique that has gained much popularity for showing great capability of preserving well-separated clusters from a high-dimensional space. Our goal with this thesis is twofold. Firstly, we give an introduction to the use of dimensionality reduction techniques in visualization and, following recent research, show that t-SNE in particular is successful at preserving well-separated clusters. Secondly, we perform a thorough series of experiments that allow us to draw conclusions about the quality of embeddings obtained by running t-SNE on samples of data drawn with different sampling techniques. We compare pure random sampling, random walk sampling, and so-called hubness sampling on a dataset, attempting to find a sampling method that is consistently better at preserving local information than simple random sampling. Throughout our testing, a specific variant of random walk sampling distinguished itself as a better alternative to pure random sampling.
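To make the comparison concrete, here is a minimal sketch (not the thesis's code) of embedding a uniform random sample versus a random-walk sample with t-SNE. The digits dataset, neighbourhood size, restart probability, and sample size are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

X = load_digits().data
n_sample = 500
rng = np.random.default_rng(0)

# Pure random sampling: draw indices uniformly without replacement.
idx_random = rng.choice(len(X), size=n_sample, replace=False)

# Random-walk sampling: walk the k-nearest-neighbour graph and keep the
# set of visited nodes, with occasional restarts to avoid getting stuck.
knn = NearestNeighbors(n_neighbors=10).fit(X)
_, neighbors = knn.kneighbors(X)
visited, node = set(), rng.integers(len(X))
while len(visited) < n_sample:
    visited.add(int(node))
    node = rng.choice(neighbors[node])   # hop to a random neighbour
    if rng.random() < 0.05:              # restart from a random point
        node = rng.integers(len(X))
idx_walk = np.fromiter(visited, dtype=int)

# Embed each sample; cluster structure of the two embeddings is then compared.
emb_random = TSNE(n_components=2, random_state=0).fit_transform(X[idx_random])
emb_walk = TSNE(n_components=2, random_state=0).fit_transform(X[idx_walk])
```

A neighbourhood-preservation score computed on each embedding would then quantify which sampling method better preserves local information.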
2

Li, Ping. "Stable random projections and conditional random sampling: two sampling techniques for modern massive datasets." 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

3

Martin, Richard James. "Irregularly sampled signals : theories and techniques for analysis." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.299693.

4

Laan, Marten Derk van der. "Signal sampling techniques for data acquisition in process control." [Groningen]: s.n.; University Library Groningen [Host], 1995. http://irs.ub.rug.nl/ppn/138454876.

5

Allen, M. M. "An investigation of sampling techniques within marine fisheries discards." Thesis, Queen's University Belfast, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516947.

6

Blakeley, Nicholas D. "Sampling strategies and reconstruction techniques for magnetic resonance imaging." Thesis, University of Canterbury. Electrical and Computer Engineering, 2003. http://hdl.handle.net/10092/7705.

Abstract:
In magnetic resonance imaging (MRI), samples of the object's spectrum are measured in the spatial frequency domain (k-space). For a number of reasons there is a desire to reduce the time taken to gather measurements. The approach considered is to sample below the Nyquist density, using prior knowledge of the object's support in the spatial domain to enable full reconstruction. The two issues considered are where to position the samples (sampling strategies) and how to form an image (reconstruction techniques). Particular attention is given to a special case of irregular sampling, referred to as Cartesian sampling, in which the samples are located on a Cartesian grid but only constitute a subset of the full grid. A further special case is considered where the sampling scheme repeats periodically, referred to as periodic Cartesian sampling. These types of sampling schemes are applicable to 3-D Cartesian MRI, MRSI, and other modalities that measure a single point in 2-D k-space per echo. The case of general irregular sampling is also considered, which is applicable to spiral sampling, for example. A body of theory concerning Cartesian sampling is developed that has practical implications for how to approach the problem and provides intuition about its nature. It is demonstrated that periodic Cartesian sampling effectively decomposes the problem into a number of much smaller subproblems, which leads to the development of a reconstruction algorithm that exploits these computational advantages. An additional algorithm is developed to predict the regions that could be reconstructed from a particular sampling scheme and support; it can be used to evaluate candidate sampling schemes before measurements are obtained. A number of practical issues are also discussed using illustrative examples. Sample selection algorithms for both Cartesian and periodic Cartesian sampling are developed using heuristic metrics that are fast to compute. The result is a significant reduction in selection time at the expense of a slightly worse conditioned system. The reconstruction problem for a general irregular sampling scheme is also analysed and a reconstruction algorithm developed that trades off computation time for better image quality.
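As a toy illustration of the reconstruction idea, the sketch below recovers a 1-D signal from below-Nyquist Cartesian samples of its spectrum using known support. The sizes and sampling pattern are made-up assumptions; the thesis's actual sample-selection and periodic-scheme algorithms are not reproduced here.

```python
import numpy as np

N = 64
F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix

rng = np.random.default_rng(1)
support = np.arange(10, 30)                   # known object support (20 pixels)
x = np.zeros(N)
x[support] = rng.standard_normal(support.size)

# Keep only 32 of the 64 Cartesian k-space samples (below Nyquist density).
kept = np.sort(rng.choice(N, size=32, replace=False))
y = (F @ x)[kept]

# With known support, the DFT restricted to the support columns is an
# overdetermined system; solve it by least squares.
A = F[np.ix_(kept, support)]
a, *_ = np.linalg.lstsq(A, y, rcond=None)

x_rec = np.zeros(N, dtype=complex)
x_rec[support] = a
print(np.max(np.abs(x_rec - x)))              # ~0 when A has full column rank
```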
7

Williams, Sarah L. "The study of conformational motions using enhanced sampling techniques." Thesis, University of Southampton, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.439611.

8

Steyn, H. C., C. M. E. McCrindle, and D. du Toit. "Veterinary extension on sampling techniques related to heartwater research." Journal of the South African Veterinary Association, 2010. http://encore.tut.ac.za/iii/cpro/DigitalItemViewPage.external?sp=1001206.

Abstract:
Heartwater, a tick-borne disease caused by Ehrlichia ruminantium, is considered to be a significant cause of mortality amongst domestic and wild ruminants in South Africa. The main vector is Amblyomma hebraeum and although previous epidemiological studies have outlined endemic areas based on mortalities, these have been limited by diagnostic methods which relied mainly on positive brain smears. The indirect fluorescent antibody test (IFA) has a low specificity for heartwater organisms as it cross-reacts with some other species. Since the advent of biotechnology and genomics, molecular epidemiology has evolved using the methodology of traditional epidemiology coupled with the new molecular techniques. A new quantitative real-time polymerase chain reaction (qPCR) test has been developed for rapid and accurate diagnosis of heartwater in the live animal. This method can also be used to survey populations of A. hebraeum ticks for heartwater. Sampling whole blood and ticks for this qPCR differs from routine serum sampling, which is used for many serological tests. Veterinary field staff, particularly animal health technicians, are involved in surveillance and monitoring of controlled and other diseases of animals in South Africa. However, it was found that the sampling of whole blood was not done correctly, probably because it is a new sampling technique specific for new technology, where the heartwater organism is much more labile than the serum antibodies required for other tests. This qPCR technique is highly sensitive and can diagnose heartwater in the living animal within 2 hours, in time to treat it. Poor sampling techniques that decrease the sensitivity of the test will, however, result in a false negative diagnosis. This paper describes the development of a skills training programme for para-veterinary field staff, to facilitate research into the molecular epidemiology of heartwater in ruminants and eliminate any sampling bias due to collection errors. Humane handling techniques were also included in the training, in line with the current focus on improved livestock welfare.
9

Kamat, Niranjan Ganesh. "Sampling-based Techniques for Interactive Exploration of Large Datasets." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1523552932728325.

10

Wu, Qin. "Reliable techniques for survey with sensitive question." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1496.

11

Bonn, Jonas. "Improved Techniques for Sampling and Sample Introduction in Gas Chromatography." Licentiate thesis, KTH, Chemistry, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4744.

Abstract:

Sampling and sample introduction are two key steps in quantitative gas chromatography. In this thesis, a development of a previously described sampling technique and a novel concept for sample introduction in gas chromatography are presented. The thesis is based on two papers.

Paper I describes a method for preparing physically mixed polymers for use as sorbent phases in open tubular trapping of gaseous analytes. The concept is based on mechanical disintegration and mixing of solid or liquid poly(ethylene glycol), PEG, into poly(dimethylsiloxane), PDMS, in a straightforward manner. The resulting mixture exhibits a higher affinity towards polar analytes, as compared to pure PDMS.

Paper II describes a novel approach to liquid sample introduction with the split/splitless inlet, used in gas chromatography. Classical injection techniques struggle with discrimination of high boiling analytes and poor repeatability of the injected amount of analytes. The presented injection technique utilizes high voltage to obtain a spraying effect of the injected liquid. The spraying effect can be achieved with a cold needle, which is unprecedented for gas chromatographic injections. The cold needle spraying results in highly repeatable injections, free from discrimination of high boiling analytes.

12

Yardim, Anush. "Flexible sampling and adaptive techniques for communication and instrumentation applications." Thesis, University of Westminster, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337249.

13

Wei, Jian. "Microcolumn field sampling and flow injection techniques for mercury speciation." Thesis, Sheffield Hallam University, 1993. http://shura.shu.ac.uk/20513/.

Abstract:
Mercury is one of the most toxic heavy metals, and many serious incidents have resulted from mercury poisoning. The methylation of mercury and its amplification by marine life have aggravated this pollution problem. Studies over the last three decades have shown that the toxicity of mercury is related to chemical form. A basic aim of the research has been to devise new methodology for the measurement and speciation of mercury. Key points of the investigation reported were the literature review of methodologies and techniques for mercury speciation and the development of a novel manifold incorporating microcolumns of sulphydryl cotton, which have a relatively high affinity and selectivity for inorganic and/or organomercury, together with a continuous flow procedure for mercury speciation based on flow injection-atomic fluorescence spectrometry. This novel system has been used for the determination and speciation of mercury in a variety of water samples. Other column packing materials, e.g. xanthate cotton, activated alumina and 8-hydroxyquinoline, were also investigated. A further aspect of element speciation concerns the development of a field sampling technique using sulphydryl cotton columns. Sample collection and preconcentration using microcolumns at the site of sampling was successfully performed. Preliminary experiments indicated that the field sampling technique in combination with FIA-AFS was a robust and potentially useful speciation tool. Field surveys on mercury distribution and speciation in the Manchester Ship Canal and the River Rother have been carried out intensively in collaboration with the National Rivers Authority (North West Region). The analytical data on different mercury species in waters of the Manchester Ship Canal are reported for the first time. A high correlation between organomercury and organolead in the Manchester Ship Canal is found and the related data have been assessed in order to clarify the possible origins of the organomercury. Related work concerning participation in interlaboratory studies is reported in the Appendices.
14

Gair, Amanda J. "Development and use of a passive technique for measuring nitrogen dioxide in the background atmosphere." Thesis, University of East Anglia, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314407.

15

Kulkarni, Mandar Shashikant. "Implementation of a 1GHz frontend using transform domain charge sampling techniques." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3158.

16

Ahmed, Abdalla G. M. "Grid-Based Techniques for Semi-Stochastic Sampling." Konstanz: Bibliothek der Universität Konstanz, 2018. http://d-nb.info/1168591228/34.

17

Bell, Brian E., Jr. "Cohort selection and sampling techniques to balance time-series retrospective studies." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112832.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 85-86).
Comparing irregular and event-driven time series data points is beyond the capabilities of most statistical techniques. This limits the potential to run insightful retrospective studies on many cross-sectional time-series datasets. In order to unlock the value of these datasets, we need techniques to standardize observations with irregular events enough to compare them to each other, and ways to select and sample them so as to produce class balances for each stratum at modeling time that lend themselves to statistically sound analysis. In this study, we have developed two selection techniques and three sampling techniques for a characteristic cross-sectional time-series dataset. We found that using a Fluid-Balance Similarity-Based Dynamic Time Warp selection procedure with nearest neighbor parameter k=1 and using a Gamma distribution for sampling days produced consistently better class balance than all other methods when bootstrapped over 100 independent runs. We have written, documented and published open source MATLAB code for each selection and sampling technique, along with our bootstrap test. To evaluate our results, we have developed the Class Imbalance Penalty, a new metric that gives the lowest scores to the selection and sampling runs that produce the most comparable counts of treatment and non-treatment observations for all strata. We validated our methods in the context of a study of diuretics treatment effects in ICU patients with sepsis, drawn from the MIMIC II database. Starting from a group of 3,503 unique ICU stays from 2,341 study patients, with a diuretics-treatment cohort of 349 unique ICU stays from 332 patients, we tested each selection and sampling technique, observing the trends across our different methods. We observed that sampling day was the stronger predictor of good class balance compared with selection technique, that the strongest similarity level (k=1) with the shortest history we considered produced the best results, and that using a Gamma distribution for timepoint sampling most closely matched the distribution of actual administration days. Ultimately, we found strong evidence that our study lacked an important covariate, physician-id, to more fully account for seemingly unpredictable assignments to diuretics treatment in our dataset.
by Brian Bell Jr.
M. Eng.
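A rough sketch of the timepoint-sampling ingredient is given below: observation days drawn from a Gamma distribution and a simple per-stratum balance score. The shape and scale values and the penalty formula are hypothetical stand-ins, not the thesis's Class Imbalance Penalty (whose published code is MATLAB).

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 200

# Draw observation days from a Gamma distribution (made-up shape/scale).
days = rng.gamma(shape=2.0, scale=1.5, size=n_patients).round().astype(int)

treated = rng.random(n_patients) < 0.3   # hypothetical treatment flags
strata = np.clip(days, 0, 6)             # stratify by sampled day

# Stand-in balance score: penalise strata whose treated/control counts diverge.
penalty = 0.0
for s in np.unique(strata):
    mask = strata == s
    n_t = treated[mask].sum()
    n_c = (~treated[mask]).sum()
    penalty += abs(n_t - n_c) / max(n_t + n_c, 1)
print(penalty)
```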
18

Holmes, Colette Gail. "Field sampling and microcolumn preconcentration techniques in inductively coupled plasma spectrometry." Thesis, Sheffield Hallam University, 1996. http://shura.shu.ac.uk/19820/.

Abstract:
This thesis is concerned with analytical studies on the trace analytes barium, cadmium, cobalt, chromium, copper, iron, manganese, nickel, lead, vanadium and zinc, present in high purity and highly complex matrices. The technique utilises activated alumina microcolumns in a flow injection (FI) system to perform analyte enrichment and matrix removal. The analytes, after retention on the microcolumn, are subsequently eluted and quantified by inductively coupled plasma-emission spectrometry (ICP-ES). Initial studies focus on trace analytes in caesium iodide; however a selection of the alkali metal salts, lithium nitrate, potassium bromide, sodium fluoride and sodium chloride, are investigated. New methodology for the ultratrace determination of high purity alkali metal salts is thus provided. The microcolumn enrichment technique with ICP-ES detection is robust, utilises limited sample handling and simultaneously preconcentrates and separates the analytes from matrix components. Hence possible matrix interferences are eliminated and limits of detection are significantly improved, in comparison to conventional ICP-ES analysis. A technique for the determination of the total content of eleven trace analytes present in natural waters (mineral, reservoir), using microcolumns of activated alumina in an FI-ICP-ES system, is investigated. The use of the complexing agent tartaric acid is shown to be effective in improving analyte retention. The procedure is successfully applied to determination of these analytes in a certified river water reference material (SLRS-1). Due to low retention and elution efficiencies, the total content of the analytes Fe and V present in Buxton, Redmires and Langsett samples could not be accurately determined by this technique. Activated alumina microcolumns are utilised as new field sampling tools. Samples are collected in the field and processed through the alumina microcolumns for the effective retention of desired analytes. Hence, an alumina microcolumn sampling stage to effect concentration and isolation prior to analytical measurement is at the core of the investigation. The overall aim is to extend the application of alumina microcolumns, and in particular to provide a new multi-element field sampling device, which gives high sample integrity and preconcentration.
19

De Angelis, Marco. "Efficient random set uncertainty quantification by means of advanced sampling techniques." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2038039/.

Abstract:
In this dissertation, Random Sets and advanced sampling techniques are combined for general and efficient uncertainty quantification. Random Sets extend the traditional probabilistic framework, as they also comprise imprecision to account for scarce data, lack of knowledge, vagueness, subjectivity, etc. The ability of Random Sets to include different kinds of uncertainty comes at a very high computational price: Random Sets require a min-max convolution for each sample picked by the Monte Carlo method. The speed of the min-max convolution can be increased considerably when the system response relationship is known in analytical form. However, in a general multidisciplinary design context, the system response is very often treated as a "black box"; thus, the convolution requires the adoption of evolutionary or stochastic algorithms, which need to be deployed for each Monte Carlo sample. Therefore, the availability of very efficient sampling techniques is paramount to allow Random Sets to be applied to engineering problems. In this dissertation, advanced Line Sampling methods have been generalised and extended to include Random Sets. Advanced sampling techniques make the estimation of quantiles at relevant probabilities extremely efficient, requiring significantly fewer samples than standard Monte Carlo methods. In particular, the Line Sampling method has been enhanced to link well to the Random Set representation. These developments comprise line search, line selection, direction adaptation, and data buffering. The enhanced efficiency of Line Sampling is demonstrated by means of numerical and large-scale finite element examples. With the enhanced algorithm, the connection between Line Sampling and the generalised uncertainty model has been made possible, both in a Double Loop and in a Random Set approach. The presented computational strategies have been implemented in the open source general purpose software for uncertainty quantification, OpenCossan. The general reach of the proposed strategy is demonstrated by means of applications to structural reliability of a finite element model, to preventive maintenance, and to the NASA Langley multidisciplinary uncertainty quantification challenge.
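For orientation, a minimal Line Sampling sketch (not the OpenCossan implementation) is shown below for a toy limit state in standard normal space; the limit-state function, importance direction, and line count are assumptions made for the example.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def g(x):                                # toy limit state: failure when g < 0
    return 5.0 - x.sum() / np.sqrt(x.size)

d, n_lines = 10, 50
alpha = np.ones(d) / np.sqrt(d)          # importance direction (unit vector)
rng = np.random.default_rng(0)

pf_parts = []
for _ in range(n_lines):
    z = rng.standard_normal(d)
    z -= (z @ alpha) * alpha             # project onto hyperplane normal to alpha
    # Distance c along alpha at which the line z + c*alpha crosses g = 0.
    c = brentq(lambda t: g(z + t * alpha), 0.0, 20.0)
    pf_parts.append(norm.cdf(-c))        # each line contributes Phi(-c)
print(np.mean(pf_parts))                 # ~Phi(-5) for this linear limit state
```

Each Monte Carlo sample costs one 1-D root search along the important direction instead of a full-space search, which is the source of the method's efficiency.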
20

Huang, Qi. "Robust spectrum sensing techniques for cognitive radio networks." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/22012.

Abstract:
Cognitive radio is a promising technology that improves the spectral utilisation by allowing unlicensed secondary users to access underutilised frequency bands in an opportunistic manner. This task can be carried out through spectrum sensing: the secondary user monitors the presence of primary users over the radio spectrum periodically to avoid harmful interference to the licensed service. Traditional energy based sensing methods assume the value of noise power as prior knowledge. They suffer from the noise uncertainty problem as even a mild noise level mismatch will lead to significant performance loss. Hence, developing an efficient robust detection method is important. In this thesis, a novel sensing technique using the F-test is proposed. By assuming a multiple antenna assisted receiver, this detector uses the F-statistic as the test statistic which offers absolute robustness against the noise variance uncertainty. In addition, since the channel state information (CSI) is required to be known, the impact of CSI uncertainty is also discussed. Results show the F-test based sensing method performs better than the energy detector and has a constant false alarm probability, independent of the accuracy of the CSI estimate. Another main topic of this thesis is to address the sensing problem for non-Gaussian noise. Most of the current sensing techniques consider Gaussian noise as implied by the central limit theorem (CLT) and it offers mathematical tractability. However, it sometimes fails to model the noise in practical wireless communication systems, which often shows a non-Gaussian heavy-tailed behaviour. In this thesis, several sensing algorithms are proposed for non-Gaussian noise. Firstly, a non-parametric eigenvalue based detector is developed by exploiting the eigenstructure of the sample covariance matrix. This detector is blind as no information about the noise, signal and channel is required. In addition, the conventional energy detector and the aforementioned F-test based detector are generalised to non-Gaussian noise, which require the noise power and CSI to be known, respectively. A major concern of these detection methods is to control the false alarm probability. Although the test statistics are easy to evaluate, the corresponding null distributions are difficult to obtain as they depend on the noise type which may be unknown and non-Gaussian. In this thesis, we apply the powerful bootstrap technique to overcome this difficulty. The key idea is to reuse the data through resampling instead of repeating the experiment a large number of times. By using the nonparametric bootstrap approach to estimate the null distribution of the test statistic, the assumptions on the data model are minimised and no large sample assumption is invoked. In addition, for the F-statistic based method, we also propose a degrees-of-freedom modification approach for null distribution approximation. This method assumes a known noise kurtosis and yields closed form solutions. Simulation results show that in non-Gaussian noise, all the three detectors maintain the desired false alarm probability by using the proposed algorithms. The F-statistic based detector performs the best, e.g., to obtain a 90% detection probability in Laplacian noise, it provides a 2.5 dB and 4 dB signal-to-noise ratio (SNR) gain compared with the eigenvalue based detector and the energy based detector, respectively.
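The bootstrap idea described above can be sketched as follows; the Laplacian noise, the energy statistic, and the false-alarm target are illustrative choices, not the thesis's detectors.

```python
import numpy as np

def statistic(x):
    return np.mean(np.abs(x) ** 2)        # example: energy test statistic

rng = np.random.default_rng(0)
noise = rng.laplace(size=1000)            # noise-only reference record

# Nonparametric bootstrap: resample the record with replacement and
# recompute the statistic to approximate its null distribution.
boot = np.array([
    statistic(rng.choice(noise, size=noise.size, replace=True))
    for _ in range(2000)
])
threshold = np.quantile(boot, 0.99)       # target false-alarm probability 1%

x = rng.laplace(size=1000)                # new observation under test
print(statistic(x) > threshold)           # True means "signal present"
```

The point is that the threshold is set from the data itself, so no Gaussian assumption on the noise is needed.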
21

Nader, Charles. "Signal Shaping and Sampling-based Measurement Techniques for Improved Radio Frequency Systems." Doctoral thesis, KTH, Signalbehandling, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-95404.

Abstract:
Wireless communication systems are omnipresent in our day-to-day life, with high expectations regarding capacity, reliability and power efficiency. In order to satisfy the capacity and reliability expectations, today's wireless systems are adopting sophisticated modulation schemes, such as orthogonal frequency division multiplexing (OFDM), which shape today's wireless signals with large bandwidths and high crest factors. On top of that, it is anticipated that different wireless systems/standards will co-exist and share the same radio frequency (RF) front-end in order to reduce the network implementation cost. Such signal characteristics and system coexistence put high requirements on the amplification stage, which even in the best scenarios is only weakly nonlinear. As a result, the power amplifier needs to be backed off for linear operation. However, such power back-off reduces the operation's efficiency. Reducing the crest factor of the wireless signal, and linearizing the power amplifier by means of digital pre-distortion when it is operated near its maximum allowed continuous wave (CW) operating power range, would lead to optimal linearity and efficiency of operation. In order to achieve a good linearization performance, accurate baseband behavioral models are needed, which requires measuring time-domain signals whose spectra spread widely due to the nonlinear operation of the power amplifier. Such spectrum spreading, known as spectral regrowth, puts high requirements on today's sampling-based measurement systems, as a trade-off between the sampling rate and amplitude resolution exists in today's generation of analog-to-digital converters, in addition to a limitation in the available analog bandwidth. Overcoming such measurement challenges could otherwise lead to the design of expensive measurement systems, which is not favorable. In this thesis, the performance of RF transmitters is improved by combining the use of a smart crest factor reduction technique with an enhanced digital pre-distortion technique which allows operating the power amplifier near its CW 1-dB compression point, offering a significant increase in the efficiency of operation while satisfying the standard constraints on information error and spectral emission. Furthermore, the performance of RF measurement receivers is improved by reducing the requirements on the digital bandwidth by means of an evolved harmonic sampling technique, and by reducing the requirements on the analog bandwidth and design cost by means of a digital bandwidth interleaving technique and a signal separation technique based on an advanced sparse reconstruction methodology.
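As a hedged illustration of the crest-factor theme, the sketch below measures the peak-to-average power ratio (PAPR) of an OFDM-like signal and applies a basic phase-preserving clip; the subcarrier count and clipping level are made-up, and the thesis's smart crest factor reduction and digital pre-distortion methods are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sc = 256
symbols = (rng.choice([-1.0, 1.0], n_sc)
           + 1j * rng.choice([-1.0, 1.0], n_sc)) / np.sqrt(2)
x = np.fft.ifft(symbols) * np.sqrt(n_sc)       # OFDM-like time-domain signal

def crest_factor_db(s):
    papr = np.max(np.abs(s) ** 2) / np.mean(np.abs(s) ** 2)
    return 10 * np.log10(papr)

# Clip the envelope at 1.5x the RMS level while preserving the phase.
limit = 1.5 * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = x.copy()
over = np.abs(x) > limit
clipped[over] = limit * x[over] / np.abs(x[over])

print(crest_factor_db(x), crest_factor_db(clipped))
```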
22

Dal, Grande Eleonora. "Telephone monitoring of public health issues : a comparison of telephone sampling techniques." Title page, table of contents and abstract only, 2002. http://web4.library.adelaide.edu.au/theses/09MPM/09mpmd142.pdf.

Abstract:
Bibliography: leaves 164-172. This thesis investigated issues in telephone sampling methodology, with the aim of determining whether the telephone sampling method radically affects estimates of health status according to certain health indicators. The study reviews the advantages and disadvantages of telephone surveys.
23

Allay, Najib. "Application of nonuniform sampling techniques in digital signal processing and communication systems." Thesis, University of Westminster, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433854.

24

Gupta, Manik. "Data-centric energy efficient adaptive sampling techniques for wireless pollution sensor networks." Thesis, Queen Mary, University of London, 2014. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8189.

Abstract:
Air pollution is one of the gravest problems faced by the modern world, and urban traffic emissions are the single major source of air pollution. This work is founded on collaboration with environmental scientists who need fine-grained data to enable better understanding of pollutant distribution in urban street canyons. Wireless sensor networks can be used to deploy a significant number of sensors within a space as small as a single street canyon and capture simultaneous readings in both the time and space domains. Sensor energy management becomes the most critical constraint of such a solution, because of the energy-hungry gas sensors. Hence, the main research objective addressed in this thesis is to propose novel temporal and spatial adaptive sampling techniques for wireless pollution sensor nodes that take into account the pollution data characteristics, and enable the sensor nodes to sample only when an important event happens, collecting accurate statistics in as efficient a manner as possible. The major contributions of this thesis can be summarised as: 1) Better understanding of underlying pollution data characteristics (based on real datasets collected during pollution trials in Cyprus and India) using techniques from time series analysis and more advanced methods from multi-fractal analysis and nonlinear dynamical systems. 2) Proposal of a novel adaptive temporal sampling algorithm called Exponential Double Smoothing based Adaptive Sampling (EDSAS) that exploits the presence of slowly decaying autocorrelations and local linear trends. The algorithm uses a time series prediction method based upon exponential double smoothing for irregularly sampled data; it has been compared against a random-walk-based stochastic scheduler called e-Sense and found to give better sampling performance, and a sketch of the smoothing idea follows below. EDSAS has been extended to the spatial domain by incorporating a distributed hierarchical agglomerative clustering mechanism. 3) Proposal of a novel spatial sampling algorithm called Nearest Neighbour based Adaptive Spatial Sampling (NNASS) that exploits the non-linear dynamics existing in pollution data to compute predictability measures to adapt the sampling intervals for the sensor nodes. NNASS has been compared against another spatial sampling algorithm called ASAP and found to give comparable or better sampling performance.
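A minimal sketch of the double-exponential-smoothing idea behind EDSAS is given below; the smoothing constants, error tolerance, and interval bounds are invented for illustration, and the full algorithm (including its irregular-sample handling) is in the thesis.

```python
import numpy as np

alpha, beta = 0.5, 0.3            # smoothing constants (assumed values)

def holt_update(level, trend, y):
    """One step of Holt's double exponential smoothing; returns forecast."""
    new_level = alpha * y + (1 - alpha) * (level + trend)
    new_trend = beta * (new_level - level) + (1 - beta) * trend
    return new_level, new_trend, new_level + new_trend

rng = np.random.default_rng(0)
level, trend = 10.0, 0.0
interval, err_tol = 60.0, 2.0     # sampling interval (s), error budget

y = 10.0
for _ in range(100):
    level, trend, forecast = holt_update(level, trend, y)
    y += rng.normal(0.0, 1.0)                  # next simulated sensor reading
    if abs(forecast - y) > err_tol:
        interval = max(10.0, interval / 2)     # signal changing: sample faster
    else:
        interval = min(600.0, interval * 1.5)  # forecast good: save energy
print(interval)
```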
25

Kandukuri, Somasekhar Reddy. "Spatio-Temporal Adaptive Sampling Techniques for Energy Conservation in Wireless Sensor Networks." Thesis, La Réunion, 2016. http://www.theses.fr/2016LARE0021/document.

Abstract:
Wireless sensor network (WSN) technology has been demonstrated to be a useful measurement system for numerous indoor and outdoor applications. A vast number of applications operate with WSN technology, such as environmental monitoring for forest fire detection, weather forecasting, water supplies, etc. WSNs are independent of existing infrastructure: virtually, they can be deployed in any sort of location and provide sensor samples accordingly in both time and space. By contrast, manual deployments are only achievable at high cost and involve significant work. In real-world applications, the operation of wireless sensor networks can only be maintained if certain challenges are overcome. The lifetime limitation of the distributed sensor nodes is amongst these challenges in achieving energy optimization. Propositions for the solution of these challenges have been an objective of this thesis. In summary, the contributions presented in this thesis address the system lifetime, the exploitation of redundant and correlated data messages, and the sensor node itself in terms of usability. These considerations have led to simple yet efficient data redundancy and correlation algorithms based on hierarchical clustering, which tolerate both spatio-temporal redundancies and their correlations. Furthermore, a multihop sensor network for the implementation of the propositions, with both analytical proofs and a software-level realisation, has been proposed.
26

Litman, Jacob Mordechai. "Advanced optimization and sampling techniques for biomolecules using a polarizable force field." Diss., University of Iowa, 2019. https://ir.uiowa.edu/etd/6792.

Abstract:
Biophysical simulation can be an excellent complement to experimental techniques, but there are unresolved practical constraints to simulation. While computers have continued to improve, the scale of systems we wish to study has continued to increase. This has driven the use of approximate energy functions (force fields), compensating for relatively short simulations via careful structure preparation and accelerated sampling techniques. To address structure preparation, we developed the many-body dead end elimination (MB-DEE) optimizer. We first proved the MB-DEE algorithm on a set of PCNA crystal structures, and accelerated it on GPUs to optimize 472 homology models of proteins implicated in inherited deafness. Advanced physics has been clearly demonstrated to help optimize structures, and with GPU acceleration, this becomes a possibility for large numbers of structures. We also show the novel “simultaneous bookending” algorithm, which is a new approach to indirect free energy (IFE) methods. These first perform simulations under a cheaper “reference” potential, then correct the thermodynamics to a more sophisticated “target” potential, combining the speed of the reference potential with the accuracy of the target potential. Simultaneous bookending is shown as a valid IFE approach, and methods to realize speedups vs. the direct path are discussed. Finally, we are developing the Monte Carlo Orthogonal Space Random Walk (MC-OSRW) algorithm for high-performance alchemical free energy simulations, bypassing some of the difficulty in OSRW methods. This work helps prevent inaccuracies caused by simpler electrostatic models by making advanced polarizable force fields more accessible for routine simulation.
27

Cossio Tejada, Pilar. "Protein physics by advanced computational techniques: conformational sampling and folded state discrimination." Doctoral thesis, SISSA, 2011. http://hdl.handle.net/20.500.11767/4685.

Abstract:
Proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyze biochemical reactions and are vital to metabolism. Proteins also have structural or mechanical functions, such as actin and myosin in muscle, which are in charge of motion and locomotion of cells and organisms. Other proteins are important for transporting materials, cell signaling, immune response, and several other functions. Proteins are the main building blocks of life. A protein is a polymer chain of amino acids whose sequence is defined in a gene: three nucleotide bases specify one of the 20 natural amino acids. All amino acids possess common structural features. They have an α-carbon to which an amino group, a carboxyl group, a hydrogen atom and a variable side chain are attached. In a protein, the amino acids are linked together by peptide bonds between the carboxyl and amino groups of adjacent residues...
28

Luo, Chenchi. "Non-uniform sampling: algorithms and architectures." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/45873.

Abstract:
Modern signal processing applications emerging in the telecommunication and instrumentation industries have placed an increasing demand for ADCs with higher speed and resolution. The most fundamental challenge in such progress lies at the heart of classic signal processing: the Shannon-Nyquist sampling theorem, which states that when sampling uniformly, there is no way to increase the upper frequency in the signal spectrum and still unambiguously represent the signal except by raising the sampling rate. This thesis is dedicated to the exploration of ways to break through the Shannon-Nyquist sampling rate by applying non-uniform sampling techniques. Time interleaving is probably the most intuitive way to parallelize the uniform sampling process in order to achieve a higher sampling rate. Unfortunately, the channel mismatches in the time-interleaved ADC (TIADC) system make the system an instance of a recurrent non-uniform sampling system whose non-uniformities are detrimental to the performance of the system and need to be calibrated. Accordingly, this thesis proposes a flexible and efficient architecture to compensate for the channel mismatches in the TIADC system. As a key building block in the calibration architecture, the design of the Farrow-structured adjustable fractional delay filter has been investigated in detail. A new modified Farrow structure is proposed to design adjustable FD filters that are optimized for a given range of bandwidths and fractional delays. The application of the Farrow structure is not limited to the design of adjustable fractional delay filters. It can also be used to implement adjustable lowpass, highpass and bandpass filters as well as adjustable multirate filters. This thesis further extends the Farrow structure to the design of filters with adjustable polynomial phase responses. Inspired by the theory of compressive sensing, another contribution of this thesis is to use randomization as a means to overcome the limit of the Nyquist rate. This thesis investigates the impact of random sampling intervals or jitters on the power spectrum of the sampled signal. It shows that the aliases of the original signal can be well shaped by choosing an appropriate probability distribution of the sampling intervals or jitters, such that the aliases can be viewed as a source of noise in the signal power spectrum. A new theoretical framework has been established to associate the probability mass function of the random sampling intervals or jitters with the alias-shaping effect. Based on this theoretical framework, the thesis proposes three random sampling architectures, i.e., SAR ADC, ramp ADC and level-crossing ADC, that can be easily implemented based on the corresponding standard ADC architectures. Detailed models and simulations are established to verify the effectiveness of the proposed architectures. A new reconstruction algorithm called the successive sine matching pursuit has also been proposed to recover a class of spectrally sparse signals from a sparse set of non-uniform samples onto a denser uniform time grid so that classic signal processing techniques can be applied afterwards.
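To illustrate the Farrow structure mentioned above, here is a minimal sketch of a 3rd-order Lagrange fractional delay filter: the fixed FIR branches are each applied once, and the fractional delay mu enters only through a polynomial evaluation, so it can be retuned at run time without redesigning the filter. This is a textbook variant, not the thesis's modified structure.

```python
import numpy as np
from scipy.signal import lfilter

# C[k][n]: mu^k coefficients of the 4-tap Lagrange interpolator
# (total delay = 1 + mu samples, 0 <= mu < 1).
C = np.array([
    [0.0,   1.0,  0.0,  0.0],
    [-1/3, -1/2,  1.0, -1/6],
    [1/2,  -1.0,  1/2,  0.0],
    [-1/6,  1/2, -1/2,  1/6],
])

def farrow_delay(x, mu):
    branches = [lfilter(c, 1.0, x) for c in C]   # fixed FIR branches
    y = branches[-1]
    for b in reversed(branches[:-1]):            # Horner evaluation in mu
        y = y * mu + b
    return y

x = np.sin(0.05 * np.arange(200))
y = farrow_delay(x, 0.3)                         # delay of 1.3 samples
```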
29

Wong, Ming-hong Daniel (黃明康). "A study of passive sampling and modelling techniques for urban air pollution determination." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B30252325.

30

Georgiev, Iliyan. "Path sampling techniques for efficient light transport simulation." Advisor: Philipp Slusallek. Saarbrücken: Saarländische Universitäts- und Landesbibliothek, 2015. http://d-nb.info/1073877035/34.

31

Zhang, Weihong. "Multi-scale simulations of intrinsically disordered proteins and development of enhanced sampling techniques." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17916.

Abstract:
Doctor of Philosophy
Department of Biochemistry and Molecular Biophysics
Jianhan Chen
Intrinsically disordered proteins (IDPs) are functional proteins that lack stable tertiary structures under physiological conditions. IDPs are key components of regulatory networks that dictate various aspects of cellular decision-making, and are over-represented in major disease pathways. For example, about 30% of eukaryotic proteins contain intrinsically disordered regions, and over 70% of cancer-associated proteins have been identified as IDPs. The highly heterogeneous nature of IDPs has presented a significant challenge for experimental characterization using NMR, X-ray crystallography, or FRET. These challenges represent a unique opportunity for molecular modeling to make critical contributions. In this study, computer simulations at multiple scales were utilized to characterize the structural properties of unbound IDPs as well as to obtain a mechanistic understanding of IDP interactions. These studies of IDPs also reveal significant limitations in current simulation methodology. In particular, successful simulations of biomolecules not only require accurate molecular models, but also depend on the ability to sufficiently sample the complex conformational space. By designing a realistic yet computationally tractable coarse-grained protein model, we demonstrated that the popular temperature replica exchange enhanced sampling is ineffective in driving faster reversible folding transitions for proteins. The second original contribution of this dissertation is the development of novel simulation methods for enhanced sampling of protein conformations, specifically, the replica exchange with guided-annealing (RE-GA) method and the multiscale enhanced sampling (MSES) method. We expect these methods to be highly useful in generating converged conformational ensembles.
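The temperature replica exchange method examined in the dissertation rests on a Metropolis swap criterion between neighbouring replicas, sketched below with made-up energies and a hypothetical temperature ladder.

```python
import numpy as np

kB = 0.0019872041          # Boltzmann constant in kcal/(mol K)

def swap_accepted(E_i, T_i, E_j, T_j, rng):
    """Metropolis rule for exchanging configurations between two replicas."""
    delta = (1 / (kB * T_i) - 1 / (kB * T_j)) * (E_j - E_i)
    return rng.random() < min(1.0, np.exp(-delta))

rng = np.random.default_rng(0)
temps = [300.0, 320.0, 342.0, 366.0]           # geometric temperature ladder
energies = [-120.0, -115.0, -108.0, -100.0]    # hypothetical replica energies
for k in range(len(temps) - 1):
    ok = swap_accepted(energies[k], temps[k], energies[k + 1], temps[k + 1], rng)
    print(f"swap {k}<->{k+1}: {ok}")
```

Swaps let low-temperature replicas inherit configurations that crossed barriers at high temperature; the dissertation's point is that for folding transitions this mechanism alone can be surprisingly ineffective.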
32

Coates, Adam Ross. "Methods for ultra-broadband correlator development focusing on high-speed digital sampling techniques." Thesis, University of Oxford, 2013. http://ora.ox.ac.uk/objects/uuid:8ace9e68-d8e7-4f1d-b6c9-13853eecbd47.

Abstract:
In radio astronomy, a key factor limiting observations is the available bandwidth of the system. This thesis looks at two different approaches to building ultra-broadband correlators for use in radio astronomy. The first was a 2-20GHz double-sideband complex analogue correlator that was constructed before the work of this thesis. Characterisation tests are performed and a basic calibration is attempted. Both sets of experiments show good results, with the basic calibration successfully able to compensate for gain differences between the lags over the reduced bandwidth range used in the testing. The second approach was the investigation of different techniques for high-speed digital sampling, capable of providing equivalent bandwidths to the analogue system. The use of FPGA high-speed serial interfaces as direct 1-bit 3.125 GS/s samplers was investigated. Single-frequency sampling showed that a signal-to-noise ratio close to the theoretical maximum across the band was achieved (≈ 0.8 effective bits). Techniques were also identified to use multiple transceivers to generate a single interleaved stream at a higher effective sampling rate. Two different methods were also explored for producing greater-than-1-bit sampling. A hysteresis approach was shown not to produce the desired results, and a reference-based sampler was adopted in the end. Finally, the interleaving and multi-bit techniques were combined to generate a single 1.5-bit 6.25 GS/s sampler. This was seen to have reduced signal-to-noise compared to the expected values, believed to be caused by the poor method of RF signal injection causing cross-talk between the channels and large amounts of loss. As a comparison to the direct sampling method, an external 1-bit high-speed Hittite comparator was also examined. The single-frequency experiment was repeated, with a slightly higher signal-to-noise ratio found compared to the direct sampling method; this was again believed to be due to the RF environments used. From the sampling setups, a four-input, six-baseline lag correlator was constructed using the direct sampling method. The entire correlator, as well as the sampling transceivers, was incorporated into a single Xilinx Virtex 5 FPGA. This was shown to have the expected response to single-frequency, broadband and noise signals. The thesis concludes with a characterisation of the RF devices used throughout the testing procedures. Several new devices were developed through the course of the experiments, with the designs documented. All the necessary components to construct IF chains for both the analogue and digital correlators described are present. This leads to simulations of complete IF chains, with the expected responses shown.
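A 1-bit correlator of the kind described keeps only the sign of each sample, and the underlying correlation can be recovered with the standard arcsine-law (Van Vleck) relation. The sketch below illustrates this with synthetic Gaussian noise; it is a generic illustration, not the thesis's FPGA processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho_true = 100_000, 0.3
cov = [[1.0, rho_true], [rho_true, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

bx, by = np.sign(x), np.sign(y)      # 1-bit quantisation keeps only the sign
r1 = np.mean(bx * by)                # raw 1-bit correlation estimate

# Van Vleck correction: E[sign(x)sign(y)] = (2/pi)*arcsin(rho) for Gaussian
# inputs, so invert it to recover the true correlation coefficient.
rho_hat = np.sin(np.pi / 2 * r1)
print(rho_true, rho_hat)
```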
33

Millerand, Gaëtan. "Enhancing decision tree accuracy and compactness with improved categorical split and sampling techniques." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279454.

Abstract:
Decision trees are among the most popular algorithms in the domain of explainable AI. From the structure of a tree, it is simple to induce a set of decision rules that are fully understandable to a human. That is why there is currently research on improving decision trees and on mapping other models into trees. Decision trees generated by C4.5 or ID3 suffer from two main issues. The first is that they often have lower performance, in terms of accuracy for classification tasks or mean square error for regression tasks, than state-of-the-art models like XGBoost or deep neural networks. On almost every task there is a significant gap between top models like XGBoost and decision trees. This thesis addresses this problem by providing a new method based on data augmentation using state-of-the-art models, which outperforms the old ones on the evaluation metrics. The second problem is the compactness of the decision tree: as the depth increases, the set of rules becomes exponentially large, especially when the split attribute is a categorical one. Standard solutions for handling categorical values are to turn them into dummy variables or to split on each value, producing complex models. A comparative study of current methods for splitting categorical values in classification problems is carried out in this thesis, and a new method is also studied for the regression case.
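For context, the two common treatments of categorical features mentioned above can be sketched as follows: one-hot (dummy) encoding, plus a target-encoding alternative that yields a single ordered column to split on. The toy data and column names are illustrative assumptions, not the thesis's benchmark; the target-encoding variant is a common alternative, not necessarily the thesis's new method.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({
    "city": ["a", "b", "c", "a", "b", "c", "a", "b"],
    "y":    [1,   0,   1,   1,   0,   1,   0,   0],
})

# Option 1: dummy variables -- one binary column per category.
X_onehot = pd.get_dummies(df["city"], prefix="city")

# Option 2: target (mean) encoding -- replace each category by its mean
# target, so the tree can find one threshold split over all categories.
means = df.groupby("city")["y"].mean()
X_target = df[["city"]].assign(city=df["city"].map(means))

for X in (X_onehot, X_target):
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, df["y"])
    print(tree.score(X, df["y"]))
```

One-hot splits consume one node per category and deepen the tree, which is exactly the compactness problem the thesis describes.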
34

Wong, Ming-hong Daniel. "A study of passive sampling and modelling techniques for urban air pollution determination /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B2093385X.

35

Harlou, Rikke. "Understanding magma genesis through analysis of melt inclusions : application of innovative micro-sampling techniques." Thesis, Durham University, 2007. http://etheses.dur.ac.uk/3728/.

Abstract:
Melt entrapped as inclusions in early-formed phenocrysts provides geochemists with an exceptional opportunity to study sample material from the earliest stages in the formation of a suite of lavas. With a focus on olivine-hosted melt inclusions, this Ph.D. thesis has explored the potential for obtaining Sr isotope ratios on individual olivine-hosted melt inclusions, and examined the potential for Sr isotope studies on melt inclusions to reveal new information on the origin of CFB and OIB. A novel technique is introduced that facilitates precise and accurate Sr isotope and trace element analysis of individual melt inclusions at sub-nanogram levels - thus applicable to typical melt inclusion suites from OIB and CFB, and in general to 'problems' where precise and accurate Sr isotope and trace element information is required on sub-nanogram Sr samples. The technique developed combines off-line sampling by micro-milling, micro Sr column chemistry, Sr isotope determination by TIMS, and trace element analysis by ICPMS. Olivine-hosted melt inclusions from two suites of high ³He/⁴He lavas of the North Atlantic Igneous Province are studied. These reveal that Sr isotope and trace element measurements on individual melt inclusions provide a higher resolution picture of the pre-aggregated melt compositions and the different mantle and crustal components involved in the magma genesis, which otherwise were obscured within the whole-rock data. The Sr isotope and elemental variability recorded by the olivine-hosted melt inclusions contrasts with the more subtle variations of the host lava suites and raises the question of whether the ³He/⁴He measured in melt inclusions in olivine phenocrysts should be related to the chemistry of the melt inclusions rather than the bulk lava chemistry. The study further provides strong evidence that the extreme, high ³He/⁴He signature observed in magmas from the North Atlantic Igneous Province is derived from a depleted component in their source, and hence such a He isotopic signature should no longer be regarded as canonical evidence for a primitive, lower mantle source.
36

Mohamed, Lina Mahgoub Yahya. "Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction." Thesis, Heriot-Watt University, 2011. http://hdl.handle.net/10399/2435.

Full text
Abstract:
Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques has been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The resulting set of history-matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and to provide economic evaluations for different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The choice of algorithm affects how fast a model is obtained and how well the model fits the production data. The sampling techniques that have been developed to date include, among others, gradient-based methods, evolutionary algorithms, and the ensemble Kalman filter (EnKF). This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo, and Population Markov Chain Monte Carlo. The investigated techniques have the capability of navigating the parameter space and producing history-matched models that can be used to quantify the uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with the Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems and the benefits of the developed methods over the NA. The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells, and collected at different times. Multiple objectives can be constructed from these data and explicitly optimised in the multi-objective scheme. The thesis has extended PSO to handle multi-objective history matching problems in which a number of possibly conflicting objectives must be satisfied simultaneously. The benefits and efficiency of the innovative multi-objective particle swarm scheme (MOPSO) are demonstrated for synthetic reservoirs. It is demonstrated that the MOPSO procedure can provide a substantial improvement in finding a diverse set of good-fitting models with fewer of the very costly forward simulation runs than the standard single-objective case, depending on how the objectives are constructed. The thesis has also shown how to tackle a large number of unknown parameters through the coupling of high-performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA) for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem have demonstrated that the approach is indeed applicable to practical problems. A comparison of PSO with the EnKF data assimilation method has been carried out and has concluded that both methods obtained comparable results on the example case. This point reinforces the need for using a range of assisted history matching algorithms for more confidence in predictions.
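As a reference point for the particle swarm optimisation discussed above, here is a minimal, generic PSO sketch in Python (numpy only). The inertia and acceleration coefficients, the box-bound handling, and the toy misfit function are illustrative assumptions; the thesis's actual history-matching objective and its MOPSO extensions are not reproduced here.

```python
import numpy as np

def pso_minimize(misfit, lo, hi, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Generic particle swarm minimiser over a box [lo, hi]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([misfit(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, *x.shape))
        # velocity update pulls each particle toward its own best and the swarm best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([misfit(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# A toy quadratic "misfit" standing in for a history-matching objective.
best, best_f = pso_minimize(lambda p: float(np.sum((p - 1.0) ** 2)),
                            lo=[-5, -5], hi=[5, 5])
print(best, best_f)
```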
APA, Harvard, Vancouver, ISO, and other styles
37

Farrar, Nick. "The development and application of novel passive air sampling techniques for persistent organic pollutants." Thesis, Lancaster University, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.429977.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Masemene, Monyadiwa Martha. "Analysis of the mandibular pheromone of living honeybee queens using non-destructive sampling techniques." Pretoria : [s. n.], 2008. http://upetd.up.ac.za/thesis/available/etd-08122009-164729/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Masemene, Monyadiwa Martha. "Analysis of the mandibular pheromone of living honeybee queens using non-destructive sampling techniques." Diss., University of Pretoria, 2009. http://hdl.handle.net/2263/27199.

Full text
Abstract:
Honeybee queens produce a number of pheromones that influence the behaviour and physiology of worker bees. The mandibular gland secretion of queens, the major pheromone source, suppresses the formation of emergency queen cells and worker reproduction, and coordinates the social organisation of the colony. A study of analytical procedures for honeybee queen mandibular gland pheromone was undertaken, with the aim of performing multiple analyses of the same individual over a period of time. Attention was given to developing new non-destructive sampling methods that would help to characterise signal changes. This study involves the characterisation of non-destructive sampling devices that are highly selective and sensitive for the extraction of mandibular pheromone. Two polymer-based sampling techniques compatible with gas chromatography, solid phase micro extraction and silicone rubber tubing, were studied. A solvent extract of mandibular pheromone was analysed by gas chromatography (GC) and employed as a tested reference method for the two newly developed techniques. Direct sampling with solid phase micro extraction fibres at the glandular openings at the base of the mandibles is a non-destructive method that met our objectives. Mandibular gland secretions from living honeybee queens were sampled with polar and non-polar fibres. Non-polar fibres were saturated with bis(trimethylsilyl)trifluoroacetamide (BSTFA) prior to mandibular pheromone extraction; treatment of the polymer devices with a derivatising agent enhances extraction of the polar components of the mandibular pheromone. BSTFA-saturated non-polar fibres with a low-polarity column gave consistent results compared to polar fibres with a mid-polar column. The results confirmed that the solid phase micro extraction technique is a sensitive and non-destructive method that can ideally be used to analyse insect secretions, particularly in tracking temporal changes in secretion composition during an individual's life. Silicone rubber tubing consisting of polydimethylsiloxane was explored as an alternative technique for sampling pheromones from living individuals. One-cm-long pieces of silicone rubber tubing were prepared and saturated with BSTFA prior to mandibular pheromone extraction to enhance extraction of polar components. Preliminary studies on mandibular pheromone standards sampled with this method showed promising results; however, queen mandibular secretion analyses were characterised by low recovery of pheromonal compounds. The new polymer-based techniques that we employed isolated the mandibular pheromones from living honeybee queens directly from the mandibles, and the pheromonal components of the mandibular gland secretion were successfully analysed.
Dissertation (MSc)--University of Pretoria, 2009.
Chemistry
APA, Harvard, Vancouver, ISO, and other styles
40

Batidzirai, Jesca Mercy. "Randomization in a two armed clinical trial: an overview of different randomization techniques." Thesis, University of Fort Hare, 2011. http://hdl.handle.net/10353/395.

Full text
Abstract:
Randomization is the key element of any sensible clinical trial. It is the only way we can be sure that patients have been allocated to the treatment groups without bias and that the treatment groups are closely similar before the start of the trial. The randomization schemes used to allocate patients to the treatment groups play a role in achieving this goal. This study uses SAS simulations to perform categorical data analysis and to compare two main randomization schemes, unrestricted and restricted randomization (simple randomization and the minimization method, respectively), in dental studies with small samples. Results show that minimization produces almost equally sized treatment groups, whereas simple randomization is weak at balancing prognostic factors. Nevertheless, simple randomization can also produce balanced groups, even in small samples, by chance. Statistical power is also better when minimization is used than with simple randomization, but bigger samples might be needed to boost the power.
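A minimal sketch of the minimization method compared here, in the spirit of Pocock and Simon: each new patient goes to the arm that would leave the prognostic factors most balanced, with ties broken at random. The factor names and the range-based imbalance measure are illustrative assumptions; the thesis's SAS implementation is not reproduced.

```python
import random

def minimization_assign(patient, arms, counts, factors):
    """Assign `patient` (a dict of factor -> level) to the arm that
    minimises total imbalance across prognostic factors (range measure)."""
    scores = []
    for arm in arms:
        total = 0
        for f in factors:
            lvl = patient[f]
            # tallies as they would look if this patient joined `arm`
            hypo = [counts[a][f].get(lvl, 0) + (1 if a == arm else 0) for a in arms]
            total += max(hypo) - min(hypo)
        scores.append(total)
    best = min(scores)
    arm = random.choice([a for a, s in zip(arms, scores) if s == best])
    for f in factors:  # update running tallies
        counts[arm][f][patient[f]] = counts[arm][f].get(patient[f], 0) + 1
    return arm

arms = ["A", "B"]
factors = ["sex", "age_group"]
counts = {a: {f: {} for f in factors} for a in arms}
print(minimization_assign({"sex": "F", "age_group": "young"}, arms, counts, factors))
```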
APA, Harvard, Vancouver, ISO, and other styles
41

DING, MENGMENG. "REGRESSION BASED ANALOG PERFORMANCE MACROMODELING: TECHNIQUES AND APPLICATIONS." University of Cincinnati / OhioLINK, 2006. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1146145102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Amoah, Barbara Amoh. "Monitoring populations of the ham mite, Tyrophagus putrescentiae (Schrank) (Acari: Acaridae): research on traps, orientation behavior, and sampling techniques." Diss., Kansas State University, 2016. http://hdl.handle.net/2097/32489.

Full text
Abstract:
Doctor of Philosophy
Department of Entomology
Thomas W. Phillips
The phase-out of the production of methyl bromide, the most effective fumigant for controlling the ham mite, Tyrophagus putrescentiae (Schrank) (Acari: Acaridae), on dry-cured ham, has necessitated the search for other management methods. The foundation of a successful management program is an effective monitoring program that provides information on pest presence and abundance over time and space to help in making management decisions. Mite activity was monitored weekly, using the standard trap made from a disposable Petri dish and a dog food-based bait, in five dry-cured ham aging rooms at three commercial processing facilities from June 2012 to September 2013. Results indicated that mite numbers in traps typically showed a sharp decline after fumigation, followed by a steady increase until the next fumigation. Average trap captures varied with trap location, indicating that traps could be used to identify locations where mite infestation of hams may be more likely to occur. Experiments were also conducted in 6 m x 3 m climate-controlled rooms to determine the effects of physical factors on trap capture. Factors such as trap design, trap location, trap distance, duration of trapping, and light conditions had significant effects on mite capture. Mites also responded differently to light emitting diodes of different wavelengths, either as a component of the standard trap or as a stand-alone stimulus for orientation. To determine the relationship between trap capture and mite density, experiments were carried out in the climate-controlled rooms. Mite density was varied while trap number remained constant across all densities. There was a strong positive correlation between trap capture and mite density. In simulated ham aging rooms, the distribution of mites on hams was determined, and different sampling techniques such as vacuum sampling, trapping, rack sampling, ham sampling and absolute mite counts from whole hams were compared and correlated. Results showed weak or moderate correlations between sampling techniques in pairwise comparisons. Two sampling plans were developed to determine the number of samples required to estimate mite density on ham with respect to fixed precision levels or to an action threshold for making pest management decisions. The findings reported here can help optimise the trapping and sampling of ham mite populations and support the development of efficient, cost-effective tools for pest management decisions that incorporate alternatives to methyl bromide.
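Fixed-precision sampling plans of the kind mentioned at the end of this abstract typically rest on the standard formula n = s² / (D · x̄)², where D is the desired precision expressed as the standard error divided by the mean. The sketch below applies that generic formula to pilot counts; it is an assumption that the thesis's plans take exactly this form, and the counts shown are placeholders.

```python
import numpy as np

def fixed_precision_n(pilot_counts, D=0.25):
    """Samples needed so that SE/mean <= D, assuming simple random sampling:
    n = s^2 / (D * xbar)^2."""
    xbar = np.mean(pilot_counts)
    s2 = np.var(pilot_counts, ddof=1)
    return int(np.ceil(s2 / (D * xbar) ** 2))

# Hypothetical pilot trap counts; a precision of 0.25 is a common field standard.
print(fixed_precision_n([3, 7, 0, 12, 5, 9, 2, 6], D=0.25))
```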
APA, Harvard, Vancouver, ISO, and other styles
43

Summers, B. "On extensions to the sampling theorem and processing techniques for non-equispaced sampled-data systems." Thesis, University of Westminster, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.233046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

DeGeer, Staci Lynn. "Evaluation of four different surface sampling techniques for microbes on three different food preparation surfaces." Thesis, Manhattan, Kan. : Kansas State University, 2009. http://hdl.handle.net/2097/1350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Galgey, Neil James. "Sampling techniques for the recovery of male offender DNA from the female victim skin surfaces." Thesis, Galgey, Neil James (2021) Sampling techniques for the recovery of male offender DNA from the female victim skin surfaces. Masters by Coursework thesis, Murdoch University, 2021. https://researchrepository.murdoch.edu.au/id/eprint/63186/.

Full text
Abstract:
Sexual assault is a very serious criminal offence with great health and welfare repercussions. Trace DNA refers to the minute quantities of DNA that can be transferred between a victim, a perpetrator and/or the crime scene during an assault. Often in a sexual assault case the victim's body is the most important, and sometimes the only available, crime scene. It is therefore critically important to take a DNA sample successfully after the assault. Only one unpublished study has touched on the comparison of all available sampling techniques. This study aimed to compare and analyse five sampling techniques: double swabbing, single swabbing, tape-lifting, mini tape-lifting, and alcohol wipes, the last being a relatively new method. To do this, the study used a scenario in which saliva from a male offender is found on a female victim's neck. Although the results were considerably contaminated, double swabbing obtained the highest DNA concentrations of the five methods, and the results also gave some promise for the use of tape-lifting, mini tape-lifting and alcohol wipes. These methods would, however, need to be verified by future peer-reviewed research, since the contaminated results cannot conclusively support any of the hypotheses proposed. Overall, owing to the largely contaminated results, this study found nothing groundbreaking, but it is imperative that follow-up research takes place to verify whether something of value was found. Nothing of statistical significance could be established because of the contamination; in future, great care should be taken when moving samples from tube to well plate to minimise the risk of contamination.
APA, Harvard, Vancouver, ISO, and other styles
46

Good, Norman Markus. "Methods for estimating the component biomass of a single tree and a stand of trees using variable probability sampling techniques." Thesis, Queensland University of Technology, 2001. https://eprints.qut.edu.au/37097/1/37097_Good_2001.pdf.

Full text
Abstract:
This thesis developed multistage sampling methods for estimating the aggregate biomass of selected tree components, such as leaves, branches, trunk and total, in woodlands in central and western Queensland. To estimate the component biomass of a single tree, randomised branch sampling (RBS) and importance sampling (IS) were trialled. RBS and IS were found to reduce the amount of time and effort needed to sample tree components in comparison with other standard destructive sampling methods such as ratio sampling, especially when sampling small components such as leaves and small twigs. However, RBS did not estimate leaf and small twig biomass to an acceptable degree of precision using current methods for creating path selection probabilities. In addition to providing an unbiased estimate of tree component biomass, the individual estimates were used for developing allometric regression equations. Equations based on large components such as total biomass produced narrower confidence intervals than equations developed using ratio sampling. However, RBS does not estimate the biomass of small components such as leaves and small wood with an acceptable degree of precision, and should mainly be used in conjunction with IS for estimating larger component biomass. A whole tree was completely enumerated to set up a sampling space with which RBS could be evaluated under a number of scenarios. To achieve a desired precision, RBS sample size and branch diameter exponents were varied, and the RBS method was simulated using both analytical and re-sampling methods. It was found that there is a significant amount of natural variation present when relating, for example, the biomass of small components to branch diameter. This finding validates earlier decisions to question the efficacy of RBS for estimating small component biomass in eucalypt species. In addition, significant improvements can be made to the precision of RBS by increasing the number of samples taken, but more importantly by varying the exponent used for constructing selection probabilities. To further evaluate RBS on trees with growth forms differing from that of the enumerated tree, virtual trees were generated. These virtual trees were created using L-systems algebra. Decision rules for creating trees were based on easily measurable characteristics that influence a tree's growth and form. These characteristics included child-to-child and children-to-parent branch diameter relationships, branch length and branch taper; they were modelled using probability distributions of best fit. By varying the size of a tree and/or the variation in the model describing tree characteristics, it was possible to simulate the natural variation between trees of similar size and form. By creating visualisations of these trees, it is possible to determine visually whether RBS could be effectively applied to particular trees or tree species. Simulation also aided in identifying which characteristics most influenced the precision of RBS, namely branch length and branch taper. After evaluation of RBS/IS for estimating the component biomass of a single tree, methods for estimating the component biomass of a stand of trees (or plot) were developed and evaluated. A sampling scheme was developed that incorporated both model-based and design-based biomass estimation methods. This scheme clearly illustrated the strong and weak points associated with both approaches to estimating plot biomass.
Using ratio sampling was more efficient than using RBS/IS in the field, especially for larger tree components. Probability proportional to size (PPS) sampling, with size being the trunk diameter at breast height, generated estimates of component plot biomass that were comparable to those generated using model-based approaches. The research did, however, indicate that PPS is more precise than the use of regression prediction (allometric) equations for estimating larger components such as trunk or total biomass, and that precision increases in areas of greater biomass. Using more reliable auxiliary information for identifying suitable strata would reduce the amount of within-plot variation, thereby increasing precision. PPS had the added advantage of being unbiased and unhindered by the numerous assumptions about the population of interest that a model-based approach requires. The application of allometric equations in predicting the component biomass of tree species other than that for which the allometric was developed is problematic. Differences in wood density need to be taken into account, as well as differences in growth form and within-species variability, as outlined in the virtual tree simulations. However, the development and application of allometric prediction equations in local, species-specific contexts is more desirable than PPS.
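A minimal sketch of the randomised branch sampling step described above: at each fork the walk selects a child branch with probability proportional to diameter raised to an exponent, and the product of selection probabilities along the path gives the expansion factor for an unbiased estimate. The nested-dict tree representation and the default exponent of 2 are illustrative assumptions.

```python
import numpy as np

def rbs_path(tree, exponent=2.0, rng=None):
    """Walk from the trunk to a terminal segment, choosing each child with
    probability proportional to diameter**exponent. Returns the terminal
    node and the unconditional probability of the path taken."""
    rng = rng or np.random.default_rng()
    node, prob = tree, 1.0
    while node["children"]:
        d = np.array([c["diameter"] for c in node["children"]], dtype=float)
        p = d ** exponent / np.sum(d ** exponent)
        i = rng.choice(len(p), p=p)
        prob *= p[i]
        node = node["children"][i]
    return node, prob

# Hypothetical two-branch tree; dividing the measured value by the path
# probability gives an unbiased estimate of the total over terminal segments.
tree = {"diameter": 20.0, "children": [
    {"diameter": 12.0, "children": [], "biomass": 4.1},
    {"diameter": 8.0, "children": [], "biomass": 1.7},
]}
node, prob = rbs_path(tree)
print(node["biomass"] / prob)
```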
APA, Harvard, Vancouver, ISO, and other styles
47

Carter, David Hugh Harrison. "An assessment of variable radius plot sampling techniques for measuring change over time : a simulation study." Thesis, University of British Columbia, 2007. http://hdl.handle.net/2429/31553.

Full text
Abstract:
The most commonly used approach for measuring change over time involves using Fixed Radius Plots (FRPs). A major disadvantage of using FRPs for detecting change is the cost of implementation, both statistically and monetarily. Variable Radius Plots (VRPs) can also be used for measuring change over time; however, VRPs have not been widely used due to perceived problems, including: 1. sudden additions to the value of interest, which produce high variability; 2. complex mathematical formulae used for computation; 3. the use of an angle for measuring trees; and 4. a lack of studies on the use of VRPs for measuring change over time. In this thesis four VRP change estimators were evaluated: (1) the traditional subtraction estimator; (2) the Grosenbaugh estimator; (3) the Distance Variable (DV) estimator; and (4) the Flewelling estimator. Tree and stand data were generated using a stand simulator (StandSim), and stands were sampled using a series of SAS programs. Volume change, basal area change, and stems per hectare change were calculated for a series of 54 stand conditions in which density, dbh distribution, mortality, and spatial distribution were varied. Relative efficiencies were calculated comparing each VRP estimator to both the FRP estimator and the subtraction estimator. The DV estimator had a relative efficiency greater than 1.0 (i.e., it was more precise) in 59% of the scenarios for volume change, 15% of the scenarios for basal area change, and 0% of the scenarios for stems per hectare change when compared to FRP. Mortality is a large source of variability for all estimators, and in another comparison, in which high-mortality scenarios were excluded, the DV estimator had a relative efficiency greater than 1.0 in 74% of scenarios for volume change, 30% of scenarios for basal area change, and 0% of scenarios for stems per hectare change. The DV estimator had a relative efficiency greater than 1.0 in at least 70% of the scenarios for each attribute when compared to the traditional subtraction method, excluding the high-mortality scenarios. This thesis demonstrates that VRP estimators for measuring change over time (in particular the DV estimator) can reduce the variability introduced by sudden additions of value. VRP estimators are not so complex that they should be excluded as an option for use, and they require no new field techniques. Further studies should be completed to support the use of VRPs for change detection.
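The relative-efficiency comparisons reported here reduce to a variance ratio across repeated plot-level estimates. Below is a minimal sketch, assuming equal sampling cost per plot (so cost terms cancel); the simulated numbers are placeholders, not values from the thesis.

```python
import numpy as np

def relative_efficiency(est_a, est_b):
    """RE > 1 means estimator A is more precise than B (ratio of variances)."""
    return np.var(est_b, ddof=1) / np.var(est_a, ddof=1)

rng = np.random.default_rng(1)
dv_estimates = rng.normal(10.0, 2.0, 200)   # hypothetical DV-estimator change per plot
frp_estimates = rng.normal(10.0, 2.5, 200)  # hypothetical FRP change per plot
print(relative_efficiency(dv_estimates, frp_estimates))
```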
Faculty of Forestry
APA, Harvard, Vancouver, ISO, and other styles
48

Putzu, Marina [Verfasser], and M. [Akademischer Betreuer] Elstner. "Investigation of the dynamic nature of proteins with enhanced sampling techniques / Marina Putzu ; Betreuer: M. Elstner." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1162544066/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Zhang, Nan. "Advanced fault diagnosis techniques and their role in preventing cascading blackouts." Texas A&M University, 2006. http://hdl.handle.net/1969.1/4897.

Full text
Abstract:
This dissertation studied new transmission line fault diagnosis approaches using new technologies and proposed a scheme for applying those techniques to preventing and mitigating cascading blackouts. The new fault diagnosis approaches are based on two time-domain techniques: neural network based and synchronized sampling based. For the neural network based fault diagnosis approach, a specially designed fuzzy Adaptive Resonance Theory (ART) neural network algorithm was used. Several application issues were solved by coordinating multiple neural networks and improving the feature extraction method. A new boundary protection scheme was designed using a wavelet transform and a fuzzy ART neural network. By extracting the fault-generated high-frequency signal, the new scheme overcomes the difficulty the traditional method has in differentiating internal faults from external ones using data from only one end of the transmission line. The fault diagnosis based on synchronized sampling utilizes the Global Positioning System of satellites to synchronize data samples from the two ends of the transmission line. An effort has been made to extend the fault location scheme into a complete fault detection, classification and location scheme. Without requiring extra data, the new approach enhances the functions of fault diagnosis and improves performance. The two fault diagnosis techniques, using neural networks and synchronized sampling, are combined into an integrated real-time fault analysis tool to be used as a reference for the traditional protective relay. They work with an event analysis tool based on event tree analysis (ETA) in a proposed local relay monitoring tool. An interactive monitoring and control scheme for preventing and mitigating cascading blackouts is proposed. The local relay monitoring tool was coordinated with the system-wide monitoring and control tool to enable a better understanding of system disturbances. Case studies were presented to demonstrate the proposed scheme. Improved simulation software using MATLAB and EMTP/ATP was developed to study the proposed fault diagnosis techniques. Comprehensive performance studies were implemented, and the test results validated the enhanced performance of the proposed approaches over traditional fault diagnosis performed by the transmission line distance relay.
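As an illustration of the wavelet step in the boundary protection scheme described above, here is a sketch that extracts the energy of fault-generated high-frequency detail bands, the kind of feature that would then feed a classifier such as the fuzzy ART network. The Daubechies-4 wavelet, four decomposition levels, and PyWavelets itself are assumptions standing in for whatever tooling the dissertation used.

```python
import numpy as np
import pywt

def hf_energy_features(signal, wavelet="db4", level=4):
    """Energy of each detail band of a discrete wavelet decomposition.
    coeffs[0] is the approximation; coeffs[1:] are details, coarse to fine."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs[1:]])

# Synthetic 60 Hz current with a short high-frequency fault transient.
t = np.linspace(0, 0.1, 2000)
current = np.sin(2 * np.pi * 60 * t)
current[1000:1020] += 0.5 * np.random.default_rng(0).normal(size=20)
print(hf_energy_features(current))
```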
APA, Harvard, Vancouver, ISO, and other styles
50

Ruprecht, Nathan Alexander. "Implementation of Compressive Sampling for Wireless Sensor Network Applications." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157614/.

Full text
Abstract:
One of the challenges of utilizing higher frequencies in the RF spectrum, for any number of applications, is the hardware constraints of analog-to-digital converters (ADCs). Since the mid-20th century, we have accepted the Nyquist-Shannon sampling theorem, which states that a signal must be sampled at twice its maximum frequency component in order to be reconstructed. Compressive Sampling (CS) offers a possible way out: sample below the Nyquist rate and reconstruct using convex programming techniques. There have been significant advances in CS research and development (most notably since 2004), but the technique has yet to reach everyday use, not for lack of theory or mathematical proof, but for lack of implementation work. There has been little hardware work on finding the realistic constraints of a working CS system used for digital signal processing (DSP). Parameters used in such a system are usually assumed based on stochastic models, rather than optimized for a specific application. This thesis aims to provide a minimal viable platform for implementing compressive sensing applied to a wireless sensor network (WSN), and to address which parameters of CS theory should be modified depending on the application.
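A minimal sketch of the compressive sampling pipeline this abstract summarises: a signal sparse in the DCT basis is measured with a random matrix at well below the Nyquist count, then recovered with a greedy sparse solver (orthogonal matching pursuit stands in here for the convex-programming recovery the text mentions). The dimensions, sparsity level and Gaussian sensing matrix are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

n, m, k = 256, 64, 5                          # signal length, measurements, sparsity
rng = np.random.default_rng(0)
coef = np.zeros(n)
coef[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Psi = idct(np.eye(n), norm="ortho", axis=0)   # inverse-DCT basis as columns
x = Psi @ coef                                 # signal sparse in the DCT domain
Phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix, m << n
y = Phi @ x                                    # compressive measurements
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(Phi @ Psi, y)
x_hat = Psi @ omp.coef_
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # relative recovery error
```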
APA, Harvard, Vancouver, ISO, and other styles