
Dissertations / Theses on the topic 'Single samples'


Consult the top 50 dissertations / theses for your research on the topic 'Single samples.'


1

Butler, Corey. "Quantitative single molecule imaging deep in biological samples using adaptive optics." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0632/document.

Full text
Abstract:
Optical microscopy is an indispensable tool for research in neurobiology and medicine, enabling studies of cells in their native environment. However, subcellular processes remain hidden behind the resolution limits of diffraction-limited optics, which make structures smaller than ~300 nm impossible to resolve. Recently, single molecule localization (SML) and tracking has revolutionized the field, giving nanometer-scale insight into protein organization and dynamics by fitting individual fluorescent molecules to the known point spread function of the optical imaging system. This fitting process depends critically on the amount of collected light and renders SML techniques extremely sensitive to imperfections in the imaging path, called aberrations, that have limited SML to cell cultures on glass coverslips. A commercially available adaptive optics system is implemented to compensate for aberrations inherent to the microscope, and a workflow is defined for depth-dependent aberration correction that enables 3D SML in complex biological environments. A new SML technique is presented that employs a dual-objective approach to detect the emission spectrum of single molecules, enabling 5-dimensional single particle imaging and tracking (x, y, z, t, λ) without compromising spatiotemporal resolution or field of view. These acquisitions generate gigabytes of data, containing a wealth of information about the localization and environment of individual proteins. To facilitate quantitative acquisition and data analysis, the development of biochemical, software and hardware tools is presented. Together, these approaches aim to enable quantitative SML in complex biological samples.
2

Bassan, Paul. "Light scattering during infrared spectroscopic measurements of biomedical samples." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/light-scattering-during-infrared-spectroscopic-measurements-of-biomedical-samples(a2a41f54-0e61-443a-bd32-faf8f65806a7).html.

Full text
Abstract:
Infrared (IR) spectroscopy has shown potential to quickly and non-destructively measure the chemical signatures of biomedical samples such as single biological cells and tissue from biopsy. The diameter of a single cell (~10-50 µm) is of a similar magnitude to the mid-IR wavelengths of light (~1-10 µm), giving rise to Mie-type scattering. The result of this scattering is that chemical information is significantly distorted in the IR spectrum. Distortions in biomedical IR spectra are often observed as a broad oscillating baseline on which the absorbance spectrum is superimposed. A spectral feature commonly observed is the sharp decrease in intensity at approximately 1700 cm-1, next to the Amide I band (~1655 cm-1), which pre-2009 was called the 'dispersion artefact'. The first contributing factor towards the 'dispersion artefact' investigated was the reflection signal arising from the air-to-sample interface entering the collection optics during transflection experiments. This was theoretically modelled and then experimentally verified. It was shown that IR mapping could be done using reflection mode, yielding information from the optically dense nucleus, which previously caused extinction of light in transmission mode. The most important contribution to the spectral distortions was due to resonant Mie scattering (RMieS), which occurs when the scattering particle is strongly absorbing, as is the case for biomedical samples. RMieS was shown to explain both the baselines in IR spectra and the 'dispersion artefact', and was validated using a model system of poly(methyl methacrylate) (PMMA) of varying sizes from 5 to 15 µm. Theoretical simulations and experimental data matched closely, supporting the proposed theory. With an understanding of the physics and mathematics of the spectral distortions, a correction algorithm was written, the RMieS extended multiplicative signal correction (RMieS-EMSC). This algorithm modelled the measured spectrum as the superposition of a first guess (a reference spectrum of similar biochemical composition to the pure absorbance spectrum of the sample) and a scattering curve. The scattering curve was estimated as a linear combination drawn from a database of a large number of scattering curves covering a range of feasible physical parameters. Simulated and measured data verified that the RMieS-EMSC increased IR spectral quality.
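The correction step described above follows the general EMSC pattern: the measured spectrum is fitted as a linear combination of a reference spectrum and a database of candidate scattering curves, and the fitted scattering contribution is then subtracted. The sketch below (Python/NumPy, illustrative only; the function and variable names are assumptions, not the published RMieS-EMSC code) shows that least-squares fit and subtraction.

```python
import numpy as np

def emsc_style_correction(measured, reference, scatter_db):
    """Minimal EMSC-style scatter correction (illustrative sketch only).

    measured   : (n_wavenumbers,) observed absorbance spectrum
    reference  : (n_wavenumbers,) first-guess 'pure' absorbance spectrum
    scatter_db : (n_wavenumbers, k) candidate Mie scattering curves
    """
    # Fit: measured ≈ baseline + c_ref * reference + scatter_db @ b
    design = np.column_stack([np.ones_like(measured), reference, scatter_db])
    coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
    baseline, c_ref, b = coeffs[0], coeffs[1], coeffs[2:]
    # Remove the fitted baseline and scattering terms, rescale to the reference level
    return (measured - baseline - scatter_db @ b) / c_ref
```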
3

Schroeder, Matthew William. "Association of Campylobacter spp. Levels between Chicken Grow-Out Environmental Samples and Processed Carcasses." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/32169.

Full text
Abstract:
Campylobacter spp. have been isolated from live poultry, the production environment, processing facilities, and raw poultry products. The detection of Campylobacter using both quantitative and qualitative techniques would provide a more accurate assessment of pre- or post-harvest contamination. Environmental sampling in a poultry grow-out house, combined with carcass rinse sampling from the same flock, may provide a relative assessment of Campylobacter contamination and transmission. Air samples, fecal/litter samples, and feed pan/drink line samples were collected from four commercial chicken grow-out houses. Birds from the sampled house were the first flock slaughtered the following day and were sampled by post-chill carcass rinses. Quantitative (direct plating) and qualitative (direct plating after an enrichment step) detection methods were used to determine Campylobacter contamination in each environmental sample and carcass rinse. Campylobacter, from post-enrichment samples, was detected in 27% (32/120) of house environmental samples and 37.5% (45/120) of carcass rinse samples. All sample types from each house included at least one positive sample except the house 2 air samples. Samples from house 1 and the associated carcass rinses accounted for the highest total of Campylobacter positives (29/60). The fewest Campylobacter positives, based on both house environmental (4/30) and carcass rinse samples (8/30), were detected from flock B. Environmental sampling techniques provide a non-invasive and efficient way to test for foodborne pathogens. Correlating qualitative or quantitative Campylobacter levels from house and plant samples may enable the scheduled processing of flocks with lower pathogen incidence or concentrations, as a way to reduce post-slaughter pathogen transmission.
4

Shibahara, Aya. "Low field dc SQUID NMR on room temperature samples and single crystal UPt3." Thesis, Royal Holloway, University of London, 2010. http://repository.royalholloway.ac.uk/items/60ae93f1-93a4-8001-24e7-5b75961fa6c3/11/.

Full text
Abstract:
This thesis is an account of two distinct experiments with a common theme, which is the technique of dc SQUID NMR. First, the application of the technique for broadband spectroscopy on room temperature samples is described. The motivation behind this work was to try to obtain SQUID NMR signals from liquid samples such as water and the amino acid glycine in order to demonstrate a few of the potential applications of this technique in the low field regime. These include increased frequency resolution, the possible detection of relatively small amounts of oil contamination in water samples, and low field J-spectroscopy as a chemical bond detector. Measurements were performed on samples of water and oil-water mixtures on a dipper probe that provided a simple, compact shielding arrangement using a two-stage SQUID sensor as the front-end amplifier. With the introduction of a low frequency amplifier and modifications to the setup, the SNR was increased by a factor of 4. However, the SNR became limited by flux trapping in the superconducting materials surrounding the sample, leading to the investigation of alternative materials and methods to maintain low field homogeneity at higher polarising fields. The second part describes SQUID NMR measurements on a single crystal of the heavy fermion superconductor UPt3. Despite serious experimental and theoretical efforts, the symmetry and nature of the unconventional superconducting order parameter has not been resolved due to contradicting experiments, in particular those concerning the Knight shift in the superconducting state. Measurements were performed on a dilution refrigerator. A two-stage SQUID was mounted onto the fridge, a 3He marker was implemented for accurate local field determination, and an overlapping superconducting shield was made to decrease the spectrometer dead time. NMR measurements are presented for the static field parallel to the c-axis of the crystal from 600 mK down to 400 mK in a static field of 71.5 mT. The Knight shift is measured to decrease from -1.7% in the normal state to -1.35% at 400 mK. Together with the results from a Knight shift experiment at 30 degrees to the c-axis, it is argued that these results may provide evidence for the E2u model. Strategies for future measurements are addressed.
5

Starnoni, Michele. "Modelling single and two-phase flow on micro-CT images of rock samples." Thesis, University of Aberdeen, 2017. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=232293.

Full text
Abstract:
In this thesis, numerical simulations of single and two-phase pore-scale flow on three-dimensional images obtained from micro-CT scanning of different reservoir rocks are presented. For single-phase flow, the petrophysical properties of rocks, namely the Representative Elementary Volume (REV), mean pore and grain size, and absolute permeability, are calculated using an integrated approach comprising image processing, statistical correlation and numerical simulations. Two rock formations, one carbonate and one sandstone, are used throughout the thesis. It is shown that the REV and the mean pore and grain size are effectively estimated using the two-point spatial correlation function. A comparison of different absolute permeability estimates is also presented, showing good agreement between the numerical value and the experimentally determined one for the carbonate sample, but a large discrepancy for the sandstone. For two-phase flow, the Volume-of-Fluid method is used to track the interfaces. The surface tension forces are modelled using a filtered sharp formulation, and the Stokes equations are solved using the PISO algorithm. A study of the snap-off mechanism, investigating the role of several parameters including contact angle and viscosity ratio, is presented. Results show that the threshold contact angle for snap-off increases from 28° for a circular cross-section to 30-34° for a square cross-section and up to 40° for a triangular one. For a throat of square cross-section, increasing the viscosity of the injected phase lowers the threshold contact angle from 30° when µ = 1 to 26° when µ = 10 and down to 24° when µ = 20, where µ is the viscosity ratio. Finally, a rigorous spatial averaging procedure is presented, leading to a novel definition of the macroscopic capillary pressure. Simulation results of drainage on the scanned images of the rock samples are used to compare different estimates of the macroscopic capillary pressure. The comparison reveals that, contrary to what is commonly done following the traditional approach, the use of a surface average for the pressures is more appropriate than a volume average when averaging the microscopic balance equations relevant to pore-scale two-phase flow in porous media.
6

Pinkerton, Susan A. "The assessment of phonological processes : a comparison of connected-speech samples and single-word production tests." PDXScholar, 1990. https://pdxscholar.library.pdx.edu/open_access_etds/4191.

Full text
Abstract:
The purpose of this study was to determine if single-word elicitation procedures used in the assessment of phonological processes would have highly similar results to those obtained through connected speech. Connected speech sampling provides a medium for natural production with coarticulatory influence, but can be time-consuming and impractical for clinicians maintaining heavy caseloads or working with highly unintelligible children. Elicitation through single words requires less time than a connected-speech sample and may be more effective with highly unintelligible children because the context is known, but it lacks the influence of surrounding words. Given the inherent differences between these two methods of elicitation, knowledge of the relative effectiveness of single-word and connected-speech sampling may become an issue for clinicians operating under severe time constraints and requiring an efficient and effective means of assessing phonological processes.
7

Andersson, Eva. "TaqMan® Sample-to-SNP Kit™ : evaluation of kit for low-cost and fast preparing of DNA-samples before genotype analysis." Thesis, Uppsala universitet, Institutionen för medicinsk biokemi och mikrobiologi, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-105963.

Full text
Abstract:
Genotyping can be used to link genetic variation among individuals to certain diseases or conditions. Some known disorders and traits that depend on single nucleotide polymorphisms (SNPs) are lactose intolerance, venous thrombosis, hereditary hemochromatosis and differences among individuals in their ability to metabolise drugs. In this project a new kit, the TaqMan® Sample-to-SNP Kit™ for extraction of DNA and preparation of the extract for genotyping with real-time PCR and allelic discrimination, was evaluated. The QIAamp® DNA Blood BioRobot® MDx Kit was used as the reference method. The purpose of the comparison was to find a method that makes DNA extraction from blood samples cheaper and faster, but with the same reliability as the reference procedure. The results of the evaluation showed complete agreement of the genotype results between the methods tested, which means that the new method was as reliable as the reference method. The costs of reagents and material would be reduced by 52% if the new method were adopted; that alone would result in a cost reduction of 144,000 SEK per year at a volume of 650 samples/month. The time for DNA extraction would also be reduced with the new procedure.
8

Almozlino, Adam. "A computer program for the production of horizon graphs from multiple samples of single-gene expression data." Thesis, Boston University, 2012. https://hdl.handle.net/2144/12264.

Full text
Abstract:
In bioinformatics, as in many fields of science, it is often necessary to analyze collections of data sets. These data sets are often composed of matched values for an independent and a dependent variable. For example, a chemist may examine the absorption spectra of a group of compounds, a sociologist might examine life expectancy versus income in different communities, or a bioinformatician might examine expression of individual nucleotides in a genome by an organism. In these examples, the independent variables are frequency, income, and position in the genome, respectively, and the dependent variables are absorption, life expectancy, and expression, respectively. Previously, such a group of data sets might have been displayed visually as a collection of individual tiled line graphs, or by superimposing multiple line graphs on the same graph space. The Horizon Graph is a newly developed data visualization method for these sorts of data sets, and it outperforms its alternatives. A Horizon Graph is composed of a series of "bars", each of which displays a single data set. Bars encode the independent variable along the horizontal axis, and the dependent variable using both color and the vertical axis. These bars are stacked atop one another vertically, resulting in a single plot that enhances a user's ability to recognize patterns within and between the data sets. We wrote a program in Java that generates a Horizon Graph. The program accepts a collection of data sets describing mRNA expression of a common genomic region across multiple samples. It then generates a plot that represents the genetic splicing of the genomic region. The program may generate one of two types of Horizon Graph. The first plot type shows genetic splicing for the data sets. The second shows the splicing of every data set relative to the splicing of a "control" subset of the data sets. The program also offers the user several other options regarding the produced plot, including the ability to mark contiguous ranges of nucleotide positions, which can be used to mark regions of exogenous sequences. The individual tasks the program had to accomplish to generate a plot from data sets were separated into isolated Java classes. These classes interact through well-defined inputs and outputs and are coordinated by another, separate class. The resulting structure makes it easy for future users to reuse portions of the program in novel contexts. The program was demonstrated on a collection of data sets from the mef2d gene locus, each from a unique tissue type.
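The banding idea behind a Horizon Graph can be illustrated independently of the thesis's Java implementation. The short Python sketch below (data and names are hypothetical) splits a one-dimensional expression track into clipped layers that can be overplotted with increasing colour intensity, which is the core transformation each "bar" performs.

```python
import numpy as np
import matplotlib.pyplot as plt

def horizon_layers(values, n_bands=3):
    """Split a 1-D signal into clipped 'horizon' layers of equal band height."""
    band_height = np.max(np.abs(values)) / n_bands
    layers = [np.clip(np.abs(values) - i * band_height, 0.0, band_height)
              for i in range(n_bands)]
    return layers, band_height

# Hypothetical expression track for one sample across 200 nucleotide positions
positions = np.arange(200)
expression = np.abs(np.sin(positions / 15.0)) * 10.0

layers, height = horizon_layers(expression, n_bands=3)
fig, ax = plt.subplots(figsize=(8, 1))
for i, layer in enumerate(layers):
    # Overplot layers; darker regions indicate higher expression
    ax.fill_between(positions, 0.0, layer, color="darkgreen", alpha=0.3 + 0.2 * i)
ax.set_ylim(0.0, height)
ax.set_xlabel("position")
plt.show()
```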
9

Clark, Kendal. "Ultra High Vacuum Low Temperature Scanning Tunneling Microscope for Single Atom Manipulation on Molecular Beam Epitaxy Grown Samples." Ohio University / OhioLINK, 2005. http://www.ohiolink.edu/etd/view.cgi?ohiou1125611713.

Full text
10

Czyz, Zbigniew Tadeusz [author], and Christoph A. [academic supervisor] Klein. "Development of a reliable single-cell aCGH suitable for clinical samples / Zbigniew Tadeusz Czyz. Supervisor: Christoph A. Klein." Regensburg: Universitätsbibliothek Regensburg, 2015. http://d-nb.info/1072820552/34.

Full text
11

Nimon, Kim. "Convergent Validity of Variables Residualized By a Single Covariate: the Role of Correlated Error in Populations and Samples." Thesis, University of North Texas, 2013. https://digital.library.unt.edu/ark:/67531/metadc271870/.

Full text
Abstract:
This study examined the bias and precision of four residualized variable validity estimates (C0, C1, C2, C3) across a number of study conditions. Validity estimates that considered measurement error, correlations among error scores, and correlations between error scores and true scores (C3) performed best, yielding no estimates that were practically significantly different from their respective population parameters across study conditions. Validity estimates that considered measurement error and correlations among error scores (C2) did a good job of yielding unbiased, valid, and precise results. Only in a select number of study conditions were C2 estimates unable to be computed or did they produce results with sufficient variance to affect interpretation. Validity estimates based on observed scores (C0) fared well in producing valid, precise, and unbiased results. Validity estimates based on observed scores that were corrected only for measurement error (C1) performed the worst. Not only did they fail to produce estimates reliably even when the level of modeled correlated error was low, but C1 also produced values higher than the theoretical limit of 1.0 across a number of study conditions. Estimates based on C1 also produced the greatest number of conditions that were practically significantly different from their population parameters.
12

Kaeser, Jasmin Christine. "Investigating the origin of stochastic effects in low-template DNA samples by developing a single-tube extraction protocol." Thesis, Boston University, 2013. https://hdl.handle.net/2144/21183.

Full text
Abstract:
The use of the polymerase chain reaction (PCR) has revolutionized DNA typing in forensic laboratories. Producing a deoxyribonucleic acid (DNA) profile now requires less time and less DNA than before. However, not all evidence samples can be reliably profiled, particularly those with low masses of DNA. These samples often exhibit stochastic effects such as allele dropout, elevated stutter and peak height imbalance, which are challenging to separate from true donor alleles. Several scholarly articles have documented these difficulties and suggest that these stochastic effects are due to uneven amplification of heterozygous alleles in early PCR. However, in early PCR all reaction components are at their maximum concentrations and should be able to amplify all alleles in a sample in proportion to their original concentrations. If both alleles are present in the sample at equal concentrations prior to PCR, both alleles should theoretically be amplified with the same efficiency; the fact that this is not the case suggests that there may already be variation within the sample. One possible reason is that pre-PCR sampling error from pipetting and sample transfers results in an uneven number of allele copies in the sample prior to amplification. Thus, it may not be PCR chemistry alone that contributes to stochastic effects, but also sampling error, which creates unequal allele concentrations prior to PCR. In order to separate and study these possibilities, a single-tube DNA extraction method was developed. The forensicGEM™ Saliva kit developed by ZyGEM provides an extraction method that utilizes a thermostable proteinase found in a proprietary Bacillus species to lyse the cell and destroy nucleases without inhibiting downstream amplification. Combining this extraction protocol with the McCrone and Associates, Inc. cell transfer method allowed for the addition of cells directly to the PCR tube, giving an approximate DNA mass without quantitation. These samples should show the effects of PCR chemistry alone, with pipetting and tube transfer steps prior to amplification removed. For comparison, samples of bulk DNA extracted with forensicGEM™ Saliva were diluted down to a comparable concentration and subjected to multiple transfer steps in an effort to identify both pre-PCR sampling error and any error due to PCR chemistry. Results show that the single-tube extraction method gives reliable results, with forensicGEM™ Saliva showing comparable peak heights (PH) and peak height ratios (PHR) to the Qiagen QIAamp DNA Investigator kit and the cell transfer method providing accurate DNA concentrations with minimal PCR inhibition. Comparison of the cell transfer-generated samples to the diluted bulk DNA samples showed that the cell transfer samples had higher average PHRs at 0.0625 ng of target DNA when amplified with Identifiler® Plus, but showed no significant difference between the sample types at 0.125 ng of target DNA.
The cell transfer samples were also shown to have lower overall PHs at both concentrations and a higher occurrence of allelic dropout, but only when amplified with the Identifiler® kit; when amplified with Identifiler® Plus, the occurrence of dropout was low for cell transfer and bulk DNA samples at both concentrations. These results suggest that as DNA mass decreases, pre-PCR sampling error may contribute to the development of stochastic effects; however, the vast majority of stochastic effects are due to the PCR chemistry itself. As the PCR chemistry improves and the prevalence of stochastic effects decreases, the importance of pre-PCR sampling error may increase.
13

Du, Xinpeng. "Laser-Ultrasonic Measurement of Single-Crystal Elastic Constants from Polycrystalline Samples by Measuring and Modeling Surface Acoustic Wave Velocities." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1524177819455643.

Full text
14

Snyder, Emily Katherine. "A Comparison of Single Word Identification, Connected Speech Samples, and Imitated Sentence Tasks for Assessment of Children with a SSD." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/362.

Full text
Abstract:
Speech-language pathologists are constantly trying to use the most efficient and effective assessments to obtain information about the phonetic inventory, speech sound errors, and phonological error patterns of children who are suspected of having a speech sound disorder. These assessments may involve a standardized measure of single words and/or sentences and a non-standardized measure, such as a spontaneous speech sample. While research has shown both of these types of assessments to give clinicians information about a child's speech production abilities, the use of delayed imitation tasks, with either words or sentences, has not been widely studied and has produced conflicting results when researched. The purpose of the present study was to examine speech sound production abilities in children with a speech sound disorder in a single-word task, an imitated sentence task, and a spontaneous speech sample, and to compare the resulting speech sound errors, phonological error patterns, and administration times. The study used the Phonological and Articulatory Bilingual Assessment - English version (PABA-E, Gildersleeve-Neumann, 2008), a formal assessment for identifying children who may have a speech sound disorder. Three male children, between the ages of 4;0 and 5;4 (years;months), participated in this study. All participants were being treated by a speech-language pathologist for a diagnosed speech sound disorder and had hearing within normal limits. The results of the study showed that the majority of participants produced the highest number of targeted speech sounds within the imitated sentence task. Participants attempted and produced the fewest speech sounds in their spontaneous speech sample. The assessment with the highest percentage of accurately produced consonants was the imitated sentence task. The majority of participants produced a higher number of error patterns in their single-word and imitated sentence tasks. In terms of efficiency and effectiveness, the imitated sentence task took the least amount of time to administer and transcribe.
15

Muharam, Firman Alamsyah. "Overcoming problems with limiting DNA samples in forensics and clinical diagnostics using multiple displacement amplification." Thesis, Queensland University of Technology, 2006. https://eprints.qut.edu.au/16207/1/Firman_Muharam_Thesis.pdf.

Full text
Abstract:
The availability of DNA samples that are of adequate quality and quantity is essential for any genetic analysis. The fields of forensic biology and clinical diagnostic pathology testing often suffer from limited samples that yield insufficient DNA material to allow extensive analysis. This study examined the utility of a recently introduced whole genome amplification method termed Multiple Displacement Amplification (MDA) for amplifying a variety of limited sample types that are commonly encountered in the fields of forensic biology and clinical diagnostics. The MDA reaction, which employs the highly processive bacteriophage φ29 DNA polymerase, was found to generate high molecular weight template DNA suitable for a variety of downstream applications from low copy number DNA samples down to the single genome level. MDA of single cells yielded sufficient DNA for up to 20,000,000 PCR assays, allowing further confirmatory testing on samples of limited quantities or the archiving of precious DNA material for future work. The amplification of degraded DNA material using MDA identified a requirement for samples of sufficient quality to allow successful synthesis of product DNA templates. Furthermore, the utility of MDA products in comparative genomic hybridisation (CGH) assays identified the presence of amplification bias. However, this bias was overcome by introducing a novel modification to the MDA protocol. Future directions for this work include investigations into the utility of MDA products in short tandem repeat (STR) assays for human identifications and application of the modified MDA protocol for testing of single cell samples for genetic abnormalities.
16

Muharam, Firman Alamsyah. "Overcoming problems with limiting DNA samples in forensics and clinical diagnostics using multiple displacement amplification." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16207/.

Full text
Abstract:
The availability of DNA samples that are of adequate quality and quantity is essential for any genetic analysis. The fields of forensic biology and clinical diagnostic pathology testing often suffer from limited samples that yield insufficient DNA material to allow extensive analysis. This study examined the utility of a recently introduced whole genome amplification method termed Multiple Displacement Amplification (MDA) for amplifying a variety of limited sample types that are commonly encountered in the fields of forensic biology and clinical diagnostics. The MDA reaction, which employs the highly processive bacteriophage φ29 DNA polymerase, was found to generate high molecular weight template DNA suitable for a variety of downstream applications from low copy number DNA samples down to the single genome level. MDA of single cells yielded sufficient DNA for up to 20,000,000 PCR assays, allowing further confirmatory testing on samples of limited quantities or the archiving of precious DNA material for future work. The amplification of degraded DNA material using MDA identified a requirement for samples of sufficient quality to allow successful synthesis of product DNA templates. Furthermore, the utility of MDA products in comparative genomic hybridisation (CGH) assays identified the presence of amplification bias. However, this bias was overcome by introducing a novel modification to the MDA protocol. Future directions for this work include investigations into the utility of MDA products in short tandem repeat (STR) assays for human identifications and application of the modified MDA protocol for testing of single cell samples for genetic abnormalities.
17

Kurz, Anton [author], and Dirk-Peter [academic supervisor] Herten. "Characterization and Application of Photon-Statistics in Single-Molecule Measurements for Quantitative Studies of Fluorescently Labeled Samples / Anton Kurz ; Supervisor: Dirk-Peter Herten." Heidelberg: Universitätsbibliothek Heidelberg, 2013. http://d-nb.info/1177247879/34.

Full text
18

Kokkaliaris, Stylianos. "Investigation of the vortex phase diagram and dynamics in single crystalline samples of the high temperature superconductor YBa2Cu3O7-δ." Thesis, University of Southampton, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.310294.

Full text
19

Olsen, Matthew William. "Investigation of Speech Samples from Typically Developing Preschool Age Children: A Comparison of Single Words and Imitated Sentences Elicited with the PABA-E." PDXScholar, 2010. https://pdxscholar.library.pdx.edu/open_access_etds/434.

Full text
Abstract:
Assessment of speech sound production in young children provides the basis for diagnosis and treatment of speech sound disorders. Standardized single-word articulation tests are typically used for identification of speech sound errors because they can provide an efficient means of obtaining a speech sample for analysis and comparison to same-age peers. A major criticism of single-word articulation tests is that they may not accurately reflect speech sound production abilities in conversation. Comparison of performance in single-word and conversational contexts has produced conflicting results in the available research. The purpose of the present study was to compare speech samples obtained using an extensive single-word naming task with samples of continuous speech elicited by sentence imitation. It was hypothesized that there would be differences in overall speech sound production accuracy as well as differences in types and frequency of errors across the two sampling conditions. The present study is a pilot investigation as part of the development of the Phonological and Bilingual Articulation Assessment, English Version (PABA-E; Gildersleeve-Neumann, unpublished). Twelve preschool children aged 3;11 to 4;7 (years;months) from the Portland metropolitan area participated in this study. Participants were monolingual native English speakers and exhibited typical speech sound development as measured by the GFTA-2 (Goldman-Fristoe, 2000). Hearing acuity for participants was within acceptable limits, and participants' families reported no significant illnesses or developmental concerns that would impact speech sound production abilities. Mean t-scores for percentage of consonants correct (PCC) in the single-word samples were significantly higher at the .05 level than those for the sentence imitation samples. There was no significant difference between the percentage of vowels produced correctly (PVC) in the two sampling conditions. Similar types of error patterns were found in both the single-word and continuous speech samples; however, error frequency was relatively low for the participant population. Only the phonological process of stopping was found to differ significantly across sampling conditions: its mean frequency of occurrence was significantly higher in continuous speech than in the production of single words.
20

Nguyen, Van Dong. "Speciation analysis of butyl- and phenyltin compounds in environmental samples by GC separation and atomic spectrometric detection." Doctoral thesis, Umeå : Department of Chemistry, Umeå University, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-892.

Full text
21

Wilke, Robin Niklas [author], Tim [academic supervisor] Salditt, and Claus [academic supervisor] Ropers. "Coherent X-Ray Diffractive Imaging on the Single-Cell-Level of Microbial Samples: Ptychography, Tomography, Nano-Diffraction and Waveguide-Imaging / Robin Niklas Wilke. Reviewers: Tim Salditt; Claus Ropers. Supervisor: Tim Salditt." Göttingen: Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2014. http://d-nb.info/1064148360/34.

Full text
22

Pang, Meng. "Single sample face recognition under complex environment." HKBU Institutional Repository, 2019. https://repository.hkbu.edu.hk/etd_oa/635.

Full text
Abstract:
Single sample per person face recognition (SSPP FR), i.e., recognizing a person with only a single face image in the biometric enrolment database for training, has many attractive real-world applications, such as criminal identification, law enforcement, access control and video surveillance. This thesis studies two important problems in SSPP FR: 1) SSPP FR with a standard biometric enrolment database (SSPP-se FR), and 2) SSPP FR with a contaminated biometric enrolment database (SSPP-ce FR). SSPP-ce FR is more challenging than SSPP-se FR since the enrolment samples are collected under more complex environments and can be contaminated by nuisance variations. In this thesis, we propose one patch-based method called robust heterogeneous discriminative analysis (RHDA) to tackle SSPP-se FR, and propose two generic learning methods called synergistic generic learning (SGL) and iterative dynamic generic learning (IDGL), respectively, to tackle SSPP-ce FR. RHDA is proposed to address the limitations of existing patch-based methods and to enhance robustness against complex facial variations for SSPP-se FR from two aspects. First, for feature extraction, a new graph-based Fisher-like criterion is presented to extract the hidden discriminant information across two heterogeneous adjacency graphs, and meanwhile improve the discriminative ability of the patch distribution in the underlying subspaces. Second, a joint majority voting strategy is developed by considering both the patch-to-patch and patch-to-manifold distances, which can generate complementary information as well as increase error tolerance for identification. SGL is proposed to address the SSPP-ce FR problem. Different from the existing generic learning methods, which are simply based on the prototype plus variation (P+V) model, SGL presents a new "learned P + learned V" framework that enables prototype learning and variation dictionary learning to work collaboratively to identify new probe samples. Specifically, SGL learns prototypes for contaminated enrolment samples by preserving the more discriminative parts, while it learns the variation dictionary by extracting the less discriminative intra-personal variants from an auxiliary generic set, on account of a linear Fisher information-based feature regrouping (FIFR). IDGL is proposed to address the limitations of SGL and thus better handle the SSPP-ce FR problem. IDGL is also based on the "learned P + learned V" framework. However, rather than using the linear FIFR to recover prototypes for contaminated enrolment samples, IDGL constructs a dynamic label-feedback network to update prototypes iteratively, where both linear and non-linear variations can be well removed. Besides, the supplementary information in the probe set is effectively employed to enhance the correctness of the prototypes in representing the enrolment persons. Furthermore, IDGL proposes a new "sample-specific" corruption strategy to learn a representative variation dictionary. Comprehensive validations and evaluations are conducted on various benchmark face datasets. The computational complexities of the proposed methods are analyzed, and empirical studies on parameter sensitivities are provided. Experimental results demonstrate the superior performance of the proposed methods for both SSPP-se FR and SSPP-ce FR.
23

Ross, Edith. "Inferring tumour evolution from single-cell and multi-sample data." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/274604.

Full text
Abstract:
Tumour development has long been recognised as an evolutionary process during which cells accumulate mutations and evolve into a mix of genetically distinct cell subpopulations. The resulting genetic intra-tumour heterogeneity poses a major challenge to cancer therapy, as it increases the chance of drug resistance. To study tumour evolution in more detail, reliable approaches to infer the life histories of tumours are needed. This dissertation focuses on computational methods for inferring trees of tumour evolution from single-cell and multi-sample sequencing data. Recent advances in single-cell sequencing technologies have promised to reveal tumour heterogeneity at a much higher resolution, but single-cell sequencing data is inherently noisy, making it unsuitable for analysis with classic phylogenetic methods. The first part of the dissertation describes OncoNEM, a novel probabilistic method to infer clonal lineage trees from noisy single nucleotide variants of single cells. Simulation studies are used to validate the method and to compare its performance to that of other methods. Finally, OncoNEM is applied in two case studies. In the second part of the dissertation, a comprehensive collection of existing multi-sample approaches is used to infer the phylogenies of metastatic breast cancers from ten patients. In particular, shallow whole-genome, whole exome and targeted deep sequencing data are analysed. The inference methods comprise copy number and point mutation based approaches, as well as a method that utilises a combination of the two. To improve the copy number based inference, a novel allele-specific multi-sample segmentation algorithm is presented. The results are compared across methods and data types to assess the reliability of the different methods. In summary, this thesis presents substantial methodological advances to understand tumour evolution from genomic profiles of single cells or related bulk samples.
24

Hisey, Colin Lee. "Microfluidic Devices for Clinical Cancer Sample Characterization." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1525783108483419.

Full text
25

Koyama, M., R. Imai, M. Shikida, et al. "Micromachined sample divider for analyzing biochemical reaction based on single molecules." IEEE, 2008. http://hdl.handle.net/2237/11138.

Full text
26

VEGA, PEDRO JUAN SOTO. "SINGLE SAMPLE FACE RECOGNITION FROM VIDEO VIA STACKED SUPERVISED AUTO-ENCODER." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28102@1.

Full text
Abstract:
This work proposes and evaluates strategies based on Stacked Supervised Auto-encoders (SSAE) for face representation in video surveillance applications. The study focuses on the identification task with a single sample per person (SSPP) in the gallery. Variations in terms of pose, facial expression, illumination and occlusion are approached in two ways. First, the SSAE extracts features from face images which are robust to such variations. Second, multiple samples per person probes (MSPPP) that can be extracted from video sequences are exploited to improve recognition accuracy. The proposed methods were compared on the Honda/UCSD and VIDTIMIT video datasets. Additionally, the influence of the parameters related to the SSAE architecture was studied using the Extended Yale B dataset. The experimental results demonstrated that strategies combining SSAE and MSPPP are able to outperform other SSPP methods, such as local binary patterns (LBP), in face recognition from video.
27

Metto, Eve C. "Development of microanalytical methods for solving sample limiting biological analysis problems." Diss., Kansas State University, 2013. http://hdl.handle.net/2097/15680.

Full text
Abstract:
Analytical separations form the bulk of experiments in both research and industry. The choice of separation technique is governed by the characteristics of the analyte and the purpose of the separation. Miniaturization of chromatographic techniques enables the separation and purification of small volume samples that are often in limited supply. Capillary electrophoresis and immunoaffinity chromatography are examples of techniques that can be easily miniaturized with minimal loss in separation efficiency. These techniques were used in the experiments presented in this dissertation. Chapter 1 discusses the underlying principles of capillary electrophoresis and immunoaffinity chromatography. In the second chapter, the results from immunoaffinity chromatography experiments that utilized antibody-coated magnetic beads to purify serine proteases and serine protease inhibitors (serpins) from A. gambiae hemolymph are presented and discussed. Serine proteases and serpins play a key role in the insect innate immune system. Serpins regulate the activity of serine proteases by forming irreversible complexes with the proteases. To identify the proteases that couple to these serpins, protein A magnetic beads were coated with SRPN2 antibody and then incubated with A. gambiae hemolymph. The antibody isolated both the free SRPN2 and the SRPN2-protease complex. The purified proteases were identified by ESI-MS from as few as 25 insects. In Chapter 3, an integrated glass/PDMS hybrid microfluidic device was utilized for the transportation and lysis of cells at high throughput. Jurkat cells were labeled with 6-CFDA (an internal standard) and DAF-FM (an NO-specific fluorophore). Laser-induced fluorescence (LIF) detection was utilized to detect nitric oxide (NO) from single Jurkat cells. The resulting electropherograms were used to study the variation in NO production following stimulation with lipopolysaccharide (LPS). A 3 h LPS stimulation resulted in a twofold increase in NO production in both bulk and single cell analysis. A comparison of bulk and single cell NO measurements was performed, and the average NO production in single cells compared well to the increase measured at the bulk cell level. Chapter 4 discusses preliminary experiments with a T-shaped microfluidic device that exploits the property of poly(dimethylsiloxane) (PDMS) as an electroactive polymer (EAP) to enhance fluid mixing. EAPs deform when placed in an electric field. A thin layer of PDMS was sandwiched between chrome electrodes, positioned on the horizontal arms of the T design, and the electrolyte-filled fluidic channel. A potential difference across the PDMS layer caused it to shrink and stretch, thereby increasing the channel volume. The electrodes were actuated 180° out of phase, which caused the fluid stream in the vertical channel to fold and stretch, resulting in enhanced contact surface area and shorter diffusion distances and thereby improving mixing efficiency. All the experiments presented in this dissertation demonstrate the application of miniaturized chromatographic techniques for the efficient analysis of small volume biological samples.
28

Östlin, Christofer. "Single-molecule X-ray free-electron laser imaging : Interconnecting sample orientation with explosion data." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-231009.

Full text
Abstract:
X-ray crystallography has been around for 100 years and remains the preferred technique for solving molecular structures today. However, its reliance on the production of sufficiently large crystals is limiting, considering that crystallization cannot be achieved for a vast range of biomolecules. A promising way of circumventing this problem is the method of serial femtosecond imaging of single-molecules or nanocrystals utilizing an X-ray free-electron laser. In such an approach, X-ray pulses brief enough to outrun radiation damage and intense enough to provide usable diffraction signals are employed. This way accurate snapshots can be collected one at a time, despite the sample molecule exploding immediately following the pulse due to extreme ionization. But as opposed to in conventional crystallography, the spatial orientation of the molecule at the time of X-ray exposure is generally unknown. Consequentially, assembling the snapshots to form a three-dimensional representation of the structure of interest is cumbersome, and normally tackled using algorithms to analyze the diffraction patterns. Here we explore the idea that the explosion data can provide useful insights regarding the orientation of ubiquitin, a eukaryotic regulatory protein. Through two series of molecular dynamics simulations totaling 588 unique explosions, we found that a majority of the carbon atoms prevalent in ubiquitin are directionally limited in their respective escape paths. As such we conclude it to be theoretically possible to orient a sample with known structure based on its explosion pattern. Working with an unknown sample, we suggest these discoveries could be applicable in tandem with X-ray diffraction data to optimize image assembly.
29

Garikiparthi, Chaitanya N. "Sample path analysis of stochastic processes: busy periods of auto-correlated single server queues." Diss., University of Missouri--Kansas City, 2008.

Find full text
Abstract:
Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2008. "A dissertation in computing networking and telecommunications networking." Advisor: Appie van de Liefvoort. Includes bibliographical references (leaves 85-89). Online version of the print edition.
30

Hannel, Thaddaeus S. "PATTERN RECOGNITION INTEGRATED SENSING METHODOLOGIES (PRISMS) IN PHARMACEUTICAL PROCESS VALIDATION, REMOTE SENSING AND ASTROBIOLOGY." UKnowledge, 2009. http://uknowledge.uky.edu/gradschool_diss/751.

Full text
Abstract:
Modern analytical instrumentation is capable of creating enormous and complex volumes of data. Analysis of large data volumes is complicated by lengthy analysis times and high computational demand. Real-time analysis methods that are computationally efficient are therefore desirable if modern analytical methods are to be fully utilized. The use of modern instrumentation in on-line pharmaceutical process validation, remote sensing, and astrobiology applications requires such real-time, computationally efficient analysis methods. Integrated sensing and processing (ISP) is a method for minimizing the data burden and sensing time of a system. ISP is accomplished through implementation of chemometric calculations in the physics of the spectroscopic sensor itself. In ISP, the measurements collected at the detector are weighted to correlate directly with the sample properties of interest. This method is especially useful for large and complex data sets. In this research, ISP is applied to acoustic resonance spectroscopy, near-infrared hyperspectral imaging and a novel solid state spectral imager. In each application ISP produced a clear advantage over the traditional sensing method. The limitations of ISP must be addressed before it can become widely used. ISP is essentially a pattern recognition algorithm. Problems arise in pattern recognition when the algorithm encounters a sample unlike any in the original calibration set; this is termed the false sample problem. To address the false sample problem, the Bootstrap Error-Adjusted Single-Sample Technique (BEST, a nonparametric classification technique) was investigated. The BEST-ISP method utilizes a hashtable of normalized BEST points along an asymmetric probability density contour to estimate the BEST multidimensional standard deviation of a sample. The on-line application of the BEST method requires significantly less computation than the full algorithm, allowing it to be utilized in real time as sample data are obtained. This research tests the hypothesis that a BEST-ISP metric can be used to detect false samples with sensitivity > 90% and specificity > 90% on categorical data.
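As a rough illustration of the ISP idea described above (weighted detector measurements that correlate directly with the property of interest), the following Python sketch learns a weight vector from calibration spectra by least squares and applies it as a single dot product. All data and names here are hypothetical, and in a real ISP instrument the weighting is realised in the sensor hardware rather than in software.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: 200 spectra with 64 channels each,
# and a property value driven mainly by one spectral band.
calib_spectra = rng.random((200, 64))
calib_property = 2.0 * calib_spectra[:, 20] + 0.05 * rng.standard_normal(200)

# Learn weights so that (spectrum @ weights) estimates the property directly.
weights, *_ = np.linalg.lstsq(calib_spectra, calib_property, rcond=None)

# For a new sample, the "integrated" measurement is a single weighted reading.
new_spectrum = rng.random(64)
estimated_property = new_spectrum @ weights
print(f"estimated property: {estimated_property:.3f}")
```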
31

Kemp, Michael. "Single event paediatric trauma: Sample representation and the efficacy of response-focused exposure and EMDR." PhD thesis, Murdoch University, 2014. https://researchrepository.murdoch.edu.au/id/eprint/23705/.

Full text
Abstract:
This thesis focused on paediatric populations who had been exposed to single event trauma such as motor vehicle accidents, burns, falls, animal bites, anaphylaxis and near drowning. The planning for the thesis commenced 16 years ago and the related PhD candidature commenced a few years later. Since then, the volume of research investigating child trauma and, more specifically, treatments for child trauma has increased markedly. The aims of the thesis were to determine: i) the efficacy of EMDR compared to a waitlist control condition in children aged 6 to 12 years following a motor vehicle accident; ii) whether those who participated in a trauma study were representative of the population compared to those who did not participate; iii) whether an assessment involving additional exposure to response-focused trauma memories (based on Lang's 1977, 1979, 1983 bio-informational theory) facilitated recovery; and, if so, iv) to compare the efficacy of a treatment based on response-focused exposure to an established treatment condition such as EMDR. These aims were met by the following four studies. Study one compared four EMDR sessions to a six-week wait-list control condition amongst 27 children (aged 6 to 12 years) suffering from persistent PTSD symptoms after a motor vehicle accident. The efficacy of EMDR was confirmed. In comparison to the wait-list condition, EMDR was superior on primary outcome measures including the Child Post Traumatic Stress - Reaction Index and clinician-rated diagnostic criteria for PTSD. EMDR was also superior on process measures including the Subjective Units of Disturbance and Validity of Cognition scales. Notably, 100% of participants in both groups met two or more PTSD criteria at pre-treatment. At post-treatment, this remained unchanged in the wait-list group, but decreased to 25% in the EMDR group. These therapeutic gains were maintained at three and 12 month follow-up. Study two compared 211 participants with 2333 non-participants in a trauma study on several measures of trauma and injury severity such as duration of hospital visit, heart rate in the emergency department, emergency transport to hospital, admission to hospital, injury severity score, and triage code. Participants were exposed to more severe trauma or injury than non-participants, and within the non-participant group, those who had requested further information about the study (N = 573) were exposed to more severe trauma or injury than other non-participants (N = 1760). These findings were contrary to the view that non-participants could be more severely traumatised than participants, and the discovery of a gradient effect within non-participants suggests that participation, or greater interest in participation, may be associated with greater trauma and injury severity. In study three, 52 of the children and adolescents from study two with at least moderate PTSD symptoms completed a standard assessment one month after their trauma. A random sample of 22 of these completed an additional response-focused assessment task based on Lang's (1977, 1979, 1983) bio-informational theory, which involved the detailed recall of five components of their trauma memory.
The stimulus component consisted of visual and auditory memories, whereas the response information consisted of four domains: verbal (words, sounds, thoughts and feelings), somato-motor (head and body position, gross body actions), visceral or autonomic (changes in heart rate, sweating or hot flushes), and processor (mental processes such as dream-like perceptions, racing or muddled thoughts). The response-focused assessment resulted in an accelerated rate of recovery in avoidance symptoms from one week to two months later. There was also a reduction in the proportion of participants meeting the PTSD (DSM-IV) criterion for avoidance and a decrease in parent ratings of their child's somatic complaints. Study four compared Eye Movement Desensitisation and Reprocessing (EMDR) to a Response Focused Exposure Therapy condition based on the assessment utilised in study three. A total of 28 children and adolescents (aged six to 16 years) who continued to experience persistent PTSD symptoms three months after their trauma were recruited from study two. The EMDR protocol was consistent with the protocol used in study one and the detailed protocol described by Tinker and Wilson (1999). The Response Focused Exposure Therapy condition, henceforth referred to as "exposure therapy", involved the repeated and detailed exposure to information from the five components of the trauma memory (as per study three), including one stimulus component (e.g., visual and auditory memories) and four response components (verbal, somato-motor, visceral or autonomic and processor). Both treatment conditions resulted in robust improvements in child, parent and clinician rated PTSD measures and child and parent rated non-PTSD measures. Whilst there was no difference in the duration of treatment sessions between the EMDR and exposure groups, the exposure condition involved fewer exposure periods than the EMDR condition [4.8 (±2.1) versus 17.8 (±6.4), p<.001] but longer periods of exposure [157.7 (±58.3) versus 23.5 (±4.7) seconds, p<.001] and a greater total duration of exposure in each session [12.3 (±8.0) versus 7.0 (±3.2) minutes, p<.05]. This result provides support for the efficiency of EMDR, although more research is necessary. The efficacy of both treatments is best explained by the use of vivid and repeated exposure to the trauma memory in a safe environment along with other non-specific elements common to both treatments.
APA, Harvard, Vancouver, ISO, and other styles
32

Gref, Margareta. "Glomerular filtration rate in adults: a single sample plasma clearance method based on the mean sojourn time." Licentiate thesis, Umeå universitet, Klinisk fysiologi, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-42319.

Full text
Abstract:
Glomerular filtration rate (GFR) is a key parameter in evaluating kidney function. After a bolus injection of an exogenous GFR marker into plasma, an accurate determination of GFR can be made by measuring the marker concentration in plasma during the excretion. Simplified methods have been developed to reduce the number of plasma samples needed and yet still maintain a high accuracy in the GFR determination. Groth previously developed a single sample GFR method based on the mean sojourn time of a GFR marker in its distribution volume. This method, applied in adults using the marker 99mTc-DTPA, is recommended for use when GFR is estimated to be ≥ 30 mL/min. The aim of the present study was to further develop the single plasma sample GFR method by Groth to include patients with severely reduced renal function and different GFR markers. Three different GFR markers, 51Cr-EDTA, 99mTc-DTPA and iohexol, were investigated. Formulas were derived for the markers 51Cr-EDTA and iohexol when GFR is estimated to be ≥ 30 mL/min. For patients with an estimated GFR < 30 mL/min, a special low clearance formula with a single sample obtained about 24 h after marker injection was developed. The low clearance formula was proven valid for use with all three markers. The sources of error and their influence on the calculated single sample clearance were investigated. The estimated distribution volume is the major source of error, but its influence can be reduced by choosing a suitable sampling time. The optimal time depends on the level of GFR; the lower the GFR, the later the single sample should be obtained. For practical purposes, a 270 min sample is recommended when estimated GFR is ≥ 30 mL/min and a 24 h sample when estimated GFR is < 30 mL/min. Sampling at 180 min after marker injection may be considered if GFR is estimated to be essentially normal.
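The practical recommendation at the end of the abstract maps the estimated GFR to a suggested single-sample time. A minimal sketch of that decision rule is shown below; the function name and the handling of the "essentially normal" case are illustrative assumptions, not taken from the thesis.

```python
def recommended_sampling_time(estimated_gfr_ml_min, essentially_normal=False):
    """Suggest a single-sample time after marker injection, following the
    practical recommendations quoted in the abstract above."""
    if estimated_gfr_ml_min < 30:
        return "24 h"      # special low-clearance formula, late sample
    if essentially_normal:
        return "180 min"   # may be considered when GFR is essentially normal
    return "270 min"       # default when estimated GFR >= 30 mL/min

# Example: recommended_sampling_time(25) returns "24 h"
```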
APA, Harvard, Vancouver, ISO, and other styles
33

Siemes, Kerstin. "Establishing a sea bottom model by applying a multi-sensor acoustic remote sensing approach." Doctoral thesis, Université libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209381.

Full text
Abstract:
Detailed information about the oceanic environment is essential for many applications in the field of marine geology, marine biology, coastal engineering, and marine operations. In particular, knowledge of the properties of the sediment body is often required. Acoustic remote sensing techniques have become highly attractive for classifying the sea bottom and for mapping the sediment properties, due to their high coverage capabilities and low costs compared to common sampling methods. In the last decades, a number of different acoustic devices and related techniques for analyzing their signals have evolved. Each sensor has its specific application due to limitations in the frequency range and resolution. In practice, often a single acoustic tool is chosen based on the current application, supported by other non-acoustic data where required. However, different acoustic remote sensing techniques can supplement each other, as shown in this thesis. Moreover, a combination of complementary approaches can contribute to the proper understanding of sound propagation, which is essential when using sound for environmental classification purposes. This includes the knowledge of the relation between acoustics and sediment properties, the focus of this thesis. The aim is to provide a detailed three-dimensional picture of the sea bottom sediments that allows for gaining maximum insight into this relation.
Chapters 4 and 5 are adapted from published work, with permission: DOI:10.1121/1.3569718 (link: http://asadl.org/jasa/resource/1/jasman/v129/i5/p2878_s1) and DOI:10.1109/JOE.2010.2066711 (link: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=5618582&queryText%3Dsiemes). In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of the Université libre de Bruxelles' products or services.
Doctorate in Engineering Sciences (Doctorat en Sciences de l'ingénieur).
APA, Harvard, Vancouver, ISO, and other styles
34

Stumpf, Fabian [Verfasser], and Roland [Akademischer Betreuer] Zengerle. "Automated microfluidic nucleic acid analysis for single-cell and sample-to-answer applications / Fabian Stumpf ; Betreuer: Roland Zengerle." Freiburg : Albert-Ludwigs-Universität Freiburg, 2017. http://d-nb.info/1126922102/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Dama, Tavisha. "Development of a method for the utilization of a single sample for presumptive, confirmatory and DNA analysis of blood." Thesis, Boston University, 2013. https://hdl.handle.net/2144/21144.

Full text
Abstract:
Thesis (M.S.F.S.) PLEASE NOTE: Boston University Libraries did not receive an Authorization To Manage form for this thesis or dissertation. It is therefore not openly accessible, though it may be available by request. If you are the author or principal advisor of this work and would like to request open access for it, please contact us at open-help@bu.edu. Thank you.
In any forensic investigation it is important to consider sample preservation. Oftentimes trace quantities of biological materials are found at crime scenes. The usual practice among forensic analysts is to take one sample of a suspected biological stain for presumptive testing, another for confirmatory testing and, if both these results are positive, a third portion for DNA analysis. This works well when sufficient sample is available; however, when trace quantities of sample are present at crime scenes, sample preservation becomes important. Thus, this study attempts to develop a procedure where presumptive, confirmatory and DNA analysis could be carried out on a single portion of the sample. In this study four different presumptive reagents – phenolphthalein, o-tolidine, 3,3',5,5'-tetramethylbenzidine (TMB) and luminol – were used and their effects on the ABAcard® Hematrace® immunochromatographic membrane test and subsequent DNA analysis were studied. In order to develop the method for one-sample analysis, the lowest volume of blood that gave a sufficient quantity of DNA was determined by extracting different volumes (20, 10, 5, 2.5 and 1.25 μL) of whole blood. Additionally, different volumes of blood mixed with ABAcard® Hematrace® buffer were extracted. From this preliminary work it was determined that 1.25 μL of whole blood yielded a sufficient DNA quantity even when mixed with the ABAcard® Hematrace® buffer. Bloodstains of 1.25 μL were then prepared and the one-sample analysis was carried out. The method developed was most successful when luminol was used as the presumptive reagent. For the bloodstains treated with the other three presumptive reagents (phenolphthalein, o-tolidine and TMB), a decrease in DNA yield was detected. This decrease was attributed to the inability of the Qiagen® QIAmp® column to adsorb the DNA after exposure to the chemical reagents and to the insolubility of the bloodstain in ABAcard® Hematrace® buffer following the addition of presumptive blood test reagents. Extraction of DNA from the ABAcard® Hematrace® immunochromatographic membrane was also carried out using the Qiagen® QIAmp® DNA investigator kit; no DNA was obtained from the membranes on which 150 μL of a dilute blood sample had been applied. This suggests that either the extraction method used was not capable of extracting the minute quantities of DNA that might be present on the membrane or there were insufficient white blood cells deposited on the membrane during the testing process. Thus, a one-sample procedure was successfully developed for bloodstains treated with luminol. A loss/reduction of DNA was observed for the samples previously exposed to phenolphthalein, o-tolidine and TMB due to the incompatibility of these reagents with silica-based extraction chemistries. Further experimentation is needed to develop a similar procedure to be used with such presumptive testing reagents. Alternatively, a procedure can be developed that utilizes two samples: one for presumptive testing and another for confirmatory and subsequent DNA analysis, since it was observed that only the presumptive reagents, and not the ABAcard® Hematrace® buffer, interfered with DNA analysis.
APA, Harvard, Vancouver, ISO, and other styles
36

Rhode, Owen H. J. "Intraspecies diversity of Cryptococcus laurentii (Kufferath) C.E. Skinner and Cryptococcus podzolicus (Bab’eva & Reshetova) originating from a single soil sample." Thesis, Stellenbosch : University of Stellenbosch, 2005. http://hdl.handle.net/10019.1/1812.

Full text
Abstract:
Thesis (MSc (Microbiology))--University of Stellenbosch, 2005.
Intraspecific diversity among yeasts, including basidiomycetous yeasts, has mostly been studied from a taxonomic point of view. The heterobasidiomycetous genus Cryptococcus is no exception, and it was found to contain species that display heterogeneity on both a genetic and a physiological level, i.e. diversity among strains originating from different geographical areas. It has been stated that this diversity within yeast species is possibly caused by intrinsic attributes of the different habitats from which the strains of a particular species originate. However, little is known about the diversity of a species within a specific habitat. Thus, in this study intraspecific diversity among selected cryptococci isolated from a single soil sample originating from pristine Fynbos vegetation was investigated.
APA, Harvard, Vancouver, ISO, and other styles
37

Harris, Jennifer M. "A Phenomenological Exploration of the Experience and Understanding of Depression within a Sample of Young, Single, Latter-day Saint Women." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/31823.

Full text
Abstract:
Depression is the black plague of the 21st Century, affecting twice as many women as men, and continuing to increase among the younger generations. Little research has been conducted looking at single, young adults with depression. In addition, more research is needed to look at how culture influences the struggle with depression. With both the prevalence of depression in young women increasing and the membership of the LDS Church on the rise, it is crucial that clergy and clinicians alike better understand the experience of young, single, LDS women struggling with depression. This study is a qualitative exploration of six young, single, LDS women's struggle with depression. Six young (24-31 years old), single, white, active LDS women living in the Washington DC metropolitan area participated in 60 to 90 minute long interviews. Using a qualitative method and a phenomenological perspective, this study describes what an episode of depression is like for, and how it is understood by, young, single, LDS women. Themes identified from the women's interviews included identifying that something was not quite right / something was going wrong, faith attempts, internalizing and blaming self, awareness of the depression, reaching out, spreading the word, and lessons learned. Several of these themes corroborate current literature about the experience of depression, while others are unique to these women. In addition to these themes, the poignant role of the LDS culture in these women's experience of struggling with depression is discussed.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
38

Rinker, Brett A. "A single-sided access simultaneous solution of acoustic wave speed and sample thickness for isotropic materials of plate-type geometry." Diss., Columbia, Mo. : University of Missouri-Columbia, 2006. http://hdl.handle.net/10355/4585.

Full text
Abstract:
Thesis (M.S.)--University of Missouri-Columbia, 2006.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 17, 2009). Vita. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
39

Haardoerfer, Regine. "Power and Bias in Hierarchical Linear Growth Models: More Measurements for Fewer People." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/eps_diss/57.

Full text
Abstract:
Hierarchical Linear Modeling (HLM) sample size recommendations are mostly made with traditional group-design research in mind, as HLM has been used almost exclusively in group-design studies. Single-case research can benefit from utilizing hierarchical linear growth modeling, but sample size recommendations for growth modeling with HLM are scarce and generally do not consider the sample size combinations typical in single-case research. The purpose of this Monte Carlo simulation study was to expand sample size research in hierarchical linear growth modeling to suit single-case designs by testing larger level-1 sample sizes (N1), ranging from 10 to 80, and smaller level-2 sample sizes (N2), from 5 to 35, under the presence of autocorrelation to investigate bias and power. Estimates for the fixed effects were good for all tested sample-size combinations, irrespective of the strengths of the predictor-outcome correlations or the level of autocorrelation. Such low sample sizes, however, especially in the presence of autocorrelation, produced neither good estimates of the variances nor adequate power rates. Power rates were at least adequate for conditions in which N2 = 20 and N1 = 80 or N2 = 25 and N1 = 50 when the squared autocorrelation was .25. Conditions with lower autocorrelation provided adequate or high power for conditions with N2 = 15 and N1 = 50. In addition, conditions with high autocorrelation produced less than perfect power rates to detect the level-1 variance.
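As a rough illustration of the kind of Monte Carlo experiment described here, the sketch below simulates a two-level linear growth model with AR(1) level-1 errors and estimates empirical power for the fixed slope. The effect sizes, variance components, and the use of statsmodels' MixedLM are assumptions for illustration only, not the study's actual simulation design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def simulate_power(n2=20, n1=80, slope=0.3, rho=0.5, n_sims=100, alpha=0.05, seed=1):
    """Crude Monte Carlo power estimate for the fixed growth-rate (slope) effect
    in a two-level growth model with AR(1) level-1 errors. Slow and approximate;
    all numeric settings are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    time = np.arange(n1, dtype=float)
    hits = 0
    for _ in range(n_sims):
        rows = []
        for subject in range(n2):
            u0 = rng.normal(0, 1.0)                # random intercept deviation
            u1 = rng.normal(0, 0.2)                # random slope deviation
            e = np.empty(n1)                       # AR(1) residuals, unit variance
            e[0] = rng.normal(0, 1.0)
            for t in range(1, n1):
                e[t] = rho * e[t - 1] + rng.normal(0, np.sqrt(1 - rho ** 2))
            y = (1.0 + u0) + (slope + u1) * time + e
            rows.append(pd.DataFrame({"y": y, "time": time, "id": subject}))
        data = pd.concat(rows, ignore_index=True)
        fit = smf.mixedlm("y ~ time", data, groups=data["id"], re_formula="~time").fit()
        if fit.pvalues["time"] < alpha:
            hits += 1
    return hits / n_sims

# e.g. simulate_power(n2=15, n1=50, rho=0.5) gives an empirical power estimate
```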
APA, Harvard, Vancouver, ISO, and other styles
40

Wiklund, Sofia. "Effects on immune cell viability, morphology and proliferation in a sub-microliter cell sampler system." Thesis, Linköpings universitet, Institutionen för fysik, kemi och biologi, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-89982.

Full text
Abstract:
Today, most traditional methods used in the research of immune cells, such as flow cytometry and microscopy, are based on average values of cell responses. However, immune cells are heterogeneous and respond differently to a given stimulus. There is also a risk that important, but rare, behaviors of individual cells are missed when a larger population of immune cells is analyzed. Also, flow cytometry and microscopy do not allow long-term survival of cells; these methods lack the ability to do dynamic long-term analysis of motile immune cells, i.e. studies of cell-cell interactions, morphology and proliferation. In a patient who is affected by cancer, the cell heterogeneity contributes to the ability to battle various types of cancer or virus infections. In an outbreak, immune cells recognize and kill tumor cells. However, the number of specific immune cells is sometimes too few to kill all the tumor cells in a successful way. One way to help these patients is to isolate, select out and cultivate the active immune cells with capacity to kill tumor cells. The Cell Physics Laboratory (a part of the Department of Applied Physics) at the Royal Institute of Technology (KTH) has developed a method for single-cell analysis where the immune cells are trapped in microwells in a silicon chip. The immune cells are then studied by using fluorescence microscopy in an inverted setup. The method enables high-throughput experiments due to the parallelization. Furthermore, since the immune cells survive long periods in the chip, the cells can be analyzed over several days up to weeks. The research group has also developed a semi-automatic ‘cell-picker’. The cell-picker will be used in combination with the developed method for single-cell analysis, which enables picking of cells of interest. In this report, experiments for the characterization and evaluation of the biocompatibility of two generations of the cell-picker will be presented. The experiments include development of a protocol for the cell-picking process, studies of the survival time of transferred cells for both generations of the cell-picker, and studies of surface coating in the chip in order to increase the biocompatibility. The preliminary results indicate that the cell-picker has potential to be used as a selection tool for immune cells of interest.
APA, Harvard, Vancouver, ISO, and other styles
41

Buschmann, Tilo. "The Systematic Design and Application of Robust DNA Barcodes." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-209812.

Full text
Abstract:
High-throughput sequencing technologies are improving in quality, capacity, and costs, providing versatile applications in DNA and RNA research. For small genomes or fractions of larger genomes, DNA samples can be mixed and loaded together on the same sequencing track. This so-called multiplexing approach relies on a specific DNA tag, index, or barcode that is attached to the sequencing or amplification primer and hence accompanies every read. After sequencing, each sample read is identified on the basis of the respective barcode sequence. Alterations of DNA barcodes during synthesis, primer ligation, DNA amplification, or sequencing may lead to incorrect sample identification unless the error is revealed and corrected. This can be accomplished by implementing error correcting algorithms and codes. This barcoding strategy increases the total number of correctly identified samples, thus improving overall sequencing efficiency. Two popular sets of error-correcting codes are Hamming codes and codes based on the Levenshtein distance. Levenshtein-based codes operate only on words of known length. Since a DNA sequence with an embedded barcode is essentially one continuous long word, application of the classical Levenshtein algorithm is problematic. In this thesis we demonstrate the decreased error correction capability of Levenshtein-based codes in a DNA context and suggest an adaptation of Levenshtein-based codes that is proven to efficiently correct nucleotide errors in DNA sequences. In our adaptation, we take any DNA context into account and impose stricter rules for the selection of barcode sets. In simulations we show the superior error correction capability of the new method compared to traditional Levenshtein and Hamming based codes in the presence of multiple errors. We present an adaptation of Levenshtein-based codes to DNA contexts capable of guaranteed correction of a pre-defined number of insertion, deletion, and substitution mutations. Our improved method is additionally capable of correcting on average more random mutations than traditional Levenshtein-based or Hamming codes. As part of this work we prepared software for the flexible generation of DNA codes based on our new approach. To adapt codes to specific experimental conditions, the user can customize sequence filtering, the number of correctable mutations and barcode length for highest performance. However, not every platform is susceptible to a large number of both indel and substitution errors. The Illumina “Sequencing by Synthesis” platform shows a very large number of substitution errors as well as a very specific shift of the read that results in inserted and deleted bases at the 5’-end and the 3’-end (which we call phaseshifts). We argue in this scenario that the application of Sequence-Levenshtein-based codes is not efficient because it aims for a category of errors that barely occurs on this platform, which reduces the code size needlessly. As a solution, we propose the “Phaseshift distance” that exclusively supports the correction of substitutions and phaseshifts. Additionally, we enable the correction of arbitrary combinations of substitution and phaseshift errors. Thus, we address the lopsided number of substitutions compared to phaseshifts on the Illumina platform. To compare codes based on the Phaseshift distance to Hamming codes as well as codes based on the Sequence-Levenshtein distance, we simulated an experimental scenario based on the error pattern we identified on the Illumina platform.
Furthermore, we generated a large number of different sets of DNA barcodes using the Phaseshift distance and compared codes of different lengths and error correction capabilities. We found that codes based on the Phaseshift distance can correct a number of errors comparable to codes based on the Sequence-Levenshtein distance while offering the number of DNA barcodes comparable to Hamming codes. Thus, codes based on the Phaseshift distance show a higher efficiency in the targeted scenario. In some cases (e.g., with PacBio SMRT in Continuous Long Read mode), the position of the barcode and DNA context is not well defined. Many reads start inside the genomic insert so that adjacent primers might be missed. The matter is further complicated by coincidental similarities between barcode sequences and reference DNA. Therefore, a robust strategy is required in order to detect barcoded reads and avoid a large number of false positives or negatives. For mass inference problems such as this one, false discovery rate (FDR) methods are powerful and balanced solutions. Since existing FDR methods cannot be applied to this particular problem, we present an adapted FDR method that is suitable for the detection of barcoded reads as well as suggest possible improvements.
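To make the coding-theory background concrete, here is a small sketch of the classical approach the thesis starts from: computing the Levenshtein distance between candidate barcodes and greedily keeping only barcodes whose pairwise distance is at least 2k + 1, the usual requirement for guaranteed correction of k errors. This illustrates the traditional scheme only, not the adapted Sequence-Levenshtein or Phaseshift codes developed in the thesis.

```python
import itertools

def levenshtein(a, b):
    """Classical edit distance counting substitutions, insertions, and deletions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def greedy_barcode_set(length=6, k=1, alphabet="ACGT"):
    """Greedily keep barcodes whose pairwise Levenshtein distance is >= 2*k + 1,
    the classical condition for guaranteed correction of k errors."""
    chosen = []
    for candidate in ("".join(p) for p in itertools.product(alphabet, repeat=length)):
        if all(levenshtein(candidate, b) >= 2 * k + 1 for b in chosen):
            chosen.append(candidate)
    return chosen

# e.g. len(greedy_barcode_set(length=6, k=1)) gives the size of one such code
```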
APA, Harvard, Vancouver, ISO, and other styles
42

Thomson, Joanna. "A single blind prospective Randomised Controlled Trial (RCT) to investigate the need for a transition phase following successful completion of functional appliance therapy and a sample size calculation pilot study." Thesis, University of Liverpool, 2017. http://livrepository.liverpool.ac.uk/3008124/.

Full text
Abstract:
Aims: The aim of this study was to investigate whether transition following successful completion of functional appliance therapy was important in retention of the corrected overjet.
Objectives: The main objective was to investigate whether a transition phase following successful completion of functional appliance therapy was important in retention of the corrected overjet. In addition, a secondary objective was to perform a pilot study to aid in a sample size calculation for a larger trial to investigate the effect of transition versus no transition following successful completion of functional appliance therapy.
Method: Patients were randomly allocated to either Group A (standard care) or Group B (intervention). They were then followed up monthly for a period of three months. At each monthly visit, clinical measurements of overjet were taken. At the end of this period the clinical measurements were analysed by a blinded assessor who was unaware of the allocation of the treatment.
Outcome measures: The primary outcome variable was the presence/absence of a corrected overjet at the end of the study within 2 mm of the overjet at the end of Twin Block therapy. The overjet measurements were based on clinical findings.
Results:
• The recruited sample was primarily female at 66.6% compared to males at 33.3%. Females were shown to have a higher success rate than males at 80%; however, these results were not significant.
• The average overjet in Group A was 7.1 mm. The average overjet of the successful patients in Group A was 7.3 mm and 7 mm in the unsuccessful patients.
• The average overjet in Group B was 10.5 mm. The average overjet in the successful patients in Group B was 11.3 mm and 9.3 mm in the unsuccessful patients.
• In Group A, 1 successful and 1 unsuccessful patient attended the casuals' clinic. In Group B, 3 successful and 1 unsuccessful patients attended the casuals' clinic.
• In the total sample only 1 patient cancelled an appointment; this appointment was subsequently rearranged for within 2 weeks of the original date.
• In the total sample only 1 patient failed to attend an appointment.
• All patients in the study had a Dental Health Component (DHC) of either a 4a or 5a: 33.3% had a DHC of 5a and 66.66% had a 4a.
• A Fisher's exact test was carried out to compare the proportions with retention of the corrected overjet. The p value was 0.643 (Appendix 1), which was substantially higher than the 0.05 level of statistical significance set for this study.
Conclusion:
1. The null hypothesis could not be rejected based on the results of this study. Therefore, there is no evidence of a difference between using any form of transition following successful completion of functional appliance therapy and no transition.
2. A secondary objective of this study was to investigate the sample size needed to appropriately power a project of this type. This was estimated at 97 patients per group with power set at 80% and a significance level of 0.05.
3. Due to the small sample size, the study was not powered to detect a clinically significant difference between the groups, and therefore it is not possible to determine whether the intervention maintained the corrected overjet better than standard care.
4. This pilot did not give sufficient information to accurately estimate the proportion of successful retention of the overjet in the transition group.
5. The investigation was not continued after the doctoral project was complete due to problems with recruitment and lack of personnel to continue the study.
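Conclusion 2 reports a requirement of 97 patients per group at 80% power and a 0.05 significance level. The sketch below shows the standard normal-approximation sample-size formula for comparing two proportions; the example success rates are assumptions, since the abstract does not state which proportions were used to arrive at 97 per group.

```python
from math import sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for detecting a difference between two proportions,
    using the usual normal approximation. p1 and p2 are assumed success rates."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# e.g. n_per_group(0.8, 0.6) is about 81 per group before rounding up
```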
APA, Harvard, Vancouver, ISO, and other styles
43

Panchal, Hemang B., Shimin Zheng, Ashraf Abusara, Eunice Mogusu, and Timir K. Paul. "National Trend in Hospitalization Cost for In-patient Single Vessel Percutaneous Coronary Intervention in Patients with and without Diabetes Mellitus in the United States: An Analysis from Nationwide Inpatient Sample from 2006-2011." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etsu-works/114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Good, Norman Markus. "Methods for estimating the component biomass of a single tree and a stand of trees using variable probability sampling techniques." Thesis, Queensland University of Technology, 2001. https://eprints.qut.edu.au/37097/1/37097_Good_2001.pdf.

Full text
Abstract:
This thesis developed multistage sampling methods for estimating the aggregate biomass of selected tree components, such as leaves, branches, trunk and total, in woodlands in central and western Queensland. To estimate the component biomass of a single tree, randomised branch sampling (RBS) and importance sampling (IS) were trialed. RBS and IS were found to reduce the amount of time and effort to sample tree components in comparison with other standard destructive sampling methods such as ratio sampling, especially when sampling small components such as leaves and small twigs. However, RBS did not estimate leaf and small twig biomass to an acceptable degree of precision using current methods for creating path selection probabilities. In addition to providing an unbiased estimate of tree component biomass, individual estimates were used for developing allometric regression equations. Equations based on large components such as total biomass produced narrower confidence intervals than equations developed using ratio sampling. However, RBS does not estimate small component biomass such as leaves and small wood components with an acceptable degree of precision, and should be mainly used in conjunction with IS for estimating larger component biomass. A whole tree was completely enumerated to set up a sampling space with which RBS could be evaluated under a number of scenarios. To achieve a desired precision, RBS sample size and branch diameter exponents were varied, and the RBS method was simulated using both analytical and re-sampling methods. It was found that there is a significant amount of natural variation present when relating the biomass of small components to branch diameter, for example. This finding validates earlier decisions to question the efficacy of RBS for estimating small component biomass in eucalypt species. In addition, significant improvements can be made to increase the precision of RBS by increasing the number of samples taken, but more importantly by varying the exponent used for constructing selection probabilities. To further evaluate RBS on trees with growth forms differing from that enumerated, virtual trees were generated. These virtual trees were created using L-systems algebra. Decision rules for creating trees were based on easily measurable characteristics that influence a tree's growth and form. These characteristics included: child-to-child and children-to-parent branch diameter relationships, branch length and branch taper. They were modelled using probability distributions of best fit. By varying the size of a tree and/or the variation in the model describing tree characteristics, it was possible to simulate the natural variation between trees of similar size and form. By creating visualisations of these trees, it is possible to determine by visual means whether RBS could be effectively applied to particular trees or tree species. Simulation also aided in identifying which characteristics most influenced the precision of RBS, namely branch length and branch taper. After evaluation of RBS/IS for estimating the component biomass of a single tree, methods for estimating the component biomass of a stand of trees (or plot) were developed and evaluated. A sampling scheme was developed which incorporated both model-based and design-based biomass estimation methods. This scheme clearly illustrated the strong and weak points associated with both approaches for estimating plot biomass.
Using ratio sampling was more efficient than using RBS/IS in the field, especially for larger tree components. Probability proportional to size (PPS) sampling, with size being the trunk diameter at breast height, generated estimates of component plot biomass that were comparable to those generated using model-based approaches. The research did, however, indicate that PPS is more precise than the use of regression prediction (allometric) equations for estimating larger components such as trunk or total biomass, and the precision increases in areas of greater biomass. Using more reliable auxiliary information for identifying suitable strata would reduce the amount of within-plot variation, thereby increasing precision. PPS had the added advantage of being unbiased and unhindered by the numerous assumptions applicable to the population of interest, as is the case with a model-based approach. The application of allometric equations in predicting the component biomass of tree species other than that for which the allometric was developed is problematic. Differences in wood density need to be taken into account, as well as differences in growth form and within-species variability, as outlined in the virtual tree simulations. However, the development and application of allometric prediction equations in local species-specific contexts is more desirable than PPS.
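Randomised branch sampling, as evaluated above, selects a path through the tree with probabilities built from branch diameters raised to an exponent, and the product of the step probabilities is used to inflate the measured biomass into an unbiased whole-tree estimate. The sketch below shows only that selection step; the nested-dict tree representation and the default exponent are assumptions for illustration, not the thesis's data structures.

```python
import random

def select_branch_path(tree, exponent=2.0, rng=random.Random(0)):
    """Walk from the trunk to a terminal segment, choosing each child branch with
    probability proportional to (diameter ** exponent). Returns the terminal
    segment and the path selection probability; 1 / probability is the expansion
    factor applied to the biomass measured along the selected path.
    `tree` is assumed to be a nested dict: {"diameter": float, "children": [...]}."""
    node, path_probability = tree, 1.0
    while node["children"]:
        weights = [child["diameter"] ** exponent for child in node["children"]]
        total = sum(weights)
        pick = rng.choices(range(len(weights)), weights=weights)[0]
        path_probability *= weights[pick] / total
        node = node["children"][pick]
    return node, path_probability
```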
APA, Harvard, Vancouver, ISO, and other styles
45

Krieger, Brian L. "Rapid automated calibration using discontinuous flow analysis and sequential injection." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Bryan, Paul David. "Accelerating microarchitectural simulation via statistical sampling principles." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47715.

Full text
Abstract:
The design and evaluation of computer systems rely heavily upon simulation. Simulation is also a major bottleneck in the iterative design process. Applications that may be executed natively on physical systems in a matter of minutes may take weeks or months to simulate. As designs incorporate increasingly higher numbers of processor cores, it is expected the times required to simulate future systems will become an even greater issue. Simulation exhibits a tradeoff between speed and accuracy. By basing experimental procedures upon known statistical methods, the simulation of systems may be dramatically accelerated while retaining reliable methods to estimate error. This thesis focuses on the acceleration of simulation through statistical processes. The first two techniques discussed in this thesis focus on accelerating single-threaded simulation via cluster sampling. Cluster sampling extracts multiple groups of contiguous population elements to form a sample. This thesis introduces techniques to reduce sampling and non-sampling bias components, which must be reduced for sample measurements to be reliable. Non-sampling bias is reduced through the Reverse State Reconstruction algorithm, which removes ineffectual instructions from the skipped instruction stream between simulated clusters. Sampling bias is reduced via the Single Pass Sampling Regimen Design Process, which guides the user towards selected representative sampling regimens. Unfortunately, the extension of cluster sampling to include multi-threaded architectures is non-trivial and raises many interesting challenges. Overcoming these challenges will be discussed. This thesis also introduces thread skew, a useful metric that quantitatively measures the non-sampling bias associated with divergent thread progressions at the beginning of a sampling unit. Finally, the Barrier Interval Simulation method is discussed as a technique to dramatically decrease the simulation times of certain classes of multi-threaded programs. It segments a program into discrete intervals, separated by barriers, which are leveraged to avoid many of the challenges that prevent multi-threaded sampling.
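The statistical machinery behind cluster sampling can be summarised in a few lines: each contiguous cluster of simulated instructions contributes one observation (for instance its cycles per instruction), and the spread of those observations gives the error estimate. The sketch below is purely illustrative of that principle and is not the thesis's exact procedure.

```python
import statistics

def cluster_estimate(cluster_means):
    """Treat each simulated cluster's mean (e.g. CPI over a contiguous group of
    instructions) as one observation and report the overall estimate together
    with a rough 95% confidence half-width (needs at least two clusters)."""
    n = len(cluster_means)
    mean = statistics.fmean(cluster_means)
    se = statistics.stdev(cluster_means) / n ** 0.5
    return mean, 1.96 * se

# e.g. cluster_estimate([1.21, 1.35, 1.18, 1.40, 1.27])
```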
APA, Harvard, Vancouver, ISO, and other styles
47

Bedewy, Ahmed M. "Optimizing Data Freshness in Information Update Systems." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618573325086709.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bastardo-Fernandez, Isabel. "Vers une fiabilité améliorée de la détermination de (nano)particules de TiO2 par single particle inductively coupled plasma-mass spectrometry : application à la caractérisation des aliments et aux études de migration." Electronic Thesis or Diss., Maisons-Alfort, École nationale vétérinaire d'Alfort, 2024. http://www.theses.fr/2024ENVA0001.

Full text
Abstract:
This PhD project aims primarily to improve the reliability of the characterisation of TiO2 nanoparticles (NPs) and to gain knowledge of the food additive E171 in real-life applications such as the migration of these NPs from food packaging. In the first part of the study (to be carried out at Anses), a new approach for TiO2 NP characterisation will be developed and optimized by using the single particle approach in combination with inductively coupled plasma-triple quadrupole mass spectrometry (Sp-ICP-QQQMS). For this purpose, the most critical analytical parameters, such as the transport efficiency (TE) calculation methods and the sample introduction system, will be assessed under different working conditions (e.g. reaction gas, choice of isotope). In the latter case, two high efficiency sample introduction systems (APEX type) will be critically compared. Further, a complementary Sp approach based on ICP-high resolution MS (Sp-ICP-HRMS) will be developed at LNE. The novelty in this case will be the use of a high resolution (magnetic sector field) ICP-MS for detection, which is the state-of-the-art technique for trace and ultra-trace determination of highly interfered elements such as Ti. An in-house injection system will also be optimized to increase the transport efficiency and sensitivity. Method validation will be achieved by inter-laboratory comparison between LNE and Anses. A true added value of the project will be the assessment of the measurement uncertainty related to TiO2 NP characterization by both Sp-ICP-MS (QQQ and HR) approaches. The uncertainty calculations will take into account not only the experimental reproducibility and the uncertainties of each variable required to convert the ICP-MS signal into NP size and concentration, but also, for the first time, the effect of the choice of the cut-off used to discriminate the ICP-MS ionic signal from that of NPs. The effect of deviations from the spherical shape on the sizes will also be explored and compared with scanning electron microscopy (SEM), which is the reference method for NP characterisation. The project also aims at the preparation and exhaustive characterization of a real-life (food additive) reference material containing TiO2 nanoparticles. A feasibility study of the development of an E171-based RM in suspension form will be carried out. For this purpose, a representative E171 sample will be prepared and fully characterized by a panel of complementary techniques, such as SEM, Sp-ICP-QQQMS, Sp-ICP-HRMS and X-ray diffraction (XRD), to accurately assess the main parameters of interest, such as the median and mean diameter, size distribution, fraction of nanoparticles, chemical impurities and crystallographic fraction. Finally, both analytical approaches developed at Anses and LNE, including the developed method for global uncertainty assessment, will be applied to the study of the transfer of TiO2 NPs from food packaging.
Throughout the project, the size data obtained by using the newly developed "single particle" based approaches for TiO2 NP characterisation will be compared to SEM measurements, which is the reference method for size in this study field. Food packaging migration is indeed a selected case study where Sp-ICP-MS has the potential of supplying additional information, such as particle concentration and the proportion of particulate vs. dissolved form, which are of importance for migration as well as for improving risk assessment studies.
APA, Harvard, Vancouver, ISO, and other styles
49

Braun, Stefan K. "Aspekte des „Samplings“." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-147027.

Full text
Abstract:
Mash-ups (also called bootlegging, bastard pop or collage) have enjoyed growing popularity for years. Whereas at the beginning of the 1990s they usually combined only two different pop songs, whose vocal and instrumental tracks were mixed into each other in remix form, today there are multi-mash-ups with several dozen mixed and sampled songs, artists, video sequences and effects. One challenge lies in combining very different styles and mixing them into new danceable tracks from the charts. The mash-up project Pop Danthology, for example, contains 68 different artists in a recent music clip of barely 6 minutes, including Bruno Mars, Britney Spears, Rihanna and Lady Gaga. The use and sampling of other artists' music and video titles can constitute a copyright infringement. The composers of the track "Nur mir", with singer Sabrina Setlur, lost a legal dispute that went all the way to the German Federal Court of Justice (BGH). According to the BGH, in the course of sound-recording sampling they infringed the phonogram producers' rights of the plaintiffs (the band Kraftwerk) by taking, by way of "sampling", two bars of a rhythm sequence from the track "Metall auf Metall" and using them underneath their own piece. Rapid technical progress now makes it ever easier, faster and better to edit and alter music, film and image recordings. Computers with editing software have replaced keyboards, synthesizers and analogue multitrack technology. The methods of sampling differ from the classic pirated copy in that the adoption of a sample involves far-reaching transformation and reworking, whereas a pirated copy is characterised by an unaltered takeover of the original. The effects of unlawfully performed sampling concern the copyrights and neighbouring rights of performing artists as well as the neighbouring rights of phonogram producers. Under certain circumstances, violations of general personality rights and competition law are also the subject of legal disputes.
APA, Harvard, Vancouver, ISO, and other styles
50

Hantke, Max Felix. "Coherent Diffractive Imaging with X-ray Lasers." Doctoral thesis, Uppsala universitet, Molekylär biofysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-306609.

Full text
Abstract:
The newly emerging technology of X-ray free-electron lasers (XFELs) has the potential to revolutionise molecular imaging. XFELs generate very intense X-ray pulses and predictions suggest that they may be used for structure determination to atomic resolution even for single molecules. XFELs produce femtosecond pulses that outrun processes of radiation damage and permit the study of structures at room temperature and of structural dynamics. While the first demonstrations of flash X-ray diffractive imaging (FXI) on biological particles were encouraging, they also revealed technical challenges. In this work we demonstrated how some of these challenges can be overcome. We exemplified, with heterogeneous cell organelles, how tens of thousands of FXI diffraction patterns can be collected, sorted, and analysed in an automatic data processing pipeline. We improved image resolution and reduced problems with missing data. We validated, described, and deposited the experimental data in the Coherent X-ray Imaging Data Bank. We demonstrated that aerosol injection can be used to collect FXI data at high hit ratios and with low background. We reduced problems with non-volatile sample contaminants by decreasing aerosol droplet sizes from ~1000 nm to ~150 nm. We achieved this by adapting an electrospray aerosoliser to the Uppsala sample injector. Mie scattering imaging was used as a diagnostic tool to measure positions, sizes, and velocities of individual injected particles. XFEL experiments generate large amounts of data at high rates. Preparation, execution, and data analysis of these experiments benefit from specialised software. In this work we present new open-source software tools that facilitate prediction, online monitoring, display, and pre-processing of XFEL diffraction data. We hope that this work is a valuable contribution in the quest of transitioning FXI from its first experimental demonstration into a technique that fulfils its potential.
APA, Harvard, Vancouver, ISO, and other styles
