To see the other types of publications on this topic, follow the link: Sample-sample.

Dissertations / Theses on the topic 'Sample-sample'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Sample-sample.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Hippert, Theresa M. "Hippert work sample." Online version, 2002. http://www.uwstout.edu/lib/thesis/2002/2002hippertt.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Grossman, J. P. 1973. "Point sample rendering." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/50063.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 54-56).

We present an algorithm suitable for real-time, high quality rendering of complex objects. Objects are represented as a dense set of surface point samples which contain colour, depth and normal information. These point samples are obtained by sampling orthographic views on an equilateral triangle lattice. They are rendered directly and independently without any knowledge of surface topology. We introduce a novel solution to the problem of surface reconstruction using a hierarchy of Z-buffers to detect tears. The algorithm is fast, easily vectorizable, and requires only modest resources.

by J. P. Grossman. S.M.
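The hierarchy-of-Z-buffers idea in this abstract can be sketched in a few lines. This is only an illustrative toy with hypothetical helper names (a min-pooled two-level depth pyramid over integer pixel coordinates), not the thesis's actual reconstruction algorithm: a pixel left empty at the fine level, but whose coarser parent cell is covered, is flagged as a tear to be filled.

```python
INF = float("inf")

def render_points(points, size):
    """Splat (x, y, depth) samples into a size-by-size z-buffer, keeping the nearest depth."""
    zbuf = [[INF] * size for _ in range(size)]
    for x, y, z in points:
        if 0 <= x < size and 0 <= y < size and z < zbuf[y][x]:
            zbuf[y][x] = z
    return zbuf

def downsample(zbuf):
    """Coarser pyramid level: each cell keeps the nearest depth of its 2x2 block."""
    n = len(zbuf) // 2
    return [[min(zbuf[2*y][2*x], zbuf[2*y][2*x+1],
                 zbuf[2*y+1][2*x], zbuf[2*y+1][2*x+1]) for x in range(n)]
            for y in range(n)]

def detect_tears(zbuf):
    """A tear: an empty fine-level pixel whose parent cell in the coarser
    z-buffer is covered, i.e. the surface is present there but sampling left a hole."""
    coarse = downsample(zbuf)
    tears = []
    for y in range(len(zbuf)):
        for x in range(len(zbuf)):
            if zbuf[y][x] == INF and coarse[y // 2][x // 2] < INF:
                tears.append((x, y))
    return tears
```

A full implementation would fill the flagged pixels from the coarser levels; the sketch only detects them.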
3

Rattray, V. Robin (Vaughn Robin). "Sample preconcentration and analysis by direct sample insertion inductively coupled plasma spectrometry." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=28892.

Abstract:
Several approaches to sample preconcentration combined with sample introduction by Direct Sample Insertion (DSI) into the Inductively Coupled Plasma (ICP) for ultra-trace elemental analysis have been developed. A microcolumn-based flow injection (FI) manifold was used with ICP Mass Spectrometry (MS), but performance was adversely affected by high and variable blank levels.

Physical preconcentration, by depositing the sample as an aerosol into an inductively heated graphite DSI probe, yielded detection limit improvements of over two orders of magnitude for ICP Atomic Emission Spectrometry (AES). Instrumentation was developed to automate the aerosol deposition preconcentration process, and this apparatus was used in conjunction with ICP-MS. Several of the aerosol deposition, DSI, and ICP parameters that impact the performance of the technique were studied. Detection limit improvements averaged two orders of magnitude, and analysis of a river water reference material for 10 elements gave good results even at the part-per-trillion (pg ml⁻¹) level.

Investigations into direct analysis of the analyte-laden chelating resin by DSI-ICP-AES were carried out. It was clearly demonstrated that the determination of volatile elements was adversely affected by the pyrolysis products of the resin, which altered the plasma excitation conditions.
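The "two orders of magnitude" detection-limit improvements quoted above follow directly from the standard 3-sigma definition of the limit of detection. A minimal sketch with made-up numbers, under the idealized assumption that preconcentration multiplies the effective sensitivity while the blank noise stays fixed:

```python
from statistics import stdev

def detection_limit(blank_signals, sensitivity):
    """IUPAC-style limit of detection: 3 x the standard deviation of replicate
    blank measurements, converted to concentration units via the calibration
    sensitivity (signal per unit concentration)."""
    return 3 * stdev(blank_signals) / sensitivity
```

Under this idealization a 100-fold enrichment lowers the LOD by exactly two orders of magnitude; in practice, as the FI-ICP-MS results above show, blank contamination can erode the gain.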
4

Serra, Puertas Jorge. "Shrinkage corrections of sample linear estimators in the small sample size regime." Doctoral thesis, Universitat Politècnica de Catalunya, 2016. http://hdl.handle.net/10803/404386.

Abstract:
We are living in a data-deluge era where the dimensionality of the data gathered by inexpensive sensors is growing at a fast pace, whereas the availability of independent samples of the observed data is limited. Thus, classical statistical inference methods, which rely on the assumption that the sample size is large compared to the observation dimension, suffer a severe performance degradation. Within this context, this thesis focuses on a popular problem in signal processing: the estimation of a parameter observed through a linear model. This inference is commonly based on a linear filtering of the data. For instance, in beamforming in array signal processing, a spatial filter steers the beampattern of the antenna array towards a direction to obtain the signal of interest (SOI). In signal processing, the design of the optimal filters relies on the optimization of performance measures such as the Mean Square Error (MSE) and the Signal to Interference plus Noise Ratio (SINR). When the first two moments of the SOI are known, the optimization of the MSE leads to the Linear Minimum Mean Square Error (LMMSE) estimator. When such statistical information is not available, one may impose a no-distortion constraint towards the SOI in the optimization of the MSE, which is equivalent to maximizing the SINR. This leads to the Minimum Variance Distortionless Response (MVDR) method. The LMMSE and MVDR are optimal, though unrealizable in general, since they depend on the inverse of the data correlation matrix, which is not known. The common approach to circumvent this problem is to replace it with the inverse of the sample correlation matrix (SCM), leading to the sample LMMSE and sample MVDR. This approach is optimal when the number of available statistical samples tends to infinity for a fixed observation dimension.
This large-sample-size scenario hardly ever holds in practice, and the sample methods undergo large performance degradations in the small sample size regime, which may be due to short stationarity constraints or to a system with a high observation dimension. The aim of this thesis is to propose corrections of sample estimators, such as the sample LMMSE and MVDR, to circumvent their performance degradation in the small sample size regime. To this end, two powerful tools are used: shrinkage estimation and random matrix theory (RMT). Shrinkage estimation introduces a structure on the filters that forces some corrections in small-sample-size situations; it improves sample-based estimators by optimizing a bias-variance tradeoff. As direct optimization of these shrinkage methods leads to unrealizable estimators, a consistent estimate of the optimal shrinkage estimators is obtained within the general asymptotics where both the observation dimension and the sample size tend to infinity at a fixed ratio. That is, RMT is used to obtain consistent estimates within an asymptotic regime that deals naturally with the small sample size. This RMT approach does not require any assumptions about the distribution of the observations. The proposed filters deal directly with the estimation of the SOI, which leads to performance gains compared to related methods based on optimizing a metric related to the data covariance estimate, or on rather ad-hoc regularizations of the SCM. Compared to related methods which also treat the estimation of the SOI directly and which are based on a shrinkage of the SCM, the proposed filter structure is more general: it encompasses corrections of the inverse of the SCM and includes the related methods as particular cases. This leads to performance gains which are notable when there is a mismatch in the signature vector of the SOI.
This mismatch and the small sample size are the main sources of degradation of the sample LMMSE and MVDR. Thus, in the last part of this thesis, unlike the previously proposed filters and the related work, we propose a filter which treats both sources of degradation directly.
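The shrinkage correction described above can be illustrated on a toy 2x2 example. This is only a sketch of the general idea, with a hand-picked shrinkage factor `rho`; the thesis instead derives RMT-consistent estimates of the optimal shrinkage rather than fixing it by hand:

```python
def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def shrunk_filter(scm, p, rho):
    """Linear filter w = (rho*R_hat + (1-rho)*I)^(-1) p.
    rho = 1 gives the plain sample-LMMSE-type filter; rho < 1 shrinks the
    ill-conditioned SCM toward the identity, taming small-sample noise."""
    r = [[rho * scm[0][0] + (1 - rho), rho * scm[0][1]],
         [rho * scm[1][0], rho * scm[1][1] + (1 - rho)]]
    ri = inv2(r)
    return [ri[0][0] * p[0] + ri[0][1] * p[1],
            ri[1][0] * p[0] + ri[1][1] * p[1]]
```

With a near-singular sample eigenvalue, the unshrunk filter coefficient along that direction explodes, while the shrunk one stays bounded, which is exactly the small-sample pathology the thesis addresses.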
5

Sima, Chao. "Small sample feature selection." Texas A&M University, 2003. http://hdl.handle.net/1969.1/5796.

Abstract:
High-throughput technologies for rapid measurement of vast numbers of biological variables offer the potential for highly discriminatory diagnosis and prognosis; however, high dimensionality together with small samples creates the need for feature selection, while at the same time making feature-selection algorithms less reliable. Feature selection is required to avoid overfitting, and the combinatorial nature of the problem demands a suboptimal feature-selection algorithm. In this dissertation, we have found that feature selection is problematic in small-sample settings via three different approaches. First, we examined the feature-ranking performance of several kinds of error estimators for different classification rules, by considering all feature subsets and using two measures of performance. The results show that their ranking is strongly affected by inaccurate error estimation. Secondly, since enumerating all feature subsets is computationally impossible in practice, a suboptimal feature-selection algorithm is often employed to find, from a large set of potential features, a small subset with which to classify the samples. If error estimation is required for a feature-selection algorithm, then the impact of error estimation can be greater than the choice of algorithm. Lastly, we took a regression approach by comparing the classification errors for the optimal feature sets and the errors for the feature sets found by feature-selection algorithms. Our study shows that it is unlikely that feature selection will yield a feature set whose error is close to that of the optimal feature set, and the inability to find a good feature set should not lead to the conclusion that good feature sets do not exist.
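The dependence of a suboptimal search on its error estimator can be made concrete with a greedy sequential forward selection sketch. This is a generic illustration, not the dissertation's specific experimental setup: whatever the supplied `est_error` function returns drives every choice, so a noisy estimate misleads the search regardless of which search algorithm is used.

```python
def forward_selection(features, est_error, k):
    """Greedy sequential forward selection: repeatedly add the feature whose
    inclusion yields the lowest *estimated* classification error for the
    current subset. The estimate, not the true error, steers the search."""
    chosen, remaining = [], list(features)
    for _ in range(k):
        best = min(remaining, key=lambda f: est_error(chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Plugging in a resubstitution, cross-validation, or bootstrap estimator for `est_error` changes the selected subset; that sensitivity is the dissertation's second finding.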
6

Isheden, Gabriel. "Bayesian Hierarchic Sample Clustering." Thesis, KTH, Matematik (Inst.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168316.

Abstract:
This report presents a novel algorithm for hierarchical clustering called Bayesian Sample Clustering (BSC). BSC is a single linkage algorithm that uses data samples to produce a predictive distribution for each sample. The predictive distributions are compared using the Chan-Darwiche distance, a metric for finite probability distributions, to produce a hierarchy of samples. The implemented version of BSC is found at https://github.com/Skjulet/Bayesian Sample Clustering.
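The Chan-Darwiche distance used by BSC has a simple closed form for finite distributions on a common support. A minimal sketch, assuming strictly positive probabilities (handling of zeros is left aside):

```python
import math

def chan_darwiche(p, q):
    """Chan-Darwiche distance between two finite distributions on a common
    support: ln(max_i q_i/p_i) - ln(min_i q_i/p_i). It is zero iff p == q,
    and symmetric in its arguments."""
    ratios = [qi / pi for pi, qi in zip(p, q)]
    return math.log(max(ratios)) - math.log(min(ratios))
```

In BSC these pairwise distances between predictive distributions feed a single-linkage merge to build the hierarchy.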
7

Angelikaki, C. "An intelligent sample changer." Thesis, University of Reading, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.234786.

8

Kathman, Steven Jay Jr. "Discrete Small Sample Asymptotics." Diss., Virginia Tech, 1999. http://hdl.handle.net/10919/30101.

Abstract:
Random variables defined on the natural numbers may often be approximated by Poisson variables. Just as normal approximations may be improved by saddlepoint methods, Poisson approximations may be substantially improved by tilting, expansion, and other related methods. This work will develop and examine the use of these methods, as well as present examples where such methods may be needed.

Ph. D.
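The quality of a plain Poisson approximation, the baseline these tilting and expansion methods improve upon, can be quantified with the total variation distance; Le Cam's inequality bounds it by np² for a Binomial(n, p) target. A small sketch of that baseline only (the dissertation's refinements are not reproduced here):

```python
from math import comb, exp

def binom_pmf(n, p, k):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def tv_binom_poisson(n, p):
    """Total variation distance between Binomial(n, p) and Poisson(np):
    half the sum of absolute pmf differences. The Poisson pmf is built
    iteratively, and its tail beyond n (negligible for small np) is ignored."""
    lam = n * p
    pois = exp(-lam)                       # Poisson pmf at k = 0
    total = abs(binom_pmf(n, p, 0) - pois)
    for k in range(1, n + 1):
        pois *= lam / k                    # P(k) = P(k-1) * lam / k
        total += abs(binom_pmf(n, p, k) - pois)
    return 0.5 * total
```

Shrinking p at fixed mean np tightens the approximation, in line with the Le Cam bound; tilted and expanded approximations shrink the remaining gap further.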
9

Ying, Lishi. "An automated direct sample insertion-inductively coupled plasma spectrometer for environmental sample analysis." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq39610.pdf.

10

Sing, Robert L. A. "Liquid and solid sample introduction into the inductively coupled plasma by direct sample insertion." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=74023.

11

Almowalad, Najah K. "Elimination of Electrochemical Oxidation during Sample Ionization Using Liquid Sample Desorption Electrospray Ionization (DESI)." Ohio University / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1474911988216817.

12

Litborn, Erik. "Sample handling in nanoscale chemistry." Doctoral thesis, KTH, Chemistry, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-2975.

Abstract:
Miniaturization is a strong on-going trend within analytical chemistry. This has led to an increased demand for new technologies allowing smaller volumes of samples as well as reagents to be utilized. This thesis deals with the use of open chip-based reactors (vials), a concept that offers increased flexibility compared to the use of closed reactors. The vials are manufactured by anisotropic etching of silicon.

First, a short introduction is given on the benefits of performing chemistry in miniaturized formats. Different types of reactors useful for performing chemistry in nanoscale are described, and the advantages and disadvantages of using a closed versus an open system are discussed.

Precise dosing of nanoliter-sized volumes of liquids in contact mode is performed by using miniaturized pipette tips or capillaries (Papers I, III & IV). Also, non-contact dosing using piezoelectric dispensers is demonstrated by performing nanoliter-sized acid-base titrations (Paper II). Standard deviations on the order of 1% were obtained.

Several strategies for handling the evaporation of water, while performing tryptic digests of native myoglobin in low-nanoliter-sized vials, are demonstrated. An increased conversion rate of the protein to peptides was observed when a nanovial (15 nL) reactor was used compared to the use of a conventional plastic vial (100 µL). Principles based on reducing the driving force for evaporation (Paper III), continuous compensation of evaporated material (Paper IV), as well as covering the reaction liquid with a volatile liquid lid of solvent (Paper V) are used. The volatile liquid lid is also used when performing PCR in volumes as low as 50 nL (Paper VI).

Short descriptions of the analytical methods utilized in the thesis are presented: capillary electrophoresis (Papers III, IV, V & VI), matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (Paper I), and fluorescence measurements (Papers II & VI).

Finally, an outlook on the developed technologies is given together with a discussion of possible future requirements in miniaturized chemistry.

Keywords: capillary electrophoresis, continuous compensation, enzymatic degradation, evaporation, lab-on-a-chip, liquid lid, MALDI, miniaturization, myoglobin, nanoliter, nanoscale chemistry, parallel, picoliter, piezo-dispenser, polymerase chain reaction, reactor, titration, tryptic digest, vial

© Erik Litborn, 2000
13

Montilla, Alfonso. "Sample treatment for arsenic speciation." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape2/PQDD_0010/MQ60154.pdf.

14

劉長拿 and Cheung-na Lau. "Interviewer effects in sample surveys." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1991. http://hub.hku.hk/bib/B31976566.

15

Salar, Kemal. "Sample size for correlation estimates." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27248.

16

Lau, Cheung-na. "Interviewer effects in sample surveys." [Hong Kong] : University of Hong Kong, 1991. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13064794.

17

Disque, J. Graham. "Counselor Supervision: Videotape Sample #6." Digital Commons @ East Tennessee State University, 1997. https://dc.etsu.edu/etsu-works/2852.

18

Bakx, Tom J. L. C. "The Herschel bright sources sample." Thesis, Cardiff University, 2018. http://orca.cf.ac.uk/115889/.

Abstract:
Far-infrared observations have detected dusty star-forming galaxies, a subset of galaxies which is extremely dust-extincted from the ultraviolet through the near-infrared. Recent studies show that this population of sources contributes significantly to the history of star formation, especially out to very high redshift. Recent surveys with the Herschel Space Observatory have uncovered around half a million of these sources, with the largest of these surveys, H-ATLAS, covering 616 square degrees. One of the most exciting discoveries is the lensed nature of the brightest of these sources, where the gravitational potential of a foreground galaxy lenses and amplifies the signal. The applications of gravitational lensing range from studying individual high-redshift sources at unprecedented resolution in sub-mm wavelengths with ALMA, to cosmological studies analysing the distribution of groups of lenses.

In this thesis, I explore the effect of applying a more inclusive selection criterion for lensed sources, and study the properties of the sources that are selected. Whereas the first attempts at finding lensed sources used a strict S500μm > 100 mJy flux-density cut, the sample I study is selected with a flux cut at 80 mJy: the Herschel Bright Sources (HerBS) sample. A photometric-redshift cut of zphot > 2 is also applied, as most lensing takes place at higher redshift. This redshift is calculated by fitting a spectral template to the 250, 350 and 500 μm observations from the Herschel SPIRE instrument. I push down the selection flux in order to select more lensed sources from the sub-mm surveys, while potentially including several unlensed sources. These unlensed sources could be among the most intrinsically luminous and star-forming objects in the Universe.

Fewer than five such objects are known to exist, while our HerBS sample could contain up to 35 of these sources, which could teach us about the upper limits of star formation and their contribution to forming the most massive galaxies in the Universe. I use 850 μm SCUBA-2 observations on the James Clerk Maxwell Telescope (JCMT) to remove blazar interlopers, which results in 209 sources in the HerBS sample after removing 14 blazars. At the time I wrote the paper upon which Chapter 2 is based, 24 sources had a spectroscopic redshift. I use this sub-sample to fit a two-temperature modified blackbody, and find a cold-body temperature of 21.3 K, a warm-body temperature of 45.8 K, a mass ratio of 26.7 and a dust-emissivity index of 1.8. These values do not challenge the current knowledge of sub-mm galaxies, but the quality of the fit suggests a large diversity among the galaxies in the sub-sample, and that they are poorly fitted by a single template. This diversity is also found in the spectroscopic observations with the IRAM 30-m telescope of eight of the highest-redshift (zphot > 4) sources of the HerBS sample. We found five spectroscopic redshifts, with one of the sources at the highest known HerBS redshift, zspec = 4.8. The spectrum fitted in Chapter 2 shows poor agreement with the photometric data points.

The spatial resolution of the SPIRE instrument on Herschel is not fine enough to resolve the structure of these high-redshift sources. Worse still, the beam width is so large, ranging from 18 to 36 arcseconds, that it is unclear whether we observe a single galaxy or several galaxies together. The beam width of the SCUBA-2 instrument at 850 μm is only 13 arcseconds. If the sample were dominated by blended sources, one would expect to resolve several of the sources into their individual components. This is not seen in any of the continuum images, although the blended sources might be blended on scales smaller than 13 arcseconds. The IRAM observations of two sources have detected multiple, contradictory spectral lines, suggesting we might be observing multiple sources aligned along the line of sight, instead of a single source. Unfortunately, only single spectral lines have been observed per source, and we are awaiting further observations verifying the blended nature of these sources, which are still expected to lie at high redshift.

The hypothesis that a significant portion of our sample consists of blended sources is in contradiction with multi-wavelength observations. When I look at the positions of these sources at different wavelengths, I find that most sources have a counterpart in these multi-wavelength observations, even when chance encounters are considered. Considering the high-redshift nature of our sources, together with the possibility of lensing, these counterparts are most likely foreground, lensing galaxies. I compare the positions of the HerBS sources to both the Sloan Digital Sky Survey (SDSS), which covers 121 of the 209 sources, and the VISTA Kilo-degree Infrared Galaxy (VIKING) survey, which covers 98 HerBS sources. For the SDSS counterparts, I use the H-ATLAS catalogue of counterpart sources, which was produced using a statistical estimator. This statistical estimator assumes a certain angular distribution between the sources at the Herschel position and the optical or near-infrared observations. I expect the majority of my sources to be lensed, and therefore I adjust the original angular distribution by including the effect of gravitational lensing. The adjustment is based on 15 ALMA observations of lensed, bright H-ATLAS sources. The revised analysis finds 41 counterparts, instead of the 31 found by the initial analysis.

This catalogue is not available for VIKING counterparts, and therefore I carried out the entire analysis for the VIKING counterparts, starting from the VIKING fields. I use the SExtractor package to extract the potential counterparts, and then derive the necessary estimators for the statistical method. I find a significantly different angular distribution, even compared to the one derived from the 15 ALMA observations of lensed H-ATLAS sources. The angular distribution extends to much larger angular scales, potentially suggesting a stronger contribution from galaxy-cluster lensing, which produces larger angular offsets due to the larger masses and different mass profiles associated with galaxy clusters. In total, I find 60 counterparts with a reliability greater than 80% for the 98 HerBS sources covered by VIKING. Possibly, not all counterparts could be positively identified, as the analysis showed that 88% of sources have a source within 10 arcseconds when taking chance encounters into account. This is mostly due to ambiguity between several nearby sources, which lowers the reliability of the counterpart identification, but it does allow us to state that a counterpart could be present. A cosmological model suggests that 76% of our sources are gravitationally lensed. This model assumes a certain distribution of halo masses and a lensing magnification based on mass-density profiles. The validity of these models has been shown with the 15 ALMA observations of lensed H-ATLAS sources; they also agree with the SMA observations from Bussmann et al. (2013). The IRAM observations provide me with both line luminosities and line velocity widths. Larger galaxies are expected to be brighter and have larger line velocity widths. The five sources with confirmed redshift (and therefore line luminosity) have a luminosity-to-velocity-width ratio consistent with a magnification of around 10, when compared to unlensed and known lensed sources.

I show that the SDSS is not deep enough to observe all the foreground galaxies, while the VIKING observations agree with the results from the simulation, with 60 sources actually cross-compared and 88% of sources having a source nearby when accounting for random chance.
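The photometric-redshift selection described above (fitting a spectral template to the 250, 350 and 500 μm SPIRE fluxes) can be sketched with a single-temperature modified blackbody. The template parameters here (T = 35 K, β = 1.8) and the redshift grid are illustrative assumptions; the thesis fits a two-temperature template, and the well-known temperature-redshift degeneracy is ignored:

```python
from math import exp

H_OVER_K = 4.799e-11  # h / k_B in K*s, so h*nu / (k_B*T) = H_OVER_K * nu / T

def greybody(nu, T=35.0, beta=1.8):
    """Modified blackbody ('greybody'): nu^beta times the Planck function,
    in arbitrary flux units."""
    return nu ** (beta + 3) / (exp(H_OVER_K * nu / T) - 1)

def photo_z(obs_nu, obs_flux, z_grid):
    """Photometric redshift by template fitting: for each trial z, evaluate
    the rest-frame template at nu*(1+z), solve the best-fit amplitude in
    closed form, and keep the z minimising the residual sum of squares."""
    def rss(z):
        model = [greybody(nu * (1 + z)) for nu in obs_nu]
        amp = sum(m * f for m, f in zip(model, obs_flux)) / sum(m * m for m in model)
        return sum((f - amp * m) ** 2 for m, f in zip(model, obs_flux))
    return min(z_grid, key=rss)
```

With real photometry, noise and the temperature-redshift degeneracy make zphot far more uncertain than a noiseless round trip through the same template suggests.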
19

Fiorenza, John Kenneth 1977. "Gain compensated sample and hold." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/87311.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 71-72).

by John Kenneth Fiorenza. S.M.
20

Ikoma, Hayato. "Computational microscopy for sample analysis." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/91427.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 41-44).

Computational microscopy is an emerging technology which extends the capabilities of optical microscopy with the help of computation. One notable example is super-resolution fluorescence microscopy, which achieves sub-wavelength resolution. This thesis explores novel applications of computational imaging methods to fluorescence microscopy and oblique illumination microscopy. In fluorescence spectroscopy, we have developed a novel nonlinear matrix unmixing algorithm to separate fluorescence spectra distorted by absorption effects. By extending the method to tensor form, we have also demonstrated the performance of a nonlinear fluorescence tensor unmixing algorithm on spectral fluorescence imaging. In the future, this algorithm may be applied to fluorescence unmixing in deep-tissue imaging. The performance of the two algorithms was examined in simulation and experiments. In another project, we applied switchable multiple oblique illuminations to reflected-light microscopy. While the proposed system is easily implemented compared to existing methods, we demonstrate that the microscope detects the direction of surface roughness whose height is as small as the illumination wavelength.

by Hayato Ikoma. S.M.
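A linear unmixing baseline helps to see what the nonlinear algorithm above generalizes: with known endmember spectra, the mixing coefficients follow from ordinary least squares. This two-component sketch with hypothetical spectra ignores the absorption distortion that motivates the thesis's nonlinear method:

```python
def unmix(spectrum, a, b):
    """Linear spectral unmixing of one measured spectrum into two known
    endmember spectra a and b: least-squares solution of [a b] x ~= spectrum
    via the 2x2 normal equations."""
    aa = sum(x * x for x in a)
    bb = sum(x * x for x in b)
    ab = sum(x * y for x, y in zip(a, b))
    sa = sum(x * y for x, y in zip(a, spectrum))
    sb = sum(x * y for x, y in zip(b, spectrum))
    det = aa * bb - ab * ab
    return ((sa * bb - sb * ab) / det, (sb * aa - sa * ab) / det)
```

When absorption re-shapes the component spectra, as in the thesis, this linear model breaks down and a nonlinear forward model must replace the fixed endmembers.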
21

Liddle, Kathryn. "Curriculum and Sample Lesson Plans." Miami University Honors Theses / OhioLINK, 2003. http://rave.ohiolink.edu/etdc/view?acc_num=muhonors1111150109.

22

Silva, Luciane Sussuchi da [UNESP]. "Espectroscopia de correlação bidimensional generalizada e sample-sample aplicada ao estudo da tripsina pancreática bovina." Universidade Estadual Paulista (UNESP), 2010. http://hdl.handle.net/11449/87544.

Abstract:
Made available in DSpace on 2014-06-11T19:22:55Z (GMT). No. of bitstreams: 0. Previous issue date: 2010-03-19. Bitstream added on 2014-06-13T18:49:58Z: No. of bitstreams: 1 silva_ls_me_sjrp.pdf: 743298 bytes, checksum: 2f464575e4025f3b9eb2a23dd8bfd2a8 (MD5)<br>In this work, bovine pancreatic trypsin is used as the target protein in an application of two-dimensional correlation spectroscopy to the processes of unfolding and refolding. The generalized two-dimensional correlation spectroscopy proposed by Noda (Noda, 1993) has grown in applications, and new 2D correlation techniques have been developed. As in 2D NMR, spectral peaks are spread across a second dimension, simplifying the visualization of complex spectra made up of overlapping bands and improving spectral resolution; analogous analysis techniques were developed for the infrared. The development of software for obtaining the generalized correlation and the sample-sample correlation (Sasic et al., 2000) was an important step in this study; furthermore, combining these results makes the treatment of the unfolding and refolding problem richer in detail. The two-dimensional sample-sample correlation analysis (2D-SS) indicates an unfolding temperature of 48 °C, corroborated by DSC (49 °C). In addition, pre-transition temperatures at 31°, 37° and 43 °C and post-transition temperatures at 54°, 59° and 69 °C were revealed by 2D-SS. The profile of the thermogram obtained by DSC indicates that the unfolding process is a multiple-state one, which supports the finding of pre- and post-transition temperatures. The generalized analysis reveals the secondary structures and the sequential events involved in each of these transitions. From the van't Hoff equation an unfolding enthalpy of 28.1 kcal/mol is obtained, while the calorimetric analysis gives an apparent enthalpy of 59.3 kcal/mol.
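The sample–sample correlation analysis summarized above can be sketched numerically. The following is a minimal illustration in the spirit of the synchronous sample–sample correlation of Sasic et al. (2000) — not the author's software; the function name, array layout, and normalization are this sketch's own assumptions.

```python
import numpy as np

def sample_sample_sync(spectra):
    """Synchronous sample-sample 2D correlation matrix (illustrative sketch).

    spectra: (n_samples, n_wavenumbers) array with one spectrum per
    perturbation step (e.g. temperature).  After subtracting the mean
    spectrum, the matrix X @ X.T / (m - 1) correlates samples with each
    other over the wavenumber axis; strong diagonal features flag
    perturbation values (temperatures) where the spectra change most,
    e.g. an unfolding transition.
    """
    X = np.asarray(spectra, dtype=float)
    X = X - X.mean(axis=0)   # dynamic spectra: deviation from the mean spectrum
    m = X.shape[1]           # number of wavenumber points
    return X @ X.T / (m - 1)
```

Feeding the function a temperature series of IR spectra yields a square samples-by-samples map whose structure separates pre-transition, transition, and post-transition regimes, which is the kind of reading the 2D-SS analysis above applies to the trypsin data.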
APA, Harvard, Vancouver, ISO, and other styles
23

Silva, Luciane Sussuchi da. "Espectroscopia de correlação bidimensional generalizada e sample-sample aplicada ao estudo da tripsina pancreática bovina /." São José do Rio Preto : [s.n.], 2010. http://hdl.handle.net/11449/87544.

Full text
Abstract:
Advisor: Marinômio Lopes Cornélio<br>Committee: Marcelo Andrés Fossey<br>Committee: Luiz Alberto Colnago<br>Abstract: In this work, bovine pancreatic trypsin is used as the target protein in an application of two-dimensional correlation spectroscopy to the processes of unfolding and refolding. The generalized two-dimensional correlation spectroscopy proposed by Noda (Noda, 1993) has grown in applications, and new 2D correlation techniques have been developed. As in 2D NMR, spectral peaks are spread across a second dimension, simplifying the visualization of complex spectra made up of overlapping bands and improving spectral resolution; analogous analysis techniques were developed for the infrared. The development of software for obtaining the generalized correlation and the sample-sample correlation (Sasic et al., 2000) was an important step in this study; furthermore, combining these results makes the treatment of the unfolding and refolding problem richer in detail. The two-dimensional sample-sample correlation analysis (2D-SS) indicates an unfolding temperature of 48 °C, corroborated by DSC (49 °C). In addition, pre-transition temperatures at 31°, 37° and 43 °C and post-transition temperatures at 54°, 59° and 69 °C were revealed by 2D-SS. The profile of the thermogram obtained by DSC indicates that the unfolding process is a multiple-state one, which supports the finding of pre- and post-transition temperatures. The generalized analysis reveals the secondary structures and the sequential events involved in each of these transitions. From the van't Hoff equation an unfolding enthalpy of 28.1 kcal/mol is obtained, while the calorimetric analysis gives an apparent enthalpy of 59.3 kcal/mol.<br>Master's
APA, Harvard, Vancouver, ISO, and other styles
24

Chlipala, M. Linda. "Organized Semantic Fluency and Executive Functioning in an Adult Clinical Sample and a Community Sample." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc30445/.

Full text
Abstract:
The study investigated an organized semantic fluency task (the Controlled Animal Fluency Task; CAFT) as a measure of executive functioning (EF) in adults, and its relationship with instrumental activities of daily living (IADL). Participants (N = 266) consisted of a clinical sample (n = 142), drawn from neuropsychological assessment data collected at an outpatient psychological center, and a community sample (n = 124). The clinical sample was a heterogeneous mixed neurological group including a variety of health conditions and comorbid anxiety and depression. The CAFT Animals by Size condition demonstrated a significant positive correlation with Category Fluency (r = .71, n = 142, p < .001), Animal Fluency (r = .70, n = 142, p < .001), and other established neuropsychological measures. It also demonstrated a significant moderate negative correlation with IADL for the sample as a whole (r = -.46, n = 248, p < .001) and for the clinical sample (r = -.38, n = 129, p < .001), but not for the community sample. In a hierarchical regression analysis, CAFT Animals by Size explained additional variance in IADL (ΔR² = .15). In a hierarchical regression analysis predicting IADL with the control variables entered first, followed by Category Fluency, with CAFT Animals by Size entered last, CAFT Animals by Size did not make a significant additional contribution. A stepwise forward regression indicated that Category Fluency, education, and Category Switching are better predictors of IADL than CAFT Animals by Size. Normative data for the CAFT were calculated separately by age group and education level. Simple logistic regression indicated CAFT Animals by Size was a significant predictor of clinical or community group membership. A second logistic regression analysis indicated that the CAFT Animals by Size condition improved the prediction of membership in the clinical versus the community group compared to the MMSE alone. Applications of the CAFT are discussed.
APA, Harvard, Vancouver, ISO, and other styles
25

Baker, Heather Victoria. "Safety behaviours in generalized anxiety disorder : a clinical adult sample and a community youth sample." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/52044.

Full text
Abstract:
Anxiety disorders are the most common mental health problem, affect individuals across the lifespan, and cause significant impairment and distress in a variety of life domains. Safety behaviour use has been identified as contributing to the maintenance of anxiety. The reduction of safety behaviours is a component of several adult-focused Cognitive Behavioural Therapies for anxiety. Safety behaviour use is discussed in the literature specific to individual anxiety disorders. Currently, there are few psychometrically sound measures of safety behaviours available to researchers and clinicians. The few available safety behaviour measures are associated with Social Phobia (SoP) and Panic Disorder. Few studies have examined safety behaviours associated with Generalized Anxiety Disorder (GAD). This study is composed of two separate studies: Study 1 evaluated the psychometric properties of a measure of GAD-associated safety behaviours, the Generalized Safety Behaviour Scale (GSBS), in an adult sample diagnosed with Generalized Anxiety Disorder (GAD; n = 36) compared with adults with Social Phobia (SoP; n = 34) and with non-anxious controls (n = 38). The GSBS demonstrated strong internal consistency and displayed convergent validity with measures of worry and intolerance of uncertainty. Two underlying factors were identified. Construct validity of the GSBS was further assessed through one-way ANOVAs revealing that participants with GAD engaged in more frequent GAD-associated safety behaviour use than those with SoP or no anxiety. Study 2 contributed to further psychometric investigation of the GSBS and explored safety behaviour use by youth in a community sample (N = 175). The GSBS demonstrated strong internal consistency, and good convergent validity. Two underlying factors were identified. Linear regression analysis revealed that youth with high levels of anxiety engaged in more frequent use of safety behaviours. 
A MANOVA analysis, grouping youth into low/moderate and at-risk/clinical levels of anxiety, revealed that the at-risk/clinical group endorsed more frequent use of safety behaviours. Implications include a discussion of the benefits of using safety behaviours to help inform treatment sessions, the importance of developing psychometrically sound measures of safety behaviours, and the need to examine safety behaviour use in youth.<br>Education, Faculty of<br>Educational and Counselling Psychology, and Special Education (ECPS), Department of<br>Graduate
APA, Harvard, Vancouver, ISO, and other styles
26

CHILAKALA, SUJATHA. "DEVELOPMENT OF LIQUID CHROMATOGRAPHY-MASS SPECTROMETRIC ASSAYS AND SAMPLE PREPARATION METHODS FOR THE BIOLOGICAL SAMPLE ANALYSIS." Cleveland State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=csu1512927043412916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Ho, Stacy Zhanying. "Aerosol sample introduction and mass spectrometry." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/30517.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Franks, Jeff. "Sample introduction into ICP-MS systems." Thesis, University of Hull, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.262437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Jinks, R. C. "Sample size for multivariable prognostic models." Thesis, University College London (University of London), 2012. http://discovery.ucl.ac.uk/1354112/.

Full text
Abstract:
Prognosis is one of the central principles of medical practice; useful prognostic models are vital if clinicians wish to predict patient outcomes with any success. However, prognostic studies are often performed retrospectively, which can result in poorly validated models that do not become valuable clinical tools. One obstacle to planning prospective studies is the lack of sample size calculations for developing or validating multivariable models. The often used 5 or 10 events per variable (EPV) rule (Peduzzi and Concato, 1995) can result in small sample sizes which may lead to overfitting and optimism. This thesis investigates the issue of sample size in prognostic modelling, and develops calculations and recommendations which may improve prognostic study design. In order to develop multivariable prediction models, their prognostic value must be measurable and comparable. This thesis focuses on time-to-event data analysed with the Cox proportional hazards model, for which there are many proposed measures of prognostic ability. A measure of discrimination, the D statistic (Royston and Sauerbrei, 2004), is chosen for use in this work, as it has an appealing interpretation and direct relationship with a measure of explained variation. Real datasets are used to investigate how estimates of D vary with number of events. Seeking a better alternative to EPV rules, two sample size calculations are developed and tested for use where a target value of D is estimated: one based on significance testing and one on confidence interval width. The calculations are illustrated using real datasets; in general the sample sizes required are quite large. Finally, the usability of the new calculations is considered. To use the sample size calculations, researchers must estimate a target value of D, but this can be difficult if no previous study is available. 
To aid this, published D values from prognostic studies are collated into a ‘library’, which could be used to obtain plausible values of D to use in the calculations. To expand the library further an empirical conversion is developed to transform values of the more widely-used C-index (Harrell et al., 1984) to D.
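The thesis develops two sample-size calculations that target a chosen value of D. As a generic illustration of the confidence-interval-width idea — not the thesis's actual D-statistic formulas — a normal-approximation rule can be sketched; the function name and the sqrt(n) standard-error scaling are this sketch's own assumptions.

```python
from math import ceil
from statistics import NormalDist

def n_for_ci_halfwidth(sd_one_obs, half_width, conf=0.95):
    """Smallest n whose two-sided `conf` interval has the requested half-width.

    Assumes the estimator's standard error shrinks as sd_one_obs / sqrt(n)
    (a normal approximation).  Illustrative stand-in only: the thesis's
    calculations are built around the discrimination measure D, whose
    standard error does not follow this simple form.
    """
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # e.g. 1.96 for conf = 0.95
    return ceil((z * sd_one_obs / half_width) ** 2)
```

For instance, a per-observation SD of 1.0 and a desired 95% half-width of 0.1 gives the familiar requirement of a few hundred observations; halving the target half-width roughly quadruples n, which is why precise estimation of a target D tends to demand the "quite large" sample sizes noted above.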
APA, Harvard, Vancouver, ISO, and other styles
30

Denne, Jonathan S. "Sequential procedures for sample size estimation." Thesis, University of Bath, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.320460.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Barnabas, Ian Joseph. "Sample preparation in environmental organic analysis." Thesis, Northumbria University, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Inglis, William. "Investigating probe-sample interactions in NSOM." Thesis, University of Nottingham, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.288999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Ness, Kevin Dean. "Microfluidic technologies for biological sample preparation /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Strathmann, Stefan. "Sample conditioning for multi-sensor systems." [S.l. : s.n.], 2001. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB8988943.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Gross, Lara M. "Wordperfect 6.1 for Windows work sample /." Online version, 1998. http://www.uwstout.edu/lib/thesis/1998/1998grossl.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Raez, de Ramírez Matilde. "Rorschach contents in a Peruvian sample." Pontificia Universidad Católica del Perú, 2003. http://repositorio.pucp.edu.pe/index/handle/123456789/101543.

Full text
Abstract:
This study focused on the Rorschach content area in 237 Lima inhabitants. The contents are an indicator that helps in understanding basic characteristics of personality (self-perception, interpersonal relations, cognitive mediation and ideation). This descriptive study used an accidental-probabilistic sample. The variables were age, gender, and schooling. Descriptive statistics were used to analyze contents and demographic variables, and non-parametric statistics (Kruskal-Wallis) to compare data. The results stress the absence of religious contents (Rl) and the importance of anatomic contents (An) across the variables. The gender variable shows differences: men are interested in culture and show achievement motivation, while women are interested in the home. With regard to schooling, the group with complete secondary or higher education shows signs of solidarity, interest in socialization and a high cognitive level.
APA, Harvard, Vancouver, ISO, and other styles
37

Lehman, Gloria L. "Adolescents' sexual attitudes: a Mennonite sample." Thesis, Virginia Tech, 1987. http://hdl.handle.net/10919/45773.

Full text
Abstract:
One hundred fifty-six adolescent respondents from the Virginia Mennonite Conference were surveyed regarding the perceived influences on their attitudes toward sexuality and the Mennonite Church's position on various sexual issues. The adolescents were more sure of their own beliefs about sexuality than they were about the church's position. The church was not perceived as a major source of influence on their attitudes when compared to friends, the media, and the family. A comparison of early and late adolescents did not reveal any significant difference in the amount of perceived influence of the church. The gender of the respondent was not found to differentiate significantly on any of the variables under investigation. The type of school the adolescent attended--either public or Mennonite--was related to a difference in the response to beliefs about premarital sex and pregnancy outside of marriage. Students at public schools held more accepting views on these issues.<br>Master of Science
APA, Harvard, Vancouver, ISO, and other styles
38

Shen, Zuchao. "Optimal Sample Allocation in Multilevel Experiments." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1553528863915366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Callan, Peggy Ann. "Developmental sentence scoring sample size comparison." PDXScholar, 1990. https://pdxscholar.library.pdx.edu/open_access_etds/4170.

Full text
Abstract:
In 1971, Lee and Canter developed a systematic tool for assessing children's expressive language: Developmental Sentence Scoring (DSS). It provides normative data against which a child's delayed or disordered language development can be compared with the normal language of children the same age. A specific scoring system is used to analyze children's use of standard English grammatical rules from a tape-recorded sample of their spontaneous speech during conversation with a clinician. The corpus of sentences for the DSS is obtained from a sample of 50 complete, different, consecutive, intelligible, non-echolalic sentences elicited from a child in conversation with an adult using stimulus materials in which the child is interested. There is limited research on the reliability of language samples smaller and larger than 50 utterances for DSS analysis. The purpose of this study was to determine if there is a significant difference among the scores obtained from language samples of 25, 50, and 75 utterances when using the DSS procedure for children aged 6.0 to 6.6 years. Twelve children, selected on the basis of chronological age, normal receptive vocabulary skills, normal hearing, and a monolingual background, were chosen as subjects.
APA, Harvard, Vancouver, ISO, and other styles
40

Tarik, Hamad Maryam. "Development of a Profile Sample Cutter." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281751.

Full text
Abstract:
The pulp and paper industry is a key industry globally, producing 600 million tonnes of pulp and paper worldwide with a total revenue of 400 billion dollars. Because of the high quality requirements on paper, it is important to use instruments that verify that the produced paper fulfils the promised requirements. To control the quality, a sample strip needs to be taken along the full length of each paper reel. The purpose of this master's thesis project was consequently to develop a unit to be used when cutting out these samples. The project started with a pre-study on existing sample cutters to define all integrated subsystems and their functionalities, advantages and drawbacks. The essential subsystems were found to be: (1) paper reel cutting, (2) path movement, (3) motion generation and (4) mechanical transmission. The advantages and drawbacks were identified by interviewing people who have used the cutters, or otherwise encountered them, for opinions, experience and knowledge. Sample winding is an additional subsystem found in a few of the existing cutters but not studied further. After defining where the problems and development potentials lay, a range of concepts was generated for the subsystems. These concepts were presented to a defined target group to ensure a unit that creates customer value. By taking their views and ideas into account, further concept development was carried out. After a few iterations, one concept was chosen for each subsystem in an evaluation with domain experts. A detailed study and design were then made, incorporating all subsystems into one unit. The solution: (1) has two rotating circular blades pushed against sharp guide rails, (2) is hand-held with two pairs of wheels on the cutter head and a digital inclinometer, (3) has a manually generated motion and (4) uses a synchronous belt drive that transfers the manually generated motion to the rotating blades. In addition, complete 2D and 3D drawings, along with a bill of materials, were delivered for the future manufacturing of the unit. Since this thesis presents no solution for sample winding, the next step is to develop a collector that is easily handled and able to share the manually generated power with the rotating blades. Further, a design that allows modularization of the unit should be developed.
APA, Harvard, Vancouver, ISO, and other styles
41

Wilson, David S. "Correlated Sample Synopsis on Big Data." Youngstown State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1544264480082086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Geissel, Dorota I. "Cognitive functioning within an incarcerated sample." Thesis, University of Ottawa (Canada), 1985. http://hdl.handle.net/10393/4653.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Raterink, Lisa A. "Paleoenvironment and Lateral Extent of an Exposed Carbonate Build-up: Horry County, South Carolina." Wright State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=wright1220654468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Rybak, Michael E. "Development and application of thermal vaporization sample introduction techniques with demountable sample supports for inductively coupled plasma spectrometry." Thesis, McGill University, 2000. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=37829.

Full text
Abstract:
Novel approaches to thermal vaporization sample introduction for inductively coupled plasma (ICP) spectrometry using techniques with demountable sample supports are presented and compared. Developments and applications are presented for two sample introduction arrangements that use interchangeable sample probes in contact-free heating environments: direct sample insertion (DSI), and induction heating-electrothermal vaporization (IH-ETV). For the well-established technique of DSI, studies focused on the development and application of a pyrolytically coated graphite sample support, a feature common to conventional ETV systems. A process for coating sample probes in the ICP discharge was developed, and improvements were seen in both the reproducibility of the volatilization event and the appearance of the transient emission signals with ICP-optical emission spectrometry (OES) when a coated probe was used over an uncoated one. The pyrolytically coated sample probe was successfully used for the direct determination of several metals in wood pulps by ICP-OES, and was found not only to improve the appearance of the temporal signals, but also demonstrated a heightened resistance to chemical attack. For the prototypical IH-ETV system, a general study was first conducted to establish performance attributes and benchmarks such as heating characteristics, transport efficiency, and ICP-OES detection limits using several mixed carrier gases. The IH-ETV arrangement was found to be capable of rapid heating rates and precise, reproducible temperature control suitable for a thermal vaporization sample introduction technique. Of all the gas mixtures studied, the incorporation of 15% (v/v) SF6 into the Ar carrier flow resulted in the best overall conditions for the vaporization, transport and detection of analytes by ICP-OES. These conditions were successfully used for the determination of various metals in soil and sediment samples by ICP-OES using IH-ETV sample introduction. Addit
APA, Harvard, Vancouver, ISO, and other styles
45

Skoglund, Christina. "Monolithic packed 96-Tip robotic device for high troughput sample preparation and for handling of small sample volumes." Thesis, Karlstad University, Karlstad University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-2216.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Rothacher, Fritz Markus. "Sample-rate conversion : algorithms and VLSI implementation /." [Konstanz] : Hartung-Gorre, 1995. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10980.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Cámara, Hagen Luis Tomás. "A consensus based Bayesian sample size criterion." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ64329.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Fig, Matthew Kenneth. "Resonant Ultrasound Spectroscopy in Complex Sample Geometry." Thesis, Montana State University, 2005. http://etd.lib.montana.edu/etd/2005/fig/FigM1205.pdf.

Full text
Abstract:
Resonant Ultrasound Spectroscopy (RUS) is the study of the mechanical resonances, or normal modes, of elastic bodies to infer material properties such as the elasticity matrix. This powerful technique is based on two physical facts, the first of which is that the resonant response of an elastic object depends on several parameters intrinsic to the object, such as the object's shape, density, elastic constants, and crystallographic orientation. The second is that using these parameters, the resonant spectrum of an object can be calculated. This method has widely been applied to rectangular parallelepipeds (RPPDs) because the use of such simple geometry frees an investigator interested only in acquiring the elastic constants of a particular material from the hindrance of dealing with the additional computational difficulty imposed by more complex sample geometry. In addition to the use of RPPDs, some work has been done with other objects of high symmetry such as cylinders and spheres. The goal of this research was to explore the extension of RUS techniques to objects exhibiting more complex shape. Toward this end, a computational method was developed for handling the addition of complex geometry. This computational scheme was then verified experimentally through the examination of several objects exhibiting complex shapes.
APA, Harvard, Vancouver, ISO, and other styles
49

Ahn, Jeongyoun Marron James Stephen. "High dimension, low sample size data analysis." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,375.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2006.<br>Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Statistics and Operations Research." Discipline: Statistics and Operations Research; Department/School: Statistics and Operations Research.
APA, Harvard, Vancouver, ISO, and other styles
50

蘇政湟. "Production of Vibrating Sample Magnetometer and measurement of sample." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/62994535632651155786.

Full text
APA, Harvard, Vancouver, ISO, and other styles