
Dissertations / Theses on the topic 'Bean method'



Consult the top 50 dissertations / theses for your research on the topic 'Bean method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Vashro, Taylor Nadine. "The effect of mung bean on improving dietary diversity in women and children in Senegal." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/86361.

Full text
Abstract:
Since 2015, a U.S. Agency for International Development and Virginia Tech Education and Research in Agriculture collaboration has introduced and tested mung bean as a potential crop to alleviate malnutrition and food insecurity in Senegal. This MS thesis describes a study conducted to assess the impact of mung bean on the dietary diversity of Senegalese women and children in the Kaolack, Matam, and Bakel localities of Senegal. A mixed-methods research approach included individual surveys to determine dietary diversity scores (DDS) and focus groups to assess the perceived impacts of mung bean. The dietary diversity survey was conducted with 194 participants, including adult women aged 15 to 70 years (n=109) and children aged 0-10 years (n=85). Half (52%) of the population were mung bean consumers. The dietary diversity surveys revealed an average DDS of 5.73 on a scale of one to 10, with 5.83 and 5.62 for the mung bean and non-mung bean consuming groups, respectively. There was a statistically significant difference in DDS between mung bean consuming women and both mung bean and non-mung bean children, and between mung bean and non-mung bean consumers in Bakel; however, there was no significant difference in DDS between the overall mung bean and non-mung bean groups. Focus groups (n=11) with mung bean consuming women identified perceived agricultural, health, and financial benefits associated with mung bean consumption. These results can increase our understanding of how mung bean may influence policy-relevant issues for the Senegalese population, including agricultural, health, and financial outcomes that are not reflected in dietary diversity surveys.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
2

Harrison, Leigh Ann. "Characterization, development of a field inoculation method, and fungicide sensitivity screening of the Pythium blight pathogen of snap bean (Phaseolus vulgaris L.)." Diss., Virginia Tech, 2011. http://hdl.handle.net/10919/77324.

Abstract:
New Jersey, Georgia, and the Eastern Shore of Virginia (ESV) are important snap bean (Phaseolus vulgaris L.) growing regions, but profitability is threatened by Pythium blight. Causal agents of Pythium blight on snap bean were identified using morphological characterization and sequence analysis of the rDNA internal transcribed spacer (ITS) regions of 100 isolates. Most isolates were Pythium aphanidermatum (Edson) Fitzp. (53%); the rest included Pythium deliense Meurs (31%; all from Georgia), Pythium ultimum Trow (12%), Pythium myriotylum Drechsler (2%), Pythium catenulatum Matthews (1%), and an unknown Pythium sp. (1%). To our knowledge, this is the first report of P. deliense in Georgia and on common bean and squash (Cucurbita pepo L.), as well as the first report of P. catenulatum on lima bean (Phaseolus lunatus L.) and in New Jersey. Fungicide labeling and cultivar selection for Pythium blight management are hindered by the difficulty of conducting successful trials, because the disease occurs sporadically and in clusters in the field. Three P. aphanidermatum-infested inoculum substrates were evaluated at three concentrations. The vermiculite/V8 juice (5:3 weight to volume) inoculum (10,000 ppg/0.3 m) consistently caused at least 50% disease in 3 field trials. Sensitivity of the Pythium blight pathogens to five fungicides was determined in vitro. Twenty-two Pythium isolates representing P. aphanidermatum, P. deliense, P. ultimum, and P. myriotylum were inoculated onto media amended with each active ingredient at 0 and 100 μg/ml, at the concentration equivalent to the field labeled rate if applied on succulent beans at 187 L/ha, and at the equivalent if applied at 374 L/ha. All isolates were completely sensitive (100% growth reduction, or GR) to all active ingredients at the labeled rates, except azoxystrobin. At 100 μg/ml azoxystrobin, one P. deliense isolate had 8.9% GR. All isolates had 100% GR to copper hydroxide at 100 μg/ml, and the lowest GR on mefenoxam-amended medium was 91.9%. At 100 μg/ml cyazofamid, all P. deliense isolates were completely sensitive, while variation was observed among P. aphanidermatum isolates. At 100 μg/ml potassium phosphite, significant GR similarities were recorded within isolates of the same species, and less than 50% GR was observed in all P. deliense isolates.
Ph. D.
3

Motamedian, Hamid Reza. "Robust Formulations for Beam-to-Beam Contact." Licentiate thesis, KTH, Hållfasthetslära (Avd.), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-183980.

Abstract:
Contact between beam elements is a specific category of contact problems which was introduced by Wriggers and Zavarise in 1997 for normal contact and later extended by Zavarise and Wriggers to include tangential and frictional contact. In these works, beam elements are assumed to have rigid circular cross-sections, and each pair of elements cannot have more than one contact point. The method proposed in the early papers is based on introducing a gap function and calculating the incremental change of that gap function and its variation in terms of the incremental change of the nodal displacement vector and its variation. Due to the complexity of the derivations, especially for tangential contact, it is assumed that the beam elements have linear shape functions. Furthermore, moments at the contact point are ignored. In the work presented in this licentiate thesis, we mostly address the questions of simplicity and robustness of implementations, which become critical once the number of contacts is large. In the first paper, we propose a robust formulation for normal and tangential contact of beams in 3D space to be used with a penalty stiffness method. This formulation is based on the assumption that the contact normal, tangents, and location are constant (independent of displacements) in each iteration, while they are updated between iterations. On the other hand, we place no restrictions on the shape functions of the underlying beam elements. This leads to a mathematically simpler derivation and simpler equations, as the linearization of the variation of the gap function vanishes. The results from this formulation are verified and benchmarked through comparison with the results from previous algorithms. The proposed method shows better convergence rates, allowing larger load steps or broader ranges of penalty stiffness. The performance and robustness of the formulation are demonstrated through numerical examples.
In the second paper, we suggest two alternative methods to handle in-plane rotational contact between beam elements. The first method follows the approach of linearizing the variation of the gap function, originally proposed by Wriggers and Zavarise. To be able to carry out the calculations, we assume a linear shape function for the underlying beam elements. This method can be used with both penalty stiffness and Lagrange multiplier methods. In the second method, we follow the same approach as in our first paper, that is, assuming that the contact normal is independent of nodal displacements at each iteration, while it is updated between iterations. This method yields simpler equations and has no limitations on the shape functions used for the beam elements; however, it is limited to penalty stiffness methods. Both methods show comparable convergence rates, performance, and stability, which is demonstrated through numerical examples.
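The penalty idea used in this line of work can be illustrated with a minimal sketch: for two beams with rigid circular cross-sections, the gap function is the distance between the beam axes minus the sum of the radii, and when the gap is negative a force proportional to the penetration acts along a contact normal held fixed within the iteration. Function names and the segment geometry below are hypothetical illustrations, not the thesis formulation.

```python
import numpy as np

def penalty_contact_force(p1, d1, p2, d2, r1, r2, k):
    """Penalty normal-contact force between two straight beams with
    rigid circular cross-sections (a didactic sketch; assumes the two
    axes are not parallel).

    p1, p2 : points on each beam axis; d1, d2 : unit axis directions.
    r1, r2 : cross-section radii; k : penalty stiffness.
    Returns the force acting on beam 1 (zero vector if no contact).
    """
    # Common normal of the two (non-parallel) axes
    n = np.cross(d1, d2)
    n = n / np.linalg.norm(n)
    signed = np.dot(p2 - p1, n)      # signed axis-to-axis distance
    g = abs(signed) - (r1 + r2)      # gap function
    if g >= 0.0:
        return np.zeros(3)           # surfaces separated: no force
    normal = -np.sign(signed) * n    # direction pushing beam 1 away from beam 2
    return k * (-g) * normal         # force magnitude = k * penetration
```

Within a Newton loop one would keep `n` frozen during an iteration and recompute it between iterations, which is exactly the simplification that makes the linearization of the gap-function variation vanish.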
4

Quiroga, Gonzáles Cruz Sonia, Juan Límaco, and Rioco K. Barreto. "The penalty method and beam evolution equations." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/96079.

5

Menard, Kenneth A. "Gaussian beam resonator formalism using the yy method." Master's thesis, University of Central Florida, 1995. http://digital.library.ucf.edu/cdm/ref/collection/RTD/id/21214.

Abstract:
University of Central Florida College of Engineering Thesis
A simple and powerful new paraxial ray formalism is shown to provide an alternate method for designing Gaussian beam resonators. The theory utilizes the Delano yybar diagram approach and is an extension of the recent work by Shack and Kessler for laser systems. The method is shown to be complementary to the conventional ABCD method and is founded upon J. A. Arnaud's pioneering ideas for complex rays. The thesis develops an analytic formulation of a ray-based complex wavefront curvature and yields a clearly generalized description of spherical wave propagation, for which Gaussian beams are considered a special case. The resultant theory unifies the complex q parameter and the ABCD law with the yybar complex ray components, and also suggests that the ABCD law for the complex q parameter has its origin in the yybar complex ray. New fundamental equations for designing stable multi-element resonators using the yybar coordinates are derived, and it is shown that the yybar diagram provides a novel method for defining automatically stable resonators. Various applications of the yybar design technique are also discussed, including the setting of convenient design constraints, the description of M² beams, the generation of phase diagrams, and resonator synthesis and analysis.
M.S., Electrical Engineering
vii, 49 leaves, bound : ill. ; 28 cm.
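For context, the conventional ABCD method that the yybar formalism complements propagates the complex beam parameter q through an optical system as q' = (Aq + B)/(Cq + D). A minimal sketch of that law (function names are illustrative):

```python
import numpy as np

def q_from_waist(w0, wavelength):
    """Complex beam parameter at a waist: q = i*z_R (Rayleigh range z_R)."""
    return 1j * np.pi * w0**2 / wavelength

def propagate_q(q, M):
    """ABCD law: q' = (A*q + B) / (C*q + D) for a 2x2 ray matrix M."""
    (A, B), (C, D) = M
    return (A * q + B) / (C * q + D)

def spot_size(q, wavelength):
    """1/e^2 intensity radius w, using Im(1/q) = -wavelength/(pi*w^2)."""
    return np.sqrt(-wavelength / (np.pi * (1.0 / q).imag))

# Example: free-space propagation by one Rayleigh range
# (matrix [[1, d], [0, 1]]) widens the beam by a factor sqrt(2).
lam, w0 = 1.064e-6, 1e-3
zR = np.pi * w0**2 / lam
q1 = q_from_waist(w0, lam)
q2 = propagate_q(q1, [[1.0, zR], [0.0, 1.0]])
```

A thin lens would enter the cascade as the matrix [[1, 0], [-1/f, 1]]; a resonator is stable when the round-trip map has a self-consistent q with positive imaginary part.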
6

Mi, Yongcui. "Novel beam shaping and computer vision methods for laser beam welding." Licentiate thesis, Högskolan Väst, Avdelningen för produktionssystem (PS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hv:diva-16970.

Abstract:
Laser beam welding has been widely applied in different industrial sectors due to its unique advantages. However, there are still challenges, such as beam positioning in T-joint welding and gap bridging in butt joint welding, especially in the case of varying gap width along a joint. It is expected that enabling more advanced control of a welding system, and obtaining more in-depth process knowledge, could help to solve these issues. The aim of this work is to address such welding issues by a laser beam shaping technology using a novel deformable mirror, together with computer vision methods, and also to increase knowledge about the benefits and limitations of this approach. Beam shaping in this work was realized by a novel deformable mirror system integrated into an industrial processing optics. Together with a wavefront sensor, a controlled adaptive beam shaping system was formed with a response time of 10 ms. The processes were monitored by a coaxial camera with selected filters and passive or active illumination. Conduction mode autogenous bead-on-plate welding and butt joint welding experiments were used to understand the effect of beam shaping on the melt pool geometry. Circular Gaussian shapes, and elliptical Gaussian shapes elongated transverse to and along the welding direction, were studied. In-process melt pool images and cross-section micrographs of the weld seams/beads were analyzed. The results showed that the melt pool geometry can be significantly modified by beam shaping using the deformable mirror. T-joint welding with different beam offset deviations relative to the center of the joint line was conducted to study the potential of using machine learning to track the process state. The results showed that machine learning can reach sufficient detection and estimation performance, which could also be used for on-line control. In addition, in-process and multidimensional data were accurately acquired using computer vision methods. These data reveal weaknesses of the current thermo-fluid simulation model, which in turn can help to better understand and control laser beam welding. The results obtained in this work show great potential in using the proposed methods to solve relevant challenges in laser beam welding.
Two submitted articles belonging to this licentiate thesis are not shown here.
7

Matushewski, Bradley. "Critical Investigation of the Pulse Contour Method for Obtaining Beat-By-Beat Cardiac Output." Thesis, University of Waterloo, 2001. http://hdl.handle.net/10012/713.

Abstract:
The purpose of this study was to explore the efficacy of two existing pulse contour analysis (PCA) models for estimating cardiac stroke volume from the arterial pressure waveform during kicking ergometer exercise and head-up tilt manoeuvres. Secondly, one of the existing models was modified in an attempt to enhance its performance. In part I, seven healthy young adults repeated two submaximal exercise sessions on a kicking ergometer, each with three different sets of steady-state cardiac output comparisons (pulsed Doppler vs. pulse contour). Across all exercise trials, the regression result was PCA = 1.23 × Doppler − 1.38 with r² = 0.51. In part II, eight young and eight older healthy male subjects participated in a head-up tilt experiment. Cardiac output comparisons were again performed during the supine and tilt conditions using pulsed Doppler and pulse contour cardiac output. Regression results revealed that PCA performed best during supine conditions and preferentially on the older subjects. In all instances, impedance-calibrated pulse contour analysis will provide reasonable beat-by-beat cardiac output only within very narrow confines, and will result in a progressively more significant bias as cardiovascular dynamics change. In addition, it appears that heart rate variability negatively influences beat-by-beat pulse contour cardiac output results, further limiting the application of existing models.
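As background, a basic pulse-contour rule of the kind evaluated here estimates beat stroke volume from the area under the systolic portion of one arterial pressure beat divided by a calibrated aortic impedance. The sketch below is a generic illustration under that assumption; the function name, the synthetic beat, and the impedance value are all hypothetical, not the thesis models.

```python
import numpy as np

def pulse_contour_stroke_volume(pressure, t, notch_idx, z_ao):
    """Stroke volume for one beat: integrate pressure above the
    diastolic baseline from beat onset to the dicrotic notch, then
    divide by a calibrated aortic impedance z_ao (a generic sketch)."""
    p_sys = pressure[:notch_idx + 1] - pressure.min()   # excess systolic pressure
    dt = np.diff(t[:notch_idx + 1])
    area = np.sum(0.5 * (p_sys[1:] + p_sys[:-1]) * dt)  # trapezoidal integral
    return area / z_ao

# Synthetic half-sine systole: 80 mmHg baseline, 40 mmHg pulse, 0.3 s duration
t = np.linspace(0.0, 0.3, 301)
pressure = 80.0 + 40.0 * np.sin(np.pi * t / 0.3)
sv = pulse_contour_stroke_volume(pressure, t, notch_idx=300, z_ao=0.1)
```

The calibration of `z_ao` against an independent measurement (here, pulsed Doppler) is exactly where such models become fragile once cardiovascular dynamics change.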
8

Huq, Syed Ejazul. "Thin film deposition by the ionized cluster beam method." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.304288.

9

Liu, Deyun. "Advances in beam propagation method for facet reflectivity analysis." Thesis, University of Nottingham, 2013. http://eprints.nottingham.ac.uk/13491/.

Abstract:
Waveguide discontinuities are frequently encountered in modern photonic structures. It is important to characterize the reflection and transmission that occur at these discontinuities during the design and analysis of such structures. Significant effort has been focused upon the development of accurate modelling tools, and a variety of modelling techniques have been applied to solve this kind of problem. In this work, a Transmission-matrix-based Bidirectional Beam Propagation Method (T-Bi-BPM) is proposed and applied to the uncoated facet and single coating layer reflection problems, including both normal and angled incidence. The T-Bi-BPM method is developed on the basis of an overview of Finite Difference Beam Propagation Method (FD-BPM) schemes frequently used in photonic modelling, including paraxial FD-BPM, Imaginary Distance (ID) BPM, Wide Angle (WA) BPM, and existing Bidirectional (Bi) BPM methods. The T-Bi-BPM establishes the connection between the total fields on either side of the coating layer and the incident field at the input of a single-layer-coated structure by a matrix system, on the basis of a transmission matrix equation used in a transmission line approach. The matrix system can be algebraically preconditioned and then solved by sparse matrix multiplications. The attraction of the T-Bi-BPM method is the potential for more rapid evaluation without an iterative approach. The accuracy of the T-Bi-BPM is verified by simulations, and the factors that affect the accuracy are investigated.
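The transmission-matrix idea behind the T-Bi-BPM can be illustrated, in its simplest one-dimensional form, by the textbook characteristic-matrix calculation of a coated facet's power reflectance at normal incidence. This sketch is the classic thin-film method, not the thesis algorithm; the function name is illustrative.

```python
import numpy as np

def facet_reflectance(n0, n1, ns, d, wavelength):
    """Power reflectance of a single coating layer (index n1, thickness d)
    between an incidence medium n0 and a substrate ns, at normal
    incidence, via the 2x2 characteristic (transfer) matrix."""
    delta = 2.0 * np.pi * n1 * d / wavelength        # phase thickness of the layer
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                  [1j * n1 * np.sin(delta), np.cos(delta)]])
    # Amplitude reflection coefficient from the matrix elements
    num = n0 * (M[0, 0] + M[0, 1] * ns) - (M[1, 0] + M[1, 1] * ns)
    den = n0 * (M[0, 0] + M[0, 1] * ns) + (M[1, 0] + M[1, 1] * ns)
    return abs(num / den) ** 2

# Example: a quarter-wave layer with n1 = sqrt(n0*ns) anti-reflects the facet.
n0, ns = 1.0, 3.5                 # air / semiconductor-like substrate
R_bare = facet_reflectance(n0, 1.0, ns, 0.0, 1.0)          # uncoated facet
n_ar = np.sqrt(n0 * ns)
R_ar = facet_reflectance(n0, n_ar, ns, 0.25 / n_ar, 1.0)   # quarter-wave coating
```

Multilayer coatings cascade by multiplying one such matrix per layer, which is the same structural idea the T-Bi-BPM generalizes to full transverse field profiles.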
10

Sager, Benay. "A method for understanding and predicting stereolithography resolution." Thesis, Georgia Institute of Technology, 2003. http://hdl.handle.net/1853/17832.

11

Chen, Yong. "Ultimate Strength Analysis of Stiffened Panels Using a Beam-Column Method." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26000.

Abstract:
An efficient beam-column approach, using an improved step-by-step numerical method, is developed in the current research for studying the ultimate strength of stiffened panels under two load cases: 1) longitudinal compression, and 2) transverse compression. Chapter 2 presents an improved step-by-step numerical integration procedure, based on (Chen and Liu, 1987), to calculate the ultimate strength of a beam-column under axial compression, end moments, lateral loads, and combined loads. A special procedure for three-span beam-columns is also developed, with particular attention to usability for stiffened panels. A software package, ULTBEAM, is developed as an implementation of this method. The comparison of ULTBEAM with the commercial finite element package ABAQUS shows very good agreement. The improved beam-column method is first applied to the ultimate strength analysis of stiffened panels under longitudinal compression. Fine-mesh elasto-plastic finite element ultimate strength analyses are carried out for 107 three-bay stiffened panels, covering a wide range of panel length, plate thickness, and stiffener sizes and proportions. The FE results show that the three-bay simply supported model is sufficiently general to apply to any panel with three or more bays. The FE results are then used to obtain a simple formula that corrects the beam-column result and gives good agreement in panel ultimate strength for all of the 107 panels. The formula is extremely simple, involving only one parameter: the product λΠ_orth². Chapter 4 compares the predictions of the new beam-column formula and the orthotropic-based methods with the FE solutions for all 107 panels. It shows that the orthotropic plate theory cannot model the "crossover" panels adequately, whereas the beam-column method can predict the ultimate strength well for all of the 107 panels, including the "crossover" panels.
The beam-column method is then applied to the ultimate strength analysis of stiffened panels under transverse compression, with or without pressure. The method is based on a further extension of the nonlinear beam-column theory presented in Chapter 2 and its application to a continuous plate strip model to calculate the ultimate strength of subpanels. This method is evaluated by comparing the results with those obtained using ABAQUS for several typical ship panels under various pressures.
Ph. D.
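The second-order effect that any step-by-step beam-column procedure must capture can be shown with a much-reduced sketch: for a simply supported elastic member with a near-sinusoidal deflected shape, iterating the extra P·δ moment converges to the classic amplification δ₀/(1 − P/Pe). This is a didactic illustration of the P-delta iteration only, not the ULTBEAM algorithm; all names are assumptions.

```python
import math

def beam_column_midspan_deflection(EI, L, P, delta0, tol=1e-12, max_iter=1000):
    """Midspan deflection of a simply supported beam-column by fixed-point
    P-delta iteration (assumes a sinusoidal deflected shape, so each cycle
    amplifies the deflection by the factor P/Pe).

    delta0 : first-order (P = 0) midspan deflection from the lateral load.
    """
    Pe = math.pi**2 * EI / L**2          # Euler buckling load
    if P >= Pe:
        raise ValueError("axial load at or above Euler load: no stable solution")
    delta = delta0
    for _ in range(max_iter):
        new = delta0 + (P / Pe) * delta  # add the second-order P*delta contribution
        if abs(new - delta) < tol:
            return new
        delta = new
    return delta
```

A real ultimate-strength procedure replaces the elastic amplification with segment-by-segment integration of an elasto-plastic moment-curvature relation, which is where the "step-by-step" label comes from.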
12

Monteiro, Sérgio Henrique. "Desenvolvimento e validação de metodologia para determinação de resíduos de pesticidas piretróides por HPLC em feijão." Universidade de São Paulo, 2006. http://www.teses.usp.br/teses/disponiveis/46/46133/tde-22062016-160326/.

Abstract:
A rapid liquid chromatographic (LC) method has been developed for the simultaneous determination of 7 pyrethroid insecticides (bifenthrin, cypermethrin, fenpropathrin, fenvalerate, permethrin, lambda-cyhalothrin, and deltamethrin) in beans. Residues are extracted from beans with acetone and partitioned according to the multi-residue method DFG-S19, replacing dichloromethane with ethyl acetate/cyclohexane (1+1), and cleaned up using gel permeation chromatography with a Biobeads SX3 column and ethyl acetate/cyclohexane (1+1) as eluant. LC separation is performed on a LiChrospher 100 RP-18 column with acetonitrile/water (8+2) as the mobile phase. The pesticides are detected at 212 nm. Recoveries of the 7 pyrethroid insecticides from beans fortified at the 0.010, 0.100, and 1.000 mg/kg levels were 71-105%. The particular strength of this method is its quantification limits, which were between 0.004 and 0.011 mg/kg, lower than most of the limits reported for LC methods in the literature. Gas chromatography (GC) with electron capture detection is more sensitive than LC, but the LC method facilitates the identification of the peaks: GC analysis of pyrethroids shows several peaks, while LC shows only one for most pyrethroids. LC analysis is thus a good alternative for the determination of pyrethroid residues in beans. During 2005, a total of 48 bean samples commercialized in Sao Paulo City were analyzed. No pyrethroid residues were detected in the samples.
13

Herman, Tomáš. "Identifikace klopné tuhosti nápravy automobilu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2018. http://www.nusl.cz/ntk/nusl-378016.

Abstract:
This diploma thesis deals with the roll stiffness of a twist-beam axle. It describes experimental and analytical measurement methods, their applications, and a comparison on a particular type of axle. A design of a rig usable for this measurement is also presented.
14

Russom, Aman. "Microfluidic bead-based methods for DNA analysis." Doctoral thesis, KTH, Skolan för elektro- och systemteknik (EES), 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-155.

Abstract:
With the completion of the human genome sequencing project, attention is currently shifting toward understanding how genetic variation, such as single nucleotide polymorphism (SNP), leads to disease. To identify, understand, and control biological mechanisms of living organisms, the enormous amounts of accumulated sequence information must be coupled to faster, cheaper, and more powerful technologies for DNA, RNA, and protein analysis. One approach is the miniaturization of analytical methods through the application of microfluidics, which involves the manipulation of fluids in micrometer-sized channels. Advances in microfluidic chip technology are expected to play a major role in the development of cost-effective and rapid DNA analysis methods. This thesis presents microfluidic approaches for different DNA genotyping assays. The overall goal is to combine the potential of the microfluidic lab-on-a-chip concept with biochemistry to develop and improve current methods for SNP genotyping. Three genotyping assays using miniaturized microfluidic approaches are addressed. The first two assays are based on primer extension by DNA polymerase. A microfluidic device consisting of a flow-through filter chamber for handling beads with nanoliter liquid volumes was used in these studies. The first assay involved an allele-specific extension strategy. The microfluidic approach took advantage of the different reaction kinetics of matched and mismatched configurations at the 3'-ends of a primer/template complex. The second assay consisted of adapting pyrosequencing technology, a bioluminometric DNA sequencing assay based on sequencing-by-synthesis, to a microfluidic flow-through platform. Base-by-base sequencing was performed in a microfluidic device to obtain accurate SNP scoring data on nanoliter volumes. This thesis also presents applications of monolayers of beads immobilized by microcontact printing for chip-based DNA analysis. Single-base incorporation could be detected with pyrosequencing chemistry on these monolayers. The third assay developed is based on a hybridization technology termed Dynamic Allele-Specific Hybridization (DASH). In this approach, monolayered beads containing DNA duplexes were randomly immobilized on the surface of a microheater chip. DNA melting-curve analysis was performed by dynamically heating the chip while simultaneously monitoring the DNA denaturation profile to determine the genotype. Multiplexing based on single-bead analysis was achieved at heating rates more than 20 times faster than conventional DASH provides.
APA, Harvard, Vancouver, ISO, and other styles
15

Lidgate, Simon. "Advanced finite difference beam propagation method: analysis of complex components." Thesis, University of Nottingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.408596.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Wang, Fei. "Vertical beam emittance correction with independent component analysis measurement method." [Bloomington, Ind.] : Indiana University, 2008. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3319892.

Full text
Abstract:
Thesis (Ph.D.)--Indiana University, Dept. of Physics, 2008.<br>Title from PDF t.p. (viewed on May 13, 2009). Source: Dissertation Abstracts International, Volume: 69-08, Section: B, page: 4823. Adviser: Shyh-Yuan Lee.
APA, Harvard, Vancouver, ISO, and other styles
17

Le, Thanh Nam. "Corotational formulation for nonlinear analysis of flexible beam structures." Licentiate thesis, KTH, Bro- och stålbyggnad, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-94880.

Full text
Abstract:
Flexible beam structures are common in civil and mechanical engineering. Many of these structures undergo large displacements and finite rotations, but with small deformations. Their dynamic behavior is usually investigated using finite beam elements. A well-known method to derive such beam elements is the corotational approach. This method has been used extensively in nonlinear static analysis. However, its application in nonlinear dynamics is rather limited. The purpose of this thesis is to investigate the nonlinear dynamic behavior of flexible beam structures using the corotational method. For the 2D case, a new dynamic corotational beam formulation is presented. The idea is to adopt the same corotational kinematic description in the static and dynamic parts. The main novelty is to use cubic interpolations to derive both inertia terms and internal terms in order to capture correctly all inertia effects. This new formulation is compared with two classic formulations using constant Timoshenko and constant lumped mass matrices. This work is presented in the first appended journal paper. For the 3D case, update procedures for finite rotations, which are central issues in the development of nonlinear beam elements in dynamic analysis, are discussed. Three classic and one new formulation of beam elements based on three different parameterizations of the finite rotations are presented. In these formulations, the corotational method is used to develop expressions for the internal forces and the tangent stiffness matrices, while the dynamic terms are formulated in a total Lagrangian context. Many aspects of the four formulations are investigated. First, theoretical derivations as well as practical implementations are given in detail. The similarities and differences between the formulations are pointed out. Second, the numerical accuracy and computational efficiency of these four formulations are compared.
Regarding efficiency, the choice of the predictor at each time step and the possibility to simplify the tangent inertia matrix are carefully investigated. This work is presented in the second appended journal paper. To make this thesis self-contained, two chapters concerning the parametrization of the finite rotations and the derivation of the 3D corotational beam element in statics are added.<br>QC 20120521
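The corotational kinematics described in this abstract can be sketched for a two-node planar beam element: the rigid rotation and translation of the element chord are removed from the global motion, leaving small local deformational degrees of freedom. The function below is an illustrative sketch of that decomposition, not code from the thesis; names and the test case are invented.

```python
import math

def corotational_local_dofs(X1, X2, u1, u2, theta1, theta2):
    """Split the global motion of a 2-node planar beam element into a
    rigid rotation/translation and local (deformational) DOFs.
    X1, X2     : undeformed node coordinates (x, y)
    u1, u2     : global nodal displacements (ux, uy)
    theta1/2   : global nodal rotations
    Returns (u_bar, t1_bar, t2_bar): the local axial elongation and the
    two local nodal rotations measured from the rotated element axis."""
    x1 = (X1[0] + u1[0], X1[1] + u1[1])               # current node positions
    x2 = (X2[0] + u2[0], X2[1] + u2[1])
    L0 = math.hypot(X2[0] - X1[0], X2[1] - X1[1])     # initial length
    Ln = math.hypot(x2[0] - x1[0], x2[1] - x1[1])     # current length
    beta0 = math.atan2(X2[1] - X1[1], X2[0] - X1[0])  # initial axis angle
    beta = math.atan2(x2[1] - x1[1], x2[0] - x1[0])   # current axis angle
    alpha = beta - beta0                              # rigid rotation of the chord
    u_bar = Ln - L0                                   # local elongation
    t1_bar = theta1 - alpha                           # local rotations
    t2_bar = theta2 - alpha
    return u_bar, t1_bar, t2_bar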
APA, Harvard, Vancouver, ISO, and other styles
18

Le, Thanh-Nam. "Nonlinear dynamics of flexible structures using corotational beam elements." Doctoral thesis, KTH, Bro- och stålbyggnad, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-131701.

Full text
Abstract:
The purpose of this thesis is to develop corotational beam elements for the nonlinear dynamic analysis of flexible beam structures. Whereas corotational beam elements in statics are well documented, the derivation of a corotational dynamic formulation is still an issue. In the first journal paper, an efficient dynamic corotational beam formulation is proposed for 2D analysis. The idea is to adopt the same corotational kinematic description in static and dynamic parts. The main novelty is to use cubic interpolations to derive both inertia terms and internal terms in order to capture correctly all inertia effects. This new formulation is compared with two classic formulations using constant Timoshenko and constant lumped mass matrices. In the second journal paper, several choices of parametrization and several time stepping methods are compared. To do so, four dynamic formulations are investigated. The corotational method is used to develop expressions of the internal terms, while the dynamic terms are formulated into a total Lagrangian context. Theoretical derivations as well as practical implementations are given in detail. Their numerical accuracy and computational efficiency are then compared. Moreover, four predictors and various possibilities to simplify the tangent inertia matrix are tested. In the third journal paper, a new consistent beam formulation is developed for 3D analysis. The novelty of the formulation lies in the use of the corotational framework to derive not only the internal force vector and the tangent stiffness matrix but also the inertia force vector and the tangent dynamic matrix. Cubic interpolations are adopted to formulate both inertia and internal local terms. In the derivation of the dynamic terms, an approximation for the local rotations is introduced and a concise expression for the global inertia force vector is obtained.
Four numerical examples are considered to assess the performance of the new formulation against two other ones based on linear interpolations. Finally, in the fourth journal paper, the previous 3D corotational beam element is extended for the nonlinear dynamics of structures with thin-walled cross-section by introducing the warping deformations and the eccentricity of the shear center. This leads to additional terms in the expressions of the inertia force vector and the tangent dynamic matrix. The element has seven degrees of freedom at each node and cubic shape functions are used to interpolate local transversal displacements and axial rotations. The performance of the formulation is assessed through five examples and comparisons with Abaqus 3D-solid analyses.<br>QC 20131017
APA, Harvard, Vancouver, ISO, and other styles
19

梁少江 and Siu-kong Leung. "Analysis of shear/core wall structures using a linear moment beam-type element." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1996. http://hub.hku.hk/bib/B31213352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Zamzow, Bert. "Simulation von Glasfaserspleißen mit der Beam-Propagation-Methode /." Düsseldorf : VDI-Verl, 2001. http://www.gbv.de/dms/bs/toc/32808767x.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Barapatre, Nirav. "Application of Ion Beam Methods in Biomedical Research." Doctoral thesis, Universitätsbibliothek Leipzig, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-126262.

Full text
Abstract:
The methods of analysis with a focused ion beam, commonly termed nuclear microscopy, include quantitative physical processes like PIXE and RBS. The element concentrations in a sample can be quantitatively mapped with a sub-micron spatial resolution and a sub-ppm sensitivity. Its fully quantitative and non-destructive nature makes it particularly suitable for analysing biological samples. The applications in biomedical research are manifold. The iron overload hypothesis in Parkinson's disease is investigated by a differential analysis of human substantia nigra. The trace element content is quantified in neuromelanin, in microglia cells, and in extraneuronal environment. A comparison of six Parkinsonian cases with six control cases revealed no significant elevation in iron level bound to neuromelanin. In fact, a decrease in the Fe/S ratio of Parkinsonian neuromelanin was measured, suggesting a modification in its iron binding properties. Drosophila melanogaster, or the fruit fly, is a widely used model organism in neurobiological experiments. The electrolyte elements are quantified in various organs associated with the olfactory signalling, namely the brain, the antenna and its sensilla hairs, the mouth parts, and the compound eye. The determination of spatially resolved element concentrations is useful in preparing the organ specific Ringer's solution, an artificial lymph that is used in disruptive neurobiological experiments. The role of trace elements in the progression of atherosclerosis is examined in a pilot study. A differential quantification of the element content in an induced murine atherosclerotic lesion reveals elevated S and Ca levels in the artery wall adjacent to the lesion and an increase in iron in the lesion. The 3D quantitative distribution of elements is reconstructed by means of stacking the 2D quantitative maps of consecutive sections of an artery.
The feasibility of generating a quantitative elemental rodent brain atlas by Large Area Mapping is investigated by measuring at high beam currents. A whole coronal section of the rat brain was measured in segments in 14 h. Individual quantitative maps of the segments are pieced together to reconstruct a high-definition element distribution map of the whole section with a subcellular spatial resolution. The use of immunohistochemical staining enhanced with single elements helps in determining the cell-specific element content. Its concurrent use with Large Area Mapping can give cellular element distribution maps.
APA, Harvard, Vancouver, ISO, and other styles
22

Skoglund, Lovisa. "Method development for enrichment of autoantibodies from human plasma." Thesis, KTH, Skolan för kemi, bioteknologi och hälsa (CBH), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-278724.

Full text
Abstract:
Antibodies occur naturally in humans, with the function of protecting the body from pathogens. Occasionally, antibodies towards the body's own proteins are produced. These so-called autoantibodies are present in healthy individuals but are also highly associated with diseases with autoimmune involvement. Research on autoantibodies in healthy individuals as well as in patients is important to gain knowledge and facilitate prognostics, diagnostics and treatment. However, a method for purification of these antibodies has not previously been described. In the present project, an enrichment procedure for circulating autoantibodies found in human plasma is described. Twenty protein fragments previously known to be highly reactive were attached to magnetic microbeads, enabling autoantibodies from eight human plasma sample pools to be captured. The six antigens showing the highest reactivity were chosen for the elution procedure. Using pH alterations and heat treatments, a successful elution and enrichment procedure was developed. Analysis of the eluted autoantibodies established that the enrichment was successful on multiple sample pools. In the scaled-up procedure, autoantibodies could be enriched in all positive antigen-sample combinations. Concentration measurements indicated amounts of up to 0.23 mg antibodies per ml eluate. This implies sufficient concentrations for further applications of the enriched autoantibodies.<br>Antibodies occur naturally in humans, with the purpose of protecting the body from pathogens. In some cases, antibodies that attack the body's own proteins are mistakenly produced. These autoantibodies occur in all people, healthy as well as sick, but they are also strongly associated with autoimmune diseases. Knowledge of autoantibodies in healthy individuals and in patients is limited today, but continued research in the area is expected to facilitate prognostics, diagnostics and treatment in the future. Until now, no method for enrichment of autoantibodies from blood plasma has been described. This project describes an enrichment method for autoantibodies from human blood plasma. Twenty previously known, highly reactive protein fragments were attached to magnetic microbeads. These antigen-coated microbeads were used to capture autoantibodies from eight plasma samples. The six protein fragments with the highest reactivity in these samples were selected for elution experiments. Elution was performed under basic followed by acidic conditions, together with heat treatment. This elution method worked for the enrichment of several autoantibodies from multiple plasma samples. In a scaled-up experiment, autoantibodies could be enriched from all combinations of antigen and plasma sample that were expected to give a signal. The concentration of autoantibodies in the eluates was estimated at up to 0.23 mg/ml. This concentration is sufficient for several common methods in which antibodies are used.
APA, Harvard, Vancouver, ISO, and other styles
23

Elliott, Adrian. "Defining the haemodynamic response to maximal exercise using novel beat-to-beat measurement methods." Thesis, Kingston University, 2013. http://eprints.kingston.ac.uk/27792/.

Full text
Abstract:
Strenuous exercise presents a significant challenge to the cardiovascular system, such that it is widely assumed that the heart largely governs short, high-intensity aerobic exercise performance. Despite considerable investigation of this topic, the haemodynamic responses to maximal exercise are still not well understood, mostly due to insufficient measurement methods unable to quantify the beat-to-beat response of the cardiovascular system during dynamic exercise. In this thesis, two novel approaches (bioreactance and pulse contour analysis calibrated by lithium dilution) for the continuous assessment of exercise haemodynamics in a healthy, trained population were evaluated. In study I, bioreactance was found to considerably underestimate cardiac output (Q) in comparison with contemporaneous measurements with inert gas rebreathing. In studies II and III, we evaluated pulse contour analysis, calibrated by lithium indicator dilution. Our findings indicated that the timing of calibration was central to the accuracy of measurements made during exercise using this method, perhaps due to alterations in vascular compliance throughout exercise. In study IV, optimising the calibration of this method during exercise permitted the evaluation of the haemodynamic response to maximal and supramaximal (10% greater than maximal) exercise on a beat-to-beat basis, with the finding that cardiac power output, a measure of cardiac work, was higher during supramaximal exercise despite a similar Q and oxygen consumption (VO2) between the two workloads. This finding is important for the understanding of factors limiting exercise performance, for it indicates that there is cardiac functional reserve at exhaustion during testing for VO2max, thus indicating that the heart is unlikely to be responsible for the termination of exercise, as it can be considered to be working submaximally.
APA, Harvard, Vancouver, ISO, and other styles
24

Leung, Siu-kong. "Analysis of shear/core wall structures using a linear moment beam-type element /." Hong Kong : University of Hong Kong, 1996. http://sunzi.lib.hku.hk/hkuto/record.jsp?B18155376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Labuschagne, Anneke. "Finite element analysis of plate and beam models." Thesis, Pretoria : [s.n.], 2006. http://upetd.up.ac.za/thesis/available/etd-12082006-135946.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Meesala, Vamsi Chandra. "Modeling and Analysis of a Cantilever Beam Tip Mass System." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83378.

Full text
Abstract:
We model the nonlinear dynamics of a cantilever beam with tip mass subjected to different excitations and exploit the nonlinear behavior to perform sensitivity analysis and propose a parameter identification scheme for nonlinear piezoelectric coefficients. First, the distributed parameter governing equations, taking into consideration the nonlinear boundary conditions of a cantilever beam with a tip mass subjected to principal parametric excitation, are developed using the generalized Hamilton's principle. Using a Galerkin discretization scheme, the discretized equation for the first mode is developed for simpler representation assuming linear and nonlinear boundary conditions. We solve the distributed parameter and discretized equations separately using the method of multiple scales. We determine that the cantilever beam with tip mass subjected to parametric excitation is highly sensitive to detuning. Finally, we show that assuming linearized boundary conditions yields the wrong type of bifurcation. Given the high sensitivity to detuning of the parametrically excited system, we analyze the sensitivity of the response to small variations in elasticity (stiffness) and in the tip mass. The governing equation of the first mode is derived, and the method of multiple scales is used to determine the approximate solution based on the order of the expected variations. We demonstrate that the system can be designed so that small variations in either stiffness or tip mass can alter the type of bifurcation. Notably, we show that the response of a system designed for a supercritical bifurcation can change to yield a subcritical bifurcation with small variations in the parameters. Although such a trend is usually undesired, we argue that it can be used to detect small variations induced by fatigue or small mass depositions in sensing applications.
Finally, we consider a cantilever beam with tip mass and piezoelectric layer and propose a parameter identification scheme that exploits the vibration response to estimate the nonlinear piezoelectric coefficients. We develop the governing equations of a cantilever beam with tip mass and piezoelectric layer by considering an enthalpy that accounts for quadratic and cubic material nonlinearities. We then use the method of multiple scales to determine the approximate solution of the response to direct excitation. We show that the approximate solution and the amplitude and phase modulation equations obtained from the method of multiple scales can be matched with numerical simulations of the response to estimate the nonlinear piezoelectric coefficients.<br>Master of Science
APA, Harvard, Vancouver, ISO, and other styles
27

Gao, Hanhong. "Iterative nonlinear beam propagation method and its application in nonlinear devices." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/63077.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (p. 89-96).<br>In this thesis, an iterative nonlinear beam propagation method is introduced and applied to optical devices. This method is based on Hamiltonian ray tracing and the Wigner distribution function. First, wave propagation simulation using Hamiltonian ray tracing is illustrated and verified with different examples. Based on this, the iterative method is presented for beam propagation in nonlinear media, which is validated with common Kerr effect phenomena such as self-focusing and spatial solitons. As an application to the analysis of nonlinear optical devices, the method is applied to the nonlinear Lüneburg lens. It is found that the nonlinear Lüneburg lens is able to compensate for the focal shift caused by the diffraction of Gaussian illumination. The iterative nonlinear beam propagation method is computationally efficient and provides much physical insight into the wave propagation. Since it is based on Hamiltonian ray tracing, a ray diagram can be easily obtained which contains the evolution of generalized radiances. Besides bulk nonlinear media, this method provides a systematic approach to the beam propagation problem in complex media such as nonlinear photonic crystals and metamaterials. Also, it is applicable to both coherent and partially coherent illumination. Therefore, this method has potential applications in the design and analysis of nonlinear optical devices and systems.<br>by Hanhong Gao.<br>S.M.
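For context, the conventional alternative to the ray-based approach of this thesis is the split-step Fourier beam propagation method, which alternates diffraction steps in Fourier space with Kerr phase steps in real space. A minimal 1-D sketch follows; the grid, step sizes and nonlinear coefficient are illustrative, and this is not the thesis's Wigner/ray-tracing algorithm.

```python
import numpy as np

def split_step_bpm(E0, dz, nsteps, dx, k0, n2k0=0.0):
    """Propagate a 1-D transverse field E0 with the split-step Fourier
    beam propagation method under the paraxial approximation.
    Each step applies half a diffraction step in Fourier space, the
    Kerr phase exp(i*n2k0*|E|^2*dz) in real space, then the second
    half diffraction step (symmetric splitting)."""
    N = E0.size
    kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)
    half_diff = np.exp(-1j * kx**2 / (2 * k0) * dz / 2)  # paraxial propagator
    E = E0.astype(complex)
    for _ in range(nsteps):
        E = np.fft.ifft(half_diff * np.fft.fft(E))
        E = E * np.exp(1j * n2k0 * np.abs(E)**2 * dz)    # Kerr phase step
        E = np.fft.ifft(half_diff * np.fft.fft(E))
    return E
```

Both sub-steps are pure phase factors, so the discrete beam power (L2 norm) is conserved to machine precision, which makes a convenient sanity check on any implementation.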
APA, Harvard, Vancouver, ISO, and other styles
28

PIETROSANTO, MARCO. "BEAM: a novel method to infer conserved structural patterns in RNA." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2016. http://hdl.handle.net/2108/201742.

Full text
Abstract:
The notion of motifs (or patterns) in biological molecules, defined as local recurring elements in functionally related entities, either due to evolutionary relationships or through convergence, has been exploited successfully in the past by computational methods aimed at functional characterization. Motifs can be detected (with relative ease) at the primary sequence level, but they almost always have a structural meaning, being clusters of spatially close residues working in concert to achieve a given function. The bioinformatics field of motif finding in proteins and DNA is well developed, providing several tools, approaches and databases (Bailey et al., 2009; Burge et al., 2013; Sonnhammer et al., 1997), while fewer resources are available for structural motif finding in RNAs. Such tools can be particularly useful in helping the functional characterization of noncoding RNAs (ncRNAs), for which information about the involved specific sequences and structures is still scarce. ncRNAs are involved in a wide range of biological functions through diverse molecular mechanisms often involving the interaction with one or more RNA binding protein (RBP) partners, with other RNAs or with the genomic DNA (4,5). Experimental and computational techniques are becoming available to depict, in high-throughput settings and at high resolution, protein-RNA interactions, chromatin-RNA interactions and RNA secondary structures, allowing the identification of binding partners, binding sites and function determinants. Protein-RNA interactions are central to many cellular processes (Kishore et al., 2010; Kiven E. Lukong, Kai-wei Chang et al., 2008; Licatalosi and Darnell, 2010; R., 2002) and they often involve ncRNAs. Those processes include transcription factors/telomere regulation, alternative splicing, chromatin remodelling, nucleotide modification and many others.
The complexity of the protein-RNA interaction network is starting to be fully appreciated thanks to several technological advances (Ferrè et al., 2016) such as High Throughput assays like CLIP-Seq, PAR-CLIP and others. Generally, sequence-level binding preferences are found, allowing the definition of sequence motifs and the usage of sequence-only based tools such as MEME (Bailey et al., 2009) or cERMIT (Georgiev et al., 2010). Still, these sequence determinants frequently must be carried by a specific structural context (Buckanovich and Darnell, 1997; Hiller et al., 2007; Meisner et al., 2005), while in other cases it is the RNA secondary structure that dictates the interaction specificity: for example, some proteins tend to recognize complex secondary structure elements such as stem-loops and bulges (Cusack, 1999). The RBP-RNA binding is therefore heterogeneous in nature and different RBP domains are governed by different rules. The influence of the RNA structural context upon protein binding and the impact on motif-finding methods has been recently reviewed (Li et al., 2014). Given the importance of the structural context of functional motifs in RNA molecules, a number of methods for approaching the RNA motif-finding problem that include the secondary structure are available (for two recent reviews see (Achar and Sætrom, 2015; Badr et al., 2013)). FOLDALIGN and its variants (Gorodkin, 2001; Gorodkin et al., 1997), comRNA (Ji et al., 2004), RNAProfile (Pavesi et al., 2004; Zambelli and Pavesi, 2015), RSmatch (Liu et al., 2015), RNAmine (Hamada et al., 2006), MEMERIS (Hiller et al., 2006), CMfinder (Yao et al., 2006a), Seed (Anwar et al., 2006), GeRNAMo (Michal et al., 2007), RNApromo (Rabani et al., 2008), SCARNA_LM (Tabei and Asai, 2009), GraphProt (Maticzka et al., 2014) are all tools that take advantage of secondary structure information for tackling the motif-finding problem, employing different approaches and to different extents.
Some other methods were developed specifically for the identification of protein-binding motifs, e.g. RNAcontext (Kazan et al., 2010), the algorithm by Li et al. (Li et al., 2010), mCarts (Zhang et al., 2013), RBPmotif (Kazan and Morris, 2013) and Zagros (Bahrami-Samani et al., 2015). The underlying algorithms can vary: expectation maximization (MEMERIS), covariance models (CMfinder), stochastic context-free grammars (RNApromo), graph matching (comRNA, RNAmine), graph kernels (GraphProt), fold-and-align methods (FOLDALIGN), conditional random fields (SCARNA_LM), hidden Markov models (mCarts), genetic programming (GeRNAMo), and others. The nature of the secondary structure information needed by these methods can also vary: some need pre-computed structures, or perform a minimum free energy prediction on the fly, others employ base-pairing probabilities, while others try to build the secondary structure simultaneously with the motif-finding procedure. Some methods seek purely structural motifs, while others can consider sequence information as well. Finally, many algorithms are limited to searching for motifs of a specific nature, for example only in single-stranded regions (MEMERIS), or in regions containing a limited and/or fixed number of hairpins (CMfinder, FOLDALIGN, RNAProfile), or starting from and expanding well-conserved stem structures (RNApromo, RNAmine). When the algorithm requires the RNA secondary structure, it is often converted into formats with various degrees of complexity and information content. Graph representations provide very accurate results, but are usually computationally expensive and limited to topological assertions that can miss structural similarities rooted in biological relationships, and models of RNA structure evolution are not implemented when comparing RNA secondary structures.
To solve this issue, my research group recently proposed BEAR, a representation of the RNA secondary structure by an alphabet of characters describing secondary structure elements and their size, and computed substitution matrix-like rates of variation of these structural elements in functionally related RNAs (Mattei et al., 2014). Having an informative string-based representation of the secondary structure and a substitution matrix, it becomes possible to apply standard algorithms for sequence alignment to the problem of RNA structural comparison (Mattei et al., 2014, 2015). In my work I developed BEAM (BEAr Motif finder) (Pietrosanto et al., 2016), a method that explores sets of unaligned RNA structures sharing a biological property (e.g. the ability to bind a specific RNA-binding protein) looking for the most represented local secondary structure motifs, and evaluating their significance with respect to a common background. BEAM employs the BEAR secondary structure notation and its associated similarity matrix of secondary structure elements, in order to capture motifs by structural similarities that derive from evolutionary related ncRNAs, in a way that covers topological comparison, yet expands it by considering the evolutionary history behind the abstraction of structure representation. BEAM is able to identify structurally similar sites shared by hundreds or thousands of RNAs, and the extension of the motifs is not subject to limitations (other than those imposed by the user). Hence, it is a tool suitable for low-, medium- and high-throughput settings such as those in CLIP-Seq analysis (Änkö and Neugebauer, 2012), the latter being a feature that structural motif finding methods lacked until now (Cook et al., 2015). BEAM has been tested on a number of artificial and real cases, on its robustness to noisy datasets, and on the impact of imprecise secondary structure predictions on the results. 
Comparisons against similar state-of-the-art methods gave favourable results. The requirement of a known or predicted secondary structure might limit BEAM's applicability, but in the future this should be less of a hindrance thanks to recent technological advances that are quickly leading towards an era in which high-quality RNA secondary structure information will be available for entire transcriptomes (Bai et al., 2014). BEAM source code is freely available at https://github.com/noise42/beam, and a web server with additional features has been developed for online use.
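Because BEAR encodes each secondary structure as a string over a structural alphabet scored by a substitution matrix, standard sequence-alignment machinery applies directly to structure comparison. A minimal local-alignment (Smith-Waterman) sketch over a toy alphabet is shown below; the match/mismatch/gap scores are illustrative stand-ins for the published structural substitution rates, and this is not BEAM's actual implementation.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Local alignment score of two structure strings with simple
    match/mismatch scoring; a substitution matrix over a structural
    alphabet (as in BEAR) would replace the score() function."""
    def score(x, y):
        return match if x == y else mismatch
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            H[i][j] = max(0,
                          H[i-1][j-1] + score(a[i-1], b[j-1]),  # (mis)match
                          H[i-1][j] + gap,                       # gap in b
                          H[i][j-1] + gap)                       # gap in a
            best = max(best, H[i][j])
    return best
```

Local (rather than global) alignment matters here because a shared structural motif is typically a short region embedded in otherwise dissimilar RNAs.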
APA, Harvard, Vancouver, ISO, and other styles
29

Sasinowski, Maciek. "A Delta-f Monte Carlo method to calculate parameters in plasmas." W&M ScholarWorks, 1995. https://scholarworks.wm.edu/etd/1539623873.

Full text
Abstract:
A Monte Carlo code has been developed which very efficiently calculates plasma parameters, such as currents, potentials and transport coefficients for a fully three dimensional magnetic field configuration. The code computes the deviation, δf, of the exact distribution function, f, from the Maxwellian F_M(ψ, H), with ψ the toroidal magnetic flux enclosed by a pressure surface and H the Hamiltonian. The particles in the simulation are followed with a traditional Monte Carlo scheme consisting of an orbit step in which new values for the positions and momenta are obtained and a collision step in which a Monte Carlo equivalent of the Lorentz operator is applied to change the pitch of each particle. Since the δf code calculates only the deviations from the Maxwellian rather than the full distribution function, it is about 10^4 times as efficient as other Monte Carlo techniques used to calculate currents in plasmas. The δf code was used to study the aspect ratio and collisionality dependence of the bootstrap current and two Fourier components of the Pfirsch-Schlüter current. It was also used to calculate electric potentials within magnetic surfaces due to the explicit enforcement of the quasi-neutrality condition. The code also calculated transport coefficients for the ions and electrons under various conditions. The agreement between the values predicted by the code for the plasma currents and analytic theory is excellent. The transport parameters calculated for the ions and electrons are in qualitative agreement with values predicted from neoclassical transport theory, including transport induced by a toroidal ripple. The in-surface electric potentials induced by explicitly enforcing the quasi-neutrality condition are too small to significantly enhance transport across the magnetic surfaces.
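The efficiency gain of a δf scheme comes from sampling only the small deviation from the analytically known Maxwellian, a variance-reduction idea that a toy 1-D estimator can illustrate. The distribution, weight function and perturbation size below are invented for illustration; this is not the thesis's guiding-center scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distribution: Maxwellian F_M (standard normal in velocity v) plus a
# small, analytically known deviation carried by a weight w = delta_f/F_M.
eps = 0.01                          # size of the deviation from Maxwellian

def weight(v):
    # delta_f = eps * v * F_M: odd in v, so it carries a net "current"
    # <v> = eps, while the Maxwellian part contributes none.
    return eps * v

v = rng.normal(size=200_000)        # sample velocities from the Maxwellian

# Full-f estimate of the current <v> under f = F_M * (1 + eps*v):
full_f = v * (1.0 + weight(v))
# delta-f estimate: the Maxwellian part integrates to zero analytically,
# so only the small deviation is sampled.
delta_f = v * weight(v)

# Both estimators target eps*<v^2> = eps, but the delta-f estimator's
# statistical noise is smaller by roughly a factor eps.
print(full_f.mean(), full_f.std())
print(delta_f.mean(), delta_f.std())
```

The same mechanism underlies the quoted 10^4 speed-up: when the deviation is a percent-level correction, the variance of the δf estimator shrinks by the square of that factor.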
APA, Harvard, Vancouver, ISO, and other styles
30

Kavi, Sandeep A. "Nonlinear 3-D beam/connector finite element with warping for a glulam dome." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-07102009-040624/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Jara-Almonte, J. "Extraction of eigen-pairs from beam structures using an exact element based on a continuum formulation and the finite element method." Diss., Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/54300.

Full text
Abstract:
Studies of numerical methods to decouple structure and fluid interaction have reported the need for more precise approximations of higher structure eigenvalues and eigenvectors than are currently available from standard finite elements. The purpose of this study is to investigate hybrid finite element models composed of standard finite elements and exact-elements for the prediction of higher structure eigenvalues and eigenvectors. An exact beam-element dynamic-stiffness formulation is presented for a plane Timoshenko beam with rotatory inertia. This formulation is based on a converted continuum transfer matrix and is incorporated into a typical finite element program for eigenvalue/vector problems. Hybrid models using the exact-beam element generate transcendental, nonlinear eigenvalue problems. An eigenvalue extraction technique for this problem is also implemented. Also presented is a post-processing capability to reconstruct the mode shape of each exact element at as many discrete locations along the element as desired. The resulting code has advantages over both the standard transfer matrix method and the standard finite element method. The advantage over the transfer matrix method is that complicated structures may be modeled with the converted continuum transfer matrix without having to use branching techniques. The advantage over the finite element method is that fewer degrees of freedom are necessary to obtain good approximations for the higher eigenvalues. The reduction is achieved because the incorporation of an exact-beam-element is tantamount to the dynamic condensation of an infinity of degrees of freedom. Numerical examples are used to illustrate the advantages of this method. First, the eigenvalues of a fixed-fixed beam are found with purely finite element models, purely exact-element models, and a closed-form solution. Comparisons show that purely exact-element models give, for all practical purposes, the same eigenvalues as a closed-form solution.
Next, a portal arch and a Vierendeel truss structure are modeled with hybrid models, purely finite element models, and purely exact-element models. The hybrid models do provide precise higher eigenvalues with fewer degrees of freedom than the purely finite element models. The purely exact-element models were the most economical for obtaining higher structure eigenvalues. The hybrid models were more costly than the purely exact-element models, but not as costly as the purely finite element models.<br>Ph. D.
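The transcendental eigenproblem the abstract mentions can be seen in the simplest closed-form case. A minimal sketch (assuming an Euler-Bernoulli fixed-fixed beam rather than the thesis' Timoshenko element): the natural frequencies satisfy cos(βL)cosh(βL) = 1, a transcendental characteristic equation solved here by bisection.

```python
import math

def char_eq(x):
    # Fixed-fixed Euler-Bernoulli beam: frequencies satisfy cos(bL)*cosh(bL) = 1,
    # a transcendental equation -- the same character as the exact-element eigenproblem.
    return math.cos(x) * math.cosh(x) - 1.0

def bisect(f, a, b, iters=200):
    # simple bisection; a sign change of f on [a, b] is assumed
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

def fixed_fixed_roots(n):
    # the k-th root of the characteristic equation lies close to (2k + 1)*pi/2
    return [bisect(char_eq, (2 * k + 1) * math.pi / 2 - 0.1,
                   (2 * k + 1) * math.pi / 2 + 0.1) for k in range(1, n + 1)]

def natural_frequencies(L, EI, rhoA, n=4):
    # omega_k = beta_k^2 * sqrt(EI / (rho*A)), with beta_k = (bL)_k / L
    return [(bL / L) ** 2 * math.sqrt(EI / rhoA) for bL in fixed_fixed_roots(n)]

if __name__ == "__main__":
    print([round(bL, 4) for bL in fixed_fixed_roots(4)])
    # -> [4.73, 7.8532, 10.9956, 14.1372]
```

Each root must be bracketed and iterated individually, which is why exact-element eigenproblems need a dedicated extraction technique rather than a standard matrix eigensolver.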
APA, Harvard, Vancouver, ISO, and other styles
32

Le, Thanh Nam. "Nonlinear dynamics of flexible structures using corotational beam elements." Phd thesis, INSA de Rennes, 2013. http://tel.archives-ouvertes.fr/tel-00954739.

Full text
Abstract:
The purpose of this thesis is to propose several corotational beam formulations for both 2D and 3D nonlinear dynamic analyses of flexible structures. The main novelty of these formulations is that the cubic interpolation functions are used to derive not only the internal force vector and the tangent stiffness matrix but also the inertial force vector and the dynamic matrix. By neglecting the quadratic terms of the local transversal displacements, closed-form expressions for the inertial terms are obtained for 2D problems. Based on an extensive comparative study of the parameterizations of the finite rotations and the time stepping method, and by adopting an approximation of the local rotations, two consistent and effective beam formulations for 3D dynamics are developed. In contrast with the first formulation, the second one takes into account the warping deformations and the shear center eccentricity. The accuracy of these formulations is demonstrated through several numerical examples.
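The kinematic core of a corotational formulation is to split the motion into a rigid rotation of the element chord plus small local deformational variables. A minimal 2D sketch (an illustration, not the thesis' formulation; node layout and variable names are assumptions, and chord angles beyond ±π are not unwrapped):

```python
import math

def local_dofs(x1, y1, x2, y2, d):
    """Split global motion into a rigid chord rotation plus local deformational dofs.

    d = (u1, v1, t1, u2, v2, t2): global node displacements and rotations.
    Returns (axial stretch, local rotation at node 1, local rotation at node 2).
    """
    u1, v1, t1, u2, v2, t2 = d
    cx1, cy1 = x1 + u1, y1 + v1               # current node positions
    cx2, cy2 = x2 + u2, y2 + v2
    beta0 = math.atan2(y2 - y1, x2 - x1)      # initial chord angle
    beta = math.atan2(cy2 - cy1, cx2 - cx1)   # current chord angle
    alpha = beta - beta0                      # rigid-body rotation of the chord
    l0 = math.hypot(x2 - x1, y2 - y1)
    l = math.hypot(cx2 - cx1, cy2 - cy1)
    # deformational measures: stretch and nodal rotations relative to the chord
    return l - l0, t1 - alpha, t2 - alpha

if __name__ == "__main__":
    # a pure rigid rotation by 0.3 rad about node 1 leaves all local dofs at zero
    phi = 0.3
    d = (0.0, 0.0, phi, 2 * math.cos(phi) - 2.0, 2 * math.sin(phi), phi)
    print(local_dofs(0.0, 0.0, 2.0, 0.0, d))  # ~ (0.0, 0.0, 0.0)
```

Because rigid motion produces zero local dofs, the local element can stay linear (or cubic, as in the thesis) while the corotational frame carries the geometric nonlinearity.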
APA, Harvard, Vancouver, ISO, and other styles
33

Tay, Um Leong. "Improved design methods for reinforced concrete wide beam floors." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.433208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Benites, Calderón Rafael. "Seismological applications of boundary integral and Gaussian beam methods." Thesis, Massachusetts Institute of Technology, 1991. http://hdl.handle.net/1721.1/51483.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 1991.<br>Includes bibliographical references (p. 215-221).<br>by Rafael Benites Calderón.<br>Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
35

Benawra, Jashil Singh. "Developing dual beam methods for the study of polymers." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611359.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Sunnegårdh, Johan. "Iterative Filtered Backprojection Methods for Helical Cone-Beam CT." Doctoral thesis, Linköpings universitet, Bildbehandling, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-20035.

Full text
Abstract:
State-of-the-art reconstruction algorithms for medical helical cone-beam Computed Tomography (CT) are of the non-exact Filtered Backprojection (FBP) type. They are attractive because of their simplicity and low computational cost, but they produce sub-optimal images with respect to artifacts, resolution, and noise. This thesis deals with possibilities to improve the image quality by means of iterative techniques. The first algorithm, Regularized Iterative Weighted Filtered Backprojection (RIWFBP), is an iterative algorithm employing the non-exact Weighted Filtered Backprojection (WFBP) algorithm [Stierstorfer et al., Phys. Med. Biol. 49, 2209-2218, 2004] in the update step. We have measured and compared artifact reduction as well as resolution and noise properties for RIWFBP and WFBP. The results show that artifacts originating in the non-exactness of the WFBP algorithm are suppressed within five iterations without notable degradation in terms of resolution versus noise. Our experiments also indicate that the number of required iterations can be reduced by employing a technique known as ordered subsets. A small modification of RIWFBP leads to a new algorithm, the Weighted Least Squares Iterative Filtered Backprojection (WLS-IFBP). This algorithm has a slightly lower rate of convergence than RIWFBP, but in return it has the attractive property of converging to a solution of a certain least squares minimization problem. Hereby, theory and algorithms from optimization theory become applicable. Besides linear regularization, we have examined edge-preserving non-linear regularization. In this case, resolution becomes contrast dependent, a fact that can be utilized for improving high contrast resolution without degrading the signal-to-noise ratio in low contrast regions. Resolution measurements at different contrast levels and anthropomorphic phantom studies confirm this property. Furthermore, an even more pronounced suppression of artifacts is observed.
Iterative reconstruction opens the way for more realistic modeling of the input data acquisition process than is possible with FBP. We have examined the possibility to improve the forward projection model by (i) multiple ray models, and (ii) calculating strip integrals instead of line integrals. In both cases, for linear regularization, the experiments indicate a trade-off: the resolution is improved at the price of increased noise levels. With non-linear regularization, on the other hand, the degraded signal-to-noise ratio in low contrast regions can be avoided. Huge input data sizes make experiments on real medical CT data very demanding. To alleviate this problem, we have implemented the most time consuming parts of the algorithms on a Graphics Processing Unit (GPU). These implementations are described in some detail, and some specific problems regarding parallelism and memory access are discussed.
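The iterative update underlying RIWFBP has the generic form f_{k+1} = f_k + λQ(p − Pf_k), where P is the forward projector and Q an approximate (FBP-like) inverse. A toy 1D sketch — with a small blur standing in for P and the identity for Q, both assumptions for illustration, not the WFBP operators of the thesis:

```python
import numpy as np

def forward(f, kernel):
    # stand-in for the projection operator P (here just a small blur)
    return np.convolve(f, kernel, mode="same")

def iterative_fbp(p, kernel, n_iter=500, lam=1.0):
    # generic update f_{k+1} = f_k + lam * Q(p - P f_k); Q = identity here,
    # standing in for the non-exact "approximate inverse" of the real algorithm
    f = p.copy()
    for _ in range(n_iter):
        f = f + lam * (p - forward(f, kernel))
    return f

if __name__ == "__main__":
    truth = np.zeros(64)
    truth[20:30] = 1.0                 # simple piecewise-constant object
    kernel = np.array([0.25, 0.5, 0.25])
    data = forward(truth, kernel)      # "measured" blurred data
    rec = iterative_fbp(data, kernel)
    # the iterate moves from the blurred data toward the true profile
    print(np.abs(data - truth).max(), ">", np.abs(rec - truth).max())
```

As in the thesis, the non-exactness of the single-pass reconstruction (here, the blur left in `data`) is progressively removed by feeding the data-space residual back through the approximate inverse.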
APA, Harvard, Vancouver, ISO, and other styles
37

Gallagher, Robert. "The effects of row spacing, plant density, and weed control method on snap bean yields, yield components, and weed growth." 1990. http://catalog.hathitrust.org/api/volumes/oclc/23126306.html.

Full text
Abstract:
Thesis (M.S.)--University of Wisconsin--Madison, 1990.<br>Typescript. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references (leaves 77-81).
APA, Harvard, Vancouver, ISO, and other styles
38

CHEN, ZHAO-XIAN, and 陳昭先. "The discussion of beam propagation method." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/54925632371349394893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Su, Yu Wei, and 蘇昱維. "Buckling Analysis of Channel Beam with Warping Effect using Sub-Beam Method." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/50411629458724976778.

Full text
Abstract:
碩士<br>國立臺灣海洋大學<br>河海工程學系<br>96<br>In this study, based on an updated Lagrangian algorithm, the sub-beam method is proposed to solve the geometric-nonlinear virtual work equation. First, the sectional force equilibrium equation of a rectangular beam under uniform load is restated, together with the six nonlinear virtual strain energy terms caused by nonlinear strain and the incremental virtual work done by the external force, from which the geometric-nonlinear virtual work equation of the rectangular beam under uniform load is obtained. This method was proposed by Yau (2006), but he omitted the nonlinear effect in each section and the associated virtual work and nonlinear strain energy terms. The method in this study decomposes the channel section into three rectangular sub-beams and constructs the geometric nonlinear strain energy of each sub-beam. The geometric nonlinear strain energy of the channel section is then assembled through equilibrium and compatibility, so that the sub-beam moment can be derived logically, the bi-moment defined correctly, and the second-order incremental virtual work obtained. The nonlinear effect contributed by each sub-beam, together with its virtual work and nonlinear strain energy, is treated completely; the geometric-nonlinear virtual work equation of the built-up section then follows once the relation between the sub-beam forces and displacements is constructed.
APA, Harvard, Vancouver, ISO, and other styles
40

Cai, Zhang-Rong, and 蔡長榮. "Method improvement for assembling beam and slab rebars." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/18743930500842386523.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Tsai, Tsang-Zong, and 蔡長榮. "METHOD IMPROVEMENT FOR ASSEMBLING BEAM AND SLAB REBARS." Thesis, 1994. http://ndltd.ncl.edu.tw/handle/85899902139979272176.

Full text
Abstract:
碩士<br>國立臺灣科技大學<br>營建工程技術學系<br>82<br>Rebar assembly is a labor-intensive work item in reinforced concrete building construction. Work sampling, flow-process charting, crew-balance analysis, and time-study methods are applied to identify problems with current rebar assembly methods. Using beam and slab rebar assembly as examples, a laboratory experimentation approach is utilized to explore the potential benefits of pre-assembly methods. Results show that total construction time, safety, quality, and skilled-labor requirements can be improved drastically through rebar pre-assembly methods.
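Work sampling, one of the techniques named above, estimates the proportion of productive time from random spot observations of a crew. A minimal sketch using the standard normal-approximation confidence interval (a textbook formula, not taken from the thesis; the observation counts are made up):

```python
import math

def work_sampling_ci(productive_obs, total_obs, z=1.96):
    # proportion of productive time from random work-sampling observations,
    # with a normal-approximation confidence interval (z = 1.96 -> ~95%)
    p = productive_obs / total_obs
    half = z * math.sqrt(p * (1.0 - p) / total_obs)
    return p, p - half, p + half

if __name__ == "__main__":
    # e.g. 240 "productive" hits in 400 random observations of a rebar crew
    p, lo, hi = work_sampling_ci(240, 400)
    print(round(p, 2), round(lo, 3), round(hi, 3))  # 0.6 0.552 0.648
```

Comparing such proportions before and after a method change is how studies of this kind quantify the benefit of pre-assembly.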
APA, Harvard, Vancouver, ISO, and other styles
42

Cheng, Chao-Lin, and 鄭兆麟. "Curvature Ductility Design Method for RC Beam Section." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/07505361238882063286.

Full text
Abstract:
碩士<br>華梵大學<br>建築學系碩士班<br>101<br>This research establishes a flexural strength and curvature ductility design method for reinforced concrete (RC) beam sections, designated the RCSD method. The RCSD method first refines and uses a relationship between the position of the neutral axis (c) and the curvature ductility ratio (μcr). Then, for a given curvature ductility requirement, it determines a limiting position of the neutral axis, named cL. In RC beam section design, if c ≦ cL, the curvature ductility requirement is satisfied; the RCSD method is developed from this design concept. The accuracy and applicability of the RCSD method were verified by designing 432 sections and comparing the required moment and ductility capacities with those provided by the designed sections. The proposed method was found to give good control of the moment and curvature ductility capacities of the designed RC beams, providing designers a tool for designing RC beam sections that fulfill both the flexural strength and the designated curvature ductility demands. The RCSD method is simple and accurate; consequently, sections designed with it are economical and their ductility is well controlled.
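The c ≦ cL design check can be illustrated with a deliberately crude one-parameter model (an assumption for illustration only; the thesis uses a refined c–μcr relation). Taking φu = εcu/c at ultimate and φy ≈ εy/(d − c) at yield, with the same neutral-axis depth in both states, the demand μ = φu/φy solves for a closed-form limiting depth cL = εcu·d / (μ·εy + εcu):

```python
def limiting_neutral_axis(d, mu_req, eps_cu=0.003, fy=420.0, Es=200000.0):
    # Crude one-parameter model (an assumption, not the thesis' refined relation):
    #   phi_u = eps_cu / c,  phi_y ~= eps_y / (d - c), same c in both states,
    # so mu = eps_cu * (d - c) / (eps_y * c); solving for c gives c_L.
    eps_y = fy / Es
    return eps_cu * d / (mu_req * eps_y + eps_cu)

def ductility_ok(c, d, mu_req):
    # design check in the spirit of the abstract: c <= c_L satisfies the demand
    return c <= limiting_neutral_axis(d, mu_req)

if __name__ == "__main__":
    d = 500.0                                   # effective depth, mm
    cL = limiting_neutral_axis(d, mu_req=4.0)
    print(round(cL, 1))                         # -> 131.6 (mm)
```

Raising the required ductility ratio shrinks cL, forcing a shallower neutral axis — the same trend the RCSD method exploits.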
APA, Harvard, Vancouver, ISO, and other styles
43

Shen, Yan-Fu, and 沈彥甫. "Comparative Structural Analysis Including D-value Method and Cantilever Beam of Moment Distribution Method." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/53618920408436765432.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Chou, Shun-Yu, and 周舜虞. "Geometric-Nonlinear Analysis of Thin-Walled Beam with Warping Effect using Sub-Beam Method." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/82930756917357581194.

Full text
Abstract:
碩士<br>國立臺灣海洋大學<br>河海工程學系<br>95<br>In this study, based on an updated Lagrangian algorithm, the sub-beam method is proposed to derive the geometric-nonlinear virtual work equations of I-section and channel-section beams. The sub-beam method starts by decomposing the section into three rectangular sub-beams and constructing the geometric nonlinear strain energy of each. The geometric nonlinear strain energies of the I and channel sections are then assembled through equilibrium and compatibility. The complete sub-beam method constructed in this study solves the geometric-nonlinear incremental virtual work equation of a rectangular sub-beam carrying a uniform load. With this virtual work equation, the nonlinear effect caused by the uniform load on the boundary of each sub-beam can be treated. The sub-beam moment in the 2C state and the bi-moment in the 2B state are then defined logically, and the nodal moment in the 2C state and the nonlinear incremental virtual work caused by the bi-moment are derived. Finally, the incremental virtual work equations of the sub-beams are combined, using the relations between the centroid displacements of the built-up beam and the sub-beams and the force equilibrium equations, to obtain the geometric nonlinear strain energy of the I and channel sections.
APA, Harvard, Vancouver, ISO, and other styles
45

Hofmann, Oliver Schönefeld Reinhold. "Komponententheorie auf Basis der Methode der Boxstrukturen und der Vergleich zur Methodik der Enterprise Java Beans [Component theory based on the box-structure method and a comparison with the Enterprise Java Beans methodology] /." 2006. http://www.gbv.de/dms/ilmenau/abs/519678818hofma.txt.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Zeng, Yi-Ting, and 曾乙庭. "Damage Detection of Beam by the Influence Line Method." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/2d9e57.

Full text
Abstract:
碩士<br>國立中央大學<br>土木工程研究所<br>97<br>Damage assessment of structures is an important task for the maintenance and management of bridge systems. An influence-line-type inspection technique is developed in this thesis. The displacement influence line at a point in the span of a beam structure is measured first. Through the second derivative of this displacement influence line with respect to the spatial variable along the beam, both the locations and the severities of crack damage can be clearly identified. The feasibility and accuracy of this damage assessment technique are verified both theoretically and numerically for beams with various prestress states. The second Castigliano theorem is applied to calculate the displacement influence lines of beams with various prestress conditions and damages. Due to measurement noise, the capability of damage identification from experimental data is not as impressive as shown in the numerical and theoretical investigations. However, this scanning-type damage assessment technique based on the influence line response offers a way to improve inspection efficiency if the effect of measurement noise can be reduced in the future.
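The detection idea — differentiate the measured displacement influence line twice and look for a local anomaly — can be sketched numerically. Assumptions for illustration (not the thesis' prestressed formulation): a simply supported Euler-Bernoulli beam, a crack modeled as a local EI reduction, and Maxwell's reciprocity equating the influence line with the deflection curve under a unit load at the measurement point.

```python
import numpy as np

def influence_line(n=401, L=10.0, a=4.0, damage=(6.0, 6.4), red=0.5, EI0=1.0):
    # Displacement influence line at x = a of a simply supported beam equals,
    # by Maxwell reciprocity, the deflection curve under a unit load at a.
    x = np.linspace(0.0, L, n)
    M = np.where(x < a, (1 - a / L) * x, (1 - a / L) * x - (x - a))  # bending moment
    EI = np.full(n, EI0)
    EI[(x >= damage[0]) & (x <= damage[1])] *= red   # crack -> local stiffness loss
    kappa = M / EI                                   # curvature w'' = M/EI (sign dropped)
    dx = x[1] - x[0]
    theta = np.cumsum(kappa) * dx                    # double integration for w
    w = np.cumsum(theta) * dx
    w -= x * w[-1] / L                               # enforce w(0) = w(L) = 0
    return x, w, M

def locate_damage(x, w, M, EI0=1.0):
    # second derivative of the "measured" influence line
    d2w = np.gradient(np.gradient(w, x), x)
    # where the beam is healthy, d2w*EI0 - M ~ 0; damage shows as a residual bump
    resid = np.abs(d2w * EI0 - M)
    return x[np.argmax(resid[5:-5]) + 5]             # skip boundary artifacts

if __name__ == "__main__":
    x, w, M = influence_line()
    print(round(locate_damage(x, w, M), 2))  # lands inside the damaged zone [6.0, 6.4]
```

As the abstract notes, numerical differentiation amplifies noise, which is why this works far better on simulated data than on raw measurements.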
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Jia-Cheng, and 李家誠. "Scatter correction in digital radiography using beam stopper method." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/06516143591213218948.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

WU, JIN-XIANG, and 吳錦祥. "Lateral bracing of I-beam by finite element method." Thesis, 1987. http://ndltd.ncl.edu.tw/handle/80930499352762509876.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Huoy, Hwang Ming, and 黃明慧. "Design of SRC Beam-Columns:Physical Behavior and Eccentricity Method." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/96732288328271557383.

Full text
Abstract:
碩士<br>國立交通大學<br>土木工程系<br>87<br>The objectives of this research are to investigate the composite action between the steel and the RC parts of SRC structural members and to develop a new method for predicting the ultimate strength of concrete-encased SRC beam-columns. The AIJ-SRC (1987) code calculates the beam flexural capacity by a superposition method that neglects the composite action between the steel shape and the RC portion, whereas the ACI-318 code (1995) treats the SRC section as fully composite. In order to understand the actual behavior of SRC beams, test results were collected to observe the strain distribution over the SRC section. This research then conducts an analytical study to determine the flexural capacity of a partially composite beam. In addition, this study extends the design concept used in the ACI code for eccentrically loaded RC columns to the design of SRC beam-columns. Based on the concept of strength superposition, the component material strengths are determined using the AISC-LRFD specification (1993) and the ACI code. This research also uses the computer program BIAX, developed at the University of California at Berkeley, to model the nonlinear material behavior of the steel and the concrete. Finally, the predicted values are compared with previous test results and the proposed design codes.
APA, Harvard, Vancouver, ISO, and other styles
50

Tsai, Chen-Jhen, and 蔡宸蓁. "Can Lazy Person Investment Method Beat Taiwan Stock Market?." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/78375697255044650785.

Full text
Abstract:
碩士<br>義守大學<br>財務金融學系<br>103<br>The stock market is complicated and filled with uncertainty; can a simple, clear and easy trading strategy, which we name the "lazy person investment method," be set up? In this study, we focus on Taiwan's large-cap listed companies, using data from December 2003 to December 2013. The returns of the selected stocks are calculated as indicators, compared with the market rate of return, and the stocks with returns higher than the overall market are filtered out. The results show that the ten-year average returns of the six target indicators are better than the ten-year average return of the overall market. The average rates of return rank, in descending order: cash dividend yield, return on equity, return on assets, earnings per share, price-earnings ratio, and price-book ratio. Using the combination of low price-earnings ratio and high cash dividend yield as a composite stock-selection indicator, we find the best ten-year average return of 25.52 percent, significantly better than the overall market average of 7.53 percent. T-tests show significant differences between the returns of the target stocks and the overall market, demonstrating that excess returns still exist and hence that the Taiwan stock market does not conform to the semi-strong-form efficient market hypothesis. Holding the top twenty target stocks over three consecutive years also yields returns better than the overall market. These results show that the lazy person investment method is effective with annually collected indicators and in the selection of blue-chip stocks for long-term investment.
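The composite screen described above — low P/E combined with high cash dividend yield — reduces to a simple filter. A sketch with illustrative thresholds and made-up data (the 25.52% figure comes from the thesis' actual Taiwan sample, not from this toy):

```python
def lazy_screen(stocks, max_pe=12.0, min_yield=0.05):
    # composite screen: low price-earnings ratio AND high cash dividend yield
    # (threshold values are illustrative assumptions, not the thesis' cut-offs)
    return [s for s in stocks
            if s["pe"] <= max_pe and s["cash_yield"] >= min_yield]

def mean_return(stocks):
    return sum(s["ret"] for s in stocks) / len(stocks)

if __name__ == "__main__":
    universe = [  # made-up data for illustration only
        {"name": "A", "pe": 9.0,  "cash_yield": 0.06, "ret": 0.21},
        {"name": "B", "pe": 25.0, "cash_yield": 0.01, "ret": 0.04},
        {"name": "C", "pe": 11.0, "cash_yield": 0.07, "ret": 0.18},
        {"name": "D", "pe": 14.0, "cash_yield": 0.08, "ret": 0.09},
    ]
    picks = lazy_screen(universe)
    print([s["name"] for s in picks])                   # ['A', 'C']
    print(mean_return(picks) > mean_return(universe))   # True
```

The thesis' backtest amounts to re-running such a screen each year and comparing the mean return of the picks against the market benchmark.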
APA, Harvard, Vancouver, ISO, and other styles