Theses on the topic "Experimental methods"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Experimental methods".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Explore theses on a wide variety of disciplines and organize your bibliography correctly.
Challapa Velásquez, Nancy Mariela. "Fisicoquímica del neurotransmisor dopamina y su precursor L-DOPA utilizando métodos teóricos y experimentales". Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2018. https://hdl.handle.net/20.500.12672/8026.
Studies the intrinsic (gas-phase) thermodynamic stability and proton-transfer reactivity of the neurotransmitter dopamine and its precursor L-DOPA. To this end it uses DFT (B3LYP) and ab initio methodology (G3 and G4 methods) for the conformational study of neutral, protonated and deprotonated species in the gas phase, together with the experimental determination, by triple-quadrupole mass spectrometry with an ESI (electrospray) source, of the proton affinity and basicity of dopamine and the acidity of L-DOPA in the gas phase, applying Cooks' extended kinetic method (EKCM).
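The kinetic method mentioned above extracts thermochemical quantities from the fragment-ion branching ratios of proton-bound dimers formed with reference bases. A rough sketch of the regression step of the simple kinetic method follows; all numbers are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Hypothetical proton affinities (kJ/mol) of reference bases and the
# measured ln branching ratios ln(I_ref / I_unknown) from CID of the
# proton-bound dimers (illustrative values only).
pa_ref = np.array([912.0, 921.5, 930.3, 942.1])
ln_ratio = np.array([-1.8, -0.6, 0.5, 2.0])

# Simple kinetic method: ln(ratio) ~ (PA_ref - PA_unknown) / (R * T_eff)
slope, intercept = np.polyfit(pa_ref, ln_ratio, 1)
R = 8.314e-3                    # kJ/(mol*K)
T_eff = 1.0 / (R * slope)       # effective temperature of the dimers
pa_unknown = -intercept / slope # x-intercept = apparent proton affinity
```

The extended method repeats this regression at several collision energies (several effective temperatures) and then regresses the resulting slopes and intercepts against each other to separate the enthalpic and entropic contributions.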
Tejero Chávez, Carolina Cecilia. "El liderazgo de una directora: el caso en una escuela alternativa de Educación Inicial en el distrito de Jesús María". Master's thesis, Pontificia Universidad Católica del Perú, 2020. http://hdl.handle.net/20.500.12404/16994.
Coral Moncayo, Hugo Edmundo. "Utilización de métodos experimentales y de simulación numérica para la microzonificación sísmica de áreas urbanizadas en Andorra". Doctoral thesis, Universitat Politècnica de Catalunya, 1998. http://hdl.handle.net/10803/6225.
Seismic risk studies in Andorra began in early 2001 with the creation of the Centre de Recerca en Ciències de la Terra (CRECIT). Andorra is characterized by a geology of Quaternary glacial valleys with very particular topography and geotechnical conditions. As a mountain country, it is exposed to numerous natural hazards that affect both people's lives and the infrastructure of its towns. The risk from natural phenomena has increased over the last 40 years owing to strong urban expansion and land occupation throughout the territory of Andorra; whoever occupies mountain areas must be aware of possible exposure to natural hazards such as avalanches, landslides, floods, earthquakes and fires.
A description is given of the different factors involved in ground motion, in particular the phenomenon of local site effects. A bibliographic synthesis is provided of the studies that have demonstrated these effects, both those related to sedimentary infills and those due to topography, contributing a synthesis, especially hard to find, on topographic effects and how they are taken into account in current regulations such as the French seismic code.
A synthesis is likewise presented, illustrated with the corresponding bibliographic references, of the methods most commonly used to estimate local effects depending on the available data.
An inventory of the existing data was compiled, together with a description of the subsoil of the valley in its most urbanized part. The new database recently assembled by CRECIT from geotechnical studies carried out for public and private civil works was used; a substantial part of it was processed and computerized for the seismic microzonation of Andorra.
A seismic-noise measurement campaign was carried out both in the most urbanized part, corresponding to the glacial basin, and on the adjacent slopes. Nakamura's method was applied to these records, and the predominant frequencies were obtained in different zones of the sedimentary infill and on the slopes.
Five stratigraphic profiles were identified in the basin, and the equivalent linear method (ProShake) was applied to characterize ground motion at the different sites. As input rock motions, Ricker pulses of different frequencies and accelerograms matched to a motion with a 475-year return period (a = 0.1 g) were used; the resulting predominant frequencies show a dependence on the slope.
A historical review of earthquake-triggered landslides was carried out following the studies of Keefer (1984) for regions of higher seismic activity, with local geological conditions and seismic parameters being decisive for their classification.
A brief description is given of the methods available to evaluate the critical acceleration used to determine earthquake-induced displacements. The Newmark displacement model is described and then applied to accelerograms from Spain and Greece, yielding charts of Arias intensity, displacement and critical acceleration, which are finally applied to the study area.
The state of the art on the hazard of earthquake-triggered landslides is reviewed, with its history and examples. From the surface geology and the available geotechnical information, the Newmark displacement model is applied to the study area. Finally, a methodology for producing digital probabilistic seismic landslide hazard maps for Andorra is described.
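The Newmark rigid-block model used in the landslide analysis above can be sketched in a few lines: the block accumulates relative velocity only while the ground acceleration exceeds the critical acceleration, and Arias intensity is proportional to the time integral of the squared acceleration. A minimal one-directional illustration (not the thesis code; input units are assumed):

```python
import numpy as np

def newmark_displacement(acc, dt, a_crit):
    """Newmark rigid sliding-block displacement (one-way sketch).

    acc: ground acceleration samples (m/s^2), dt: time step (s),
    a_crit: critical acceleration (m/s^2). Returns displacement (m).
    """
    vel = 0.0
    disp = 0.0
    for a in acc:
        vel += (a - a_crit) * dt  # relative acceleration while sliding
        vel = max(vel, 0.0)       # block sticks again when velocity hits 0
        disp += vel * dt
    return disp

def arias_intensity(acc, dt, g=9.81):
    """Arias intensity Ia = (pi / 2g) * integral of a(t)^2 dt (m/s)."""
    return np.pi / (2.0 * g) * np.sum(np.asarray(acc) ** 2) * dt
```

For a constant 2 m/s^2 pulse lasting 1 s with a_crit = 1 m/s^2, the analytic displacement is 0.5 * (a - a_crit) * t^2 = 0.5 m, which the discrete sum approximates.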
Valdebenito Valencia, Gerardo Alonso. "Descarga de un colector de aguas lluvias a un cauce natural. Análisis comparativo entre métodos analíticos y experimentales". Thesis, Universidad de Chile, 2007. http://www.repositorio.uchile.cl/handle/2250/104535.
Morales Rodríguez, Roderick Víctor. "Análisis de estudios experimentales realizados por el Instituto Nacional de Hidráulica, Chile, sobre sumideros de aguas lluvias". Thesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/140494.
This thesis presents an analysis of the hydraulic behaviour of seven stormwater inlets, based on the experiments carried out by the Instituto Nacional de Hidráulica between 2004 and 2006 on a full-scale (1:1) test platform. Five grate inlets, one grate inlet outside the roadway, and one grate inlet depressed relative to the roadway were studied. The main objective is to analyse the experimental data obtained and compare them with the results given by different theoretical models. From this, validity ranges for the models were obtained and a new formulation for determining capture efficiency was proposed. To that end, six theoretical models from the literature for determining capture efficiency were analysed and compared with the experimental results. In addition, a dimensional analysis of the phenomenon was carried out to find a relation between capture efficiency and the other variables that define the phenomenon. Finally, it was possible to conclude which models best fit each inlet studied and to determine the ranges in which the results are valid, finding that the model recommended in the "Guía de diseño y especificaciones de elementos urbanos de infraestructuras de aguas lluvias" of the Servicio de Vivienda y Urbanismo (SERVIU) gives the best results. Moreover, from the dimensional analysis, a new power-law relation is proposed for the capture efficiency as a function of the ratio between flow depth and inlet width.
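A power-law relation of the kind proposed above, E = a * (h/W)^b, is typically fitted by linear regression in log-log space. A sketch with invented data points (the thesis' actual coefficients and measurements are not reproduced here):

```python
import numpy as np

# Hypothetical depth-to-width ratios and measured capture efficiencies.
ratio = np.array([0.05, 0.10, 0.20, 0.40])
eff = np.array([0.95, 0.80, 0.62, 0.45])

# Fit E = a * (h/W)^b via ln E = ln a + b * ln(h/W).
b, ln_a = np.polyfit(np.log(ratio), np.log(eff), 1)
a = np.exp(ln_a)
predict = lambda r: a * r ** b  # capture efficiency at a given h/W
```

With these illustrative data, the fitted exponent b is negative: efficiency drops as the flow depth grows relative to the inlet width.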
Rodríguez Taranco, Oscar Juan. "Diseño y experimentación de un sistema de tutoría para la FIQ-UNAC". Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2003. https://hdl.handle.net/20.500.12672/1701.
Freyre Lira, Delma Socorro. "Aplicación de un programa “Estrategias de aprendizaje intencional” en la ansiedad ante los exámenes en estudiantes del I ciclo del curso de tutoría de la carrera de psicología de la Universidad Peruana Los Andes - Huancayo 2015". Master's thesis, Universidad Nacional Mayor de San Marcos, 2017. https://hdl.handle.net/20.500.12672/6870.
Agurto Ramírez, Dany Miguel. "Relación entre el método experimental y la formación científica y pedagógica de los futuros profesores de nivel básico, caso: universidades nacionales Mayor de San Marcos y Federico Villarreal, 2005". Bachelor's thesis, Universidad Nacional Mayor de San Marcos, 2010. https://hdl.handle.net/20.500.12672/2422.
Romero Panduro, Luis. "Aplicación del modelo didáctico alternativo en la enseñanza de ciencia, tecnología y ambiente en el 2do grado de secundaria de la Institución Educativa Básica Regular Nº 62009 “Marcelina López Rojas”". Master's thesis, Universidad Nacional Mayor de San Marcos, 2015. https://hdl.handle.net/20.500.12672/5519.
Determines the influence of applying the alternative didactic model in the teaching of science, technology and environment in the 2nd grade of secondary school at Institución Educativa Básica Regular Nº 62009 “Marcelina López Rojas”. It uses a quantitative approach, an experimental design, and an applied type of research. The study sample consists of 41 students divided into a control group and an experimental group, to whom instruments such as an observation sheet and an educational test were applied. It concludes that applying the experimental didactic model positively influences the teaching of science, technology and environment for 2nd-grade secondary students at that institution.
Baldini, Vera Lucia Signoreli. "Purificação e caracterização da β-galactosidase de soja (Glycine max, L.) e de feijão (Phaseolus vulgaris, L.)". [s.n.], 1985. http://repositorio.unicamp.br/jspui/handle/REPOSIP/255817.
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia de Alimentos e Agrícola
Abstract: β-Galactosidases from soybean (Glycine max) and bean (Phaseolus vulgaris) seeds were purified by ammonium sulfate fractionation, chromatography on DEAE-cellulose and gel filtration on Sephadex G-100, and their characteristics were studied. Gel filtration on Sephadex G-100 yielded two fractions, designated I and II, with different molecular weights. Fractions I and II of each sample showed differences in their optimal pH and temperature, activation energy, and Km and Vmax values with PNPG, melibiose or raffinose as substrates; no differences were observed in their heat and pH stabilities. The enzymes showed maximum activity between pH 5.0 and 6.0 and were stable in and around this range; the optimal temperatures were between 45 and 55 ºC and the enzymes were stable up to 50 ºC. All forms were most specific for PNPG. Metal ions such as Ag+ and Hg2+ caused complete loss of activity, whereas sulfhydryl reagents had no effect on the activities, suggesting that -SH groups are not required for catalysis. The enzymes were inhibited competitively by glucose, more strongly by galactose, and non-competitively by fructose. The two enzyme forms from the bean were inhibited by PNPG at high concentrations.
Master's degree
Master in Food Science
Sena, Neylla Teixeira. "Estudo in vitro da atividade antimicrobiana do hipoclorito de sódio e da clorexidina usados como substâncias químicas auxiliares frente a biofilmes de espécie única". [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/290453.
Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Odontologia de Piracicaba
Abstract: The aim of this study was to investigate the antimicrobial activity of 2.5% and 5.25% sodium hypochlorite (NaOCl) and of 2.0% chlorhexidine (CHX), in both gel and liquid form, used as auxiliary chemical substances during chemo-mechanical preparation, against single-species biofilms. Biofilms of Enterococcus faecalis, Staphylococcus aureus, Prevotella intermedia, Porphyromonas gingivalis, Porphyromonas endodontalis, Fusobacterium nucleatum and Candida albicans were grown on cellulose nitrate membrane filters placed on blood agar plates. The biofilms were immersed in the chemical substances for 30 s and for 5, 10, 15, 30 and 60 min, with or without mechanical agitation, and then transferred to culture media containing neutralizers of the chemical substances. Serial ten-fold dilutions were made, aliquots were plated on 5% sheep blood agar and incubated, and the colony-forming units were counted. Sterile saline, used as a control, allowed microbial growth of all strains at all tested times. 5.25% NaOCl eliminated all the microorganisms tested within 30 seconds of contact; against the strict anaerobes, all the chemical substances performed equally, being effective within 30 seconds. It was concluded that 5.25% NaOCl was the most effective substance tested, followed by 2% liquid CHX. The results show that the effectiveness of an antimicrobial agent depends on the microorganisms that make up the biofilm, the contact time with the substance, the presence or absence of mechanical agitation, and the presentation form of the substance.
Master's degree
Endodontics
Master in Clinical Dentistry
Maldonado, Arturo. "Metodologías en ciencia política: el debate y los retos en la academia peruana". Politai, 2011. http://repositorio.pucp.edu.pe/index/handle/123456789/91626.
Sousa Neto, Theófilo Machado de. "Ajuste de curvas usando métodos numéricos". Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/8755.
Texto completoApproved for entry into archive by Luciana Ferreira (lucgeral@gmail.com) on 2018-08-01T13:38:17Z (GMT) No. of bitstreams: 2 Dissertação - Theófilo Machado de Sousa Neto - 2018.pdf: 5352330 bytes, checksum: 633a1463e2e997810ceffbed30fe9665 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5)
Made available in DSpace on 2018-08-01T13:38:17Z (GMT). No. of bitstreams: 2 Dissertação - Theófilo Machado de Sousa Neto - 2018.pdf: 5352330 bytes, checksum: 633a1463e2e997810ceffbed30fe9665 (MD5) license_rdf: 0 bytes, checksum: d41d8cd98f00b204e9800998ecf8427e (MD5) Previous issue date: 2018-06-28
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Given the need to discuss mathematical methods capable of fitting curves that represent experimental data, this work presents seven curve-fitting methods: three based on least-squares regression techniques and four based on interpolation techniques. It first gives the definitions that provide the reader with the mathematical foundation behind the equations. In parallel, it discusses, through examples, the scope of application of the methods described, comparing wherever possible the various techniques presented and the errors in their estimates. To demonstrate that the techniques discussed here are feasible for use in basic education, it reports an experience of applying one of these methods to solve a basic problem from the physics curriculum. After presenting, step by step, the method for obtaining soil resistivity, a variable of utmost importance in the design of grounding meshes for power substations, the work closes by solving that problem with the help of the curve-fitting techniques studied, proposing the inclusion of these methods as one of the steps of the soil-resistivity procedure.
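The two families of techniques the dissertation compares, least-squares regression and interpolation, behave differently on noisy data: regression fits one global model that smooths the noise, while interpolation passes exactly through every data point. A minimal sketch with invented measurements:

```python
import numpy as np

# Illustrative measurements (not from the dissertation): a noisy linear trend.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Least-squares regression: fit y = slope * x + intercept globally.
slope, intercept = np.polyfit(x, y, 1)
y_fit = slope * 2.5 + intercept      # estimate between sample points

# Piecewise-linear interpolation: reproduces the data points exactly.
y_interp = np.interp(2.5, x, y)
```

Both estimates land near 6 at x = 2.5, but the regression also yields a global slope (about 2 here) that summarizes the trend, while interpolation follows every fluctuation of the data.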
Díez Moreno, Beatriz. "Diversidad del picoplancton eucariótico marino mediante métodos moleculares". Doctoral thesis, Universitat Autònoma de Barcelona, 2001. http://hdl.handle.net/10803/3851.
Picoeukaryotes, together with the heterotrophic and phototrophic prokaryotes, constitute the picoplankton. Picoeukaryotes can contribute an important part of the picoplankton biomass, and even of the total biomass of the system, and their contribution to the primary productivity of the ecosystem is very significant. However, very little is known about the diversity of the eukaryotic fraction of marine picoplanktonic assemblages.
The identification of picoeukaryotes in natural communities is difficult, mainly because of their similar morphology and small size (< 5 µm). Some can be discriminated at the Class level by electron microscopy or by HPLC pigment analysis, but most cannot be identified at lower taxonomic levels. Moreover, only a small percentage of picoeukaryote species grow in culture, and there is no guarantee that the organisms currently available in pure culture are dominant in natural communities.
Molecular techniques offer a promising alternative. Phylogenetic analysis of rRNA sequences uses techniques such as cloning and sequencing, and fingerprinting techniques such as Denaturing Gradient Gel Electrophoresis (DGGE) and Terminal Restriction Fragment Length Polymorphism (T-RFLP). With these techniques, the diversity of eukaryotic picoplankton was characterized in natural samples from different marine systems. A wide variety of systems was sampled, from coastal zones to the open sea, during several oceanographic cruises: Weddell-Scotia Sea (DOVETAIL cruise); Drake Passage (DHARMA cruise); North Atlantic (ACSOE-NAE cruise); Alborán Sea (MATER'97, '98 and '99 cruises). These cruises provided samples covering both the spatial and the temporal variability of marine eukaryotic picoplankton communities.
Five genetic libraries were generated (two from the Southern Ocean, two from the North Atlantic and one from the Alborán Sea). Sequencing and comparison with the databases yielded information on the diversity of the phylogenetic groups present in the different marine systems. The results showed a high phylogenetic diversity, including many different taxonomic groups and members of distant phylogenetic lineages. Most of these taxonomic groups were affiliated with known phototrophic picoeukaryotes such as prasinophytes (the most frequently represented) and prymnesiophytes. Other clones could be assigned to clearly heterotrophic organisms such as ciliates, some chrysophytes, cercomonads and fungi. A significant number of sequences, however, showed no close affiliation with any known class of organisms and formed two novel lineages: novel stramenopiles and novel alveolates. These novel lineages were abundant and widely distributed, both phylogenetically and geographically. Some of them may account for a large fraction of the heterotrophic microorganisms in the sea and could play an important role in marine food webs.
Fingerprinting techniques such as DGGE and T-RFLP were used in parallel to study the diversity of picoeukaryotes in these same samples. Through the optimization of DGGE, using specific primers to amplify a fragment of the eukaryotic 18S rRNA gene, the diversity and large-scale variability of the marine eukaryotic picoplankton communities in samples from the Southern Ocean and the Alborán Sea were studied in detail, revealing changes in their distribution and composition not only along vertical gradients but also on larger spatial and temporal scales.
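The clone-library analysis described above assigns sequences to taxonomic groups by similarity to known 18S rRNA sequences. As a toy stand-in for that comparison step (real studies use BLAST searches and phylogenetic placement; the short sequences below are invented):

```python
def percent_identity(a, b):
    """Percent identity of two aligned, equal-length sequences."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

# Invented 14-base fragments differing at a single position.
clone = "ACGGTTAGCATCGA"
reference = "ACGGTCAGCATCGA"
identity = percent_identity(clone, reference)
```

A clone scoring high identity against a database entry would be affiliated with that group; sequences with no close match are the candidates for novel lineages.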
Falcon Falcon, Carles Maria. "Métodos iterativos de reconstrucción tomográfica en SPECT". Doctoral thesis, Universitat de Barcelona, 1999. http://hdl.handle.net/10803/1785.
SPECT (Single Photon Emission Computed Tomography) is a nuclear medicine technique in which an image of the distribution of a drug labelled with a radioactive isotope is obtained from the tomographic reconstruction of the gamma radiation emitted in different directions (projections of the radiopharmaceutical distribution in those directions). Because many degrading factors act on the projections (noise in the radioactive emission and in the detection, attenuation and scattering of photons inside the body, and the detector response), the quality of SPECT images obtained with the standard method (FBP, filtered backprojection) is poor, yielding noisy, low-resolution images. Iterative tomographic reconstruction methods exist that correct for the effect of these factors. The goal of this thesis is to study how image quality varies as a function of the parameters on which the iterative reconstruction methods depend, in order to determine their intrinsic characteristics and the suitability of their use. This requires implementing, in addition to the reconstruction methods, a numerical simulation of projections from the object and an objective evaluation method for the reconstructions.
2. PLANTEAMIENTO DEL PROBLEMA:
Es este capitulo se describe detalladamente la física que interviene en SPECT, así como del planteamiento matemático de la reconstrucción tomográfica.
3. METODOLOGÍA:
En este capitulo se expone la simulación de proyecciones implementada con la modelización de los diferentes factores degradantes utilizada, la obtención de datos experimentales y la evaluación de las imágenes con figuras de mérito (FDM): coeficiente de correlación con la imagen ideal (CC), contraste de las regiones de la imagen (CON), relación señal-ruido (SNR) y otras.
4. RETROPROYECCIÓN FILTRADA:
En este capitulo se analiza un método iterativo de compensar el efecto de la atenuación (método de Chang) y un filtro para compensar la respuesta del detector en FBP (filtro de Metz). Se analiza el valor de las diferentes FDM en la reconstrucción en función del exponente del filtro de Metz y del número de iteraciones, para dos modelos y diferentes niveles de ruido.
5. ALGEBRAIC RECONSTRUCTION METHODS (ART):
This chapter analyses the quality of the reconstruction when iterative algebraic methods are used, as a function of the number of iterations and of the relaxation parameter, for different noise levels.
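A minimal sketch of an algebraic reconstruction step of this kind (a Kaczmarz-style row-action update with relaxation parameter `lam`; illustrative toy code, not the thesis implementation):

```python
import numpy as np

# Kaczmarz-type ART: each projection equation a_i . x = y_i is enforced in
# turn, moving x toward that equation's hyperplane by a fraction lam.
def art(A, y, lam=1.0, n_sweeps=500):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + lam * (y[i] - a @ x) / (a @ a) * a
    return x

# Toy consistent system: projections of a known 4-pixel "image"
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))
x_true = np.array([1.0, 3.0, 0.5, 2.0])
x_rec = art(A, A @ x_true)
```

Smaller `lam` values damp each correction, which mirrors the trade-off reported in the conclusions: lower relaxation can improve noisy reconstructions but requires more iterations for a comparable image.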
6. STATISTICAL RECONSTRUCTION METHODS (MLE):
This chapter analyses the quality of the reconstruction when iterative statistical methods are used, as a function of the number of iterations, for two models and different noise levels. It also analyses the applicability of the cross-validation statistical criterion (CVR) to SPECT images and two methods of accelerating the iterative process: the over-relaxation parameter and ordered subsets (MLE-OS).
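The MLE (MLEM) update with ordered subsets summarized above can be sketched as follows; this is an illustrative toy on a small random system matrix, not the code developed in the thesis:

```python
import numpy as np

# Toy MLEM with ordered subsets (OSEM). A is the system matrix (rows =
# projection bins, cols = image pixels); y holds the measured projections.
# One multiplicative MLEM update is applied per subset of projection rows.
def osem(A, y, n_subsets=2, n_iter=300):
    n_bins, n_pix = A.shape
    x = np.ones(n_pix)                        # uniform initial estimate
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:
            As = A[rows]
            proj = As @ x                     # forward projection
            ratio = np.where(proj > 0, y[rows] / proj, 0.0)
            sens = As.sum(axis=0)             # subset sensitivity image
            x = np.where(sens > 0, x * (As.T @ ratio) / sens, x)
    return x

# Toy check: recover a 4-pixel "image" from noiseless projections
rng = np.random.default_rng(0)
A = rng.random((8, 4))
x_true = np.array([1.0, 3.0, 0.5, 2.0])
y = A @ x_true
x_rec = osem(A, y, n_subsets=2, n_iter=300)
```

With a small number of subsets, each full pass applies several MLEM-like updates, which is the acceleration mechanism (roughly a speed-up by the number of subsets) described in the conclusions.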
7. COMPARISON OF RESULTS:
This chapter compares the results obtained in the previous chapters, as well as the results produced by the different reconstruction methods implemented on a study quantifying the uptake of a given radiopharmaceutical by a numerically simulated lung tumour.
8. CONCLUSIONS:
This chapter presents the conclusions of the thesis. Among them, the following stand out:
a) Simulation of projections: The different tests to which the simulator was subjected, and the qualitative comparison of the results obtained from real and simulated projections, support its correct implementation and the adequacy of the approximations made.
b) FBP: As long as the Metz filter exponent is low enough, the quality of the reconstruction shows no dependence on the exponent, but it does depend on the number of iterations (images with the same FOM values are obtained at different numbers of iterations). The lower the exponent, the more iterations are needed to obtain an image with the same characteristics. In this sense, the exponent can be regarded as an acceleration factor.
When the exponent exceeds a maximum value, the quality of the reconstruction degrades. The range of exponent values yielding the best results depends on the noise in the projections: the higher the count level, the larger the exponent can be.
If the attenuation map is uniform, the exponent can be chosen so that a single iteration is enough to reach acceptable results, without having to implement the projection operator; however, proceeding this way favours CON over CC, i.e. it produces images with better contrast but more noise. Using lower exponents allows choosing between less noisy images with poorer contrast and noisier images with higher contrast. If the attenuation map is non-uniform, the best results are obtained after more than one iteration.
c) ART: Regarding the relaxation factor, starting from high values of the parameter, a marked improvement in image quality is observed as its value is decreased. However, there is a value below which further reducing the relaxation parameter does not improve the quality of the reconstruction, while requiring more iterations to obtain comparable images.
d) MLE: CVR is a good stopping criterion for the iterative process for general-purpose images, since it never produces poor-quality images. Moreover, this criterion requires no a priori information about the image, which is a further advantage of statistical reconstruction methods.
From the study of the over-relaxation parameter, it is concluded that the usable range of acceleration factors is 1 to 2.5. Higher factors must be ruled out, as the reconstructions deteriorate after only a few iterations.
From the study of the dependence of image quality on the number of ordered subsets into which the projections are divided and on the number of iterations, it is concluded that with a small number of ordered subsets the MLE-OS method accelerates MLE by a factor equal to the number of subsets used. If too many subsets are used, the resulting image is of lower quality than that achieved with MLE. The maximum usable number of subsets depends, among other factors, on the noise in the projections: the lower the noise, the larger the number of ordered subsets that can be used without a loss of reconstruction quality.
The CVR stopping criterion is compatible with the use of ordered subsets and an over-relaxation factor, provided suitable parameters are used.
e) Comparison of methods: The iterative processes improve on the results of FBP. The attenuation correction is particularly notable.
The convergence of the iterative process is partial in all three cases. After improving in the first iterations, the image degrades if the iterative process is continued beyond a certain number of iterations.
Although small differences exist, the behaviour of the various FOMs as a function of the iterations is very similar across the three iterative methods. Both CC and SNR reach their maximum value within a few iterations, while CON keeps growing.
All three methods have a parameter to regulate the convergence speed. In all three, however, there is a maximum value of that parameter, depending basically on the amount of noise in the projections and determinable through simulated studies, beyond which the results obtained are worse.
In the application to quantifying the emission activity of a lung tumour, it is concluded that the FBP method usually employed in this kind of study does not recover the value of the tumour activity, whereas the iterative methods do; however, only MLE-OS does so at a number of iterations independent of the tumour activity. Consequently, since that activity is unknown a priori, only MLE-OS can be used.
Noise, scattering, attenuation and the Point Spread Function (PSF) produce poor-quality images when Filtered Back Projection (FBP), the standard algorithm for image reconstruction from projections, is used in SPECT. Iterative reconstruction algorithms allow these degradations to be corrected during the reconstruction process. We studied three iterative reconstruction algorithms: IFBP (FBP with Chang's iterative attenuation correction), ART (Algebraic Reconstruction Techniques) and MLE (Maximum Likelihood Estimator). We studied the dependence of image quality on the number of iterations and on the following parameters: the Metz filter exponent in IFBP, the relaxation parameter in ART, and the over-relaxation parameter and number of ordered subsets (OS) in MLE. We also studied the applicability of the Cross Validation Ratio as a stopping criterion for the iterative process.
Ramos, Nuñez Carlos A. "Daniel Alcides Carrión: el método experimental in extremis". THĒMIS-Revista de Derecho, 2011. http://repositorio.pucp.edu.pe/index/handle/123456789/108840.
ANGELIM, Jacqueline Loureiro. "Análise experimental de três métodos de aerostasia bronquial". Universidade Federal Rural de Pernambuco, 2012. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5681.
The pulmonary lobectomy technique can be used in the treatment of lung cancer, lung lobe torsion, pulmonary laceration and lung abscesses; among the postoperative complications, bronchial stump dehiscence, prolonged air leak and the development of bronchopleural fistula stand out. These problems usually result from poor occlusion of the bronchial stump, and most of the time their treatment consists of a new surgical procedure to reopen the chest cavity. The aim of the present paper was to evaluate the efficiency of n-butyl cyanoacrylate and nylon brassards as methods for occlusion of the bronchial stump and maintenance of aerostasis, comparing them with the manual suture technique in an experimental model using pig tracheobronchial trees submitted to increasing levels of positive intrabronchial pressure. Thirty pig tracheobronchial trees were used; eight lobar bronchi from each piece were selected for study, four of 10 mm and four of 5 mm. The trees and their bronchi were then equally distributed into three experimental groups: Suture Group (SG), manual suture with simple interrupted stitches using no. 2.0 surgical nylon; Cyanoacrylate Group (CG), n-butyl cyanoacrylate; and Brassard Group (BG), nylon brassards. After occlusion of the bronchial stumps, the pieces were immersed in water and the "tire fitter test" was performed, holding the positive intrabronchial pressure at 30 cm H2O for five minutes and then gradually increasing it to 100 cm H2O. Leaks at a rate of 1.25% (1/80) were observed in groups SG and BG, when submitted to pressures of 30 cm H2O and 100 cm H2O, respectively. There were no leaks in any bronchial stump in group CG. There were no statistically significant differences among the three treatments. It was concluded that both n-butyl cyanoacrylate and the nylon brassard, like the manual suture, are effective methods for establishing and maintaining bronchial aerostasis.
Rios, Victor de Souza. "Estudo experimental da injeção de vapor pelo método SAGD na recuperação melhorada de óleo pesado". [s.n.], 2011. http://repositorio.unicamp.br/jspui/handle/REPOSIP/264304.
Master's thesis - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica e Instituto de Geociências
Abstract: Thermal recovery methods have made possible the production of heavy oil in fields considered non-commercial with conventional recovery methods. Steam injection, in particular, has been refined over the years and is now a major cost-effective alternative for increasing heavy oil recovery. In this context, Steam Assisted Gravity Drainage (SAGD) is a process that uses two horizontal wells, with the steam injector above the producer, which sits at the base of the reservoir. The purpose of this method is to create a steam chamber while promoting a better sweep of the reservoir fluids. The steam-oil ratio is a decisive parameter in the economic viability of steamflooding projects. In this work, an experimental study of a SAGD cell was carried out to better understand steam-assisted gravity drainage in heavy oil recovery. In addition, nitrogen injection combined with steam was investigated, which contributes to the recovery mechanism and to a possible reduction in the volume of steam injected, with an impact on the economics of the project. Numerical simulations using commercial software were performed to support the analyses. The studies were conducted at laboratory scale with heavy oil from the Espírito Santo basin. The results show that injecting nitrogen after a period of steamflooding, a method known as SAGD Wind-Down, considerably reduces the steam-oil ratio without, however, significantly affecting oil production when compared to conventional SAGD.
Guimarães, Paulo Henrique Ramos. "Parâmetros genéticos e fenotípicos em arroz irrigado estimados por método de análise espacial". Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/3956.
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Some spatial analysis methods have been applied in order to mitigate environmental variation. The objective of this study was to evaluate the efficiency of spatial analysis, through the Papadakis method, relative to the Federer augmented block design (FAB) in correcting environmental variation. A total of 198 S0:2 rice progenies and four checks were evaluated in a Federer augmented block design. Data were recorded for grain yield (GY, kg ha-1) and plant height (PH, cm). The data were subjected to analysis of variance, and the genetic and phenotypic parameters were estimated. The two approaches (FAB and Papadakis) were compared with respect to their estimates of genetic and phenotypic parameters. The rankings of adjusted means under the two models were compared using the Spearman correlation. There were improvements in the statistics describing experimental precision when spatial analysis was used, which affected the estimates of genetic and phenotypic parameters. The Papadakis method required fewer replications than FAB for the same value of . Direct selection gains were obtained for the PH and GY traits when the Papadakis method was used. With the use of spatial analysis, selection was less influenced by environmental variation. Finally, the spatial analysis methods were effective in removing environmental effects, especially the Papadakis method, indicating that it can improve experimental precision and make selection more efficient.
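A minimal one-dimensional sketch of the Papadakis nearest-neighbour adjustment evaluated in this work (illustrative only, assuming plots laid out on a line; not the thesis code): residuals from treatment means are averaged over adjacent plots and used as a covariate for the local environmental trend.

```python
import numpy as np

# Papadakis covariate: residual of each plot from its treatment mean,
# averaged over the (here: left/right) neighbouring plots. In the full
# method this covariate then enters an analysis of covariance.
def papadakis_covariate(y, treatment):
    y = np.asarray(y, dtype=float)
    treatment = np.asarray(treatment)
    means = {t: y[treatment == t].mean() for t in np.unique(treatment)}
    resid = y - np.array([means[t] for t in treatment])
    cov = np.empty_like(resid)
    for i in range(len(resid)):
        neigh = [j for j in (i - 1, i + 1) if 0 <= j < len(resid)]
        cov[i] = resid[neigh].mean()
    return cov
```

When there is no spatial trend the covariate vanishes, and when a smooth fertility gradient is present the covariate tracks it, which is what lets the subsequent covariance analysis absorb the environmental variation.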
Kliewer, Marcus. "Método de Espectroscopia de Mistura de Níveis para Medida de Momentos de Quadrupolo Nucleares". Universidade de São Paulo, 1999. http://www.teses.usp.br/teses/disponiveis/43/43131/tde-16022011-194341/.
The Level Mixing Spectroscopy (LEMS) method allows measuring the electric quadrupole moments of high-spin isomeric nuclear states (10 ns < t < 100 ms) produced in nuclear reactions. The magnetic interaction is usually created by an intense external magnetic field. The electric quadrupole interaction can be created by recoil-implantation of the nuclei into non-cubic crystals used as hosts. The external magnetic field can then be replaced by the hyperfine fields of ferromagnetic materials, controlling its intensity through temperature variation. The purpose of the research performed for this work is to verify the viability of this replacement. We adapted the LEMS method for use at the Pelletron Laboratory. We chose the isomeric state at 398 keV excitation energy in the 69Ge nucleus as a test case, because all its nuclear properties are well known (half-life, spin, magnetic moment, electric quadrupole moment). It was produced by the 56Fe(16O, 2pn)69Ge reaction, with a 16O beam at 53 MeV, and implanted and stopped in a gadolinium host, which is ferromagnetic from low temperatures up to Tc = 289 K. We measured the anisotropy of the emitted gamma rays as a function of the host temperature. Comparing this measurement with a measurement of the anisotropy as a function of external magnetic field strength, done by the Leuven (Belgium) group, shows two possibilities. In the first, we suppose that the electric interaction is constant and independent of temperature, and we obtain an anomalous magnetic hyperfine field for Gd. In the second, we obtain a hyperfine field that follows the magnetization if we assume electric field gradients that are temperature dependent. New measurements using a Gd monocrystal and the TDPAD (Time Differential Perturbed Angular Distribution) method may resolve this ambiguity.
Souza, Édila Cristina de. "Os métodos biplot e escalonamento multidimensional nos delineamentos experimentais". Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-19042010-142813/.
The objective of this study was to evaluate statistical methods for the analysis of genotype-by-environment interaction (G × E), with emphasis on phenotypic adaptability and stability. The variables studied were yield and soluble solids content (SSC) of Galia-type melon, testing 9 genotypes in 12 environments. The experiment was conducted in a randomized block design with 3 replications at the Mossoró-Assu Agroindustrial Pole in Rio Grande do Norte. The performance of the cultivars was analysed using analysis of variance and adaptability and stability methods. The analyses for yield and soluble solids were carried out using the AMMI (Additive Main Effects and Multiplicative Interaction) and SREG (Sites Regression) methodologies, plotting genotypes and environments simultaneously through AMMI biplot, GGE biplot and trilinear plots. The AMMI analysis has the advantage of studying the structure of the interaction effect in detail, and it represents the interaction scores for both factors. The SREG analysis incorporates the genotype effect and, in most cases, is highly correlated with the scores of the first principal component; it has the advantage of allowing a direct graphical assessment of the genotype effect. The MDS (Multidimensional Scaling) methodology was also proposed to examine the similarities and dissimilarities between environments, representing the data geometrically in two-dimensional space (biplot) through a distance matrix for each variable studied, in which environments showing different characteristics can be observed.
Gobo, Michel Stephani da Silva. "Métodos Analíticos e Experimentais para Determinação do Número Atômico Efetivo". Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/59/59135/tde-30012018-184854/.
A composite material, formed by a mixture of several elements, can conveniently be described, from the way radiation interacts with it, as if it were formed by a fictitious element with an effective atomic number (Zef). This parameter is not constant with energy; nevertheless, it is a useful tool for the characterization of biological tissues, tissue-equivalent materials and dosimeters. Several ways of determining Zef have been proposed in the literature; among them are the attenuation methods, based on the total cross section (derived from the mass attenuation coefficient, µ/ρ), and the scattering methods, based on the ratio between the Rayleigh and Compton differential cross sections. In this work, ways of obtaining Zef experimentally and theoretically by two methods (attenuation and scattering) were studied to fill gaps in the literature. In the attenuation method, µ/ρ was used as the interaction coefficient, which, to the authors' knowledge, had not been done before. For this purpose, experimental arrangements to determine the density (ρ) and the linear attenuation coefficient (µ) were built. The µ values (of tissue-equivalent materials and biological tissues) were determined both at 59.54 keV (241Am source) with a CdTe detector and over the energy range from 15 to 45 keV (X-ray tube, W target) with an SDD detector. A new computational program for determining Zef through µ/ρ was implemented and validated. The sensitivity of the method was studied so as to determine Zef properly. In the scattering method, an experimental arrangement to detect the Rayleigh- and Compton-scattered photons (with a 241Am source and a CdTe detector) was built, optimized and validated. A computational program to determine Zef through the Rayleigh-to-Compton ratio, R/C, was developed and validated. The sensitivity of the method was studied and analysed to determine Zef properly.
In the attenuation method, the arrangements allowed µ/ρ to be determined with differences smaller than 6% compared with the literature, and with uncertainties of 3.8% at 59.54 keV and up to 7% in the 15 to 45 keV range. It was also found that the method is suitable for determining Zef for energies up to 60 keV, because above that energy the uncertainties in Zef increase (more than 10%). In the scattering method, the arrangement made it possible to obtain R/C measurements with less than 10% difference from the literature and uncertainties of 7%, and it was verified that the momentum transfer range between 1 Å⁻¹ and 2 Å⁻¹ is suitable for determining Zef (less than 3% uncertainty).
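As a hedged numerical illustration of the quantity characterised here, the classic power-law (Mayneord-type) estimate of the effective atomic number can be sketched as follows; the exponent m ≈ 2.94 is the textbook photoelectric-regime choice, not a value taken from this thesis:

```python
import numpy as np

# Power-law effective atomic number: Zef = (sum_i a_i * Z_i**m) ** (1/m),
# where a_i is the fraction of the electrons contributed by element i.
def z_eff(electron_fractions, Z, m=2.94):
    a = np.asarray(electron_fractions, dtype=float)
    Z = np.asarray(Z, dtype=float)
    return np.sum(a * Z ** m) ** (1.0 / m)

# Water (H2O): 10 electrons total, 2 from H (Z=1) and 8 from O (Z=8)
zeff_water = z_eff([0.2, 0.8], [1, 8])
```

For water this yields roughly 7.4, the commonly quoted value; note that, as the abstract stresses, the "true" Zef of a compound is energy dependent, which is why the thesis determines it experimentally rather than from a single formula.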
Fuster, Obregón Salvador. "Estudio experimental sobre diferentes métodos de osteosíntesis del raquis dorsolumbar". Doctoral thesis, Universitat de Barcelona, 1987. http://hdl.handle.net/10803/295120.
We compared the biomechanical behaviour of different methods of osteosynthesis of the dorsolumbar rachis under flexion and compression stress testing, while stabilizing an experimentally produced unstable lesion. The method of choice was an "in vitro" experimental study on specimens of the pig spinal column, including vertebrae IXª D to Vª L. An unstable fracture was produced in the anterior and middle segment of vertebra Iª L, also involving the intervertebral disc XIIº D-Iº L, and was osteosynthesized with the following methods: Kostuik-Harrington, Roy-Camille, Harrington distraction, Harrington-Luque, Luque, and Harrington modified by Villanueva as an original technique not previously verified experimentally. Thirty-six trials were carried out on eight specimens, and the following parameters were measured for each stepped-up load applied: the force supported by the specimen, the distance from the axial load axis to the posterior wall of the injured vertebra, the angle turned by the involved segment, and the tensions supported by the implants. The displacement undergone by the specimen and the tensions supported by the various osteosynthesis methods were measured by instantaneous photographs taken with each new load introduced and by extensometric gauges implanted in the tested devices. Deformation variables, the force borne by the column and the tensions suffered by the implants were calculated and compared. The best method for the experimental model chosen was found to be that of Kostuik-Harrington, as it made the lesion very rigid, relieved the load on the column and operated under low tensions. This was followed by the Roy-Camille plates, which brought about great rigidity of the instrumented segment with low tensions in the material.
Harrington's method, modified by Villanueva, which combines interfragmentary compression and distraction, provided the greatest unloading of the injured column and showed little wear of material, although rigidity was low. Sublaminar segmental techniques caused great rigidity of the column but brought about very high tensions in the material, and unloading was minimal. The Harrington distraction method unloaded the column only very slightly, did not provide rigidity, and supported high tensions in the material.
Amigo, Rubio José Manuel. "Desarrollo y aplicación de métodos quimiométricos multidimensionales al estudio de sistemas enzimáticos". Doctoral thesis, Universitat Autònoma de Barcelona, 2007. http://hdl.handle.net/10803/3249.
1) Study of the reaction mechanism in biological matrices and calculation of the enzymatic constants in complex reactions. The catalysis of mixtures of hypoxanthine, xanthine and uric acid by xanthine oxidase is studied, both in a controlled medium and in human urine. In addition, the aforementioned analytes are determined quantitatively in human urine. The kinetic profiles were recorded using an ultraviolet-visible spectrophotometer equipped with a diode-array detector, coupled to a stopped-flow system. A Peltier module was used to keep the temperature constant.
2) Establishment of a methodology for the real-time control of bioprocesses. The system studied is the production of lipase enzymes by the microorganism Pichia pastoris. The fluorescence measurements were performed with a multidimensional fluorescence sensor connected to the bioreactor through a quartz window.
3) An objective not related to the monitoring of enzymatic reactions is the development of a laboratory experiment, aimed at senior undergraduate Chemistry students, as an introduction to curve resolution methods and to chemometrics in general.
The following general conclusions were obtained:
1) Introducing the kinetic model of the studied system as a constraint in the MCR-ALS algorithm has given rise to a new working methodology, called HS-MCR-ALS. Its application to the enzymatic kinetic system has made possible:
· The study of the enzymatic system in the presence of spectral interferents.
· The elucidation of the kinetic model, since the one previously described in the literature did not account for the degradation of uric acid.
· The determination of the kinetic constants of the postulated model, both in a buffered medium of known composition and in urine.
2) Different two- and three-way quantitative multivariate analysis methods were applied and compared for the determination of xanthine, hypoxanthine and uric acid in synthetic mixtures and in spiked human urine. The results obtained in the quantitative application of HS-MCR-ALS are comparable to those obtained with other three-way algorithms, 3W-PLS1 and 3W-PLS2, and with the classical two-way algorithms, PLS1 and PLS2, using a minimal number of standard samples.
3) A new working methodology was established for the real-time control of processes monitored by fluorescence. It is based on non-invasive monitoring of the reaction with a multidimensional fluorescence probe and on processing the recorded signal with the PARAFAC algorithm. From the PARAFAC model, applied to bioprocesses run under normal conditions, an estimate of the spectral profiles of the fluorophores and of the evolution of their signal is obtained. The residual analysis of this model makes it possible to establish control limits (Q criterion). Using this information, a new batch of the bioprocess was monitored and controlled in real time, making it possible to determine the end point of lipase production and the total metabolisation of the substrate without off-line measurements of the analytes of interest.
4) The spectrophotometric evolution of 8-hydroxyquinoline-5-sulfonic acid with pH was studied as a laboratory experiment to introduce the MCR-ALS algorithm to senior undergraduate Chemistry students. The experiment includes instructions on how to use the data obtained in the titration to apply MCR-ALS in a simple and didactic way.
The main objective of this dissertation is the application of multidimensional chemometric methodologies to the kinetic analysis of enzymatic reactions. These reactions are important in analytical chemistry and in industrial-scale fermentation processes. This principal objective may be split into several partial objectives:
1) The study of reactions related to the catalysis of mixtures of hypoxanthine, xanthine and uric acid by xanthine oxidase, with the aim of elucidating the reaction mechanism in biological matrices and calculating the enzymatic constants in complex reactions. In addition, the influence of the reaction medium on the enzymatic mechanism will be studied both in a controlled medium and in human urine. Another objective is the quantitative determination of the above-mentioned analytes in urine samples by means of their catalytic reaction with xanthine oxidase. Different algorithms will be applied and their advantages and disadvantages compared.
2) Establishment of a methodology for the real-time control of fermentation processes. To this end, multidimensional tensor models will be applied to multidimensional spectrofluorimetry. The studied system will be the production of lipases by the microorganism Pichia pastoris.
3) Another objective of this thesis is the dissemination of curve resolution methods. As an example of their use, a laboratory experiment aimed at students in the final years of the Chemistry degree is presented. Multivariate curve resolution with alternating least squares (MCR-ALS) will be applied to the spectrophotometric titration of 8-hydroxyquinoline-5-sulfonic acid and the evolution of the system will be discussed.
The following conclusions have been obtained:
1) The introduction into the algorithm of a restriction related to the kinetic model of the system gives rise to a new method, combining hard-modelling and soft-modelling in the HS-MCR-ALS algorithm. The following results have been obtained:
· The study of the enzymatic mechanism in the presence of a spectral interference.
· The correct elucidation of the kinetic model.
· The determination of the enzymatic constants of the proposed kinetic model in the presence and absence of the interference of urine.
2) Several methods have been applied to the quantitation of hypoxanthine, xanthine and uric acid in synthetic samples and urine, and their results compared. The results obtained with HS-MCR-ALS were compared with those obtained by other three-way methodologies (3W-PLS1 and 3W-PLS2) and by the classical two-way methodologies (PLS1 and PLS2). The HS-MCR-ALS algorithm shows clear advantages, such as the small number of standards needed (one or two standard samples are enough) and the fact that they can be prepared in aqueous solution, with no need to know or include the interferences present in the samples. Furthermore, the kinetic profiles and the spectra of the compounds involved are obtained.
3) A new working methodology for the control and fault diagnosis of fermentation processes in real time is presented. Multidimensional fluorescence collects a great amount of information in a short period of time, so the variability related to the fluorophores is recorded. From the PARAFAC model, built with bioprocesses run under normal operating conditions, an estimation of the spectral profiles of the fluorophores and of the evolution of their signal is obtained. Residual analysis of the model allows control limits to be established (using the Q criterion). With this information, a new batch of the culture was monitored in real time without the need for off-line reference measurements.
4) A laboratory experiment has been proposed to introduce the MCR-ALS algorithm to upper-level students of the Chemistry degree. The experiment includes a laboratory session where the spectrophotometric evolution of 8-hydroxyquinoline-5-sulfonic acid with pH is recorded, and another session where the students apply the MCR-ALS algorithm to the recorded data. The experiment contains instructions on how to analyse the recorded data with MCR-ALS in a simple and didactic manner.
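The alternating least-squares core of the MCR-ALS method discussed in these conclusions can be sketched in a few lines. The following is a minimal illustration on simulated two-component kinetic-spectral data; the rate constant, band positions, initial guess and the crude clipping-based non-negativity constraint are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated bilinear data D = C @ S.T: two species, first-order kinetics A -> B
t = np.linspace(0.0, 10.0, 50)
C_true = np.column_stack([np.exp(-0.5 * t), 1.0 - np.exp(-0.5 * t)])
wl = np.linspace(0.0, 1.0, 80)
S_true = np.column_stack([np.exp(-((wl - 0.3) ** 2) / 0.01),
                          np.exp(-((wl - 0.7) ** 2) / 0.02)])
D = C_true @ S_true.T + rng.normal(0.0, 1e-3, (50, 80))

def mcr_als(D, C0, n_iter=100):
    """Alternating least squares with a crude non-negativity constraint."""
    C = C0.copy()
    for _ in range(n_iter):
        # spectra from current concentrations, then concentrations from spectra
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

C0 = np.column_stack([np.exp(-t), t / t.max()])  # rough initial guess
C, S = mcr_als(D, C0)
lof = 100.0 * np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(f"lack of fit: {lof:.2f}%")
```

The hard-modelling step of HS-MCR-ALS would additionally refit C to the kinetic model at each iteration; that refinement is omitted here.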
Santos, Iviane Cunha e. "Atualização do modelo numérico em elementos finitos de uma passarela de pedestres com base em dados experimentais". Repositório Institucional da UnB, 2009. http://repositorio.unb.br/handle/10482/5355.
Texto completo
Os modelos numéricos vêm sendo cada vez mais utilizados para representar o comportamento das estruturas. O método de elementos finitos pode ajudar no projeto de modificações da estrutura, na análise de carregamento externo, etc., no entanto o nível de exatidão dos modelos numéricos não é suficiente para garantir a precisão requerida na representação de estruturas complexas, como por exemplo, pontes e passarelas. As imprecisões dos modelos numéricos são geralmente devidas às modificações realizadas no processo de modelagem, às incertezas nas propriedades geométricas e dos materiais, imprecisões nas condições de contorno, etc. Para melhorar o modelo numérico de uma passarela, este deve ser atualizado com o intuito de aproximar o comportamento do modelo aos dados modais experimentais, de modo a torná-lo mais preciso. O princípio deste processo é alterar as matrizes do sistema, que descrevem completamente o modelo em elementos finitos, com base nos parâmetros modais obtidos experimentalmente. Neste trabalho avalia-se numericamente uma passarela de pedestres existente na cidade de Brasília, por meio do software ANSYS, onde foi realizada uma análise de sensibilidade para selecionar os parâmetros a serem utilizados no processo de atualização. A atualização foi dividida em duas fases, inicialmente a manual, com o objetivo do refinamento do modelo numérico, e a segunda fase, a automática, onde foi implementado um algoritmo de otimização que utiliza dois métodos: o de Primeira Ordem e o de Aproximação por Subproblema. Os resultados encontrados mostram uma redução do índice FER (Porcentagem de variação da frequência) de 8,77% para 1,50% e um aumento no índice MAC (Modal Assurance Criterion) de 0,870345 para 0,9133. O trabalho conclui que o modelo atualizado pode ser utilizado como uma ferramenta importante para a avaliação da estrutura. 
Um monitoramento contínuo acompanhado de uma sequência de atualizações do modelo numérico pode possibilitar a identificação de danos na estrutura.
ABSTRACT
Numerical models have been increasingly used to represent the behavior of structures. The finite element model can help in the design of structural modifications and in the analysis of external loading. However, the level of accuracy of numerical models is not sufficient to ensure the required precision in the representation of complex structures, such as bridges and footbridges. The lack of accuracy of numerical models is usually due to changes made in the modeling process, uncertainties in the geometric and material properties, inaccuracies in the representation of the boundary conditions, etc. To improve the numerical model of a footbridge, it should be updated in order to approximate the numerical results to the observed experimental data, making it more precise. The principle of this methodology is to change the matrices that completely describe the finite element model, based on modal parameters obtained experimentally. In this work we evaluate numerically, using the ANSYS software, a footbridge constructed in Brasília, where a sensitivity analysis was performed to select which parameters should be used in the updating process. The updating was divided into two stages: the first is a manual procedure of updating or tuning, with the goal of refining the numerical model; the second, called automatic updating, was implemented with an optimization algorithm that uses two methods, First Order and Subproblem Approximation. The results show a reduction of the FER index (percentage of frequency change) from 8.77% to 1.50% and an increase in the MAC index (Modal Assurance Criterion) from 0.870345 to 0.9133. This work concludes that the updated model can be used as an important tool for structural evaluation. Continuous monitoring together with a sequence of updates of the numerical model can enable the identification of damage in the structure.
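The MAC index used in this abstract to compare numerical and experimental mode shapes has a compact definition: the squared, normalized inner product of two mode-shape vectors, equal to 1 for perfectly correlated modes. A minimal sketch with hypothetical mode shapes (the sine shape and noise level are invented for illustration):

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors (0 to 1)."""
    return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

# Hypothetical first bending mode: numerical model vs. noisy "measurement"
x = np.linspace(0.0, np.pi, 12)
phi_num = np.sin(x)
phi_exp = np.sin(x) + 0.05 * np.random.default_rng(1).normal(size=12)
print(round(mac(phi_num, phi_exp), 3))
```

The companion FER index of the abstract is simply the percentage difference between numerical and experimental natural frequencies, 100·|f_num − f_exp|/f_exp.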
Bazani, Marcio Antonio. "Resfriamento de placas planas por um jato confinado de ar". [s.n.], 2001. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263015.
Texto completoTese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Resumo: Este trabalho considerou o resfriamento convectivo de placas planas localizadas na superfície de incidência de um jato confinado de ar. Inicialmente o problema foi simulado numericamente, considerando a superfície de incidência do jato aquecida e isotérmica, utilizando dois modelos de turbulência de alto Reynolds distintos: k-e e RNG (grupos de renormalização). As simulações foram efetuadas para um escoamento bidimensional incompressível de ar. O método dos volumes de controle foi usado para resolver iterativamente as equações de conservação de massa, quantidade de movimento e energia, bem como as equações de energia cinética turbulenta e taxa de dissipação de energia cinética turbulenta. Os resultados obtidos da simulação de escoamento e da transferência de calor foram comparados com resultados numéricos e experimentais obtidos na literatura numa faixa do número de Reynolds na seção de entrada (8000 < Rej < 20000) e da razão de aspecto (1,0 < H/w < 8,0) do jato. Em seguida, uma montagem experimental foi construída, com três placas planas aquecidas montadas na superfície do jato. Os resultados de testes experimentais de laboratório foram comparados com resultados correspondentes de simulação numérica. Sob condições distintas de aquecimento das três placas, o coeficiente adiabático de transferência de calor, descrito por Moffat (1998), foi utilizado e comparado com resultados do coeficiente convectivo baseado na temperatura de entrada do jato. Nos testes experimentais, o número de Reynolds do jato variou na faixa (9000 < Re < 16000) e a razão de aspecto foi mantida fixa (H/w = 6,0). Foi verificado, pelos resultados experimentais e de simulação, que o coeficiente adiabático (had) depende apenas das condições do escoamento, enquanto que o coeficiente baseado na temperatura de entrada do ar (hin) depende tanto do escoamento quanto da potência dissipada nas placas a montante. Os resultados deste estudo têm aplicações no resfriamento de componentes eletrônicos.
Abstract: The convective cooling of discrete heated plates flush mounted on the incidence surface of a confined slot jet was investigated in this Thesis. Initially, numerical simulations were performed considering a heated isothermal impingement plate, using two high-Reynolds turbulence models: k-e and RNG (renormalization group). A two-dimensional incompressible airflow was considered. The conservation equations (mass, momentum and energy), and those associated with the turbulence model (the turbulent kinetic energy equation and that of its dissipation rate), were solved iteratively by the control volume method. The numerical simulation results for the flow and heat transfer were compared with numerical and experimental data obtained from the literature to explore the effects of the jet Reynolds number (8000 < Rej < 20000) and the aspect ratio (1 < H/w < 8). An experimental apparatus was built, with three heated plates flush mounted on the impingement plate. The obtained experimental results were compared with corresponding numerical simulations. Under distinct heating conditions of the three plates, the adiabatic heat transfer coefficient, described by Moffat (1998), was compared with the convective coefficient based on the inlet air temperature. The experimental tests were performed for a range of 9000 < Rej < 16000 and for a single aspect ratio, H/w = 6. It was verified, by the experimental results and the numerical simulations, that the adiabatic heat transfer coefficient (had) depends only on the flow conditions, while that based on the inlet air temperature (hin) depends both on the flow conditions and on the power dissipated in the upstream plates. The results of this investigation are relevant to applications in electronics cooling.
Doutorado
Térmica e Fluidos
Doutor em Engenharia Mecânica
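The two heat-transfer coefficients compared in this thesis differ only in the reference temperature used in the defining quotient. A minimal numerical illustration (all values are hypothetical, not measurements from the thesis):

```python
# Hypothetical plate readings (illustrative values only)
q = 1500.0    # dissipated heat flux, W/m^2
T_in = 25.0   # jet inlet air temperature, deg C
T_ad = 31.0   # adiabatic surface temperature (plate unpowered), deg C
T_s = 46.0    # measured surface temperature with power on, deg C

h_in = q / (T_s - T_in)  # coefficient based on inlet temperature
h_ad = q / (T_s - T_ad)  # adiabatic coefficient (Moffat, 1998)
print(h_in, h_ad)
```

The point made in the abstract follows from the definitions: heating the upstream plates raises the local adiabatic temperature T_ad but not T_in, so h_in absorbs that effect while h_ad reflects only the flow conditions.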
Nesi, Cristiano Nunes. "Métodos alternativos para realização de testes de hipóteses em delineamentos experimentais". Universidade de São Paulo, 2002. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-25102002-110317/.
Texto completoFor experimental designs, tests of hypotheses are usually needed to draw conclusions about the effects considered in the linear model. It is common to use statistical software that supplies analyses of variance and F statistics, among others, for decision making. However, the F test in an analysis of variance for a source of variation with more than one degree of freedom provides only general information about significant differences among the levels of the factor. Objective comparisons should therefore be planned, making orthogonal decompositions of the degrees of freedom of the effects of interest to obtain more specific information. One frequently used technique is based on orthogonal contrasts, so that the comparisons are independent. However, this technique becomes complex as the number of levels of the factor increases. To study alternative methods for these comparisons, we used data from a yield trial comparing two groups of sugarcane varieties, in a completely randomized design with 6 treatments and 5 replicates, as well as data from a fictitious experiment comparing maize hybrids in a randomized complete block design. A technique of analysis using dummy variables was proposed to facilitate the orthogonal decomposition of the degrees of freedom of treatments. It facilitates the orthogonal decomposition and gives the same results as those obtained with the CONTRAST statement of PROC GLM in SAS. Another situation considered involves experiments with unbalanced data. In this case, the researcher needs to know which hypotheses are being tested and whether they are useful. Much has been written on the different analysis-of-variance results presented by statistical software for unbalanced data, which can confuse the researcher.
To illustrate, we used the results of a 2x3 factorial experiment with 4 replicates, testing the effect of 3 hormones on the in vitro propagation of 2 apple tree cultivars. Considering that testing a hypothesis is equivalent to imposing an estimable restriction on the parameters of the model, we used these restrictions as an alternative criterion to carry out tests of hypotheses directly in linear models with unbalanced data. The results showed that this procedure is equivalent to that used by the CONTRAST statement of PROC GLM/SAS.
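The orthogonal-contrast decomposition discussed in this abstract can be checked numerically: in a balanced one-way design, each orthogonal contrast carries one degree of freedom and their sums of squares add up exactly to the treatment sum of squares. A sketch with invented data (the treatment means and the contrast set are hypothetical, not taken from the thesis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical balanced one-way design: 4 treatments, r = 5 replicates
means = np.array([10.0, 12.0, 15.0, 15.5])
r = 5
y = np.repeat(means, r) + rng.normal(0.0, 1.0, means.size * r)
ybar = y.reshape(means.size, r).mean(axis=1)

# A complete set of orthogonal contrasts for the 3 treatment d.f.
contrasts = np.array([[1, 1, -1, -1],    # treatments (1, 2) vs (3, 4)
                      [1, -1, 0, 0],     # 1 vs 2
                      [0, 0, 1, -1.0]])  # 3 vs 4
G = contrasts @ contrasts.T
assert np.allclose(G - np.diag(np.diag(G)), 0.0)  # pairwise orthogonal

# Each contrast has one d.f.; SS = r * (c @ ybar)^2 / (c @ c)
ss = np.array([r * (c @ ybar) ** 2 / (c @ c) for c in contrasts])
ss_trt = r * ((ybar - ybar.mean()) ** 2).sum()
print(np.isclose(ss.sum(), ss_trt))  # True: the decomposition is exact
```

The dummy-variable technique of the thesis reaches the same single-degree-of-freedom sums of squares through the design matrix instead of the cell means.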
Hondo, Fábio Yuji. "Estudo experimental comparativo de métodos de diérese tecidual no tratamento endoscópico do divertículo faringo-esofágico". Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/5/5154/tde-23082011-132143/.
Texto completoThe pharyngoesophageal diverticulum (PED), also known as cricopharyngeal or Zenker diverticulum (ZD), is a rare disease with an estimated incidence ranging from 0.01% to 0.11%, more frequent in elderly patients, for whom a less invasive treatment may be desirable. Despite innovations in endoscopic equipment and accessories, bleeding and perforation occur in up to 10% and 15% of the cases, respectively. The finding of a pharyngeal diverticulum in domestic pigs provided an experimental model of major interest for training and scientific purposes. For this reason, a technical innovation was introduced and compared with the conventional treatment of the PED. Our aim was to compare the diaeresis of the PED septum by the Ultracision® harmonic scalpel (Group U) with that by monopolar electrocautery (Group M) in an experimental model. Twenty domestic pigs (mean weight 20.2 kg; ±1.35) were divided nonrandomly into groups M and U. No significant differences were found related to diverticulum size (p=0.0897) or insertion time of the soft diverticuloscope (p=0.7387). In group U, the mean time to divide the septum and the total procedure time were significantly shorter (p<0.0001 for both comparisons). Regarding incision extension, the mean length was significantly greater in group U (p=0.0047). In relation to microscopic parameters, the lateral thermal spread caused by the monopolar current (Group M) was found to be more intense (p<0.0001). As for depth and presence of inflammation, no differences were verified between the groups. Hemorrhage was exclusively observed in group M (p=0.01) and was endoscopically managed in all cases. When compared to endoscopic incision with needle-knife and monopolar blended current, the experimental endoscopic diverticulostomy using a soft diverticuloscope and harmonic scalpel proved to be faster and associated with less tissue damage.
Noguera, Cáceres José Felipe. "Caracterización de zonas contaminadas por métodos geoquímicos: Área de Gavà-Viladecans (Delta del Llobregat)". Doctoral thesis, Universitat Autònoma de Barcelona, 2003. http://hdl.handle.net/10803/3436.
Texto completoLucio, Gutiérrez Juan Ricardo. "Aplicación de Métodos Quimiométricos para la Caracterización y Control de Calidad de Plantas Medicinales". Doctoral thesis, Universitat Autònoma de Barcelona, 2012. http://hdl.handle.net/10803/96257.
Texto completoThis thesis deals with the application of chemometric methods for quality control; four medicinal plants for which there are reports of confusion, adulteration or adverse reactions in consumers have been studied. Chapter one is a general introduction and Chapter two presents the objectives. Chapter three studies E. senticosus, a member of the Araliaceae family. A near-infrared spectroscopic procedure to obtain a fingerprint of E. senticosus has been developed using raw materials. The spectra were processed using different pattern recognition procedures. General classification success rates (GSR) of 84% and 92% were achieved by Soft Independent Modeling of Class Analogy (SIMCA) and Partial Least Squares Discriminant Analysis (PLS-DA), respectively. Tests on laboratory-made mixtures showed that it is possible to detect adulterations or counterfeits with about 5% foreign herbal material, depending on their closeness to the Araliaceae family. The sensitivity and specificity of the constructed models were above 73%. Chapter four is about P. ginseng; proper identification, mixture detection and semi-quantitative determination of binary mixtures have been achieved using near-infrared spectroscopy and chemometrics. Raw NIR spectra were normalized and the classification ability of three different pattern recognition procedures was assayed. SIMCA, PLS-DA and discriminant analysis reached the same sensitivity (100%); however, SIMCA obtained the best GSR (95%) and had the highest specificity (100%) and ability to detect debased samples (80%). Moreover, the semi-quantification of mixtures performed with multivariate curve resolution presented a mean percent error of 5.53% and showed that mixture composition should change by more than 3.64% in order to obtain reliable results.
Chapter five describes a strategy for multi-wavelength chromatographic fingerprinting of herbal materials, using high performance liquid chromatography with a UV-Vis diode array detector. Valeriana officinalis was selected to demonstrate the proposed methodology. The enhanced fingerprints were constructed by compiling into a single data vector the chromatograms from four wavelengths (226, 254, 280 and 326 nm), those where characteristic chemical constituents of valerian presented maximum absorbance. Chromatographic data pretreatment included baseline correction, normalization and correlation optimized warping. The GSR values achieved by SIMCA and PLS-DA were above 90%. The sensitivity and specificity of the constructed models were above 94%. Tests on laboratory-made mixtures showed that it is possible to detect adulterations or counterfeits with 5% foreign herbal material, even if it is from the Valerianaceae family. Chapter six deals with the implementation of the enhanced fingerprint strategy together with PLS regression, in order to improve the prediction of the antioxidant activity of Turnera diffusa. The wavelengths were selected from a contour plot, in order to obtain the greatest number of peaks at each of the wavelengths (216, 238, 254 and 345 nm). A PLSR model with four latent variables explained 52.5% of the X variance and 98.4% of Y, with a root mean square error of cross validation of 6.02. To evaluate its reliability, it was applied to an external prediction set, yielding a relative standard error of prediction of 7.8%. The study of the most important variables for the regression indicated the chromatographic peaks related to antioxidant activity at the wavelengths used. Chapter seven presents the overall conclusions drawn from the quality control methods proposed for the four medicinal plants studied and Chapter eight describes several possibilities for further research. Finally, the appendix lists the publications resulting from the work carried out.
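The PLS regression underlying the fingerprint-to-activity models of chapter six can be sketched with a minimal PLS1 (NIPALS) implementation on simulated data. The matrix sizes, coefficients, noise level and number of components below are arbitrary assumptions for illustration, not parameters from the thesis:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal PLS1 (NIPALS); returns regression coefficients for centred data."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)       # weight vector
        t = Xk @ w                   # scores
        p = Xk.T @ t / (t @ t)       # X loadings
        q = (yk @ t) / (t @ t)       # y loading
        Xk = Xk - np.outer(t, p)     # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q

# Simulated "fingerprint" data with a linear response
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 20))
beta = np.zeros(20); beta[:3] = [1.0, -0.5, 2.0]
y = X @ beta + rng.normal(0.0, 0.01, 60)

b = pls1(X, y, n_comp=8)
y_hat = (X - X.mean(axis=0)) @ b + y.mean()
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```

In practice the number of latent variables would be chosen by cross-validation, as the RMSECV figure quoted in the abstract suggests.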
Ghiggi, Lisete. "A roda : método de aprendizagem que desafia o individualismo". Faculdades EST, 2008. http://tede.est.edu.br/tede/tde_busca/arquivo.php?codArquivo=97.
Texto completoThis dissertation is about the importance of the circle, which is the name given to a group of persons that base their individual and collective actions on the search for behavioral and social change. Two kinds of circles are discussed here: the World Social Forum, as a space for discussing the concerns that afflict humanity, with the perspective of pointing out ways toward the social well-being of the people; and the Pedagogical Circles, when they are used to promote meaningful learning arising from interactivity among peers, especially in text production and analysis. These two kinds of circles are studied for their potential to strengthen human values and to help in the construction of a more just and cooperative, less individualist and competitive society. In this perspective the circles are seen as ways of countering individualism, in the sense of selfishness or egocentrism. Regarding the World Social Forum, its objectives are pointed out, as well as its importance in the global context as the biggest world circle, based on the slogan "another world is possible". Regarding the circles dedicated to learning, here called pedagogical, the circle method is described; it results from constant practice directed to participative text analysis and production, and reveals its importance as a meaningful way of learning. The results obtained from using the method with a group of university students, based on specific criteria, are also presented. The circle method was evaluated by the people who tried it, through questions that make it possible to observe whether the technique helps teaching and learning, contributes to tightening relationships, and structures a more cooperative and less competitive learning, able to instigate changes in the society in which we live.
The World Social Forum and the circles dedicated to teaching and learning, the small circles as well as the big ones discussed here, are considered pedagogical because they teach, and they have in common cooperation and the challenge of overcoming the individualism that results in the limitless competitiveness arising from neoliberal globalization, or the triumph of capitalism. Structured in this way, they form strengthening nuclei for communities in the face of the individualism observed mainly in western society, which denies any form of ethics, because ethics presupposes the other. What arises at the beginning of this century is a fragile and weakened communitarian life that allows corruption, insecurity and social lack of control. The circles described here constitute models to confront the lack of social arrangement and to strengthen the communities and society as a whole.
Fortiana, Gregori Josep. "Enfoque basado en distancias de algunos métodos estadísticos multivariantes". Doctoral thesis, Universitat de Barcelona, 1992. http://hdl.handle.net/10803/1563.
Texto completoUna de las aplicaciones estadísticas de la Geometría Métrica es la representación de conjuntos, consistente en determinar puntos en un espacio de métrica conocida (frecuentemente euclídea) cuyas distancias reproduzcan exacta o aproximadamente las observadas.
Los Métodos de Regresión y Discriminación basados en Distancias, propuestos por Cuadras, proporcionan predicciones estadísticas aplicando propiedades geométricas de una representación euclídea. Tienen la ventaja de permitir el tratamiento de Variables continuas, cualitativas de tipo nominal y ordinal, binarias y, en general, cualquier mixtura de estas variables.
Esta memoria es una contribución al estudio de estos métodos de predicción basados en distancias. En lo sucesivo emplearemos la abreviatura "DB" para referirnos a estos métodos.
2) Fundamento teórico de la predicción DB
Supongamos que se ha medido una variable respuesta "Y" sobre un conjunto "U" de "n" objetos, definidos por unas coordenadas "Z", y se desea predecir el valor Y(n+l) de esta variable para un nuevo objeto "omega" definido por las coordenadas "Epsilon"(n+1).
Aplicando una función distancia adecuada se obtiene una matriz "delta" de distancias entre los objetos "U", y de ella las coordenadas "X" de los "U" en cierto espacio euclídeo RP. Existe una expresión para las coordenadas euclídeas X(n+l) de "omega".
Si "Y" es continua (regresión DB), la predicción Y(n+l) se obtiene calculando regresión lineal sobre las variables "X" y aplicando a X(n+1) la ecuación de regresión obtenida. Si "Y" es discreta, con estados que equivalen a sub-poblaciones de "U" (discriminación DB), se asigna "omega" a aquella subpoblación para la cual es mínima la distancia euclídea entre su centro de gravedad y X(n+l). Conviene observar que en la práctica no se emplean en general estas construcciones teóricas, sino cálculos equivalentes.
3) La distancia Valor Absoluto
La elección de la función distancia es crítica para estos métodos. Para cada problema concreto se puede elegir una medida de distancia que refleje el conocimiento del modelo.
Existen, sin embargo, algunas medidas de distancia "standard", adecuadas a gran número de problemas. Un caso notable es el de la distancia Valor Absoluto, cuya fórmula se aborda en esta tesis. Se ha observado que da lugar a predicciones excelentes, comparables a las de una regresión no lineal. Uno de los objetivos de este trabajo ha sido precisamente dar una justificación teórica a este buen comportamiento.
En el teorema (2.2.1) se muestra que para todo conjunto "U" de puntos en R(n) existe una configuración de puntos en un espacio euclídeo R(P) que reproduce la matriz de distancias valor absoluto entre los "U".
Seguidamente se realiza el estudio teórico de la estructura de coordenadas principales asociada a esta distancia para "n" puntos sobre la recta real (al ser no bilineal la función distancia, en general "n-1" coordenadas son no triviales).
El caso de puntos equidistantes se resuelve analíticamente, partiendo de una configuración euclídea inicial X(o) (convencional, con el único requerimiento de reproducir las distancias valor absoluto entre los puntos dados), y a partir de ella se obtienen las componentes principales. Las coordenadas principales resultan aplicando a la matriz X(o) la rotación resultante. Este método indirecto es más accesible que el usual para la obtención de Coordenadas Principales.
En el teorema (2.4.1) se expresan los elementos de la columna "j" de la matriz de coordenadas principales como los valores de una función polinómica de grado "j" en unos puntos "z(i)" fijos.
Este teorema se deduce del estudio de una familia paramétrica de matrices cuyo problema de valores y vectores propios se resuelve mediante una ecuación en diferencias. La fórmula de recurrencia se identifica como la de los polinomios de Chebyshev. Empleando propiedades de estos polinomios se llega a expresiones explícitas.
Estas matrices tienen notables propiedades combinatorias. En particular, el teorema (3.3.1) muestra que todos sus vectores propios se obtienen aplicando al primero de ellos potencias de una matriz de permutación con signo.
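The principal-coordinate construction described here can be reproduced numerically for equidistant points on the line: taking |x_i − x_j| as the squared inter-point distance (the convention under which n−1 nontrivial coordinates appear), double centering and diagonalizing yields a configuration that reproduces the distances, with a monotone first coordinate, the "linear" dimension of theorem (2.4.1). A sketch with an arbitrary small n:

```python
import numpy as np

n = 8
x = np.arange(n) / (n - 1)                   # equidistant points on [0, 1]
A = np.abs(x[:, None] - x[None, :])          # squared distances: |x_i - x_j|
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ A @ J                         # double-centred inner products
vals, vecs = np.linalg.eigh(B)
vals, vecs = vals[::-1], vecs[:, ::-1]       # sort descending
X = vecs[:, :n - 1] * np.sqrt(vals[:n - 1])  # n-1 nontrivial coordinates

# The configuration reproduces the absolute-value (squared) distances...
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
print(np.allclose(D2, A))
# ...and the first principal coordinate is monotone in x
d = np.diff(X[:, 0])
print(bool(np.all(d > 0) or np.all(d < 0)))
```

This is the indirect route via classical multidimensional scaling; the thesis obtains the same coordinates analytically through the Chebyshev recurrence.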
Si se dispone de un modelo paramétrico y de una distancia entre individuos estadísticos aplicable a dicho modelo, se puede emplear la versión para variables aleatorias de las funciones discriminantes. La distancia entre individuos más adecuada es la deducida de la Geometría Riemanniana de la variedad de parámetros, que tiene por tensor métrico la "Métrica de Rao".
Se han calculado las funciones discriminantes DB para variables aleatorias que siguen algunas distribuciones conocidas. En particular, de la proposición (5.4.2), para variables multinomiales las funciones discriminantes DB coinciden con el tradicional estadístico Ji cuadrado, y de la (5.4.5), para variables normales con matriz de covarianzas común conocida, las funciones discriminantes DB coinciden con las clásicas (lineales) de Fisher.
4) Representación de Poblaciones
Se propone una distancia entre poblaciones, obtenida como diferencia de Jensen a partir de promedios sobre las distancias entre los individuos. El teorema (5.5.1) permite interpretarla como distancia euclídea entre los centros de gravedad de los puntos que representan los individuos de cada población.
Se demuestra que generaliza la de Mahalanobis, pues coincide con ella en poblaciones normales, si se emplea como distancia entre individuos la deducida de la Geometría Diferencial.
Calculando esta distancia para todos los pares de sub-poblaciones se obtiene una matriz, a la que se aplica Multidimensional Scaling, dando lugar a una representación euclídea que generaliza el Análisis Canónico de Poblaciones clásico; es decir, para poblaciones normales se obtienen los mismos resultados que con dicho análisis. Este método no proporciona regiones de confianza para los valores medios de las poblaciones; se sugiere el empleo de "bootstrap" para dicho cálculo.
5) Aspectos computacionales
Se discuten algunos puntos relevantes de la implementación realizada de los algoritmos DB en los programas MULTICUA®, así como de la estimación "bootstrap" de la distribución de probabilidad de las distancias entre poblaciones, con especial énfasis en las dificultades debidas a las grandes dimensiones de los objetos tratados.
6) Arbitrary points on a line
In this case a qualitative description of the principal coordinates is obtained, which still allows the first coordinate to be described as a linear dimension, the second as a quadratic dimension, the third as a cubic dimension, and so on.
Proposition (4.1.1) reduces the problem to the study of the sign changes of the components of the eigenvectors of a matrix "C". In (4.1.2) it is shown that "C" is oscillatory, a property equivalent to having all minors non-negative. A theorem of Gantmacher on oscillatory matrices then yields the description of the signs.
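This sign-change pattern is easy to verify numerically. The sketch below takes the simplest case, equidistant points with the absolute-value dissimilarity double-centred as in metric scaling (an assumption consistent with the DB construction), and counts sign changes of the leading eigenvectors:

```python
import numpy as np

n = 20
x = np.arange(n, dtype=float)                # equidistant points on the line
Delta = np.abs(x[:, None] - x[None, :])      # absolute-value dissimilarities
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ Delta @ J                     # double-centred inner-product matrix
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1]                  # eigenvectors by decreasing eigenvalue

def sign_changes(v):
    s = np.sign(v)
    return int(np.sum(s[:-1] * s[1:] < 0))

# the first coordinate behaves like a linear dimension (1 sign change),
# the second like a quadratic one (2 changes), the third like a cubic (3)
changes = [sign_changes(V[:, order[j]]) for j in range(3)]
```

For this configuration the eigenvectors are sampled cosines, so the j-th coordinate oscillates exactly j times, matching the qualitative description above.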
7) Principal coordinates of a uniform random variable
The technique used to obtain the principal coordinates of a discrete one-dimensional set of points generalizes to a continuous uniform distribution on the interval (0,1). The starting "Euclidean configuration" is a stochastic process with continuous parameter. The computation of principal components is replaced by the computation of the eigenfunctions of the covariance function of the process, and from these a (countable) sequence of centred random variables C(j) is obtained.
In (4.2.1) it is shown that these variables are uncorrelated and identically distributed, with a summable sequence of variances whose sum equals the "total variability" of the process (the trace of the kernel); hence it is appropriate to call them the "principal coordinates of the uniform random variable".
Applying the DB prediction scheme to this model of principal coordinates, a measure of goodness of fit of a sample to a given distribution is proposed.
8) DB discriminant analysis
The DB discriminant functions described above can be obtained directly from the elements of the distance matrix, without any diagonalization, as follows from (5.2.1) and (5.2.2). The computation is therefore fast and effective.
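As an illustrative sketch (not the thesis' own code), a distance-based classification score can be written directly from squared distances, with no eigendecomposition. The toy two-group data below are invented, and the Euclidean distance is used so that the identity with the squared distance to the group mean can be checked:

```python
import numpy as np

def db_score(d2_to_group, D2_within):
    """DB proximity of a new observation to a group, computed only from squared
    distances: mean d^2(new, i) minus half the mean within-group squared distance.
    For a Euclidean-embeddable distance this equals the squared distance to the
    group's centroid in the latent configuration."""
    return d2_to_group.mean() - 0.5 * D2_within.mean()

def sqdist(U, V):
    return ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)

rng = np.random.default_rng(0)
Ga = rng.normal(0.0, 1.0, size=(30, 2))      # toy group A
Gb = rng.normal(4.0, 1.0, size=(30, 2))      # toy group B
xnew = np.array([3.5, 3.8])                  # new observation, near group B

score_a = db_score(sqdist(xnew[None, :], Ga)[0], sqdist(Ga, Ga))
score_b = db_score(sqdist(xnew[None, :], Gb)[0], sqdist(Gb, Gb))
label = "B" if score_b < score_a else "A"
```

The observation is assigned to the group with the smallest score, which here reduces to the nearest group mean because the distance is Euclidean.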
Distance-based (DB) regression and discrimination methods, proposed by Cuadras, give statistical predictions by exploiting geometrical properties of a Euclidean representation obtained from distances between observations. They are well suited to mixed variables.
Choice of a suitable distance function is a critical step. Some "standard" functions, however, fit a wide range of problems, particularly the absolute-value distance.
This is explained by showing that, for "n" equidistant points on the real line, the elements in the "j"-th row of the principal-coordinate matrix are values of a polynomial function of degree "j". For arbitrary one-dimensional sets of points a qualitatively analogous result holds.
Using results from the theory of random processes, a sequence of random variables is obtained from a continuous uniform distribution on the interval (0, 1). Their properties show that they deserve the name of "principal coordinates". The DB prediction scheme then provides a goodness-of-fit measure.
DB discriminant functions are evaluated from distances between observations. They have a simple geometrical interpretation in the Euclidean representation of data.
For parametric models, distances can be derived from the differential geometry of the parametric manifold. Several DB discriminant functions are computed using this approach. In particular, for multinomial variables they coincide with the classical Pearson chi-squared statistic, and for normal variables, Fisher's linear discriminant function is obtained.
A distance between populations generalizing Mahalanobis' is obtained as a Jensen difference from distances between observations. It can be interpreted in terms of the Euclidean representation. Using Multidimensional Scaling, it originates a Euclidean representation of populations which generalizes the classical Canonical Analysis.
Several issues concerning the implementation of the DB algorithms are discussed, especially the difficulties related to the huge dimension of the objects involved.
Candela, Soto Angélica María. "Desarrollo y caracterización de métodos de separación y preconcentración de Uranio (VI) a nivel de trazas para su efectiva determinación". Doctoral thesis, Universitat Autònoma de Barcelona, 2013. http://hdl.handle.net/10803/131278.
In this study, separation, preconcentration and determination methods for uranium (VI) at trace levels have been developed. The design of a liquid-liquid extraction system for uranium (VI) is the first step towards other separation systems. For this purpose, different extracting solutions were tested: tri-n-butyl phosphate (TBP), methyltrioctylammonium chloride (Aliquat 336) and bis(2-ethylhexyl) phosphoric acid (D2EHPA). Among them, D2EHPA is the most effective extracting agent for the uranyl ion under the same working conditions. A Doehlert experimental design was then developed, allowing the chemical conditions of the liquid-liquid separation system to be optimized for both the extraction and the recovery of U(VI). The optimal chemical values established by the model, relating to the concentrations of the extracting agent and the recovery agents, are called System 1 and System 2. These conditions were applied to the separation and preconcentration of U(VI) by means of membrane systems, including bulk liquid membranes (BLM), supported liquid membranes (SLM) and polymeric membranes modified with D2EHPA (PMM), D2EHPA being the carrier agent in all the membrane systems. In the case of the SLM, two types of polymeric supports were tested, a commercial polyvinylidene fluoride (PVDF) and a polysulfone (PSf) synthesized in the laboratory, as well as the PMM. System 2 and the PVDF supports show good effectiveness for uranyl-ion separation, also from an economic point of view (a lower amount of reagent and shorter experiment times, respectively). The commercial and homemade polymeric membranes were characterized by different spectroscopic and surface methods, allowing their morphologies to be compared and their effectiveness in the separation and preconcentration of the ion of interest to be evaluated.
For the determination of uranium (VI), various methods have been developed and applied, such as alpha spectrometry and inductively coupled plasma mass spectrometry (ICP-MS); surface-enhanced Raman spectroscopy (SERS) was additionally tested. For the latter, the application of different nanoparticle systems as SERS surfaces was studied. These nanoparticle surfaces interact appropriately with the uranyl ion, allowing its enhanced determination. The systems studied as SERS surfaces were: gold nanoparticles (AuNPs), gold nanoparticles modified with aminomethyl phosphonic acid (AuNPs-APA), anionic resins modified with AuNPs-APA (Resina-AuNPs-APA), silver nanoparticle mirrors (AgNPs), and silver nanoparticles synthesized in a functionalized polymeric matrix, sulfonated poly(ether ether ketone) (SPEEK-AgNPs). Good signals were found with SPEEK-AgNPs, and the best results were obtained with the AgNPs mirrors.
Porcel, García Marta. "Aplicación de técnicas quimiométricas para el desarrollo de nuevos métodos cinético-espectrofotométricos de análisis". Doctoral thesis, Universitat Autònoma de Barcelona, 2001. http://hdl.handle.net/10803/3118.
After an introduction to multivariate calibration and to kinetic methods, a critical review of the work published on the subject in recent years is presented. Five studies are then described that apply the chemometric techniques mentioned above to the development of new kinetic-spectrophotometric methods of analysis.
Simultaneous determination of methanol and ethanol mixtures
Mixtures of methanol and ethanol were resolved using an enzymatic method with ANNs as the multivariate calibration method. The chemical system uses two coupled reactions: the first, enzymatic, in which the enzyme alcohol oxidase oxidizes the primary alcohols to aldehydes; and the second, the indicator reaction, in which p-phenylenediamine is transformed into a mixture of products (Bandrowski's base and p-quinone) through the catalytic action of the aldehydes formed in the first reaction. The high complexity of the system under study calls for non-linear calibration methods, which give better results than those obtained with PCR. The optimal ANN allows both components to be quantified in mixtures with ethanol:methanol ratios ranging from 20:1 to 400:1, with relative standard errors of prediction of about 5% for both analytes.
Simultaneous determination of enantiomers using kinetic methods
The chiroptical technique of circular dichroism is used as the recording system for resolving mixtures of the enantiomers of 1-phenylethylamine. The method is based on the different reaction rates of the two enantiomers with a chiral reagent, (-)-citronellal, when working far from the pseudo-first-order conditions usual in this type of reaction. The same reaction was also monitored under pseudo-first-order conditions using UV-Vis spectroscopy, and the results from the two techniques were compared using the calibration methods PCR, PLS and ANN. The best results were obtained by reducing the variables recorded by circular dichroism through principal component analysis and using the scores as inputs to the ANN. The relative standard error of prediction was about 3% for both enantiomers.
Evaluation of two- and three-way multivariate calibration methods in differential kinetic analysis
The two-way techniques MLR, PCR, PLS and CR and some three-way methods such as PARAFAC and N-PLS are evaluated as calibration methods for the determination of ternary mixtures in a pseudo-first-order kinetic system. The calibration methods were first applied to simulated kinetic-spectrophotometric data, where the effects of spectral overlap and of differences in the rate constants were evaluated while keeping instrumental noise and rate-constant fluctuations constant and at low levels. They were then applied to the resolution of Co-Ni and Co-Ni-Ga mixtures using the complexing reagent PAR and a stopped-flow mixing system. Relative standard errors of about 8% were obtained even with high spectral overlap and very similar rate constants. The study of the influence of experimental noise on the three-component system explains the differences between the simulations and the experimental results. PARAFAC and MLR do not allow the three-component system to be resolved, and CR gives slightly better results than PCR, PLS and N-PLS.
Selection of wavelength and time ranges for the calibration of a two-component kinetic system
A method is proposed for selecting the best wavelength and time ranges in a two-component kinetic-spectrophotometric system. It is based on finding the ranges that give the minimum spectral and kinetic correlation between the reaction products of the two analytes. The method was first applied to simulated data and then to the system formed by dyphylline and proxyphylline, characterized by the very similar kinetic behaviour of the two species under pseudo-first-order conditions and by the high spectral overlap of the reaction products. Satisfactory results were nevertheless obtained for both analytes using PLS regression as the calibration method. The standard error of prediction and the standard deviation between replicates showed no significant differences across the models obtained, being about 4% and 3% for dyphylline and proxyphylline, respectively.
Simultaneous determination of methylxanthines in a pharmaceutical preparation
A method similar to that of the previous study is proposed for the simultaneous determination of theophylline, dyphylline and proxyphylline in a commercial pharmaceutical preparation, using PLS regression as the calibration technique. The results were satisfactory and were compared with those obtained by HPLC, used as the reference method.
From the results obtained in all the studies presented, it can be concluded that kinetic methods, together with multivariate calibration, can be used to resolve mixtures of very similar chemical nature, such as enantiomers, or mixtures that differ only in their kinetic behaviour, as is the case with catalysts. Among the calibration methods, PLS regression stands out for linear systems and ANNs for clearly non-linear ones, their optimization being a key step for correct application and good results. Simulations are an important tool for observing general trends in the behaviour of calibration methods in kinetic-spectrophotometric systems. The three-way calibration techniques (N-PLS and PARAFAC) do not give satisfactory results in situations of high spectral overlap and similar kinetics between the species analysed simultaneously.
The application of kinetic methods of analysis to the simultaneous determination of several analytes has grown in recent years as a result of modern computerized data-acquisition systems and the development of powerful mathematical treatments for processing the recorded information. The kinetic methods used in this thesis are based on the spectrophotometric recording, at several wavelengths, of the analytical signal obtained during a chemical reaction. The resulting information, with a three-dimensional structure (wavelength, time, sample), has been processed using multivariate calibration methods in order to resolve mixtures of compounds of very similar chemical nature. Linear methods such as multiple linear regression (MLR), principal component regression (PCR), partial least squares regression (PLS) and continuum regression (CR) were applied to first- and second-order data. Three-way methods such as PARAFAC and N-PLS were used for second-order data. The advantage of all these methods over classical kinetic methods is that they do not require prior knowledge of the kinetic model followed by the analytes of interest. When the system shows non-linear behaviour (e.g. interactions between analytes), non-linear multivariate calibration methods are needed; in this thesis, artificial neural networks (ANN) were used.
After an introduction to multivariate calibration and kinetic methods, a review of recently published papers is given. Next, five works that develop new kinetic-spectrophotometric methods of analysis, applying the chemometric techniques mentioned above, are presented.
Simultaneous determination of methanol and ethanol mixtures.
Binary mixtures of ethanol and methanol were resolved by use of an enzymatic method with artificial neural network (ANN) methodology for multivariate calibration. The chemical system involves two coupled reactions, viz. the oxidation of the primary alcohols to the corresponding aldehydes in the presence of alcohol oxidase, and the oxidation of p-phenylenediamine to a mixture of two products (Bandrowski's base and p-quinone) by hydrogen peroxide, catalysed by the previously formed aldehydes. The high complexity of the system studied entails the use of this non-linear calibration methodology, which provides significantly improved results relative to principal component regression (PCR), used for comparison. The optimized ANN allows the quantitation of both mixture components at ethanol-to-methanol mole ratios from 20:1 to 400:1, with relative standard errors of prediction in the region of 5% for both analytes.
Simultaneous determination of enantiomers by using kinetic methods.
The circular dichroism technique was used to resolve 1-phenylethylamine enantiomers. The method is based on their different reaction rates with a chiral reagent, (-)-citronellal, under non-pseudo-first-order conditions. The same reaction was also monitored under pseudo-first-order conditions using UV-Vis spectrophotometry, and the results provided by the two techniques were compared using the multivariate calibration methods PCR, PLS and ANN. The best results were obtained by compressing the data matrix provided by the CD technique with principal component analysis (PCA) and using the scores as inputs for the ANN. The relative standard error of prediction thus obtained was about 3% for both enantiomers.
Evaluation of bi- and three-way multivariate calibration procedures in differential kinetic analysis.
The two-way multivariate regression procedures MLR, PCR, PLS and CR, and several n-way methods such as PARAFAC and N-PLS, are tested as calibration methods for the determination of ternary mixtures in a pseudo-first-order kinetic system. The calibration procedures were first applied to computer-simulated kinetic-spectrophotometric data, where the effects of spectral overlap and of differences in the kinetic constants were evaluated at low levels of instrumental noise and rate-constant fluctuation. They were then applied to the resolution of Co-Ni and Co-Ni-Ga mixtures using a stopped-flow mixing system. Accurate concentration estimates, with relative standard errors of prediction of about 8%, were obtained even in the presence of a high degree of spectral overlap and similar rate constants. The study of the influence of experimental noise on the three-component system explains the differences between the simulations and the experimental results. PARAFAC and MLR did not allow the resolution of the proposed three-component system; CR provided slightly better results than PCR, PLS and N-PLS.
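As a hedged illustration of this kind of calibration on simulated pseudo-first-order data, the sketch below implements only PCR (via an SVD); the spectra, rate constants and noise levels are invented for the example and are not the thesis' data:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0, 25)                    # reaction times
w = np.linspace(0.0, 1.0, 30)                    # "wavelength" axis
k = np.array([1.0, 1.4, 2.0])                    # similar rate constants
# Gaussian spectra of the three reaction products (invented)
S = np.exp(-0.5 * ((w[None, :] - np.array([[0.3], [0.5], [0.7]])) / 0.15) ** 2)

def signal(c):
    """Pseudo-first-order product growth c * (1 - exp(-k t)), times the spectra."""
    P = c[:, None] * (1.0 - np.exp(-k[:, None] * t[None, :]))  # 3 x times
    return (P.T @ S).ravel()                                   # time x wavelength

C = rng.uniform(0.2, 1.0, size=(60, 3))
X = np.array([signal(c) for c in C]) + rng.normal(0.0, 1e-3, (60, t.size * w.size))

# principal component regression: project onto the first q principal directions
Xc, Cc = X - X[:40].mean(0), C - C[:40].mean(0)
U, s, Vt = np.linalg.svd(Xc[:40], full_matrices=False)
q = 3
T = Xc @ Vt[:q].T                                # scores for all samples
Bcoef = np.linalg.lstsq(T[:40], Cc[:40], rcond=None)[0]
pred = T[40:] @ Bcoef + C[:40].mean(0)
rsep = np.sqrt(np.mean((pred - C[40:]) ** 2))
```

Because the simulated signal is linear in the concentrations, three components suffice and the prediction error stays close to the noise level.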
Selection of wavelength and time ranges for calibration in a two-component kinetic system
A method is proposed for selecting the best wavelength and time ranges to be used in a kinetic-spectrophotometric system of binary mixtures. It is based on finding the ranges that provide the least spectral and kinetic correlation between the reaction products of the two analytes. The method was first applied to simulated data and subsequently to the resolution of dyphylline and proxyphylline mixtures. The system studied was characterized by a high similarity in the kinetic behaviour of the analytes under pseudo-first-order conditions and a high degree of overlap between the spectra of the reaction products. In spite of this, satisfactory results were obtained in the quantification of the two analytes using PLS regression. The standard error of prediction (SEP) and the standard deviation between replicates (SDBR) showed no significant differences, being of the order of 4% and 3% for dyphylline and proxyphylline, respectively.
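The window-selection idea can be sketched as follows; the exponential product profiles, rate constants and window width are invented for the illustration and only the kinetic (time-axis) correlation is scanned:

```python
import numpy as np

t = np.linspace(0.0, 3.0, 100)
p1 = 1.0 - np.exp(-1.0 * t)        # product profile of analyte 1 (pseudo first order)
p2 = 1.0 - np.exp(-1.3 * t)        # analyte 2, deliberately similar kinetics

def window_corr(a, b, i, j):
    """Absolute kinetic correlation of the two product profiles on window [i, j)."""
    return abs(np.corrcoef(a[i:j], b[i:j])[0, 1])

# scan all contiguous windows of fixed width and keep the least-correlated one
width = 30
starts = range(0, t.size - width)
best = min(starts, key=lambda i: window_corr(p1, p2, i, i + width))
best_corr = window_corr(p1, p2, best, best + width)
```

In a full two-axis version the same scan would also run over wavelength windows, minimizing the spectral correlation of the product spectra.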
Simultaneous determination of methylxantines in a pharmaceutical preparation
A method similar to the previous one is proposed for the determination of theophylline, dyphylline and proxyphylline using PLS regression. It was satisfactorily applied to the determination of the three compounds in a pharmaceutical preparation and provided results similar to those obtained by an HPLC procedure.
Taking into account the results obtained in all these works, it can be concluded that kinetic methods together with multivariate calibration can be used to resolve mixtures of very similar chemical structure, such as enantiomers, or mixtures differing only in their kinetic behaviour, as is the case with catalysts. Among calibration methods, PLS regression stands out for linear systems and ANNs for non-linear ones, their optimization being the key step for correct application and good results. Simulations are an important tool for observing general trends in the behaviour of calibration methods in kinetic-spectrophotometric systems. Three-way calibration methods (N-PLS and PARAFAC) do not provide satisfactory results when the analysed species show high spectral overlap and similar kinetic behaviour.
Gándara, Fierro Guillermo. "Teoría y aplicaciones de corrección de sesgos para métodos de valoración ambiental". Doctoral thesis, Universitat Autònoma de Barcelona, 2002. http://hdl.handle.net/10803/3987.
Regarding starting-point bias, Chapter 2 presents a theoretical and empirical development of the design of mechanisms to determine starting prices that correct this bias ex ante. Differences between valuation-question formats (open and mixed) are tested, and the proposed mechanisms are compared with the designs traditionally used to set the prices. To test the robustness of the correction mechanisms proposed in the theoretical part, a Monte Carlo simulation experiment is carried out, together with a contingent-valuation exercise applied to the valuation of externalities associated with urban solid-waste management.
Regarding strategic bias, Chapter 3 focuses on the incentives that lead individuals to reveal, or not, their true willingness to pay (strategic behaviour). It is argued that there are valuation-question formats that are not manipulable and are therefore free of strategic bias. Once the theoretical framework is defined, a laboratory experiment shows, on the one hand, that the open question format with a median-based decision rule is not manipulable and, on the other, that the open format with a mean-based decision rule is manipulable.
In Chapter 4, a Monte Carlo simulation exercise compares parametric and non-parametric estimates of willingness to pay with a real willingness to pay. Three functional forms are defined for the real willingness to pay: normal, Weibull and lognormal, and several designs for setting the bid prices are tested. Here the thesis contributes results on sample size and on the design of starting prices in the framework of non-parametric estimation.
The objective of the thesis is to study the detection and correction of certain biases in CVM exercises, specifically the starting-point bias in the closed-open-ended format and the strategic bias in open-format applications. Estimation bias is also studied, analysing the differences between estimating the results of the closed format with a non-parametric approach and with a parametric approach.
The proposal of the thesis in Chapter 2 is a change in the treatment of starting-point bias. A new design of mechanisms to determine the bids in the closed-open-format valuation question, correcting the starting-point bias ex ante, is proposed, applied and verified. These designs are called sequential because they follow a self-adapting process; their novelty is that they are defined as the survey progresses. It is also verified that these mechanisms retain the advantages of the closed-open format over the open format. The sequential designs are applied and verified in a Monte Carlo simulation exercise, and the different designs are applied to the valuation of externalities associated with municipal solid waste (RSU) incineration and disposal, as well as to the PMGRM of the Barcelona Metropolitan Area.
The potential manipulability of CVM can lead to biased results if strategic behaviour exists. However, according to economic theory, there are eliciting question formats that are not manipulable. The objective of Chapter 3 is to confirm in practice that an open-ended format with a median rule is strategy-proof, and to compare it with a manipulable mean-rule format. The information for the tests comes from a laboratory experiment. The results indicate that a greater proportion of people state their true WTP when information on the non-manipulability of the median rule is provided, compared with the other options considered.
In Chapter 4, WTP estimated non-parametrically and parametrically is compared, and each of these estimates is compared with the real WTP, through a Monte Carlo simulation exercise. Three functional forms for WTP* (the real WTP) have been considered: normal, Weibull and lognormal. In the non-parametric approach three values are calculated, maximum (Paasche), intermediate and minimum (Laspeyres), while the parametric approach uses a logit model which, although not the model corresponding to each of the three functional forms of WTP*, is the one most frequently used in this type of exercise. The bids are determined according to three different designs: one by percentiles, following the suggestions of Alberini and Carson (1993) for parametric estimation; a systematic one following McFadden (1994) for non-parametric estimation; and finally a design with random bids.
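A minimal numpy sketch of the Laspeyres-type (lower-bound) non-parametric estimate from simulated dichotomous-choice data; the bids, sample size and the lognormal WTP* are assumptions made for the illustration, not the thesis' actual designs:

```python
import numpy as np

rng = np.random.default_rng(3)
true_wtp = rng.lognormal(mean=2.0, sigma=0.5, size=5000)   # simulated "real" WTP*
bids = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
shown = rng.choice(bids, size=true_wtp.size)               # one bid per respondent
yes = true_wtp >= shown                                    # dichotomous-choice answer

# survival (yes-proportion) at each bid, made monotone by a cumulative minimum
S = np.minimum.accumulate(np.array([yes[shown == b].mean() for b in bids]))

# Laspeyres-type lower bound on mean WTP: right-endpoint sum under the
# decreasing step function, truncated at the largest bid
edges = np.concatenate(([0.0], bids))
lower = float(np.sum(S * np.diff(edges)))
```

Because the survival curve is decreasing and the tail beyond the largest bid is discarded, this estimate bounds the true mean WTP from below; a Paasche-type upper bound would use the interval right endpoints plus an assumption about the tail.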
Penteado, Ricardo Batista. "Utilização de técnicas do planejamento de experimentos na otimização de um processo de torneamento da superliga NIMONIC 80A /". Guaratinguetá : [s.n.], 2011. http://hdl.handle.net/11449/93083.
Abstract: Nickel-based alloys have a chemical composition with a high content of alloying elements, which are responsible for their mechanical and thermal properties. Their widespread use is mainly due to their performance at high temperatures. The purpose of this work is to study the machining process in external cylindrical turning of the nickel-based alloy Nimonic 80A, using design-of-experiments tools such as the Taguchi method and response surface methodology to assist in solving multiple-response machining problems, aiming to determine the best settings among the studied factors for the response variables surface roughness and cutting length. The machining tests were performed on a CNC lathe, considering the following machining parameters: cutting speed (75 and 90 m/min), depth of cut (0.8 and 1.6 mm) and feed rate (0.12 and 0.18 mm/rev); TP2500 and CP250 tools; a hot-rolled and annealed Nimonic 80A workpiece; and lubricant applied either as minimal quantity of fluid (MQF) or in abundance. The whole process was conducted in cycles, each cycle ending when the maximum feed length (Lf) was reached. After each turning step, tool wear and part roughness were measured. It was observed that the feed-rate factor had the greatest significance with respect to roughness and cutting length, leading to the conclusion that the lower the feed rate, the lower the roughness values and the higher the cutting length obtained.
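The main-effects computation behind such a factorial analysis can be sketched as follows; the response coefficients and noise level are invented so that feed dominates, mirroring the study's conclusion, and the code is not taken from the thesis:

```python
import itertools
import numpy as np

# 2^3 full factorial in coded units: cutting speed, depth of cut, feed rate
runs = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
rng = np.random.default_rng(4)
# hypothetical roughness response in which feed (3rd factor) dominates
Ra = (2.0 + 0.10 * runs[:, 0] + 0.15 * runs[:, 1] + 0.90 * runs[:, 2]
      + rng.normal(0.0, 0.02, size=8))

# main effect of each factor: mean response at +1 minus mean response at -1
effects = {name: Ra[runs[:, j] > 0].mean() - Ra[runs[:, j] < 0].mean()
           for j, name in enumerate(["speed", "depth", "feed"])}
dominant = max(effects, key=lambda k: abs(effects[k]))
```

In a real study these effects would be compared against an error estimate (e.g. via ANOVA) before declaring a factor significant.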
Advisor: Messias Borges Silva
Co-advisor: Marcos Valério Ribeiro
Committee member: Marcela Aparecida G. Machado de Freitas
Committee member: Rosinei Batista Ribeiro
Master's
Garcia, Capdevila Javier. "Síntesis de cerámicos tecnológicos mediante métodos de combustión de geles de acrilamida". Doctoral thesis, Universitat de Barcelona, 2007. http://hdl.handle.net/10803/1076.
Acrylamide gel combustion methods are used for the synthesis of several families of oxides: magnetic, superconducting, ionic and mixed conductors, etc. Starting from a solution gives the resulting products great structural and compositional homogeneity, while acrylamide gel combustion proves to be an agile, versatile and robust synthesis method, allowing relatively large batches to be produced in a short time.
In this work, the method itself is first studied to determine which factors are significant in the synthesis, using design of experiments to maximize the information that can be extracted from the system with the smallest possible number of experiments. Acrylamide gel combustion is then applied to the synthesis of various oxides: ionic conductors, mixed conductors, magnetic oxides, magnetoresistive oxides and superconducting oxides.
Later chapters study synthesis by combustion of polyacrylamide gels as an alternative that avoids the toxicity of acrylamide.
Finally, a spray-pyrolysis device for continuous synthesis at pilot scale is described.
Concerning oxide synthesis, obtaining nanoparticles is an interesting alternative to traditional synthesis methods, because the increase in specific surface area improves sintering and other surface-related properties.
We used acrylamide gel combustion methods to obtain different materials such as superconducting oxides, ionic and mixed conductors, magnetic spinels, etc. Starting from a solution provides higher chemical and structural homogeneity than traditional routes, while the combustion scheme enables large-batch production.
In this work we used design of experiments to determine which of all the possible variables are statistically relevant with the minimum number of experiments. The conclusions of this study were applied to the synthesis of metallic oxides of technological interest: ionic conductors, mixed conductors, magnetic oxides, magnetoresistive oxides and superconducting oxides.
In further chapters we try to avoid acrylamide toxicity by using polyacrylamide as the gel former.
Finally, we designed and tested a spray-pyrolysis device as a way to scale the process up to pre-industrial level.
GURGEL, Ana Pavla Almeida Diniz. "A importância de Plectranthus amboinicus (Lour.) Spreng como alternativa terapêutica métodos experimentais". Universidade Federal de Pernambuco, 2007. https://repositorio.ufpe.br/handle/123456789/3510.
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Plectranthus amboinicus (Lour.) Spreng (Lamiaceae) is a herbaceous plant native to East Asia and distributed throughout the Americas. In Brazil it is known as hortelã da folha grossa, hortelã da folha graúda or malvariço, and worldwide as oregano, and it is used in folk medicine as an analgesic, anti-inflammatory and antimicrobial. This research evaluated the phytochemical profile and the toxicological, microbiological and pharmacological actions (antineoplastic and anti-inflammatory activity, in rodents) of the residue of the hydroalcoholic extract of P. amboinicus leaves. The phytochemical study revealed the presence of flavonoids (quercetin, luteolin, rutin), terpenes, cinnamic derivatives, monoterpenes (carvacrol, thymol), triterpenes (sitosterol and amyrin) and steroids. In the acute toxicity assay of the extract, stimulant reactions followed by depressant reactions on the central nervous system were observed. There were no deaths at any of the doses tested (the highest being 3,800 mg/kg intraperitoneally and 4,000 mg/kg orally), ruling out a possible LD50. The microbiological assays showed a strong antimicrobial action of the extract against gram-positive bacteria, mainly methicillin-resistant Staphylococcus aureus (MRSA), with minimum inhibitory concentrations between 18.7 and 9.3 mg/ml. The action of the extract on bacterial kinetics (kill curve) suggests both bactericidal and bacteriostatic action, depending on the extract concentration. No action of the extract was observed on gram-negative bacteria or yeast-like fungi. Anti-inflammatory activity was assessed using the carrageenan-induced paw oedema model, administering increasing doses of 150, 250 and 350 mg/kg of the extract orally and intraperitoneally.
In the oral anti-inflammatory assays no reduction of the oedema was observed, but with intraperitoneal administration significant reductions of the oedema were found at all doses tested. In the antineoplastic assays, the Sarcoma 180 and Ehrlich carcinoma experimental models were used. In the groups treated with the extract, reductions in mean tumour weight were found at all doses administered intraperitoneally (100, 150, 250 and 350 mg/kg), except for the 350 mg/kg dose in the Ehrlich carcinoma model, which showed a carcinogenic action.
Rojas, Tarazona Fredy E. "Análisis microestructural de materiales fotovoltaicos mediante métodos ópticos y microscopía electrónica". Doctoral thesis, Universitat de Barcelona, 2014. http://hdl.handle.net/10803/285675.
The framework of this doctoral thesis is the optical and structural modelling of semiconductor thin films, particularly transparent conducting oxides such as ZnO and ZnO:Al, as well as p-type and n-type doped layers of µc-Si:H, all of which are used as layers in a-Si:H and µc-Si:H p-i-n solar cells. Using an effective medium approximation (EMA) optical model on transmittance and reflectance measurements, the microstructure of the aforementioned materials has been characterized. The results have been contrasted with those obtained by morphological and structural characterization techniques such as atomic force microscopy (AFM), Raman spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Finally, with the experience gained in the microstructural analysis of the samples, the modelling of a complete p-i-n solar cell deposited on a commercial rough substrate was performed. This doctoral thesis is divided into four sections. In the first section the theoretical bases of the optical models used in this work are presented, together with the deduction of the optical and structural parameters of thin films from simple optical methods. These methods are based on measurements of spectral transmittance and reflectance and on the fitting of the experimental data. This is accomplished using different approaches, such as the addition method applied to homogeneous and heterogeneous thin films and, finally, a matrix method combined with an effective medium approximation to model the microstructure of the semiconductor thin films. Two models are then presented that allow roughness to be treated in the microstructural characterization of thin films: the first is used for very thin layers or layers with an rms roughness σrms below 10-15 nm, while the second applies to two media separated by a rough interface with 15 nm ≤ σrms ≤ 50 nm.
The second section deals with the microstructure of series of ZnO thin films deposited on Corning 1737F glass substrates at substrate temperatures ranging from room temperature to 200°C, and of series of layers deposited at 100°C but with different Al2O3 doping concentrations in the range 0.5-2 wt%. The microstructural analysis of this group of thin films, with thicknesses of 300-500 nm, was carried out from the calculation of the optical constants of the material, using the Sellmeier model to compute the refractive index n over the whole wavelength range. The absorption coefficient α was calculated using the Tauc relation in the high-absorption region. With the values of n, α and d, the experimental T(λ) and R(λ) spectra can be simulated. The fit is further improved by using an effective medium approximation (EMA) model with the addition of a very thin surface layer, intended to simulate the surface roughness of the films; this surface layer is formed by a mixture of voids and the material with the previously calculated optical constants. This study has been correlated with the results obtained by morphological and structural techniques such as AFM, XRD and TEM. The third section deals with the microstructural properties of very thin p-type and n-type doped layers of microcrystalline silicon (µc-Si:H), deposited at low substrate temperature by HW-CVD and suitable for use in p-i-n solar cells based on a-Si:H and µc-Si:H. This study was conducted by applying an EMA model and was subsequently validated with TEM and HRTEM measurements. The thickness range studied was 20-60 nm, and all the thin films were microstructurally characterized using a three-layer model (surface layer, bulk zone and incubation zone); the results were also correlated with AFM, Raman spectroscopy and TEM measurements.
From the correlation between the techniques it was concluded that the optical model reliably reproduces the microstructure of a thin layer regardless of its thickness. The fourth section presents the optical modelling of a multilayer structure, in this case an a-Si:H p-i-n solar cell, using an EMA model based on measurements of spectral transmittance and reflectance. This process was developed in six steps: first, the individual thin layers deposited on flat and on rough substrates were microstructurally characterized; then the layers that make up the cell were added one by one and studied case by case until the complete solar cell was obtained. Finally, some features of the microstructure of the layers constituting the photovoltaic device, such as thickness, crystallinity, replication and propagation of the surface roughness, and structural defects typical of the deposition process, were analysed by contrasting the microstructures obtained from the optical model with microscopy techniques such as SEM and TEM.
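The two spectral analyses named in this abstract, Sellmeier dispersion for n and Tauc extrapolation for the optical gap, can be sketched briefly. The coefficients and the synthetic absorption edge below are illustrative, not fitted values from the thesis:

```python
import numpy as np

def sellmeier_n(lam_um, B=2.60, C=0.021):
    """One-term Sellmeier dispersion n(lambda); B, C are illustrative
    coefficients (not fitted ZnO values), wavelength in micrometres."""
    lam2 = lam_um ** 2
    return np.sqrt(1.0 + B * lam2 / (lam2 - C))

def tauc_gap(hv_eV, alpha, fit_window):
    """Direct-gap Tauc analysis: fit (alpha*hv)^2 vs hv in the high-absorption
    window and return the x-intercept, i.e. the optical gap Eg."""
    y = (alpha * hv_eV) ** 2
    m = (hv_eV >= fit_window[0]) & (hv_eV <= fit_window[1])
    slope, intercept = np.polyfit(hv_eV[m], y[m], 1)
    return -intercept / slope

# synthetic direct-gap absorption edge with Eg = 3.3 eV (ZnO-like)
hv = np.linspace(3.0, 3.8, 200)
alpha = np.sqrt(np.clip(hv - 3.3, 0.0, None) * 1e10) / hv
Eg_est = tauc_gap(hv, alpha, (3.35, 3.7))
```

On this synthetic edge the fit recovers the 3.3 eV gap; with measured spectra the fit window must be restricted to the linear high-absorption region, as the abstract indicates.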
Olinto, Cláudio Rodrigues. "Estudo experimental das características do escoamento turbulento nas primeiras fileiras de bancos de tubos". Biblioteca Digital de Teses e Dissertações da UFRGS, 2005. http://hdl.handle.net/10183/5718.
Jardim, Renato de Figueiredo. "O método da contra corrente : extensão e resultados experimentais". [s.n.], 1986. http://repositorio.unicamp.br/jspui/handle/REPOSIP/277510.
Master's dissertation, Universidade Estadual de Campinas, Instituto de Física Gleb Wataghin
Abstract: This work discusses experimental methods used to determine the electrical resistivity of metals. The conventional method (four-point or potentiometric method) is discussed briefly, with greater attention given to the inductive methods: the rotating-sample method, the mutual-impedance method and, in particular, the counter-current method. The latter is studied from a new perspective, without neglecting the magnetic behavior of the metal, and is extended to samples with a hollow circular-cylinder geometry in two distinct situations: driving the primary coil with a current source and with a voltage source. To evaluate this procedure, three different coil systems were built, and electrical resistivity measurements at room temperature and at liquid-nitrogen temperature were performed on samples of niobium purified by electrolysis in molten salts, copper, aluminum, brass and bronze. These results are compared with measurements made by the conventional method, agreeing within a maximum error of 5%.
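The conventional four-point (potentiometric) method that this abstract contrasts with the inductive ones reduces to a simple relation; a minimal sketch with hypothetical sample dimensions (not values from the thesis):

```python
def resistivity_four_point(V, I, area_m2, length_m):
    """Bar-sample four-point (potentiometric) resistivity: rho = (V/I) * (A/L),
    with V the voltage drop across the inner probes, I the current through the
    outer contacts, A the cross-section and L the inner-probe spacing."""
    return (V / I) * (area_m2 / length_m)

# hypothetical copper bar: 1 mm^2 cross-section, 10 mm probe spacing, 1 A drive
rho = resistivity_four_point(V=1.7e-4, I=1.0, area_m2=1e-6, length_m=1e-2)
```

Using a separate current and voltage path is what removes the contact resistance from the measurement, which is why the thesis treats it as the reference against which the inductive methods are compared.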
Ferreira, Marcio Drumond Costa. "Estudo da eficiência de um detector HPGe por métodos semiempíricos e experimental". CNEN - Centro de Desenvolvimento da Tecnologia Nuclear, Belo Horizonte, 2012. http://www.bdtd.cdtn.br//tde_busca/arquivo.php?codArquivo=268.
Gamma spectrometry is a technique that provides diverse information on a sample in a single measurement, in a fast and non-destructive assay. The technique identifies gamma-emitting radionuclides in samples with natural or induced radioactivity, using a radiation detector linked to an electronic system that acquires the data and the gamma spectra. Determining the full-energy peak efficiency of a high-purity germanium (HPGe) detector is important for gamma-ray spectrometry experiments. The efficiency of a detector is the proportionality constant relating the activity of the source being counted to the number of counts observed. This efficiency can be determined experimentally or by theoretical and semi-empirical methods. Usually, the experimental determination is more accurate; however, it requires more data acquisition, which makes the procedure more laborious. On the other hand, the theoretical and semi-empirical methods are less laborious, despite the risk of higher uncertainties. This comparative study was carried out in order to verify whether the full-energy peak efficiency curves determined by experimental and semi-empirical methods would show similar performance in determining the elemental concentrations of the reference material IAEA/Soil7, prepared in punctual and non-punctual geometries and irradiated in the TRIGA MARK I IPR-R1 research reactor. The programs applied were KayZero for Windows V. 2.42, specific software for elemental concentration determination in k0-standardization neutron activation analysis, and ANGLE V3.0, developed to calculate the efficiency of semiconductor detectors for several sample geometries. Based on statistical tests (u-score and relative tendency), the results showed that, for both sample geometries, the efficiencies determined experimentally and by semi-empirical methods are similar and work equally well.
The deviations observed in the results were related to corrections applied by the KayZero for Windows software, which are not applied when the values are calculated with the spreadsheet, rather than to the efficiency curves themselves. Additionally, it was shown that the KayZero for Windows software is able to analyze a non-punctual sample, with a mass five times larger than usual, as if it were punctual. This demonstrates the versatility of the software and expands its field of application.
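The full-energy peak efficiency discussed above is the fraction of emitted gamma quanta registered in the full-energy peak; a minimal sketch with hypothetical counting data (the 0.851 emission probability is the standard nuclear-data value for the 661.7 keV line of 137Cs):

```python
def fep_efficiency(net_counts, activity_bq, live_time_s, gamma_yield):
    """Full-energy peak efficiency: eps = N / (A * t * I_gamma), the ratio of
    net peak counts to gamma quanta emitted during the live time."""
    return net_counts / (activity_bq * live_time_s * gamma_yield)

# hypothetical 1 h count of the 661.7 keV line of a 1 kBq 137Cs check source
eff = fep_efficiency(net_counts=61272, activity_bq=1000.0,
                     live_time_s=3600.0, gamma_yield=0.851)
```

Repeating this at several energies with a calibrated multi-gamma source yields the experimental efficiency curve that the thesis compares against the semi-empirical (KayZero, ANGLE) results.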
Miglioranza, Bruna. "Emprego de planejamento experimental no desenvolvimento de métodos cromatográficos na indústria farmacêutica". Universidade Federal de São Carlos, 2012. https://repositorio.ufscar.br/handle/ufscar/6643.
The aim of this study was to apply experimental design techniques during the development and validation of high-performance liquid chromatography (HPLC) methods for drug analysis in the pharmaceutical industry, in order to demonstrate their applicability and the benefits of their use. A multivariate assessment of the robustness of an assay method for B-complex vitamins enabled the visualization of interactions between variables, which would not have been possible with a univariate approach. A mixture design was performed to determine the extraction solvents for sesquiterpene lactones in a pharmaceutical formulation containing arnica tincture. The solvent mixture selected through the study allowed the desired extraction of the compounds of interest from the sample, which was confirmed through validation of the methodology. Finally, an exploratory design was carried out for an analytical method for the determination of latanoprost, timolol maleate, preservatives and a degradation product in eye drops. The design reduced the chromatographic run time and waste generation by about 44%.
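A multivariate robustness assessment like the one described above typically starts from a two-level factorial design; a minimal sketch with hypothetical chromatographic factors (the names are illustrative, not the variables studied in the thesis):

```python
from itertools import product

def full_factorial(factors):
    """Two-level full factorial design: one run per combination of the coded
    levels -1/+1 of each factor, returned as a list of {factor: level} dicts."""
    return [dict(zip(factors, levels))
            for levels in product((-1, +1), repeat=len(factors))]

# hypothetical robustness factors for an HPLC assay
design = full_factorial(["pH", "column_temp", "flow_rate"])
```

Running the assay once per row and fitting a linear model with interaction terms is what reveals the between-variable interactions that a one-factor-at-a-time study misses.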
Suarez, Rafael Antonio Bonilla. "Estudo de feixes de Airy e sua geração experimental via métodos holográficos". Repositório Institucional da UFABC, 2015.
Master's dissertation, Universidade Federal do ABC, Programa de Pós-Graduação em Física, 2015.
Airy beams have attracted great interest in photonics, since they have advantageous properties such as non-diffracting propagation and transverse acceleration without the action of external forces. Holography, in turn, is an interferometric technique that allows the recording and reconstruction of the wavefronts of objects and optical beams, because a hologram carries both the intensity and the phase information of the "holographed" object or beam. Recently, the increased processing and storage capacity of microcomputers, the development of new optoelectronic devices such as spatial light modulators (SLMs) and high-resolution CCD (charge-coupled device) cameras, and new photosensitive materials (electro-optic and photorefractive materials, PRMs) have made computer-generated holograms (CGHs) viable and enabled the experimental implementation of holographic systems for the numerical and optical recording and reconstruction of three-dimensional objects and for the generation of special optical beams. In this work, we present a study of Airy beams and their non-diffracting and accelerating characteristics during propagation. In a first stage, we built an experimental system that allowed the numerical recording (construction) and optical reconstruction of Airy-beam holograms using LC-SLM devices. In a second stage, we built an experimental system that allowed the optical recording and reconstruction of Airy-beam holograms using the photorefractive holography method with Bi12TiO20 (BTO) crystals for the optical generation of Airy beams. Along the same line, starting from the numerically obtained fields, we show the possibility of creating arrays of Airy beams in a single holographic reconstruction. In parallel, we analyzed the results of the numerical and optical reconstruction of the holograms, as well as their potential for technological applications.
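The transverse acceleration mentioned above follows the standard paraxial parabolic trajectory of an Airy beam; a minimal sketch with hypothetical beam parameters (not values from the dissertation):

```python
import math

def airy_deflection(z_m, wavelength_m, x0_m):
    """Transverse deflection of the main Airy lobe after propagating a distance
    z: x(z) = z**2 / (4 * k**2 * x0**3), with k = 2*pi/lambda and x0 the
    transverse scale of the Airy envelope Ai(x/x0) (standard paraxial result)."""
    k = 2.0 * math.pi / wavelength_m
    return z_m ** 2 / (4.0 * k ** 2 * x0_m ** 3)

# hypothetical parameters: He-Ne wavelength, x0 = 100 um, 1 m of propagation
dx = airy_deflection(z_m=1.0, wavelength_m=632.8e-9, x0_m=100e-6)
```

The quadratic dependence on z is the "acceleration without external forces" the abstract refers to: the beam envelope, not its centroid, shifts along a parabola while remaining non-diffracting.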
Subirana, Cachinero Isaac. "Métodos estadísticos para tratar incertidumbre en estudios de asociación genética: aplicación a CNVs y SNPs imputados". Doctoral thesis, Universitat de Barcelona, 2014. http://hdl.handle.net/10803/283969.
In recent years, a large number of genetic variants have been discovered, from the simplest, indicating a change in a single nucleotide (SNPs), to much more complex ones consisting of repetitions of a segment of the DNA chain (CNVs). Although other genetic variants exist, such as microsatellites and inversions, this thesis focuses on SNPs and CNVs, since these are by far the most widely analyzed variants. In many cases, the methods for analyzing the effect of SNPs or CNVs on a disease are well established. In some cases, however, SNPs and CNVs are measured with uncertainty: for example, sometimes the genotype of a SNP has not been observed directly but has been imputed, and the number of copies of a CNV is established indirectly from the quantitative signal of a designed probe. This makes appropriate, non-standard statistical methods necessary to study the association of imputed SNPs or CNVs with a disease while incorporating this uncertainty. Several strategies have been described in the literature for performing association studies between a genetic variant measured with uncertainty and a response: (i) the Naive strategy and (ii) a strategy known as Dosage. Roughly speaking, the first does not take uncertainty into account, while the second does, but only in an approximate way. In this thesis, a statistical method is proposed to deal with genetic data measured with uncertainty and to overcome the limitations of existing methods. The method is described analytically and incorporates the uncertainty properly into the model likelihood. In addition, numerical algorithms have been built to maximize the likelihood efficiently, so that hundreds of thousands of variants can be analyzed in a reasonable time (GWAS, genome-wide association studies). All this has been implemented as a set of structured functions integrated into R, a free software environment that is very popular in genetic epidemiology.
Part of the code has also been translated to C++ to speed up the process. Quantitative, binary and time-to-event responses are supported by the proposed method, covering the most popular designs in genetic association studies: case-control, quantitative-trait and longitudinal studies. The method also accommodates interaction (epistasis) analyses.
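The Naive and Dosage strategies contrasted above can be sketched for a single imputed SNP; the posterior genotype probabilities below are hypothetical:

```python
def expected_dosage(probs):
    """Dosage strategy: expected allele count E[d] = 0*p_AA + 1*p_AB + 2*p_BB
    computed from the posterior genotype probabilities of an imputed SNP."""
    p_aa, p_ab, p_bb = probs
    return p_ab + 2.0 * p_bb

def naive_genotype(probs):
    """Naive strategy: best-guess genotype (0, 1 or 2 copies), discarding the
    uncertainty of the imputation."""
    return max(range(3), key=lambda g: probs[g])

# hypothetical imputation posteriors (p_AA, p_AB, p_BB) for one sample
probs = (0.10, 0.60, 0.30)
```

The Naive strategy would regress the outcome on the best guess (1 copy here), while Dosage uses the expectation (1.2 copies); the thesis's contribution is to go further and carry the full probability vector into the model likelihood.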
Dos, Santos Maria Gisele. "Estudio del metabolismo energético muscular y de la composición corporal de atletas por métodos no destructivos". Doctoral thesis, Universitat Autònoma de Barcelona, 2001. http://hdl.handle.net/10803/3462.
Magnetic resonance (MR) methodologies are increasingly used to investigate the physiology of human muscle. While MR imaging reveals the morphology of muscles in great detail, MR spectroscopy (MRS) provides information about the chemical composition of the tissue. Depending on the observed nucleus, MRS allows the observation of phosphorylated metabolites related to muscle bioenergetics (31P-MRS), glycogen (13C-MRS) or intramyocellular triglycerides (1H-MRS). Taking this into account, the main purpose of this work was to study the effect of dietary creatine supplementation on the muscle bioenergetics and body composition of elite athletes by means of non-destructive methods.
We studied 14 top rank athletes, mid-distance and long distance runners , from the Centre d'Alt Rendiment Esportiu (CAR, San Cugat del Vallès) and 22 athletes from a single soccer team from the second division of the Spanish soccer league, Palamós F. C. Mid- and long-distance runners were distributed into two equivalent groups, a control (placebo) group and a creatine supplemented group, according to performance as measured in preliminary tests. Soccer players were also distributed into two equivalent groups according to their field positions, and supplemented before and after training respectively. Oral supplementation was carried out during 12 days for the mid/long-distance runners as 20 g of creatine monohydrate, and during 17 days for the soccer players as 0.1 g of creatine monohydrate and 1.1 g of glucose polymers per kg of fat-free mass.
The mid- and long-distance runners reported to the Laboratory of Physiology at CAR to carry out a submaximal test, followed by a maximal test until exhaustion, to determine maximum oxygen consumption. Three days later, they went to the Centre Diagnòstic Pedralbes (CDP) for MRS measurements. The pattern of phosphorylated metabolites was measured in the vastus medialis muscle by 31P-MRS, while intramyocellular (IT) and extramyocellular (ET) triglycerides were measured by 1H-MRS.
The soccer players reported to the Centre Diagnòstic Pedralbes (CDP) for 31P-MRS and 1H-MRS measurements, and three days later went to the Universitat Autònoma de Barcelona (Servei d'Activitat Física) to perform a fatigue test and body-composition measurements by densitometry.
We found a statistically significant negative correlation between the concentration of intramuscular triglycerides (IT or ET) and aerobic capacity in the mid- and long-distance runners: a higher aerobic capacity translates into a smaller IT pool.
The exercise protocol performed by the mid- and long-distance runners at CDP allowed us to detect, by 31P-MRS, a decrease in PCr consumption during the exercise periods caused by the creatine supplementation. Since intracellular pH remained unaffected, we conclude that the energy required to develop the same power in the creatine-supplemented group must come from an increased contribution of oxidative phosphorylation with respect to the placebo group. Since we did not detect a significant increase in the PCr/ATP ratio during the supplementation period in either group, the detected effect may arise from variations in the concentration of unphosphorylated free creatine.
Finally, the results obtained with the soccer players suggest that supplementation immediately before training improves muscle creatine retention and produces a greater consumption of peripheral fat.
D'Angelo, José Vicente Hallak, 1967-. "Projeto, montagem e teste de um equipamento para a determinação do segundo coeficiente virial de gases". [s.n.], 1994. http://repositorio.unicamp.br/jspui/handle/REPOSIP/267230.
Full text
Master's dissertation (mestrado), Universidade Estadual de Campinas, Instituto de Engenharia Química.
Abstract: An apparatus for measuring the second virial coefficient of gases was designed, built, and tested. The experimental method used is the relative method of changing temperature at constant volume. The reference gas in the experiments was nitrogen and the gas under study was methane. Second-virial-coefficient data for methane were obtained at 303.15 K and 323.15 K, at pressures near atmospheric. The results at these temperatures were −33.9 ± 1.5 cm³/mol and −32.3 ± 1.5 cm³/mol, respectively. The results at 303.15 K showed deviations of 17 to 26% from literature data, attributable to leaks detected in the equipment. After fixing these leaks, the second-virial-coefficient results at 323.15 K showed deviations in the range of 1 to 10%, corresponding to absolute deviations between 0.3 and 3.0 cm³/mol of the measured value. These values are within the tolerable error range for the experimental method applied. The results allow us to conclude that the equipment and the experimental method can be used to determine the second virial coefficient of gases. The equipment can also be used to measure the virial coefficient of gas mixtures.
Master's degree, Sistemas de Processos Químicos e Informática.
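The abstract above rests on the virial equation of state truncated after the second term, Z = P·Vm/(R·T) = 1 + B/Vm. As a minimal illustrative sketch (not the thesis's relative nitrogen-referenced method), the numbers below are hypothetical: the measured molar volume is invented so that B comes out near the reported −33.9 cm³/mol:

```python
# Sketch: estimating the second virial coefficient B from a single
# (P, Vm, T) state via the truncated virial equation Z = P*Vm/(R*T) = 1 + B/Vm.
R = 8.314462618  # molar gas constant, J mol^-1 K^-1

def second_virial(p_pa, vm_m3, t_k):
    """Return B in cm^3/mol from pressure (Pa), molar volume (m^3/mol), T (K)."""
    z = p_pa * vm_m3 / (R * t_k)     # compressibility factor
    b_m3 = (z - 1.0) * vm_m3         # B = (Z - 1) * Vm
    return b_m3 * 1e6                # m^3/mol -> cm^3/mol

# Hypothetical state: methane near atmospheric pressure at 303.15 K.
t = 303.15
p = 101325.0
vm_ideal = R * t / p                 # ideal-gas molar volume
vm = vm_ideal - 33.9e-6              # invented measured molar volume
b = second_virial(p, vm, t)          # close to -33.9 cm^3/mol
```

In the thesis's relative method, the nitrogen reference gas calibrates the constant-volume cell, so the absolute molar volume need not be measured directly; the sketch above skips that calibration step.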
Leal, Antonio da Costa. "Jogos e invenções para uma escrita poética e libertaria: I - jogos gráficos". Repositório Institucional do FGV, 1991. http://hdl.handle.net/10438/9271.
Full text
Rather than defending ideas, this thesis suggests almost three hundred graphic games, from the scribble to the letter, as a poetic and libertarian alternative for the literacy process, here understood as lived experience with writing rather than as school exercises, copies, or repetitions. This poetic and libertarian alternative, begun as soon as the child starts to scribble, restores writing to the child as ritual, play, and game, thereby strengthening the child's subjectivity. Writing must be taken out of the strict sense of schooled reason. We all need to revisit the primordial rituals of writing, where writing is treated as game, as riddle, and above all as poetry: to go a thousand times to the spring of crystalline waters and traverse the enchantments of lines and diagrams, the graphisms of nature and of the body, and literal graphisms that transmute into symbols, signs, and codes. Instead of teaching someone a single code, we must discover with them the registers of pleasure they feel when they experience themselves as an inventor of codes. This concerns literacy not only now, but the teaching of writing in the next millennium.
Lórenz, Fonfría Víctor A. "Análisis de la estructura del transportador ADP/ATP por espectroscopia de infrarrojo utilizando métodos matemáticos de estrechamiento y ajuste de bandas". Doctoral thesis, Universitat Autònoma de Barcelona, 2003. http://hdl.handle.net/10803/3509.
Full text
El Ghenymy, Abdellatif. "Mineralización de fármacos sulfamidas por métodos electroquímicos de oxidación avanzada". Doctoral thesis, Universitat de Barcelona, 2013. http://hdl.handle.net/10803/131942.
Full text
This doctoral thesis is devoted to the degradation of sulfanilic acid (SA) and of the sulfa drugs sulfanilamide (SNM) and sulfamethazine (SMZ) in acidic aqueous medium using electrochemical advanced oxidation processes (EAOPs) such as anodic oxidation (AO) in divided and undivided cells, electro-Fenton (EF), UVA photoelectro-Fenton (PEF) and solar photoelectro-Fenton (SPEF). AO experiments were made in 100 mL cells with a boron-doped diamond (BDD) anode and a stainless-steel cathode, whereas in EF, PEF and SPEF the cell of 100 or 230 mL was equipped with a BDD or Pt anode and an air-diffusion (ADE) or carbon-felt cathode. The AO process in a divided cell, and PEF, at currents between 50 and 450 mA gave total mineralization with > 98% total organic carbon reduction. Increasing the current always accelerated the mineralization owing to the higher production of ●OH via water oxidation in AO, along with the ●OH formed from Fenton's reaction and UVA action in PEF. Total mineralization was achieved up to 2530 mg L-1 SA, 2390 mg L-1 SNM and 1930 mg L-1 SMZ. The substrate decay always obeyed pseudo-first-order kinetics. HPLC allowed the detection of intermediates such as hydroquinone, p-benzoquinone and oxalic and oxamic acids for SA, and catechol, resorcinol, hydroquinone, p-benzoquinone, 1,2,4-trihydroxybenzene and fumaric, maleic, acetic, oxalic and formic acids for SNM. In the case of SMZ, 4,6-dimethyl-2-pyrimidinamine, catechol, resorcinol, hydroquinone and p-benzoquinone were detected by GC-MS, and mainly oxalic and oxamic acids by HPLC. The initial N was lost mainly as NH4+ ion and, in a lesser proportion, as NO3- ion. These results allowed a reaction sequence to be proposed for each compound in the EAOPs tested. The study of SA degradation was further extended to a 2.5 L solar pre-pilot plant with a Pt/ADE reactor, as a first step toward applying SPEF at industrial level.
The EF and SPEF processes were optimized by response surface methodology, yielding 100 mA cm-2, 0.5 mM Fe2+ and pH 4.0 as the best operating variables. Similar results were found for SNM in the same pre-pilot plant. The SPEF process achieved 94% mineralization, more rapidly as the current density rose from 50 to 150 mA cm-2, while the comparative EF process yielded lower decontamination.
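The pseudo-first-order decay reported for the substrates, C(t) = C0·exp(−k·t), reduces to a linear fit of ln(C0/C) against t. A minimal sketch of that rate-constant extraction follows; the concentration data are synthetic, generated with an assumed k = 0.15 min⁻¹ purely for illustration:

```python
import math

def pseudo_first_order_k(times, concs):
    """Least-squares slope of ln(C0/C) versus t, forced through the origin,
    which is the pseudo-first-order rate constant k."""
    c0 = concs[0]
    ys = [math.log(c0 / c) for c in concs]
    num = sum(t * y for t, y in zip(times, ys))
    den = sum(t * t for t in times)
    return num / den

# Synthetic decay generated with k = 0.15 min^-1 (illustrative only).
times = [0.0, 2.0, 4.0, 6.0, 8.0]               # min
concs = [100.0 * math.exp(-0.15 * t) for t in times]  # mg L^-1
k = pseudo_first_order_k(times, concs)          # recovers 0.15 min^-1
```

Forcing the fit through the origin matches the model's constraint that ln(C0/C) = 0 at t = 0; with real HPLC data one would instead fit slope and intercept and check linearity.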