To see the other types of publications on this topic, follow the link: Classification.

Dissertations / Theses on the topic 'Classification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 25 dissertations / theses for your research on the topic 'Classification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Jahan, Farah. "Fusion of Hyperspectral and LiDAR Data for Land Cover Classification." Thesis, Griffith University, 2019. http://hdl.handle.net/10072/386555.

Full text
Abstract:
Land cover classification has become increasingly important for planning to overcome the problems of disorganized and uncontrolled development, the disappearance of prime agricultural land, and deteriorating environmental quality through the loss of forest, wildlife habitat, wetlands, etc. Different remote sensing technologies capture different properties (e.g., spectral response, shape) of ground objects, and the combined use of multiple remote sensing technologies for land cover classification has become popular. Spectral images such as hyperspectral imagery and LiDAR point cloud data are commonly used in land cover classification. Among spectral images, a hyperspectral image contains detailed spectral responses of an object; light detection and ranging (LiDAR) data, by contrast, capture an object's structural information. Thus, hyperspectral and LiDAR complement each other in the information they accumulate from land cover. Several state-of-the-art methods have been developed for fusing hyperspectral and LiDAR data for land cover classification, comprising feature extraction, feature fusion, and classification, but there remain undiscovered properties of both modalities that can contribute significantly to this domain. In this thesis, we discover a number of effective ways to extract features from both hyperspectral and LiDAR data. Furthermore, we propose two feature fusion techniques that decrease between-class correlation and increase within-class correlation while fusing features from the two modalities. Finally, a decision fusion approach, ensemble classification, is incorporated for integrating prediction metrics. In this thesis, we propose three different approaches for separating complex land cover classes by fusing hyperspectral and LiDAR data. The effectiveness of these approaches is validated by experiments on two datasets, the Houston and GU datasets.
The Houston dataset is a benchmark dataset containing fifteen land cover classes, distributed for the 2013 IEEE GRSS Data Fusion Contest. The GU dataset consists of land cover classes prepared from the hyperspectral and LiDAR data collected by the Spectral Imaging Lab of Griffith University. We use two state-of-the-art classifiers, random forest (RF) and support vector machine (SVM), to classify the features derived by our proposed approaches. In our first approach, we derive eight features from the hyperspectral and LiDAR data: two from hyperspectral and six from LiDAR. These eight features complement the hyperspectral features well. In feature fusion, we explore the effectiveness of layer stacking and principal component analysis (PCA), investigating effective combinations of features especially for PCA fusion. In our second approach, we integrate three key tasks: band-group fusion, multisource fusion, and generic feature (GF) extraction. In band-group fusion, we group hyperspectral bands based on their joint entropy and structural similarity, apply PCA on each group, retain a few principal components, and apply differential attribute profiles (DAP) to extract spatial features. The spatial and spectral features from individual groups are fused using discriminant correlation analysis (DCA). In multisource fusion, spatial features from hyperspectral and LiDAR are also fused by DCA. We derive eight pixel-wise GFs from the hyperspectral and LiDAR data, which are then arranged sequentially to form an additional feature vector. Finally, we concatenate the features generated by the band-group fusion, multisource fusion, and generic feature extraction steps to get a final signature.
In our third approach, we propose a novel feature extraction technique named the inverse coefficient of variation (ICV), which explores the Gaussian probability of neighbourhood between every pair of bands in hyperspectral data. We calculate the ICV for each band with respect to every other band and form an ICV cube. We derive spatial features (e.g., DAP) from the first few principal components of both the hyperspectral and ICV cubes. In addition, we derive GFs from both hyperspectral and LiDAR data and then spatial features from the GFs. Secondly, we propose a two-stream fusion approach in which canonical correlation analysis (CCA) is used as the basic fusion unit. In one stream, pair-wise CCA fusion of hyperspectral spectral features with spatial features of both hyperspectral and LiDAR takes place; in the other stream, pair-wise CCA fusion of ICV features with spatial features derived from ICV, hyperspectral, and LiDAR is performed. Thirdly, an ensemble classification system is designed for decision fusion, where features from the two-stream fusion are distributed into random subsets; each subset is transformed to improve feature quality, and all are concatenated and classified. This process is executed for several iterations. The final classification results are obtained by weighting and aggregating the prediction metrics given by RF, or by applying majority voting on the predicted classes given by SVM.
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), School of Info & Comm Tech, Science, Environment, Engineering and Technology. Full Text
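The layer-stacking and PCA fusion steps of the first approach can be sketched as follows; the array shapes, class count, and random data are illustrative assumptions standing in for the actual Houston/GU features, not the thesis's pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-pixel features: 144 hyperspectral bands and 6 LiDAR-derived
# attributes for 500 labelled pixels (shapes chosen for illustration only).
hsi = rng.normal(size=(500, 144))
lidar = rng.normal(size=(500, 6))
labels = rng.integers(0, 15, size=500)   # e.g. 15 Houston land cover classes

# Layer stacking: simple concatenation of the two modalities.
stacked = np.hstack([hsi, lidar])

# PCA fusion: project the stacked features onto a few principal components.
fused = PCA(n_components=20).fit_transform(stacked)

# Classify the fused features with a random forest, one of the two classifiers
# used in the thesis.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(fused, labels)
pred = clf.predict(fused)
```

In practice the fused dimensionality and component count would be tuned per dataset; this only shows the shape of the stacking-then-PCA pipeline.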
APA, Harvard, Vancouver, ISO, and other styles
2

Li, Feng. "Treatment-Based Classification in Residential Wireless Access Points." Digital WPI, 2014. https://digitalcommons.wpi.edu/etd-dissertations/295.

Full text
Abstract:
IEEE 802.11 wireless access points (APs) act as the central communication hub inside homes, connecting all networked devices to the Internet. Home users run a variety of network applications with diverse Quality-of-Service (QoS) requirements through their APs. However, wireless APs are often the bottleneck in residential networks as broadband connection speeds keep increasing. Because of the lack of QoS support and the complicated configuration procedures in most off-the-shelf APs, users can experience QoS degradation on their wireless networks, especially when multiple applications are running concurrently. This dissertation presents CATNAP (Classification And Treatment iN an AP) to provide better QoS support for various applications over residential wireless networks, especially timely delivery for real-time applications and high throughput for download-based applications. CATNAP consists of three major components: supporting functions, classifiers, and treatment modules. The supporting functions collect the necessary flow-level statistics and feed them into the CATNAP classifiers. The CATNAP classifiers then categorize flows along three dimensions: response-based/non-response-based, interactive/non-interactive, and greedy/non-greedy. Each CATNAP traffic category can be directly mapped to one of the following treatments: push/delay, limited advertised window size/drop, and reserve bandwidth. Based on the classification results, the CATNAP treatment module automatically applies the treatment policy to provide better QoS support. CATNAP is implemented in the NS network simulator and evaluated against DropTail and Strict Priority Queue (SPQ) under various network and traffic conditions. In most simulation cases, CATNAP provides better QoS support than DropTail: it lowers queuing delay for multimedia applications such as VoIP, games, and video; treats FTP flows with various round-trip times fairly; and remains functional even when misbehaving UDP traffic is present.
Unlike current QoS methods, CATNAP is a plug-and-play solution, automatically classifying and treating flows without any user configuration or any modification to end hosts or applications.
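The three-dimensional category-to-treatment mapping can be sketched as a small lookup. The abstract names the dimensions and the treatments but does not spell out which dimension drives which treatment, so the pairing below is a labelled assumption.

```python
# Sketch of a CATNAP-style category -> treatment mapping. The dimension names
# follow the abstract; which treatment each dimension selects is an assumption
# made here for illustration, not CATNAP's documented policy.
def catnap_treatments(response_based: bool, interactive: bool, greedy: bool) -> list:
    treatments = []
    # Interactive flows get timely delivery; others can tolerate delay.
    treatments.append("push" if interactive else "delay")
    # Response-based (e.g. TCP) flows can be throttled via the advertised
    # window; non-response-based flows can only be dropped.
    treatments.append("limit advertised window" if response_based else "drop")
    # Non-greedy flows get a bandwidth reservation.
    if not greedy:
        treatments.append("reserve bandwidth")
    return treatments
```

For example, an interactive, response-based, non-greedy flow (roughly, a VoIP-like TCP flow) would be pushed, window-limited, and given reserved bandwidth under this illustrative mapping.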
APA, Harvard, Vancouver, ISO, and other styles
3

Castells, Domingo Xavier. "Towards Objective Human Brain Tumours Classification using DNA microarrays." Doctoral thesis, Universitat Autònoma de Barcelona, 2009. http://hdl.handle.net/10803/3624.

Full text
Abstract:
Human brain tumours (HBTs) are among the most aggressive and intractable cancers. The current system for diagnosis and prognosis of HBTs is based on the histological examination of a biopsy slice, which is considered the 'gold standard'. Apart from being invasive, this technique is not accurate enough to differentiate the malignancy grades of some HBTs, and its correlation with the patient's response to therapy is variable. In this context, gene signatures from DNA microarray experiments can improve on the results of the 'gold standard'.
In this thesis, I collected 333 biopsies from various types of HBTs. As 38% of the samples displayed degraded RNA, I evaluated whether the HBT type, the apparent blood content, and the collection medium of the biopsy could play a role in this. As no relationship was found, I hypothesized that the variable ischaemia time at normal body temperature prior to removal of the biopsy may induce RNA degradation. This was tested in a preclinical glial tumour model in mice, where 30 minutes of ischaemia affected RNA integrity in non-necrotic tumours but not in necrotic ones.
A crucial part of this thesis was the proof-of-principle demonstration that gene signatures can objectively predict HBTs. This was shown by perfect prediction of glioblastoma multiforme (Gbm) and meningothelial meningioma (Mm) using cDNA and Affymetrix microarrays. Histopathologists can perfectly discriminate between these two tumour types, but this work demonstrated perfect prediction using an objective mathematical formula.
Once this was demonstrated, I moved on to predicting different malignancy grades and possible molecular subtypes of glial tumours. In this respect, a gene signature based on the expression of 59 transcripts, which distinguished two groups of glioblastomas, was described. Finally, an initial analysis of the associated clinical data suggests that this gene signature may correlate with primary and secondary glioblastomas.
APA, Harvard, Vancouver, ISO, and other styles
4

Andersson, Eric. "Motion Classification and Step Length Estimation for GPS/INS Pedestrian Navigation." Thesis, KTH, Reglerteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-98866.

Full text
Abstract:
The primary source for pedestrian navigation is the well-known Global Positioning System (GPS). However, for applications involving pedestrians walking in urban or indoor environments, the GPS is not always reliable since the signal is often corrupted or completely blocked. A solution to this problem is to fuse the GPS with an Inertial Navigation System (INS) that uses sensors attached to the pedestrian for positioning. The sensor platform consists of a tri-axial accelerometer, gyroscope, and magnetometer. In this thesis, a dead reckoning approach is proposed for the INS, which means that the travelled distance is obtained by counting steps and multiplying by step length. Three parts of the dead reckoning system are investigated: step detection, motion classification, and step length estimation. A method for step detection is proposed, based on peak/valley detection in the vertical acceleration. Each step is then classified based on the motion performed: forward, backward, or sideways walk. The classification is made by extracting relevant features from the sensors, such as correlations between sensor signals. Two different classifiers are investigated: the first makes a decision by looking directly at the extracted features using simple logical operations, while the second uses a statistical approach based on a Hidden Markov Model. The step length is modelled as a function of sensor data, and two different functions are investigated. A method for on-line estimation of the step length function parameters is proposed, enabling the system to learn the pedestrian's step length when the GPS is active. The proposed algorithms were implemented in Simulink and evaluated using real data collected from field tests. The results indicated an error of around 2% of the travelled distance for 8 minutes of walking and running without GPS.
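The peak-based step detection and dead-reckoning distance estimate can be sketched as below. The acceleration signal, peak-height threshold, minimum step interval, and the 0.7 m step length are illustrative assumptions, not the thesis's estimated parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def count_steps(acc_z, fs, min_height=1.0, min_interval_s=0.3):
    """Count steps as peaks in the vertical acceleration (gravity removed).

    min_height and min_interval_s reject small vibrations and double-counted
    peaks; both values here are illustrative, not tuned ones.
    """
    peaks, _ = find_peaks(acc_z, height=min_height,
                          distance=int(min_interval_s * fs))
    return len(peaks)

# Synthetic 2 Hz gait-like signal, 10 s at 100 Hz: 20 peaks expected.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
acc_z = 2.0 * np.sin(2 * np.pi * 2.0 * t)

steps = count_steps(acc_z, fs)
# Dead reckoning: travelled distance = step count x step length (0.7 m assumed).
distance = steps * 0.7
```

The thesis additionally detects valleys and estimates the step length on-line from sensor data while GPS is available; this sketch fixes it to a constant for clarity.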
APA, Harvard, Vancouver, ISO, and other styles
5

Tekkaya, Gokhan. "Improving Interactive Classification Of Satellite Image Content." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608326/index.pdf.

Full text
Abstract:
Interactive classification is an attractive alternative and complement to automatic classification of satellite image content, since the subject is visual and there are not yet powerful computational features corresponding to the sought visual features. In this study, we improve our previous attempt by building a more stable software system with better capabilities for interactive classification of the content of satellite images. The system allows the user to indicate a small number of image regions that contain a specific geographical object, for example a bridge, and to retrieve similar objects in the same satellite images. The retrieval process is iterative in the sense that the user guides the classification procedure by interaction and visual observation of the results. The classification procedure is based on one-class classification.
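The one-class retrieval idea can be sketched with a generic one-class SVM: fit on the few user-marked positive regions only, then score candidate regions as similar or dissimilar. The eight-dimensional region features, the random data, and the choice of OneClassSVM are assumptions, since the abstract does not name a specific algorithm.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Hypothetical feature vectors for a few user-marked regions containing the
# sought object (e.g. bridges); dimensions and values are illustrative only.
positive_regions = rng.normal(loc=0.0, scale=0.5, size=(30, 8))

# Fit a one-class classifier on the positive examples alone: no negative
# training data is needed, matching the interactive-marking workflow.
occ = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(positive_regions)

# Score candidate regions from the same image: +1 = similar, -1 = dissimilar.
candidates = np.vstack([rng.normal(0.0, 0.5, size=(5, 8)),
                        np.full((1, 8), 10.0)])   # one clearly different region
scores = occ.predict(candidates)
```

In the interactive loop described above, the user would inspect the retrieved (+1) regions, mark corrections, and refit, iterating until the retrieval is satisfactory.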
APA, Harvard, Vancouver, ISO, and other styles
6

Boone, Edna Karen. "Radical cation propagation through bulged and mis-paired DNA : "A purine:purine staggered walk" (Part I). Part II, bulged DNA cleavage via a "Classic intercalator": a surprising role for siglet-excited ethidium bromide in the selective cleavage of a G-bulge containing duplex. Part III, synthesis and photochemical behavior of peptide nucleic acid trimers containning [sic] benzamidonaphthalimide." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/30733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Barros, Ana Luiza Bessa de Paula. "Revisitando o problema de classificação de padrões na presença de outliers usando técnicas de regressão robusta." Universidade Federal do Ceará, 2013. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=11176.

Full text
Abstract:
This thesis addresses the problem of data classification when the data are contaminated with atypical patterns. These patterns, generally called outliers, are omnipresent in real-world multivariate data sets, but their a priori detection (i.e. before training the classifier) is a difficult task to perform. As a result, the most common approach is the reactive one, in which one suspects the presence of outliers in the data only after a previously trained classifier has achieved a low performance. Several strategies can then be carried out to improve the performance of the classifier, such as choosing a more computationally powerful classifier and/or removing the detected outliers from the data, eliminating those patterns which are difficult to categorize properly. Whatever the strategy adopted, the presence of outliers will always require more attention and care during the design of a pattern classifier. Bearing these difficulties in mind, this thesis revisits concepts and techniques from the theory of robust regression, in particular those related to M-estimation, adapting them to the design of pattern classifiers which are able to automatically handle outliers. This adaptation leads to the proposal of robust versions of two pattern classifiers widely used in the literature, namely the least squares classifier (LSC) and the extreme learning machine (ELM). Through a comprehensive set of computer experiments using synthetic and real-world data, it is shown that the proposed robust classifiers consistently outperform their original versions.
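The M-estimation idea behind a robust least squares classifier can be sketched as iteratively reweighted least squares (IRLS) with Huber weights, so that large-residual points (the outliers) are progressively downweighted. This is a generic sketch of the technique under assumed synthetic data, not the thesis's exact formulation.

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """Huber M-estimator weights: 1 for small residuals, delta/|r| for large ones."""
    a = np.abs(residuals)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))

def robust_lsc_fit(X, y, n_iter=20, delta=1.0):
    """Least squares classifier (targets +/-1) fitted by IRLS with Huber weights."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append a bias column
    w = np.linalg.lstsq(Xb, y, rcond=None)[0]       # ordinary LS as the start
    for _ in range(n_iter):
        s = huber_weights(y - Xb @ w, delta)        # downweight large residuals
        XtS = Xb.T * s                              # X^T S with S = diag(s)
        w = np.linalg.solve(XtS @ Xb + 1e-8 * np.eye(Xb.shape[1]), XtS @ y)
    return w

def robust_lsc_predict(X, w):
    return np.sign(np.hstack([X, np.ones((X.shape[0], 1))]) @ w)

# Two clean Gaussian classes plus a handful of mislabelled far-away outliers.
rng = np.random.default_rng(0)
X_clean = np.vstack([rng.normal(-2.0, 0.5, size=(50, 2)),
                     rng.normal(+2.0, 0.5, size=(50, 2))])
y_clean = np.concatenate([-np.ones(50), np.ones(50)])
X_out = rng.normal(10.0, 0.5, size=(5, 2))          # outliers with flipped labels
w = robust_lsc_fit(np.vstack([X_clean, X_out]),
                   np.concatenate([y_clean, -np.ones(5)]))
acc = np.mean(robust_lsc_predict(X_clean, w) == y_clean)
```

The reweighting step is what distinguishes this from the ordinary LSC: a plain least squares fit treats the mislabelled outliers at full weight, whereas the Huber weights shrink their influence toward delta/|r|.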
APA, Harvard, Vancouver, ISO, and other styles
8

Ekong, Udeme Essien. "Development of computational intelligence methods to deal with classification problems." Thesis, King's College London (University of London), 2016. https://kclpure.kcl.ac.uk/portal/en/theses/development-of-computational-intelligence-methods-to-deal-with-classi-cation-problems(0f04e932-0e3d-4309-a21f-1e2f6df6d29d).html.

Full text
Abstract:
In the thesis presented here, variations of two very prominent machine learning techniques, the Neural Network (NN) and Support Vector Machine (SVM) are used in an attempt to solve two classification problems. Classification involves the assignment of an unknown object into a pre-determined group which consists of a set of preclassified objects with similar features to that unknown object. The main theme of the research conducted in this thesis involves investigation into existing and proposed classifier architectures to improve the classification performance for certain research problems. The aim of the research conducted is to develop new classifiers that are robust and able to show a high level of classification accuracy to the problems that are being considered. The problems being considered in this thesis are material surface classification and epilepsy seizure phase classification. The material surface classification problem involves the classification of a material based on its surface features which are obtained from a tactile-sensing robotic arm. Feature extraction is carried out on this input and the classifier is then used to classify based on the extracted feature inputs. Epileptic seizure is a common neurological disorder which causes the sudden discharge of cortical neurons in the brain. This results in the onset of seizures lasting from a few seconds to around a minute. The input consists of data obtained from the electroencephalograph (EEG) of patients who suffer from epilepsy. The input is then subjected to feature extraction and the extracted feature inputs are applied to the classifier. Four traditional classifiers, namely SVM, NN, k-nearest neighbour (kNN) and naive Bayes classifier are utilised for comparison purposes to evaluate the performance of the proposed classifiers during the research conducted. To evaluate the robustness property of the classifier, the original data is contaminated with Gaussian white noise at various levels. 
The results of the research carried out are presented in three parts: 1) The performance of six commonly used neural-network-based classifiers is investigated in solving the material surface classification problem. The significant contribution of the research conducted in this section is the application of the neural network architectures to a novel problem (i.e. material classification). The neural network architectures are also altered and re-structured in order to fit the problem space. Experimental results show that the parallel-structured, tree-structured, and naive Bayes classifiers outperform the others based on average classification accuracy under the original data. The tree-structured classifier demonstrates the best robustness property under the noisy data. 2) In continuation of the research conducted in the previous section, a novel neural network having variable weights is proposed to deal with the material classification problem, with the aim of comparing its performance against the best of the six neural network architectures applied to the material classification problem. The epilepsy seizure phase classification problem is also introduced, with the proposed variable weight neural network being implemented to deal with this problem. It is shown that the variable weight neural network (VWNN) classifier outperforms the traditional methods in terms of classification accuracy and robustness when the input data are contaminated with noise. 3) A novel Interval Type-2 Fuzzy Support Vector Machine (IT2FSVM) classifier is proposed to deal with the epilepsy seizure phase classification problem. The performance of the classifier is measured by its classification accuracy for each of the epilepsy phases. Three traditional classifiers (SVM, kNN, and naive Bayes) are used for comparison purposes.
The results obtained from simulations show that the novel IT2FSVM is able to show improved performance in terms of the average classification accuracy when compared to the other classifiers under the original dataset and also shows a high level of robustness when compared to other classifiers under a noisy dataset.
APA, Harvard, Vancouver, ISO, and other styles
9

Thorne, Mark Allen. "Lucan's Cato, the defeat of victory, the triumph of memory." Diss., University of Iowa, 2010. https://ir.uiowa.edu/etd/749.

Full text
Abstract:
This dissertation provides a new examination of the figure of Cato within Lucan's epic poem Bellum Civile by focusing on the theme of memory within the epic and its interaction with Cato's character specifically. It argues that one may read the epic as possessing the rhetorical function of a literary funeral monumentum, the purpose of which is to retell the death of Rome in the Roman Civil War, mourn its passing, and yet in so doing simultaneously preserve its memory so that future generations may remember the liberty Rome once possessed and may be influenced by that memory to action. In this reading, the epic itself--like Cato within the epic--offers a counter-memory of what the civil wars meant to Rome in competition with that promoted by Caesar and his descendants. This study centers upon the speech of Cato found in Book 2 in which Cato states his two major goals for participation in the civil war: successfully commemorate a perishing Roma et Libertas and transform his own defeat into a self-sacrifice that is beneficial to his fellow Romans. The opening chapters place Cato's speech into its larger context by arguing that it is an integral part of a narrative arc spanning most of the first two books. The image of national suicide within the epic's proem reveals that gaining victory in civil war is what assures self-defeat. This economy of universal defeat pervades Lucan's epic and stands as the greatest threat facing Cato in the successful achievement of his goals. Lucan also shows that the very nature of civil war poses a threat to the viability of memory, as evidenced by scenes in which Roman soldiers and citizens forget and abandon the social ties that bind their identity to that of Rome. Cato's speech illustrates that his chosen weapon against the epic's economy of defeat will be the power of memory. 
A careful analysis of the speech reveals that Cato's desired goal of enacting a self-sacrifice--a nod to his future suicidal martyrdom at Utica--can transform him into a monumentum of `Old Rome' (the pre-Caesarian Rome that still retained its libertas) which will in turn ensure his second goal of achieving funeral commemoration of what Rome used to be--and could still be again. The closing chapter examines key passages in Book 9 in which the power of memory is explicitly connected with renewal even in the midst of defeat, suggesting that Cato's (and the epic's) mission to preserve memory can be ultimately successful. This reading of Lucan's Cato has the benefit of showing that his success need no longer be based mainly upon whether or not he can be a virtuous sapiens but also upon what he can actually do for future generations of Romans by preserving the powerful memory of a Rome that still possessed her freedom from the Caesars.
APA, Harvard, Vancouver, ISO, and other styles
10

Carvalho, João Pedro Martins de. "High Performance shallow packet inspection system for traffic identification." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/18455.

Full text
Abstract:
Mestrado em Engenharia de Computadores e Telemática.
The evolution and growth of the Internet has led to a growing preoccupation regarding the dynamic allocation of resources in large networks, as well as to an unprecedented adoption of security policies based on traffic classification. This phenomenon triggered the creation of deep packet inspection mechanisms, in which access cuts across layers and is based on the retrieval of specific strings present in a packet's payload. This raises a number of technical, ethical, and potentially legal limitations. With the increasing need to develop less invasive and more efficient inspection mechanisms, in terms of processing speed and, potentially, memory management, the scientific community began working on other types of approaches to the problem. In this dissertation, we propose a traffic flow classification system based on shallow packet inspection. Given the latest forecasts and current statistical data, which estimate that about 90% of all traffic will be video in the next few years, we decided to devote special attention to this specific type. For this, we collected non-sensitive information, with which we performed a statistical study based on low-level statistics. The results obtained from this study were analysed from a behavioural point of view, in order to extract coherent rules that allow the differentiation of independent types of traffic. Finally, we studied, conceived, and tested an efficient flow organisation paradigm. The system was tested and evaluated using packet flood tests, followed by the measurement and examination of results in terms of processing times as well as main memory usage.
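The payload-free, low-level flow statistics described above can be sketched as follows. The particular statistics (byte rate, mean packet size) and the thresholds in the rule are illustrative assumptions; the abstract does not specify which statistics or rules the thesis extracted.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class FlowStats:
    """Low-level, payload-free statistics for one flow (e.g. one 5-tuple)."""
    sizes: list = field(default_factory=list)   # packet sizes in bytes
    times: list = field(default_factory=list)   # packet arrival timestamps in s

    def add_packet(self, size: int, ts: float) -> None:
        self.sizes.append(size)
        self.times.append(ts)

    def byte_rate(self) -> float:
        span = self.times[-1] - self.times[0]
        return sum(self.sizes) / span if span > 0 else 0.0

# Illustrative behavioural rule (thresholds are assumptions, not the thesis's
# values): a sustained high byte rate with large packets suggests video.
def looks_like_video(flow: FlowStats) -> bool:
    return flow.byte_rate() > 250_000 and mean(flow.sizes) > 1000

# A dense stream of large packets vs. a sparse, small control flow.
video_flow, control_flow = FlowStats(), FlowStats()
for i in range(1000):
    video_flow.add_packet(1400, i * 0.001)   # 1400 B every 1 ms
for i in range(10):
    control_flow.add_packet(100, float(i))   # 100 B every 1 s
```

The point of the shallow approach is visible here: classification uses only packet sizes and timing, never the payload bytes that deep packet inspection would read.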
APA, Harvard, Vancouver, ISO, and other styles
11

Griffon, Nicolas. "Modélisation, création et évaluation de flux de terminologies et de terminologies d'interface : application à la production d'examens complémentaires de biologie et d'imagerie médicale." Phd thesis, Université de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00877697.

Full text
Abstract:
The theoretical, clinical and economic benefits of computerized order entry in healthcare institutions are numerous: fewer prescriptions, improved clinical relevance, fewer medical errors... These benefits remain theoretical because, in practice, computerized order entry faces many problems, among them the interoperability and usability of software solutions. Using interface terminologies within terminology flows would make it possible to overcome these problems. The main objective of this work was to model and develop such terminology flows for ordering laboratory and medical imaging tests, and then to evaluate their benefits in terms of interoperability and usability. Process analysis techniques led to a model of terminology flows that seems common to many domains. The creation of the flows themselves relies on interface terminologies, edited for this purpose, and on recognized national or international reference terminologies. For the evaluation, specific methods, developed while integrating an iconic interface terminology into a search engine for medical guidelines and into an electronic health record, were applied. The terminology flows created induced significant information loss between the different information systems. In imaging, the ordering interface terminology was significantly easier to use than the other terminologies; no such difference was found in the laboratory domain. While the terminology flows are not yet functional, the interface terminologies are available to any healthcare institution or software vendor and should facilitate the deployment of computerized order entry systems.
APA, Harvard, Vancouver, ISO, and other styles
12

Abreu, Marjory Cristiany da Costa. "Analisando o desempenho do ClassAge: um sistema multiagentes para classificação de padrões." Universidade Federal do Rio Grande do Norte, 2006. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18070.

Full text
Abstract:
Made available in DSpace on 2014-12-17T15:48:05Z (GMT). No. of bitstreams: 1 MarjoryCCA.pdf: 917121 bytes, checksum: 918ccb19adcf29ebd6cdbf1f3ac97310 (MD5) Previous issue date: 2006-10-26<br>Coordenação de Aperfeiçoamento de Pessoal de Nível Superior<br>The use of multi-agent systems for classification tasks has been proposed in order to overcome some drawbacks of multi-classifier systems and, as a consequence, to improve the performance of such systems. As a result, the NeurAge system was proposed. This system is composed of several neural agents which communicate and negotiate a common result for the testing patterns. In the NeurAge system, the negotiation method is very important to the overall performance, since the agents need to reach an agreement about a problem when there is a conflict among them. This thesis presents an extensive analysis of the NeurAge system in which any kind of classifier can be used; the system is now named the ClassAge system. The aim is to analyze the reaction of this system to modifications in its topology and configuration.<br>The use of systems based on the agent paradigm for solving pattern recognition problems has been proposed with the aim of solving, or mitigating, the centralized decision-making problem of multi-classifier systems and, as a consequence, improving their classification capability. With this intention, the NeurAge system was proposed. This system is composed of neural agents that can communicate and negotiate a common result for test patterns. In the NeurAge system, negotiation methods are very important for providing better accuracy, since the agents need to reach the best solution and resolve conflicts, when they exist, regarding a problem. 
This dissertation presents an extension of the NeurAge system that can use any type of classifier, now called the ClassAge system. An analysis of the behavior of the ClassAge system under several modifications to its topology and to the configurations of its components is carried out.
APA, Harvard, Vancouver, ISO, and other styles
13

Manandhr-Shrestha, Nabin K. "Statistical Learning and Behrens Fisher Distribution Methods for Heteroscedastic Data in Microarray Analysis." Scholar Commons, 2010. http://scholarcommons.usf.edu/etd/3513.

Full text
Abstract:
The aim of the present study is to identify the differentially expressed genes between two different conditions and apply this in predicting the class of new samples using microarray data. Microarray data analysis poses many challenges to statisticians because of its high dimensionality and small sample size, dubbed the "small n, large p" problem. Microarray data have been extensively studied by many statisticians and geneticists. Generally, they are said to follow a normal distribution with equal variances in the two conditions, but this is not true in general. Since the number of replications is very small, the sample estimates of the variances are not appropriate for testing. Therefore, we consider a Bayesian approach to approximate the variances in the two conditions. Because the number of genes to be tested is usually large and the test is repeated thousands of times, there is a multiplicity problem. To remove the defect arising from multiple comparisons, we use the False Discovery Rate (FDR) correction. Applying the hypothesis test repeatedly, gene by gene, for several thousands of genes, there is a great chance of selecting false genes as differentially expressed, even though the significance level is set very small. For the test to be reliable, the probability of selecting true positives should be high. To control the false positive rate, we apply the FDR correction, in which the p-value for each gene is compared with its corresponding threshold. A gene is then said to be differentially expressed if its p-value is less than the threshold. We have developed a new method of selecting informative genes based on the Bayesian version of the Behrens-Fisher distribution, which assumes unequal variances in the two conditions. 
Since the assumption of equal variances fails in most situations, and equal variance is a special case of unequal variance, we address the problem of finding differentially expressed genes in the unequal-variance case. We have found that the developed method selects the actually expressed genes in simulated data, and we compared it with recent methods such as Fox and Dimmic's t-test method and Tusher and Tibshirani's SAM method, among others. The next step of this research is to check whether the genes selected by the proposed Behrens-Fisher method are useful for the classification of samples. Using the genes selected by the proposed method, which combines Behrens-Fisher gene selection with other statistical learning methods, we obtained better classification results. The reason is the method's capability of selecting genes based on both prior knowledge and the data. In the case of microarray data, due to the small sample size and the large number of variables, the sample covariance is not reliable in the sense that it is not positive definite and not invertible. We therefore derived the Bayesian version of the Behrens-Fisher distribution to remove that insufficiency. The efficiency of the established method has been demonstrated by applying it to three real microarray data sets and calculating the misclassification error rates on the corresponding test sets. Moreover, we compared our results with some other popular methods found in the literature, such as the Nearest Shrunken Centroid and Support Vector Machine methods. We studied the classification performance of different classifiers before and after taking the correlation between the genes into account. The classification performance improved significantly once the correlation was accounted for. The performance of the different classifiers was measured by misclassification rates and the confusion matrix. 
Another problem in the multiple testing of a large number of hypotheses is the correlation among the test statistics, which we take into account. If there were no correlation, it would not affect the shape of the normalized histogram of the test statistics. As shown by Efron, the degree of correlation among the test statistics either widens or shrinks the tail of their histogram. Thus the usual rejection region obtained from the significance level is not sufficient; it should be redefined according to the degree of correlation. The effect of the correlation on selecting the appropriate rejection region has also been studied.
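The per-gene threshold comparison described in the abstract matches the standard Benjamini-Hochberg step-up procedure; the following is a minimal sketch of it (textbook BH, not the thesis's exact implementation):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Flag discoveries with the Benjamini-Hochberg step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Per-rank thresholds q * i / m for the sorted p-values.
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()   # largest rank whose p-value passes
        mask[order[:k + 1]] = True       # reject all hypotheses up to that rank
    return mask
```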
APA, Harvard, Vancouver, ISO, and other styles
14

Funiak, Martin. "Klasifikace testovacích manévrů z letových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-264978.

Full text
Abstract:
A flight data recorder is a device designed to record flight data from various sensors in an aircraft. Flight data analysis plays an important role in the development and testing of avionics. Testing and evaluation of aircraft characteristics is often performed using test maneuvers. The data measured during one flight are stored in a single flight record, which may contain several test maneuvers. The goal of this thesis is to identify basic test maneuvers from the measured flight data. The theoretical part describes flight maneuvers and the format of the measured flight data. The analytical part surveys research on classification based on statistics and the probability theory needed to understand Gaussian mixture models. The thesis presents an implementation in which Gaussian mixture models are used for the classification of test maneuvers. The proposed solution was tested on data obtained from a flight simulator and from a real aircraft. Gaussian mixture models proved to be a suitable solution for this task. Possible further development of the work is described in the final chapter.
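The classification step can be sketched as follows. For simplicity, this toy version fits a single Gaussian per maneuver class rather than a full mixture fitted by EM, and the data are synthetic; it only illustrates the maximum-likelihood decision rule that a GMM classifier uses:

```python
import numpy as np

class GaussianClassifier:
    """Per-class diagonal Gaussian classifier: a one-component special case
    of the GMM approach described in the thesis (toy sketch, synthetic data)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            # Mean and variance per feature; small floor avoids zero variance.
            self.params_[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)
        return self

    def _log_likelihood(self, X, mean, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mean) ** 2 / var, axis=1)

    def predict(self, X):
        # Assign each sample to the class with the highest log-likelihood.
        scores = np.column_stack(
            [self._log_likelihood(X, *self.params_[c]) for c in self.classes_]
        )
        return self.classes_[np.argmax(scores, axis=1)]
```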
APA, Harvard, Vancouver, ISO, and other styles
15

Yalabik, Ismet. "A Pattern Classification Approach Boosted With Genetic Algorithms." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608432/index.pdf.

Full text
Abstract:
Ensemble learning is a multiple-classifier machine learning approach which combines statistical classifiers into collections and ensembles to build a more accurate classifier than the individual classifiers. Bagging, boosting and voting methods are the basic examples of ensemble learning. In this thesis, a novel boosting technique targeting some of the problems of AdaBoost, a well-known boosting algorithm, is proposed. The proposed system finds an elegant way of boosting a set of classifiers successively to form a better classifier than each ensembled classifier. The AdaBoost algorithm employs a greedy search over the hypothesis space to find a good suboptimal solution. This work instead proposes an evolutionary search with genetic algorithms. Empirical results show that classification with boosted evolutionary computing outperforms AdaBoost in equivalent experimental environments.
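As a point of reference, the greedy baseline the thesis compares against can be sketched as a minimal AdaBoost over decision stumps (a toy implementation, not the thesis's code; the evolutionary variant would replace the exhaustive stump search with a GA):

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=5):
    """Minimal AdaBoost with decision stumps. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # sample weights
    ensemble = []                        # (alpha, feature, threshold, polarity)
    for _ in range(n_rounds):
        best = None                      # greedy search over all stumps
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)        # upweight misclassified samples
        w = w / w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(alpha * pol * np.where(X[:, j] <= thr, 1, -1)
                for alpha, j, thr, pol in ensemble)
    return np.where(score >= 0, 1, -1)
```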
APA, Harvard, Vancouver, ISO, and other styles
16

Král, Jiří. "Strojové učení v klasifikaci obrazu." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237067.

Full text
Abstract:
This project deals with the analysis and testing of algorithms and statistical models that could potentially improve the results of FIT BUT in the ImageNet Large Scale Visual Recognition Challenge and TRECVID. A multinomial model was tested. The Phonotactic Intersession Variation Compensation (PIVCO) model was used for reducing random effects in the image representation and for dimensionality reduction. PIVCO dimensionality reduction achieved the best mean average precision while reducing to one-twentieth of the original dimension. A KPCA model was tested to approximate kernel SVM. All statistical models were tested on the PASCAL VOC 2007 dataset.
APA, Harvard, Vancouver, ISO, and other styles
17

Pribil, Nathaniel Brent. "Virtue Conquered by Fortune: Cato in Lucan's Pharsalia." BYU ScholarsArchive, 2017. https://scholarsarchive.byu.edu/etd/6625.

Full text
Abstract:
This thesis looks at how the Roman poet Lucan uses the character of Cato to elucidate his beliefs about Fortune and Stoicism. The traditional Stoic view regards Fortune as a force for good that allows people to improve through hardship. Lucan portrays Fortune as a purely antagonistic force that actively seeks to harm the Roman people and corrupt even good individuals like Cato. Lucan's Fortune arranges events to place Cato in a situation where it is impossible to maintain his virtue. Rather than providing him an opportunity to improve in the civil war, Fortune makes it so that whatever choice Cato makes, he becomes guilty. Brutus' dialogue with Cato in Book 2 of Pharsalia illuminates the position that Cato is in. Brutus looks to Cato as the traditional Stoic exemplar that can forge a path for virtue in civil war. However, Cato admits that joining any side in the civil war would cause him to become guilty. Fortune's support of Caesar and its dominance over contemporary events have forced Cato into this situation. Cato's desert march in Book 9 continues to show Fortune's dominance over Cato by continually denying him opportunities to gain virtue for himself. Lucan's portrayal of Fortune shows his rejection of Stoic teaching about Fortune and the ultimate futility of trying to remain virtuous in a time of civil war.
APA, Harvard, Vancouver, ISO, and other styles
18

Tillich, Daniel. "Generalized Modeling and Estimation of Rating Classes and Default Probabilities Considering Dependencies in Cross and Longitudinal Section." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-222601.

Full text
Abstract:
Our sample (Xit; Yit) consists of pairs of variables. The real variable Xit measures the creditworthiness of individual i in period t. The Bernoulli variable Yit is the default indicator of individual i in period t. The objective is to estimate a credit rating system, i.e. in particular to divide the range of creditworthiness into several rating classes, each with a homogeneous default risk. The field of change point analysis provides a way to estimate the breakpoints between the rating classes. As yet, the literature considers only models without dependencies or with dependence in cross section alone. This contribution proposes multi-period models including dependencies in cross section as well as in longitudinal section. Furthermore, estimators for the model parameters are suggested. The estimators are applied to a data set of a German credit bureau.
APA, Harvard, Vancouver, ISO, and other styles
19

Nonn, Kayla A. "Virtue, Politics, and Republican Heroes: A Comparison of George Washington and Cato the Younger." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1025.

Full text
Abstract:
This thesis examines the extent to which George Washington may have intentionally modeled himself upon Cato the Younger, the Roman senator who famously resisted tyranny during the decline of the Roman Republic. Having seen a rendition of Joseph Addison's Cato as a young man and quoting the play throughout his life, Washington was profoundly impacted by the performance and bore many resemblances to the play's protagonist. Though scholars often paint Washington as a near reincarnation of Cato, I will both provide an interpretation of Addison's Cato and evaluate Washington and Cato in their respective historical contexts in order ultimately to conclude that Washington was much more of a reasonable, practical politician than his Roman counterpart.
APA, Harvard, Vancouver, ISO, and other styles
20

Tillich, Daniel, and Christoph Lehmann. "Estimation in discontinuous Bernoulli mixture models applicable in credit rating systems with dependent data." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-222582.

Full text
Abstract:
Objective: We consider the following problem from credit risk modeling: Our sample (Xi; Yi), 1 ≤ i ≤ n, consists of pairs of variables. The first variable Xi measures the creditworthiness of individual i. The second variable Yi is the default indicator of individual i. It has two states: Yi = 1 indicates a default, Yi = 0 a non-default. A default occurs if individual i cannot meet its contractual credit obligations, i.e. it cannot pay back its outstanding debts regularly. In a first step, our objective is to estimate the threshold between good and bad creditworthiness in the sense of dividing the range of Xi into two rating classes: one class with good creditworthiness and a low probability of default, and another class with bad creditworthiness and a high probability of default. Methods: Given observations of individual creditworthiness Xi and defaults Yi, the field of change point analysis provides a natural way to estimate the breakpoint between the rating classes. In order to account for dependency between the observations, the literature proposes a combination of three model classes: a breakpoint model, a linear one-factor model for the creditworthiness Xi, and a Bernoulli mixture model for the defaults Yi. We generalize the dependency structure further and use a generalized link between the systematic and idiosyncratic factors of creditworthiness. The systematic factor can thus change not only the location, but also the form of the distribution of creditworthiness. Results: For the case of two rating classes, we propose several estimators for the breakpoint and for the default probabilities within the rating classes. We prove the strong consistency of these estimators in the given non-i.i.d. framework. The theoretical results are illustrated by a simulation study. Finally, we give an overview of research opportunities.
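The first step, locating the breakpoint between two rating classes, can be illustrated with a basic least-squares change point sketch (an i.i.d. toy version on synthetic data, not the paper's dependent-data estimator):

```python
import numpy as np

def estimate_breakpoint(x, y):
    """Estimate the threshold between two rating classes as the split that
    maximizes the between-class variance of the default indicator y."""
    order = np.argsort(x)
    xs = np.asarray(x, dtype=float)[order]
    ys = np.asarray(y, dtype=float)[order]
    n = len(xs)
    mu = ys.mean()
    best_gain, best_k = -np.inf, 1
    for k in range(1, n):  # candidate split between sorted positions k-1 and k
        gain = k * (ys[:k].mean() - mu) ** 2 + (n - k) * (ys[k:].mean() - mu) ** 2
        if gain > best_gain:
            best_gain, best_k = gain, k
    thr = 0.5 * (xs[best_k - 1] + xs[best_k])
    # Return the threshold and the default-rate estimates per class.
    return thr, ys[:best_k].mean(), ys[best_k:].mean()
```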
APA, Harvard, Vancouver, ISO, and other styles
21

Tanasa, Doru. "Fouille de données d'usage du Web : Contributions au prétraitement de logs Web Intersites et à l'extraction des motifs séquentiels avec un faible support." Phd thesis, Université de Nice Sophia-Antipolis, 2005. http://tel.archives-ouvertes.fr/tel-00178870.

Full text
Abstract:
The last fifteen years have been marked by exponential growth of the Web, both in the number of available websites and in the number of users of those sites. This growth has generated very large volumes of data on Web usage by Internet users, recorded in Web log files. Moreover, the owners of these sites want to better understand their visitors in order to better meet their expectations. Web Usage Mining (WUM), a fairly recent research field, is precisely the process of knowledge discovery from data (KDD) applied to Web usage data. It comprises three main steps: data preprocessing, pattern discovery, and analysis (or interpretation) of the results. A WUM process extracts behavioral patterns from usage data and, possibly, from information about the site (structure and content) and about its users (profiles). The quantity of usage data to analyze, together with its low quality (in particular the lack of structure), are the main problems in WUM. Classical data mining algorithms applied to this data generally give disappointing results in terms of user practices (for example, obvious sequential patterns devoid of interest). In this thesis, we make two important contributions to a WUM process, implemented in our toolbox AxisLogMiner. We propose a general methodology for preprocessing Web logs, and a general divisive methodology with three approaches (and associated concrete methods) for discovering sequential patterns with low support. Our first contribution concerns the preprocessing of Web usage data, a topic still rarely addressed in the literature. 
The originality of the proposed preprocessing methodology lies in the fact that it takes into account the multi-site aspect of WUM, essential for capturing the practices of users who navigate transparently across, for example, several websites of the same organization. Besides integrating the main existing work on this topic, our methodology comprises four distinct steps: merging of log files, data cleaning, structuring, and aggregation. In particular, we propose several heuristics for cleaning Web robot requests, aggregate variables describing sessions and visits, and the storage of this data in a relational model. Several experiments were carried out, showing that our methodology achieves a strong reduction (up to a factor of 10) of the number of initial requests and provides richer structured logs for the subsequent data mining step. Our second contribution aims at discovering, from a large preprocessed log file, minority behaviors corresponding to sequential patterns with very low support. To this end, we propose a general methodology for dividing the preprocessed log file into sub-logs, following three approaches for extracting sequential patterns with low support (Sequential, Iterative, and Hierarchical). These have been implemented in concrete hybrid methods combining clustering algorithms and sequential pattern mining algorithms. Several experiments on logs from academic websites allowed us to discover interesting sequential patterns with very low support, whose discovery by a classical Apriori-type algorithm was impossible. 
Finally, we propose a toolbox called AxisLogMiner, which supports our preprocessing methodology and, currently, two concrete hybrid methods for discovering sequential patterns in WUM. This toolbox has been used for numerous log file preprocessing tasks and for experiments with our implemented methods.
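The divisive idea, splitting the preprocessed log into sub-logs so that globally rare patterns become locally frequent, can be sketched as follows. A toy miner counting ordered item pairs stands in for a real sequential pattern algorithm, and the `group_of` function is a hypothetical stand-in for the clustering step:

```python
from collections import Counter
from itertools import combinations

def frequent_subsequences(sessions, min_support):
    """Keep ordered 2-item subsequences occurring in at least
    min_support sessions (absolute support count)."""
    counts = Counter()
    for s in sessions:
        # combinations() preserves order, so each pair is an ordered
        # subsequence of the session; the set dedups within a session.
        counts.update(set(combinations(s, 2)))
    return {p: c for p, c in counts.items() if c >= min_support}

def divisive_mining(sessions, group_of, min_support):
    """Split sessions into sub-logs and mine each one separately, so
    patterns with low global support can still be locally frequent."""
    sublogs = {}
    for s in sessions:
        sublogs.setdefault(group_of(s), []).append(s)
    return {g: frequent_subsequences(sub, min_support)
            for g, sub in sublogs.items()}
```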
APA, Harvard, Vancouver, ISO, and other styles
22

Su, Junjie. "Accurate and Reliable Cancer Classification Based on Pathway-Markers and Subnetwork-Markers." Thesis, 2010. http://hdl.handle.net/1969.1/ETD-TAMU-2010-12-8865.

Full text
Abstract:
Finding reliable gene markers for accurate disease classification is very challenging due to a number of reasons, including the small sample size of typical clinical data, high noise in gene expression measurements, and the heterogeneity across patients. In fact, gene markers identified in independent studies often do not coincide with each other, suggesting that many of the predicted markers may have no biological significance and may be simply artifacts of the analyzed dataset. To find more reliable and reproducible diagnostic markers, several studies proposed to analyze the gene expression data at the level of groups of functionally related genes, such as pathways. Given a set of known pathways, these methods estimate the activity level of each pathway by summarizing the expression values of its member genes and using the pathway activities for classification. One practical problem of the pathway-based approach is the limited coverage of genes by currently known pathways. As a result, potentially important genes that play critical roles in cancer development may be excluded. In this thesis, we first propose a probabilistic model to infer pathway/subnetwork activities. We then develop a novel method for identifying reliable subnetwork markers in a human protein-protein interaction (PPI) network based on probabilistic inference of subnetwork activities. We tested the proposed methods on two independent breast cancer datasets. The proposed method can efficiently find reliable subnetwork markers that outperform the gene-based and pathway-based markers in terms of discriminative power, reproducibility and classification performance. The identified subnetwork markers are highly enriched in common GO terms, and they can more accurately classify breast cancer metastasis compared to markers found by a previous method.
APA, Harvard, Vancouver, ISO, and other styles
23

(9706502), Claire T. Nimlos. "Influence of Organic and Inorganic Cations on Directing Aluminum Distributions in Zeolite Frameworks and Effects on Brønsted Acid Catalysis." Thesis, 2020.

Find full text
Abstract:
<p>Zeolites are microporous crystalline solids with tetrahedrally bonded Si<sup>4+</sup> atoms linked together with bridging oxygens, interconnected in various geometries and arrangements to generate a diversity of microporous topologies. The substitution of Al<sup>3+</sup> into framework tetrahedral sites (T-sites) generates anionic lattice charges that can be counterbalanced by protons (Brønsted acid sites) or extraframework metal cations and complexes that can act as catalytic active sites. The local arrangement of Al ensembles can be categorized by the size of the (alumino)silicate rings and the number and order of the Al atoms they contain, which are critical structural features that influence their ability to serve as binding sites for extraframework cations of different size and oxidation state. The ability to exercise control over the isomorphic substitution of Al<sup>3+</sup> into the zeolite framework during hydrothermal crystallization has long been envisioned, but recognized to depend on complex and kinetically-controlled nucleation and crystal growth events that challenge the development of reproducible synthesis routes and predictive synthesis-structure relations. Here, we present the results of extensive experimental and theoretical investigation of the chabazite (CHA) zeolite topology, which contains a single crystallographically distinct T-site that enables studying effects of Al arrangement independent of T-site location. We then extend these findings and methodologies to investigate more complex zeolite topologies with larger numbers of distinct T-sites, including other small-pore (AEI, LEV) and medium-pore zeolites (MFI, MEL), with a specific focus on MFI zeolites because of their versatility in commercial applications. 
Cationic species are often present during hydrothermal zeolite crystallization, in the form of inorganic and organic structure directing agents (SDAs), to help guide formation of the intended zeolite topology and to compensate charge when Al is incorporated into the lattice. Variations in the type and amount of cationic SDAs have been shown to influence both the Al siting within different void locations of a given zeolite and the local Al arrangement. In order to make quantitative assessments of the number of Al-Al site pairs formed in a given zeolite, experimental protocols to titrate the specific Al-Al site ensembles are required. We specifically explore the use of Co<sup>2+</sup> titrants at saturation uptakes, verifying the sole presence of Co<sup>2+</sup> cations via spectroscopic identification and a cation site balance that is closed by quantifying residual Brønsted acid sites by NH<sub>3</sub> titration.</p><p>We then investigate the role of the cationic SDA content in the synthesis mixture on the Al arrangement in MFI zeolites. Depending on the specific mixture of the organic cation tetrapropylammonium (TPA<sup>+</sup>) or various neutral organic molecules when used together with smaller Na<sup>+</sup> cations, MFI zeolites can be crystallized over a range of Al content. Moreover, the fraction of Co<sup>2+</sup>-titratable Al-Al pairs correlates with the amount of occluded Na<sup>+</sup> cations when the total Al content is held approximately constant (Si/Al ~ 50). These results are consistent with our prior reports of CHA zeolites, wherein the occlusion of smaller Na<sup>+</sup> cations correlates positively with the formation of Al-Al pairs in six-membered ring (6-MR) locations. Unlike the N,N,N-trimethyl-1-adamantylammonium (TMAda<sup>+</sup>) cation used to crystallize CHA, which alone does not form Co<sup>2+</sup>-titratable Al-Al site pairs, the organic TPA<sup>+</sup> alone can form Al-Al site pairs in MFI. 
DFT calculations of Al siting energies, using a 96 T-site MFI unit cell containing either one or two Al charge-balanced by one or two occluded TPA<sup>+</sup>, respectively, reveal the dominant influence of electrostatic interactions between the cationic N of TPA<sup>+</sup> and the anionic lattice charge. DFT calculations of probable Co<sup>2+</sup> exchange sites are used to identify a subset of Al-Al site pairs with favorable energies when compensated either by Co<sup>2+</sup> or by two TPA<sup>+</sup> molecules in adjacent MFI channel intersections. MFI crystallized with one cationic species (TPA<sup>+</sup> or Na<sup>+</sup>) together with a neutral organic species (ethylenediamine, pentaerythritol, or a mixture of methylamine and 1,4-diazabicyclo[2.2.2]octane) contains significantly lower fractions of Co<sup>2+</sup>-titratable Al-Al pairs at similar bulk Al content (Si/Al = 43–58), demonstrating the role of neutral organic species to occupy void spaces without providing the capacity to compensate charge, thus serving to increase the average spatial separation of framework Al sites. The kinetics of methanol dehydration to dimethyl ether can be quantified by first-order and zero-order rate constants (415 K, per H<sup>+</sup>) to probe acid strength and confinement effects in solid Brønsted acids. Here, we use this quantitative probe reaction to investigate how Al arrangements in MFI and CHA affect the mechanism and kinetics of this reaction, in order to connect synthetic protocols to structure and to catalytic function. This effort first involved measurement of methanol dehydration kinetics on a suite of commercially sourced MFI samples to benchmark results obtained on our kinetic instruments against prior literature reports. CHA zeolites with isolated 6-MR protons show zero-order rate constants similar to those for commercial MFI zeolites and other topologies previously studied in the literature, reflecting the invariance of Brønsted acid strength with zeolite topology. 
First-order rate constants on isolated acid sites in CHA are an order of magnitude higher than acid sites in MFI, reflecting the smaller confining environments present in CHA than in the medium-pore zeolite MFI. In contrast, both first-order and zero-order rate constants among CHA samples increase systematically with the fraction of 6-MR Al pairs, even for samples of nominally similar composition (Si/Al ~ 15). DFT provides evidence for lower activation barriers at protons of 6-MR paired Al sites in CHA, which stabilize transition states via H-bonding interactions through co-adsorbed methanol bound at the proximal acid site, in a manner dependent on the specific Al arrangement and ring size and structure. Such favorable configurations are identified for 6-MR paired Al sites in CHA, but were not identified within MFI zeolite, which shows first-order and zero-order rate constants that are invariant with varying Al-Al site pair content. These findings and conclusions demonstrate how quantitative experimental characterization and kinetic data, augmented by theory insights, can aid in the development of more predictive synthesis-structure-function relations for zeolite materials and help transform empirical efforts in active site design and engineering into a more predictive science.</p>
APA, Harvard, Vancouver, ISO, and other styles
24

(9801566), Christine Hanley. "An examination of question order effects on population health survey data using split sample CATI experiments." Thesis, 2012. https://figshare.com/articles/thesis/An_examination_of_question_order_effects_on_population_health_survey_data_using_split_sample_CATI_experiments/13463285.

Full text
Abstract:
"The use of CATI methods for conducting population health surveys is an enduring and popular practice. Although it is one of the most reliable ways to collect data, there is a range of sampling and non-sampling errors which can potentially affect the ability of CATI surveys to provide accurate results. This thesis examined the impact of a particular type of non-sampling error - question order effects - in CATI surveys. Two studies were conducted: one which examined order effects in data collected using a standard health survey instrument designed to measure and classify physical activity behaviour, and one which examined order effects in data collected using two question blocks designed to evaluate health-related knowledge and attitudes"--Abstract.
APA, Harvard, Vancouver, ISO, and other styles
25

(8972660), Rashmi Kumar. "INVESTIGATION OF THE PROTONATION SITES IN POLYFUNCTIONAL ANALYTES UPON ATMOSPHERIC PRESSURE IONIZATION IN MASS SPECTROMETRY AND STUDIES OF THE REACTIVITIES OF RADICALS IN THE GAS PHASE AND SOLUTION." Thesis, 2020.

Find full text
Abstract:
<p>High-resolution tandem mass spectrometry (MS<sup>n</sup>) coupled with separation techniques such as high-performance liquid chromatography (HPLC) and gas chromatography (GC) is widely used to analyze mixtures of unknown organic compounds. In a mass spectrometric analysis, analytes of interest are first transferred into the gas phase, ionized (protonated or deprotonated), and introduced into the instrument. Tandem mass spectrometric experiments may then be used to gain insights into the structure and reactivity of the analyte ions in the gas phase. The tandem mass spectral data are often compared to those reported in external databases. However, the tandem mass spectra obtained for protonated analytes may be markedly different from those in external databases because the protonation site manifested during a mass spectrometric experiment can be affected by the ionization technique, the ionization solvents, and the condition of the ion source. This thesis focuses on investigating the effects of instrumental conditions and analyte concentrations on the protonation sites of 4-aminobenzoic acid. Reactivities of radical species were also investigated. A modified bracketing method was developed, and proton affinities of a series of mono- and biradicals of pyridine were measured. In another study, a <i>para</i>-benzyne analog was generated in both solution and the gas phase, and its reactivities toward various neutral reagents in the gas phase were compared to those in solution.</p> <p>Chapter 2 discusses the fundamental aspects of the instruments used in this research. Chapter 3 considers the effects of residual moisture in a linear quadrupole ion trap on the protonation sites of 4-aminobenzoic acid. Chapter 4 focuses on the use of gas-phase ion-molecule reactions with trimethoxymethylsilane (TMMS) to identify the protonation sites of 4-aminobenzoic acid, and further considers the effects of analyte concentration on those protonation sites. Chapter 5 introduces a modified bracketing method for the experimental determination of proton affinities of a series of pyridine-based mono- and biradicals. Chapter 6 discusses the successful generation of <i>para</i>-benzynes in solution and compares the reactivity of a <i>para</i>-benzyne analog, 1,4-didehydrophenazine, in solution to its reactivity in the gas phase.</p>
APA, Harvard, Vancouver, ISO, and other styles
