
Dissertations / Theses on the topic 'Discovery of the large'


Consult the top 50 dissertations / theses for your research on the topic 'Discovery of the large.'


1

Kohlsdorf, Daniel. "Data mining in large audio collections of dolphin signals." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53968.

Abstract:
The study of dolphin cognition involves intensive research of animal vocalizations recorded in the field. In this dissertation I address the automated analysis of audible dolphin communication. I propose a system called the signal imager that automatically discovers patterns in dolphin signals. These patterns are invariant to frequency shifts and time warping transformations. The discovery algorithm is based on feature learning and unsupervised time series segmentation using hidden Markov models. Researchers can inspect the patterns visually and interactively run comparative statistics between the distribution of dolphin signals in different behavioral contexts. The required statistics for the comparison describe dolphin communication as a combination of the following models: a bag-of-words model, an n-gram model and an algorithm to learn a set of regular expressions. Furthermore, the system can use the patterns to automatically tag dolphin signals with behavior annotations. My results indicate that the signal imager provides meaningful patterns to the marine biologist and that the comparative statistics are aligned with the biologists' domain knowledge.
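As a concrete illustration of the comparative statistics described above (a sketch, not code from the dissertation), here are bag-of-words and bigram counts over already-discovered pattern labels in Python; the labels and contexts are hypothetical:

```python
from collections import Counter

def context_statistics(pattern_sequences):
    """Bag-of-words and bigram (2-gram) counts over discovered pattern labels.

    pattern_sequences: list of label sequences for one behavioral context,
    e.g. [["A", "B", "A"], ["B", "B"]].
    """
    bag, bigrams = Counter(), Counter()
    for seq in pattern_sequences:
        bag.update(seq)                    # bag-of-words counts
        bigrams.update(zip(seq, seq[1:]))  # adjacent-pair (bigram) counts
    return bag, bigrams

# Hypothetical pattern labels for two behavioral contexts.
foraging = [["whistle", "click", "whistle"], ["click", "click"]]
play = [["whistle", "whistle"], ["whistle", "click"]]
print(context_statistics(foraging)[0])  # Counter({'click': 3, 'whistle': 2})
print(context_statistics(play)[0])      # Counter({'whistle': 3, 'click': 1})
```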
2

Tedeschi, Cédric. "Peer-to-Peer Prefix Tree for Large Scale Service Discovery." PhD thesis, École normale supérieure de Lyon - ENS LYON, 2008. http://tel.archives-ouvertes.fr/tel-00529666.

Abstract:
This thesis studies service discovery (software components, executables, scientific libraries) on large-scale distributed platforms. Traditional approaches, designed for stable and relatively small environments, rely on centralized techniques that do not scale in geographically distributed and unstable environments. Our contribution is organized along three axes. 1) We propose a new approach called DLPT (Distributed Lexicographic Placement Table), inspired by peer-to-peer systems and based on a structured overlay network shaped as a prefix tree. This structure supports multi-attribute range queries. 2) We study the distribution of the tree's nodes over the processors of the underlying distributed, dynamic and heterogeneous platform, and we propose and adapt load-balancing heuristics for this type of architecture. 3) Our target platform, unstable by nature, requires robust fault-tolerance mechanisms. Traditional replication proves costly there and cannot handle transient faults. We propose best-effort fault-tolerance techniques based on self-stabilization theory for building prefix trees in peer-to-peer environments, and present two approaches. The first, written in a coarse-grained theoretical model, maintains snap-stabilizing prefix trees, that is, trees rebuilt in optimal time after an arbitrary number of faults. The second, written in the message-passing model, allows such an architecture to be implemented in highly dynamic networks. Finally, we present a software prototype implementing this architecture and report its first experiments on the Grid'5000 platform.
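To illustrate the prefix-tree idea behind the DLPT (a minimal in-memory sketch, not the thesis implementation, which distributes these nodes over a peer-to-peer overlay), here is a lexicographic trie with prefix search in Python; the service keys and provider names are hypothetical:

```python
class PrefixTreeNode:
    """Minimal in-memory lexicographic prefix tree (trie)."""
    def __init__(self):
        self.children = {}
        self.services = []

    def insert(self, key, service):
        node = self
        for ch in key:
            node = node.children.setdefault(ch, PrefixTreeNode())
        node.services.append(service)

    def search_prefix(self, prefix):
        """Return every service registered under keys starting with `prefix`."""
        node = self
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        results, stack = [], [node]
        while stack:           # collect the whole subtree below the prefix
            n = stack.pop()
            results.extend(n.services)
            stack.extend(n.children.values())
        return results

tree = PrefixTreeNode()
tree.insert("dgemm/x86", "node-17")   # hypothetical service keys
tree.insert("dgemm/arm", "node-42")
print(tree.search_prefix("dgemm"))    # both providers (order may vary)
```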
3

Zhang, Xi. "Knowledge discovery from large-scale biological networks and their relationships." Thesis, University of British Columbia, 2010. http://hdl.handle.net/2429/23353.

Abstract:
The ultimate aim of postgenomic biomedical research is to understand mechanisms of cellular systems in a systematic way. It is therefore necessary to examine various biomolecular networks and to investigate how the interactions between biomolecules determine biological functions within cellular systems. Rapid advancement in high-throughput techniques provides us with increasing amounts of large-scale datasets that could be transformed into biomolecular networks. Analyzing and integrating these biomolecular networks have become major challenges. I approached these challenges by developing novel methods to extract new knowledge from various types of biomolecular networks. Protein-protein interactions and domain-domain interactions are extremely important in a wide range of biological functions. However, the interaction data are incomplete and inaccurate due to experimental limitations. Therefore, I developed a novel algorithm to predict interactions between membrane proteins in yeast based on the protein interaction network and the domain interaction network. In addition, I also developed a novel algorithm, a gram-based interaction analysis tool (GAIA), to identify interacting domains by integrating the protein primary sequences, the domain annotations and interactions and the structural annotations of proteins. Biological assessment against several metrics indicated that both algorithms were capable of satisfactory performance, facilitating the elucidation of the cell interactome. Predicting biological pathways is one of the major challenges in systems biology. I proposed a novel integrated approach, called Pandora, which used network topology to predict biological pathways by integrating four types of biological evidence (protein-protein interactions, genetic interactions, domain-domain interactions, and semantic similarity of GO terms). I demonstrated that Pandora achieved better performance compared to other predictive approaches, allowing the reconstruction of biological pathways and the delineation of cellular machinery in a systematic view. Finally, I focused on investigating biological network perturbations in diseases. I developed a novel algorithm to capture highly disturbed sub-networks in the human interactome as the signatures linked to cancer outcomes. This method was applied to breast cancer and yielded improved predictive performance, providing the possibility to predict the outcome of cancers based on “network-based gene signatures”. These methods and tools contributed to the analysis and understanding of a wide variety of biological networks and the relationships between them.
4

Lam, Lap-Hing Raymond. "Design and analysis of large chemical databases for drug discovery." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/NQ65249.pdf.

5

Binder, Polina. "Unsupervised discovery of emphysema subtypes in a large clinical cohort." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105678.

Abstract:
Emphysema is one of the hallmarks of Chronic Obstructive Pulmonary Disease (COPD), a devastating lung disease often caused by smoking. Emphysema appears on Computed Tomography (CT) scans as a variety of textures that correlate with the disease subtypes. It has been shown that the disease subtypes and the lung texture are linked to physiological indicators and prognosis, although neither is well characterized clinically. Most previous computational approaches to modeling emphysema imaging data have focused on supervised classification of lung textures in patches of CT scans. In this work, we describe a generative model that jointly captures heterogeneity of disease subtypes and of the patient population. We also derive a corresponding inference algorithm that simultaneously discovers disease subtypes and population structure in an unsupervised manner. This approach enables us to create image-based descriptors of emphysema beyond those that can be identified through manual labeling of currently defined phenotypes. By applying the resulting algorithm to a large data set, we identify groups of patients and disease subtypes that correlate with distinct physiological indicators.
6

Elsilä, U. (Ulla). "Knowledge discovery method for deriving conditional probabilities from large datasets." Doctoral thesis, University of Oulu, 2007. http://urn.fi/urn:isbn:9789514286698.

Abstract:
In today's world, enormous amounts of data are being collected every day. Thus, the problems of storing, handling, and utilizing the data are faced constantly. As the human mind itself can no longer interpret the vast datasets, methods for extracting useful and novel information from the data are needed and developed. These methods are collectively called knowledge discovery methods. In this thesis, a novel combination of feature selection and data modeling methods is presented in order to help with this task. This combination includes the methods of basic statistical analysis, linear correlation, self-organizing map, parallel coordinates, and k-means clustering. The presented method can be used, first, to select the most relevant features from even hundreds of them and, then, to model the complex inter-correlations within the selected ones. The capability to handle hundreds of features opens up the possibility to study more extensive processes instead of just looking at smaller parts of them. The results of a k-nearest-neighbors study show that the presented feature selection procedure is valid and appropriate. A second advantage of the presented method is the possibility to use thousands of samples. Whereas the current rules of selecting appropriate limits for utilizing the methods are theoretically proved only for small sample sizes, especially in the case of linear correlation, this thesis gives the guidelines for feature selection with thousands of samples. A third positive aspect is the nature of the results: given that the outcome of the method is a set of conditional probabilities, the derived model is highly unrestrictive and rather easy to interpret. In order to test the presented method in practice, it was applied to study two different cases of steel manufacturing with hot strip rolling. In the first case, the conditional probabilities for different types of retentions were derived and, in the second case, the rolling conditions for the occurrence of wedge were revealed. The results of both of these studies show that steel manufacturing processes are indeed very complex and highly dependent on the various stages of the manufacturing. This was further confirmed by the fact that with studies of k-nearest-neighbors and C4.5, it was impossible to derive useful models concerning the datasets as a whole. It is believed that the reason for this lies in the nature of these two methods, meaning that they are unable to grasp such manifold inter-correlations in the data. On the contrary, the presented method of conditional probabilities allowed new knowledge to be gained of the studied processes, which will help to better understand these processes and to enhance them.
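A toy Python sketch of the final step this abstract describes, reading conditional probabilities off clusters (here k-means on synthetic stand-in data; the thesis additionally uses self-organizing maps and parallel coordinates upstream of this step):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                         # stand-in process features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)  # stand-in outcome

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Conditional probability of the outcome given each cluster.
for c in range(4):
    in_cluster = labels == c
    print(f"P(outcome=1 | cluster {c}) = {y[in_cluster].mean():.2f}")
```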
7

Zhang, ZengHua. "Discovery and characterisation of ultra-cool dwarfs in large scale surveys." Thesis, University of Hertfordshire, 2013. http://hdl.handle.net/2299/13900.

Abstract:
Ultracool dwarfs, including the lowest mass stars and substellar dwarfs (or brown dwarfs), form a rapidly evolving and very active field. In this thesis I present the discovery and characterization of ultracool dwarfs and their binary systems with solar and subsolar abundances, and try to answer a few scientific questions related to these ultracool objects. I use different techniques based on photometric and astrometric data from modern large scale surveys to identify ultracool dwarfs and their binaries. I identify around 1000 ultracool dwarfs from the SDSS, 2MASS and UKIDSS surveys, including 82 L dwarfs and 129 L dwarf candidates (Chapters 2 and 4). This work largely increases the known number of ultracool dwarfs and aids the statistical study of these objects. Eighteen ultracool dwarfs in my sample are found to be in wide binary systems by common proper motion (Chapters 4 and 5). Wide binary systems are often used to test formation theories of low mass stars and brown dwarfs, which make different predictions for separations and binary fractions. One of these binary systems is the first L dwarf companion to a giant star, eta Cancri. Eta Cancri B is clearly a useful benchmark object, with constrained distance, age, and metallicity. Furthermore, the L3.5 dwarf companion eta Cancri B is found to be a potential L4 + T4 binary. I focus on the studies of low mass stars and brown dwarfs with subsolar abundance, referred to as red and ultracool subdwarfs. They belong to the older Population II of the Galactic halo and contain information on the formation, early evolution and structure of the Milky Way. Using the most extensive optical survey, the Sloan Digital Sky Survey (SDSS), to select low mass stars with subsolar abundance, referred to as red subdwarfs with spectral types of late K and M, I identify about 1800 M subdwarfs, including 30 new >M6 subdwarfs and five M ultra subdwarfs with very high gravity, as well as 14 carbon enhanced red subdwarfs. I also identify 45 red subdwarf binary systems from my red subdwarf sample. Thirty of them are in wide binary systems identified by common proper motion, and fifteen binaries are partially resolved in SDSS and UKIDSS. I estimate the M subdwarf binary fraction. I fit the relationships between spectral types and absolute magnitudes in optical and near infrared bands for M and L subdwarfs. I also measure UVW space velocities of my M subdwarf sample (Chapter 5). Our studies of the lowest mass stars and brown dwarfs of the Galactic halo are limited by the lack of known objects; there are only seven L subdwarfs published in the literature. I search for ultracool subdwarfs by a combined use of the most extensive optical and near infrared surveys, the SDSS and the UKIRT Infrared Deep Sky Survey. I identify three new L subdwarfs with spectral types of sdL3, sdL7 and esdL6. I re-examine the spectral types and metal classes of all known L subdwarfs and propose to use the 2.3 μm CO line as an indicator of L subdwarfs. Two of my new L subdwarfs are found to be candidate halo brown dwarfs (or substellar subdwarfs). I find that four of the ten known L subdwarfs could be halo brown dwarfs. I propose a new name, "purple dwarf", for lowest-mass stars and brown dwarfs with subsolar abundance (Chapter 3). Finally, I summarize and discuss the thesis project in Chapter 6 and describe future research plans in Chapter 7.
8

Graham, Eleanor (Eleanor L.). "Sensitivity Models for β+/EC Discovery in Large-Volume Scintillation Detectors." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/127094.

Abstract:
In this thesis, we consider the β+/EC decay of 124Xe and take the first steps towards characterizing a hypothetical experiment to detect it, making use of techniques traditionally employed in neutrinoless double beta decay experiments. We use a simulated large-volume scintillation detector modeled on the Super-Kamiokande experiment, fully implementing this detector in RAT/Geant4. This allows us to extract authentic spectra for the experimental signature of the β+/EC decay in 124Xe, paving the way for future sensitivity studies. We also consider the relevance of next-generation techniques for background discrimination, specifically particle identification based on counting Cherenkov photons. We find that discrimination between β and β particles is readily possible in experiments run at the 1.25 MeV energy scale and also see evidence for the possibility of distinguishing between β+ and β- particles via their Cherenkov signatures.
9

Ewert, Kevin. "An Adaptive Machine Learning Approach to Knowledge Discovery in Large Datasets." NSUWorks, 2006. http://nsuworks.nova.edu/gscis_etd/510.

Abstract:
Large text databases, such as medical records, on-line journals, or the Internet, potentially contain a great wealth of data and knowledge. However, text representation of factual information and knowledge is difficult to process, and analyzing these large text databases often relies upon time-consuming human resources for data mining. Since a textual format is a very flexible way to describe and store various types of information, large amounts of information are often retained and distributed as text. "The amount of accessible textual data has been increasing rapidly. Such data may potentially contain a great wealth of knowledge. However, analyzing huge amounts of textual data requires a tremendous amount of work in reading all of the text and organizing the content. Thus, the increase in accessible textual data has caused an information flood in spite of hope of becoming knowledgeable about various topics" (Nasukawa and Nagano, 2001). Preliminary research focused on key concepts and techniques derived from clustering methodology, machine learning, and other communities within the arena of data mining. The research was based on a two-stage machine-intelligence system that clustered and filtered large datasets. The overall objective was to optimize response time through parallel processing while attempting to reduce potential errors due to knowledge manipulation. The results generated by the two-stage system were reviewed by domain experts and tested using traditional methods that included multivariable regression analysis and logic testing for accuracy. The two-stage prototype developed a model that was 85 to 90% accurate in determining childhood asthma and disproved existing stereotypes related to sleep breathing disorders. Detailed results will be discussed in the proposed dissertation. While the initial research demonstrated positive results in processing large text datasets, limitations were identified. These limitations included processing delays resulting from equal distribution of processing in a heterogeneous client environment, and utilizing the results derived from the second stage as inputs for the first stage. To address these limitations, the proposed doctoral research will investigate the dynamic distribution of processing in a heterogeneous environment and cyclical learning involving the second-stage neural network clients modifying the first-stage expert systems.
10

Weninger, Timothy Edwards. "Link discovery in very large graphs by constructive induction using genetic programming." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/1087.

11

Fathy, Yasmin. "Large-scale indexing, discovery and ranking for the Internet of Things (IoT)." Thesis, University of Surrey, 2018. http://epubs.surrey.ac.uk/848997/.

Abstract:
Network-enabled sensing and actuation devices are key enablers to connect real-world objects to the cyber world. The Internet of Things (IoT) consists of network-enabled devices and communication technologies that allow connectivity and integration of physical objects (Things) into the digital world (Internet). Dealing with the data deluge from heterogeneous IoT resources and services imposes new challenges on indexing, discovery and ranking mechanisms. Novel indexing and discovery methods will enable developing applications that use on-line access and retrieval of ad-hoc IoT data. Investigation of the related work leads to the conclusion that there has been significant work on processing and analysing sensor data streams. However, there is still a need for integrating solutions that contemplate the work-flow from connecting IoT resources to make their published data indexable, searchable and discoverable. This research proposes a set of novel solutions for indexing, processing and discovery in IoT networks. The work proposes novel distributed in-network and spatial indexing solutions. The proposed solutions scale well and provide up to 92% better response time and higher success rates in response to data search queries compared to a baseline approach. A co-operative, adaptive, change detection algorithm has also been developed. It is based on a convex combination of two decoupled Least Mean Square (LMS) windowed filters. The approach provides better performance and less complexity compared to the state-of-the-art solutions. The change detection algorithm can also be applied to distributed networks in an on-line fashion. This co-operative approach allows publish/subscribe based and change based discovery solutions in IoT. Continuous transmission of large volumes of data collected by sensor nodes induces a high communication cost for each individual node in IoT networks. An Adaptive Method for Data Reduction (AM-DR) has been proposed for reducing the number of data transmissions in IoT networks. In AM-DR, identical predictive models are constructed at both the sensor and the sink nodes to describe data evolution such that sensor nodes require transmitting only their readings that deviate significantly from actual values. This has a significant impact on reducing the data load in IoT data discovery scenarios. Finally, a solution for quality and energy-aware resource discovery and accessing IoT resources has been proposed. The solution effectively achieves a communication reduction while retaining a high prediction accuracy (i.e. only a deviation of ±1.0 degree between actual and predicted sensor readings). Furthermore, an energy cost model has been discussed to demonstrate how the proposed approach reduces energy consumption significantly and effectively prolongs the network lifetime.
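A minimal Python sketch of the dual-prediction idea behind data reduction schemes like AM-DR, assuming identical predictive models at sensor and sink. For simplicity this substitutes the simplest possible shared model (last transmitted value) for the thesis's convex combination of two decoupled LMS filters; the threshold and readings are hypothetical:

```python
def sensor_side(readings, threshold=0.5):
    """Yield only the (time, value) pairs the sensor must transmit.

    The sink holds an identical copy of `last` and applies the same rule,
    so both sides stay synchronized without extra messages: whenever no
    message arrives, the sink reconstructs the reading as `last`.
    """
    last = None
    for t, r in enumerate(readings):
        if last is None or abs(r - last) > threshold:
            yield t, r   # transmit; sink sets its copy of `last` to r as well
            last = r

readings = [20.1, 20.2, 20.1, 25.0, 25.1, 25.2, 25.1]
sent = list(sensor_side(readings))
print(f"transmitted {len(sent)} of {len(readings)}: {sent}")
# transmitted 2 of 7: [(0, 20.1), (3, 25.0)]
```

A better shared predictor (such as the adaptive filters described above) shrinks the deviations further and therefore the number of transmissions, at the cost of slightly more computation on both sides.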
12

Paten, Benedict John. "Large-scale multiple alignment and transcriptionally-associated pattern discovery in vertebrate genomes." Thesis, University of Cambridge, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.612811.

13

Dooley, James. "An information centric architecture for large scale description and discovery of ubiquitous computing objects." Thesis, University of Essex, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.537928.

14

Li, Hsin-Fang. "DATA MINING AND PATTERN DISCOVERY USING EXPLORATORY AND VISUALIZATION METHODS FOR LARGE MULTIDIMENSIONAL DATASETS." UKnowledge, 2013. http://uknowledge.uky.edu/epb_etds/4.

Abstract:
Oral health problems have been a major public health concern, profoundly affecting people's general health and quality of life. Given that oral health data are composed of several measurable dimensions, including clinical measurements, socio-behavioral factors, genetic predispositions, self-reported assessments, and quality of life measures, strategies for analyzing multidimensional data are neither computationally straightforward nor efficient. Researchers face major challenges in identifying tools that circumvent the processes of manually probing the data. The purpose of this dissertation is to provide applications of the proposed methodology on oral health-related data that go beyond identifying risk factors from a single dimension, and to describe large-scale datasets in a natural, intuitive manner. The three specific applications focus on the utilization of 1) classification and regression trees (CART) to understand the multidimensional factors associated with untreated decay in childhood, 2) network analyses and network plots to describe the connectedness of concurrent co-morbid conditions for pediatric patients with autism receiving dental treatments under general anesthesia, and 3) random forests, in addition to conventional adjusted main effects analyses, to identify potential environmental risk factors and interactive effects for periodontitis. Compared to findings from the previous literature, the use of these innovative applications demonstrates overlapping findings as well as novel discoveries in oral health knowledge. The results of this research not only illustrate that these data mining techniques can be used to improve the delivery of information into knowledge, but also provide new avenues for future decision making and planning for oral health-care management.
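For readers unfamiliar with CART, a short scikit-learn sketch of fitting and printing a shallow classification tree on synthetic stand-in data; the predictors and outcome are hypothetical, not the dissertation's oral health variables:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Hypothetical predictors: [age, sugary_snacks_per_day, brushings_per_day]
X = rng.uniform([2, 0, 0], [12, 6, 3], size=(200, 3))
y = ((X[:, 1] > 3) & (X[:, 2] < 1)).astype(int)  # stand-in "untreated decay"

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "snacks", "brushing"]))
```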
15

Nadella, Pravallika. "Discovery of Outlier Points and Dense Regions in Large Data-Sets Using Spark Environment." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627665840826411.

16

Zhu, Cheng. "Efficient network based approaches for pattern recognition and knowledge discovery from large and heterogeneous datasets." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1378215769.

17

Schott, Benjamin [Verfasser], and R. [Akademischer Betreuer] Mikut. "Interactive and Quantitative Knowledge-Discovery in Large-Scale 3D Tracking Data / Benjamin Schott ; Betreuer: R. Mikut." Karlsruhe : KIT-Bibliothek, 2018. http://d-nb.info/1174252219/34.

18

Gomez, Kayeromi Donoukounmahou. "A Comparison of False Discovery Rate Method and Dunnett's Test for a Large Number of Treatments." Diss., North Dakota State University, 2015. http://hdl.handle.net/10365/24842.

Abstract:
It has become quite common nowadays to perform multiple tests simultaneously in order to detect differences of a certain trait among groups. This often leads to an inflated probability of at least one Type I Error, a rejection of a null hypothesis when it is in fact true. This inflation generally leads to a loss of power of the test, especially in multiple testing and multiple comparisons. The aim of the research is to use simulation to address what a researcher should do to determine which treatments are significantly different from the control when there is a large number of treatments and the number of replicates in each treatment is small. We examine two situations in this simulation study: when the number of replicates per treatment is 3 and when it is 5; in each of these situations, we simulated from a normal distribution and from a mixture of normal distributions. The total number of simulated treatments was progressively increased from 50 to 100, then 150, and finally 300. The goal is to measure the change in the performances of the False Discovery Rate method and Dunnett's test in terms of type I error and power as the total number of treatments increases. We reported two ways of examining type I error and power: first, we look at the performances of the two tests in relation to all other comparisons in our simulation study, and secondly per simulated sample. In the first assessment, the False Discovery Rate method appears to have a higher power while keeping its type I error in the same neighborhood as Dunnett's test; in the latter, both tests have similar powers and the False Discovery Rate method has a higher type I error. Overall, the results show that when the objective of the researcher is to detect as many of the differences as possible, the FDR method is preferred. However, if error is more detrimental to the outcomes of the research, Dunnett's test offers a better alternative.
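For reference, the Benjamini-Hochberg step-up procedure underlying the False Discovery Rate method can be stated in a few lines; a self-contained Python sketch with illustrative p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level q."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest rank k (1-indexed) with p_(k) <= (k / m) * q.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()
        reject[order[: k + 1]] = True   # reject all hypotheses up to rank k
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, q=0.05))  # first two hypotheses rejected
```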
19

Sariyuce, Ahmet Erdem. "Fast Algorithms for Large-Scale Network Analytics." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429825578.

20

Last, Kim William. "Discovery of novel molecular and biochemical predictors of response and outcome in diffuse large B-cell lymphoma." Thesis, Queen Mary, University of London, 2009. http://qmro.qmul.ac.uk/xmlui/handle/123456789/563.

Abstract:
Diffuse large B-cell lymphoma (DLBCL) is the commonest form of non-Hodgkin lymphoma and responds to treatment with a 5-year overall survival (OS) of 40-50%. Predicting outcome using the best available method, the International Prognostic Index (IPI), is inaccurate and unsatisfactory. This thesis describes research undertaken to discover, explore and validate new molecular and biochemical predictors of response and long-term outcome, with the aims of improving on the inaccurate IPI and of suggesting novel therapeutic approaches. Two strategies were adopted: a rational and an empirical approach. The rational strategy used gene expression profiling to identify transcriptional signatures that correlated with outcome to treatment, and from which a 13-gene model accurately predicts long-term OS. Two components of the 13-gene model, PKC and PDE4B, were studied using inhibitors in lymphoma cell-lines and primary cell cultures. PKC inhibition using SC-236 proved to be cytostatic and cytotoxic in the cell-lines examined and to a lesser extent in primary tumours. PDE4 inhibition using piclamilast and rolipram had no effect either alone or in combination with chemotherapy. The empirical approach investigated the trace element selenium in presentation serum and found that it was a biochemical predictor of response and outcome to treatment. In an attempt to provide evidence of a causal relationship as an explanation for the associations between presentation serum selenium, response and outcome, two selenium compounds, methylseleninic acid (MSA) and selenodiglutathione (SDG), were studied in vitro in the same lymphoma cell-lines and primary cell cultures. Both MSA and SDG exhibited cytostatic and cytotoxic activity and caspase-8 and caspase-9 driven apoptosis. For SDG, reactive oxygen species generation was important for its activity in three of the four cell-lines. In conclusion, molecular and biochemical predictors of response and survival were discovered in DLBCL that led to viable targets for drug intervention being validated in vitro.
21

Schiavon, Ricardo P., Olga Zamora, Ricardo Carrera, Sara Lucatello, A. C. Robin, Melissa Ness, Sarah L. Martell, et al. "Chemical tagging with APOGEE: discovery of a large population of N-rich stars in the inner Galaxy." OXFORD UNIV PRESS, 2017. http://hdl.handle.net/10150/623045.

Abstract:
Formation of globular clusters (GCs), the Galactic bulge, or galaxy bulges in general is an important unsolved problem in Galactic astronomy. Homogeneous infrared observations of large samples of stars belonging to GCs and the Galactic bulge field are one of the best ways to study these problems. We report the discovery by APOGEE (Apache Point Observatory Galactic Evolution Experiment) of a population of field stars in the inner Galaxy with abundances of N, C, and Al that are typically found in GC stars. The newly discovered stars have high [N/Fe], which is correlated with [Al/Fe] and anticorrelated with [C/Fe]. They are homogeneously distributed across, and kinematically indistinguishable from, other field stars within the same volume. Their metallicity distribution is seemingly unimodal, peaking at [Fe/H] ∼ −1, thus being in disagreement with that of the Galactic GC system. Our results can be understood in terms of different scenarios. N-rich stars could be former members of dissolved GCs, in which case the mass in destroyed GCs exceeds that of the surviving GC system by a factor of ∼8. In that scenario, the total mass contained in so-called 'first-generation' stars cannot be larger than that in 'second-generation' stars by more than a factor of ∼9 and was certainly smaller. Conversely, our results may imply the absence of a mandatory genetic link between 'second-generation' stars and GCs. Last, but not least, N-rich stars could be the oldest stars in the Galaxy, the by-products of chemical enrichment by the first stellar generations formed in the heart of the Galaxy.
22

Bădescu, Toma, Yujin Yang, Frank Bertoldi, Ann Zabludoff, Alexander Karim, and Benjamin Magnelli. "Discovery of a Protocluster Associated with a Lyα Blob Pair at z = 2.3." IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/625775.

Abstract:
Bright Lyα blobs (LABs), extended nebulae with sizes of ∼100 kpc and Lyα luminosities of ∼10^44 erg s^−1, often reside in overdensities of compact Lyα emitters (LAEs) that may be galaxy protoclusters. The number density, variance, and internal kinematics of LABs suggest that they themselves trace group-like halos. Here, we test this hierarchical picture, presenting deep, wide-field Lyα narrowband imaging of a 1° × 0°.5 region around a LAB pair at z = 2.3 discovered previously by a blind survey. We find 183 Lyα emitters, including the original LAB pair and three new LABs with Lyα luminosities of (0.9–1.3) × 10^43 erg s^−1 and isophotal areas of 16–24 arcsec². Using the LAEs as tracers and a new kernel density estimation method, we discover a large-scale overdensity (Boötes J1430+3522) with a surface density contrast of δΣ = 2.7, a volume density contrast of δ ∼ 10.4, and a projected diameter of ≈20 comoving Mpc. Comparing with cosmological simulations, we conclude that this LAE overdensity will evolve into a present-day Coma-like cluster with log(M/M☉) ∼ 15.1 ± 0.2. In this and three other wide-field LAE surveys re-analyzed here, the extents and peak amplitudes of the largest LAE overdensities are similar, not increasing with survey size, implying that they were indeed the largest structures then and today evolve into rich clusters. Intriguingly, LABs favor the outskirts of the densest LAE concentrations, i.e., intermediate LAE overdensities of δΣ = 1–2. We speculate that these LABs mark infalling protogroups being accreted by the more massive protocluster.
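A minimal Python sketch of the general technique of estimating a surface density contrast from tracer positions with a Gaussian kernel density estimate (synthetic positions; the paper's kernel density estimation method is more elaborate):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
# Hypothetical tracer sky positions (degrees), with an injected overdensity.
field = rng.uniform(0, 1, size=(2, 150))
clump = rng.normal(loc=[[0.6], [0.4]], scale=0.03, size=(2, 30))
positions = np.hstack([field, clump])

kde = gaussian_kde(positions)                    # Gaussian kernel estimate
grid = np.mgrid[0:1:100j, 0:1:100j].reshape(2, -1)
sigma = kde(grid)
# Surface density contrast relative to the mean over the mapped area.
delta_sigma = sigma / sigma.mean() - 1
print(f"peak delta_Sigma = {delta_sigma.max():.1f}")
```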
23

Savulionienė, Loreta. "Association rules search in large data bases." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2014. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2014~D_20140519_102242-45613.

Abstract:
The impact of information technology is an integral part of modern life. Any activity is related to information and data accumulation and storage; therefore, quick analysis of information is necessary. Today, traditional data processing and data reports are no longer sufficient. The need to generate new information and knowledge from given data is understandable; therefore, new facts and knowledge, which allow us to forecast customer behaviour or financial transactions, diagnose diseases, etc., can be generated by applying data mining techniques. The doctoral dissertation analyses modern data mining algorithms for estimating frequent sub-sequences and association rules. The dissertation proposes a new stochastic algorithm for mining frequent sub-sequences, its modifications SDPA1 and SDPA2, and a stochastic algorithm for the discovery of association rules, and presents an evaluation of the algorithms' errors. These algorithms are approximate, but allow us to combine two important criteria: time and accuracy. The algorithms have been tested using real and simulated databases.
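A toy Python sketch of the time-versus-accuracy trade-off described above: estimating the support of a subsequence from a random sample of the database instead of a full scan. This shows only the general sampling idea, not the dissertation's SDPA algorithms, and the data are illustrative:

```python
import random

def estimate_support(sequences, pattern, sample_frac=0.2, seed=0):
    """Estimate the fraction of sequences containing `pattern` as a
    subsequence, scanning only a random sample of the database."""
    random.seed(seed)
    k = max(1, int(sample_frac * len(sequences)))
    sample = random.sample(sequences, k)

    def contains(seq, pat):
        it = iter(seq)                       # items of pat must appear in
        return all(item in it for item in pat)  # order, not necessarily adjacent

    hits = sum(contains(s, pattern) for s in sample)
    return hits / len(sample)

db = [list("abcab"), list("acb"), list("bbca"), list("abc")] * 250
print(estimate_support(db, list("ab")))  # close to the true support of 0.75
```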
24

Weyand, Tobias [Verfasser], Bastian [Akademischer Betreuer] Leibe, and Ondrej [Akademischer Betreuer] Chum. "Visual discovery of landmarks and their details in large-scale image collections / Tobias Weyand ; Bastian Leibe, Ondrej Chum." Aachen : Universitätsbibliothek der RWTH Aachen, 2016. http://d-nb.info/1130792749/34.

25

Jovanovic, Mihajlo A. "Modeling Large-scale Peer-to-Peer Networks and a Case Study of Gnutella." University of Cincinnati / OhioLINK, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=ucin989967592.

26

Jimenez, Raul. "Distributed Peer Discovery in Large-Scale P2P Streaming Systems : Addressing Practical Problems of P2P Deployments on the Open Internet." Doctoral thesis, KTH, Network Systems Laboratory (NS Lab), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-134608.

Abstract:
Peer-to-peer (P2P) techniques allow users with limited resources to distribute content to a potentially large audience by turning passive clients into peers. Peers can self-organize to distribute content to each other, increasing the scalability of the system and decreasing the publisher's costs, compared to a publisher distributing the data himself using a content delivery network (CDN) or his own servers. Peer discovery is the mechanism that peers use to find each other. Peer discovery is a critical component of any P2P-based system, because P2P networks are dynamic by nature. That is, peers constantly join and leave the network and each individual peer is assumed to be unreliable. This thesis addresses practical issues in distributed peer discovery mechanisms in the context of three different large-scale P2P streaming systems: (1) a BitTorrent-based streaming system, (2) Spotify, and (3) our own mobile P2P streaming system based on the upcoming Peer-to-peer Streaming Protocol (PPSP) Internet standard. We dramatically improve peer discovery performance in BitTorrent's Mainline DHT, the largest distributed hash table (DHT) overlay on the open Internet. Our implementation's median lookup latency is an order of magnitude lower than the best performing measurement reported in the literature and does not exhibit a long tail of high-latency lookups, which is critical for P2P streaming applications. We have achieved these results by studying how connectivity artifacts on the underlying network, probably caused by network address translation (NAT) gateways, affect the DHT overlay. Our measurements of more than three million nodes reveal that connectivity artifacts are widespread and can severely degrade DHT performance. This thesis also addresses the practical issues of integrating mobile devices into P2P streaming systems. In particular, we enable P2P on Spotify's Android app, study how distributed peer discovery affects energy consumption, and implement and evaluate backwards-compatible modifications which dramatically reduce energy consumption on 3G. Then, we build the first complete system that not only is capable of streaming content to mobile devices but also allows them to publish content directly into the P2P system, even when they are behind a NAT gateway, with minimal impact on their battery and data usage. While our preferred approach is implementing backwards-compatible modifications, we also propose and analyze backwards-incompatible ones. The former allow us to evaluate them in the existing large-scale systems and allow developers to deploy our modifications into the actual system. The latter free us to propose deeper changes. In particular, we propose (1) a DHT-based peer discovery mechanism that improves scalability and introduces locality awareness, and (2) modifications of Spotify's gossip-like peer discovery to better accommodate mobile devices.

27

Gavrić, Katarina. "Mining large amounts of mobile object data." PhD thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=105036&source=NDLTD&language=en.

Abstract:
Within this thesis, we examined the possibilities of using an increasing amount of publicly available metadata about locations and people's activities in order to gain new knowledge and develop new models of the behavior and movement of people. The purpose of the research conducted for this thesis was to solve practical problems, such as: analyzing attractive tourist sites, defining the most frequent routes people are taking, defining the main modes of transportation, and discovering behavioral patterns in terms of defining strategies to suppress the expansion of virus infections. In this thesis, a practical study was carried out on the basis of protected (aggregated and anonymized) CDR (Call Detail Records) data and metadata of geo-referenced multimedia content.
28

Agamah, Francis Edem. "Large-scale data-driven network analysis of human–Plasmodium falciparum interactome: extracting essential targets and processes for malaria drug discovery." Master's thesis, Faculty of Health Sciences, 2020. http://hdl.handle.net/11427/32185.

Abstract:
Background: Plasmodium falciparum malaria is an infectious disease considered to have great impact on public health due to its associated high mortality rates, especially in sub-Saharan Africa. Falciparum drug-resistant strains, notably to chloroquine and sulfadoxine-pyrimethamine in Africa, are traced mainly to Southeast Asia, where the artemisinin resistance rate is increasing. Although careful surveillance to monitor the emergence and spread of artemisinin-resistant parasite strains in Africa is on-going, research into new drugs, particularly for African populations, is critical since there is no replacement yet for artemisinin combination therapies (ACTs). Objective: The overall objective of this study is to identify potential protein targets through host–pathogen protein–protein functional interaction network analysis, to understand the underlying mechanisms of drug failure, and to identify those essential targets that can play a role in predicting potential drug candidates specific to the African populations through a protein-based approach to both host and Plasmodium falciparum genomic analysis. Methods: We leveraged malaria-specific genome-wide association study summary statistics data obtained from Gambia, Kenya and Malawi populations, Plasmodium falciparum selective pressure variants, and functional datasets (protein sequences, interologs, host-pathogen intra-organism and host-pathogen inter-organism protein-protein interactions (PPIs)) from various sources (STRING, Reactome, HPID, Uniprot, IntAct and literature) to construct overlapping functional networks for both host and pathogen. Developed algorithms and a large-scale data-driven computational framework were used in this study to analyze the datasets and the constructed networks to identify densely connected subnetworks or hubs essential for network stability and integrity. The host-pathogen network was analyzed to elucidate the influence of parasite candidate key proteins within the network and predict possible resistance pathways due to host-pathogen candidate key protein interactions. We performed biological and pathway enrichment analysis on the critical proteins identified to elucidate their functions. In order to leverage disease-target-drug relationships to identify potential repurposable, already approved drug candidates that could be used to treat malaria, pharmaceutical datasets from DrugBank were explored using a semantic similarity approach based on target-associated biological processes. Results: About 600,000 significant SNPs (p-value < 0.05) from the summary statistics data were mapped to their associated genes, and we identified 79 human-associated malaria genes. The assembled parasite network comprised 8 clusters containing 799 functional interactions between 155 reviewed proteins, of which 5 clusters contained 43 key proteins (selective variants) and 2 clusters contained 2 candidate key proteins (key proteins characterized by a high centrality measure), C6KTB7 and C6KTD2. The human network comprised 32 clusters containing 4,133,136 interactions between 20,329 unique reviewed proteins, of which 7 clusters contained 760 key proteins and 2 clusters contained 6 significant human malaria-associated candidate key proteins or genes: P22301 (IL10), P05362 (ICAM1), P01375 (TNF), P30480 (HLA-B), P16284 (PECAM1), O00206 (TLR4). The generated host-pathogen network comprised 31,512 functional interactions between 8,023 host and pathogen proteins. We also explored the association of the pfk13 gene within the host-pathogen network.
We observed that pfk13 clusters with host kelch-like proteins and other regulatory genes but has no direct association with our identified host candidate key malaria targets. We implemented a semantic similarity based approach, complemented by Kappa and Jaccard statistical measures, to identify 115 malaria-similar diseases and 26 potential repurposable drug hits that can be appropriated experimentally for malaria treatment. Conclusion: In this study, we reviewed existing antimalarial drugs and resistance-associated variants contributing to the diminished sensitivity of antimalarials, especially chloroquine, sulfadoxine-pyrimethamine and artemisinin combination therapy, within the African population. We also described various computational techniques implemented in predicting drug targets and leads in drug research. In our data analysis, we showed that possible mechanisms of resistance to artemisinin in Africa may arise from the combinatorial effects of many genes resistant to chloroquine and sulfadoxine-pyrimethamine. We investigated the role of pfk13 within the host-pathogen network. We predicted key targets that have been proposed to be essential for malaria drug and vaccine development through structural and functional analysis of host and pathogen function networks. Based on our analysis, we propose these targets as essential co-targets for combinatorial malaria drug discovery.
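For illustration, the Jaccard statistical measure mentioned above reduces to a one-line set computation; a minimal Python sketch with arbitrary example GO terms:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of GO term annotations."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical GO biological-process annotations for two targets.
malaria_target = {"GO:0006954", "GO:0045087", "GO:0006915"}
candidate_target = {"GO:0006954", "GO:0006915", "GO:0008152"}
print(f"Jaccard = {jaccard(malaria_target, candidate_target):.2f}")  # 0.50
```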
29

Shabara, Yahia. "Establishing Large-Scale MIMO Communication: Coding for Channel Estimation." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618578732285999.

30

Raciti, Daniela. "A large-scale gene discovery screen identifies over hundred solute carrier (SLC) genes with organ specific expression patterns in the Xenopus embryo /." Zürich : ETH, 2007. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17204.

31

Bergeås, Kuutmann Elin. "Calibration of the ATLAS calorimeters and discovery potential for massive top quark resonances at the LHC." Doctoral thesis, Stockholms universitet, Fysikum, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-32854.

Abstract:
ATLAS is a multi-purpose detector which has recently started to take data at the LHC at CERN. This thesis describes the tests and calibrations of the central calorimeters of ATLAS and outlines a search for heavy top quark pair resonances. The calorimeter tests were performed before the ATLAS detector was assembled at the LHC, in such a way that particle beams of known energy were targeted at the calorimeter modules. In one of the studies presented here, modules of the hadronic barrel calorimeter, TileCal, were exposed to beams of pions of energies between 3 and 9 GeV. It is shown that muons from pion decays in the beam can be separated from the pions, and that the simulation of the detector correctly describes the muon behaviour. In the second calorimeter study, a scheme for local hadronic calibration is developed and applied to single pion test beam data in a wide range of energies, measured by the combination of the electromagnetic barrel calorimeter and the TileCal hadronic calorimeter. The calibration method is shown to provide calorimeter linearity within 3%, and also to give reasonable agreement between simulations and data. The physics analysis of this thesis is the proposed search for heavy top quark resonances, and it is shown that a narrow uncoloured top pair resonance, a Z′, could be excluded (or discovered) at 95% CL for cross sections of 4.0 ± 1.6 pb (in the case of M = 1.0 TeV/c²) or 2.0 ± 0.3 pb (M = 2.0 TeV/c²), including systematic uncertainties in the model, assuming √s = 10 TeV and an integrated luminosity of 200 pb⁻¹. It is also shown that an important systematic uncertainty is the jet energy scale, which further underlines the importance of hadronic calibration.
32

Rodrigues, Preston. "Interoperabilité à large échelle dans le contexte de l'Internet du future." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2013. http://tel.archives-ouvertes.fr/tel-00920457.

Abstract:
The growth of the Internet as a large-scale platform for delivering multimedia content has been a great success story of the 21st century. However, multimedia applications, with the specific characteristics of their traffic and the requirements of new services, pose an interesting challenge in terms of discovery, mobility and management. Moreover, the recent momentum of the Internet of Things has made it necessary to revitalize research on integrating heterogeneous information sources across diverse networks. To this end, the contributions of this thesis seek a balance between heterogeneity and interoperability, in order to discover and integrate heterogeneous information sources in the context of the Future Internet. Discovering information sources on different networks requires a thorough understanding of how the information is structured and which specific methods are used to communicate. This process has been regulated by means of discovery protocols. However, these protocols rely on different techniques and are designed with the underlying network infrastructure in mind, which limits their ability to cross the boundary of a given network. To address this problem, the first contribution of this thesis seeks a balanced solution that lets discovery protocols interact with one another while providing the means needed to cross network boundaries. To this end, we propose ZigZag, a middleware for reusing and extending current discovery protocols, designed for local networks, in order to discover services available beyond the local network. Our approach is based on protocol conversion, enabling service discovery regardless of the underlying discovery protocol. However, in large consumer-oriented networks, the volume of discovery messages could render the network unusable. To guard against this, ZigZag uses the concept of aggregation during the discovery process. Through aggregation, ZigZag can integrate multiple responses from different sources supporting different discovery protocols. Furthermore, tailoring the aggregation process to one's needs requires a thorough understanding of ZigZag's fundamentals. To this end, we propose a second contribution: a flexible language to help define policies in a clean and efficient way.
33

Sequeira, Ana Filipa Pereira. "Development of a novel platform for high-throughput gene design and artificial gene synthesis to produce large libraries of recombinant venom peptides for drug discovery." Doctoral thesis, Universidade de Lisboa, Faculdade de Medicina Veterinária, 2016. http://hdl.handle.net/10400.5/12265.

Abstract:
Doctoral thesis in Veterinary Sciences, speciality of Biological and Biomedical Sciences.
Animal venoms are complex mixtures of biologically active molecules that, while presenting low immunogenicity, target with high selectivity and efficacy a variety of membrane receptors. It is believed that animal venoms comprise a natural library of more than 40 million different natural compounds that have been continuously fine-tuned during the evolutionary process to disturb cellular function. Within animal venoms, reticulated peptides are the most attractive class of molecules for drug discovery. However, the use of animal venoms to develop novel pharmacological compounds is still hampered by difficulties in obtaining these low molecular mass cysteine-rich polypeptides in sufficient amounts. Here, a high-throughput gene synthesis platform was developed to produce synthetic genes encoding venom peptides. The final goal of this project is the production of large libraries of recombinant venom peptides that can be screened for drug discovery. A robust and efficient Polymerase Chain Reaction (PCR) methodology was refined to assemble overlapping oligonucleotides into small artificial genes (< 500 bp) with high fidelity. In addition, two bioinformatics tools were constructed to design multiple optimized genes (ATGenium) and overlapping oligonucleotides (NZYOligo designer), in order to allow automation of the high-throughput gene synthesis platform. The platform can assemble 96 synthetic genes encoding venom peptides simultaneously, with an error rate of 1.1 mutations per kb. To decrease the error rate associated with artificial gene synthesis, an error removal step using phage T7 endonuclease I was designed and integrated into the gene synthesis methodology. T7 endonuclease I was shown to be highly effective at specifically recognizing and cleaving DNA mismatches, allowing a dramatic reduction of the error frequency in large synthetic genes, from 3.45 to 0.43 errors per kb. Combining the knowledge acquired in the initial stages of the work, a comprehensive study was performed to investigate the influence of gene design, presence of fusion tags, cellular localization of expression, and usage of Tobacco Etch Virus (TEV) protease for tag removal, on the recombinant expression of disulfide-rich venom peptides in Escherichia coli. Codon usage dramatically affected the levels of recombinant expression in E. coli. In addition, a significant pressure in the usage of the two cysteine codons suggests that both need to be present at equivalent levels in genes designed de novo to ensure high levels of expression. This study also revealed that DsbC was the best fusion tag for recombinant expression of disulfide-rich peptides, in particular when expression of the fusion peptide was directed to the bacterial periplasm. TEV protease was highly effective for tag removal, and its recognition site can tolerate all residues at its C-terminus, with the exception of proline, confirming that no extra residues need to be incorporated at the N-terminus of recombinant venom peptides. This study revealed that E. coli is a convenient heterologous host for the expression of soluble and potentially functional venom peptides. Thus, this novel high-throughput gene synthesis platform was used to produce ~5,000 synthetic genes with a low error rate. This genetic library supported the production of the largest library of recombinant venom peptides constructed until now. The library contains 2736 animal venom peptides and is presently being screened for the discovery of novel drug leads related to different diseases.
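A toy Python sketch of the oligonucleotide tiling geometry that underlies assembly from overlapping oligonucleotides; real designs also balance melting temperatures and avoid secondary structure, and the sequence and lengths here are hypothetical:

```python
def split_into_oligos(gene, oligo_len=60, overlap=20):
    """Tile a gene into overlapping sense-strand oligos for assembly PCR.

    Consecutive oligos share `overlap` bases so that they can anneal and
    be extended into the full-length product; only the tiling geometry is
    handled here.
    """
    step = oligo_len - overlap
    return [gene[i:i + oligo_len] for i in range(0, len(gene) - overlap, step)]

gene = "ATG" + "GCTTGCAAAGAT" * 10 + "TAA"   # hypothetical 126 bp insert
for oligo in split_into_oligos(gene):
    print(len(oligo), oligo)
```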
SUMMARY - Development of a new high-throughput platform for the design and synthesis of artificial genes, for the production of recombinant venom peptides - Animal venoms are complex mixtures of biologically active molecules that bind with high selectivity and efficacy to a wide variety of membrane receptors. Although of low immunogenicity, venoms can affect cellular function by acting on these receptors. Animal venoms are currently thought to constitute a natural library of more than 40 million different molecules that have been continuously fine-tuned throughout the evolutionary process. Given the composition of venoms, reticulated peptides are the most attractive class of molecules of pharmacological interest. However, the use of venoms for the development of new drugs is limited by difficulties in obtaining these molecules in amounts adequate for their study. In this work, a high-throughput platform was developed for the synthesis of genes encoding venom peptides, with the aim of producing libraries of recombinant venom peptides that can be screened for drug discovery. To synthesize small genes (< 500 base pairs) with high fidelity and in parallel, a robust and efficient PCR (polymerase chain reaction) methodology based on the extension of overlapping oligonucleotides was developed. To enable automation of the gene synthesis platform, two bioinformatics tools were built to simultaneously design tens to thousands of genes optimized for expression in Escherichia coli (ATGenium) and the corresponding overlapping oligonucleotides (NZYOligo designer). The platform was optimized to synthesize 96 synthetic genes simultaneously, with an error rate of 1.1 mutations per kb of synthesized DNA. To lower the error rate associated with synthetic gene production, an error-removal method using the enzyme T7 endonuclease I was developed. T7 endonuclease I proved very effective at recognizing and cleaving DNA molecules containing mismatches, drastically reducing the frequency of errors identified in large genes, from 3.45 to 0.43 errors per kb of synthesized DNA. The influence of gene design, the presence of fusion tags, the cellular localization of expression, and the activity of the Tobacco Etch Virus (TEV) protease for efficient tag removal on the expression of cysteine-rich venom peptides in E. coli was also investigated. The use of carefully chosen codons dramatically affected expression levels in E. coli. Furthermore, the results show significant pressure on the usage of the two codons encoding cysteine, suggesting that both codons must be present, at equivalent levels, in genes designed and optimized to guarantee high expression levels. This work also indicated that the DsbC fusion tag was the most appropriate for efficient expression of cysteine-rich venom peptides, particularly when the recombinant peptides were expressed in the bacterial periplasm. The TEV protease was confirmed to be effective at removing fusion tags, and its recognition site can contain any amino acid at the C-terminal position, with the exception of proline. It was thus not necessary to incorporate any extra amino acid at the N-terminus of the recombinant venom peptides. Taken together, the results showed that E. coli is a suitable host for the expression, in soluble form, of potentially functional venom peptides. Finally, ~5,000 synthetic genes encoding venom peptides were produced, with a low error rate, using the new high-throughput gene synthesis platform developed here. This new library of synthetic genes was used to produce the largest library of recombinant venom peptides constructed to date, comprising 2,736 venom peptides. This recombinant library is currently being screened with the aim of discovering new drug candidates of interest to human health.
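The codon-design finding above can be made concrete with a small sketch. The following Python fragment is purely illustrative and is not the ATGenium tool described in the abstract: it back-translates a peptide into a naive E. coli coding sequence while alternating the two cysteine codons (TGT/TGC) so that both appear at roughly equivalent levels, mirroring the codon-pressure observation reported. The codon table is a simplified, hypothetical choice of one preferred codon per amino acid.

import itertools

# Hypothetical, simplified codon table: one preferred E. coli codon per
# amino acid (cysteine is handled separately below). A real design tool
# would optimize far more than this.
PREFERRED = {
    "A": "GCG", "R": "CGT", "N": "AAC", "D": "GAT", "Q": "CAG",
    "E": "GAA", "G": "GGC", "H": "CAT", "I": "ATT", "L": "CTG",
    "K": "AAA", "M": "ATG", "F": "TTT", "P": "CCG", "S": "AGC",
    "T": "ACC", "W": "TGG", "Y": "TAT", "V": "GTG",
}

def back_translate(peptide: str) -> str:
    """Naive coding sequence for a peptide, alternating the two
    cysteine codons so both end up at roughly equivalent levels."""
    cys_codons = itertools.cycle(["TGT", "TGC"])
    codons = []
    for aa in peptide.upper():
        codons.append(next(cys_codons) if aa == "C" else PREFERRED[aa])
    return "".join(codons)

# Example with a made-up cysteine-rich toy peptide.
print(back_translate("ACKCWCPCG"))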
APA, Harvard, Vancouver, ISO, and other styles
34

Rais, Issam. "Discover, model and combine energy leverages for large scale energy efficient infrastructures." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEN051/document.

Full text
Abstract:
The energy consumption of our large-scale computing facilities is an increasingly worrying issue, all the more so as we move towards "Exascale", a machine performing 10^18 floating-point operations per second, ten times more than the best current public machines. In 2017, data centers consumed 7% of the global electricity demand and were responsible for 2% of global CO2 emissions. With the current multiplication of connected devices per person, reducing the energy consumption of large-scale data centers and supercomputers is a crucial challenge for building a sustainable digital society. It is therefore urgent to treat energy consumption as a central concern of these centers. Many techniques, here called "leverages", have been developed to reduce the electrical consumption of computing centers at different levels: infrastructure, hardware, middleware and application. Using these leverages well is therefore essential for approaching energy efficiency. A large number of leverages are available in these computing centers; despite their potential gains, it can be difficult to use them well, and harder still to combine several of them while remaining energy efficient. In this thesis, we address the discovery, understanding and smart usage of the leverages available at large scale in these computing centers. We studied leverages independently, then combined them with other leverages in order to propose a generic and dynamic solution for the combined usage of leverages
Energy consumption is a growing concern on the verge of Exascale computing: a machine reaching 10^18 operations per second, ten times the performance of the best current public supercomputers. Data centers consume about 7% of the total electricity demand and are responsible for 2% of global carbon emissions. With the multiplication of connected devices per person around the world, reducing the energy consumption of large-scale computing systems is a mandatory step towards building a sustainable digital society. Several techniques, which we call leverages, have been developed to lower the electrical consumption of computing facilities; to face this growing concern, solutions have been developed at multiple levels: infrastructure, hardware, middleware, and application. It is urgent to embrace energy efficiency as a major concern of our modern computing facilities. Using these leverages is mandatory for better energy efficiency, and many of them are available in large-scale computing centers. In spite of their potential gains, users and administrators often use them poorly or not at all; used unwisely, these techniques, alone or combined, can be complicated to apply and even counterproductive. This thesis defines and investigates the discovery, understanding and smart usage of the leverages available on a large-scale data center or supercomputer. We focus on various single leverages and understand them; we then combine them with other leverages and propose a generic solution for the dynamic usage of combined leverages
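As a toy illustration of the combined-leverage problem described above (and in no way the thesis's actual solution), the sketch below enumerates on/off combinations of three hypothetical leverages with invented energy and time factors, and picks the most energy-efficient combination that still meets a deadline. It assumes leverages compose independently and multiplicatively, which is exactly the kind of simplification a real system would have to validate with measurements.

from itertools import product

# Hypothetical leverages with invented (energy_factor, time_factor)
# pairs: each factor multiplies the baseline energy or execution time
# when the leverage is switched on. Real values would be measured.
LEVERAGES = {
    "dvfs_low_freq":  (0.70, 1.30),
    "shutdown_idle":  (0.85, 1.00),
    "green_protocol": (0.90, 1.10),
}

BASE_ENERGY, BASE_TIME, DEADLINE = 100.0, 10.0, 14.0

def best_combination():
    """Exhaustively pick the leverage subset minimising energy
    while keeping the (estimated) runtime under the deadline."""
    best = (BASE_ENERGY, BASE_TIME, frozenset())
    for mask in product([False, True], repeat=len(LEVERAGES)):
        energy, time, chosen = BASE_ENERGY, BASE_TIME, set()
        for (name, (ef, tf)), on in zip(LEVERAGES.items(), mask):
            if on:
                energy, time = energy * ef, time * tf
                chosen.add(name)
        if time <= DEADLINE and energy < best[0]:
            best = (energy, time, frozenset(chosen))
    return best

print(best_combination())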
APA, Harvard, Vancouver, ISO, and other styles
35

Georgi, Victoria [Verfasser], Stefan [Akademischer Betreuer] Knapp, Michael [Akademischer Betreuer] Brands, Stefan [Gutachter] Knapp, Eugen [Gutachter] Proschak, Dieter [Gutachter] Steinhilber, and Martin [Gutachter] Grinninger. "Large-scale analysis of kinase inhibitors' target binding kinetics : implications for drug discovery? / Victoria Georgi ; Gutachter: Stefan Knapp, Eugen Proschak, Dieter Steinhilber, Martin Grinninger ; Stefan Knapp, Michael Brands." Frankfurt am Main : Universitätsbibliothek Johann Christian Senckenberg, 2018. http://d-nb.info/120371310X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Scurlock, Bobby Joe. "Compact muon solenoid discovery potential for the minimal supergravity model of supersymmetry in single muon events with jets and large missing transverse energy in proton-proton collisions at center-of-mass energy 14 TeV." [Gainesville, Fla.] : University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0015695.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Valverde, Quispe Janeth Veronica. "New insights on the nature of blazars from a decade of multi-wavelength observations : Discovery of a very large shift of the synchrotron peak frequency, long-term optical-gamma-ray flux correlations, and rising flux trend in the BL Lac 1ES 1215+303." Thesis, Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAX013.

Full text
Abstract:
Blazars are known for their variability over a wide range of timescales at all wavelengths, and their classification (into flat spectrum radio quasars and low-, intermediate- or high-frequency-peaked BL Lacs; FSRQ, LBL, IBL, HBL) is based on broadband spectral characteristics that do not consider that the source may be in different states of activity. Recently, it has been proposed to classify blazars according to the kinematics of their radio features. Most studies of TeV gamma-ray blazars focus on short timescales, in particular during flares, owing to the scarcity of observing campaigns or to the relatively recent existence of sufficiently sensitive specialized detectors. With a decade of observations from the Fermi-LAT and VERITAS, I present an in-depth study of the long-term multi-wavelength variability of the blazar 1ES 1215+303, from gamma rays to radio. This unprecedented data set reveals multiple strong gamma-ray flares and a long-term increase in the gamma-ray and optical flux baseline of the source over a ten-year period, which translates into a linear correlation between these two energy bands over a decade. Typical HBL behaviors are identified in the radio morphology of the source. However, analyses of the broadband spectral energy distribution at different flux states of the source reveal an extreme shift of the synchrotron peak frequency from the IR to soft X-rays, indicating that the source exhibits IBL characteristics during quiescent states and HBL behavior during flaring states. A two-component synchrotron self-Compton model is used to describe this spectacular change. A detailed framework for the analysis of Fermi-LAT data is provided and could serve as a guide for researchers interested in this field. I present the extensive efforts devoted to validating the methods used and the sanity checks performed on the results. A description of the higher-level analyses is provided, such as flare selection and the search for harder-when-brighter behavior in the Fermi-LAT data, the multi-wavelength cross-correlation and variability analysis, the search for trends, log-normality and variability, the characterization of flares and of the spectral energy distributions, and the search for simultaneous Fermi-LAT and VERITAS observations. These are the heart of this PhD work. The different methods applied and presented in this work provide a complete and detailed panorama of the complex nature of this blazar and may even challenge our current classification scheme. Moreover, this work illustrates the type of long-term analyses that future imaging atmospheric instruments, such as the Cherenkov Telescope Array, will not only allow but may even improve
Blazars are known for their variability on a wide range of timescales at all wavelengths; and their classification (into flat spectrum radio quasars and low-, intermediate- or high-frequency-peaked BL Lacs; FSRQ, LBL, IBL, HBL) is based on broadband spectral characteristics that do not consider that the source may be in different states of activity. Recently, it was proposed that blazars could be classified according to the kinematics of their radio features. Most studies of TeV gamma-ray blazars focus on short timescales, especially during flares, due to the scarcity of observational campaigns or to the relatively recent advent of sufficiently sensitive specialized detectors. With a decade of observations from the Fermi-LAT and VERITAS, I present an extensive study of the long-term multi-wavelength variability of the blazar 1ES 1215+303 from gamma-rays to radio. This unprecedented data set reveals multiple strong gamma-ray flares and a long-term increase in the gamma-ray and optical flux baseline of the source over the ten-year period, which results in a linear correlation between these two energy bands over a decade. Typical HBL behaviors are identified in the radio morphology of the source. However, analyses of the broadband spectral energy distribution at different flux states of the source unveil an extreme shift of the synchrotron peak frequency from the IR to soft X-rays, indicating that the source exhibits IBL characteristics during quiescent states and HBL behavior during high states. A two-component synchrotron self-Compton model is used to describe this dramatic change. A detailed framework for the analysis of data from the Fermi-LAT instrument is provided, and could serve as a guideline for researchers interested in this field. I present the thorough efforts that were employed in validating the methods used and the sanity checks that were performed on the results obtained. A description of the higher-level analyses is provided, including the flare-selection algorithms, the search for harder-when-brighter behavior in the Fermi-LAT data, the multi-wavelength cross-correlation and variability analysis, the search for trends, log-normality and variability, the characterization of flares and of the spectral energy distributions, and the search for simultaneous Fermi-LAT and VERITAS observations. These are the heart of this PhD work. The different methods applied and presented in this work provide a complete and detailed panorama of the intricate nature of this blazar, and possibly even challenge our current classification scheme. Moreover, this work provides an illustration of the type of long-term analyses that future imaging atmospheric instruments, such as the Cherenkov Telescope Array, will not only allow but potentially improve.
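The long-term optical/gamma-ray correlation mentioned in this abstract can be illustrated with a minimal sketch. The Python fragment below is not the author's analysis pipeline: it builds two synthetic, contemporaneously binned light curves that share a rising baseline (loosely echoing the rising flux trend described above) and checks their linear correlation with a Pearson test. All array values and bin counts are invented.

import numpy as np
from scipy.stats import pearsonr

# Made-up, binned long-term light curves (same time bins in both bands);
# the real analysis uses a decade of Fermi-LAT and optical monitoring data.
rng = np.random.default_rng(0)
baseline = np.linspace(1.0, 2.0, 120)               # rising flux baseline
gamma = baseline + rng.normal(0, 0.1, 120)          # gamma-ray band
optical = 0.8 * baseline + rng.normal(0, 0.1, 120)  # optical band

# A strong linear correlation here would mirror the decade-long
# optical/gamma-ray correlation reported in the abstract.
r, p_value = pearsonr(gamma, optical)
print(f"Pearson r = {r:.2f}, p = {p_value:.1e}")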
APA, Harvard, Vancouver, ISO, and other styles
38

McCoy, Jan. "Outdoor Discovery." College of Agriculture, University of Arizona (Tucson, AZ), 1989. http://hdl.handle.net/10150/295610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Firriolo, Marco. "Discovery copy." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/8224/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Keck, Andrew G. "Electronic discovery." Thesis, Utica College, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10101099.

Full text
Abstract:

Cyber incidents continue to increase across the globe. The rise in security threats requires organizations to continually rethink strategies and policies, fortifying against known and unknown threats. Cyber incident policies and response plans range from non-existent to hundreds of pages in length. A policy may include sections discussing roles and responsibilities, incident detection, escalation, and many additional categories, and often discusses the collection and preservation of forensic evidence. In many cases, policies briefly address the proper collection of evidence; however, written guidance concerning the potential liabilities, the risks associated with current and future litigation, and the legal consequences of a cyber incident remains sparse. The desired outcome of this paper is to enlighten the reader by identifying the risks and potential pitfalls and outlining steps for policy development pertaining to the handling of electronic evidence, with a cross-examination of the overlapping sectors of forensics, electronic discovery, and cyber security.

APA, Harvard, Vancouver, ISO, and other styles
41

Wendel, Patrick. "The architecture of discovery net : towards grid-based discovery services." Thesis, Imperial College London, 2008. http://hdl.handle.net/10044/1/7708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hildebrandt, Leonore S. "A Small Discovery." Fogler Library, University of Maine, 2004. http://www.library.umaine.edu/theses/pdf/HildebrandtLS2004.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Cheng, Peter C.-H. "Modelling scientific discovery." Thesis, Open University, 1990. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.256257.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Eriksson, Gustav, and Martin Kevin Garcia. "Discovery of Neptune." Thesis, KTH, Skolan för teknikvetenskap (SCI), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-230700.

Full text
Abstract:
This project is an analysis of how a planet can be found in space with the aid of mathematics. It is based on the fact that in the 19th century two mathematicians, John C. Adams and Urbain Le Verrier, independently of each other found Neptune, the 8th planet in the solar system, by calculating its location from discrepancies between the theoretical and observed longitudes of Uranus. We recreate Adams' problem and solve it with numerical analysis to see how one could improve this method of finding a planet using mathematics. We created a model of the solar system using Runge-Kutta 4 (RK4) to solve the ODEs describing how the planets affect each other. We then created an inverse problem in which we pretended that Neptune did not exist and tried to recover its position and orbital data using the Gauss-Newton algorithm. Our method gives a better result than Adams', although we use a better starting guess for the position of Neptune than he did. The important parameter to find is the direction in which to look for the planet, also called the longitude angle. Both Adams and we get close to the correct longitude: Adams' estimate was 2.5° off, while ours is within 1°. This is especially interesting since, without getting this parameter correct, the planet would never have been found at that time.
This project is an analysis of how a planet can be found in space with the help of mathematics. It is based on two mathematicians, John C. Adams and Urbain Le Verrier, who in the 19th century, independently of each other, found Neptune, the eighth planet in the solar system, by approximating its position from discrepancies between theoretical and observed longitudes. We recreate Adams' problem and solve it with numerical analysis to see how the method of finding planets through mathematics can be improved. We built a model of the solar system with Runge-Kutta 4 (RK4) to solve the ODEs that describe how the planets affect each other. We then set up an inverse problem in which we pretend that Neptune does not exist and try to find its position with the Gauss-Newton algorithm. Our method gives a better result than Adams', which is due to our using a better starting guess for Neptune's position. The important parameter to find is the angle at which to look for the planet, also called the longitude angle. Both Adams and we come close to the true value: Adams is 2.5° off and we are within 1°. This is especially interesting because, without this parameter, they would never have found the planet.
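For readers curious about the numerical core described in this abstract, here is a minimal sketch of an RK4 integrator for Newtonian gravity. It is not the authors' code: the unit system, the toy two-body Sun/outer-planet setup and all initial values are invented for illustration, and a real reconstruction would integrate all the major planets.

import numpy as np

G = 4 * np.pi**2  # gravitational constant in AU^3 / (yr^2 * solar mass)

def accelerations(pos, masses):
    """Pairwise Newtonian gravitational accelerations."""
    acc = np.zeros_like(pos)
    for i in range(len(masses)):
        for j in range(len(masses)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

def rk4_step(pos, vel, masses, dt):
    """One Runge-Kutta 4 step for the first-order system (r' = v, v' = a(r))."""
    def deriv(p, v):
        return v, accelerations(p, masses)

    k1p, k1v = deriv(pos, vel)
    k2p, k2v = deriv(pos + 0.5 * dt * k1p, vel + 0.5 * dt * k1v)
    k3p, k3v = deriv(pos + 0.5 * dt * k2p, vel + 0.5 * dt * k2v)
    k4p, k4v = deriv(pos + dt * k3p, vel + dt * k3v)
    pos = pos + dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    vel = vel + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return pos, vel

# Toy Sun plus Uranus-like planet in the ecliptic plane (invented values:
# ~19.2 AU circular orbit, ~84 yr period, mass ~4.4e-5 solar masses).
masses = np.array([1.0, 4.4e-5])
pos = np.array([[0.0, 0.0], [19.2, 0.0]])                     # AU
vel = np.array([[0.0, 0.0], [0.0, 2 * np.pi * 19.2 / 84.0]])  # AU/yr

for _ in range(1000):  # integrate 10 years with dt = 0.01 yr
    pos, vel = rk4_step(pos, vel, masses, dt=0.01)
rel = pos[1] - pos[0]
print("heliocentric longitude (deg):", np.degrees(np.arctan2(rel[1], rel[0])))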
APA, Harvard, Vancouver, ISO, and other styles
45

Oliveira, Olga Margarida Fajarda. "Network topology discovery." Doctoral thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/18692.

Full text
Abstract:
PhD in Mathematics
Monitoring and evaluating the performance of a network are essential for detecting and resolving failures in its operation. To carry out this monitoring, it is essential to know the topology of the network, which is often unknown. Many of the techniques used for topology discovery require the cooperation of all network devices, which, due to security issues and policies, is almost impossible to obtain. It is therefore necessary to use techniques that collect, passively and without the cooperation of intermediate devices, information that allows the network topology to be inferred. This can be done using tomography techniques, which rely on end-to-end measurements such as the delays experienced by packets. In this thesis we use integer linear programming methods to solve the problem of inferring a network topology using only end-to-end measurements. We present two compact mixed integer linear programming (MILP) formulations for the problem. Computational results showed that, as the number of end devices grows, the time the two compact MILP formulations need to solve the problem also grows quickly. Consequently, we designed two heuristics based on the Feasibility Pump and Local Branching methods. Since delay measurements carry errors, we developed two robust approaches, one to control the maximum number of deviations and the other to reduce the risk of high cost. We also built a system that measures packet delays between the computers of a network and displays the topology of that network.
Monitoring and evaluating the performance of a network are essential to detect and resolve network failures. To achieve this level of monitoring, it is essential to know the topology of the network, which is often unknown. Many of the techniques used to discover the topology require the cooperation of all network devices, which is almost impossible to obtain due to security and policy issues. It is therefore necessary to use techniques that collect, passively and without the cooperation of intermediate devices, the information needed to infer the network topology. This can be done using tomography techniques, which rely on end-to-end measurements such as packet delays. In this thesis, we used integer linear programming theory and methods to solve the problem of inferring a network topology using only end-to-end measurements. We present two compact mixed integer linear programming (MILP) formulations to solve the problem. Computational results showed that as the number of end devices grows, the time needed by the two compact MILP formulations to solve the problem also grows rapidly. Therefore, we elaborated two heuristics based on the Feasibility Pump and Local Branching methods. Since packet delay measurements carry errors, we developed two robust approaches, one to control the maximum number of deviations and the other to reduce the risk of high cost. We also created a system that measures the packet delays between computers on a network and displays the topology of that network.
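As a deliberately simpler illustration of delay-based topology inference (not the MILP formulations or heuristics developed in the thesis), the sketch below uses a standard tomography idea: destinations whose end-to-end delays co-vary strongly are likely to share a longer path segment, so hierarchically clustering a delay-correlation distance recovers a logical tree. All delay samples are synthetic.

import numpy as np
from scipy.cluster.hierarchy import linkage

# Invented delay samples: rows = probes, columns = destination hosts.
# Hosts A and B share a congested link, so their delays co-vary.
rng = np.random.default_rng(1)
shared_ab = rng.normal(20, 5, 500)
delays = np.column_stack([
    shared_ab + rng.normal(5, 1, 500),   # host A
    shared_ab + rng.normal(7, 1, 500),   # host B
    rng.normal(30, 5, 500),              # host C, independent path
])

# Correlation distance between hosts: strongly co-varying delays
# indicate a longer shared path segment.
corr = np.corrcoef(delays.T)
dist = 1 - corr[np.triu_indices(3, k=1)]  # condensed distance vector

tree = linkage(dist, method="average")
print(tree)  # merge order: A and B join first, then C attaches higher up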
APA, Harvard, Vancouver, ISO, and other styles
46

Taylor, Jonathan Lorin. "Lines of Discovery." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/35461.

Full text
Abstract:
An entry for the World Trade Center Memorial Competition was expanded upon as a study into the nature of design. The project influenced its own evolution and revealed exciting outcomes. The memorial is a reinforced concrete tower with an acrylic water tank at the top. The water tank acts as a prism casting colorful light displays both in the tower and around the site. The tank is also the source for continually falling droplets of water. The drops fall 450 feet through an open chamber to land in a shallow overflowing pool and then wash over a stone sepulcher containing the unidentified remains of victims.
Master of Architecture
APA, Harvard, Vancouver, ISO, and other styles
47

Wuitschik, Georg. "Oxetanes in drug discovery /." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17929.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Murty, Paul. "Discovery processes in designing." Connect to full text, 2006. http://hdl.handle.net/2123/1809.

Full text
Abstract:
PhD
This thesis describes an interview study of forty five professionally accomplished male and female designers and architects. The study considers how each respondent designs and makes discoveries throughout conceptual design. How they start designing, what they attempt to achieve, the means they employ, how they cope with getting stuck, their breakthroughs and discoveries and the circumstances of these experiences, are the main ingredients of the study. The aim of the research is to estimate the extent to which designing may be regarded as an insightful activity, by investigating experiences of discoveries as reported by the respondents. Throughout the thesis, discoveries or ideas occurring to respondents when they are not actively designing, an apparent outcome of a latent designing or preparation activity, are referred to as cold discoveries. This label is used to distinguish these discoveries from discoveries that emerge in the run of play, when individuals are actively designing. The latter are referred to as hot discoveries. The relative insightfulness of hot and cold discoveries is also investigated. In general, the evidence from the research suggests that designing is significantly insightful. Most respondents (39:45) reported experiences of insights that have contributed to their designing. In addition there is strong evidence that cold discoveries are considerably more important, both quantitatively and qualitatively, than is currently recognized. More than half of the respondents (25:45) reported the experience of cold discoveries, many after disengaging from designing, when they had been stuck. Being stuck means they were experiencing frustration, or had recognised they were not making satisfactory progress in attempts to resolve some aspect of conceptual design. Typically these respondents reported experiencing discoveries while doing other work, performing some physical activity, resting, or very soon after resuming work. They had elected to let ideas come to them, rather than persist in searching and this strategy was successful. Moreover, many respondents (10:45) described positive attributes of cold discoveries using terms such as stronger, more potent, or pushes boundaries, which suggest their cold discoveries are more insightful than their hot discoveries. Many respondents associated their cold discoveries with mental activities such as incubation, a concept identified by Gestalt theorists nearly a century ago. They used a range of informal terms, such as ideas ticking over, or percolating away. These apparently uncontrolled mental experiences, which I refer to generically as latent preparation, varied from one respondent to another in when, where and how they occurred. Latent preparation or its outcomes, in the form of interruptive thoughts, apparently takes place at any time and during different states of consciousness and attentiveness. It appears to be, at different times, unplanned, unintentional, undirected, unnoticed, or unconscious, in combinations, not necessarily all at once. It is clearly not only an unconscious process. 
This suggests one or more of the following: 1) that incubation is only a component of latent preparation; 2) that the conventional view of incubation as an unconscious process does not adequately account for the range of insightful experiences of mentally productive people, such as designers; or 3) that the old issue of whether incubation is a conscious or an unconscious process is not vital to a systematic investigation of insightful discovery. The thesis concludes by considering prospects for further research and how the research outcomes could influence education. Apart from the findings already described, statements by the respondents about personal attributes, designing, coping with being stuck and discoveries were wide-ranging, resourceful and down-to-earth, suggesting there are many ways for individuals to become proficient, creative designers at the high end of their profession. A major implication for future research is that latent preparation may be found just as readily among highly motivated and skilled individuals in occupations unrelated to architecture or designing. The evidence of the research so far suggests there is much to be learned about latent preparation that can be usefully applied, for the benefit of individuals aiming to be designers, or simply wanting to become more adept at intervening in, transforming and managing unexpected and novel situations of any kind.
APA, Harvard, Vancouver, ISO, and other styles
49

Viswanathan, Murlikrishna. "Towards robust discovery systems." Monash University, School of Computer Science and Software Engineering, 2003. http://arrow.monash.edu.au/hdl/1959.1/9397.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Heeks, Richard James. "Discovery writing and genre." Thesis, University of Exeter, 2012. http://hdl.handle.net/10871/13802.

Full text
Abstract:
This study approaches ‘discovery writing’ in relation to genre, investigating whether different genres of writing might be associated with different kinds of writing processes. Discovery writing can be thought of as writing to find out what you think, and represents a reversal of the more usual sense that ideas precede writing, or that planning should precede writing. Discovery writing has previously been approached in terms of writers’ orientations, such as whether writers are Planners or Discoverers. This study engages with these previous theories, but places an emphasis on genres of writing, and on textual features, such as how writers write fictional characters, or how writers generate arguments when writing essays. The two main types of writing investigated are fiction writing and academic writing. Particular genres include short stories, crime novels, academic articles, and student essays. 11 writers were interviewed, ranging from professional fiction authors to undergraduate students. Interviews were based on a recent piece of a writer’s own writing. Most of the writers came from a literary background, being either fiction writers or Literature students. Interviews were based on set questions, but also allowed writers to describe their writing largely in their own terms and to describe aspects of their writing that interested them. A key aspect of this approach was that of engaging writers in their own interests, from where interview questions could provide a basis for discussion. Fiction writing seemed characterized by emergent processes, where writers experienced real life events and channelled their experiences and feelings into stories. The writing of characters was often associated with discovery. A key finding for fiction writing was that even writers who planned heavily and identified themselves somewhat as Planners, also tended to discover more about their characters when writing. Academic writing was characterized by difficulty, where discovery was often described in relation to struggling to summarize arguments or with finding key words. A key conclusion from this study is that writers may be Planners or Discoverers by orientation, as previous theory has recognised. However, the things that writers plan and discover, such as plots and characters, also play an important role in their writing processes.
APA, Harvard, Vancouver, ISO, and other styles