Dissertations / Theses on the topic 'Indices de De Bruijn'

Consult the top 50 dissertations / theses for your research on the topic 'Indices de De Bruijn.'

1

Pouillard, Nicolas. "Une approche unifiante pour programmer sûrement avec de la syntaxe du premier ordre contenant des lieurs." Phd thesis, Université Paris-Diderot - Paris VII, 2012. http://tel.archives-ouvertes.fr/tel-00759059.

Abstract:
This thesis describes a unifying approach to safe meta-programming. A meta-program is a program that manipulates programs or program-like data. Compilers and proof systems are good examples of meta-programs that would benefit from this approach. To this end, the work focuses on the representation of names and binders in data structures. Since programming errors are common with the usual techniques, we propose an abstract interface for names and binders that rules out these errors. This interface is implemented as a library in Agda. It allows term representations to be defined and manipulated in the nominal style. Thanks to the abstraction, other styles are also available: the de Bruijn style, combinations of these styles, and still others. We index names and terms by worlds. Worlds are at once precise and abstract. Via logical relations and parametricity, we can prove in what sense our library is safe, and obtain "free theorems" about world-polymorphic functions. Thus a world-polymorphic term-transformation function must commute with any renaming of the free variables. The proof is carried out entirely in Agda. Our technique proves useful on several examples, including normalisation by evaluation, which is known to be challenging. We show that our world-indexed approach can express a wide range of data types via embedded definition languages.
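As a concrete illustration of the difference between the nominal and de Bruijn styles mentioned in this abstract, the following is a minimal, hypothetical Python sketch (the thesis's actual library is written in Agda): it converts a named lambda term into its de Bruijn-index form, where each variable is replaced by its distance to the binder that introduced it.

```python
# Hypothetical sketch, not the thesis's Agda library: converting the nominal
# representation of lambda terms into the de Bruijn-index representation.
def to_de_bruijn(term, env=()):
    """term is ('var', name) | ('lam', name, body) | ('app', fun, arg);
    each name is replaced by its distance to the binder that bound it."""
    tag = term[0]
    if tag == 'var':
        return ('var', env.index(term[1]))   # 0 = innermost enclosing binder
    if tag == 'lam':
        return ('lam', to_de_bruijn(term[2], (term[1],) + env))
    return ('app', to_de_bruijn(term[1], env), to_de_bruijn(term[2], env))

# \x. \y. x y   becomes   \. \. 1 0  -- binder names disappear entirely.
named = ('lam', 'x', ('lam', 'y', ('app', ('var', 'x'), ('var', 'y'))))
print(to_de_bruijn(named))  # ('lam', ('lam', ('app', ('var', 1), ('var', 0))))
```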
2

Evans, Stephen David. "Methods of rapid bruise assessment and the formulation of robust bruise indices for potatoes." Thesis, University of Edinburgh, 1995. http://hdl.handle.net/1842/27990.

Abstract:
When potato tubers are subjected to impacts, the sub-surface tissue may become discoloured as damaged cells produce the blue-black pigment melanin. Bruising caused during harvesting and handling can lead to downgrading of potatoes for the processing industry and quality retail trade. The two aims of this thesis were to reduce the time to detect bruising, and to develop a non-subjective method for the quantification of bruising. Reflectance spectrophotometry was investigated as a rapid, non-subjective and non-invasive way of detecting bruising. Wavelengths from ultraviolet to near infrared were selected by discriminant analysis to separate unbruised and bruised tubers. Neural nets were trained with three wavelengths to identify bruised tubers in a sample of unbruised and bruised tubers. The detection of bruising gave inconsistent results in unpeeled tubers, but proved to be reliable in peeled tubers. The rate of bruise development at air pressures up to 10 bar was measured by reflectance spectrophotometry and by a visual rating. The production of dopachrome, an orange precursor pigment of melanin, was used as an early indication of bruising. Dopachrome is visible to the human eye and the time for bruise detection can be reduced to approximately 3 hours when compressed air is used. Infrared and microwave thermography were used to measure possible rises in bruised tissue temperature. Thermography was used in conjunction with scanning laser Doppler imaging to detect changes in the biological zero of bruised tissue. No significant differences could be detected between unbruised and bruised tubers using these techniques. Reflectance spectrophotometry was also used in combination with a colour digital camera to automatically measure bruise area in peeled tubers. While the camera alone could measure bruise area with precision, constant adjustments were needed. Reflectance spectrophotometry was faster but less precise than the camera for measuring bruise area.
3

Varnet, Léo. "Identification des indices acoustiques utilisés lors de la compréhension de la parole dégradée." Thesis, Lyon 1, 2015. http://www.theses.fr/2015LYO10221/document.

Abstract:
Bien qu'il existe un large consensus de la communauté scientifique quant au rôle des indices acoustiques dans la compréhension de la parole, les mécanismes exacts permettant la transformation d'un flux acoustique continu en unités linguistiques élémentaires demeurent aujourd'hui largement méconnus. Ceci est en partie dû à l'absence d'une méthodologie efficace pour l'identification et la caractérisation des primitives auditives de la parole. Depuis les premières études de l'interface acoustico-phonétique par les Haskins Laboratories dans les années 50, différentes approches ont été proposées ; cependant, toutes sont fondamentalement limitées par l'artificialité des stimuli utilisés, les contraintes du protocole expérimental et le poids des connaissances a priori nécessaires. Le présent travail de thèse s'est intéressé à la mise en oeuvre d'une nouvelle méthode tirant parti de la situation de compréhension de parole dégradée pour mettre en évidence les indices acoustiques utilisés par l'auditeur. Dans un premier temps, nous nous sommes appuyés sur la littérature dans le domaine visuel en adaptant la méthode des Images de Classification à une tâche auditive de catégorisation de phonèmes dans le bruit. En reliant la réponse de l'auditeur à chaque essai à la configuration précise du bruit lors de cet essai, au moyen d'un Modèle Linéaire Généralisé, il est possible d'estimer le poids des différentes régions temps-fréquence dans la décision. Nous avons illustré l'efficacité de notre méthode, appelée Image de Classification Auditive, à travers deux exemples : une catégorisation /aba/-/ada/, et une catégorisation /da/-/ga/ en contexte /al/ ou /aʁ/. Notre analyse a confirmé l'implication des attaques des formants F2 et F3, déjà suggérée par de précédentes études, mais a également permis de révéler des indices inattendus. Dans un second temps, nous avons employé cette technique pour comparer les résultats de participants musiciens experts (N=19) ou dyslexiques (N=18) avec ceux de participants contrôles. Ceci nous a permis d'étudier les spécificités des stratégies d'écoute de ces différents groupes. L'ensemble des résultats suggèrent que les Images de Classification Auditives pourraient constituer une nouvelle approche, plus précise et plus naturelle, pour explorer et décrire les mécanismes à l'oeuvre au niveau de l'interface acoustico-phonétique.
There is today a broad consensus in the scientific community regarding the involvement of acoustic cues in speech perception. Up to now, however, the precise mechanisms underlying the transformation from a continuous acoustic stream into discrete linguistic units remain largely undetermined. This is partly due to the lack of an effective method for identifying and characterizing the auditory primitives of speech. Since the earliest studies on the acoustic-phonetic interface by the Haskins Laboratories in the 50's, a number of approaches have been proposed; they are nevertheless inherently limited by the non-naturalness of the stimuli used, the constraints of the experimental apparatus, and the a priori knowledge needed. The present thesis aimed at introducing a new method capitalizing on the speech-in-noise situation for revealing the acoustic cues used by the listeners. As a first step, we adapted the Classification Image technique, developed in the visual domain, to a phoneme categorization task in noise. The technique relies on a Generalized Linear Model to link each participant's response to the specific configuration of noise, on a trial-by-trial basis, thereby estimating the perceptual weighting of the different time-frequency regions for the decision. We illustrated the effectiveness of our Auditory Classification Image method through two examples: an /aba/-/ada/ categorization and a /da/-/ga/ categorization in context /al/ or /aʁ/. Our analysis confirmed that the F2 and F3 onsets were crucial for the tasks, as suggested in previous studies, but also revealed unexpected cues. In a second step, we relied on this new method to compare the results of musical experts (N=19) or dyslexic participants (N=18) to those of controls. This enabled us to explore the specificities of each group's listening strategies. All the results taken together show that the Auditory Classification Image method may be a more precise and more straightforward approach to investigate the mechanisms at work at the acoustic-phonetic interface.
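A schematic sketch of the classification-image idea described above (a generalized linear model linking each trial's binary response to that trial's noise) might look as follows; the dimensions, the simulated listener and the library choices are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of an "auditory classification image": fit a GLM from
# per-trial noise fields to binary responses; the coefficient map estimates the
# perceptual weight of each time-frequency region. All numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_freq, n_time = 2000, 32, 20
noise = rng.normal(size=(n_trials, n_freq, n_time))   # per-trial noise spectrograms

# Simulated listener whose decisions depend on a single time-frequency cell.
true_weights = np.zeros((n_freq, n_time))
true_weights[20, 5] = 2.0                              # e.g. an F2-onset region
p = 1.0 / (1.0 + np.exp(-(noise * true_weights).sum(axis=(1, 2))))
responses = rng.random(n_trials) < p

glm = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
glm.fit(noise.reshape(n_trials, -1), responses)
classification_image = glm.coef_.reshape(n_freq, n_time)  # weight per region
```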
4

Löthgren, Anders. "de Bruijn-sekvenser: Det effektiva paketbudet." Thesis, Örebro universitet, Institutionen för naturvetenskap och teknik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-36148.

Abstract:
This thesis deals with special cases of de Bruijn sequences in which every length-n subsequence of the de Bruijn sequence contains all k distinct elements of an alphabet Ak. The thesis demonstrates how de Bruijn sequences can be generated with the help of Euler cycles. The work therefore also gives some background on Euler cycles and presents a method for determining the number of unique cycles.
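Since the abstract describes generating de Bruijn sequences from Euler cycles, here is a minimal generic sketch of that construction (not the thesis's own code, and assuming n >= 2): vertices are words of length n-1, each vertex has one outgoing edge per symbol, and Hierholzer's algorithm extracts an Eulerian cycle whose edge labels form the sequence.

```python
# Generic sketch: a de Bruijn sequence B(k, n) from an Eulerian cycle.
from itertools import product

def de_bruijn_sequence(k: int, n: int) -> str:
    """Cyclic word of length k**n in which every word of length n over
    {0,...,k-1} occurs exactly once as a (cyclic) factor. Requires n >= 2."""
    assert n >= 2
    # One outgoing edge per symbol at every (n-1)-gram, so the graph is Eulerian.
    out_edges = {v: list(range(k)) for v in product(range(k), repeat=n - 1)}

    # Hierholzer's algorithm: walk unused edges until stuck, backtrack and emit.
    start = (0,) * (n - 1)
    stack, emitted = [start], []
    while stack:
        v = stack[-1]
        if out_edges[v]:
            s = out_edges[v].pop()
            stack.append(v[1:] + (s,))
        else:
            stack.pop()
            if stack:                 # emit the symbol that led into v
                emitted.append(v[-1])
    emitted.reverse()
    return "".join(str(s) for s in emitted)

# de_bruijn_sequence(2, 3) == "11101000": its 8 cyclic windows of length 3
# are exactly the 8 binary words of length 3.
```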
5

Bryant, Roy Dale. "Covering the de Bruijn graph." Thesis, Monterey, California. Naval Postgraduate School, 1986. http://hdl.handle.net/10945/21751.

Abstract:
Random-like sequences of 0's and 1's are generated efficiently by binary shift registers. The output of n-stage shift registers, viewed as a sequence of binary n-tuples, also gives rise to a special graph called the de Bruijn graph Bn. The de Bruijn graph is a directed graph with 2^n nodes. Each node has 2 arcs entering it and 2 arcs going out of it. Thus, there are a total of 2^(n+1) arcs in Bn. In this thesis, we define a cover of the de Bruijn graph, different from the usual graph theoretic cover. A cover S of the de Bruijn graph is defined as an independent subset of the nodes of Bn that satisfies the following property: for each node x in Bn - S, there exists a node y in S such that either the arc from x to y or the arc from y to x is in Bn. Combinatorially, we are able to place both upper and lower bounds on the cardinality of S. We find examples of covers that approach these bounds in cardinality. Several algorithms are presented that produce either a maximal or a minimal cover. Among them are Frugal, Sequential Fill, Double and Redouble, Greedy and Quartering.
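The definitions in this abstract can be made concrete with a small hypothetical sketch (not code from the thesis): build the binary de Bruijn graph Bn on 2^n nodes and test whether a node set S is a cover in the stated sense, i.e. independent and such that every outside node is joined to S by an arc in at least one direction.

```python
# Hypothetical sketch of the cover notion described above, for binary Bn.
from itertools import product

def successors(x: str) -> set[str]:
    """The two out-neighbours of node x in Bn: shift left and append a bit."""
    return {x[1:] + "0", x[1:] + "1"}

def is_cover(S: set[str], n: int) -> bool:
    nodes = {"".join(bits) for bits in product("01", repeat=n)}
    if not S <= nodes:
        return False
    # Independence: no arc of Bn joins two distinct members of S.
    for y in S:
        if successors(y) & (S - {y}):
            return False
    # Every node outside S has an arc to or from some member of S.
    for x in nodes - S:
        if not (successors(x) & S) and not any(x in successors(y) for y in S):
            return False
    return True
```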
6

Hunt, D'Hania J. "Constructing higher-order de Bruijn graphs." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02Jun%5FHunt.pdf.

Abstract:
Thesis (M.S. in Applied Mathematics)--Naval Postgraduate School, June 2002.
Thesis advisor(s): Harold Fredricksen, Craig W. Rasmussen. Includes bibliographical references (p. 45-46). Also available online.
7

Alharthy, Shathaa. "De Bruijn Graphs and Lamplighter Groups." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/38832.

Abstract:
De Bruijn graphs were originally introduced for finding a superstring representation for all fixed length words of a given finite alphabet. Later they found numerous applications, for instance, in DNA sequencing. Here we study a relationship between de Bruijn graphs and the family of lamplighter groups (a particular class of wreath products). We show how de Bruijn graphs and their generalizations can be presented as Cayley and Schreier graphs of lamplighter groups.
8

Krahn, Gary William. "Double Eulerian cycles on de Bruijn digraphs." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1994. http://handle.dtic.mil/100.2/ADA283334.

Abstract:
Dissertation (Ph.D. in Applied Mathematics) Naval Postgraduate School, June 1994.
Dissertation supervisor(s): Harold Fredricksen. Includes bibliographical references. Also available online.
9

Popovic, Lada (Robert J. McEliece, advisor). "Finite state codes and generalized De Bruijn sequences." Diss., Pasadena, Calif. : California Institute of Technology, 1991. http://resolver.caltech.edu/CaltechETD:etd-07092007-131600.

10

Zerbino, Daniel Robert. "Genome assembly and comparison using de Bruijn graphs." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611752.

11

Lyman, Cole Andrew. "Comparative Genomics Using the Colored de Bruijn Graph." BYU ScholarsArchive, 2020. https://scholarsarchive.byu.edu/etd/8441.

Abstract:
Comparing genomes in a computationally efficient manner is a difficult problem. Methods that provide the highest resolution are too inefficient and methods that are efficient are too low resolution. In this thesis, we show that the Colored de Bruijn Graph (CdBG) is a suitable method for comparing genomes because it is efficient while maintaining a useful amount of resolution. To illustrate the usefulness of the CdBG, the phylogenetic tree for 12 species in the Drosophila genus is reconstructed using pseudo-homologous regions of the genome contained in the CdBG.
12

Badr, Eman. "Identifying Splicing Regulatory Elements with de Bruijn Graphs." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/73366.

Abstract:
Splicing regulatory elements (SREs) are short, degenerate sequences on pre-mRNA molecules that enhance or inhibit the splicing process via the binding of splicing factors, proteins that regulate the functioning of the spliceosome. Existing methods for identifying SREs in a genome are either experimental or computational. This work tackles the limitations in the current approaches for identifying SREs. It addresses two major computational problems, identifying variable length SREs utilizing a graph-based model with de Bruijn graphs and discovering co-occurring sets of SREs (combinatorial SREs) utilizing graph mining techniques. In addition, I studied and analyzed the effect of alternative splicing on tissue specificity in human. First, I have used a formalism based on de Bruijn graphs that combines genomic structure, word count enrichment analysis, and experimental evidence to identify SREs found in exons. In my approach, SREs are not restricted to a fixed length (i.e., k-mers, for a fixed k). Consequently, the predicted SREs are of different lengths. I identified 2001 putative exonic enhancers and 3080 putative exonic silencers for human genes, with lengths varying from 6 to 15 nucleotides. Many of the predicted SREs overlap with experimentally verified binding sites. My model provides a novel method to predict variable length putative regulatory elements computationally for further experimental investigation. Second, I developed CoSREM (Combinatorial SRE Miner), a graph mining algorithm for discovering combinatorial SREs. The goal is to identify sets of exonic splicing regulatory elements whether they are enhancers or silencers. Experimental evidence is incorporated through my graph-based model to increase the accuracy of the results. The identified SREs do not have a predefined length, and the algorithm is not limited to identifying only SRE pairs as are current approaches. I identified 37 SRE sets that include both enhancer and silencer elements in human genes. These results intersect with previous results, including some that are experimental. I also show that the SRE set GGGAGG and GAGGAC identified by CoSREM may play a role in exon skipping events in several tumor samples. Further, I report a genome-wide analysis to study alternative splicing on multiple human tissues, including brain, heart, liver, and muscle. I developed a pipeline to identify tissue-specific exons and hence tissue-specific SREs. Utilizing the publicly available RNA-Seq data set from the Human BodyMap project, I identified 28,100 tissue-specific exons across the four tissues. I identified 1929 exonic splicing enhancers with 99% overlap with previously published experimental and computational databases. A complicated enhancer regulatory network was revealed, where multiple enhancers were found across multiple tissues while some were found only in specific tissues. Putative combinatorial exonic enhancers and silencers were discovered as well, which may be responsible for exon inclusion or exclusion across tissues. Some of the enhancers are found to be co-occurring with multiple silencers and vice versa, which demonstrates a complicated relationship between tissue-specific enhancers and silencers.
Ph. D.
13

Feng, Zhi. "Optical MANs based on the de Bruijn graph." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6647.

Abstract:
This thesis proposes and studies multihop lightwave networks based on the de Bruijn Graph (dBG) and their hierarchical structures. Routing algorithms that take advantage of the inherent structure of the network are studied. Three new algorithms that progressively improve on the mean path length are proposed, and the network performance is studied in terms of mean path length, network throughput, delay, locality factor and propagation delay. The study shows that the performance of this type of network topology is desirable and is comparable to other types of multihop systems. It may be considered as a candidate for optical Metropolitan Area Networks. The optical implementation and design criteria are also discussed in the thesis. It shows that such networks are feasible with today's lightwave technology and can also be easily adapted to more advanced techniques in the future.
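The shift-based routing that makes de Bruijn topologies attractive can be sketched as follows; this is a generic, hypothetical illustration of the fixed-length route that the thesis's algorithms improve upon (by shortening the mean path length), not the proposed algorithms themselves.

```python
# Naive de Bruijn-network route: reach v from u in exactly n hops by shifting
# in the symbols of v one at a time (nodes are words of the same length n).
def shift_route(u: str, v: str) -> list[str]:
    assert len(u) == len(v)
    path, node = [u], u
    for symbol in v:
        node = node[1:] + symbol   # each hop is an arc of the de Bruijn graph
        path.append(node)
    return path

# shift_route("000", "110") -> ['000', '001', '011', '110']
```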
14

Leiba, Raphaël. "Conception d'un outil de diagnostic de la gêne sonore en milieu urbain." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066602/document.

Abstract:
Le bruit, en particulier celui dû au trafic routier, est cité par de nombreuses études comme une source de préoccupation sociétale majeure. Jusqu'à présent les réponses des pouvoirs publics ne se basent que sur une quantification énergétique de l'exposition sonore, souvent via la mesure ou l'estimation du LA ou du Lden, et des prises de décisions relatives à la diminution du niveau sonore. Or des études psychoacoustiques ont montré que le niveau sonore n'expliquait qu'une faible part de la gêne sonore ressentie. Il est donc intéressant d'avoir plus d'information sur la source de bruit et de ne pas la réduire à un simple niveau sonore. Dans cette thèse, nous proposons de concevoir un outil permettant d'estimer la gêne sonore associée à chaque véhicule du trafic routier via l'utilisation de son signal audio et de modèles de gêne sonore. Pour ce faire, le signal audio du véhicule est isolé de l'ensemble du trafic routier urbain grâce à l'utilisation de méthodes inverses et de grands réseaux de microphones ainsi que du traitement d'images pour obtenir sa trajectoire. Grâce à la connaissance de la trajectoire ainsi que du signal, le véhicule est classifié par une méthode de machine learning suivant la taxonomie de Morel et al. Une fois sa catégorie obtenue, la gêne spécifique du véhicule est estimée grâce à un modèle de gêne sonore utilisant des indices psychoacoustiques et énergétiques. Cela permet l'estimation des gênes sonores spécifiques à chaque véhicule au sein du trafic routier. L'application de cette méthode est faite lors d'une journée de mesure sur une grande artère parisienne
Noise, especially road traffic noise, is cited by many studies as a source of major societal concern. So far, public responses have been based only on an energy quantification of sound exposure, often by measuring or estimating LA or Lden, and decisions are taken with the aim of reducing the sound level. Nevertheless, psychoacoustic studies have shown that the sound level explains only a small part of the perceived noise annoyance. It is therefore interesting to have more information about the noise source and not to reduce that information to a single sound level. In this thesis a tool is proposed for estimating the noise annoyance induced by each road vehicle, using its audio signal and noise annoyance models. To do so, the audio signal of the vehicle is isolated from the rest of the urban traffic using inverse methods and large microphone arrays, and image processing is used to obtain its trajectory. Knowledge of the trajectory and of the signal allows the vehicle to be classified by a machine learning method according to the taxonomy of Morel et al. Once its category is obtained, the specific annoyance of the vehicle is estimated with a noise annoyance model using psychoacoustic and energetic indices. This allows the estimation of the specific noise annoyance of each vehicle within the road traffic. The method is applied to a full day of measurements on a major Parisian artery.
15

Poirier, Carl. "Assemblage d'ADN avec graphes de de Bruijn sur FPGA." Master's thesis, Université Laval, 2015. http://hdl.handle.net/20.500.11794/27132.

Abstract:
This master's thesis is devoted to parallelizing a de novo DNA assembly algorithm on different hardware platforms, namely multicore processors and FPGA accelerators. More precisely, the OpenCL language is used to accelerate the algorithm in question and to allow a direct comparison between the platforms. The algorithm is first introduced, and its original implementation, developed to run on a cluster of nodes, is discussed. The modifications made to the algorithm to ease its parallelization are then described. Next, the core of the work, the OpenCL programming, is presented. Finally, the results are presented and discussed.
16

Hines, Peter Anthony. "The linear complexity of de Bruijn sequences over finite fields." Thesis, Royal Holloway, University of London, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313736.

17

Peng, Yu, and 彭煜. "Iterative de Bruijn graph assemblers for second-generation sequencing reads." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2012. http://hub.hku.hk/bib/B50534051.

Abstract:
The recent advance of second-generation sequencing technologies has made it possible to generate a vast amount of short read sequences from a DNA (cDNA) sample. Current short read assemblers make use of the de Bruijn graph, in which each vertex is a k-mer and each edge connecting vertex u and vertex v represents u and v appearing in a read consecutively, to produce contigs. There are three major problems for de Bruijn graph assemblers: (1) branch problem, due to errors and repeats; (2) gap problem, due to low or uneven sequencing depth; and (3) error problem, due to sequencing errors. A proper choice of k value is a crucial tradeoff in de Bruijn graph assemblers: a low k value leads to fewer gaps but more branches; a high k value leads to fewer branches but more gaps. In this thesis, I first analyze the fundamental genome assembly problem and then propose an iterative de Bruijn graph assembler (IDBA), which iterates from low to high k values, to construct a de Bruijn graph with fewer branches and fewer gaps than any other de Bruijn graph assembler using a fixed k value. Then, the second-generation sequencing data from metagenomic, single-cell and transcriptome samples is investigated. IDBA is then tailored with special treatments to handle the specific issues for each kind of data. For metagenomic sequencing data, a graph partition algorithm is proposed to separate de Bruijn graph into dense components, which represent similar regions in subspecies from the same species, and multiple sequence alignment is used to produce consensus of each component. For sequencing data with highly uneven depth such as single-cell and metagenomic sequencing data, a method called local assembly is designed to reconstruct missing k-mers in low-depth regions. Then, based on the observation that short and relatively low-depth contigs are more likely erroneous, progressive depth on contigs is used to remove errors in both low-depth and high-depth regions iteratively. For transcriptome sequencing data, a variant of the progressive depth method is adopted to decompose the de Bruijn graph into components corresponding to transcripts from the same gene, and then the transcripts are found in each component by considering the reads and paired-end reads support. Plenty of experiments on both simulated and real data show that IDBA assemblers outperform the existing assemblers by constructing longer contigs with higher completeness and similar or better accuracy. The running time of IDBA assemblers is comparable to existing algorithms, while the memory cost is usually less than the others.
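The graph construction underlying this family of assemblers can be illustrated with a short generic sketch (not IDBA itself): k-mers are vertices, and an edge joins two k-mers that occur consecutively in a read. The reads and the value of k below are made up.

```python
# Generic sketch of a read-based de Bruijn graph; not IDBA's implementation.
from collections import defaultdict

def de_bruijn_from_reads(reads: list[str], k: int):
    counts = defaultdict(int)    # k-mer -> multiplicity (a proxy for depth)
    edges = defaultdict(set)     # k-mer -> successor k-mers (overlap of k-1)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            counts[kmer] += 1
            if i + k < len(read):
                edges[kmer].add(read[i + 1:i + k + 1])
    return counts, edges

# Small k: fewer gaps, more branches; large k: fewer branches, more gaps in
# low-depth regions -- the tradeoff that iterating over k is meant to resolve.
reads = ["ATGGCGT", "GGCGTGC", "CGTGCAA"]
counts, edges = de_bruijn_from_reads(reads, k=4)
```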
Doctor of Philosophy (Computer Science)
18

Moreno, Eduardo. "Graphes et cycles de de Bruijn dans des langages avec des restrictions." Phd thesis, Université de Marne la Vallée, 2005. http://tel.archives-ouvertes.fr/tel-00628709.

Abstract:
Soit un langage composé par tous les mots d'une longueur donnée $n$. Un cycle de de Bruijn d'ordre $n$ est un mot cyclique tel que tous les mots du langage apparaissent exactement une fois comme facteurs de ce cycle. Un algorithme pour construire le cycle de de Bruijn lexicographiquement minimal est dû à Fredricksen et Maiorana, il utilise les mots de Lyndon du langage. Cette thèse étudie comment généraliser le concept de cycles de de Bruijn pour un langage composé par un sous-ensemble de mots de longueur $n$, en particulier les langages de tous les mots de longueur $n$ sans facteurs dans une liste de facteurs interdits. Premièrement, nous étudions le cas des mots sans le facteur 11. Nous fournissons de nouvelles preuves de l'algorithme de Fredricksen et Maiorana qui nous permettent de prolonger ce resultat au cas des mots sans le facteur $1^i$ pour n'importe quel $i$. Nous caractérisons pour quels langages de mots de longueur $n$ existe un cycle de de Bruijn, et nous étudions également quelques propriétés de la dynamique symbolique de ces langages, en particulier des langages définis par des facteurs interdits. Pour ces genres de langages, nous présentons un algorithme pour produire un cycle de de Bruijn, en utilisant les mots de Lyndon du langage. Ces résultats utilisent la notion du graphe de de Bruijn et réduit le problème à construire un cycle eulérien dans ce graphe. Nous étudions le problème de la construction du cycle minimal dans un langage avec des facteurs interdits en employant le graphe de de Bruijn. Nous étudions deux algorithmes, un algorithme glouton simple et efficace qui fonctionne avec quelques familles de langages, et un algorithme plus complexe qui résout ce problème pour n'importe quel graphe eulérien.
19

Moreno, Eduardo. "Graphes et cycles de de Bruijn dans des langages avec des restrictions." Marne-la-Vallée, 2005. https://tel.archives-ouvertes.fr/tel-00628709.

Abstract:
Soit un langage composé par tous les mots d'une longueur donnée n. Un cycle de de Bruijn d'ordre n est un mot cyclique tel que tous les mots du langage apparaissent exactement une fois comme facteurs de ce cycle. Un des algorithmes pour construire le cycle de de Bruijn lexicographiquement minimal est dû à Fredricksen et à Maiorana, lequel utilise les mots de Lyndon du langage. Cette thèse étudie comment généraliser le concept de cycles de de Bruijn pour un langage composé par un sous-ensemble de mots de longueur n, en particulier les langages de tous les mots de longueur n sans facteurs dans une liste de facteurs interdits. Premièrement, nous étudions le cas des mots sans le facteur 11. Nous fournissons de nouvelles preuves de l'algorithme de Fredricksen et de Maiorana qui nous permettent de prolonger ce résultat au cas des mots sans facteur 1i pour n'importe quel i. Nous caractérisons pour quels langages de mots de longueur n il existe un cycle de de Bruijn, et nous étudions également quelques propriétés de la dynamique symbolique de ces langages, en particulier des langages définis par des facteurs interdits. Pour ces genres de langages, nous présentons un algorithme pour produire un cycle de de Bruijn, en utilisant les mots de Lyndon du langage. Ces résultats utilisent la notion du graphe de de Bruijn et réduisent le problème à la construction d'un cycle eulérien dans ce graphe. Nous étudions le problème de la construction du cycle minimal dans un langage avec des facteurs interdits en employant le graphe de de Bruijn. Nous étudions deux algorithmes : un algorithme glouton simple et efficace qui fonctionne pour quelques ensembles de langages, et un algorithme plus complexe qui résout ce problème pour n'importe quel graphe eulérien.
Let a language be composed of all the words of a given length n. A de Bruijn sequence of span n is a cyclic string such that all words in the language appear exactly once as factors of this sequence. One of the algorithms for constructing the lexicographically minimal de Bruijn sequence is due to Fredricksen and Maiorana, and it uses the Lyndon words in the language. This thesis studies how to generalize the concept of de Bruijn sequence to a language composed of a subset of the words of length n, particularly the languages of all words of length n without factors in a list of forbidden factors. Firstly, we study the case of words without the factor 11. We give a new proof of the algorithm of Fredricksen and Maiorana which allows us to extend this result to the case of words without the factor 1i for any i. We characterize for which languages of words of length n a de Bruijn sequence exists, and we also study some symbolic dynamical properties of these languages, particularly of the languages defined by forbidden factors. For these kinds of languages, we present an algorithm to produce a de Bruijn sequence, using the Lyndon words of the language. These results use the notion of the de Bruijn graph and reduce the problem to constructing an Eulerian cycle in this graph. We study the problem of constructing the lexicographically minimal de Bruijn sequence in a language with forbidden factors using the de Bruijn graph. We study two algorithms: a simple and efficient greedy algorithm which works for some sets of languages, and a more complex algorithm which solves this problem for any Eulerian labelled graph.
20

Richard, Céline. "Etude de l’encodage des sons de parole par le tronc cérébral dans le bruit." Thesis, Lyon 2, 2010. http://www.theses.fr/2010LYO20116/document.

Abstract:
Ce travail s'est intéressé au traitement sous-cortical de la parole dégradée par le bruit, notamment par la caractérisation première de l'importance de certains traits acoustiques dans la perception de la parole normale. Pour cela, nous avons d'abord participé à la mise au point de la technique électrophysiologique de potentiels évoqués auditifs obtenus en réponse à des sons de parole, technique proche de celle des potentiels évoqués auditifs précoces, mais qui a des exigences propres en matière de traitement du signal et de techniques de recueil, qui nécessitent une adaptation importante de par la nature différente des stimuli français par rapport aux stimuli anglais utilisés par l'équipe de référence américaine. Les différents axes de notre recherche ont, par ailleurs, permis de mettre en évidence l'importance de l'encodage sous-cortical de certaines caractéristiques acoustiques telles que l'enveloppe temporelle et le voisement, mettant par là même en évidence un possible effet corticofuge sur l'encodage de celui-ci. Ces différentes expériences nous ont amenés à nous poser la question des conditions idéales de recueil des PEASP, et notamment de l'effet de l'intensité sur le recueil de ceux-ci, mettant en évidence une relation non linéaire entre l'intensité de stimulation et les caractéristiques des PEASP. Si une intensité de 20 dB SL semble nécessaire au recueil d'un PEASP, les réponses montrent une variabilité qui reste très grande à l'échelon individuel, ce qui rend difficile l'utilisation de l'outil PEASP à visée diagnostique, que ce soit dans les troubles du langage chez l'enfant ou dans les troubles de l'audition dans le bruit.
The major purpose of my thesis was to investigate the implication of brainstem structures in speech-in-noise processing, in particular by characterizing the impact of acoustic cues on normal speech perception. Firstly, we were involved in the engineering of the speech auditory brainstem response (SABR) recording system. SABR are similar to brainstem auditory evoked responses to clicks, but require different acquisition and signal-processing set-ups, owing to the differences between the French stimuli and the American stimuli used by the American reference team. The different studies presented here made it possible to emphasize the role of brainstem structures in the subcortical processing of acoustic cues, such as the temporal envelope or voicing, with possible evidence of a corticofugal effect on SABR. These experiments led us to a more fundamental question about the best conditions required for SABR collection, in particular the best stimulation intensity. The results of the experiment on the intensity effect showed a non-linear relation between the stimulation intensity and the SABR characteristics. Even if an intensity of only 20 dB SL seems sufficient for SABR recording, individual results are still highly variable, so that diagnostic application of SABR to, for example, children with language-learning problems or subjects suffering from speech-in-noise perception impairment remains difficult.
21

Zeineddine, Hassan. "Effect of limited wavelength conversion in all-optical networks based on de Bruijn graphs." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0010/MQ52494.pdf.

22

Ventura, Daniel Lima. "Cálculos de substituições explícitas à la de Bruijn com sistemas de tipos com interseção." reponame:Repositório Institucional da UnB, 2010. http://repositorio.unb.br/handle/10482/8787.

Abstract:
Dissertação (Mestrado em Matemática)-Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Matemática, Brasília, 2010.
O λ-calculus é um modelo teórico de computação tão antigo quanto a própria noção de função computável. Devido à definição da substituição como uma meta-operação, existem várias formas de tornar esta substituição explícita no sistema, dando surgimento a uma grande variedade de sistemas baseados no λ-calculus. Estudamos dois cálculos de substituições explícitas, o λσ e o λse, com sistemas de tipos com interseção. Estes cálculos utilizam uma notação à la de Bruijn, onde variáveis são representadas por índices ao invés de nomes. Sistemas de atribuição de tipos permitem uma análise sintática (estática) de propriedades semânticas (dinâmicas) de programas, dispensando qualquer declaração de tipos dentro destes. Os tipos com interseção apresentam uma maneira de integrar polimorfismo ao sistema, que tem se mostrado conveniente computacionalmente, com propriedades como a tipagem principal, que permite, e.g., a compilação separada e a recompilação inteligente para o sistema de tipos computacionais. Para a adição de tipos com interseção aos cálculos estudados, fazemos um estudo do λ-calculus à la de Bruijn com dois sistemas de tipos diferentes. Uma caracterização sintática de tipagens principais, para termos irredutíveis, em um dos sistemas é apresentada. Baseado neste sistema, introduzimos sistemas de tipos com interseção para o λσ e o λse. A propriedade básica de redução de sujeito, que garante a preservação dos tipos em qualquer computação possível para termos tipáveis, é analisada nas variações dos sistemas propostos. Outra propriedade analisada é a relevância do sistema, garantindo que apenas a informação de tipos necessária para inferência é utilizada, impossibilitando a admissibilidade de uma lei de redundância para o sistema de tipos.
The λ-calculus is a well-known theoretical computation model, as old as the concept of computable functions. Due to the definition of substitution as a meta-operator, there exists a great variety of variations of this computational system in which the operation of substitution is treated explicitly. In this work we investigate intersection type systems for two explicit substitution calculi, the λσ and the λse, both with de Bruijn indices. Type assignment systems allow a static code analysis through implicit typing inference, where no type declaration is required. Intersection types present a machine-friendly way to add polymorphism to type systems, with features such as the principal typing property, allowing e.g. separate compilation and the smartest recompilation. We study the λ-calculus with de Bruijn indices with two different type systems, as a preliminary step towards adding intersection types to both explicit substitution calculi. A characterisation of principal typings of irreducible terms is given in one of the systems, on which the intersection type systems for λσ and λse are based. We analyse the subject reduction property, which guarantees that all terms of the system preserve their types during any possible computation, in some variations of the proposed type systems. Another property analysed is relevance, in which only necessary assumptions are allowed in a typing inference, making a weakening rule inadmissible in the type system.
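For readers unfamiliar with the notation à la de Bruijn used by both calculi, the following standalone sketch shows the index-shifting and substitution mechanics that explicit substitution calculi internalise; it is a textbook-style illustration in Python, not the λσ or λse machinery studied in the thesis.

```python
# Capture-free substitution with de Bruijn indices; terms are tuples
# ('var', i), ('lam', body), ('app', fun, arg). Textbook formulation only.
def shift(t, d, c=0):
    """Add d to every variable of t whose index is >= the cutoff c."""
    tag = t[0]
    if tag == 'var':
        return ('var', t[1] + d) if t[1] >= c else t
    if tag == 'lam':
        return ('lam', shift(t[1], d, c + 1))
    return ('app', shift(t[1], d, c), shift(t[2], d, c))

def subst(t, j, s):
    """Replace variable j in t by s, adjusting indices under binders."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == j else t
    if tag == 'lam':
        return ('lam', subst(t[1], j + 1, shift(s, 1)))
    return ('app', subst(t[1], j, s), subst(t[2], j, s))

def beta(redex):
    """One beta step: (lam body) applied to arg -> body[0 := arg]."""
    _, lam, arg = redex
    return shift(subst(lam[1], 0, shift(arg, 1)), -1)

# (lam. 0) applied to a free variable of index 0 reduces to that variable.
print(beta(('app', ('lam', ('var', 0)), ('var', 0))))   # ('var', 0)
```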
23

House, Margaret A. "Water quality indices." Thesis, Middlesex University, 1986. http://eprints.mdx.ac.uk/13379/.

Abstract:
Given the present constraints on capital expenditure for water quality improvements, it is essential that best management practices be adopted whenever possible. This research provides an evaluation of existing practices in use within the water industry for surface water quality classification and assesses water quality indices as an alternative method for monitoring trends in water quality. To this end, a new family of indices have been developed and evaluated and the management flexibility provided by their application has been examined. It is shown that water-quality indices allow the reduction of vast amounts of data on a range of determinand concentrations, to a single number in an objective and reproducible manner. This provides an accurate assessment of surface water quality which will be beneficial to the operational management of surface water quality. Previously developed water quality indices and classifications are reviewed and evaluated. Two main types of index are identified: biotic indices and chemical indices. The former are based exclusively upon biological determinands/indicators and are used extensively within the United Kingdom in the monitoring of surface water quality. The latter includes a consideration of both physico-chemical and biological determinands, but with an emphasis on the former variables. Their use is still the subject of much controversy and discussion. Four main approaches to the development of chemical indices can be identified in accordance with the aims and objectives of their design. Those developed for general application are known as General Water Quality Indices (WQIs) or Indices of Pollution, with the latter based predominantly upon determinands associated with man-made pollution. Those which reflect water quality in terms of its suitability for a specific use are termed use-related; whilst planning indices are those which attempt to highlight areas of high priority for remedial action on the basis of more wide-ranging determinands. The derivation and structure of previously developed indices have been evaluated and the merits and strengths of each index assessed. In this way, nine essential index characteristics were identified, including the need to develop an index in relation to legal standards or guidelines. In addition it was recognised that one requirement of an index should be to reflect potential water use and toxic water quality in addition to general quality as reflected by routinely monitored determinands. The development of river quality classifications within the United Kingdom is reviewed and the additional management flexibility afforded by the use of an index evaluated by comparing the results produced by the SOD (1976) Index with those of the National Water Council (NWC, 1977) Classification. The latter classification is that presently used to monitor water quality in Britain. The SOD Index was found to be biased towards waters of high quality and provided no indication of potential water use or toxic water quality. Nevertheless, it displayed a number of advantages over the NWC Classification in terms of the operational management of surface water quality. It was therefore decided to develop a new family of water quality indices, each based upon legally established water quality standards and guidelines for both routinely monitored and toxic determinands and each relating water quality to a range of potential water uses, thereby indicating economic gains or losses resulting from changes in quality. 
Four stages in the development of a water quality index are discussed: determinand selection; the development of determinand transformations and weightings; and the selection of appropriate aggregation functions. Four separate indices have been developed as a result of this research. These may be used either independently or in combination with one another where a complete assessment of water quality is required. The first of these is a General Water Quality Index (WQI) which reflects water quality in terms of a range of potential water uses. This index is based upon nine physico-chemical and biological determinands which are routinely monitored by the water authorities and river purification boards of England, Wales and Scotland. The second, the Potable Water Supply Index (PWSI) is based upon thirteen routinely monitored determinands, but reflects water quality exclusively in terms of its suitability for use in potable water supply (PWS). The two remaining indices, the Aquatic Toxicity (ATI) and Potable Sapidity (PSI) Indices are based upon toxic determinands such as heavy metals, pesticides and hydrocarbons which are potentially harmful to both human and aquatic life. Both indices are use-related, the former reflecting the suitability of water for the protection of fish and wildlife populations; the latter, the suitability of water for use in PWS. Each index is based upon nine and twelve toxic determinands respectively. These indices were developed in as objective and rigorous a manner as possible, utilising an intensive interview and questionnaire programme with members of both the water authorities and river purification boards. Rating curves were selected as the best way in which individual determinand concentrations could be transformed to the same scale. The scales selected for the WQI and PWSI are 10 - 100 and 0 - 100 respectively, whilst those of the ATI and PSI are 0 - 10. Each has been sub-divided in such a way as to indicate not only water quality, but also possible water use. Thus, the indices reflect both current and projected changes in the economic value of a water body which would occur as a result of the implementation of alternative management strategies. The curves were developed using published water quality standards and guidelines relating to specific water uses. Therefore, they contain information on standards which must be adhered to within the United Kingdom and this adds a further dimension to their management flexibility. Determinand weightings indicating the emphasis placed by water quality experts upon individual determinands were assigned to the determinands of the WQI and PWSI. However, weightings were omitted from the ATI and PSI due to the sporadic nature of pollution events associated with these determinands. These vary spatially and temporally, both in concentration and in terms of which determinand is found to be in violation of consent conditions. Therefore, on a national scale, no one determinand could be isolated as being more important than any other. Three aggregation formulae were evaluated for use within the developed indices: the weighted and unweighted versions of an arithmetic, modified arithmetic and multiplicative formulation. Each index was applied to data collected from a series of water quality monitoring bodies covering a range of water quality conditions. 
In each instance, the modified arithmetic formulation was found to produce index scores which agreed most closely with a predetermined standard, normally the classifications assigned using the NWC classification. In addition, this formulation produced scores which best covered the ascribed index range. However, the multiplicative unweighted formulation was retained for use within the ATI and PSI for the detection of zero index scores, i.e. when concentrations in excess of legal limits were recorded for these toxic determinands. The results from these studies validate the ability of each index to detect fluctuations in surface water quality. Therefore, the utility of the developed indices for the operational management of surface water quality was effectively demonstrated and the flexibility and advantages of an index approach in providing additional information upon which to base management decisions was highlighted.
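The aggregation step discussed above can be illustrated with a small sketch; the exact SDD and NWC formulations are not reproduced here, so the weighted arithmetic and multiplicative forms below are generic textbook versions, applied to invented determinand ratings and weights.

```python
# Generic illustration of two common WQI aggregation forms (weights sum to 1,
# ratings on a 0-100 scale); not necessarily the thesis's exact formulations.
ratings = {"DO": 85, "BOD": 60, "NH4": 70, "pH": 90}          # hypothetical
weights = {"DO": 0.35, "BOD": 0.30, "NH4": 0.20, "pH": 0.15}  # hypothetical

weighted_arithmetic = sum(weights[d] * ratings[d] for d in ratings)
multiplicative = 1.0
for d in ratings:
    multiplicative *= ratings[d] ** weights[d]

print(weighted_arithmetic, multiplicative)
```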
24

Khani, Hossein. "Ordinal Power Indices." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLD025.

Abstract:
La conception de procédures visant à classer les personnes en fonction de leur comportement dans des groupes est d'une grande importance dans de nombreuses situations. Le problème se pose dans une variété de scénarios de la théorie du choix social, de la théorie des jeux coopératifs ou de la théorie de la décision multi-attributs. Cependant, dans de nombreuses applications du monde réel, une évaluation précise des "coalitions de pouvoir" peut être difficile pour de nombreuses raisons. Dans ce cas, il peut être intéressant de ne considérer que les informations ordinales concernant les comparaisons binaires entre les coalitions. L'objectif de cette thèse est d'étudier le problème de la recherche d'un classement ordinal sur l'ensemble N d'individus (appelé classement social), en lui attribuant un rang ordinal par rapport à son ensemble de pouvoir (appelé relation de pouvoir). Pour ce faire, nous utilisons des notions de la théorie du vote classique et de la théorie des jeux coopératifs. Nous avons principalement défini des concepts de solution nommés règle de majorité ceteris paribus et indice ordinal de Banzhaf, qui sont respectivement inspirés de la théorie du vote classique et de la théorie des jeux coopératifs. Comme la majorité de notre travail de thèse consiste à étudier des solutions à partir d'une approche fondée sur les propriétés, nous étudions axiomatiquement les solutions en reformulant des axiomes de la théorie classique du vote. Enfin, l'exploration des extensions pondérées de la règle de majorité ceteris paribus pour classer plus de deux personnes engendre une étude des familles de solutions pondérées.
The design of procedures aimed at ranking individuals according to how they behave in various groups is of great importance in many practical situations. The problem occurs in a variety of scenarios coming from social choice theory, cooperative game theory or multi-attribute decision theory, and examples include: comparing researchers in a scientific department by taking into account their impact across different teams; finding the most influential political parties in a parliament based on past alliances within alternative majority coalitions; rating attributes according to their influence in a multi-attribute decision context, where independence of attributes is not verified because of mutual interactions. However, in many real-world applications, a precise evaluation of the coalitions' "power" may be hard for many reasons (e.g., uncertain data, complexity of the analysis, missing information or difficulties in the update, etc.). In this case, it may be interesting to consider only ordinal information concerning binary comparisons between coalitions. The main objective of this thesis is to study the problem of finding an ordinal ranking over the set N of individuals (called social ranking), given an ordinal ranking over its power set (called power relation). In order to do that, throughout the thesis we use notions from classical voting theory and cooperative game theory. Mainly, we have defined solution concepts named the ceteris paribus majority rule and the ordinal Banzhaf index, which are respectively inspired by classical voting theory and cooperative game theory. Since the majority of our work in the thesis studies solutions from a property-driven approach, we axiomatically study the solutions by reformulating axioms from classical voting theory. Finally, exploring weighted extensions of the ceteris paribus majority rule to rank more than two individuals results in an axiomatic study of families of weighted solutions.
25

Casselryd, Linnéa, Agnes Lantto, and Alicia Julienne Zanic. "MSCI Climate Paris Aligned Indices : A quantitative study comparing the performance of SR indices and their conventional benchmark indices." Thesis, Umeå universitet, Företagsekonomi, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185021.

Abstract:
There is no clear consensus about whether green investments perform better, worse or equal to conventional brown investments. With the rising popularity of social investments, it becomes increasingly important to understand these investments. The recent launch of the MSCI Climate Paris Aligned Indices (CPAI) aims to illustrate the development of an economy that is in line with the requirements and goals of the Paris Agreement from 2015. In this research we aim to find out whether the MSCI Europe, USA and EM Climate Paris Aligned Indices outperform their parent indices. We do this by comparing performance measures such as the net return, standard deviation of net returns and Sharpe ratio. We further conduct an ordinary least squares regression to test whether the betas and Jensen's alphas of the CPAI differ significantly from their parent indices. The results show that only the USA CPAI clearly outperforms its parent index. This is due to it having a higher Sharpe ratio and Jensen's alpha as well as higher monthly net returns and a lower standard deviation compared to its parent index. The regression shows that it does perform better than the parent index. The results for the EM CPAI show that it performs in a similar way as its parent index. It has a higher monthly net return but also a slightly higher standard deviation, which leads to an equally large Sharpe ratio. Neither the estimated Jensen's alpha nor the beta is significantly different from those of its parent index, and thus the hypothesis of it performing equally as well as its parent index cannot be rejected. Lastly, the Europe CPAI has a higher Sharpe ratio, Jensen's alpha and monthly net returns than its parent index, but it also exhibits a higher standard deviation. The regression indicated that it performs in a similar way as its parent index; no difference could be proven. In conclusion, this means that all CPAI perform at least equally as well as their parent indices, if not better.
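The performance measures compared in this study can be written down in a few lines; the sketch below uses simulated monthly returns rather than MSCI data, and estimates Jensen's alpha and beta by an ordinary least squares regression of the index's excess returns on the benchmark's excess returns.

```python
# Illustrative only: Sharpe ratio, and Jensen's alpha / beta via OLS.
import numpy as np

def sharpe_ratio(returns, risk_free):
    excess = returns - risk_free
    return excess.mean() / excess.std(ddof=1)

def jensens_alpha_beta(index_returns, benchmark_returns, risk_free):
    y = index_returns - risk_free
    x = benchmark_returns - risk_free
    beta, alpha = np.polyfit(x, y, 1)      # OLS slope (beta) and intercept (alpha)
    return alpha, beta

rng = np.random.default_rng(1)
benchmark = rng.normal(0.006, 0.04, 60)    # 60 months of hypothetical returns
cpai = 0.001 + 0.95 * benchmark + rng.normal(0, 0.01, 60)
rf = 0.001                                 # hypothetical monthly risk-free rate

print(sharpe_ratio(cpai, rf), jensens_alpha_beta(cpai, benchmark, rf))
```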
26

Daniel, Frédéric. "Sur les communications globales dans les réseaux à topologie de de Bruijn et de Kautz." Toulouse 3, 1996. http://www.theses.fr/1996TOU30248.

Abstract:
The performance of global communication algorithms on a reconfigurable multiprocessor machine depends on its characteristics, its programming environment and its topology. Fixed-degree graphs such as the CCC (cube-connected cycles) have been studied and used to configure such a machine. Elementary notions from graph theory and integer arithmetic allow us to study the properties of the de Bruijn and Kautz graph families. These graphs have fixed degree but are also of arbitrary order, of almost optimal diameter, and directed. The various global communication patterns are related to one another by equivalence or subsumption. Improvements to the classical algorithms are therefore sought for the most complex communications (all-to-all). A performance-evaluation model for communication algorithms is used to compare them by simulation. The implementation on a T-Node machine in the Helios and 3LC environments is carried out with two systems of tables that can describe all the proposed algorithms and that make each processor autonomous in its message handling. Comparing the real performance of the various topological and algorithmic solutions makes it possible to evaluate them against one another, and also to discuss the validity of the model used for the simulation.
27

Hanegan, Andrew Aaron. "Industrial energy use indices." Texas A&M University, 2007. http://hdl.handle.net/1969.1/85849.

Abstract:
Energy use index (EUI) is an important measure of energy use which normalizes energy use by dividing by building area. Energy use indices and associated coefficients of variation are computed for major industry categories for electricity and natural gas use in small and medium-sized plants in the U.S. The data is very scattered with the coefficients of variation (CoV) often exceeding the average EUI for an energy type. The combined CoV from all of the industries considered, which accounts for 8,200 plants from all areas of the continental U.S., is 290%. This paper discusses EUIs and their variations based on electricity and natural gas consumption. Data from milder climates appears more scattered than that from colder climates. For example, the ratio of the average of coefficient of variations for all industry types in warm versus cold regions of the U.S. varies from 1.1 to 1.7 depending on the energy sources considered. The large data scatter indicates that predictions of energy use obtained by multiplying standard EUI data by plant area may be inaccurate and are less accurate in warmer than colder climates (warmer and colder are determined by annual average temperature weather data). Data scatter may have several explanations, including climate, plant area accounting, the influence of low cost energy and low cost buildings used in the south of the U.S. This analysis uses electricity and natural gas energy consumption and area data of manufacturing plants available in the U.S. Department of Energy's national Industrial Assessment Center (IAC) database. The data there come from Industrial Assessment Centers which employ university engineering students, faculty and staff to perform energy assessments for small to medium-sized manufacturing plants. The nation-wide IAC program is sponsored by the U.S. Department of Energy. A collection of six general energy saving recommendations were also written with Texas manufacturing plants in mind. These are meant to provide an easily accessible starting point for facilities that wish to reduce costs and energy consumption, and are based on common recommendations from the Texas A&M University IAC program.
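As a simple worked illustration of the index and the scatter statistic referred to above, the sketch below computes an EUI (annual energy use divided by floor area) and a coefficient of variation for a few invented plants.

```python
# Hypothetical numbers only: EUI = annual energy / floor area; CoV = std / mean.
import statistics

plants = [  # (annual electricity use in kWh, floor area in square feet)
    (1_200_000, 60_000),
    (3_500_000, 80_000),
    (800_000, 25_000),
    (5_000_000, 150_000),
]

euis = [energy / area for energy, area in plants]      # kWh per sq ft per year
mean_eui = statistics.mean(euis)
cov = statistics.stdev(euis) / mean_eui * 100          # as a percentage

print([round(e, 1) for e in euis], round(mean_eui, 1), f"{cov:.0f}%")
```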
APA, Harvard, Vancouver, ISO, and other styles
28

Crawford, Ian Anderson. "Cost of living indices." Thesis, University College London (University of London), 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286226.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Louzas, Andre Luiz da Conceição. "Cidades : floresta de indices." [s.n.], 2004. http://repositorio.unicamp.br/jspui/handle/REPOSIP/284878.

Full text
Abstract:
Advisor: Roberto Berton de Angelo
Master's dissertation - Universidade Estadual de Campinas, Instituto de Artes
Master's
APA, Harvard, Vancouver, ISO, and other styles
30

Florio, Anna. "Indices de Maslov asymptotiques." Thesis, Avignon, 2019. http://www.theses.fr/2019AVIG0422.

Full text
Abstract:
Nous étudions l’indice de Maslov asymptotique pour de difféomorphismes de surface. En mots, cette quantité est la limite de la vitesse angulaire moyenne des vecteurs tangents qui évoluent sous l’action de la différentielle du difféomorphisme. Pour des applications déviant la verticale de l’anneau, nous montrons que l’ensemble des points d’indice zéro a une dimension d’Hausdorff supérieure ou égale à 1. Dans le cadre des applications déviant la verticale conservatives, nous prouvons que chaque région d’instabilité bornée a un ensemble de mesure de Lebesgue positive de points d’indice non nul. Finalement, nous étudions cet indice en présence de points périodiques hyperboliques avec intersections homoclines transverses, en donnant des exemples de points auxquels l’indice de Maslov asymptotique n’existe pas
We study the asymptotic Maslov index for surface diffeomorphisms. Roughly speaking, this quantity is the limit of the average rotational velocity of tangent vectors which evolve under the action of the differential of the diffeomorphism. For twist maps on the annulus, we prove that the set of points of zero index has Hausdorff dimension at least one. In the framework of conservative twist maps, we show that every bounded instability region has a positive Lebesgue measure set of points with non-zero index. Finally, we study this index in the presence of periodic hyperbolic points with transverse homoclinic intersections, providing examples of points at which the asymptotic Maslov index does not exist
APA, Harvard, Vancouver, ISO, and other styles
31

Carvalho, Luís Filipe Cordeiro. "Do market indices overreact?" Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19756.

Full text
Abstract:
Master's in Finance
Entende-se por sobreajustamento de mercado quando o optimismo (pessimismo) por parte dos investidores leva a que o preço da ação de uma empresa suba (desça) de tal forma que esta é considerada vencedora (perdedora), num período compreendido de 3 a 5 anos. Esta dissertação estuda a hipótese de sobreajustamento nos índices de mercado. Utilizando dados mensais de dezembro 1970 a dezembro 2018 de 49 índices internacionais da Morgan Stanley Capital, foi estudada a hipótese de sobreajustamento nos índices de Mercado para períodos de 3 e 5 anos. Ao invés de retornos cumulativos os retornos foram calculados segundo a metodologia de investimento passivo com o intuito de evitar enviesamentos. Foram encontradas fortes reversões dos retornos para períodos de investimento de 3 anos, e estatisticamente significativos. Quando implementada a estratégia de comprar os maiores perdedores e vender os maiores vencedores apenas em índices de mercados desenvolvidos, encontra-se igualmente evidência para a hipótese de sobreajustamento, ainda que os retornos sejam menos expressivos do ponto de vista económico. Foi igualmente encontrada evidência para a hipótese de sobreajustamento quando se considera períodos de investimento de 5 anos, com resultados estatisticamente significativos. Os perdedores não só têm rendibilidades superiores aos ganhadores, como apresentam menos risco. Independentemente do período de investimento, anos da amostra ou do tipo de mercado o Beta do portfólio de perdedores foi, em média, inferior ao do portfólio de vencedores. Não obstante estes resultados, a hipótese de sobreajustamento aparenta não ser estacionária no tempo.
Investors are said to overreact when their sentiment drives the price of a certain security up (down) enough to make it the biggest winner (loser), with the overreaction period usually taken to be as long as 3 or 5 years. This dissertation studies the overreaction hypothesis in market indices. Using end-of-month data from December 1970 to December 2018 from 49 Morgan Stanley Capital International indices, we studied the overreaction hypothesis on market indices for 3- and 5-year investment periods. Instead of Cumulative Average Returns, the returns were computed as Holding Period Returns to avoid the upward bias. We found strong return reversals for 3-year investment periods, which were statistically significant at the 5% significance level. However, the returns might be weaker depending on the time period we consider. When implemented only in developed markets there is still evidence supporting the overreaction hypothesis, although the excess returns are economically weaker. Evidence for the overreaction hypothesis was also found when 5-year investment periods were considered. Not only did losers outperform winners, but they were also less risky than winners. Regardless of the market, investment period and/or time period considered, the losers' portfolio beta was always smaller than the winners' portfolio beta. Notwithstanding these results, the overreaction strategy is sensitive to the time periods considered, which highlights the possibility that its success is not time-stationary.
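A minimal sketch of the return calculation described here (the function names, toy price paths and portfolio size k are illustrative assumptions, not the dissertation's code): holding period returns rank the indices over the formation window, and the biggest losers and winners form the contrarian portfolios.

```python
import numpy as np

def holding_period_return(prices):
    """Buy-and-hold return over the whole formation window (used instead of
    cumulating monthly average returns, which introduces an upward bias)."""
    prices = np.asarray(prices, dtype=float)
    return prices[-1] / prices[0] - 1.0

def contrarian_portfolios(index_prices, k=5):
    """Rank market indices by formation-period HPR and return the k biggest
    losers (to buy) and the k biggest winners (to sell)."""
    hpr = {name: holding_period_return(p) for name, p in index_prices.items()}
    ranked = sorted(hpr, key=hpr.get)
    return ranked[:k], ranked[-k:]

# Tiny illustrative universe of three "indices" with made-up price paths.
prices = {"A": [100, 90, 80], "B": [100, 110, 130], "C": [100, 101, 99]}
losers, winners = contrarian_portfolios(prices, k=1)
print(losers, winners)   # ['A'] ['B']
```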
APA, Harvard, Vancouver, ISO, and other styles
32

LIMA, Jamerson Felipe Pereira. "Representações cache eficientes para montagem de fragmentos baseada em grafos de de Bruijn de sequências biológicas." Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/25352.

Full text
Abstract:
FACEPE
O estudo dos genomas dos seres vivos têm sido impulsionado pelos avanços na biotecnologia ocorridos desde a segunda metade do Séc. XX. Particularmente, o desenvolvimento de novas plataformas de sequenciamento de alto desempenho ocasionou a proliferação de dados brutos de fragmentos de sequências nucleicas. Todavia, a montagem dos fragmentos de DNA continua a ser uma das etapas computacionais mais desafiadoras, visto que a abordagem tradicional desse problema envolve a solução de problemas intratáveis sobre grafos obtidos a partir dos fragmentos, como, por exemplo, a determinação de caminhos hamiltonianos. Mais recentemente, soluções baseadas nos grafos de de Bruijn (gdB), também obtidos a partir dos fragmentos sequenciados, têm sido adotadas. Nesse caso, o problema da montagem relaciona-se com o de encontrar caminhos eulerianos, o qual possui soluções polinomiais conhecidas. Embora apresentem custo computacional teórico mais baixo, ainda demandam, na prática, grande poder computacional, face ao volume de dados envolvido. Por exemplo, a representação empregada por algumas ferramentas para o gdB do genoma humano pode alcançar centenas de gigabytes. Faz-se necessário, portanto, o emprego de técnicas algorítmicas para manipulação eficiente de dados em memória interna e externa. Nas arquiteturas computacionais modernas, a memória é organizada de forma hierárquica em camadas: cache, memória RAM, disco, rede, etc. À medida que o nível aumenta, cresce a capacidade de armazenagem, porém também o tempo de acesso. O ideal, portanto, seria manter a informação limitada o mais possível aos níveis inferiores, diminuindo a troca de dados entre níveis adjacentes. Para tal, uma das abordagens são os chamados algoritmos cache-oblivious, que têm por objetivo reduzir o número de trocas de dados entre a memória cache e a memória principal sem que seja necessário para tanto introduzir parâmetros relativos à configuração da memória ou instruções para a movimentação explícita de blocos de memória. Uma outra alternativa que vêm ganhando ímpeto mais recentemente é o emprego de estruturas de dados ditas sucintas, ou seja, estruturas que representam a informação usando uma quantidade ótima de bits do ponto de vista da teoria da informação. Neste trabalho, foram implementadas três representações para os gdB, com objetivo de avaliar seus desempenhos em termos da utilização eficiente da memória cache. A primeira corresponde a uma implementação tradicional com listas de adjacências, usada como referência, a segunda é baseada em estruturas de dados cache-oblivious, originalmente descritas para percursos em grafos genéricos, e a terceira corresponde a uma representação sucinta específica para os gdB, com otimizações voltadas ao melhor uso da cache. O comportamento dessas representações foi avaliado quanto à quantidade de acessos à memória em dois algoritmos, nomeadamente o percurso em profundidade (DFS) e o tour euleriano. Os resultados experimentais indicam que as versões tradicional e cache-oblivious genérica apresentam, nessa ordem, os menores números absolutos de cache misses e menores tempos de execução para dados pouco volumosos. Entretanto, a versão sucinta apresenta melhor desempenho em termos relativos, considerando-se a proporção entre o número de cache misses e a quantidade de acessos à memória, sugerindo melhor desempenho geral em situações extremas de utilização de memória.
The study of genomes was boosted by advancements in biotechnology that have taken place since the second half of the 20th century. In particular, the development of new high-throughput sequencing platforms induced the proliferation of nucleic sequence raw data. However, DNA assembly, i.e., the reconstitution of the original DNA sequence from its fragments, is still one of the most computationally challenging steps. The traditional approach to this problem concerns the solution of intractable problems over graphs that are built over the fragments, such as the determination of Hamiltonian paths. More recently, new solutions based on the so-called de Bruijn graphs, also built over the sequenced fragments, have been adopted. In this case, the assembly problem relates to finding Eulerian paths, for which polynomial solutions are known. However, those solutions, in spite of having a smaller computational cost, still demand huge computational power in practice, given the large amount of data involved. For example, the representation employed by some assembly tools for a gdB of the human genome may reach hundreds of gigabytes. Therefore, it is necessary to apply algorithmic techniques to efficiently manipulate data in internal and external memory. In modern computer architectures, memory is organized in hierarchical layers: cache, RAM, disc, network, etc. As the level grows, so does the storage capacity, but so does the access time (latency); that is, the speed of access decreases. The aim is to keep information as much as possible in the fastest levels of the hierarchy and to reduce the need for block exchange between adjacent levels. One approach is cache-oblivious algorithms, which try to reduce the exchange of blocks between cache and main memory without knowing explicitly the physical parameters of the cache. Another alternative is the use of succinct data structures, which store data in space close to the information-theoretic minimum. In this work, three representations of the de Bruijn graph were implemented, aiming to assess their performance in terms of cache memory efficiency. The first implementation is based on a traditional traversal algorithm and an adjacency-list representation of the de Bruijn graph, and is used as a reference. The second implementation is based on cache-oblivious algorithms originally described for traversal in general graphs. The third implementation is based on a succinct representation of the de Bruijn graph, with optimizations for cache memory usage. These implementations were assessed in terms of the number of accesses to cache memory in the execution of two algorithms, namely depth-first search (DFS) and the Eulerian tour. Experimental results indicate that the traditional and generic cache-oblivious representations show, in this order, the lowest absolute numbers of cache misses and the shortest running times for small amounts of data. However, the succinct representation shows better performance in relative terms, when the proportion between the number of cache misses and the total number of memory accesses is taken into account, suggesting better overall performance in situations of extreme memory usage.
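As a rough illustration of the baseline representation compared in this dissertation (a plain adjacency-list de Bruijn graph built from read k-mers, with an iterative depth-first traversal; the succinct and cache-oblivious structures themselves are not reproduced here, and the read and k value are toy assumptions):

```python
from collections import defaultdict

def build_de_bruijn(reads, k):
    """Adjacency-list de Bruijn graph: nodes are (k-1)-mers and one edge is
    added per k-mer occurrence (the 'traditional' reference representation)."""
    adj = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            adj[kmer[:-1]].append(kmer[1:])
    return adj

def dfs(adj, start):
    """Iterative depth-first traversal over the graph's nodes."""
    seen, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(v for v in adj.get(node, []) if v not in seen)
    return order

adj = build_de_bruijn(["ACGTACG"], k=3)
print(dfs(adj, "AC"))   # ['AC', 'CG', 'GT', 'TA']
```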
APA, Harvard, Vancouver, ISO, and other styles
33

Rudewicz, Justine. "Méthodes bioinformatiques pour l'analyse de données de séquençage dans le contexte du cancer." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0635/document.

Full text
Abstract:
Le cancer résulte de la prolifération excessive de cellules qui dérivent toutes de la même cellule initiatrice et suivent un processus Darwinien de diversification et de sélection. Ce processus est défini par l'accumulation d'altérations génétiques et épigénétiques dont la caractérisation est un élément majeur pour pouvoir proposer une thérapie ciblant spécifiquement les cellules tumorales. L'avènement des nouvelles technologies de séquençage haut débit permet cette caractérisation à un niveau moléculaire. Cette révolution technologique a entraîné le développement de nombreuses méthodes bioinformatiques. Dans cette thèse, nous nous intéressons particulièrement au développement de nouvelles méthodes computationnelles d'analyse de données de séquençage d'échantillons tumoraux permettant une identification précise d'altérations spécifiques aux tumeurs et une description fine des sous populations tumorales. Dans le premier chapitre, il s'agît d'étudier des méthodes d'identification d'altérations ponctuelles dans le cadre de séquençage ciblé, appliquées à une cohorte de patientes atteintes du cancer du sein. Nous décrivons deux nouvelles méthodes d'analyse, chacune adaptée à une technologie de séquençage, spécifiquement Roche 454 et Pacifique Biosciences.Dans le premier cas, nous avons adapté des approches existantes au cas particulier de séquences de transcrits. Dans le second cas, nous avons été confronté à un bruit de fond élevé entraînant un fort taux de faux positifs lors de l'utilisation d'approches classiques. Nous avons développé une nouvelle méthode, MICADo, basée sur les graphes de De Bruijn et permettant une distinction efficace entre les altérations spécifiques aux patients et les altérations communes à la cohorte, ce qui rend les résultats exploitables dans un contexte clinique. Le second chapitre aborde l'identification d'altérations de nombre de copies. Nous décrivons l'approche mise en place pour leur identification efficace à partir de données de très faible couverture. L'apport principal de ce travail consiste en l'élaboration d'une stratégie d'analyse statistique afin de mettre en évidence des changements locaux et globaux au niveau du génome survenus durant le traitement administré à des patientes atteintes de cancer du sein. Notre méthode repose sur la construction d'un modèle linéaire permettant d'établir des scores de différences entre les échantillons avant et après traitement. Dans le troisième chapitre, nous nous intéressons au problème de reconstruction clonale. Cette problématique récente est actuellement en plein essor, mais manque cependant d'un cadre formel bien établi. Nous proposons d'abord une formalisation du problème de reconstruction clonale. Ensuite nous utilisons ce formalisme afin de mettre en place une méthode basée sur les modèles de mélanges Gaussiens. Cette méthode utilise les altérations ponctuelles et de nombre de copies - comme celles abordées dans les deux chapitres précédents - afin de caractériser et quantifier les différentes populations clonales présentes dans un échantillon tumoral
Cancer results from the excessive proliferation of cells descending from the same founder cell and following a Darwinian process of diversification and selection. This process is defined by the accumulation of genetic and epigenetic alterations whose characterization is a key element for establishing a therapy that would specifically target tumor cells. The advent of new high-throughput sequencing technologies enables this characterization at the molecular level. This technological revolution has led to the development of numerous bioinformatics methods. In this thesis, we are particularly interested in the development of new computational methods for the analysis of sequencing data of tumor samples allowing precise identification of tumor-specific alterations and an accurate description of tumor subpopulations. In the first chapter, we explore methods for identifying single nucleotide alterations in targeted sequencing data and apply them to a cohort of breast cancer patients. We introduce two new methods of analysis, each tailored to a particular sequencing technology, namely Roche 454 and Pacific Biosciences. In the first case, we adapted existing approaches to the particular case of transcript sequencing. In the second case, when using conventional approaches, we were confronted with high background noise resulting in a high rate of false positives. We have developed a new method, MICADo, based on the De Bruijn graphs and making possible an effective distinction between patient-specific alterations and alterations common to the cohort, which makes the results usable in a clinical context. The second chapter deals with the identification of copy number alterations. We describe the approach put in place for their efficient identification from very low coverage data. The main contribution of this work is the development of a strategy for statistical analysis in order to emphasise local and global changes in the genome that occurred during the treatment administered to patients with breast cancer. Our method is based on the construction of a linear model to establish scores of differences between samples before and after treatment. In the third chapter, we focus on the problem of clonal reconstruction. This problem has recently gathered a lot of interest, but it still lacks a well-established formal framework. We first propose a formalization of the clonal reconstruction problem. Then we use this formalism to put in place a method based on Gaussian mixture models. Our method uses single nucleotide and copy number alterations - such as those discussed in the previous two chapters - to characterize and quantify the different clonal populations present in a tumor sample
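As a hedged illustration of the mixture-model idea mentioned for clonal reconstruction (this is not MICADo nor the thesis's model; the variant allele frequencies below are invented), a Gaussian mixture over allele frequencies can group variants into putative clonal populations:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical variant allele frequencies (VAFs) from one tumour sample;
# a Gaussian mixture is one simple way to group variants into putative
# clonal populations (the thesis formalises this much more carefully).
vafs = np.array([0.48, 0.51, 0.47, 0.26, 0.24, 0.23, 0.11, 0.12]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(vafs)
labels = gmm.predict(vafs)
for mean, weight in zip(gmm.means_.ravel(), gmm.weights_):
    print(f"clone at VAF ~ {mean:.2f}, fraction of variants ~ {weight:.2f}")
```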
APA, Harvard, Vancouver, ISO, and other styles
34

Green, Shawn Jeffrey. "Extensions of the Power Group Enumeration Theorem." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7526.

Full text
Abstract:
The goal of this paper is to develop extensions of Polya enumeration methods which count orbits of functions. De Bruijn, Harary, and Palmer all worked on this problem and created generalizations which involve permuting the codomain and domain of functions simultaneously. We cover their results and specifically extend them to the case where the group of permutations need not be a direct product of groups. In this situation, we develop a way of breaking the orbits into subclasses based on a characteristic of the functions involved. Additionally, we develop a formula for the number of orbits made up of bijective functions. As a final extension, we also expand the set we are acting on to be the set of all relations between finite sets. Then we show how to count the orbits of relations.
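For context, the classical special case that these extensions build on is Burnside/Cauchy-Frobenius counting of orbits of functions when only the domain is permuted; the sketch below is that simplest case only (an illustration, not the paper's generalized formulas, which also permute the codomain):

```python
def num_cycles(perm):
    """Number of cycles of a permutation given as a tuple of images."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start in seen:
            continue
        cycles += 1
        i = start
        while i not in seen:
            seen.add(i)
            i = perm[i]
    return cycles

def count_orbits(group, num_colors):
    """Burnside / Cauchy-Frobenius: orbits of colourings of the domain under
    a permutation group acting on the domain only (the classical case that
    power group enumeration generalizes)."""
    return sum(num_colors ** num_cycles(g) for g in group) // len(group)

# Two-coloured necklaces of length 4 up to rotation: 6 orbits.
rotations = [tuple((i + r) % 4 for i in range(4)) for r in range(4)]
print(count_orbits(rotations, 2))   # 6
```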
APA, Harvard, Vancouver, ISO, and other styles
35

Demazeau, Yves. "Niveaux de représentation pour la vision par ordinateur : indices d'image et indices de scène." Phd thesis, Grenoble INPG, 1986. http://tel.archives-ouvertes.fr/tel-00322886.

Full text
Abstract:
The first part analyses the methods used in the field, justifies the existence of different levels of representation and processing of visual information, and then sets out the five levels (IMAGE, IMAGE CUES, SCENE CUES, OBJECT and SCENE) that we distinguish. The second part describes, from the IMAGE level to the SCENE CUES level, an experiment in inferring shapes from contours in the restricted domain of solid objects of the blocks world. Building on the results obtained, the third part explains how stereovision and colour fit into the advocated levels, and how they make it possible to reach the OBJECT level for thin, flexible objects. These contributions are illustrated by an application to the industrial domain: the identification and localization of electrical wires in the context of automating the production of cable-connector assemblies
APA, Harvard, Vancouver, ISO, and other styles
36

Demazeau, Yves. "Niveaux de représentation pour la vision par ordinateur indices d'image et indices de scène." Grenoble 2 : ANRT, 1986. http://catalogue.bnf.fr/ark:/12148/cb37597056c.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Demazeau, Yves Latombe Jean-Claude. "Niveaux de représentation pour la vision par ordinateur indices d'image et indices de scène /." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00322886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Christie, Gregory J., and University of Lethbridge Faculty of Arts and Science. "Electrophysiological indices of feedback processing." Thesis, Lethbridge, Alta. : University of Lethbridge, Dept. of Neuroscience, 2010, 2010. http://hdl.handle.net/10133/2551.

Full text
Abstract:
All sentient organisms use contextual information to assess the amount of reward associated with a particular behavior. Human beings have arguably evolved the most sophisticated of these mechanisms and are capable of integrating information over a long duration of time to accurately assess the expected outcome of a chosen action. This thesis used electroencephalography (EEG) to measure how the human brain processes rewarding and punishing feedback in a gambling-type game with variable risk and reward. Experiment 1 determined that phase-locked (evoked) and non-phase-locked (induced) electroencephalographic activity share only partially overlapping generators in human mediofrontal cortex. Experiment 2 determined that the magnitude of certain evoked EEG components during reward processing tracked subsequent changes in bets placed in the next round. These results extend the body of literature by assessing the overlap between induced and evoked EEG components and the role of evoked activity in affecting future decision making.
xii, 76 leaves : ill. (chiefly col.) ; 29 cm
APA, Harvard, Vancouver, ISO, and other styles
39

Etowa, Christian Bassey. "Inherently safer process design indices." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp05/MQ63510.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Rhodes, Peter. "Indices of nitric oxide production." Thesis, University of Cambridge, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.307982.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Parker, Jonathan Duguid Edward. "Environmental reporting and environmental indices." Thesis, University of Cambridge, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358483.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Süss, Stephan. "Volatility indices and their derivatives /." [S.l.] : [s.n.], 2009. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=018685872&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Schoonmaker, Benjamin L. "Clean Indices of Common Rings." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7027.

Full text
Abstract:
Lee and Zhou introduced the clean index of rings in 2004. Motivated by this work, Basnet and Bhattacharyya introduced both the weak clean index of rings and the nil clean index of rings and Cimpean and Danchev introduced the weakly nil clean index of rings. In this work, we calculate each of these indices for the rings ℤ/nℤ and matrix rings with entries in ℤ/nℤ. A generalized index is also introduced.
APA, Harvard, Vancouver, ISO, and other styles
44

Gahramanov, Ilmar. "Superconformal indices, dualities and integrability." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17568.

Full text
Abstract:
In dieser Arbeit behandeln wir exakte, nicht-perturbative Ergebnisse, die mithilfe der superkonformen Index-Technik, in supersymmetrischen Eichtheorien mit vier Superladungen (d. h. N=1 Supersymmetrie in vier Dimensionen und N=2 in drei Dimensionen) gewonnen wurden. Wir benutzen die superkonforme Index-Technik um mehrere Dualitäts Vermutungen in supersymmetrischen Eichtheorien zu testen. Wir führen Tests der dreidimensionalen Spiegelsymmetrie und Seiberg ähnlicher Dualitäten durch. Das Ziel dieser Promotionsarbeit ist es moderne Fortschritte in nicht-perturbativen supersymmetrischen Eichtheorien und ihre Beziehung zu mathematischer Physik darzustellen. Im Speziellen diskutieren wir einige interessante Identitäten der Integrale, denen einfache und hypergeometrische Funktionen genügen und ihren Bezug zu supersymmetrischen Dualitäten in drei und vier Dimensionen. Methoden der exakten Berechnungen in supersymmertischen Eichtheorien sind auch auf integrierbare statistische Modelle anwendbar. Dies wird im letzten Kapitel der vorliegenden Arbeit behandelt.
In this thesis we discuss exact, non-perturbative results achieved using superconformal index technique in supersymmetric gauge theories with four supercharges (which is N = 1 supersymmetry in four dimensions and N = 2 supersymmetry in three). We use the superconformal index technique to test several duality conjectures for supersymmetric gauge theories. We perform tests of three-dimensional mirror symmetry and Seiberg-like dualities. The purpose of this thesis is to present recent progress in non-perturbative supersymmetric gauge theories in relation to mathematical physics. In particular, we discuss some interesting integral identities satisfied by basic and elliptic hypergeometric functions and their relation to supersymmetric dualities in three and four dimensions. Methods of exact computations in supersymmetric theories are also applicable to integrable statistical models, which we discuss in the last chapter of the thesis.
APA, Harvard, Vancouver, ISO, and other styles
45

Heaver, Becky. "Psychophysiological indices of recognition memory." Thesis, University of Sussex, 2012. http://sro.sussex.ac.uk/id/eprint/39455/.

Full text
Abstract:
It has recently been found that during recognition memory tests participants' pupils dilate more when they view old items compared to novel items. This thesis sought to replicate this novel "Pupil Old/New Effect" (PONE) and to determine its relationship to implicit and explicit mnemonic processes, the veracity of participants' responses, and the analogous Event-Related Potential (ERP) old/new effect. Across 9 experiments, pupil-size was measured with a video-based eye-tracker during a variety of recognition tasks, and, in the case of Experiment 8, with concurrent Electroencephalography (EEG). The main findings of this thesis are that:
- the PONE occurs in a standard explicit test of recognition memory but not in "implicit" tests of either perceptual fluency or artificial grammar learning;
- the PONE is present even when participants are asked to give false behavioural answers in a malingering task, or are asked not to respond at all;
- the PONE is present when attention is divided both at learning and during recognition;
- the PONE is accompanied by a posterior ERP old/new effect;
- the PONE does not occur when participants are asked to read previously encountered words without making a recognition decision;
- the PONE does not occur if participants preload an "old/new" response;
- the PONE is not enhanced by repetition during learning.
These findings are discussed in the context of current models of recognition memory and other psychophysiological indices of mnemonic processes. It is argued that together these findings suggest that the increase in pupil-size which occurs when participants encounter previously studied items is not under conscious control and may reflect primarily recollective processes associated with recognition memory.
APA, Harvard, Vancouver, ISO, and other styles
46

Rose, Philip. "Indices of fatty acid metabolism." Thesis, Sheffield Hallam University, 1992. http://shura.shu.ac.uk/20296/.

Full text
Abstract:
During the fed state energy requirements are met by glycolysis of carbohydrates. When the stores of carbohydrates are diminished, for example during prolonged fasting, metabolism switches to that of fatty acids. Fatty acids are broken down by β-oxidation within the mitochondrial matrix. Prolonged fasting results in the production of ketone bodies. These can also be used as an energy source by the brain. In defects of fatty acid metabolism where individual steps are inhibited or blocked, such as medium chain acyl-CoA dehydrogenase deficiency, an abnormal accumulation of the metabolites that lead up to the block, or their breakdown products, is often seen. Non-compensatory levels of metabolites following the site of the defect also occur. In the fed state, when flux through the defective fatty acid pathway is minimal, metabolic profiles can appear completely normal. It is therefore often necessary to induce metabolic stress before a full laboratory investigation can proceed. Interpretation of individual metabolite quantitations can often be difficult and a variation of 'normal values' according to metabolic state can lead to misinterpretation. Comparison between the concentrations of related metabolites along the fatty acid metabolic pathway may diminish the need for exact knowledge of the metabolic state and by correlation plotting could clearly identify abnormal relationships. This thesis describes an investigation into the efficacy of paired metabolite correlation plots in preliminary detection of defects in fatty acid metabolism. In certain inborn errors of fatty acid metabolism where the β-oxidation cycle is affected, abnormal urine metabolite patterns have been used as diagnostic markers. Similar patterns have been reported in the urine of healthy newborns and termed generalised neonatal dicarboxylic aciduria. This report documents an investigation of the connections between generalised neonatal dicarboxylic aciduria and a number of overlying factors (viz. type of feed, gender, sibling history of sudden infant death syndrome and urine carnitine levels). Also discussed is the development of two laboratory assays. A radio-enzymatic method was developed and used to determine the levels of total, free and acyl carnitine in urine or blood. Suberyl, hexanoyl, and phenylpropionyl glycine in urine can be quantitated by use of stable isotope internal standards and gas chromatography / electron impact mode mass spectrometry. Synthesis and calibration of such internal standards is described. Finally, methods used to culture and store skin fibroblasts from biopsy samples are included as an appendix. These fibroblasts can then be used in various diagnostic tests such as carbon dioxide release and electron transfer flavoprotein enzyme analysis. The costs encountered during tissue culture could be avoided by medium term storage of the biopsy material prior to culture to await sufficient clinical evidence to merit such analyses. Preliminary results of extended cryogenic storage and viability of recovered specimens are also included.
APA, Harvard, Vancouver, ISO, and other styles
47

Gutiérrez, Hernández Julián Eli. "Drought Indices in Panama Canal." Master's thesis, Česká zemědělská univerzita v Praze, 2015. http://www.nusl.cz/ntk/nusl-258961.

Full text
Abstract:
Panama has a warm, wet, tropical climate. Unlike countries that are farther from the equator, Panama does not experience seasons marked by changes in temperature. Instead, Panama's seasons are divided into Wet and Dry. The Dry Season generally begins around mid-December, but this may vary by as much as 3 to 4 weeks. Around this time, strong northeasterly winds known as "trade winds" begin to blow and little or no rain may fall for many weeks in a row. Daytime air temperatures increase slightly to around 30-31 Celsius (86-88 Fahrenheit), but nighttime temperatures remain around 22-23 Celsius (72-73 Fahrenheit). Relative humidity drops throughout the season, reaching average values as low as 70 percent. The Wet Season usually begins around May 1, but again this may vary by 1 or 2 weeks. May is often one of the wettest months, especially in the Panama Canal area, so the transition from the very dry conditions at the end of the Dry Season to the beginning of the Wet Season can be very dramatic. With the arrival of the rain, temperatures cool down a little during the day and the trade winds disappear. Relative humidity rises quickly and may hover around 90 to 100% throughout the Wet Season. Drought forecasts can be an effective tool for mitigating some of the more adverse consequences of drought. This thesis compares forecasts of drought indices based on seven different artificial neural network models. The analyzed drought indices are the SPI and the SPEI; the ANN drought forecasts were derived for the period 1985-2014 for the Panama Canal basin, using seven of the sixty-one hydro-meteorological stations existing in the basin. Annual rainfall is 1784 mm. The meteorological data were obtained from the Panama Canal Authority, Section of Water Resources, Panama. The performance of all the models was compared using ME, MAE, RMSE, NS, and PI. The results of the drought index forecasts, summarized by the model performance indices, show that the Panama Canal basin does experience drought problems. Even though Panama is generally seen as a wet country, droughts can cause severe problems. Significant drought conditions are observed in the indices based on precipitation and potential evaporation examined in this thesis. The Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) were used to quantify drought in the Panama Canal basin at multiple time scales within the period 1985-2014. The results indicate that drought indices based on different variables show the same major drought events. Drought indices based on precipitation and potential evaporation are more variable in time than drought indices based on discharge. The spatial distribution of meteorological drought is uniform over the Panama Canal basin.
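A simplified sketch of how an SPI value is obtained (illustrative only; the synthetic rainfall record is an assumption, and the operational SPI and SPEI computations in the thesis involve additional steps such as handling zero-precipitation months and, for SPEI, subtracting potential evapotranspiration):

```python
import numpy as np
from scipy import stats

def spi(precip, scale=3):
    """Standardized Precipitation Index (simplified): aggregate precipitation
    over `scale` months, fit a gamma distribution, and map each accumulation
    through the fitted CDF to a standard normal score."""
    series = np.convolve(precip, np.ones(scale), mode='valid')
    shape, loc, sc = stats.gamma.fit(series, floc=0)
    cdf = stats.gamma.cdf(series, shape, loc=loc, scale=sc)
    return stats.norm.ppf(cdf)

rng = np.random.default_rng(0)
monthly_rain = rng.gamma(shape=2.0, scale=75.0, size=360)   # synthetic 30-year record
print(spi(monthly_rain, scale=3)[:5].round(2))
```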
APA, Harvard, Vancouver, ISO, and other styles
48

Mermoz, Vincent. "Les indices en procédure pénale." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS094/document.

Full text
Abstract:
Prenant jadis la forme d’un « signe de divinité » sous le règne des ordalies, l’indice désignerait dorénavant tout « événement, objets ou traces » amené à forger la conviction du juge. Les traits de l’indice se reconnaissent ainsi à la capacité qu’il possède de rendre possible le fait recherché. En ce sens, l’indice ne peut – aujourd’hui comme hier – indiquer directement la culpabilité, bien qu’il demeure – depuis toujours – en capacité de faire présumer l’imputabilité du fait prohibé à l’encontre des personnes suspectées. Les effets attachés à l’indice sont convoités de tout temps, sans pourtant que quiconque ne parvienne à les expliquer. L’indice rend possible, dispose d’un pouvoir spécifique et s’intègre parfaitement au sein du raisonnement dialectique intrinsèque à la matière juridique. Les juristes usent des présomptions fondées sur l’indice aux fins de compenser les lacunes inhérentes à la preuve en matière pénale. Indéniablement, l’indice occupe une place centrale dans le processus probatoire. Néanmoins, un constat de carence s’impose : les raisons pour lesquelles l’indice produit cet effet à la fois si caractéristique et par là même si commun, ne sont jamais explicitées. Sans doute trop prosaïque, l’indice s’est éclipsé à l’arrière-plan d’une preuve pénale devenue prépondérante par la gravité des conséquences juridiques qu’elle justifie. Un regard cette fois plus aiguisé aurait néanmoins pressenti l’enjeu universel d’une telle notion : depuis toujours, l’indice constitue le socle de la preuve. Fondements d’une réalité morcelée que la justice souhaite reconstituer, les indices jalonnent le cheminement procédural jusqu’à l’obtention d’une preuve. Les différentes phases de la procédure pénale s’organisent au rythme des indices interprétés, autant qu’ils forgent une conviction sur le déroulement des faits prohibés. L’intime conviction ancre de fait l’interprétation de l’indice au cœur de la preuve pénale et, avec elle, la perfectibilité d’une construction humaine au centre de la procédure pénale
Once taking the form of a "sign of divinity" in the trial by ordeal, the clue would henceforth designate any "event, object or trace" that might forge the judge's conviction. The characteristics of the clue can thus be recognized by its ability to make the desired result possible. In this sense, the clue cannot – today as in the past – directly indicate guilt, although it has always been able to allow for the presumption that the prohibited fact is imputable to suspects. The effects of the clue have always been sought after, without anyone ever being able to explain them. The clue makes possible, has specific power and fits perfectly into the dialectical reasoning inherent in the legal field. Lawyers use clue-based presumptions to compensate for the deficiencies inherent in criminal evidence. Undeniably, the clue occupies a central place in the probationary process. Nevertheless, a finding of deficiency is inevitable: the reasons why the clue produces this effect, which is so characteristic and therefore so common, are never explained. Undoubtedly too prosaic, the clue has vanished into the background of criminal evidence that has become preponderant because of the seriousness of the legal consequences it justifies. A sharper look this time would nevertheless have foreshadowed the universal importance of such a notion: since time immemorial, the clue has been the foundation of proof. As the foundations of a fragmented reality that the justice system wishes to reconstruct, the clues mark out the procedural path until evidence is obtained. The various phases of criminal proceedings are organised according to the rhythm of the interpreted clues, as much as they forge a conviction about the conduct of the prohibited acts. The intimate conviction in fact anchors the interpretation of the clue at the heart of the criminal evidence and, with it, the perfectibility of a human construction at the centre of criminal procedure
APA, Harvard, Vancouver, ISO, and other styles
49

Oladele, Oluwatosin Seun. "Low volatility alternative equity indices." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/15691.

Full text
Abstract:
In recent years, there has been increasing interest in constructing low volatility portfolios. These portfolios have shown significant outperformance when compared with market capitalization-weighted portfolios. This study analyses low volatility portfolios in South Africa using sectors instead of individual stocks as building blocks for portfolio construction. The empirical results from back-testing these portfolios show significant outperformance when compared with their market capitalization-weighted equity benchmark counterpart (ALSI). In addition, a further analysis in this study delves into the construction of low volatility portfolios using the Top 40 and Top 100 stocks. The results also show significant outperformance over the market capitalization-weighted portfolio (ALSI), with the portfolios constructed using the Top 100 stocks performing better than those constructed using the Top 40 stocks. Finally, the low volatility portfolios are also blended with typical portfolios (the ALSI and SWIX indices) in order to establish their usefulness as effective portfolio strategies. The results show that the low volatility Single Index Model (SIM) portfolio and the equally weighted low-beta portfolio (Lowbeta) were the superior performers based on their Sharpe ratios.
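As an illustrative sketch of the kind of computation behind such comparisons (the inverse-volatility weighting rule and the random return data below are assumptions for the example, not the study's exact SIM or low-beta constructions):

```python
import numpy as np

def sharpe_ratio(returns, rf=0.0, periods=12):
    """Annualised Sharpe ratio computed from periodic (e.g. monthly) returns."""
    excess = np.asarray(returns) - rf / periods
    return excess.mean() / excess.std(ddof=1) * np.sqrt(periods)

def inverse_vol_weights(return_matrix):
    """One simple low-volatility weighting: weight each sector by the inverse
    of its return volatility, then normalise the weights to sum to one."""
    vols = return_matrix.std(axis=0, ddof=1)
    w = 1.0 / vols
    return w / w.sum()

rng = np.random.default_rng(1)
sector_returns = rng.normal(0.01, [0.03, 0.05, 0.08], size=(120, 3))  # 10y, 3 sectors
print("annualised Sharpe of sector 0:", round(sharpe_ratio(sector_returns[:, 0]), 2))
print("inverse-volatility weights:", inverse_vol_weights(sector_returns).round(3))
```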
APA, Harvard, Vancouver, ISO, and other styles
50

Shorrer, Ran I. "Essays on Indices and Matching." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:17467351.

Full text
Abstract:
In many decision problems, agents base their actions on a simple objective index, a single number that summarizes the available information about objects of choice independently of their particular preferences. The first chapter proposes an axiomatic approach for deriving an index which is objective and, nevertheless, can serve as a guide for decision making for decision makers with different preferences. Unique indices are derived for five decision making settings: the Aumann and Serrano (2008) index of riskiness (additive gambles), a novel generalized Sharpe ratio (for a standard portfolio allocation problem), Schreiber's (2013) index of relative riskiness (multiplicative gambles), a novel index of delay embedded in investment cashflows (for a standard capital budgeting problem), and the index of appeal of information transactions (Cabrales et al., 2014). All indices share several attractive properties in addition to satisfying the axioms. The approach may be applicable in other settings in which indices are needed. The second chapter uses conditions from previous literature on complete orders to generate partial orders in two settings: information acquisition and segregation. In the setting of information acquisition, I show that the partial order prior independent investment dominance (Cabrales et al., 2013) refines Blackwell's partial order in the strict sense. In the segregation setting, I show that without the requirement of completeness, all of the axioms suggested in Frankel and Volij (2011) are satisfied simultaneously by a partial order which refines the standard partial order (Lasso de la Vega and Volij, 2014). In the third and fourth chapters, I turn to examine matching markets. Although no stable matching mechanism can induce truth-telling as a dominant strategy for all participants (Roth, 1982), recent studies have presented conditions under which truthful reporting by all agents is close to optimal (Immorlica and Mahdian, 2005; Kojima and Pathak, 2009; Lee, 2011). The third chapter demonstrates that in large, balanced, uniform markets using the Men-Proposing Deferred Acceptance Algorithm, each woman's best response to truthful behavior by all other agents is to truncate her list substantially. In fact, the optimal degree of truncation for such a woman goes to 100% of her list as the market size grows large. Comparative statics for optimal truncation strategies in general one-to-one markets are also provided: reduction in risk aversion and reduced correlation across preferences each lead agents to truncate more. So while several recent papers focused on the limits of strategic manipulation, the results serve as a reminder that without preconditions ensuring truthful reporting, there exists a potential for significant manipulation even in settings where agents have little information. Recent findings of Ashlagi et al. (2013) demonstrate that in unbalanced random markets, the change in expected payoffs is small when one reverses which side of the market "proposes," suggesting there is little potential gain from manipulation. Inspired by these findings, the fourth chapter studies the implications of imbalance on strategic behavior in the incomplete information setting. I show that the "long" side has significantly reduced incentives for manipulation in this setting, but that the same doesn't always apply to the "short" side. I also show that risk aversion and correlation in preferences affect the extent of optimal manipulation as in the balanced case.
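As a small illustration of the first chapter's starting point, the Aumann and Serrano (2008) index of riskiness of a gamble g is the unique R > 0 solving E[exp(-g/R)] = 1; the sketch below solves this numerically (the example gamble and the bracketing interval are illustrative assumptions, not the thesis's data).

```python
import numpy as np
from scipy.optimize import brentq

def aumann_serrano_riskiness(outcomes, probs):
    """Aumann-Serrano (2008) riskiness of a gamble g with positive expectation
    and some chance of loss: the unique R > 0 solving E[exp(-g / R)] = 1."""
    g, p = np.asarray(outcomes, float), np.asarray(probs, float)
    f = lambda r: float(np.dot(p, np.exp(-g / r))) - 1.0
    # Bracket chosen for this small example; a general solver would pick it
    # adaptively to avoid overflow for very large losses.
    return brentq(f, 1.0, 1e9)

# Illustrative gamble: win 105 or lose 100 with equal probability.
print(round(aumann_serrano_riskiness([105, -100], [0.5, 0.5]), 1))
```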
Business Economics
APA, Harvard, Vancouver, ISO, and other styles