To see the other types of publications on this topic, follow the link: Algorithmic identity.

Dissertations / Theses on the topic 'Algorithmic identity'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Algorithmic identity.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Bellesia, Francesca <1990>. "Individuals in the Workplatform. Exploring Implications for Work Identity and Algorithmic Reputation Management." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9259/1/Thesis%20Final%20February%202020.pdf.

Full text
Abstract:
In the new world of work, workers not only change jobs more frequently, but also perform independent work on online labor markets. As they accomplish smaller and shorter jobs at the boundaries of organizations, employment relationships become unstable and career trajectories less linear. These new working conditions question the validity of existing management theories and call for more studies explaining gig workers' behavior. The aim of this dissertation is to contribute to this emerging body of knowledge by (I) exploring how gig workers shape their work identity on online platforms, and (II) investigating how algorithmic reputation changes the dynamics of quality signaling and affects gig workers' behavior. Chapter 1 introduces the debate on gig work, detailing why existing theories and definitions cannot be applied to this emergent workforce. Chapter 2 provides a systematic review of studies on individual work in online labor markets and identifies areas for future research. Chapter 3 describes the exploratory, qualitative methodology applied to collect and analyze data. Chapter 4 presents the first empirical paper, investigating how the process of work identity construction unfolds for gig workers. It explores how digital platforms, intended both as providers of technological features and as online environments, affect this process. Findings reveal that the online environment constrains the action of workers, who are pushed to take advantage of the platform's technological features to succeed. This interplay leads workers to develop an entrepreneurial orientation. Drawing on signaling theory, Chapter 5 examines how gig workers interpret algorithmically calculated reputation and with what consequences for their experience. Results show that, after complying with the platform's rules in the first period, freelancers respond to algorithmic management through different strategies, i.e. manipulation, nurturing relationships, and living with it. Although reputation scores standardize information on freelancers' quality and, apparently, freelancers' work, this study shows that responses to algorithmic control can be diverse.
APA, Harvard, Vancouver, ISO, and other styles
2

Hayman, Bernard Akeem. "Community, Identity, and Agency in the Age of Big Social Data: A Place-based Study on Literacies, Perceptions, and Responses of Digital Engagement." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1586602013429227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Putigny, Herve. "Dynamiques socioculturelles et algorithmiques d'entrée dans une communauté cybercriminelles." Electronic Thesis or Diss., Bourgogne Franche-Comté, 2022. http://www.theses.fr/2022UBFCH019.

Full text
Abstract:
The increasing generalization of conventional computer technologies and the emergence of the Internet of Things are probably founding principles of a new digital world order. The omnipresence and interdependence of networks and the convergence of the components of cyberspace catalyze the formation of social groups in an object such as the Internet, and in particular in social media. This thesis studies the birth of cybercriminal communities, as well as the rites and ties of affinity of the members of these societies. Beyond highlighting the elements of socio-cultural dynamics related to the creation and maintenance of these digital spaces, the thesis aims to show that there is a form of logic in the process of integration and the development of these virtual cybercriminal communities.
APA, Harvard, Vancouver, ISO, and other styles
4

Poudel, Bhuwan Krishna Som. "Algorithms to identify failure pattern." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-23011.

Full text
Abstract:
This project report was written for "Algorithms to Identify Failure Pattern" at NTNU (Norwegian University of Science and Technology), IME (Faculty of Information Technology, Mathematics and Electrical Engineering) and IDI (Department of Computer Science). In software applications, there are three types of failure pattern: point pattern, block pattern and stripe pattern. The purpose of the report is to prepare an algorithm that identifies the pattern in a software application. Only the theoretical concept is presented in this report. My goal is to compare these algorithms and find the most efficient one. The report was written in the period from February 2012 to June 2013.
APA, Harvard, Vancouver, ISO, and other styles
5

Wei, Hao. "Evolving test environments to identify faults in swarm robotics algorithms." Thesis, University of York, 2018. http://etheses.whiterose.ac.uk/22022/.

Full text
Abstract:
Swarm robotic systems are often considered to be dependable. However, there is little empirical evidence or theoretical analysis showing that dependability is an inherent property of all swarm robotic systems. Recent literature has identified potential issues with respect to dependability within certain types of swarm robotic control algorithms. However, there is little research on the testing of swarm robotic systems; this provides the motivation for developing a novel testing method for swarm robotic systems. An evolutionary testing method is proposed in this thesis to autonomously identify unintended behaviours during the execution of swarm robotic systems. Three case studies are carried out on a flocking control algorithm, a foraging algorithm, and a task partitioning algorithm. These case studies not only show that the evolutionary testing method has the ability to identify faults in swarm robotic systems, but also show that it is able to reveal failures in various swarm control algorithms. The experimental results show that the evolutionary testing method can lead to worse swarm performance and reveal more failures than random testing within the same number of computing evaluations. Moreover, the case study of the flocking control algorithm also shows that the evolutionary testing method covers more failure types than random testing. In all three case studies, the dependability of each swarm robotic system was improved by tackling the faults identified during the testing phase. Consequently, the evolutionary testing method has the potential to help the developers of swarm robotic systems design and calibrate swarm control algorithms, thereby assuring the dependability of swarm robotic systems.
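As a rough illustration of the evolutionary testing idea described above (our sketch, not the thesis's code), the loop below evolves test environments that minimize a swarm's performance score; the environment encoding and the `simulate` function are assumed placeholders.

```python
# A minimal sketch of evolutionary testing, assuming a user-supplied
# simulate(env) that runs the swarm in environment env and returns a
# performance score (higher is better). The GA evolves environments that
# drive performance down, i.e. failure-inducing test cases.
import random

def random_env():
    # Hypothetical environment encoding: obstacle count, arena size, sensor noise.
    return [random.randint(0, 20), random.uniform(5.0, 50.0), random.uniform(0.0, 1.0)]

def evolve_failing_environments(simulate, pop_size=30, generations=50):
    population = [random_env() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulate)              # worst swarm performance first
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            if random.random() < 0.2:                            # per-child mutation
                i = random.randrange(len(child))
                child[i] = random_env()[i]
            children.append(child)
        population = parents + children
    return sorted(population, key=simulate)[:5]    # most failure-inducing environments
```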
APA, Harvard, Vancouver, ISO, and other styles
6

Aimejalii, K., Keshav P. Dahal, and M. Alamgir Hossain. "GA-based learning algorithms to identify fuzzy rules for fuzzy neural networks." IEEE, 2007. http://hdl.handle.net/10454/2553.

Full text
Abstract:
Identification of fuzzy rules is an important issue in the design of a fuzzy neural network (FNN). However, there is no systematic design procedure at present. In this paper we present a genetic algorithm (GA) based learning algorithm that makes use of the known membership functions to identify the fuzzy rules from a large set of all possible rules. The proposed learning algorithm initially considers all possible rules, then uses the training data and the fitness function to perform rule selection. The proposed GA-based learning algorithm has been tested with two different sets of training data. The results obtained from the experiments are promising and demonstrate that the proposed GA-based learning algorithm can provide a reliable mechanism for fuzzy rule selection.
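A minimal sketch of GA-based rule selection in this spirit (assuming a `fitness` function that scores a rule subset on the training data; not the authors' implementation):

```python
# Candidate fuzzy rules are encoded as a bit string (1 = rule kept); the GA
# starts from random subsets of the full rule set and evolves toward subsets
# that score well on the training data.
import random

def ga_select_rules(n_rules, fitness, pop_size=40, generations=100, p_mut=0.01):
    pop = [[random.randint(0, 1) for _ in range(n_rules)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)            # best rule subsets first
        survivors = pop[: pop_size // 2]
        offspring = []
        while len(offspring) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_rules)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # bit-flip mutation
            offspring.append(child)
        pop = survivors + offspring
    return max(pop, key=fitness)
```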
APA, Harvard, Vancouver, ISO, and other styles
7

Chen, Liang. "Motif Selection Using Simulated Annealing Algorithm with Application to Identify Regulatory Elements." Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1531343206505855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Xu, Jennifer J., and Hsinchun Chen. "Fighting organized crimes: using shortest-path algorithms to identify associations in criminal networks." Elsevier, 2004. http://hdl.handle.net/10150/106207.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona. Effective and efficient link analysis techniques are needed to help law enforcement and intelligence agencies fight organized crimes such as narcotics violation, terrorism, and kidnapping. In this paper, we propose a link analysis technique that uses shortest-path algorithms, priority-first-search (PFS) and two-tree PFS, to identify the strongest association paths between entities in a criminal network. To evaluate effectiveness, we compared the PFS algorithms with crime investigators' typical association-search approach, as represented by a modified breadth-first-search (BFS). Our domain expert considered the association paths identified by PFS algorithms to be useful about 70% of the time, whereas the modified BFS algorithm's precision rates were only 30% for a kidnapping network and 16.7% for a narcotics network. Efficiency of the two-tree PFS was better for a small, dense kidnapping network, and the PFS was better for the large, sparse narcotics network.
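The PFS idea maps naturally onto a Dijkstra-style search; a minimal sketch (our illustration, assuming association strengths in (0, 1] so that taking -log turns "strongest path" into "shortest path"):

```python
# Priority-first search for the strongest association path between two
# entities. graph maps each node to a list of (neighbor, strength) pairs.
import heapq, math

def strongest_path(graph, source, target):
    dist, prev = {source: 0.0}, {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue                                  # stale heap entry
        for v, strength in graph.get(u, []):
            nd = d - math.log(strength)               # strong link = low cost
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return None                                   # no association path
    path, node = [], target
    while node != source:                             # walk predecessors back
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]
```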
APA, Harvard, Vancouver, ISO, and other styles
9

Tolley, Joseph D. "Implementation and Evaluation of an Algorithm for User Identity and Permissions for Situational Awareness Analysis." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/89907.

Full text
Abstract:
The thesis analyzes the steps and actions necessary to develop an application using a user identity management system, user permissions system, message distribution system, and message response data collection and display system to deliver timely command and control of human assets and the input of intelligence in emergency response situations. The application, MinuteMan, uniquely manages messages sent between multiple users and their parent organizations. Specifically, messages are stored, managed, and displayed to managers based on the hierarchy of organizational rank as well as the situational allowances of the users sending and receiving messages. Using an algorithm for user identity and permissions for situational awareness analysis, messages and information are sent to multiple addressees in an organization. Responses are correlated to the rank of the responding recipients in the organization, to assist the users and the parent organizations in identifying which responses have been read. Receipt of a message is acknowledged before the message can be fully read. Responses to the messages include a selection of a user status from a preset choice of statuses, and may include other response attributes required or offered by the sender of the message. The locations of responding and non-responding addressees can be mapped and tracked. The resulting solution provides improved situational awareness during emergency response situations.
APA, Harvard, Vancouver, ISO, and other styles
10

Aurangabadwala, Tehsin T. "DEVELOPMENT OF AN EXPERT ALGORITHM TO IDENTIFY RISKS ASSOCIATED WITH A RESEARCH FACILITY." Ohio University / OhioLINK, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1173823780.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Saxena, Akriti. "A SEQUENTIAL ALGORITHM TO IDENTIFY THE MIXING ENDPOINTS IN LIQUIDS IN PHARMACEUTICAL APPLICATIONS." VCU Scholars Compass, 2009. http://scholarscompass.vcu.edu/etd/1931.

Full text
Abstract:
The objective of this thesis is to develop a sequential algorithm to determine accurately and quickly at which point in time a product is well mixed, i.e. reaches a steady-state plateau, in terms of the Refractive Index (RI). An algorithm using sequential non-linear model fitting and prediction is proposed. A simulation study representing typical scenarios in a liquid manufacturing process in the pharmaceutical industry was performed to evaluate the proposed algorithm. The simulated data included autocorrelated normal errors and used the Gompertz model. A set of 27 different combinations of the parameters of the Gompertz function was considered. The results from the simulation study suggest that the algorithm is insensitive to the functional form and achieves the goal consistently with the least number of time points.
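A minimal sketch of the sequential idea (our reading of the abstract, with illustrative tolerance and window parameters): refit a Gompertz curve as each reading arrives and stop once the latest reading sits on the fitted plateau.

```python
# Sequentially fit a Gompertz curve to refractive-index readings and declare
# the mixing endpoint when the newest reading is within tol of the fitted
# plateau asymptote. Parameter names and thresholds are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))      # a is the plateau asymptote

def mixing_endpoint(times, ri_values, tol=1e-3, min_points=5):
    for n in range(min_points, len(times) + 1):
        t, y = np.asarray(times[:n]), np.asarray(ri_values[:n])
        try:
            (a, b, c), _ = curve_fit(gompertz, t, y, p0=[y[-1], 1.0, 0.1], maxfev=5000)
        except RuntimeError:
            continue                             # fit failed; wait for more data
        if abs(gompertz(t[-1], a, b, c) - a) < tol:
            return t[-1]                         # steady-state plateau reached
    return None                                  # no plateau detected yet
```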
APA, Harvard, Vancouver, ISO, and other styles
12

Hu, Jialu [Verfasser]. "Algorithms to Identify Functional Orthologs And Functional Modules from High-Throughput Data / Jialu Hu." Berlin : Freie Universität Berlin, 2015. http://d-nb.info/1064869807/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Zhang, Xiaohui. "Development and Testing of a Combined Neural-Genetic Algorithm to Identify CO2 Sequestration Candidacy Wells." Thesis, University of Louisiana at Lafayette, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1594272.

Full text
Abstract:
This study was motivated by the question of how to use statistical tools to identify candidate wells for CO2 Capture and Sequestration, based on the idea of using Artificial Neural Networks to predict the leakage index of a well. A Combined Neural-Genetic Algorithm was introduced to prevent the BP neural network from getting stuck in a local minimum, because the Genetic Algorithm simulates the survival of the fittest among individuals over consecutive generations. Based on the algorithm, 1356 lines of C code were written using Microsoft Visual Studio 2010. The Combined Neural-Genetic Algorithm developed in this thesis is able to handle large data samples with at least 10 factors. Several parameters were considered as factors that may affect the performance of the Combined Neural-Genetic Algorithm, including the population size, max epoch, error goal, probability of crossover, probability of mutation, number of neurons in the hidden layer, number of factors, and size of the data sample. The accuracy of the BP neural network and the CPU time are the two major parameters used to evaluate the performance of the Combined Neural-Genetic Algorithm. A sensitivity analysis was performed to identify the effect these factors have on the performance. Based on the results of the sensitivity analysis, some recommendations are provided about initializing these factors.
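A minimal sketch of the combined scheme under our reading of the abstract: a GA searches the weight space of a small feed-forward network, and the best individual then seeds gradient-based (BP) refinement. Sizes and rates are illustrative.

```python
# GA over flattened weights of a one-hidden-layer network; the returned
# weight vector is intended as the starting point for backpropagation.
import numpy as np

rng = np.random.default_rng(0)

def mse(w, X, y, hidden=8):
    W1 = w[: X.shape[1] * hidden].reshape(X.shape[1], hidden)
    W2 = w[X.shape[1] * hidden :].reshape(hidden, 1)
    pred = np.tanh(X @ W1) @ W2                 # forward pass
    return float(np.mean((pred.ravel() - y) ** 2))

def neural_genetic(X, y, hidden=8, pop=50, gens=100):
    n = X.shape[1] * hidden + hidden
    population = rng.normal(size=(pop, n))
    for _ in range(gens):
        errors = np.array([mse(w, X, y, hidden) for w in population])
        parents = population[np.argsort(errors)][: pop // 2]    # fittest survive
        pairs = parents[rng.integers(0, len(parents), size=(pop - len(parents), 2))]
        mask = rng.random((pop - len(parents), n)) < 0.5        # uniform crossover
        children = np.where(mask, pairs[:, 0], pairs[:, 1])
        children = children + rng.normal(scale=0.1, size=children.shape)  # mutation
        population = np.vstack([parents, children])
    return min(population, key=lambda w: mse(w, X, y, hidden))  # seed for BP
```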
APA, Harvard, Vancouver, ISO, and other styles
14

Rattay, Sonja. "Profiling Algorithms and Content Targeting - An Exploration of the Filter Bubble Phenomenon." Thesis, Malmö högskola, Fakulteten för kultur och samhälle (KS), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-22561.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Browning, Joseph Stuart. "Developing a Method to Identify Horizontal Curve Segments with High Crash Occurrences Using the HAF Algorithm." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/8809.

Full text
Abstract:
Crashes occur every day on Utah’s roadways. Curves can be particularly dangerous as they require driver focus due to potentially unseen hazards. Often, crashes occur on curves due to poor curve geometry, a lack of warning signs, or poor surface conditions. This can create conditions in which vehicles are more prone to leave the roadway, and possibly roll over. These types of crashes are responsible for many severe injuries and a few fatalities each year, which could be prevented if these areas are identified. This highlights a need for identification of curves with high crash occurrences, particularly on a network-wide scale. The Horizontal Alignment Finder (HAF) Algorithm, originally created by a Brigham Young University team in 2014, was improved to achieve 87-100 percent accuracy in finding curved segments of Utah Department of Transportation (UDOT) roadways, depending on roadway type. A tool was then developed through Microsoft Excel Visual Basic for Applications (VBA) to sort through curve and crash data to determine the number of severe and total crashes that occurred along each curve. The tool displays a list of curves with high crash occurrences. The user can sort curves by several different parameters, including various crash rates and numbers of crashes. Many curves with high crash rates have already been identified, some of which are shown in this thesis. This tool will help UDOT determine which roadway curves warrant improvement projects.
APA, Harvard, Vancouver, ISO, and other styles
16

Blackley, David, Shimin Zheng, and Winn Ketchum. "Implementing a Weighted Spatial Smoothing Algorithm to Identify a Lung Cancer Belt in the United States." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/42.

Full text
Abstract:
Lung cancer is the leading cause of cancer death in the United States, but a large fraction of cases is preventable. We use a spatial smoothing algorithm to identify a geographic pattern of high lung cancer mortality, primarily in the Southeast, which we call a lung cancer belt. Disease belts are an effective mode for conveying patterns of high incidence or mortality; formally defining this lung cancer belt may encourage increased public dialogue and more focused research. Public health officials could complement existing population lung cancer data with this information to help inform resource allocation decisions.
APA, Harvard, Vancouver, ISO, and other styles
17

Blackley, David, Shimin Zheng, and Winn Ketchum. "Implementing a Spatial Smoothing Algorithm to Help Identify a Lung Cancer Belt in the United States." Digital Commons @ East Tennessee State University, 2012. https://dc.etsu.edu/etsu-works/81.

Full text
Abstract:
Disease mapping is used to identify high risk areas, inform resource allocation and generate hypotheses. The stroke and diabetes belts in the U.S. have encouraged public dialogue and spurred research. Lung cancer is the leading cause of U.S. cancer mortality, accounting for 158135 deaths in 2010 compared to 129180 from cerebrovascular disease and 68905 from diabetes mellitus. If one exists, defining a distinct pattern of high lung cancer mortality could increase public awareness of the disease and facilitate investigation of its determinants. To begin our inquiry, we generated a map and observed an area of high lung cancer mortality, primarily in the Southeast. However, variability in county rates, likely due to small populations, made determining patterns difficult. Spatial smoothing can clarify obscured patterns. We downloaded county lung cancer mortality rates, population sizes and death counts. Concurrent incidence and mortality rates for lung cancer were nearly equivalent, so mortality was used as a proxy for risk. After downloading county population centroids with latitudes and longitudes, we implemented a median-based, weighted, two-dimensional smoothing algorithm to enhance spatial patterns by borrowing strength from neighbor counties. The algorithm selected three proximate centroids, forming a “triple,” anchored by the centroid of the county to be smoothed. The parameter for nearest neighbor (NN) counties was set to NN=10, with the number of triples (NTR) for each county NTR=(2/3)*NN, producing seven collinear triples for each county with a center angle ≥135°. Median rates for the top and bottom 50% of neighbor counties were calculated and weighted by 1/SE, creating a “window,” whereby if the original rate was between the two medians, or if the county population was sufficiently large, it was not smoothed. If the original rate was outside the window, it was adjusted according to the corresponding neighbor median. Ten iterations of this process were conducted for each county. Smoothed rates were imported to ArcGIS and joined to a U.S. counties layer. Congruent counties in or near the Southeast with rates above 64 per 100,000 were defined as one class. We observed clustering of high lung cancer mortality, comprising 724 counties and forming an arc not evident in the unsmoothed data. This area, which we define as the lung cancer belt, included nearly all of Arkansas, Kentucky and Tennessee, and portions of 16 other states. Heavily affected regions include much of the Ohio Valley, Central Appalachia, the Tennessee Valley, the Ozarks, the Mississippi Delta and the northern Gulf Coast. Smoking, a modifiable behavior, causes the majority of lung cancer deaths, and is the single leading cause of mortality in the United States. Lung cancer mortality rates presented at the state level obscure differences within states. The lung cancer belt may provide a tool to identify areas in greatest need of resources. National survey data could be utilized to determine demographic, socioeconomic and behavioral differences between the lung cancer belt and the rest of the nation.
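A minimal sketch of the median-based window smoothing described above (our simplification: the collinear-triple construction is reduced to a plain nearest-neighbour window, and the 1/SE weighting to a precision guard):

```python
# Iteratively pull each county's rate into the window defined by the medians
# of the bottom and top halves of its nearest neighbours' rates.
import numpy as np

def smooth_rates(rates, coords, se, nn=10, iterations=10):
    rates = np.asarray(rates, dtype=float).copy()
    coords = np.asarray(coords, dtype=float)
    se = np.asarray(se, dtype=float)
    for _ in range(iterations):
        new = rates.copy()
        for i in range(len(rates)):
            if se[i] < np.median(se):
                continue                           # precise rate: leave unsmoothed
            d = np.linalg.norm(coords - coords[i], axis=1)
            neighbors = np.argsort(d)[1 : nn + 1]  # skip the county itself
            vals = np.sort(rates[neighbors])
            low_med = np.median(vals[: nn // 2])   # median of bottom 50%
            high_med = np.median(vals[nn // 2 :])  # median of top 50%
            if rates[i] < low_med:                 # outside the window:
                new[i] = low_med                   # adjust toward neighbours
            elif rates[i] > high_med:
                new[i] = high_med
        rates = new
    return rates
```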
APA, Harvard, Vancouver, ISO, and other styles
18

Ishi, Soares de Lima Leandro. "De novo algorithms to identify patterns associated with biological events in de Bruijn graphs built from NGS data." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1055/document.

Full text
Abstract:
The main goal of this thesis is the development, improvement and evaluation of methods to process massively sequenced data, mainly short and long RNA-sequencing reads, to eventually help the community answer some biological questions, especially in the transcriptomic and alternative splicing contexts. Our initial objective was to develop methods to process second-generation RNA-seq data through de Bruijn graphs to contribute to the literature on alternative splicing, which was explored in the first three works. The first paper (Chapter 3, paper [77]) explored the issue that repeats bring to transcriptome assemblers if not addressed properly. We showed that the sensitivity and the precision of our local alternative splicing assembler increased significantly when repeats were formally modeled. The second (Chapter 4, paper [11]) shows that annotating alternative splicing events with a single approach leads to missing out a large number of candidates, many of which are significant. Thus, to comprehensively explore the alternative splicing events in a sample, we advocate for the combined use of both mapping-first and assembly-first approaches. Given that we have a huge number of bubbles in de Bruijn graphs built from real RNA-seq data, which are unfeasible to analyse in practice, in the third work (Chapter 5, papers [1, 2]) we explored theoretically how to efficiently and compactly represent the bubble space through a bubble generator. Exploring and analysing the bubbles in the generator is feasible in practice and can be complementary to state-of-the-art algorithms that analyse a subset of the bubble space. Collaborations and advances in sequencing technology encouraged us to work in other subareas of bioinformatics, such as genome-wide association studies, error correction, and hybrid assembly. Our fourth work (Chapter 6, paper [48]) describes an efficient method to find and interpret unitigs highly associated with a phenotype, especially antibiotic resistance, making genome-wide association studies more amenable to bacterial panels, especially plastic ones. In our fifth work (Chapter 7, paper [76]), we evaluate the extent to which existing long-read DNA error correction methods are capable of correcting high-error-rate RNA-seq long reads. We conclude that no tool outperforms all the others across all metrics or is the most suited in all situations, and that the choice should be guided by the downstream analysis. RNA-seq long reads provide a new perspective on how to analyse transcriptomic data, since they are able to describe the full-length sequences of mRNAs, which was not possible with short reads in several cases, even by using state-of-the-art transcriptome assemblers. As such, in our last work (Chapter 8, paper [75]) we explore a hybrid alternative splicing assembly method, which makes use of both short and long reads, in order to list alternative splicing events in a comprehensive manner, thanks to short reads, guided by the full-length context provided by the long reads.
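For readers unfamiliar with the data structure at the heart of these works, a toy de Bruijn graph construction (our illustration): nodes are (k-1)-mers and each k-mer of a read contributes an edge; a pair of reads differing in the middle produces the kind of bubble the thesis mines for alternative-splicing events.

```python
# Build a de Bruijn graph from reads: map each (k-1)-mer prefix to the set of
# (k-1)-mer suffixes that follow it in some read.
from collections import defaultdict

def de_bruijn(reads, k):
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i : i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

# Two reads sharing both ends but differing in the middle create a bubble:
# the path splits after "CG" and rejoins at "AC".
print(dict(de_bruijn(["ACGTAC", "ACGGAC"], 3)))
```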
APA, Harvard, Vancouver, ISO, and other styles
19

Kalla, Caroline. "Fay's identity in the theory of integrable systems." Phd thesis, Université de Bourgogne, 2011. http://tel.archives-ouvertes.fr/tel-00622289.

Full text
Abstract:
Fay's identity on Riemann surfaces is a powerful tool in the context of algebro-geometric solutions to integrable equations. This relation generalizes a well-known identity for the cross-ratio function in the complex plane. It makes it possible to establish relations between theta functions and their derivatives. This offers a complementary approach to algebro-geometric solutions of integrable equations, with certain advantages with respect to the use of Baker-Akhiezer functions. It has been successfully applied by Mumford et al. to the Korteweg-de Vries, Kadomtsev-Petviashvili and sine-Gordon equations. Following this approach, we construct algebro-geometric solutions to the Camassa-Holm and Dym type equations, as well as solutions to the multi-component nonlinear Schrödinger equation and the Davey-Stewartson equations. Solitonic limits of these solutions are investigated when the genus of the associated Riemann surface drops to zero. Moreover, we present a numerical evaluation of algebro-geometric solutions of integrable equations when the associated Riemann surface is real.
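As a hedged illustration (ours, not the thesis's notation): at genus zero the prime form reduces to E(x, y) = x - y, and Fay's identity degenerates to the elementary three-term relation behind the cross-ratio, which can be checked by expanding both sides:

```latex
\[
  (x - z)(y - w) \;-\; (x - w)(y - z) \;=\; (x - y)(z - w),
\]
% dividing through by (x - w)(y - z) rewrites this as a relation for the
% cross-ratio \lambda = \frac{(x - z)(y - w)}{(x - w)(y - z)}.
```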
APA, Harvard, Vancouver, ISO, and other styles
20

Macdonald, Kristian I. "Development and Validation of an Administrative Data Algorithm to Identify Adults who have Endoscopic Sinus Surgery for Chronic Rhinosinusitis." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35148.

Full text
Abstract:
Objective: 1) To conduct a systematic review of the accuracy of Chronic Rhinosinusitis (CRS) identification in administrative databases; 2) To develop an administrative data algorithm to identify CRS patients who have endoscopic sinus surgery (ESS). Methods: A chart review was performed for all ESS surgical encounters at The Ottawa Hospital from 2011-12. Cases were defined as encounters in which ESS was performed for Otolaryngologist-diagnosed CRS. An algorithm to identify patients who underwent ESS for CRS was developed using diagnostic and procedural codes within health administrative data. This algorithm was internally validated. Results: Only three studies meeting the inclusion criteria were identified in the systematic review, and they showed inaccurate CRS identification. The final algorithm, using administrative and chart review data, found that encounters having at least one CRS diagnostic code and one ESS procedural code had excellent accuracy for identifying ESS: sensitivity 96.0%, specificity 100%, and positive predictive value 95.4%. Internal validation showed similar accuracy. Conclusion: Most published administrative data studies examining CRS do not consider the accuracy of case identification. We identified a simple algorithm based on administrative database codes that accurately identifies ESS-CRS encounters.
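The resulting case definition is simple enough to state as a rule; a minimal sketch (the code sets below are illustrative placeholders, not the study's actual code lists):

```python
# An encounter is flagged as ESS-for-CRS when it carries at least one CRS
# diagnostic code and at least one ESS procedural code.
CRS_DIAGNOSIS_CODES = {"J32.0", "J32.4", "J32.9"}   # assumed ICD-10 examples
ESS_PROCEDURE_CODES = {"1ET87", "1EW87"}            # assumed intervention codes

def is_ess_for_crs(encounter):
    # encounter: dict with "diagnoses" and "procedures" lists of codes
    has_crs = any(c in CRS_DIAGNOSIS_CODES for c in encounter["diagnoses"])
    has_ess = any(c in ESS_PROCEDURE_CODES for c in encounter["procedures"])
    return has_crs and has_ess
```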
APA, Harvard, Vancouver, ISO, and other styles
21

Grenet, Bruno. "Représentations des polynômes, algorithmes et bornes inférieures." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2012. http://tel.archives-ouvertes.fr/tel-00770148.

Full text
Abstract:
Computational complexity is the study of the resources, such as time and memory, needed to solve a problem algorithmically. Within this framework, algebraic complexity theory is the study of the algorithmic complexity of problems of an algebraic nature concerning polynomials. In this thesis, we study several aspects of algebraic complexity. On the one hand, we are interested in the expressiveness of matrix determinants as representations of polynomials in Valiant's complexity model. We show that symmetric matrices have the same expressiveness as general matrices as soon as the characteristic of the field differs from two, but that this is no longer the case in characteristic two. We also construct the most compact known representation of the permanent by a determinant. On the other hand, we study the algorithmic complexity of algebraic problems. We show that deciding the existence of roots of a system of n homogeneous polynomials in n variables is NP-hard. In connection with the question "VP = VNP?", the algebraic version of "P = NP?", we obtain a lower bound for the computation of the permanent of a matrix by an arithmetic circuit, and we exhibit links between this problem and polynomial identity testing. Finally, we provide efficient algorithms for the factorization of lacunary bivariate polynomials.
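For context (standard definitions, not specific to the thesis), the determinant and the permanent of an n x n matrix differ only by the permutation signs, yet are conjectured to have very different circuit complexity (VP vs VNP):

```latex
\[
  \det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)},
  \qquad
  \operatorname{per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}.
\]
```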
APA, Harvard, Vancouver, ISO, and other styles
22

Cosa, Liñán Alejandro. "Analytical fusion of multimodal magnetic resonance imaging to identify pathological states in genetically selected Marchigian Sardinian alcohol-preferring (msP) rats." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90523.

Full text
Abstract:
Alcohol abuse is one of the most alarming issues for the health authorities. It is estimated that at least 23 million European citizens are affected by alcoholism, causing a cost of around 270 million euros. Excessive alcohol consumption is related to physical harm and, although it damages most body organs, the liver, pancreas, and brain are more severely affected. Not only physical harm is associated with alcohol-related disorders: other psychiatric disorders such as depression often co-occur, and alcohol is present in many violent behaviors and traffic injuries. Altogether this reflects the high complexity of alcohol-related disorders, suggesting the involvement of multiple brain systems. With the emergence of non-invasive diagnosis techniques such as neuroimaging or EEG, many neurobiological factors have been shown to be fundamental in the acquisition and maintenance of addictive behaviors, relapse risk, and the validity of available treatment alternatives. Alterations in brain structure and function reflected in non-invasive imaging studies have been repeatedly investigated. However, the extent to which imaging measures may precisely characterize and differentiate pathological stages of the disease, often accompanied by other pathologies, is not clear. The use of animal models has elucidated the role of neurobiological mechanisms paralleling alcohol misuse. Thus, combining animal research with non-invasive neuroimaging studies is a key tool in advancing the understanding of the disorder. As the volume of data of very diverse nature available in clinical and research settings increases, an integration of data sets and methodologies is required to explore multidimensional aspects of psychiatric disorders. Complementing conventional mass-univariate statistics, interest in the predictive power of statistical machine learning applied to neuroimaging data is currently growing in the scientific community. This doctoral thesis has covered most of the aspects mentioned above. Starting from a well-established animal model in alcohol research, Marchigian Sardinian rats, we performed multimodal neuroimaging studies at several stages of the alcohol experimental design, including the etiological mechanisms modulating high alcohol consumption (in comparison to Wistar control rats), alcohol consumption, and treatment with the opioid antagonist Naltrexone, a well-established drug in clinics but with heterogeneous response. Multimodal magnetic resonance imaging acquisition included Diffusion Tensor Imaging, structural imaging, and the calculation of magnetic-derived relaxometry maps. We designed an analytical framework based on algorithms widely used in the neuroimaging field, Random Forest and Support Vector Machine, combined in a wrapping fashion. The designed approach was applied to the same dataset with two different aims: exploring the validity of the approach to discriminate experimental stages at subject level, and establishing predictive models at voxel level to identify key anatomical regions modified during the course of the experiment. As expected, the combination of multiple magnetic resonance imaging modalities resulted in enhanced predictive power (between 3 and 16%) with heterogeneous modality contributions. Surprisingly, we identified some inborn alterations correlating with high alcohol preference, and thalamic neuroadaptations related to Naltrexone efficacy. As well, reproducible contributions of DTI- and relaxometry-related biomarkers were repeatedly identified, guiding further studies in alcohol research. In summary, this research demonstrates the feasibility of incorporating multimodal neuroimaging, machine learning algorithms, and animal research in advancing the understanding of alcohol-related disorders.
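One plausible reading of the "wrapped" Random Forest and Support Vector Machine combination, sketched with scikit-learn (our illustration, not the thesis's pipeline): forest importances select informative voxel features, which an SVM then classifies.

```python
# Random Forest feature selection wrapped around an SVM classifier.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def build_model():
    selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0))
    return make_pipeline(selector, SVC(kernel="rbf", C=1.0))

# Usage: model = build_model(); model.fit(X_train, y_train); model.score(X_test, y_test)
```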
APA, Harvard, Vancouver, ISO, and other styles
23

Zitouni, Mohammed. "L’étude et l’implémentation des algorithmes de couplages sur des courbes hyperelliptiques sur des corps premiers." Electronic Thesis or Diss., Paris 8, 2021. http://www.theses.fr/2021PA080031.

Full text
Abstract:
Looking for new groups other than the multiplicative group to design more constructive protocols in cryptography has been a challenge since 2000. Several groups have emerged, such as the group of rational points of an elliptic curve and the Jacobian of a hyperelliptic curve. Furthermore, pairings have become an even more practical tool for designing new protocols in cryptography, such as identity-based encryption and short signatures. This thesis studies the implementation of pairing algorithms on hyperelliptic curves over prime fields. On the one hand, we consider the choice of the hyperelliptic curves to be used and the construction of genus-two hyperelliptic curves with an ordinary Jacobian over a large prime field. On the other hand, we improve the pairing computations on different hyperelliptic curves. The Tate pairing is implemented on ordinary Jacobians of genus-2 curves over large prime fields at several security levels. We optimize the non-negligible number of operations involved, to make the cost of using pairings in cryptography more reasonable. Finally, we give a concrete identity-based encryption scheme using the Tate pairing over a genus-2 hyperelliptic curve.
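The property that makes pairings useful here is standard and worth stating: a pairing e : G1 x G2 -> GT on groups of prime order r is bilinear and non-degenerate,

```latex
\[
  e(aP,\, bQ) = e(P, Q)^{ab} \qquad \text{for all } a, b \in \mathbb{Z}/r\mathbb{Z},
\]
```

which lets an identity string, hashed to a group element, play the role of a public key in Boneh-Franklin-style identity-based encryption.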
APA, Harvard, Vancouver, ISO, and other styles
24

Wu, Yanan. "ON THE PREDICTIVE PERFORMANCE OF THE STOCK RETURNS BY USING THE MARKOV-SWITCHING MODELS." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-412930.

Full text
Abstract:
This paper applies both a basic predictive regression and Markov Regime-Switching regressions to predict excess stock returns in the US and Swedish stock markets. The analysis shows that the Markov Regime-Switching regression models outperform the linear ones in out-of-sample forecasting, which is due to the fact that the regime-switching models better capture economic expansions and recessions.
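A two-regime Markov-switching regression in its standard form (our sketch of the model class, not necessarily the paper's exact specification): the coefficients depend on a latent state that follows a Markov chain,

```latex
\[
  r_{t+1} = \alpha_{s_t} + \beta_{s_t}^{\top} x_t + \varepsilon_{t+1},
  \qquad
  \Pr(s_{t+1} = j \mid s_t = i) = p_{ij}, \quad s_t \in \{1, 2\},
\]
```

so expansion and recession periods get their own intercepts and slopes.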
APA, Harvard, Vancouver, ISO, and other styles
25

Nuñovero, Daniela, Ernesto Rodríguez, Jimmy Armas, and Paola Gonzalez. "A Technological Solution to Identify the Level of Risk to Be Diagnosed with Type 2 Diabetes Mellitus Using Wearables." Repositorio Academico - UPC, 2021. http://hdl.handle.net/10757/653787.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository due to restrictions imposed by the publisher. This paper proposes a technological solution using a predictive analysis model to identify and reduce the level of risk for type 2 diabetes mellitus (T2DM) through a wearable device. Our proposal is based on previous models that use the auto-classification algorithm together with the addition of new risk factors, which provide a greater contribution to the results of the presumptive diagnosis of users who want to check their level of risk. The purpose is the primary prevention of type 2 diabetes mellitus by a non-invasive method composed of the following phases: (1) capture and storage of risk factors; (2) predictive analysis model; (3) presumptive results and recommendations; and (4) preventive treatment. The main contribution is in the development of the proposed application. Peer reviewed.
APA, Harvard, Vancouver, ISO, and other styles
26

Huttner, Liane. "La décision de l'algorithme : étude de droit privé sur les relations entre l'humain et la machine." Electronic Thesis or Diss., Paris 1, 2022. https://ecm.univ-paris1.fr/nuxeo/site/esupversions/1519e5dc-267a-46bf-8e75-4699db7e89fe.

Full text
Abstract:
In France, decision-making algorithms, that is, algorithms that assist or replace human decisions, have been regulated since 1978. However, given the constant development of these tools and their ever-broadening use, the effectiveness of this regulation has come into question. In particular, the law today seems to focus on the protection of the person subjected to an automated decision. In doing so, it neglects one of the most important issues at stake: the protection of the author of the decision and of the human character of the decision itself. This thesis argues that it is only through a subtle balance between the protection of the author and of the subject of a given decision that the law can properly regulate decision-making algorithms. Using this approach, the two classic categories of decision-making algorithms, namely algorithms serving as the only basis for a decision and algorithms serving as a mere aid to the decision, can be reinterpreted. At the same time, the rules governing the design and use of such algorithms can be read through the same double function of protecting the author and the subject of the decision. In the first case, it is the very faculty of deciding that is protected: the prohibition of fully automated decisions in certain domains and the strict framing of their lawfulness are two illustrations. In the second case, it is the right not to be subjected to a decision taken by a machine that must be put forward, through mechanisms such as the right to obtain human intervention, to request a re-examination of the decision, or to contest it.
APA, Harvard, Vancouver, ISO, and other styles
27

PIROZZI, MICHELA. "Development of a simulation tool for measurements and analysis of simulated and real data to identify ADLs and behavioral trends through statistics techniques and ML algorithms." Doctoral thesis, Università Politecnica delle Marche, 2020. http://hdl.handle.net/11566/272311.

Full text
Abstract:
With a growing population of elderly people, the number of subjects at risk of pathology is rapidly increasing. Many research groups are studying pervasive solutions to continuously and unobtrusively monitor fragile subjects in their homes, reducing health-care costs and supporting medical diagnosis. Anomalous behaviors while performing activities of daily living (ADLs) and variations in behavioral trends are of great importance. Measuring ADLs requires considering a significant number of parameters that affect the measurement, such as sensor and environment characteristics or sensor placement. Since the best sensor configuration, minimizing costs and maximizing accuracy, cannot feasibly be studied in the real context, simulation tools are being developed as a powerful alternative. This thesis presents several contributions on this topic. In the following research work, a study of a measurement chain for ADLs, consisting of PIR sensors and an ML algorithm, is conducted, and a simulation tool in the form of a Web Application has been developed to generate datasets and to simulate how the measurement chain reacts to varying sensor configurations. Starting from the results of the eWare project, the simulation tool is intended to support technicians, developers and installers, speeding up analysis and monitoring times, allowing rapid identification of changes in behavioral trends, guaranteeing system performance monitoring, and supporting the study of the best sensor-network configuration for a given environment. The UNIVPM Home Care Web App offers the chance to create ad hoc datasets related to ADLs and to conduct analyses thanks to statistical algorithms applied to the data. To measure ADLs, machine learning algorithms have been implemented in the tool, and five different tasks have been identified. To test the validity of the developed instrument, six case studies divided into two categories were considered. To the first category belong studies aimed at 1) discovering the best sensor configuration while keeping environmental characteristics and user behavior constant, and 2) defining the most performant ML algorithms. The second category aims to prove the stability of the implemented algorithm and its collapse condition by varying user habits. Noise perturbation of the data was applied to all case studies. The results show the validity of the generated datasets. By maximizing the sensor network it is possible to reduce the ML error to 0.8%. Because cost is a key factor in this scenario, the fourth case study showed that by minimizing the sensor configuration it is possible to drastically reduce the cost with a more than reasonable ML error of around 11.8%. The results in ADL measurement can be considered more than satisfactory.
APA, Harvard, Vancouver, ISO, and other styles
28

Nosan, Klara. "Zero problems in polynomial models." Electronic Thesis or Diss., Université Paris Cité, 2024. http://www.theses.fr/2024UNIP7008.

Full text
Abstract:
Polynomial models are ubiquitous in computer science, arising in the study of automata and formal languages, optimisation, game theory, control theory, and numerous other areas. In this thesis, we consider models described by polynomial systems of equations and difference equations, where the system evolves through a set of discrete time steps with polynomial updates at every step. We explore three aspects of "zero problems" for polynomial models: zero testing for algebraic expressions given by polynomials, determining the existence of zeros for polynomial systems, and determining the existence of zeros in sequences satisfying recurrences with polynomial coefficients.
In the first part, we study identity testing for algebraic expressions involving radicals. That is, given a k-variate polynomial represented by an algebraic circuit and k real radicals, we examine the complexity of determining whether the polynomial vanishes on the radical input. We improve on the existing PSPACE bound, placing the problem in coNP assuming the Generalised Riemann Hypothesis (GRH). We further consider a restricted version of the problem, where the inputs are square roots of odd primes, showing that it can be decided in randomised polynomial time assuming GRH. We next consider systems of polynomial equations, and study the complexity of determining whether a system of polynomials with polynomial coefficients has a solution. We present a number-theoretic approach to the problem, generalising techniques used for identity testing, and show that the problem belongs to the complexity class AM assuming GRH. We discuss how the problem relates to determining the dimension of a complex variety, which is also known to belong to AM assuming GRH. In the final part of this thesis, we turn our attention to sequences satisfying recurrences with polynomial coefficients. We study the question of whether zero is a member of a polynomially recursive sequence arising as a sum of two hypergeometric sequences. More specifically, we consider the problem for sequences whose polynomial coefficients split over the field of rationals Q. We show its relation to the values of the Gamma function evaluated at rational points, which allows us to establish decidability of the problem under the assumption of the Rohrlich-Lang conjecture. We then propose a different approach based on studying the prime divisors of the sequence, allowing us to establish unconditional decidability of the problem.
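A toy illustration of the zero-testing problem studied here, deciding whether a polynomial vanishes at real radical inputs, can be given with sympy, which settles such small instances symbolically; the thesis concerns the complexity of the general problem, not this method. The polynomials below are made-up examples.

```python
# Decide whether polynomials vanish when evaluated at square roots of primes.
import sympy as sp

x, y = sp.symbols("x y")
poly = x * y - sp.sqrt(6)            # does this vanish at x = sqrt(2), y = sqrt(3)?
value = poly.subs({x: sp.sqrt(2), y: sp.sqrt(3)})
print(sp.simplify(value) == 0)       # True: sqrt(2)*sqrt(3) - sqrt(6) == 0

poly2 = x**2 + y**2 - 5              # and this one, at the same radical point?
print(sp.simplify(poly2.subs({x: sp.sqrt(2), y: sp.sqrt(3)})) == 0)  # True: 2 + 3 = 5
```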
APA, Harvard, Vancouver, ISO, and other styles
29

Alim, Sophia. "Vulnerability in online social network profiles : a framework for measuring consequences of information disclosure in online social networks." Thesis, University of Bradford, 2011. http://hdl.handle.net/10454/5507.

Full text
Abstract:
The increase in online social network (OSN) usage has led to personal details known as attributes being readily displayed in OSN profiles. This can leave profile owners vulnerable to privacy and social engineering attacks, including identity theft, stalking and re-identification by linking. Due to the need to address privacy in OSNs, this thesis presents a framework to quantify the vulnerability of a user's OSN profile. Vulnerability is defined as the likelihood that the personal details displayed on an OSN profile will spread due to the actions of the profile owner and their friends with regard to information disclosure. The vulnerability measure consists of three components. The individual vulnerability is calculated by allocating weights to the disclosed profile attribute values and to neighbourhood features which may contribute to the personal vulnerability of the profile user. The relative vulnerability is the collective vulnerability of the profile's friends. The absolute vulnerability is the overall profile vulnerability, which considers the individual and relative vulnerabilities. The first part of the framework details a data retrieval approach to extract MySpace profile data to test the vulnerability algorithm on real cases. The profile structure presented significant extraction problems because of the dynamic nature of the OSN. Issues concerning the usability of a standard dataset, including ethical concerns, are discussed. Application of the vulnerability measure to the extracted data emphasised how so-called 'private profiles' are not immune to vulnerability issues, because some profile details can still be displayed on private profiles. The second part of the framework presents the normalisation of the measure, in the context of a formal approach which includes the development of axioms and the validation of the measure on a larger dataset of profiles. The axioms highlight that changes in the presented list of profile attributes, and in the attributes' weights in making the profile vulnerable, affect the individual vulnerability of a profile. Validation of the measure showed that vulnerability involving OSN profiles does occur, and this provides a good basis for other researchers to build on the measure further. The novelty of this vulnerability measure is that it takes into account not just the attributes presented on each individual profile but also features of the profile's neighbourhood.
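The three-component structure of the measure lends itself to a short sketch. The attribute list, the weights and the blending rule below are illustrative assumptions, not the thesis's calibrated values.

```python
# Individual (weighted attribute disclosure), relative (friends' collective
# vulnerability) and absolute (their combination) vulnerability scores.
WEIGHTS = {"full_name": 0.20, "birth_date": 0.25, "home_town": 0.15,
           "school": 0.10, "relationship": 0.05, "photos": 0.25}

def individual(disclosed):
    """Sum of weights for the attributes a profile actually displays."""
    return sum(WEIGHTS[a] for a in disclosed)

def relative(friends):
    """Mean individual vulnerability over the profile's friends."""
    return sum(individual(f) for f in friends) / len(friends) if friends else 0.0

def absolute(disclosed, friends, alpha=0.7):
    """Blend of own and neighbourhood vulnerability (alpha is an assumption)."""
    return alpha * individual(disclosed) + (1 - alpha) * relative(friends)

me = ["full_name", "birth_date", "photos"]
friends = [["full_name"], ["full_name", "home_town", "school", "photos"]]
print(f"individual={individual(me):.2f}  relative={relative(friends):.2f}  "
      f"absolute={absolute(me, friends):.2f}")
```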
APA, Harvard, Vancouver, ISO, and other styles
30

Vishnoi, Nisheeth Kumar. "Theoretical Aspects of Randomization in Computation." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/6424.

Full text
Abstract:
Randomness has proved to be a powerful tool in all of computation. It is pervasive in areas such as networking, machine learning, computer graphics, optimization, computational number theory and is "necessary" for cryptography. Though randomized algorithms and protocols assume access to "truly" random bits, in practice, they rely on the output of "imperfect" sources of randomness such as pseudo-random number generators or physical sources. Hence, from a theoretical standpoint, it becomes important to view randomness as a resource and to study the following fundamental questions pertaining to it: Extraction: How do we generate "high quality" random bits from "imperfect" sources? Randomization: How do we use randomness to obtain efficient algorithms? Derandomization: How (and when) can we "remove" our dependence on random bits? In this thesis, we consider important problems in these three prominent and diverse areas pertaining to randomness. In randomness extraction, we present extractors for "oblivious bit fixing sources". In (a non-traditional use of) randomization, we have obtained results in machine learning (learning juntas) and proved hardness of lattice problems. While in derandomization, we present a deterministic algorithm for a fundamental problem called "identity testing". In this thesis we also initiate a complexity theoretic study of Hilbert's 17th problem. Here identity testing is used in an interesting manner. A common theme in this work has been the use of tools from areas such as number theory in a variety of ways, and often the techniques themselves are quite interesting.
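A compact example of the randomization-versus-derandomization theme is Schwartz-Zippel identity testing, the randomized primitive behind the "identity testing" problem mentioned above. The sketch below is a textbook illustration, not the thesis's deterministic algorithm: two polynomial circuits are compared by evaluation at random points, and a disagreement is caught with high probability.

```python
# Randomized polynomial identity testing over a large prime field.
import random

def f(x, y):                  # (x + y)^2, given as a circuit
    return (x + y) * (x + y)

def g(x, y):                  # x^2 + 2xy + y^2, written differently
    return x * x + 2 * x * y + y * y

def identical(p, q, trials=20):
    # Schwartz-Zippel: per trial, the error probability is at most
    # degree / field size, so a large field makes false positives negligible.
    field = 10 ** 9 + 7
    for _ in range(trials):
        x, y = random.randrange(field), random.randrange(field)
        if (p(x, y) - q(x, y)) % field != 0:
            return False      # a witness point: definitely not identical
    return True               # identical with overwhelming probability

print(identical(f, g))                            # True
print(identical(f, lambda x, y: x * x + y * y))   # False almost surely
```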
APA, Harvard, Vancouver, ISO, and other styles
31

Collomb, Cléo. "Un concept technologique de trace numérique." Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2286/document.

Full text
Abstract:
This Ph.D. thesis aims at proposing a technological, that is, non-anthropocentric, concept of digital traces. The point is that, since computational processes require objects and actions to take the form of inscriptions as a condition of their existence, computational machines are fundamentally involved in the process of producing digital traces, which a technological semiotics could describe. What is at stake in the proposed concept is to put into circulation a narration which avoids the theme of "the end of the world" described by Déborah Danowski and Eduardo Viveiros de Castro. These "end of the world" stories evoke the life of human beings who are reduced to living in an environment that is ontologically devitalized and purely artificial, as seems to be the case when the technical and economic valorization of digital traces results in "delegating our human relations to machines" (Louise Merzeau) or in "algorithmic governmentality" (Antoinette Rouvroy and Thomas Berns). When the theme of "the end of the world" raises its head, it means that an attempt is being made: an attempt to invent a mythology appropriate to our present situation, a narration which tries to say something about the end of a certain anthropological adventure. And it is in order to participate in this venture, while seeking to avoid contributing to the theme of "the end of the world", that we propose a technological approach to digital traces, one that enables us to take computational machines into account as a part of the contemporary world.
APA, Harvard, Vancouver, ISO, and other styles
32

Greenstein, Stanley. "Our Humanity Exposed : Predictive Modelling in a Legal Context." Doctoral thesis, Stockholms universitet, Juridiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-141657.

Full text
Abstract:
This thesis examines predictive modelling from the legal perspective. Predictive modelling is a technology based on applied statistics, mathematics, machine learning and artificial intelligence that uses algorithms to analyse big data collections, and identify patterns that are invisible to human beings. The accumulated knowledge is incorporated into computer models, which are then used to identify and predict human activity in new circumstances, allowing for the manipulation of human behaviour. Predictive models use big data to represent people. Big data is a term used to describe the large amounts of data produced in the digital environment. It is growing rapidly, due mainly to the fact that individuals are spending an increasing portion of their lives within the on-line environment, spurred by the internet and social media. As individuals make use of the on-line environment, they part with information about themselves. This information may concern their actions but may also reveal their personality traits. Predictive modelling is a powerful tool, which private companies are increasingly using to identify business risks and opportunities. These models are incorporated into on-line commercial decision-making systems, determining, among other things, the music people listen to, the news feeds they receive, the content people see and whether they will be granted credit. This results in a number of potential harms to the individual, especially in relation to personal autonomy. This thesis examines the harms resulting from predictive modelling, some of which are recognized by traditional law. Using the European legal context as a point of departure, this study ascertains to what extent legal regimes address the use of predictive models and the threats to personal autonomy. In particular, it analyses Article 8 of the European Convention on Human Rights (ECHR) and the forthcoming General Data Protection Regulation (GDPR) adopted by the European Union (EU). Considering the shortcomings of traditional legal instruments, a strategy entitled 'empowerment' is suggested. It comprises components of a legal and technical nature, aimed at levelling the playing field between companies and individuals in the commercial setting. Is there a way to strengthen humanity as predictive modelling continues to develop?
APA, Harvard, Vancouver, ISO, and other styles
33

Funiak, Martin. "Klasifikace testovacích manévrů z letových dat." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-264978.

Full text
Abstract:
A flight data recorder is a device designed to record flight data from various sensors in an aircraft. The analysis of flight data plays an important role in the development and testing of avionics. Testing and evaluation of aircraft characteristics are often carried out using test maneuvers. The measured data from one flight are stored in a single flight record, which may contain several test maneuvers. The goal of this work is to identify basic test maneuvers from the measured flight data. The theoretical part describes flight maneuvers and the format of the measured flight data. The analytical part describes research in the field of classification based on statistics and the probability theory needed to understand complex Gaussian mixture models. The thesis presents an implementation in which Gaussian mixture models are used for the classification of test maneuvers. The proposed solution was tested on data obtained from a flight simulator and from a real aircraft. Gaussian mixture models proved to provide a suitable solution for this task. Possible further development of the work is described in the final chapter.
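The classification approach described above can be sketched with scikit-learn: fit one Gaussian mixture per maneuver class on labelled flight-data features, then classify a new segment by maximum log-likelihood. The synthetic features, the two maneuver classes and the component counts are illustrative assumptions.

```python
# One GMM per maneuver class; classify segments by summed log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Fake feature vectors for two maneuvers (columns: pitch rate, load factor).
level_turn = rng.normal([0.0, 1.4], [0.2, 0.1], size=(300, 2))
pull_up    = rng.normal([2.0, 3.0], [0.4, 0.3], size=(300, 2))

models = {
    "level_turn": GaussianMixture(n_components=2, random_state=0).fit(level_turn),
    "pull_up":    GaussianMixture(n_components=2, random_state=0).fit(pull_up),
}

def classify(segment):
    # score_samples returns per-sample log-likelihood; sum over the segment.
    return max(models, key=lambda m: models[m].score_samples(segment).sum())

print(classify(rng.normal([1.9, 2.9], 0.3, size=(50, 2))))  # -> pull_up
```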
APA, Harvard, Vancouver, ISO, and other styles
34

Prest, Thomas. "Gaussian sampling in lattice-based cryptography." Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0045/document.

Full text
Abstract:
Although rather recent, lattice-based cryptography has stood out on numerous points, be it by the variety of constructions that it allows, by its expected resistance to quantum computers, or by its efficiency when instantiated on some classes of lattices. One of the most powerful tools of lattice-based cryptography is Gaussian sampling. At a high level, it makes it possible to prove knowledge of a particular lattice basis without disclosing any information about this basis, and it allows a wide array of cryptosystems to be realized. Somewhat surprisingly, few practical instantiations of such schemes exist, and the algorithms which perform Gaussian sampling are seldom studied. The goal of this thesis is to fill the gap between the theory and practice of Gaussian sampling. First, we study and improve the existing algorithms, by both a statistical analysis and a geometrical approach. We then exploit the structures underlying many classes of lattices and apply the ideas of the fast Fourier transform to a Gaussian sampler, allowing us to reach quasilinear complexity instead of quadratic. Finally, we use Gaussian sampling in practice to instantiate a signature scheme and an identity-based encryption scheme. The first yields signatures that are the most compact currently obtained in lattice-based cryptography, and the second allows encryption and decryption that are about one thousand times faster than those obtained with a pairing-based counterpart on elliptic curves.
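The elementary building block here, a discrete Gaussian over the integers, can be sampled naively by rejection; the thesis is about far more efficient samplers over full lattices, so the sketch below only illustrates the distribution itself, with assumed parameters.

```python
# Naive rejection sampler for a discrete Gaussian over Z.
import math
import random

def discrete_gaussian(sigma, center=0.0, tail=10):
    """Sample z proportional to exp(-(z - center)^2 / (2 sigma^2)) over Z."""
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        z = random.randint(lo, hi)                      # uniform proposal
        rho = math.exp(-((z - center) ** 2) / (2 * sigma ** 2))
        if random.random() < rho:                       # accept w.p. rho
            return z

samples = [discrete_gaussian(sigma=3.0) for _ in range(10000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean ~ {mean:.2f} (expected 0), variance ~ {var:.2f} (expected ~9)")
```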
APA, Harvard, Vancouver, ISO, and other styles
35

Ling, Hong. "Implementation of Stochastic Neural Networks for Approximating Random Processes." Master's thesis, Lincoln University. Environment, Society and Design Division, 2007. http://theses.lincoln.ac.nz/public/adt-NZLIU20080108.124352/.

Full text
Abstract:
Artificial Neural Networks (ANNs) can be viewed as a mathematical model to simulate natural and biological systems on the basis of mimicking the information processing methods in the human brain. Current ANNs focus only on approximating arbitrary deterministic input-output mappings. However, such ANNs do not adequately represent the variability which is observed in the systems' natural settings, nor do they capture the complexity of the whole system behaviour. This thesis addresses the development of a new class of neural networks called Stochastic Neural Networks (SNNs) in order to simulate internal stochastic properties of systems. Developing a suitable mathematical model for SNNs is based on the canonical representation of stochastic processes or systems by means of the Karhunen-Loève Theorem. Some successful real examples, such as analysis of the full displacement field of wood in compression, confirm the validity of the proposed neural networks. Furthermore, analysis of the internal workings of SNNs provides an in-depth view of the operation of SNNs that helps to gain a better understanding of the simulation of stochastic processes by SNNs.
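The canonical representation invoked here, a Karhunen-Loève expansion, is easy to demonstrate numerically: eigendecompose a covariance kernel, truncate to the leading modes, and generate new random realisations. The exponential kernel and truncation level below are illustrative assumptions, independent of the SNN architecture itself.

```python
# Truncated Karhunen-Loeve expansion of a stochastic process on [0, 1].
import numpy as np

n, n_modes = 200, 10
t = np.linspace(0.0, 1.0, n)
cov = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)    # assumed covariance

eigvals, eigvecs = np.linalg.eigh(cov)                   # ascending order
eigvals, eigvecs = eigvals[::-1][:n_modes], eigvecs[:, ::-1][:, :n_modes]

def realisation(rng):
    # X(t) = sum_k sqrt(lambda_k) * xi_k * phi_k(t),  xi_k ~ N(0, 1)
    xi = rng.standard_normal(n_modes)
    return eigvecs @ (np.sqrt(eigvals) * xi)

rng = np.random.default_rng(0)
paths = np.stack([realisation(rng) for _ in range(500)])
# With 10 modes, the empirical variance should sit close to the kernel's 1.0.
print("empirical variance at t=0.5:", paths[:, n // 2].var().round(2))
```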
APA, Harvard, Vancouver, ISO, and other styles
36

Fahed, Nour. "The Dilemmal Socialization on Social Media Platforms : A Qualitative Study on the Experience of Online Socialization and the Infrastructure of Social Media Platforms." Thesis, Södertörns högskola, Medie- och kommunikationsvetenskap, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-46523.

Full text
Abstract:
Social media may affect self-perception and the way media users live their offline lives. The purpose of this essay is to examine the phenomenon of social media saturation in order to understand the possible risks to the development of human identity during adolescence. These risks may be generated by exposure to social comparison, cyberbullying, self-validation, and self-perception at a sensitive age when self-image is still fragile and being formed. The essay examines the psychological tendencies of human beings while interacting with their peers on social media platforms, which gives a clearer view of what can be achieved by conducting interviews. A selection of theories is then applied to those interviews in order to connect those theories with what was said by the respondents. Meyrowitz's theory is used to understand how identity adapts to online connection, linked to Goffman's discussions of "onstage" and "backstage" (Meyrowitz, 1985: 5). The essay then investigates how users' self-perception and social comparison are enacted while socializing on social media platforms. Furthermore, it sheds light on how identity is constructed online, both in the sense of belonging to a community on a social media platform and through the gratification coming from peer validation in a virtual community. To explain this, Social Identity Theory is discussed (Teo, Matti, et al., 2017: 23), alongside theories such as mediatization (Couldry & Hepp, 2013). Lastly, the sociological concept of habitus, coined by Pierre Bourdieu, demonstrates the process of adaptation to the unspoken social codes existing in virtual communities (Markham, 2017: 55). As found in the four qualitative semi-structured interviews with social media users, respondents surround themselves with like-minded social groups which give them confidence in their own system of beliefs. Nevertheless, their perspectives are often marked by notable social pessimism and a lack of incentive to engage in conflictual interactions with others on social media. The results point to a perception among the interviewees that the impact of social media on identity formation is largely confined to adolescent users. Many users self-report significant daily screen time and are aware of the risks of social bubbles. Most of the respondents denied that cyberbullying, encountered while surfing social media, affected their physical lives, even those who mentioned having been exposed to it. All the respondents expressed a sense of jealousy to some extent, even though some of them showed awareness that people post their lives from a perfect angle, hiding the flaws and not showing the imperfections of their lives on social media. Lastly, social comparison was a feeling that affected most of the respondents and, in their own experience, social media affected their character development and self-perception, since they were exposed to it at an adolescent age.
APA, Harvard, Vancouver, ISO, and other styles
37

Hentati, Raïda. "Implémentation d'algorithmes de reconnaissance biométrique par l'iris sur des architectures dédiées." Phd thesis, Institut National des Télécommunications, 2013. http://tel.archives-ouvertes.fr/tel-00917955.

Full text
Abstract:
In this thesis, we adapted three versions of an iris-based biometric recognition algorithm chain, called OSIRIS V2, V3 and V4, which correspond to different implementations of J. Daugman's approach, for the purposes of a software/hardware implementation. Experimental results on the ICE2005 database show that OSIRIS_V4 is the most reliable system, while OSIRIS_V2 is the fastest. We proposed a quality measure for the segmented image in order to optimize, in terms of the cost/performance trade-off, a reference system based on OSIRIS V2 and V4. We then focused on implementing these algorithms on reconfigurable platforms. Experimental results show that the hardware/software implementation is faster than the purely software one. We also propose a new method for the hardware/software partitioning of the application. We used linear programming to find the optimal partition of the different tasks, taking into account three constraints: occupied area, execution time and energy consumption.
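The partitioning step lends itself to a small integer-program sketch: one binary variable per task decides hardware versus software, minimising execution time under area and energy budgets. The task data, budgets and the PuLP solver choice below are illustrative assumptions, not the thesis's measured numbers.

```python
# Hardware/software partitioning as a tiny 0-1 linear program with PuLP.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

# (time_sw, time_hw, area_hw, energy_sw, energy_hw) per task, all assumed
tasks = {
    "segmentation":  (50, 12, 30, 40, 25),
    "normalisation": (20,  6, 15, 18, 10),
    "encoding":      (35,  9, 25, 30, 20),
    "matching":      (15,  5, 20, 12,  9),
}
AREA_BUDGET, ENERGY_BUDGET = 55, 75

prob = LpProblem("hw_sw_partition", LpMinimize)
hw = {t: LpVariable(f"hw_{t}", cat=LpBinary) for t in tasks}  # 1 = in hardware

# Objective: total execution time; constraints: area and energy budgets.
prob += lpSum(d[1] * hw[t] + d[0] * (1 - hw[t]) for t, d in tasks.items())
prob += lpSum(d[2] * hw[t] for t, d in tasks.items()) <= AREA_BUDGET
prob += lpSum(d[4] * hw[t] + d[3] * (1 - hw[t]) for t, d in tasks.items()) <= ENERGY_BUDGET

prob.solve()
for t in tasks:
    print(t, "-> hardware" if hw[t].value() == 1 else "-> software")
```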
APA, Harvard, Vancouver, ISO, and other styles
38

Arantes, Janine Aldous. "Big data, black boxes and bias: the algorithmic identity and educational practice." Thesis, 2021. http://hdl.handle.net/1959.13/1430134.

Full text
Abstract:
Research Doctorate - Doctor of Philosophy (PhD)
This dissertation adds to a burgeoning conversation in education about the implications of commercial platforms being embedded in classrooms and educational practice. Drawing on a postdigital Deleuzian perspective, the study explores how Australian K-12 teachers are negotiating their educational practice as part of a broader data-driven infrastructure which includes predictive analytics and algorithmic bias. It does this by considering the changing role of the teacher's digital profile through a transdisciplinary lens derived from Education, Media and Communications, and Learning Analytics. Drawing on twelve months of data generation, consisting of an online survey (N=129), two phases of interviews (N=40) with 23 educators from across Australia, and a platform analysis (Edmodo), the study illuminates a startling correlation between the commercial profiling of teachers and relatively intangible workplace hazards. The findings show that teachers are negotiating commercial platforms as a form of psychosocial risk in the workplace, yet not discussing their concerns due to fears of workplace victimization. As such, the study uses the findings to offer theoretical and practical approaches, aimed at improving workplace conditions for teachers. Introducing the eMorpheus Theory - a series of practical recommendations for teachers, schools and Australian Departments of Education, the study details a National Strategy in K-12 Educational Settings, suggests Policy and Legislation, and advises methods for Co-regulation and Self-regulation of commercial platforms and data stewardship in schools. The study concludes by detailing recommendations for further research as a result of the workplace issues identified in Australian educational settings.
APA, Harvard, Vancouver, ISO, and other styles
39

Sernadela, João Filipe Lopes. "Abordagens de design generativo no contexto de identidade visual." Master's thesis, 2020. http://hdl.handle.net/10316/92117.

Full text
Abstract:
Master's dissertation in Design and Multimedia presented to the Faculdade de Ciências e Tecnologia.
Nowadays, visual identities are increasingly important for organizations and brands, helping to define their position in the market and presenting characteristics that make their target audience identify these organizations and brands as unique and differentiated. Thus, the main goal of this project is the definition and design of a new visual identity for the non-governmental organization Zero (Associação Sistema Terrestre Sustentável). To this end, computational algorithmic approaches were developed in Processing and Python, with the purpose of creating a generative visual identity (based on variation mechanisms). The resulting visual identity was designed to be more in accordance with the current objectives and goals of the Zero organization, taking its mission, values and principles as points of reference.
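The variation mechanism at the heart of a generative identity can be illustrated in a few lines: one set of rules, many seeded variants of the same mark. The shapes, palette and rules below are purely illustrative, not the actual Zero identity system.

```python
# Emit seeded SVG variants of one generative mark (concentric rings).
import random

def variant(seed, n_rings=5, size=200):
    """Return an SVG string whose ring radii and strokes vary with the seed."""
    rng = random.Random(seed)
    cx = cy = size / 2
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{size}" height="{size}">']
    for i in range(n_rings):
        r = (i + 1) * size / (2 * n_rings) * rng.uniform(0.8, 1.0)
        w = rng.uniform(1.0, 4.0)
        parts.append(f'<circle cx="{cx}" cy="{cy}" r="{r:.1f}" '
                     f'fill="none" stroke="black" stroke-width="{w:.1f}"/>')
    parts.append("</svg>")
    return "\n".join(parts)

for seed in range(3):                # three recognisably related variants
    with open(f"mark_{seed}.svg", "w") as fh:
        fh.write(variant(seed))
```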
APA, Harvard, Vancouver, ISO, and other styles
40

Hung-Ying Chang and 張弘穎. "Low-Complexity Cell Identity Detection Algorithms by Sequences Grouping for NB-IoT." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/dxc62f.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Yukish, Michael A. "Algorithms to identify Pareto points in multi-dimensional data sets." 2004. http://www.etda.libraries.psu.edu/theses/approved/WorldWideIndex/ETD-593/index.html.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Lin, and 王琳. "Development of a novel algorithm to identify ceRNA-miRNA triplets." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/31765733238150087436.

Full text
Abstract:
Master's thesis, National Taiwan University, Institute of Epidemiology and Preventive Medicine, academic year 105.
Understanding physical and functional interactions between molecules in living systems is of vital importance in biology. Recent studies have shown that the interaction of microRNA (miRNA) and mRNA is not unidirectional and monotonic, which has been suggested as an important regulating mechanism in many diseases. Among the target genes of miRNAs, some are named competing endogenous RNAs (ceRNAs): their expression levels, affected by the expression level of miRNAs, can be regulated through competition for a pool of common binding miRNAs. Therefore, a challenge arises when trying to systematically explore the association of a miRNA and its target genes. Several algorithms have been developed to identify ceRNAs and their dynamic regulating systems. Most of these algorithms divide a miRNA into different groups based on its expression level and then perform the analysis accordingly. However, the expression level of a miRNA is actually a continuous variable, not a discrete one. To address this issue, we developed a new algorithm based on the random-walk concept and the circular binary segmentation algorithm. The score obtained from the random-walk method is the maximum deviation from zero, weighted by the correlation within each window. We then applied the circular binary segmentation algorithm to find the peaks across the miRNA expression levels of the samples. Simulation studies demonstrate that our proposed algorithm can accurately identify a ceRNA-miRNA triplet with high correlation. We also applied the algorithm to two TCGA cancers; some common bridging miRNAs and ceRNAs were found in both cancers and were verified by previous studies. Based on the results of the simulation and the application, our method is effective and feasible for identifying ceRNA-miRNA triplets. In particular, it can also capture multiple peaks of correlation at specific miRNA expression levels. We believe that our algorithm provides insight into miRNA-modulated ceRNA regulatory mechanisms.
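A simplified sketch of the scoring idea: order samples by miRNA expression, slide a window, and look for where the ceRNA-ceRNA correlation peaks. The random-walk weighting and the segmentation step of the full method are not reproduced here, and the data are synthetic.

```python
# Windowed ceRNA-ceRNA correlation along the miRNA expression axis.
import numpy as np

rng = np.random.default_rng(2)
n = 300
mirna = np.sort(rng.uniform(0, 10, n))            # samples ordered by miRNA level
coupling = np.exp(-((mirna - 6.0) ** 2))          # ceRNAs co-vary near level ~6
shared = rng.normal(0, 1, n)
cerna1 = coupling * shared + rng.normal(0, 1, n)
cerna2 = coupling * shared + rng.normal(0, 1, n)

win = 60
scores = [np.corrcoef(cerna1[i:i + win], cerna2[i:i + win])[0, 1]
          for i in range(n - win)]
peak = int(np.argmax(scores))
print(f"correlation peaks near miRNA level {mirna[peak + win // 2]:.1f} "
      f"(r = {scores[peak]:.2f})")
```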
APA, Harvard, Vancouver, ISO, and other styles
43

Le, Mesurier Daniel. "Digital Metamorphoses: How Might Personalised Targeting Algorithms Influence Social Identity and Affect Autonomy?" Thesis, 2021. http://hdl.handle.net/1885/258174.

Full text
Abstract:
In a time when technology enjoys an everyday presence in our lives, understanding the implications of the digital world is crucially important. This is especially so with personalised targeting algorithms (PTAs), which are increasingly present in facilitating our digital activity. In this thesis, I consider how the overt recommendations of PTAs might influence social identity and affect personal autonomy. In doing so, I consider how PTAs reflect the traditionally-understood mechanisms for forming and maintaining social labels and, consequently, social identity. This leads me to characterise overt PTA-generated recommendations as a type of social label. I draw on this characterisation when subsequently considering how PTAs interact with personal autonomy, and how they might promote or hinder it. Ultimately, I conclude that PTAs can both undermine and enhance autonomy. In particular, PTAs can undermine autonomy by eroding our self-trust and effecting a transfer of authorship to the recommendations made by PTAs. However, PTAs can enhance autonomy by providing us with greater personal insight and prompting our processes of critical self-reflection. These questions are highly significant for understanding how we can maintain personal autonomy while coming into constant contact with PTAs.
APA, Harvard, Vancouver, ISO, and other styles
44

Lee, Chang-Hong, and 李昶宏. "Cheating Catcher: Using Sequence Alignment Algorithms to Identify Homologous C Programs." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/21343999804190456693.

Full text
Abstract:
Master's thesis, National Chi Nan University, Department of Computer Science and Information Engineering, academic year 92.
Many software tools (e.g., Diff and WinDiff) have been developed to identify the differences between text files. When applied to compare programs, however, more situations need to be handled, since renaming the identifiers, altering the function definitions, or replacing a block with a synonymous one can still leave the program equivalent. In order to solve these problems, we investigate how to analyze the similarity between programs. We develop a system to estimate the similarity of two C programs by using the Local Alignment Algorithm. Doing only this cannot meet our demand, because similar but non-overlapping regions may be scattered in different orders; hence, some modification is necessary. We intend to identify all of the similar regions that are non-overlapping, and we propose an idea of performing "computing and recoding" in order to achieve this goal efficiently. A straightforward way to accomplish this is to apply the dynamic programming (DP) repeatedly: once a similar region is identified, remove it and apply the DP again. However, this does not work well. Another issue addresses the notion of similarity; we adopt a statistical method for this. Memory limitation is also a problem, and we propose a simple encoding technique that reduces the required memory space by three quarters. The trade-off between efficiency and accuracy is also considered in this thesis. We develop two versions of the system: one for efficiency and the other for accuracy. A FASTA-like algorithm is developed, although it is less precise than the full Local Alignment Algorithm; experiments show that this fast version can still estimate the similarity well. Finally, the system provides a graphical illustration of how two programs are similar.
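The core of the similarity check, local alignment over token streams, is a classic dynamic program (Smith-Waterman). The sketch below is illustrative: a real system would first tokenise the C source and normalise identifiers to a placeholder, as assumed in the example input.

```python
# Smith-Waterman local alignment score between two token sequences.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local-alignment score between token sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# The same loop with renamed identifiers still aligns strongly once every
# identifier is normalised to the placeholder token ID beforehand.
p1 = ["for", "(", "ID", "=", "0", ";", "ID", "<", "ID", ";", "ID", "++", ")"]
p2 = ["while", "(", "ID", ")",
      "for", "(", "ID", "=", "0", ";", "ID", "<", "ID", ";", "ID", "++", ")"]
print(smith_waterman(p1, p2))   # high score: the loop region matches locally
```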
APA, Harvard, Vancouver, ISO, and other styles
45

Rungsarityotin, Wasinee [Verfasser]. "Algorithm to identify protein complexes from high-throughput data / Wasinee Rungsarityotin." 2007. http://d-nb.info/988800195/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Jheng, Yu-Jie, and 鄭鈺傑. "Symmetry and Bayes Classifier-based Forward Vehicle Identify Algorithm on Smartphone." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/20714944738809261965.

Full text
Abstract:
Master's thesis, National Dong Hwa University, Department of Electrical Engineering, academic year 103.
This study proposes a symmetry- and Bayes-classifier-based forward vehicle identification algorithm for smartphones. The algorithm uses a symmetry characteristic to recognize forward objects and a Bayes classifier to track them, and it has been ported to Android smartphones. The symmetry-based identification algorithm uses a lane departure warning system and shadow detection to obtain lane coordinates and shadow coordinates, and builds a region of interest (ROI) from these coordinates; the ROI helps to avoid environmental noise. The study employs a Bayesian classifier for the analysis, using probability prediction to obtain a high correct-identification rate and a stable system. The symmetry-based identification algorithm is written in C, while the Android application framework is developed in Java; the algorithm is therefore placed in a native library and invoked through the "Java calls C" method. When the phone camera captures a road image, the algorithm kernel is called to process it. The symmetry-based identification system has been implemented on a smartphone and tested both on driving videos and in real driving environments, with good results.
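The symmetry cue can be sketched as a simple mirror-difference score over the ROI between the lanes. The synthetic "rear of a car" patch below is an illustrative assumption; the actual system works on camera frames.

```python
# Score how left-right symmetric an image patch is about its vertical axis.
import numpy as np

def symmetry_score(patch):
    """1.0 for a perfectly mirror-symmetric grayscale patch, lower otherwise."""
    mirrored = patch[:, ::-1]
    diff = np.abs(patch.astype(float) - mirrored).mean()
    return 1.0 - diff / 255.0

rng = np.random.default_rng(3)
car = np.zeros((40, 60), dtype=np.uint8)
car[5:35, 10:50] = 200                           # symmetric block ~ vehicle rear
car += rng.integers(0, 10, car.shape, dtype=np.uint8)
clutter = rng.integers(0, 255, (40, 60), dtype=np.uint8)

print(f"vehicle-like ROI:  {symmetry_score(car):.2f}")      # close to 1
print(f"random clutter:    {symmetry_score(clutter):.2f}")  # clearly lower
```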
APA, Harvard, Vancouver, ISO, and other styles
47

Cancellieri, Samuele. "Personal genome editing algorithms to identify increased variant-induced off-target potential." Doctoral thesis, 2022. http://hdl.handle.net/11562/1058995.

Full text
Abstract:
Clustered regularly interspaced short palindromic repeats (CRISPR) technologies allow for facile genomic modification in a site-specific manner. A key step in this process is the in-silico design of single guide RNAs (sgRNAs) to efficiently and specifically target a site of interest. To this end, it is necessary to enumerate all potential off-target sites within a given genome that could be inadvertently altered by nuclease-mediated cleavage. Off-target sites are quasi-complementary regions of the genome to which the specified sgRNA can bind even without a perfectly complementary nucleotide sequence. This problem is known as off-target site enumeration and became common after the discovery of CRISPR technology. Many in-silico solutions have been proposed in recent years to solve it, but currently available software for this task is limited by computational efficiency, variant support, genetic annotation, assessment of the functional impact of potential off-target effects at the population and individual level, and the lack of a user-friendly graphical interface designed to be usable by non-informaticians without any programming knowledge. This thesis addresses all these topics by proposing two software tools that directly answer the off-target enumeration problem and perform all the related analyses. In detail, the thesis proposes CRISPRitz, a tool designed and developed to perform fast and exhaustive searches on reference and alternative genomes to enumerate all the possible off-targets for a user-defined set of sgRNAs, with specific thresholds on mismatches (non-complementary bps in RNA-DNA binding) and bulges (bubbles that alter the physical structure of RNA and DNA, limiting the binding activity). The thesis also proposes CRISPRme, a tool developed starting from CRISPRitz, which answers the requests of professionals and technicians for a comprehensive and easy-to-use interface to perform off-target enumeration, analysis and assessment, with graphical reports, a graphical interface, and the capability of performing real-time queries on the resulting data to extract desired targets, with a focus on individual and personalized genome analysis.
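A minimal sketch of mismatch-based off-target enumeration (no bulges, no variants): slide the guide over a genome string and report sites within a mismatch threshold next to an NGG PAM. The sequences are illustrative, and a naive scan like this is exactly what indexed tools such as CRISPRitz/CRISPRme avoid on real genomes.

```python
# Enumerate candidate off-target sites of a 20-nt guide next to an NGG PAM.
def off_targets(genome, guide, max_mismatches=3):
    hits = []
    glen = len(guide)
    for i in range(len(genome) - glen - 2):
        site, pam = genome[i:i + glen], genome[i + glen:i + glen + 3]
        if pam[1:] != "GG":                       # require an NGG PAM
            continue
        mm = sum(a != b for a, b in zip(site, guide))
        if mm <= max_mismatches:
            hits.append((i, site, mm))
    return hits

genome = "TTACGTACGTACGTACGTACGTTGGAAACGTACGAACGTACGTACGTAGGTT"
guide = "ACGTACGTACGTACGTACGT"                    # hypothetical 20-nt spacer
for pos, site, mm in off_targets(genome, guide):
    print(f"pos {pos}: {site} ({mm} mismatches)")
```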
APA, Harvard, Vancouver, ISO, and other styles
48

Freeman, James Wesley. "Using EM Algorithm to identify defective parts per million on shifting production process." 2012. http://hdl.handle.net/2152/19996.

Full text
Abstract:
The objective of this project is to determine whether using an EM algorithm to fit a Gaussian mixture model provides the needed accuracy in identifying the number of defective parts per million when the overall population is made up of multiple independent runs or lots. The alternative is approximating with standard software tools and commonly known techniques available to a process, industrial or quality engineer; these rely on familiar distributions and widely understood statistical process control methods. This paper compares these common methods with an EM algorithm programmed in R, using a dataset of actual length measurements of a manufactured product.
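The project's point is easy to demonstrate (here in Python rather than the report's R): when production shifts between lots, a single normal fit misestimates the out-of-spec tail, while a two-component Gaussian mixture fit by EM models each lot. The spec limits and lot parameters below are illustrative assumptions.

```python
# Estimate defective PPM with a single normal fit vs. an EM-fit mixture.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
lengths = np.concatenate([rng.normal(10.00, 0.02, 5000),   # lot A
                          rng.normal(10.06, 0.02, 5000)])  # lot B (shifted)
LSL, USL = 9.94, 10.10                                     # assumed spec limits

def ppm_normal(x):
    mu, sd = x.mean(), x.std()
    return 1e6 * (norm.cdf(LSL, mu, sd) + norm.sf(USL, mu, sd))

def ppm_mixture(x):
    gm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    out = 0.0
    for w, mu, var in zip(gm.weights_, gm.means_.ravel(), gm.covariances_.ravel()):
        sd = np.sqrt(var)
        out += w * (norm.cdf(LSL, mu, sd) + norm.sf(USL, mu, sd))
    return 1e6 * out

print(f"single normal fit: {ppm_normal(lengths):,.0f} PPM")
print(f"EM mixture fit:    {ppm_mixture(lengths):,.0f} PPM")
```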
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Yung-Chu, and 陳勇竹. "Using Bubble sort and Nearest Neighbor Algorithm to Identify the Flood Zone." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/cequsd.

Full text
Abstract:
Master's thesis, National Taipei University of Technology, Master Program in Civil and Disaster Prevention Engineering, Department of Civil Engineering, academic year 105.
Taiwan is located in a subtropical region and has abundant rainfall in the rainy season. In August and September, typhoons driven by the tropical ocean and air temperatures bring additional heavy rainfall, and the excess rainfall ponds in low-lying areas. When rainfall is intense and concentrated, even more flooding can be triggered; this master's thesis focuses on the study of such flooding. Current flood-simulation systems are generally very detailed, which makes them time-consuming to run and hard to use in real time for disaster prevention. This study therefore takes real-time operation as its starting point, obtaining instant flooding information and notification points from check-ins on a community website. Exploiting the fact that flooding occurs in low-lying areas, a Digital Elevation Model (DEM) is used as a first filter, keeping only the cells lower than the notification point. A bubble sort algorithm is then used to exclude points with lower relevance, making the predicted flooding region more accurate and, by avoiding unnecessary computation, reducing the simulation time. Finally, the simulation results are compared directly with the flood potential map provided by the Water Resources Agency, to investigate whether the flooding regions are similar and to assess the feasibility of automatically conjecturing flood zones with the proposed algorithm.
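The screening step described above can be sketched as follows: given a reported flooded point, keep only DEM cells at or below its elevation, rank the candidates by distance (using a simple bubble sort, as in the thesis), and keep the nearest ones as the conjectured flood zone. The grid, elevations and cut-off are illustrative assumptions.

```python
# DEM filter plus distance ranking for a crowd-sourced flood report.
import numpy as np

rng = np.random.default_rng(5)
dem = rng.uniform(0.0, 5.0, (20, 20))            # elevation grid (m), assumed
report = (10, 10)                                # reported flooded cell
report_elev = dem[report]

candidates = [(r, c) for r in range(20) for c in range(20)
              if dem[r, c] <= report_elev]       # first filter: low-lying cells

def bubble_sort_by_distance(cells, origin):
    cells = list(cells)
    d = lambda p: (p[0] - origin[0]) ** 2 + (p[1] - origin[1]) ** 2
    for i in range(len(cells) - 1):              # classic bubble sort passes
        for j in range(len(cells) - 1 - i):
            if d(cells[j]) > d(cells[j + 1]):
                cells[j], cells[j + 1] = cells[j + 1], cells[j]
    return cells

zone = bubble_sort_by_distance(candidates, report)[:30]   # nearest low cells
print(f"{len(candidates)} low-lying cells, keeping {len(zone)} nearest")
```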
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Brian, and 陳柏穎. "AUC oriented Bidirectional LSTM-CRF Models to Identify Algorithms Described in an Abstract." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/p3grat.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Computer Science and Information Engineering, academic year 105.
In this thesis, we attempt to identify algorithms mentioned in paper abstracts. We further want to discriminate the algorithm proposed in a paper from algorithms that are only mentioned or compared, since we are more interested in the former. We model this task as a sequential labeling task and propose to use a state-of-the-art deep learning model, LSTM-CRF, as our solution. However, the labels are generally imbalanced, since not every sentence in an abstract describes its algorithm; that is, the ratio between different labels is skewed. As a result, the traditional LSTM-CRF model is unsuitable because it only optimizes accuracy. Instead, it is more reasonable to optimize AUC on imbalanced data, because AUC can deal with skewed labels and performs better in predicting rare labels. Our experiments show that the proposed AUC-optimized LSTM-CRF outperforms the traditional LSTM-CRF. We also present a ranking of the algorithms currently in use, and trace the trend of different algorithms used in recent years. Moreover, we are able to discover some new algorithms that do not exist in our training data.
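A minimal sketch of the evaluation the thesis optimises for: per-token AUC of the rare "proposed algorithm" label, computed with scikit-learn. The token scores below stand in for LSTM-CRF marginals and are illustrative; the AUC-oriented training objective itself is not reproduced here.

```python
# Per-token AUC on an imbalanced label, where accuracy would mislead.
from sklearn.metrics import roc_auc_score

# 1 = token belongs to the proposed-algorithm mention (rare), 0 = otherwise.
gold = [0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.1, 0.2, 0.1, 0.8, 0.7, 0.3, 0.1, 0.2, 0.1, 0.1,
          0.2, 0.4, 0.9, 0.6, 0.7, 0.2, 0.1, 0.1, 0.3, 0.1]

print(f"token-level AUC: {roc_auc_score(gold, scores):.3f}")
# An all-zero predictor would already reach 75% accuracy on these tokens;
# AUC instead measures how well the rare positive label is ranked.
```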
APA, Harvard, Vancouver, ISO, and other styles