To see the other types of publications on this topic, follow the link: Computing Methodologies.

Dissertations / Theses on the topic 'Computing Methodologies'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Computing Methodologies.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Cabrera, Añon Guillem. "Service allocation methodologies for contributory computing environments." Doctoral thesis, Universitat Oberta de Catalunya, 2014. http://hdl.handle.net/10803/295714.

Full text
Abstract:
This thesis addresses the efficient resource allocation problem in contributory computing systems. These systems are built only from non-dedicated nodes and require proper mechanisms in order to optimize resource utilization. The thesis proposes algorithms that optimize different aspects of these systems, such as the availability of composite services, the energy consumption of a service, or the network distance between any client and a service replica in a known network topology. The algorithms are based on metaheuristics combined with simulation techniques, which allows them to adapt to very different scenarios and to be harnessed by contributory systems.
APA, Harvard, Vancouver, ISO, and other styles
2

Alomar, Barceló Miquel Lleó. "Methodologies for hardware implementation of reservoir computing systems." Doctoral thesis, Universitat de les Illes Balears, 2017. http://hdl.handle.net/10803/565422.

Full text
Abstract:
Inspired by the way the brain processes information, artificial neural networks (ANNs) were created with the aim of reproducing human capabilities in tasks that are hard to solve using classical algorithmic programming. The ANN paradigm has been applied to numerous fields of science and engineering thanks to its ability to learn from examples, adaptation, parallelism, and fault tolerance. Reservoir computing (RC), based on the use of a random recurrent neural network (RNN) as the processing core, is a powerful model that is highly suited to time-series processing. Hardware realizations of ANNs are crucial to exploit the parallel properties of these models, which favor higher speed and reliability. On the other hand, hardware neural networks (HNNs) may offer appreciable advantages in terms of power consumption and cost. Low-cost compact devices implementing HNNs are useful to support or replace software in real-time applications, such as control, medical monitoring, robotics, and sensor networks. However, the hardware realization of ANNs with large neuron counts, such as in RC, is a challenging task due to the large resource requirements of the involved operations. Despite the potential benefits of hardware digital circuits to perform RC-based neural processing, most implementations are realized in software using sequential processors. In this thesis, I propose and analyze several methodologies for the digital implementation of RC systems using limited hardware resources. The neural network design is described in detail for both a conventional implementation and the diverse alternative approaches. The advantages and shortcomings of the various techniques regarding accuracy, computation speed, and required silicon area are discussed. Finally, the proposed approaches are applied to solve different real-life engineering problems.
APA, Harvard, Vancouver, ISO, and other styles
3

Subramaniam, Balaji. "Metrics, Models and Methodologies for Energy-Proportional Computing." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/56492.

Full text
Abstract:
Massive data centers housing thousands of computing nodes have become commonplace in enterprise computing, and the power consumption of such data centers is growing at an unprecedented rate. Exacerbating such costs, data centers are often over-provisioned to avoid costly outages associated with the potential overloading of electrical circuitry. However, such over-provisioning is often unnecessary since a data center rarely operates at its maximum capacity. It is imperative that we realize effective strategies to control the power consumption of the server and improve the energy efficiency of data centers. Adding to the problem is the inability of the servers to exhibit energy proportionality, which diminishes the overall energy efficiency of the data center. Therefore, in this dissertation, we investigate whether it is possible to achieve energy proportionality at the server and cluster level by efficient power and resource provisioning. Towards this end, we provide a thorough analysis of energy proportionality at the server and cluster level and provide insight into the power-saving opportunities and mechanisms to improve energy proportionality. Specifically, we make the following contributions at the server level using enterprise-class workloads. We analyze the average power consumption of the full system as well as the subsystems and describe the energy proportionality of these components, characterize the instantaneous power profile of enterprise-class workloads using the on-chip energy meters, design a runtime system based on a load prediction model and an optimization framework to set the appropriate power constraints to meet specific performance targets, and then present the effects of our runtime system on energy proportionality, average power, performance, and instantaneous power consumption of enterprise applications. We then make the following contributions at the cluster level. Using data serving, web searching, and data caching as our representative workloads, we first analyze the component-level power distribution on a cluster. Second, we characterize how these workloads utilize the cluster. Third, we analyze the potential of power provisioning techniques (i.e., active low-power, turbo, and idle low-power modes) to improve energy proportionality. We then describe the ability of active low-power modes to provide trade-offs in power and latency. Finally, we compare and contrast power provisioning and resource provisioning techniques. This dissertation sheds light on mechanisms to tune the power provisioned for a system under strict performance targets and on opportunities to improve energy proportionality and instantaneous power consumption via efficient power and resource provisioning at the server and cluster level.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
4

Fayyaz, Ahmad. "Energy Efficient Resource Scheduling Methodologies for Cluster and Cloud Computing." Diss., North Dakota State University, 2015. https://hdl.handle.net/10365/27936.

Full text
Abstract:
One of the major challenges in High Performance Computing (HPC) clusters, data centers, and cloud computing paradigms is intelligent power management to improve energy efficiency. The key contribution of the presented work is the modeling of a Power Aware Job Scheduler (PAJS) for HPC clusters, such that: (a) the threshold voltage is adjusted judiciously to achieve energy efficiency and (b) the response time is minimized by scaling the supply voltage. The key novelty in our work is the utilization of Dynamic Threshold-Voltage Scaling (DTVS) for the reduction of the cumulative power utilized by each node in the cluster. Moreover, to enhance the performance of the resource scheduling strategies in the first part of the work, independent tasks within a job are scheduled to the most suitable Computing Nodes (CNs). First, our research analyzes and compares eight scheduling techniques in terms of energy consumption and makespan. Primarily, the most suitable Dynamic Voltage Scaling (DVS) level adhering to the deadline is identified for each of the CNs by the scheduling heuristics. Afterwards, DTVS is employed to scale down the static as well as dynamic power by regulating the supply and bias voltages. Finally, per-node threshold scaling is used to attain power savings. Our simulation results affirm that the proposed methodology significantly reduces the energy consumption using DTVS. The work is further extended, and the effect of task consolidation is studied and analyzed. By consolidating the tasks on a smaller number of servers, the overall power consumed can be significantly reduced. The tasks are first allocated to suitable servers until all the tasks are exhausted. The idle servers are then turned off by using DTVS. The Virtual Machine (VM) monitor checks for under-utilized, partially filled, over-utilized, and empty servers. The VM monitor then migrates the tasks to suitable servers for execution if a set of conditions is met. In this way, many servers that were under-utilized are freed and turned off using DTVS to save power. Simulation results confirm our study, and a substantial reduction in the overall power consumption of the cloud data center is observed.
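As a rough illustration of the consolidation idea described in this abstract, the sketch below packs tasks onto as few servers as possible so that idle machines can be powered down. It is a toy first-fit heuristic with hypothetical names, not the author's actual PAJS/DTVS scheduler.

```python
# Minimal first-fit consolidation sketch (illustrative assumption, not the thesis code).

def consolidate(task_loads, server_capacity, n_servers):
    utilization = [0.0] * n_servers               # current load on each server
    placement = {}                                # task index -> server index
    # Place the largest tasks first (first-fit decreasing heuristic).
    for task, load in sorted(enumerate(task_loads), key=lambda t: -t[1]):
        for server in range(n_servers):
            if utilization[server] + load <= server_capacity:
                utilization[server] += load
                placement[task] = server
                break
        else:
            raise RuntimeError("no server can host task %d" % task)
    # Servers that received no work are candidates for power-down.
    idle = [s for s, u in enumerate(utilization) if u == 0.0]
    return placement, idle

if __name__ == "__main__":
    placement, idle = consolidate([0.3, 0.2, 0.4, 0.1], server_capacity=1.0, n_servers=4)
    print("placement:", placement, "servers to power down:", idle)
```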
APA, Harvard, Vancouver, ISO, and other styles
5

Wilkes, Charles Thomas. "Programming methodologies for resilience and availability." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/8308.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

HANDA, MANISH. "ONLINE PLACEMENT AND SCHEDULING ALGORITHMS AND METHODOLOGIES FOR RECONFIGURABLE COMPUTING SYSTEMS." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1100030953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pauley, Wayne A. Jr. "An Empirical Study of Privacy Risk Assessment Methodologies in Cloud Computing Environments." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/271.

Full text
Abstract:
Companies offering services on the Internet have led corporations to shift from the high cost of owning and maintaining stand-alone, privately-owned-and-operated infrastructure to a shared infrastructure model. These shared infrastructures are offered by infrastructure service providers with subscription, or pay-on-demand, charge models presenting compute and storage resources as a generalized utility. Utility-based infrastructures run by service providers have been defined as "cloud computing" by the National Institute of Standards and Technology. In the cloud computing model, the concerns of security and privacy protection are exacerbated by the requirement for an enterprise to allow third parties to own and manage the infrastructure and be custodians of the enterprise's information. With this new architectural model, there are new hybrid governance models designed to support complex and uncertain environments. The cloud also requires a common infrastructure that integrates originally separate computing silos. Privacy and security policy awareness during provisioning and computing orchestration must account for data locality across domains and jurisdictions in order to obey legal and regulatory constraints. Commercial use of the Internet for electronic commerce has been growing at a phenomenal rate, while consumer concern has also risen about the information gathered about them. Concern about the privacy of data has been rated as the number one barrier by all industries. The purpose of this dissertation is to perform an empirical study to determine whether existing privacy assessment instruments adequately assess privacy risks when applied to cloud infrastructures. The methodology for determining this is to apply a specific set of privacy risk assessments against three cloud environments. The assessments are run in the context of a typical web-based application deployed against cloud providers that have the five key cloud tenets - on-demand/self-service, broad network access, resource pooling, rapid elasticity, and measured service.
APA, Harvard, Vancouver, ISO, and other styles
8

Baggi, Michele. "Rule-based Methodologies for the Specification and Analysis of Complex Computing Systems." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/8964.

Full text
Abstract:
From the origins of hardware and software to the present day, the complexity of computing systems has posed a problem that computer scientists, engineers, and programmers have had to face. As a result of this effort, important research areas have emerged and matured. In this dissertation we address some of the current lines of research related to the analysis and verification of complex computing systems using formal methods and domain-specific languages. We focus on distributed systems, with particular interest in Web systems and biological systems. The first part of the thesis is devoted to security aspects and related techniques, specifically software certification. We first study resource access control systems and propose a language for specifying access control policies that are strongly associated with knowledge bases and provide a semantics-aware description of the resources or elements being accessed. We have also developed a novel framework for Code-Carrying Theory, a software certification methodology whose goal is to ensure the safe delivery of code in a distributed environment. Our framework is based on a system for transforming rewriting theories by means of fold/unfold operations. The second part of the thesis concentrates on the analysis and verification of Web systems and biological systems. We propose an information filtering language that supports the retrieval of information from large data repositories. This language uses semantic information obtained from remote ontologies to refine the filtering process. We also study validation methods for checking the consistency of Web contents with respect to syntactic and semantic properties. Another contribution is a language that allows semantic and syntactic constraints on the static content of a Web system to be defined and checked automatically. Finally, we also consider biological systems and focus on a formalism based on rewriting logic for the modeling and analysis of quantitative aspects of biological processes. To evaluate the effectiveness of all the proposed methodologies, special attention has been paid to the development of prototypes, which have been implemented using rule-based languages.
Baggi, M. (2010). Rule-based Methodologies for the Specification and Analysis of Complex Computing Systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8964
APA, Harvard, Vancouver, ISO, and other styles
9

Kraus, Edwin. "Interworking methodologies for DCOM and CORBA." [Johnson City, Tenn. : East Tennessee State University], 2003. http://etd-submit.etsu.edu/etd/theses/available/etd-1104103-205221/unrestricted/KrausE110503b.pdf.

Full text
Abstract:
Thesis (M.S.)--East Tennessee State University, 2003. Title from electronic submission form. ETSU ETD database URN: etd-1104103-205221. Includes bibliographical references. Also available via Internet at the UMI web site.
APA, Harvard, Vancouver, ISO, and other styles
10

Pan, Long. "Effective and Efficient Methodologies for Social Network Analysis." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/25962.

Full text
Abstract:
Performing social network analysis (SNA) requires a set of powerful techniques to analyze structural information contained in interactions between social entities. Many SNA technologies and methodologies have been developed and have successfully provided significant insights for small-scale interactions. However, these techniques are not suitable for analyzing large social networks, which are very popular and important in various fields and have special structural properties that cannot be obtained from small networks or their analyses. A number of issues in the design of current SNA techniques need further study, and they can be embodied in three fundamental and critical challenges: long processing time, large computational resource requirements, and network dynamism. In order to address these challenges, we discuss an anytime-anywhere methodology based on a parallel/distributed computational framework to effectively and efficiently analyze large and dynamic social networks. In our methodology, large social networks are decomposed into inter-related smaller parts. A coarse level of network analysis is built by comprehensively analyzing each part. The partial analysis results are incrementally refined over time. Also, during the analysis process, dynamic changes in the network are effectively and efficiently accommodated based on the obtained results. In order to evaluate and validate our methodology, we implement it for a set of SNA metrics which are significant for SNA applications and cover a wide range of difficulties. Through rigorous theoretical and experimental analyses, we demonstrate that our anytime-anywhere methodology is effective and efficient.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
11

Bhagwat, Ashwini. "Methodologies and tools for computation offloading on heterogeneous multicores." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29688.

Full text
Abstract:
Thesis (M. S.)--Computing, Georgia Institute of Technology, 2009. Committee Chair: Pande, Santosh; Committee Member: Clark, Nate; Committee Member: Yalamanchili, Sudhakar. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
12

Zhou, Yangping. "A study on soft computing methodologies for operational support system of process plant." Kyoto University, 2005. http://hdl.handle.net/2433/144449.

Full text
Abstract:
Kyoto University (京都大学). Doctor of Energy Science (new-system doctoral program), degree no. 甲第11892号 / エネ博第118号. Graduate School of Energy Science, Department of Socio-Environmental Energy Science. Examination committee: Professor 吉川 榮和 (chair), Professor 手塚 哲央, Associate Professor 下田 宏. Qualified under Article 4, Paragraph 1 of the Degree Regulations.
APA, Harvard, Vancouver, ISO, and other styles
13

Zhu, Haibo. "Performance evaluation of fault tolerant methodologies for network on chip architecture." Online access for everyone, 2007. http://www.dissertations.wsu.edu/Thesis/Summer2007/h_zhu_071907.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

GASPARETTO, THOMAS. "Development of a computing farm for Cloud computing on GPU - Development and optimisation of data-analysis methodologies for the Cherenkov Telescope Array." Doctoral thesis, Università degli Studi di Trieste, 2020. http://hdl.handle.net/11368/2963769.

Full text
Abstract:
The research activity was focused on the creation of simulation and analysis pipelines to be used at different levels in the context of the Cherenkov Telescope Array.
The work consists of two main parts: the first is dedicated to the reconstruction of the events coming from the Monte Carlo simulations using the ctapipe library, whereas the second is devoted to the estimation of the future performance of CTA in the observation of violent phenomena such as those generating Gamma Ray Bursts and Gravitational Waves. The low-level reconstruction of the raw data was done with a pipeline which uses the ImPACT analysis, a template-based technique with templates derived from Monte Carlo simulations; ImPACT was used to obtain angular and energy resolution plots, but was also fully profiled to find its bottlenecks, debugged, and sped up. The code was used to analyse data from different telescope layouts and refactored to analyse data for the prototype of the LST-1 telescope, working in "mono mode" instead of the standard stereo mode. The analysis was re-implemented in order to try all the templates massively on the GPU in a single step. The implementation is done using the PyTorch library, developed for deep learning. The performance of the telescopes in the so-called "divergent pointing mode" was investigated: in this scenario the telescopes have a slightly different pointing direction with respect to the parallel configuration, so that the final hyper field-of-view of the whole system is larger than with parallel pointing. The reconstruction code in ctapipe was adapted to this particular observation mode. The creation of a 3D displayer, done using VTK, helped in understanding the code and in fixing it accordingly. The extragalactic sky model for the First CTA Data Challenge was created by selecting sources from different catalogues. The goal of the DC-1 was to enable the CTA Consortium Science Working Groups to derive science benchmarks for the CTA Key Science Projects and get more people involved in the analyses. In order to do the simulations and analysis for the GRB and GW Consortium papers, a pipeline was created around the ctools library: it is made of two parts handled by configuration files, which take care both of the specific task to do (background simulation, model creation, and the simulation part which performs the detection and estimates the significance) and of the job submission. The research was carried out over 14 months (with 5 months covered by an additional scholarship from the French Embassy) at the "Laboratoire d'Annecy de Physique des Particules" (LAPP) in Annecy (France) under a joint-supervision program based on the mandatory research period to be spent abroad foreseen in the scholarship, funded by the European Social Fund.
APA, Harvard, Vancouver, ISO, and other styles
15

Loyola, Rodriguez Diego G. [Verfasser]. "Methodologies for solving Satellite Remote Sensing Problems using Neuro Computing Techniques / Diego G. Loyola Rodriguez." München : Verlag Dr. Hut, 2013. http://d-nb.info/1037287096/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Dekkiche, Djamila. "Programming methodologies for ADAS applications in parallel heterogeneous architectures." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS388/document.

Full text
Abstract:
Computer Vision (CV) is crucial for understanding and analyzing the driving scene to build more intelligent Advanced Driver Assistance Systems (ADAS). However, implementing CV-based ADAS in a real automotive environment is not straightforward. Indeed, CV algorithms combine the challenges of high computing performance and algorithm accuracy. To respond to these requirements, new heterogeneous circuits are being developed. They consist of several processing units with different parallel computing technologies such as GPUs, dedicated accelerators, etc. To better exploit the performance of such architectures, different languages are required depending on the underlying parallel execution model. In this work, we investigate various parallel programming methodologies based on a complex case study of stereo vision. We introduce the relevant features and limitations of each approach. We evaluate the employed programming tools mainly in terms of computation performance and programming productivity. The feedback of this research is crucial for the development of future CV algorithms in adequacy with parallel architectures, with the best compromise between computing performance, algorithm accuracy, and programming effort.
APA, Harvard, Vancouver, ISO, and other styles
17

Kotiyal, Saurabh. "Design Methodologies for Reversible Logic Based Barrel Shifters." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/4106.

Full text
Abstract:
Reversible logic has promising applications in emerging computing paradigms such as quantum computing, quantum dot cellular automata, optical computing, etc. In reversible logic gates there is a unique one-to-one mapping between the inputs and outputs. To generate a useful gate function, reversible gates require some constant ancillary inputs called ancilla inputs. Also, to maintain the reversibility of the circuits, some additional unused outputs are required, referred to as garbage outputs. The number of ancilla inputs, the number of garbage outputs, and the quantum cost play an important role in the evaluation of reversible circuits; minimizing these parameters is therefore important for designing an efficient reversible circuit. The barrel shifter is an integral component of many computing systems due to its useful property that it can shift and rotate multiple bits in a single cycle. The main contribution of this thesis is a set of design methodologies for the reversible realization of barrel shifters, where the designs are based on the Fredkin gate and the Feynman gate. The Fredkin gate can implement the 2:1 MUX with minimum quantum cost, minimum number of ancilla inputs, and minimum number of garbage outputs, and the Feynman gate can be used to avoid fanout, as fanout is not allowed in reversible logic. The design methodologies considered in this work target: 1) a reversible logical right shifter; 2) a reversible universal right shifter that supports logical right shift, arithmetic right shift, and right rotate; 3) a reversible bidirectional logical shifter; 4) a reversible bidirectional arithmetic and logical shifter; 5) a reversible universal bidirectional shifter that supports bidirectional logical and arithmetic shift and rotate operations. The proposed design methodologies are evaluated in terms of the number of garbage outputs, the number of ancilla inputs, and the quantum cost. The detailed architecture and design of an (8,3) reversible logical right shifter and an (8,3) reversible universal right shifter are presented to illustrate the proposed methodologies.
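To make the Fredkin-gate-as-multiplexer idea in this abstract concrete, here is a tiny Python model of a Fredkin (controlled-SWAP) gate. It is illustrative only; the thesis builds full (8,3) barrel shifters from such gates in reversible hardware, not software.

```python
# Toy model of the Fredkin (controlled-SWAP) gate, the reversible primitive named above.

def fredkin(c, a, b):
    """Reversible 3-bit gate: the two data bits a and b are swapped when control c is 1."""
    return (c, b, a) if c else (c, a, b)

# Read as a 2:1 multiplexer: with inputs (select, x, y) the second output equals
# x when select = 0 and y when select = 1; the other outputs are the garbage lines
# that keep the mapping one-to-one (reversible).
x, y = 0, 1
for select in (0, 1):
    _, mux_out, _ = fredkin(select, x, y)
    print(f"select={select} -> output={mux_out}")
```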
APA, Harvard, Vancouver, ISO, and other styles
18

Farahini, Nasim. "SiLago: Enabling System Level Automation Methodology to Design Custom High-Performance Computing Platforms : Toward Next Generation Hardware Synthesis Methodologies." Doctoral thesis, KTH, Elektronik och Inbyggda System, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-185787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Di, Palermo Vincent. "Social Influence and Organizational Innovation Characteristics on Enterprise Social Computing Adoption." ScholarWorks, 2016. https://scholarworks.waldenu.edu/dissertations/2026.

Full text
Abstract:
Ample research has been conducted to identify the determinants of information technology (IT) adoption. No previous quantitative researchers have explored IT adoption in the context of enterprise social computing (ESC). The purpose of this study was to test and extend the social influence model of IT adoption. In addition, this study addressed a gap in the research literature and presented a model that relates the independent variables of social action, social consensus, social authority, social cooperation, perceived relative advantage, perceived compatibility, perceived ease of use, perceived usefulness, and organizational commitment to the dependent variables of social embracement and embedment. A randomized stepwise multiple linear regression analysis was performed on survey data from 125 C-level executives (i.e., chief information officers and chief technology officers). The analysis found that executives consider perceived relative advantage, organizational commitment, and social computing action as the most significant factors relating to the adoption of ESC. Executives' perceptions about ESC could impact organizational commitment, implementation, and use of such technologies. The findings could make a social contribution within organizations by helping C-level executives understand the degree to which these factors contribute to the ESC adoption. The knowledge from this study may also help organizations derive operational effectiveness, efficiency, and create business value for their clients and society.
APA, Harvard, Vancouver, ISO, and other styles
20

Isaia, Filho Eduardo. "Uma metodologia para computação com DNA." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2004. http://hdl.handle.net/10183/16662.

Full text
Abstract:
DNA computing is a field of bioinformatics that seeks the solution of problems through the manipulation of DNA sequences. In 1994 the mathematician Leonard Adleman, using biological operations and DNA sequence manipulation, solved an instance of a problem considered intractable by conventional computation, thus establishing the beginning of DNA computing. Since then, a series of combinatorial problems has been solved through this model of programming. This work studies DNA computing, aiming to present some basic guidelines for those interested in programming in this environment. Advantages and disadvantages of DNA computing are contrasted, and some programming methods found in the literature are presented. Amongst the studied methods, the filtering method appears to be the most promising, and for this reason it was chosen as the basis for a programming methodology. To illustrate the sequential filtering method, some examples of problems solved by this method are shown.
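A small software analogue of the sequential filtering style mentioned in this abstract is sketched below: all candidate solutions are "synthesized" up front and each filtering pass discards the strands that violate one constraint. This is a hedged illustration; in the laboratory these filters correspond to biological operations on real DNA strands, and the example formula is invented.

```python
# In-silico analogue of sequential filtering in DNA computing (illustrative only).

from itertools import product

def sequential_filter(n_bits, constraints):
    pool = list(product((0, 1), repeat=n_bits))        # the initial "test tube"
    for keep in constraints:                           # one filtering step per constraint
        pool = [strand for strand in pool if keep(strand)]
    return pool                                        # surviving strands encode solutions

# Example: satisfy the toy formula (x0 or x1) and (not x1 or x2).
clauses = [lambda s: s[0] or s[1], lambda s: (not s[1]) or s[2]]
print(sequential_filter(3, clauses))
```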
APA, Harvard, Vancouver, ISO, and other styles
21

Gutierrez, Soto Mariantonieta. "MULTI-AGENT REPLICATOR CONTROL METHODOLOGIES FOR SUSTAINABLE VIBRATION CONTROL OF SMART BUILDING AND BRIDGE STRUCTURES." The Ohio State University, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=osu1494249419696286.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Lai, Guojun, and Bing Li. "Handwritten Document Binarization Using Deep Convolutional Features with Support Vector Machine Classifier." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-20090.

Full text
Abstract:
Background. Since historical handwritten documents have played important roles in promoting the development of human civilization, many of them have been preserved through digital versions for further scientific research. However, various degradations always exist in these documents, which can interfere with normal reading. Binarized versions can keep the meaningful content without the degradations of the original document images. Document image binarization always works as a pre-processing step before complex document analysis and recognition. It aims to extract the text from a document image. A desirable binarization performance can positively promote subsequent processing steps. To achieve better performance for document image binarization, efficient binarization methods are needed. In recent years, machine learning centered on deep learning has gathered substantial attention in document image binarization; for example, Convolutional Neural Networks (CNNs) are widely applied in document image binarization because of their powerful ability for feature extraction and classification. Meanwhile, the Support Vector Machine (SVM) is also used in image binarization. Its objective is to build an optimal hyperplane that maximizes the margin between negative samples and positive samples, which can separate the foreground pixels and the background pixels of the image distinctly. Objectives. This thesis aims to explore how the CNN-based process of deep convolutional feature extraction and an SVM classifier can be integrated to binarize handwritten document images, and how the results compare with some state-of-the-art document binarization methods. Methods. To investigate the effect of the proposed method on document image binarization, it is implemented and trained. In the architecture, a CNN is used to extract features from input images; afterwards these features are fed into an SVM for classification. The model is trained and tested with six different datasets. Then, the proposed model is compared with other binarization methods, including some state-of-the-art methods, on three other datasets. Results. The performance results indicate that the proposed model not only works well but also performs better than some other novel handwritten document binarization methods. In particular, evaluation of the results on the DIBCO 2013 dataset indicates that our method fully outperforms the other chosen binarization methods on all four evaluation metrics. Besides, it also has the ability to deal with some degradations, which demonstrates that its generalization and learning abilities are excellent. When a new kind of degradation appears, the proposed method can address it properly even though it never appeared in the training datasets. Conclusions. This thesis concludes that the CNN-based component and SVM can be combined for handwritten document binarization. Additionally, on certain datasets, the combination outperforms some other state-of-the-art binarization methods. Meanwhile, its generalization and learning ability is outstanding when dealing with some degradations.
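A minimal sketch of the CNN-features-plus-SVM pipeline described in this abstract follows. A randomly initialized convolutional layer stands in for the trained deep feature extractor, and an SVM classifies each pixel as foreground or background; the toy image and all parameter choices are assumptions, not the thesis configuration or data.

```python
# Illustrative CNN-feature + SVM binarization sketch (assumed setup, not the thesis model).

import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

torch.manual_seed(0)
conv = nn.Sequential(nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU())

def pixel_features(image):
    """image: 2-D numpy array in [0, 1]; returns one 8-dim feature vector per pixel."""
    with torch.no_grad():
        x = torch.from_numpy(image).float()[None, None]      # shape (1, 1, H, W)
        fmap = conv(x)[0]                                     # shape (8, H, W)
    return fmap.permute(1, 2, 0).reshape(-1, 8).numpy()       # shape (H*W, 8)

# Toy "document": dark strokes (foreground) on a light page.
img = np.ones((16, 16)); img[4:12, 6:8] = 0.1
labels = (img < 0.5).astype(int).ravel()                      # 1 = text pixel

clf = SVC(kernel="rbf").fit(pixel_features(img), labels)      # SVM on deep features
binarized = clf.predict(pixel_features(img)).reshape(img.shape)
print(binarized)
```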
APA, Harvard, Vancouver, ISO, and other styles
23

Bessinger, Zachary. "An Automatic Framework for Embryonic Localization Using Edges in a Scale Space." TopSCHOLAR®, 2013. http://digitalcommons.wku.edu/theses/1262.

Full text
Abstract:
Localization of Drosophila embryos in images is a fundamental step in an automatic computational system for the exploration of gene-gene interaction in Drosophila. Contour extraction of embryonic images is challenging due to many variations in embryonic images. In this thesis work, we develop a localization framework based on the analysis of connected components of edge pixels in a scale space. We propose criteria to select optimal scales for embryonic localization. Furthermore, we propose a scale mapping strategy to compress the range of a scale space in order to improve the efficiency of the localization framework. The effectiveness of the proposed framework and of the scale mapping strategy is validated in our experiments.
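The sketch below gives a rough, assumed version of the idea in this abstract: detect edge pixels at several Gaussian scales, group them into connected components, and take the largest component's bounding box as the object location. The actual thesis uses explicit scale-selection criteria and a scale mapping strategy rather than this simplistic "largest component at any scale" rule.

```python
# Simplified scale-space edge/connected-component localization (illustrative assumption).

import numpy as np
from scipy import ndimage

def localize(image, scales=(1.0, 2.0, 4.0)):
    best = None
    for sigma in scales:
        smoothed = ndimage.gaussian_filter(image, sigma)
        gy, gx = np.gradient(smoothed)
        edges = np.hypot(gx, gy) > 0.1 * np.hypot(gx, gy).max()   # crude edge map
        labels, n = ndimage.label(edges)
        if n == 0:
            continue
        sizes = ndimage.sum(edges, labels, index=range(1, n + 1))
        component = labels == (int(np.argmax(sizes)) + 1)         # largest component
        ys, xs = np.nonzero(component)
        box = (ys.min(), xs.min(), ys.max(), xs.max())
        if best is None or sizes.max() > best[0]:
            best = (sizes.max(), sigma, box)
    return best   # (component size, chosen scale, bounding box)

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[20:44, 16:48] = 1.0             # toy "embryo"
    noisy = img + 0.05 * np.random.default_rng(0).normal(size=img.shape)
    print(localize(noisy))
```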
APA, Harvard, Vancouver, ISO, and other styles
24

Londoño, Humberto Reynales. "EpiDoc®: plataforma de comunicação em epidemiologia." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/5/5137/tde-24062008-144045/.

Full text
Abstract:
Introduction: EpiDoc® is a model for the transfer of knowledge in the field of research methodology. It is based on the concept of collaboration strategies for learning (learning communities or communities of practice) through the joint effort around the common interests of a professional group. The objective of this project is to develop a communication platform for knowledge transfer and the development of competences in a community of practice in health research methodology. Methods: The communication platform was designed with ASP (Active Server Pages) technology interacting with a Microsoft SQL Server 2000 database. For the evaluation phase, a sample of 38 people answered an opinion questionnaire of 84 questions covering the different areas to evaluate, such as the contents, the technology, the learning environment, the problems and difficulties, and also the positive elements of the learning process.
Results: The communication platform is divided into two zones, one public and one private, and is available in three languages: English, Spanish, and Portuguese. The platform contains the following modules: access control; library; course administration; presentations; subscriptions for electronic distribution of educational materials; electronic and mass mail; virtual chat rooms; discussion forums; document management between users and mentors; administration of evaluation tests for the users; automatic generation of certificates; and metrics and activity reports. The evaluation was carried out with a sample of 38 students from a Clinical Epidemiology course. 94% of the students were satisfied or very satisfied with the learning experience. 95% considered that they had acquired new communication and collaboration abilities by studying in the virtual environment. For 76%, group work was made easier, and 84% noticed an improved capacity to learn from others through interaction. Conclusion: EpiDoc® uses a communication platform based on modern technologies over the internet. In general, the results confirm that the new technologies applied to the teaching of research methodology are well received by the students. They have a positive attitude toward incorporating this modality into their regular courses.
APA, Harvard, Vancouver, ISO, and other styles
25

Downey, Laura. "Well-being Technologies: Meditation Using Virtual Worlds." NSUWorks, 2015. http://nsuworks.nova.edu/gscis_etd/65.

Full text
Abstract:
In a technologically overloaded world, is it possible to use technology to support well-being activities and enhance human flourishing? Proponents of positive technology and positive computing are striving to answer yes to that question. However, the impact of technology on well-being remains unresolved. Positive technology combines technology and positive psychology. Positive psychology focuses on well-being and the science of human flourishing. Positive computing includes an emphasis on designing with well-being in mind as a way to support human potential. User experience (UX) is critical to positive technology and positive computing. UX researchers and practitioners are advocating for experience-driven design and third wave human-computer interaction (HCI) that focuses on multi-dimensional, interpretive, situated, and phenomenological aspects. Third-wave HCI goes beyond cognition to include emotions, values, culture, and experience. This research investigated technology-supported meditation in a three-dimensional (3D) virtual world from a positive technology perspective to examine how technology can support engagement, self-empowerment, and well-being. Designing and evaluating technology for well-being support is complex and challenging. Further, although virtual worlds have been used in positive technology applications, little research exists that illuminates the experience of user engagement in virtual worlds. In this formative exploratory study, experienced meditators (N = 12) interacted with a virtual meditation world titled Sanctuarium that was developed for this research. Using a third wave HCI approach, both quantitative and qualitative data were collected to understand the nature of engagement with a virtual world and the experiential aspects of technology-supported meditation. Results supported using virtual worlds to produce restorative natural environments. Participants overwhelmingly reacted positively to the islandscape including both visual and sound elements. Findings indicated that Sanctuarium facilitated the meditation experience, similar to guided meditation – although participants remarked on the uniqueness of the experience. Aspects of facilitation centered on the concepts of non-distraction, focus, and simplicity of design and instructions. Participants also identified Sanctuarium as a good tool for helping those new to meditation. Meditators described positive effects of their meditation experience during interviews and also rated their experience as positive using the scale titled Effects of Meditation During Meditation. Phenomenological analysis provided a rich description of the nature of engagement while meditating with Sanctuarium. Meditators also rated engagement as high via an adapted User Engagement Scale. This interdisciplinary work drew from multiple fields and contributes to the HCI domain, virtual worlds’ literature, information systems research, and the nascent areas of positive technology and positive computing.
APA, Harvard, Vancouver, ISO, and other styles
26

Nordesjö, Olle. "Searching for novel protein-protein specificities using a combined approach of sequence co-evolution and local structural equilibration." Thesis, Uppsala universitet, Institutionen för biologisk grundutbildning, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-275040.

Full text
Abstract:
A greater understanding of how we can use protein simulations and statistical characteristics of biomolecular interfaces as proxies for biological function will enable major advances in protein engineering. Here we show how to use the calculated change in binding affinity and coevolutionary scores to predict the functional effect of mutations in the interface between a Histidine Kinase and a Response Regulator. These proteins participate in the Two-Component Regulatory system, a system for intracellular signalling found in bacteria. We find that both scores work as proxies for functional mutants and demonstrate a ~30-fold improvement in initial positive predictive value in the top 20 mutants compared with choosing randomly from a sequence space of 160,000 variants. We also demonstrate qualitative differences in the predictions of the two scores, primarily a tendency for the coevolutionary score to miss one class of functional mutants with an enriched frequency of the amino acid threonine in one position.
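As a hedged sketch of the combined-scoring idea in this abstract, the snippet below ranks candidate interface mutants by a simple weighted combination of a predicted binding-affinity change and a coevolutionary score. The mutant names, score values, and weighting are invented for illustration; the thesis derives both quantities from structural equilibration and sequence coevolution rather than this toy formula.

```python
# Toy combined ranking of interface mutants (all values and names are hypothetical).

def rank_mutants(mutants, weight=0.5):
    """mutants: dict name -> (ddG, coevolution_score); returns names, best first."""
    def combined(item):
        ddg, coev = item[1]
        return weight * (-ddg) + (1.0 - weight) * coev   # favor low ddG, high coevolution
    return [name for name, _ in sorted(mutants.items(), key=combined, reverse=True)]

candidates = {"mutA": (-1.2, 0.8), "mutB": (0.4, 0.9), "wildtype": (0.0, 1.0)}
print(rank_mutants(candidates)[:2])    # top-2 candidates under this toy scoring
```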
APA, Harvard, Vancouver, ISO, and other styles
27

Vũ, John Huân. "Software Internationalization: A Framework Validated Against Industry Requirements for Computer Science and Software Engineering Programs." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/248.

Full text
Abstract:
View John Huân Vũ's thesis presentation at http://youtu.be/y3bzNmkTr-c. In 2001, the ACM and IEEE Computing Curriculum stated that it was necessary to address "the need to develop implementation models that are international in scope and could be practiced in universities around the world." With increasing connectivity through the internet, the move towards a global economy and growing use of technology places software internationalization as a more important concern for developers. However, there has been a "clear shortage in terms of numbers of trained persons applying for entry-level positions" in this area. Eric Brechner, Director of Microsoft Development Training, suggested five new courses to add to the computer science curriculum due to the growing "gap between what college graduates in any field are taught and what they need to know to work in industry." He concludes that "globalization and accessibility should be part of any course of introductory programming," stating: A course on globalization and accessibility is long overdue on college campuses. It is embarrassing to take graduates from a college with a diverse student population and have to teach them how to write software for a diverse set of customers. This should be part of introductory software development. Anything less is insulting to students, their family, and the peoples of the world. There is very little research into how the subject of software internationalization should be taught to meet the major requirements of the industry. The research question of the thesis is thus, "Is there a framework for software internationalization that has been validated against industry requirements?" The answer is no. The framework "would promote communication between academia and industry ... that could serve as a common reference point in discussions." Since no such framework for software internationalization currently exists, one will be developed here. The contribution of this thesis includes a provisional framework to prepare graduates to internationalize software and a validation of the framework against industry requirements. The requirement of this framework is to provide a portable and standardized set of requirements for computer science and software engineering programs to teach future graduates.
APA, Harvard, Vancouver, ISO, and other styles
28

Begum, Momotaz. "Robotic mapping using soft computing methodologies." 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Cardellino, Cristian Adrián. "Estudio de métodos semisupervisados para la desambiguación de sentidos verbales del español." Doctoral thesis, 2018. http://hdl.handle.net/11086/6601.

Full text
Abstract:
This thesis explores the use of semi-supervised techniques for verb sense disambiguation in Spanish. The goal is to study how information from unlabeled data, which is much larger in size, can help a classifier trained on a small labeled dataset. The thesis starts from the fully supervised verb sense disambiguation task and studies the following semi-supervised techniques, comparing their impact on the original task: word embeddings, self-learning (self-training), active learning, and ladder networks.
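As a minimal sketch of one of the semi-supervised techniques mentioned (self-learning, or self-training), the snippet below repeatedly adds the classifier's most confident predictions on unlabeled data to the training set. The data here is synthetic and the confidence threshold is arbitrary; the thesis' actual experiments on Spanish verb senses are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, y_lab = X[:100], y[:100]          # small labeled set
X_unlab = X[100:]                        # large unlabeled pool

clf = LogisticRegression(max_iter=1000)
for _ in range(5):                       # a few self-training rounds
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) > 0.95           # pseudo-label only confident examples
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]
print("final labeled set size:", len(y_lab))
```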
APA, Harvard, Vancouver, ISO, and other styles
30

Mazuecos, Perez Mauricio Diego. "Estudio de simplificación de oraciones con modelos actor-critic." Bachelor's thesis, 2019. http://hdl.handle.net/11086/11914.

Full text
Abstract:
Sentence simplification is a Natural Language Processing task that focuses on transforming text so that its grammar, structure and vocabulary are easier to understand, without losing the semantics of the original sentence. As such, it is not a simple task to tackle and requires sophisticated methods to capture the characteristics that make a sentence simple. At the same time, these models must have an adequate representation of the meaning of the sentence, which must not be altered during the simplification process. This thesis explores the use of reinforcement learning for the sentence simplification task. Training proceeds in two stages: a classical machine translation system is built first and then fine-tuned in a second training stage with an actor-critic algorithm. Results of the first training stage are reported and compared with previous work, and the difficulties and problems of the second training stage are discussed.
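The second training stage can be pictured as a standard actor-critic update: the actor is the sequence model that proposes a simplified sentence, and the critic estimates the expected reward (for example, a simplicity metric). The PyTorch fragment below only shows the shape of that update on dummy tensors; the names and the reward are placeholders, not the thesis' implementation.

```python
import torch

# Dummy quantities for one sampled simplification:
log_prob = torch.tensor(-12.3, requires_grad=True)   # sum of log-probs of sampled tokens (actor)
value = torch.tensor(0.40, requires_grad=True)       # critic's estimate of the expected reward
reward = torch.tensor(0.65)                          # e.g., a simplicity score for the sample

advantage = reward - value
actor_loss = -advantage.detach() * log_prob          # raise probability of better-than-expected outputs
critic_loss = advantage.pow(2)                       # regress the critic towards the observed reward
loss = actor_loss + 0.5 * critic_loss
loss.backward()
print(float(log_prob.grad), float(value.grad))
```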
APA, Harvard, Vancouver, ISO, and other styles
31

Falcone, Deborah, Domenico Talia, and Sergio Greco. "Mobile Computing: energy-aware tecniques and location-based methodologies." Thesis, 2014. http://hdl.handle.net/10955/1237.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Yeh, Chen-lin, and 葉貞麟. "Big Data Mining with Parallel Computing: A Comparison of Distributed and MapReduce Methodologies." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/hq5gpf.

Full text
Abstract:
Master's thesis, National Central University, Department of Information Management (academic year 103). The dataset size is growing faster than Moore's Law, and the big data frenzy is currently sweeping through our daily life. The challenges of managing massive amounts of big data involve the techniques of making the data accessible. The big data concept is general and encapsulates much of the essence of data mining techniques, which can discover the most important and relevant knowledge and turn it into valuable information. The advancement of Internet technology and the popularity of cloud computing can break through the time-efficiency limitations of traditional data mining methods on very large scale datasets. The technology of big data mining should create the conditions for the efficient mining of massive amounts of data with the aim of producing useful information in real time. The data scientist John Rauser defines big data as "any amount of data that's too big to be handled by one computer." A standalone device does not have enough memory, nor enough storage capacity, to efficiently handle big data. Therefore, big data mining can be performed via the conventional distributed and MapReduce methodologies. This raises two research questions: do the distributed and MapReduce methodologies perform differently in mining accuracy and efficiency over large scale datasets, and does big data mining need data preprocessing? The experimental results based on four large scale datasets show that using MapReduce without data preprocessing requires the least processing time and allows the classifier to provide the highest classification rate regardless of the number of computer nodes used, except for a class-imbalanced dataset.
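A toy way to picture the MapReduce side of the comparison is to train one classifier per data partition in the map phase and combine their votes in the reduce phase. The sketch below uses scikit-learn on synthetic data; the partition count, the model choice, and the majority-vote reducer are illustrative assumptions, not the thesis' experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=1)
X_train, y_train, X_test, y_test = X[:8000], y[:8000], X[8000:], y[8000:]

def map_phase(partition):
    """Train a local model on one data partition (the 'map' step)."""
    Xp, yp = partition
    return DecisionTreeClassifier(random_state=0).fit(Xp, yp)

def reduce_phase(models, X_new):
    """Majority vote over the partial models (the 'reduce' step)."""
    votes = np.stack([m.predict(X_new) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

partitions = [(X_train[i::4], y_train[i::4]) for i in range(4)]   # 4 'nodes'
models = [map_phase(p) for p in partitions]
accuracy = (reduce_phase(models, X_test) == y_test).mean()
print(f"accuracy of the partitioned ensemble: {accuracy:.3f}")
```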
APA, Harvard, Vancouver, ISO, and other styles
33

"FPGA design methodologies for high-performance applications." 2001. http://library.cuhk.edu.hk/record=b6073348.

Full text
Abstract:
Leong Monk Ping. Thesis (Ph.D.)--Chinese University of Hong Kong, 2001. Includes bibliographical references (p. 255-278). Electronic reproduction. Hong Kong: Chinese University of Hong Kong, [2012]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. Electronic reproduction. Ann Arbor, MI: ProQuest Information and Learning Company, [200-]. System requirements: Adobe Acrobat Reader. Available via World Wide Web. Mode of access: World Wide Web. Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO, and other styles
34

Bordone, Carranza Matías Eduardo. "Sincronización automática de movimientos labiales para programas de animación mediante análisis de audio en español." Bachelor's thesis, 2015. http://hdl.handle.net/11086/2813.

Full text
Abstract:
In the animation industry one of the main costs is man-hours, that is, the amount of work behind many of the production processes. One of those processes is lip sync, in which the mouth of an animated character is articulated in synchrony with the voice of an actor. The laws passed in Argentina in recent years have driven a strong demand for the production of local audiovisual content. This work proposes to investigate and implement the automation of the lip sync process, providing a mechanism that facilitates such production and supports, from the technical side, the creation of animated content. The automation is approached by detecting mouth positions directly from the audio, proposing a system that is simple, fast, low cost, and does not require external equipment.
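One common way to approach mouth-shape detection from audio is to extract short-time spectral features (for example, MFCCs) per frame and feed them to a classifier that maps frames to mouth positions (visemes). The sketch below, which synthesizes a test signal instead of reading real speech and uses made-up labels, only illustrates that pipeline; it is not the system developed in the thesis.

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

sr = 16_000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220 * t)            # placeholder for a recorded voice track

# Frame-level features: 13 MFCCs per ~32 ms window.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=512, hop_length=256).T

# Hypothetical per-frame viseme labels (in practice these come from annotated data).
labels = np.random.default_rng(0).integers(0, 4, size=len(mfcc))

clf = KNeighborsClassifier(n_neighbors=3).fit(mfcc, labels)
predicted_visemes = clf.predict(mfcc[:10])
print(predicted_visemes)    # one mouth-position id per audio frame
```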
APA, Harvard, Vancouver, ISO, and other styles
35

Silva, Martín Gastón. "Predicción de tendencias en redes sociales basada en características sociales y contenido." Bachelor's thesis, 2018. http://hdl.handle.net/11086/6245.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2018. Within the framework of social network analysis, this work seeks to capture the behavior of influential users with respect to a given publication. With this information, the intention is to build a machine learning model capable of predicting whether a given tweet will become "popular" or not. The dataset was built through the public Twitter API, reaching a final volume of more than 5,000 users and 5,000,000 publications. With this data, several machine learning models with multiple configurations were trained and evaluated in search of the best performance. In a first experiment, a binary classification model based on SVM (Support Vector Machines) using only social information reached 77% on the F1 metric when predicting whether a publication is considered "popular". In a second stage, Natural Language Processing techniques were applied to the content of the publications, achieving significant improvements in the cases where the previous model underperformed. This analysis of the tweets was carried out using topic detection with LDA (Latent Dirichlet Allocation) algorithms.
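A compressed sketch of the two stages described, social features fed to an SVM and then LDA topic proportions added as content features, is shown below on synthetic data. The feature names, topic count and all numbers are assumptions for illustration only.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
tweets = ["machine learning rocks", "buy cheap followers now", "new paper on neural nets",
          "elections and politics today", "deep learning for vision", "football match tonight"] * 50
popular = rng.integers(0, 2, size=len(tweets))          # placeholder labels

# Hypothetical social features, e.g. author followers, retweets among followed users.
social = rng.random((len(tweets), 3))

# Content features: LDA topic proportions over a bag-of-words representation.
counts = CountVectorizer().fit_transform(tweets)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)

X_both = np.hstack([social, topics])
for name, X in [("social only", social), ("social + LDA topics", X_both)]:
    f1 = cross_val_score(SVC(kernel="rbf"), X, popular, cv=3, scoring="f1").mean()
    print(f"{name}: F1 = {f1:.2f}")
```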
APA, Harvard, Vancouver, ISO, and other styles
36

Panda, Priyadarshini. "Learning and Design Methodologies for Efficient, Robust Neural Networks." Thesis, 2019.

Find full text
Abstract:
<div>"Can machines think?", the question brought up by Alan Turing, has led to the development of the eld of brain-inspired computing, wherein researchers have put substantial effort in building smarter devices and technology that have the potential of human-like understanding. However, there still remains a large (several orders-of-magnitude) power efficiency gap between the human brain and computers that attempt to emulate some facets of its functionality. In this thesis, we present design techniques that exploit the inherent variability in the difficulty of input data and the correlation of characteristic semantic information among inputs to scale down the computational requirements of a neural network with minimal impact on output quality. While large-scale artificial neural networks have achieved considerable success in a range of applications, there is growing interest in more biologically realistic models, such as, Spiking Neural Networks (SNNs), due to their energy-efficient spike based processing capability. We investigate neuroscientific principles to develop novel learning algorithms that can enable SNNs to conduct on-line learning. We developed an auto-encoder based unsupervised learning rule for training deep spiking convolutional networks that yields state-of-the-art results with computationally efficient learning. Further, we propose a novel "learning to forget" rule that addresses the catastrophic forgetting issue predominant with traditional neural computing paradigm and offers a promising solution for real-time lifelong learning without the expensive re-training procedure. Finally, while artificial intelligence grows in this digital age bringing large-scale social disruption, there is a growing security concern in the research community about the vulnerabilities of neural networks towards adversarial attacks. To that end, we describe discretization-based solutions, that are traditionally used for reducing the resource utilization of deep neural networks, for adversarial robustness. We also propose a novel noise-learning training strategy as an adversarial defense method. We show that implicit generative modeling of random noise with the same loss function used during posterior maximization, improves a model's understanding of the data manifold, furthering adversarial robustness. We evaluated and analyzed the behavior of the noise modeling technique using principal component analysis that yields metrics which can be generalized to all adversarial defenses.</div>
APA, Harvard, Vancouver, ISO, and other styles
37

Prasanth, V. "Low Overhead Soft Error Mitigation Methodologies." Thesis, 2012. http://etd.iisc.ac.in/handle/2005/3232.

Full text
Abstract:
CMOS technology scaling is bringing new challenges to designers in the form of new failure modes. The challenges include long-term reliability failures and particle-strike-induced random failures. Studies have shown that, increasingly, the largest contributor to device reliability failures will be soft errors. Due to reliability concerns, the adoption of soft error mitigation techniques is on the increase. As soft error mitigation techniques are increasingly adopted, the area and performance overhead incurred in their implementation also becomes pertinent. This thesis addresses the problem of providing low cost soft error mitigation. The main contributions of this thesis include: (i) proposal of a new delayed capture methodology for low overhead soft error detection, (ii) adopting Error Control Coding (ECC) in the delayed capture methodology for correction of single event upsets, (iii) analyzing the impact of different derating factors to reduce the hardware overhead incurred by the above implementations, and (iv) proposal of hardware-software co-design for reliability based upon critical component identification determined by the application executing on the hardware (as against standalone hardware analysis). This thesis first surveys existing soft error mitigation techniques and their associated limitations. It proposes a new delayed capture methodology as a low overhead soft error detection technique. The delayed capture methodology is an enhancement of the Razor flip-flop methodology. In the delayed capture methodology, the parity for a set of flip-flops is calculated at their inputs and outputs. The input parity is latched on a second clock, which is delayed with respect to the functional clock by more than the soft error pulse width. It requires an extra flip-flop for each set of flip-flops. On the other hand, in the Razor flip-flop methodology an additional flip-flop is required for every functional flip-flop. Due to the skew in the clocks, either the parity flip-flop or the functional flip-flop will capture the effect of the transient, and hence by comparing the output parity and the latched input parity an error can be detected. Fault injection experiments are performed to evaluate the benefits and limitations of the proposed approach. The limitations include soft error detection escapes and lack of error correction capability. Different cases of soft error detection escapes are analyzed. They are attributed mainly to a Single Event Upset (SEU) causing multiple flip-flops within a group to be in error. The error space due to SEUs is analyzed and an intelligent flip-flop grouping method using graph theoretic formulations is proposed such that no SEU can cause multiple flip-flops within a group to be in error. Once an error occurs, leaving the correction aspects to the application may not be desirable. The proposed delayed capture methodology is extended to replace parity codes with codes having higher redundancy to enable correction. The hardware overhead due to the proposed methodology is analyzed, and an area saving of about 15% is obtained when compared with an existing soft error mitigation methodology with equivalent coverage. The impact of different derating factors in determining the hardware overhead due to the soft error mitigation methodology is then analyzed. We have considered electrical derating and timing derating information for evaluation purposes. The area overhead of the circuit implementing the delayed capture methodology, considering the different derating factors standalone and in combination, is then analyzed. Results indicate that, depending on the circuit, optimal results are obtained either by a combination of these derating factors or by each of them considered standalone. This is due to the dependency of the solution on the heuristic nature of the algorithms used. About 23% area savings are obtained by employing these derating factors for a more optimal grouping of flip-flops. A new paradigm of hardware-software co-design for reliability is finally proposed. This is based on application derating, in which the application / firmware code is profiled to identify the critical components which must be guarded from soft errors. This identification is based on the ability of the application software to tolerate certain errors in hardware. An algorithm to identify critical components in the control logic based on fault injection is developed. Experimental results indicated that for a safety-critical automotive application, only 12% of the sequential logic elements were found to be critical. This approach provides a framework for investigating how software methods can complement hardware methods, to provide a reduced hardware solution for soft error mitigation.
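A behavioural way to picture the detection mechanism is sketched below: the parity of a flip-flop group is computed at the inputs and latched, a transient is injected on one flip-flop, and the output parity is compared against the latched input parity. This Python sketch only mimics the logic at a very abstract level; signal timing, the delayed clock and the real circuit structure are of course not modelled.

```python
from functools import reduce
from operator import xor
import random

def parity(bits):
    return reduce(xor, bits, 0)

random.seed(0)
group = [random.randint(0, 1) for _ in range(8)]   # values driven into a group of flip-flops

latched_input_parity = parity(group)               # parity flip-flop, latched on the delayed clock

# Single Event Upset: one flip-flop in the group captures the wrong value.
captured = list(group)
captured[3] ^= 1

output_parity = parity(captured)
error_detected = output_parity != latched_input_parity
print("SEU detected:", error_detected)             # True: an odd number of upsets flips the parity
```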
APA, Harvard, Vancouver, ISO, and other styles
38

Prasanth, V. "Low Overhead Soft Error Mitigation Methodologies." Thesis, 2012. http://hdl.handle.net/2005/3232.

Full text
Abstract:
CMOS technology scaling is bringing new challenges to designers in the form of new failure modes. The challenges include long-term reliability failures and particle-strike-induced random failures. Studies have shown that, increasingly, the largest contributor to device reliability failures will be soft errors. Due to reliability concerns, the adoption of soft error mitigation techniques is on the increase. As soft error mitigation techniques are increasingly adopted, the area and performance overhead incurred in their implementation also becomes pertinent. This thesis addresses the problem of providing low cost soft error mitigation. The main contributions of this thesis include: (i) proposal of a new delayed capture methodology for low overhead soft error detection, (ii) adopting Error Control Coding (ECC) in the delayed capture methodology for correction of single event upsets, (iii) analyzing the impact of different derating factors to reduce the hardware overhead incurred by the above implementations, and (iv) proposal of hardware-software co-design for reliability based upon critical component identification determined by the application executing on the hardware (as against standalone hardware analysis). This thesis first surveys existing soft error mitigation techniques and their associated limitations. It proposes a new delayed capture methodology as a low overhead soft error detection technique. The delayed capture methodology is an enhancement of the Razor flip-flop methodology. In the delayed capture methodology, the parity for a set of flip-flops is calculated at their inputs and outputs. The input parity is latched on a second clock, which is delayed with respect to the functional clock by more than the soft error pulse width. It requires an extra flip-flop for each set of flip-flops. On the other hand, in the Razor flip-flop methodology an additional flip-flop is required for every functional flip-flop. Due to the skew in the clocks, either the parity flip-flop or the functional flip-flop will capture the effect of the transient, and hence by comparing the output parity and the latched input parity an error can be detected. Fault injection experiments are performed to evaluate the benefits and limitations of the proposed approach. The limitations include soft error detection escapes and lack of error correction capability. Different cases of soft error detection escapes are analyzed. They are attributed mainly to a Single Event Upset (SEU) causing multiple flip-flops within a group to be in error. The error space due to SEUs is analyzed and an intelligent flip-flop grouping method using graph theoretic formulations is proposed such that no SEU can cause multiple flip-flops within a group to be in error. Once an error occurs, leaving the correction aspects to the application may not be desirable. The proposed delayed capture methodology is extended to replace parity codes with codes having higher redundancy to enable correction. The hardware overhead due to the proposed methodology is analyzed, and an area saving of about 15% is obtained when compared with an existing soft error mitigation methodology with equivalent coverage. The impact of different derating factors in determining the hardware overhead due to the soft error mitigation methodology is then analyzed. We have considered electrical derating and timing derating information for evaluation purposes. The area overhead of the circuit implementing the delayed capture methodology, considering the different derating factors standalone and in combination, is then analyzed. Results indicate that, depending on the circuit, optimal results are obtained either by a combination of these derating factors or by each of them considered standalone. This is due to the dependency of the solution on the heuristic nature of the algorithms used. About 23% area savings are obtained by employing these derating factors for a more optimal grouping of flip-flops. A new paradigm of hardware-software co-design for reliability is finally proposed. This is based on application derating, in which the application / firmware code is profiled to identify the critical components which must be guarded from soft errors. This identification is based on the ability of the application software to tolerate certain errors in hardware. An algorithm to identify critical components in the control logic based on fault injection is developed. Experimental results indicated that for a safety-critical automotive application, only 12% of the sequential logic elements were found to be critical. This approach provides a framework for investigating how software methods can complement hardware methods, to provide a reduced hardware solution for soft error mitigation.
APA, Harvard, Vancouver, ISO, and other styles
39

Tafliovich, Anya. "Predicative Quantum Programming." Thesis, 2010. http://hdl.handle.net/1807/24890.

Full text
Abstract:
This work presents Quantum Predicative Programming, a theory of quantum programming that encompasses many aspects of quantum computation and quantum communication. The theory provides a methodology to specify, implement, and analyse quantum algorithms, the paradigm of quantum non-locality, quantum pseudotelepathy games, computing with mixed states, and quantum communication protocols that use both quantum and classical communication channels.
APA, Harvard, Vancouver, ISO, and other styles
40

Budde, Carlos Esteban. "Automatización de técnicas de división por importancia para la simulación de eventos raros." Doctoral thesis, 2017. http://hdl.handle.net/11086/5846.

Full text
Abstract:
Many techniques exist to study and verify formal descriptions of probabilistic systems. Discrete-event Monte Carlo simulation offers an alternative for general stochastic processes described as automata. When the values to be estimated depend on the occurrence of rare events, whose presence in a trace is very unlikely, the amount of simulation required can become unfeasible. Importance splitting is a simulation method specialised in attacking these situations, but it requires an importance function. The efficiency of the method depends essentially on the non-trivial definition of this function, which is typically done ad hoc. In this thesis we present automatic techniques to derive the importance function, based on a formal description of a general stochastic process and of the property to be estimated. Experimental results are also presented on case studies taken from the rare event simulation literature, obtained with publicly available software tools implemented for this purpose as part of this thesis.
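To make the idea of importance splitting concrete, the sketch below estimates the probability that a biased random walk reaches a high level before returning to zero, using the walk's current height as the importance function and a fixed number of particles per level. The model, the levels, and the simulation effort are arbitrary choices for illustration; the thesis is about deriving the importance function automatically for general stochastic models, which this toy example does not do.

```python
import random

random.seed(1)
p_up, target, effort = 0.3, 12, 1000   # upward step probability, rare level, particles per level

def run_until(start, next_level):
    """Simulate the walk from `start` until it reaches `next_level` (success) or 0 (failure)."""
    x = start
    while 0 < x < next_level:
        x += 1 if random.random() < p_up else -1
    return x == next_level

estimate, entry_states = 1.0, [1]           # all particles start at height 1
for level in range(2, target + 1):          # importance function f(x) = x defines the levels
    survivors = sum(run_until(random.choice(entry_states), level) for _ in range(effort))
    if survivors == 0:
        estimate = 0.0
        break
    estimate *= survivors / effort
    entry_states = [level]                  # the walk enters the next region exactly at `level`
print("splitting estimate of P(reach", target, "before 0):", estimate)
```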
APA, Harvard, Vancouver, ISO, and other styles
41

Nievas, Francisco. "Automatización de cefalometrías utilizando métodos de aprendizaje automático." Bachelor's thesis, 2019. http://hdl.handle.net/11086/15305.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2019. Cephalometry is a medical study used to diagnose dental, skeletal or aesthetic problems. It is carried out on a tracing of the soft and hard structures (skin and bone, respectively) obtained from a lateral X-ray image of the patient's face. Once the tracing is obtained, certain cephalometric points, together with characteristic lines and angles, are marked in order to carry out the study itself. This work proposes the use of machine learning models for the generation of cephalometries. These models detect the cephalometric points in X-ray images, speeding up the computation of the study. Novel architectures are presented that combine an autoencoder with convolutional neural networks using Inception layers to associate a probability map with an input image. Different models were compared, showing excellent performance on this task.
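The core of the heatmap approach can be condensed into a small fully convolutional network that maps an X-ray image to a probability map whose peak marks a cephalometric point. The PyTorch sketch below uses plain convolutions (no Inception blocks or autoencoder pre-training) and random input, so it only illustrates the image-to-probability-map idea, not the architectures of the thesis.

```python
import torch
import torch.nn as nn

class HeatmapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),          # one output channel per landmark
        )

    def forward(self, x):
        logits = self.features(x)                     # (batch, 1, H, W)
        b, c, h, w = logits.shape
        probs = torch.softmax(logits.view(b, c, -1), dim=-1)   # probability map over pixels
        return probs.view(b, c, h, w)

net = HeatmapNet()
xray = torch.randn(1, 1, 64, 64)                      # stand-in for a lateral X-ray patch
heatmap = net(xray)
y, x = divmod(int(heatmap[0, 0].flatten().argmax()), heatmap.shape[-1])
print("predicted landmark at pixel:", (y, x))
```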
APA, Harvard, Vancouver, ISO, and other styles
42

Cruz, Kouichi Julián Andrés. "Desarrollo de un algoritmo de compresión de datos optimizado para imágenes satelitales." Bachelor's thesis, 2017. http://hdl.handle.net/11086/5863.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2017. Satellite images keep growing in size, to the point where speaking in terms of gigabytes is now normal. Products such as mosaics also involve large volumes of data, which complicates not only their processing but also their transfer and distribution to end users. Finally, there is the problem of handling the data both on board the satellite platform and in its downlink to Earth. In this thesis, a lossy compression algorithm aimed at this problem is designed and implemented, based on the Discrete Wavelet Transform and Huffman coding.
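The two building blocks named in the abstract can be put together in a few lines: a 2-D discrete wavelet transform, coarse quantization of the coefficients (this is where the loss is introduced), and Huffman coding of the quantized symbols. The sketch below, using PyWavelets on a random "image" and an arbitrary quantization step, is only a schematic of that pipeline, not the algorithm developed in the thesis.

```python
import heapq
from collections import Counter
import numpy as np
import pywt

image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)

# 1) Discrete Wavelet Transform, 2) uniform quantization of the coefficients.
coeffs = pywt.wavedec2(image, "haar", level=2)
flat, _ = pywt.coeffs_to_array(coeffs)
step = 16.0                                   # larger step -> more loss, better compression
symbols = np.round(flat / step).astype(int).ravel().tolist()

# 3) Huffman coding of the quantized symbols.
def huffman_codes(data):
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {**{s: "0" + c for s, c in c1.items()},
                  **{s: "1" + c for s, c in c2.items()}}
        heapq.heappush(heap, (f1 + f2, i, merged))
        i += 1
    return heap[0][2]

codes = huffman_codes(symbols)
bits = sum(len(codes[s]) for s in symbols)
print(f"raw: {image.size * 8} bits, coded: {bits} bits")
```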
APA, Harvard, Vancouver, ISO, and other styles
43

Kokic, Emiliano. "Reconocimiento semi-supervisado de entidades nombradas mediante redes convolucionales en escalera." Bachelor's thesis, 2019. http://hdl.handle.net/11086/19966.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2019. This thesis explores a semi-supervised machine learning method called Convolutional Ladder Networks. The problem chosen to evaluate the model is Named Entity Recognition, a very relevant task within the Natural Language Processing area. The study relied on WiNER, an annotated Wikipedia corpus of great quality and easy access. Alternative strategies for representing words according to their context are also studied: the well-known Word2Vec model is used to generate word embeddings, together with strategies that combine them. In particular, convolutional layers turn out to be a great tool for extracting features from the context. Different model architectures were implemented, each with a supervised version (as a baseline) and a semi-supervised one (adding the ladder networks). Each architecture uses different types of training instances, in some cases word tagging as well as sequence tagging. Finally, after defining the evaluation metrics, the relevant experiments were run, and the Wide Convolutional Ladder Network model emerged as the most promising. Although the results are not state of the art for named entity recognition, the semi-supervised ladder network models generalize better, and their performance does not fall far below that of the supervised ones thanks to the complementary use of unlabeled data.
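At its core, a ladder-style semi-supervised objective combines a supervised classification loss on the few labeled examples with a denoising (reconstruction) loss on the many unlabeled ones. The PyTorch fragment below shows only that combined cost on dummy tensors, with a single linear layer in place of the convolutional ladder used in the thesis; all names and weights are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
encoder = nn.Linear(50, 32)                  # stand-in for the convolutional encoder
classifier = nn.Linear(32, 5)                # say, 5 entity classes
decoder = nn.Linear(32, 32)                  # denoising branch of the ladder

x_lab, y_lab = torch.randn(8, 50), torch.randint(0, 5, (8,))
x_unlab = torch.randn(64, 50)

# Supervised path (labeled batch).
sup_loss = nn.functional.cross_entropy(classifier(encoder(x_lab)), y_lab)

# Unsupervised path (unlabeled batch): reconstruct the clean hidden state from a noisy one.
h_clean = encoder(x_unlab)
h_noisy = encoder(x_unlab + 0.3 * torch.randn_like(x_unlab))
recon_loss = nn.functional.mse_loss(decoder(h_noisy), h_clean.detach())

loss = sup_loss + 0.1 * recon_loss           # the weight of the denoising cost is a hyperparameter
loss.backward()
print(float(sup_loss), float(recon_loss))
```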
APA, Harvard, Vancouver, ISO, and other styles
44

Carranza, Astrada Rodrigo Pablo. "Reconocimiento de caracteres en imágenes no estructuradas." Bachelor's thesis, 2015. http://hdl.handle.net/11086/2818.

Full text
Abstract:
Having a computer discern one character from another in an image of text is not a simple task. The goal is to classify characters in natural scenes where traditional OCR techniques cannot be applied directly (De Campos et al., 2009). This work presents an analysis of the impact on classification performance of training a character classifier with synthetic images (Wang et al., 2011). This is complemented by a performance analysis using different synthetic training sets generated from the public dataset known as Chars74k. The final result of this work corroborates that this type of data has a positive impact on classification, and even more so when combined with real data.
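The synthetic-data idea can be tried in a few lines: render characters with a font, add some jitter, and train a classifier on the rendered images. The snippet below uses Pillow's default bitmap font and a two-class toy problem, which is far simpler than the Chars74k setup analysed in the thesis.

```python
import numpy as np
from PIL import Image, ImageDraw, ImageFont
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
font = ImageFont.load_default()

def render(char, dx, dy):
    img = Image.new("L", (24, 24), color=0)
    ImageDraw.Draw(img).text((8 + dx, 6 + dy), char, fill=255, font=font)
    return np.asarray(img, dtype=float).ravel() / 255.0

chars = ["A", "B"]
X, y = [], []
for label, ch in enumerate(chars):
    for _ in range(200):                       # synthetic samples with small position jitter
        X.append(render(ch, int(rng.integers(-3, 4)), int(rng.integers(-3, 4))))
        y.append(label)

clf = LogisticRegression(max_iter=2000).fit(np.array(X), np.array(y))
print("training accuracy on synthetic characters:", clf.score(np.array(X), np.array(y)))
```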
APA, Harvard, Vancouver, ISO, and other styles
45

Celayes, Pablo Gabriel. "Recomendación de información basada en análisis de redes sociales y procesamiento de lenguaje natural." Bachelor's thesis, 2017. http://hdl.handle.net/11086/5517.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2017. This work originates in the study of Social Network Analysis techniques to improve the quality of a content recommender for corporate environments. We study the problem of content recommendation based on the preferences of a user's social environment. We build a sample dataset extracted from the social network Twitter and use it to train and evaluate binary SVM (Support Vector Machines) classification models that predict a user's retweets from those of their environment. An average prediction quality above 84% (F1) is obtained without analysing the content of the tweets. For the cases where the purely social prediction is not as good, models augmented with content features are studied, using the probabilistic topic model LDA (Latent Dirichlet Allocation).
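One way to picture the purely social model is a feature vector summarising how a user's environment reacted to a tweet (for example, how many of the accounts they follow retweeted it), fed to an SVM. The sketch below fabricates such features; the real feature set, the data collection through the Twitter API, and the evaluation protocol of the thesis are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 600

# Hypothetical per-(user, tweet) social features:
friends_who_retweeted = rng.integers(0, 30, n)        # retweets among accounts the user follows
n_friends = rng.integers(30, 300, n)
author_followers = rng.lognormal(8, 1, n)

X = np.column_stack([friends_who_retweeted / n_friends,
                     np.log1p(author_followers),
                     n_friends])
# Toy labels: the user tends to retweet when many of their friends did (plus noise).
y = (friends_who_retweeted / n_friends + 0.1 * rng.normal(size=n) > 0.25).astype(int)

f1 = cross_val_score(SVC(kernel="rbf"), X, y, cv=5, scoring="f1").mean()
print(f"cross-validated F1 on the synthetic social features: {f1:.2f}")
```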
APA, Harvard, Vancouver, ISO, and other styles
46

Mwansa, Gardner. "Exploring the development of a framework for agile methodologies to promote the adoption and use of cloud computing services in South Africa." Thesis, 2015. http://hdl.handle.net/10500/21159.

Full text
Abstract:
The emergence of cloud computing is influencing how businesses develop, re-engineer, and implement critical software applications. The cloud requires developers to elevate the importance of compliance with security policies, regulations and internal engineering standards in their software development life cycles. Cloud computing and agile development methodologies are new technologies associated with new approaches in the way computing services are provisioned and the development of quality software is enhanced. However, the adoption and use of agile methods and cloud computing by SMMEs in South Africa is seemingly constrained by a number of technical and non-technical challenges. Using Grounded Theory and the case study method, this study explored the development of a framework for agile methodologies to promote the adoption and use of cloud computing services by SMMEs in South Africa. Data was collected through a survey and in-depth interviews, and open, axial and selective coding was used to analyse the data. In tandem with its main objective, the study, besides exploring the development of the envisaged framework, also generated and made available valuable propositions and knowledge that SMMEs in South Africa using agile development methodologies can use to work better with cloud computing services in the country without compromising software quality. The findings of this study and the emerging insights around the development of the framework, which in itself also constitutes an important decision-making tool for supporting the adoption and use of cloud computing services, are a substantial contribution to knowledge and practice in the information systems field in South Africa. Information Science. D. Phil. (Information Systems)
APA, Harvard, Vancouver, ISO, and other styles
47

Haag, Karen Yanet. "Reconocimiento de entidades nombradas en texto de dominio legal." Bachelor's thesis, 2019. http://hdl.handle.net/11086/15323.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2019. This work focuses on the detection, classification and annotation of named entities (such as laws, resolutions or decrees, among others) in the InfoLEG corpus, a database containing the documents of all the laws of the Argentine Republic. In a first stage, recognition was done using patterns defined by regular expressions. Then, a machine learning model was trained and evaluated to deal with entities that were not regular, thus expanding the number of captured instances. Finally, an approach based on semantic annotation was applied to each entity in order to link it to the corresponding information source.
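The first, pattern-based stage can be illustrated with a couple of regular expressions for typical Argentine legal references (laws, decrees, resolutions). The patterns below are simplified guesses and certainly do not cover the variety of forms found in InfoLEG.

```python
import re

text = ("Conforme a la Ley 26.206 y al Decreto 1602/2009, y según la "
        "Resolución 311/2016 del Consejo Federal de Educación, se dispone...")

patterns = {
    "LEY": r"\bLey\s+N?°?\s*\d{1,2}\.?\d{3}\b",
    "DECRETO": r"\bDecreto\s+\d+/\d{4}\b",
    "RESOLUCION": r"\bResoluci[oó]n\s+\d+/\d{4}\b",
}

for label, pattern in patterns.items():
    for match in re.finditer(pattern, text, flags=re.IGNORECASE):
        print(f"{label:10s} {match.group(0)!r} at {match.span()}")
```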
APA, Harvard, Vancouver, ISO, and other styles
48

Teruel, Milagro. "Estudio de representaciones mediante co-embeddings para estudiantes y contenidos en minerı́a de datos educativos." Doctoral thesis, 2019. http://hdl.handle.net/11086/17633.

Full text
Abstract:
Thesis (Doctorate in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2019. This work is a study of the automatic generation of representations based on neural methods for applications in the area of Educational Data Mining (EDM). We propose a recurrent neural architecture to model the change in the state of students as they interact with online learning platforms. At the same time, automatic representations are generated for course elements, such as problems or lessons, avoiding the need for examples labeled with additional information, which are costly to obtain. On this basis, the architecture is modified to explicitly model the relationship between the students' representations and those of the course components, projecting both types of entities into the same latent space. In this way, the performance of the classifier is expected to improve through the direct injection of domain knowledge into the model. Both proposals are evaluated on knowledge tracing and dropout prediction in intelligent tutoring systems and massive open online courses, respectively. The joint representations of students and lessons obtain results similar to the disjoint representations, improving significantly in scenarios with little training data, partial sequences, or pronounced class imbalance.
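The central modelling idea, a recurrent summary of the student's history scored against an embedding of the next course item in a shared latent space, can be sketched in a few lines of PyTorch. The dimensions, the dot-product scoring, and the random data below are illustrative assumptions, not the architecture evaluated in the thesis.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_items, dim = 100, 32

item_emb = nn.Embedding(n_items, dim)              # co-embedding of problems/lessons
interaction_emb = nn.Embedding(2 * n_items, dim)   # (item, correct/incorrect) pairs
student_rnn = nn.GRU(dim, dim, batch_first=True)

# A toy interaction history: item ids and whether the student answered correctly.
items = torch.randint(0, n_items, (1, 10))
correct = torch.randint(0, 2, (1, 10))
history = interaction_emb(items + n_items * correct)

_, student_state = student_rnn(history)            # (1, 1, dim): current student representation

next_item = torch.tensor([7])
score = (student_state.squeeze(0) * item_emb(next_item)).sum(-1)   # dot product in the shared space
p_correct = torch.sigmoid(score)
print("predicted probability of answering item 7 correctly:", float(p_correct))
```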
APA, Harvard, Vancouver, ISO, and other styles
49

Molina, Heredia Facundo. "Optimización de modelo de Heisenberg para GPU." Bachelor's thesis, 2020. http://hdl.handle.net/11086/17136.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2020. This work's goal is to optimize a Heisenberg model simulation. Two implementations were developed in C++, one optimized to run on CPU and the other on GPU, complemented with a graphic visualization. Several factors that affect the performance of the simulation are analysed, along with how to improve performance through changes in data handling, allowing more efficient use of the available hardware and enabling larger simulations. This simulation is Orlando Billoni's research tool: when this work started, on the hardware available to him, a typical simulation took between 6 and 8 hours; it now takes 13 minutes on a GPU. This 27x speed-up allows 27 times more experiments, or larger ones.
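For readers unfamiliar with the model, a classical Heisenberg simulation amounts to Metropolis updates of three-component unit spins on a lattice with energy E = -J Σ S_i·S_j over nearest neighbours. The NumPy sketch below is a minimal CPU version of that loop with made-up parameters; it says nothing about the C++/GPU optimizations that are the actual subject of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, T, steps = 16, 1.0, 1.5, 20_000

def random_spins(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

spins = random_spins(L * L).reshape(L, L, 3)      # unit 3-vectors on an L x L lattice

def neighbour_sum(s, i, j):
    return (s[(i + 1) % L, j] + s[(i - 1) % L, j] +
            s[i, (j + 1) % L] + s[i, (j - 1) % L])

for _ in range(steps):                            # single-spin Metropolis updates
    i, j = rng.integers(L), rng.integers(L)
    new_spin = random_spins(1)[0]
    d_e = -J * np.dot(new_spin - spins[i, j], neighbour_sum(spins, i, j))
    if d_e <= 0 or rng.random() < np.exp(-d_e / T):
        spins[i, j] = new_spin

m = np.linalg.norm(spins.mean(axis=(0, 1)))
print(f"magnetization per spin after {steps} updates: {m:.3f}")
```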
APA, Harvard, Vancouver, ISO, and other styles
50

Peretti, Nicolás Jesús. "Aprendizaje multimodal aplicado al etiquetado de imágenes." Bachelor's thesis, 2019. http://hdl.handle.net/11086/19982.

Full text
Abstract:
Thesis (Lic. in Computer Science)--Universidad Nacional de Córdoba, Facultad de Matemática, Astronomía, Física y Computación, 2019. Multimodal learning studies machine learning problems on data that combines information of different nature. An example of a multimodal task is image tagging, where an image must be tagged with terms (words) that describe its content. In this work we study models that tag images through functions that produce a ranking of candidate tags for each given image. This ranking is obtained from a score computed by a bilinear function that combines image representations with representations of the textual tags.
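The scoring function described, a bilinear form combining an image representation with tag embeddings, reduces to score(image, tag) = x^T W t, after which tags are sorted by score. The NumPy sketch below uses random embeddings and an untrained W purely to show the ranking mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_tag = 128, 64
tags = ["dog", "beach", "car", "sunset", "person"]

x = rng.normal(size=d_img)                   # image representation (e.g., CNN features)
T = rng.normal(size=(len(tags), d_tag))      # one embedding per tag
W = rng.normal(size=(d_img, d_tag)) * 0.01   # bilinear weights, learned in practice

scores = T @ (W.T @ x)                       # score(tag) = x^T W t, for every tag at once
ranking = [tags[i] for i in np.argsort(-scores)]
print("tag ranking for this image:", ranking)
```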
APA, Harvard, Vancouver, ISO, and other styles