Dissertations / Theses on the topic 'BigData'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'BigData.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Яковець, Р. І., and Ігор Віталійович Пономаренко. "Основні тенденції в BigData." Thesis, КНУТД, 2016. https://er.knutd.edu.ua/handle/123456789/4083.
Vitali, Federico. "Map-Matching su Piattaforma BigData." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18089/.
Urssi, Nelson José. "Metacidade: projeto, bigdata e urbanidade." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/16/16134/tde-01062017-154915/.
The technologies of information and communication present in every instance of our daily lives modify the way we live and think. Urban computing, ubiquitous, locative, multimedia and interconnected, generates a large amount of data, resulting in an abundance of information on almost everything in our world. Cities permeated by personal, vehicular and environmental sensors acquire sentient characteristics. A citizen-sensitive city can work with individualized day-to-day strategies. The thesis discusses the role of cities in the complexity of our lives, the interrelationship of hardware, symbolic models and patterns of use (applications), and the design challenges posed by this global hybrid information ecosystem. It presents netnographic research, through case studies, urban explorations and interviews, in which our present contemporary condition can be observed. The hypothesis verified in the thesis is the city updated in real time: an urban informational ecosystem of new and infinite possibilities of interfaces and interactions.
Hashem, Hadi. "Modélisation intégratrice du traitement BigData." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLL005/document.
Nowadays, multiple actors of Internet technology are producing very large amounts of data. Sensors, social media and e-commerce all generate real-time, ever-extending information based on Gartner's 3 Vs: Volume, Velocity and Variety. In order to exploit these data efficiently, it is important to keep track of the dynamic aspect of their chronological evolution by means of two main approaches: polymorphism, a dynamic model able to support type changes every second while still processing successfully, and the support of data volatility by means of an intelligent model taking into consideration key data that are salient and valuable at a specific moment, without processing all volumes of historical and up-to-date data. The primary goal of this study is to establish, based on these approaches, an integrative vision of the data life cycle set on three steps: (1) data synthesis, by selecting the key values of micro-data acquired by different data-source operators; (2) data fusion, by sorting and duplicating the selected key values based on a de-normalization aspect in order to process data faster; and (3) data transformation into a specific format of map of maps of maps, via Hadoop in the standard MapReduce process, in order to define the related graph in the applicative layer. In addition, this study is supported by a software prototype using the modeling tools described above, as a toolbox comparable to automatic programming software, allowing the creation of a customized BigData processing chain.
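The "map of maps of maps" format this abstract mentions is easy to picture with a small sketch. The Python fragment below is our illustration only (field names such as source, day and metric are invented, not taken from the thesis): a MapReduce-style map phase emits selected key-values, and a reduce phase folds them into a three-level nested map.

```python
# A minimal sketch (not the author's implementation) of grouping selected
# key-values into a three-level nested dictionary, MapReduce style.
from collections import defaultdict

records = [
    {"source": "sensor", "day": "2016-01-01", "metric": "temp", "value": 21.5},
    {"source": "sensor", "day": "2016-01-01", "metric": "humid", "value": 0.4},
    {"source": "social", "day": "2016-01-02", "metric": "posts", "value": 132},
]

def map_phase(record):
    # Emit a composite key carrying only the selected "key-values".
    yield (record["source"], record["day"], record["metric"]), record["value"]

def reduce_phase(pairs):
    # Fold the flat key/value pairs into a map of maps of maps.
    nested = defaultdict(lambda: defaultdict(dict))
    for (source, day, metric), value in pairs:
        nested[source][day][metric] = value
    return nested

pairs = (kv for rec in records for kv in map_phase(rec))
print(reduce_phase(pairs)["sensor"]["2016-01-01"]["temp"])  # 21.5
```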
Оверчук, Олексій Сергійович. "Методи кодування інформаційних потоків BigData фінансового ринку." Master's thesis, КПІ ім. Ігоря Сікорського, 2019. https://ela.kpi.ua/handle/123456789/32122.
Master's thesis: 100 p., 17 fig., 14 tabl., 3 suppl., 20 sources. Object of study: methods of encoding BigData information flows of the financial market. Purpose of the work: research of methods and modern algorithms for compressing large volumes of data and for reliably preserving data, for use in system diagnostics. Research methods: statistical compression methods and diagnostic graphs. Novelty of the work: the use of multi-compressor data compression techniques and structural decompositions of Big Data for building diagnostic diagrams. The study analyzes modern data compression methods and algorithms and develops comparisons of various multi-compressor compression methods; the main comparisons obtained concern the encoding of regularly structured Big Data for use in system diagnostic methods. The results of the master's thesis are published in two publications and were used in the research project MMSA-1/2018. The work recommends examining additional encoding methods and exploring other ways of securing information flows.
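As a rough illustration of the kind of multi-compressor comparison such a study involves, the following Python sketch encodes one synthetic market-data stream with three standard-library compressors and compares the ratios; the sample data and the choice of codecs are our assumptions, not the thesis's.

```python
# Compare general-purpose compressors on a made-up quote stream.
import bz2, lzma, zlib

stream = ("2019-10-01;UAH/USD;24.81;1000\n" * 5000).encode()

for name, codec in [("zlib", zlib), ("bz2", bz2), ("lzma", lzma)]:
    compressed = codec.compress(stream)
    print(f"{name}: {len(stream)} -> {len(compressed)} bytes "
          f"(ratio {len(stream) / len(compressed):.1f}x)")
```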
Прасол, І. Г. "Застосування технологій обробки великих даних (BigData) в маркетингу." Thesis, Київський національний універститет технологій та дизайну, 2017. https://er.knutd.edu.ua/handle/123456789/10404.
Díaz Huiza César, and Balcázar César Quezada. "Charla sobre aplicaciones de Bigdata en el mercado." Universidad Peruana de Ciencias Aplicadas (UPC), 2019. http://hdl.handle.net/10757/627937.
Gault, Sylvain. "Improving MapReduce Performance on Clusters." Thesis, Lyon, École normale supérieure, 2015. http://www.theses.fr/2015ENSL0985/document.
Nowadays, more and more scientific fields rely on data mining to produce new results. These raw data are produced at an increasing rate by several tools, like DNA sequencers in biology, the Large Hadron Collider (LHC) in physics, which produced 25 petabytes per year as of 2012, or the Large Synoptic Survey Telescope (LSST), which should produce 30 petabytes of data per night. High-resolution scanners in medical imaging and social networks also produce huge amounts of data. This data deluge raises several challenges in terms of storage and computer processing. The Google company proposed in 2004 to use the MapReduce model in order to distribute the computation across several computers. This thesis focuses mainly on improving the performance of a MapReduce environment. In order to easily replace the software parts needed to improve performance, designing a modular and adaptable MapReduce environment is necessary; this is why a component-based approach is studied in order to design such a programming environment. In order to study the performance of a MapReduce application, modeling the platform, the application and their performance is mandatory. These models should be precise enough for the algorithms using them to produce meaningful results, but also simple enough to be analyzed. A state of the art of the existing models is given, and a new model adapted to the needs is defined. In order to optimize a MapReduce environment, the first approach studied is a global optimization, which results in a computation time reduced by up to 47%. The second approach focuses on the shuffle phase of MapReduce, when all the nodes may send some data to every other node. Several algorithms are defined and studied for the case where the network is the bottleneck of the data transfers. These algorithms are tested on the Grid'5000 experiment platform and usually show a behavior close to the lower bound, while the trivial approach is far from it.
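The shuffle-phase setting described above, where every node may send data to every other node over a bottleneck network, admits a simple lower-bound model. The sketch below is our toy formulation, not the thesis's exact model: with fixed-bandwidth links, no schedule can finish faster than the most loaded network interface.

```python
# A toy lower bound for an all-to-all shuffle over fixed-bandwidth links.
def shuffle_lower_bound(data_out_gb, bandwidth_gbps=1.0):
    """data_out_gb[i][j]: GB node i must send to node j (i != j)."""
    n = len(data_out_gb)
    out_load = [sum(row) for row in data_out_gb]
    in_load = [sum(data_out_gb[i][j] for i in range(n)) for j in range(n)]
    # Each node's NIC serializes its own traffic, so the busiest NIC
    # gives a lower bound on the total transfer time.
    return max(out_load + in_load) * 8 / bandwidth_gbps  # seconds

uniform = [[0 if i == j else 1.0 for j in range(4)] for i in range(4)]
print(f"{shuffle_lower_bound(uniform):.0f} s")  # 3 GB per NIC -> 24 s
```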
Melkes, Miloslav. "BigData řešení pro zpracování rozsáhlých dat ze síťových toků." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-236039.
Охотний, С. М. "Особливості обробки даних великих об’ємів (BigData) з використанням нереляційних баз даних." Thesis, ЦНТУ, 2017. http://dspace.kntu.kr.ua/jspui/handle/123456789/7377.
Graux, Damien. "On the efficient distributed evaluation of SPARQL queries." Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM058/document.
The Semantic Web, standardized by the World Wide Web Consortium, aims at providing a common framework that allows data to be shared and analyzed across applications. It thereby introduced the Resource Description Framework (RDF) as a common base for data, together with its query language SPARQL. Because of the increasing amounts of RDF data available, dataset distribution across clusters is poised to become a standard storage method. As a consequence, efficient and distributed SPARQL evaluators are needed. To tackle these needs, we first benchmark several state-of-the-art distributed SPARQL evaluators while adapting the considered set of metrics to a distributed context, such as network traffic. Then, an analysis driven by typical use cases leads us to define new development areas in the field of distributed SPARQL evaluation. On the basis of these fresh perspectives, we design several efficient distributed SPARQL evaluators that fit each of these use cases and whose performance is validated against the already benchmarked evaluators. For instance, our distributed SPARQL evaluator named SPARQLGX offers efficient time performance while being resilient to the loss of nodes.
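The core of evaluating a SPARQL basic graph pattern distributively is translating triple patterns into joins over a triple table (this is, for instance, the strategy SPARQLGX compiles into Spark operations). A minimal single-machine Python sketch of that translation, with invented data, is shown below.

```python
# Evaluate: SELECT ?x ?z WHERE { ?x knows ?y . ?y knows ?z }
# Plain Python lists stand in for distributed triple tables here.
triples = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "age", "30"),
]

# Filter each triple pattern, then join on the shared variable ?y.
knows = [(s, o) for s, p, o in triples if p == "knows"]
answers = [(x, z) for x, y1 in knows for y2, z in knows if y1 == y2]
print(answers)  # [('alice', 'carol')]
```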
FRANÇA, Arilene Santos de. "Otimização do processo de aprendizagem da estrutura gráfica de Redes Bayesianas em BigData." Universidade Federal do Pará, 2014. http://repositorio.ufpa.br/jspui/handle/2011/5608.
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Automation in data management and analysis has been a crucial factor for companies that need efficient solutions in an ever more competitive corporate world. The explosion in the volume of information, which has kept increasing in recent years, has demanded more and more commitment to strategies for managing and, especially, extracting valuable strategic information through data mining algorithms, which commonly need to perform exhaustive searches over the database in order to obtain the statistics that solve or optimize the parameters of the chosen knowledge-extraction model; this process requires intensive computing and frequent access to the database. Given their effectiveness in handling uncertainty, Bayesian networks have been widely used for this process; however, as the amount of data (records and/or attributes) increases, it becomes ever more costly and time-consuming to extract relevant information from a knowledge base. The goal of this work is to propose a new approach to optimizing Bayesian network structure learning in the context of BigData, by using the MapReduce process, in order to improve processing time. To that end, a new methodology was produced that includes the creation of an intermediate database containing all the probabilities necessary for calculating the network structure. The analyses presented in this work show that combining the proposed methodology with the MapReduce process is a good alternative for solving the scalability problem of the frequency-search steps of the K2 algorithm and, as a result, for reducing the response time of network generation.
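The "intermediate database" idea in this abstract can be sketched briefly: one counting pass precomputes the frequency statistics that the K2 scoring function would otherwise recompute by rescanning the data. The Python fragment below is our schematic illustration (variable names and layout are assumptions), with the map phase emitting per-row counts and the reduce side summing them.

```python
# One MapReduce-style counting pass over the data; the resulting counts
# play the role of the thesis's intermediate database of statistics.
from collections import Counter
from itertools import combinations

data = [  # each row assigns a value to variables A, B, C
    {"A": 0, "B": 1, "C": 1},
    {"A": 0, "B": 1, "C": 0},
    {"A": 1, "B": 0, "C": 1},
]

def map_phase(row):
    # Emit one count per (variable, value, variable, value) combination.
    for u, v in combinations(sorted(row), 2):
        yield ((u, row[u], v, row[v]), 1)

counts = Counter()
for row in data:
    for key, c in map_phase(row):
        counts[key] += c  # the "reduce" side: sum partial counts

print(counts[("A", 0, "B", 1)])  # 2
```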
Gallegati, Mattia. "Generazione di isocrone ed elaborazione di indicatori statistici con strumenti NoSql in ambiente BigData." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.
LIMA, João Gabriel Rodrigues de Oliveira. "Stormsom: clusterização em tempo-real de fluxos de dados distribuídos no contexto de BigData." Universidade Federal do Pará, 2015. http://repositorio.ufpa.br/jspui/handle/2011/7487.
The number of scenarios and applications that require real-time processing and answers, and that rely on statistical and data mining models to better support decision making, keeps growing. The tools available on the market lack more refined computational processes capable of extracting patterns more efficiently from large volumes of data. Moreover, in several scenarios there is a strong need for results to be provided in real time: as soon as the process starts, an immediate answer must already be produced. Based on these identified needs, in this work we propose an original process, called StormSOM, which consists of a processing model, based on a distributed topology, for clustering large, continuous and unbounded data streams, using the artificial neural networks known as self-organizing maps, and producing results in real time. The experiments were carried out in a cloud computing environment, and the results confirm the efficiency of the proposal by showing that the neural model used can generate real-time answers for Big Data processing.
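The building block of StormSOM-style stream clustering is the online update of a self-organizing map: each arriving point pulls its best-matching unit, and that unit's grid neighbors, toward it. The compact Python sketch below illustrates the update rule only; the grid size, decay schedules and learning rate are illustrative choices, and the real system distributes this work over a Storm topology.

```python
# Online self-organizing-map update over a stream of 2-D points.
import math, random

random.seed(0)
grid = [[[random.random(), random.random()] for _ in range(5)] for _ in range(5)]

def train_on(point, t, t_max, lr0=0.5, radius0=2.0):
    lr = lr0 * (1 - t / t_max)                 # decaying learning rate
    radius = radius0 * (1 - t / t_max) + 1e-9  # decaying neighborhood
    # Best-matching unit: the grid cell closest to the incoming point.
    bi, bj = min(((i, j) for i in range(5) for j in range(5)),
                 key=lambda ij: sum((w - x) ** 2
                                    for w, x in zip(grid[ij[0]][ij[1]], point)))
    for i in range(5):
        for j in range(5):
            d2 = (i - bi) ** 2 + (j - bj) ** 2
            h = math.exp(-d2 / (2 * radius ** 2))  # neighborhood kernel
            grid[i][j] = [w + lr * h * (x - w) for w, x in zip(grid[i][j], point)]

stream = [[random.random(), random.random()] for _ in range(200)]
for t, p in enumerate(stream):
    train_on(p, t, len(stream))
```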
Chakraborty, Suryadip. "Data Aggregation in Healthcare Applications and BIGDATA set in a FOG based Cloud System." University of Cincinnati / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1471346052.
Malka, Golan. "Thinknovation 2019: The Cyber as the new battlefield related to AI, BigData and Machine Learning Capabilities." Universidad Peruana de Ciencias Aplicadas (UPC), 2019. http://hdl.handle.net/10757/653843.
Berekmeri, Mihaly. "La modélisation et le contrôle des services BigData : application à la performance et la fiabilité de MapReduce." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT126/document.
The amount of raw data produced by everything from our mobile phones, tablets and computers to our smart watches brings novel challenges in data storage and analysis. Many solutions have arisen in industry to treat these large quantities of raw data, the most popular being the MapReduce framework. However, while the deployment complexity of such computing systems is steadily increasing, continuous availability and fast response times are still the expected norm. Furthermore, with the advent of virtualization and cloud solutions, the environments where these systems need to run are becoming more and more dynamic. Therefore, ensuring the performance and dependability constraints of a MapReduce service still poses significant challenges. In this thesis we address the problem of guaranteeing the performance and availability of MapReduce-based cloud services, taking an approach based on control theory. We develop the first dynamic models of a MapReduce service running a concurrent workload. Furthermore, we develop several control laws to ensure different quality-of-service objectives. First, classical feedback and feedforward controllers are developed to guarantee service performance. To further adapt our controllers to the cloud, for instance to minimize the number of reconfigurations and their cost, a novel event-based control architecture is introduced for performance management. Finally, we develop the optimal control architecture MR-Ctrl, which is the first solution to provide guarantees in terms of both performance and dependability for MapReduce systems, while keeping cost at a minimum. All the modeling and control approaches are evaluated both in simulation and experimentally using MRBS, a comprehensive benchmark suite for evaluating the performance and dependability of MapReduce systems. Validation experiments were run on a real 60-node Hadoop MapReduce cluster, running a data-intensive Business Intelligence workload. Our experiments show that the proposed techniques can successfully guarantee performance and dependability constraints.
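The control-theoretic approach in this abstract can be illustrated with a toy feedback loop: measure latency, compare it to a target, and resize the cluster accordingly. The Python sketch below uses a made-up plant model and hand-picked proportional-integral gains; it is a schematic stand-in for the thesis's MapReduce models, not MR-Ctrl itself.

```python
# A toy PI controller that resizes a cluster to hold a latency target.
def plant(nodes, load):  # fictitious service-time model
    return load / max(nodes, 1)

target, nodes, integral = 2.0, 5, 0.0
for step, load in enumerate([20, 20, 40, 40, 40, 10]):
    latency = plant(nodes, load)
    error = latency - target  # positive error -> too slow, add nodes
    integral += error
    nodes = max(1, round(nodes + 0.5 * error + 0.1 * integral))
    print(f"step {step}: latency={latency:.1f}s nodes->{nodes}")
```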
Fiorilla, Salvatore. "serie temporali iot in cassandra: modellazione e valutazione sperimentale." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17483/.
Morabito, Andrea. "Utilizzo di Scala e Spark per l'esecuzione di programmi Data-Intensive in ambiente cloud." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/14843/.
Setterquist, Erik. "The effect of quality metrics on the user watching behaviour in media content broadcast." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-304514.
Boychuk, Maksym. "Zpracování a vizualizace senzorových dat ve vojenském prostředí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2016. http://www.nusl.cz/ntk/nusl-255472.
Cavallo, Marco. "H2F: a hierarchical Hadoop framework to process Big Data in geo-distributed contexts." Doctoral thesis, Università di Catania, 2018. http://hdl.handle.net/10761/3801.
Danesh, Sabri. "BIG DATA : From hype to reality." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-37493.
Kola, Marin. "Progettazione ed implementazione di un database per la gestione della mappa della connettivita urbana utilizzando tecnologie nosql." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9696/.
Tahiri, Ardit. "Online Stream Processing di Big Data su Apache Storm per Applicazioni di Instant Coupon." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10311/.
Maglione, Angelo. "Supporto ad Applicazioni di Web Reputation basate su Piattaforma Apache Storm." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10393/.
Pennella, Francesco. "Analisi e sperimentazione della piattaforma Cloud Dataflow." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12360/.
Righi, Massimo. "apache cassandra: studio ed analisi di prestazioni." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16713/.
Urbinelli, Francesco. "Benchmarking di Flussi Massivi di Dati." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020.
Addimando, Alessio. "Progettazione di un intrusion detection system su piattaforma big data." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16755/.
Villalobos Luengo, César Alexis. "Análisis de archivos Logs semi-estructurados de ambientes Web usando tecnologías Big-Data." Tesis, Universidad de Chile, 2016. http://repositorio.uchile.cl/handle/2250/140417.
Currently, the volume of data that companies generate is much larger than what they can actually process, so there is a vast universe of information implicit in these data that is lost. This thesis project implemented Big Data technologies capable of extracting information from the large volumes of data that existed in the organization and were not being used, so as to turn them into value for the business. The company chosen for this project is dedicated to the electronic payment of social-security contributions over the internet. Its function is to be the channel through which the contributions of the country's workers are collected. Each of these contributions is reported, rendered and published to the corresponding social-security institutions (Mutuales, Cajas de Compensación, AFPs, etc.). To carry out its function, the organization has built, over its 15 years, a large high-performance infrastructure oriented to web services. Currently this service architecture generates a large quantity of log files that record the events of the various applications and web portals. The log files are characterized by their large size and, at the same time, by not having a rigorously defined structure. This has meant that the organization does not process these data efficiently, since the relational database technologies it currently owns do not allow it. Consequently, this thesis project sought to design, develop, implement and validate methods capable of efficiently processing these log files with the objective of answering business questions that deliver value to the company. The Big Data technology used was Cloudera, which meets the requirements the organization imposes, for example: having support in the country, being within the year's budget, etc. Likewise, Cloudera is a leader in the market for open-source Big Data solutions, which provides assurance and confidence of working with a quality tool. The methods developed with this technology are based on the MapReduce processing framework over the HDFS distributed file system. This thesis project proved that the implemented methods have the capacity to scale horizontally as processing nodes are added to the architecture, so that the organization can be confident that in the future, when the log files reach a greater volume or a higher generation rate, the architecture will continue to deliver the same or better processing performance; everything will depend on the number of nodes it decides to add.
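The kind of MapReduce job the abstract describes, extracting structure from semi-structured log lines and aggregating counts, can be sketched in a few lines. In the Python fragment below, the log format, field names and services are invented for illustration; the map phase parses a line and emits a key with count 1, and the reduce side sums the counts per key.

```python
# Count requests per (path, status) from semi-structured log lines.
import re
from collections import Counter

logs = [
    '2016-03-01 12:00:01 POST /api/cotizaciones status=200 time=120ms',
    '2016-03-01 12:00:02 GET /api/afiliados status=404 time=15ms',
    '2016-03-01 12:00:03 POST /api/cotizaciones status=200 time=98ms',
]

LINE = re.compile(r'\S+ \S+ (?P<method>\S+) (?P<path>\S+) status=(?P<status>\d+)')

def mapper(line):
    m = LINE.match(line)
    if m:  # tolerate malformed lines by emitting nothing
        yield (m["path"], m["status"]), 1

counts = Counter()  # the reducer: sum the 1s per key
for line in logs:
    for key, one in mapper(line):
        counts[key] += one

print(counts[("/api/cotizaciones", "200")])  # 2
```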
Астістова, Т. І., and М. О. Потапенко. "Розробка програмного забезпечення e-commerce системи з розподіленим навантаженням." Thesis, Київський національний університет технологій та дизайну, 2020. https://er.knutd.edu.ua/handle/123456789/16506.
Di Meo, Giovanni. "Analisi e Benchmarking del Sistema HIVE." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amslaurea.unibo.it/9186/.
Addimando, Alessio. "Progettazione e prototipazione di un sistema di Data Stream Processing basato su Apache Storm." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/10977/.
Berni, Mila. "Inclusione di Apache Samza e Kafka nel framework RAM3S." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.
Monrat, Ahmed Afif. "A BELIEF RULE BASED FLOOD RISK ASSESSMENT EXPERT SYSTEM USING REAL TIME SENSOR DATA STREAMING." Thesis, Luleå tekniska universitet, Datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-71081.
Camilli, M. "Coping with the State Explosion Problem in Formal Methods: Advanced Abstraction Techniques and Big Data Approaches." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/264140.
Zanotti, Andrea. "Supporto a query geografiche efficienti su dati spaziali in ambiente Apache Spark." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.
Mendoza Sánchez, Jhenner Emiliano, and Leslly Paola Eumelia Sánchez Monterola. "Gestión de la innovación abierta y los derechos de propiedad intelectual." Universidad Peruana de Ciencias Aplicadas, 2019. http://hdl.handle.net/10757/648722.
Professor Henry Chesbrough coined the term "Open Innovation" (OI) at the beginning of this millennium. He states that "Open innovation is a paradigm that starts from the assumption that companies can and should use external ideas, as well as internal and external ways of accessing the market, in order to develop their business" (Chesbrough, 2011, p. 126). The foundations of OI and intellectual property rights (IPR) play a fundamental role in different areas. Bican, Guderian & Ringbeck (2017) state that there can be a deactivating effect on innovation, above all in developing countries, because there is a gap in the promotion of R&D&I by the state, as a promoter, together with universities. In addition, "Companies must organize their innovation processes to be more open to external ideas and knowledge" (Chesbrough, 2011). In Peru, as in other Latin American countries, there is a need to develop policies aimed at fostering open innovation. According to ECLAC (2018), the main reason for the disconnection between citizens and the state is the inability of public institutions to meet the growing and changing demands of society, along with other socio-economic challenges and the need to rethink institutions to better respond to society's demands. In this paper, we study the possible success factors of OI and IPR management, the influence of ICTs, and the generation of a hyper-collaborative ecosystem, in order to create value and promote greater well-being in the population.
D'ERRICO, MARCO. "A network approach for opinion dynamics and price formation." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2013. http://hdl.handle.net/10281/49777.
La Ferrara, Massimiliano. "Elaborazione di Big Data: un’applicazione dello Speed Layer di Lambda Architecture." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016.
Ouyang, Hua. "Optimal stochastic and distributed algorithms for machine learning." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49091.
Allot, Alexis. "MyGeneFriends : vers un nouveau rapport entre chercheurs et mégadonnées." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAJ058/document.
In recent years, biology has undergone a profound evolution, mainly due to high-throughput technologies and the rise of personal genomics. The resulting constant and massive increase of biological data offers unprecedented opportunities to decipher the function and evolution of genes and genomes at different scales, and their roles in human diseases. My thesis addressed the relationship between researchers and biological information, and I contributed to (OrthoInspector) or created (Parsec, MyGeneFriends) systems allowing researchers to access, analyze, visualize, filter and annotate in real time the enormous quantity of data available in the post-genomic era. MyGeneFriends is a first step in an exciting new direction, where researchers no longer search for information; instead, pertinent information is brought to researchers in a suitable form, allowing personalized and efficient access to large amounts of information, visualization of this information, and its integration in networks.
Ramanayaka, Mudiyanselage Asanga. "Analyzing vertical crustal deformation induced by hydrological loadings in the US using integrated Hadoop/GIS framework." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1525431761678148.
Chen, Peinan. "The BigDawg monitoring framework." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105942.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 44).
In this thesis, I designed and implemented a monitoring framework for the BigDawg federated database system, which maintains performance information on benchmark queries. As environmental conditions change, the monitoring framework updates existing performance information to match current conditions. Using this information, the monitoring system can determine the optimal query execution plan for similar incoming queries. A series of test queries was run to assess whether the system correctly determines the optimal plans for such queries.
by Peinan Chen.
M. Eng.
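The monitoring idea described in the abstract above, keeping measured runtimes of benchmark queries per engine and reusing them for similar incoming queries, can be pictured with a small sketch. Everything in the Python fragment below (the queries, timings and word-overlap similarity) is invented for illustration and is not the thesis's actual design.

```python
# Pick the engine that was fastest for the most similar benchmark query.
benchmarks = {
    "SELECT avg(hr) FROM vitals": {"postgres": 3.1, "scidb": 0.9},
    "SELECT name FROM patients":  {"postgres": 0.2, "scidb": 1.4},
}

def best_plan(query):
    # Stand-in similarity: shared-word overlap with known benchmark queries.
    def similarity(known):
        return len(set(query.split()) & set(known.split()))
    nearest = max(benchmarks, key=similarity)
    timings = benchmarks[nearest]
    return min(timings, key=timings.get)

print(best_plan("SELECT avg(hr) FROM vitals WHERE age > 60"))  # scidb
```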
Nguyen, Hung The. "Big Networks: Analysis and Optimal Control." VCU Scholars Compass, 2018. https://scholarscompass.vcu.edu/etd/5514.
CHIESA, GIACOMO. "METRO (Monitoring Energy and Technological Real time data for Optimization) innovative responsive conception for cityfutures." Doctoral thesis, Politecnico di Torino, 2014. http://hdl.handle.net/11583/2560136.
Benkő, Krisztián. "Zpracování velkých dat z rozsáhlých IoT sítí." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403820.
Yu, Katherine (Katherine X.). "Database engine integration and performance analysis of the BigDAWG polystore system." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113455.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 55-56).
The BigDAWG polystore database system aims to address workloads dealing with large, heterogeneous datasets. The need for such a system is motivated by an increase in Big Data applications dealing with disparate types of data, from large-scale analytics to real-time data streams to text-based records, each suited to different storage engines. These applications often perform cross-engine queries on correlated data, resulting in complex query planning, data migration, and execution. One such application is a medical application built by the Intel Science and Technology Center (ISTC) on data collected from an intensive care unit (ICU). This thesis presents work done to add support for two commonly used database engines, Vertica and MySQL, to the BigDAWG system, as well as results and analysis from a performance evaluation of the system using the TPC-H benchmark.
by Katherine Yu.
M. Eng.