
Dissertations / Theses on the topic 'Hadoop framework'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 dissertations / theses for your research on the topic 'Hadoop framework.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Raja, Anitha. "A Coordination Framework for Deploying Hadoop MapReduce Jobs on Hadoop Cluster." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-196951.

Full text
Abstract:
Apache Hadoop is an open source framework that delivers reliable, scalable, and distributed computing. Hadoop services are provided for distributed data storage, data processing, data access, and security. MapReduce is the heart of the Hadoop framework and was designed to process vast amounts of data distributed over a large number of nodes. MapReduce has been used extensively to process structured and unstructured data in diverse fields such as e-commerce, web search, social networks, and scientific computation. Understanding the characteristics of Hadoop MapReduce workloads is the key to ach
APA, Harvard, Vancouver, ISO, and other styles
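The map/reduce split this abstract describes can be illustrated with a tiny in-memory word-count job. The sketch below is plain Python showing the programming model only, not Hadoop's actual Java API; all function names are invented for illustration.

```python
from collections import defaultdict

# Minimal in-memory sketch of the MapReduce model: map emits (key, value)
# pairs, a shuffle groups them by key, and reduce aggregates each group.

def map_phase(record):
    # Emit (word, 1) for every word in one input record (a line of text).
    for word in record.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group all emitted values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate the grouped values for one key.
    return (key, sum(values))

def run_job(records):
    pairs = [kv for r in records for kv in map_phase(r)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

counts = run_job(["hadoop mapreduce", "hadoop hdfs"])
# counts == {"hadoop": 2, "mapreduce": 1, "hdfs": 1}
```

In real Hadoop the shuffle is performed by the framework across nodes; only the map and reduce functions are user code.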
2

Capitão, Micael José Pedrosa. "Mediator framework for inserting data into hadoop." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14697.

Full text
Abstract:
Mestrado em Engenharia de Computadores e Telemática. Data has always been one of the most valuable resources for organizations. From data we can extract information and, with enough information on a subject, we can build knowledge. However, that data must first be stored for later processing. In recent decades we have witnessed what was called the "information explosion". With the advent of new technologies, the volume, velocity and variety of data have increased exponentially, becoming what is known today as big data. Telecommunications operators gather, using network moni
APA, Harvard, Vancouver, ISO, and other styles
3

Donepudi, Harinivesh. "An Apache Hadoop Framework for Large-Scale Peptide Identification." TopSCHOLAR®, 2015. http://digitalcommons.wku.edu/theses/1527.

Full text
Abstract:
Peptide identification is an essential step in protein identification, and the Peptide Spectrum Match (PSM) data set is huge, making it time-consuming to process on a single machine. In a typical run of the peptide identification method, PSMs are ranked by a cross correlation, a statistical score, or a likelihood that the match between the experimental and hypothetical spectra is correct and unique. This process takes a long time to execute, and there is a demand for increased performance to handle large peptide data sets. Development of distributed frameworks is needed to reduce the processing t
APA, Harvard, Vancouver, ISO, and other styles
4

Bock, Matthew. "A Framework for Hadoop Based Digital Libraries of Tweets." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78351.

Full text
Abstract:
The Digital Library Research Laboratory (DLRL) has collected over 1.5 billion tweets for the Integrated Digital Event Archiving and Library (IDEAL) and Global Event Trend Archive Research (GETAR) projects. Researchers across varying disciplines have an interest in leveraging DLRL's collections of tweets for their own analyses. However, due to the steep learning curve involved with the required tools (Spark, Scala, HBase, etc.), simply converting the Twitter data into a workable format can be a cumbersome task in itself. This prompted the effort to build a framework that will help in developing
APA, Harvard, Vancouver, ISO, and other styles
5

Bhatt, Parth. "A hadoop based framework for analyzing intrusion activities of advanced persistent threats." Instituto Tecnológico de Aeronáutica, 2013. http://www.bd.bibl.ita.br/tde_busca/arquivo.php?codArquivo=2831.

Full text
Abstract:
Intruders often remain persistent and stealthy in order to regularly exfiltrate the continuously evolving critical information of their target organization. This compels them to rapidly discover new and advanced techniques for exploiting the target environment in order to bypass its security mechanisms. Such adversaries are known as Advanced Persistent Threats (APT). APTs heavily use their target system's unknown vulnerabilities. Therefore, even with highly monitored networks, defenders are able to detect their footprints only in later phases of the intrusion. Moreover, highly monitoring t
APA, Harvard, Vancouver, ISO, and other styles
6

Cavallo, Marco. "H2F: a hierarchical Hadoop framework to process Big Data in geo-distributed contexts." Doctoral thesis, Università di Catania, 2018. http://hdl.handle.net/10761/3801.

Full text
Abstract:
The wide diffusion of technologies has led to the generation of enormous quantities of data, or Big Data, which must be collected, stored and processed with new techniques in order to best produce value. Distributed computing frameworks such as Hadoop, based on the MapReduce paradigm, have been used to process such quantities of data by exploiting the computing power of many cluster nodes. Unfortunately, in many big data applications, the data to be processed reside in several heterogeneous computational data centers distributed across different locations. In this context the p
APA, Harvard, Vancouver, ISO, and other styles
7

Ramanayaka, Mudiyanselage Asanga. "Analyzing vertical crustal deformation induced by hydrological loadings in the US using integrated Hadoop/GIS framework." Bowling Green State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1525431761678148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lakkimsetti, Praveen Kumar. "A framework for automatic optimization of MapReduce programs based on job parameter configurations." Kansas State University, 2011. http://hdl.handle.net/2097/12011.

Full text
Abstract:
Master of Science, Department of Computing and Information Sciences, Mitchell L. Neilsen. Recently, cost-effective and timely processing of large datasets has been playing an important role in the success of many enterprises and the scientific computing community. Two promising trends ensure that applications will be able to deal with ever increasing data volumes: first, the emergence of cloud computing, which provides transparent access to a large number of processing, storage and networking resources; and second, the development of the MapReduce programming model, which provides a high
APA, Harvard, Vancouver, ISO, and other styles
9

Silva, Balocchi Erika Fernanda. "Análisis y comparación entre el motor de bases de datos orientado a columnas Infobright y el framework de aplicaciones distribuidas Hadoop en escenarios de uso de bases de datos analíticas." Tesis, Universidad de Chile, 2014. http://www.repositorio.uchile.cl/handle/2250/116665.

Full text
Abstract:
Ingeniera Civil en Computación. Business Intelligence is the ability to transform data into information, and information into knowledge, so as to optimize business decision-making. Due to the exponential increase in the amount of available data in recent years and to its complexity, traditional database and business intelligence tools may not be able to cope, posing numerous risks for companies. The objective of this thesis was to analyze the use of the distributed application framework Hadoop in comparison with the
APA, Harvard, Vancouver, ISO, and other styles
10

Alves, Francisco Marco Morais. "Framework for location based system sustained by mobile phone users." Master's thesis, Universidade de Aveiro, 2017. http://hdl.handle.net/10773/23817.

Full text
Abstract:
mestrado em Engenharia de Computadores e Telemática. We live in the era of information and of the Internet of Things, so never before has information had so much value; at the same time, there has never been such a high level of information exchange. With this quantity of data, and with the substantial increase in computational power, we have witnessed an explosion of tools for processing these data in real time. A new paradigm has also emerged, from the fact that much of this information carries metadata from which additional knowledge can be extracted when enriched. In the case of the operators of
APA, Harvard, Vancouver, ISO, and other styles
11

Venumuddala, Ramu Reddy. "Distributed Frameworks Towards Building an Open Data Architecture." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc801911/.

Full text
Abstract:
Data is everywhere. Current technological advancements in digital and social media, together with the ease with which different application services can interact with a variety of systems, are generating tremendous volumes of data. Because of such varied services, data formats are no longer restricted to structured types like text; unstructured content such as social media data, videos, and images is also generated. The generated data is of no use unless it is stored and analyzed to derive value. Traditional database systems come with limitations on the type of data format schema, a
APA, Harvard, Vancouver, ISO, and other styles
12

Cabaret, Sébastien. "Algorithmes de contrôles avancés pour les installations à gaz du Large Hadron Collider au CERN suivant le framework et l'approche dirigée par les modèles du projet Gas Control System." Amiens, 2008. http://www.theses.fr/2008AMIE0104.

Full text
Abstract:
This thesis presents the results obtained in the course of my research at CERN (the European Organization for Nuclear Research). It deals with the integration of advanced control algorithms for the 21 gas installations of the new particle accelerator, the LHC (Large Hadron Collider). The approach follows the strategic choice of the project known as GCS (Gas Control System): producing all of the control systems through recursive variations of automatic application generation. The control equipment provided for this purpose consists of programmable i
APA, Harvard, Vancouver, ISO, and other styles
13

Chang, Shu-Ming, and 張書銘. "Adaptive Fusion SQL Engine on Hadoop Framework." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/61224762200725509754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

"Thermal Aware Scheduling in Hadoop MapReduce Framework." Master's thesis, 2013. http://hdl.handle.net/2286/R.I.20932.

Full text
Abstract:
The energy consumption of data centers is increasing steadily along with the associated power density. Approximately half of this energy consumption is attributed to cooling, as a result of which reducing cooling energy, along with reducing servers' energy consumption, is becoming imperative to achieve greening of the data centers. This thesis deals with cooling energy management in data centers running data-processing frameworks. In particular, we propose thermal-aware scheduling for the MapReduce framework and its Hadoop implementation to reduce coolin
APA, Harvard, Vancouver, ISO, and other styles
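The thermal-aware idea summarized above can be illustrated with a toy greedy placement policy: put each task on the node with the lowest projected temperature, so hot spots (and hence cooling energy) are reduced. This is a hypothetical sketch for intuition, not the scheduler proposed in the thesis; all names and numbers are invented.

```python
import heapq

# Toy thermal-aware placement: a min-heap keyed on projected node temperature
# lets us always pick the coolest node for the next task.

def thermal_aware_place(node_temps, task_heats):
    """node_temps: {node: current temp}; task_heats: {task: heat it adds}."""
    heap = [(temp, node) for node, temp in node_temps.items()]
    heapq.heapify(heap)
    placement = {}
    for task, heat in task_heats.items():
        temp, node = heapq.heappop(heap)           # coolest node right now
        placement[task] = node
        heapq.heappush(heap, (temp + heat, node))  # account for the added heat
    return placement

# n1 starts much cooler than n2, so it absorbs all three small tasks here.
plan = thermal_aware_place({"n1": 40.0, "n2": 55.0},
                           {"t1": 5.0, "t2": 5.0, "t3": 5.0})
```

A real scheduler would also weigh data locality and deadlines against temperature, which is the kind of trade-off such a thesis has to address.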
15

Chi-TsunLiao and 廖啟村. "A Framework of Distributed Snapshots for Hadoop HBase." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/5bwj3f.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Computer Science and Information Engineering, academic year 101. Apache Hadoop HBase™ is an emerging distributed key-value persistent data store, which can accommodate a large volume of data rapidly introduced from a variety of sources. While data objects stored in HBase are precious, HBase is unable to perform parallel recovery for recovering historical data objects concurrently stored in multiple storage servers in a consistent manner. The study presents a framework for implementing a data recovery scheme in HBase. The framework consists of four components, including (1) distributed snapshots represented by event logs
APA, Harvard, Vancouver, ISO, and other styles
16

Liao, Jhih-Kai, and 廖治凱. "Fault-Tolerant Management Framework for Hadoop Distributed File System." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/20941508217703176221.

Full text
Abstract:
Master's thesis, Tamkang University, Department of Computer Science and Information Engineering, academic year 101. Due to the rapid development of the modern Internet, the mode of operation of a large number of applications has changed from a single machine to a cluster of machines over the network. This trend has also contributed to the development of cloud computing technology: Google invented the MapReduce framework, the Google File System (GFS), and BigTable, and Yahoo invested in the open-source Hadoop project to implement those technologies proposed by Google. The Hadoop Distributed File System (HDFS) is based on the master/slave model to manage the entire file system. Sp
APA, Harvard, Vancouver, ISO, and other styles
17

Chiang, Meng-hsiu, and 江孟修. "A Framework for the Implementation of Heuristic-based Schedulers on Hadoop." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/66919846215531260500.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Computer Science and Engineering, academic year 102. The advance of computer technology from uni-processing to symmetric multiprocessing to distributed computing and then to cloud computing has made it more and more difficult to come up with an optimal schedule on the fly for the tasks to be run on such a system. In order to achieve better scheduling quality than the primitive First-In-First-Out scheduler, Facebook and Yahoo have developed their own schedulers for Hadoop, a widely used cloud computing system, but none of them aims at optimizing the schedule in terms of makespan. Moreover, since schedulin
APA, Harvard, Vancouver, ISO, and other styles
18

Ho, Hung-Wei, and 何鴻緯. "Modeling and Analysis of Hadoop MapReduce Framework Using Petri Nets." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/37850810462698368045.

Full text
Abstract:
Master's thesis, National Taipei University, Department of Computer Science and Information Engineering, academic year 103. Technological advances have significantly increased the amount of corporate data available, which has created a wide range of business opportunities related to big data and cloud computing. Hadoop is a popular programming framework used for the setup of cloud computing systems. The MapReduce framework forms the core of the Hadoop program for parallel computing, and its parallel structure can greatly increase the efficiency of big data analysis. This study used Petri nets to create a visual model of the MapReduce framework and verify its reachability. We present
APA, Harvard, Vancouver, ISO, and other styles
19

Liu, Sheng-Hao, and 劉勝豪. "A Profiling and Monitoring Framework for Cloud Applications on Hadoop System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/88781706930466379236.

Full text
Abstract:
Master's thesis, National Central University, Institute of Computer Science and Information Engineering, academic year 98. The emerging cloud computing technology provides on-demand, powerful computing platforms for many complex scientific and industrial applications. They usually consume lots of computing resources and execute concurrently on a cloud platform. Therefore, a cloud system demands a good monitoring and profiling framework to keep track of users' applications, and uses the observed information for system management purposes, such as process deployment, application optimization, and load balancing. A pay-per-use cloud system can also charge its customers according to
APA, Harvard, Vancouver, ISO, and other styles
20

Hou, Kun-Liang, and 侯昆良. "High Performance Mechanism in Big Data Photomosaic Computation on Hadoop-based Framework." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/10210167341822554009.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Department of Management Information Systems, academic year 104. Digital images are unstructured data, and applying image processing techniques to them requires high computing power; digital images have thus become a Big Data issue. In our research, we implement a feature-based K-Medoids algorithm on a popular Big Data analysis tool, Hadoop, which provides that computing power. The photomosaic algorithm is a computationally complex process, especially in a Big Data image environment, where the method needs to deal with tons of images. There are three main goals of our research. Fris
APA, Harvard, Vancouver, ISO, and other styles
21

Chin, Bing-Da, and 秦秉達. "Design of Parallel Binary Classification Algorithm Based on Hadoop Cluster with MapReduce Framework." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/fu84aw.

Full text
Abstract:
Master's thesis, National Taichung University of Science and Technology, Department of Computer Science and Information Engineering, academic year 103. With the increasing amount of data today, it is hard to analyze large data efficiently in a single-computer environment; the Hadoop cluster is therefore important, because it lets us save and analyze large data. Data mining plays an important role in data analysis. Because the time complexity of the binary-class classification SVM algorithm is a big issue, we design a parallel binary SVM algorithm to solve this problem and achieve the effect of classifying the appropriate data. By leveraging the parallel processing property of MapReduce, we implement a multi-layer binary SVM by
APA, Harvard, Vancouver, ISO, and other styles
22

Chaudhari, Shivangi. "Accelerating Hadoop Map-Reduce for small/intermediate data sizes using the Comet coordination framework." 2009. http://hdl.rutgers.edu/1782.2/rucore10001600001.ETD.000051790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chien, Chien-Chung, and 簡玠忠. "Design and Implementation of a Big Data Processing and Analysis Framework on the Hadoop Ecosystem." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/44145308637212771842.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Department of Computer Science and Engineering, academic year 101. Research conducted by IDC indicates that the amount of information worldwide has doubled every two years, outpacing Moore's Law. Besides, with the increase of digital information and the universalization of cloud computing, it is predicted that the amount of digital data will reach 35 ZB by 2020. In addition, one third of digital data will be stored and processed through cloud computing. Consequently, large amounts of digital data will be a business opportunity for corporations and individuals. However, while we analyze such mega data, the limit
APA, Harvard, Vancouver, ISO, and other styles
24

Chou, Tzu-Hao, and 周子皓. "Study on typhoon quantitative rainfall prediction based on big data Hadoop Spark parallel framework calculus." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/9fm9v5.

Full text
Abstract:
Master's thesis, National Taiwan Ocean University, Department of Marine Environmental Informatics, academic year 107. About 80 typhoons are generated worldwide every year, and those generated in the northwestern Pacific Ocean are the strongest. Taiwan lies on the main path of typhoons in the northwestern Pacific. A typhoon brings abundant rainwater to fill the reservoirs, but it also causes losses, such as reduced agricultural production, the closure of industrial and commercial activities, flooding in some areas, and landslides. The purpose of this study is to predict the typhoon precipitation forecast by
APA, Harvard, Vancouver, ISO, and other styles
25

Roy, Sukanta. "Automated methods of natural resource mapping with Remote Sensing Big data in Hadoop MapReduce framework." Thesis, 2022. https://etd.iisc.ac.in/handle/2005/5836.

Full text
Abstract:
For several decades, remote sensing (RS) tools have provided platforms for the large-scale exploration of natural resources across the planetary bodies of our solar system. In the context of Indian remote sensing, mineral resources are being explored, and mangrove resources are being monitored towards a sustainable socio-economic structure and coastal eco-system, respectively, by utilising several remote analytical techniques. However, RS technologies and the corresponding data analytics have made a vast paradigm shift, which eventually has produced “RS Big data” in our scientific world of lar
APA, Harvard, Vancouver, ISO, and other styles
26

Huu, Tinh Giang Nguyen, and 阮有淨江. "Design and Implement a MapReduce Framework for Converting Standalone Software Packages to Hadoop-based Distributed Environments." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/20649990806109007865.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Institute of Manufacturing Information and Systems, academic year 101. Hadoop MapReduce is a programming model for designing automatically scalable distributed computing applications, and it gives developers an effective environment for attaining automatic parallelization. However, most existing manufacturing systems are difficult and restrictive to migrate to a MapReduce private cloud, due to platform incompatibility and the tremendous complexity of system reconstruction. To increase the efficiency of manufacturing systems with minimal modification of the existing systems, we design a framework in this thesis, called MC-Framework: Mult
APA, Harvard, Vancouver, ISO, and other styles
27

Chou, Chien-Ting, and 周建廷. "Research on The Computing of Direct Geo Morphology Runoff on Hadoop Cluster by Using MapReduce Framework." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/13575176515358582342.

Full text
Abstract:
Master's thesis, National Taiwan Normal University, Institute of Computer Science and Information Engineering, academic year 99. Because of the weather and landforms in Taiwan, a heavy rain often causes a sudden rise in the runoff of some basins and can even lead to serious disasters. Flood information systems are therefore heavily relied upon in Taiwan, especially in typhoon season. Computing the runoff of a basin is the most important module of a flood information system, used to check whether the runoff exceeds the warning level. However, this module is complicated and data-intensive, and it becomes the bottleneck when real-time information is needed while a typhoon is attacking the basins. The devel
APA, Harvard, Vancouver, ISO, and other styles
28

Shabangu, Mthunzi Machell, and Mthunzi Machell Shabangu. "Big data feature selection algorithm using parallel and distributed computing under Hadoop and the Apache Spark framework." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/d7h78a.

Full text
Abstract:
Master's thesis, National Taipei University of Technology, International Graduate Program in Electrical Engineering and Computer Science, academic year 107. With the proliferation of ubiquitous computing devices available to collect data, we find ourselves with huge amounts of data, resulting in big datasets with a huge number of samples and a high number of dimensions. On the other hand, learning models demand reduced dimensionality for better performance and accuracy. Feature selection, as a dimensionality reduction technique, has been proven to build simpler models and to lead to significantly improved classification in extensive experiments. Feature selection also contributes to preparing clean and much more understandab
APA, Harvard, Vancouver, ISO, and other styles
29

Chrimes, Dillon. "Towards a big data analytics platform with Hadoop/MapReduce framework using simulated patient data of a hospital system." Thesis, 2016. http://hdl.handle.net/1828/7645.

Full text
Abstract:
Background: Big data analytics (BDA) is important for reducing healthcare costs. However, there are many challenges. The study objective was the establishment of a high-performance interactive BDA platform for a hospital system. Methods: A Hadoop/MapReduce framework formed the BDA platform, with HBase (a NoSQL database) using hospital-specific metadata and file ingestion. Query performance was tested with Apache tools in Hadoop's ecosystem. Results: At the optimized iteration, Hadoop distributed file system (HDFS) ingestion required three seconds, but HBase required four to twelve hours to complete the Reducer
APA, Harvard, Vancouver, ISO, and other styles
30

Paschalidi, Charikleia. "Data Governance : A conceptual framework in order to prevent your Data Lake from becoming a Data Swamp." Thesis, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-46602.

Full text
Abstract:
Information Security is nowadays becoming a very popular subject of discussion among both academics and organizations. Proper Data Governance is the first step to an effective Information Security policy. As a consequence, more and more organizations are now switching their approach to data, treating it as an asset in order to get as much value as possible out of it. Living in an IT-driven world leads many researchers to approach Data Governance by borrowing IT Governance frameworks. The aim of this thesis is to contribute to this research by doing an Action Research in a big Financial
APA, Harvard, Vancouver, ISO, and other styles
31

(9530630), Akshay Jajoo. "EXPLOITING THE SPATIAL DIMENSION OF BIG DATA JOBS FOR EFFICIENT CLUSTER JOB SCHEDULING." Thesis, 2020.

Find full text
Abstract:
With the growing business impact of distributed big data analytics jobs, it has become crucial to optimize their execution and resource consumption. In most cases, such jobs consist of multiple sub-entities called tasks and are executed online in a large shared distributed computing system. The ability to accurately estimate runtime properties and coordinate the execution of the sub-entities of a job allows a scheduler to schedule jobs optimally. This thesis presents the first study that highlights the spatial dimension, an inherent property of distributed jobs, and underscores it
APA, Harvard, Vancouver, ISO, and other styles
32

RANJAN, RAVI. "PERFORMANCE ANALYSIS OF APRIORI AND FP GROWTH ON DIFFERENT MAPREDUCE FRAMEWORKS." Thesis, 2017. http://dspace.dtu.ac.in:8080/jspui/handle/repository/15814.

Full text
Abstract:
Association rule mining remains a very popular and effective method for extracting meaningful information from large datasets. It tries to find possible associations between items in large transaction-based datasets. In order to create these associations, frequent patterns have to be generated. Apriori and FP Growth are the two most popular algorithms for frequent itemset mining. To enhance the efficiency and scalability of Apriori and FP Growth, a number of algorithms have been proposed, addressing the design of efficient data structures, minimizing database scans, and parallel and distributed
APA, Harvard, Vancouver, ISO, and other styles
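As background for the abstract above, the Apriori pruning principle (every subset of a frequent itemset must itself be frequent) can be sketched in a few lines of plain Python. This single-machine version is for illustration only and is not the MapReduce implementation the thesis evaluates.

```python
from itertools import combinations

# Apriori sketch: candidates of size k+1 are joined from frequent itemsets of
# size k, and any candidate with an infrequent k-subset is pruned before
# counting support against the transactions.

def apriori(transactions, min_support):
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    frequent = {}
    level = [frozenset([i]) for i in sorted(items)]
    k = 1
    while level:
        # Count each candidate's support in one pass over the transactions.
        counts = {c: sum(c <= t for t in transactions) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Join step: build (k+1)-candidates whose k-subsets are all frequent.
        keys = list(survivors)
        level = list({a | b for a, b in combinations(keys, 2)
                      if len(a | b) == k + 1
                      and all(frozenset(s) in survivors
                              for s in combinations(a | b, k))})
        k += 1
    return frequent

freq = apriori([["a", "b", "c"], ["a", "b"], ["a", "c"]], min_support=2)
# {a}: 3, {b}: 2, {c}: 2, {a,b}: 2, {a,c}: 2 are frequent; {b,c} is pruned.
```

The MapReduce variants discussed in such work distribute exactly the counting pass, which dominates the cost on large transaction datasets.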
33

Paciullo, Emmanuelle. "Musique, numérisation, loi HADOPI : analyse d’une controverse dans les médias français." Thèse, 2011. http://hdl.handle.net/1866/8331.

Full text
Abstract:
This research examines the media controversy surrounding the HADOPI bill in France, from the submission of the Olivennes report in November 2007 until the law's final adoption in October 2009; the law aims to develop the legal supply of cultural works on the Internet by regulating downloading practices. During these two years, HADOPI was the subject of many discussions, debates and negotiations about the activities of Internet users who adopt these new modes of music consumption on the Internet, among others. The study focuses on a corpus of journalistic
APA, Harvard, Vancouver, ISO, and other styles
34

Chakraborty, Manimala. "Implications of Supersymmetry on dark matter, precision tests and collider experiments." Thesis, 2019. http://hdl.handle.net/10821/8269.

Full text
Abstract:
In spite of the fact that the signature of supersymmetry (SUSY) is yet to be found, SUSY remains a very strong candidate for Beyond the Standard Model (BSM) physics. Apart from many fundamental and phenomenological reasons, special relevance comes from the observation of the Higgs boson with a mass of 125 GeV at the Large Hadron Collider (LHC). This mass is well within the upper limit of 135 GeV predicted by the Minimal Supersymmetric Standard Model (MSSM) for its lighter CP-even Higgs boson (h). In contrast, we know that the unitarity requirement within the Standard Model (SM) sets the Hi
APA, Harvard, Vancouver, ISO, and other styles