To see the other types of publications on this topic, follow the link: GitLab.

Journal articles on the topic 'GitLab'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 journal articles for your research on the topic 'GitLab.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Arefeen, Mohammed Shamsul, and Michael Schiller. "Continuous Integration Using Gitlab." Undergraduate Research in Natural and Clinical Science and Technology (URNCST) Journal 3, no. 8 (September 11, 2019): 1–6. http://dx.doi.org/10.26685/urncst.152.

2

Burr, Chris, and Ben Couturier. "A gateway between GitLab CI and DIRAC." EPJ Web of Conferences 245 (2020): 05026. http://dx.doi.org/10.1051/epjconf/202024505026.

Abstract:
GitLab’s Continuous Integration has proven to be an efficient tool to manage the lifecycle of experimental software. This has sparked interest in uses that exceed simple unit tests, and therefore require more resources, such as production data configuration and physics data analysis. The default GitLab CI runner software is not appropriate for such tasks, and we show that it is possible to use the GitLab API and modern container orchestration technologies to build a custom CI runner that integrates with DIRAC, the middleware used by the LHCb experiment to run its jobs on the Worldwide LHC Computing Grid. This system allows for excellent utilisation of computing resources while also providing additional flexibility for defining jobs and providing authentication.
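Since this entry centres on driving a custom runner through the GitLab API, a minimal sketch of the polling loop such a runner might use follows. It assumes a hypothetical instance URL and runner token and uses the /api/v4/jobs/request endpoint that GitLab runners poll; translating the returned job into a DIRAC Grid submission is left as a comment, so this is an illustration of the mechanism, not the paper's implementation.

    import time
    import requests

    GITLAB_URL = "https://gitlab.example.org"   # hypothetical instance
    RUNNER_TOKEN = "glrt-..."                   # hypothetical runner token

    def poll_for_job():
        """Ask GitLab for a pending job; return the job payload or None."""
        resp = requests.post(
            f"{GITLAB_URL}/api/v4/jobs/request",
            json={"token": RUNNER_TOKEN},
            timeout=30,
        )
        if resp.status_code == 201:   # a job was assigned to this runner
            return resp.json()
        return None                   # 204 means nothing to do right now

    for _ in range(3):                # poll a few times for illustration
        job = poll_for_job()
        if job:
            # A DIRAC-backed runner would translate the job payload into a
            # Grid submission here instead of spawning a local shell.
            print("got job", job.get("id"))
        time.sleep(5)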
3

Kaczmarek, Paweł. "GitLab - NARZĘDZIE ZESPOŁOWEJ PRACY PROGRAMISTÓW." ELEKTRONIKA - KONSTRUKCJE, TECHNOLOGIE, ZASTOSOWANIA 1, no. 6 (June 5, 2019): 27–30. http://dx.doi.org/10.15199/13.2019.6.5.

4

de Lannoy, Carlos, Judith Risse, and Dick de Ridder. "poreTally: run and publish de novo nanopore assembler benchmarks." Bioinformatics 35, no. 15 (December 24, 2018): 2663–64. http://dx.doi.org/10.1093/bioinformatics/bty1045.

Abstract:
Summary Nanopore sequencing is a novel development in nucleic acid analysis. As such, nanopore-sequencing hardware and software are updated frequently and extensively, which quickly renders peer-reviewed publications on analysis pipeline benchmarking efforts outdated. To provide the user community with a faster, more flexible alternative to peer-reviewed benchmark papers for de novo assembly tool performance, we constructed poreTally, a comprehensive benchmarking tool. poreTally automatically assembles a given read set using several often-used assembly pipelines, analyzes the resulting assemblies for correctness and continuity, and finally generates a quality report, which can immediately be published on GitHub/GitLab. Availability and implementation poreTally is available on GitHub at https://github.com/cvdelannoy/poreTally, under an MIT license. Supplementary information Supplementary data are available at Bioinformatics online.
5

Cuhadar Donszelmann, Tulay, Walter Lampl, and Graeme A. Stewart. "ART ATLAS Release Tester using the Grid." EPJ Web of Conferences 245 (2020): 05015. http://dx.doi.org/10.1051/epjconf/202024505015.

Abstract:
The ART (ATLAS Release Tester) system is designed to run test jobs on the Grid after a nightly release of the ATLAS offline software has been built. The choice was taken to exploit the Grid as a backend as it offers a huge resource pool, suitable for a deep set of integration tests, and running the tests could be delegated to the highly scalable ATLAS production system (PanDA). The challenge of enabling the Grid as a test environment is met through the use of the CVMFS file system for the software and input data files. Test jobs are submitted to the Grid by the GitLab Continuous Integration (gitlab-ci) system, which itself is triggered at the end of a release build. Jobs can be adorned with special headers that inform the system how to run the specific test, allowing many options to be customised. The gitlab-ci system waits for exit status, and output files are copied back from the Grid to an EOS area accessible by users. All gitlab-ci jobs run in ART’s virtual machines, using Docker images for their ATLAS setup. ART jobs can be tracked by using the PanDA system. ART can also be used to run short test jobs locally. It uses the same ART command-line interface, where the backend is replaced to access a local machine for job submission rather than the Grid. This allows developers to ensure their tests work correctly before adding them to the system. In both the Grid and local machine options, running and result copying are completely parallelized. ART is written in Python, complete with its own local and Grid tests to give approximately 90% code coverage of the ART tool itself. ART has been in production for one year and fully replaces and augments the former ATLAS testing system.
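The "special headers" mentioned above lend themselves to a small illustration. The sketch below assumes they are comment lines of the form "# art-<key>: <value>" at the top of a test script; the real ART syntax may differ in detail.

    import re

    HEADER_RE = re.compile(r"^#\s*art-([\w-]+)\s*:\s*(.+)$")

    def parse_art_headers(lines):
        """Collect art-* headers that tell the system how to run a test."""
        headers = {}
        for line in lines:
            match = HEADER_RE.match(line.strip())
            if match:
                headers[match.group(1)] = match.group(2).strip()
        return headers

    script = "#!/bin/bash\n# art-description: dummy reco test\n# art-type: grid\n"
    print(parse_art_headers(script.splitlines()))
    # -> {'description': 'dummy reco test', 'type': 'grid'}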
6

Protasevich, Yu A., O. A. Zmeev, and D. A. Sokolov. "Tools for organizing teachers-students interaction using version control systems." Informatics and education, no. 4 (June 27, 2021): 36–46. http://dx.doi.org/10.32517/0234-0453-2021-36-4-36-46.

Abstract:
The article describes an approach to organizing the teacher-student interaction in programming courses using the Git version control system. In order to select the most suitable and affordable system for educational needs, a comparative analysis of different Git repository management systems was carried out. Based on the experience of various educational institutions that use version control systems in their courses, the advantages and disadvantages of using these systems in teaching were identified. Taking into account the existing problems, a software solution was developed based on the GitLab system. As part of this solution, a method is proposed for organizing the work of a teacher and students in disciplines that use version control systems. This approach implies using both GitLab and an additional system, which serves as a manager for Git repositories and is designed to facilitate the work of the teacher and administrator by automating the tasks they perform. The main purpose of the article is a detailed description of this approach: the permission limits for both teachers and students, GitLab organization and functionality, and a list of use cases for each user. The article also presents common workflows of the additional system, its main entities and their relationships, and an overview of the features that the system provides.
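As a flavour of the kind of task such an additional system automates, here is a sketch that creates a per-student repository and grants the student developer access through the GitLab REST API. The instance URL, token, group ID and user ID are placeholders, and the system described in the article is considerably richer than this.

    import requests

    API = "https://gitlab.example.edu/api/v4"   # hypothetical instance
    HEADERS = {"PRIVATE-TOKEN": "..."}          # teacher/admin token

    def create_student_project(course_group_id, student):
        """Create a private homework repo and add the student to it."""
        project = requests.post(
            f"{API}/projects",
            headers=HEADERS,
            json={"name": f"homework-{student['username']}",
                  "namespace_id": course_group_id,
                  "visibility": "private"},
            timeout=30,
        ).json()
        # Access level 30 is "Developer" in GitLab's permission model.
        requests.post(
            f"{API}/projects/{project['id']}/members",
            headers=HEADERS,
            json={"user_id": student["id"], "access_level": 30},
            timeout=30,
        )
        return project

    create_student_project(42, {"id": 1001, "username": "jdoe"})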
7

Сердечный, Алексей Леонидович, Игорь Васильевич Герасимов, Олег Юрьевич Макаров, Юрий Геннадьевич Пастернак, Николай Михайлович Тихомиров, Андрей Олегович Калашников, and Андрей Петрович Преображенский. "THE TECHNOLOGY OF DETECTION INFORMATION ABOUT VULNERABILITY IN THIRD-PARTY OPEN SOURCE SOFTWARE." ИНФОРМАЦИЯ И БЕЗОПАСНОСТЬ, no. 3(-) (December 1, 2020): 347–64. http://dx.doi.org/10.36622/vstu.2020.23.3.003.

Abstract:
The article presents the results of developing a technology for detecting information about vulnerabilities in third-party open source software, which allows timely detection of security problems associated with the use of borrowed open source components. The technology is distinguished by its procedures for the rapid detection, ranking, and confirmation of the authenticity of the primary sources of reports about such problems. It is based on collecting and semantically analyzing information about bugs, vulnerabilities and exploits contained in messages published on the information resources of open source software developers. The technology includes a procedure for confirming information about the most dangerous vulnerabilities, followed by a risk assessment for the confirmed vulnerabilities. The article also presents the results of implementing the proposed technology as a tool for collecting and interactively analyzing bug reports in open source software hosted on the GitHub and GitLab collaborative development platforms. The technology for detecting information about vulnerabilities in third-party components increases the security of software that incorporates publicly available open source components.
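To make the collection step concrete, a simplified sketch follows: it queries GitHub's issue search API for open bug reports mentioning a security term. The query string is only an example; the technology described in the article additionally ranks sources and confirms the reports.

    import requests

    QUERY = 'label:bug "buffer overflow" in:title,body state:open'  # example

    resp = requests.get(
        "https://api.github.com/search/issues",
        params={"q": QUERY, "sort": "updated", "per_page": 20},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    for issue in resp.json().get("items", []):
        print(issue["html_url"], "-", issue["title"])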
8

Kamoun, Choumouss, Julien Roméjon, Henri de Soyres, Apolline Gallois, Elodie Girard, and Philippe Hupé. "biogitflow: development workflow protocols for bioinformatics pipelines with git and GitLab." F1000Research 9 (June 22, 2020): 632. http://dx.doi.org/10.12688/f1000research.24714.1.

Abstract:
The use of a bioinformatics pipeline as a tool to support diagnostic and theranostic decisions in the healthcare process requires the definition of detailed development workflow guidelines. Therefore, we implemented protocols that describe step-by-step all the command lines and actions that the developers have to follow. Our protocols capitalized on two powerful and widely used tools: git and GitLab. They address two use cases: a nominal mode to develop a new feature in the bioinformatics pipeline and a hotfix mode to correct a bug that occurred in the production environment. The protocols are available as comprehensive documentation at https://biogitflow.readthedocs.io, and the main concepts, steps and principles are presented in this report.
9

Kamoun, Choumouss, Julien Roméjon, Henri de Soyres, Apolline Gallois, Elodie Girard, and Philippe Hupé. "biogitflow: development workflow protocols for bioinformatics pipelines with git and GitLab." F1000Research 9 (December 8, 2020): 632. http://dx.doi.org/10.12688/f1000research.24714.2.

Abstract:
The use of a bioinformatics pipeline as a tool to support diagnostic and theranostic decisions in the healthcare process requires the definition of detailed development workflow guidelines. Therefore, we implemented protocols that describe step-by-step all the command lines and actions that the developers have to follow. Our protocols capitalized on two powerful and widely used tools: git and GitLab. They address two use cases: a nominal mode to develop a new feature in the bioinformatics pipeline and a hotfix mode to correct a bug that occurred in the production environment. The protocols are available as comprehensive documentation at https://biogitflow.readthedocs.io, and the main concepts, steps and principles are presented in this report.
10

Kamoun, Choumouss, Julien Roméjon, Henri de Soyres, Apolline Gallois, Elodie Girard, and Philippe Hupé. "biogitflow: development workflow protocols for bioinformatics pipelines with git and GitLab." F1000Research 9 (February 19, 2021): 632. http://dx.doi.org/10.12688/f1000research.24714.3.

Abstract:
The use of a bioinformatics pipeline as a tool to support diagnostic and theranostic decisions in the healthcare process requires the definition of detailed development workflow guidelines. Therefore, we implemented protocols that describe step-by-step all the command lines and actions that the developers have to follow. Our protocols capitalized on the two powerful and widely used tools git and GitLab, and are based on gitflow, a well-established workflow in the software engineering community. They address two use cases: a nominal mode to develop a new feature in the bioinformatics pipeline and a hotfix mode to correct a bug that occurred in the production environment. The protocols are available as comprehensive documentation at https://biogitflow.readthedocs.io, and the main concepts, steps and principles are presented in this report.
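The two modes map naturally onto gitflow branch conventions. The sketch below drives git from Python to start each mode; the branch names ('dev', 'master', 'feature/...', 'hotfix/...') follow common gitflow usage and are assumptions, not the protocols' exact wording.

    import subprocess

    def git(*args):
        subprocess.run(["git", *args], check=True)

    def start_feature(name):
        """Nominal mode: a feature branches off the development branch."""
        git("checkout", "dev")
        git("pull", "origin", "dev")
        git("checkout", "-b", f"feature/{name}")

    def start_hotfix(name):
        """Hotfix mode: a fix branches off the production branch."""
        git("checkout", "master")
        git("pull", "origin", "master")
        git("checkout", "-b", f"hotfix/{name}")

    start_feature("variant-annotation")   # example feature name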
11

Eraslan, Sukru, Kamilla Kopec-Harding, Caroline Jay, Suzanne M. Embury, Robert Haines, Julio César Cortés Ríos, and Peter Crowther. "Integrating GitLab metrics into coursework consultation sessions in a software engineering course." Journal of Systems and Software 167 (September 2020): 110613. http://dx.doi.org/10.1016/j.jss.2020.110613.

12

Sailer, André, and Marko Petrič. "Automation and Testing for Simplified Software Deployment." EPJ Web of Conferences 214 (2019): 05019. http://dx.doi.org/10.1051/epjconf/201921405019.

Abstract:
Creating software releases is one of the more tedious occupations in the life of a software developer. For this purpose we have tried to automate as many of the repetitive tasks involved as possible, from getting the commits to running the software. For this simplification we rely in large parts on free collaborative services available around GitHub: issue tracking and code review (GitHub), continuous integration (Travis-CI), and static code analysis (Coverity). The dependencies and compilers used in the continuous integration are obtained by mounting CVMFS into a Docker container. This enables running any desired compiler version (e.g., gcc 6.2, llvm 3.9) or tool (e.g., clang-format, pylint). To create tags for the software package, the powerful GitHub API is used. A script was developed that first collates the release notes from the description of each pull request, commits the release notes file, and finally makes a tag. This moves the burden of writing release notes from the package maintainer to the individual developer. The deployment of software releases to CVMFS is handled via GitLab-CI. When a tag is made, the software is built and automatically deployed. In this paper we will describe the software infrastructure used for the iLCSoft and iLCDirac projects, which are used by CLICdp and the ILC detector collaborations, and give examples of automation which might be useful for others.
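The release-notes step described above can be sketched against the GitHub REST API: list closed pull requests, keep the merged ones, and collate their titles and descriptions. The repository name and token are placeholders, and the authors' actual script surely differs in detail.

    import requests

    API = "https://api.github.com"
    OWNER, REPO = "example-org", "example-repo"   # hypothetical repository
    HEADERS = {"Authorization": "token ..."}      # placeholder token

    def collect_release_notes():
        resp = requests.get(
            f"{API}/repos/{OWNER}/{REPO}/pulls",
            params={"state": "closed", "per_page": 100},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        notes = []
        for pr in resp.json():
            if pr.get("merged_at"):               # keep only merged PRs
                notes.append(f"- {pr['title']} (#{pr['number']})\n"
                             f"  {pr.get('body') or ''}")
        return "\n".join(notes)

    print(collect_release_notes())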
13

Gallego, Diego, Leonardo Darré, Pablo D. Dans, and Modesto Orozco. "VeriNA3d: an R package for nucleic acids data mining." Bioinformatics 35, no. 24 (July 9, 2019): 5334–36. http://dx.doi.org/10.1093/bioinformatics/btz553.

Abstract:
Summary veriNA3d is an R package for the analysis of nucleic acids structural data, with an emphasis on complex RNA structures. In addition to single-structure analyses, veriNA3d also implements functions to handle whole datasets of mmCIF/PDB structures that can be retrieved from public/local repositories. Our package aims to fill a gap in the data mining of nucleic acids structures to produce flexible and high-throughput analysis of structural databases. Availability and implementation http://mmb.irbbarcelona.org/gitlab/dgallego/veriNA3d. Supplementary information Supplementary data are available at Bioinformatics online.
14

Currie, Robert, Rosen Mataev, and Marco Clemencic. "Evolution of the LHCb Continuous Integration system." EPJ Web of Conferences 245 (2020): 05039. http://dx.doi.org/10.1051/epjconf/202024505039.

Abstract:
The physics software stack of LHCb is based on Gaudi and is comprised of about 20 interdependent projects, managed across multiple GitLab repositories. At present, the continuous integration (CI) system used for regular building and testing of this software is implemented using Jenkins and runs on a cluster of about 300 cores. LHCb CI pipelines are Python-based and relatively modern, with some degree of modularity, i.e. the separation of test jobs from build jobs. However, these still suffer from obsolete design choices that prevent improvements to scalability and reporting. In particular, the resource use and speed have not been thoroughly optimized due to the predominant use of the system for nightly builds, where a feedback time of 8 hours is acceptable. We describe recent work on speeding up pipelines by aggressively splitting and parallelizing checkout, build and test jobs and caching their artifacts. The current state of automatic code quality integration, such as coverage reports, is shown. This paper presents how feedback time from change (merge request) submission to build and test reports is reduced from “next day” to a few hours by dedicated on-demand pipelines. Custom GitLab integration allows easy triggering of pipelines, including linked changes to multiple projects, and provides immediate feedback as soon as results are ready. Reporting includes a comparison to tests on a unique stable reference build, dynamically chosen for every set of changes under testing. This work enables isolated testing of changes that integrates well into the development workflow, leaving nightly testing primarily for integration tests.
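On-demand pipelines of the kind described here can be started through GitLab's pipeline-trigger API. A minimal sketch, with the instance URL, project ID, trigger token and branch name as placeholders:

    import requests

    GITLAB = "https://gitlab.example.org"     # placeholder instance
    PROJECT_ID = 1234                         # hypothetical project
    TRIGGER_TOKEN = "..."                     # per-project trigger token

    resp = requests.post(
        f"{GITLAB}/api/v4/projects/{PROJECT_ID}/trigger/pipeline",
        data={"token": TRIGGER_TOKEN, "ref": "my-feature-branch"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json().get("web_url"))         # link to the new pipeline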
15

Lamy-Besnier, Quentin, Bryan Brancotte, Hervé Ménager, and Laurent Debarbieux. "Viral Host Range database, an online tool for recording, analyzing and disseminating virus–host interactions." Bioinformatics 37, no. 17 (February 17, 2021): 2798–801. http://dx.doi.org/10.1093/bioinformatics/btab070.

Abstract:
Motivation Viruses are ubiquitous in the living world, and their ability to infect more than one host defines their host range. However, information about which virus infects which host, and about which host is infected by which virus, is not readily available. Results We developed a web-based tool called the Viral Host Range database to record, analyze and disseminate experimental host range data for viruses infecting archaea, bacteria and eukaryotes. Availability and implementation The ViralHostRangeDB application is available from https://viralhostrangedb.pasteur.cloud. Its source code is freely available from the GitLab instance of Institut Pasteur (https://gitlab.pasteur.fr/hub/viralhostrangedb).
16

Wieczór, Miłosz, Adam Hospital, Genis Bayarri, Jacek Czub, and Modesto Orozco. "Molywood: streamlining the design and rendering of molecular movies." Bioinformatics 36, no. 17 (June 23, 2020): 4660–61. http://dx.doi.org/10.1093/bioinformatics/btaa584.

Abstract:
Motivation High-quality dynamic visuals are needed at all levels of science communication, from the conference hall to the classroom. As scientific journals embrace new article formats, many key concepts—particularly in structural biology—are also more easily conveyed as videos than still frames. Notwithstanding, the design and rendering of a complex molecular movie remain an arduous task. Here, we introduce Molywood, a robust and intuitive tool that builds on the capabilities of Visual Molecular Dynamics (VMD) to automate all stages of movie rendering. Results Molywood is a Python-based script that uses an integrated workflow to give maximal flexibility in movie design. It implements the basic concepts of actions, layers, grids and concurrency, and requires no programming experience to run. Availability and implementation The script is freely available on GitLab (gitlab.com/KomBioMol/molywood) and PyPI (through pip), and features extended documentation, a tutorial and a gallery hosted on mmb.irbbarcelona.org/molywood.
17

Andrade, Pedro, Alberto Aimar, Simone Brundu, Borja Garrido Bear, Gonzalo Menendez Borge, Luca Magnoni, Diogo Lima Nicolau, and Nikolay Tsvetkov. "WLCG Dashboards with Unified Monitoring." EPJ Web of Conferences 245 (2020): 07049. http://dx.doi.org/10.1051/epjconf/202024507049.

Abstract:
Monitoring of the CERN Data Centres and the WLCG infrastructure is now largely based on the new monitoring infrastructure provided by CERN IT. This is the result of the migration from several old in-house developed monitoring tools into a common monitoring infrastructure based on open source technologies such as Collectd, Flume, Kafka, Spark, InfluxDB, Grafana and others. This new infrastructure relies on CERN IT services (OpenStack, Puppet, GitLab, DBOD, etc.) and covers the full range of monitoring tasks: metrics and logs collection, alarms generation, data validation and transport, data enrichment and aggregation (where applicable), dashboards visualisation, reports generation, etc. This contribution will present the different services offered by the infrastructure today, highlight the main monitoring use cases from the CERN Data Centres and WLCG, and analyse the last years’ experience of moving from legacy, well-established custom monitoring tools to a common open-source-based infrastructure.
18

Monnerie, Petera, Lyan, Gaudreau, Comte, and Pujos-Guillot. "Analytic Correlation Filtration: A New Tool to Reduce Analytical Complexity of Metabolomic Datasets." Metabolites 9, no. 11 (October 24, 2019): 250. http://dx.doi.org/10.3390/metabo9110250.

Abstract:
Metabolomics generates massive and complex data. Redundancy among different analytical species and the high degree of correlation within datasets are constraints on the use of data mining/statistical methods and on interpretation. In this context, we developed a new tool to detect analytical correlations within datasets without confounding them with biological correlations. Based on several parameters, such as a similarity measure, retention time, and mass information from known isotopes, adducts, or fragments, the algorithm groups features coming from the same analyte and proposes one single representative per group. To illustrate the functionalities and added value of this tool, it was applied to published datasets and compared to one of the most commonly used free packages proposing a grouping method for metabolomics data: ‘CAMERA’. This tool was developed to be included in Galaxy and will be available in Workflow4Metabolomics (http://workflow4metabolomics.org). Source code is freely available for download under the CeCILL 2.1 license at https://services.pfem.clermont.inra.fr/gitlab/grandpa/tool-acf and is implemented in Perl.
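A toy version of the grouping idea helps fix the concept: features whose pairwise intensity correlation exceeds a threshold and whose retention times are close are treated as one analyte, and a single representative is kept. The thresholds and the representative rule (highest median intensity) are illustrative choices, not the published algorithm's parameters.

    import numpy as np

    def group_features(intensities, rts, r_min=0.9, rt_tol=0.05):
        """intensities: (n_features, n_samples) array; rts: retention times."""
        n = intensities.shape[0]
        corr = np.corrcoef(intensities)
        groups, assigned = [], set()
        for i in range(n):
            if i in assigned:
                continue
            group = [i]
            for j in range(i + 1, n):
                if (j not in assigned and corr[i, j] >= r_min
                        and abs(rts[i] - rts[j]) <= rt_tol):
                    group.append(j)
            assigned.update(group)
            # keep the most intense feature as the group's representative
            rep = max(group, key=lambda k: np.median(intensities[k]))
            groups.append((rep, group))
        return groups

    X = np.random.rand(5, 20)                       # fake feature table
    print(group_features(X, rts=[1.00, 1.01, 1.02, 5.00, 5.01]))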
19

Lechtenbörger, Jens. "Erstellung und Weiterentwicklung von Open Educational Resources im Selbstversuch." Forschung und Open Educational Resources – Eine Momentaufnahme für Europa 34, Research and OER (March 2, 2019): 101–17. http://dx.doi.org/10.21240/mpaed/34/2019.03.02.x.

Abstract:
Open Educational Resources (OER) promise, on the one hand, to lower barriers to accessing education and, on the other, to avoid redundant work when similar, high-quality educational resources are created in different organisations. However, the spread of OER faces well-known obstacles, and the ALMS framework provides a structure for assessing the reuse and adaptation of OER from a technical perspective. Starting from a self-experiment in adopting OER, this paper defines requirements for OER that extend the ALMS framework, based on concepts from software development and technical writing. With these requirements in mind, two OER projects are described. First, the further development of a textbook under a Creative Commons licence is outlined. Second, the creation and use of the newly developed software emacs-reveal for producing HTML presentations with audio commentary suitable for self-study is described; the presentations are authored in plain text files, and the HTML code is generated automatically on a public GitLab infrastructure, which simplifies use of the software. Results of a survey among students highlight the advantages of the generated presentations.
20

Pereshivkina, Polina, Nadezhda Karandasheva, Maria Mikhaylenko, and Mikhail Kurushkin. "Immersive Molecular Dynamics in Virtual Reality: Increasing Efficiency of Educational Process with Companion Converter for NarupaXR." Journal of Imaging 7, no. 6 (June 8, 2021): 97. http://dx.doi.org/10.3390/jimaging7060097.

Abstract:
Visualization has always been a crucial part of the educational process, and implementing computer algorithms and virtual reality tools within it is vital for the new generation of engineers, scientists and researchers. In the field of chemistry education, various software packages that allow dynamic molecular building and viewing are currently available. These packages are now used to enhance the learning process and ensure better understanding of chemical processes from the visual perspective. The present short communication provides a summary of these applications based on the NarupaXR program, an educational tool that combines the functionality and simple design such a tool requires. NarupaXR is used with a companion application, “Narupa Builder”, which requires a different file format; therefore, a converter that allows a simple transition between the two extensions has been developed. The converter substantially increases the efficiency of the educational process. The automatic converter is freely available on GitLab. The current communication provides detailed written instructions that can simplify the installation process of the converter and facilitate the use of both the software and the hardware of the VR set.
21

Vasileva, Petya, Andrea Formica, and Gancho Dimitrov. "The ATLAS Wide-Range Database and Application Monitoring." EPJ Web of Conferences 214 (2019): 04036. http://dx.doi.org/10.1051/epjconf/201921404036.

Abstract:
In HEP experiments at the LHC, database applications often become complex, reflecting the increasingly demanding requirements of the researchers. The ATLAS experiment has several Oracle DB clusters with over 216 database schemas, each with its own set of database objects. To effectively monitor them, we designed a modern and portable application with exceptionally good characteristics. Some of them include: a concise view of the most important DB metrics; a list of top SQL statements based on CPU, executions, block reads, etc.; volume growth plots per schema and DB object type; a database jobs section with signalization of failures; and in-depth analysis in case of row-lock contention or DB sessions. This contribution also describes the technical aspects of the implementation. The project can be separated into three independent layers. The first layer consists of highly optimized database objects hiding all complicated calculations. The second layer is a server providing REST access to the underlying database backend. The third layer is a JavaScript/AngularJS web interface. In addition, we summarize the continuous integration cycle of the application, which uses GitLab-CI pipelines for basic testing, containerization and deployment on the CERN OpenShift infrastructure.
22

Kasmi, Zakaria, Abdelmoumen Norrdine, Jochen Schiller, Mesut Güneş, and Christoph Motzko. "RcdMathLib: An Open Source Software Library for Computing on Resource-Limited Devices." Sensors 21, no. 5 (March 1, 2021): 1689. http://dx.doi.org/10.3390/s21051689.

Abstract:
We developed an open source library called RcdMathLib for solving multivariate linear and nonlinear systems. RcdMathLib supports on-the-fly computing on low-cost and resource-constrained devices, e.g., microcontrollers. The decentralized processing is a step towards ubiquitous computing, enabling the implementation of Internet of Things (IoT) applications. RcdMathLib is modular and layer-based, whereby different modules allow for algebraic operations such as vector and matrix operations or decompositions. RcdMathLib also comprises a utilities module providing sorting and filtering algorithms as well as methods generating random variables. It enables solving linear and nonlinear equations based on efficient decomposition approaches such as the Singular Value Decomposition (SVD) algorithm. The open source library also provides optimization methods such as the Gauss–Newton and Levenberg–Marquardt algorithms for solving problems of regression smoothing and curve fitting. Furthermore, a positioning module permits computing positions of IoT devices using algorithms such as trilateration. This module also enables optimization of the position by performing a method to reduce multipath errors on the mobile device. The library is implemented and tested on resource-limited IoT devices as well as on full-fledged operating systems. The open source software library is hosted on a GitLab repository.
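As a worked example of the positioning module's core task, the sketch below solves a trilateration problem with a Gauss–Newton iteration in NumPy. RcdMathLib itself targets microcontrollers and ships its own linear-algebra routines, so this illustrates the mathematics, not the library's API.

    import numpy as np

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    distances = np.array([7.07, 7.07, 7.07])   # measured ranges to anchors

    x = np.array([1.0, 1.0])                   # initial position guess
    for _ in range(20):
        diff = x - anchors                     # vectors anchor -> estimate
        est = np.linalg.norm(diff, axis=1)     # predicted distances
        residual = est - distances
        jacobian = diff / est[:, None]         # d||x-a||/dx = (x-a)/||x-a||
        step, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
        x -= step
    print(x)                                   # ~ (5, 5) for these ranges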
23

Welty, Ethan, Michael Zemp, Francisco Navarro, Matthias Huss, Johannes J. Fürst, Isabelle Gärtner-Roer, Johannes Landmann, et al. "Worldwide version-controlled database of glacier thickness observations." Earth System Science Data 12, no. 4 (November 24, 2020): 3039–55. http://dx.doi.org/10.5194/essd-12-3039-2020.

Abstract:
Although worldwide inventories of glacier area have been coordinated internationally for several decades, a similar effort for glacier ice thicknesses was only initiated in 2013. Here, we present the third version of the Glacier Thickness Database (GlaThiDa v3), which includes 3 854 279 thickness measurements distributed over roughly 3000 glaciers worldwide. Overall, 14 % of global glacier area is now within 1 km of a thickness measurement (located on the same glacier) – a significant improvement over GlaThiDa v2, which covered only 6 % of global glacier area and only 1100 glaciers. Improvements in measurement coverage increase the robustness of numerical interpolations and model extrapolations, resulting in better estimates of regional to global glacier volumes and their potential contributions to sea-level rise. In this paper, we summarize the sources and compilation of glacier thickness data and the spatial and temporal coverage of the resulting database. In addition, we detail our use of open-source metadata formats and software tools to describe the data, validate the data format and content against this metadata description, and track changes to the data following modern data management best practices. Archived versions of GlaThiDa are available from the World Glacier Monitoring Service (e.g., v3.1.0, from which this paper was generated: https://doi.org/10.5904/wgms-glathida-2020-10; GlaThiDa Consortium, 2020), while the development version is available on GitLab (https://gitlab.com/wgms/glathida, last access: 9 November 2020).
24

Lescisin, Michael, Qusay H. Mahmoud, and Anca Cioraca. "Design and Implementation of SFCI: A Tool for Security Focused Continuous Integration." Computers 8, no. 4 (November 1, 2019): 80. http://dx.doi.org/10.3390/computers8040080.

Abstract:
Software security is a component of software development that should be integrated throughout its entire development lifecycle, and not simply as an afterthought. If security vulnerabilities are caught early in development, they can be fixed before the software is released in production environments. Furthermore, finding a software vulnerability early in development will warn the programmer and lessen the likelihood of this type of programming error being repeated in other parts of the software project. Using Continuous Integration (CI) to check for security vulnerabilities every time new code is committed to a repository can alert developers of security flaws almost immediately after they are introduced. Finally, continuous integration tests for security give software developers the option of making the test results public so that users or potential users are given assurance that the software is well tested for security flaws. While there already exist general-purpose continuous integration tools such as Jenkins-CI and GitLab-CI, our tool is primarily focused on integrating third-party security testing programs and generating reports on classes of vulnerabilities found in a software project. Our tool performs all tests in a snapshot (stateless) virtual machine to be able to have reproducible tests in an environment similar to the deployment environment. This paper introduces the design and implementation of a tool for security-focused continuous integration. The test cases used demonstrate the ability of the tool to effectively uncover security vulnerabilities even in open source software products such as ImageMagick and a smart grid application, Emoncms.
25

Alshaabi, Thayer, Michael V. Arnold, Joshua R. Minot, Jane Lydia Adams, David Rushing Dewhurst, Andrew J. Reagan, Roby Muhamad, Christopher M. Danforth, and Peter Sheridan Dodds. "How the world’s collective attention is being paid to a pandemic: COVID-19 related n-gram time series for 24 languages on Twitter." PLOS ONE 16, no. 1 (January 6, 2021): e0244476. http://dx.doi.org/10.1371/journal.pone.0244476.

Abstract:
In confronting the global spread of the coronavirus disease COVID-19 pandemic we must have coordinated medical, operational, and political responses. In all efforts, data is crucial. Fundamentally, and in the possible absence of a vaccine for 12 to 18 months, we need universal, well-documented testing for both the presence of the disease as well as confirmed recovery through serological tests for antibodies, and we need to track major socioeconomic indices. But we also need auxiliary data of all kinds, including data related to how populations are talking about the unfolding pandemic through news and stories. To in part help on the social media side, we curate a set of 2000 day-scale time series of 1- and 2-grams across 24 languages on Twitter that are most ‘important’ for April 2020 with respect to April 2019. We determine importance through our allotaxonometric instrument, rank-turbulence divergence. We make some basic observations about some of the time series, including a comparison to numbers of confirmed deaths due to COVID-19 over time. We broadly observe across all languages a peak for the language-specific word for ‘virus’ in January 2020 followed by a decline through February and then a surge through March and April. The world’s collective attention dropped away while the virus spread out from China. We host the time series on Gitlab, updating them on a daily basis while relevant. Our main intent is for other researchers to use these time series to enhance whatever analyses that may be of use during the pandemic as well as for retrospective investigations.
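For readers unfamiliar with rank-turbulence divergence, a simplified, unnormalized sketch of the per-word contribution follows: for a tuning parameter alpha, a word with ranks r1 and r2 in the two corpora contributes |1/r1^alpha - 1/r2^alpha|^(1/(alpha+1)). Normalization and the handling of words absent from one corpus are omitted here, and alpha = 1/3 is just an example setting; see the authors' allotaxonometry work for the full definition.

    def rtd_contributions(ranks1, ranks2, alpha=1/3):
        """Rank words by their (unnormalized) divergence contribution."""
        words = set(ranks1) & set(ranks2)
        scores = {
            w: abs(ranks1[w] ** -alpha - ranks2[w] ** -alpha)
               ** (1 / (alpha + 1))
            for w in words
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    april_2019 = {"virus": 5000, "holiday": 10}    # toy rank data
    april_2020 = {"virus": 3, "holiday": 450}
    print(rtd_contributions(april_2019, april_2020))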
26

Albaihaqi, Muhammad Fauzan, Anisa Nurul Wilda, and Bambang Sugiantoro. "Deploying an Application to Cloud Platform Using Continous Integration and Continous Delivery." Proceeding International Conference on Science and Engineering 3 (April 30, 2020): 279–82. http://dx.doi.org/10.14421/icse.v3.513.

Abstract:
Cloud computing is the best way for a business owner to deploy an application while reducing cost, because it implements the pay-as-you-go concept. Generally, an application at the production level, deployed to a cloud instance, should not have any errors or bugs; it should be tested and maintained properly. The problem is that an application under intensive development takes more effort to test and deploy, so a strategy is needed to make deployment to a cloud instance more efficient. Nowadays, Version Control System (VCS) platforms provide Continuous Integration and Continuous Delivery (CI/CD) features, which users can leverage to perform automated testing and deployment easily. This research examines how to use a CI/CD feature and evaluates it in the case of deploying a web application to a cloud platform. The researchers use GitLab, which provides a CI/CD feature for free, and deploy the application to Amazon Web Services, also utilizing Docker containers to accommodate all processes. The results show that Continuous Integration can improve application quality, because most lines of code are tested using unit or feature test scenarios, and that using a CI/CD feature improves the security of deployment: the deployment process runs automatically without human intervention, which reduces human-error factors. This feature also ensures high availability of the application, with zero-downtime deployments; the application can be updated quickly without any downtime or reconfiguration. Finally, Docker containers play an important role in deploying the application to the cloud instance.
27

Fernandez Alvarez, Luis, Olga Datskova, Ben Jones, and Gavin McCance. "Managing the CERN Batch System with Kubernetes." EPJ Web of Conferences 245 (2020): 07048. http://dx.doi.org/10.1051/epjconf/202024507048.

Abstract:
The CERN Batch Service faces many challenges in order to get ready for the computing demands of future LHC runs. These challenges require that we look at all potential resources, assessing how efficiently we use them, and that we explore different alternatives to exploit opportunistic resources in our infrastructure as well as outside of the CERN computing centre. Several projects, like BEER, Helix Nebula Science Cloud and the new OCRE project, have proven our ability to run batch workloads on a wide range of non-traditional resources. However, the challenge is not only to obtain the raw compute resources needed but also to define an operational model that is cost- and time-efficient, scalable and flexible enough to adapt to a heterogeneous infrastructure. In order to tackle both the provisioning and operational challenges it was decided to use Kubernetes. By using Kubernetes we benefit from a de-facto standard in containerised environments, available in nearly all cloud providers and surrounded by a vibrant ecosystem of open-source projects. Leveraging Kubernetes’ built-in functionality, and other open-source tools such as Helm, Terraform and GitLab CI, we have deployed a first cluster prototype which we discuss in detail. The effort has simplified many of the existing operational procedures we currently have, but has also made us rethink established procedures and assumptions that were only valid in a VM-based cloud environment. This contribution presents how we have adopted Kubernetes into the CERN Batch Service, the impact its adoption has on daily operations, a comparison of resource usage efficiency, and the experience so far evolving our infrastructure towards this model.
28

Hennig, André, and Kay Nieselt. "Efficient merging of genome profile alignments." Bioinformatics 35, no. 14 (July 2019): i71–i80. http://dx.doi.org/10.1093/bioinformatics/btz377.

Abstract:
Motivation Whole-genome alignment (WGA) methods show insufficient scalability toward the generation of large-scale WGAs. Profile alignment-based approaches revolutionized multiple sequence alignment construction by significantly reducing computational complexity and runtime. However, WGAs need to consider genomic rearrangements between genomes, which makes the profile-based extension of several whole genomes challenging. Currently, none of the available methods offer the possibility to align or extend WGA profiles. Results Here, we present genome profile alignment (GPA), an approach that aligns the profiles of WGAs and that is capable of producing large-scale WGAs many times faster than conventional methods. Our concept relies on already available whole-genome aligners, which are used to compute several smaller sets of aligned genomes that are combined into a full WGA with a divide-and-conquer approach. To align or extend WGA profiles, we make use of the SuperGenome data structure, which features a bidirectional mapping between individual sequence and alignment coordinates. This data structure is used to efficiently transfer different coordinate systems into a common one based on the principles of profile alignments. The approach allows the computation of a WGA where alignments are subsequently merged along a guide tree. The current implementation uses progressiveMauve and offers the possibility of parallel computation of independent genome alignments. Our results, based on various bacterial datasets of up to several hundred genomes, show that we can reduce the runtime from months to hours with a quality that is negligibly worse than the WGA computed with the conventional progressiveMauve tool. Availability and implementation GPA is freely available at https://lambda.informatik.uni-tuebingen.de/gitlab/ahennig/GPA. GPA is implemented in Java, uses progressiveMauve and offers parallel computation of WGAs. Supplementary information Supplementary data are available at Bioinformatics online.
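A toy illustration of the bidirectional mapping behind the SuperGenome data structure: given one gapped sequence from an alignment, build maps between that sequence's own coordinates and the alignment's columns (0-based here). The real structure covers all genomes and handles rearrangements.

    def build_maps(aligned_seq):
        """Map sequence positions to alignment columns and back."""
        seq_to_aln, aln_to_seq = {}, {}
        seq_pos = 0
        for col, ch in enumerate(aligned_seq):
            if ch != "-":                 # a real residue, not a gap
                seq_to_aln[seq_pos] = col
                aln_to_seq[col] = seq_pos
                seq_pos += 1
        return seq_to_aln, aln_to_seq

    s2a, a2s = build_maps("AC--GT")
    print(s2a[2])   # residue 2 ('G') sits in alignment column 4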
29

Syahputra, Muhammad Indra. "PEMBUATAN GITAR ELEKTRIK DI JALAN GATOT SUBROTO KOTA MEDAN." Grenek Music Journal 2, no. 2 (April 9, 2013): 61. http://dx.doi.org/10.24114/grenek.v2i2.3840.

Abstract:
In general, this study describes the process of making electric guitars on Jalan Gatot Subroto in Medan. The process begins with wood selection; the woods used in this study are mahogany, maple and ebony. Once the wood is obtained, the electric guitar design is drafted. The next step is shaping the body and neck of the electric guitar using a bandsaw and a router table. For the finishing stage, the guitar is painted and the electronic circuitry is installed according to a predetermined schematic. The study also notes the benefits the craftsman has gained from running an electric-guitar-making business, as seen in the involvement of the Dedek Craft enterprise in SME (small and medium enterprise) events held annually at the Medan Petisah district office. The main obstacle faced in making electric guitars is the painting process, which remains highly dependent on the weather. Buyers generally come directly to the workshop to obtain Dedek Craft products, as the craftsman runs a gallery at his home, which also serves as the production site. This study is intended to serve as a guide or reference for anyone seeking information on the process of making electric guitars.
30

Bouquin, Daina R. "GitHub." Journal of the Medical Library Association : JMLA 103, no. 3 (July 2015): 166–67. http://dx.doi.org/10.3163/1536-5050.103.3.019.

31

Vensko, Steven, Benjamin Vincent, and Dante Bortone. "485 RAFT: A framework to support rapid and reproducible immuno-oncology analyses." Journal for ImmunoTherapy of Cancer 8, Suppl 3 (November 2020): A521. http://dx.doi.org/10.1136/jitc-2020-sitc2020.0485.

Abstract:
Background: Analysis reproducibility and transparency are pillars of robust and trustworthy scientific results. The dependability of these results is crucial in clinical settings where they may guide high-impact decisions affecting patient health. Independent reproduction of computational results has been problematic and can be a burden on the individuals attempting to reproduce the results. Reproduction complications may arise from: 1) insufficiently described parameters, 2) vague methods, or 3) secret scripts required to generate final outputs, among others. Here we introduce RAFT (Reproducible Analyses Framework and Tools), a framework for immuno-oncology biomarker development built with Python 3 and Nextflow DSL2 which aims to enable end-to-end reproducibility of entire computational analyses in multiple contexts (e.g. local, compute cluster, or cloud) with minimal overhead through a focus on usability (figures 1 and 2).

Methods: RAFT builds upon Nextflow’s DSL2 module-based approach to workflows by providing a ‘project’ context upon which users can add metadata, load references, and build up their analysis step-by-step. RAFT also has pre-built modules with workflows commonly utilized in immuno-oncology analyses (e.g. TCR/BCR repertoire reconstruction and HLA typing) and aids users through automatic module dependency resolution. Transparency is gained by having a single end-to-end script containing all steps and parameters as well as a single configuration file. Finally, RAFT allows users to create and share a package of project metadata files including the main script, all input and output checksums, all modules, and the RAFT steps required to create the analysis. This package, coupled with any required input files, can be used to recreate the analysis or further expand an analysis with additional datasets or alternative parameters.

Results: RAFT has been used by our computational team to create an immuno-oncology meta-analysis submitted to SITC 2020. A simple, proof-of-concept analysis has been used to establish RAFT’s ability to support reproducibility by running locally on laptop computers, on multiple research compute clusters, and on the Google Cloud Platform.

Figure 1 (Example RAFT Usage): Users define their required inputs, build their analysis, and run their analysis using the RAFT command-line interface. The metadata from the analysis can then be shared through a RAFT package with collaborators or interested third parties in order to reproduce or expand upon the initial results.

Figure 2 (End-to-end RAFT): RAFT supports end-to-end analysis development through a ‘project’ structure. Users link local required files (e.g. FASTQs, references or manifests) into their appropriate /raft subdirectory. (1) Projects are initiated using the raft init-project command, which creates and populates a project-specific directory. (2–3) Users then load required metadata (e.g. sample manifests or clinical data) and references (e.g. alignment references) into the project using the raft load-metadata or raft load-reference commands, respectively. (4) Modules consisting of tool-specific and topical workflows are cloned from a collection of remote repositories into the project using raft load-module. (5) Specific processes and workflows from previously loaded modules are added to the analysis (main.nf) through raft add-step. Users can then modify main.nf with their desired parameters and execute the workflow using raft run-workflow. (6) Additionally, RAFT allows an iterative approach where results from RAFT can be analyzed and modified through RStudio and re-run through Nextflow.

Conclusions: The RAFT platform shows promising capabilities to support rapid and reproducible research within the field of immuno-oncology. Several features remain in development and testing, such as incorporation of additional immunogenomics feature modules for variant/fusion detection and HLA/peptide binding affinity estimation. Other functionality in development will enable collaborators to use remote Git repository hosting (e.g. GitHub or GitLab) to jointly and iteratively modify an analysis.
32

Gibson, Rabbi James A. "Mala Gitlin Betensky." Art Therapy 16, no. 3 (January 1999): 160–62. http://dx.doi.org/10.1080/07421656.1999.10129657.

33

Grier, David Alan. "The GitHub Effect." Computer 48, no. 5 (May 2015): 116. http://dx.doi.org/10.1109/mc.2015.146.

34

AYDOĞAN, Murat, and Rasim Erol DEMİRBATIR. "Gitar Öğrencilerinin Lisans Öncesi Gitar Eğitimleri ile Lisans Gitar Eğitimlerinin Karşılaştırılarak İncelenmesi." Art-e Sanat Dergisi 13, no. 25 (June 30, 2020): 276–300. http://dx.doi.org/10.21602/sduarte.696918.

35

Czekalski, Piotr, and Paweł Micek. "Semantic Web Approach to the GitHub Database Processing." Lecture Notes on Software Engineering 3, no. 4 (2015): 263–66. http://dx.doi.org/10.7763/lnse.2015.v3.201.

36

Jarczyk, Oskar, Szymon Jaroszewicz, Adam Wierzbicki, Kamil Pawlak, and Michal Jankowski-Lorek. "Surgical teams on GitHub: Modeling performance of GitHub project development processes." Information and Software Technology 100 (August 2018): 32–46. http://dx.doi.org/10.1016/j.infsof.2018.03.010.

37

Lam, Lionel, Thomas Cochrane, Vijay Rajagopal, Katie Davey, and Sam John. "Enhancing student learning through trans-disciplinary project-based assessment in bioengineering." Pacific Journal of Technology Enhanced Learning 3, no. 1 (February 16, 2021): 4–5. http://dx.doi.org/10.24135/pjtel.v3i1.80.

Abstract:
The Bioengineering Systems major offered at the University of Melbourne aims to enable students to rigorously integrate mathematics and modelling concepts with the fundamental sciences of biology, physics, and chemistry in order to solve biomedical engineering problems. This requires mastery of core concepts in engineering design, programming, mechanics, and electrical circuits. Historically, these concepts have been sequestered into separate subjects, with minimal cross-curricular references. This has resulted in the compartmentalisation of these concepts, with students often failing to appreciate that these seemingly disparate ideas can be synergistically combined to engineer larger, more capable systems. Building the capability of students to integrate these trans-disciplinary concepts is a unique aspect of the major that seeks to prepare students to solve real-world problems in the digital age (Burnett, 2011). We previously implemented trans-disciplinary design in the second-year subject Biomechanical Physics and Computation by integrating the teaching of mechanics and programming (typically covered in separate subjects in standard engineering degrees). This integration was explored largely through assessment redesign that focuses upon authentic learning (Bozalek et al., 2014). In these assessments, students have to model real-world mechanical systems using programming, for example, the construction of an animated physics-based model for a bicep curl. Here, an understanding of either the mechanics or programming component is insufficient to properly complete these assessments – students necessarily have to master both in order to perform well. Student feedback surveys have indicated that student learning has benefited from this redesign, as they have helped put programming concepts in a real-world context by demonstrating their utility in solving complex physics problems. Quantitatively, trans-disciplinary design has contributed to improvements in the following survey scores from 2017 (pre-redesign) to 2019: “I found the assessment tasks useful in guiding my study”: 3.85 to 4.43, “I learnt new ideas, approaches, and/or skills”: 3.88 to 4.32, “I learnt to apply knowledge to practice”: 3.63 to 4.13 (averages, maximum: 5). To further model trans-disciplinary design, we have established a collaborative curriculum design team (Laurillard, 2012) to develop a coordinated set of learning activities and assessments centred around the design, construction, and control of a bionic limb. Using design-based research (McKenney & Reeves, 2019), our team will model a design-based research approach within the curriculum over a two-year project timeline. By integrating these learning activities across four core subjects in the Bioengineering Systems major, students will be involved in an authentic learning project that integrates the concepts taught in the context of a larger system. The project involves hands-on design and fabrication of a bionic limb facilitated by a learner-centric ecology of resources (Luckin, 2008), including an ePortfolio consisting of Jupyter Notebook, GitLab, MS Teams and Adobe Spark. The intended learning outcomes are to enhance students’ capacity to integrate trans-disciplinary knowledge by providing continuity in assessments and learning objectives across our curriculum. 
The presentation will outline the methodology behind the collaborative trans-disciplinary curriculum design project and will also explore how the team is navigating the impact of COVID-19 on a traditionally lab-based project in a hybrid mode.
38

Preethi, B. Meena. "An Overview on GITHUB." International Journal for Research in Applied Science and Engineering Technology 7, no. 1 (January 31, 2019): 132–34. http://dx.doi.org/10.22214/ijraset.2019.1023.

39

Katz, Elihu. "Inside Prime Time. Todd Gitlin." American Journal of Sociology 90, no. 6 (May 1985): 1371–74. http://dx.doi.org/10.1086/228230.

40

Cohen, Fred R., Donald M. Davis, Paul G. Goerss, and Mark E. Mahowald. "Integral Brown-Gitler spectra." Proceedings of the American Mathematical Society 103, no. 4 (April 1, 1988): 1299. http://dx.doi.org/10.1090/s0002-9939-1988-0955026-0.

41

Vaccaro, Jerome V. "Introduction to Gitlin article." Community Mental Health Journal 28, no. 4 (August 1992): 353. http://dx.doi.org/10.1007/bf00755801.

42

Clem, Timothy, and Patrick Thomson. "Static Analysis at GitHub." Queue 19, no. 4 (August 31, 2021): 42–67. http://dx.doi.org/10.1145/3487019.3487022.

Abstract:
The Semantic Code team at GitHub builds and operates a suite of technologies that power symbolic code navigation on github.com. We learned that scale is about adoption, user behavior, incremental improvement, and utility. Static analysis in particular is difficult to scale with respect to human behavior; we often think of complex analysis tools working to find potentially problematic patterns in code and then trying to convince the humans to fix them. Our approach took a different tack: use basic analysis techniques to quickly put information that augments our ability to understand programs in front of everyone reading code on GitHub with zero configuration required and almost immediate availability after code changes.
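To give a flavour of the "basic analysis techniques" the abstract refers to, here is a minimal definition extractor for Python source, of the kind a navigation UI could use to jump from a name to its definition. GitHub's actual implementation is language-agnostic and far more sophisticated.

    import ast

    def definitions(source, filename="<memory>"):
        """Return (name, line) pairs for function and class definitions."""
        tree = ast.parse(source, filename=filename)
        defs = []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef)):
                defs.append((node.name, node.lineno))
        return defs

    code = "class Repo:\n    def clone(self):\n        pass\n"
    print(definitions(code))   # [('Repo', 1), ('clone', 2)]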
43

Dokić, Jelena. "UPOTREBA GITHAB AKCIJA ZA AUTOMATIZACIJU IZRADE I OCENJIVANJA STUDENTSKIH ZADATAKA." Zbornik radova Fakulteta tehničkih nauka u Novom Sadu 35, no. 11 (November 5, 2020): 2026–29. http://dx.doi.org/10.24867/10be43dokic.

Abstract:
In recent years, many tools for automating the software development process have appeared on the market. This paper describes one such tool, GitHub Actions, along with the advantages and disadvantages of its use. The description is given through an example of developing additional functionality in an application used to automate the creation and grading of student assignments.
44

Rao, Dinesh, Shishir Dubey, Mohan Kumar J, Deepak Rao, and Balaji B. "Descriptive and distribution analysis of GitHub repository data." International Journal of Engineering & Technology 7, no. 3 (June 26, 2018): 1193. http://dx.doi.org/10.14419/ijet.v7i3.12489.

Abstract:
The usage of GitHub by developers is increasing, and organizations are also using GitHub for their project development. Because such a huge set of users is involved, researchers have been led to analyze GitHub data. GitHub provides an API (application programming interface) for collecting data related to its repositories. In our work, the collected data is extensively queried and visualized. In this paper, a descriptive analysis of GitHub data is presented. The results give considerable insight into GitHub usage.
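A sketch of the collection step described here, pulling repository metadata from the GitHub REST API and summarizing one field. The repository names are examples, and unauthenticated requests are rate-limited.

    import requests

    repos = ["torvalds/linux", "python/cpython"]   # example repositories
    stars = {}
    for full_name in repos:
        resp = requests.get(f"https://api.github.com/repos/{full_name}",
                            timeout=30)
        resp.raise_for_status()
        stars[full_name] = resp.json().get("stargazers_count", 0)

    print(stars)
    print("mean stars:", sum(stars.values()) / len(stars))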
45

Treude, Christoph, Larissa Leite, and Maurício Aniche. "Unusual events in GitHub repositories." Journal of Systems and Software 142 (August 2018): 237–47. http://dx.doi.org/10.1016/j.jss.2018.04.063.

46

McDONALD, NORA, KELLY BLINCOE, EVA PETAKOVIC, and SEAN GOGGINS. "MODELING DISTRIBUTED COLLABORATION ON GITHUB." Advances in Complex Systems 17, no. 07n08 (December 2014): 1450024. http://dx.doi.org/10.1142/s0219525914500246.

Abstract:
In this paper, we apply concepts from Distributed Leadership, a theory suggesting that leadership is shared among members of an organization, to frame models of contribution that we uncover in five relatively successful open source software (OSS) projects hosted on GitHub. In this qualitative, comparative case study, we show how these projects make use of GitHub features such as pull requests (PRs). We find that projects in which member PRs are more frequently merged with the codebase experience more sustained participation. We also find that projects with higher success rates among contributors and higher contributor retention tend to have more distributed (non-centralized) practices for reviewing and processing PRs. The relationships between organizational form and GitHub practices are enabled and made visible as a result of GitHub's novel interface. Our results demonstrate specific dimensions along which these projects differ and explicate a framework that warrants testing in future studies of OSS, particularly GitHub.
47

GITLIN, MICHAEL, KEITH NUECHTERLEIN, KENNETH L. SUBOTNIK, and JOSEPH VENTURA. "Dr. Gitlin and Colleagues Reply." American Journal of Psychiatry 159, no. 8 (August 2002): 1442. http://dx.doi.org/10.1176/appi.ajp.159.8.1442.

48

Goyal, Raman, Gabriel Ferreira, Christian Kästner, and James Herbsleb. "Identifying unusual commits on GitHub." Journal of Software: Evolution and Process 30, no. 1 (September 12, 2017): e1893. http://dx.doi.org/10.1002/smr.1893.

49

González, Jesús, and Ernesto Lupercio. "Samuel Gitler and his work." Boletín de la Sociedad Matemática Mexicana 21, no. 1 (February 28, 2015): 3–8. http://dx.doi.org/10.1007/s40590-015-0055-9.

50

Lupercio, Ernesto, and Elias Micha. "Samuel Gitler and his influence." Boletín de la Sociedad Matemática Mexicana 23, no. 1 (March 11, 2017): 1–4. http://dx.doi.org/10.1007/s40590-017-0163-9.
