
Dissertations / Theses on the topic 'Repository software'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Repository software.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Wang, Wanjie. "A repository of software components." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0007/MQ41647.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Liao, Gang. "Information repository design for software evolution." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ34039.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Hui. "Software Defects Classification Prediction Based On Mining Software Repository." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-216554.

Full text
Abstract:
An important goal during the software development cycle is to find and fix existing defects as early as possible. This has much to do with software defect prediction and management. Nowadays, many big software development companies have their own development repository, which typically includes a version control system and a bug tracking system. This has proved useful for software defect prediction. Since the 1990s, researchers have been mining software repositories to get a deeper understanding of the data, and in the past few years they have come up with several software defect prediction models. These prediction models fall into two basic categories. One category predicts how many defects still exist according to the defect data already captured in the earlier stages of the software life-cycle. The other category predicts how many defects there will be in a newer version of the software according to the defect data of the earlier version. The complexities of software development raise many issues related to software defects. We have to consider these issues as much as possible to get precise prediction results, which makes the modeling more complex. This thesis presents the current state of research on software defect classification prediction and the key techniques in this area, including software metrics, classifiers, data pre-processing and the evaluation of prediction results. We then propose a way to predict software defect classification based on mining a software repository. A way is described to collect all the defects arising during the development of the software from the Eclipse version control system and to map them to the defect information contained in the software defect tracking system, obtaining statistical information on software defects. The Eclipse metrics plug-in is then used to get the software metrics of the files and packages that contain defects.
After analyzing and preprocessing the dataset, the tool R is used to build prediction models on the training dataset, in order to predict software defect classification at different levels on the testing dataset, evaluate the performance of the models and compare different models' performance.
APA, Harvard, Vancouver, ISO, and other styles
4

Danish, Muhammad Rafique, and Sajjad Ali Khan. "Component Repository Browser." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-7707.

Full text
Abstract:

The main goal of this thesis is to investigate efficient mechanisms for searching and retrieving software components across different remote repositories, and to implement a supporting prototype called “Component Repository Browser” using plug-in based Eclipse technology for the PROGRESS-IDE. The prototype enables users to search ProCom components and to import the desired components from a remote repository server over different protocols such as HTTP, HTTPS, and/or SVN. Several component searching mechanisms were studied and examined, such as keyword search, facet-based search, folksonomy classification, and signature matching, from which we selected keyword search along with the facet-based searching technique to help component searchers efficiently find the desired components in a remote repository.

APA, Harvard, Vancouver, ISO, and other styles
5

Tou, Chi Pio. "Pluggable repository for Internet-based software engineering environment." Thesis, University of Macau, 1999. http://umaclib3.umac.mo/record=b1447767.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Arafat, Omar. "Metodologia per semplificare lo studio dei repository software." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Find full text
Abstract:
The objective of this thesis is to illustrate a technology aimed at simplifying the analysis and retrieval of information useful for the study of large data sets. We investigate the advantages and difficulties that arise from the study of software repositories. The tools covered are Boa and FreeMarker. Boa is a project developed at the Iowa State University of Science and Technology, presented at ICSE-13, created with the aim of simplifying data mining on software repositories of various kinds. Boa is the subject of the investigation in the second chapter, where its domain-specific language and supporting infrastructure are analyzed. It is a case study of particular interest because it was the subject of the "mining challenge" at MSR 2016. FreeMarker is then briefly illustrated, along with how it can be used to simplify the presentation and reuse of study data. It is a template engine, developed by the Apache Foundation, that makes it possible to automate the presentation of the results produced by data mining. At the end of the thesis, the code produced to integrate the technologies introduced above, through the Java class Test, is presented.
APA, Harvard, Vancouver, ISO, and other styles
7

BURÉGIO, Vanilson André de Arruda. "Specification, design and implementation of a reuse repository." Universidade Federal de Pernambuco, 2006. https://repositorio.ufpe.br/handle/123456789/2641.

Full text
Abstract:
The discipline of Software Reuse has grown in importance, becoming a strategic tool for companies that aim for increased productivity, lower costs and high product quality. However, before obtaining the benefits inherent to reuse, we need mechanisms capable of facilitating the storage, search, retrieval and management of reusable artifacts. The idea of reuse repositories fits into this context. A reuse repository can be understood as a base prepared for the storage and retrieval of components. It can also be seen as a great facilitator that supports software engineers and other users in the process of developing software for and with reuse. In the literature there are several works exploring reuse repositories, but their focus is almost always on component search and retrieval, and important aspects of reuse repositories are often not adequately explored, such as their use as a tool to help managers monitor and control reuse in an organization. On the other hand, some questions raised by companies that want to build a reuse repository remain poorly answered. Such questions usually include: What roles should a repository play in the reuse context? What are the main requirements of a reuse repository? What practical alternatives exist? How can a reuse repository be designed? Motivated by these questions, this dissertation presents the specification, design and implementation of a reuse repository based on the analysis of existing solutions and on practical experience building a reuse environment for software factories. Additionally, the results obtained, the problems encountered and future directions for research and development are discussed.
APA, Harvard, Vancouver, ISO, and other styles
8

Deka, Dipen. "THE ROLE OF OPEN SOURCE SOFTWARE IN BUILDING INSTITUTIONAL REPOSITORY." INFLIBNET, 2006. http://hdl.handle.net/10150/106417.

Full text
Abstract:
Advances in Information and Communication Technology (ICT) have created powerful methods for creating, storing, maintaining, accessing and preserving traditional printed documents in digital form. Publishers have taken full advantage of publishing the research output of academics, depriving institutions and their communities of that output. This paper explores the importance of the Institutional Repository (IR) and the role of Open Source Software (OSS) in building the institutional repository of any institution. Building an institutional repository is the most feasible solution for publishing and serving the community of an institution. Special software packages are needed to build an institutional repository, and the role of open source software in this regard is very important. Institutions that are not economically strong can take advantage of open source software to build their own institutional repositories and expose their knowledge stock to the world.
APA, Harvard, Vancouver, ISO, and other styles
9

Moura, Dionatan de Souza. "Software Profile RAS : estendendo a padronização do Reusable Asset Specification e construindo um repositório de ativos." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/87582.

Full text
Abstract:
Software reuse faces numerous managerial, technical and cultural barriers to its adoption, and the definition of the structure of reusable software assets is one of these technical barriers. To address this, the Reusable Asset Specification (RAS) is a de facto standard proposed by the OMG. A specification such as RAS defines and standardizes a reusable asset model, and it is the foundation for building and using an asset repository that supports software reuse. However, to be adopted in practice, RAS needs its gaps resolved through extension and the definition of complementary information. These gaps are detailed in this work. With these gaps resolved, RAS becomes effectively useful for standardizing the packaging of reusable assets and for guiding the structure of a software reuse repository. Some previous works have partially addressed this question, but they served very specific purposes, lacked a supporting tool, or had not been evaluated in a real context of (re)use. This work proposes the Software Profile RAS (SW-RAS), an extension of the component Profile of RAS, which proposes solutions for several of its gaps, including useful information and relevant artifacts pointed out in the literature, based on other reusable asset models, on other RAS extensions and on experience with the reuse process in software development. In particular, SW-RAS extends the categories of classification, solution, usage and related assets, whose details are described in the text. To evaluate the proposal through a case study, Lavoi was developed, a reusable asset repository based on SW-RAS, which was evaluated in a real reuse and software development environment of a large public IT company. A description of this evaluation process in a real context is also presented in this work.
The main contribution of this dissertation is the proposal, evaluation and consolidation of an extension of RAS that addresses several of its gaps and is supported by a free software tool.
APA, Harvard, Vancouver, ISO, and other styles
10

Kasianenko, Stanislav. "Predicting Software Defectiveness by Mining Software Repositories." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78729.

Full text
Abstract:
One of the important aims of the continuous software development process is to localize and remove all existing program bugs as fast as possible. This goal is highly related to software engineering and defectiveness estimation. Many big companies started to store source code in software repositories as the latter grew in popularity. These repositories usually include static source code as well as detailed data on defects in software units. This allows analyzing all the data without interrupting the programming process. The main problem with large, complex software is the impossibility of controlling everything manually, while the price of an error can be very high. This can result in developers missing defects at the testing stage and in increased maintenance costs. The general research goal is to find a way of predicting future software defectiveness with high precision. Reducing maintenance and development costs will help reduce time-to-market and increase software quality. To address the problem of estimating residual defects, an approach was found to predict the residual defectiveness of software by means of machine learning. As the primary machine learning algorithm, a regression decision tree was chosen as a simple and reliable solution. Data for this tree is extracted from a static source code repository and divided into two parts: software metrics and defect data. Software metrics are formed from the static code, and defect data is extracted from reported issues in the repository. In addition to already reported bugs, these are augmented with unreported bugs found in the "discussions" section of the repository and parsed by a natural language processor. Metrics were filtered to remove those not related to the defect data by applying a correlation algorithm. The remaining metrics were weighted to use the most correlated combination as a training set for the decision tree.
As a result, the built decision tree model allows forecasting defectiveness with an 89% chance for the particular product. The experiment was conducted on a Java project in a GitHub repository and predicted the number of possible bugs in a single file (Java class). The experiment resulted in a designed method for predicting possible defectiveness from the static code of a single big (more than 1000 files) software version.
APA, Harvard, Vancouver, ISO, and other styles
11

Brewer, John VIII. "The Effects of Open Source License Choice on Software Reuse." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/32778.

Full text
Abstract:
Previous research shows that software reuse can have a positive impact on software development economics, and that the adoption of a specific open source license can influence how a software product is received by users and programmers. This study attempts to bridge these two research areas by examining how the adoption of an open source license affects software reuse. Two reuse metrics were applied to 9,570 software packages contained in the Fedora Linux software repository. Each package was evaluated to determine how many external components it reuses, as well as how many times it is reused by other software packages. This data was divided into subsets according to license type and software category. The study found that, in general, (1) software released under a restrictive license reuses more external components than software released under a permissive license, and (2) software released under a permissive license is more likely to be reused than software released under a restrictive license. However, there are exceptions to these conclusions, as the effect of license choice on reuse varies by software category.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
12

Rossi, Ana Claudia. "Representação do componente de software na FARCSoft: ferramenta de apoio à reutilização de componentes de software." Universidade de São Paulo, 2004. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-03062004-101200/.

Full text
Abstract:
Nowadays, organizations increasingly depend on information systems to carry out their business. Thus, one of the concerns in the software development area is the need to obtain systems faster and faster, attending to current needs and sufficiently flexible to accompany changes in technology and business practices. Software component reuse has been considered one of the ways to reduce costs and development time and to increase productivity and software quality. The implementation of component reuse is based on three main elements: a development process oriented towards reuse, an adequate tool and a project culture. The tool, in turn, must be able to store the components and to supply resources for efficient retrieval. This study aims to define a component representation in a repository that allows for the storage of different kinds of software components. For this purpose, a Software Component Reuse Support Tool, called FARCSoft, was specified to support the reuse of software components. This tool provides resources to store, manage, search and retrieve the components of its repository. The representation capacity was evaluated by means of a set of components of different types, sizes and technologies, which were modeled and catalogued.
APA, Harvard, Vancouver, ISO, and other styles
13

Kriukov, Illia. "Multi-version software quality analysis through mining software repositories." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-74424.

Full text
Abstract:
The main objective of this thesis is to identify how software repository features influence software quality during software evolution. To do that, techniques from the mining software repositories field were used. This field analyzes the rich data in software repositories to extract interesting and actionable information about software systems, projects and software engineering. The ability to measure code quality and analyze the impact of software repository features on software quality allows us to better understand project history, the project's quality state and development processes, and to conduct future project analysis. Existing work in the area of software quality describes software quality analysis without a connection to software repository features, and thus loses important information that can be used for preventing bugs, decision-making and optimizing development processes. To conduct the analysis, a dedicated tool was developed, covering quality measurement and repository feature extraction. During the research, a general procedure for software quality analysis was defined, described and applied in practice. It was found that there is no single most influential repository feature; a correlation between software quality and software repository features exists, but it is too small to have a real influence.
APA, Harvard, Vancouver, ISO, and other styles
14

Carvalho, Joao Alvaro Brandao Soares de. "BMKB (Business Meta Knowledge Base) : a repository of models for assisting the management of organizational information systems." Thesis, University of Manchester, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.316493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Tansley, Robert, Mick Bass, and MacKenzie Smith. "DSpace as an Open Archival Information System: Current Status and Future Directions." Springer-Verlag GmbH, 2003. http://hdl.handle.net/1721.1/29464.

Full text
Abstract:
As more and more output from research institutions is born digital, a means for capturing and preserving the results of this investment is required. To begin to understand and address the problems surrounding this task, Hewlett-Packard Laboratories collaborated with MIT Libraries over two years to develop DSpace, an open source institutional repository software system. This paper describes DSpace in the context of the Open Archival Information System (OAIS) reference model. Particular attention is given to the preservation aspects of DSpace, and the current status of the DSpace system with respect to addressing these aspects. The reasons for various design decisions and trade-offs that were necessary to develop the system in a timely manner are given, and directions for future development are explored. While DSpace is not yet a complete solution to the problem of preserving digital research output, it is a production-capable system, represents a significant step forward, and is an excellent platform for future research and development.
APA, Harvard, Vancouver, ISO, and other styles
16

Tansley, Robert, MacKenzie Smith, and Julie Harford Walker. "The DSpace Open Source Digital Asset Management System: Challenges and Opportunities." Springer-Verlag GmbH, 2005. http://hdl.handle.net/1721.1/29462.

Full text
Abstract:
Last year at the ECDL 2004 conference, we reported some initial progress and experiences developing DSpace as an open source community-driven project [8], particularly as seen from an institutional manager’s viewpoint. We also described some challenges and issues. This paper describes the progress in addressing some of those issues, and developments in the DSpace open source community. We go into detail about the processes and infrastructure we have developed around the DSpace code base, in the hope that this will be useful to other projects and organisations exploring the possibilities of becoming involved in or transitioning to open source development of digital library software. Some new challenges the DSpace community faces, particularly in the area of addressing required system architecture changes, are introduced. We also describe some exciting new possibilities that open source development brings to our community.
APA, Harvard, Vancouver, ISO, and other styles
17

Chao, Sam. "The design and implementation of object management functions for web-based repository." Thesis, University of Macau, 1999. http://umaclib3.umac.mo/record=b1636967.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Bala, Abdalla. "Impact analysis of a multiple imputation technique for handling missing value in the ISBSG repository of software projects." Mémoire, École de technologie supérieure, 2013. http://espace.etsmtl.ca/1236/1/BALA_Abdalla.pdf.

Full text
Abstract:
Until the early 2000s, most empirical studies for building software project estimation models were carried out with very small samples (fewer than 20 projects), while only a few studies used larger samples (between 60 and 90 projects). With the establishment of a software project repository by the International Software Benchmarking Standards Group (ISBSG), a larger data set is now available for building estimation models: the 2013 release 12 of the ISBSG repository contains more than 6,000 projects, which constitutes a more adequate basis for statistical studies. However, in the ISBSG repository a large number of values are missing for a significant number of variables, which makes it rather difficult to use for research projects. To improve the development of estimation models, the goal of this research project is to tackle the new problems of access to larger databases in software engineering by using the multiple imputation technique to account for missing values and outliers in the analyses.
APA, Harvard, Vancouver, ISO, and other styles
19

Kamenieva, Iryna. "Research Ontology Data Models for Data and Metadata Exchange Repository." Thesis, Växjö University, School of Mathematics and Systems Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-6351.

Full text
Abstract:

For research in the fields of data mining and machine learning, a necessary condition is the availability of a variety of input data sets. Researchers now create databases of such sets; examples include the UCI Machine Learning Repository, the Data Envelopment Analysis Dataset Repository, the XMLData Repository and the Frequent Itemset Mining Dataset Repository. Along with these statistical repositories, a whole range of systems, from simple file stores to specialized repositories, can be used by researchers when solving applied tasks and investigating their own algorithms and scientific problems. At first glance, the only difficulty for the user would seem to be finding and understanding the structure of such scattered information stores. However, a detailed study of such repositories reveals deeper problems in the use of the data: in particular, a complete mismatch and rigidity of the data file structures with respect to SDMX (the Statistical Data and Metadata Exchange standard and structure used by many European organizations), the impossibility of preparing the data in advance for a concrete applied task, and the lack of a history of data usage for particular scientific and applied tasks.

There are now many data mining (DM) methods, as well as large quantities of data stored in various repositories. The repositories, however, contain no DM methods, and the methods are not linked to application areas. An essential problem is linking the subject domain (problem domain), the DM methods and the data sets appropriate for each method. Therefore, this work considers the problem of building ontological models of DM methods, describing the interaction of methods with the corresponding data from repositories, and providing intelligent agents that allow the statistical repository user to choose the appropriate method and the data corresponding to the task being solved. A system structure is proposed, and an intelligent search agent over the ontological model of DM methods, taking the user's personal queries into account, is implemented.

For the implementation of the intelligent data and metadata exchange repository, an agent-oriented approach was selected. The model uses a service-oriented architecture. The cross-platform programming language Java, the multi-agent platform Jadex and the database server Oracle Spatial 10g are used, along with Protégé version 3.4 as the development environment for the ontological models.

APA, Harvard, Vancouver, ISO, and other styles
20

Reschke, Edith, and Uwe Konrad. "Integriertes Management und Publikation von wissenschaftlichen Artikel, Software und Forschungsdaten am Helmholtz-Zentrum Dresden-Rossendorf (HZDR)." Helmholtz-Zentrum Dresden-Rossendorf (HZDR), 2019. https://slub.qucosa.de/id/qucosa%3A70622.

Full text
Abstract:
With the aim of supporting the publication of articles, research data and scientific software in accordance with the FAIR principles, an integrated publication management system was established at the HZDR. Data and software publications in particular require the development of needs-based organizational and technical structures to complement the publication management services that already work very well. In cooperation with HZDR scientists and international partners in selected projects, the need for support in research data management was analyzed. On this basis, an integrated system of infrastructures and services was developed and made available step by step. A data policy, in force since May 2018, defines the framework conditions and rules both for scientific staff and for external measurement guests. The talk discusses experiences with integrated publication management for articles, research data and research software, and from these the next tasks and goals are developed.
APA, Harvard, Vancouver, ISO, and other styles
21

Otlu, Suleyman Onur. "A New Technique: Replace Algorithm To Retrieve A Version From A Repository Instead Of Delta Application." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/12604951/index.pdf.

Full text
Abstract:
The thesis introduces a new technique, an alternative to applying deltas sequentially to a literal file to retrieve a version from a repository. To the best of my knowledge, this is the first investigation of delta combination for the copy/insert instruction type with extensive experimental results and conclusions. The thesis shows that delta combination eliminates the unnecessary I/O on intermediate versions that sequential delta application incurs, and therefore reduces I/O time. In the classical approach, deltas are applied to the literal sequentially to generate the required version. The replace algorithm instead combines the delta files that would be applied sequentially into a single combined delta and applies it to the literal to generate the required version. Apply runs in O(size(D)) time, where D is the destination file and size(D) is its size. To retrieve the nth version in a chain whose 1st version is the literal, it requires n-1 applies. The replace algorithm runs in O(i + c * log2 n) time, where i is the total length of all inserts, c is the total length of all copies in the destination delta, and n is the number of instructions in the source delta. To retrieve the same nth version, it requires n-2 replaces and one apply.
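The copy/insert delta application that the abstract builds on can be sketched as below. This is a minimal illustration only: the instruction encoding is assumed for the example and is not the thesis's actual delta format.

```python
# Minimal sketch of copy/insert delta application, as used in delta-based
# version storage. The instruction tuples here are illustrative only; they
# do not reproduce the exact encoding studied in the thesis.

def apply_delta(source: str, delta: list) -> str:
    """Rebuild a version from a source string and a list of instructions.

    Each instruction is either ("copy", offset, length) -- copy a span of
    the source -- or ("insert", text) -- insert literal new text.
    """
    out = []
    for instr in delta:
        if instr[0] == "copy":
            _, offset, length = instr
            out.append(source[offset:offset + length])
        else:  # "insert"
            out.append(instr[1])
    return "".join(out)

# Version 1 (the literal) and two deltas forming a chain v1 -> v2 -> v3.
v1 = "the quick brown fox"
d12 = [("copy", 0, 10), ("insert", "red fox")]
d23 = [("copy", 0, 4), ("insert", "lazy "), ("copy", 4, 13)]

v2 = apply_delta(v1, d12)
v3 = apply_delta(v2, d23)
print(v2)  # the quick red fox
print(v3)  # the lazy quick red fox
```

Retrieving v3 the classical way needs two sequential applies (and materializes v2 along the way); the replace algorithm described above would instead combine d12 and d23 into one delta against v1 and apply it once.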
APA, Harvard, Vancouver, ISO, and other styles
22

Ferreira, João Vinícius Ferraz Dias. "Sistema EducaTICs : software online para auxiliar docentes da educação básica no contexto das tecnologias digitais." Universidade Federal de Mato Grosso, 2015. http://ri.ufmt.br/handle/1/266.

Full text
Abstract:
Made available in DSpace on 2017-04-27T14:03:05Z (GMT). No. of bitstreams: 1 DISS_2015_João Vinícius Ferraz Dias Ferreira.pdf: 1627653 bytes, checksum: 6438976e86f3ae8b0c7859a8dc5b8aac (MD5) Previous issue date: 2015-02-27
Este trabalho apresenta um estudo sobre um produto educacional – EducaTICs, um software de repositório de dados online que propõe uma otimização do trabalho do professor ao facilitar-lhe a busca por aplicativos disponíveis no mercado e que lhe sejam úteis em sua prática diária, como organização pessoal e de material didático. Com o auxílio do formulário eletrônico de pesquisa oferecido pelo aplicativo Google Drive, analisamos a afinidade entre o educador e as tecnologias de informação disponibilizadas atualmente com o objetivo de confrontar tais dados com a imensa variedade de opções apresentadas. Além de caracterizar os sujeitos da pesquisa, as informações obtidas sobre espaço virtual, instrumento computacional de auxílio ao lecionar, recursos para pesquisas científicas, software para organização pessoal e didática e aplicativo baseado no conceito de computação nas nuvens, permitiram o desenvolvimento do instrumento pedagógico EducaTICs. Sessenta e cinco professores responderam à pesquisa, destes, 29 do sexo masculino e 36 do sexo feminino, com idades entre 21 e 63 anos; sendo 63% com tempo de experiência profissional na docência inferior a dez anos. Do total de participantes da pesquisa, 71% atuam na rede pública de ensino e 17% no ensino público e privado. Cinquenta e sete por cento já participaram de formação continuada na área de TIC. A maioria relatou que acessa com maior frequência a página do Google. As redes sociais são usadas por 71% dos respondentes para se relacionarem com amigos. Entre os espaços virtuais mais utilizados está o Blog (32%) seguido pelo site pessoal (12%). O mais utilizado dentre os instrumentos computacionais como auxílio ao professor, para lecionar, são os softwares de apresentação de slides (94%); entre os sujeitos dessa pesquisa, 69% consideram que não utilizam qualquer recurso tecnológico como apoio à organização pessoal. 
Dos professores que utilizam ferramentas baseadas no conceito de computação nas nuvens, 57% são usuários do DropBox. Os resultados obtidos são representativos para compreensão de que os professores utilizam pouca diversidade nas ferramentas de TICs que podem servir de apoio à sua atividade didática, o que corrobora com o objetivo da ferramenta proposta.
This paper presents a study of an educational product, EducaTICs, an online data repository software that aims to optimize the teacher's work by facilitating the search for applications available in the market that are useful in daily practice, such as personal organization and the organization of teaching materials. Using the electronic survey form offered by the Google Drive application, we analyze the affinity between educators and currently available information technologies in order to confront these data with the immense variety of options presented. In addition to characterizing the subjects, the information obtained on virtual spaces, computational instruments to aid teaching, resources for scientific research, software for personal and didactic organization, and applications based on the concept of cloud computing enabled the development of the pedagogical tool EducaTICs. Sixty-five teachers responded to the survey: 29 males and 36 females, aged between 21 and 63 years, 63% of them with less than ten years of professional teaching experience. Of the participants, 71% work in public education and 17% in both public and private education. Fifty-seven percent have participated in continuing education in Information and Communication Technologies. Most reported that they access the Google page most frequently. Social networks are used by 71% of respondents to engage with friends. Among the most used virtual spaces is the blog (32%), followed by the personal website (12%). The computational tools most widely used to aid the teacher in teaching are slideshow software (94%); among the subjects of this research, 69% consider that they do not use any technological resource in support of personal organization. Of the teachers using tools based on the concept of cloud computing, 57% are DropBox users. The results are representative for understanding that teachers use little diversity in the ICT tools that can support their teaching activity, which agrees with the objective of the proposed tool.
APA, Harvard, Vancouver, ISO, and other styles
23

Faria, Henrique Rocha de. "Um modelo de processo de apoio ao desenvolvimento de software baseado em componentes, orientado a qualidade, e centrado em um repositório." Universidade de São Paulo, 2005. http://www.teses.usp.br/teses/disponiveis/3/3141/tde-11012006-115522/.

Full text
Abstract:
A Engenharia de Software Baseada em Componentes (ESBC) envolve os processos de desenvolvimento de software a partir de partes embutidas prontas, a fim de se obter produtividade, reduzindo-se custos e tempo de lançamento no mercado, garantindo (e melhorando) a qualidade intrínseca de produtos de software, bem como flexibilidade de implementação, manutenção e integração de sistemas. O ciclo de vida de um componente de software, projetado para uma determinada arquitetura, para ser reutilizado e reciclado dentro de uma infra-estrutura de componentes, e para satisfazer atributos de qualidade, dependerá de um ambiente que permita que seu código evolua de maneira controlada; que suas interfaces sejam publicadas através de documentos; e que seus artefatos estejam sempre acessíveis por partes interessadas, como desenvolvedores, projetistas e arquitetos de software, gerentes de projeto, usuários etc. Isto sugere a organização de um processo que apóie a reutilização de componentes através de um repositório comum, justificando esforços de se projetar, implementar, testar e instalar estes componentes em diferentes soluções. Este trabalho tem a intenção de definir e descrever, através da linguagem e dos elementos de um meta-modelo, e através de uma proposta de implementação de um repositório de componentes, um modelo de processo alinhado a um subconjunto de requisitos estabelecidos pelos padrões ISO/IEC 12207 e ISO/IEC 9126, com o propósito de suporte de componentes a processos de desenvolvimento de software.
Component-Based Software Engineering (CBSE) involves developing software from ready-made built-in parts in order to achieve productivity, reducing costs and time-to-market, assuring (and improving) the intrinsic quality of software products, as well as flexibility of implementation, maintenance, and systems integration. The life cycle of a software component designed for a given architecture, to be reused and recycled within a component infrastructure, and to satisfy quality attributes, will depend on an environment that allows its code to evolve in a controlled manner, its interfaces to be published through documents, and its artifacts to be always accessible to interested parties such as developers, software designers and architects, project managers, and users. This suggests organizing a process that supports the reuse of components through a common repository, justifying the effort to design, implement, test, and install them in different solutions. This work defines and describes, through a meta-model language and elements and through a component repository implementation proposal, a process model aligned with a subset of requirements established by the ISO/IEC 12207 and ISO/IEC 9126 standards, with the purpose of supporting component-based software development processes.
APA, Harvard, Vancouver, ISO, and other styles
24

MOURA, Irineu Martins de Lima. "Mining energy – aware commits: exploring changes performed by open – source developers to impact the energy consumption of software systems." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/17809.

Full text
Abstract:
Made available in DSpace on 2016-09-06T17:39:17Z (GMT). No. of bitstreams: 2 license_rdf: 1232 bytes, checksum: 66e71c371cc565284e70f40736c94386 (MD5) DissertacaoDeMestrado-IrineuMoura-imlm2.pdf: 1240260 bytes, checksum: 4bbaf8839fa3d5be7fca586e1f290f68 (MD5) Previous issue date: 2015-08-24
Energy consumption has been gaining traction as yet another major concern that mainstream software developers must be aware of. It used to be mainly the focus of hardware designers and low level software developers, e.g., device driver developers. Nowadays, however, mostly due to the ubiquity of battery-powered devices, any developer in the software stack must be prepared to deal with this concern. Thus, to be able to properly assist them and to provide guidance in future research it is crucial to understand how they have been handling this matter. This thesis aims to aid in this regard by exploring a set of software changes, i.e., commits, to obtain insights into actual solutions implemented by open source developers when dealing with energy consumption. We use as our main data source GITHUB, a source code hosting platform for collaborative development, and extract a sample of the available commits across several different projects. From this sample, we manually curate a set of energy-aware commits, that is, any commit that refers to a source code change where developers intentionally modify, or aim to modify, the energy consumption (or power dissipation) of a system or make it easier for other developers or end users to do so. We then apply a qualitative research method to extract recurring patterns of information and to group the commits that intend to save energy into categories. A small survey was also conducted to assess the quality of our analysis and to further expand our understanding of the changes. During our analysis we also cover different aspects of the commits. We observe that the majority of the changes (~47%) still target lower levels of the software stack, i.e., kernels, drivers and OS-related services, while application level changes encompass ~34% of them. 
We notice that developers may not always be certain of the energy-consumption impact of their changes before actually performing them: in our dataset we identify several instances (~12%) of commits where developers show signs of uncertainty about their change's effectiveness. We also highlight software quality attributes that may be favored over energy efficiency. Notably, we spot a few commits where developers performed a change that would negatively impact the energy consumption of the system in order to fix a bug. Finally, we draw attention to a specific group of changes which we call "energy-aware interfaces": they add tuning knobs that developers or end users can use to control the energy consumption of an underlying component.
O controle do consumo de energia tem ganhado cada vez mais atenção como outro tipo de interesse ao qual desenvolvedores de software devem estar atentos. Antes esse tipo de preocupação era principalmente o foco de designers de hardware e desenvolvedores de baixonível, como por exemplo, desenvolvedores de drivers de dispositivos. Entretanto, devido à ubiquidade de dispositivos dependentes de bateria, qualquer desenvolvedor deve estar preparado para enfrentar essa questão. Logo, entender como eles estão lidando com o consumo de energia é crucial para estarmos aptos a auxiliá-los e para prover uma direção adequada para pesquisas futuras. Com o intuito de ajudar nesse sentido, essa tese explora um conjunto de mudanças de software, isto é, commits, para entender melhor sobre os tipos de soluções que são implementadas de fato por desenvolvedores de código aberto quando os mesmos devem lidar com o consumo de energia. Nós utilizamos o GITHUB como nossa principal fonte de dados, uma plataforma de hospedagem de código fonte para o desenvolvimento colaborativo de projetos de software, e extraímos uma amostra dos commits disponíveis entre vários projetos diferentes. Dessa amostra, nós manualmente selecionamos um conjunto de commits "energy-aware", isto é, qualquer commit que se refere a uma modificação de código onde o desenvolvedor propositalmente modifica, ou intenciona modificar, o consumo de energia (ou a dissipação de potência) de um sistema ou torna mais fácil para que outros desenvolvedores ou usuários finais possam fazê-lo. Nós então aplicamos sobre esses commits um método de análise qualitativa para extrair padrões recorrentes de informação e para agrupar os commits que intencionam reduzir o consumo energético em categorias. Uma pequena pesquisa também foi realizada com os autores dos commits para avaliar a qualidade da nossa análise e para expandir nosso entendimento sobre as modificações. Nós também consideramos diferentes aspectos dos commits durante a análise. 
Observamos que a maioria das modificações (~47%) ainda se aplicam às mais baixas camadas de software, isto é, kernels e drivers, enquanto que mudanças a nível de aplicação compreendem ~34% do nosso conjunto de dados. Nós notamos que os desenvolvedores nem sempre estão seguros do impacto de suas modificações no consumo de energia antes de realizá-las, em nosso conjunto de dados identificamos várias instâncias de modificações (~12%) em que os desenvolvedores demonstram sinais de incerteza em relação à eficácia de suas mudanças. Também apontamos alguns dos possíveis atributos de qualidade de software que são favorecidos em detrimento do consumo de energia. Entre essas, destacamos alguns commits onde os desenvolvedores realizaram uma modificação que impactaria negativamente no consumo de energia com o intuito de consertar algum problema existente no software. Também achamos interessante ressaltar um grupo específico de modificações que chamamos de “interfaces energy-aware”. Elas adicionam controles no software em questão que possibilitam outros desenvolvedores ou usuários finais a ajustar o consumo de energia de algum componente subjacente.
APA, Harvard, Vancouver, ISO, and other styles
25

Kocabas, Fahri. "Knowledge Discovery In Microarray Data Of Bioinformatics." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615090/index.pdf.

Full text
Abstract:
This thesis analyzes major microarray repositories and presents a metadata framework both to address current issues and to support the main operations of knowledge discovery, sharing, integration, and exchange. The proposed framework is demonstrated in a case study on real data and can be used for other high-throughput repositories in the biomedical domain. Not only is the number of microarray experiments increasing, but the size and complexity of the results also rise in response to biomedical inquiries, and experiment results are most significant when examined in a batch and placed in a biological context. There have been standardization initiatives on content, object model, exchange format, and ontology; however, each has a proprietary information space, there are backlogs, and data cannot be exchanged among the repositories. A format and data management standard is needed at present. We introduce a metadata framework comprising metadata cards and semantic nets to make experiment results visible, understandable, and usable. They are encoded in standard syntax encoding schemes and represented in XML/RDF; they can be integrated with other metadata cards and semantic nets, queried, exchanged, and shared. We demonstrate the performance and potential benefits in a case study on a microarray repository. This study does not replace any existing repository product; rather, a metadata framework is required to manage such huge data. We show that with this metadata framework the backlogs can be reduced, and complex knowledge discovery queries and exchange of information become possible.
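A metadata card in the sense of the abstract (a machine-readable experiment description in a standard XML syntax, ready for querying and exchange) can be sketched as below. All element and attribute names here are invented for illustration; they do not reproduce the schema proposed in the thesis.

```python
# Toy metadata card for a microarray experiment, serialized as XML.
# Element and attribute names are invented for illustration only; the
# thesis defines its own metadata-card schema in XML/RDF.
import xml.etree.ElementTree as ET

def make_metadata_card(experiment_id: str, organism: str, platform: str) -> str:
    """Build a minimal experiment metadata card and return it as an XML string."""
    card = ET.Element("MetadataCard", attrib={"id": experiment_id})
    ET.SubElement(card, "Organism").text = organism
    ET.SubElement(card, "Platform").text = platform
    return ET.tostring(card, encoding="unicode")

xml_card = make_metadata_card("E-0001", "Homo sapiens", "Affymetrix U133")
print(xml_card)
```

Because the card is plain XML, it can be validated, merged with other cards, and queried with standard tooling, which is the exchange-and-integration property the framework relies on.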
APA, Harvard, Vancouver, ISO, and other styles
26

Costa, Daniel Alencar da. "Avaliação da contribuição de desenvolvedores para projetos de software usando mineração de repositórios de software e mineração de processos." Universidade Federal do Rio Grande do Norte, 2013. http://repositorio.ufrn.br:8080/jspui/handle/123456789/18082.

Full text
Abstract:
Made available in DSpace on 2014-12-17T15:48:07Z (GMT). No. of bitstreams: 1 DanielAC_DISSERT.pdf: 1379221 bytes, checksum: 4e8ab78d03e452eecd9c3eaa6906e4ee (MD5) Previous issue date: 2013-02-01
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Mining Software Repositories (MSR) is a research area that analyzes software repositories in order to derive information relevant to the research and practice of software engineering. The main goal of repository mining is to turn static information from repositories (e.g. the code repository or the change request system) into valuable information that supports decision making in software projects. Another research area, Process Mining (PM), aims to discover the characteristics of the underlying processes of business organizations, supporting process improvement and documentation. Recent works have performed several analyses with MSR and PM techniques: (i) to investigate the evolution of software projects; (ii) to understand the real underlying process of a project; and (iii) to create defect prediction models. However, few research works have focused on analyzing the contributions of software developers by means of MSR and PM techniques. In this context, this dissertation proposes two empirical studies assessing the contribution of software developers to an open-source and a commercial project using those techniques. The contributions of developers are assessed through three different perspectives: (i) buggy commits; (ii) the size of commits; and (iii) the most important bugs. For the open-source project 12,827 commits and 8,410 bugs were analyzed, while 4,663 commits and 1,898 bugs were analyzed for the commercial project. Our results indicate that, for the open-source project, the developers classified as core developers contributed more buggy commits (although they also contributed the majority of commits), more code to the project (commit size), and more important bugs solved, while for the commercial project the results could not indicate differences with statistical significance between developer groups.
Mineração de Repositórios de Software (MSR) é uma área que procura analisar repositórios de software em busca de informações relevantes para a pesquisa e para a prática na engenharia de software. As minerações buscam transformar informações estáticas de repositórios de software (sistemas de gerência de configuração e mudanças) em informações relevantes que auxiliam a tomada de decisão dentro do contexto de projetos de software. Por outro lado, a área de Mineração de Processos (MP) busca descobrir características dos processos que são utilizados em organizações para auxiliar na melhoria e documentação destes processos. Trabalhos recentes têm buscado utilizar as técnicas de MSR e de MP para realizar diversas análises na área de Engenharia de Software, tais como: (i) estudar a evolução dos projetos de software; (ii) entender o processo de software real utilizado em um determinado projeto; e (iii) criar modelos de predições de defeitos. Contudo, poucos destes trabalhos buscam utilizar as técnicas de MP e MSR com o objetivo de analisar a contribuição de desenvolvedores na implementação de sistemas de software. Esta dissertação de mestrado propõe a condução de estudos experimentais que buscam avaliar a contribuição de desenvolvedores de software para projetos, através da utilização das técnicas de MSR e MP. A contribuição dos desenvolvedores é avaliada sob três diferentes perspectivas: (i) commits defeituosos; (ii) tamanho dos commits; e (iii) resolução de bugs prioritários. Dois projetos de software (um open-source e outro privado) foram analisados sob estas três perspectivas. Para o projeto open-source, 12.827 commits e 8.410 bugs foram avaliados, enquanto que para o projeto privado, 4.663 commits e 1.898 bugs foram avaliados.
Os resultados obtidos indicam que para o projeto open-source os desenvolvedores classificados como desenvolvedores core são os que mais produzem commits defeituosos (embora também sejam os que mais produzem commits), são os que contribuem com commits de maior tamanho de código e também contribuem com mais bugs prioritários solucionados. Já para o projeto privado, os resultados não indicaram uma diferença estatisticamente significativa entre os grupos de desenvolvedores.
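One elementary MSR building block behind the developer-contribution perspectives above is counting commits per author from version-control history. The sketch below assumes the one-author-per-line output of `git log --pretty=%an`; grouping developers into core/peripheral and linking commits to bugs, as the dissertation does, is omitted.

```python
# Sketch of counting commits per developer from `git log --pretty=%an`
# output (one author name per line). This is only the raw tally; the
# dissertation's core/peripheral grouping and bug linkage are omitted.
from collections import Counter

def commits_per_author(git_log_authors: str) -> Counter:
    """Return a Counter mapping author name -> number of commits."""
    authors = [line.strip() for line in git_log_authors.splitlines() if line.strip()]
    return Counter(authors)

sample_log = """\
alice
bob
alice
alice
carol
"""
counts = commits_per_author(sample_log)
print(counts["alice"])  # 3
```

In practice the same tally, joined with bug-tracker data, yields the buggy-commit and commit-size perspectives per developer group.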
APA, Harvard, Vancouver, ISO, and other styles
27

Pratt, Landon James. "Cliff Walls: Threats to Validity in Empirical Studies of Open Source Forges." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3511.

Full text
Abstract:
Artifact-based research provides a mechanism whereby researchers may study the creation of software yet avoid many of the difficulties of direct observation and experimentation. Open source software forges are of great value to the software researcher, because they expose many of the artifacts of software development. However, many challenges affect the quality of artifact-based studies, especially those studies examining software evolution. This thesis addresses one of these threats: the presence of very large commits, which we refer to as "Cliff Walls." Cliff walls are a threat to studies of software evolution because they do not appear to represent incremental development. In this thesis we demonstrate the existence of cliff walls in open source software projects and discuss the threats they present. We also seek to identify key causes of these monolithic commits, and begin to explore ways that researchers can mitigate the threats of cliff walls.
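A simple operationalization of a "cliff wall" is a threshold on commit size: any commit whose lines changed dwarf incremental development is flagged. The threshold value below is an arbitrary illustration, not the criterion used in the thesis.

```python
# Flag "cliff wall" commits: commits so large that they are unlikely to
# represent incremental development. The 10,000-line threshold is an
# arbitrary illustration, not the criterion used in the thesis.

def find_cliff_walls(commits, threshold=10_000):
    """commits: iterable of (commit_id, lines_changed) pairs.
    Returns the ids of commits at or above the threshold."""
    return [cid for cid, lines in commits if lines >= threshold]

history = [("a1f3", 42), ("b2e9", 17_500), ("c7d0", 310), ("d4aa", 92_000)]
print(find_cliff_walls(history))  # ['b2e9', 'd4aa']
```

A study of software evolution would typically exclude or separately analyze the flagged commits, since treating them as ordinary development distorts evolution metrics.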
APA, Harvard, Vancouver, ISO, and other styles
28

Michaud, Heather M. "Detection of Named Branch Origin for Git Commits." University of Akron / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=akron1436528915.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Herb, Ulrich. "Chancen im OPUS: Automatisiert SWD-Schlagwörter produzieren." Universitätsbibliothek Chemnitz, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200901378.

Full text
Abstract:
The slides outline a project proposal submitted to the German Research Foundation (DFG) in 2008 and, in revised form, in 2009. The intention of the two applicants, the Institute of the Society for the Promotion of Applied Information Research (IAI, http://www.iai.uni-sb.de/iaide/index.htm) and the Saarland University and State Library (SULB, http://www.sulb.uni-saarland.de), was to integrate the AUTINDEX software developed at the IAI for semi-automatic subject indexing into open-access repositories. Since authors struggle to index their documents with terms from the German subject headings authority file (Schlagwortnormdatei, SWD), they were to be supported during document submission, in the spirit of the "easy submission" postulate. With the help of linguistically intelligent software, SWD subject headings for a submitted document were to be generated automatically and offered to the author, who would then decide which of the suggested headings to assign to the document. The typical document submission workflow requires the author to fill out a metadata form, ideally including a description with SWD subject headings. Since authors are not familiar with the SWD, they usually assign inexact, overly broad, or wrong headings, or ones that do not exist in the SWD. This necessitates laborious post-processing by the repository operator, who has expertise in using the SWD but cannot describe the document as precisely as the author could. For exact retrieval it would be preferable for the researchers themselves to perform precise subject indexing.
The prototypical, open, and reusable integration of software for the automatic assignment of SWD subject headings planned in the proposal, which has unfortunately since been rejected, would have considerably eased the publication and processing workflow on the one hand and improved metadata quality on the other.
APA, Harvard, Vancouver, ISO, and other styles
30

Aluc, Gunes. "Design And Implementation Of An Ontology Extraction Framework And A Semantic Search Engine Over Jsr-170 Compliant Content Repositories." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610665/index.pdf.

Full text
Abstract:
A Content Management System (CMS) is a software application for creating, publishing, editing and managing content. The future step in content management system development is building intelligence over existing content resources that are heterogeneous in nature. Intelligence collected in the knowledge base can later be used for executing semantic queries. Expressing the relations among content resources with ontological formalisms is therefore the key to implementing such semantic features. In this work, a methodology for the semantic lifting of JSR-170 compliant content repositories to ontologies is devised. The fact that in the worst case JSR-170 enforces no particular structural restrictions on the content model poses a technical challenge both for the initial build-up and the further synchronization of the knowledge base. To address this problem, some recurring structural patterns in JSR-170 compliant content repositories are exploited. The value of the ontology extraction framework is assessed through a semantic search mechanism built on top of the extracted ontologies. The work in this thesis is complementary to the "Interactive Knowledge Stack for small to medium CMS/KMS providers (IKS)" project funded by the EC (FP7-ICT-2007-3).
APA, Harvard, Vancouver, ISO, and other styles
31

Gupta, Arushi. "On the Answer Status and Usage of Requirements Traceability Questions." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1562842351223984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Laitala, Christer. "Evaluate methods for managing distributed source changes." Thesis, Blekinge Tekniska Högskola, Avdelningen för för interaktion och systemdesign, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-4624.

Full text
Abstract:
In larger development efforts, the use of configuration management is crucial; the company UIQ Technology is no exception. The configuration management method controls the workflow of software development, so the configuration management method and code complexity affect each other. It may therefore be possible to combine multiple configuration management methods, taking the best from each, to decrease code complexity. That is the goal of this thesis: to see whether COTS, Single Repository, or Component-Based methods could be combined with the UIQ method to decrease code complexity. This has been tested through theoretical use cases for each method, and the conclusion of the study is that Single Repository and Component-Based work best with the UIQ method. COTS, however, is not suited to the UIQ method because of the need for secrecy around large parts of the UIQ platform: UIQ wants to do as much as possible in-house rather than hand work to third-party companies it does not absolutely need. Some improvements have been achieved through Single Repository, such as requiring third-party companies to be up-to-date before starting development, something that had not been valued before.
APA, Harvard, Vancouver, ISO, and other styles
33

Ferreira, Tarcísio Martins. "Classificação de issues obtidas de repositórios de software: uma abordagem baseada em características textuais." Universidade Federal de Uberlândia, 2015. https://repositorio.ufu.br/handle/123456789/18130.

Full text
Abstract:
A classificação das issues ou questões nos repositórios de manutenção de software é realizada atualmente pelos desenvolvedores de software. Entretanto, essa classificação manual não é livre de erros, os quais geram problemas na distribuição das issues para as equipes de tratamento. Isso acontece porque os desenvolvedores, geralmente os propositores das issues, possuem o mal hábito de classificá-las como bugs. Essas classificações errôneas produzem a distribuição de issues para uma equipe de tratamento de outro tipo de issue, gerando retrabalho para as equipes entre outras desvantagens. Por isso, o principal objetivo almejado com o estudo é a melhoria dessa classificação, utilizando de uma abordagem de classificação das issues realizada de maneira automatizada. Essa abordagem foi implementada com técnicas de Aprendizado de Máquina. Estas técnicas mostram que as palavras-chave discriminantes dos tipos de issues podem ser utilizadas como atributos de classificadores automáticos para a predição dessas issues. A abordagem foi avaliada sobre 5 projetos open source extraídos de 2 issue trackers conhecidos, Jira e Bugzilla. Por se tratarem de issue trackers de longa data, os projetos escolhidos forneceram boa quantidade de issues para este estudo. Essas issues, cerca de 7000, foram classificadas por especialistas humanos no trabalho [Herzig, Just e Zeller 2013], produzindo um gabarito utilizado para a realização deste estudo. Este trabalho produziu um classificador automático de issues, com acurácia de 81%, capaz de discriminá-las nos tipos bug, request for enhancement e improvement. O bom resultado de acurácia sugere que o classificador concebido possa ser utilizado em sistemas de encaminhamento de issues para as equipes de tratamento, com a finalidade de diminuir retrabalho dessas equipes que ocorre em virtude da má classificação.
The classification of issues in software maintenance repositories is currently done by software developers. However, this classification is conducted manually and is not free of errors, which cause problems in the distribution of issues to the maintenance teams. This happen because the developers, which usually are the proponents of the issues, have the bad habit of classifying them as bugs. This erroneous rating generates rework and other disadvantages to the teams. Therefore, the main objective of this study is to improve this classification, using an issue classification approach conducted in an automated manner. In turn, this approach was implemented based on machine learning tecniques. These tecniques show that keywords discriminant of issues types can be used as attributes of automatic classifiers for prediction of these issues. The approach was evaluated on five open source projects extracted from two widely used issue trackers, Jira and Bugzilla. Because they are old issue trackers, the chosen projects provided good number of issues for this study. These issues, about 7.000, were classified by human experts at work [Herzig, Just e Zeller 2013], producing a feedback which was used for this study. This present work produced an automatic issues classifier, with 81% of accuracy, able to predict them in types of bug, request for enhancement and improvement. The result of accuracy obtained by this classifier suggests that it can be used in delivery systems to treatment teams with the purpose of reducing rework that occurs in these teams because of the poor issues rating.
Dissertation (Master's)
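The keyword-based classification idea summarized in this abstract can be sketched with a small hand-rolled multinomial Naive Bayes over issue-report words. Everything below is illustrative: the toy issue texts and the three labels (bug, rfe, improvement) are invented stand-ins, and a real setup would use a proper tokenizer and a labelled corpus such as the Herzig et al. gold standard.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return [w for w in text.lower().split() if w.isalpha()]

class NaiveBayesIssueClassifier:
    """Multinomial Naive Bayes over issue-report words (illustrative only)."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter(labels)
        self.vocab = set()
        for text, label in zip(texts, labels):
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, text):
        tokens = tokenize(text)
        best_label, best_score = None, float("-inf")
        total = sum(self.label_counts.values())
        for label in self.label_counts:
            # log prior + log likelihood with Laplace smoothing
            score = math.log(self.label_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for tok in tokens:
                score += math.log((self.word_counts[label][tok] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training data: discriminative keywords per issue type
texts = [
    "crash when saving file null pointer exception",
    "error stack trace on startup fails",
    "please add support for dark theme",
    "feature request export to csv",
    "improve performance of search query",
    "refactor slow rendering for better speed",
]
labels = ["bug", "bug", "rfe", "rfe", "improvement", "improvement"]
clf = NaiveBayesIssueClassifier().fit(texts, labels)
print(clf.predict("null pointer exception crash on startup"))  # → bug
```

Discriminative words such as "crash" or "exception" pull a report toward the bug class, which is exactly the intuition the thesis exploits at a much larger scale.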
APA, Harvard, Vancouver, ISO, and other styles
34

Albuquerque, Regina Lucia Azevedo de. "Repositório de Instituições de Ensino Superior: composição de políticas para a sua criação." Universidade Federal do Amazonas, 2013. http://tede.ufam.edu.br/handle/tede/3559.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This work analyzes the operating policies and procedures adopted in institutional repositories of universities, in order to help define strategies and policies to support the creation of an open-access Institutional Repository at the Federal Institute of Education, Science and Technology of Amazonas (IFAM). Its general goal is to create policies for the IFAM Institutional Repository so as to achieve gains in productivity and quality for the academic community served by the Institute. Its specific objectives are: to survey the policies adopted in the institutional repositories of public higher education institutions and how they are put into operation, as a basis for defining policy strategies suited to the needs of the IFAM repository; to establish a theoretical framework for institutional repositories; and to define the access and use policy options that fit the multidisciplinary institutional repository to be developed, in order to store, preserve, share, give visibility to, and manage the Institute's academic and scientific production in open access. The methodology was qualitative, and the research was descriptive and exploratory, which allowed the elements that characterize information policies to be observed, recorded, analyzed, classified, and interpreted without interference from the researcher, so as to identify more precisely the factors that may contribute to the composition of a repository for IFAM. Data collection was carried out in stages, using primary sources, with a questionnaire as the collection instrument and on-site observation of one of the surveyed repositories. The analysis and discussion of the results present the policies and structure of the surveyed repositories, followed by the proposed guidelines for the composition of the IFAM repository, deployment strategies and actions, and access and use policies.
It is concluded that implementation actions alone are not sufficient to ensure the population of the repository; the integration of all sectors and actors is recommended so that this responsibility is shared throughout the institution. The creation of an institutional information policy is suggested, to define organizational structures, the management process, and the capacity to preserve the stored contents; a submission policy, to establish guidelines for submitting items to the repository; and, in parallel, normative instructions with the character of a mandatory policy, to establish standards and operating procedures for the deposit of course completion works, dissertations, and theses in IFAM libraries. Operating policies are fundamental for the repository to work as an information service and to be recognized by the community, but fulfilling these requirements demands educational work and procedures to be adopted and followed. Based on the results obtained, strategic guidelines and policies are proposed to support the creation of the open-access Institutional Repository at IFAM.
APA, Harvard, Vancouver, ISO, and other styles
35

Castro, Rute Nogueira Silveira de. "Descoberta de relacionamentos entre padrões de software utilizando semântica latente." Universidade Federal do Ceará, 2006. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=1695.

Full text
Abstract:
The reuse of software patterns is becoming increasingly common in systems development, because it is a good software engineering practice that promotes the reuse of proven solutions to recurring problems. However, there is a lack of mechanisms that support the search for patterns appropriate to each situation, and it is also difficult to detect the relationships that exist among the software patterns available in the literature. This work applies text mining techniques to a set of software patterns in order to identify how these patterns are related. Text mining seeks to extract meaningful concepts from large volumes of textual information; within it, a software pattern is treated as a body of text with a structure defined by its template. The degrees of relationship between patterns are determined for the possible types of relationship between them, through rules grounded in the concept of software patterns. These rules, combined with the text mining technique, generate the desired relationship information.
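The latent-semantics idea behind this thesis can be sketched in a few lines of NumPy: build a term-document matrix over pattern descriptions, apply a truncated SVD, and compare patterns by cosine similarity in the latent concept space. The four pattern descriptions below are invented toy texts, not the thesis' corpus.

```python
import numpy as np

# Toy corpus: each "document" is a short software-pattern description.
patterns = {
    "Observer": "subject notifies observers of state change events",
    "Mediator": "mediator coordinates communication between colleague objects",
    "Singleton": "single shared instance access point",
    "Publish-Subscribe": "subscribers receive notifications of events from publishers",
}

vocab = sorted({w for text in patterns.values() for w in text.split()})
names = list(patterns)
# Term-document matrix: rows = terms, columns = pattern descriptions
A = np.array([[patterns[n].split().count(w) for n in names] for w in vocab], float)

# Latent semantic analysis: truncated SVD projects documents onto k concepts
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional vector per pattern

def similarity(a, b):
    """Cosine similarity between two patterns in the latent space."""
    va, vb = doc_vecs[names.index(a)], doc_vecs[names.index(b)]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

On this toy data, Observer and Publish-Subscribe share vocabulary ("events", "of") and land in the same latent concept, while Observer and Mediator share nothing and come out orthogonal; the thesis layers relationship-type rules on top of such similarity scores.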
APA, Harvard, Vancouver, ISO, and other styles
36

Wiese, Igor Scaliante. "Predição de mudanças conjuntas de artefatos de software com base em informações contextuais." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-02122016-140016/.

Full text
Abstract:
Co-change prediction aims to make developers aware of which artifacts may change together with the artifact they are working on. In the past, researchers relied on structural analysis to build prediction models; more recently, hybrid approaches relying on historical information and textual analysis of the source code have been proposed. Despite the advances in the area, software developers still do not use these approaches widely, presumably because of the number of false recommendations. The hypothesis of this thesis is that contextual information about software changes, collected from issues, developers' communication, and commit metadata, describes the circumstances and conditions under which a co-change occurs and can be used to predict co-changes. The aim of this thesis is to use contextual information to build co-change prediction models that improve on association rules, a strategy frequently used in the literature, especially by decreasing the number of false recommendations. Prediction models specific to each pair of files were built using contextual information and the Random Forest machine learning algorithm, and were evaluated on 129 versions of 10 open source projects from the Apache Software Foundation, with a model based on association rules as the baseline. Besides the performance of the prediction models, the influence of how data is aggregated to build training and test sets and the relevance of each piece of contextual information were also investigated. The results indicate that models based on contextual information correctly predict 88% of co-change instances, against 19% for the association rules model, an accuracy three times higher. Models created with contextual information collected in each software version were more accurate than models built from an arbitrary set of tasks spanning more than one version.
The most relevant pieces of contextual information were: the number of lines of code added or modified, the number of lines of code removed, code churn (the sum of lines added, modified, and removed in a commit), the number of words in the description and discussion of a task, the number of comments, and the role of developers in the discussion, measured by the betweenness value obtained from the communication social network. The projects' developers were asked about the relevance of the results obtained by the prediction models based on contextual information. According to them, the results help developers who are new to the project, since they have no knowledge of the architecture and are usually not familiar with how the artifacts changed during the project's evolution. Prediction models based on contextual information about software changes are therefore relatively accurate and can be used to support developers during maintenance and evolution activities.
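The thesis trains a Random Forest per file pair on contextual features. As a minimal stand-in, the sketch below trains a single depth-1 decision tree (a stump) over invented contextual features; a Random Forest is an ensemble of many deeper trees like this, each trained on a bootstrap sample with random feature subsets. The feature values and labels are toy data, not the thesis' dataset.

```python
# Contextual features of one change of file A, following the thesis' idea:
# [lines_added_or_modified, lines_removed, code_churn, words_in_discussion]
# Label: 1 if file B changed in the same commit (a co-change), else 0.
X = [
    [120, 30, 150, 40], [90, 10, 100, 35], [200, 50, 250, 60],  # co-changes
    [5, 2, 7, 3], [10, 0, 10, 5], [8, 4, 12, 2],                # isolated changes
]
y = [1, 1, 1, 0, 0, 0]

def train_stump(X, y):
    """Best single-feature threshold split (a depth-1 decision tree)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            preds = [1 if row[f] >= t else 0 for row in X]
            acc = sum(p == label for p, label in zip(preds, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return lambda row: 1 if row[f] >= t else 0

stump = train_stump(X, y)
print(stump([150, 20, 170, 50]))  # large churn + long discussion → 1
```

Even this trivial learner separates the toy classes on one contextual feature; the appeal of Random Forest in the thesis is that it handles many correlated contextual features and yields per-feature importance scores.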
APA, Harvard, Vancouver, ISO, and other styles
37

Pinho, Helder de Sousa. "RIGEL : um repositório com suporte para desenvolvimento baseado em componentes." [s.n.], 2006. http://repositorio.unicamp.br/jspui/handle/REPOSIP/276546.

Full text
Abstract:
Advisor: Cecilia Mary Fischer Rubira
Dissertation (professional master's) - Universidade Estadual de Campinas, Instituto de Computação
Abstract: Component-based development (CBD) allows an application to be built by composing software components that have previously been specified, built, and tested, resulting in gains in productivity and quality in the produced software. For component reuse to happen, users must be able to search for and retrieve previously specified or implemented components, and a component repository is essential to support this reuse. Interoperability is an important requirement for repositories, but not all tools treat it with the required relevance, and the metadata model of a CBD repository must handle component features such as interfaces and the separation between specification and implementation. This work presents Rigel, a repository of reusable software assets with support for component-based development. Rigel offers features that facilitate activities performed during the development of component-based systems, such as searching, storing, and retrieving assets, and integration with CVS. The RAS standard was adopted as the asset metadata and packaging format, easing the integration of Rigel with other systems. The RAS metadata model was extended to support a conceptual model of components and software architecture; this adaptation resulted in the creation of four new RAS profiles to support CBD-related assets: abstract component, concrete component, interface, and architectural configuration. A case study was conducted to show how Rigel supports a component-based development process. We conclude that the features of the Rigel repository facilitate component-based development.
Master's degree
Computer Engineering
Master in Computing
APA, Harvard, Vancouver, ISO, and other styles
38

Krbeček, Daniel. "Digitální knihovna." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2008. http://www.nusl.cz/ntk/nusl-217318.

Full text
Abstract:
The thesis contains basic information about the digitization of image documents and gives a brief list of the standards commonly used in the Czech Republic, which institutions such as libraries, research departments, and universities can use to describe digitized documents. Specifically, the thesis addresses the preservation and accessibility of B. P. Moll's large map collection stored in the Moravian Library in Brno. It analyzes, step by step, the characteristics of the stored documents, the way they are interlinked, and their data representation. Regarding storage and manipulation, it surveys open-source digital libraries and selects the Fedora repository, then describes the implementation of the object model within this digital library. The practical results are a web presentation of the map collection and a test of the effectiveness of displaying large-scale maps with the Flash-based Zoomify viewer. The web presentation uses the repository services wherever possible, allowing users to search and browse the bibliographic records of the presented documents. The end of the thesis summarizes the results obtained and outlines the future course of development for the presentation and popularization of the map collection.
APA, Harvard, Vancouver, ISO, and other styles
39

Artchounin, Daniel. "Tuning of machine learning algorithms for automatic bug assignment." Thesis, Linköpings universitet, Programvara och system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139230.

Full text
Abstract:
In software development projects, bug triage consists mainly of assigning bug reports to software developers or teams (depending on the project). The partial or total automation of this task would have a positive economic impact on many software projects. This thesis introduces a systematic four-step method for finding some of the best configurations of several machine learning algorithms for the automatic bug assignment problem. The four steps are used, respectively, to select a combination of pre-processing techniques, a bug report representation, and a potential feature selection technique, and to tune several classifiers. The method was applied to three software projects: 66,066 bug reports of a proprietary project, 24,450 bug reports of Eclipse JDT, and 30,358 bug reports of Mozilla Firefox. 619 configurations were applied and compared on each of these three projects. In production, using the approach introduced in this work on the bug reports of the proprietary project would have increased the accuracy by up to 16.64 percentage points.
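The four-step configuration search described above can be sketched as an exhaustive grid search over a configuration space. The configuration names, the space, and the scoring function below are placeholders, not the thesis' actual search space: in a real setup, `evaluate` would be a cross-validated accuracy of a classifier trained on the bug reports.

```python
import itertools

# Illustrative search space: pre-processing x representation x
# feature selection x one classifier hyper-parameter.
search_space = {
    "preprocess": ["stem", "lemmatize", "none"],
    "representation": ["tf", "tfidf"],
    "feature_selection": ["chi2", "none"],
    "C": [0.1, 1.0, 10.0],
}

def evaluate(config):
    """Stand-in for cross-validated accuracy of one configuration."""
    score = 0.70
    score += 0.05 if config["preprocess"] == "stem" else 0.0
    score += 0.08 if config["representation"] == "tfidf" else 0.0
    score += 0.02 if config["feature_selection"] == "chi2" else 0.0
    score += {0.1: 0.0, 1.0: 0.03, 10.0: 0.01}[config["C"]]
    return score

def grid_search(space, evaluate):
    """Enumerate every configuration and keep the highest-scoring one."""
    keys = list(space)
    best_config, best_score = None, float("-inf")
    for values in itertools.product(*(space[k] for k in keys)):
        config = dict(zip(keys, values))
        score = evaluate(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = grid_search(search_space, evaluate)  # 3*2*2*3 = 36 configurations
```

Enumerating 619 configurations per project, as the thesis does, is the same loop at a larger scale, with the evaluation step dominating the cost.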
APA, Harvard, Vancouver, ISO, and other styles
40

Arteaga, Valeriano Juan Rafael, and Fernández-Dávila Daniel Gutiérrez. "Sistema Integrado de Salud: Repositorio Electrónico de Historiales Clínicos 2 (REHC)." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2015. http://hdl.handle.net/10757/579788.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Leiblinger, Luna Erika María, and Kawanishi Daniel Alejandro Higa. "RECH desarrollo del repositorio electrónico de historias clínicas para el Instituto de Salud del Niño." Bachelor's thesis, Universidad Peruana de Ciencias Aplicadas (UPC), 2015. http://hdl.handle.net/10757/583426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Ferracutti, Victor M., and Fernando A. Martinez. "Uso de temáticas y palabras clave sugeridas por software para mejorar la recuperación de tesis electrónicas a través del catálogo." Universidad Peruana de Ciencias Aplicadas (UPC), 2012. http://hdl.handle.net/10757/622602.

Full text
Abstract:
Conference held September 12-14, 2012, in Lima, Peru, as part of the 15th International Symposium on Electronic Theses and Dissertations (ETD 2012). Event sponsored by the Universidad Nacional Mayor de San Marcos (UNMSM) and the Universidad Peruana de Ciencias Aplicadas (UPC).
Open access to scientific information is essential to carrying out scientific work and turning research results into tangible benefits for society. In this sense, the basic core of scientific production at universities consists of graduate theses and dissertations. The proposal of the Universidad Nacional del Sur, using widely deployed technologies and providing a single access point through its catalog, facilitates the processing of digital material, improving access to scientific information and promoting cooperation. Collaborative work between librarians and IT staff, supported by experience, research, and teaching practice, has resulted in an automated prototype (software) that suggests subjects and keywords for a given text using a knowledge base composed of scientific documents. This system enriches the digital object with metadata (i.e., subjects and keywords) through which different documents of various types in the catalog (for example, books, journal articles, theses, and dissertations) can be related, thereby expanding retrieval capabilities, particularly of digital content, for end users. In addition, these automated recommendations reduce the time needed to catalog theses and dissertations by guiding the cataloger in the use of pre-existing subjects and keywords.
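A keyword-suggestion prototype of the kind described here can be approximated by ranking a text's terms with TF-IDF against the existing knowledge base: terms that are frequent in the new text but rare in the base score highest. The corpus, the query text, and the smoothing choice below are invented for illustration.

```python
import math
from collections import Counter

# Knowledge base: already-catalogued documents (toy stand-ins).
corpus = [
    "machine learning models for text classification",
    "library catalog metadata and subject indexing",
    "neural networks for image classification tasks",
]

def tfidf_keywords(text, corpus, top=3):
    """Rank the terms of `text` by TF-IDF against `corpus`."""
    words = text.lower().split()
    tf = Counter(words)
    n = len(corpus)
    def idf(w):
        df = sum(w in doc.split() for doc in corpus)  # document frequency
        return math.log((n + 1) / (df + 1)) + 1       # smoothed idf
    scores = {w: tf[w] / len(words) * idf(w) for w in tf}
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top]]
```

A repeated term absent from the base (here, "ontology") outranks common function words, which is the behavior a cataloger-assist tool wants before a human confirms the suggestions.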
APA, Harvard, Vancouver, ISO, and other styles
43

Nakashima, Chávez Giancarlo Juan. "Mejora del proceso software de una empresa desarrolladora de software : caso Competisoft-Perú Delta." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2009. http://tesis.pucp.edu.pe/repositorio/handle/123456789/355.

Full text
Abstract:
This final-year project corresponds to the execution of a process improvement cycle carried out in a small company mainly dedicated to software development, referred to in the project and in this document, under a confidentiality agreement, as DELTA. The improvement cycle used the COMPETISOFT model, "Process Improvement to Foster the Competitiveness of the Small and Medium Software Industry of Ibero-America," funded by CYTED (Science and Technology for Development). It is worth noting that this model is being deployed and tested in several Peruvian software development companies through joint work with students of the Pontificia Universidad Católica del Perú, a group called COMPETISOFT-PUCP.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
44

Vergara, González Dianne Britt. "Mejora del proceso software de una pequeña empresa desarrolladora de software : caso COMPETISOFT-Perú Lambda." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2008. http://tesis.pucp.edu.pe/repositorio/handle/123456789/358.

Full text
Abstract:
This final-year project presents the implementation of an improvement cycle based on the COMPETISOFT framework. The project was carried out in a small company, here called LAMBDA, which has proprietary developments and innovations in a highly specialized field of hardware, software, and infrastructure technologies. It is worth noting that the company holds ISO 9001:2000 certification in software design and development, which greatly facilitated the execution of the project, since the organizational culture showed little resistance to change: it was understood that the changes were meant to improve, not to find culprits and thereby lead to dismissals.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
45

Briceño, Ortega Deborah Gabriela. "Mejora del proceso software de una pequeña empresa desarrolladora de software : caso Competisoft-Perú-Omega." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2009. http://tesis.pucp.edu.pe/repositorio/handle/123456789/356.

Full text
Abstract:
Process improvement in a small company is a task that involves, in addition to technical issues, management and human resources issues. In this thesis project, the improvement effort was carried out in a small company dedicated to the sale and commercialization of a specialized integrated system. The company was assessed at the start of the project to identify gaps with respect to the model; an improvement plan was then proposed and implemented through pilots within the company. Finally, the results obtained were evaluated.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
46

Sánchez, Lorenzo Gonzalo Alonso. "Mejora del proceso software de una pequeña empresa desarrolladora de software : caso COMPETISOFT-Perú Tau." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2008. http://tesis.pucp.edu.pe/repositorio/handle/123456789/357.

Full text
Abstract:
This final-year project presents the implementation of an improvement cycle based on the COMPETISOFT framework (MoProSoft, EvalProSoft, PMCompetiSoft). The implementation was carried out in a small Peruvian company dedicated to the end-to-end development of custom technological solutions for the Internet market, web-enabled business applications, interactive marketing, and multimedia production.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
47

Cáceres, Vizcarra Lorenzo Esteban. "Mejora del proceso software de una empresa desarrolladora de software : caso Competisoft-Peru Omega, segundo ciclo." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2010. http://tesis.pucp.edu.pe/repositorio/handle/123456789/372.

Full text
Abstract:
This thesis work is framed within the COMPETISOFT project (process improvement to foster the competitiveness of the small and medium software industry of Ibero-America), developed by different universities and companies internationally. This document reports on the execution of the second process improvement cycle in the COMPETISOFT PERÚ project. The participating company has been identified by the letter OMEGA; it is a small software development company that provides custom solutions to the financial sector.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
48

Jesús, Alegre Claudio Alonso de. "Mejora de proceso software en una pequeña organización desarrolladora de software : caso PROCAL-PROSER-LIM.Nu - 1er ciclo." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2015. http://tesis.pucp.edu.pe/repositorio/handle/123456789/6365.

Full text
Abstract:
The development and use of information technologies in small organizations, in Peru and internationally, is still immature and faces many difficulties. For organizations that develop software, on the supply side of solution alternatives, there are capability and maturity models; the most relevant for small companies is the Mexican MoProSoft model, which has been adopted in Peru as the Peruvian standard NTP 291.100 and is the basis of the new international standard ISO/IEC 29110. In this context, the ProCal-ProSer project is an initiative funded by the Peruvian government that seeks, among other things, to identify factors that influence the adoption of specialized process models in small organizations that develop software products. The ProCal-ProSer project defines a research component related to small software development organizations, working with a group of companies in which an improvement cycle is carried out to adopt the model proposed by the international standard being developed under the ISO/IEC 29110 family. Carrying out process improvement based on the adoption of a process model such as that of the ISO/IEC 29110 series in software development organizations is a particular challenge, since these organizations generally have little available time, seldom have budgets for it, and have often set aside good practices under the pressure of day-to-day work. This project carries out a process improvement cycle in a company called NU, as a controlled trial within the ProCal-ProSer project, mainly using the international standard ISO/IEC 29110-5-1-2 and related models.
This thesis project follows the guidelines of the ProCal-ProSer implementation component for software development organizations and is aligned with all directives established in ProCal-ProSer.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
49

Arenas, Romero José. "Mejora de proceso software en una pequeña organización desarrolladora de software: caso PROCAL-PROSER- LIM.GAMMA - 1er ciclo." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2015. http://tesis.pucp.edu.pe/repositorio/handle/123456789/6385.

Full text
Abstract:
This work was carried out in response to problems detected in the software industry, specifically in the sector of small software-developing organizations. Several problems were identified, most notably the delivery of documentation past the agreed dates and the lack of proper dissemination of project-management documentation, both of which lead to delivering low-quality products outside the time agreed with clients. These problems arise from inadequate management of software projects, unawareness of existing project templates and documentation, the execution of processes that add no value, and, finally, a lack of continuous communication between the project manager and the team. For these reasons, this work consists of executing a process improvement cycle in a small software-developing organization. To this end, an initial assessment of the company's processes was performed. Next, the improvement of the selected processes was planned and then executed according to the established plan. Subsequently, a theoretical evaluation of the improvements proposed by the author was carried out before testing them in a real project, following the same scheme as the initial assessment. Additionally, a final evaluation of the improvement, applied and executed in an ongoing company project, was performed, and the effort involved was measured. For each of these evaluations, technical reports were prepared for the company. The project was justified by the various benefits it brings to the company and its workers, increasing the efficiency of its processes. 
The project was theoretically grounded in the process model ISO/IEC 29110-5-1-2: Management and Engineering Guide: Generic Profile Group: Basic Profile, and ISO/IEC 29110-5-1-3: Management and Engineering Guide: Generic Profile Group: Intermediate Profile. This model, ISO/IEC 29110-5-1, adapts models designed for large companies so they fit small organizations.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
50

Campó, Salinas Kevin Alessandro. "Mejora de proceso software en una pequeña organización desarrolladora de software: caso PROCAL-PROSER- LIM.BETA - 1er ciclo." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2015. http://tesis.pucp.edu.pe/repositorio/handle/123456789/6380.

Full text
Abstract:
This document was developed within the framework of the ProCal-ProSer project, whose main objective is to identify the factors that positively or negatively influence the application of NTP ISO/IEC 29110 in small software-developing organizations (POs). This standard was created to give small organizations an important process improvement tool suited to their business needs and limited resources. Accordingly, applying the standard and observing the results is essential to achieving the project's objectives, which is why improvement cycles were carried out in the Peruvian software industry. This work is only one of the multiple applications and evaluations conducted within the project; it describes in detail the actions carried out in each stage of an improvement cycle. The cycle was divided into the following stages: an initial diagnostic evaluation of the organization's processes, planning of the improvement of the selected processes, execution of the established plan, and a final diagnostic evaluation of the processes. Throughout the document, the execution of each of these stages is presented in detail, together with its results and observations. The final sections present conclusions and recommendations for a possible second improvement cycle. The project is theoretically grounded in NTP ISO/IEC 29110, which is based on other internationally recognized standards and models such as ISO 9001, CMMI, ISO/IEC 12207, and ISO/IEC 15504.
Thesis
APA, Harvard, Vancouver, ISO, and other styles
