
Dissertations / Theses on the topic 'Information and software'


Consult the top 50 dissertations / theses for your research on the topic 'Information and software.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Taipale, T. (Taneli). "Improving software quality with software error prediction." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201512042251.

Full text
Abstract:
Today’s agile software development can be a complicated process, especially when dealing with a large-scale project with demands for tight communication. The tools used in software development, while aiding the process itself, can also offer meaningful statistics. With the aid of machine learning, these statistics can be used for predicting the behavior patterns of the development process. The starting point of this thesis is a software project developed to be a part of a large telecommunications network. On the one hand, this type of project demands expensive testing equipment, which, in turn, translates to costly testing time. On the other hand, unit testing and code reviewing are practices that improve the quality of software, but require large amounts of time from software experts. Because errors are the unavoidable evil of the software process, the efficiency of the above-mentioned quality assurance tools is very important for a successful software project. The target of this thesis is to improve the efficiency of testing and other quality tools by using a machine learner. The machine learner is taught to predict errors using historical information about software errors made earlier in the project. The error predictions are used for prioritizing the test cases that are most probably going to find an error. The result of the thesis is a predictor that is capable of estimating which of the file changes are most likely to cause an error. The prediction information is used for creating reports such as a ranking of the most probably error-causing commits. Furthermore, a line-wise map of probability of an error for the whole project is created. Lastly, the information is used for creating a graph that combines organizational information with error data. The original goal of prioritizing test cases based on the error predictions was not achieved because of limited coverage data. This thesis brought important improvements in project practices into focus, and gave new perspectives into the software development process.
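
The commit-level error prediction described above can be illustrated with a minimal sketch: a classifier trained on historical change metadata ranks pending file changes by their estimated probability of introducing an error. The feature names and data below are hypothetical placeholders, not the features or model used in the thesis.

```python
# Minimal illustration of commit-level error prediction (hypothetical features,
# not the actual data or model from the thesis).
from sklearn.linear_model import LogisticRegression

# Each historical change: [lines_added, lines_deleted, files_touched, author_recent_changes]
X_train = [
    [120, 30, 5, 2],
    [10, 2, 1, 40],
    [300, 150, 12, 1],
    [25, 5, 2, 15],
    [80, 60, 7, 3],
    [5, 1, 1, 60],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = the change was later linked to a reported error

model = LogisticRegression().fit(X_train, y_train)

# Rank pending changes by predicted error probability (highest risk first).
pending = {"commit_a1": [200, 90, 9, 2], "commit_b2": [15, 3, 1, 30]}
ranked = sorted(pending, key=lambda c: model.predict_proba([pending[c]])[0][1], reverse=True)
print(ranked)
```

The same ranking idea underlies the reports mentioned in the abstract, such as a list of the commits most likely to cause an error.
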
APA, Harvard, Vancouver, ISO, and other styles
2

Enescu, Mihai Adrian. "Precisely quantifying software information flow." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/57379.

Full text
Abstract:
A common attack point in a program is the input exposed to the user. The adversary crafts a malicious input that alters some internal state of the program, in order to acquire sensitive data, or gain control of the program's execution. One can say that the input exerts a degree of influence over specific program outputs. Although a low degree of influence does not guarantee the program's resistance to attacks, previous work has argued that a greater degree of influence tends to provide an adversary with an easier avenue of attack, indicating a potential security vulnerability. Quantitative information flow is a framework that has been used to detect a class of security flaws in programs, by measuring an attacker's influence. Programs may be considered as communication channels between program inputs and outputs, and information-theoretic definitions of information leakage may be used in order to measure the degree of influence which a program's inputs can have over its outputs, if the inputs are allowed to vary. Unfortunately, the precise information flow measured by this definition is difficult to compute, and prior work has sacrificed precision, scalability, and/or automation. In this thesis, I show how to compute this information flow (specifically, channel capacity) in a highly precise and automatic manner, and scale to much larger programs than previously possible. I present a tool, nsqflow, that is built on recent advances in symbolic execution and SAT solving. I use this tool to discover two previously-unknown buffer overflows. Experimentally, I demonstrate that this approach can scale to over 10K lines of real C code, including code that is typically difficult for program analysis tools to analyze, such as code using pointers.
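
For a deterministic program, the channel capacity referred to above reduces to the base-2 logarithm of the number of distinct outputs the attacker-controlled input can produce. The toy brute-force sketch below illustrates that definition only; the thesis computes it with symbolic execution and SAT solving rather than enumeration.

```python
# Toy illustration of channel capacity for a deterministic program:
# log2 of the number of distinct outputs reachable as the input varies.
import math

def influence_bits(program, inputs):
    """Brute-force channel capacity over a small, enumerable input space."""
    return math.log2(len({program(x) for x in inputs}))

# A check that leaks only whether the input exceeds a threshold: 1 bit.
print(influence_bits(lambda x: x > 1000, range(65536)))   # 1.0
# Copying the low byte of the input leaks 8 bits.
print(influence_bits(lambda x: x & 0xFF, range(65536)))   # 8.0
```
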
APA, Harvard, Vancouver, ISO, and other styles
3

Saks, Craig Sheldon. "Expanding software process improvement models beyond the software process itself." Master's thesis, University of Cape Town, 1999. http://hdl.handle.net/11427/16844.

Full text
Abstract:
Bibliography: pages 182-188.
The problems besetting software development and maintenance are well recorded and numerous strategies have been adopted over the years to overcome the so-called "software crisis". One increasingly popular strategy focuses on managing the processes by which software is built, maintained and managed. As such, many software organisations see software process improvement initiatives as an important strategy to help them improve their software development and maintenance performance. Two of the more popular software process improvement (SPI) models used by the software industry to help them in this endeavour are the Capability Maturity Model for Software (SW-CMM) from the Software Engineering Institute and the Software Process Improvement and Capability determination (SPICE) model from the International Standards Organisation. This research begins with the supposition that, although these SPI models have added significant value to many organisations, they have a potential shortcoming in that they tend to focus almost exclusively on the software process itself and seem to neglect other organisational aspects that could contribute to improved software development and maintenance performance. This research is concerned with exploring this potential shortcoming and identifying complementary improvement areas that the SW-CMM and SPICE models fail to address adequately. A theoretical framework for extending the SW-CMM and SPICE models is proposed. Thereafter complementary improvement areas are identified and integrated with the SW-CMM and SPICE models to develop an Extended SPI Model. This Extended SPI Model adopts a systemic view of software process and IS organisational improvement by addressing a wide range of complementary improvement considerations. A case study of an SPI project is described, with the specific objective of testing and refining the Extended SPI Model. The results seem to indicate that the framework and Extended SPI Model are largely valid, although a few changes were made in light of the findings of the case study. Finally, the implications of the research for both theory and practice are discussed.
APA, Harvard, Vancouver, ISO, and other styles
4

Liao, Gang. "Information repository design for software evolution." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ34039.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Noppen, Johannes Albertus Rudolf. "Imperfect information in software design processes." Enschede : University of Twente [Host], 2007. http://doc.utwente.nl/57882.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Xianpeng. "Software Clone Detection Based on Context Information." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-324959.

Full text
Abstract:
Software clone detection is very promising and innovative within the industry field. Existing mainstream clone detection techniques mainly focus on detecting the similarity of source code itself, which makes them capable of detecting Type I and Type II clones (Type I clones are two identical code fragments except for variations in format and Type II clones are two structurally identical code fragments except for variations in format). But they rarely pay attention to the relationship between codes. It becomes an important research area to detect Type III code clones, which are clones with minor differences in statements, by using the context information in the source code. I carry out a detailed analysis of existing software clone detection techniques in this thesis. It raises issues of existing software clone detection techniques in theory and practice. On the basis of the analysis, I propose a new method to improve existing clone detection techniques with a detailed theory analysis and experimental verification. This method makes detection of Type III software clones possible.
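
As a rough illustration of the Type I/Type II distinction discussed above (not the context-based method proposed in the thesis), two fragments can be compared after normalising formatting and, for Type II, also masking identifiers and literals:

```python
# Sketch of Type I vs. Type II clone checking via token normalisation
# (illustrative only; keyword list and masking scheme are simplified assumptions).
import re

def normalize(code, mask_identifiers=False):
    """Strip formatting differences; optionally mask identifiers and literals."""
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    if mask_identifiers:
        keywords = {"if", "else", "for", "while", "return", "int"}
        tokens = ["ID" if re.match(r"[A-Za-z_]\w*$", t) and t not in keywords
                  else "LIT" if t.isdigit() else t
                  for t in tokens]
    return " ".join(tokens)

a = "int total = 0; for (int i = 0; i < n; i++) { total += i; }"
b = "int sum   = 0; for (int j = 0; j < m; j++) { sum   += j; }"

print(normalize(a) == normalize(b))              # False: not a Type I clone
print(normalize(a, True) == normalize(b, True))  # True: Type II clone
```
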
APA, Harvard, Vancouver, ISO, and other styles
7

Rehder, John J. "Semantic software scouts for information retrieval." W&M ScholarWorks, 2000. https://scholarworks.wm.edu/etd/1539623977.

Full text
Abstract:
A new concept for information storage and retrieval is proposed that links chunks of information within and among documents based on semantic relationships and uses those connections to efficiently retrieve all the information that closely matches the user's request. The storage method is semantic hypertext, in which conventional hypertext links are enriched with semantic information that includes the strength and type of the relationship between the chunks of information being linked. A retrieval method was devised in which a set of cooperating software agents, called scouts, traverse the connections simultaneously searching for requested information. By communicating with each other and a central controller to coordinate the search, the scouts are able to achieve high recall and high precision and perform extremely efficiently. An attempt to develop a document base connected by semantic hypertext is described. Because of the difficulties encountered in the attempt, it was concluded that there is no satisfactory method for automatic generation of semantic hypertext from real documents. The collection of semantically linked documents used in this research was generated synthetically. A Java-based agent framework was used to develop three types of software scouts. In the simplest implementation, Scoutmaster, the paths of the scouts through the document base were specified by a central controller. The only task of each scout was to follow the links specified by the central controller. In the next level of autonomy, Broadcaster, the controller was used strictly as a conduit for scouts to exchange messages. The controller received information from the scouts and broadcast it to all of the other scouts to use in determining their actions. In the final implementation, Melee, the central controller was used only to inaugurate the scout searches. After initialization, the scouts broadcast their messages to all the other scouts. Experiments were performed to test the ability of the scouts to find information in two synthetically created document sets. All scout types were able to find all of the specified information, i.e. high recall, while searching few documents that did not contain the information, i.e. high precision. Using groups of scouts, the best time to search document sets with up to 3000 documents and 2.5 million links was about thirty seconds.
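
The traversal idea behind the scouts, following only links whose semantic strength and type suggest relevance, can be sketched as a simple guided graph search. The link data, type names, and threshold below are invented for illustration and are not taken from the dissertation.

```python
# Illustrative guided traversal over a tiny, made-up semantic hypertext graph.
from collections import deque

# Hypothetical graph: chunk -> list of (target_chunk, link_type, strength).
links = {
    "query_topic": [("doc1#intro", "elaborates", 0.9), ("doc2#history", "background", 0.3)],
    "doc1#intro": [("doc1#method", "elaborates", 0.8), ("doc3#ad", "mentions", 0.1)],
    "doc1#method": [("doc4#results", "supports", 0.85)],
}

def scout(start, min_strength=0.5, allowed_types=("elaborates", "supports")):
    """Follow only strong links of relevant types, collecting visited chunks."""
    found, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        if node in found:
            continue
        found.add(node)
        for target, link_type, strength in links.get(node, []):
            if strength >= min_strength and link_type in allowed_types:
                queue.append(target)
    return found

print(scout("query_topic"))
```
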
APA, Harvard, Vancouver, ISO, and other styles
8

Hulter, Oskar. "Improving Software Documentation using Data Visualization." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-70862.

Full text
Abstract:
This exploratory case study argues for the thesis that data visualization (dataviz) can have a positive impact on users' understanding and perception of code documentation. The basis of the thesis is a case study performed in collaboration with SensiNet AB. The introduction describes the case and the problem area. The theoretical framework and methodology focus on theoretical foundations and how the data collection was planned and performed. The paper concludes with a brief discussion of related work and several suggestions for future studies.
APA, Harvard, Vancouver, ISO, and other styles
9

Leinonen, J. (Juho). "Evaluating software development effort estimation process in agile software development context." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605221862.

Full text
Abstract:
This thesis studied effort estimation in software development, focusing on task level estimation that is done in Scrum teams. The thesis was done at Nokia Networks and the motivation for this topic came from the poor estimation accuracy that has been found to be present in software development. The aim of this thesis was to provide an overview of what is the current state of the art in effort estimation, survey the current practices present in Scrum teams working on LTE L2 software component at Nokia Networks Oulu, and then present suggestions for improvement based on the findings. On the basis of the literature review, three main categories of effort estimation methods were found: expert estimation, algorithmic models and machine learning. Universally there did not seem to be a single best method, but instead the differences come from the context of use. Algorithmic models and machine learning require data sets, whereas expert estimation methods rely on previous experiences and intuition of the experts. While model based methods have received a lot of research attention, the industry has largely relied on expert estimation. The current state of effort estimation at Nokia Networks was studied by conducting a survey. This survey was built based on previous survey studies that were found by conducting a systematic literature review. The questions found in the previous studies were formulated into a questionnaire, which was then used to survey the current effort estimation practices present in the participating teams. 41 people out of 100 in the participating teams participated in the survey. Survey results showed that like much of the software industry, the teams in LTE L2 relied on expert estimation methods. Most respondents had encountered overruns in the last sprint and the most often provided reason was that testing related effort estimation was hard. Forgotten subtasks were encountered frequently and requirements were found to be both unclear and to change often. Very few had had any training on effort estimation. There were no common practices for effort data collection and as such, it was mostly not done. By analyzing the survey results and reflecting them on the previous research, five suggestions for improvements were found. These were training in effort estimation, improving the information that is used during effort estimation by collaborating with specification personnel, improving testing related effort estimation by splitting acceptance testing into their own tasks, collecting and using effort data, and using Planning Poker as an effort estimation method, as it fit the context of estimation present in the teams. The study shed light on how effort estimation is done in software industry. Another contribution was the improvement suggestions, which could potentially improve the situation in the teams that participated in the survey. A third contribution was the questionnaire built during this study, as it could potentially be used to survey the current state of effort estimation in also other contexts.
APA, Harvard, Vancouver, ISO, and other styles
10

Mäder, Gabriel. "Management of Software Assets : Challenges in Large Organizations." Thesis, Linnéuniversitetet, Institutionen för informatik (IK), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-104552.

Full text
Abstract:
This thesis is a study about the difficulties of managing software assets within larger organizations. Software Asset Management (SAM) is a fairly new practice which has started to be used by organizations in order to track and manage software assets throughout their life cycle. Existing literature states that organizations have challenges with the increasing complexity which comes along when buying software products from manufacturers. However, academic research on the topic has been limited, due to the rapid development of technology. The purpose of this paper is to explore the challenges that large organizations are confronted with when managing software assets. A mixed study approach was conducted with the main focus on a qualitative approach, where five persons, most with long experience in Software Asset Management, were interviewed. In addition, a survey was sent out to employees who work with IT, to capture a general perspective on the topic. The findings of the study include the identified challenges of SAM, which are divided into four main areas. Further, the paper highlights the importance and success of Software Asset Management.
APA, Harvard, Vancouver, ISO, and other styles
11

Crunk, John. "Examining Tuckman's Team Theory in Non-collocated Software Development Teams Utilizing Collocated Software Development Methodologies." Thesis, Capella University, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10929105.

Full text
Abstract:
The purpose of this qualitative, multi-case study was to explain Tuckman's attributes within software development when using a collocated software development methodology in a non-collocated setting. Agile is a software development methodology that is intended for use in a collocated setting; however, organizations are using it in a non-collocated setting, which is increasing the software errors in the final software product. The New Agile Process for Distributed Projects (NAPDiP) was developed to fix these software errors that arise when using Agile in a non-collocated setting but has not been effective. This research utilized Tuckman's team theory to explore the disparity related to why these errors still occur. The research question asked is how software development programmers explain Tuckman's attributes (i.e., forming, storming, norming, performing) on software development projects. The study adopted a qualitative model using nomothetic major and minor themes in the exploration of shared expressions of sentiments from participants. The study's population came from seven participants located in the United States and India who met the requirement of using the Agile development methodology and work for organizations on teams with a size of at least thirty individuals. A total of seven participants reached saturation in this multi-case study supporting the research question explored. The findings of the research demonstrated that development teams do not meet all stages and attributes of Tuckman's team development. Future research should explore additional ways that software development teams satisfy a more significant number of Tuckman's team development stages.
APA, Harvard, Vancouver, ISO, and other styles
12

Rahim, Faizul Azli Mohd. "Factors contributing to information technology software project risk : perceptions of software practitioners." Thesis, University of Liverpool, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.548788.

Full text
Abstract:
The majority of software development projects are normally connected with the application of new or advanced technologies. The use of advanced and, in most cases, unproven technology on software development projects could lead to a large number of risks. Every aspect of a software development project could be influenced by risks that could cause project failure. Generally, the success of a software development project is directly connected with the involved risk, i.e. project risks should be successfully mitigated in order to finish a software development project within the budget allocated. One of the early studies on the risk of software projects was conducted by Boehm (1991), which identified the top 10 risk factors for software projects. Boehm's research has been the starting point of research into the risk of software projects. Over the past 10-15 years, many studies have been conducted and frameworks and guidelines introduced. However, software development project failures are still being reported in the academic literature. Researchers and practitioners in this area have long been concerned with the difficulties of managing the risks relating to the development and implementation of IT software projects. This research is concerned specifically with the risk of failure of IT software projects, and how related risk constructs are framed. Extant research highlights the need for further research into how a theoretically coherent risk construct can be combined with an empirical validation of the links between risk likelihood, risk impact on cost overrun, and evidence of strategic responses in terms of risk mitigation. The proposal within this research is to address this aspect of the debate by seeking to clarify the role of a project life cycle as a frame of reference that contracting parties might agree upon and which should act as the basis for the project risk construct. The focus on the project life cycle as a risk frame of reference potentially leads to a common, practical view of the multi-dimensional setting of risk within which risk factors may be identified, and which is believed to be grounded across a wide range of projects and, specifically in this research, IT software projects. The research surveyed and examined the opinions of professionals in IT and software companies. We assess which risk factors are most likely to occur in IT software projects; evaluate risk impact by assessing which risk factors IT professionals think are likely to give rise to cost overruns; and empirically establish which risk mitigation strategies are most likely to be employed in practice as a response to the risks and impacts identified. The data obtained were processed, analysed and ranked. Using Excel and SPSS for factor analysis, all the risk factors were reduced and grouped into clusters and components for further correlation analysis. The analysis was able to evidence opinion on risk likelihood, the impact of the risk of cost overrun, and the strategic responses that are likely to be effective in mitigating the risks that emerge in IT software projects. The analysis indicates that it is possible to identify a grouping of risk that is reflective of the different stages of the project life cycle, which suggests three identifiable groups when viewing risk from the likelihood of occurrence and three identifiable groups from a cost overrun perspective.
The evidence relating to the cost overrun view of risk provided a stronger view of which components of risk were important compared with risk likelihood. The research accounts for this difference by suggesting that a more coherent framework, or risk construct, offered by viewing risk within the context of a project life cycle allows those involved in IT software projects to have a clearer view of the relationships between risk factors. It also makes the various risk components and the associated emergent clusters more readily identifiable. The research on strategic response indicated that different strategies are effective for risk likelihood versus cost overrun. The study was able to verify the effective mitigation strategies that are correlated with the risk components. In this way, the actions or consequences can be observed upon identification of risk likelihood and risk impact on cost overrun. However, the focus of attention on technical issues and the degree to which they attract strategic response is a new finding in addition to the usual reports concerning the importance of non-technical aspects of IT software projects. The research also developed a fuzzy-theory-based model to assist software practitioners in the software development life cycle. This model could help practitioners in decision making when dealing with risks in a software project. The contribution of the research relates to the assessment of risk within a construct that is defined in the context of a fairly broadly accepted view of the life cycle of projects. The software risk construct based on the project management framework proposed in this research could facilitate a focus on roles and responsibilities, and allow for the coordination and integration of activities for regular monitoring and alignment with the project goals. This contribution would better enable management to identify and manage risks as they emerge across project stages, more closely reflect project activity and processes, and facilitate the exercise of risk management strategies. Keywords: risk management, project planning, IT implementation, project life cycle
APA, Harvard, Vancouver, ISO, and other styles
13

Lindmark, Fanny, and Hanna Kvist. "Security in software : How software companies work with security during a software development process." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-130964.

Full text
Abstract:
This study was conducted, due to recent interest in privacy and software security, to examine how a number of software development companies work during a development process to develop software that is as secure as possible. The study is based on four interviews with four software companies located in Linköping, Sweden. The interviews followed a semi-structured format to make it possible to compare the companies' answers to each other. This structure was chosen to give each company the liberty to express what they valued and thought was important during a software development process. The aim was to analyze how and whether these companies work with security when developing software, and to see what differences and similarities could be established. We found differences in the companies' perspectives on security and in their methods of working. Some valued secure software more than others and took more measures to ensure it. We also found some similarities in their views on the importance of secure software and in their ways of working with it. The differences and similarities were related to the size of the companies, their resources, the types of products they develop, and the types of clients they have.
APA, Harvard, Vancouver, ISO, and other styles
14

Kidwell, Billy R. "MiSFIT: Mining Software Fault Information and Types." UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/33.

Full text
Abstract:
As software becomes more important to society, the number, age, and complexity of systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these software systems. Software organizations can utilize data from manual fault classification to meet their process improvement needs, but organizations lack the expertise or resources to implement them correctly. This dissertation addresses the need for the automation of software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of similar nature. The resulting classifications result in good agreement for common software faults with no manual effort. To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults from an open source project. MiSFIT clusters the faults based on the changes. I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method to cluster faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
APA, Harvard, Vancouver, ISO, and other styles
15

Lui, Nathan. "DependencyVis: Helping Developers Visualize Software Dependency Information." DigitalCommons@CalPoly, 2021. https://digitalcommons.calpoly.edu/theses/2270.

Full text
Abstract:
The use of dependencies has been increasing in popularity over the past decade, especially as package managers such as JavaScript's npm have made getting these packages a simple command to run. However, while incidents such as the left-pad incident have increased awareness of how risky relying on these packages is, there is still some work to be done when it comes to getting developers to take the extra research step to determine if a package is up to standards. Finding metrics for different packages and comparing them is a difficult and time-consuming task, especially since potential vulnerabilities are not the only metric to consider. For example, how popular and how actively maintained the package is matters just as much. Therefore, as a solution, we propose a visualization tool called DependencyVis, specific to JavaScript projects and npm packages, that analyzes a project's dependencies in order to help developers by looking up the many basic metrics that can address a dependency's popularity, activeness, and vulnerabilities, such as the number of GitHub stars, forks, and issues as well as security advisory information from npm audit. This thesis then proposes many use cases for DependencyVis to help users compare dependencies by displaying them in a graph with metrics represented by aspects such as node color or node size.
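
The popularity and activity metrics mentioned above (GitHub stars, forks, open issues) can be looked up through GitHub's public REST API. The sketch below is independent of the DependencyVis implementation and simply shows one way to gather those numbers for a repository.

```python
# Minimal sketch of fetching the repository metrics mentioned above via
# GitHub's public REST API (not the actual DependencyVis code).
import requests

def repo_metrics(owner: str, repo: str) -> dict:
    r = requests.get(f"https://api.github.com/repos/{owner}/{repo}", timeout=10)
    r.raise_for_status()
    data = r.json()
    return {
        "stars": data["stargazers_count"],
        "forks": data["forks_count"],
        "open_issues": data["open_issues_count"],
    }

# Example: compare two candidate dependencies side by side.
print(repo_metrics("expressjs", "express"))
```
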
APA, Harvard, Vancouver, ISO, and other styles
16

Durrett, John Randall. "Distributed information systems design through software teams /." Digital version, 1999. http://wwwlib.umi.com/cr/utexas/fullcit?p9959479.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Gethers, Malcom Bernard II. "Information Integration for Software Maintenance and Evolution." W&M ScholarWorks, 2012. https://scholarworks.wm.edu/etd/1539720326.

Full text
Abstract:
Software maintenance and evolution is a particularly complex phenomenon in the case of long-lived, large-scale systems. It is not uncommon for such systems to progress through years of development history, a number of developers, and a multitude of software artifacts including millions of lines of code. Therefore, realizing even the slightest change may not always be straightforward. Clearly, changes are the central force driving software evolution. Therefore, it is not surprising that a significant effort has been (and should be) devoted in the software engineering community to systematically understanding, estimating, and managing changes to software artifacts. This effort includes the three core change related tasks of (1) expert developer recommendations - identifying who are the most experienced developers to implement needed changes, (2) traceability link recovery - recovering dependencies (traceability links) between different types of software artifacts, and (3) software change impact analysis - which other software entities should be changed given a starting point. This dissertation defines a framework for an integrated approach to support three core software maintenance and evolution tasks: expert developer recommendation, traceability link recovery, and software change impact analysis. The framework is centered on the use of conceptual and evolutionary relationships latent in structured and unstructured software artifacts. Information Retrieval (IR) and Mining Software Repositories (MSR) based techniques are used for analyzing and deriving these relationships. All the three tasks are supported under the framework by providing systematic combinations of MSR and IR analyses on single and multiple versions of a software system. Our approach to the integration of information is what sets it apart from previously reported relevant solutions in the literature. Evaluation on a number of open source systems suggests that such combinations do offer improvements over individual approaches.
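 
A common IR baseline for the traceability link recovery task mentioned above is cosine similarity between TF-IDF vectors of artifact text. The sketch below shows only that generic baseline with made-up artifact text; the dissertation itself combines IR with mining of software repositories rather than using this baseline alone.

```python
# Generic IR baseline for traceability link recovery: rank source artifacts by
# TF-IDF cosine similarity to a requirement (illustrative data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = ["The user shall reset a forgotten password via email"]
source_files = [
    "class PasswordResetService sends reset email token to user",
    "class ReportExporter writes quarterly sales figures to pdf",
]

tfidf = TfidfVectorizer()
matrix = tfidf.fit_transform(requirements + source_files)
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()

# Candidate traceability links, strongest first.
for score, name in sorted(zip(scores, ["PasswordResetService", "ReportExporter"]), reverse=True):
    print(f"{name}: {score:.2f}")
```
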
APA, Harvard, Vancouver, ISO, and other styles
18

Woody, Jeffrey L. "A classroom information management system /." Connect to unofficial online version of: A classroom information management system, 2005. http://minds.wisconsin.edu/bitstream/1793/18751/1/WoodyJeff.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Fenn, Tim. "Understanding & improving GIS software selection /." Click for abstract, 1998. http://library.ctstateu.edu/ccsu%5Ftheses/1490.html.

Full text
Abstract:
Thesis (M.S.)--Central Connecticut State University, 1998. Thesis advisor: Professor John Harmon. "... in partial fulfillment of the requirements for the degree of Master of Science in Geography." Includes bibliographical references (leaves ix-xi).
APA, Harvard, Vancouver, ISO, and other styles
20

Kivijärvi, M. (Marko). "Cross-platform software component design." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201306011427.

Full text
Abstract:
The aim of this thesis is to analyze a project to design and implement a new FM Radio application for Symbian OS. The project process and relevant events are discussed when they have an impact on the design work. The goal of the project was to offer an improved user experience with features like favorite stations, song history, RT+ support, and automatic region selection. In order to complete the application within the project schedule, the existing radio modules were to be reused as much as possible. The application was required to be developed using the Qt application framework and to have no dependencies on the old UI framework from the Symbian OS. Platform-independence, testability, and simplicity were other key requirements for the new radio application. A final comprehensive goal was to stick to established design patterns and to follow the design principles and good practices defined earlier in the bachelor’s thesis by the same author. An added challenge was provided by the necessity to develop the application on top of a new UI framework that was still in development itself. Constant changes to the framework put a strain on the already tight project schedule. The discussion focuses on the design of the engine module that was required to house most of the application logic. The engine is disconnected from the Symbian OS by the use of a wrapper module, whose purpose is to hide the platform details. Due to its relevance to the engine, the design of the wrapper is discussed in detail. The application UI and the reused radio modules are discussed briefly and only to the extent that is relevant for the engine design. The resulting design fulfills its requirements and the implemented application performs as designed. All the required features are supported, and the existing radio modules are reused. The lack of dependency on the old UI framework is witnessed by running the application in a system that does not contain the framework. The possibility to run the application on a Windows workstation also affirms that the platform-independence requirement was achieved. The design and implementation adhere to the principles outlined in the bachelor’s thesis and provide a practical use for them. The principles are found to be valid and important for the successful completion of a complex software project. It is discovered that the goal of designing a simple system from complicated beginnings is difficult to achieve and requires many iterations. Gradual refinements to the architecture and implementation are necessary to finally arrive at the optimal design. Constant refactoring is found to be a key element in the successful completion of a software project.
APA, Harvard, Vancouver, ISO, and other styles
21

Carlsson, Emil. "Software Developers Use of Source Code Summarization Comments : A qualitative study of software developers practices to understand third party source code libraries." Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-46066.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ehlers, Kobus. "Agile software development as managed sensemaking." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/6455.

Full text
Abstract:
Thesis (MPhil (Information Science))--University of Stellenbosch, 2011.
The environment in which all organisations currently operate is undoubtably dynamic. Regardless of the nature, size or geographical location of business, companies are being forced to cope with a rapidly changing world and increasing levels of unpredictability. This thesis tracks the history of software development methodologies leading up to agile development (chapter 2). Agile development has appeared in response to the limitations of traditional development approaches and evolved to address the particular demands of a changing world (chapter 3). The theory of sensemaking is used to gain insight into the functioning of agile development. Sensemaking is introduced and a working definition of this concept is formulated (chapter 4). This research does not argue that agile development is the same as sensemaking, but rather that it can be better understood through sensemaking. Agile development can be seen as a type of sensemaking, but sensemaking is also a generic, universal cognitive ability. The structure and design of agile development is well aligned with sensemaking, and one can understand its nature and the type of management needed to support agile development better from this perspective. In fact, agile development directly supports and facilitates several important elements of the sensemaking process. For successful sensemaking to occur, certain organisational conditions need to be present. The term "managed sensemaking" is introduced to expand this notion. After performing an analysis of agile development (chapter 5), certain pertinent implications and challenges facing organisations are considered (chapter 6). By framing these implications in terms of sensemaking, practical management suggestions can be provided based on a good fit between the problem that agile development is meant to solve and the cognitive requirements of the process leading to a solution. The research conducted in this process opens the door to further research opportunities (chapter 7) and allows for the application of sensemaking in the context of software development methodologies. This study provides insight into the prevalence and functioning of agile methodologies, in software engineering contexts, by leveraging the theory of sensemaking to provide an explanation for the underlying worldview and processes constituting this approach.
APA, Harvard, Vancouver, ISO, and other styles
23

Khan, Zeeshan. "Design of VoIP Paralleled Client-Server Software for Multicore." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142675.

Full text
Abstract:
As "Voice over IP" has become more prevalent and many client and server applications have been designed for them, the VoIP industry has seen the need for faster, more capable systems to keep up. Traditionally, system speed-up has been achieved by increasing clock speeds but, conventional single-core CPU clock rates have peaked a few years ago due to very high power consumption and heating problems. Recently, system speed-up has been achieved by adding multiple processing cores to the same processor chip called multi-core processors. The existing VoIP applications cannot attain full benefit and efficiency of multi-core processors because of their sequential design. \VoIP paralleled client-server software for multicores" that can split up sequential code and run concurrently on multiple cores instead of trying to exploit single-core hardware is the solution. We have created a model of generic, open source paralleled VoIP-server (IOpen-VoIP) in C that suits multi-core and that can be used as a simulation tool. Furthermore, we have designed and implemented a tool for performance testing. It can be used for performance evaluation of IOpenVoIP and other SIP servers. The tool emulates thousands of communication sessions through a server. Performance testing can help developers to eliminate bottle necks in multi-core server design. On the other hand side, VoIP clients are not just used for voice and video communication over Internet. Along with audio and video they can carry other real time data i.e. patients ECG signals. Raw data is usually sent from one end and it is processed at other end which is a processor intensive task. We designed and implemented a graphical VoIP-Client which utilizes multi-core processors.
APA, Harvard, Vancouver, ISO, and other styles
24

Armanet, Francisco (1963-), and Ching-li Jimmy Wong (1958-). "A study of home informatics and a business plan for family information software." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9559.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management, 1999. Includes bibliographical references (leaf 105).
Information technology has been a dominating tool in many industries. Computer sales in the home segment account for 33%. Although Personal Computers are becoming more and more popular, home information management is relatively limited. According to the U.S. Bureau of the Census, 22.8% of households had a personal computer in 1993. By linear interpolation of the 1984, 1989, and 1993 data, PC penetration in US households should be 37% or higher by the year 2002. However, the 1998 survey conducted by the NTIA of the Department of Commerce revealed that household PC penetration had reached 36.6% in 1997. Bill Gates predicted that PC penetration in households will reach 60% in the year 2001. The high penetration rate of PCs in the household did not reflect the success of an Enterprise Resource Planning (ERP) type of integrated solution for home IT applications. According to Forrester Research, "The basic windows metaphor is over 25 years old, and Microsoft is failing to advance the user interface. Vendors like Apple, MetaCreations, or America Online have the opportunity to graft distinct and consumer-focused interfaces on top of Windows" [Forrester Research, 1999]. As the PC platform reaches maturity, with stable processors and operating systems, there is a slowdown in PC-related technology innovation. In home informatics, we believe that there are three important issues related to the diffusion of family information management: 1. Home PC networks. 2. Home automation. 3. Family-oriented user interfaces. This study will investigate what hinders the development of home informatics, particularly in family information management. The focus has been placed on the user interface applications. A new concept for a PC user interface, Homesoft, is proposed with a prototype demonstration. With this new interface, PC manufacturers and Internet portal service providers can develop a more user-friendly system for American families. Homesoft will provide an integrated solution for domestic applications just like the Enterprise Resource Planning (ERP) system for enterprises. This proposed concept will include technical specifications for hardware (electronic pen and tablet), software (event planning, communication, time management, and information support systems), and their interfaces (simple and family-oriented). In addition to the technical concept, the market research and the financial plan with feasibility analysis are the essence of this research study. The business plan for Homesoft is created as an exercise in taking a software concept into a business development project.
APA, Harvard, Vancouver, ISO, and other styles
25

Ali, S. (Syed). "Creating and sustaining software knowledge." Master's thesis, University of Oulu, 2014. http://urn.fi/URN:NBN:fi:oulu-201405281551.

Full text
Abstract:
Software processes are a complex combination of technology and skill, highly dependent on human knowledge. Human involvement and the very nature of software development processes incur a high degree of volatility in software processes. Valuable knowledge is created and also lost as software organizations lean towards ‘agility’. Capturing and sustaining this knowledge in a readily available and usable form is very important in ensuring continued success for the organization. Software organizations relying on agile methods usually overlook the importance of knowledge sustenance, which can lead to the loss of valuable software knowledge. This thesis outlines the factors influencing knowledge transfer in today’s increasingly agile software world by carrying out participatory (active) observations in a product-based software organization. Knowledge representation forms (text / visual), software architecture, development practices and standards compliance affect how knowledge is sustained within a team and hence dictate the efficiency of transfer to new members.
APA, Harvard, Vancouver, ISO, and other styles
26

Korhonen, J. (Johanna). "Software piracy among university students." Master's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201706022476.

Full text
Abstract:
Software piracy has been an ongoing issue for decades and it has a tremendous economic impact on the software business. Software piracy seems to be especially common among young people, who often are technologically oriented and happen to be students. The purpose of this study was to map out what kind of role software piracy plays among the students of the University of Oulu. The goal was also to find out how common software piracy is and how software piracy rates differ among different subpopulations, as well as to find out the reasons for software piracy. The study was of a quantitative nature and a survey was conducted in order to gather data. In addition, a conceptual model was proposed whose purpose was to map out which factors influence attitudes and intentions regarding pirating software, and the questions of the survey were partly based on the conceptual model. The aforementioned survey was distributed to the students of the University of Oulu as an online link via email. This study examined and compared the demographic factors as well as the reasons and purposes behind software piracy. The results of this study indicate that age and gender have statistical significance with software piracy. When it came to reasons, expensiveness was the most significant reason, which was in line with previous literature. The study also investigated the views of university students regarding software piracy. The connections in the conceptual model were explored as well: the factors presented in the conceptual model were found to be correlated, although the strength of the correlation varied greatly. In addition, all of the connections in the model had statistical significance.
APA, Harvard, Vancouver, ISO, and other styles
27

Dunlop, Mark David. "Multimedia information retrieval." Thesis, University of Glasgow, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.358626.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Pieters, Minnaar. "Open source software and government policy in South Africa." Thesis, Stellenbosch : Stellenbosch University, 2008. http://hdl.handle.net/10019.1/2480.

Full text
Abstract:
Thesis (MA (Information Science. Socio-Informatics))--Stellenbosch University, 2009.
Open-source software is not something new; however, it has come into the spotlight in the last few years, mostly due to hyped initial cost savings of the Linux operating system. Consumers and businesses were made aware of shortcomings in the traditional proprietary software model and this has in turn created a surge in popularity of open-source. The migration to open-source requires efficient research of options available and thorough analysis of the migratory process through all levels of the organization. Initial independent cost analysis has not been conclusive, with unreliable, skewed results and below average performance due to poor implementation. The focus of this study is whether open-source software is a suitable alternative to current proprietary software packages utilized by the government sector.
APA, Harvard, Vancouver, ISO, and other styles
29

Palmé, Michael, and Felix Toppar. "Line of Code Software Metrics Applied to Novice Software Engineers." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-264120.

Full text
Abstract:
In the world of modern software engineering, certain metrics are used to measure the size and effort of projects. These metrics provide insight into how engineers work; however, when it comes to novice engineers there is little to no documentation. Without enough documentation, it becomes a problem to make predictions for projects involving novice software engineers, since there simply is not enough previous work in the area. The problem is that there is very little research available on how the efficiency of novice software engineers compares to that of more experienced software engineers. This makes it difficult to calculate predictions for software projects where novice engineers are involved. So how do novice engineers distribute their time and effort in a software development project? The purpose is to find out how time is distributed in a workplace involving novice software engineers, and further to learn more about the differences between how novice and experienced software engineers distribute their time and effort in a project. The goal of this thesis is to improve the understanding of how novice software engineers contribute to a software project. In this work, a case study has been done with two novice engineers at a workplace in order to learn more about how novice engineers contribute to a software project. In this case study, a quantitative research method was adopted, using the Line of Code software metric to document how the novice engineers distributed their time. The results of the case study showed that the two novice engineers spent far more time coding than planning and that they wrote code faster than the average experienced engineer.
The purpose of this work is to find out how the time spent on a software project is distributed at a workplace with newly graduated software engineers. The goal of this work is to improve the understanding of how newly graduated software engineers contribute to a software project. In this work, a case study has been conducted with two newly graduated software engineers to examine how they contribute to a software project. The case study follows these two engineers at a workplace where a quantitative study is carried out using the Line of Code software metric to document how the engineers distributed their time. The results of the case study showed that these two newly graduated engineers spent far more time coding than planning, and that they wrote code faster than the average experienced engineer.
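As an illustration of the kind of measurement involved, the following sketch collects a Lines-of-Code metric per commit from a local git repository via `git log --numstat`; it is an assumed setup, not the tooling used in the case study.

```python
# Minimal sketch (assumption, not the thesis's tooling) of collecting a
# Lines-of-Code metric per commit using "git log --numstat", which reports
# added/removed line counts per file for each commit.
import subprocess
from collections import defaultdict

def loc_per_commit(repo_path="."):
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--pretty=format:@%H"],
        capture_output=True, text=True, check=True).stdout
    totals = defaultdict(lambda: {"added": 0, "removed": 0})
    current = None
    for line in out.splitlines():
        if line.startswith("@"):
            current = line[1:]                     # new commit hash
        elif line.strip() and current:
            added, removed, _path = line.split("\t", 2)
            if added.isdigit() and removed.isdigit():   # binary files show "-"
                totals[current]["added"] += int(added)
                totals[current]["removed"] += int(removed)
    return dict(totals)

if __name__ == "__main__":
    for commit, counts in loc_per_commit().items():
        print(commit[:8], counts)
```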
APA, Harvard, Vancouver, ISO, and other styles
30

Paradis, Thomas. "Software-Defined Networking." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-143882.

Full text
Abstract:
Software-Defined Networking (SDN) is a paradigm in which routing decisions are taken by a control layer. In contrast to conventional network architectures, the control plane and forwarding plane are separated and communicate through standard protocols such as OpenFlow. Historically, network management was based on a layered approach, with each layer isolated from the others. SDN proposes a radically different approach by bringing the management of all these layers together into a single controller. It is therefore easy to obtain a unified management policy despite the complexity of current network requirements, while ensuring performance through the use of dedicated devices for the forwarding plane. Such an upheaval can meet the current challenges of managing increasingly dynamic networks imposed by the development of cloud computing and the increased mobility of everyday devices. Many solutions have emerged, but not all address the same issues, and not all are necessarily usable in a real environment. The purpose of this thesis is to study and report on existing solutions and technologies, as well as to build a demonstration prototype that presents the benefits of this approach. The project also includes an analysis of the risks posed by these technologies and their possible solutions.
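As a concrete illustration of the control-plane/forwarding-plane split, the sketch below shows a minimal OpenFlow 1.3 controller written with the Ryu framework that floods every packet, hub-style; the choice of Ryu and the hub behaviour are assumptions for illustration, not the prototype built in the thesis.

```python
# Minimal Ryu controller sketch (illustrative assumption, not the thesis's code):
# the control plane receives packet-in events over OpenFlow 1.3 and instructs the
# switch (forwarding plane) to flood each packet, hub-style.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class HubController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in(self, ev):
        msg = ev.msg
        dp = msg.datapath
        parser = dp.ofproto_parser
        ofp = dp.ofproto
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]    # forwarding decision
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        dp.send_msg(out)                                      # push decision to the data plane
```

Such an application would typically be started with `ryu-manager`, with OpenFlow switches configured to connect to the controller.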
APA, Harvard, Vancouver, ISO, and other styles
31

Ståhl, Björn. "Exploring Software Resilience." Licentiate thesis, Karlskrona : Blekinge Institute of Technology, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-00493.

Full text
Abstract:
Software has, for better or worse, become a core component in the structured management and manipulation of vast quantities of information, and is therefore central to many crucial services and infrastructures. However, hidden among the various benefits that the inclusion of software may bring is the potential for unwanted and unforeseen interactions, ranging from mere annoyances all the way up to full-blown catastrophes. Overcoming adversities of this nature is a challenge shared with other engineering ventures, and there are many developed strategies that work towards eliminating various kinds of disturbances, assuming that it is possible to apply such strategies correctly. One approach in this regard is to accept some anomalous behaviors as mere facts of life and make sure that the situations experienced are dealt with in an expeditious manner, while at the same time trying to discover, implement and improve safeguards that can lessen adverse consequences in the event of future problems; in short, to embed resilience. The work described in this thesis explores the foundations of software resilience, and thus covers the main resilience-enabling mechanisms, along with supporting tools, techniques and methods used to embed resilience. These instruments are dissected and analyzed from the perspective of stakeholders that have to operate on pre-existing, critical, large and heterogeneous subjects that are, to some extent, already up and running at the point of instrumentation. Finally, the thesis describes a demonstrator environment for self-healing activities in a partially damaged power grid, its construction details and the initial results of the study conducted in this environment.
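One of the simplest resilience-enabling mechanisms of the kind surveyed is supervision with automatic restart; the sketch below is an illustrative assumption, not code from the thesis or its power-grid demonstrator.

```python
# Minimal sketch of one resilience mechanism: supervision with automatic restart
# and back-off (illustrative only; the thesis surveys such mechanisms, it does not
# prescribe this code).
import subprocess
import sys
import time

def supervise(cmd, max_restarts=5, backoff_s=2.0):
    restarts = 0
    while restarts <= max_restarts:
        proc = subprocess.Popen(cmd)
        code = proc.wait()                   # block until the supervised service exits
        if code == 0:
            return 0                         # clean exit: nothing to recover from
        restarts += 1
        print(f"service failed (exit {code}), restart {restarts}/{max_restarts}",
              file=sys.stderr)
        time.sleep(backoff_s * restarts)     # back off to avoid a tight crash loop
    return 1

if __name__ == "__main__":
    sys.exit(supervise(["python3", "-c", "raise SystemExit(1)"]))
```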
APA, Harvard, Vancouver, ISO, and other styles
32

Rossi, Pablo Hernan. "Software design measures for distributed enterprise Information systems." RMIT University. Computer Science and Information Technology, 2004. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20081211.164307.

Full text
Abstract:
Enterprise information systems are increasingly being developed as distributed information systems. Quality attributes of distributed information systems, as in the centralised case, should be evaluated as early and as accurately as possible in the software engineering process. In particular, software measures associated with quality attributes of such systems should consider the characteristics of modern distributed technologies. Early design decisions have a deep impact on the implementation of distributed enterprise information systems and thus on the ultimate quality of the software as an operational entity. Because the distributed-software engineering process affords software engineers a number of design alternatives, it is important to develop tools and guidelines that can be used to assess and compare design artefacts quantitatively. This dissertation makes a contribution to the field of Software Engineering by proposing and evaluating software design measures for distributed enterprise information systems. In previous research, measures developed for distributed software have focused on code attributes and thus only provide feedback towards the end of the software engineering process. In contrast, this thesis proposes a number of specific design measures that provide quantitative information before the implementation. These measures capture attributes of the structure and behaviour of distributed information systems that are deemed important to assess their quality attributes, based on the analysis of the problem domain. The measures were evaluated theoretically and empirically as part of a well-defined methodology. On the one hand, we have followed a formal framework based on the theory of measurement, in order to carry out the theoretical validation of the proposed measures. On the other hand, the suitability of the measures, to be used as indicators of quality attributes, was evaluated empirically with a robust statistical technique for exploratory research. The data sets analysed were gathered after running several experiments and replications with a distributed enterprise information system. The results of the empirical evaluation show that most of the proposed measures are correlated to the quality attributes of interest, and that most of these measures may be used, individually or in combination, for the estimation of these quality attributes, namely efficiency, reliability and maintainability. The design of a distributed information system is modelled as a combination of its structure, which reflects static characteristics, and its behaviour, which captures complementary dynamic aspects. The behavioural measures showed slightly better individual and combined results than the structural measures in the experimentation. This was in line with our expectations, since the measures were evaluated as indicators of non-functional quality attributes of the operational system. On the other hand, the structural measures provide useful feedback that is available earlier in the software engineering process. Finally, we developed a prototype application to collect the proposed measures automatically and examined typical real-world scenarios where the measures may be used to make design decisions as part of the software engineering process.
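To illustrate what an early, pre-implementation design measure can look like, the toy sketch below counts remote operations per component in a hypothetical design model; the dissertation's actual measures and their validation are considerably richer.

```python
# Illustrative sketch only: a toy structural measure in the spirit described above,
# counting remote operations exposed per component in a hypothetical design model.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    remote_operations: list = field(default_factory=list)   # operations callable across nodes

def remote_operation_count(components):
    """A simple coupling/size-style indicator computed before any code exists."""
    return {c.name: len(c.remote_operations) for c in components}

if __name__ == "__main__":
    design = [
        Component("OrderService", ["placeOrder", "cancelOrder"]),
        Component("InventoryService", ["reserveStock"]),
    ]
    print(remote_operation_count(design))
```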
APA, Harvard, Vancouver, ISO, and other styles
33

Magdalinos, Christos. "An information retrieval tool for reverse software engineering." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=24025.

Full text
Abstract:
Information retrieval in large data spaces using formal, structure oriented patterns of features has many possible applications. We developed and studied a system that can be used to localize code segments in a program. The system is built using a generic and extensible object oriented framework and uses the Viterbi dynamic programming algorithm on simple Markov models to calculate a similarity measure between an abstractly described code segment and a possible instantiation of it in the program. The resulting system can be incorporated in a larger cooperative environment of CASE tools and can be used during the design recovery process to perform concept localization.
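The dynamic-programming step referred to above can be illustrated with a minimal Viterbi implementation over a toy Markov model; the states, observations and probabilities below are invented for illustration and are not the feature model used by the system.

```python
# Minimal Viterbi dynamic-programming sketch (illustrative; the thesis applies the
# same idea to score how well an abstract code-segment description matches a
# concrete code sequence).
def viterbi(observations, states, start_p, trans_p, emit_p):
    # best[t][s] = probability of the most likely state path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        best.append({}); back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states)
            best[t][s], back[t][s] = prob, prev
    # Recover the most likely path by following the back-pointers.
    last = max(states, key=lambda s: best[-1][s])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path, best[-1][last]

if __name__ == "__main__":
    states = ("loop", "assign")
    obs = ("for", "=", "=")
    start = {"loop": 0.6, "assign": 0.4}
    trans = {"loop": {"loop": 0.3, "assign": 0.7}, "assign": {"loop": 0.2, "assign": 0.8}}
    emit = {"loop": {"for": 0.9, "=": 0.1}, "assign": {"for": 0.05, "=": 0.95}}
    print(viterbi(obs, states, start, trans, emit))   # most likely state sequence and its score
```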
APA, Harvard, Vancouver, ISO, and other styles
34

Yang, Yunwen. "Software-implemented attack tolerance for critical information retrieval." Thesis, Durham University, 2004. http://etheses.dur.ac.uk/2838/.

Full text
Abstract:
The fast-growing reliance of our daily life upon online information services often demands an appropriate level of privacy protection as well as highly available service provision. However, most existing solutions have attempted to address these problems separately. This thesis investigates and presents a solution that provides both privacy protection and fault tolerance for online information retrieval. A new approach to Attack-Tolerant Information Retrieval (ATIR) is developed based on an extension of existing theoretical results for Private Information Retrieval (PIR). ATIR uses replicated services to protect a user's privacy and to ensure service availability. In particular, ATIR can tolerate any collusion of up to t servers for privacy violation and up to ƒ faulty (either crashed or malicious) servers in a system with k replicated servers, provided that k ≥ t + ƒ + 1 where t ≥ 1 and ƒ ≤ t. In contrast to other related approaches, ATIR relies on neither enforced trust assumptions, such as the use of tamper-resistant hardware and trusted third parties, nor an increased number of replicated servers. While the best solution known so far requires k (≥ 3t + 1) replicated servers to cope with t malicious servers and any collusion of up to t servers with an O(n^*^) communication complexity, ATIR uses fewer servers with a much improved communication cost, O(n^(1/2)) (where n is the size of a database managed by a server). The majority of current PIR research resides at a theoretical level. This thesis provides both theoretical schemes and their practical implementations with good performance results. In a LAN environment, it takes well under half a second to use an ATIR service for calculations over data sets with a size of up to 1MB. The performance of the ATIR systems remains at the same level even in the presence of server crashes and malicious attacks. Both analytical results and experimental evaluation show that ATIR offers an attractive and practical solution for ever-increasing online information applications.
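For background, the sketch below shows the classical two-server XOR-based private information retrieval query on which replicated-server schemes build; it is illustrative only and is not the ATIR protocol, which additionally tolerates faulty servers and achieves a lower communication cost.

```python
# Illustrative 2-server XOR-based PIR sketch (the classical linear-communication
# scheme that replicated-server PIR/ATIR approaches generalise; NOT the ATIR
# protocol itself, which adds fault tolerance and a better communication cost).
import secrets

def server_answer(database, query_bits):
    # Each server XORs together the records selected by its query vector.
    answer = 0
    for record, bit in zip(database, query_bits):
        if bit:
            answer ^= record
    return answer

def retrieve(database, index):
    n = len(database)
    q1 = [secrets.randbelow(2) for _ in range(n)]   # uniformly random bit vector
    q2 = q1.copy()
    q2[index] ^= 1                                   # differs only at the wanted index
    a1 = server_answer(database, q1)                 # answer from server 1
    a2 = server_answer(database, q2)                 # answer from server 2
    return a1 ^ a2                                   # XOR cancels everything but db[index]

if __name__ == "__main__":
    db = [0x1A, 0x2B, 0x3C, 0x4D]
    assert retrieve(db, 2) == db[2]                  # neither server alone learns the index
    print("retrieved record:", hex(retrieve(db, 2)))
```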
APA, Harvard, Vancouver, ISO, and other styles
35

Suomela, R. (Riku). "Using lean principles to improve software development practices in a large-scale software intensive company." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201511212155.

Full text
Abstract:
Lean software development is the result of adapting lean principles from the manufacturing context to the software development domain. Recently, the various applications of lean software development have been studied but more empirical evidence is needed, especially from the practitioners’ point of view. Firstly, this thesis provides answers for the understanding of lean software development from the practitioners’ point of view. Secondly, this thesis provides answers on the opportunities and barriers in applying the lean software development. In order to study this, a case study was conducted in a large-scale software intensive company. Focus groups were conducted to collect qualitative data. Studying the understanding of lean software development showed that four of the seven lean software development principles were identifiable from the discussion in the focus group sessions. The difference between agile and lean was recognized. The opportunities in achieving a culture of continuous improvement and involving people in the transformation were found and can be also identified from the existing research. Some new opportunities were also identified, such as using informal code-reviews as a practice in development and focusing improvements on the activities that consume the most time in the day-to-day work. The barriers that were found, such as avoiding sub-optimization, facilitation of improvement and having time to experiment, can also be identified from the existing research. Some of the barriers not identifiable from the existing research were the lack of quality thinking and varying standards in gate keeping. The findings of this study were presented in the case company with positive feedback and were discussed to be included into future improvement initiatives. This study also identified the power of the focus group method as a tool that could be used to drive improvement work. Suggested directions for future research include studying lean software development in a similar case study and taking a look at the possibilities of using focus group method as a tool for driving improvement initiatives in software development companies.
APA, Harvard, Vancouver, ISO, and other styles
36

Lindström, Birgitta. "Methods for Increasing Software Testability." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-494.

Full text
Abstract:
We present a survey of current methods for improving software testability. It is a well-known fact that testing accounts for 50% or more of software development costs. Hence, methods to improve testability, i.e., to reduce the effort required for testing, have the potential to decrease development costs. The test effort needed to reach a sufficient level of confidence in the system depends on the number of possible test cases, i.e., the number of possible combinations of system states and event sequences. Each such combination results in an execution order. Properties of the execution environment that affect the number of possible execution orders can therefore also affect testability. Which execution orders are possible depends on processor scheduling and concurrency control policies. Current methods for improving testability are investigated and their properties with respect to processor scheduling and concurrency control analyzed; in particular, their impact on the number of possible test cases is discussed. The survey revealed that (i) there are few methods which explicitly address testability, and (ii) methods that concern the execution environment suggest a time-triggered design. It has previously been shown that the effort to test an event-triggered real-time system is inherently higher than that of testing a time-triggered real-time system: due to the dynamic nature of an event-triggered system, the number of possible execution orders is high. A time-triggered design is, however, not always suitable. The survey thus reveals an open research area for methods concerning the improvement of testability in event-triggered systems. Moreover, a survey and analysis of processor scheduling and concurrency control properties and their effect on testability is presented. Methods are classified into different categories that are shown to separate software into different levels of testability. These categories can form the basis of a taxonomy for testability. Such a taxonomy has the potential to be used by system designers and enable them to make informed trade-off decisions.
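The combinatorial growth in execution orders mentioned above can be illustrated with a short calculation: the number of interleavings of independent task steps is a multinomial coefficient, as in the sketch below (an illustration, not part of the survey).

```python
# Small illustration of the execution-order explosion discussed above: the number
# of ways the atomic steps of independent tasks can interleave is a multinomial
# coefficient, which grows very quickly with the number of steps and tasks.
from math import factorial

def interleavings(step_counts):
    total = factorial(sum(step_counts))
    for k in step_counts:
        total //= factorial(k)
    return total

if __name__ == "__main__":
    print(interleavings([3, 3]))      # two tasks, 3 steps each  -> 20 possible orders
    print(interleavings([5, 5, 5]))   # three tasks, 5 steps each -> 756756 possible orders
```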
APA, Harvard, Vancouver, ISO, and other styles
37

De Alwis, Brian. "Supporting conceptual queries over integrated sources of program information." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/695.

Full text
Abstract:
A software developer explores a software system by asking and answering a series of questions. To answer these questions, a developer may need to consult various sources providing information about the program, such as the static relationships expressed directly in the source code, the run-time behaviour of a program recorded in a dynamic trace, or evolution history as recorded in a source management system. Despite the support afforded by software exploration tools, developers often struggle to find the necessary information to answer their questions and may even become disoriented, where they feel mentally lost and are uncertain of what they were trying to accomplish. This dissertation advances a thesis that a developer's questions, which we refer to as conceptual queries, can be better supported through a model to represent and compose different sources of information about a program. The basis of this model is the sphere, which serves as a simple abstraction of a source of information about a program. Many of the software exploration tools used by a developer can be represented as a sphere. Spheres can be composed in a principled fashion such that information from a sphere may replace or supplement information from a different sphere. Using our sphere model, for example, a developer can use dynamic runtime information from an execution trace to replace information from the static source code to see what actually occurred. We have implemented this model in a configurable tool, called Ferret. We have used the facilities provided by the model to implement 36 conceptual queries identified from the literature, blogs, and our own experience, and to support the integration of four different sources of program information. Establishing correspondences between similar elements from different spheres allows a query to bridge across different spheres in addition to allowing a tool's user interface to drive queries from other sources of information. Through this effort we show that the sphere model broadens the set of possible conceptual queries answerable by software exploration tools. Through a small diary study and a controlled experiment, both involving professional software developers, we found the developers used the conceptual queries that were available to them and reported finding Ferret useful.
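The sphere idea can be illustrated with a small sketch in which each sphere answers the same query interface and a composition prefers one sphere's answer while falling back on another; the class and method names below are hypothetical and do not reflect Ferret's actual API.

```python
# Illustrative sketch of the sphere idea (hypothetical names, not Ferret's API):
# each sphere answers program queries, and a composition lets one sphere's
# information replace or supplement another's.
class Sphere:
    def callers_of(self, method):
        raise NotImplementedError

class StaticSphere(Sphere):
    def __init__(self, call_graph):            # e.g. (caller, callee) edges from source code
        self.call_graph = call_graph
    def callers_of(self, method):
        return {c for c, callee in self.call_graph if callee == method}

class DynamicSphere(Sphere):
    def __init__(self, trace):                 # e.g. (caller, callee) pairs from a recorded run
        self.trace = trace
    def callers_of(self, method):
        return {c for c, callee in self.trace if callee == method}

class ComposedSphere(Sphere):
    """Prefer the primary sphere's answer; fall back to the secondary one."""
    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary
    def callers_of(self, method):
        answer = self.primary.callers_of(method)
        return answer if answer else self.secondary.callers_of(method)

if __name__ == "__main__":
    static = StaticSphere({("A.run", "B.save"), ("C.main", "B.save")})
    dynamic = DynamicSphere({("A.run", "B.save")})      # only what actually executed
    print(ComposedSphere(dynamic, static).callers_of("B.save"))   # {'A.run'}
```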
APA, Harvard, Vancouver, ISO, and other styles
38

Korhonen, J. (Johanna). "Piracy prevention methods in software business." Bachelor's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201605131733.

Full text
Abstract:
There are various forms of piracy in software business, and many prevention techniques have been developed against them. Forms of software piracy are, for example, cracks and serials, softlifting and hard disk loading, internet piracy and software counterfeiting, mischanneling, reverse engineering, and tampering. There are various prevention methods that target these types of piracy, although all of these methods have been broken. The piracy prevention measures can be divided into ethical, legal, and technical measures. Technical measures include measures like obfuscation and tamper-proofing, for example. However, relying on a single method does not provide complete protection from attacks against intellectual property, so companies wishing to protect their product should consider combining multiple methods of piracy prevention.
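As a toy illustration of one technical measure named above, the sketch below performs a tamper-proofing style integrity self-check; production schemes embed and protect the expected digest far more robustly, and no single measure provides complete protection.

```python
# Toy illustration of one technical measure mentioned above: tamper-proofing via an
# integrity self-check (real schemes are far more robust and harder to strip out).
import hashlib
import sys

EXPECTED_DIGEST = None   # in practice embedded/obfuscated at build time (assumption)

def file_digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_self():
    digest = file_digest(sys.argv[0])            # hash the running script/binary
    if EXPECTED_DIGEST is not None and digest != EXPECTED_DIGEST:
        print("integrity check failed: refusing to run")
        sys.exit(1)

if __name__ == "__main__":
    verify_self()
    print("integrity check passed (or not configured)")
```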
APA, Harvard, Vancouver, ISO, and other styles
39

Moses, John. "Cohesion prediction using information flow." Thesis, University of Sunderland, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387492.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Mattila, T. (Tuomo). "Virtualization based software management for edge gateway." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201609142779.

Full text
Abstract:
Scalability, heterogeneity, and device life-cycle are among the key concerns regarding Internet of Things (IoT) system development and deployment. Consequently, such systems require robust solutions for managing gateway-resident services as the systems evolve while being deployed at large scale. This thesis studies kernel-level virtualization in the context of designing and implementing a software management system for use in IoT edge gateways. A management system is implemented for a gateway which is deployed to several households, where the gateways act as base stations for smart electricity plugs. The gateway is designed to be used in a private household network, connecting to the Internet using the household's own connection without modifications to router settings and with no configuration required from the homeowner. The implementation runs on the GNU/Linux operating system on the Raspberry Pi 2 single-board computer. It uses Docker as the virtualization system, Ansible for host-level configuration management and MQTT as the primary protocol for communication with cloud-resident management services. The gateway was deployed to a number of households in the pre-pilot phase of the Flex4Grid project. During the pre-pilot, the implemented software management system provided a convenient way of deploying and maintaining middleware packages running inside the gateways, as well as being capable of updating itself as required.
Scalability, heterogeneity and device life-cycle are among the key questions in the development and deployment of Internet of Things (IoT) systems. Consequently, such systems need dependable solutions for managing the services run on gateway devices as widely deployed systems evolve. This master's thesis studies operating-system kernel-level virtualization in the context of designing and implementing a software management system for IoT edge gateway devices. A management system is implemented for a gateway device of the kind deployed in several households to act as a base station for smart plugs. The gateway operates in the households' private local networks, using each household's own Internet connection without requiring changes to home router settings. The implemented system runs on the GNU/Linux operating system on a Raspberry Pi 2 device and uses the Docker virtualization system, the Ansible configuration management tool and the MQTT communication protocol for communicating with cloud services. The gateway device was used in numerous households during the pre-pilot phase of the Flex4Grid project. During the pre-pilot, the implemented software management system provided a smooth mechanism for remotely installing and updating the middleware run on the gateway devices, and was also able to update itself at runtime when needed.
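The sketch below illustrates the kind of gateway-side management loop described: an MQTT message triggers pulling a new container image and recreating the middleware container via the Docker SDK for Python. The broker address, topic, and image and container names are assumptions, not details of the Flex4Grid deployment.

```python
# Minimal sketch (hypothetical topic, broker, image and container names; not the
# Flex4Grid implementation) of a gateway-side update loop: an MQTT message from the
# cloud triggers pulling a new middleware image and recreating its container.
import docker
import paho.mqtt.client as mqtt

DOCKER = docker.from_env()
UPDATE_TOPIC = "gateway/middleware/update"      # hypothetical topic

def on_message(client, userdata, msg):
    image = msg.payload.decode()                # e.g. "registry.example.com/middleware:1.2"
    DOCKER.images.pull(image)                   # fetch the new version first
    try:
        old = DOCKER.containers.get("middleware")
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass                                    # first deployment on this gateway
    DOCKER.containers.run(image, name="middleware", detach=True,
                          restart_policy={"Name": "always"})

def main():
    client = mqtt.Client()                      # paho-mqtt 1.x-style client construction
    client.on_message = on_message
    client.connect("broker.example.com", 1883)  # hypothetical cloud broker
    client.subscribe(UPDATE_TOPIC)
    client.loop_forever()                       # block, handling updates as they arrive

if __name__ == "__main__":
    main()
```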
APA, Harvard, Vancouver, ISO, and other styles
41

Krone, Joan. "The role of verification in software reusability /." The Ohio State University, 1988. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487594970650373.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Furnell, Steven Marcus. "Data security in European healthcare information systems." Thesis, University of Plymouth, 1995. http://hdl.handle.net/10026.1/411.

Full text
Abstract:
This thesis considers the current requirements for data security in European healthcare systems and establishments. Information technology is being increasingly used in all areas of healthcare operation, from administration to direct care delivery, with a resulting dependence upon it by healthcare staff. Systems routinely store and communicate a wide variety of potentially sensitive data, much of which may also be critical to patient safety. There is consequently a significant requirement for protection in many cases. The thesis presents an assessment of healthcare security requirements at the European level, with a critical examination of how the issue has been addressed to date in operational systems. It is recognised that many systems were originally implemented without security needs being properly addressed, with a consequence that protection is often weak and inconsistent between establishments. The overall aim of the research has been to determine appropriate means by which security may be added or enhanced in these cases. The realisation of this objective has included the development of a common baseline standard for security in healthcare systems and environments. The underlying guidelines in this approach cover all of the principal protection issues, from physical and environmental measures to logical system access controls. Further to this, the work has encompassed the development of a new protection methodology by which establishments may determine their additional security requirements (by classifying aspects of their systems, environments and data). Both the guidelines and the methodology represent work submitted to the Commission of European Communities SEISMED (Secure Environment for Information Systems in MEDicine) project, with which the research programme was closely linked. The thesis also establishes that healthcare systems can present significant targets for both internal and external abuse, highlighting a requirement for improved logical controls. However, it is also shown that the issues of easy integration and convenience are of paramount importance if security is to be accepted and viable in practice. Unfortunately, many traditional methods do not offer these advantages, necessitating the need for a different approach. To this end, the conceptual design for a new intrusion monitoring system was developed, combining the key aspects of authentication and auditing into an advanced framework for real-time user supervision. A principal feature of the approach is the use of behaviour profiles, against which user activities may be continuously compared to determine potential system intrusions and anomalous events. The effectiveness of real-time monitoring was evaluated in an experimental study of keystroke analysis -a behavioural biometric technique that allows an assessment of user identity from their typing style. This technique was found to have significant potential for discriminating between impostors and legitimate users and was subsequently incorporated into a fully functional security system, which demonstrated further aspects of the conceptual design and showed how transparent supervision could be realised in practice. The thesis also examines how the intrusion monitoring concept may be integrated into a wider security architecture, allowing more comprehensive protection within both the local healthcare establishment and between remote domains.
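The keystroke-analysis idea can be sketched as profiling a user's typical inter-keystroke latencies and scoring how far a new session deviates from that profile; the features and thresholds below are simplified assumptions, not the monitoring system developed in the thesis.

```python
# Minimal sketch of the keystroke-analysis idea described above: profile a user's
# typical inter-keystroke latencies and flag sessions that deviate from the profile.
# Features and thresholds are deliberately simplified assumptions.
from statistics import mean, stdev

def build_profile(latencies_ms):
    return {"mean": mean(latencies_ms), "std": stdev(latencies_ms)}

def anomaly_score(profile, session_latencies_ms):
    """Average absolute z-score of a session against the stored profile."""
    std = profile["std"] or 1.0
    return mean(abs(x - profile["mean"]) / std for x in session_latencies_ms)

if __name__ == "__main__":
    enrolment = [112, 98, 105, 120, 101, 95, 108, 115]      # legitimate user's timings (ms)
    profile = build_profile(enrolment)
    print(anomaly_score(profile, [110, 100, 104, 118]))      # low score: same typist
    print(anomaly_score(profile, [230, 260, 210, 240]))      # high score: likely impostor
```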
APA, Harvard, Vancouver, ISO, and other styles
43

Rodgers, Thomas Lee. "Software inspections: Collaboration and feedback." Diss., The University of Arizona, 1999. http://hdl.handle.net/10150/289015.

Full text
Abstract:
This dissertation studies the impact of collaboration and feedback on software inspection productivity. Used as a software-engineering validation technique, software inspections can be a cost-effective method for identifying latent issues (defects) within design documents and program code. For over two years, Baan Company has used a generalized Electronic Meeting System (EMS, also referred to as GroupWare) to support software inspections and reported EMS to be more productive than face-to-face paper-based inspections (Genuchten, 1998). Validation of this phenomenon and initial development of a potentially more effective specialized EMS (SEMS) tool is the basis of this dissertation. Explanations of the collaborative phenomenon are presented within a theoretical framework along with testable hypotheses. The framework is derived from Media Synchronicity Theory (Dennis and Valacich) and Focus Theory of Productivity (Briggs and Nunamaker). Two main research questions are explored. (1) Do collaboration tools improve software inspection productivity? (2) Can feedback dimensions that significantly improve productivity be identified and incorporated within software inspections? The first research question is supported. In a detailed reevaluation of the Baan study, EMS inspections are shown to be 32% more efficient than paper-based inspections. During the subsequent period, the results were more pronounced with EMS inspections being 66% more efficient even controlling for inspector proficiency. Significantly more conveying communication than convergent communication occurs during inspection meetings. EMS inspections enable more deliberation, less attention for communication, and more attention for information access compared to face-to-face paper-based inspections. The second research question is explored. Surveys and analysis probe some previously unexplored feedback dimensions (review rate, inspector proficiency and inspection process maturity). Experienced inspectors are surveyed regarding process maturity, inspector proficiency, and collaborative aspects of inspections. Preparation and review rates are necessary but not sufficient to explain productivity. Inspector proficiency is perceived to be important and multi-dimensional. Participation by highly proficient inspectors resulted in 49-76% more effective inspections. Significant inspection process variations exist within mature development organizations. Based on theory and experiences, the SEMS inspection tool is developed and a quasi-experiment proposed. Initial results using the SEMS inspection tool are reported and suggestions made for future enhancements.
APA, Harvard, Vancouver, ISO, and other styles
44

Ortolan, Riccardo. "Software engineering of Arduino based art systems." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-14138.

Full text
Abstract:
The pursuit of user satisfaction in digital media is raising new questions and challenges concerning the interactivity relationship between creator and audience. In this work, interactivity is defined as a technology attribute that endows a media environment with the capability of reciprocal communication between user and technology, through the technology. What are the key focus areas for managing a technology-based art project? What I propose is a new layer of interaction, in which the user is viewed as part of the interactive installation, prompted by its pro-active behavior and redefined as a creative source. In this dimension, in addition to the language of the artist, what also changes is the perspective of use of the work of art: the user is now a living part of every creation, contributing to changing its characteristics each time. Thanks to technology, it becomes possible to completely revolutionize the way we conceive and design any type of cultural experience and to create spaces for an absolutely innovative use. This thesis engineers the artistic Arduino-based installation ArTime in order to make it into a stable system that can function in museums and exhibitions, experimenting with this new layer of interaction using scientific approaches.
APA, Harvard, Vancouver, ISO, and other styles
45

Stokes, Todd Hamilton. "Development of a visualization and information management platform in translational biomedical informatics." Diss., Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/33967.

Full text
Abstract:
Translational Biomedical Informatics (TBMI) is an emerging discipline expanding beyond traditional bioinformatics, with a focus on developing computational technologies for real-world biomedical practice. The goal of my Ph.D. research is to address a few key challenges in TBMI, including: (1) the high quality and reproducibility required by medical applications when processing high-throughput data, (2) the need for knowledge management solutions that allow molecular data to be handled and evaluated by researchers, regulators, and doctors collectively, (3) the need for near real-time, efficient access to decision-oriented visualizations of integrated data and data processing results, and (4) the need for an integrated solution that can evolve as medical consensus evolves, without requiring retraining, overhaul or replacement. This dissertation resulted in the development and adoption of concrete web-based application deliverables in regular use by bioinformaticians, clinicians, biologists and nanotechnologists. These include: the Chip Artifact Correction (caCORRECT) web site and grid services, the ArrayWiki community microarray repository, and the SimpleVisGrid visualization grid services (including eGOMiner, nanoDRIVE, PathwayVis and SphingoVisGrid).
APA, Harvard, Vancouver, ISO, and other styles
46

Aslam, Gulshan, and Faisal Farooq. "A comparative study on Traditional Software Development Methods and Agile Software Development Methods." Thesis, Tekniska Högskolan, Högskolan i Jönköping, JTH. Forskningsområde Informationsteknik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-15383.

Full text
Abstract:
Software development methods are widely discussed, and they fall into two main categories: agile software development methods and traditional software development methods. Agile methods are generally considered quick and suited to small teams. Our aim was to examine how these two categories compare, so we surveyed practitioners in the software development industry about their satisfaction with the methods they use. The research investigates which methods practitioners find suitable, the challenges in adopting them, and which methods are quicker. The study was carried out through a survey questionnaire, and the results were analysed using a mixed-method approach. The results show that practitioners of both types of methods are satisfied, but practitioners using traditional methods are more satisfied with respect to developing quality software, whereas agile practitioners are more satisfied with respect to better communication with their customers. From an agility point of view, our study indicates that both categories have characteristics that support agility, but neither supports it fully, so features from both types of methodologies may need to be combined.
APA, Harvard, Vancouver, ISO, and other styles
47

Saleh, Mehdi. "Built-in software quality in Agile development." Thesis, Uppsala universitet, Institutionen för informatik och media, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-413344.

Full text
Abstract:
Waterfall and Agile are the two most popular software development methodologies. Waterfall, the traditional one, is a progressive method: progress flows in one direction, with each phase starting upon the completion of the previous one. Agile is a software development methodology with a more iterative approach, allowing changing requirements and incremental delivery. Agile introduces freedom to change requirements and to deliver iteratively; however, such liberty should not degrade software quality, especially in the automotive industry, which deals with human safety and security. In such an iterative environment there is a higher risk of compromising on quality checks before each delivery due to short, intense lead times, and there will not be enough time for intensive quality assurance activities after each iteration. The solution for reducing the intensive quality checks after each iteration is to improve quality during development and to build quality into the development processes. This is what we refer to as "built-in quality": quality that is built in continuously while software artefacts are developed. This study was conducted at Volvo Cars during its agile transformation, and the main objective was to connect and emphasize the importance of built-in quality in agile software development. In this study we look at existing challenges that decrease the quality of software artifacts during development; by overcoming those challenges we can improve software quality during iteration delivery, which in turn decreases the amount of intensive quality checking needed after each iteration. Additionally, we look at guidelines and tools used by different development teams to improve the quality of software artifacts, and we investigate how quality assurance engineers can support built-in quality during an agile transformation.
APA, Harvard, Vancouver, ISO, and other styles
48

Begic, Amela, and Malin Almstedt. "Hur används Lean Software Development i praktiken?" Thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-85800.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

BARROS FILHO, Roberto Souto Maior de. "Using information flow to estimate interference between same-method contributions." Universidade Federal de Pernambuco, 2017. https://repositorio.ufpe.br/handle/123456789/24884.

Full text
Abstract:
In a collaborative software development environment, developers often implement their contributions (or tasks) independently using local versions of the files of a system. However, contributions from different developers need to be integrated (merged) into a central version of the system, which may lead to different types of conflicts such as syntactic, static semantic or even dynamic semantic conflicts. The first two types are more easily identifiable as they lead to syntactically incorrect programs and to programs with compilation problems, respectively. On the other hand, dynamic semantic conflicts, which may be caused by subtle dependencies between contributions, may not be noticed during the integration process. This type of conflict alters the expected behaviour of a system, leading to bugs. Thus, failing to detect dynamic semantic conflicts may affect a system's quality. Hence, this work's main goal is to understand if Information Flow Control (IFC), a security technique used for discovering leaks in software, could be used to indicate the presence of dynamic semantic conflicts between developers' contributions in merge scenarios. However, as defining if a dynamic semantic conflict exists involves understanding the expected behaviour of a system, and as such behavioural specifications are often hard to capture, formalize and reason about, we instead try to detect a code-level adaptation of the notion of interference from Goguen and Meseguer. Actually, we limit our scope to interference caused by developers' contributions to the same method. More specifically, we want to answer if the existence of information flow between developers' same-method contributions in a merge scenario can be used to estimate the existence of interference. Therefore, we conduct an evaluation to understand if information flow may be used to estimate interference. In particular, we use Java Object-sensitive ANAlysis (JOANA) to do the IFC for Java programs. JOANA does the IFC of Java programs by using a System Dependence Graph (SDG), a directed graph representing the information flow through a program. As JOANA accepts different options of SDG, we first establish which of these SDG options (instance based without exceptions) is the most appropriate to our context. Additionally, we bring evidence that information flow between developers' same-method contributions occurred for around 64% of the scenarios we evaluated. Finally, we conducted a manual analysis, on 35 scenarios with information flow between developers' same-method contributions, to understand the limitations of using information flow to estimate interference between same-method contributions. From the 35 analysed scenarios, for only 15 we considered that an interference in fact existed. We found three different major reasons for detecting information flow and no interference: cases related to the nature of changes, to excessive annotation from our strategy and to the conservativeness of the flows identified by JOANA. We conclude that information flow may be used to estimate interference, but, ideally, the number of false positives should be reduced. In particular, we envisage room for solving around three quarters of the obtained false positives.
In a collaborative development environment, developers often implement their contributions independently using local versions of a system's files. However, contributions from different developers need to be integrated into a central version of the system, which may lead to different types of integration conflicts such as syntactic, static semantic, or even dynamic semantic conflicts. The first two types are easier to identify since they lead to syntactically incorrect programs and to compilation errors, respectively. On the other hand, dynamic semantic conflicts, which are generally caused by subtle dependencies between contributions, may go unnoticed during the integration process. This type of conflict alters the expected behaviour of a system, which leads to bugs. Therefore, failing to detect these conflicts may negatively affect a system's quality. With this in mind, the main goal of this work is to understand whether Information Flow Control (IFC), a security technique used to discover security leaks in software, can be used to indicate the presence of dynamic semantic conflicts between contributions in merge scenarios. However, defining whether a dynamic semantic conflict exists involves understanding the expected behaviour of a system. Since such behavioural specifications are generally hard to capture, formalize and understand, we instead use a code-level adaptation of Goguen and Meseguer's notion of interference. In fact, we limit our scope to interference caused by developers' contributions to the same methods. Specifically, we want to answer whether the existence of information flow between two contributions in the same method can be used to estimate the existence of interference. Therefore, we conducted an evaluation to understand whether information flow can be used to estimate interference. In particular, we used Java Object-sensitive ANAlysis (JOANA) to perform IFC of Java programs. JOANA performs IFC of these programs using a structure called a System Dependence Graph (SDG), a directed graph representing the information flow in a program. Since JOANA accepts different SDG options, we first established which of these is the most appropriate for our context. Additionally, we bring evidence that information flow between developers' same-method contributions occurred for around 64% of the scenarios we evaluated. Finally, we performed a manual analysis of 35 merge scenarios with information flow between same-method contributions to understand the limitations of using information flow to estimate interference between contributions. Of the 35 analysed scenarios, for only 15 did we consider that interference actually existed. We found three main reasons for information flow being detected with no interference: cases related to the nature of the changes, to limitations of our annotation strategy, and to the conservative nature of the flows identified by JOANA. We conclude that information flow can be used to estimate interference but, ideally, the number of false positives needs to be reduced. In particular, we see room for reducing up to three quarters of the false positives.
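The underlying check can be sketched as a reachability question over a dependence graph: does any statement changed by one developer reach a statement changed by the other? The toy graph below is hypothetical and far simpler than JOANA's SDG.

```python
# Tiny sketch of the underlying check (hypothetical toy graph, not JOANA's SDG):
# model statements as nodes in a dependence graph and report information flow if any
# node changed by developer A reaches a node changed by developer B.
from collections import deque

def flows(dependence_edges, sources, sinks):
    graph = {}
    for src, dst in dependence_edges:            # data/control dependences
        graph.setdefault(src, []).append(dst)
    seen, queue = set(sources), deque(sources)
    while queue:                                 # BFS over dependence edges
        node = queue.popleft()
        if node in sinks:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

if __name__ == "__main__":
    edges = [("a1", "m1"), ("m1", "m2"), ("m2", "b1")]     # a1 -> ... -> b1
    print(flows(edges, sources={"a1"}, sinks={"b1"}))      # True: A's change reaches B's
    print(flows(edges, sources={"b1"}, sinks={"a1"}))      # False: no flow in that direction
```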
APA, Harvard, Vancouver, ISO, and other styles
50

Lehel, Vanda. "User-centered social software model and characteristics of a software family for social information management." Saarbrücken VDM Verlag Dr. Müller, 2007. http://d-nb.info/989330087/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles