To see the other types of publications on this topic, follow the link: Requirements Specification Language.

Dissertations / Theses on the topic 'Requirements Specification Language'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 22 dissertations / theses for your research on the topic 'Requirements Specification Language.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Thongglin, Kanjana. "Controlled language for Thai software requirements specification." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA1003.

Full text
Abstract:
This thesis focuses on using controlled language for Thai software requirements specifications. The study describes the ambiguities and problems encountered in Thai software requirements specifications, both syntactic ambiguity and semantic ambiguity. The study also describes the nature of the Thai language. The model of controlled language for Thai software requirements specifications is composed of three main components: lexical analysis, syntactic analysis, and semantic analysis. For syntactic analysis, a controlled syntax is created using Backus-Naur Form (BNF). In the lexical analysis stage, an XML-format lexical resource is built to store words according to their domain. The words received from the XML resource are conceptually correct but may be semantically irrelevant. To solve this issue, the model applies Boolean matrices to align sentences semantically. As a result, the sentences produced from the model are guaranteed to be syntactically and semantically correct. After having created this model, a program for testing the efficiency of the model is developed. The model is evaluated using four testing methods as follows: 1. functional testing for the correctness of the sentence's syntax; 2. functional testing for the semantic correctness of the sentences produced by the model; 3. acceptance testing in terms of user satisfaction with the program; and 4. acceptance testing in terms of the validity of the outputs. The positive results signify that: 1. the sentences produced by the proposed model are syntactically correct; 2. the sentences produced by the proposed model are semantically correct; 3. the users are satisfied and accept the software created; and 4. the users approve and understand the sentences produced from this model.
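The Boolean-matrix step described in the abstract can be pictured as follows. This is a minimal sketch under invented data: the subjects, verbs, and admissibility matrix are illustrative assumptions, not the lexicon or model from the thesis.

```python
# Hypothetical sketch: a Boolean matrix over (subject, verb) pairs marks
# which combinations are semantically admissible, and candidate sentences
# are filtered accordingly. All names and data here are illustrative.

SUBJECTS = ["user", "system", "database"]
VERBS = ["click", "store", "display"]

# admissible[i][j] is True when SUBJECTS[i] may combine with VERBS[j]
admissible = [
    [True,  False, False],  # user: may click
    [False, True,  True],   # system: may store, display
    [False, True,  False],  # database: may store
]

def is_semantically_valid(subject: str, verb: str) -> bool:
    """Return True if the (subject, verb) pair is marked admissible."""
    i = SUBJECTS.index(subject)
    j = VERBS.index(verb)
    return admissible[i][j]

def filter_sentences(pairs):
    """Keep only the semantically admissible (subject, verb) pairs."""
    return [(s, v) for s, v in pairs if is_semantically_valid(s, v)]
```

A syntactically well-formed pair such as ("user", "store") would pass a BNF check but be rejected here, which is exactly the gap the semantic-analysis stage is said to close.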
APA, Harvard, Vancouver, ISO, and other styles
2

Sidky, Ahmed Samy. "RGML: A Specification Language that Supports the Characterization of Requirements Generation Processes." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/34237.

Full text
Abstract:
Despite advancements in requirements generation models, methods and tools, low quality requirements are still being produced. One potential avenue for addressing this problem is to provide the requirements engineer with an interactive environment that leads (or guides) him/her through a structured set of integrated activities that foster "good" quality requirements. While that is our ultimate goal, a necessary first step in developing such an environment is to create a formal specification mechanism for characterizing the structure, process flow and activities inherent to the requirements generation process. In turn, such specifications can serve as a basis for developing an interactive environment supporting requirements engineering. Reflecting the above need, we have developed a markup language, the Requirements Generation Markup Language (RGML), which can be used to characterize a requirements generation process. The RGML can describe process structure, flow of control, and individual activities. Within activities, the RGML supports the characterization of application instantiation, the use of templates and the production of artifacts. The RGML can also describe temporal control within a process as well as conditional expressions that control if and when various activity scenarios will be executed. The language is expressively powerful, yet flexible in its characterization capabilities, and thereby, provides the capability to describe a wide spectrum of different requirements generation processes.
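Since RGML is a markup language characterizing process structure, activities, and artifacts, a process specification in its spirit might be traversed as below. The element and attribute names are assumptions for illustration, not the actual RGML vocabulary.

```python
import xml.etree.ElementTree as ET

# Illustrative only: the element names below are assumptions, not the
# actual RGML vocabulary described in the thesis.
SAMPLE_PROCESS = """
<process name="elicitation">
  <activity name="interview">
    <artifact>interview-notes</artifact>
  </activity>
  <activity name="review">
    <artifact>requirements-draft</artifact>
  </activity>
</process>
"""

def list_activities(xml_text: str):
    """Return (activity name, produced artifacts) pairs from a process spec."""
    root = ET.fromstring(xml_text)
    return [
        (act.get("name"), [a.text for a in act.findall("artifact")])
        for act in root.findall("activity")
    ]
```

An interactive environment of the kind the abstract envisions could walk such a structure to guide the engineer activity by activity.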
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
3

Lai, Danny Cho-Liang 1978. "Extending a formal specification & requirements language : a case study." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86814.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (leaves 92-93).
by Danny Cho-Liang Lai.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
4

Pong, Lih, and 龐立. "Formal data flow diagrams (FDFD): a petri-net based requirements specification language." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1985. http://hub.hku.hk/bib/B31207406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pong, Lih. "Formal data flow diagrams (FDFD) : a petri-net based requirements specification language /." [Hong Kong : University of Hong Kong], 1985. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12323019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sutherland, Sean J. P. (Sean Jason Perry) 1977. "A demonstration of a formal specification & requirements language : a case study." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86852.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 109,111 [p. 110 intentionally blank]).
by Sean J.P. Sutherland.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
7

Dhar, Siddharth. "Optimizing TEE Protection by Automatically Augmenting Requirements Specifications." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98730.

Full text
Abstract:
An increasing number of software systems must safeguard their confidential data and code, referred to as critical program information (CPI). Such safeguarding is commonly accomplished by isolating CPI in a trusted execution environment (TEE), with the isolated CPI becoming a trusted computing base (TCB). TEE protection incurs heavy performance costs, as TEE-based functionality is expensive to both invoke and execute. Despite these costs, projects that use TEEs tend to have unnecessarily large TCBs. Based on our analysis, developers often put code and data into the TEE for convenience rather than protection reasons, thus not only compromising performance but also reducing the effectiveness of TEE protection. In order for TEEs to provide maximum benefits for protecting CPI, their usage must be systematically incorporated into the entire software engineering process, starting from Requirements Engineering. To address this problem, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by using natural language processing (NLP) to classify those software requirements that are security critical and should be isolated in the TEE. Our approach takes as input a requirements specification and outputs a list of annotated software requirements. The annotations recommend to the developer which corresponding features comprise CPI that should be protected in a TEE. Our evaluation results indicate that our approach identifies CPI with a high degree of accuracy, helping incorporate the safeguarding of CPI into Requirements Engineering.
Master of Science
An increasing number of software systems must safeguard their confidential data like passwords, payment information, personal details, etc. This confidential information is commonly protected using a Trusted Execution Environment (TEE), an isolated environment provided by either the existing processor or separate hardware that interacts with the operating system to secure sensitive data and code. Unfortunately, TEE protection incurs heavy performance costs, with TEEs being slower than modern processors and frequent communication between the system and the TEE incurring heavy performance overhead. We discovered that developers often put code and data into TEE for convenience rather than protection purposes, thus not only hurting performance but also reducing the effectiveness of TEE protection. By thoroughly examining a project's features in the Requirements Engineering phase, which defines the project's functionalities, developers would be able to understand which features handle confidential data. To that end, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by means of Natural Language Processing (NLP) tools to categorize the project requirements that may warrant TEE protection. Our approach takes as input a project's requirements and outputs a list of categorized requirements defining which requirements are likely to make use of confidential information. Our evaluation results indicate that our approach performs this categorization with a high degree of accuracy to incorporate safeguarding the confidentiality related features in the Requirements Engineering phase.
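The classification idea can be pictured with a deliberately simple stand-in: flag requirements whose text mentions security-sensitive terms as TEE candidates. The keyword list is an invented assumption; the thesis itself uses NLP-based classification, not this heuristic.

```python
# Hedged sketch of the requirement-annotation idea: mark requirements that
# mention security-sensitive terms as candidates for TEE isolation. The
# term list is illustrative, not taken from the thesis.

SENSITIVE_TERMS = {"password", "payment", "key", "credential", "encrypt"}

def annotate(requirements):
    """Return (requirement, is_cpi_candidate) pairs."""
    result = []
    for req in requirements:
        words = {w.strip(".,").lower() for w in req.split()}
        result.append((req, bool(words & SENSITIVE_TERMS)))
    return result
```

The real approach would replace the keyword test with a trained NLP classifier, but the input/output shape (a specification in, annotated requirements out) matches the abstract's description.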
APA, Harvard, Vancouver, ISO, and other styles
8

Coskuncay, Ahmet. "An Approach For Generating Natural Language Specifications By Utilizing Business Process Models." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612305/index.pdf.

Full text
Abstract:
Business process modeling is utilized by organizations for defining and reengineering their business processes. On the other hand, software requirements analysis activities are performed for determining the system boundaries, specifying software requirements using system requirements, and resolving conflicts between requirements. From this point of view, these two activities are considered to belong to different disciplines. An organization requiring its business processes to be defined and supported with information systems would benefit from performing business process modeling and requirements analysis concurrently. In this study, an approach enabling concurrent execution of business process modeling and requirements analysis is developed. The approach includes two business process modeling notations adapted to the research needs, a process defining the steps for implementing the approach, and a requirements generation tool that generates natural language specification documents by using business process models. Within this study, two case studies are introduced: one describing the development of the approach, and the other exploring whether the total efficiency of performing business process modeling and requirements analysis activities would be increased by using the approach.
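The generation step can be sketched as rendering one requirement sentence per process-model element. The model structure and the 'shall' template below are illustrative assumptions, not the notations used in the thesis.

```python
# Hypothetical sketch of deriving natural-language requirement sentences
# from a business process model; the dictionary structure and sentence
# template are assumptions for illustration.

process_model = [
    {"actor": "clerk", "action": "register", "object": "the order"},
    {"actor": "system", "action": "validate", "object": "the payment"},
]

def generate_specification(model):
    """Render one 'shall' sentence per process step."""
    return [f"The {s['actor']} shall {s['action']} {s['object']}." for s in model]
```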
APA, Harvard, Vancouver, ISO, and other styles
9

Kwan, Irwin. "On the Maintenance Costs of Formal Software Requirements Specification Written in the Software Cost Reduction and in the Real-time Unified Modeling Language Notations." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/1092.

Full text
Abstract:
A formal specification language used during the requirements phase can reduce errors and rework, but formal specifications are regarded as expensive to maintain, discouraging their adoption. This work presents a single-subject experiment that explores the costs of modifying specifications written in two different languages: a tabular notation, Software Cost Reduction (SCR), and a state-of-the-practice notation, Real-time Unified Modeling Language (UML). The study records the person-hours required to write each specification, the number of defects made during each specification effort, and the amount of time repairing these defects. Two different problems are specified—a Bidirectional Formatter (BDF), and a Bicycle Computer (BC)—to balance a learning effect from specifying the same problem twice with different specification languages. During the experiment, an updated feature for each problem is sent to the subject and each specification is modified to reflect the changes.

The results show that the cost of modifying a specification is highly dependent on both the problem and the language used. There is no evidence that a tabular notation is easier to modify than a state-of-the-practice notation.

A side-effect of the experiment indicates there is a strong learning effect, independent of the language: in the BDF problem, the second time specifying the problem required more time, but resulted in a better-quality specification than the first time; in the BC problem, the second time specifying the problem required less time and resulted in the same quality specification as the first time.

This work demonstrates also that single-subject experiments can add important information to the growing body of empirical data about the use of formal requirements specifications in software development.
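SCR's tabular style can be loosely pictured as rows of (mode, guard, new mode) scanned for the first match. This is a Python analogy under invented modes and variables, not SCR's actual notation or semantics.

```python
# Loose illustration of a tabular mode-transition specification in the
# spirit of SCR: each row maps a current mode and a guard on monitored
# variables to a new mode. Modes and variables are invented examples.

TABLE = [
    # (mode, guard, new mode)
    ("off", lambda m: m["power"], "standby"),
    ("standby", lambda m: m["start_pressed"], "running"),
    ("running", lambda m: not m["power"], "off"),
]

def next_mode(mode, monitored):
    """Return the new mode per the first matching table row, else the old mode."""
    for row_mode, guard, new_mode in TABLE:
        if row_mode == mode and guard(monitored):
            return new_mode
    return mode
```

Part of the claimed maintainability of tables is visible even here: modifying a behavior means editing one row rather than tracing transitions through diagram structure.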
APA, Harvard, Vancouver, ISO, and other styles
10

Mahmud, Nesredin. "Ontology-based Analysis and Scalable Model Checking of Embedded Systems Models." Licentiate thesis, Mälardalens högskola, Inbyggda system, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-35386.

Full text
Abstract:
Currently, there is a lack of effective and scalable methods to specify and analyze requirements specifications, and to verify the behavioral models of embedded systems. Most embedded systems requirements are expressed in natural language, which is flexible and intuitive but frequently ambiguous, vague and incomplete. Besides natural language, template-based methods are used to write requirements specifications (especially in safety-critical applications); these reduce ambiguity and improve the comprehensibility of the specifications. However, template-based methods are usually rigid due to the fixed structures of the templates. They also lack meta-models for extensibility, and template selection is challenging. In this thesis, we propose a domain-specific language for embedded systems, called ReSA, which is a constrained natural language but flexible enough to allow engineers to use different constructs to specify requirements. The language has formal semantics in propositional logic and description logic, which enables non-trivial and rigorous analysis of requirements specifications, e.g., consistency checking and completeness of specifications. Moreover, we propose a scalable formal verification of Simulink models, which are used to describe the behavior of systems through communicating functional blocks. In industry, Simulink is the de facto modeling and analysis environment for embedded systems. It is also used to generate code automatically from special Simulink models for various hardware platforms. However, Simulink lacks a formal approach to verify large and hybrid Simulink models. Therefore, we also propose a formal verification of Simulink models, represented as stochastic timed automata, using statistical model checking, which has proven to scale for industrial applications. We validate our approaches on industrial use cases from the automotive industry. These include the Adjustable Speed Limiter (ASL) and Brake-by-Wire (BBW) systems from Volvo Group Trucks Technology, both safety-critical.
Verispec
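Statistical model checking, mentioned in the abstract, estimates the probability that a property holds by sampling random runs instead of exploring the whole state space. The toy process below (a uniformly distributed brake latency) is an invented stand-in for the Simulink/stochastic-timed-automata models the thesis actually verifies.

```python
import random

# Statistical model checking in miniature: estimate P(property) by Monte
# Carlo over independent simulated runs. The process is an invented toy.

def run_once(rng):
    """Simulate one run of a toy process: latency uniform on [0, 100] ms."""
    return rng.uniform(0, 100)

def estimate_probability(property_holds, runs=10_000, seed=42):
    """Fraction of sampled runs on which the property holds."""
    rng = random.Random(seed)
    hits = sum(property_holds(run_once(rng)) for _ in range(runs))
    return hits / runs
```

Because cost grows with the number of samples rather than with model size, this style of verification scales to models that exact model checking cannot handle, which is the scalability argument the abstract makes.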
APA, Harvard, Vancouver, ISO, and other styles
11

Presland, S. G. "The analysis of natural language requirements documents." Thesis, University of Liverpool, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Júnior, Valdivino Alexandre de Santiago. "SOLIMVA: A methodology for generating model-based test cases from natural language requirements and detecting incompleteness in software specifications." Instituto Nacional de Pesquisas Espaciais, 2011. http://urlib.net/sid.inpe.br/mtc-m19/2011/11.07.23.30.

Full text
Abstract:
To a greater or lesser extent, Natural Language (NL) is still widely used to develop software requirements specifications or other artifacts created for documenting requirements. However, NL deliverables suffer from ambiguity, inconsistency, and incompleteness. This PhD thesis presents a methodology, SOLIMVA, which achieves two objectives. The primary objective is to generate model-based system and acceptance test cases considering NL requirements deliverables. For this purpose, a tool, also called SOLIMVA, was designed and implemented, and such a tool makes it possible to automatically translate NL requirements into Statechart models. Once the Statecharts are created, another tool, GTSC, is used to generate Abstract Test Cases which are later translated into Executable Test Cases. Among other features, a Word Sense Disambiguation method which helps in the translation process was implemented in the SOLIMVA tool, and combinatorial designs are used to identify scenarios for model-based system and acceptance testing. The SOLIMVA methodology was applied to a main case study, a space application software product related to the Space Segment, and the methodology was compared with a previous manual approach developed by an expert under two aspects: coverage of test objectives and characteristics of Executable Test Cases. In both aspects, the SOLIMVA methodology presented benefits such as a better strategy with test objectives clearly separated according to the directives of combinatorial designs, and the generation of Executable Test Cases which predicted behaviors that did not exist in the expert's approach. In addition, guidelines to apply the SOLIMVA methodology to a second case study of the space domain related to the Ground Segment are also presented.
The key advantages of applying the SOLIMVA methodology in the context of a Verification and Validation process are the ease of use and the support of a formal method, making it potentially suitable for use in complex software projects. The secondary objective is the detection of incompleteness in software specifications. The SOLIMVA methodology was then extended to achieve this secondary objective. Model Checking combined with k-permutations of n values of variables and specification patterns were used to address this goal. The SOLIMVA methodology proved its efficiency by detecting 21 incompleteness defects when applied to the same main case study mentioned earlier.
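The combinatorial-design step mentioned above can be pictured with a simple sketch. A real covering array is far more economical than exhaustive enumeration; here, as a hedged illustration with invented parameter names, we enumerate all scenarios and the set of value pairs any pairwise design would have to cover.

```python
from itertools import combinations, product

# Illustrative sketch of combinatorial test-scenario selection: exhaustive
# scenarios plus the pair-coverage obligation a pairwise design must meet.
# Parameter names and values are invented examples.

def all_scenarios(parameters):
    """Cartesian product of parameter values, one dict per scenario."""
    names = list(parameters)
    return [dict(zip(names, values)) for values in product(*parameters.values())]

def required_pairs(parameters):
    """Every (param=value, param=value) pair a pairwise design must cover."""
    pairs = set()
    for (na, va), (nb, vb) in combinations(parameters.items(), 2):
        for x, y in product(va, vb):
            pairs.add(((na, x), (nb, y)))
    return pairs
```

For many parameters, a covering array satisfies `required_pairs` with far fewer scenarios than `all_scenarios` produces, which is why combinatorial designs keep model-based test suites tractable.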
APA, Harvard, Vancouver, ISO, and other styles
13

Warnier, Maxime. "Contribution de la linguistique de corpus à la constitution de langues contrôlées pour la rédaction technique : l'exemple des exigences de projets spatiaux." Thesis, Toulouse 2, 2018. http://www.theses.fr/2018TOU20065/document.

Full text
Abstract:
The aim of this work is to improve the clarity and precision of the technical specifications written in French by the engineers at CNES (Centre National d'Études Spatiales / National Centre for Space Studies) prior to the realization of space systems. The importance of specifications (and particularly of the requirements that are part of them) for the success of large-scale projects is indeed widely acknowledged; similarly, the main risks associated with the use of natural language (ambiguity, vagueness, incompleteness) are relatively well identified. In this context, we would like to propose a solution that would be used by the engineers at CNES (who are currently not asked to follow specific writing rules): in that respect, we believe that this solution should be both effective (i.e. it should significantly limit the above-mentioned risks) and not too disruptive (which would make it counterproductive). A Controlled Natural Language (CNL), i.e. a set of linguistic rules constraining the lexicon, the syntax and the semantics, seems to be an interesting option, provided that it remains close enough to natural language. Unfortunately, the CNLs for technical writing that we have examined are not always relevant from a linguistic point of view. Our methodology for developing a CNL for requirements writing in French at CNES relies on the hypothesis of the existence of a textual genre; besides, we make use of existing Natural Language Processing tools and methods to validate the relevance of the rules on a corpus of genuine requirements written for former projects.
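Semi-automatic checking of controlled-language rules, as envisioned above, can be sketched as pattern-based linting of requirement sentences. The rules below are generic English-language examples for illustration, not the CNES rules developed in the thesis.

```python
import re

# Hedged sketch of rule checking for a controlled language: each rule is a
# regular expression that flags a risky construction. These rules are
# generic illustrations, not the rules proposed in the thesis.

RULES = [
    ("passive 'be + past participle'",
     re.compile(r"\b(is|are|be|been|was|were)\s+\w+ed\b")),
    ("vague quantifier",
     re.compile(r"\b(some|several|appropriate|adequate)\b", re.I)),
    ("non-binding modal",
     re.compile(r"\b(should|may|might)\b", re.I)),
]

def check_requirement(sentence):
    """Return the names of all rules the sentence violates."""
    return [name for name, pattern in RULES if pattern.search(sentence)]
```

A checker of this shape can run over a whole corpus of requirements, which is exactly how corpus data can be used to test whether a candidate rule matches engineers' actual writing practices.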
APA, Harvard, Vancouver, ISO, and other styles
14

Li, Yang [Verfasser], Gunter [Gutachter] Saake, and Andreas [Gutachter] Nürnberger. "Automated extraction of feature and variability information from natural language requirement specifications / Yang Li ; Gutachter: Gunter Saake, Andreas Nürnberger." Magdeburg : Universitätsbibliothek Otto-von-Guericke-Universität, 2020. http://d-nb.info/1226932002/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Abid, Nouha. "Verification of real time properties in fiacre language." Phd thesis, INSA de Toulouse, 2012. http://tel.archives-ouvertes.fr/tel-00782554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Loh, Stanley. "Uma linguagem comum entre usuários e analistas para definição de requisitos de sistemas de informação." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 1991. http://hdl.handle.net/10183/24493.

Full text
Abstract:
The objective of this work is to present a common language for users and analysts, for requirements definition during Information Systems development. The motivation for this work arose from the study of the communication problem that users and analysts have, working with different languages of at least two kinds (natural and formal). Natural languages have informality as their main characteristic, hence are not precise. On the other side, formal languages are precise, but sometimes not readable. Informal or natural languages are better for user participation in information system development, and formal languages are useful and necessary to analysts when they create a system specification for implementors. It is necessary to search for an intermediate language that could play a middle role between readability and precision, and that, at the same time, is relatively close to informal and formal languages. In this work, heuristics (informal and common-sense rules) for requirements elicitation and for transformations between languages are defined too. A case study is detailed to illustrate the degree of precision and readability of the common language proposed here.
APA, Harvard, Vancouver, ISO, and other styles
17

Irfan, Zeeshan. "Generating Formal Representations of System Specification from Natural Language Requirements." 2019. https://monarch.qucosa.de/id/qucosa%3A72329.

Full text
Abstract:
Natural Language (NL) requirements play a significant role in specifying the system design, implementation and testing processes. Nevertheless, NL requirements are generally syntactically ambiguous and semantically inconsistent. Issues with NL requirements can result in inaccurate and unsound system design, implementation and testing. Moreover, the informal nature of NL is a major hurdle in machine processing of system requirements specifications. To confront this problem, a requirement template is introduced, based on controlled NL, to produce a deterministic and consistent representation of the system. The ultimate focus of this thesis is to generate test cases from system specifications derived from requirements communicated in natural language. Manual software system testing is a labour-intensive, error-prone and high-cost activity. Traditionally, model-driven test generation approaches are employed for automated testing. However, system models are created manually for test generation, and the test cases generated from system models are not generally deterministic or traceable to individual requirements. This thesis proposes an approach to software system testing based on template-driven requirements. This systematic approach is applied to the requirements elicited from system stakeholders. For this purpose, natural language processing (NLP) methods are used. Using NLP approaches, useful information is extracted from controlled NL requirements, and afterwards the gathered information is processed to generate test scenarios. Our initial observations show that this method provides remarkable gains in terms of reducing the cost, time and complexity of requirements-based testing.
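The template idea can be made concrete with a sketch: requirements restricted to a fixed pattern can be parsed mechanically into test-scenario stubs. The "When <trigger>, the system shall <response>" wording below is an assumption for illustration, not the template defined in the thesis.

```python
import re

# Hedged sketch: parse a templated requirement into a test-scenario stub.
# The template wording is an illustrative assumption.

TEMPLATE = re.compile(r"^When (?P<trigger>.+), the system shall (?P<response>.+)\.$")

def to_test_scenario(requirement):
    """Map a templated requirement to a stimulus / expected-outcome pair."""
    match = TEMPLATE.match(requirement)
    if match is None:
        raise ValueError(f"requirement does not follow the template: {requirement!r}")
    return {"stimulus": match["trigger"], "expected": match["response"]}
```

Because each scenario is derived from exactly one templated requirement, traceability between test cases and individual requirements comes for free, addressing one of the drawbacks of manual model construction noted above.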
APA, Harvard, Vancouver, ISO, and other styles
18

Dongmo, Cyrille. "Formalising non-functional requirements embedded in user requirements notation (URN) models." Thesis, 2016. http://hdl.handle.net/10500/23395.

Full text
Abstract:
The growing need for computer software in different sectors of activity (health, agriculture, industry, education, aeronautics, science and telecommunications), together with the increasing reliance of society as a whole on information technology, is placing a heavy and fast-growing demand on complex, high-quality software systems. In this regard, expectations have centred on non-functional requirements (NFRs) engineering and formal methods. Despite their common objective, these techniques have in most cases evolved separately. NFRs engineering proceeds firstly by deriving measures to evaluate the quality of the constructed software (product-oriented approach), and secondly by improving the engineering process (process-oriented approach). With the ability to combine the analysis of both functional and non-functional requirements, Goal-Oriented Requirements Engineering (GORE) approaches have become the de facto leading requirements engineering methods. They propose, through refinement/operationalisation, means to satisfy NFRs encoded in softgoals at an early phase of software development. On the other hand, formal methods have so far kept their promise to eliminate errors in software artefacts and produce high-quality software products, and are therefore particularly solicited for safety- and mission-critical systems, for which a single error may cause great loss, including loss of human life. This thesis introduces the concept of Complementary Non-functional action (CNF-action) to extend the analysis and development of NFRs beyond the traditional goals/softgoals analysis based on refinement/operationalisation, and to propagate the influence of NFRs to other software construction phases.
Mechanisms are also developed to integrate the formal technique Z/Object-Z into the standardised User Requirements Notation (URN) to formalise GRL models describing functional and non-functional requirements, to propagate CNF-actions of the formalised NFRs to UCM maps, and to facilitate the URN construction process and improve the quality of URN models.
School of Computing
D. Phil (Computer Science)
APA, Harvard, Vancouver, ISO, and other styles
19

Pestana, João Miguel Alves. "A JML-Based strategy for incorporating formal specifications into the software development process." Master's thesis, 2009. http://hdl.handle.net/10400.13/78.

Full text
Abstract:
This thesis presents a JML-based strategy that incorporates formal specifications into the software development process of object-oriented programs. The strategy evolves functional requirements into a “semi-formal” requirements form, and then expresses them as JML formal specifications. The strategy is implemented as a formal-specification pseudo-phase that runs in parallel with the other phases of software development. What makes our strategy different from other software development strategies in the literature is the particular use we make of JML specifications all along the way from requirements to validation and verification.
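To give a flavour of what the abstract describes, here is a minimal JML-annotated Java class (our own illustration, not taken from the thesis; the `Account` class and its requirement are invented). A semi-formal requirement such as "a deposit of a positive amount increases the balance by that amount" becomes machine-checkable JML pre- and postconditions; since JML lives in comments, the class compiles as ordinary Java.

```java
// Hypothetical example of a "semi-formal" requirement expressed as a JML
// specification: preconditions (requires) and postconditions (ensures)
// written as Java comments, checkable by JML tools.
public class Account {
    private int balance; // informal invariant: balance >= 0

    //@ requires amount > 0;
    //@ ensures balance == \old(balance) + amount;
    public void deposit(int amount) {
        balance += amount;
    }

    public int getBalance() {
        return balance;
    }
}
```

A JML runtime-assertion checker or static verifier can then confirm the implementation against the specification, which is how such a pseudo-phase can run alongside ordinary development without changing the compiled code.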
Orientador: Néstor Cataño
APA, Harvard, Vancouver, ISO, and other styles
20

Rinard, Martin C. "Acceptability-Oriented Computing." 2003. http://hdl.handle.net/1721.1/3846.

Full text
Abstract:
We discuss a new approach to the construction of software systems. Instead of attempting to build a system that is as free of errors as possible, the designer instead identifies key properties that the execution must satisfy to be acceptable to its users. Together, these properties define the acceptability envelope of the system: the region that it must stay within to remain acceptable. The developer then augments the system with a layered set of components, each of which enforces one of the acceptability properties. The potential advantages of this approach include more flexible, resilient systems that recover from errors and behave acceptably across a wide range of operating environments, an appropriately prioritized investment of engineering resources, and the ability to productively incorporate unreliable components into the final software system.
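The layered enforcement components described above can be sketched as a small wrapper (a hedged illustration of the idea, not Rinard's implementation; the class name and the clamping property are assumptions). Here one component enforces a single acceptability property: a sensor reading must stay within an envelope, and out-of-envelope values are repaired rather than allowed to propagate.

```java
// Hypothetical acceptability-envelope component: enforces one property
// (readings stay within [min, max]) by repairing violations instead of
// failing, so the system remains inside its acceptability envelope.
public class EnvelopeMonitor {
    private final double min;
    private final double max;

    public EnvelopeMonitor(double min, double max) {
        this.min = min;
        this.max = max;
    }

    // Enforce the property: clamp an unacceptable reading back into range.
    public double enforce(double reading) {
        if (reading < min) return min;
        if (reading > max) return max;
        return reading;
    }
}
```

Layering several such monitors, each responsible for one property, is what lets unreliable components be incorporated while the system as a whole stays acceptable.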
Singapore-MIT Alliance (SMA)
APA, Harvard, Vancouver, ISO, and other styles
21

Priesnitz, Andreas. "Multistage Algorithms in C++." Doctoral thesis, 2005. http://hdl.handle.net/11858/00-1735-0000-0006-B405-D.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Fernandes, Graciano Filipe Direito. "Geração automática de casos de teste a partir de requisitos." Master's thesis, 2014. http://hdl.handle.net/1822/34918.

Full text
Abstract:
Integrated master's dissertation in Information Systems Engineering and Management
Because we live in an increasingly computerized world, in which information systems are already part of the everyday life of people and organizations, such systems have become ever larger and more complex. To build a system that meets its intended purpose with quality, there is a development process that must be followed. The software development process comprises several stages, one of which is testing. Since testing is one of the most "expensive" stages of software development in terms of resources and time, automating the processes that make up this area has become one of the main challenges and interests for organizations. Hence the need arose to build a tool that, from the requirements specified for a software project, can identify the test cases automatically, guaranteeing not only greater speed but also higher quality in the test-case identification process, which in turn leads to a better-quality product.
The topic addressed in this document is therefore the development of a solution to a problem in a real organization. Because the solution described here was built for a real organization, there are existing processes and approaches at the organization that had to be accommodated. One of the approaches used at the organization, and consequently used to create the solution described in this document, is the concept of DSLs (Domain-Specific Languages): languages created for a specific domain, which are used in this solution for the specification of test cases. This work presents a contribution to the field of software testing, through the application of a solution that enables the automatic identification of test cases from the requirements specified for a given project.
APA, Harvard, Vancouver, ISO, and other styles