To see the other types of publications on this topic, follow the link: Language requirements.

Dissertations / Theses on the topic 'Language requirements'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Language requirements.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Patterson, Jamie L. "Parsing of Natural Language Requirements." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1147.

Full text
Abstract:
The purpose of this thesis was to automate verification of the software requirements for an implantable cardioverter defibrillator with minimal manual rework. The requirements were written in plain English with only loose stylistic constraints. While full automation proved infeasible, many significant advances were made towards solving the problem, including a framework for storing requirements, a program which translates most of the natural language requirements into the framework, and a novel approach to parts of speech analysis.
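The abstract above describes translating plain-English requirements into a storage framework. As a purely illustrative sketch (the pattern, field names, and example sentence below are invented here, not taken from the thesis), a single-template parser for "X shall Y when Z" requirements might look like:

```python
import re

# Hypothetical single-template parser: maps a loosely styled English
# requirement onto a structured {actor, action, condition} frame.
REQ_PATTERN = re.compile(
    r"^(?P<actor>.+?)\s+shall\s+(?P<action>.+?)"
    r"(?:\s+when\s+(?P<condition>.+?))?\.?$",
    re.IGNORECASE,
)

def parse_requirement(text):
    """Return an {actor, action, condition} frame, or None if no match."""
    m = REQ_PATTERN.match(text.strip())
    if not m:
        return None
    # Drop the optional condition field when the sentence has no "when" clause.
    return {k: v for k, v in m.groupdict().items() if v is not None}

req = "The device shall deliver a shock when the heart rate exceeds 200 bpm."
print(parse_requirement(req))
```

A real system, as the abstract notes, needs far more than one template; this only shows the shape of the frame-extraction idea.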
APA, Harvard, Vancouver, ISO, and other styles
2

Presland, S. G. "The analysis of natural language requirements documents." Thesis, University of Liverpool, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.383444.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chantree, Francis J. "Identifying nocuous ambiguity in natural language requirements." Thesis, Open University, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Lamar, Carl. "Linguistic analysis of natural language engineering requirements." Connect to this title online, 2009. http://etd.lib.clemson.edu/documents/1256570721/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Thongglin, Kanjana. "Controlled language for Thai software requirements specification." Thesis, Besançon, 2014. http://www.theses.fr/2014BESA1003.

Full text
Abstract:
This thesis focuses on using controlled language for Thai software requirements specifications. The study describes the ambiguities and problems encountered in Thai software requirements specifications, both syntactic ambiguity and semantic ambiguity. The study also describes the nature of the Thai language. The model of controlled language for Thai software requirements specifications is composed of three main components: lexical analysis, syntactic analysis, and semantic analysis. For syntactic analysis, a controlled syntax is created using Backus-Naur Form (BNF). In the lexical analysis stage, an XML-format lexical resource is built to store words according to their domain. The words received from the XML resource are conceptually correct but may be semantically irrelevant. To solve this issue, the model applies Boolean matrices to align sentences semantically. As a result, the sentences produced from the model are guaranteed to be syntactically and semantically correct. After having created this model, a program for testing the efficiency of the model was developed. The model is evaluated using four testing methods: 1. functional testing for the correctness of the sentences' syntax; 2. functional testing for the semantic correctness of the sentences produced by the model; 3. acceptance testing in terms of user satisfaction with the program; and 4. acceptance testing in terms of the validity of the outputs. The positive results signify that: 1. the sentences produced by the proposed model are syntactically correct; 2. the sentences produced by the proposed model are semantically correct; 3. the users are satisfied and accept the software created; and 4. the users approve and understand the sentences produced by this model.
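The two checks this abstract describes, a controlled syntax plus a Boolean compatibility matrix that rules out semantically irrelevant combinations, can be illustrated with a toy example (the vocabulary and matrix below are invented; this is not the thesis's model or its Thai lexicon):

```python
# Words grouped by syntactic role, as the XML resource groups them by domain.
LEXICON = {
    "subject": ["system", "user"],
    "verb": ["displays", "enters"],
    "object": ["report", "password"],
}

# Boolean matrix: COMPAT[verb][obj] is True when the pair is semantically valid.
COMPAT = {
    "displays": {"report": True, "password": False},
    "enters": {"report": False, "password": True},
}

def check_sentence(words):
    """Accept a 3-word sentence iff it passes both the syntax and semantic checks."""
    if len(words) != 3:
        return False
    subj, verb, obj = words
    syntax_ok = (subj in LEXICON["subject"]
                 and verb in LEXICON["verb"]
                 and obj in LEXICON["object"])
    return syntax_ok and COMPAT[verb][obj]

print(check_sentence(["system", "displays", "report"]))    # True
print(check_sentence(["system", "displays", "password"]))  # False
```

The point of the second check is that "system displays password" is grammatical under the controlled syntax but rejected by the compatibility matrix.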
APA, Harvard, Vancouver, ISO, and other styles
6

OLIVEIRA, Bruno Medeiros de. "Simulation of hybrid systems from natural language requirements." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/24525.

Full text
Abstract:
CAPES
Despite technological advances in the systems development industry, testing is still the most commonly used verification method to ensure reliability. Model-based testing (MBT) techniques are principally employed to generate test cases from specification models. Contributing to this branch of research, an MBT strategy for creating test cases from controlled natural language (CNL) requirements was created, called NATural Language Requirements to TEST Cases (NAT2TEST). The NAT2TEST strategy deals with data-flow reactive systems, a class of embedded systems whose main feature is that inputs and outputs are always available as signals. However, there is a demand from industry to apply the strategy in the context of hybrid systems. These systems are a fusion of continuous and discrete dynamical systems; that is, they combine dynamical characteristics from both the continuous and discrete worlds. Hybrid systems have received much attention in recent years. The main contribution of this work is to extend the NAT2TEST strategy to deal with hybrid systems. Using the newly proposed approach, it is possible to write the requirements of a hybrid system, whose semantics is characterised based on case grammar theory. Then, a formal representation of the system is built considering a model of hybrid data-flow reactive systems. Finally, to analyse the system behaviour via simulation, a modelling environment for the simulation of hybrid systems, called Acumen, was used. Thereby, a specification model in Acumen is generated and simulated in this environment. The characteristics of the new approach are exemplified using two examples, one from the electronics field, the DC-DC Boost Converter (BC), and the other from the automotive domain, the Adaptive Cruise Control (ACC).
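A hybrid system of the kind this abstract targets mixes continuous flows with discrete mode switches. A generic hand-written sketch (a thermostat example with invented constants, not NAT2TEST or Acumen output) simulated with explicit Euler steps:

```python
# Continuous dynamics (temperature drift) plus discrete jumps (heater on/off).
DT = 0.01  # Euler step size in seconds

def simulate(t_end=10.0, temp=15.0):
    mode = "on"  # discrete state of the hybrid automaton
    for _ in range(int(t_end / DT)):
        # Continuous flow depends on the current discrete mode.
        rate = 2.0 if mode == "on" else -1.0
        temp += rate * DT
        # Discrete jumps: guards on the continuous state trigger mode switches.
        if mode == "on" and temp >= 22.0:
            mode = "off"
        elif mode == "off" and temp <= 18.0:
            mode = "on"
    return temp, mode

final_temp, final_mode = simulate()
print(f"temperature after 10s: {final_temp:.1f} (heater {final_mode})")
```

The interplay shown here, flows governed by the mode and guards governed by the continuous state, is what distinguishes hybrid systems from purely discrete or purely continuous models.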
APA, Harvard, Vancouver, ISO, and other styles
7

Spike, Matthew John. "Minimal requirements for the cultural evolution of language." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/25930.

Full text
Abstract:
Human language is both a cognitive and a cultural phenomenon. Any evolutionary account of language, then, must address both biological and cultural evolution. In this thesis, I give a mainly cultural evolutionary answer to two main questions: firstly, how do working systems of learned communication arise in populations in the absence of external or internal guidance? Secondly, how do those communication systems take on the fundamental structural properties found in human languages, i.e. systematicity at both a meaningless and meaningful level? A large, multi-disciplinary literature exists for each question, full of apparently conflicting results and analyses. My aim in this thesis is to survey this work, so as to find any commonalities and bring this together in order to provide a minimal account of the cultural evolution of language. The first chapter of this thesis takes a number of well-established models of the emergence of signalling systems. These are taken from several different fields: evolutionary linguistics, evolutionary game theory, philosophy, artificial life, and cognitive science. By using a common framework to directly compare these models, I show that three underlying commonalities determine the ability of any population of agents to reliably develop optimal signalling. The three requirements are that i) agents can create and transfer referential information, ii) there is a systemic bias against ambiguity, and iii) some mechanism leading to information loss exists. Following this, I extend the model to determine the effects of including referential uncertainty. I show that, for the group of models to which this applies, this places certain extra restrictions on the three requirements stated above. In the next chapter, I use an information-theoretic framework to construct a novel analysis of signalling games in general, and rephrase the three requirements in more formal terms. 
I then show that these three criteria can be used as a diagnostic for determining whether any given signalling game will lead to optimal signalling, without the need for repeated simulations. In the final, much longer chapter, I address the topic of duality of patterning. This involves a lengthy review of the literature on duality of patterning, combinatoriality, and compositionality. I then argue that both levels of systematicity can be seen as a functional adaptation which maintains communicative accuracy in the face of noisy processes at different levels of analysis. I support this with results from a new, minimally specified model, which also clarifies and informs a number of long-fought debates within the field.
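The emergence of signalling this abstract analyses can be demonstrated with a toy Lewis signalling game under simple urn-style reinforcement (a generic textbook illustration, not one of the thesis's models): sender and receiver start with uniform weights, and choices that lead to successful communication are reinforced.

```python
import random

random.seed(0)
STATES = SIGNALS = ACTS = range(3)

# Urn weights: sender maps states to signals, receiver maps signals to acts.
sender = {s: {m: 1.0 for m in SIGNALS} for s in STATES}
receiver = {m: {a: 1.0 for a in ACTS} for m in SIGNALS}

def draw(weights):
    """Sample a key with probability proportional to its weight."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

ROUNDS = 20000
recent = []  # success flags for the final 1000 rounds
for t in range(ROUNDS):
    state = random.choice(list(STATES))
    signal = draw(sender[state])
    act = draw(receiver[signal])
    if act == state:  # communication succeeded: reinforce both choices
        sender[state][signal] += 1.0
        receiver[signal][act] += 1.0
    if t >= ROUNDS - 1000:
        recent.append(act == state)

accuracy = sum(recent) / len(recent)
print(f"late-stage communicative accuracy: {accuracy:.2f}")
```

Accuracy rises well above the 1/3 chance baseline without any external guidance, though simple reinforcement can also settle into the suboptimal pooling equilibria that analyses of these games discuss.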
APA, Harvard, Vancouver, ISO, and other styles
8

Johnsson, Fredrik. "Language requirements for Swedish citizenship : Adult language learners' attitudes towards the Swedish language test for immigrants." Thesis, Stockholms universitet, Centrum för tvåspråkighetsforskning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-183494.

Full text
Abstract:
In line with the global trend of increased language requirements for naturalization, the Swedish government decided in 2019 to introduce a language test for citizenship. Drawing on Norton Peirce’s (1995) notion of investment and Fraser’s (2000) definition of participatory parity, this study investigates adult second language learners’ attitudes towards the test and analyzes how investment in language learning and perception of possibilities for participatory parity influence their views. Furthermore, the study considers some of the potential consequences of the new policy, for example, for acts of linguistic citizenship (Stroud, 2018). Eighteen SFI-students at level 3C, one of the highest levels at the language course, responded to a questionnaire. Nine respondents favored the test, four opposed it, four were neutral, and one was unsure. Interviews were conducted with six volunteers out of these participants, five out of six were supporters of the language test. The qualitative data was analyzed thematically. While all participants were motivated to learn Swedish, the findings indicate how diverse forms of capital, ideologies, and desired or actual identities in Swedish, that is, the nature of their investment and perceived possibilities for economic and social participation, influenced their attitudes towards the language test. Furthermore, supporters of the test considered that the language requirement would improve possibilities for participatory parity, whereas the opponents emphasized the risk of misrecognition and misrepresentation. The policy redefines Swedish citizenship: by making a test of Swedish language mandatory for prospective citizens, it formally demands that immigrants learn Swedish provided they want to integrate. As the results suggest, however, the participants in this study were engaged in a wide range of acts of linguistic citizenship in Swedish and had similar reported proficiency levels, regardless of their opinions of the Swedish test. 
The study calls for more research on the effects of language requirements for naturalization to examine to what extent and in what ways a language test for citizenship affects already motivated language learners.
APA, Harvard, Vancouver, ISO, and other styles
9

PORTUGAL, ROXANA LISETTE QUINTANILLA. "MINING INFORMATION IN NATURAL LANGUAGE TO SUPPORT REQUIREMENTS ELICITATION." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2016. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=28272@1.

Full text
Abstract:
This work describes the mining of information in natural language from the GitHub repository. It explains how the content of similar projects, given a search by domain, can be useful for the reuse of knowledge and thus help in requirements elicitation tasks. Text-mining techniques, domain-independent regularities, and GitHub metadata are the methods used to select relevant projects and the information within them. An approach to achieving our goal through exploratory research is explained, and the results achieved are described.
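The project-selection step this abstract mentions can be illustrated with a minimal keyword-overlap scorer (the repository names, descriptions, and query below are invented; the thesis's actual pipeline also uses GitHub metadata and domain-independent regularities):

```python
import re
from collections import Counter

def tokens(text):
    """Lowercase alphabetic tokens of a description or query."""
    return re.findall(r"[a-z]+", text.lower())

def score(description, query):
    """Count how many query-term occurrences appear in the description."""
    bag = Counter(tokens(description))
    return sum(bag[t] for t in tokens(query))

repos = {
    "flight-booker": "search flight options, create a reservation, handle booking payments",
    "chess-engine": "a chess engine with an opening book",
}
query = "flight reservation booking"
ranked = sorted(repos, key=lambda name: score(repos[name], query), reverse=True)
print(ranked)
```

Even this crude relevance ranking separates the in-domain project from the off-domain one, which is the intuition behind filtering similar projects before mining their content.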
APA, Harvard, Vancouver, ISO, and other styles
10

Anttila, J. (Juha). "Language acquisition and linguistic requirements in peacekeeping:a case study." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201305201318.

Full text
Abstract:
This study looks into the ways language acquisition is presented in peacekeeping and what the linguistic requirements are for working in a peacekeeping organization. The study also suggests some ways the results could be used by the peacekeeping organization in planning and organizing the language training of future peacekeepers, in order to meet the linguistic requirements of peacekeeping more efficiently. The data mainly consist of recordings gathered during and after two separate tours of duty in peacekeeping work in Afghanistan. The gathered data were first assessed according to the assessment scales provided by the Common European Framework of Reference for Languages (CEFR), and the results were compared with a similar setting involving Finnish employees working in a foreign-language environment. The data have been divided into three separate phases: the first phase represents the starting level of the subject's language skills, the second phase represents the language skills resulting from active use of the target language at work, and the third phase represents the permanence of the improved language skills over a period of less active language use than between the first two phases. In each phase, the recordings were assessed according to each of the assessment scales for communicative language competences provided by the CEFR. The result of each assessment was compared to the corresponding assessment in the next phase, which made progress in language proficiency more measurable. The results of the research showed that most of the language competences categorized by the CEFR were permanently improved; in other words, the results of both the second and the third phase were improved compared to the first phase. A few of the competences did not improve at all, partly because deficiencies in those skills interfered least with the message.
According to the results, the future language training of peacekeepers could be more effective if peacekeepers were given more practical exercises on English in peacekeeping. Also, awareness of their own language skills and tracking their progress could make peacekeepers more efficient at communicating in a foreign language.
APA, Harvard, Vancouver, ISO, and other styles
11

BARZA, Sérgio. "Model checking requirements written in a controlled natural language." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/19519.

Full text
Abstract:
Software Maintainability (SM) has been studied since it became one of the key components of the software quality models accepted around the world. Such models support researchers and practitioners in evaluating the quality level of their systems. Therefore, many researchers have proposed metrics to be used as SM indicators. On the other hand, there is a suspicion that the use of SM metrics in industry differs from the academic context; in this case, practitioners do not adopt the metrics proposed and used by academia. Consequently, the goal of this research is to investigate the adoption and applicability of SM metrics in the Brazilian industrial context. This study will allow confirming whether practitioners use the SM metrics proposed by academics around the globe or propose their own metrics for SM measurement. As the empirical method for data assessment, we used a survey divided into two steps. The first focused on gathering information that allowed us to draw a specific picture of the use and applicability of SM metrics; semi-structured interviews were chosen as the research instrument. The next step focused on a more general scenario, encompassing the Brazilian software production industry, using an online questionnaire as the research instrument. Practitioners in different positions at several companies participated in this work; data were collected from requirements engineers, quality analysts, testers, developers, and project managers. Seven software companies participated in the first part of the study, and 68 valid answers were collected in the second stage, resulting in 31 SM metrics listed. The results showed that about 90% of the companies perform maintenance on their software products. However, only 60% confirmed using maintainability metrics, revealing a discrepancy between software maintenance and the use of SM metrics.
Nearly half of the companies researched used well-defined processes to collect these metrics. Nevertheless, some do not have any formal methodology; instead, they use the SM metrics that best fit the needs of a specific project. The conclusions of this study point to an issue that is nothing new in academic research around the world: many of the academic results, produced mainly in universities, are not reaching the software industry, and this is also true when the subject is software maintenance. The results of this research may lead to discussions on how SM metrics are being proposed nowadays.
APA, Harvard, Vancouver, ISO, and other styles
12

Rašovský, Martin. "Language for High-Level Description of User Interface Requirements." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2018. http://www.nusl.cz/ntk/nusl-386022.

Full text
Abstract:
This master's thesis deals with the design of a high-level language for describing graphical user interfaces. The theoretical part analyses current user-interface technologies, primarily in order to establish requirements for the new language. From these findings, the essential requirements that must be incorporated into the design of the language itself are summarised. Requirements from the perspective of people with specific educational needs, following the so-called design principles of computer therapy, are also considered. Based on the analysed requirements, the thesis then proposes a language for the high-level description of user interfaces. The language design also includes a description of an algorithm for composing the individual components defined in the language into the resulting user interface. The proposed language is implemented in the C# programming language, and the implementation is demonstrated on representative examples. Finally, the thesis discusses possible further extensions of the language.
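The composition step this abstract mentions, assembling components defined in a high-level description into a resulting interface, can be sketched in miniature (the node shapes and widget names below are invented for illustration; the thesis's actual language and algorithm are in C#):

```python
def compose(node, depth=0):
    """Render a nested {type, children} UI description as an indented widget tree."""
    lines = ["  " * depth + node["type"]]
    for child in node.get("children", []):
        lines.extend(compose(child, depth + 1))
    return lines

ui = {"type": "window", "children": [
    {"type": "form", "children": [{"type": "textbox"}, {"type": "button"}]},
]}
print("\n".join(compose(ui)))
```

A real composition algorithm must also resolve layout, sizing, and the accessibility constraints the thesis discusses; this only shows the recursive traversal from description to component tree.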
APA, Harvard, Vancouver, ISO, and other styles
13

Mikkonen, S. (Samu). "Requirements of the interaction design pattern languages for the web development:a pattern language delivery platform." Master's thesis, University of Oulu, 2016. http://urn.fi/URN:NBN:fi:oulu-201512312312.

Full text
Abstract:
Purpose: Previous studies of pattern languages have focused on HCI pattern languages or presented them in the context of web development before the advent of touchscreen mobile devices. This study looks at the potential and evolution of pattern languages in general, and then evaluates the effects and challenges that the modern web environment brings to the development of pattern languages for web development. Methodology: This research was conducted as design science research, following the process of a recognised design science research methodology. An experiment including a qualitative survey was conducted in the evaluation phase. As part of the background research, a narrative literature review was conducted, summarising the most relevant and highest-quality knowledge available to answer the research questions. Findings: This research indicates that pattern languages have untapped potential in web-technology-based user interfaces. The study did not yet provide a definitive answer to how to tap this potential, but instead provided a set of requirements to build upon. Research limitations: The study focuses only on developing user-interface patterns as front-end elements that can be paired with desired back-end implementations. The artifacts developed in this study should also be evaluated in larger qualitative research in order to better assess their impact. Practical implications: A delivery platform for a pattern language was developed and documented as an artifact. A basic pattern language was also developed and documented to enable evaluation of the delivery platform. The insights of this paper aim to provide a basis for building pattern languages for web development. The value of the paper also comes from discussing the role of tools and techniques that have recently been adopted in web development or are just being developed.
APA, Harvard, Vancouver, ISO, and other styles
14

Hartnor, Sofie. "Language requirements and constructions of belonging : A critical WPR policy analysis of the proposal for language and civic knowledge requirements for Swedish citizenship." Thesis, Linköpings universitet, Tema Genus, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-177331.

Full text
Abstract:
This study provides a feminist critical examination of the proposal to add language and civic knowledge requirements for citizenship acquisition, and of the ideas of belonging that are constructed within the proposal. The analysis has been made with Carol Bacchi’s ‘What’s the problem represented to be’-approach to policy analysis, examining what is represented as the ‘problem’ in the proposal, which assumptions and presuppositions underlie this representation of the ‘problem’, what is silenced or unproblematized in this representation of the problem, and the potential effects that this representation of the problem can have. The analysis shows how the ‘problem’ in the proposal for language and civic knowledge requirements is represented as a ‘lack of knowledge and participation’ that has been made possible by a ‘lack of demands’. Further, the representation of the ‘problem’ constituting the ‘solution’ of the proposed requirements is individualized, and the expected exclusionary effects of the proposal are left unproblematized. Finally, I argue that the representations of the ‘problem’ construct ideas of belonging which are predominantly about assessing eligibility to be allowed entry into the ‘collectivity’ of Swedish citizenship.
APA, Harvard, Vancouver, ISO, and other styles
15

Sidky, Ahmed Samy. "RGML: A Specification Language that Supports the Characterization of Requirements Generation Processes." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/34237.

Full text
Abstract:
Despite advancements in requirements generation models, methods and tools, low quality requirements are still being produced. One potential avenue for addressing this problem is to provide the requirements engineer with an interactive environment that leads (or guides) him/her through a structured set of integrated activities that foster "good" quality requirements. While that is our ultimate goal, a necessary first step in developing such an environment is to create a formal specification mechanism for characterizing the structure, process flow and activities inherent to the requirements generation process. In turn, such specifications can serve as a basis for developing an interactive environment supporting requirements engineering. Reflecting the above need, we have developed a markup language, the Requirements Generation Markup Language (RGML), which can be used to characterize a requirements generation process. The RGML can describe process structure, flow of control, and individual activities. Within activities, the RGML supports the characterization of application instantiation, the use of templates and the production of artifacts. The RGML can also describe temporal control within a process as well as conditional expressions that control if and when various activity scenarios will be executed. The language is expressively powerful, yet flexible in its characterization capabilities, and thereby, provides the capability to describe a wide spectrum of different requirements generation processes.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
16

Messaoudi, Mohammed. "Requirements elicitation through viewpoint control in a natural language environment." Thesis, Durham University, 1994. http://etheses.dur.ac.uk/5479/.

Full text
Abstract:
While requirements engineering is about building a conceptual model of part of reality, requirements validation involves assessing the model for correctness, completeness, and consistency. Viewpoint resolution is the process of comparing different views of a given situation and reconciling different opinions. In his doctoral dissertation Leite [72] proposes viewpoint resolution as a means for early validation of requirements of large systems. Leite concentrates on the representation of two different views using a special language, and the identification of their syntactic differences. His method relies heavily on redundancy: two viewpoints (systems analysts) should consider the same topic, use the same vocabulary, and use the same rule-based language which constrains how the rules should be expressed. The quality of discrepancies that can be detected using his method depends on the quality of the viewpoints. The hypothesis of this thesis is that, independently of the quality of the viewpoints, the number of viewpoints, the language, and the domain, it is possible to detect better quality discrepancies and to point out problems earlier than Leite's method allows. In the first part of this study, viewpoint-oriented requirements engineering methods are classified into categories based on the kind of multiplicity the methods address: multiple human agents, multiple specification processes, or multiple representation schemes. The classification provides a framework for the comparison and the evaluation of viewpoint-based methods. The study then focuses on the critical evaluation of Leite's method both analytically and experimentally. Counter examples were designed to identify the situations the method cannot handle. 
The second part of the work concentrates on the development of a method for the very early validation of requirements that improves on Leite's method and pushes the boundaries of the validation process upstream towards fact-finding, and downstream towards conflict resolution. The Viewpoint Control Method draws its principles from the fields of uncertainty management and natural language engineering. The basic principle of the method is that, in order to make sense of a domain, one must learn about the information sources and create models of their behaviour. These models are used to assess pieces of information, in natural language, received from the sources and to resolve conflicts between them. The models are then reassessed in the light of feedback from the results of the process of information evaluation and conflict resolution. Among the implications of this approach is the very early detection of problems, and the treatment of conflict resolution as an explicit and integral part of the requirements engineering process. The method is designed to operate within a large environment called LOLITA that supports relevant aspects of natural language engineering. In the third part of the study the Viewpoint Control Method is applied and experimentally evaluated, using examples and practical case studies. Comparing the proposed approach to Leite's shows that the Viewpoint Control Method is of wider scope, is able to detect problems earlier, and is able to point out better quality problems. The conclusions of the investigation support the view that underlines the naivety of assuming competence or objectivity of each source of information.
APA, Harvard, Vancouver, ISO, and other styles
17

Alam, Abu Turab. "Elicitation of software requirements : the role of natural language processing." Thesis, University of Surrey, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.280471.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

SAYAO, MIRIAM. "REQUIREMENTS VERIFICATION AND VALIDATION: NATURAL LANGUAGE PROCESSING AND SOFTWARE AGENTS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10927@1.

Full text
Abstract:
In software development process, initial activities can involve requirements elicitation, modeling and analysis (verification and validation). The use of natural language in the register of the requirements facilitates the communication among stakeholders, besides offering possibilities to customers and users to validate requirements without extra knowledge. On the other hand, in the current global economy, software development for teams geographically distributed is becoming a rule. In this scenario, requirements verification and validation for medium or high complexity software can involve the treatment of hundreds or even thousand requirements. With this complexity order it is important to provide computational support for the software engineer execute quality activities. In this work we propose a strategy which combines natural language processing (NLP) techniques and software agents to support analysis activities. We have generated textual or graphical visions from groups of related requirements; visions help completeness analysis, identification of duplicities and dependences among requirements. We use content analysis techniques to support the identification of omissions in nonfunctional requirements. Also, we propose a strategy to construct the lexicon, using NLP techniques. We use software agents to implement web services that incorporate the related strategies, and also agents to act as personal assistants for stakeholders of the software project.
APA, Harvard, Vancouver, ISO, and other styles
19

Lai, Danny Cho-Liang 1978. "Extending a formal specification & requirements language : a case study." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86814.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (leaves 92-93).
by Danny Cho-Liang Lai.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
20

SILVA, Bruno Cesar Ferreira. "CPN simulation-based test case generation from natural language requirements." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/22471.

Full text
Abstract:
Software engineering faces challenges such as difficulty in understanding user needs, ambiguous specifications, poorly defined requirements and, therefore, problems in interpreting system requirements. Model-based testing (MBT) is presented as an alternative for addressing these problems by using (semi-)formal methods in the specification, modelling or analysis of requirements, as well as by automatically generating test cases. This work proposes and implements a test generation strategy from Natural Language (NL) requirements via translation into Coloured Petri Nets (CPN), an extension of Petri nets that supports model structuring. This strategy extends previous work on the NAT2TEST framework, which involves syntactic and semantic analyses of NL requirements and the generation of a Data-Flow Reactive System (DFRS) as an intermediate representation, from which target formal models can be obtained for the purpose of test case generation. Our contributions include a systematic translation of DFRSs into CPN models, besides a strategy for test generation; we illustrate our overall approach with a running example. This work therefore presents a variant of the NATural Language Requirements to TEST Cases (NAT2TEST) strategy in which CPN is used as the intermediate model. The strategy, which is applicable to discrete or continuous systems, consists of five phases: syntactic and semantic analyses, DFRS generation, CPN generation, and generation of test cases. The approach proposed here, which is based on Petri net simulation, benefits from the maturity of the theory and tools related to CPN; moreover, resources are available for analysing structural and behavioural properties of the model. The process is automated by the NAT2TEST tool up to DFRS generation, and the model translation from the DFRS to the CPN is automated with the Spoofax framework.
Finally, test case generation occurs automatically via simulations performed in CPN Tools. Our strategy was evaluated considering examples from the literature (Vending Machine and Nuclear Power Plant) and the aerospace industry (Priority Control). We analysed performance and the ability to detect defects generated via mutation. In general, our strategy outperformed the considered baseline: random testing. We also compared our strategy with the version that uses CSP as the intermediate model; ours performed better in all three case studies, finding the same number of defects in one of them and more in the other two.
APA, Harvard, Vancouver, ISO, and other styles
21

Baayen, Harald R. "Resource requirements for neo-generative modeling in (psycho)linguistics." Universität Potsdam, 2012. http://opus.kobv.de/ubp/volltexte/2012/6231/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ahmed, Saqib, and Bilal Ahmad. "Transforming Requirements to Ontologies." Thesis, Jönköping University, Tekniska Högskolan, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-50048.

Full text
Abstract:
Capturing a client’s needs and expectations for a product or service is an important problem in software development. Software requirements are normally captured in natural language and are mostly unstructured, which makes it difficult to automate the process of going from software requirements to executable code. A big hurdle in this process is the lack of consistency and standardization in how software requirements are represented. Thus, the aim of this thesis is to present a method for transforming natural-language requirement text into an ontology. It is easy to store and retrieve information from an ontology, since it is a semantic model, and it is also easy to infer new knowledge from it. As the main component of our research was software requirements, there was a need to investigate and decide which types of requirements define the scope of this research. We selected the INCOSE guidelines as a benchmark to scrutinize the properties we desired in the natural-language requirements. These natural-language requirements were used in the form of user stories as the input of the transformation process. We selected a combination of two methods for our research, literature review and design science research, in order to obtain a good grip on existing work in this field and then combine that knowledge to propose new rules for the requirements-to-ontology transformation. During the literature review we studied several domains, such as requirements engineering, ontologies, natural language processing, and information extraction. The gathered knowledge was then used to propose the rules and the flow of their implementation. The proposed system was named “Reqtology”. Reqtology defines the process from taking the requirements in the form of user stories, to extracting the useful information based on the rules, to classifying that information so that it can be used to form ontologies.
The workflow consists of a six-step process which starts from input text in the form of user stories and ends by providing entities that can be used for ontology formation.
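The first extraction step such a pipeline might take can be sketched in a few lines. The code below is our own minimal illustration, not the Reqtology implementation: it matches the conventional user-story template and pulls out candidate ontology entities (role, goal, benefit).

```python
import re

# Conventional user-story template: "As a <role>, I want <goal> so that <benefit>."
# The pattern and field names are illustrative assumptions, not taken from the thesis.
STORY_PATTERN = re.compile(
    r"As an? (?P<role>.+?), I want (?P<goal>.+?)(?: so that (?P<benefit>.+?))?\.?$",
    re.IGNORECASE,
)

def extract_entities(story: str) -> dict:
    """Extract role, goal and (optional) benefit from a templated user story."""
    match = STORY_PATTERN.match(story.strip())
    if not match:
        raise ValueError("story does not follow the expected template")
    # Drop the benefit key entirely when the clause is absent.
    return {k: v for k, v in match.groupdict().items() if v is not None}

entities = extract_entities(
    "As a librarian, I want to search books by ISBN so that I can locate them quickly."
)
# → {'role': 'librarian', 'goal': 'to search books by ISBN',
#    'benefit': 'I can locate them quickly'}
```

A real pipeline would of course need NLP techniques beyond template matching for free-form stories, which is exactly the gap such rule sets aim to address.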
APA, Harvard, Vancouver, ISO, and other styles
23

Pong, Lih, and 龐立. "Formal data flow diagrams (FDFD): a petri-netbased requirements specification language." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1985. http://hub.hku.hk/bib/B31207406.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

CARVALHO, Gustavo Henrique Porto de. "NAT2TEST: generating test cases from natural language requirements based on CSP." Universidade Federal de Pernambuco, 2016. https://repositorio.ufpe.br/handle/123456789/17929.

Full text
Abstract:
High trustworthiness levels are usually required when developing critical systems, and model-based testing (MBT) techniques play an important role in generating test cases from specification models. Concerning critical systems, these models are usually created using formal or semi-formal notations. Moreover, it is also desirable to state clearly and formally the conditions necessary to guarantee that an implementation is correct with respect to its specification by means of a conformance relation, which can be used to prove that the test generation strategy is sound. Despite the benefits of MBT, those who are not familiar with the models' syntax and semantics may be reluctant to adopt these formalisms. Furthermore, most of these models are not available at the very beginning of the project, when usually only natural-language requirements are available; therefore, the use of MBT is postponed. Here, we propose an MBT strategy for generating test cases from controlled natural language (CNL) requirements: NAT2TEST, which spares the user from knowing the syntax and semantics of the underlying notations, besides allowing early use of MBT via natural-language processing techniques; the formal and semi-formal models internally used by our strategy are automatically generated from the natural-language requirements. Our approach is tailored to data-flow reactive systems: a class of embedded systems whose inputs and outputs are always available as signals. These systems can also have time-based behaviour, which may be discrete or continuous. The NAT2TEST strategy comprises a number of phases. Initially, the requirements are syntactically analysed according to a CNL we proposed for describing data-flow reactive systems. Then, the requirements' informal semantics are characterised based on case grammar theory. Afterwards, we derive a formal representation of the requirements considering a model of data-flow reactive systems that we defined.
Finally, this formal model is translated into communicating sequential processes (CSP) to provide means for generating test cases. We prove that our test generation strategy is sound with respect to our timed input-output conformance relation based on CSP: csptio. Besides CSP, we explore the generation of other target notations (SCR and IMR) from which we can generate test cases using commercial tools (T-VEC and RT-Tester, respectively). The whole process is fully automated by the NAT2TEST tool. Our strategy was evaluated considering examples from the literature, the aerospace industry (Embraer) and the automotive industry (Mercedes). We analysed performance and the ability to detect defects generated via mutation. In general, our strategy outperformed the considered baseline: random testing. We also compared our strategy with relevant commercial tools.
APA, Harvard, Vancouver, ISO, and other styles
25

Rajković, Kostadin. "MEASURING THE COMPLEXITY OF NATURAL LANGUAGE REQUIREMENTS IN INDUSTRIAL CONTROL SYSTEMS." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-44703.

Full text
Abstract:
Requirements specification documents are one of the main sources of guidance in software engineering projects, and they contribute to the definition of the final product and its attributes. They can contain text, graphs, figures and diagrams, but in industry they are still mostly written in natural language (NL), which is also a convenient way of representing them. With the increase in the size of software projects in industrial systems, requirements specification documents are growing in size and complexity, which can make them difficult to analyze. There is a need to provide stakeholders with a way of analyzing requirements in order to develop software projects more efficiently. In this thesis we investigate how the complexity of textual requirements can be measured in industrial systems. A set of requirements complexity measures was selected from the literature and adapted for application to real-world requirements specification documents. These measures were implemented in a tool called RCM and evaluated on requirements documentation provided by Bombardier Transportation AB. The statistical correlation between the selected measures was investigated based on a sample of data from the provided documentation, and the analysis showed a significant correlation between a couple of the selected measures. In addition, a focus group was conducted with the goal of exploring the potential use of these metrics and the RCM tool in industrial systems, as well as areas of potential improvement that future research could investigate.
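Lexical complexity measures of the kind this abstract describes are simple to prototype. The sketch below is our own illustration, not the RCM tool: it computes two measures commonly proposed for natural-language requirements, average sentence length and a count of vague "weak words" (the word list itself is an assumption, not taken from the thesis).

```python
import re

# Illustrative list of "weak words" often flagged in requirements reviews.
VAGUE_TERMS = {"appropriate", "adequate", "efficient", "user-friendly", "etc"}

def complexity_measures(requirement: str) -> dict:
    """Compute simple lexical complexity measures for one requirement."""
    sentences = [s for s in re.split(r"[.!?]+", requirement) if s.strip()]
    words = re.findall(r"[A-Za-z-]+", requirement.lower())
    vague = sum(1 for w in words if w in VAGUE_TERMS)
    return {
        "words": len(words),
        "sentences": len(sentences),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        "vague_terms": vague,
    }

m = complexity_measures(
    "The system shall respond in an adequate time. Errors shall be logged."
)
# → {'words': 12, 'sentences': 2, 'avg_sentence_length': 6.0, 'vague_terms': 1}
```

Correlating such measures across a document set, as the thesis does statistically, then only requires collecting these dictionaries over all requirements.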
APA, Harvard, Vancouver, ISO, and other styles
26

Dunworth, Catherine M. "Tertiary entry level English language proficiency: a case study." Curtin University of Technology, Faculty of Education, 2001. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=13191.

Full text
Abstract:
This study was initiated as a result of the appearance of a number of articles and commentaries in the academic press which intimate that the English language levels of many overseas students studying in Australia are not sufficient to meet the demands of their academic programs. A preliminary investigation into the standards set by one university revealed that there was no statement, policy, or public document describing an appropriate level of language use; and that the university concerned defined language proficiency entirely in terms of bands, scores or grades provided by external testing organisations. Commencing with the assumption that there is an entry level of English language competence, below which students have little chance of success in their studies (at least, within the accepted timeframe), this qualitative case study into one tertiary institution utilises a number of data collection strategies in order to develop a description or definition of a 'gatekeeper' level of English competence. It then compares the findings with the criteria for assessment and grading used by the two most widely available English language tests, TOEFL and IELTS. The first chapter introduces the background to the study. The second chapter outlines the underlying philosophical, social and linguistic framework within which the study was devised, in the context of the literature which informed it. The third chapter presents a justification for the selected research methods and data collection strategies. In the fourth and fifth chapters, the results demonstrate that interpretations of tertiary entry-level language proficiency vary, leading to confusion and an absence of strategic direction; it is further suggested that an appropriate level of language proficiency for tertiary entry cannot be defined without taking into account the prevailing social, political and educational environment. Recommendations are put forward for the development of an institutional-level framework in which it might be possible to make judgements about the desired levels of language proficiency and improve on existing procedures for their evaluation. This study aims to bring together a number of different strands of research into language and tertiary education, such as definitions of language proficiency, language testing and literacy issues, and demonstrate their interconnectivity. As a result, it presents a broad overview (within the overarching discipline) rather than focusing on a single area in depth. Although as a site case study this research does not claim generalisability, it is hoped that its findings might be useful for other institutions as a basis for their own research.
APA, Harvard, Vancouver, ISO, and other styles
27

Pong, Lih. "Formal data flow diagrams (FDFD) : a petri-net based requirements specification language /." [Hong Kong : University of Hong Kong], 1985. http://sunzi.lib.hku.hk/hkuto/record.jsp?B12323019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Sutherland, Sean J. P. (Sean Jason Perry) 1977. "A demonstration of a formal specification & requirements language : a case study." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/86852.

Full text
Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001.
Includes bibliographical references (p. 109, 111 [p. 110 intentionally blank]).
by Sean J.P. Sutherland.
M.Eng.
APA, Harvard, Vancouver, ISO, and other styles
29

Erasmus, Daniel. "Video quality requirements for South African Sign Language communications over mobile phones." Master's thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/6395.

Full text
Abstract:
Includes abstract.
Includes bibliographical references.
This project aims to find the minimum video resolution and frame rate that supports intelligible cell phone based video communications in South African Sign Language.
APA, Harvard, Vancouver, ISO, and other styles
30

Rane, Prerana Pradeepkumar. "Automatic Generation of Test Cases for Agile using Natural Language Processing." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/76680.

Full text
Abstract:
Test case design and generation is a tedious manual process that requires 40-70% of the software test life cycle. The test cases written manually by inexperienced testers may not offer complete coverage of the requirements. Frequent changes in requirements reduce the reusability of the manually written test cases, costing more time and effort. Most projects in the industry follow a Behavior-Driven software development approach to capturing requirements from the business stakeholders through user stories written in natural language. Instead of writing test cases manually, this thesis investigates a practical solution for automatically generating test cases within an Agile software development workflow using natural language-based user stories and acceptance criteria. However, the information provided by the user story is insufficient to create test cases using natural language processing (NLP), so we have introduced two new input parameters, Test Scenario Description and Dictionary, to improve the test case generation process. To establish the feasibility, we developed a tool that uses NLP techniques to generate functional test cases from the free-form test scenario description automatically. The tool reduces the effort required to create the test cases while improving the test coverage and quality of the test suite. Results from the feasibility study are presented in this thesis.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
31

Schamai, Wladimir. "Model-Based Verification of Dynamic System Behavior against Requirements : Method, Language, and Tool." Doctoral thesis, Linköpings universitet, Institutionen för datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-98107.

Full text
Abstract:
Modeling and simulation of complex systems is at the heart of any modern engineering activity. Engineers strive to predict the behavior of the system under development in order to get answers to particular questions long before physical prototypes or the actual system are built and can be tested in real life. An important question is whether a particular system design fulfills or violates requirements that are imposed on the system under development. When developing complex systems, such as spacecraft, aircraft, cars, power plants, or any subsystem of such a system, this question becomes hard to answer simply because the systems are too complex for engineers to be able to create mental models of them. Nowadays it is common to use computer-supported modeling languages to describe complex physical and cyber-physical systems. The situation is different when it comes to describing requirements. Requirements are typically written in natural language. Unfortunately, natural languages fail at being unambiguous, in terms of both syntax and semantics. Automated processing of natural-language requirements is a challenging task which is still too difficult to accomplish via computer for this approach to be of significant use in requirements engineering or verification. This dissertation proposes a new approach to design verification using simulation models that include formalized requirements. The main contributions are a new method that is supported by a new language and tool, along with case studies. The method enables verification of system dynamic behavior designs against requirements using simulation models. In particular, it shows how natural-language requirements and scenarios are formalized. Moreover, it presents a framework for automating the composition of simulation models that are used for design verification, evaluation of verification results, and sharing of new knowledge inferred in verification sessions.
A new language called ModelicaML was developed to support the new method. It enables requirement formalization and integrates UML and Modelica. The language and the developed algorithms for automation are implemented in a prototype that is based on Eclipse Papyrus UML, Acceleo, and Xtext for modeling, and OpenModelica tools for simulation. The prototype is used to illustrate the applicability of the new method to examples from industry. The case studies presented start with sets of natural-language requirements and show how they are translated into models. Then, designs and verification scenarios are modeled, and simulation models are composed and simulated automatically. The simulation results produced are then used to draw conclusions on requirement violations; this knowledge is shared using semantic web technology. This approach supports the development and dynamic verification of cyber-physical systems, including both hardware and software components. ModelicaML facilitates a holistic view of the system by enabling engineers to model and verify multi-domain system behavior using mathematical models and state-of-the-art simulation capabilities. Using this approach, requirement inconsistencies, incorrectness, or infeasibilities, as well as design errors, can be detected and avoided early on in system development. The artifacts created can be reused for product verification in later development stages.
APA, Harvard, Vancouver, ISO, and other styles
32

Renaud, L. "Formalisation and evaluation of focus theories for requirements elicitation dialogues in natural language." Thesis, University of Edinburgh, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.661046.

Full text
Abstract:
Requirements elicitation is a difficult part of software engineering in which the specifications for a new system are discussed with potential users. Because verifying that requirements are correct is a complex task, computer support is often beneficial. This support requires formal specifications. However, people are not usually trained to use formal specification languages. Task- or domain-specific languages smooth the learning curve to write formal specifications, but the elicitation process often remains error-prone. Users need more support while writing specifications. In particular, a tool which interacts with them and helps them express their requirements in a domain-specific way could lower the number of requirements elicitation errors. However, although numerous frameworks have been developed to support the expression and analysis of requirements, much less attention has been paid to the control of the dialogue taking place between the users and the system whilst using such frameworks. Focus theories are theories which explain how participants in a dialogue pay attention to certain things, at certain points of a dialogue, and how this attention may shift to other topics. In this thesis, we propose to use focus theories to improve the quality of the interaction between users and requirements elicitation tools. We show that, by using the constraints on dialogue evolution provided by these theories and the constraints provided by the requirements elicitation task, we can guide the elicitation process in a natural and easily understandable manner. This interactive way of using focus theories is new. It could be used in other applications where reasoning and communication play important roles and need to interact. We also carry out a comparative study of the use of focus theories for requirements elicitation, which requires us to be precise about our interpretation of our chosen focus theories and to develop an innovative means of empirical testing for them.
This gives us a formalisation of focus theories as well as a method for developing and testing experimental dialogue managers.
APA, Harvard, Vancouver, ISO, and other styles
33

Kummler, Patrick S. [Verfasser], and H. [Akademischer Betreuer] Fromm. "Automated Quality Assessment of Natural Language Requirements / Patrick S. Kummler ; Betreuer: H. Fromm." Karlsruhe : KIT-Bibliothek, 2021. http://nbn-resolving.de/urn:nbn:de:101:1-2021091505000935829050.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Alalshuhai, Ahmed. "Requirements engineering of context-aware applications." Thesis, De Montfort University, 2015. http://hdl.handle.net/2086/12487.

Full text
Abstract:
Context-aware computing envisions a new generation of smart applications that have the ability to perpetually sense the user’s context and use these data to make adaptation decisions in response to changes in the user’s context, so as to provide timely and personalized services anytime and anywhere. Unlike traditional distributed systems, where the network topology is fixed and wired, context-aware computing systems are mostly based on wireless communication due to the mobility of the network nodes; hence the network topology is not fixed but changes dynamically in an unpredictable manner as nodes join and leave the network, in addition to the fact that wireless communication is unstable. These factors make the design and development of context-aware computing systems much more challenging, as the system requirements change depending on the context of use. The Unified Modelling Language (UML) is a graphical language commonly used to specify, visualize, construct, and document the artefacts of software-intensive systems. However, UML is an all-purpose modelling language and does not have notations to distinguish context-awareness requirements from other system requirements. This is critical for the specification, visualization, construction and documentation of context-aware computing systems because context-awareness requirements are highly important in these systems. This thesis proposes an extension of UML diagrams to cater for the specification, visualization, construction and documentation of context-aware computing systems, where new notations are introduced to model context-awareness requirements distinctively from other system requirements.
The contributions of this work can be summarized as follows: (i) A context-aware use case diagram is a new notion which merges into a single diagram the traditional use case diagram (that describes the functions of an application) and the use context diagram, which specifies the context information upon which the behaviours of these functions depend. (ii) A novel notion known as a context-aware activity diagram is presented, which extends the traditional UML activity diagrams to enable the representation of context objects, context constraints and adaptation activities. Context constraints express conditions upon context object attributes that trigger adaptation activities; adaptation activities are activities that must be performed in response to specific changes in the system’s context. (iii) A novel notion known as the context-aware class diagram is presented, which extends the traditional UML class diagrams to enable the representation of context information that affects the behaviours of a class. A new relationship, called utilisation, between a UML class and a context class is used to model context objects, meaning that the behaviours of the UML class depend upon the context information represented by the context class. Hence a context-aware class diagram is a rich and expressive language that distinctively depicts both the structure of classes and that of the contexts upon which they depend. The pragmatics of the proposed approach are demonstrated using two real-world case studies.
APA, Harvard, Vancouver, ISO, and other styles
35

Bilali, Loubna. "Localization Training: Towards an Industry-based Requirements-Gathering Model." Kent State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=kent1532649023272877.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Boyce, Sarah. "Praxis Exam and Clinical Fellowship Year Requirements." Digital Commons @ East Tennessee State University, 2014. https://dc.etsu.edu/etsu-works/1542.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Nguyen, Thi Cam Le. "Learner autonomy and EFL learning at the tertiary level in Vietnam : a thesis submitted to the Victoria University of Wellington in fulfilment of the requirements for the degree of Doctor of Philosophy in Applied Linguistics /." ResearchArchive@Victoria e-Thesis, 2009. http://hdl.handle.net/10063/1203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Jeurissen, Maree Jayne. "Silence speaks volumes a thesis submitted in partial fulfilment of the requirements for the degree of Master of Arts (Applied Language Studies), Auckland University of Technology, 2005." Full thesis. Abstract, 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
39

Vlas, Radu. "A Requirements-Based Exploration of Open-Source Software Development Projects – Towards a Natural Language Processing Software Analysis Framework." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/cis_diss/48.

Full text
Abstract:
Open source projects do have requirements; they are, however, mostly informal, text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects, error-prone. Automated analysis of natural language requirements, even partial, will be of great benefit. Towards that end, I describe the design and validation of an automated natural language requirements classifier for open source software development projects. I compare two strategies for recognizing requirements in open forums of software features. The results suggest that classifying text at the forum post aggregation and sentence aggregation levels may be effective. Initial results suggest that it can reduce the effort required to analyze requirements of open source software development projects. Software development organizations and communities currently employ a large number of software development techniques and methodologies. This implied complexity is also enhanced by a wide range of software project types and development environments. The resulting lack of consistency in the software development domain leads to one important challenge that researchers encounter while exploring this area: specificity. This results in an increased difficulty of maintaining a consistent unit of measure or analysis approach while exploring a wide variety of software development projects and environments. The problem of specificity is more prominently exhibited in an area of software development characterized by a dynamic evolution, a unique development environment, and a relatively young history of research when compared to traditional software development: the open-source domain. 
While performing research on open source and the associated communities of developers, one can notice the same challenge of specificity being present in requirements engineering research as in the case of closed-source software development. Whether research is aimed at performing longitudinal or cross-sectional analyses, or attempts to link requirements to other aspects of software development projects and their management, specificity calls for a flexible analysis tool capable of adapting to the needs and specifics of the explored context. This dissertation covers the design, implementation, and evaluation of a model, a method, and a software tool comprising a flexible software development analysis framework. These design artifacts use a rule-based natural language processing approach and are built to meet the specifics of a requirements-based analysis of software development projects in the open-source domain. This research follows the principles of design science research as defined by Hevner et al. and includes stages of problem awareness, suggestion, development, evaluation, and results and conclusion (Hevner et al. 2004; Vaishnavi and Kuechler 2007). The long-term goal of the research stream stemming from this dissertation is to propose a flexible, customizable, requirements-based natural language processing software analysis framework which can be adapted to meet the research needs of multiple different types of domains or different categories of analyses.
APA, Harvard, Vancouver, ISO, and other styles
40

Dhar, Siddharth. "Optimizing TEE Protection by Automatically Augmenting Requirements Specifications." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98730.

Full text
Abstract:
An increasing number of software systems must safeguard their confidential data and code, referred to as critical program information (CPI). Such safeguarding is commonly accomplished by isolating CPI in a trusted execution environment (TEE), with the isolated CPI becoming a trusted computing base (TCB). TEE protection incurs heavy performance costs, as TEE-based functionality is expensive to both invoke and execute. Despite these costs, projects that use TEEs tend to have unnecessarily large TCBs. Based on our analysis, developers often put code and data into TEE for convenience rather than protection reasons, thus not only compromising performance but also reducing the effectiveness of TEE protection. In order for TEEs to provide maximum benefits for protecting CPI, their usage must be systematically incorporated into the entire software engineering process, starting from Requirements Engineering. To address this problem, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by using natural language processing (NLP) to classify those software requirements that are security critical and should be isolated in TEE. Our approach takes as input a requirements specification and outputs a list of annotated software requirements. The annotations recommend to the developer which corresponding features comprise CPI that should be protected in a TEE. Our evaluation results indicate that our approach identifies CPI with a high degree of accuracy to incorporate safeguarding CPI into Requirements Engineering.
Master of Science
An increasing number of software systems must safeguard their confidential data like passwords, payment information, personal details, etc. This confidential information is commonly protected using a Trusted Execution Environment (TEE), an isolated environment provided by either the existing processor or separate hardware that interacts with the operating system to secure sensitive data and code. Unfortunately, TEE protection incurs heavy performance costs, with TEEs being slower than modern processors and frequent communication between the system and the TEE incurring heavy performance overhead. We discovered that developers often put code and data into TEE for convenience rather than protection purposes, thus not only hurting performance but also reducing the effectiveness of TEE protection. By thoroughly examining a project's features in the Requirements Engineering phase, which defines the project's functionalities, developers would be able to understand which features handle confidential data. To that end, we present a novel approach that incorporates TEEs in the Requirements Engineering phase by means of Natural Language Processing (NLP) tools to categorize the project requirements that may warrant TEE protection. Our approach takes as input a project's requirements and outputs a list of categorized requirements defining which requirements are likely to make use of confidential information. Our evaluation results indicate that our approach performs this categorization with a high degree of accuracy to incorporate safeguarding the confidentiality related features in the Requirements Engineering phase.
APA, Harvard, Vancouver, ISO, and other styles
41

Nordin, Azlin. "Constructing component-based systems directly from requirements using incremental composition." Thesis, University of Manchester, 2013. https://www.research.manchester.ac.uk/portal/en/theses/constructing-componentbased-systems-directly-from-requirements-using-incremental-composition(68a25711-88a8-4ec5-aece-95325278837d).html.

Full text
Abstract:
In software engineering, system construction typically starts from a requirements specification that has been engineered from raw requirements in a natural language. The specification is used to derive intermediate requirements models such as structured or object-oriented models. Throughout the stages of system construction, these artefacts will be used as reference models. In general, in order to derive a design specification out of the requirements, the entire set of requirements specifications has to be analysed. Such models at best only approximate the raw requirements since these design models are derived as a result of the abstraction process according to the chosen software development methodology, and subjected to the expertise, intuition, judgment and experiences of the analysts or designers of the system. These abstraction models require the analysts to elicit all useful information from the requirements, and there is a potential risk that some information may be lost in the process of model construction. As the use of natural language requirements in system construction is inevitable, the central focus of this study was to use requirements stated in natural language in contrast to any other requirements representation (e.g. modelling artefact). In this thesis, an approach that avoids intermediate requirements models, and maps natural language requirements directly into architectural constructs, and thus minimises information loss during the model construction process, has been defined. This approach has been grounded on the adoption of a component model that supports incremental composition. Incremental composition allows a system to be constructed piece by piece. By mapping a raw requirement to elements of the component model, a partial architecture that satisfies that requirement is constructed. 
Consequently, by iterating this process for all the requirements, one at a time, the incremental composition to build the system piece by piece directly from the requirements can be achieved.
APA, Harvard, Vancouver, ISO, and other styles
42

Bitchener, John. "An investigation into potential mismatches between teacher intention and learner interpretation of task a thesis submitted to Auckland University of Technology in partial fulfilment of the requirements for the degree of Master of Arts (MA), 2009 /." Click here to access this resource online, 2009. http://hdl.handle.net/10292/717.

Full text
Abstract:
Thesis (MA, Applied Language Studies) -- AUT University, 2009.
Includes bibliographical references. Also held in print (1 v. (various pagings) ; 30 cm.) in the Archive at the City Campus (T 418.0071 BIT)
APA, Harvard, Vancouver, ISO, and other styles
43

Schmitt, Susan M. "A critical examination of the relationship between the memory requirements of structured BASIC and unstructured BASIC." Instructions for remote access. Click here to access this electronic resource. Access available to Kutztown University faculty, staff, and students only, 1985. http://www.kutztown.edu/library/services/remote_access.asp.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lucantonio, Valerio. "Enanching the consistency between requirements and test cases through the definition of a Controlled Natural Language." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-28404.

Full text
Abstract:
Requirements and testing engineering build up the solid base for developing successful software projects. In particular, along the development process, testing follows and relies on requirements engineering: an incoherent specification of either one can affect the correctness of a project, resulting in delays, failures, unhappy customers and other consequences for the project delivery. Especially nowadays, software companies are competing in fast-changing markets where the delivery time of the products is the most crucial aspect, and it really affects the quality and the success of the product. Given the semantic gap between requirements (typically written in natural language) and test specifications, it is not rare that requirements are misunderstood, leading to erroneous tests. This risk is even more relevant when trying to perform early validation, since testing is mainly based on requirements definition. This thesis work introduces an investigation to close the gap between requirements specification and test cases by providing automatic test case generation: requirements written in natural language are subsequently restructured into a more formal controlled natural language, and the final automatic translation derives test cases. The soundness of the concept is demonstrated through a practical implementation tailored to a previous real project developed at MDH. The implementation not only demonstrates the feasibility of the idea, but also shows interesting results in terms of generated test cases compared against the ones obtained for the project by hand.
APA, Harvard, Vancouver, ISO, and other styles
45

Ruiz, Martín Christopher. "Assisting Hispanoamericano parents to recognize children's literacy requirements by recontextualizing the academic language of social studies." CSUSB ScholarWorks, 1993. https://scholarworks.lib.csusb.edu/etd-project/818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Alenazi, Mounifah. "Toward Improved Traceability of Safety Requirements and State-Based Design Models." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1623168093414723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Ekström, Daniel, and Johnny Hermansson. "Maktkamp om svenska språkets spelregler : En fältanalys av språkets relation till medborgarskap i Sverige." Thesis, Stockholms universitet, Centrum för tvåspråkighetsforskning, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-189556.

Full text
Abstract:
This study examines the public debate that arose in the wake of the January agreement parties' proposal for a language test for citizenship, presented in January 2019. Using a discourse-analytical approach, it investigates the language ideologies and the views of citizenship expressed in the debate contributions. With the ambition of situating the positions of the discourse producers, the study applies Bourdieu's field theory and directs its focus towards two specific fields: the political field and the scientific field. Furthermore, the study focuses on the oppositions present in the debate, within as well as between the fields. The results show that different standpoints emerge which can be explained by the position from which the actors speak and the field capital they possess. The study also shows that the language-ideological conditions have changed over the past decade with regard to the role of Swedish language proficiency in the naturalisation process. This has led linguistics researchers to enter the debate and argue against language requirements for citizenship. The results indicate that the two fields are breaking with the exchange of knowledge that has characterised Swedish language-policy decision-making, in which scientific research has formed the basis for discursive conclusions.
APA, Harvard, Vancouver, ISO, and other styles
48

Florent, Nicholas. "Investigating Hong Kong tertiary students' perceptions of the cognitive requirements of writing tasks in three English language proficiency tests." Click to view the E-thesis via HKUTO, 2001. http://sunzi.lib.hku.hk/hkuto/record/B31944796.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Florent, Nicholas. "Investigation Hong Kong tertiary students' perceptions of the cognitive requirements of writing tasks in three English language proficiency tests /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk:8888/cgi-bin/hkuto%5Ftoc%5Fpdf?B23424461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Roettenbacher, Christian Wolfgang. "A framework for specifying business rules based on logic with a syntax close to natural language." Thesis, De Montfort University, 2017. http://hdl.handle.net/2086/15326.

Full text
Abstract:
The systematic interaction of software developers with the business domain experts, who are usually not software developers, is crucial to software system maintenance and creation and has surfaced as the big challenge of modern software engineering. Existing frameworks promoting the typical programming languages with artificial syntax are suitable to be processed by computers but do not cater to domain experts, who are used to documents written in natural language as a means of interaction. Other frameworks that claim to be fully automated, such as those using natural language processing, are too imprecise to handle the typical requirements documents written in heterogeneous natural language flavours. In this thesis, a framework is proposed that can support the specification of business rules that is, on the one hand, understandable for nonprogrammers and, on the other hand, semantically founded, which enables computer processability. This is achieved by the novel language Adaptive Business Process and Rule Integration Language (APRIL). Specifications in APRIL can be written in a style close to natural language and are thus suitable for humans, which was empirically evaluated with a representative group of test persons. A useful and uncommon feature of APRIL is the ability to define reusable abstract mixfix operators as sentence patterns, which can mimic natural language. The semantic underpinning of the mixfix operators is achieved by customizable atomic formulas, allowing APRIL to be tailored to specific domains. Atomic formulas are underpinned by a denotational semantics, which is based on Tempura (an executable subset of Interval Temporal Logic (ITL)) to describe behaviour and the Object Constraint Language (OCL) to describe invariants and pre- and postconditions. APRIL statements can be used as the basis for automatically generating test code for software systems.
An additional aspect of enhancing the quality of specification documents comes with a novel formal method technique (ISEPI) applicable to behavioural business rules semantically based on Propositional Interval Temporal Logic (PITL) and complying with the newly discovered 2-to-1 property. This work discovers how the ISE subset of ISEPI can be used to express complex behavioural business rules in a more concise and understandable way. The evaluation of ISE is done by an example specification taken from the car industry describing system behaviour, using the tools MONA and PITL2MONA. Finally, a methodology is presented that helps to guide a continuous transformation starting from purely natural language business rule specification to the APRIL specification which can then be transformed to test code. The methodologies, language concepts, algorithms, tools and techniques devised in this work are part of the APRIL-framework.
APA, Harvard, Vancouver, ISO, and other styles