Dissertations / Theses on the topic 'Line feature'

Consult the top 50 dissertations / theses for your research on the topic 'Line feature.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Huet, Benoit. "Object recognition from large libraries of line patterns." Thesis, University of York, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.298533.

2

Larkins, Robert L. "Off-line signature verification." The University of Waikato, 2009. http://hdl.handle.net/10289/2803.

Abstract:
In today’s society, signatures are the most accepted form of identity verification. However, they have the unfortunate side-effect of being easily abused by those who would feign the identification or intent of an individual. This thesis implements and tests current approaches to off-line signature verification, with the goal of determining the most beneficial techniques available. The investigation also introduces novel techniques that are shown to significantly boost the classification accuracy achieved for both person-dependent (one-class training) and person-independent (two-class training) signature verification learning strategies. The findings presented in this thesis show that many common techniques do not always give a significant advantage, and in some cases they actually detract from the classification accuracy. Using the techniques proven to be most beneficial, an effective approach to signature verification is constructed, which achieves accuracies of approximately 90% and 91% on the standard CEDAR and GPDS signature datasets respectively. These results are significantly better than the majority of previously published results. Additionally, the approach is shown to remain relatively stable when a minimal number of training signatures is used, demonstrating its feasibility for real-world situations.
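The person-dependent (one-class training) strategy the abstract contrasts with two-class training can be illustrated with a short sketch: a model is fitted on a writer's genuine signatures only, and a questioned signature is accepted if it falls inside the learned region. The global features and the one-class SVM below are illustrative stand-ins, not the thesis's actual feature set or classifier.

    # Minimal sketch of person-dependent (one-class) signature verification.
    # Features here (aspect ratio, ink density, ink centroid) are toy choices.
    import numpy as np
    from sklearn.svm import OneClassSVM

    def global_features(img):
        """Toy global features for a binarized signature image (ink pixels == 1)."""
        ys, xs = np.nonzero(img)
        aspect = (np.ptp(xs) + 1) / (np.ptp(ys) + 1)   # bounding-box aspect ratio
        density = img.mean()                           # fraction of ink pixels
        cy = ys.mean() / img.shape[0]                  # normalised ink centroid
        cx = xs.mean() / img.shape[1]
        return np.array([aspect, density, cy, cx])

    def train_verifier(genuine_imgs):
        """Fit a one-class model on a single writer's genuine signatures only."""
        X = np.stack([global_features(im) for im in genuine_imgs])
        return OneClassSVM(nu=0.1, gamma="scale").fit(X)

    def is_genuine(model, img):
        return model.predict(global_features(img).reshape(1, -1))[0] == 1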
3

Seidl, Christoph. "Evolution in Feature-Oriented Model-Based Software Product Line Engineering." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-81200.

Abstract:
Software Product Lines (SPLs) are a successful approach to software reuse in the large. Even though tools exist to create SPLs, their evolution is widely unexplored. Evolving an SPL manually is tedious and error-prone as it is hard to avoid unintended side-effects that may harm the consistency of the SPL. In this thesis, the conceptual basis of a system for the evolution of model-based SPLs is presented, which maintains consistency of models and feature mapping. As basis, a novel classification is introduced that distinguishes evolutions by their potential to harm the mapping of an SPL. Furthermore, multiple remapping operators are presented that can remedy the negative side-effects of an evolution. A set of evolutions is complemented with appropriate remapping operations for the use in SPLs. Finally, an implementation of the evolution system in the SPL tool FeatureMapper is provided to demonstrate the capabilities of the presented approach when co-evolving models and feature mapping of an SPL.
4

Zhao, Zhiyuan P. "A One Pass Line-Following Algorithm for Linear Feature Extraction." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1364219987.

5

Pérez, Rocha Ana Laura. "Segmentation and Line Filling of 2D Shapes." Thèse, Université d'Ottawa / University of Ottawa, 2013. http://hdl.handle.net/10393/23676.

Abstract:
The evolution of technology in the textile industry has reached the design of embroidery patterns for machine embroidery. In order to create quality designs, the shapes to be embroidered need to be segmented into regions that define their different parts. One of the objectives of our research is to develop a method to segment the shapes automatically, thereby making the process faster and easier. Shape analysis is necessary to find a suitable method for this purpose; it includes the study of different ways to represent shapes. In this thesis we focus on shape representation through the skeleton. We make use of a shape's skeleton and its boundary, through the so-called feature transform, to decide how to segment a shape and where to place the segment boundaries. The direction of stitches is another important specification in an embroidery design. We develop a technique to select the stitch orientation by defining direction lines using the skeleton curves and information from the boundary. We compute the intersections of segment boundaries and direction lines with the shape boundary for the final definition of the direction line segments. We demonstrate that our shape segmentation technique and the automatic placement of direction lines produce sufficient constraints for automated embroidery designs. We show examples for lettering and basic shapes, as well as simple and complex logos.
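The two ingredients the abstract combines, the skeleton and the feature transform (which maps each interior point to its nearest boundary point), can be computed with standard tools. A minimal sketch, assuming a boolean shape mask; the thesis's segmentation and direction-line logic is not reproduced here.

    import numpy as np
    from scipy.ndimage import distance_transform_edt
    from skimage.morphology import skeletonize

    def skeleton_and_feature_transform(mask):
        """mask: boolean 2D array, True inside the shape."""
        skeleton = skeletonize(mask)
        # With return_indices=True, the Euclidean distance transform also
        # returns, for every pixel, the coordinates of its nearest background
        # (boundary) pixel -- a discrete feature transform.
        dist, idx = distance_transform_edt(mask, return_indices=True)
        return skeleton, dist, idx   # idx[0]: boundary rows, idx[1]: columns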
6

Ramos, Alves Vander. "Implementing software product line adoption strategies." Universidade Federal de Pernambuco, 2007. https://repositorio.ufpe.br/handle/123456789/2044.

Abstract:
Software Product Lines (SPLs) are a promising approach for developing a set of products focused on a market segment and built from a common set of artifacts. Potential benefits include large-scale reuse and significant productivity gains. A key associated problem, however, is the handling of adoption strategies, in which an organization decides to start an SPL from scratch, to bootstrap existing products into an SPL, or to evolve an SPL. In particular, at the implementation and feature model levels, development methods lack adequate support for SPL extraction and evolution. In this context, we present an original method providing concrete guidelines for extracting and evolving SPLs at the implementation and feature model levels, with reuse and safety. The method first bootstraps the SPL and then evolves it with a reactive approach. It relies on a collection of refactorings both in the implementation (aspect-oriented refactorings) and in the feature model. The method was evaluated in the highly variable domain of mobile games.
7

Niederhausen, Matthias. "Graphical product-line configuration of nesC-based sensor network applications using feature models." Thesis, Manhattan, Kan. : Kansas State University, 2008. http://hdl.handle.net/2097/938.

8

Shaw, Ryan Phillip. "Application of Subjective Logic to Vortex Core Line Extraction and Tracking from Unsteady Computational Fluid Dynamics Simulations." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/2989.

Abstract:
Presented here is a novel tool to extract and track believable vortex core lines from unsteady Computational Fluid Dynamics data sets using multiple feature extraction algorithms. Existing work explored the possibility of extracting features concurrently with a running simulation using intelligent software agents, combining the capabilities of multiple algorithms using subjective logic. This work modifies the steady-state approach to work with unsteady fluid dynamics and is designed to work within the Concurrent Agent-enabled Feature Extraction concept. Each agent's belief tuple is quantified using a predefined set of information; the information and functions necessary to set each component of each agent's belief tuple are given, along with an explanation of the methods for setting the components. The method is applied to the analysis of flow in a lid-driven cavity and flow around a cylinder, which highlight the strengths and weaknesses of the chosen algorithms and the potential for subjective logic to aid in understanding the resulting features. Feature tracking is successfully applied and is observed to have a significant impact on the opinion of the vortex core lines. In the lid-driven cavity data set, the unsteady feature extraction modifications are shown to affect the extraction results when vortex core lines move. The Sujudi-Haimes algorithm is shown to be more believable when extracting the main vortex core lines of the cavity simulation, while the Roth-Peikert algorithm succeeds in extracting the weaker vortex cores in the same simulation. Mesh type and time step are shown to have a significant effect on the method. In the curved wake of the cylinder data set, the Roth-Peikert algorithm more reliably detects vortex core lines which exist for a significant amount of time. The method was finally applied to a massive wind turbine simulation, where the importance of performing feature extraction in parallel is shown. The use of multiple extraction algorithms with subjective logic and feature tracking helps determine the expected probability that an extracted vortex core is believable. This approach may be applied to massive data sets, which will greatly reduce analysis time and data size and will aid in a greater understanding of complex fluid flows.
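The belief tuples mentioned in the abstract are subjective-logic opinions (belief, disbelief, uncertainty summing to one). A minimal sketch of how two agents' opinions about the same vortex core line could be fused, assuming Jøsang's standard consensus operator rather than the tool's exact fusion rule:

    from typing import NamedTuple

    class Opinion(NamedTuple):
        belief: float
        disbelief: float
        uncertainty: float  # belief + disbelief + uncertainty == 1

    def consensus(a: Opinion, b: Opinion) -> Opinion:
        """Fuse two independent opinions about the same vortex core line."""
        k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
        if k == 0:  # both opinions dogmatic (zero uncertainty): undefined
            raise ValueError("consensus undefined for two dogmatic opinions")
        return Opinion(
            (a.belief * b.uncertainty + b.belief * a.uncertainty) / k,
            (a.disbelief * b.uncertainty + b.disbelief * a.uncertainty) / k,
            (a.uncertainty * b.uncertainty) / k,
        )

    # e.g. fusing a Roth-Peikert opinion with a Sujudi-Haimes opinion:
    fused = consensus(Opinion(0.7, 0.1, 0.2), Opinion(0.4, 0.3, 0.3))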
9

Tajima, Johji, and Tatsuya Kobayashi. "Content-Adaptive Automatic Image Sharpening." IEEE, 2010. http://hdl.handle.net/2237/14477.

10

Thüm, Thomas [Verfasser], and Gunter [Akademischer Betreuer] Saake. "Product-line specification and verification with feature-oriented contracts / Thomas Thüm. Betreuer: Gunter Saake." Magdeburg : Universitätsbibliothek, 2015. http://d-nb.info/106915976X/34.

11

Oster, Sebastian [Verfasser], Andy [Akademischer Betreuer] Schürr, and Ursula [Akademischer Betreuer] Goltz. "Feature Model-based Software Product Line Testing / Sebastian Oster. Betreuer: Andy Schürr ; Ursula Goltz." Darmstadt : Universitäts- und Landesbibliothek Darmstadt, 2012. http://d-nb.info/1106113845/34.

12

Nieke, Michael [Verfasser], Ina [Akademischer Betreuer] Schaefer, and Bernhard [Akademischer Betreuer] Rumpe. "Consistent Feature-Model Driven Software Product Line Evolution / Michael Nieke ; Ina Schaefer, Bernhard Rumpe." Braunschweig : Technische Universität Braunschweig, 2021. http://d-nb.info/1229615598/34.

13

Wasell, Richard. "Automatisk detektering av diken i LiDAR-data." Thesis, Linköpings universitet, Datorseende, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-72066.

Abstract:
This Master's thesis investigates the possibility of automatically identifying ditches in airborne LiDAR data. The chosen approach first creates an elevation image from the LiDAR data. It then uses the result of a line detection to produce candidate ditches. The properties of the candidates are calculated through an analysis of the elevation profile of each individual candidate, where the elevation profiles are created from the original data. By filtering the candidates according to their calculated properties, maps of ditches conforming to user-specified limits are created and presented in a vector format that facilitates further use. The thesis describes how the algorithm is implemented and gives examples of results. After an analysis of the algorithm and proposals for improvements, the main conclusion is presented: automatic detection of ditches in LiDAR data is an achievable objective.
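The profile-based filtering step can be sketched as follows: sample the elevation across each candidate line and keep the candidate only if the centre lies sufficiently below the flanks. The depth and width parameters are illustrative user-specified limits, not the thesis's values.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def profile_across(dem, pt, normal, half_width=5.0, n=21):
        """Bilinear elevation profile perpendicular to a candidate line.
        dem: 2D elevation raster; pt: (row, col) on the line; normal: unit
        vector perpendicular to the line."""
        t = np.linspace(-half_width, half_width, n)
        rows = pt[0] + t * normal[0]
        cols = pt[1] + t * normal[1]
        return map_coordinates(dem, [rows, cols], order=1)

    def is_ditch(profile, min_depth=0.3):
        """Keep the candidate if the centre is at least min_depth below
        the average of the two flanks (a crude V-shape test)."""
        flanks = (profile[:3].mean() + profile[-3:].mean()) / 2.0
        return flanks - profile[len(profile) // 2] >= min_depth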
14

Raman, Pujita. "Speaker Identification and Verification Using Line Spectral Frequencies." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/52964.

Abstract:
State-of-the-art speaker identification and verification (SIV) systems provide near-perfect performance under clean conditions. However, their performance deteriorates in the presence of background noise. Many feature compensation, model compensation and signal enhancement techniques have been proposed to improve the noise-robustness of SIV systems. Most of these techniques require extensive training, are computationally expensive or make assumptions about the noise characteristics. There has not been much focus on analyzing the relative importance, or speaker-discriminative power, of different speech zones, particularly under noisy conditions. In this work, an automatic, text-independent speaker identification (SI) system and a speaker verification (SV) system are proposed using Line Spectral Frequency (LSF) features. The performance of the proposed SI and SV systems is evaluated under various types of background noise. A score-level fusion based technique is implemented to extract complementary information from static and dynamic LSF features, and the resulting SI and SV systems are found to be more robust under noisy conditions. In addition, we investigate the speaker-discriminative power of different speech zones such as vowels, non-vowels and transitions. Rapidly varying regions of speech such as consonant-vowel transitions are found to be most speaker-discriminative in high-SNR conditions. Steady, high-energy vowel regions are robust against noise and are hence most speaker-discriminative in low-SNR conditions. We show that selectively utilizing features from a combination of transition and steady vowel zones further improves the performance of the score-level fusion based SI and SV systems under noisy conditions.
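Line Spectral Frequencies are an alternative representation of the LPC polynomial A(z): it is split into a palindromic polynomial P(z) = A(z) + z^-(p+1) A(1/z) and an antipalindromic polynomial Q(z) = A(z) - z^-(p+1) A(1/z), whose roots lie on the unit circle, and the LSFs are the sorted root angles. A minimal per-frame sketch, assuming librosa only for the LPC fit and an illustrative order of 12:

    import numpy as np
    import librosa

    def lsf(frame, order=12):
        """Line spectral frequencies (radians) of one speech frame."""
        a = librosa.lpc(frame, order=order)      # [1, a1, ..., ap]
        a_ext = np.concatenate([a, [0.0]])
        p = a_ext + a_ext[::-1]                  # P(z): palindromic
        q = a_ext - a_ext[::-1]                  # Q(z): antipalindromic
        angles = np.concatenate([np.angle(np.roots(p)),
                                 np.angle(np.roots(q))])
        # keep upper-half-plane roots; the trivial root at pi (z = -1)
        # is the largest angle and is dropped by the final slice
        return np.sort(angles[angles > 0])[:order]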
15

Al-Muhtaseb, Husni A., Sabri A. Mahmoud, and Rami S. R. Qahwaji. "Recognition of off-line printed Arabic text using Hidden Markov Models." Elsevier, 2008. http://hdl.handle.net/10454/4105.

Abstract:
This paper describes a technique for automatic recognition of off-line printed Arabic text using Hidden Markov Models. In this work different sizes of overlapping and non-overlapping hierarchical windows are used to generate 16 features from each vertical sliding strip. Eight different Arabic fonts were used for testing (viz. Arial, Tahoma, Akhbar, Thuluth, Naskh, Simplified Arabic, Andalus, and Traditional Arabic). It was experimentally proven that different fonts have their highest recognition rates at different numbers of states (5 or 7) and codebook sizes (128 or 256). Arabic text is cursive, and each character may have up to four different shapes based on its location in a word. This research work considered each shape as a different class, resulting in a total of 126 classes (compared to 28 Arabic letters). The achieved average recognition rates were between 98.08% and 99.89% for the eight experimental fonts. The main contributions of this work are the novel hierarchical sliding window technique using only 16 features for each sliding window, considering each shape of Arabic characters as a separate class, bypassing the need for segmenting Arabic text, and its applicability to other languages.
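The sliding-strip feature extraction can be sketched as follows, with a hierarchy of horizontal bands (whole strip, halves, quarters, eighths) plus the vertical ink centroid giving 16 ink-density-style features per strip; the paper's exact hierarchical window layout differs, so this is illustrative only.

    import numpy as np

    def strip_features(line_img, strip_w=4, overlap=2):
        """line_img: binarized text line (ink == 1). Yields one vector per strip."""
        h, w = line_img.shape
        step = strip_w - overlap
        for x in range(0, w - strip_w + 1, step):
            strip = line_img[:, x:x + strip_w]
            feats = []
            for parts in (1, 2, 4, 8):                   # hierarchical windows
                for band in np.array_split(strip, parts, axis=0):
                    feats.append(band.mean())            # ink density per window
            ys = np.nonzero(strip)[0]
            feats.append(ys.mean() / h if ys.size else 0.5)  # vertical ink centroid
            yield np.asarray(feats)                      # 15 densities + 1 centroid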
16

Srestasathiern, Panu. "Line Based Estimation of Object Space Geometry and Camera Motion." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1345401748.

17

Dayibas, Orcun. "Feature Oriented Domain Specific Language For Dependency Injection In Dynamic Software Product Lines." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12611071/index.pdf.

Abstract:
Although Software Product Line Engineering (SPLE) defines many different processes at different abstraction levels, the common basis of SPL engineering processes is to analyze the commonality and variability of the product family. In this thesis, a new approach to configuring components, as building blocks of the architecture, according to requirements is proposed. The main objective of this approach is to support the domain design and application design processes in the SPL context. Configuring the products is made a semi-automatic operation by defining a Domain Specific Language (DSL) built on top of the notions of domain and feature-component binding models. To accomplish this goal, the dependencies of the components are extracted from the software using the dependency injection method, and these dependencies are made definable in the CASE tools developed in this work.
18

Liu, Chenguang. "Low level feature detection in SAR images." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAT015.

Abstract:
In this thesis we develop low-level feature detectors for Synthetic Aperture Radar (SAR) images to facilitate the joint use of SAR and optical data. Line segments and edges are very important low-level features that can be used for many applications such as image analysis, image registration and object detection. While many efficient low-level feature detectors exist for optical images, there are very few efficient line segment or edge detectors for SAR images, mostly because of the strong multiplicative noise. In this thesis we develop a generic line segment detector and an efficient edge detector for SAR images. The proposed line segment detector, named LSDSAR, is based on a Markovian a contrario model and the Helmholtz principle, where line segments are validated according to their meaningfulness. More specifically, a line segment is validated if its expected number of occurrences in a random image under the hypothesis of the Markovian a contrario model is small. In contrast to the usual a contrario approaches, the Markovian a contrario model allows strong filtering in the gradient computation step, since dependencies between the local orientations of neighbouring pixels are permitted thanks to the use of a first-order Markov chain. LSDSAR benefits from the accuracy and efficiency of the new definition of the background model: many true line segments in SAR images are detected while the number of false detections is controlled, and very little parameter tuning is required in practical applications. The second contribution of this thesis is a deep-learning-based edge detector for SAR images, whose contributions are twofold: 1) under the hypothesis that both optical and real SAR images can be divided into piecewise-constant areas, we propose to simulate a SAR dataset from an optical dataset; 2) we propose to train a classical CNN (convolutional neural network) edge detector, HED, directly on the gradient fields of images. By using an adequate method to compute the gradient, SAR images at test time have statistics similar to the training set as inputs to the network: the gradient distribution is the same for all homogeneous areas, and the gradient distribution across the boundary between two homogeneous areas depends only on the ratio of their mean intensity values. The proposed method, GRHED, significantly improves the state of the art, especially in very noisy cases such as 1-look images.
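The key property claimed for the gradient computation, namely that its statistics across two homogeneous areas depend only on the ratio of their mean intensities, is characteristic of ratio-of-averages operators. A minimal sketch of such a gradient-by-ratio, offered as an illustration of the idea rather than GRHED's exact operator:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def gradient_by_ratio(img, radius=4):
        """Log-ratio gradient magnitude of a positive SAR amplitude image.
        The response is the log of the ratio of the mean intensities of two
        opposite half-windows, so it is invariant to the absolute level of
        the speckled signal. Window size is illustrative."""
        m = uniform_filter(img.astype(float), size=2 * radius + 1)
        eps = 1e-12
        gx = np.log(np.roll(m, -radius, axis=1) + eps) \
           - np.log(np.roll(m, radius, axis=1) + eps)
        gy = np.log(np.roll(m, -radius, axis=0) + eps) \
           - np.log(np.roll(m, radius, axis=0) + eps)
        return np.hypot(gx, gy)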
19

Malik, Zeeshan. "Towards on-line domain-independent big data learning : novel theories and applications." Thesis, University of Stirling, 2015. http://hdl.handle.net/1893/22591.

Abstract:
Feature extraction is an extremely important pre-processing step for pattern recognition and machine learning problems. This thesis highlights how one can best extract features from data in an exhaustively online and purely adaptive manner. The solution to this problem is given for both labeled and unlabeled datasets by presenting a number of novel on-line learning approaches. Specifically, the differential equation method for solving the generalized eigenvalue problem is used to derive a number of novel machine learning and feature extraction algorithms. The incremental eigen-solution method is used to derive a novel incremental extension of linear discriminant analysis (LDA), and the proposed incremental version is combined with an extreme learning machine (ELM), in which the ELM is used as a preprocessor before learning. In this first key contribution, the dynamic random expansion characteristic of the ELM is combined with the proposed incremental LDA technique and shown to offer a significant improvement in maximizing the discrimination between points in two different classes, while minimizing the distance within each class, in comparison with other standard state-of-the-art incremental and batch techniques. In the second contribution, the differential equation method for solving the generalized eigenvalue problem is used to derive a novel, purely incremental version of the slow feature analysis (SFA) algorithm, termed the generalized eigenvalue based slow feature analysis (GENEIGSFA) technique. Further, time series expansions by an echo state network (ESN) and radial basis functions (RBF) are used as a pre-processor before learning, and higher-order derivatives are used as a smoothing constraint on the output signal. Finally, an online extension of the generalized eigenvalue problem, derived from James Stone's criterion, is tested, evaluated and compared with the standard batch version of slow feature analysis to demonstrate its comparative effectiveness. In the third contribution, light-weight extensions of the statistical technique known as canonical correlation analysis (CCA), for both twinned and multiple data streams, are derived by the same method of solving the generalized eigenvalue problem. The proposed method is enhanced by maximizing the covariance between data streams while simultaneously maximizing the rate of change of variances within each data stream. A recurrent set of connections, as used by the ESN, is placed as a pre-processor between the inputs and the canonical projections in order to capture shared temporal information in two or more data streams. A solution to the problem of identifying a low-dimensional manifold in a high-dimensional data space is then presented in an incremental and adaptive manner. Finally, an online, locally optimized extension of Laplacian Eigenmaps is derived, termed the generalized incremental Laplacian Eigenmaps technique (GENILE). Apart from the benefits of the incremental nature of the proposed manifold-based dimensionality reduction technique, the projections produced by this method are shown, most of the time, to yield better classification accuracy than the standard batch versions of these techniques, on both artificial and real datasets.
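The generalized eigenvalue problem that recurs throughout the abstract is, for LDA, S_b w = λ S_w w over the between- and within-class scatter matrices. A minimal batch sketch using a dense solver, standing in for the incremental differential-equation solution the thesis develops:

    import numpy as np
    from scipy.linalg import eigh

    def lda_directions(X, y, k):
        """Top-k discriminant directions from data X (n, d) and labels y."""
        mu = X.mean(axis=0)
        Sw = np.zeros((X.shape[1], X.shape[1]))
        Sb = np.zeros_like(Sw)
        for c in np.unique(y):
            Xc = X[y == c]
            mc = Xc.mean(axis=0)
            Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
            d = (mc - mu).reshape(-1, 1)
            Sb += len(Xc) * (d @ d.T)              # between-class scatter
        # eigh solves the symmetric generalized problem Sb w = lambda Sw w;
        # a small ridge keeps Sw positive definite
        vals, vecs = eigh(Sb, Sw + 1e-8 * np.eye(len(Sw)))
        return vecs[:, np.argsort(vals)[::-1][:k]]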
20

Schroeter, Julia. "Feature-based configuration management of reconfigurable cloud applications." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-141415.

Abstract:
A recent trend in the software industry is to provide enterprise applications in the cloud that are accessible everywhere and on any device. As the market is highly competitive, customer orientation plays an important role. Companies have therefore started providing applications as a service, which are directly configurable by customers in an online self-service portal. However, customer configurations are usually deployed in separate application instances, so each instance is provisioned manually and must be maintained separately; due to the induced redundancy in software and hardware components, resources are not optimally utilized. A multi-tenant aware application architecture eliminates this redundancy, as a single application instance serves multiple customers renting the application. The combination of a configuration self-service portal with a multi-tenant aware application architecture allows customers to be served just-in-time by automating the deployment process. Furthermore, self-service portals improve application scalability in terms of functionality, as customers can adapt application configurations themselves according to their changing demands. However, the configurability of current multi-tenant aware applications is rather limited: solutions implementing variability are mainly developed for a single business case and cannot be directly transferred to other application scenarios. The goal of this thesis is to provide a generic framework for handling application variability and automating the configuration and reconfiguration processes essential for self-service portals, while exploiting the advantages of multi-tenancy. A promising way to achieve this goal is the application of software product line methods. In software product line research, feature models are in wide use to express the variability of software-intensive systems on an abstract level, as features are a common notion in software engineering and prominent in matching customer requirements against product functionality. This thesis introduces a framework for feature-based configuration management of reconfigurable cloud applications. The contribution is three-fold. First, a development strategy for flexible multi-tenant aware applications is proposed, capable of integrating customer configurations at application runtime. Second, a generic method for defining concern-specific configuration perspectives is contributed; perspectives can be tailored to certain application scopes and facilitate the handling of numerous configuration options. Third, a novel method is proposed to model and automate structured configuration processes that adapt to varying stakeholders and reduce configuration redundancies: configuration processes are modeled as workflows and adapted by applying rewrite rules triggered by stakeholder events. The applicability of the proposed concepts is evaluated in different case studies in industrial and academic contexts. In summary, the introduced framework for feature-based configuration management is a foundation for automating the configuration and reconfiguration processes of multi-tenant aware cloud applications, while enabling application scalability in terms of functionality.
21

Gómez, Llana Abel. "MODEL DRIVEN SOFTWARE PRODUCT LINE ENGINEERING: SYSTEM VARIABILITY VIEW AND PROCESS IMPLICATIONS." Doctoral thesis, Universitat Politècnica de València, 2012. http://hdl.handle.net/10251/15075.

Abstract:
Software Product Line Engineering (SPLE) is a software development technique that seeks to apply the principles of industrial manufacturing to the production of software applications: that is, a Software Product Line (SPL) is used to produce a family of products with common characteristics, whose members may nevertheless have differentiating features. Identifying these common and differentiating characteristics in advance maximizes reuse, reducing development time and cost. Describing these relationships with sufficient expressiveness thus becomes fundamental to success. Model Driven Engineering (MDE) has emerged in recent years as a paradigm for dealing effectively with software artifacts at a high level of abstraction. Thanks to this, SPLs can take great advantage of the standards and tools that have arisen within the MDE community. However, a good integration between SPLE and MDE has not yet been achieved, and as a consequence the mechanisms for variability management are not sufficiently expressive. It is therefore not possible to integrate variability efficiently into complex software development processes in which the different views of a system, model transformations and code generation play a fundamental role. This thesis presents MULTIPLE, a framework and a tool that aim to integrate the variability management mechanisms of SPLs precisely and efficiently into MDE processes. MULTIPLE provides domain-specific languages for specifying different views of software systems. Among these, special emphasis is placed on the variability view, since it is decisive for the specification of SPLs.
Gómez Llana, A. (2012). MODEL DRIVEN SOFTWARE PRODUCT LINE ENGINEERING: SYSTEM VARIABILITY VIEW AND PROCESS IMPLICATIONS [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/15075
22

Orhan, Umut. "A Knowledge Based Product Line For Semantic Modeling Of Web Service Families." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610305/index.pdf.

Abstract:
Some mechanisms to enable an effective transition from domain models to web service descriptions are developed. The introduced domain modeling support provides verification and correction on the customization part. An automated mapping mechanism from the domain model to web service ontologies is also developed. The proposed approach is based on Feature-Oriented Domain Analysis (FODA), Semantic Web technologies and the ebXML Business Process Specification Schema (ebBP). The major contributions of this work are the conceptualization of a feature model for web services and a novel approach for knowledge-based elicitation of domain-specific outcomes, in order to allow designing and deploying services better aligned with dynamically changing business goals, stakeholders' concerns and end-users' viewpoints. The main idea behind enabling a knowledge-based approach is to pursue automation and intelligence in reflecting business requirements into service descriptions via model transformations and automated reasoning. The proposed reference variability model encloses the domain-specific knowledge and is formalized using the Web Ontology Language (OWL). Adding formal semantics to feature models allows us to perform automated analysis over them, such as the verification of model customizations by exploiting rule-based automated reasoners. This research was motivated by the need to achieve productivity gains, maintainability and better alignment of business requirements with technical capabilities in engineering service-oriented applications and systems.
23

Huy, Nikkilä Sovanny, and Axel Kollberg. "Today's Space Weather in the Planetarium : visualization and feature extraction pipeline for astrophysical observation and simulation data." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165692.

Abstract:
This thesis describes the work of two students in collaboration with OpenSpace and the Community Coordinated Modeling Center (CCMC). The need expressed by both parties is a way to more accessibly visualize space weather data from the CCMC in OpenSpace. Firstly, space weather data is preprocessed for downloading and visualizing, a process that involves reducing the size of the data whilst keeping important features. Secondly, a pipeline is created for dynamically fetching the time-varying data from the web during runtime of OpenSpace; a sliding-window technique is employed to manage the downloading of the data. The results show a complete and working system for downloading data during runtime. Measurements comparing the performance of running the space weather visualizations with dynamic downloading versus running them locally show that the new system affects the frame time only marginally. The results also show a visualization of space weather data with enhanced features, which facilitates the exploration of the data and creates a more comprehensible representation of it. Data is originally kept in a tabular FITS file format, and file sizes after data reduction and feature extraction are approximately 3% of the original file sizes.
24

Lertchuwongsa, Noppon. "Color Lines, and Regions and Their Stereo Matching." Thesis, Paris 11, 2011. http://www.theses.fr/2011PA112309.

Abstract:
In computer vision, salient points are essential features for algorithms, and their performance depends on external parameters (e.g. the illuminant). Similarity measures are central to recognition. To ensure processing efficiency, the extracted features have to be stable enough, and the similarity measure needs to distinguish between them reliably. In this thesis, joint geometric and color features are studied: color lines and regions. They underpin the detection of a third feature, depth, which in turn helps to assess their quality. Color lines are extensions of classical level lines: the 3-D color space is mapped onto a 1-D scale especially designed to retain the chromatic information where it is suitable. Regions rely on the usual image connectivity, but in association with compactness in the two-dimensional histogram stemming from the dichromatic model. The homogeneity so defined grants a priori robustness against illumination variations by separating the body colors and splitting color from intensity. This homogeneity gives rise to two methods for extracting compact sets around histogram modes: a color-first analysis (an analytic extraction of color local extrema), and a joint color/space analysis (the same extraction, but controlled by region growing in the image). As for depth, three methods to compute the stereo disparity are proposed, and their results are compared against the ground truth: 1. color line matching based on a modified Hausdorff distance; 2. studying the shape of the disparity histogram between regions; 3. cooperation between pixel correlation and region matching. The robustness of the designed features is demonstrated on several stereo pairs. Future work deals with improving efficiency and accuracy.
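A common modification of the Hausdorff distance replaces the worst-case inner distance by the mean nearest-neighbour distance (Dubuisson-Jain style), which is far less sensitive to outliers. A minimal sketch of comparing two sampled lines this way; the thesis's exact modification may differ.

    import numpy as np
    from scipy.spatial.distance import cdist

    def modified_hausdorff(A, B):
        """A, B: (n, d) and (m, d) point sets sampled along two colour lines."""
        D = cdist(A, B)                      # pairwise Euclidean distances
        d_ab = D.min(axis=1).mean()          # mean nearest-neighbour distance A->B
        d_ba = D.min(axis=0).mean()          # and B->A
        return max(d_ab, d_ba)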
25

AlKhateeb, Jawad H. Y. "Word based off-line handwritten Arabic classification and recognition. Design of automatic recognition system for large vocabulary offline handwritten Arabic words using machine learning approaches." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4440.

Abstract:
The design of a machine which reads unconstrained words still remains an unsolved problem. For example, automatic interpretation of handwritten documents by a computer is still under research. Most systems attempt to segment words into letters and read words one character at a time. However, segmenting handwritten words is very difficult, so to avoid this, words are treated as a whole. This research investigates a number of features computed from whole words for the recognition of handwritten words in particular. Arabic text classification and recognition is a complicated process compared to Latin and Chinese text recognition systems, owing to the cursive nature of Arabic text. The work presented in this thesis proposes word-based recognition of handwritten Arabic scripts, divided into three main stages. The first stage is pre-processing, which applies efficient methods essential for automatic recognition of handwritten documents; techniques for detecting the baseline and segmenting words in handwritten Arabic text are presented. Connected components are then extracted, and the distances between different components are analyzed; the statistical distribution of these distances is obtained to determine an optimal threshold for word segmentation. The second stage is feature extraction, which makes use of the normalized images to extract features that are essential in recognizing the images. Various methods of feature extraction are implemented and examined. The third and final stage is classification. Various classifiers are used, such as the k-nearest-neighbour classifier (k-NN), a neural network classifier (NN), Hidden Markov Models (HMMs), and the Dynamic Bayesian Network (DBN). To test this concept, the particular pattern recognition problem studied is the classification of 32492 words using the IFN/ENIT database. The results were promising and very encouraging in terms of improved baseline detection and word segmentation for further recognition. Moreover, several feature subsets were examined, and a best recognition performance of 81.5% was achieved.
26

Almehio, Yasser. "A Cumulative Framework for Image Registration using Level-line Primitives." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112155.

Abstract:
In this thesis, we propose a new image registration method that relies on level-line primitives. Level lines are robust to contrast changes, and the proposed primitives inherit this robustness. Moreover, their abundance in the image is well suited to a cumulative matching process based on a multi-stage primitive election procedure. We propose a simple and efficient recursive tracking algorithm that extracts level lines as straight sets called "segments". Segments are then grouped under proximity constraints to construct primitives (Z, Y and W shapes), which are classified into categories according to their reliability. The primitive shapes are defined according to the transformation model, and the reliability classes parameterize the cumulative decision process. The voting is multi-stage, with a preliminary step of preference-list construction inspired by the stable marriage matching algorithm. Primitives vote at a given stage according to their reliability; each stage provides a coarse estimate of the transformation that the next stage refines. This process gradually eliminates the matching ambiguities caused by repetitive patterns in the images. A further contribution is the validation of geometric transformations from the simplest (similarity) to the most complex (projective), completing the path similarity, affine, projective. We show in this thesis how the choice of level-line primitives, in conjunction with a cumulative decision process, allows us to define a complete, generic and robust registration approach, providing different levels of precision and thus applicable in different contexts; it is tested and evaluated on several real image sequences including different types of transformations.
27

Zou, Rucong, and Hong Sun. "Building Extraction in 2D Imagery Using Hough Transform." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-17597.

Abstract:
The purpose of this paper is to find out whether the Hough transform is helpful for building extraction. The paper is written with the intention of coming up with a building extraction algorithm that captures building areas in images as accurately as possible and eliminates background interference, allowing the extracted contour area to be slightly larger than the building area itself. The core algorithm in this paper is based on the linear features of building edges and removes interference information from the background. In tests with the ZuBuD database in Matlab, buildings were detected successfully. According to this study, the Hough transform therefore works for extracting buildings from 2D images.
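The core step the study relies on, detecting the straight edges of buildings, maps directly onto the probabilistic Hough transform. A minimal OpenCV sketch follows, with illustrative thresholds that would need tuning on the ZuBuD images.

    import cv2
    import numpy as np

    def building_line_segments(image_bgr):
        """Detect straight edge segments as candidates for building contours."""
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                   threshold=80, minLineLength=40, maxLineGap=5)
        # each row is (x1, y1, x2, y2); None means no segments were found
        return [] if segments is None else segments.reshape(-1, 4)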
29

Al-Msie', Deen Ra'Fat. "Construction de lignes de produits logiciels par rétro-ingénierie de modèles de caractéristiques à partir de variantes de logiciels : l'approche REVPLINE." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20024/document.

Abstract:
The idea of the Software Product Line (SPL) approach is to manage a family of similar software products in a reuse-based way. Reuse avoids repetition, which helps reduce development and maintenance effort, shorten time-to-market and improve the overall quality of software. To migrate from existing software product variants to an SPL, one has to understand how they are similar and how they differ from one another. Companies often develop a set of software variants that share some features and differ in others to meet specific requirements. To exploit existing software variants and build a software product line, a feature model must be built as a first step: it is necessary to extract mandatory and optional features and to associate each feature with a name, and then to organize the mined and documented features into a feature model. In this context, this thesis proposes three contributions. First, we propose a new approach to mine features from the object-oriented source code of a set of software variants, based on Formal Concept Analysis, code dependencies and Latent Semantic Indexing. The novelty of our approach is that it exploits commonality and variability across software variants at the source-code level to run Information Retrieval methods efficiently. The second contribution documents the mined feature implementations based on Formal Concept Analysis, Latent Semantic Indexing and Relational Concept Analysis: a complementary approach gives names and descriptions to the mined feature implementations, based on those implementations and the use-case diagrams of the software variants, exploiting commonality and variability at the feature-implementation and use-case levels. In the third contribution, we propose an automatic approach to organize the mined, documented features into a feature model. Features are organized in a tree which highlights mandatory features, optional features and feature groups (and, or, xor groups); the feature model is completed with requires and mutual-exclusion constraints. We rely on Formal Concept Analysis and software configurations to mine a unique and consistent feature model. To validate our approach, we applied it to three case studies: the ArgoUML-SPL, Health complaint-SPL and Mobile media software product variants. These case studies are existing product lines; we treated several products from each line as if they were independent software variants, applied our approach, and evaluated its effectiveness by comparing the automatically extracted feature models with the original ones designed by the product lines' developers. The results of this evaluation validate the relevance and the performance of our proposal, as most of the features and their constraints were correctly identified.
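The target structure of the third contribution, a feature tree with mandatory and optional features, xor-groups, and cross-tree requires/excludes constraints, can be made concrete with a small sketch that checks whether a product configuration satisfies such a model. The encoding is illustrative and unrelated to REVPLINE's actual output format.

    from dataclasses import dataclass, field

    @dataclass
    class Feature:
        name: str
        mandatory: bool = False
        children: list = field(default_factory=list)
        xor_group: bool = False      # if True, exactly one selected child

    def valid(f, selected, parent_selected=True):
        """Check a configuration (a set of feature names) against the tree."""
        picked = f.name in selected
        if f.mandatory and parent_selected and not picked:
            return False             # mandatory child of a selected parent
        if picked and not parent_selected:
            return False             # feature selected without its parent
        if picked and f.xor_group:
            if sum(c.name in selected for c in f.children) != 1:
                return False         # xor-group: exactly one alternative
        return all(valid(c, selected, picked) for c in f.children)

    def check_constraints(selected, requires=(), excludes=()):
        """Cross-tree constraints: (a, b) pairs for a->b and not(a and b)."""
        ok_req = all(b in selected for a, b in requires if a in selected)
        ok_exc = all(not (a in selected and b in selected) for a, b in excludes)
        return ok_req and ok_exc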
APA, Harvard, Vancouver, ISO, and other styles
30

Munir, Qaiser, and Muhammad Shahid. "Software Product Line: Survey of Tools." Thesis, Linköping University, Department of Computer and Information Science, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-57888.

Full text
Abstract:

A software product line is a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission. The most attractive part of SPL is the development of a set of common assets, which includes requirements, design, test plans, test cases, reusable software components and other artifacts. Tools for the development of software product lines are few in number. The purpose of these tools is to support the creation, maintenance and use of different versions of product line artifacts. This requires a development environment that supports the management of assets and product development processes, and the sharing of assets among different products.

The objective of this master thesis is to investigate the available tools that support the Software Product Line process and its development phases. The work is carried out in two steps. In the first step, available Software Product Line tools are explored, a list of tools is prepared and managed, and a brief introduction to each tool is presented. The tools are classified into different categories according to their usage, and relations between the tools are established for better organization and understanding. In the second step, two tools, pure::variants and MetaEdit+, are selected and evaluated against quality factors such as usability, performance, reliability, memory consumption and capacity.

APA, Harvard, Vancouver, ISO, and other styles
31

Baum, David. "Variabilitätsextraktion aus makrobasierten Software-Generatoren." Master's thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-132719.

Full text
Abstract:
This thesis addresses the question of how variability information can be extracted from the source code of generators. For this purpose, a classification of variables was developed that, compared to existing approaches, enables a more precise identification of features. This classification also forms the basis for detecting feature interactions and cross-tree constraints. Furthermore, it is shown how the extracted information can be represented by feature models. Since these are based on the generator source code, they provide insight into the solution space of the domain: it becomes visible which implementation components a feature consists of and which relationships exist between features. However, an automatically generated feature model provides little insight into the problem space. In addition, a prototype was developed that automates the described extraction process.
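As a loose illustration of the variable-classification idea, assuming a C-preprocessor-style generator (the thesis addresses macro-based generators in general; the snippet and variable names below are invented), variables tested in conditional blocks can be read as feature candidates, and their co-occurrence in one condition hints at interactions or cross-tree constraints:

import re

# Toy generator fragment with preprocessor-style conditionals (invented).
source = """
#if defined(LOGGING)
  init_logger();
#endif
#if defined(LOGGING) && defined(REMOTE)
  send_logs();
#endif
"""

# Collect the variable sets tested in each conditional block.
conds = re.findall(r"#if\s+(.+)", source)
variables = [set(re.findall(r"defined\((\w+)\)", c)) for c in conds]

features = set().union(*variables)                 # feature candidates
interactions = [v for v in variables if len(v) > 1]  # co-tested variables
print("feature candidates:", sorted(features))
print("possible interactions:", interactions)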
APA, Harvard, Vancouver, ISO, and other styles
32

Eyal, Salman Hamzeh. "Recovering traceability links between artifacts of software variants in the context of software product line engineering." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20008/document.

Full text
Abstract:
Software Product Line Engineering (SPLE) is a software engineering discipline providing methods to promote systematic software reuse for developing quality products in a cost-efficient way with a short time-to-market. SPLE leverages what Software Product Line (SPL) members have in common and manages what varies among them. The idea behind SPLE is to build core assets consisting of all reusable software artifacts (such as requirements, architecture, components, etc.) that can be leveraged to develop the SPL's products in a prescribed way; creating these core assets is driven by the features provided by the SPL's products. Unfortunately, building SPL core assets from scratch is a costly task that requires a long time, which increases time-to-market and up-front investment. To reduce these costs, existing similar product variants developed by ad-hoc reuse should be re-engineered into an SPL. In this context, this thesis proposes three contributions. Firstly, we propose an approach to recover traceability links between features and their implementing source code in a collection of product variants. These links help in understanding the source code of product variants and facilitate the derivation of new products from the SPL's core assets. The approach is based on Information Retrieval (IR), and its novelty is that it exploits commonality and variability across software variants, at the source-code level, to run IR methods efficiently; in our experimental evaluation, it outperforms both the conventional application of IR and the most recent and relevant work on the subject. Secondly, based on the traceability links recovered in the first contribution, we propose an approach to feature-level Change Impact Analysis (CIA) for changes made to the source code of features of product variants. This supports change management from an SPL manager's point of view, allowing them to decide which change strategy should be executed, as there is often more than one change that can solve the same problem; the approach improves on the most recent work in the area by measuring to what degree a feature's implementation is impacted by a change. In our experimental evaluation, we demonstrated the effectiveness of our approach in terms of the metrics most used on the subject. Finally, again based on the recovered traceability links, we propose an approach that contributes to building the Software Product Line Architecture (SPLA) and linking its elements with features, focusing on the identification of mandatory components and of component variation points. We propose a set of algorithms to identify this commonality and variability across a given collection of product variants. According to the experimental evaluation, the efficiency of these algorithms depends mainly on the set of feature configurations available through the analyzed product variants.
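The IR core of the first contribution can be approximated with TF-IDF vectors and cosine similarity between feature descriptions and documents built from source-code units. This Python sketch uses scikit-learn on invented toy data and omits the commonality/variability exploitation that makes the thesis's approach effective:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical feature descriptions and "documents" built from the identifiers
# and comments of candidate source-code units (one document per class/method).
features = {
    "export pdf": "export document as pdf file",
    "print preview": "render print preview of current page",
}
code_units = {
    "PdfExporter.write": "pdf exporter write document stream file",
    "PreviewPane.render": "render preview pane page zoom",
}

corpus = list(features.values()) + list(code_units.values())
tfidf = TfidfVectorizer().fit(corpus)
f_vecs = tfidf.transform(features.values())
c_vecs = tfidf.transform(code_units.values())

# Rank code units per feature by cosine similarity; top hits become
# candidate traceability links.
sims = cosine_similarity(f_vecs, c_vecs)
for fname, row in zip(features, sims):
    best = max(zip(code_units, row), key=lambda p: p[1])
    print(f"{fname!r} -> {best[0]!r} (cosine {best[1]:.2f})")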
APA, Harvard, Vancouver, ISO, and other styles
33

Reinhartz-Berger, Iris, Kathrin Figl, and Øystein Haugen. "Investigating styles in variability modeling: Hierarchical vs. constrained styles." Elsevier, 2017. http://dx.doi.org/10.1016/j.infsof.2017.01.012.

Full text
Abstract:
Context: A common way to represent product lines is with variability modeling. Yet, there are different ways to extract and organize the relevant characteristics of variability. The comprehensibility of these models and the ease of creating them are important for the efficiency of any variability management approach. Objective: The goal of this paper is to investigate the comprehensibility of two common styles of organizing variability into models, hierarchical and constrained, where the dependencies between choices are specified either through the hierarchy of the model or as cross-cutting constraints, respectively. Method: We conducted a controlled experiment with a sample of 90 participants, students with prior training in modeling. Each participant was provided with two variability models specified in the Common Variability Language (CVL) and was asked to answer questions requiring interpretation of the provided models. The models included 9 to 20 nodes and 8 to 19 edges and used the main variability elements. After answering the questions, the participants were asked to create a model based on a textual description. Results: The results indicate that the hierarchical modeling style was easier to comprehend from a subjective point of view, but there was also a significant interaction effect with the degree of dependency in the models, which influenced objective comprehension. With respect to model creation, we found that the use of the constrained modeling style resulted in higher correctness of the variability models. Conclusions: Prior exposure to a modeling style and the degree of dependency among elements in the model determined which modeling style participants chose when creating a model from natural-language descriptions: participants tended to choose the hierarchical style for situations with high dependency and the constrained style for situations with low dependency. Furthermore, the degree of dependency also influences the comprehension of the variability model.
APA, Harvard, Vancouver, ISO, and other styles
34

Lee, Won Hee. "Bundle block adjustment using 3D natural cubic splines." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1211476222.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Kudelski, Dimitri. "Détection automatique d'objets géologiques à partir de données numériques d'affleurements 3D." Thesis, Aix-Marseille 1, 2011. http://www.theses.fr/2011AIX10213/document.

Full text
Abstract:
For a few years now, LIDAR technology has been employed in geology to capture outcrop geometries as point clouds and surfaces. The objective of this thesis is to develop techniques for processing these data automatically, in particular for interpreting geological structures on digital outcrops. This work is funded by ENI-Agip and fits into a larger research project devoted to methodologies for integrating outcrop data into geological models. The thesis focuses on the extraction of geological objects (i.e., fracture traces and stratigraphic limits), represented as polylines, from 3D digital outcrop models. The fundamental idea is to consider these geological entities as ravine lines (i.e., lines of high concavity), a problem that belongs to the broader domain of feature-line detection in computer graphics. We first propose a method based on third-order differential properties of the surface (i.e., curvature derivatives), into which a priori knowledge is integrated to constrain the detection so that only objects oriented in a particular direction are extracted. The rugosity of outcrop geometries and their erratic body shapes, however, expose several limits of this kind of approach. We therefore present two alternative algorithms to detect the targeted geological objects in a pertinent way. In contrast to existing feature-detection techniques, they rely on a vertex labeling computed from second-order differential properties, followed by a skeletonization step. In a final part, we validate all of the developed methods and present applications beyond the detection of geological structures to emphasize their genericity.
APA, Harvard, Vancouver, ISO, and other styles
36

Al-Muhtaseb, Husni A. "Arabic text recognition of printed manuscripts. Efficient recognition of off-line printed Arabic text using Hidden Markov Models, Bigram Statistical Language Model, and post-processing." Thesis, University of Bradford, 2010. http://hdl.handle.net/10454/4426.

Full text
Abstract:
Arabic text recognition has not been researched as thoroughly as that of other natural languages, yet the need for automatic Arabic text recognition is clear. In addition to traditional applications like postal address reading, cheque verification in banks, and office automation, there is large interest in searching scanned documents available on the internet and in searching handwritten manuscripts. Other possible applications are building digital libraries, recognizing text on digitized maps, recognizing vehicle license plates, serving as the first phase in text readers for visually impaired people, and understanding filled forms. This research work aims to contribute to the field of optical character recognition (OCR) of printed Arabic text by developing novel techniques and schemes that advance the performance of state-of-the-art Arabic OCR systems. Statistical and analytical analysis of Arabic text was carried out to estimate the occurrence probabilities of Arabic characters for use with Hidden Markov Models (HMMs) and other techniques. Since there is no publicly available dataset of printed Arabic text for recognition purposes, it was decided to create one. In addition, a minimal Arabic script is proposed, containing all basic shapes of Arabic letters and providing an efficient representation of Arabic text in terms of effort and time. Based on the success of HMMs in speech and text recognition, their use for the automatic recognition of Arabic text was investigated. The HMM technique adapts to noise and font variations and does not require word or character segmentation of Arabic line images. In the feature extraction phase, experiments were conducted with a number of different features to investigate their suitability for HMMs; finally, a novel set of features, which resulted in high recognition rates for different fonts, was selected. The developed techniques need no word or character segmentation before the classification phase, as segmentation is a byproduct of recognition. This seems to be the most advantageous aspect of using HMMs for Arabic text, since segmentation tends to produce errors that are usually propagated to the classification phase. Eight different Arabic fonts were used in the classification phase, with recognition rates ranging from 98% to 99.9% depending on the font; as far as we know, these are new results in their context. Moreover, the proposed technique can be used for other languages: a proof-of-concept experiment on English characters achieved a recognition rate of 98.9% using the same HMM setup, and the same techniques applied to Bangla characters achieved a recognition rate above 95%. The recognition of printed Arabic text with multiple fonts was also conducted using the same technique, with fonts categorized into different groups, and new high recognition results were achieved. To enhance the recognition rate further, a post-processing module was developed to correct the OCR output through character-level and word-level post-processing; its use increased the recognition accuracy by more than 1%.
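At the heart of HMM-based recognition is decoding the best hidden-state path from per-frame features of a text line. The following self-contained Viterbi sketch uses made-up transition and emission probabilities purely to illustrate the computation:

import numpy as np

# Minimal Viterbi decoding sketch: given per-frame observation likelihoods
# for a sliding window over a text-line image, recover the best state path.
# All model numbers here are invented for illustration.
log_trans = np.log(np.array([[0.7, 0.3],
                             [0.4, 0.6]]))      # state-transition matrix
log_emit = np.log(np.array([[0.9, 0.1],
                            [0.2, 0.8]]))       # P(observation | state)
obs = [0, 0, 1, 1, 1]                           # quantised frame features

n_states = log_trans.shape[0]
v = np.full((len(obs), n_states), -np.inf)      # best log-prob per (t, state)
back = np.zeros((len(obs), n_states), dtype=int)
v[0] = np.log(1.0 / n_states) + log_emit[:, obs[0]]
for t in range(1, len(obs)):
    for s in range(n_states):
        scores = v[t - 1] + log_trans[:, s]
        back[t, s] = int(np.argmax(scores))
        v[t, s] = scores[back[t, s]] + log_emit[s, obs[t]]

# Backtrack from the best final state.
path = [int(np.argmax(v[-1]))]
for t in range(len(obs) - 1, 0, -1):
    path.append(back[t, path[-1]])
print("best state path:", path[::-1])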
King Fahd University of Petroleum and Minerals (KFUPM)
APA, Harvard, Vancouver, ISO, and other styles
38

Costa, Gabriella Castro Barbosa. "Uma abordagem para linha de produtos de software científico baseada em ontologia e workflow." Universidade Federal de Juiz de Fora (UFJF), 2013. https://repositorio.ufjf.br/jspui/handle/ufjf/4787.

Full text
Abstract:
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
A way to improve the reusability and maintainability of a family of software products is through the Software Product Line (SPL) approach. In some situations, such as scientific applications for a given area, it is advantageous to develop a collection of related software products using an SPL approach. Scientific Software Product Lines (SSPLs) differ from ordinary Software Product Lines in that an SSPL uses an abstract scientific workflow model. This workflow is defined according to the scientific domain and, using this abstract workflow model, the SSPL's products are instantiated. Analyzing the difficulties of specifying scientific experiments, and considering the need to compose scientific applications for their implementation, more appropriate semantic support for the domain analysis phase is necessary. Therefore, this work proposes an approach based on the combination of feature models and ontologies, named PL-Science, to support the specification and conduction of scientific experiments. The PL-Science approach, which considers the context of SSPLs, aims to assist scientists in defining a scientific experiment by specifying a workflow that encompasses the scientific applications of a given experiment. Using SPL concepts, scientists can reuse the models that specify the scientific product line and make decisions according to their needs. This work also emphasizes the use of ontologies to facilitate the application of SPL to scientific domains: by using an ontology as a domain model, additional information can be provided and more semantics added in the context of Scientific Software Product Lines.
APA, Harvard, Vancouver, ISO, and other styles
39

Seidl, Christoph. "Integrated Management of Variability in Space and Time in Software Families." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-218036.

Full text
Abstract:
Software Product Lines (SPLs) and Software Ecosystems (SECOs) are approaches to capturing families of closely related software systems in terms of common and variable functionality (variability in space). SPLs and especially SECOs are subject to software evolution to adapt to new or changed requirements resulting in different versions of the software family and its variable assets (variability in time). Both dimensions may be interconnected (e.g., through version incompatibilities) and, thus, have to be handled simultaneously as not all customers upgrade their respective products immediately or completely. However, there currently is no integrated approach allowing variant derivation of features in different version combinations. In this thesis, remedy is provided in the form of an integrated approach making contributions in three areas: (1) As variability model, Hyper-Feature Models (HFMs) and a version-aware constraint language are introduced to conceptually capture variability in time as features and feature versions. (2) As variability realization mechanism, delta modeling is extended for variability in time, and a language creation infrastructure is provided to devise suitable delta languages. (3) For the variant derivation procedure, an automatic version selection mechanism is presented as well as a procedure to derive large parts of the application order for delta modules from the structure of the HFM. The presented integrated approach enables derivation of concrete software systems from an SPL or a SECO where both features and feature versions may be configured.
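The automatic version-selection mechanism can be pictured as choosing, for each selected feature, the newest version compatible with version-range constraints. The following sketch uses an invented data model and constraint format, not the thesis's Hyper-Feature Model notation:

# Sketch of automatic version selection for features carrying versions:
# pick the newest version of each selected feature that satisfies simple
# version-range constraints. Data model and constraints are illustrative.
features = {
    "Persistence": ["1.0", "1.1", "2.0"],
    "Encryption":  ["1.0", "1.2"],
}
# "Encryption needs Persistence >= 1.1 and < 2.0" as a hypothetical constraint
constraints = [("Encryption", "Persistence", "1.1", "2.0")]

def key(v):
    """Order versions numerically, so that e.g. "1.10" > "1.2"."""
    return tuple(int(p) for p in v.split("."))

def select(selected):
    chosen = {f: max(features[f], key=key) for f in selected}
    for (src, tgt, lo, hi) in constraints:
        if src in chosen and tgt in chosen:
            ok = [v for v in features[tgt] if key(lo) <= key(v) < key(hi)]
            if not ok:
                raise ValueError(f"no version of {tgt} satisfies {src}'s range")
            chosen[tgt] = max(ok, key=key)
    return chosen

print(select({"Persistence", "Encryption"}))   # Persistence pinned to 1.1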
APA, Harvard, Vancouver, ISO, and other styles
40

Warsop, Thomas E. "Three-dimensional scene recovery for measuring sighting distances of rail track assets from monocular forward facing videos." Thesis, Loughborough University, 2011. https://dspace.lboro.ac.uk/2134/8994.

Full text
Abstract:
Rail track asset sighting distance must be checked regularly to ensure the continued and safe operation of rolling stock. Methods currently used to check asset line-of-sight involve manual labour or laser systems. Video cameras and computer vision techniques provide one possible route to cheaper, automated systems. Three categories of computer vision method are identified as candidates: two-dimensional object recognition, two-dimensional object tracking and three-dimensional scene recovery. However, the presented experimentation shows that recognition and tracking methods produce less accurate asset line-of-sight results as asset-camera distance increases. Regarding three-dimensional scene recovery, evidence is presented suggesting a relationship between image features and recovered scene information. A novel framework which learns these relationships is proposed: learnt relationships from recovered image features probabilistically limit the search space of future features, improving efficiency. This framework is applied to several scene recovery methods and is shown (on average) to decrease computation by two-thirds for a possible small decrease in the accuracy of recovered scenes. Asset line-of-sight results computed from recovered three-dimensional terrain data are shown to be more accurate than those of the two-dimensional methods and are not affected by increasing asset-camera distance. Finally, the analysis of terrain in terms of its effect on asset line-of-sight is considered. Terrain elements, segmented using semantic information, are ranked with a metric combining a minimum line-of-sight blocking distance and the growth required to achieve this minimum distance. Since this ranking measure is relative, it is shown how an approximation of the terrain data can be applied, decreasing computation time. Further efficiency gains are found by decomposing the problem into a set of two-dimensional problems and applying binary search techniques. In combination, the research elements presented in this thesis provide efficient methods for automatically analysing asset line-of-sight, and the impact of the surrounding terrain on it, from captured monocular video.
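A basic line-of-sight test over recovered terrain, the building block behind the blocking-distance metric, can be sketched as sampling terrain heights along the sight line. The data and sampling scheme below are illustrative only:

import numpy as np

# Sketch of a line-of-sight test over recovered terrain: sample heights along
# the sight line and check whether the terrain rises above it. Toy 2D profile.
def visible(terrain, eye, target, samples=100):
    """eye/target are (x, height); terrain is an array of (x, ground height)."""
    xs = np.linspace(eye[0], target[0], samples)
    ground = np.interp(xs, terrain[:, 0], terrain[:, 1])
    sight = np.linspace(eye[1], target[1], samples)
    return bool(np.all(ground <= sight))

terrain = np.array([[0.0, 1.5], [50.0, 1.0], [100.0, 4.0], [200.0, 1.2]])
print(visible(terrain, eye=(0.0, 2.0), target=(200.0, 2.0)))   # blocked: False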
APA, Harvard, Vancouver, ISO, and other styles
41

Tambe-Ebot, Mathias Ashu Tako. "Proposing a Theoretical GIS Model for Landslides Analysis : The Case of Mount Cameroon." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-65899.

Full text
Abstract:
This study presents a theoretical GIS model to investigate the relative impacts of the geomorphic and environmental factors that govern the occurrence of landslides on the slopes of Mount Cameroon and its surrounding areas. The study area is located along the Cameroon Volcanic Line (CVL), a major structural feature that originates in the south Atlantic and continues into the continental landmass. The fairly frequent seismic activity, the geologic character, the humid tropical climate and the high human pressure on hill slopes are the major factors behind the occurrence of landslides on Mount Cameroon. This paper therefore underscores the necessity of in-depth follow-up studies concerned with landslide prevention and management, based on sufficiently reliable field methods in landform geomorphology and interpretation. As much remains to be done to acquire data on the structural and surface geology, hydrology, geomorphic processes and physiography of Mount Cameroon, it is currently difficult to apply suitable GIS methods for identifying and delineating the landslide-prone areas. In addition, the deployment of environmental surface-monitoring instruments will not be meaningful without a clear indication of which areas are a cause for concern, given that slope-stability monitoring and rehabilitation efforts become possible only after appropriate problem-area identification. Consequently, based on the writer's previous work in the Mount Cameroon area and the available related literature, a GIS methodology is proposed that demonstrates how the impact of individual or combined site-specific geomorphologic factors on landslide occurrence could be assessed. Considering that digital data may not be readily available, a procedure for the creation of data and the analysis of themes is proposed and illustrated. The factor-analysis approach to landslide assessment may be cheaper and easier to employ in Mount Cameroon and in similar problem regions in developing countries, given the limited financial resources and the scarce expertise in GIS technology and applications. The study recommends a later practical implementation once adequate resources are available.
APA, Harvard, Vancouver, ISO, and other styles
42

Püschel, Georg, Christoph Seidl, Thomas Schlegel, and Uwe Aßmann. "Using Variability Management in Mobile Application Test Modeling." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-143917.

Full text
Abstract:
Mobile applications are developed to run on fast-evolving platforms, such as Android or iOS. The respective mobile devices are heterogeneous in hardware (e.g., sensors, displays, communication interfaces) and software, especially operating system functions. Software vendors cope with platform evolution and various hardware configurations by abstracting from these variable assets. However, they cannot be sure that their assumptions about the inner conformance of all device parts hold and that the application runs reliably on each of them; in consequence, comprehensive testing is required. In testing, variability becomes tedious due to the large number of test cases required to validate behavior on all possible device configurations. In this paper, we provide a remedy for this problem by combining model-based testing with variability concepts from Software Product Line engineering. For this purpose, we use feature-based test modeling to generate test cases from variable operational models for individual application configurations and versions. Furthermore, we illustrate our concepts using the commercial mobile application "runtastic" as an example.
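Feature-based test modeling can be sketched as enumerating device or application configurations from optional features and pruning those that violate cross-tree constraints. The feature names and constraint below are invented:

from itertools import product

# Sketch: derive test configurations from a tiny feature model of a mobile
# app; optional features toggle on/off, a constraint prunes invalid combos.
optional = ["gps_tracking", "heart_rate", "offline_maps"]

def valid(cfg):
    # hypothetical cross-tree constraint: offline maps require GPS tracking
    return not (cfg["offline_maps"] and not cfg["gps_tracking"])

configs = [dict(zip(optional, bits))
           for bits in product([False, True], repeat=len(optional))]
test_configs = [c for c in configs if valid(c)]
print(f"{len(test_configs)} of {len(configs)} configurations to test")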
APA, Harvard, Vancouver, ISO, and other styles
43

Silva, Flayson Potenciano e. "Abordagem baseada em metamodelos para a representação e modelagem de características em linhas de produto de software dinâmicas." Universidade Federal de Goiás, 2016. http://repositorio.bc.ufg.br/tede/handle/tede/6231.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This dissertation presents a requirements representation approach for Dynamic Software Product Lines (DSPLs). DSPLs are oriented towards the design of adaptive applications, and each requirement is represented as a feature. Traditionally, features in a Software Product Line (SPL) are represented by a Feature Model (FM); such a model, however, does not natively support the representation of dynamic features. This dissertation proposes an extension to the FM that adds a representation for dynamic features, so that the model gains expressivity regarding context-change conditions and the application itself. To this end, a metamodel based on the Ecore meta-metamodel has been developed to enable the definition both of Dynamic Feature Models (the proposed FM extension) and of Dynamic Feature Configurations (DFCs), the latter used to describe the possible configurations of products at runtime. In addition to the representation for dynamic features and the metamodel, this dissertation contributes a tool that interprets the proposed metamodel and supports the design of Dynamic Feature Models. Simulations involving dynamic feature state changes have been carried out, considering scenarios of a ubiquitous monitoring application for homecare patients.
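The notion of a dynamic feature can be sketched as a feature guarded by an activation condition over the current context. The following toy reconfiguration uses invented feature names and context keys, not the metamodel of the dissertation:

# Sketch of context-driven (re)configuration of dynamic features: each
# dynamic feature carries an activation condition over the current context.
dynamic_features = {
    "fall_detection":  lambda ctx: ctx["patient_at_home"],
    "remote_alerting": lambda ctx: not ctx["caregiver_present"],
}

def reconfigure(ctx, base=frozenset({"vital_sign_monitoring"})):
    """Return the feature set active under the given context state."""
    active = set(base)
    active |= {f for f, cond in dynamic_features.items() if cond(ctx)}
    return active

print(sorted(reconfigure({"patient_at_home": True, "caregiver_present": True})))
print(sorted(reconfigure({"patient_at_home": True, "caregiver_present": False})))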
APA, Harvard, Vancouver, ISO, and other styles
44

Sun, Zhibin. "Application of artificial neural networks in early detection of Mastitis from improved data collected on-line by robotic milking stations." Lincoln University, 2008. http://hdl.handle.net/10182/665.

Full text
Abstract:
Two types of artificial neural networks, the Multilayer Perceptron (MLP) and the Self-Organizing Feature Map (SOM), were employed to detect mastitis in robotic milking stations, using preprocessed data on electrical conductivity and milk yield. The SOM was developed to classify health status into three categories: healthy, moderately ill and severely ill. The clustering results were successfully evaluated and validated using statistical techniques such as K-means clustering, ANOVA and the Least Significant Difference test. The results show that the SOM could be used in robotic milking stations as a detection model for mastitis. For the MLP models, a new mastitis definition based on higher electrical conductivity and lower quarter yield was created, and Principal Component Analysis (PCA) was adopted to address the multicollinearity present in the data. Four MLPs were developed on four combined datasets, and the results showed that the PCA-based MLP model is superior to the non-PCA-based models in many respects, such as lower complexity and higher predictive accuracy. The overall correct classification rate (CCR), sensitivity and specificity of the model were 90.74%, 86.90% and 91.36%, respectively. We conclude that the PCA-based model developed here can improve the accuracy of mastitis prediction by robotic milking stations.
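The PCA-then-MLP pipeline can be reproduced in outline with scikit-learn. The data below is synthetic with deliberately collinear columns, so the numbers mean nothing beyond illustrating the mechanics:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Sketch of the PCA-then-MLP idea: decorrelate collinear inputs (such as
# electrical conductivity and yield per quarter) before classification.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, 4:] = X[:, :4] + 0.05 * rng.normal(size=(200, 4))   # built-in collinearity
y = (X[:, 0] + X[:, 5] > 0).astype(int)                  # toy "mastitis" label

model = make_pipeline(PCA(n_components=4),
                      MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                    random_state=0))
model.fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))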
APA, Harvard, Vancouver, ISO, and other styles
45

Rhodenizer, Mark Russel. "Automatic extraction of features from line drawings." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq22385.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ghabach, Eddy. "Prise en charge du « copie et appropriation » dans les lignes de produits logiciels." Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4056/document.

Full text
Abstract:
A Software Product Line (SPL) manages the commonalities and variability of a family of related software products. This approach is characterized by systematic reuse that reduces development cost and time-to-market and increases software quality. However, building an SPL requires an expensive initial investment. Organizations that cannot afford such an up-front investment therefore tend to develop a family of software products using simple and intuitive practices. Clone-and-own (C&O) is an approach widely adopted by software developers to construct new product variants from existing ones; however, the efficiency of this practice degrades in proportion to the growth of the product family, which becomes difficult to manage. In this dissertation, we propose a hybrid approach that uses both SPL and C&O to develop and evolve a family of software products. An automatic mechanism for identifying the correspondences between the features of the products and the software artifacts allows the migration of product variants developed with C&O into an SPL. The originality of this work is then to support the derivation of new products by proposing different scenarios of C&O operations to perform in order to derive a new product from the required features. The developer can narrow down these possibilities by expressing preferences (e.g., on products and artifacts) and by using the proposed cost estimations of the operations; the new products built this way are then easily integrated into the SPL. We realized our approach by developing SUCCEED, a framework for SUpporting Clone-and-own with Cost-EstimatEd Derivation, and validated it on a case study of families of web portals.
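The cost-estimated derivation step can be sketched as scoring alternative clone-and-own scenarios with per-operation costs and picking the cheapest. The operation kinds and unit costs below are placeholders, not SUCCEED's actual cost model:

# Sketch of cost-estimated derivation: score alternative clone-and-own
# scenarios that all yield the required features, and pick the cheapest.
OP_COST = {"copy_variant": 1.0, "add_feature": 2.0, "remove_feature": 0.5}

scenarios = {
    "from portalA": [("copy_variant", "portalA"), ("add_feature", "search")],
    "from portalB": [("copy_variant", "portalB"),
                     ("remove_feature", "ads"), ("add_feature", "search")],
}

def cost(ops):
    """Total estimated cost of a sequence of C&O operations."""
    return sum(OP_COST[kind] for kind, _ in ops)

best = min(scenarios, key=lambda name: cost(scenarios[name]))
print(best, "-> cost", cost(scenarios[best]))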
APA, Harvard, Vancouver, ISO, and other styles
47

Sochos, Periklis. "The feature architecture mapping method for feature oriented development of software product lines." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=985281928.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Lee, Young-ran. "Pose estimation of line cameras using linear features /." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486457871786059.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Nilsson, Roland. "Statistical Feature Selection : With Applications in Life Science." Doctoral thesis, Linköping : Department of Physcis, Chemistry and Biology, Linköping University, 2007. http://www.bibl.liu.se/liupubl/disp/disp2007/tek1090s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

VARELA, Jean Poul. "Usando contextos e requisitos não-funcionais para configurar modelos de objetivos, modelos de features e cenários para linhas de produtos de software." Universidade Federal de Pernambuco, 2015. https://repositorio.ufpe.br/handle/123456789/16322.

Full text
Abstract:
FACEPE
GS2SPL (Goals and Scenarios to Software Product Lines) is a process for systematically obtaining the feature model and the use case scenario specifications of a Software Product Line (SPL) from goal models. Moreover, this process allows configuring specific applications of an SPL based on the fulfillment of non-functional requirements (NFRs). However, this configuration is performed without considering the state of the context in which the system will be deployed, which is a limitation because the resulting configuration may not meet the stakeholders' needs. On the other hand, E-SPL (Early Software Product Line) is a process that allows configuring a product while maximizing the fulfillment of NFRs and taking the context state into account. To overcome the limitation of GS2SPL, this work proposes an extension of the GS2SPL process that incorporates the configuration activity of E-SPL. The new process, called GSC2SPL (Goals, Scenarios and Contexts to Software Product Lines), allows obtaining the feature model and use case scenarios from contextual goal models, and configuring these requirements artifacts based on context information while maximizing the fulfillment of non-functional requirements. The process is supported by the GCL-Tool (Goal and Context for Product Line Tool) and was applied to the specification of two SPLs: Media@ and Smart Home.
APA, Harvard, Vancouver, ISO, and other styles