Dissertations / Theses on the topic 'Taxonomy information'

Consult the top 50 dissertations / theses for your research on the topic 'Taxonomy information.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Chang, Dempsey H. "A Gestalt-Taxonomy for Designing Multimodal Information Displays." University of Canberra. Arts & Design, 2007. http://erl.canberra.edu.au./public/adt-AUC20081203.123314.

Abstract:
The theory of Gestalt was proposed in the nineteenth century to explain and predict the way that people perceptually group visual elements, and it has been used to develop guidelines for designing visual computer interfaces. In this thesis we seek to extend the use of Gestalt principles to the design of haptic and visual-haptic displays. The thesis begins with a survey of Gestalt research into visual, auditory and haptic perception. From this survey the five most commonly found principles are identified as figure-ground, continuation, closure, similarity and proximity. This thesis examines the proposition that these five principles can be applied to the design of haptic interfaces. Four experiments investigate whether the Gestalt principles of figure-ground, continuation, closure, similarity and proximity are applicable in the same way when people group elements either through their visual (by colour) or haptic (by texture) sense. The results indicate significant correspondence between visual and haptic grouping. A set of design guidelines for haptic displays is developed from the experiments. This allows us to use the Gestalt principles to organise a Gestalt-Taxonomy of specific guidelines for designing haptic displays. The Gestalt-Taxonomy has been used to develop new haptic design guidelines for information displays.
2

Körlinge, Max. "Haxonomy : A Taxonomy for Web Hacking." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254639.

Abstract:
This study aims to show that the information present in public vulnerability reports from bug bounty programs can be utilized to provide aid for individual security researchers when performing their research. This is done here by creating a taxonomy based on the attack surfaces on a website that were used by the author of a report when discovering a vulnerability. Reports are then indexed according to this taxonomy together with the discovered vulnerability, to provide statistics on which vulnerabilities are most commonly found on what attack surfaces. The taxonomy and the indexed reports, referred to as the Haxonomy, are then also used as the basis for a machine learning algorithm which is trained to provide guidance to bug bounty hunters. It is concluded that this proof-of-concept, if developed fully, can be used to improve the success rate of individual security researchers operating on bug bounty platforms.
The purpose of this study is to show that the information in public vulnerability reports from so-called bug bounty programs can be used to help individuals carry out better vulnerability testing. This is done here by creating a taxonomy based on which attack surfaces on a website the author of such a report used to discover the vulnerability. Vulnerability reports are then indexed according to this taxonomy in order to produce, together with the vulnerabilities discovered, statistics on which vulnerabilities are most likely to be found via which attack surfaces. The taxonomy and the indexed reports, together referred to as the Haxonomy, are then also used as material for training a machine learning algorithm that can assist in vulnerability testing. It is concluded that this proof of concept can be developed further and used to help security testers find more vulnerabilities in the future.
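
As a rough illustration of the statistic at the heart of this work (which vulnerability classes are most often found via which attack surfaces), the sketch below indexes a handful of made-up reports and ranks vulnerabilities per surface; the surface and vulnerability labels are hypothetical, not the thesis's taxonomy.

```python
from collections import Counter, defaultdict

# Hypothetical indexed reports as (attack surface, vulnerability class)
# pairs; the labels are illustrative, not taken from the Haxonomy.
reports = [
    ("URL parameter", "XSS"),
    ("URL parameter", "SQL injection"),
    ("file upload", "RCE"),
    ("URL parameter", "XSS"),
    ("HTTP header", "request smuggling"),
    ("file upload", "XSS"),
]

by_surface = defaultdict(Counter)
for surface, vuln in reports:
    by_surface[surface][vuln] += 1

# For each attack surface, rank vulnerability classes by observed
# frequency -- the kind of statistic a bug bounty hunter could use
# to prioritise what to test on a given surface.
for surface, counts in by_surface.items():
    total = sum(counts.values())
    ranked = ", ".join(f"{v} ({n / total:.0%})" for v, n in counts.most_common())
    print(f"{surface}: {ranked}")
```
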
3

Karresand, Martin. "A Proposed Taxonomy of Software Weapons." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1512.

Abstract:

The terms and classification schemes used in the computer security field today are not standardised. Thus the field is hard to take in, there is a risk of misunderstandings, and there is a risk that scientific work is being hampered.

Therefore this report presents a proposal for a taxonomy of software-based IT weapons. After an account of the theories governing the formation of a taxonomy, and a presentation of the requisites, seven taxonomies from different parts of the computer security field are evaluated. Then the proposed new taxonomy is introduced and the inclusion of each of the 15 categories is motivated and discussed in separate sections. Each section also contains a part briefly outlining the possible countermeasures to be used against weapons with that specific characteristic.

The final part of the report contains a discussion of the general defences against software weapons, together with a presentation of some open issues regarding the taxonomy. There is also a part discussing possible uses for the taxonomy. Finally the report is summarised.

4

Li, Danhui. "The application of mathematical taxonomy to automatic speech recognition." Thesis, University of Huddersfield, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.334681.

5

Joseph, Daniel. "Linking information resources with automatic semantic extraction." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/linking-information-resources-with-automatic-semantic-extraction(ada2db36-4366-441a-a0a9-d76324a77e2c).html.

Abstract:
Knowledge is a critical dimension in the problem solving processes of human intelligence. Consequently, enabling intelligent systems to provide advanced services requires that their artificial intelligence routines have access to knowledge of relevant domains. Ontologies are often utilised as the formal conceptualisation of domains, in that they identify and model the concepts and relationships of the targeted domain. However complexities inherent in ontology development and maintenance have limited their availability. Separate from the conceptualisation component, domain knowledge also encompasses the concept membership of object instances within the domain. The need to capture both the domain model and the current state of instances within the domain has motivated the import of Formal Concept Analysis into intelligent systems research. Formal Concept Analysis, which provides a simplified model of a domain, has the advantage that it not only defines concepts in terms of their attribute descriptions but also simultaneously ascribes object instances to their appropriate concepts. Nonetheless, a significant drawback of Formal Concept Analysis is that when applied to a large dataset, the lattice with which it models a domain is often composed of a copious number of concepts, many of which are arguably unnecessary or invalid. In this research a novel measure is introduced which assigns a relevance value to concepts in the lattice. This measure is termed the Collapse Index and is based on the minimum number of object instances that need be removed from a domain in order for a concept to be expunged from the lattice. The mathematics that underpins its origin and behaviour is detailed in the thesis, showing that if the relevance of a concept is defined by the Collapse Index: a concept will eventually lose relevance if one of its immediate subconcepts increasingly acquires object instance support; and a concept has its highest relevance when its immediate subconcepts have equal or near equal object instance support. In addition, experimental evaluation is provided where the Collapse Index demonstrated comparable or better performance than the current prominent alternatives in: being consistent across samples; the ability to recall concepts in noisy lattices; and efficiency of calculation. It is also demonstrated that the Collapse Index affords concepts with low object instance support the opportunity to have a higher relevance than those of high support. The second contribution to knowledge is an approach to semantic extraction from a dataset in which the Collapse Index is included as a method of selecting concepts for inclusion in a final concept hierarchy. The utility of the approach is demonstrated by reviewing its inclusion in the implementation of a recommender system. This recommender system serves as the final contribution, featuring a unique design where lattices represent user profiles and concepts in these profiles are pruned using the Collapse Index. Results showed that the pruning of profile lattices enabled by the Collapse Index improved the success levels of movie recommendations if the appropriate thresholds are set.
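
A minimal sketch of one reading of the Collapse Index, consistent with the behaviour described above: a concept is expunged once its extent can be reduced to that of an immediate subconcept, so the index is the smallest number of objects whose removal achieves this. The exact formulation is in the thesis; the sets below are hypothetical.

```python
def collapse_index(parent_extent, subconcept_extents):
    # Simplified reading: the fewest object instances to remove before the
    # parent's extent coincides with one immediate subconcept's extent,
    # at which point the parent concept is expunged from the lattice.
    # This is an illustrative interpretation, not the thesis's exact definition.
    if not subconcept_extents:
        return len(parent_extent)
    return min(len(parent_extent - sub) for sub in subconcept_extents)

# A concept whose support is split evenly between two subconcepts scores
# high (relevant); one dominated by a single subconcept scores low,
# matching the behaviour described in the abstract.
balanced = collapse_index(set(range(10)), [set(range(5)), set(range(5, 10))])
skewed = collapse_index(set(range(10)), [set(range(9)), {9}])
print(balanced, skewed)  # 5 1
```
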
6

Kasemsri, Rawiroj Robert. "A Survey, Taxonomy, and Analysis of Network Security Visualization Techniques." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/cs_theses/17.

Abstract:
Network security visualization is a relatively new field and is quickly gaining momentum. Network security visualization allows the display and projection of network or system data, in the hope of efficiently monitoring and protecting the system from intrusions or possible attacks. Intrusions and attacks continue to increase in number, size, and complexity. Reading through log files or other textual sources is currently insufficient to secure a network or system, and it is an endless and aggravating task for network security analysts. Using graphical visualization, security information is presented visually rather than only as text. Visualization provides a method of displaying a large volume of information in a relatively small space. It also makes patterns easier to detect, recognize, and analyze, which can help security experts to detect problems that may otherwise be missed in reading text-based log files. Network security visualization has become an active research field in the past six years and a large number of visualization techniques have been proposed. A comprehensive analysis of the existing techniques is needed to help network security designers make informed decisions about the appropriate visualization techniques under various circumstances. Moreover, a taxonomy is needed to classify the existing network security visualization techniques and present a high-level overview of the field. In this thesis, the author surveyed the field of network security visualization. Specifically, the author analyzed network security visualization techniques from the perspective of data model, visual primitives, security analysis tasks, user interaction, and other design issues. Various statistics were generated from the literature. Based on this analysis, the author has attempted to generate useful guidelines and principles for designing effective network security visualization techniques. The author also proposed a taxonomy for security visualization techniques; to the author’s knowledge, this is the first attempt to generate a taxonomy for network security visualization. Finally, the author evaluated the existing network security visualization techniques and discussed their characteristics and limitations. For future research, the author also discussed some open research problems in this field. This research is a step towards a thorough analysis of the problem space and the solution space in network security visualization.
7

Alkhaldi, Rawan. "Spatial data transmission security: authentication of spatial data using a new temporal taxonomy." Thesis, University of Nevada, Reno, 2005. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1433280.

8

Weng, Li-Tung. "Information enrichment for quality recommender systems." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/29165/1/Li-Tung_Weng_Citation.pdf.

Abstract:
The explosive growth of the World-Wide-Web and the emergence of ecommerce are the two major factors that have led to the development of recommender systems (Resnick and Varian, 1997). The main task of recommender systems is to learn from users and recommend items (e.g. information, products or books) that match the users’ personal preferences. Recommender systems have been an active research area for more than a decade, and many different techniques and systems with distinct strengths have been developed to generate better quality recommendations. One of the main factors that affect recommendation quality is the amount of information resources available to the recommender. The main feature of recommender systems is their ability to make personalised recommendations for different individuals. However, it is difficult for many ecommerce sites to obtain sufficient knowledge about their users, so the recommendations they provide are often poor and not personalised. This information insufficiency problem is commonly referred to as the cold-start problem. Most existing research on recommender systems focuses on developing techniques to better utilise the available information resources to achieve better recommendation quality. However, while the amount of available data and information remains insufficient, these techniques can only provide limited improvements to the overall recommendation quality. In this thesis, a novel and intuitive approach towards improving recommendation quality and alleviating the cold-start problem is attempted: enriching the information resources. It can easily be observed that when there is sufficient information and knowledge to support recommendation making, even the simplest recommender systems can outperform sophisticated ones with limited information resources. Two possible strategies are suggested in this thesis to achieve the proposed information enrichment for recommenders:
• The first strategy suggests that information resources can be enriched by considering other information or data facets. Specifically, a taxonomy-based recommender, Hybrid Taxonomy Recommender (HTR), is presented in this thesis. HTR exploits the relationship between users’ taxonomic preferences and item preferences from the combination of the widely available product taxonomic information and the existing user rating data, and it then utilises this taxonomic-preference-to-item-preference relation to generate high quality recommendations.
• The second strategy suggests that information resources can be enriched simply by obtaining information resources from other parties. In this thesis, a distributed recommender framework, Ecommerce-oriented Distributed Recommender System (EDRS), is proposed. The proposed EDRS allows multiple recommenders from different parties (i.e. organisations or ecommerce sites) to share recommendations and information resources with each other in order to improve their recommendation quality.
Based on the results obtained from the experiments conducted in this thesis, the proposed systems and techniques achieved considerable improvement in both making quality recommendations and alleviating the cold-start problem.
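
The core of the first strategy, lifting item ratings into taxonomic preferences and then scoring unseen items by their categories, can be sketched as below; the items, categories and scoring rule are hypothetical simplifications of the HTR idea, not its actual algorithm.

```python
from collections import defaultdict

# Hypothetical ratings (1-5 scale) and a product taxonomy mapping items
# to categories; all names are illustrative.
ratings = {"book_a": 5, "book_b": 4, "book_c": 1}
taxonomy = {
    "book_a": ["fiction", "sci-fi"],
    "book_b": ["fiction", "fantasy"],
    "book_c": ["non-fiction", "history"],
    "book_d": ["fiction", "sci-fi"],       # unrated candidate
    "book_e": ["non-fiction", "history"],  # unrated candidate
}

# Step 1: derive a taxonomic preference profile (mean rating of the
# rated items seen under each category).
sums, counts = defaultdict(float), defaultdict(int)
for item, r in ratings.items():
    for cat in taxonomy[item]:
        sums[cat] += r
        counts[cat] += 1
topic_pref = {cat: sums[cat] / counts[cat] for cat in sums}

# Step 2: score unrated items by the mean preference of their categories.
def score(item):
    cats = taxonomy[item]
    return sum(topic_pref.get(c, 0.0) for c in cats) / len(cats)

candidates = [i for i in taxonomy if i not in ratings]
print(sorted(candidates, key=score, reverse=True))  # book_d ranks above book_e
```
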
9

Weng, Li-Tung. "Information enrichment for quality recommender systems." Queensland University of Technology, 2008. http://eprints.qut.edu.au/29165/.

Abstract:
The explosive growth of the World-Wide-Web and the emergence of ecommerce are the two major factors that have led to the development of recommender systems (Resnick and Varian, 1997). The main task of recommender systems is to learn from users and recommend items (e.g. information, products or books) that match the users’ personal preferences. Recommender systems have been an active research area for more than a decade, and many different techniques and systems with distinct strengths have been developed to generate better quality recommendations. One of the main factors that affect recommendation quality is the amount of information resources available to the recommender. The main feature of recommender systems is their ability to make personalised recommendations for different individuals. However, it is difficult for many ecommerce sites to obtain sufficient knowledge about their users, so the recommendations they provide are often poor and not personalised. This information insufficiency problem is commonly referred to as the cold-start problem. Most existing research on recommender systems focuses on developing techniques to better utilise the available information resources to achieve better recommendation quality. However, while the amount of available data and information remains insufficient, these techniques can only provide limited improvements to the overall recommendation quality. In this thesis, a novel and intuitive approach towards improving recommendation quality and alleviating the cold-start problem is attempted: enriching the information resources. It can easily be observed that when there is sufficient information and knowledge to support recommendation making, even the simplest recommender systems can outperform sophisticated ones with limited information resources. Two possible strategies are suggested in this thesis to achieve the proposed information enrichment for recommenders:
• The first strategy suggests that information resources can be enriched by considering other information or data facets. Specifically, a taxonomy-based recommender, Hybrid Taxonomy Recommender (HTR), is presented in this thesis. HTR exploits the relationship between users’ taxonomic preferences and item preferences from the combination of the widely available product taxonomic information and the existing user rating data, and it then utilises this taxonomic-preference-to-item-preference relation to generate high quality recommendations.
• The second strategy suggests that information resources can be enriched simply by obtaining information resources from other parties. In this thesis, a distributed recommender framework, Ecommerce-oriented Distributed Recommender System (EDRS), is proposed. The proposed EDRS allows multiple recommenders from different parties (i.e. organisations or ecommerce sites) to share recommendations and information resources with each other in order to improve their recommendation quality.
Based on the results obtained from the experiments conducted in this thesis, the proposed systems and techniques achieved considerable improvement in both making quality recommendations and alleviating the cold-start problem.
10

Costin, Aaron. "A new methodology for interoperability of heterogeneous bridge information models." Diss., Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55012.

Abstract:
With the passing of the MAP-21 (Moving Ahead for Progress in the 21st Century) Act in 2012, the United States bridge industry has had a significant push for the use of innovative technologies to advance the highway transportation system. Bridge Information Modeling (BrIM) is emerging as an important trend in the industry, in which various technologies and software are being used in all phases of the bridge lifecycle and have been shown to have a variety of benefits. However, most software packages are stand-alone applications that do not efficiently exchange data with one another. This lack of interoperability creates impediments to the efficient and seamless transfer of information across the bridge lifecycle. In recent years, the building industry developed standards to promote interoperability for Building Information Models (BIM). Unfortunately, these standards lack the ability to incorporate bridges. Therefore, there is a major need for a standard for Bridge Information Modeling (BrIM). Moreover, as technology and modeling software have become more prevalent in other domains (roads, geotechnical, environmental systems, etc.), there is an even larger need to expand interoperability standards across multi-disciplinary domains. The purpose of this research is to develop a methodology that enables the interoperability of multi-disciplinary information models. The scope of the methodology is Bridge Information Models, but the approach is extendable to other domains. This research is motivated by the fundamental issues of interoperability, such as semantic, logic, and software issues. In this research, the fundamental issues of interoperability are investigated, along with an in-depth review of literature proposing solutions. Additionally, current standards for interoperability of information models are reviewed. Based on the findings of the literature review, this research develops, evaluates, and validates a novel methodology for interoperability of information models. The fundamental issues of interoperability are addressed by the use of a taxonomy and ontology. A new standardization process to capture domain knowledge, called an “Information Exchange Standard”, is outlined, along with a novel method of developing an ontology based on industry workflows. This methodology has been used and validated in an industry domain case study. A software tool to automate the capturing of domain knowledge and the development of a taxonomy is presented.
11

Wu, Sheng-Tang. "Knowledge discovery using pattern taxonomy model in text mining." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16675/.

Abstract:
In the last decade, many data mining techniques have been proposed for fulfilling various knowledge discovery tasks in order to achieve the goal of retrieving useful information for users. Various types of patterns can be generated using these techniques, such as sequential patterns, frequent itemsets, and closed and maximum patterns. However, how to effectively exploit the discovered patterns is still an open research issue, especially in the domain of text mining. Most text mining methods adopt a keyword-based approach to construct text representations consisting of single words or single terms, whereas other methods have tried to use phrases instead of keywords, based on the hypothesis that a phrase carries more information than a single term. Nevertheless, these phrase-based methods did not yield significant improvements, because patterns with high frequency (normally the shorter patterns) usually have a high value on exhaustivity but a low value on specificity, and the specific patterns thus encounter the low frequency problem. This thesis presents research on developing an effective Pattern Taxonomy Model (PTM) to overcome this problem by deploying discovered patterns into a hypothesis space. PTM is a pattern-based method which adopts the technique of sequential pattern mining and uses closed patterns as features in the representation. A PTM-based information filtering system is implemented and evaluated by a series of experiments on the latest version of the Reuters dataset, RCV1. Pattern evolution schemes are also proposed in this thesis in an attempt to utilise information from negative training examples to update the discovered knowledge. The results show that the PTM outperforms not only all up-to-date data mining-based methods, but also the traditional Rocchio and the state-of-the-art BM25 and Support Vector Machines (SVM) approaches.
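
The pattern-deployment idea can be sketched as follows, assuming a few already-mined closed sequential patterns with made-up weights (in PTM these come from sequential pattern mining over training documents); this illustrates pattern-based scoring in general, not the thesis's exact model.

```python
# Hypothetical closed sequential patterns with illustrative weights.
patterns = {
    ("global", "economy"): 0.6,
    ("stock", "market", "fall"): 0.3,
    ("interest", "rate"): 0.5,
}

def contains_subsequence(words, pattern):
    # True if `pattern` occurs in `words` as an ordered (not necessarily
    # contiguous) subsequence -- the usual sequential-pattern containment.
    it = iter(words)
    return all(w in it for w in pattern)

def score(document):
    # Score a document by the total weight of the deployed patterns
    # it supports.
    words = document.lower().split()
    return sum(w for p, w in patterns.items() if contains_subsequence(words, p))

print(round(score("The global stock market did fall despite the economy"), 2))  # 0.9
```
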
12

Wu, Sheng-Tang. "Knowledge discovery using pattern taxonomy model in text mining." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16675/1/Sheng-Tang_Wu_Thesis.pdf.

Abstract:
In the last decade, many data mining techniques have been proposed for fulfilling various knowledge discovery tasks in order to achieve the goal of retrieving useful information for users. Various types of patterns can be generated using these techniques, such as sequential patterns, frequent itemsets, and closed and maximum patterns. However, how to effectively exploit the discovered patterns is still an open research issue, especially in the domain of text mining. Most text mining methods adopt a keyword-based approach to construct text representations consisting of single words or single terms, whereas other methods have tried to use phrases instead of keywords, based on the hypothesis that a phrase carries more information than a single term. Nevertheless, these phrase-based methods did not yield significant improvements, because patterns with high frequency (normally the shorter patterns) usually have a high value on exhaustivity but a low value on specificity, and the specific patterns thus encounter the low frequency problem. This thesis presents research on developing an effective Pattern Taxonomy Model (PTM) to overcome this problem by deploying discovered patterns into a hypothesis space. PTM is a pattern-based method which adopts the technique of sequential pattern mining and uses closed patterns as features in the representation. A PTM-based information filtering system is implemented and evaluated by a series of experiments on the latest version of the Reuters dataset, RCV1. Pattern evolution schemes are also proposed in this thesis in an attempt to utilise information from negative training examples to update the discovered knowledge. The results show that the PTM outperforms not only all up-to-date data mining-based methods, but also the traditional Rocchio and the state-of-the-art BM25 and Support Vector Machines (SVM) approaches.
13

Liang, Huizhi. "User profiling based on folksonomy information in Web 2.0 for personalized recommender systems." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41879/1/Huizhi_Liang_Thesis.pdf.

Abstract:
Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem, and recommender systems are one popular personalisation tool to help users deal with this issue. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affects the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible solutions for profiling users. Folksonomy, or tag, information is a typical kind of Web 2.0 information. Folksonomy implies users’ topic interests and opinion information, and it has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag quality problem and profile users accurately. Harvesting the wisdom of crowds and experts, three new user profiling approaches are proposed: a folksonomy-based user profiling approach, a taxonomy-based user profiling approach, and a hybrid user profiling approach based on folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real world datasets collected from the Amazon.com and CiteULike websites. The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation approach based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users overcome information overload by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of the taxonomy information given by experts and the folksonomy information contributed by users in Web 2.0.
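
The noise-handling and profiling steps can be illustrated with the small sketch below: tag synonyms are collapsed before building a normalised tag profile, and cosine similarity between profiles could then drive user-based collaborative filtering. The tags and the synonym map are hypothetical simplifications; the thesis's actual approaches are more sophisticated.

```python
import math
from collections import Counter

# A tiny, hypothetical synonym map standing in for the tag noise
# reduction step (real folksonomies need far more than this).
SYNONYMS = {"websec": "security", "infosec": "security"}

def profile(tags):
    # Build a user profile as normalised tag frequencies, collapsing
    # known synonyms so variant tags reinforce the same interest.
    counts = Counter(SYNONYMS.get(t.lower(), t.lower()) for t in tags)
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {t: c / norm for t, c in counts.items()}

def similarity(p, q):
    # Cosine similarity between two tag profiles; usable as the
    # user-user weight in collaborative filtering.
    return sum(w * q.get(t, 0.0) for t, w in p.items())

alice = profile(["security", "websec", "python", "infosec"])
bob = profile(["security", "python", "recipes"])
print(round(similarity(alice, bob), 3))  # ~0.73
```
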
14

Stalker, Joshua D. "A Reading Preference and Risk Taxonomy for Printed Proprietary Information Compromise in the Aerospace and Defense Industry." NSUWorks, 2012. http://nsuworks.nova.edu/gscis_etd/314.

Abstract:
The protection of proprietary information that users print from their information systems is a significant and relevant concern in the field of information security to both researchers and practitioners. Information security researchers have repeatedly indicated that human behaviors and perceptions are important factors influencing the information security of organizations and have called for more research. The aerospace and defense industry commonly deals with its own proprietary information as well as that of its customers. Further, e-training is a growing practice in this industry; it frequently deals with proprietary information and poses unique information security challenges, and it thus serves as additional context for this study. This study focused on the investigation of two constructs, user reading preference and user perceived risk of compromising printed proprietary information, as well as seven user demographics. These constructs reflect human behavior and risk perceptions associated with compromising printed proprietary information and thus provide valuable insights into information security. This study developed a Reading Preference and Risk (RPR) Taxonomy, which allows users to be classified according to the two constructs under investigation and provides insightful characterizations of information security risks. A survey based on existing literature, the primary constructs, and several demographics was implemented to assess two research questions and seven associated hypotheses. The survey was sent to 1,728 employees of an aerospace and defense organization. The response rate was 18% with 311 usable records. The results of the study showed that employees were dispersed across the RPR Taxonomy, with 15.1% identified as potentially problematic to the protection of printed proprietary information. The overall results showed that the population had a reading preference for print materials and a high perceived risk of compromising printed proprietary information, as well as a significantly higher print preference for e-training materials when it was necessary to retain the content in memory. Significant differences in the two constructs were also found across several demographics, including age, gender, frequency of user exposure to proprietary information, the confidentiality level of the proprietary information a user is regularly exposed to, and previous user experience with the compromise of proprietary information. Recommendations for practice and research are provided, and several areas for future research are presented.
15

Da, Silva Tiago Ferreira. "Risk identification and project approval: an importance-performance analysis of taxonomy-based risks in Information Technology projects." OpenSIUC, 2010. https://opensiuc.lib.siu.edu/theses/355.

Abstract:
The dissemination of project management practices is consolidating projects as a key means of achieving an organization's strategic plan (PMI, 2008, p. 10). But resources and funds are not available to all, and competition among alternative projects is increasing. Funding institutions, government agencies, and credit rating organizations have started considering project risk in project evaluation, instead of only using financial metrics such as ROI (Return on Investment) or NPV (Net Present Value). In order to obtain the resources needed to implement projects, and thus contribute to an organization's objectives, it is necessary to identify the risks that have the greatest influence on project approval. This study applied Importance-Performance Analysis (Martilla & James, 1977) to identify the main risk identification deficiencies for Information Technology (IT) project managers who apply for resources or funding approval. The identification was made possible by measuring IT project managers' perceptions of (1) the level of importance, or positive influence, which different risk categories have in project approval; (2) the level of performance they believe they have in the identification of these risks, meaning prior detection and registering; and (3) the gap between the measured levels of importance and performance. A survey listing 28 risk categories belonging to a validated risk taxonomy, with 5-point Likert scales for the levels of importance and performance, was presented to IT project managers from the central Illinois area. A total of 38 professionals answered the survey instrument, and verification of exclusion criteria resulted in an adjusted sample of 32 subjects. Descriptive statistics were used to compare mean values for each category, determining the gap between importance and performance levels for every category. The risk categories with the top three scores for importance were Scope uncertainty, Legal/regulatory, and Financial. The risk categories that represented the three greatest deficiencies or gaps (importance versus performance) for IT project approval were Contractual, Complexity, and Scope uncertainty.
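
Importance-Performance Analysis itself is simple to reproduce: each category's mean importance is compared with its mean performance, and the gap (together with the classic IPA quadrant) identifies deficiencies. The scores below are invented for illustration, not the study's data.

```python
# Hypothetical mean 5-point Likert scores per risk category,
# as (importance, performance); the numbers are made up.
scores = {
    "Scope uncertainty": (4.6, 2.7),
    "Contractual":       (4.0, 2.4),
    "Complexity":        (4.1, 2.6),
    "Schedule":          (3.2, 3.4),
}

imp_mean = sum(i for i, _ in scores.values()) / len(scores)
perf_mean = sum(p for _, p in scores.values()) / len(scores)

# Largest importance-performance gaps first; the classic IPA grid labels
# high importance + low performance as "concentrate here".
for cat, (imp, perf) in sorted(scores.items(), key=lambda kv: kv[1][1] - kv[1][0]):
    gap = imp - perf
    quadrant = ("concentrate here" if imp >= imp_mean and perf < perf_mean
                else "keep up the good work" if imp >= imp_mean
                else "low priority" if perf < perf_mean
                else "possible overkill")
    print(f"{cat}: gap={gap:.1f} ({quadrant})")
```
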
16

Ekholm, Helena. "Learning Through Level Design : Using a learning taxonomy to map level design to pedagogy." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-9471.

Abstract:
Entertainment games are known for their motivational and engaging benefits when it comes to teaching the player how to play. Still, there is little research about the connection between pedagogy and entertainment games. Such knowledge could be used to develop educational games that utilize those sought-after benefits of engagement and motivation. The purpose of this research is therefore to conduct a case study that identifies the underlying pedagogical elements in the level design components of game progression and pacing in the entertainment game Space Team: Pocket Planets. The results show that by breaking down gameplay into the level design components used to teach the player how to play the game, and mapping them to a learning taxonomy, the pedagogical elements that correspond to those components can be identified. This information can be used as a method for evaluating the pedagogy present in other games and for bridging the knowledge gap between designers of educational and entertainment games.
17

Kidwell, Billy R. "MiSFIT: Mining Software Fault Information and Types." UKnowledge, 2015. http://uknowledge.uky.edu/cs_etds/33.

Abstract:
As software becomes more important to society, the number, age, and complexity of systems grow. Software organizations require continuous process improvement to maintain the reliability, security, and quality of these software systems. Software organizations can utilize data from manual fault classification to meet their process improvement needs, but many organizations lack the expertise or resources to implement such classification correctly. This dissertation addresses the need for the automation of software fault classification. Validation results show that automated fault classification, as implemented in the MiSFIT tool, can group faults of similar nature. The resulting classifications show good agreement for common software faults with no manual effort. To evaluate the method and tool, I develop and apply an extended change taxonomy to classify the source code changes that repaired software faults from an open source project. MiSFIT clusters the faults based on the changes. I manually inspect a random sample of faults from each cluster to validate the results. The automatically classified faults are used to analyze the evolution of a software application over seven major releases. The contributions of this dissertation are an extended change taxonomy for software fault analysis, a method to cluster faults by the syntax of the repair, empirical evidence that fault distribution varies according to the purpose of the module, and the identification of project-specific trends from the analysis of the changes.
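
As a very rough stand-in for the clustering step, the sketch below groups fault-repairing changes by the overlap of their syntax tokens using a simple Jaccard-similarity rule; the token bags, threshold and grouping rule are all hypothetical, not MiSFIT's actual method.

```python
# Hypothetical fault-repair diffs reduced to bags of syntax tokens.
changes = {
    "bug-101": {"if", "==", "null", "check"},
    "bug-102": {"if", "!=", "null", "check"},
    "bug-103": {"for", "index", "bound", "<"},
    "bug-104": {"for", "index", "bound", "<="},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

THRESHOLD = 0.4  # illustrative cut-off; would need tuning in practice
clusters = []    # each cluster is a list of (bug id, token set) pairs
for bug, tokens in changes.items():
    best = max(clusters, key=lambda c: jaccard(tokens, c[0][1]), default=None)
    if best is not None and jaccard(tokens, best[0][1]) >= THRESHOLD:
        best.append((bug, tokens))
    else:
        clusters.append([(bug, tokens)])

for c in clusters:
    print([bug for bug, _ in c])  # null-check fixes vs. loop-bound fixes
```
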
18

Forrest, Jeffrey S. "Information Policies and Practices of Knowledge Management (KM) as Related to the Development of the Global Aviation Information Network (GAIN): An Applied Case Study and Taxonomy Development." NSUWorks, 2006. http://nsuworks.nova.edu/gscis_etd/525.

Abstract:
The Global Aviation Information Network (GAIN) was initiated in response to U.S. Government policies seeking to reduce airline accidents. GAIN was to disseminate airline or aviation safety information in environments where public disclosure impedes the diffusion of information. Government legislation such as the U.S. Freedom of Information Act and other information policies create risks of public disclosure to those reporting information. Therefore, the problem investigated in this research was to identify and evaluate potential solutions to policy issues in public disclosure that prevent the collection and sharing of aviation safety information. Interactions between GAIN, information policy, and knowledge management (KM), and their impact on the diffusion of information, were explored. A generalized taxonomy and ontology of KM was interpreted and presented. This taxonomy represents grounded theory developed from the examination of examples and cases of KM contained in the literature, and it may be used to address challenges related to information or knowledge diffusion in various settings. A specialized taxonomy and ontology addressing issues controlling the diffusion of airline safety information was also interpreted. This taxonomy presented issues related to diffusion, disclosure, and policy that may be used to help design and implement airline safety information sharing systems. Content analysis and text-mining processes were used to help interpret and develop the taxonomies, ontologies, and recommendations made in this study. This dissertation presents models for using these techniques to develop taxonomy and related ontology from published documentation and recorded interviews; practitioners may use the methodology of this study to build taxonomy and ontology in other areas of study. Inductive reasoning was used to develop potential solutions to policy issues in public disclosure that prevent the collection and sharing of aviation safety information within GAIN's community and network of practice. GAIN should evolve into a community of practice serving as an information intermediary to various alliances seeking to share aviation safety information, and it should focus on assisting alliances with creating environments of trust and collaboration and with developing policies and fair processes for addressing public disclosure as a barrier to the diffusion of aviation safety information.
19

Ayfarah, Souad Mohamed. "An exploration of indirect human costs associated with information systems adoption." Thesis, Brunel University, 2004. http://bura.brunel.ac.uk/handle/2438/4856.

Abstract:
One of the dilemmas that information systems (IS) decision-makers encounter is the identification of the often hidden costs associated with IS adoption, particularly since most of them are reported to be external to the traditional IS budget. The review of the IS literature has identified that much effort to date has focused on the identification and measurement of direct costs, and that much less attention has been paid to indirect costs. One of the main problems reported in the literature with looking at indirect costs is that they are intangible and difficult to quantify, and there is evidence suggesting that these indirect costs are rarely completely budgeted for; thus they deserve much closer consideration by decision-makers. This research investigates this view, arguing that one element of indirect costs, namely indirect human costs (IHCs), is underestimated and little understood. The author argues that it is not possible to estimate or evaluate IHCs without first identifying all their components, yet there is an absence of models that show how such costs are allocated for IS adoption. This underpins the necessity of the present research. Proposed here is a framework of nine sequential phases for accommodating indirect human costs. In addition, 1) three conjectures, 2) a cost taxonomy and 3) an interrelationship-mapping cost driver model of IHCs are proposed based on the literature analysis, underpinning the conceptual phases of the framework. To test the conjectures and validate the proposed models, a case research strategy using case settings was carried out in the private sector. Empirical findings validate the proposed models and reveal that indirect human costs are perceived as costs associated with IS adoption, yet they are not included in the evaluation process or investment proposals. However, during the empirical research, new cost factors and drivers emerged, which resulted in modifications being made to the previously proposed conceptual models. In doing so, this research provides investment decision-makers with novel frames of reference and an extensive list of IHCs that can be used during both IS budget proposals and the evaluation process of the IS investment.
20

Nadee, Wanvimol. "Modelling user profiles for recommender systems." Thesis, Queensland University of Technology, 2016. https://eprints.qut.edu.au/93723/1/Wanvimol_Nadee_Thesis.pdf.

Abstract:
Recommender systems assist users in finding what they want. The challenging issue is how to efficiently acquire user preferences or user information needs for building personalized recommender systems. This research explores the acquisition of user preferences using data taxonomy information to enhance personalized recommendations and alleviate the cold-start problem. A concept hierarchy model is proposed, which provides a two-dimensional hierarchy for acquiring user preferences. The language model is also extended for the proposed hierarchy in order to generate an effective recommender algorithm. Both Amazon.com book and music datasets are used to evaluate the proposed approach, and the experimental results show that the proposed approach is promising.
21

Wang, Xiaoli (Li). "Integrating information literacy into higher education curricula: An IL curricular integration model." Thesis, Queensland University of Technology, 2010. https://eprints.qut.edu.au/41747/1/Xiaoli%20Wang%20Thesis.pdf.

Abstract:
This study investigates a way to systematically integrate information literacy (IL) into an undergraduate academic programme and develops a model for integrating information literacy across higher education curricula. Curricular integration of information literacy in this study means weaving information literacy into an academic curriculum. In the associated literature, it is also referred to as the information literacy embedding approach or the intra-curricular approach. The key findings identified from this study are presented in four categories: the characteristics of IL integration; the key stakeholders in IL integration; IL curricular design strategies; and the process of IL curricular integration. Three key characteristics of the curricular integration of IL are identified: collaboration and negotiation, contextualisation and ongoing interaction with information. The key stakeholders in the curricular integration of IL are recognised as the librarians, the course coordinators and lecturers, the heads of faculties or departments, and the students. Some strategies for IL curricular design include: the use of IL policies and standards in IL curricular design; the combination of face-to-face and online teaching as an emerging trend; and the use of IL assessment tools, which play an important role in IL integration. IL can be integrated into the intended curriculum (what an institution expects its students to learn), the offered curriculum (what the teachers teach) and the received curriculum (what students actually learn). IL integration is a process of negotiation, collaboration and the implementation of the intended curriculum. IL can be integrated at different levels of curricula, such as the institutional, faculty, departmental, course and class curriculum levels. Based on these key findings, an IL curricular integration model is developed. The model integrates curriculum, pedagogy and learning theories, IL theories, IL guidelines and the collaboration of multiple partners. The model provides a practical approach to integrating IL into multiple courses across an academic degree. The development of the model was based on the IL integration experiences of various disciplines in three universities and the implementation experience of an engineering programme at another university; thus it may be of interest to other disciplines. The model has the potential to enhance IL teaching and learning and curricular development, and to implement graduate attributes in higher education. Sociocultural theories are applied to the research process and IL curricular design of this study. Sociocultural theories describe learning as being embedded within social events and occurring as learners interact with other people, objects, and events in a collaborative environment. They are applied to explore how academic staff and librarians experience the curricular integration of IL; they also support collaboration in the curricular integration of IL and the development of an IL integration model. This study consists of two phases. Phase I (2007) was the interview phase, in which both academic staff and librarians at three IL-active universities were interviewed. During this phase, attention was paid specifically to the practical process of curricular integration of IL and IL activity design. Phase II, the development phase (2007-2008), was conducted at a fourth university. This phase explored the systematic integration of IL into an engineering degree from Year 1 to Year 4.
Learning theories such as sociocultural theories, Bloom’s Taxonomy and IL theories are used in IL curricular development. Based on the findings from both phases, an IL integration model was developed. The findings and the model contribute to IL education, research and curricular development in higher education. The sociocultural approach adopted in this study also extends the application of sociocultural theories to the IL integration process and curricular design in higher education.
22

Zhou, Xujuan. "Rough set-based reasoning and pattern mining for information filtering." Thesis, Queensland University of Technology, 2008. https://eprints.qut.edu.au/29350/1/Xujuan_Zhou_Thesis.pdf.

Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches have to deal with low frequency pattern issues. The measures used by the data mining technique to learn the profile (for example, “support” and “confidence”) have turned out to be unsuitable for filtering and can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and pattern mining as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed by using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented: a new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, and the most likely relevant documents are assigned higher scores by the ranking function. Because a relatively small number of documents are left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and the state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
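
The stage-one topic filter can be sketched as a three-way decision in the rough set spirit: accept clearly relevant documents, reject clearly irrelevant ones, and defer the boundary cases to the more expensive pattern mining stage. The profile and the fixed thresholds below are invented; the thesis derives its threshold from rough set decision theory.

```python
# Hypothetical topic profile (term weights) and illustrative thresholds.
topic_profile = {"market": 0.8, "stock": 0.6, "economy": 0.7}
UPPER, LOWER = 1.2, 0.5

def topic_score(text):
    return sum(topic_profile.get(w, 0.0) for w in text.lower().split())

def stage_one(text):
    # Three regions: positive (accept), negative (reject), and a
    # boundary region deferred to the pattern mining stage.
    s = topic_score(text)
    if s >= UPPER:
        return "accept"
    if s <= LOWER:
        return "reject"
    return "defer to pattern mining"

for doc in ["stock market rally", "economy slows", "sourdough recipe"]:
    print(doc, "->", stage_one(doc))
```
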
23

Zhou, Xujuan. "Rough set-based reasoning and pattern mining for information filtering." Queensland University of Technology, 2008. http://eprints.qut.edu.au/29350/.

Abstract:
An information filtering (IF) system monitors an incoming document stream to find the documents that match the information needs specified by the user profiles. Learning to use the user profiles effectively is one of the most challenging tasks when developing an IF system. With the document selection criteria better defined based on the users’ needs, filtering large streams of information can be more efficient and effective. To learn the user profiles, term-based approaches have been widely used in the IF community because of their simplicity and directness, and they are relatively well established. However, these approaches have problems when dealing with polysemy and synonymy, which often lead to an information overload problem. Recently, pattern-based approaches (or Pattern Taxonomy Models (PTM) [160]) have been proposed for IF by the data mining community. These approaches are better at capturing semantic information and have shown encouraging results for improving the effectiveness of IF systems. On the other hand, pattern discovery from large data streams is not computationally efficient, and these approaches have to deal with low frequency pattern issues. The measures used by the data mining technique to learn the profile (for example, “support” and “confidence”) have turned out to be unsuitable for filtering and can lead to a mismatch problem. This thesis uses rough set-based (term-based) reasoning and pattern mining as a unified framework for information filtering to overcome the aforementioned problems. The system consists of two stages: a topic filtering stage and a pattern mining stage. The topic filtering stage is intended to minimize information overload by filtering out the most likely irrelevant information based on the user profiles. A novel user-profile learning method and a theoretical model of the threshold setting have been developed by using rough set decision theory. The second stage (pattern mining) aims at solving the problem of information mismatch. This stage is precision-oriented: a new document-ranking function has been derived by exploiting the patterns in the pattern taxonomy, and the most likely relevant documents are assigned higher scores by the ranking function. Because a relatively small number of documents are left after the first stage, the computational cost is markedly reduced; at the same time, pattern discovery yields more accurate results. The overall performance of the system was improved significantly. The new two-stage information filtering model has been evaluated by extensive experiments. Tests were based on well-known IR benchmarking processes, using the latest version of the Reuters dataset, namely, the Reuters Corpus Volume 1 (RCV1). The performance of the new two-stage model was compared with both term-based and data mining-based IF models. The results demonstrate that the proposed information filtering system significantly outperforms the other IF systems, such as the traditional Rocchio IF model and the state-of-the-art term-based models, including BM25, Support Vector Machines (SVM), and the Pattern Taxonomy Model (PTM).
24

Arvidson, Martin, and Markus Carlbark. "Intrusion Detection Systems : Technologies, Weaknesses and Trends." Thesis, Linköping University, Department of Electrical Engineering, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1614.

Abstract:

Traditionally, firewalls and access control have been the most important components used to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs, and the technologies behind intrusion detection, are presented.

Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems.

APA, Harvard, Vancouver, ISO, and other styles
25

Johnson, Kristofer Dee. "The Application of Pedology, Stable Carbon Isotope Analyses and Geographic Information Systems to Ancient Soil Resource Investigations at Piedras Negras, Guatemala." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/549.

Full text
Abstract:
The ancient inhabitants of the Maya Lowlands enjoyed a long and fruitful period of growth which climaxed at around AD 800. At that time, millions of people successfully subsisted in a challenging environment that today supports only a fraction of that population. These facts, and the subsequent "Maya Collapse", are the impetus of many recent studies that utilize environmental data, in addition to conventional archaeology, to investigate this Maya mystery. Pedological studies, stable carbon isotope analysis of soil organic matter, and Geographic Information Systems (GIS) are three tools that can be used to answer crucial questions as to how the Maya managed their soil resources. GIS maps indicating areas of best agricultural potential, based on slope and soil type, were used as a guide to opportunistically sample soils in an area south of Piedras Negras, Guatemala - an area that was densely vegetated and unexplored. Soils representing the different soil resources of the area were sampled with a bucket auger at 15 cm intervals. The samples were then tested in a laboratory for physical and chemical characteristics, and δ13C values were determined for soil organic matter. Soil taxonomic descriptions indicated that the soil resources of the area were overall very good, as almost all the soils were classified as Mollisols - the most fertile of all the soil orders. The suite of great groups found was Haprendolls, Argiudolls, Argiaquolls and Udorthents. The characteristics which distinguish these great groups were used to further investigate relative agricultural productivity from an ancient soil resources point of view. Haprendolls were better drained and probably made good agricultural soils, provided that soil depth and rainfall were adequate. The Argiudolls, and especially the Argiaquolls, were probably less favored because of very high clay contents that made them more difficult to work, and because of poor drainage. Stable carbon isotope analyses revealed strong evidence for maize agriculture in some environments of the study area. δ13C values as high as -16.6‰ (76% C4 carbon) were observed in areas of significant soil accumulation in well drained and moderately drained soils. Minimal evidence of maize agriculture was found in more marginal environments, such as those with little soil accumulation or poor drainage. The shape of the δ13C-versus-depth profiles also indicated that ancient agriculture occurred continuously in some areas, but as distinguishable events in others. Finally, when the strength of the C4 signal was represented graphically and overlaid with a modified GIS agricultural potential map, a visual representation of the extent and degree of ancient agriculture was achieved. Our findings suggest that upland agriculture was favored by the ancient Maya of Piedras Negras and that the region between Piedras Negras and Yaxchilan was an agriculturally important breadbasket. The methods and results of this study provide foundational information for the investigation of ancient Maya agriculture. In future studies, it may be possible to more systematically map ancient agricultural fields and estimate the carrying capacity of a region based on its soil resources.
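The percentage of C4-derived carbon quoted above (e.g., 76% at δ13C = -16.6‰) is conventionally obtained from a two-end-member mixing model. As a sketch, since the abstract does not state the thesis's exact end-member values:

    \%C_4 = \frac{\delta^{13}C_{sample} - \delta^{13}C_{C3}}{\delta^{13}C_{C4} - \delta^{13}C_{C3}} \times 100

With typical end members of roughly -27‰ for C3 forest vegetation and -13‰ for C4 maize, a sample value of -16.6‰ gives (-16.6 + 27) / (-13 + 27) × 100 ≈ 74%, close to the 76% figure reported, which suggests the study used slightly different end members.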
APA, Harvard, Vancouver, ISO, and other styles
26

Ender, Linda. "Data Governance in Digital Platforms : A case analysis in the building sector." Thesis, Umeå universitet, Institutionen för informatik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-185598.

Full text
Abstract:
Data are often the foundation of digital innovation and are seen as a highly valuable asset for any organization. Many companies aim to put data at the core of their business but struggle to regulate data in complex environments, which makes data governance an integral part of data-driven business. However, only a minority of companies fully engage in data governance, and research lacks knowledge about data governance in complex environments such as digital platforms. This thesis therefore examines the role of data governance in digital platforms by researching the conceptual characteristics of platform data governance. The iterative taxonomy development process of Nickerson et al. (2013) has been used to classify the characteristics of platform data governance. The results are derived from existing literature and motivated by new insights from expert interviews as well as a case analysis of a real-life platform. The final taxonomy shows that the conceptual characteristics of platform data governance are based on the dimensions purpose, platform data, responsibilities, decision domains and compliance. The findings address challenges of data governance in inter-organizational settings and help practitioners define their own data governance. Additionally, the thesis highlights potential directions for future research.
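The final taxonomy described above can be represented compactly as dimensions mapped to characteristics. In the sketch below (Python), the five dimension names come from the abstract, while the characteristics and the classified case are invented placeholders rather than the thesis's actual taxonomy:

    # Platform data governance taxonomy: dimension -> candidate characteristics.
    taxonomy = {
        "purpose": ["compliance", "value creation"],
        "platform data": ["transaction data", "user data"],
        "responsibilities": ["platform owner", "complementors"],
        "decision domains": ["data quality", "data access"],
        "compliance": ["internal policy", "external regulation"],
    }

    def classify(case, taxonomy):
        """Keep only the (dimension, characteristic) pairs valid under the taxonomy."""
        return {dim: ch for dim, ch in case.items() if ch in taxonomy.get(dim, [])}

    example = {"purpose": "compliance", "decision domains": "data access"}
    print(classify(example, taxonomy))  # both pairs are valid cells of the taxonomy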
APA, Harvard, Vancouver, ISO, and other styles
27

Valiati, Eliane Regina de Almeida. "Avaliação de usabilidade de técnicas de visualização de informações multidimensionais." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2008. http://hdl.handle.net/10183/13699.

Full text
Abstract:
Multidimensional visualization techniques have the potential to support the visual analysis and exploration of large datasets by providing visual representations and interaction techniques that allow users to interact with the data through their graphical representation. In this context, several techniques have been developed, most of them reported without a broad and deep evaluation of their efficiency and utility in supporting users' tasks. Only quite recently have works been published addressing the many issues related to the evaluation of visualization systems and applications as a means of promoting their efficient and effective use. Despite these works, the usability evaluation of visualization systems' graphical interfaces remains a challenge because of the significant differences between these interfaces and those of other systems. There is thus a need for a systematic approach to such evaluations, including the definition of which usability methods and techniques are best suited for this kind of interface. This thesis reports an investigation of viable solutions for developing a systematic approach to the usability evaluation of multidimensional information visualizations. Several case studies and experiments with users were conducted, yielding the following contributions: 1) a taxonomy of visualization tasks related to the use of interactive visualization techniques for the exploration and analysis of multidimensional datasets, and 2) the adaptation of usability evaluation techniques and methods with the goal of making them more effective in the context of multidimensional information visualizations.
APA, Harvard, Vancouver, ISO, and other styles
28

Pipanmaekaporn, Luepol. "A data mining framework for relevance feature discovery." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/62857/1/Luepol_Pipanmaekaporn_Thesis.pdf.

Full text
Abstract:
This thesis is a study of the automatic discovery of text features for describing user information needs. It presents an innovative data-mining approach that discovers useful knowledge from both relevance and non-relevance feedback information. The proposed approach can largely reduce noise in discovered patterns and significantly improve the performance of text mining systems. This study provides a promising method for the study of Data Mining and Web Intelligence.
APA, Harvard, Vancouver, ISO, and other styles
29

Santos, Naiara Andrade Malta. "Taxonomia e etiquetagem: análise dos processos de organização e representação da informação jurídica na web." Instituto de Ciência da Informação da Universidade Federal da Bahia, 2014. http://repositorio.ufba.br/ri/handle/ri/18684.

Full text
Abstract:
The research analyses the taxonomy and tagging employed in the organization and representation of legal information on Brazilian legal websites. First, the Brazilian legal websites ranked among the 500 most accessed sites in the country in December 2013 were mapped, identifying two legal websites (the JusBrasil portal and the portal of the Court of Justice of the State of São Paulo), which were checked for the availability of the different types of legal documentation. The levels of taxonomy and tagging employed in the organization and representation of knowledge on the selected websites were then identified and compared. It was also verified whether the terms that make up the CAPES knowledge table for the field of Law are found in the taxonomies, in the tagging, and in the legal thesaurus of the STF. Data were collected through participant observation and a form; with respect to the treatment of the data, the research is characterized as a qualitative approach. The results present taxonomy and tagging as allies in the organization and representation of legal knowledge in the portals studied, with JusBrasil users participating collaboratively in the organization and representation of the legal knowledge available in that portal.
APA, Harvard, Vancouver, ISO, and other styles
30

Pinho, Laura Ramos Pimentel. "O mapa conceitual na construção de taxonomias para organização da informação na WEB." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/27/27151/tde-12012018-110159/.

Full text
Abstract:
Esta pesquisa foi motivada pela identificação da dificuldade de encontrar, compreender e gerenciar as informações na web, considerando a necessidade de uma metodologia específica para a organização das informações em ambiente digital e que não existem padrões definidos que tornem os conceitos e os assuntos compreensíveis. É necessário manter uma lógica clara e que faça sentido para o usuário. Foram analisadas hipóteses para validar a organização da informação da web sob a perspectiva da Organização e Representação da Informação, a partir da taxonomia e da linguagem documentária com o auxílio do mapa conceitual (ferramenta criada com base na Psicologia da Aprendizagem). A terminologia foi apresentada para identificação de termos e definições relacionadas ao mapa conceitual, verificando como este pode auxiliar na construção de taxonomias de navegação para a web. Realizou-se levantamento teórico e metodológico sobre mapa conceitual: definições, aplicações, características. Analisou-se também como ocorre a ligação dos conceitos que se relacionam entre si através de proposições no mapa conceitual, para compreender o todo através de cada uma de suas partes interligadas. Foram identificados alguns tipos de mapas conceituais, e para esta pesquisa especificamente foi utilizado o tipo hierárquico, que está diretamente relacionado com a taxonomia dentro de uma complexidade estrutural. A taxonomia por sua vez foi analisada sob a perspectiva da organização da informação no ambiente web. Foram identificadas suas definições, usos, relação com a organização da informação e com o mapa conceitual. Por se tratar do contexto web, os conceitos do design de interface foram abordados, contextualizando e relacionando de forma aplicada a representação da informação. Para exemplificar a construção do mapa conceitual foi utilizado o software CMap Tools mostrando como é possível organizar informações da web, partindo de premissas utilizadas na Ciência da Informação. Com base na pesquisa bibliográfica, nas análises realizadas e no exemplo criado, mostrou-se que o mapa conceitual é uma ferramenta que auxilia na construção de taxonomia e que essas premissas são reforçadas inclusive por pesquisadores de outras áreas como Design e Ciência da Computação, que ressaltam a importância da taxonomia para representar a informação e a relação de satisfação do usuário quando encontra o que procura.
This research was motivated by the identification of finding out, comprehending and managing informations on the web - considering the necessity of a specific methodology to organize informations in a digital environment, and also that does not exist a defined pattern that make the concepts and issues comprehensible. It\'s necessary to maintain a clear logic that makes sense to the user. Hypothesis were analyzed to validate the information organization on the web from the Information\'s Organization and Representation perspective, from the taxonomy and documentary language with the concept map (software created based on the Psychology of Learning) support. The terminology was introduced to identificate terms and definitions related to the concept map, verifying how it can help on the construction of navigating taxonomy for the web. A theoretical and methodologic survey about the concept map was made, investigating: definitions, applications and features. We also analyzed the connection between concepts that relate to each other through the prepositions on the concept map, to comprehend it all through its interconnected parts. A few types of concept maps were identified, and to this specific research the hierarchic type was used, which is directly related to the taxonomy within a structural complexity. The taxonomy, otherwise, was analyzed from the information\'s organization on the web perspective. Were identified its definitions, applications and the relation with the information\'s organization and with the concept map. Because it\'s the web concept, the concepts of user interface were used, contextualizing and relating to the representation of the information in an applied way. The software CMap Tools was used to exemplify the construction of a concept map, showing how it\'s possible to organize informations on the web, based upon the premises used in Science Information. Basing it upon the bibliography research, on the technical analyzes and on the created exemple, it showed that the concept map is a software that helps on the taxonomy constructions, and that these premises are reinforced by researchers from other areas, like Design and Computer Science, that emphasize the importance of taxonomy in representing the information and the satisfaction the user feels when finds what was being searched.
APA, Harvard, Vancouver, ISO, and other styles
31

Dogan, Hakan Mete. "Understanding And Modeling Plant Biodiversity Of Nallihan (a3-ankara) Forest Ecosystem By Means Of Geographic Information Systems And Remote Sensing." Phd thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/4/1172436/index.pdf.

Full text
Abstract:
In this study, geographic information systems (GIS) and remote sensing (RS) tools were integrated and used to investigate the plant species diversity of the Nallihan forest ecosystem. Two distinct indices, Shannon-Wiener and Simpson, were employed to express species diversity. The relationships between the indices and pertinent independent variables (topography, geology, soil, climate, supervised classes, and Normalized Difference Vegetation Index (NDVI) classes) were investigated to develop a distinct model for each index. After detecting important components with factor analysis, the two models were developed using multiple regression statistics. Running the models produced two plant species diversity maps in grid format. The validity of the models was tested by (1) mapping residuals to locate where the models work well, and (2) logical interpretation from an ecological point of view. Elevation and climatic factors formed the most important component influencing plant species diversity; geological formations, soil, land cover and land-use characteristics were also found influential in both models. Considering disturbance and potential evapotranspiration (PET), the model developed for the Shannon-Wiener index was found to be more suitable than the model for the Simpson index.
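Both indices named above have standard closed forms computed from the relative abundances p_i of the species in a sample: H' = -Σ p_i ln p_i (Shannon-Wiener) and D = Σ p_i² (Simpson). A minimal sketch (Python), using an invented abundance vector rather than the study's field data:

    import math

    def shannon_wiener(counts):
        """H' = -sum(p_i * ln p_i) over species with nonzero counts."""
        n = sum(counts)
        return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

    def simpson(counts):
        """D = sum(p_i ** 2); diversity is often reported as 1 - D."""
        n = sum(counts)
        return sum((c / n) ** 2 for c in counts)

    plot = [12, 5, 3, 1]  # species abundances from one hypothetical sampling plot
    print(round(shannon_wiener(plot), 3))  # ~1.084; higher means more diverse
    print(round(1 - simpson(plot), 3))     # ~0.594; Simpson diversity (1 - D)

These per-plot index values are what the study then regresses (after factor analysis) on the topographic, geological, soil, climatic and NDVI variables.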
APA, Harvard, Vancouver, ISO, and other styles
32

Bernardino, Mayara Cristina. "Representação da informação de bens culturais: construindo uma taxonomia no contexto das fazendas históricas paulistas." Universidade Federal de São Carlos, 2015. https://repositorio.ufscar.br/handle/ufscar/1154.

Full text
Abstract:
This research is set in the context of the Brazilian historic farms that were largely responsible for national economic development through coffee cultivation between the seventeenth and nineteenth centuries. Different research groups came together to develop and apply a methodology for cataloguing and inventorying the cultural assets of these farms, making them available in a free software system called Virtual Memory (VM). VM is developed on an open-source platform under the GPL (General Public License) and can store different types of historical collections, whether bibliographic, museological, archival, architectural or natural, in a single database. It was initially developed by researchers of the Computer Science course of the University of São Paulo (ICMC/USP), on the São Carlos - SP campus, and its prototype is ready for testing. These investigations are currently concentrated in the project "Criteria and methodologies for carrying out inventories of the São Paulo cultural heritage", supported by FAPESP; one of its results was the development of an Information Description Standard (SDI) that supports the provision of content in VM. In this context, the general objective was to develop a language instrument, specifically a taxonomy, to be used in support of content indexing in VM. With this focus, the following specific objectives were established: to compose a descriptive mapping of the studies developed within Brazilian Information Science concerning concepts, theories, methods and tools for the description, organization and representation of the national rural heritage; to recover, detail and describe the procedures already developed in earlier research for collecting words related to the São Paulo rural heritage; to specifically analyse international standards for information organization and retrieval in Information Science, among them ANSI/NISO Z39.19-2005; and to develop an introductory taxonomy structure to be incorporated into VM as a resource for indexing the cultural assets that may come to be registered. As a result, an initial taxonomy structure with 3,639 terms was built, hierarchically structured and with usage annotations. The broader aim of this memory environment ranges from preserving references to the tangible and intangible heritage of Brazilian historic farms to systematizing information that can improve the formulation of public policies for heritage education; in this sense, any tool that helps optimize the retrieval of information about cultural assets is welcome.
APA, Harvard, Vancouver, ISO, and other styles
33

Morais, Paula. "TAXSI: taxionomia de sistemas informáticos." Doctoral thesis, Universidade do Minho, 2001. http://hdl.handle.net/11328/942.

Full text
Abstract:
Doctoral thesis in Information Technologies and Systems - Information Systems Engineering and Management.
This thesis presents a taxonomy of computer-based information systems, TAXSI, together with a method for its use, serving two purposes: to systematize concepts about computer-based information systems and, it is argued, to contribute to improving the information systems development (ISD) process, in particular the requirements engineering (RE) phase. TAXSI is intended to be used in the early stages of ISD, namely during initial requirements elicitation, to identify the kind of computer-based information system (CBIS) to be developed, making it easier to indicate a starting point for the development or adoption of new systems; knowing the type of system to be developed, requirements engineers can carry out their work in a much more focused way. A literature review was carried out to identify and compare CBIS taxonomies and CBIS designations. As a result, a list containing more than a hundred different CBIS designations was built: some are used as homonyms although they refer to systems with different functionalities, while others could be used as synonyms since, despite their different names, they refer to systems with the same functionality. The review also identified thirteen CBIS taxonomies, which were analysed with respect to author and context, date and purpose, the dimensions used, the systems considered and the method used to obtain the dimensions, the construction method, the identified classes and the validation method. The review led to the conclusion that each author selects criteria and construction methods according to the taxonomy's objectives; consequently, the resulting taxonomies are too different for any classification pattern to be inferred. Taking this into account, a new taxonomy is proposed, based on the role of applications in the context of their use. From the analysis of this role, the following relevant dimensions were identified: knowledge about the objects existing in an organization, the organizational processes, the objects manipulated by the processes, the business domain, the operations included in the processes, and the typical architecture of each operation. The thesis also outlines a method for using TAXSI in the ISD process, in particular in the requirements engineering stage, and describes the validation carried out. The validation assessed whether existing systems could be adequately described with the defined dimensions, using four cases: a university, a spinning mill, a generic drugs pharmaceutical company and a public regulatory organization in the wine sector. The case results confirm the applicability of TAXSI in real contexts with respect to the adequate description of the analysed systems; other types of validation are also discussed in the thesis.
Supervision: Prof. Doutor João Álvaro de Carvalho.
APA, Harvard, Vancouver, ISO, and other styles
34

Barclay, Matthew W. "The Impact of Team-Based Learning’s Readiness Assurance Process on Virtually Isolated Adults." DigitalCommons@USU, 2011. https://digitalcommons.usu.edu/etd/1025.

Full text
Abstract:
The purpose of this study was to test the effectiveness of the readiness assurance process of team-based learning (TBL) in virtually isolated settings. Many Internet sites offer courses for adults to use on their own, without access to mentors or other learners, yet educational theory suggests that people learn better with others than by themselves. The focus of this investigation was whether including the readiness assurance process would increase participants' levels of learning, based on Bloom's revised taxonomy, within the limits of virtual isolation. An experimental pretest-posttest design was employed: using a 2-day mini-course about listening in marriage, 117 participants were randomly assigned to three groups. In the TBL group, married couples worked together following the principles of the readiness assurance process. In the independent group, one spouse from a marriage worked alone, also following the principles of the readiness assurance process. In the baseline group, one spouse from a marriage took the pretest and posttest only. The first posttest (posttest-L) measured lower levels of learning (remembering and understanding); the second (posttest-D) measured deeper learning (applying and evaluating). Using ANCOVA with the pretests as covariates, results showed a statistically significant difference in learning gains between the TBL group and the independent group for lower levels of learning (ES = .39). However, statistical significance was not achieved for deeper learning; moreover, TBL scores and independent scores were no different from the baseline scores for measures of deeper learning. Along with explanations for these results, limitations of the study are described and suggestions for future research are offered.
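The analysis named above, an ANCOVA comparing posttest scores across the three groups with the pretest as covariate, has a standard formulation. A minimal sketch (Python), assuming pandas and statsmodels are available; the scores are invented, not the study's data:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "group": ["TBL"] * 4 + ["independent"] * 4 + ["baseline"] * 4,
        "pre":  [10, 12, 9, 11, 10, 13, 8, 12, 11, 9, 10, 12],
        "post": [16, 18, 15, 17, 14, 16, 12, 15, 12, 10, 11, 13],
    })

    # Posttest modelled as pretest (covariate) plus group membership.
    model = smf.ols("post ~ pre + C(group)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # F-test for the group effect, pretest-adjusted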
APA, Harvard, Vancouver, ISO, and other styles
35

Medjkoune, Massissilia. "Vers une approche non orientée pour l'évaluation de la qualité des odeurs." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS010/document.

Full text
Abstract:
Characterizing the quality of a smell is a complex process that consists in identifying the set of descriptors that best summarizes the olfactory sensation during sensory analysis sessions. Generally, this characterization is a list of descriptors drawn from a vocabulary imposed by the industry of a given domain, and these sensory analysis sessions represent a significant yearly cost. Such oriented approaches, based on vocabulary learning, restrict the descriptors available to an uninitiated public and require costly learning phases. If this characterization could be entrusted to naive evaluators, the number of participants could be significantly enlarged while reducing the cost of sensory analyses. In that setting, however, each free description is no longer related to a set of non-ambiguous descriptors but to a simple bag of terms in natural language (NL). Two issues then arise in smell characterization: the first is how to translate such NL descriptions into structured descriptors; the second is how to summarize a set of individual characterizations proposed by a panel of evaluators into a single consistent and synthetic characterization that is meaningful for industrial purposes. Hence, the first part of this work focuses on the definition and evaluation of models that can be used to summarize a set of terms into unambiguous entity identifiers selected from a given ontology. Among the several strategies explored in this contribution, we compare hybrid approaches that take advantage of both knowledge bases (symbolic representations) and word embeddings derived from large text corpora. The results highlight the substantial benefit of mixing symbolic representations with classic word embeddings for this task. We then formally define the problem of summarizing sets of concepts and propose a model mimicking human-like intelligence for scoring alternative summaries with regard to a specific objective function. Interestingly, this non-oriented approach to assessing the quality of odors amounts to a cognitive automation of the task performed today by expert operators in sensory analysis sessions. It therefore opens interesting perspectives for developing scalable sensory analyses based on large panels of evaluators, for instance when characterizing olfactory pollution around an industrial site.
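The embedding side of the hybrid strategy described above can be sketched as a nearest-descriptor search: average the word vectors of the free description and pick the ontology descriptor with the highest cosine similarity. In this toy version (Python), the 3-dimensional vectors stand in for real pretrained embeddings and the descriptor set is invented; the knowledge-base (symbolic) side would additionally constrain candidates to valid ontology entities:

    import math

    vectors = {  # hypothetical word embeddings
        "smoky":  [0.9, 0.1, 0.0], "burnt": [0.8, 0.2, 0.1],
        "fruity": [0.0, 0.9, 0.2], "sweet": [0.1, 0.8, 0.3],
    }
    descriptors = {"smoke": [0.85, 0.15, 0.05], "fruit": [0.05, 0.85, 0.25]}

    def mean_vector(words):
        vs = [vectors[w] for w in words if w in vectors]
        return [sum(dim) / len(vs) for dim in zip(*vs)]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    description = ["smoky", "burnt"]          # a naive evaluator's free description
    v = mean_vector(description)
    best = max(descriptors, key=lambda d: cosine(v, descriptors[d]))
    print(best)  # -> 'smoke'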
APA, Harvard, Vancouver, ISO, and other styles
36

Yu, Ye. "ULTRA-FAST AND MEMORY-EFFICIENT LOOKUPS FOR CLOUD, NETWORKED SYSTEMS, AND MASSIVE DATA MANAGEMENT." UKnowledge, 2018. https://uknowledge.uky.edu/cs_etds/68.

Full text
Abstract:
Systems that process big data (e.g., high-traffic networks and large-scale storage) prefer data structures and algorithms with small memory footprints and fast processing speed. Efficient and fast algorithms play an essential role in system design despite improvements in hardware. This dissertation is organized around a novel algorithm called Othello Hashing. Othello Hashing supports ultra-fast and memory-efficient key-value lookup, and it fits the requirements of the core algorithms of many large-scale systems and big data applications. Using Othello Hashing, combined with domain expertise in cloud computing, computer networks, big data, and bioinformatics, I developed the following applications that resolve several major challenges in the area. Concise: Forwarding Information Base. A Forwarding Information Base (FIB) is a data structure used by the data plane of a forwarding device to determine the proper forwarding actions for packets. The polymorphic property of Othello Hashing is the separation of its query and control functionalities, which is a perfect match for programmable networks such as Software Defined Networks. Using Othello Hashing, we built a fast and scalable FIB named Concise. Extensive evaluation results on three different platforms show that Concise outperforms other FIB designs. SDLB: Cloud Load Balancer. In a cloud network, a layer-4 load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. We built a software load balancer with Othello Hashing techniques named SDLB. SDLB is able to accomplish two functions using one Othello query: finding the designated server for packets of ongoing sessions, and distributing new or session-free packets. MetaOthello: Taxonomic Classification of Metagenomic Sequences. Metagenomic read classification is a critical step in the identification and quantification of microbial species sampled by high-throughput sequencing. Due to the growing popularity of metagenomic data in both basic science and clinical applications, as well as the increasing volume of data being generated, efficient and accurate algorithms are in high demand. We built a system to support efficient classification of taxonomic sequences using their k-mer signatures. SeqOthello: RNA-seq Sequence Search Engine. Advances in the study of functional genomics have produced a vast supply of RNA-seq datasets. However, quickly querying and extracting information from sequencing resources remains a challenging problem and has been the bottleneck for the broader dissemination of sequencing efforts. The challenge resides in both the sheer volume of the data and its unstructured representation. Using Othello Hashing techniques, we built the SeqOthello sequence search engine. SeqOthello is a reference-free, alignment-free, and parameter-free sequence search system that supports arbitrary sequence queries against large collections of RNA-seq experiments, which enables large-scale integrative studies using sequence-level data.
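Based on the published description of Othello Hashing, a lookup of the form value(key) = A[h1(key)] XOR B[h2(key)] over two arrays, built by treating keys as edges of a bipartite graph and assigning cell values so that every edge constraint holds, the sketch below (Python) is a toy reconstruction, not the dissertation's code. The md5-based hash, the array size, the rehash limit and the example forwarding table are all invented, and, like the real structure, it returns arbitrary values for keys outside the construction set:

    import hashlib

    def h(key, seed, m):
        """Deterministic hash of key into [0, m)."""
        return int(hashlib.md5(f"{seed}:{key}".encode()).hexdigest(), 16) % m

    class MiniOthello:
        """query(key) = A[h1(key)] ^ B[h2(key)]."""

        def __init__(self, kv, m=16):
            self.m = m
            for seed in range(64):          # the real structure also rehashes on failure
                if self._build(kv, seed):
                    return
            raise RuntimeError("no acyclic placement found; increase m")

        def _build(self, kv, seed):
            m, self.seed = self.m, seed
            adj = {i: [] for i in range(2 * m)}   # nodes 0..m-1 = A cells, m..2m-1 = B cells
            for k, v in kv.items():
                a, b = h(k, seed, m), m + h(k, seed + 1, m)
                adj[a].append((b, v))             # each key is an edge with the
                adj[b].append((a, v))             # constraint val[a] ^ val[b] == v
            val = [None] * (2 * m)
            for root in range(2 * m):
                if val[root] is not None:
                    continue
                val[root], stack = 0, [root]
                while stack:                      # propagate values over the component
                    u = stack.pop()
                    for w, v in adj[u]:
                        if val[w] is None:
                            val[w] = val[u] ^ v
                            stack.append(w)
                        elif val[u] ^ val[w] != v:
                            return False          # inconsistent cycle: try a new seed
            self.A, self.B = val[:m], val[m:]
            return True

        def query(self, key):
            return self.A[h(key, self.seed, self.m)] ^ self.B[h(key, self.seed + 1, self.m)]

    # e.g. a toy forwarding table mapping prefixes to next-hop port ids
    fib = MiniOthello({"10.0.0.0/8": 1, "192.168.0.0/16": 2, "172.16.0.0/12": 3})
    print(fib.query("10.0.0.0/8"))  # -> 1

The separation the abstract calls "polymorphic" shows up even in this toy: construction (the control plane) needs the full key-value set and the graph, while querying (the data plane) only needs the two arrays and the hash seeds.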
APA, Harvard, Vancouver, ISO, and other styles
37

Ayub, Muhammad, and Muhammad Jawad. "Structuring and Modelling Competences in the Healthcare Area with the help of Ontologies." Thesis, Jönköping University, School of Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-9627.

Full text
Abstract:

Ontology development is a systematic technique for representing existing and new knowledge about a specific domain, using models that involve conceptualization of the system. This thesis presents the use of ontologies to formally represent an ontology-based competence model for potential users of quality registry reports in a healthcare organization. The model describes the professional and occupational interests and needs of the users by structuring and describing their skills and qualifications. The individual competence model has two main parts: general competence and occupational competence. The model is implemented in an ontology editor. Although the competence model gives a general view of all medical areas in a hospital, from an implementation point of view only the cardiology area is considered in detail. The potential users of the quality registry are medical staff, county council staff and pharmaceutical staff. The report also uses different classifications of education, occupational fields and diseases. A user can obtain information about a patient and a specific disease, with treatment tips, by using various organizational resources, i.e. quality registries, electronic medical records, and online journals. The model also provides support for information filtering, which filters information according to the needs and competences of the users.

APA, Harvard, Vancouver, ISO, and other styles
38

Stigmar, Martin. "Metakognition och Internet : Om gymnasieelevers informationsanvändning vid arbete med Internet." Doctoral thesis, Växjö universitet, Institutionen för pedagogik, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-382.

Full text
Abstract:
This thesis describes what happens when a group of high-school students practice their ability to reflect upon their learning (metacognitive training) and then solve tasks using information collected on the Internet. The overall aim of the thesis is to make an explorative investigation into whether exercises of a metacognitive type can support high-school students' use of information in their work with the Internet. The dissertation also aims to clarify the significance of certain prerequisites for metacognitive training, i.e. whether the students attend vocationally or theoretically oriented programmes, and the attitude of the teacher towards metacognitive exercises. There are a number of reasons why I have chosen to investigate the field of learning and information and communication technology (ICT). The use of the Internet to search for information in school work has increased, and an important reason to investigate these issues is that the use of ICT in itself does not seem to mean improved learning, according to existing research. A central question is therefore where the pedagogical added value lies in the use of the Internet in school work. The thesis argues that it is not enough to provide schools with computers and Internet connections; something more is needed in order to achieve a development in teaching with ICT. During one year, 40 students in theoretically and vocationally oriented programmes were monitored together with their English teachers, in four different actions. Data were collected exploratively through interviews, logbook notes, observations and the SOLO taxonomy, and the results are accounted for in four case studies. During the action, the professionalism of the teacher turned out to be central in creating beneficial teaching environments, by contextualizing the metacognitive exercises to things that challenge the students' internal motivation. Furthermore, the students in the vocationally oriented programmes were shown to benefit most from developing their reflective ability.
APA, Harvard, Vancouver, ISO, and other styles
39

Tancoigne, Elise. "Évaluer la santé de la taxonomie zoologique : histoire, méthodes et enjeux contemporains." Phd thesis, Museum national d'histoire naturelle - MNHN PARIS, 2011. http://tel.archives-ouvertes.fr/tel-00707531.

Full text
Abstract:
This work is the first approach to zoological taxonomy based on data from the largest bibliographic database in zoology, the Zoological Record, using scientometric tools: mapping methods and counts. It challenges the view that taxonomy is threatened with extinction, like its objects of study. In the current context of the biodiversity crisis, what emerges is not a form of decline but rather an insufficiency of the forces deployed to document all of biodiversity before its disappearance. This work on publications makes it possible to give a clear definition of the discipline and thus to propose a new approach for assessing its health. Since taxonomy is defined as the science that works on naturalist collections for nomenclatural purposes, it is relevant to propose that its health be assessed according to the dynamism of its collections. As the decline of taxonomy can no longer be considered a factual datum, it deserves to be addressed as a discourse held by a scientific community facing profound changes in its practices and the disappearance of its objects of study. An approach from the humanities and social sciences would no doubt treat this theme profitably.
APA, Harvard, Vancouver, ISO, and other styles
40

Vu, Binh [Verfasser], Matthias [Akademischer Betreuer] Hemmje, Matthias [Gutachter] Hemmje, and Paul [Gutachter] Mc Kevitt. "A Taxonomy Management System Supporting Crowd-based Taxonomy Generation, Evolution, and Management / Binh Vu ; Gutachter: Matthias Hemmje, Paul Mc Kevitt ; Betreuer: Matthias Hemmje." Hagen : FernUniversität in Hagen, 2020. http://d-nb.info/1209359308/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Björklund, Jacqueline. "Reviewing the Non-Financial Reporting Directive : An analysis de lege lata and de lege ferenda concerning sustainability reporting obligations for undertakings in the EU." Thesis, Uppsala universitet, Juridiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-431664.

Full text
Abstract:
The Non-Financial Reporting Directive (“NFRD”)[1] is an important contributor to the European Union's (EU) goal of creating a more sustainable future for all. By requiring large public-interest entities to report non-financial information relating to sustainability matters, the NFRD increases business transparency and gives stakeholders the opportunity to make more informed investment decisions, monitor corporate activities and initiate discussions based on current practices. The purpose of this thesis is to analyze the NFRD as it stands today and to analyze in what ways the NFRD has the potential to improve, chiefly by using the legal dogmatic method. The thesis reached its completion at an opportune time (January 2021), as the EU has announced its ambition to revise the NFRD by the first quarter of 2021. The conclusion drawn is that the NFRD should be revised on a series of points. Most importantly, the reliability of the provided information should be secured through a stronger verification mechanism. Other areas for improvement concern the enlargement of the scope of the NFRD and the implementation of further measures securing comparable data.
[1] Directive 2014/95/EU.
APA, Harvard, Vancouver, ISO, and other styles
42

Juhari, Ariff Syah. "Evaluation of competitive intelligence software for MSC-status small and medium-sized enterprises in Malaysia." Thesis, Loughborough University, 2009. https://dspace.lboro.ac.uk/2134/13657.

Full text
Abstract:
Small and medium-sized enterprises (SMEs) in Malaysia, particularly in the information and communications technology (ICT) sector, are faced with an increasingly volatile environment. The Malaysian business scene has opened its markets to the world, where smaller businesses find themselves competing with newly launched multinational subsidiary and subdivision companies, along with large local firms. The Malaysian Government has launched several campaigns and support schemes for smaller local businesses to become more competitive and to continuously compete on a par with these larger companies. This research project supports the Malaysian Government's objective of instilling a more structured approach towards more competitive SMEs by focusing on the management of competitive information related to these companies. Recognising the rising need for competitive support, management and executives are increasingly relying on a concept called Competitive Intelligence (CI), a systematic and ethical process for gathering, analysing, and managing information that can affect a company's plans, decisions, and operations. In managing competitive information, several companies have emerged especially to develop online tools and software that enhance the CI process and the value competitive intelligence brings to organisations. The success of these CI software tools depends, however, on the sophistication of an organisation's understanding of the CI process and scope of usage. Different companies derive different value from different approaches to competitive intelligence, and therefore require a flexible tool that is very specific to the company's needs. This research therefore investigated the structures and contexts of Malaysian SMEs based on CI concepts to derive a more customised approach to the use of CI for SMEs in the ICT sector, as well as to the selection of appropriate CI software. Mintzberg's approaches to analysing organisational structures and contexts, Bouthillier and Shearer's Intelligence Cycle, Herring's Key Intelligence Topics, and Davis' concept of effectiveness were used in two main stages. The first stage involved identifying the nature and range of SMEs that exist under Malaysia's Multimedia Super Corridor, a government benchmarking body for local businesses. This gives an account, on the basis of cluster analysis, of a taxonomy of SME categories consisting of ten clusters; the relationships between the categories were also examined in this first stage. The relationships and clusters found in the first part of the research offered the basis for the second part, which constructs the criteria for evaluating online tools and software for competitive intelligence. The evaluation criteria were then used to evaluate eight CI-ready software packages in order to find suitable tools for the different categories of SMEs. Finally, the research concludes with a study of prospective users' perceptions of effectiveness in SMEs drawn from the identified clusters. This 'multiple constituency' approach to understanding effectiveness evaluates both Davis' concept of effectiveness (usefulness) and the differential evaluations of perceived effectiveness. The research findings provide evidence of a range of SME structures in a variety of contexts. Levels of importance placed on different stages of the CI process are identified, as well as aspects that need support, automation and/or augmentation. The software evaluation in the second part of the research provided recommendations of suitable software package(s) for each of the ten SME clusters. However, an initial review by SME managers of perceived effectiveness mostly did not reveal results parallel to the findings of the software evaluation study. All in all, the research confirms that SMEs can be analysed by clusters, but further research would be necessary to confirm the effectiveness of using the recommended CI software over a longer period of time.
APA, Harvard, Vancouver, ISO, and other styles
43

Algarni, Abdulmohsen. "Relevance feature discovery for text analysis." Thesis, Queensland University of Technology, 2011. https://eprints.qut.edu.au/48230/1/Abdulmohsen_Algarni_Thesis.pdf.

Full text
Abstract:
It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences, because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches; however, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. This research presents a promising method, Relevance Feature Discovery (RFD), for solving this challenging issue. It discovers both positive and negative patterns in text documents as high-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the high-level features. The thesis also introduces an adaptive model (called ARFD) to enhance the flexibility of using RFD in adaptive environments. ARFD automatically updates the system's knowledge based on a sliding window over new incoming feedback documents, and can efficiently decide which incoming documents bring new knowledge into the system. Substantial experiments using the proposed models on Reuters Corpus Volume 1 and TREC topics show that the proposed models significantly outperform both state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machines, and other pattern-based methods.
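The central mechanism named above, deploying discovered positive and negative patterns onto low-level term weights, can be sketched as follows (Python). The patterns, their supports, and the even spreading rule are invented illustrations, not the exact weighting scheme derived in the thesis:

    # Positive patterns raise a term's weight, negative patterns lower it.
    positive = {("wall", "street"): 0.6, ("stock", "market"): 0.4}
    negative = {("football", "match"): 0.5}

    def term_weights(pos, neg):
        w = {}
        for patterns, sign in ((pos, +1), (neg, -1)):
            for pattern, support in patterns.items():
                for term in pattern:
                    # spread each pattern's support evenly over its terms
                    w[term] = w.get(term, 0.0) + sign * support / len(pattern)
        return w

    weights = term_weights(positive, negative)
    score = lambda doc: sum(weights.get(t, 0.0) for t in set(doc.split()))
    print(score("stock market news from wall street"))  # 1.0: relevant-leaning
    print(score("football match report"))               # -0.5: filtered out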
APA, Harvard, Vancouver, ISO, and other styles
44

Hasselplan, Fredrik. "Ontologi/taxonomi för att stödja integration mellan KBE-applikationer." Thesis, University of Skövde, Department of Computer Science, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-665.

Full text
Abstract:

As stated in the title of this dissertation, the goal is to integrate different knowledge-based engineering (KBE) applications into a new IT support system. In the context of this dissertation, the goal is to create a concept model by applying ontology and taxonomy as techniques in conceptual modelling. In this way a common vocabulary can be created. The purpose of the vocabulary is to present a consistent explanation of different concepts and the relations between them. Thereby the interpretation of a concept's meaning can be kept unambiguous, avoiding "language confusion". Furthermore, different models can help increase the understanding of which parts are to be integrated. The description techniques of ontology and taxonomy are applied to perform conceptual modelling at a product-manufacturing company. The result of the conceptual modelling shows which central concepts exist in the investigated domain, together with accompanying explanations. In addition, several models are presented that exemplify the relations that different concepts can have.

APA, Harvard, Vancouver, ISO, and other styles
45

Giraldo, Velásquez Faber Danilo. "A framework for evaluating the quality of modelling languages in MDE environments." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90628.

Full text
Abstract:
This thesis presents the Multiple Modelling Quality Evaluation Framework method (hereinafter MMQEF), which is a conceptual, methodological, and technological framework for evaluating quality issues in modelling languages and modelling elements by the application of a taxonomic analysis. It derives some analytic procedures that support the detection of quality issues in model-driven projects, such as the suitability of modelling languages, traces between abstraction levels, specification for model transformations, and integration between modelling proposals. MMQEF also suggests metrics to perform analytic procedures based on the classification obtained for the modelling languages and artifacts under evaluation. MMQEF uses a taxonomy that is extracted from the Zachman framework for Information Systems (Zachman, 1987; Sowa and Zachman, 1992), which proposed a visual language to classify elements that are part of an Information System (IS). These elements can be from organizational to technical artifacts. The visual language contains a bi-dimensional matrix for classifying IS elements (generally expressed as models) and a set of seven rules to perform the classification. As an evaluation method, MMQEF defines activities in order to derive quality analytics based on the classification applied on modelling languages and elements. The Zachman framework was chosen because it was one of the first and most precise proposals for a reference architecture for IS, which is recognized by important standards such as the ISO 42010 (612, 2011). This thesis presents the conceptual foundation of the evaluation framework, which is based on the definition of quality for model-driven engineering (MDE). The methodological and technological support of MMQEF is also described. Finally, some validations for MMQEF are reported.
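As a loose, assumed illustration of the kind of taxonomic classification the abstract describes — placing modelling artifacts in a two-dimensional grid of perspectives and aspects in the style of the Zachman framework — a toy sketch might look like this; the grid labels and function are illustrative assumptions, not MMQEF's actual procedure or rules:

```python
# Hypothetical subset of the Zachman grid: the real framework uses six
# columns (What, How, Where, Who, When, Why) and perspectives ranging
# from Planner down to the functioning enterprise.
PERSPECTIVES = ["Planner", "Owner", "Designer", "Builder"]
ASPECTS = ["What", "How", "Where", "Who", "When", "Why"]

def classify(element, perspective, aspect, grid=None):
    """Place a modelling element in a two-dimensional classification
    grid keyed by (perspective, aspect) -- a toy analogue of the
    taxonomic classification step on which MMQEF builds its analytics."""
    if grid is None:
        grid = {}
    assert perspective in PERSPECTIVES and aspect in ASPECTS
    grid.setdefault((perspective, aspect), []).append(element)
    return grid

grid = classify("UML class diagram", "Designer", "What")
grid = classify("BPMN process model", "Owner", "How", grid)
print(grid)
```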
Giraldo Velásquez, FD. (2017). A framework for evaluating the quality of modelling languages in MDE environments [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90628
APA, Harvard, Vancouver, ISO, and other styles
46

Vaidya, Gaurav Girish. "Taxonomic Checklists as Biodiversity Data: How Series of Checklists can Provide Information on Synonymy, Circumscription Change and Taxonomic Discovery." Thesis, University of Colorado at Boulder, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10680670.

Full text
Abstract:

Taxonomic checklists are a fundamental and widely-used product of taxonomy, providing a list of recognized taxa within a taxonomic group in a particular geographical area. Series of taxonomic checklists provide snapshots of recognized taxa over a period of time. Identifying and classifying the changes between these checklists can provide information on rates of name, synonym and circumscription change and can improve aggregation of datasets reconciled to different checklists.

To demonstrate this, I used a series of North American bird checklists to test hypotheses about drivers of splitting rates in North American birds. In particular, I asked if splitting was predominantly undoing previous lumping that happened during the heyday of the modern synthesis. I found that bird species have been split at an accelerating rate since the 1980s. While this was partially the result of previously lumped species being resplit, most splits were unrelated to previous lumps and thus represent new discoveries rather than simply the undoing of previous circumscription changes. I also used a series of North American freshwater algal checklists to measure stability over fifteen years, and found that 26% of species names were not shared or synonymized over this period. Rates of synonymization, lumping or splitting of species remained flat, a marked difference from North American birds. Species that were split or lumped (7% of species considered) had significantly higher abundance than other species in the USGS NAWQA dataset, a biodiversity database that uses these checklists as an index. They accounted for 19% of the associated observations, showing that a small number of recircumscribed species could significantly affect interpretation of biodiversity data.

To facilitate this research, I developed a software tool that can identify and annotate taxonomic changes among a series of checklists and use this information to aggregate biodiversity data, which will hopefully facilitate similar research in the future. My dissertation demonstrates the value of series of taxonomic checklists for answering specific questions about the drivers of taxonomic change, ranging from philosophical and technical shifts to characteristics of the species themselves, such as their abundance.
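The change-classification tool itself is described in the dissertation; purely as a hypothetical sketch of the simplest underlying operation — comparing two checklist snapshots to flag raw name turnover — one might write:

```python
def compare_checklists(older, newer):
    """Minimal set-based comparison of two checklist snapshots.
    Real change classification (splits, lumps, synonymy) needs
    curated mappings between names; this only flags raw turnover."""
    older, newer = set(older), set(newer)
    return {
        "added": sorted(newer - older),    # candidate splits or discoveries
        "removed": sorted(older - newer),  # candidate lumps or synonymizations
        "retained": sorted(older & newer),
    }

# Hypothetical snapshots: two junco species later lumped into one
print(compare_checklists(
    ["Junco hyemalis", "Junco oreganus"],
    ["Junco hyemalis"],
))
```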

APA, Harvard, Vancouver, ISO, and other styles
47

Boyle, Brad, Nicole Hopkins, Zhenyuan Lu, Juan Antonio Raygoza Garay, Dmitry Mozzherin, Tony Rees, Naim Matasci, et al. "The taxonomic name resolution service: an online tool for automated standardization of plant names." BioMed Central, 2013. http://hdl.handle.net/10150/610265.

Full text
Abstract:
BACKGROUND: The digitization of biodiversity data is leading to the widespread application of taxon names that are superfluous, ambiguous or incorrect, resulting in mismatched records and inflated species numbers. The ultimate consequences of misspelled names and bad taxonomy are erroneous scientific conclusions and faulty policy decisions. The lack of tools for correcting this 'names problem' has become a fundamental obstacle to integrating disparate data sources and advancing the progress of biodiversity science.

RESULTS: The TNRS, or Taxonomic Name Resolution Service, is an online application for automated and user-supervised standardization of plant scientific names. The TNRS builds upon and extends existing open-source applications for name parsing and fuzzy matching. Names are standardized against multiple reference taxonomies, including the Missouri Botanical Garden's Tropicos database. Capable of processing thousands of names in a single operation, the TNRS parses and corrects misspelled names and authorities, standardizes variant spellings, and converts nomenclatural synonyms to accepted names. Family names can be included to increase match accuracy and resolve many types of homonyms. Partial matching of higher taxa combined with extraction of annotations, accession numbers and morphospecies allows the TNRS to standardize taxonomy across a broad range of active and legacy datasets.

CONCLUSIONS: We show how the TNRS can resolve many forms of taxonomic semantic heterogeneity, correct spelling errors and eliminate spurious names. As a result, the TNRS can aid the integration of disparate biological datasets. Although the TNRS was developed to aid in standardizing plant names, its underlying algorithms and design can be extended to all organisms and nomenclatural codes. The TNRS is accessible via a web interface at http://tnrs.iplantcollaborative.org/ and as a RESTful web service and application programming interface. Source code is available at https://github.com/iPlantCollaborativeOpenSource/TNRS/.
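The TNRS's actual parsing and matching pipeline is available in the source repository linked above; as a generic, assumed illustration of the kind of fuzzy-matching step such a service performs, Python's standard difflib suffices for a toy version (the reference list and cutoff below are invented for the example):

```python
import difflib

def fuzzy_match(name, reference_names, cutoff=0.85):
    """Return the closest reference name above a similarity cutoff,
    or None. A stand-in for the fuzzy-matching step TNRS performs;
    the real service also parses authorities, handles homonyms,
    and resolves nomenclatural synonyms to accepted names."""
    matches = difflib.get_close_matches(name, reference_names, n=1, cutoff=cutoff)
    return matches[0] if matches else None

reference = ["Quercus alba", "Quercus rubra", "Acer saccharum"]  # toy reference taxonomy
print(fuzzy_match("Quercus albaa", reference))  # misspelling -> 'Quercus alba'
```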
APA, Harvard, Vancouver, ISO, and other styles
48

Olensky, Marlies. "Data accuracy in bibliometric data sources and its impact on citation matching." Doctoral thesis, Humboldt-Universität zu Berlin, Philosophische Fakultät I, 2015. http://dx.doi.org/10.18452/17122.

Full text
Abstract:
Is citation analysis an adequate tool for research evaluation? This doctoral research investigates whether the underlying citation data is sufficiently accurate to provide meaningful results of the analyses and if not, whether the citation matching process can rectify inaccurate citation data. Inaccuracies are defined as discrepancies in the data values of bibliographic references, since they are the essential part in the citation matching process. A stratified, purposeful data sample was selected to examine typical cases of publications in Web of Science (WoS). The bibliographic data of 3,929 references was assessed in a qualitative content analysis to identify prevailing inaccuracies in bibliographic references that can interfere with the citation matching process. The inaccuracies were categorized into a taxonomy. Their frequency was studied to determine any strata-specific patterns. To pinpoint the types of inaccuracies that influence the citation matching process, a specific subset of citations, i.e. citations not successfully matched by WoS, was investigated. The results were triangulated with five other data sources: with data from two bibliographic databases in their role as citation indexes (Scopus and Google Scholar) and with data from three applied bibliometric research groups (CWTS, iFQ and Science-Metrix). The matching algorithms of CWTS and iFQ were able to match around two thirds of these citations correctly. Scopus and Google Scholar also handled more than 60% successfully in their matching. Science-Metrix only matched a small number of references (5%). Completely incorrect starting page numbers and transposed publication years can cause a citation to be missed in all data sources. However, more often it is a combination of more than one kind of inaccuracy in more than one field that leads to a non-match. Based on these results, proposals are formulated that could improve the citation matching processes of the different data sources.
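As a hypothetical sketch of the kind of tolerant matching rule these findings suggest — one that forgives a publication year whose digits were transposed — consider the following; the record fields and values are invented for illustration and do not reflect the matching algorithm of any real citation index:

```python
def year_transposed(y1, y2):
    """True if the two years contain the same digits in a different
    order (e.g. 1987 vs 1978), a keying error the thesis identifies
    as a cause of missed citation matches."""
    s1, s2 = str(y1), str(y2)
    return s1 != s2 and sorted(s1) == sorted(s2)

def citations_match(ref, record):
    """Toy matching rule: exact journal, volume and first-author
    surname, plus a year that is equal or merely a digit permutation.
    Field names here are illustrative assumptions."""
    return (ref["journal"] == record["journal"]
            and ref["volume"] == record["volume"]
            and ref["author"] == record["author"]
            and (ref["year"] == record["year"]
                 or year_transposed(ref["year"], record["year"])))

ref = {"author": "Olensky", "journal": "JASIST", "volume": 66, "year": 2015}
rec = {"author": "Olensky", "journal": "JASIST", "volume": 66, "year": 2051}
print(citations_match(ref, rec))  # True: 2015 vs 2051 is a digit permutation
```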
APA, Harvard, Vancouver, ISO, and other styles
49

Forssman, Madeleine. "Taxonomi för hållbara investeringar : En kritisk granskning av den föreslagna taxonomiförordningen med fokus på finansmarknadsaktörers bristande tillgång till nödvändig information." Thesis, Linköpings universitet, Filosofiska fakulteten, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-165311.

Full text
Abstract:
Sedan industrialismens framfart har den globala uppvärmningen inneburit en genomsnittlig värmeökning på 1,1 +/- 0,1 grader. Det har även genom undersökningar klargjorts att klimatförändringarna har intensifierats under de senaste åren. I syfte att motverka klimatförändringarna, samt med en målsättning att hålla den globala uppvärmningen långt under två grader, förhandlades Parisavtalet fram som ett tillägg till FN:s klimatkonvention. För att nå de mål som fastställs i Parisavtalet identifierades dock ett behov av mer hållbart investerat privat kapital. Den av EU-kommissionen föreslagna taxonomiförordningen är det verktyg som är ämnat att möjliggöra ökningen av hållbart investerat privat kapital. Genom att i förordningen definiera hållbara ekonomiska aktiviteter, samt genom att ställa krav på finansmarknadsaktörer att lämna information om hållbarheten i finansiella produkter, är förhoppningen att mer privat kapital ska finansiera verksamheter och projekt som bedrivs på ett hållbart sätt. För att finansmarknadsaktörer ska kunna avgöra huruvida ett företags ekonomiska aktiviteter är hållbara eller inte krävs dock tillgång till information som normalt sett inte offentliggörs av företag idag. Till följd av den identifierade informationsbristen bemöttes det ursprunliga taxonomiförordningsförslaget med viss kritik. I syfte att förhindra de praktiska problem som informationsbristen riskerade att resultera i, reviderades därför det ursprungliga förordningsförslaget till att innefatta obligatoriska krav för företag av en viss storlek att rapportera taxonominödvändig information. Utöver det rapporteringskrav som stadgats, förs för närvarande även diskussioner om huruvida transparensen av företags hållbarhetsuppgifter bör utökas ytterligare genom revideringar av hållbarhetsredovisningsdirektivet. Efter en granskning av de lösningar som hittills företagits eller initierats på EU-rättslig nivå, kan konstateras att det obligatoriska rapporteringskrav som tillagts i taxonomiförordningen utgör en skälig och nödvändig åtgärd. Trots införandet av rapporteringskravet kvarstår dock viss problematik beträffande tillgängligheten och kvaliteten av företags hållbarhetsinformation. Bland annat finns ett behov av tydligare regler avseende var informationen ska offentliggöras. Det kan därutöver konstateras att bristen på kontrollmekanismer avseende den information som ska rapporteras enligt taxonomiförordningen riskerar att hindra regleringens syften, och bör prioriteras vid en eventuell revidering av hållbarhetsredovisningsdirektivet.
APA, Harvard, Vancouver, ISO, and other styles
50

Werner, Sara [author], Karlheinz Brandenburg [academic supervisor], Ingrid Heynderickx and Patrick Le Callet [reviewers]. "Quality taxonomy for scalable algorithms of free viewpoint video objects / Sara Kepplinger ; Gutachter: Ingrid Heynderickx, Patrick Le Callet ; Betreuer: Karlheinz Brandenburg." Ilmenau : TU Ilmenau, 2017. http://d-nb.info/117813489X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
