
Theses on the topic "Metadata"



Consult the top 50 theses for your research on the topic "Metadata".

Next to each source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a pdf and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Moore, Michael S., Jeremy C. Price, Andrew R. Cormier, and William A. Malatesta. "Metadata Description Language: The iNET Metadata Standard Language." International Foundation for Telemetering, 2009. http://hdl.handle.net/10150/605963.

Full text
Abstract
ITC/USA 2009 Conference Proceedings / The Forty-Fifth Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2009 / Riviera Hotel & Convention Center, Las Vegas, Nevada

In order to help manage the complexity in designing and configuring network-based telemetry systems, and to promote interoperability between equipment from multiple vendors, the integrated Network-Enhanced Telemetry (iNET) Metadata Standards Working Group (MDSWG) has developed a standard language for describing and configuring these systems. This paper will provide the community with an overview of Metadata Description Language (MDL), and describe how MDL can support the description of the requirements, design choices, and the configuration of devices that make up the Telemetry Network System (TmNS). MDL, an eXtensible Markup Language (XML) based language that describes a TmNS from various aspects, is embodied by an XML schema along with additional rules and constraints. Example MDL instance documents will be presented to illustrate how MDL can be used to capture requirements, describe the design, and configure the equipment that makes up a TmNS. Various scenarios for how MDL can be used will be discussed.
APA, Harvard, Vancouver, ISO, and other styles
2

Eckart, Thomas. "Einsatz und Bewertung komponentenbasierter Metadaten in einer föderierten Infrastruktur für Sprachressourcen am Beispiel der CMDI." Doctoral thesis, Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-207859.

Full text
Abstract
This thesis examines the use of the Component Metadata Infrastructure (CMDI) within the federated CLARIN infrastructure, highlighting a number of concrete problem cases. To develop corresponding solution strategies, various techniques are adapted and applied to the quality analysis of metadata and to optimizing their use in a federated environment. Specifically, this concerns the adoption of modelling strategies from the Linked Data community, the transfer of principles and quality metrics from object-oriented programming to CMD metadata components, and the use of centrality measures from graph and network analysis to assess the cohesion of the entire metadata federation. The thesis emphasizes the analysis of the schemas and schema components in use, as well as the examination of the value vocabularies used across all participating centres.
3

Mitchell, Erik T. Greenberg, Jane. "Metadata literacy: an analysis of metadata awareness in college students." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2910.

Full text
Abstract
Thesis (Ph.D.)--University of North Carolina at Chapel Hill, 2010. Title from electronic title page (viewed Jun. 23, 2010). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the School of Information and Library Science." Discipline: Information and Library Science; Department/School: School of Information and Library Science.
4

Oppedal, Anita Iren. "Alt er metadata : Bruk av metadata i et integrert brukersystem." Thesis, Norwegian University of Science and Technology, Department of Computer and Information Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-227.

Full text
Abstract
This thesis focuses on how metadata is used in an integrated user system in a company.

In an information space, information resources from different media are integrated, and a common "link" is needed to support better retrieval of, and access to, information in the information space. The problem is often that the various media use different formats to describe their information resources, which makes interoperability between them difficult. If the various media can use the same metadata format to describe their information resources, interoperability will improve.

The Dublin Core Metadata Element Set (DC) is a format developed with the publication of information resources via intranets and the Internet in mind. DC is the link in the virtual information space that this thesis takes as its starting point.

Central to this thesis is an assessment of how the indexing needs of Adresseavisen can be met in DC for information resources such as articles, images/illustrations and film. A proposal for a core format for Adresseavisen's information resources, with media-dependent variations, is presented. These are information resources where the newspaper is the context of use. The proposal accommodates the results of the user survey, along with information about, and observation of, how the indexing formats are already used.

The survey resulted in the following findings:

• Most users choose free-text search over metadata search
• Training affects the use of metadata
• Work tasks/information needs affect the use of metadata
• Experience with the database system and frequency of database searches can affect the use of metadata
• Some metadata elements are better suited to searching than others

The survey also yields recommendations that can be useful when naming metadata. The following emerges from the survey:

• Abbreviations in metadata names should be avoided to make them more self-explanatory
• Ambiguous terms in metadata names make their content less intuitive to understand

The survey is presented with bar charts and tables, methods that can be used for qualitative analyses.
5

Phillips, Mark Edward. "Exploring the Use of Metadata Record Graphs for Metadata Assessment." Thesis, University of North Texas, 2020. https://digital.library.unt.edu/ark:/67531/metadc1707350/.

Full text
Abstract
Cultural heritage institutions, including galleries, libraries, museums, and archives, are increasingly digitizing physical items, collecting born-digital items, and making these resources available on the Web. Metadata plays a vital role in the discovery and management of these collections. Existing frameworks for identifying and addressing deficiencies in metadata rely heavily on count- and data-value-based metrics calculated over aggregations of descriptive metadata. There has been little research into the use of traditional network analysis to investigate the connections between metadata records based on shared data values in metadata fields such as subject or creator. This study introduces metadata record graphs as a mechanism for generating network-based statistics to support the analysis of metadata. These graphs are constructed with the metadata records as the nodes and shared metadata field values as the edges in the network. By analyzing metadata record graphs with algorithms and tools common to the field of network analysis, metadata managers can develop a new understanding of their metadata that is often impossible to generate from count- and data-value-based statistics alone. This study tested the application of metadata record graphs to the analysis of metadata collections of increasing size, complexity, and interconnectedness in a series of three related stages. The findings of this research indicate the effectiveness of this new method, identify network algorithms that are useful for analyzing descriptive metadata, and suggest methods and practices for future implementations of this technique.
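The construction the abstract describes (records as nodes, an edge wherever two records share a value in a field such as subject) can be sketched in a few lines. This is an illustrative toy, not the author's implementation; the record identifiers, field name, and sample values are invented for the example.

```python
# Toy metadata record graph: nodes are records, and an edge links two
# records whenever they share at least one value in a chosen field.
from itertools import combinations

def build_record_graph(records, field="subject"):
    """Return an adjacency map {record_id: set(record_id)}."""
    graph = {rid: set() for rid in records}
    for a, b in combinations(records, 2):
        if set(records[a].get(field, [])) & set(records[b].get(field, [])):
            graph[a].add(b)
            graph[b].add(a)
    return graph

records = {
    "rec1": {"subject": ["telemetry", "metadata"]},
    "rec2": {"subject": ["metadata", "xml"]},
    "rec3": {"subject": ["networks"]},
}
graph = build_record_graph(records)

# Node degree is one of the simplest network statistics a metadata
# manager could compute from such a graph; standard network-analysis
# libraries offer many richer ones (centrality, components, etc.).
degrees = {rid: len(neigh) for rid, neigh in graph.items()}
```

An isolated node (here `rec3`) immediately flags a record that shares no subject values with any peer, which is exactly the kind of insight count-based statistics miss.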
6

Dillon, Martin. "Metadata for Web Resources: How Metadata Works on the Web." The Library of Congress, 2000. http://hdl.handle.net/10150/105769.

Full text
Abstract
This paper begins by discussing the various meanings of metadata both on and off the Web, and the various uses to which metadata has been put. The body of the paper focuses on the Web and the roles that metadata has in that environment. More specifically, the primary concern here is for metadata used in resource discovery, broadly considered. Metadata for resource discovery is on an evolutionary path with bibliographic description as an immediate predecessor. Its chief exemplar is the Dublin Core, whose origins, nature and current status will be briefly discussed. From this starting point, the paper then considers the uses of such metadata in the Web context, both current and planned. The critical issues that need addressing are its weaknesses in achieving its purposes, and the alternatives. Finally, the role of libraries in creating systems for resource discovery is considered, from the perspective of the gains made to date with the Dublin Core, the difficulties of merging this effort with traditional bibliographic description (aka MARC and AACR2), and what can be done about the gap between the two.
7

Migletz, James J. "Automated metadata extraction." Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://handle.dtic.mil/100.2/ADA483465.

Full text
Abstract
Thesis (M.S. in Computer Science)--Naval Postgraduate School, June 2008. Thesis Advisor(s): Garfinkel, Simson. "June 2008." Description based on title screen as viewed on August 26, 2008. Includes bibliographical references (p. 57-60). Also available in print.
8

Macrae, Robert. "Linking music metadata." Thesis, Queen Mary, University of London, 2012. http://qmro.qmul.ac.uk/xmlui/handle/123456789/8837.

Full text
Abstract
The internet has facilitated music metadata production and distribution on an unprecedented scale. A contributing factor to this data deluge is a change in the authorship of this data from the expert few to the untrained crowd. The resulting unordered flood of imperfect annotations provides challenges and opportunities in identifying accurate metadata and linking it to the music audio in order to provide a richer listening experience. We advocate novel adaptations of Dynamic Programming for music metadata synchronisation, ranking and comparison. This thesis introduces Windowed Time Warping, Greedy, Constrained On-Line Time Warping for synchronisation and the Concurrence Factor for automatically ranking metadata. We begin by examining the availability of various music metadata on the web. We then review Dynamic Programming methods for aligning and comparing two source sequences, while presenting novel, specialised adaptations for efficient, real-time synchronisation of music and metadata that improve on the speed and accuracy of existing algorithms. The Concurrence Factor, which measures the degree to which an annotation of a song agrees with its peers, is proposed in order to use the wisdom of the crowd to establish a ranking system. This attribute uses a combination of the standard Dynamic Programming methods Levenshtein Edit Distance, Dynamic Time Warping, and Longest Common Subsequence to compare annotations. We present a synchronisation application for applying the aforementioned methods, as well as a tablature-parsing application for mining and analysing guitar tablatures from the web. We evaluate the Concurrence Factor as a ranking system on a large-scale collection of guitar tablatures and lyrics, showing a correlation with accuracy that is superior to existing methods currently used in internet search engines, which are based on popularity and human ratings.
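As a rough illustration of the kind of Dynamic Programming comparison this work builds on, here is the classic Levenshtein edit distance plus a toy "agreement with peers" score. The scoring formula is an invented simplification for illustration only, not the thesis's actual Concurrence Factor, which also combines Dynamic Time Warping and Longest Common Subsequence.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def agreement(annotation, peers):
    """Toy peer-agreement score: mean normalised similarity to peers."""
    sims = [1 - levenshtein(annotation, p) / max(len(annotation), len(p), 1)
            for p in peers]
    return sum(sims) / len(sims)

# Two annotations that nearly agree, and one outlier.
annotations = ["hello world", "hello world!", "goodbye"]
scores = [agreement(a, annotations[:i] + annotations[i + 1:])
          for i, a in enumerate(annotations)]
```

Ranking annotations by such a score lets the agreeing majority outvote the outlier, which is the crowd-wisdom intuition behind the Concurrence Factor.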
9

Wenning, Rigo, and Sabrina Kirrane. "Compliance Using Metadata." Springer, 2018. http://epub.wu.ac.at/6497/1/ComplianceUsingMetadata.pdf.

Full text
Abstract
Everybody talks about the data economy. Data is collected, stored, processed and re-used. In the EU, the GDPR creates a framework with conditions (e.g. consent) for the processing of personal data. But there are also other legal provisions containing requirements and conditions for the processing of data. Even today, most of those are hard-coded into workflows or database schemas, if at all. Data lakes are polluted with unusable data because nobody knows about usage rights or data quality. The approach presented here makes the data lake intelligent: it remembers usage limitations and promises made to the data subject or the contractual partner. Data can be used once its risk can be assessed. Such a system easily reacts to new requirements. If processing is recorded back into the data lake, this information makes it possible to prove compliance, which can be shown to authorities on demand as an audit trail. The concept is best exemplified by the SPECIAL project, https://specialprivacy.eu (Scalable Policy-aware Linked Data Architecture For Privacy, Transparency and Compliance). SPECIAL has several use cases, but the basic framework is applicable beyond those cases.
10

Pittges, Jeff. "Metadata view graphs : a framework for query optimization and metadata management." Diss., Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/9256.

Full text
11

Artiaga, Amouroux Ernest. "File system metadata virtualization." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/133489.

Full text
Abstract
The advance of computing systems has brought new ways to use and access the stored data that push the architecture of traditional file systems to its limits, making them inadequate to handle the new needs. Current challenges affect both the performance of high-end computing systems and their usability from the applications perspective. On one side, high-performance computing equipment is rapidly developing into large-scale aggregations of computing elements in the form of clusters, grids or clouds. On the other side, there is a widening range of scientific and commercial applications that seek to exploit these new computing facilities. The requirements of such applications are also heterogeneous, leading to dissimilar patterns of use of the underlying file systems. Data centres have tried to compensate for this situation by providing several file systems to fulfil distinct requirements. Typically, the different file systems are mounted on different branches of a directory tree, and the preferred use of each branch is publicised to users. A similar approach is being used in personal computing devices. Typically, in a personal computer, there is a visible and clear distinction between the portion of the file system name space dedicated to local storage, the part corresponding to remote file systems and, recently, the areas linked to cloud services such as directories to keep data synchronized across devices, to be shared with other users, or to be remotely backed up. In practice, this approach compromises the usability of the file systems and the possibility of exploiting all the potential benefits. We consider that this burden can be alleviated by determining applicable features on a per-file basis, rather than associating them with a location in a static, rigid name space. Moreover, usability would be further increased by providing multiple dynamic name spaces that could be adapted to specific application needs.
This thesis contributes to this goal by proposing a mechanism to decouple the user view of the storage from its underlying structure. The mechanism consists of virtualizing the file system metadata (including both the name space and the object attributes) and interposing a sensible layer to decide where and how files should be stored in order to benefit from the underlying file system features, without incurring usability or performance penalties due to inadequate usage. This technique makes it possible to present multiple, simultaneous virtual views of the name space and the file system object attributes, adapted to specific application needs without altering the underlying storage configuration. The first contribution of the thesis is the design of a metadata virtualization framework that makes the above-mentioned decoupling possible; the second is a method to improve file system performance in large-scale systems using this metadata virtualization framework; finally, the third is a technique to improve the usability of cloud-based storage systems on personal computing devices.
12

Nadal, Francesch Sergi. "Metadata-driven data integration." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/666947.

Full text
Abstract
Data has an undoubtable impact on society. Storing and processing large amounts of available data is currently one of the key success factors for an organization. Nonetheless, we are recently witnessing a change represented by huge and heterogeneous amounts of data. Indeed, 90% of the data in the world has been generated in the last two years. Thus, in order to carry out these data exploitation tasks, organizations must first perform data integration, combining data from multiple sources to yield a unified view over them. Yet, the integration of massive and heterogeneous amounts of data requires revisiting the traditional integration assumptions to cope with the new requirements posed by such data-intensive settings. This PhD thesis aims to provide a novel framework for data integration in the context of data-intensive ecosystems, which entails dealing with vast amounts of heterogeneous data, from multiple sources and in their original format. To this end, we advocate for an integration process consisting of sequential activities governed by a semantic layer, implemented via a shared repository of metadata. From a stewardship perspective, these activities are the deployment of a data integration architecture, followed by the population of the shared metadata. From a data consumption perspective, the activities are virtual and materialized data integration, the former an exploratory task and the latter a consolidation one. Following the proposed framework, we focus on providing contributions to each of the four activities. We begin by proposing a software reference architecture for semantic-aware data-intensive systems. Such an architecture serves as a blueprint to deploy a stack of systems, its core being the metadata repository. Next, we propose a graph-based metadata model as a formalism for metadata management. We focus on supporting schema and data source evolution, a predominant factor in the heterogeneous sources at hand.
For virtual integration, we propose query rewriting algorithms that rely on the previously proposed metadata model. We additionally consider semantic heterogeneities in the data sources, which the proposed algorithms are capable of automatically resolving. Finally, the thesis focuses on the materialized integration activity and, to this end, proposes a method to select intermediate results to materialize in data-intensive flows. Overall, the results of this thesis serve as a contribution to the field of data integration in contemporary data-intensive ecosystems.
13

Savvidis, Evangelos. "Searching Metadata in Hadoop." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177467.

Full text
Abstract
The rapid expansion of the internet has led to the Big Data era. Companies that provide services which deal with Big Data have to face two major issues: i) storing petabytes of data and ii) manipulating this data. For the former, the open-source Hadoop ecosystem, and particularly its distributed file system HDFS, provides persistent storage for unprecedented amounts of data. For the latter, there are many approaches to data analytics, from map-reduce jobs to information retrieval and data discovery. This thesis provides a novel approach to information discovery, firstly by providing the means to create, manage and associate metadata with HDFS files, and secondly by searching for files through their metadata using Elasticsearch. The work is composed of three parts. The first is the metadata designer/manager, the AngularJS front end. The second is the J2EE back end, which enables the front end to perform all the management actions on metadata using WebSockets. The third is the indexing of data into Elasticsearch, the distributed and scalable open-source search engine. Our work has shown that this approach works and that it greatly helps in finding information in the vast sea of data in HDFS.
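The thesis pairs HDFS with Elasticsearch for the search side; a live cluster cannot be shown here, so the sketch below substitutes a toy in-memory inverted index to illustrate the underlying idea of locating files by their metadata rather than by their contents. The paths and metadata fields are invented for the example.

```python
# Toy inverted index over file metadata: each token of each metadata
# value maps to the set of file paths carrying it. A real deployment
# would delegate this to a search engine such as Elasticsearch.
from collections import defaultdict

class MetadataIndex:
    def __init__(self):
        self._index = defaultdict(set)  # token -> set of file paths

    def add(self, path, metadata):
        """Index a file under every token of every metadata value."""
        for value in metadata.values():
            for token in str(value).lower().split():
                self._index[token].add(path)

    def search(self, query):
        """Return files whose metadata contains all query tokens."""
        tokens = query.lower().split()
        if not tokens:
            return set()
        results = self._index[tokens[0]].copy()
        for t in tokens[1:]:
            results &= self._index[t]
        return results

idx = MetadataIndex()
idx.add("/data/logs/2015.csv", {"owner": "alice", "project": "web analytics"})
idx.add("/data/raw/dump.bin", {"owner": "bob", "project": "web crawler"})
```

A multi-token query intersects the per-token result sets, so `idx.search("web analytics")` matches only the first file while `idx.search("web")` matches both.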
14

Scott, Dan. "Microdata: making metadata matter." Evergreen International Conference, 2013. https://zone.biblio.laurentian.ca/dspace/handle/10219/1993.

Full text
Abstract
In this session, Dan Scott (the contributor of the schema.org microdata enhancement for Evergreen and a participant in the schemabibex effort to extend schema.org to better support bibliographic data) will discuss the origins of the microdata standards, explain how nominally machine-readable cataloguing data can fit into the machine-actionable semantic web, reflect on the impact that a microdata-enabled catalogue has had at Laurentian University to date, and offer some thoughts about the future of microdata, including the schema.org and RDFa Lite standards.

WARNING: you may come away with ideas not only for enriching your library system, but for your web site and other web-based library applications as well! Microdata enables search engines and other automated processes to make sense of the data on a web page, like identifying the title, author, and identification number of a book from all of the other content on a given page. Web pages enhanced with microdata contribute to the semantic web, and in turn are more likely to be incorporated into search engines and advanced web applications. If it sounds like we should publish microdata from Evergreen's catalogue, you will be pleased to know that Evergreen was (naturally) the first library system to incorporate microdata in its default public catalogue with the 2.2.0 release in June 2012.
15

Wilson, R. P. "RIPPLE : a metadata repository." Thesis, University of Canterbury. Computer Science, 1992. http://hdl.handle.net/10092/9556.

Full text
Abstract
Dramatic changes in the way we view software and information systems have occurred during the past 20 years. Manual techniques have been replaced by data dictionary products which are in turn being replaced by computer aided software engineering (CASE) or integrated project support environment (IPSE) systems. A research and teaching metadata repository system, RIPPLE, is presented. RIPPLE represents and manages a flexible and extensible internal conceptual model. This conceptual model is derived by a synthesis of common concepts from a variety of design methods. A layered structure is formed by successive abstractions of the concepts and structures derived by that synthesis. This layered structure provides a powerful metaphor for implementation of both the RIPPLE repository and design method repository support. Design methods can be defined in terms of this model. Tools to aid the configuration of RIPPLE to support a wide variety of methods are also presented. Once configured, RIPPLE can provide repository support to tools implementing these methods. Support for information sharing, tool interaction mediation and other important repository features is also provided.
16

Grace, Thomas, and Clay Fink. "Metadata for Range Telemetry." International Foundation for Telemetering, 2006. http://hdl.handle.net/10150/604128.

Full text
Abstract
ITC/USA 2006 Conference Proceedings / The Forty-Second Annual International Telemetering Conference and Technical Exhibition / October 23-26, 2006 / Town and Country Resort & Convention Center, San Diego, California

CTEIP has launched the integrated Network Enhanced Telemetry (iNET) project to foster advances in networking and telemetry technology to meet emerging needs of major test programs. This paper describes an approach for providing a unified means of describing telemetry systems. It will describe the motivation and framework for a metadata standard for specifying the components of an instrumented test article, its data and the flow of data through a telemetry system. The paper will also describe how this metadata standard can provide the means for describing different transmission formats for a common test article. The result of the task described by this paper will lead to a standard or set of standards that will optimize the use of commercial technology and tools.
17

Bermudez, Luis E. "Ontomet: Ontology Metadata Framework." Ph.D. thesis, advisor Michael Piasecki. Philadelphia, Pa.: Drexel University, 2004. http://dspace.library.drexel.edu/handle/1860/376.

Full text
18

Pradhan, Anup. "The Geospatial Metadata Server : on-line metadata database, data conversion and GIS processing." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/30655.

Full text
Abstract
This research proposes an on-line software demonstrator called the Geospatial Metadata Server (GMS), which is designed to catalogue, enter, maintain and query metadata using a centralised metadatabase. The system also converts geospatial data to and from a variety of different formats, and proactively searches for data throughout the Web using an automated hyperlink retrieval program. GMS is divided into six components, three of which constitute a Metadata Cataloguing tool. The metadatabase is implemented within an RDBMS capable of querying large quantities of on-line metadata in a standardised format. The database schema used to store the metadata was patterned in part on the Content Standard for Digital Geospatial Metadata, which provides geospatial data providers and users with a common assemblage of descriptive terminology. Because of the excessive length of metadata records, GMS is equipped with a parsing algorithm and database entry utility used to enter delimited on-line metadata text files into the metadatabase. Alternatively, the system provides a metadata entry and update utility, divided into a series of HTML forms each corresponding to a metadatabase table; the utility allows users to maintain their personal metadata records over the Web. The other three GMS components constitute a Metadata Querying tool. GMS includes a search engine that can access its metadatabase, examine retrieved information and use that information within on-line server-side software. The search engine integrates the metadatabase, on-line geospatial data and GIS software (i.e. the GMS conversion utilities) over the Web. The on-line conversion utilities are capable of converting geospatial data from anywhere on the Web, solving many of the problems associated with the interoperability of vendor-specific GIS data formats. Because the conversion utilities operate over the Web, the technology is easier to use and accessible to a greater number of GIS users. Also integrated into the search engine is a Web robot designed to automatically seek out geospatial data from remote Web sites and index their hypertext links within a file or database.
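The parse-and-load step can be pictured as follows; the delimiter, field names and records below are invented, since the actual CSDGM layout is far richer.

```python
import csv, io, sqlite3

# A made-up, pipe-delimited metadata file covering a tiny subset of fields.
raw = """title|originator|pubdate
Road network of Lothian|Ordnance Survey|1999
Soil survey of Fife|Macaulay Institute|1998
"""

# Parse the delimited text and load it into a metadatabase table.
rows = list(csv.DictReader(io.StringIO(raw), delimiter="|"))
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metadata (title TEXT, originator TEXT, pubdate TEXT)")
db.executemany("INSERT INTO metadata VALUES (:title, :originator, :pubdate)", rows)

# Query the metadatabase the way a metadata search engine might.
hits = db.execute("SELECT title FROM metadata WHERE pubdate >= '1999'").fetchall()
print(hits)  # [('Road network of Lothian',)]
```

The same two halves, a cataloguing path that loads records and a querying path that searches them, mirror the split between the GMS Metadata Cataloguing and Metadata Querying tools.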
19

Nilsson, Mikael. "From Interoperability to Harmonization in Metadata Standardization : Designing an Evolvable Framework for Metadata Harmonization." Doctoral thesis, KTH, Medieteknik och grafisk produktion, Media, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-26057.

Full text
Abstract
Metadata is an increasingly central tool in the current web environment, enabling large-scale, distributed management of resources. Recent years have seen growing interaction between previously relatively isolated metadata communities, driven by a need for cross-domain collaboration and exchange. However, metadata standards have not been able to meet the needs of interoperability between independent standardization communities. For this reason the notion of metadata harmonization, defined as interoperability of combinations of metadata specifications, has risen as a core issue for the future of web-based metadata. This thesis presents a solution-oriented analysis of current issues in metadata harmonization. A set of widely used metadata specifications in the domains of learning technology, libraries and the general web environment have been chosen as targets for the analysis, with a special focus on Dublin Core, IEEE LOM and RDF. Through active participation in several metadata standardization communities, a body of knowledge of harmonization issues has been developed. The thesis presents an analytical framework of concepts and principles for understanding the issues arising when interfacing multiple standardization communities. The framework focuses on a set of important patterns in metadata specifications and their respective contributions to harmonization issues: metadata syntaxes as a tool for metadata exchange (shown to be of secondary importance in harmonization); metadata semantics as a cornerstone for interoperability (the thesis argues that incongruences in the interpretation of metadata descriptions play a significant role in harmonization); abstract models for metadata as a tool for designing metadata standards (such models prove pivotal in understanding harmonization problems); vocabularies as carriers of meaning in metadata (portable vocabularies can carry semantics from one standard to another, enabling harmonization); and application profiles as a method for combining metadata standards (while put forward as a powerful tool for interoperability, they turn out to play only a marginal role in harmonization). The analytical framework is used to analyze and compare seven metadata specifications, and a concrete set of harmonization issues is presented. These issues form the basis for a metadata harmonization framework in which a multitude of metadata specifications with different characteristics can coexist. The thesis concludes that the Resource Description Framework (RDF) is the only existing specification with the right characteristics to serve as a practical basis for such a harmonization framework, and that it therefore must be taken into account when designing metadata specifications. Based on the harmonization framework, a best practice for metadata standardization development is derived, and a roadmap for harmonization improvements of the analyzed standards is presented.
20

Ide, Ichiro, and Reiko Hamada. "METADATA ANNOTATION THROUGH MEDIA INTEGRATION." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2005. http://hdl.handle.net/2237/10357.

Full text
21

Park, June Young. "Data-driven Building Metadata Inference." Research Showcase @ CMU, 2016. http://repository.cmu.edu/theses/127.

Full text
Abstract
Building technology has advanced with improvements in information technology: building operation can now be controlled and monitored through large numbers of sensors and actuators installed on nearly every element of a building. The resulting stream of building data enables both quantitative and qualitative improvements, but mapping between physical building elements and the cyber system remains a challenge. To address this mapping issue, a text mining methodology was developed last summer as part of a project conducted by the Consortium for Building Energy Innovation. Building data were extracted from building 661 in Philadelphia, PA; the ground truth associating each data point with semantic information was labeled by manual inspection, and a Support Vector Machine was trained to relate data point names to semantic information. This algorithm achieves 93% accuracy on unseen building 661 data points. Techniques and lessons gained from that project informed a framework for analyzing building data from the Gates Hillman Center (GHC) building, Pittsburgh, PA. The framework consists of two stages. In the first stage, data points were initially clustered by similar semantic information using hierarchical clustering, but the effectiveness and accuracy of clustering proved inadequate, so a filtering and classification model was developed to identify the semantic information of the data points. Using daily statistical features, the filtering and classification method correctly identifies damper position and supply air duct pressure data points with 90% accuracy. With the semantic information from the first stage, the second stage infers the relationship between Variable Air Volume (VAV) terminal units and Air Handling Units (AHUs). The intuitive thermal and flow relationships between VAVs and AHUs were investigated first, applying statistical-feature clustering to the VAV discharge temperature data, but the control strategy of this building makes that relationship invisible. Instead, the similarity between damper position at the VAVs and supply air duct pressure at the AHUs was compared by computing their cross correlation. This similarity scoring method achieved 80% accuracy in mapping the relationship between VAVs and AHUs. The suggested framework guides the user toward desired information, such as the VAV-AHU relationship, in problems generated by large heterogeneous sensor networks, using a data-driven methodology.
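The cross-correlation idea can be illustrated with a small sketch; the correlation measure, signals and unit names below are invented examples, not code or data from the thesis.

```python
from statistics import mean, pstdev

def normalized_correlation(x, y):
    """Pearson correlation between two equal-length time series."""
    mx, my = mean(x), mean(y)
    n = len(x)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * pstdev(x) * pstdev(y))

def map_vav_to_ahu(vav_damper, ahu_pressures):
    """Assign a VAV to the AHU whose supply-air duct pressure series
    best tracks this VAV's damper-position series."""
    scores = {ahu: normalized_correlation(vav_damper, sig)
              for ahu, sig in ahu_pressures.items()}
    return max(scores, key=scores.get)

# Toy series: this VAV's damper follows AHU-1's duct pressure exactly.
vav = [10, 20, 30, 40, 30, 20, 10, 20]
ahus = {"AHU-1": [1, 2, 3, 4, 3, 2, 1, 2],
        "AHU-2": [4, 3, 1, 2, 4, 1, 3, 2]}
print(map_vav_to_ahu(vav, ahus))  # AHU-1
```

The appeal of such a similarity score is that it needs no thermal model of the building: it relies only on the observable co-movement of two existing sensor streams.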
22

Chan, Chu-hsiang. "Metadata Quality for Digital Libraries." The University of Waikato, 2008. http://hdl.handle.net/10289/2312.

Full text
Abstract
The quality of metadata in a digital library is an important factor in ensuring access for end-users. Several studies have tried to define quality frameworks and assess metadata, but there is little user feedback about these in the literature. As collections grow in size, maintaining quality through manual methods becomes increasingly difficult for repository managers. This research presents the design and implementation of a web-based metadata analysis tool for digital repositories. The tool is built as an extension to the Greenstone3 digital library software. We present examples of the tool in use on real-world data and provide feedback from repository managers. The evidence from our studies shows that automated quality analysis tools are a useful and valued service for digital libraries.
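One simple quality measure such a tool can compute is completeness: the fraction of expected fields a record actually fills. The field set and record below are illustrative inventions, not the tool's actual rules.

```python
# A hypothetical subset of Dublin Core elements to check for.
EXPECTED_FIELDS = ["title", "creator", "date", "subject", "description", "identifier"]

def completeness(record):
    """Fraction of expected fields present with a non-empty value."""
    filled = sum(1 for f in EXPECTED_FIELDS if record.get(f, "").strip())
    return filled / len(EXPECTED_FIELDS)

record = {"title": "Metadata Quality for Digital Libraries",
          "creator": "Chan, Chu-hsiang",
          "date": "2008",
          "subject": ""}            # present but empty: counts as missing
print(f"{completeness(record):.2f}")  # 0.50
```

Scores like this can be aggregated over a whole collection, letting a repository manager spot weak records without inspecting each one by hand.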
23

Beckman, Erin M. "Requirements and information metadata system." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FBeckman.pdf.

Full text
Abstract
Thesis (M.A. in Security Studies (Homeland Security and Defense))--Naval Postgraduate School, March 2007. Thesis Advisor(s): Robert L. Simeral. Includes bibliographical references (p. 69-71). Also available in print.
24

Roy, Rishi R. (Rishi Raj) 1980. "Speech metadata in broadcast news." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/87892.

Full text
Abstract
Thesis (M.Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (leaf 76). By Rishi R. Roy. M.Eng. and S.B.
25

Mukhedkar, Rahul. "Towards Metadata Driven User Interfaces." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275403481.

Full text
26

Caubet, Marc, and Mònica Cifuentes. "Extracting metadata from textual documents and utilizing metadata for adding textual documents to an ontology." Thesis, Växjö universitet, Matematiska och systemtekniska institutionen, 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:vxu:diva-534.

Full text
Abstract
The term ontology is borrowed from philosophy, where an ontology is a systematic account of existence. In computer science, an ontology is a tool that allows the effective use of information, making it understandable and accessible to the computer. For these reasons, the study of ontologies has gained growing interest recently. Our motivation is to create a tool able to build ontologies from a set of textual documents. We present a prototype implementation which extracts metadata from textual documents and uses the metadata for adding textual documents to an ontology. In this paper we investigate which techniques are available and which ones were used to solve our problem. Finally, we present a program written in Java which builds ontologies from textual documents using our approach.
27

Andrle, Ondřej. "Využití metadat při řešení business intelligence aplikací." Master's thesis, Vysoká škola ekonomická v Praze, 2010. http://www.nusl.cz/ntk/nusl-85285.

Full text
Abstract
The diploma thesis focuses on metadata as an important means of halting the trend of increasing costs of developing, operating and maintaining decision support systems (Business Intelligence). The theoretical part of the thesis is elaborated on this assumption. Its goal is to produce an extensive analysis of the term metadata: starting with a general definition, then dealing with categorization and analyzing the issues and benefits. Further on, the term metadata management is discussed, as well as the metadata repository, which is the key element of metadata solutions. The aim of the practical part of the thesis is to analyze selected commercial metadata management solutions and answer the question whether there is currently a suitable comprehensive solution that would suit the needs of a chosen financial institution, Komerční banka. Furthermore, the thesis discusses whether it is preferable to purchase a commercial solution or to opt for in-house development. The analysis in both parts of the thesis, theoretical and practical, is mainly based on foreign sources, above all articles by specialists in the area of data warehousing, and numerous consultations with an expert on metadata from Komerční banka, Mr. Jiří Omacht.
28

Hellström, Johan. "Musikens metadata på webbradiotjänster : En studie kring omfattningen av musikens metadata på webbradiotjänster och beståndsdelarnas relevans." Thesis, Linnéuniversitetet, Institutionen för medieteknik (ME), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-25904.

Full text
Abstract
The purpose of this thesis was to analyze the extent of musical information presented on web radio stations as well as its availability and relevance to the user. Initially, a quantitative analysis of web radio stations was made. The result showed that although music-related metadata is presented in different forms, few stations have adopted the majority of this functionality. The result of the analysis, in combination with literature studies, resulted in a specification of requirements that was applied to a prototype. Its functionality in terms of music-related metadata was evaluated in user tests and interviews. Information relevant to the participants included track and artist names, lyrics, biographies, album covers and music videos. Labels and tour information were deemed less relevant.
29

Varga, Jovan. "Semantic metadata for supporting exploratory OLAP." Doctoral thesis, Universitat Politècnica de Catalunya, 2017. http://hdl.handle.net/10803/405663.

Full text
Abstract
On-Line Analytical Processing (OLAP) is an approach widely used for data analysis. OLAP is based on the multidimensional (MD) data model where factual data are related to their analytical perspectives called dimensions and together they form an n-dimensional data space referred to as data cube. MD data are typically stored in a data warehouse, which integrates data from in-house data sources, and then analyzed by means of OLAP operations, e.g., sales data can be (dis)aggregated along the location dimension. As OLAP proved to be quite intuitive, it became broadly accepted by non-technical and business users. However, as users still encountered difficulties in their analysis, different approaches focused on providing user assistance. These approaches collect situational metadata about users and their actions and provide suggestions and recommendations that can help users' analysis. However, although extensively exploited and evidently needed, little attention is paid to metadata in this context. Furthermore, new emerging tendencies call for expanding the use of OLAP to consider external data sources and heterogeneous settings. This leads to the Exploratory OLAP approach that especially argues for the use of Semantic Web (SW) technologies to facilitate the description and integration of external sources. With data becoming publicly available on the (Semantic) Web, the number and diversity of non-technical users are also significantly increasing. Thus, the metadata to support their analysis become even more relevant. This PhD thesis focuses on metadata for supporting Exploratory OLAP. The study explores the kinds of metadata artifacts used for the user assistance purposes and how they are exploited to provide assistance. Based on these findings, the study then aims at providing theoretical and practical means such as models, algorithms, and tools to address the gaps and challenges identified. 
First, based on a survey of existing user assistance approaches related to OLAP, the thesis proposes the analytical metadata (AM) framework. The framework includes the definition of the assistance process, the AM artifacts that are classified in a taxonomy, and the artifacts organization and related types of processing to support the user assistance. Second, the thesis proposes a semantic metamodel for AM. Hence, Resource Description Framework (RDF) is used to represent the AM artifacts in a flexible and re-usable manner, while the metamodeling abstraction level is used to overcome the heterogeneity of (meta)data models in the Exploratory OLAP context. Third, focusing on the schema as a fundamental metadata artifact for enabling OLAP, the thesis addresses some important challenges in constructing an MD schema on the SW using RDF. It provides the algorithms, method, and tool to construct an MD schema over statistical linked open data sets. In particular, the focus is on enabling even non-technical users to perform this task. Lastly, the thesis deals with queries as the second most relevant artifact for user assistance. In the spirit of Exploratory OLAP, the thesis proposes an RDF-based model for OLAP queries created by instantiating the previously proposed metamodel. This model supports the sharing and reuse of queries across the SW and facilitates the metadata preparation for the assistance exploitation purposes. Finally, the results of this thesis provide metadata foundations for supporting Exploratory OLAP and advocate for greater attention to the modeling and use of semantics related to metadata.
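The idea of describing an OLAP query in RDF can be pictured with plain N-Triples; every URI and predicate below is an invented placeholder, not a term from the thesis metamodel.

```python
# Emit a tiny N-Triples description of one hypothetical OLAP query:
# "total sales rolled up to the Country level of the Location dimension".
# All URIs are invented placeholders, not terms from the thesis metamodel.
EX = "http://example.org/olap#"

def triple(s, p, o):
    """Render one N-Triples statement; literal objects get quoted."""
    obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
    return f"<{s}> <{p}> {obj} ."

triples = [
    triple(EX + "q1", EX + "hasMeasure", EX + "salesAmount"),
    triple(EX + "q1", EX + "rollsUpTo", EX + "location.country"),
    triple(EX + "q1", EX + "label", "Total sales by country"),
]
print("\n".join(triples))
```

Because each query becomes a set of web-addressable triples, it can be published, shared and reused across the Semantic Web like any other resource.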
30

Yin, Zheng. "Study of metadata for learning objects." Thesis, University of Ottawa (Canada), 2004. http://hdl.handle.net/10393/26819.

Full text
Abstract
Metadata is descriptive information about data. The purpose of metadata is to facilitate describing, managing and discovering resources in huge distributed repositories. Metadata experts worldwide created the Dublin Core (DC), which acts as the fundamental core metadata standard on which industrial metadata standards are based. In the educational industry, the need is increasing for description and exploration of learning objects (LOs) in distributed learning object repositories (LORs) worldwide. Several organizations aim to establish metadata standards to facilitate better identification, exchange and reuse of learning objects according to their specific needs. The Institute of Electrical and Electronics Engineers (IEEE) published Learning Object Metadata (LOM), an accredited standard at the global level, since it best represents the characteristics of digital learning objects. Conversion from one metadata standard to another is necessary when people want to exchange and reuse learning objects tagged using different metadata standards, and mapping between DC and LOM is an essential job in many e-learning systems. In this thesis, we present a new web-based metadata editor for DC and LOM, and a Web Services oriented mapping tool between them. Clients can use our editor to create DC or LOM metadata records when cataloguing their learning objects into our LOR, and can integrate the mapping web services into different systems regardless of platforms, protocols and display devices. Our objective is to promote reusability and interoperability for both DC and LOM users, and thereby benefit the learning object industry by lowering the cost of using metadata. The DC-LOM mapping tool is demonstrated in our e-learning system called UbiLearn, developed at the MCRLab of the University of Ottawa.
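A DC-to-LOM crosswalk can be sketched as a simple field mapping; the correspondences below follow the commonly cited DC-LOM crosswalk in spirit, but the exact element paths are simplified illustrations, not the thesis' mapping tables.

```python
# Hypothetical, simplified crosswalk from Dublin Core elements to LOM
# element paths; real crosswalks carry more structure per element.
DC_TO_LOM = {
    "title":       "general.title",
    "description": "general.description",
    "language":    "general.language",
    "identifier":  "general.identifier",
    "creator":     "lifecycle.contribute[role=author].entity",
}

def dc_to_lom(dc_record):
    """Translate a flat DC record into LOM-path keyed fields,
    dropping elements with no mapping."""
    return {DC_TO_LOM[k]: v for k, v in dc_record.items() if k in DC_TO_LOM}

dc = {"title": "Intro to Metadata", "language": "en", "rights": "CC-BY"}
print(dc_to_lom(dc))
```

The lossy direction is visible even in this sketch: an unmapped element like `rights` simply drops out, which is one reason crosswalks between standards need careful design.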
31

Karlsson, Fredrik, and Fredrik Berg. "Algoritm för automatiserad generering av metadata." Thesis, KTH, Data- och elektroteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-168976.

Full text
Abstract
Sveriges Radio stores its data in large archives, which makes it hard to retrieve specific information. The sheer size of the archives makes retrieving information about a specific event difficult. To solve this problem, a more consistent use of metadata is needed, which led to an investigation of metadata and keyword generation. The appointed task was to automatically generate keywords from transcribed radio shows. This included an investigation, based on previous work, of which systems and algorithms can be used to generate keywords. An application was also developed which suggests keywords for a text to a user. This application was tested and compared to existing software, as well as to different methods and techniques based on both linguistic and statistical algorithms. The resulting analysis showed that the developed application generated many accurate keywords, but also a large number of keywords overall. The comparison also showed that the developed algorithm achieved better recall than the existing software, which in turn produced better precision in its keywords.
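A minimal statistical baseline for this task is term-frequency ranking after stop-word removal, evaluated by precision and recall against hand-picked keywords; the stop-word list, transcript and gold keywords below are invented examples, not the thesis data.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "and", "of", "in", "on", "to", "is", "at"}  # tiny illustrative list

def extract_keywords(text, k=3):
    """Rank words by raw term frequency after stop-word removal."""
    words = [w for w in re.findall(r"[a-zåäö]+", text.lower()) if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(k)]

def precision_recall(predicted, gold):
    """Precision and recall of a predicted keyword list against a gold set."""
    tp = len(set(predicted) & set(gold))
    return tp / len(predicted), tp / len(gold)

transcript = ("the minister discussed the budget and the budget vote "
              "while reporters asked about the budget deficit and the vote")
keys = extract_keywords(transcript)
print(keys, precision_recall(keys, {"budget", "vote", "minister"}))
```

The recall-versus-precision trade-off the thesis reports falls out naturally from `k`: suggesting more keywords raises recall while precision tends to drop.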
32

Larsson, Marcus. "Metadata : En forensisk analys av Exif." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-23727.

Texto completo
Resumen
An image taken with a digital camera contains large amounts of information about the image's origin and the camera's settings, known as Exif data. This can be of great interest in forensic investigations that aim to tie evidence to a perpetrator. This thesis answers a number of questions relevant to an IT forensic examiner regarding images taken with a smartphone. Can it be proven that a specific device took a specific picture? How reliable is the GPS information that can be stored in an image? Through experiments and examination of Exif data, this work answers these questions. It also gives examples of tools for interpreting Exif information, and examines whether the mobile application WhatsApp removes Exif data when images are transferred between smartphones.
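The GPS information discussed above is stored in Exif as degree/minute/second rationals plus a hemisphere reference; a minimal standard-library sketch of the conversion a forensic tool performs (the coordinate values are invented):

```python
from fractions import Fraction

def gps_to_decimal(dms, ref):
    """Convert Exif GPS (degree, minute, second) rationals, each given
    as a (numerator, denominator) pair, to signed decimal degrees.
    Southern and western hemispheres are negative."""
    degrees, minutes, seconds = (float(Fraction(*r)) for r in dms)
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Invented GPSLatitude/GPSLatitudeRef values:
lat = gps_to_decimal([(59, 1), (19, 1), (4572, 100)], "N")
# lat is roughly 59.3294
```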
33

McEnnis, Daniel. "On-demand metadata extraction network (OMEN)." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99382.

Texto completo
Resumen
OMEN (On-demand Metadata Extraction Network) addresses a fundamental problem in Music Information Retrieval: the lack of universal access to a large dataset containing significant amounts of copyrighted music. This thesis proposes a solution to this problem that is accomplished by utilizing the large collections of digitized music available at many libraries. Using OMEN, libraries will be able to perform on-demand feature extraction on site, returning feature values to researchers instead of providing direct access to the recordings themselves. This avoids copyright difficulties, since the underlying music never leaves the library that owns it. The analysis is performed using grid-style computation on library machines that are otherwise under-used (e.g., devoted to patron web and catalogue use).
34

Price, Robin Michael. "Metadata and interactivity in sonic art." Thesis, Queen's University Belfast, 2013. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.602929.

Texto completo
Resumen
This thesis addresses questions about how to handle recombinant works concocted from large media collections: how these different kinds of music can be represented, how control can be handed over to performers or an audience, and how such pieces as a whole can be conceived of and presented to the public. It puts forward the database as a method for dealing with libraries of material, examines different representations for collections of sounds and music, appraises strategies for interactivity such as hypernarratives, and suggests metaphor as a method for understanding all of this. These themes are dealt with both in the portfolio of works presented on the disk and in this written dissertation. Out of this come contributions to the bodies of work exploring the appropriation of everyday objects in art, the fields of algorithmic and generative music, generative video synthesis, online mass-participation artworks, interactive pieces and local network instruments.
35

Roxbergh, Linus. "Language Classification of Music Using Metadata." Thesis, Uppsala universitet, Avdelningen för systemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-379625.

Texto completo
Resumen
The purpose of this study was to investigate how metadata from Spotify could be used to identify the language of songs in a dataset containing nine languages. Features based on song name, album name, genre, regional popularity and vectors describing songs, playlists and users were analysed individually and in combination with each other in different classifiers. In addition, the report explored how different levels of prediction confidence affect performance, and how the approach compares to a classifier based on audio input. A random forest classifier proved to have the best performance, with an accuracy of 95.4% for the whole data set. Performance was also investigated when the confidence of the model was taken into account: keeping only the more confident predictions raised accuracy, and when keeping the 70% most confident predictions an accuracy of 99.4% was achieved. The model also proved to be robust to input of languages other than those it was trained on, and managed to filter out unwanted records not matching the languages of the model. A comparison was made to a classifier based on audio input, where the model using metadata performed better on the training and test sets used. Finally, a number of possible improvements and future work are suggested.
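Filtering predictions by model confidence, as described above, trades coverage for accuracy; a small sketch with invented predictions:

```python
def accuracy_at_confidence(predictions, threshold):
    """Keep only predictions whose confidence meets the threshold and
    report (coverage, accuracy) over the kept subset.

    predictions: iterable of (confidence, predicted_label, true_label).
    """
    predictions = list(predictions)
    kept = [(p, t) for conf, p, t in predictions if conf >= threshold]
    if not kept:
        return 0.0, 0.0
    coverage = len(kept) / len(predictions)
    accuracy = sum(p == t for p, t in kept) / len(kept)
    return coverage, accuracy

# Invented (confidence, predicted language, true language) triples:
preds = [
    (0.99, "sv", "sv"), (0.95, "en", "en"), (0.90, "de", "de"),
    (0.60, "en", "sv"), (0.55, "fi", "fi"),
]
cov, acc = accuracy_at_confidence(preds, 0.9)
# cov == 0.6 (3 of 5 kept), acc == 1.0 (all kept predictions correct)
```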
36

Kaage, Gabriella. "Metadata i publikationsdatabaser Hur används DSpace?" Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-16876.

Texto completo
Resumen
The aim of this bachelor thesis is to study how libraries handle metadata in institutional repositories. The methodology of the study is semi-structured interviews with librarians from all Swedish libraries that currently use the institutional repository system DSpace. Metadata schemes from the different libraries have also been studied. As analytical tools the study uses the user tasks presented in Functional Requirement for Bibliographic Records, as well as a study of metadata quality made by Jung-Ran Park. The study shows a rather big difference between the ways the libraries use institutional repositories. Some of the libraries use their repository as a full text database, while others use it for all publications within the organization with bibliometric functions. The study also shows that there is a difference in the use of metadata. Some of the libraries use the metadata elements created by Dublin Core Metadata Initiative, while others create their own elements.<br>Program: Bibliotekarie
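The Dublin Core elements mentioned in the abstract are conventionally serialized as namespace-qualified XML; a minimal standard-library sketch, with field values borrowed from this entry (the record layout is illustrative, not DSpace's actual storage format):

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)

def dc_record(fields):
    """Build a minimal Dublin Core description from {element: value}
    pairs, roughly as an institutional repository might expose it."""
    record = ET.Element("record")
    for name, value in fields.items():
        ET.SubElement(record, f"{{{DC_NS}}}{name}").text = value
    return record

rec = dc_record({
    "title": "Metadata i publikationsdatabaser",
    "creator": "Kaage, Gabriella",
    "date": "2012",
})
serialized = ET.tostring(rec, encoding="unicode")
# serialized contains <dc:title>, <dc:creator> and <dc:date> elements
```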
37

Tyagi, Nirvan. "A distributed metadata-private messaging system." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/106446.

Texto completo
Resumen
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.<br>Cataloged from PDF version of thesis.<br>Includes bibliographical references (pages 61-63).<br>Private communication over the Internet continues to be a difficult problem. Even if messages are encrypted, it is hard to deliver them without revealing metadata about which pairs of users are communicating. Scalable systems such as Tor are susceptible to traffic analysis. In contrast, the largest-scale systems with metadata privacy require passing all messages through a single server, which places a hard cap on their scalability. This paper presents Stadium, the first system to protect both messages and metadata while being able to scale its work efficiently across multiple servers. Stadium uses the same differential privacy definition for metadata privacy as Vuvuzela, the currently highest-scale system. However, providing privacy in Stadium is significantly more challenging because distributing users' traffic across servers creates more opportunities for adversaries to observe it. To solve this challenge, Stadium uses a novel verifiable mixnet design. We use a verifiable shuffle scheme that we extend to allow for efficient group verification, and present a verifiable distribution primitive to check message transfers across servers. We show that Stadium can scale to use hundreds of servers, support an order of magnitude more users than Vuvuzela, and cut the costs of operating each server.<br>by Nirvan Tyagi.<br>M. Eng.
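Stadium reuses Vuvuzela's differential-privacy definition, under which servers add noise to any traffic statistics an adversary can observe. A toy sketch of that general idea follows; the function name, Laplace-noise mechanism and clamping here are illustrative assumptions, not Stadium's actual protocol:

```python
import math
import random

def noisy_count(true_count, epsilon, rng=random):
    """Publish a differentially private version of an observable counter
    by adding Laplace(1/epsilon) noise (sampled via the inverse CDF),
    clamped so the published count stays non-negative."""
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return max(0, round(true_count + noise))

# Each server would publish a noisy count rather than the true value:
published = noisy_count(17, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy, at the cost of extra cover traffic.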
38

Pereira, Pedro Honrado Rio. "Extensible metadata repository for information systems." Master's thesis, FCT - UNL, 2009. http://hdl.handle.net/10362/2290.

Texto completo
Resumen
Thesis submitted to Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa, in partial fulfillment of the requirements for the degree of Master in Computer Science<br>Information Systems are usually systems with a strong integration component, and some of them rely on integration solutions based on metadata (data that describes data). In that situation, there is a need to deal with metadata as if it were "normal" information. For that purpose, a metadata repository that guarantees integrity, storage and validity, and that eases the processes of information integration in the information system, is a wise choice. Several metadata repositories are available on the market, but none of them is prepared to deal with the needs of information systems, or is generic enough to handle the multitude of situations/domains of information and the necessary integration features. In the SESS project (a European Space Agency project), a generic metadata repository was developed, based on XML technologies. That repository provided tools for information integration, validity, storage, sharing and import, as well as system and data integration, but it required the use of fixed syntactic rules stored in the content of the XML files. This causes severe problems when importing documents from external data sources (sources unaware of these syntactic rules). In this thesis, a metadata repository was developed that provides the same mechanisms of storage, integrity and validity, but is specially focused on easy integration of metadata from any type of external source (in XML format) and provides an environment that simplifies the reuse of already existing types of metadata to build new types, all without modifying the documents it stores.
The repository stores XML documents (known as Instances), each of which is an instance of a Concept; a Concept defines an XML structure that validates its Instances. To support reuse, a special unit named Fragment allows defining an XML structure (which can be created by composing other Fragments) that can be reused by Concepts when defining their own structure. Elements of the repository (Instances, Concepts and Fragments) have an identifier based on (and compatible with) URIs, named Metadata Repository Identifier (MRI). These identifiers, as well as management information (including relations), are managed by the repository without the need for fixed syntactic rules, easing integration. A set of tests using documents from the SESS project and from the software house ITDS successfully validated the repository against the thesis objectives of easy integration and promotion of reuse.
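The Concept/Instance relationship described above amounts to validating a document against a declared structure; a deliberately simplified sketch (the element names and the `mri:` identifier are invented for illustration, and real validation would use full XML Schema):

```python
import xml.etree.ElementTree as ET

def validate_instance(instance_xml, required_elements):
    """Toy check that an Instance document contains every top-level
    element its Concept's structure requires; returns missing names.
    A stand-in for the XML Schema validation the repository performs."""
    root = ET.fromstring(instance_xml)
    present = {child.tag for child in root}
    return sorted(set(required_elements) - present)

concept = ["identifier", "title", "source"]  # invented Concept structure
instance = "<instance><identifier>mri:demo/1</identifier><title>t</title></instance>"
missing = validate_instance(instance, concept)
# missing == ["source"]
```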
39

Laitala, J. (Joni). "Metadata management in distributed file systems." Bachelor's thesis, University of Oulu, 2017. http://urn.fi/URN:NBN:fi:oulu-201709092881.

Texto completo
Resumen
The purpose of this research has been to study the architectures of popular distributed file systems used in cloud computing, with a focus on their metadata management, in order to identify differences between and issues within varying designs from the metadata perspective. File system and metadata concepts are briefly introduced before the comparisons are made.
40

Page, Kevin R. "Continuous metadata flows for distributed multimedia." Thesis, University of Southampton, 2011. https://eprints.soton.ac.uk/183241/.

Texto completo
Resumen
The practical use of temporal multimedia has increased markedly in recent years as enabling technologies for the distribution and streaming of media have become available. As a part of this trend, hypermedia systems and models have adapted accordingly to incorporate such distributed multimedia for presentation. Structured interpretation of information has long been a fundamental feature of both open hypermedia systems and knowledge systems. Metadata, in its many forms, has become the cornerstone for providing this structured knowledge above and beyond basic data and information. This thesis presents the rationale and requirements for continuous metadata, which supports the metadata accompanying distributed multimedia throughout the lifecycle of streamed media, from generation, through distribution, to presentation. Throughout this process it is the temporal and continuous nature of the metadata which is paramount. A conceptual framework for continuous metadata is proposed to encapsulate these principles and ideas. Continuous metadata and the associated framework enable the development, in particular, of real-time, collaborative, semantically enriched distributed multimedia applications. Experience building one such system using continuous metadata is evaluated within the framework. An ontology is developed for the system to enable the collation, distribution, and presentation of structure aiding navigation of multimedia, and it is shown how continuous metadata utilising the ontology can be distributed using multicast.
41

Kumar, Aman. "Metadata-Driven Management of Scientific Data." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1243898671.

Texto completo
42

MESHRAM, VILOBH MAHADEO. "Distributed Metadata Management for Parallel FileSystems." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1313493741.

Texto completo
43

Zhang, Yaxuan. "Checking Metadata Usage for Enterprise Applications." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103425.

Texto completo
Resumen
It is becoming more and more common for developers to build enterprise applications on the Spring framework or other Java frameworks. While developers enjoy the convenient implementations of web frameworks, they should pay attention to configuration deployment with metadata usage (i.e., Java annotations and XML deployment descriptors). Different formats of metadata can correspond to each other, and metadata usually exist in multiple files. Maintaining such metadata is challenging and time-consuming. Current compilers and research tools rarely inspect the XML files, let alone the corresponding relationship between Java annotations and XML files. To help developers ensure the quality of metadata, this work presents a Domain Specific Language, RSL, and its engine, MeEditor. RSL facilitates pattern definition for correct metadata usage. MeEditor can take in specified rules and check Java projects for any rule violations. Developers can define rules with RSL for the metadata usage and then run RSL scripts with MeEditor. Nine rules were extracted from the Spring specification and written in RSL. To evaluate the effectiveness and correctness of MeEditor, we mined 180 plus 500 open-source projects from GitHub and conducted the evaluation in two steps. First, we evaluated the effectiveness of MeEditor by constructing a known ground-truth data set; based on experiments with this data set, MeEditor identified metadata misuse and detected bugs with 94% precision, 94% recall and 94% accuracy. Second, we evaluated the usefulness of MeEditor by applying it to real-world projects (500 projects in total). For the latest version of these 500 projects, MeEditor gave 79% precision according to our manual inspection. Then, we applied MeEditor to the version histories of rule-adopted projects, which adopt the rule and are identified as correct projects in their latest version.
MeEditor identified 23 bugs, which were later fixed by developers.<br>Master of Science
44

Wahlquist, Gustav. "Improving Automatic Image Annotation Using Metadata." Thesis, Linköpings universitet, Datorseende, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176941.

Texto completo
Resumen
Detecting and outlining products in images is beneficial for many use cases in e-commerce, such as automatically identifying and locating products within images and proposing matches for the detections. This study investigated how metadata associated with images of products could help boost the performance of an existing approach, with the ultimate goal of reducing the manual labour needed to annotate images. The thesis explored whether approximate pseudo-masks could be generated for products in images by leveraging metadata as image-level labels and subsequently using the masks to train a Mask R-CNN; however, this approach did not produce satisfactory results. Further, the study found that by incorporating the metadata directly in the Mask R-CNN, an mAP performance increase of nearly 5% was achieved. Furthermore, utilising the available metadata to divide the training samples for a KNN model into subsets resulted in an increased top-3 accuracy of up to 16%. By representing the data with embeddings created by a pre-trained CNN, the KNN model performed better, with both higher accuracy and more reasonable suggestions.
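Dividing the KNN training samples into metadata-keyed subsets, as described above, might look roughly as follows (the data, field names and distance choice are invented; the thesis's actual embeddings come from a pre-trained CNN):

```python
from collections import defaultdict
import math

def knn_with_subsets(train, query_vec, query_meta, k=3):
    """Nearest-neighbour search restricted to training samples sharing
    the query's metadata key, ranked by Euclidean distance between
    embedding vectors. Falls back to the full set for unseen keys."""
    subsets = defaultdict(list)
    for vec, meta, label in train:
        subsets[meta].append((vec, label))
    candidates = subsets.get(query_meta) or [c for s in subsets.values() for c in s]
    ranked = sorted(candidates, key=lambda vl: math.dist(vl[0], query_vec))
    return [label for _, label in ranked[:k]]

# Invented 2-D "embeddings" with a metadata key per sample:
train = [((0, 0), "shoes", "sneaker"), ((1, 1), "shoes", "boot"),
         ((0, 1), "hats", "cap")]
top = knn_with_subsets(train, (0, 0.1), "shoes", k=1)
# top == ["sneaker"]
```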
45

Gudumac, Iulian. "Metadata editing: un'implementazione per Open Office." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3131/.

Texto completo
Resumen
The purpose of this thesis is to enhance the functionalities of GAFFE, a flexible, interactive and user-friendly application for editing metadata in office documents by supporting different ontologies stored inside and outside of the digital document, by adding new views and forms and by improving its ease of use.
46

Autayeu, Aliaksandr. "Descriptive Phrases: Understanding Natural Language Metadata." Doctoral thesis, Università degli studi di Trento, 2010. https://hdl.handle.net/11572/368353.

Texto completo
Resumen
Fast development of information and communication technologies has made available vast amounts of heterogeneous information. With these amounts growing faster and faster, information integration and search technologies are becoming a key to the success of the information society. To handle such amounts efficiently, data needs to be leveraged and analysed at deep levels. Metadata is a traditional way of getting leverage over the data. Deeper levels of analysis include language analysis, starting from purely string-based (keyword) approaches, continuing with syntactic approaches, and now semantics is about to be included in the processing loop. A natural language, being the easiest means of expression, is often used in metadata; we call such metadata "natural language metadata". Examples include various titles, captions and labels, such as web directory labels, picture titles, classification labels and business directory category names. These short pieces of text usually describe (sets of) objects; we call them "descriptive phrases". This thesis deals with the problem of understanding natural language metadata for its further use in semantics-aware applications. It contributes by portraying descriptive phrases, using the results of analysing several collected and annotated datasets of natural language metadata, and it provides an architecture for natural language metadata understanding, complete with algorithms and an implementation. The thesis also contains an evaluation of the proposed architecture.
47

Autayeu, Aliaksandr. "Descriptive Phrases: Understanding Natural Language Metadata." Doctoral thesis, University of Trento, 2010. http://eprints-phd.biblio.unitn.it/270/1/autayeu-phd-thesis.pdf.

Texto completo
Resumen
Fast development of information and communication technologies has made available vast amounts of heterogeneous information. With these amounts growing faster and faster, information integration and search technologies are becoming a key to the success of the information society. To handle such amounts efficiently, data needs to be leveraged and analysed at deep levels. Metadata is a traditional way of getting leverage over the data. Deeper levels of analysis include language analysis, starting from purely string-based (keyword) approaches, continuing with syntactic approaches, and now semantics is about to be included in the processing loop. A natural language, being the easiest means of expression, is often used in metadata; we call such metadata "natural language metadata". Examples include various titles, captions and labels, such as web directory labels, picture titles, classification labels and business directory category names. These short pieces of text usually describe (sets of) objects; we call them "descriptive phrases". This thesis deals with the problem of understanding natural language metadata for its further use in semantics-aware applications. It contributes by portraying descriptive phrases, using the results of analysing several collected and annotated datasets of natural language metadata, and it provides an architecture for natural language metadata understanding, complete with algorithms and an implementation. The thesis also contains an evaluation of the proposed architecture.
48

Potetsianakis, Emmanouil. "Enhancing video applications through timed metadata." Electronic Thesis or Diss., Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLT029.

Texto completo
Resumen
Video recording devices are often equipped with sensors (smartphones, for example, with a GPS receiver, gyroscope, etc.), or used in settings where sensors are present (e.g. monitoring cameras, or areas with temperature and/or humidity sensors). As a result, many systems process and distribute video together with timed metadata streams, often sourced as user-generated content. Video delivery has been thoroughly studied; however, timed metadata streams have varying characteristics and forms, and a consistent and effective way to handle them in conjunction with video streams does not exist. In this thesis we study ways to enhance video applications through timed metadata. We define as timed metadata all the non-audiovisual data recorded or produced that is relevant to a specific time on the media timeline. "Enhancing" video applications has a double meaning, and this work consists of two respective parts. First, using timed metadata to extend the capabilities of multimedia applications by introducing novel functionalities. Second, using timed metadata to improve the content delivery for such applications. To extend multimedia applications, we take an exploratory approach and demonstrate two use cases with application examples. In the first case, timed metadata is used as input for generating content, and in the second, it is used to extend the navigational capabilities of the underlying multimedia content. By designing and implementing two different application scenarios we were able to identify the potential and limitations of video systems with timed metadata. We use these findings to improve delivery of the content: specifically, we study the use of timed metadata for multi-variable adaptation in multi-view video delivery, and we test our proposals on one of the previously developed platforms. Our final contribution is a buffering scheme for synchronous and low-latency playback in live streaming systems.
49

Beránek, Lukáš. "Vizualizace technických a business metadat." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-124775.

Texto completo
Resumen
This master's degree thesis focuses on visualizing preprocessed business and technical metadata in a business environment. Within the process of elaboration and usage of the collected data in the company, it is necessary to present the data to users in a comfortable, comprehensible and clear way. The first goal of this thesis is to describe and specify the term metadata, both in theory and at the business level: its main structure, its occurrence in non-visual form, and the places where metadata can be found in a heterogeneous business environment. This part also includes a short introduction to the usage of metadata related to and originating from business intelligence, and a description of a Company encyclopedia that can syndicate these resources for further utilization. Once the sources, destinations and purpose of technical and business metadata are defined, they can be used in the second part of the thesis, which is aimed at modeling the use cases for the visual component that can be applied to business and technical metadata. The use cases focus on the roles of the users who will use this component and on discovering their primary demands and requirements and the functionality that will be indispensable. After the use cases are defined, the next stage of visual component development follows: the data must be visualized, and proper means must be found to achieve this with user experience as the main focus. The visualization is then encapsulated with a graphical user interface that, through prototyping, meets the requirements and demands of the user roles specified in the use cases section. Lastly, the results of the previous chapters are used to prototype a visual component suitable for a web environment, based on principles of reusability and a data-driven approach, using modern web technologies such as the D3.js library, HTML5, CSS3 and SVG.
50

Stenäng, Marie. "Metadata på webben : en studie av metadata och dess användning på sex svenska högskolors och universitets webbplatser." Thesis, Högskolan i Borås, Institutionen Biblioteks- och informationsvetenskap / Bibliotekshögskolan, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-18335.

Texto completo
Resumen
The purpose of this Master's thesis is to study metadata and its use on the Web sites of six Swedish universities and colleges. Each Web site has been examined at three different hierarchical levels, and the total number of examined Web pages is 258. 16 questionnaires were also sent to webmasters at the six universities and colleges. The results show that metadata refers to data which can help organize, describe, identify, locate, evaluate and retrieve resources on the Internet. Even though several different metadata formats are available on the Internet, the commercial search engines only support HTML's title tag and the meta tags with the attributes keywords and description. The six examined Web sites use the meta tag to some extent. The metadata format Dublin Core is represented in a small number of the examined Web pages. Search engines usually present the title tag and the description tag of Web pages in the search results; therefore, it seems especially important to provide good title and description tags. The study of the three different hierarchical levels at each Web site shows a difference in the use of metadata: at some levels the HTML meta tag is not used at all. In the questionnaires, the webmasters describe the Web sites as decentralized, where each webmaster only answers for his/her part of the Web site. The differences in the use of metadata could be due to each webmaster's interest in and knowledge of metadata.<br>Uppsatsnivå: D
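The tags the study examines (the HTML title tag plus meta tags carrying keywords, description, or Dublin Core attribute names) can be extracted with the standard library alone; the page snippet below is invented:

```python
from html.parser import HTMLParser

class MetaTagExtractor(HTMLParser):
    """Collect the <title> and <meta name="..."> tags that search
    engines of the period would have indexed."""
    def __init__(self):
        super().__init__()
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and "name" in attrs:
            self.meta[attrs["name"].lower()] = attrs.get("content", "")
        elif tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.meta["title"] = self.meta.get("title", "") + data

page = """<html><head><title>Institutionen</title>
<meta name="description" content="Startsida för institutionen">
<meta name="keywords" content="universitet, forskning">
<meta name="DC.Creator" content="Webmaster"></head></html>"""

parser = MetaTagExtractor()
parser.feed(page)
# parser.meta now maps title/description/keywords/dc.creator to their values
```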