
Dissertations / Theses on the topic 'Metadata creation'

Consult the top 20 dissertations / theses for your research on the topic 'Metadata creation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Enoksson, Fredrik. "Adaptable metadata creation for the Web of Data." Doctoral thesis, KTH, Medieteknik och interaktionsdesign, MID, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154272.

Full text
Abstract:
One approach to managing a collection is to create data about the things in it. This descriptive data is called metadata, and in this thesis the term is used as a collective noun, i.e. it has no plural form. A library is a typical example of an organization that uses metadata to manage a collection of books. The metadata about a book describes certain attributes of it, for example who the author is. Metadata also makes it possible for a person to judge whether a book is interesting without having to deal with the book itself. The metadata of the things in a collection is a representation of the collection that is easier to deal with than the collection itself. Nowadays metadata is often managed in computer-based systems that enable searching and the sorting of search results according to different principles. Metadata can be created both by computers and by humans. This thesis deals with certain aspects of the human activity of creating metadata and includes an explorative study of this activity. The growing amount of public information being produced is also required to be easily accessible, and therefore the situation in which metadata is part of the Semantic Web has been an important consideration in this thesis. This situation is also referred to as the Web of Data or Linked Data. With the Web of Data, metadata records that previously lived in isolation from each other can now be linked together over the web. This will probably change not only what kind of metadata is created but also how it is created. This thesis describes the construction and use of a framework called Annotation Profiles, a set of artifacts developed to enable a metadata creation environment that is adaptable with respect to what metadata can be created. The main artifact is the Annotation Profile Model (APM), a model that holds enough information for a software application to generate a customized metadata editor from it.
An instance of this model is called an annotation profile, which can be seen as a configuration for metadata editors. What metadata can be edited in a metadata editor can be changed without modifying the code of the application. Two code libraries that implement the APM have been developed and evaluated, both internally within the research group where they were developed and externally via interviews with software developers who have used one of them. Another artifact presented is a protocol for how RDF metadata can be remotely updated when metadata is edited through a metadata editor. It is also described how the APM opens up possibilities for end-user development, which is one of the avenues of pursuit in future research related to the APM.
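The core idea of the abstract above, a declarative profile from which an application derives a metadata editor so that the set of editable fields changes without touching application code, can be sketched roughly as follows. This is a minimal illustration only; the field names and structure below are invented for the example and are not the APM's actual vocabulary.

```python
# Illustrative sketch of a profile-driven metadata editor: the profile
# is pure data, and the application generates form fields from it, so
# changing what can be edited means changing the profile, not the code.
profile = {
    "label": "Simple book profile",
    "fields": [
        {"property": "http://purl.org/dc/terms/title",
         "label": "Title", "widget": "text", "required": True},
        {"property": "http://purl.org/dc/terms/creator",
         "label": "Author", "widget": "text", "required": False},
    ],
}

def generate_editor(profile):
    """Turn a profile into a list of form-field descriptions."""
    form = []
    for f in profile["fields"]:
        form.append({
            "name": f["label"],
            "input": f["widget"],
            "binds_to": f["property"],   # the RDF property the value is stored under
            "mandatory": f["required"],
        })
    return form

for field in generate_editor(profile):
    print(field["name"], "->", field["binds_to"])
```

Adding a new editable field (say, a publication date) would mean appending one entry to `profile["fields"]`, with no change to `generate_editor`.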


APA, Harvard, Vancouver, ISO, and other styles
2

Costache, Stefania [Verfasser]. "Exploiting metadata for context creation and ranking on the desktop / Stefania Costache." Hannover : Technische Informationsbibliothek und Universitätsbibliothek Hannover, 2010. http://d-nb.info/101083925X/34.

Full text
3

Mabry, Holly, and Daniel Jolley. "Using Analytic Tools to Measure Overall Trends and Growth Patterns in Digital Commons Collections." Digital Commons @ East Tennessee State University, 2018. https://dc.etsu.edu/dcseug/2018/schedule/1.

Full text
Abstract:
Digital Commons @ Gardner-Webb University was launched in Fall 2015 and currently has over 1,300 papers, including theses and dissertations; journals in Education, Psychology, and Undergraduate Research; University Archives; and faculty scholarship. The repository has a small but growing number of collections that continue to show significant year-to-year increases in document download counts, particularly in the nursing and education theses and dissertations collections. Digital Commons provides a number of ways to track collection statistics and identify repository access and download trends. This presentation will look at how we used the Digital Commons Dashboard report tool and Google Analytics to identify the most popular collections and where they are being accessed, on campus and globally. Using this data, we were able to write targeted metadata and include third-party tools such as the Internet Archive BookReader in order to improve outreach to the campus and global scholarly community.
4

Norlund, Petra. "Automatic and semi-automatic methods for metadata creation and maintenance : long term implementation of the INSPIRE directive." Thesis, Högskolan i Gävle, Avdelningen för Industriell utveckling, IT och Samhällsbyggnad, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-8212.

Full text
Abstract:
Metadata is an important part of any Spatial Data Infrastructure (SDI). Without proper and sufficient documentation of spatial data, resources are lost when pre-existing data has to be recreated or when data sets overlap. At the same time, creating and updating metadata can be a resource-intensive task. Lantmäteriet seeks to optimize the creation and updating of metadata according to the new INSPIRE directive as well as the Swedish National Geodata Strategy. INSPIRE (Infrastructure for Spatial Information in Europe) seeks to increase cooperation between European nations through the harmonization of certain spatial data themes, increased data and software interoperability, and the creation of a European spatial data infrastructure. INSPIRE lays the judicial foundation for this European cooperation. Sweden has been involved with INSPIRE since May 15th, 2009. This thesis aims to develop an optimal business process model for how the Swedish Mapping, Cadastral, and Land Registration Authority (Lantmäteriet) can create and update metadata according to the new INSPIRE directive, based on best-practice case studies and an extensive literature review. The European Commission (EC) INSPIRE directive will be fully implemented in 2010. Furthermore, a survey of current metadata practices has been carried out to establish a starting point for metadata creation at Lantmäteriet, as well as a best-practice business process model using ArcGIS Desktop.
5

Mugabe, Crispen [Verfasser], Wolfram [Akademischer Betreuer] Luther, and Andreas [Akademischer Betreuer] Harrer. "Development of a conceptual graphical user interface framework for the creation of XML metadata for digital archives / Crispen Mugabe. Gutachter: Andreas Harrer. Betreuer: Wolfram Luther." Duisburg, 2013. http://d-nb.info/1037311477/34.

Full text
6

Alemu, Getaneh. "A theory of digital library metadata : the emergence of enriching and filtering." Thesis, University of Portsmouth, 2014. https://researchportal.port.ac.uk/portal/en/theses/a-theory-of-digital-library-metadata(cdfbda48-3a8b-4fe9-afef-a8a7e3531253).html.

Full text
Abstract:
The ever-increasing volume and diversity of information objects, technological advances and rising user expectations are causing libraries to face challenges in adequately describing information objects so as to improve their findability and discoverability by potential end users. Taking these present metadata challenges into account, this thesis inductively explores and develops overarching concepts and principles that are pertinent within both current standards-based and emerging metadata approaches. Adopting a Constructivist Grounded Theory Method, this thesis conducted in-depth interviews with 57 purposefully selected participants, comprising practising librarians, researchers, metadata consultants and library users. The interview data was analysed using three stages of iterative data analysis: open coding, focused coding and theoretical coding. The analysis resulted in the emergence of four Core Categories, namely metadata Enriching, Linking, Openness and Filtering. Further integration of the Core Categories resulted in the emergence of a theory of digital library metadata: the Theory of Metadata Enriching and Filtering. The theory stipulates that metadata that has been enriched, by melding standards-based (a priori) and socially-constructed (post-hoc) metadata, cannot be optimally utilised unless the resulting metadata is contextually and semantically linked to both internal and external information sources. Moreover, in order to exploit the full benefits of such linking, metadata must be made openly accessible, where it can be shared, re-used, mixed and matched, thus reducing metadata duplication. Ultimately, metadata that has been enriched (by linking and being made openly accessible) should be filtered for each user via a flexible, personalised, and re-configurable interface.
The theory provides a holistic framework demonstrating the interdependence between expert curated and socially-constructed metadata, wherein the former helps to structure the latter, whilst the latter provides diversity to the former. This theory also suggests a conceptual shift from the current metadata principle of sufficiency and necessity, which has resulted in metadata simplicity, to the principle of metadata enriching where information objects are described using a multiplicity of users’ perspectives (interpretations). Central to this theory is the consideration of users as pro-active metadata creators rather than mere consumers, whilst librarians are creators of a priori metadata and experts at providing structure, granularity, and interoperability to post-hoc metadata. The theory elegantly delineates metadata functions into two: enriching (metadata content) and filtering (interface). By providing underlying principles, this theory should enable standards-agencies, librarians, and systems developers to better address the changing needs of users as well as to adapt themselves to recent technological advances.
7

Andersen, Andreas Engen. "Interactive Television on Handheld Devices : Handling of metadata and creating interactivity in T-DMB and DVB-H." Thesis, Norwegian University of Science and Technology, Department of Electronics and Telecommunications, 2007. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-9652.

Full text
Abstract:

As broadcast television changes from analogue to digital transmission, the television industry has to adapt itself to a new reality. Digitization opens up a wide array of new ways of watching television, in which interactivity and mobility are paramount. The difference in the experience lies in the interactive part, inviting the user to take part in what happens on the mobile screen. What the mobile telephone lacks in screen size it can now make up for with its interactive potential. Instead of just watching television, the user now interacts with it. As a result, the television experience can be tailored to suit consumers with different requirements. In this study I look at real-time broadcast television to handheld devices over the standards adopted in Europe and Korea today, DVB-H and T-DMB, and how interactivity between content provider and end user can be achieved. I also look into how metadata plays a crucial role in interactive television, and the means of utilizing metadata to serve the end user's demands according to standards such as XML, MPEG and TV-Anytime. By supplying metadata to, for example, sport or reality shows, and thereby creating interactivity between content provider and end user, a new market for television is made possible. Electronic program guides (EPGs), teletext and weather forecasts for handhelds are examples of ways that metadata can be utilized to create an interactive experience for the end user.

8

Retelius, Philip, and Persson Eddie Bergström. "Creating a Customizable Component Based ETL Solution for the Consumer." Thesis, KTH, Hälsoinformatik och logistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296819.

Full text
Abstract:
In today's society, an enormous amount of data is created that is stored in various databases. Since the data is in many cases stored in different databases, there is a demand from organizations with a lot of data to be able to merge separated data and get an extraction of this resource. Extract, Transform and Load System (ETL) is a solution that has made it possible to easily merge different databases. However, the ETL market has been owned by large actors such as Amazon and Microsoft and the solutions offered are completely owned by these actors. This leaves the consumer with little ownership of the solution. Therefore, this thesis proposes a framework to create a component based ETL which gives consumers an opportunity to own and develop their own ETL solution that they can customize to their own needs. The result of the thesis is a prototype ETL solution that is built with the idea of being able to configure and customize the prototype and it accomplishes this by being independent of inflexible external libraries and a level of modularity that makes adding and removing components easy. The results of this thesis are verified with a test that shows how two different files containing data can be combined.
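The component-based design described in the abstract, where each ETL stage is an interchangeable piece and the pipeline is just an ordered list of components, can be sketched as follows. This is an illustration of the general idea under stated assumptions, not the thesis's actual code; the component names and the in-memory "sink" standing in for a database are invented for the example.

```python
# Sketch of a component-based ETL pipeline: stages are plain callables,
# so adding or removing a component never requires changing the engine.
def extract_csv_rows(text):
    """Extract: parse simple comma-separated lines into dicts."""
    header, *rows = [line.split(",") for line in text.strip().splitlines()]
    return [dict(zip(header, row)) for row in rows]

def transform_uppercase_names(rows):
    """Transform: normalise the 'name' column to upper case."""
    return [{**r, "name": r["name"].upper()} for r in rows]

def load_to_list(rows, sink):
    """Load: append rows to an in-memory sink (stand-in for a database)."""
    sink.extend(rows)
    return sink

def run_pipeline(source, components, sink):
    """Run the ordered list of components, then load the result."""
    data = source
    for component in components:
        data = component(data)
    return load_to_list(data, sink)

sink = []
run_pipeline("name,age\nada,36\ngrace,45",
             [extract_csv_rows, transform_uppercase_names], sink)
print(sink)  # [{'name': 'ADA', 'age': '36'}, {'name': 'GRACE', 'age': '45'}]
```

Swapping `extract_csv_rows` for a JSON extractor, or inserting a second transform between the two existing components, changes only the list passed to `run_pipeline`.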
9

Lai, Pei-Chun. "Speech-based metadata generation for web map search." Master's thesis, 2021. http://hdl.handle.net/10362/113895.

Full text
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Metadata is indispensable for data discoverability and interoperability. Most datasets use automatic techniques to create metadata; nevertheless, metadata creation still requires manual intervention and editing, and manual metadata creation is a tedious task. This study proposes a prototype that introduces speech recognition into the metadata creation process. Users can generate content by speaking. Afterward, the prototype transforms it into metadata in the JSON-LD format, a popular metadata format utilized by mainstream search engines. A user study was conducted to understand the impact of speech-based interaction on user performance and user satisfaction. The result showed no significant performance difference between speech-based and type-based input in the efficiency, slip rate, and difficulty rating evaluations. In the user experience evaluation, participants considered type-based metadata creation pragmatic and speech-based metadata creation hedonic. This suggests that a mixed mode could combine the advantages of each and optimize the user experience.
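The prototype's final step, emitting dataset metadata as JSON-LD, can be sketched roughly as below. In the study the field values would come from speech recognition; here they are made-up examples, and the use of schema.org `Dataset` terms is an assumption about the kind of vocabulary such a prototype would target, not a description of its actual schema.

```python
import json

def to_jsonld(title, description, keywords):
    """Wrap plain metadata fields in a schema.org Dataset JSON-LD record."""
    return {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": title,
        "description": description,
        "keywords": keywords,
    }

# Example fields (in the prototype these would be dictated by the user).
record = to_jsonld("City noise levels",
                   "Hourly noise measurements for downtown sensors.",
                   ["noise", "urban", "sensors"])
print(json.dumps(record, indent=2))
```

Because JSON-LD is plain JSON with a `@context`, the record can be embedded directly in a web page for search engines to index.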
10

Chen, Yu-Shin, and 陳郁心. "The Creation and Application Analysis of Hierarchical Geographic Metadata Framework." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/kr9bqa.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Department of Surveying Engineering (MS/PhD Program)
Academic year 90 (2001–2002)
The importance of metadata is well recognized with the increasing development of mechanisms for distributing geographic data. A successful geographic data distribution mechanism enables the sharing of geographic data and avoids unnecessary, duplicated data creation. However, the success of such a mechanism may largely depend on the degree to which users have control over existing data. Metadata, which provides a correct and complete description of the existing data, is therefore the critical factor for the whole mechanism. Nonetheless, the successful creation of metadata requires both the common consensus of data providers and a well-designed procedure for creating correct and complete metadata. Though a tremendous amount of effort has been invested in the creation of metadata by governments in the past few years, the lack of understanding of the current metadata standard, as well as the difficulty of creating metadata compliant with the standard, have proved to be the major obstacles to developing a data-sharing environment. To overcome these impediments, we propose a feasible, efficient and low-cost solution for creating standard-compliant metadata in this research. The fundamental strategy of the whole system design is to reduce unnecessary workload and provide a user-friendly interface during the metadata creation process. Two basic concepts are proposed and implemented in this research:
(1) Hierarchical metadata framework: for a specific type of geographic data, the hierarchical metadata framework allows users to easily identify metadata elements that are not applicable or that are commonly used. The workload can therefore be largely reduced by removing non-applicable elements and reusing the values of common metadata elements.
(2) Knowledge-aided auxiliary mechanism: a step-by-step auxiliary mechanism, encompassing knowledge of the correspondence between metadata elements and geographic data, allows users to create metadata even with only limited knowledge of the metadata standard. The creation of metadata can therefore be largely simplified through the reorganization of metadata elements, auxiliary explanations of the elements, and automatic links between corresponding metadata elements.
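The hierarchical-framework idea above, where dataset-level metadata inherits values of common elements from a higher level and non-applicable elements are dropped, can be sketched in a few lines. The element names and values below are invented for illustration and do not come from the thesis.

```python
# Sketch of hierarchical metadata resolution: a dataset-level record
# inherits common element values from its series level, overrides the
# ones it sets itself, and drops elements marked not applicable.
def resolve(parent, child, not_applicable=()):
    merged = {**parent, **child}   # child values override the parent's
    return {k: v for k, v in merged.items() if k not in not_applicable}

series_level = {"organisation": "Survey Dept.", "crs": "TWD97", "lineage": "aerial"}
dataset_level = {"title": "Tainan roads 2002", "lineage": "digitised"}

record = resolve(series_level, dataset_level, not_applicable=("crs",))
print(record)
# {'organisation': 'Survey Dept.', 'lineage': 'digitised', 'title': 'Tainan roads 2002'}
```

Only the dataset-specific values need to be entered by hand; everything shared across the series comes from the parent level, which is exactly the workload reduction the abstract describes.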
11

Tseng, Hsin Yi, and 曾馨儀. "IoT Metadata Creation System for Mobile Images and its Applications." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/40196550974007224536.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Institute of Information Systems and Applications
Academic year 104 (2015–2016)
This thesis proposes the capturing of IoT data and its embedding as metadata in digital snapshots, including photos, audio, and video. Specifically, we focus on IoT devices based on Bluetooth Low Energy (BLE) technology, which is used not only for data communication but also for proximity sensing, and is ubiquitous in wearables, smartphones, personal tags, indoor-navigation beacons, and home-automation devices. The IoT metadata can cover different levels, from the radio signal strength and the media access control (MAC) address to the device name, indoor location, temperature, humidity, air quality, and advertising messages. These IoT data can readily augment metadata such as timestamp, GPS location, and camera settings already captured by today's cameras and saved in digital photo files. With such IoT metadata, the user will have much richer ways to understand the subject and environment of the scene being captured by the photo or media, such as the subject's fitness condition, the advertised events, and the tagged personal items at the scene. These metadata will enable new ways of searching, querying, and organizing the photos and the associated objects. Moreover, a novel mechanism we propose is the ability to pair with a remote device whose identity and routing information is captured as IoT metadata, so that an authorized user can pair with it again at a later time remotely. We contend that capturing IoT metadata can bring significant benefits to users.
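The augmentation idea described above, attaching records from nearby BLE devices to a photo's usual metadata, can be sketched as follows. The BLE scan here is simulated with hard-coded values; a real implementation would use a BLE library and might write into EXIF/XMP rather than building a JSON record, so everything below is illustrative only.

```python
import json
import time

def simulated_ble_scan():
    """Stand-in for a real BLE scan: devices with MAC, RSSI and readings."""
    return [
        {"mac": "AA:BB:CC:DD:EE:01", "rssi": -62, "name": "fitness-band",
         "temperature_c": 24.5},
        {"mac": "AA:BB:CC:DD:EE:02", "rssi": -80, "name": "nav-beacon"},
    ]

def build_photo_metadata(gps, ble_devices):
    """Combine a photo's usual metadata with the extra IoT layer."""
    return {
        "timestamp": int(time.time()),  # already captured by today's cameras
        "gps": gps,                     # likewise
        "iot": ble_devices,             # the additional IoT metadata layer
    }

meta = build_photo_metadata({"lat": 24.79, "lon": 120.99}, simulated_ble_scan())
print(json.dumps(meta, indent=2))
```

Searching a photo collection for "all photos taken near my tagged bag" then becomes a query over the `iot` field rather than a manual hunt.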
12

Su, Yu-chih, and 蘇昱志. "Ontology-Driven Web Information Extraction for the Creation of Metadata in Semantic Web." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/90920351054672999520.

Full text
Abstract:
Master's thesis
Tatung University
Department of Computer Science and Engineering
Academic year 93 (2004–2005)
With the problem of information explosion on the web, people need an efficient way to extract the information they really need. The Semantic Web is an emerging technology that works by building a metadata layer upon the current web and using a metadata description language to describe the resources on the WWW. It is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. In this thesis, we design and implement a system that is able to extract domain events from a large number of relevant documents and to provide semantic services. The architecture consists of three parts: the Back End Extraction Components, the Ontology-based Store, and the Service Front End. The Back End consists of several components used to extract the domain events. The Ontology-based Store serves as a common interface that takes extracted domain events as input, exports data in specific formats as output, and provides a dedicated repository for each data format. The Service Front End provides several semantic services. After building the whole system, we evaluate it by extracting specific domain events from relevant documents and identifying the factors that influence the extraction results.
13

Su, Yu-Chih, and 蘇昱志. "Ontology-Driven Web Information Extraction for the Creation of Metadata in Semantic Web." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/04644469316101409544.

Full text
Abstract:
Master's thesis
Tatung University
Institute of Computer Science and Engineering
Academic year 93 (2004–2005)
With the problem of information explosion on the web, people need an efficient way to extract the information they really need. The Semantic Web is an emerging technology that works by building a metadata layer upon the current web and using a metadata description language to describe the resources on the WWW. It is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. In this thesis, we design and implement a system that is able to extract domain events from a large number of relevant documents and to provide semantic services. The architecture consists of three parts: the Back End Extraction Components, the Ontology-based Store, and the Service Front End. The Back End consists of several components used to extract the domain events. The Ontology-based Store serves as a common interface that takes extracted domain events as input, exports data in specific formats as output, and provides a dedicated repository for each data format. The Service Front End provides several semantic services. After building the whole system, we evaluate it by extracting specific domain events from relevant documents and identifying the factors that influence the extraction results.
14

Jin, Yi-De, and 金益德. "Machine Learning Approach to Information Extraction For The Creation of Metadata in Semantic Web." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/18215879949751975072.

Full text
Abstract:
Master's thesis
Tatung University
Department of Computer Science and Engineering
Academic year 96 (2007–2008)
With the problem of information explosion on the web, people need an efficient way to extract the information they really need. The Semantic Web is an emerging technology that works by building a metadata layer upon the current web and using a metadata description language to describe the resources on the WWW. It is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. In this thesis, we design and implement a system that is able to extract information from Chinese documents and to provide semantic services. The architecture consists of two parts: the Chinese Extraction Components and the Service Front End. The back end consists of several components used to extract information from Chinese documents and uses machine learning to build Chinese grammatical structures. The Service Front End provides several semantic services. After building the whole system, we evaluate it by extracting specific domain events from relevant documents and identifying the factors that influence the extraction results.
15

Powell, Daniel James. "Social knowledge creation and emergent digital research infrastructure for early modern studies." Thesis, 2016. http://hdl.handle.net/1828/7241.

Full text
Abstract:
This dissertation examines the creation of innovative scholarly environments, publications, and resources in the context of the social knowledge creation affordances engendered by digital technologies. It draws on theoretical and praxis-oriented work undertaken as part of the Electronic Textual Cultures Laboratory (ETCL), work that sought to model how a socially aware and interconnected domain of scholarly inquiry might operate. It examines and includes two digital projects that provide a way to interrogate the meaning of social knowledge creation as it relates to early modern studies. These digital projects – A Social Edition of the Devonshire Manuscript (BL Add. 17,492) and the Renaissance Knowledge Network – approach the social in three primary ways: as a quality of material textuality, deriving from the editorial theories of D. F. McKenzie and Jerome McGann; as a type of knowledge work that digital technologies can facilitate; and as a function of consciously designed platforms and tools emerging from the digital humanities. In other words, digital humanities practitioners are uniquely placed to move what has until now been customarily an analytical category and enact or embed it in a practical, applied way. The social is simultaneously a theoretical orientation and a way of designing and making digital tools — an act which in turn embeds such a theoretical framework in the material conditions of knowledge production. Digital humanists have sought to explain and often re-contextualise how knowledge work occurs in the humanities; as such, they form a body of scholarship that undergirds and enriches the present discussion around how the basic tasks of humanities work—research, discovery, analysis, publication, editing—might alter in the age of Web 2.0 and 3.0.
Through sustained analysis of A Social Edition of the Devonshire Manuscript (BL Add 17,492) and the Renaissance Knowledge Network, this dissertation argues that scholarly communication is shifting from a largely individualistic, single-author system of traditional peer-reviewed publication to a broadly collaborative, socially-invested ecosystem of peer production and public-facing digital production. Further, it puts forward the idea that the insights gained from these long-term digital humanities projects – the importance of community investment and maintenance in social knowledge projects, building resources consonant with disciplinary expectations and norms, and the necessity of transparency and consultation in project development – are applicable more widely to shifting norms in scholarly communications. These insights and specific examples may change patterns of behaviour that govern how humanities scholars act within a densely interwoven digital humanities. This dissertation is situated at the intersection of digital humanities, early modern studies, and discussions of humanities knowledge infrastructure. In content it reports on and discusses two major digital humanities projects, putting a number of previous peer-reviewed, collaboratively authored publications in conversation with each other and the field at large. As the introduction discusses, each chapter other than the introduction and conclusion originally stood on its own. Incorporating previously published, peer-reviewed materials from respected journals, as well as grants, white papers, and working group documents, this project represents a departure from the proto-monograph model of dissertation work prevalent in the humanities in the United States and Canada. Each component chapter notes my role as author; for the majority of the included material, I acted as lead author or project manager, coordinating small teams of makers and writers.
In form this means that the following intervenes in discussions surrounding graduate training and professionalization. Instead of taking the form of a cohesive monograph, this project is grounded in four years of theory and practice that closely resemble dissertations produced in the natural sciences.
Graduate
16

Castro, João Daniel Aguiar de. "Engaging researchers in research data management: creating metadata models for multi-domain dataset description." Thesis, 2020. https://hdl.handle.net/10216/129396.

Full text
17

Castro, João Daniel Aguiar de. "Engaging researchers in research data management: creating metadata models for multi-domain dataset description." Doctoral thesis, 2020. https://hdl.handle.net/10216/129396.

Full text
18

Timms, Katherine V. "Arbitrary borders? New partnerships for cultural heritage siblings – libraries, archives and museums: creating integrated descriptive systems." 2007. http://hdl.handle.net/1993/2836.

Full text
Abstract:
This thesis explores the topic of convergence of descriptive systems between different cultural heritage institutions — libraries, archives and museums. The primary purpose of integrated descriptive systems is to enable researchers to access cultural heritage information through one portal. Beginning with definitions of each type of cultural heritage institution and a historical overview of their evolution, the thesis then provides an analysis of similarities and differences between these institutions with respect to purpose, procedures, and perspective. The latter half of the thesis first provides a historical overview of each discipline’s descriptive practices with a brief comparative analysis before discussing various methods by which these institutions can create integrated descriptive systems. The overall emphasis is on complementary similarities between the institutions and the potential for cross-sectoral collaboration that these similarities enable. The conclusion of the thesis is that creating integrated descriptive systems is desirable and well within current technological capabilities.
October 2007
19

Nishioka, Mizuho. "Metadata_photography and the construction of meaning : a thesis presented in partial fulfilment of the requirements for the degree Master of Fine Arts to Massey University, College of Creative Arts, School of Fine Arts, Wellington, New Zealand." 2010. http://hdl.handle.net/10179/1341.

Full text
Abstract:
Photographic technology is increasingly responsive to a desire for the production and consumption of information. The current age of photography not only possesses the ability to capture the image, but also to capture photographic metadata as supplemental information. Engaging with the premise that the photographic image exists as an incomplete medium for the transfer of information, this research identifies the acquisition of data as a means to resolve interpretation and quantify the photographic image. Inhabiting a complex territory within this structure, the photographic image manifests multiplicity and operates as source, production, and capture of information. This work challenges perceptions of how to engage with the dialogues created between the photographic image and the externally appended metadata.
20

Zlatohlávková, Růžena. "Digitální repozitáře na vysokých školách v České republice." Master's thesis, 2014. http://www.nusl.cz/ntk/nusl-337064.

Full text
Abstract:
The aim of this thesis is to present, analyse, compare and evaluate the current state of digital repositories at universities in the Czech Republic that use a software application for their digital repository. The practical research is preceded by a theoretical part, which introduces the reader to the issues of building and operating digital repositories in the Czech academic context. The crucial chapter of the practical part presents the results of the analysis itself. These are complemented by the results of a supplementary survey of universities that do not run a digital repository using a software application and instead choose a different way of storing and providing access to their grey literature. The thesis concludes with an outline of the future development of the investigated issue and the perspective of further progress in the Czech academic milieu.