
Dissertations / Theses on the topic 'Digital Domain'


Consult the top 50 dissertations / theses for your research on the topic 'Digital Domain.'


1

Suen, Tsz-yin Simon. "Curvature domain stitching of digital photographs." Click to view the E-thesis via HKUTO, 2007. http://sunzi.lib.hku.hk/hkuto/record/B38800901.

2

Suen, Tsz-yin Simon, and 孫子彥. "Curvature domain stitching of digital photographs." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2007. http://hub.hku.hk/bib/B38800901.

3

Shim, Lorraine Sohee. "Interaction Domain of Digital Device Adoption." Research Showcase @ CMU, 2016. http://repository.cmu.edu/theses/108.

Abstract:
The 21st century is interwoven with technological innovation and expanding networks. In the midst of such change, some designers have advocated that we pause and assess the objects with which we surround ourselves. Erik Stolterman and his colleagues wrote in Device Landscapes that “The number of interactive digital artifacts is growing surrounding personal lives, and individuals have an increasing need to describe, analyze, and interpret what it means to own, use, and live with a large number of interactive artifacts” (Stolterman et al., 2013). With the emergence and rapid proliferation of technology devices, the divide between tangible and intangible things has been questioned as information and data have emerged as important extensions of personal devices. A sea of informational artifacts therefore poses a challenge for users to fully adopt them into their daily interactions. In response, I conducted an inquiry-driven investigation into the domain of device adoption and highlighted seven key themes in the context of current and speculative technology. The exploration was designed on an iterative model of areal definition and research to outline the greater territory. For the sake of a sensible scope, I have limited my target users to millennials, whom I describe as a unique generation of early adopters who are both active participants and architects of technological change. To present the research outcome, I propose an annotated portfolio-styled exhibition that curates ideations and explorative concepts that have emerged from each round of research. The exhibited concepts simulate a range of device experiences and encourage pedagogic discourse around current and future models of device interactions. They are designed to induce informed reflection and discussion over the innovation of digital devices and on how to build true agency over objects that are constantly evolving and changing.
4

Gu, Lifang. "Video analysis in MPEG compressed domain /." Connect to this title, 2002. http://theses.library.uwa.edu.au/adt-2003.0016.

5

Orcutt, Edward Kerry 1964. "Correlation filters for time domain signal processing." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277215.

Abstract:
This study proposes employing new filters in various configurations for use in digital communication systems. We believe that significant improvements in such performance areas as transmission rate and synchronization may be achieved by incorporating these filters into digital communications receivers. Recently reported in the literature, these filters may offer advantages over the matched filter, allowing enhancements in data rates, intersymbol-interference (ISI) tolerance, and synchronization. To make full use of the benefits of these filters, we introduce the concept of parallel signal transmission over a single channel. We also examine the effects of signal set selection and noise on performance.
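For context, the matched-filter baseline the abstract compares against can be sketched in a few lines of NumPy. The pulse shape, noise level, and arrival index below are illustrative stand-ins, not values from the thesis.

```python
import numpy as np

# Matched-filter baseline: correlate the received signal against a known
# pulse template; the correlator output peaks where the pulse arrives,
# which maximizes output SNR in additive white Gaussian noise.
rng = np.random.default_rng(2)
template = np.array([1.0, 1.0, -1.0, 1.0, -1.0])  # known transmit pulse
received = np.zeros(100)
received[40:45] = template                         # pulse arrives at n = 40
received += 0.2 * rng.normal(size=received.size)   # channel noise

# Slide the template over the received signal (correlation, not convolution)
output = np.correlate(received, template, mode="valid")
detected = int(np.argmax(output))                  # estimated arrival index
```

With the pulse energy well above the noise floor, `detected` recovers the arrival index 40.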
6

Gu, Lifang. "Video analysis in MPEG compressed domain." University of Western Australia. School of Computer Science and Software Engineering, 2003. http://theses.library.uwa.edu.au/adt-WU2003.0016.

Abstract:
The amount of digital video has been increasing dramatically due to the technology advances in video capturing, storage, and compression. The usefulness of vast repositories of digital information is limited by the effectiveness of the access methods, as shown by the Web explosion. The key issues in addressing the access methods are those of content description and of information space navigation. While textual documents in digital form are somewhat self-describing (i.e., they provide explicit indices, such as words and sentences that can be directly used to categorise and access them), digital video does not provide such an explicit content description. In order to access video material in an effective way, without looking at the material in its entirety, it is therefore necessary to analyse and annotate video sequences, and provide an explicit content description targeted to the user needs. Digital video is a very rich medium, and the characteristics in which users may be interested are quite diverse, ranging from the structure of the video to the identity of the people who appear in it, their movements and dialogues and the accompanying music and audio effects. Indexing digital video, based on its content, can be carried out at several levels of abstraction, beginning with indices like the video program name and name of subject, to much lower level aspects of video like the location of edits and motion properties of video. Manual video indexing requires the sequential examination of the entire video clip. This is a time-consuming, subjective, and expensive process. As a result, there is an urgent need for tools to automate the indexing process. In response to such needs, various video analysis techniques from the research fields of image processing and computer vision have been proposed to parse, index and annotate the massive amount of digital video data. However, most of these video analysis techniques have been developed for uncompressed video. 
Since most video data are stored in compressed formats for efficiency of storage and transmission, it is necessary to perform decompression on compressed video before such analysis techniques can be applied. Two consequences of having to first decompress before processing are incurring computation time for decompression and requiring extra auxiliary storage. To save on the computational cost of decompression and lower the overall size of the data which must be processed, this study attempts to make use of features available in compressed video data and proposes several video processing techniques operating directly on compressed video data. Specifically, techniques of processing MPEG-1 and MPEG-2 compressed data have been developed to help automate the video indexing process. This includes the tasks of video segmentation (shot boundary detection), camera motion characterisation, and highlights extraction (detection of skin-colour regions, text regions, moving objects and replays) in MPEG compressed video sequences. The approach of performing analysis on the compressed data has the advantage of dealing with a much reduced data size and is therefore suitable for computationally-intensive low-level operations. Experimental results show that most analysis tasks for video indexing can be carried out efficiently in the compressed domain. Once intermediate results, which are dramatically reduced in size, are obtained from the compressed domain analysis, partial decompression can be applied to enable high resolution processing to extract high level semantic information.
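As a concrete illustration of the compressed-domain idea, shot-boundary detection can run on the small "DC images" that MPEG I-frames expose without full decompression. The sketch below works on synthetic DC images; the bin count and threshold are illustrative choices, not the thesis's parameters.

```python
import numpy as np

def shot_boundaries(dc_frames, threshold=0.5):
    """Flag abrupt cuts by comparing luminance histograms of consecutive
    DC images (the 8x8-downsampled frames recoverable from MPEG DC
    coefficients without full decompression)."""
    cuts = []
    prev = None
    for i, dc in enumerate(dc_frames):
        hist, _ = np.histogram(dc, bins=32, range=(0, 255))
        hist = hist / dc.size              # normalize: L1 distance lies in [0, 2]
        if prev is not None and np.abs(hist - prev).sum() > threshold:
            cuts.append(i)                 # frame i starts a new shot
        prev = hist
    return cuts

# Two synthetic "shots" with different mean luminance, cut at frame 5
rng = np.random.default_rng(0)
shot_a = [60.0 + rng.normal(0, 2, (30, 40)) for _ in range(5)]
shot_b = [180.0 + rng.normal(0, 2, (30, 40)) for _ in range(5)]
cuts = shot_boundaries(shot_a + shot_b)
```

Within-shot histogram differences stay near zero, so only the cut at frame 5 is reported.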
7

Lertniphonphun, Worayot. "Unified design procedure for digital filters in the complex domain." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/14765.

8

Jamrozik, Michele Lynn. "Spatio-temporal segmentation in the compressed domain." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/15681.

9

Haddadin, Baker. "Time domain space mapping optimization of digital interconnect circuits." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=116004.

Abstract:
Microwave circuit design, including the design of interconnect circuits, is proving to be a hard and complex process in which the use of CAD tools is becoming essential to reducing design time and providing more accurate results. Space mapping, a relatively new and very efficient optimization approach used for microwave filters and structures, is investigated in this thesis and applied to the time domain optimization of digital interconnects. Its main advantage is that the optimization is driven by simpler ("coarse") models that approximate the more complex fine model of the real system, providing better insight into the problem while reducing optimization time. The results are always mapped back to the real system, and a relation/mapping is found between the two systems that helps convergence time. In this thesis, we study the optimization of interconnects, building practical error functions to evaluate performance in the time domain. The space mapping method is formulated to avoid problems found in the original formulation: the necessary modifications are applied to Trust Region Aggressive Space Mapping (TRASM) to make it applicable to the design process in the time domain. The resulting method, modified TRASM (MTRASM), is then evaluated and tested on multiple circuits with different configurations, and its results are compared to those obtained from TRASM.
10

Thomas, Keri Louise. "The Hengwrt Chaucer : cultural capital in the digital domain." Thesis, Aberystwyth University, 2015. http://hdl.handle.net/2160/d2b4d855-ae66-4aed-b74c-7e4b2f5f704b.

Abstract:
Control is at the heart of issues surrounding the use of a digital artefact. In one sense, digitisation democratises knowledge; it makes that knowledge freely available to a large audience irrespective of who the audience member is, their education or place in the social hierarchy. In spite of this perceived egalitarianism, there are still limits in place; the material contained within those digital artefacts is still, for the large part, unintelligible to the layman, and the information imparted still chosen by an elite. This thesis explores several different concepts: the idea of cultural capital as suggested by Bourdieu, and whether the digitisation of cultural artefacts reinforces the cultural divide or emancipates knowledge; the Derridean notion of the archivist as both prison warden and creator of cultural value, with the manuscript captured in a form of house arrest; and Baudrillard’s concept of the simulacrum, applied to the digital artefact, questioning whether digitisation erodes our understanding of the real to such an extent that we destroy it. All this is done through the framework of the digitisation of the Hengwrt Chaucer, MS Peniarth 392D, possibly the oldest extant version of Geoffrey Chaucer’s The Canterbury Tales, held at Llyfrgell Genedlaethol Cymru, the National Library of Wales, in Aberystwyth, and discussions surrounding the use of social media to enhance the Library’s exhibition of their cultural artefacts. Ultimately, I hope to establish whether the digital has the potential to undermine the system, to truly emancipate knowledge from its theoretical and cultural restraints. To do this I will be examining the physical Hengwrt (MS Peniarth 392D) as well as its digital counterpart. I have chosen to identify and comment upon the relevant literature in Chapters 1 and 2 of this thesis, and to incorporate it into the body of the work rather than having the review as a defined element of the thesis.
I have done this because the synthesis of primary, secondary and tertiary literature I have employed covers a broad area and, where it has been collated for the purposes of other studies and research (in the case of Bourdieu, for example, his work consisted of qualitative and quantitative analysis methods to represent his discussion of habitus and cultural capital), I can present an overview of mixed sets of data over several different fields of research (Chaucerian research, for example, in juxtaposition with Bourdieuian theories of cultural capital and Baudrillard’s conception of the death of the real). Furthermore, I felt it was important to include a wide range of secondary literature in a range of fields, as this represents a key element in data gathering and, in the case of a field such as cultural value, allows for the fact that my primary evidence might not be sufficient to support the weight of my conjecture.
11

Clevenger, Bryan. "HIDRA: Hierarchical Inter-Domain Routing Architecture." DigitalCommons@CalPoly, 2010. https://digitalcommons.calpoly.edu/theses/292.

Abstract:
As the Internet continues to expand, the global default-free zone (DFZ) forwarding table has begun to grow faster than hardware can economically keep pace with. Various policies are in place to mitigate this growth rate, but current projections indicate policy alone is inadequate. As such, a number of technical solutions have been proposed. This work builds on many of these proposed solutions and furthers the debate surrounding the resolution to this problem. It discusses several design decisions necessary to any proposed solution, and based on these tradeoffs it proposes the Hierarchical Inter-Domain Routing Architecture (HIDRA), a comprehensive architecture with a plausible deployment scenario. The architecture uses a locator/identifier split encapsulation scheme to attenuate both the immediate size of the DFZ forwarding table and its projected growth rate. This solution is based on an already existing number allocation policy: Autonomous System Numbers (ASNs). HIDRA has been deployed to a sandbox network in a proof-of-concept test, yielding promising results.
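The locator/identifier split at the heart of such architectures can be shown in miniature: edge routers map identifier prefixes to locators (here, ASNs), so the core forwards on a table with one entry per AS rather than one per prefix. All names and table contents below are hypothetical illustrations, not HIDRA's actual encoding.

```python
# Ingress mapping: identifier prefix -> locator (ASN of the attachment point)
mapping = {
    "198.51.100.0/24": 64501,
    "203.0.113.0/24": 64502,
}

# Core (DFZ) table: one entry per locator, independent of how many
# identifier prefixes sit behind each AS
core_table = {64501: "to-as64501", 64502: "to-as64502"}

def forward(dst_prefix):
    """Encapsulate on the locator at ingress, then forward in the core
    using only the small per-ASN table."""
    locator = mapping[dst_prefix]       # identifier -> locator lookup at the edge
    return locator, core_table[locator]  # core lookup sees only the locator

locator, next_hop = forward("203.0.113.0/24")
```

The design point this sketch shows: the core table's size scales with the number of ASes, not with prefix growth.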
12

Cordell, Peter James. "Coding of digital image sequences by recursive spatial domain decomposition." Thesis, Heriot-Watt University, 1990. http://hdl.handle.net/10399/878.

13

Hu, Yiqun. "Digital Spatial Domain Multiplexing technique for optical fibre sensor arrays." Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245041.

14

Bracey, Mark. "Current domain analogue-to-digital conversion techniques for CMOS VLSI." Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.242618.

15

Li, Jing. "Digital Signal Characterization for Seizure Detection Using Frequency Domain Analysis." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-296861.

Abstract:
Nowadays, a significant proportion of the world's population is affected by cerebral diseases such as epilepsy. In this study, frequency domain features of electroencephalography (EEG) signals were studied and analyzed, with a view to detecting epileptic seizures more easily. The power spectrum and spectrogram were determined by using the fast Fourier transform (FFT), and the scalogram was found by performing a continuous wavelet transform (CWT) on the test EEG signal. In addition, two schemes, method 1 and method 2, were implemented for detecting epileptic seizures, and the applicability of the two methods to electrocardiogram (ECG) signals was tested. A third method, for anomaly detection in ECG signals, was also tested.
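The first steps the abstract describes (power spectrum via the FFT, spectrogram over short windows) can be sketched with NumPy and SciPy. The synthetic 10 Hz signal and the sampling rate below stand in for the thesis's EEG data.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 256                               # sampling rate in Hz (typical for EEG)
t = np.arange(0, 4, 1 / fs)            # 4 seconds of samples
# Synthetic stand-in for an EEG channel: a 10 Hz component plus noise
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# One-sided power spectrum via the FFT
X = np.fft.rfft(x)
power = np.abs(X) ** 2 / x.size
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
peak_hz = freqs[np.argmax(power)]      # dominant frequency of the signal

# Spectrogram: FFT power over short overlapping windows (time-frequency view)
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
```

The dominant peak lands at the 10 Hz component; seizure-detection schemes then threshold or classify features of `power` and `Sxx` over time.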
16

Hawick, Kenneth Arthur. "Domain growth in alloys." Thesis, University of Edinburgh, 1991. http://hdl.handle.net/1842/10605.

Abstract:
This thesis describes Monte-Carlo computer simulations of binary alloys, with comparisons between small angle neutron scattering (SANS) data and numerically integrated solutions to the Cahn-Hilliard-Cook (CHC) equation. Elementary theories for droplet growth are also compared with computer simulated data. Monte-Carlo dynamical algorithms are investigated in detail, with special regard for universal dynamical times. The computer simulated systems are Fourier transformed to yield partial structure functions which are compared with SANS data for the binary Iron-Chromium system. A relation between real time and simulation time is found. Cluster statistics are measured in the simulated systems, and compared to droplet formation in the Copper-Cobalt system. Some scattering data for the complex steel PE16 is also discussed. The characterisation of domain size and its growth with time are investigated, and scaling laws fitted to real and simulated data. The simple scaling law of Lifshitz and Slyozov is found to be inadequate, and corrections such as those suggested by Huse are necessary. Scaling behaviour is studied for the low-concentration nucleation regime and the high-concentration spinodal-decomposition regime. The need for multi-scaling is also considered. The effect of noise and fluctuations in the simulations is considered in the Monte-Carlo model, a cellular-automaton (CA) model and in the Cahn-Hilliard-Cook equation. The Cook noise term in the CHC equation is found to be important for correct growth scaling properties.
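For reference, the Lifshitz-Slyozov law predicts domain-size growth R(t) ∝ t^(1/3). The sketch below shows how a growth exponent is fitted from size-versus-time data on log-log axes; the numbers are synthetic, not the thesis's Fe-Cr measurements.

```python
import numpy as np

# Synthetic domain sizes following ideal Lifshitz-Slyozov growth, R = A * t**(1/3)
t = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])   # simulation time
R = 2.0 * t ** (1 / 3)                                  # measured domain size

# On log-log axes the power law is a straight line; its slope is the exponent
exponent, log_amplitude = np.polyfit(np.log(t), np.log(R), 1)
```

Deviations of the fitted exponent from 1/3 in real data are what motivate the corrections (such as Huse's) discussed in the abstract.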
17

Predoehl, Andrew M. "Time domain antenna pattern measurements." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-11072008-063651/.

18

Lejdfors, Calle. "Techniques for implementing embedded domain specific languages in dynamic languages /." Lund : Department of Computer Science, Lund Institute of Technology, Lund University, 2006. http://www.cs.lth.se/home/Calle_Lejdfors/publications/lic.pdf.

19

Xu, Yixiao. "Implementation of P [rho]-domain rate control /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p1422976.

20

Cheung, Sunny Ping San. "Electrostatic force sampling of digital waveforms using synchronous time domain gating." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0026/MQ51694.pdf.

21

Kapinchev, Konstantin. "Scalable parallel optimization of digital signal processing in the Fourier domain." Thesis, University of Kent, 2017. https://kar.kent.ac.uk/61075/.

Abstract:
The aim of the research presented in this thesis is to study different approaches to the parallel optimization of digital signal processing algorithms and optical coherence tomography methods. The parallel approaches are based on multithreading for multi-core and many-core architectures. The thesis follows the process of designing and implementing the parallel algorithms and programs and their integration into optical coherence tomography systems. Evaluations of the performance and the scalability of the proposed parallel solutions are presented. The digital signal processing considered in this thesis is divided into two groups. The first one includes generally employed algorithms operating with digital signals in Fourier domain. Those include forward and inverse Fourier transform, cross-correlation, convolution and others. The second group involves optical coherence tomography methods, which incorporate the aforementioned algorithms. These methods are used to generate cross-sectional, en-face and confocal images. Identifying the optimal parallel approaches to these methods allows improvements in the generated imagery in terms of performance and content. The proposed parallel accelerations lead to the generation of comprehensive imagery in real-time. Providing detailed visual information in real-time improves the utilization of the optical coherence tomography systems, especially in areas such as ophthalmology.
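Among the Fourier-domain algorithms the abstract lists (forward and inverse transform, cross-correlation, convolution), cross-correlation via the correlation theorem is representative. A minimal NumPy sketch, not the thesis's OCT code:

```python
import numpy as np

def xcorr_fft(a, b):
    """Circular cross-correlation of two equal-length signals via the FFT,
    using the correlation theorem: corr(a, b) = IFFT(FFT(a) * conj(FFT(b)))."""
    return np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real

# A delayed copy of a signal produces a correlation peak at the delay
rng = np.random.default_rng(1)
s = rng.normal(size=128)
delayed = np.roll(s, 5)
lag = int(np.argmax(xcorr_fft(delayed, s)))   # recovered delay
```

Computing the correlation through the FFT costs O(N log N) rather than O(N²), which is what makes these operations attractive for real-time parallel implementations.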
22

Stelly, Christopher D. "A Domain Specific Language for Digital Forensics and Incident Response Analysis." ScholarWorks@UNO, 2019. https://scholarworks.uno.edu/td/2706.

Abstract:
One of the longstanding conceptual problems in digital forensics is the dichotomy between the need for verifiable and reproducible forensic investigations, and the lack of practical mechanisms to accomplish them. With nearly four decades of professional digital forensic practice, investigator notes are still the primary source of reproducibility information, and much of it is tied to the functions of specific, often proprietary, tools. The lack of a formal means of specification for digital forensic operations results in three major problems. Specifically, there is a critical lack of: a) standardized and automated means to scientifically verify accuracy of digital forensic tools; b) methods to reliably reproduce forensic computations (their results); and c) framework for inter-operability among forensic tools. Additionally, there is no standardized means for communicating software requirements between users, researchers and developers, resulting in a mismatch in expectations. Combined with the exponential growth in data volume and complexity of applications and systems to be investigated, all of these concerns result in major case backlogs and inherently reduce the reliability of the digital forensic analyses. This work proposes a new approach to the specification of forensic computations, such that the above concerns can be addressed on a scientific basis with a new domain specific language (DSL) called nugget. DSLs are specialized languages that aim to address the concerns of particular domains by providing practical abstractions. Successful DSLs, such as SQL, can transform an application domain by providing a standardized way for users to communicate what they need without specifying how the computation should be performed. 
This is the first effort to build a DSL for (digital) forensic computations with the following research goals: 1) provide an intuitive formal specification language that covers core types of forensic computations and common data types; 2) provide a mechanism to extend the language that can incorporate arbitrary computations; 3) provide a prototype execution environment that allows the fully automatic execution of the computation; 4) provide a complete, formal, and auditable log of computations that can be used to reproduce an investigation; 5) demonstrate cloud-ready processing that can match the growth in data volumes and complexity.
23

Nayebi, Kambiz. "A time domain framework for the analysis and design of FIR multirate filter bank systems." Diss., Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/13867.

24

Argyriou, Vasileios. "Advanced motion estimation algorithms in the frequency domain for digital video applications." Thesis, University of Surrey, 2006. http://epubs.surrey.ac.uk/843027/.

Abstract:
Motion estimation is a technique that is used frequently within the fields of image and video processing. Motion estimation describes the process of determining the motion between two or more frames in an image sequence. There are several different approaches to estimating the motion present within a scene. In general, the most well known motion estimation algorithms can be separated into fixed or variable block size, object based and dense motion estimation methods. Motion estimation has a variety of important applications, such as video coding, frame rate conversion, de-interlacing, object tracking and spatio-temporal segmentation. Furthermore, there are medical, military and security applications. The proper motion measurement method is selected based on the application and the available computational power. Several such motion estimation techniques are described in detail, all of which operate in the frequency domain and are based on phase correlation. The main objective and purpose of this study is to improve the state-of-the-art motion estimation techniques that operate in the frequency domain, based on phase correlation, and to introduce novel methods providing more accurate and reliable estimates. Furthermore, research is carried out to investigate and suggest algorithms for all motion estimation categories, based on the density of the optical flow, starting from block-based and moving to dense vector fields. Highly accurate and computationally efficient block-based techniques, utilising either gradient information or hypercomplex correlation, are suggested as being suitable for estimation of motion in video sequences, improving on the baseline phase correlation method. Furthermore, a novel sub-pixel motion estimation technique using phase correlation, resulting in high-accuracy motion estimates, is presented in detail.
A quad-tree scheme for obtaining variable size block-based sub-pixel estimates of interframe motion in the frequency domain is proposed, using either key features of the phase correlation surface or the motion compensated prediction error to manage the partition of a parent block into four children quadrants. Sub-pixel estimates for arbitrarily shaped regions are obtained in a proposed frequency domain scheme, without extrapolation being required, based on phase correlation and the shape adaptive discrete Fourier transform. In the last part of this study, two fast dense motion estimation methods operating in the frequency domain are presented, based either on texture segmentation or multi overlapped correlation, utilising either weighted averages or the novel gradient normalised convolution to restore missing motion vectors of the resulting dense vector field, and requiring significantly lower computational power compared to spatial and robust algorithms. Based on the performance study of the proposed frequency domain motion estimation techniques, performance advantages over the baseline phase correlation are achieved in terms of the motion compensated prediction error and zero-order entropy, indicating a higher level of compressibility and improved motion vector coherence. One of the most attractive features of the proposed schemes is that they enjoy a high degree of computational efficiency and can be implemented by fast transformation algorithms in the frequency domain. In conclusion, it should be mentioned that according to the results of each of the proposed schemes, their complexity and performance make them attractive for low computational power and real time applications. Furthermore, they provide estimates comparable to spatial domain techniques and closer to the real motion present in the scene, making them suitable for object tracking and 3-D scene reconstruction.
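The baseline phase correlation method that these techniques build on fits in a few lines of NumPy. This sketch recovers only integer-pixel shifts; the sub-pixel, quad-tree, and shape-adaptive extensions are the thesis's contributions and are not reproduced here.

```python
import numpy as np

def phase_correlation(f, g, eps=1e-9):
    """Estimate the integer (dy, dx) translation taking image g to image f.

    The normalized cross-power spectrum keeps phase only, so its inverse
    FFT is a correlation surface with a sharp peak at the shift."""
    R = np.fft.fft2(f) * np.conj(np.fft.fft2(g))
    R /= np.abs(R) + eps                      # whiten: keep phase only
    surface = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    h, w = f.shape                            # wrap to signed displacements
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
moved = np.roll(img, shift=(3, -7), axis=(0, 1))
dy, dx = phase_correlation(moved, img)
```

Because the whitened spectrum discards magnitude, the peak stays sharp under illumination changes, which is part of what makes phase correlation a robust baseline.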
25

Zhu, Qinwei. "5SGraph: A Modeling Tool for Digital Libraries." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35832.

Abstract:
The high demand for building digital libraries by non-experts requires a simplified modeling process and rapid generation of digital libraries. To enable rapid generation, digital libraries should be modeled with descriptive languages. A visual modeling tool would be helpful to non-experts so they may model a digital library without knowing the theoretical foundations and the syntactical details of the descriptive language. In this thesis, we describe the design and implementation of a domain-specific visual modeling tool, 5SGraph, aimed at modeling digital libraries. 5SGraph is based on a metamodel that describes digital libraries using the 5S theory. The output from 5SGraph is a digital library model that is an instance of the metamodel, expressed in the 5S description language (5SL). 5SGraph presents the metamodel in a structured toolbox, and provides a top-down visual building environment for designers. The visual proximity of the metamodel and instance model facilitates requirements gathering and simplifies the modeling process. Furthermore, 5SGraph maintains semantic constraints specified by the 5S metamodel and enforces these constraints over the instance model to ensure semantic consistency and correctness. 5SGraph enables component reuse to reduce the time and efforts of designers. The results from a pilot usability test confirm the usefulness of 5SGraph.
26

Liao, Zhiwu. "Image denoising using wavelet domain hidden Markov models." HKBU Institutional Repository, 2005. http://repository.hkbu.edu.hk/etd_ra/616.

27

Naraharisetti, Sahasan. "Region aware DCT domain invisible robust blind watermarking for color images." Thesis, University of North Texas, 2008. https://digital.library.unt.edu/ark:/67531/metadc9748/.

Abstract:
The multimedia revolution has made a strong impact on our society. The explosive growth of the Internet and the access to this digital information generate new opportunities and challenges. The ease of editing and duplication in the digital domain created the concern of copyright protection for content providers. Various schemes to embed secondary data in digital media have been investigated to preserve copyright and to discourage unauthorized duplication, and digital watermarking is a viable solution. This thesis proposes a novel invisible watermarking scheme: a discrete cosine transform (DCT) domain based watermark embedding and blind extraction algorithm for copyright protection of color images. Testing of the proposed watermarking scheme's robustness and security via different benchmarks proves its resilience to digital attacks. The detector's response, PSNR, and RMSE results show that the algorithm has better security performance than most existing algorithms.
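The general shape of DCT-domain invisible watermarking can be sketched by hiding one bit in the parity of a quantized mid-band coefficient of an 8x8 block. This is a generic illustrative scheme: the coefficient position, strength, and parity rule are assumptions for the sketch, not the thesis's region-aware algorithm.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Mid-band coefficient and embedding strength: illustrative choices
POS, STRENGTH = (3, 4), 12.0

def embed_bit(block, bit):
    """Quantize one mid-band DCT coefficient so its parity encodes the bit."""
    c = dctn(block, norm="ortho")
    q = np.round(c[POS] / STRENGTH)
    if int(q) % 2 != bit:
        q += 1                         # nudge parity to match the bit
    c[POS] = q * STRENGTH
    return idctn(c, norm="ortho")      # back to (slightly altered) pixels

def extract_bit(block):
    """Blind extraction: re-read the coefficient's parity, no original needed."""
    c = dctn(block, norm="ortho")
    return int(np.round(c[POS] / STRENGTH)) % 2

rng = np.random.default_rng(0)
blk = rng.uniform(0, 255, size=(8, 8))
bit_1 = extract_bit(embed_bit(blk, 1))
bit_0 = extract_bit(embed_bit(blk, 0))
```

Mid-band coefficients are a common choice because low-band changes are visible and high-band changes are destroyed by compression; extraction needs no original image, which is what "blind" means here.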
APA, Harvard, Vancouver, ISO, and other styles
28

Lanciani, Christopher A. "Compressed-domain processing of MPEG audio signals." Diss., Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/13760.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Naraharisetti, Sahasan Mohanty Saraju. "Region aware DCT domain invisible robust blind watermarking for color images." [Denton, Tex.] : University of North Texas, 2008. http://digital.library.unt.edu/permalink/meta-dc-9748.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Krygielová, Magdaléna. "Author Disambiguation in the Domain of Scholarly Literature." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236217.

Full text
Abstract:
This thesis deals with author disambiguation in databases of scholarly literature. Because author names are ambiguous, such databases have problems attributing publications to their authors, and consequently with citation analysis, analysis of author impact, and so on. This thesis addresses the question of estimating the correct number of authors and examines the possibility of using existing services. Part of this work is the design of a method for author disambiguation. The method was implemented and evaluated within the CORE system.
APA, Harvard, Vancouver, ISO, and other styles
31

Weaver, Mathew Jon. "Enhancing a domain-specific digital library with metadata based on hierarchical controlled vocabularies /." Full text open access at:, 2005. http://content.ohsu.edu/u?/etd,4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Ren, Beibei. "A domain-specific cell based asic design methodology for digital signal processing applications /." Diss., Digital Dissertations Database. Restricted to UC campuses, 2005. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Compton, Bradley Wendell. "The domain shared by computational and digital ontology a phenomenological exploration and analysis /." Tallahassee, Florida : Florida State University, 2009. http://etd.lib.fsu.edu/theses/available/etd-07132009-125543/.

Full text
Abstract:
Thesis (Ph. D.)--Florida State University, 2009.
Advisor: Kathleen Burnett, Florida State University, College of Information. Title and description from dissertation home page (viewed on Nov. 5, 2009). Document formatted into pages; contains viii, 86 pages. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
34

Chen, Yuxin. "A Novel Hybrid Focused Crawling Algorithm to Build Domain-Specific Collections." Diss., Virginia Tech, 2007. http://hdl.handle.net/10919/26220.

Full text
Abstract:
The Web, containing a large amount of useful information and resources, is expanding rapidly. Collecting domain-specific documents and information from the Web is one of the most important methods of building digital libraries for the scientific community. Focused crawlers can selectively retrieve Web documents relevant to a specific domain to build collections for domain-specific search engines or digital libraries. Traditional focused crawlers, which normally adopt the simple Vector Space Model and local Web search algorithms, typically find relevant Web pages only with low precision. Recall is also often low, since they explore a limited sub-graph of the Web that surrounds the starting URL set and ignore relevant pages outside this sub-graph. In this work, we investigated how to apply an inductive machine learning algorithm and a meta-search technique to the traditional focused crawling process to overcome the above-mentioned problems and to improve performance. We proposed a novel hybrid focused crawling framework based on Genetic Programming (GP) and meta-search. We showed that our novel hybrid framework can be applied to traditional focused crawlers to accurately find more relevant Web documents for use in digital libraries and domain-specific search engines. The framework is validated through experiments performed on test documents from the Open Directory Project. Our studies have shown that improvement can be achieved relative to the traditional focused crawler if genetic programming and meta-search methods are introduced into the focused crawling process.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
35

Gokozan, Tolga. "Template Based Image Watermarking In The Fractional Fourier Domain." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12605837/index.pdf.

Full text
Abstract:
One of the main features of digital technology is that digital media can be duplicated and reproduced easily. However, this allows unauthorized and illegal use of information, i.e. data piracy. To protect digital media against illegal attempts, a signal, called a watermark, is embedded into the multimedia data in a robust and invisible manner. A watermark is a short sequence of information, which contains the owner's identity. It is used as evidence of ownership and for copyright purposes. In this thesis, we use the fractional Fourier transform (FrFT) domain, which combines the space and spatial-frequency domains, for watermark embedding, and implement the well-known secure spread spectrum watermarking approach. However, the spread spectrum watermarking scheme is fragile against geometrical attacks such as rotation and scaling. To gain robustness against geometrical attacks, an invisible template is inserted into the watermarked image in the Fourier transform domain. The template contains no information in itself but is used to detect the transformations undergone by the image. Once the template is detected, these transformations are inverted and the watermark signal is decoded. Watermark embedding is performed by considering the masking characteristics of the Human Visual System, to ensure watermark invisibility. In addition, we implement watermarking algorithms that use different transform domains, such as the discrete cosine transform, discrete Fourier transform, and discrete wavelet transform domains, for watermark embedding. The performance of these algorithms and of the FrFT domain watermarking scheme is evaluated against various attacks and distortions, and their robustness is compared.
APA, Harvard, Vancouver, ISO, and other styles
36

Walker, Richard John. "Fully digital, phase-domain ΔΣ 3D range image sensor in 130nm CMOS imaging technology." Thesis, University of Edinburgh, 2012. http://hdl.handle.net/1842/6214.

Full text
Abstract:
Three-Dimensional (3D) optical range-imaging is a field experiencing rapid growth, expanding into a wide variety of machine vision applications, most recently including consumer gaming. Time of Flight (ToF) cameras, akin to RADAR with light, sense distance by measuring the round trip time of modulated Infra-Red (IR) illumination light projected into the scene and reflected back to the camera. Such systems generate 'depth maps' without requiring the complex processing utilised by other 3D imaging techniques such as stereo vision and structured light. Existing range-imaging solutions within the ToF category either perform demodulation in the analogue domain, and are therefore susceptible to noise and non-uniformities, or digitally detect individual photons using a Single Photon Avalanche Diode (SPAD), generating large volumes of raw data. In both cases, external processing is required in order to calculate a distance estimate from this raw information. To address these limitations, this thesis explores alternative system architectures for ToF range imaging. Specifically, a new pixel concept is presented, coupling a SPAD for accurate detection of the arrival time of photons to an all-digital Phase-Domain Delta-Sigma (PDΔΣ) loop for the first time. This processes the SPAD pulses locally, converging to estimate the mean phase of the incoming photons with respect to the outgoing illumination light. A 128×96 pixel sensor was created to demonstrate this principle. By incorporating all of the steps in the range-imaging process – from time resolved photon detection with SPADs, through phase extraction with the in-pixel phase-domain ΔΣ loop, to depth map creation with on-chip decimation filters – this sensor is the first fully integrated 3D camera-on-a-chip to be published.
It is implemented in a 130nm CMOS imaging process, the most modern technology used in 3D imaging work presented to date, enabled by the recent availability of a very low noise SPAD structure in this process. Excellent linearity of ±5mm is obtained, although the 1σ repeatability error was limited to 160mm by a number of factors. While the dimensions of the current pixel prevent the implementation of very high resolution arrays, the all-digital nature of this technique will scale well if manufactured in a more advanced CMOS imaging process such as the 90nm or 65nm nodes. Repartitioning of the logic could enhance fill factor further. The presented characterisation results nevertheless serve as first validation of a new concept in 3D range-imaging, while proposals for its future refinement are presented.
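For reference, phase-measuring ToF sensors of this kind convert the estimated mean phase of the returning modulated light into distance via d = c·φ / (4π·f_mod), with an unambiguous one-way range of c / (2·f_mod). A minimal sketch, with the 20 MHz modulation frequency being an assumption for illustration rather than a figure from the thesis:

```python
from math import pi

C_LIGHT = 299_792_458.0     # speed of light, m/s
f_mod = 20e6                # illumination modulation frequency (assumed), Hz
phase = pi / 2              # measured mean phase offset, radians

# One modulation period corresponds to a round trip of c / f_mod metres,
# i.e. an unambiguous one-way range of c / (2 * f_mod) ≈ 7.5 m here.
distance = C_LIGHT * phase / (4 * pi * f_mod)
print(round(distance, 3))   # → 1.874
```

Targets beyond the unambiguous range alias back into it, which is why the modulation frequency sets a trade-off between range and depth resolution.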
APA, Harvard, Vancouver, ISO, and other styles
37

Smith, Megan Leigh. "Claiming the portable home/ creative acts of identity placemaking within the networked digital domain." Thesis, Leeds Beckett University, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.538324.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Noorkami, Maneli. "Secure and Robust Compressed-Domain Video Watermarking for H.264." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16267.

Full text
Abstract:
The objective of this thesis is to present a robust watermarking algorithm for H.264 and to address challenges in compressed-domain video watermarking. To embed a perceptually invisible watermark in highly compressed H.264 video, we use a human visual model. We extend Watson's human visual model developed for 8x8 DCT block to the 4x4 block used in H.264. In addition, we use P-frames to increase the watermark payload. The challenge in embedding the watermark in P-frames is that the video bit rate can increase significantly. By using the structure of the encoder, we significantly reduce the increase in video bit rate due to watermarking. Our method also exploits both temporal and texture masking. We build a theoretical framework for watermark detection using a likelihood ratio test. This framework is used to develop two different video watermark detection algorithms; one detects the watermark only from watermarked coefficients and one detects the watermark from all the ac coefficients in the video. These algorithms can be used in different video watermark detection applications where the detector knows and does not know the precise location of watermarked coefficients. Both watermark detection schemes obtain video watermark detection with controllable detection performance. Furthermore, control of the detector's performance lies completely with the detector and does not place any burden on the watermark embedding system. Therefore, if the video has been attacked, the detector can maintain the same detection performance by using more frames to obtain its detection response. This is not the case with images, since there is a limited number of coefficients that can be watermarked in each image before the watermark is visible.
APA, Harvard, Vancouver, ISO, and other styles
39

Gray, Andrew, Meera Srinivasan, Marvin Simon, and Tsun-Yee Yan. "FLEXIBLE ALL-DIGITAL RECEIVER FOR BANDWIDTH EFFICIENT MODULATIONS." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/608745.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
An all-digital high data rate parallel receiver architecture developed jointly by Goddard Space Flight Center and the Jet Propulsion Laboratory is presented. This receiver utilizes only a small number of high speed components along with a majority of lower speed components operating in a parallel frequency domain structure implementable in CMOS, and can process over 600 Mbps with numerous varieties of QPSK modulation, including those incorporating precise pulse shaping for bandwidth efficient modulation. Performance results for this receiver for bandwidth efficient QPSK modulation schemes such as square-root raised cosine pulse shaped QPSK and Feher's patented QPSK are presented, demonstrating the great degree of flexibility and high performance of the receiver architecture.
APA, Harvard, Vancouver, ISO, and other styles
40

Soukal, David. "Advanced steganographic and steganalytic methods in the spatial domain." Diss., Online access via UMI:, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
41

Chau, Michael, Hsinchun Chen, Jailun Qin, Yilu Zhou, Yi Qin, Wai-Ki Sung, and Daniel M. McDonald. "Comparison of Two Approaches to Building a Vertical Search Tool: A Case Study in the Nanotechnology Domain." ACM/IEEE-CS, 2002. http://hdl.handle.net/10150/105990.

Full text
Abstract:
Artificial Intelligence Lab, Department of MIS, University of Arizona
As the Web has been growing exponentially, it has become increasingly difficult to search for desired information. In recent years, many domain-specific (vertical) search tools have been developed to serve the information needs of specific fields. This paper describes two approaches to building a domain-specific search tool. We report our experience in building two different tools in the nanotechnology domain - (1) a server-side search engine, and (2) a client-side search agent. The designs of the two search systems are presented and discussed, and their strengths and weaknesses are compared. Some future research directions are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
42

Daita, Viswanath. "Behavioral VHDL implementation of coherent digital GPS signal receiver." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000554.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Bugajska, Malgorzata. "Spatial visualization of abstract information : a classification model for visual design guidelines in the digital domain /." [S.l.] : [s.n.], 2003. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=14903.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Parle, John A. "Phase domain transmission line modelling for EMTP-type studies with application to real-time digital simulation." Thesis, University of Glasgow, 2000. http://theses.gla.ac.uk/3756/.

Full text
Abstract:
This research project is primarily concerned with the development of a new generation of power transmission line models for both non-real-time and real-time electromagnetic transient studies. The method proposed is entirely formulated in phase co-ordinates, avoiding the use of modal transformation matrices at every stage in the analysis. In comparison, the phase domain models presented thus far in the open literature have all incorporated the concept of modal decomposition in the initial frequency domain formulation of the problem; only the time domain analysis is conducted in the phase domain. These models can therefore be regarded as a hybrid between the phase and modal methodologies. Algorithms are presented which allow accurate and efficient determination of the characteristic admittance matrix, Yc(ω), and the wave propagation matrix, H(ω), directly in phase co-ordinates. A Padé iteration scheme is used for evaluating the characteristic admittance matrix, derived by exploiting a relationship between the matrix sign function and the matrix square root. Padé techniques have also been used to approximate the matrix exponential in order to evaluate the wave propagation function. By evaluating Yc(ω) and H(ω) directly in phase co-ordinates, any imbalances naturally present in the line are intrinsically taken into account in these functions. Both methods have been extensively tested using line configurations of different size and complexity, and both algorithms are shown to be very robust, accurate and efficient in all cases. One of the main difficulties in formulating the analysis entirely in phase co-ordinates for multiconductor systems concerns the unwinding of the wave propagation matrix. This is addressed in this research by evaluating a matrix phase shift function in phase co-ordinates. Since the method inherently takes into account the coupled time delays of the line, the elements of H(ω) can be successfully unwound, irrespective of the configuration of the line, e.g. single-circuit, multi-circuit or asymmetrical.
APA, Harvard, Vancouver, ISO, and other styles
45

Ke, Jain-Yu, and 柯建羽. "Researches on Transform Domain Digital Watermarking." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/00617523742945649709.

Full text
Abstract:
Master's
Chung Yuan Christian University
Department of Electrical Engineering
99
As technology improves, use of the internet becomes more and more popular, and both the quality and quantity of information transferred over the internet have improved greatly. When digital data are communicated over the internet, hackers can easily and illegally copy them and redistribute them. If the data are not protected, this results in violations of the legal owners' intellectual copyright. Therefore, researchers have proposed methods of adding digital watermarks to the data. Hiding a digital watermark in multimedia content (such as video, audio, or images) provides copyright protection. This thesis studies invisible watermark embedding in the transform domain, a robust way to embed a watermark so that it resists common image attacks (such as image cropping, Gaussian noise addition, or filtering, etc.). We focus on the discrete fractional random transform (DFRNT) and on embedding the watermark in the DFRNT domain, which offers better robustness than other transforms (such as the discrete Fourier transform or the discrete cosine transform). In Chapter 4 of the thesis, we discuss many experiments with their results and also propose a modified method that embeds the watermark in the amplitudes of the transform domain.
APA, Harvard, Vancouver, ISO, and other styles
46

Kuo, We-Shen, and 郭偉晟. "Robust two-domain for digital image watermark." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/13237969733211052958.

Full text
Abstract:
Master's
National Ilan University
Master's Program, Department of Electronic Engineering
96
As many current image watermarking technologies have difficulty coping with general and geometric attacks at the same time, the motive of this thesis is to integrate a variety of technologies to allow the watermark to survive various malicious attacks. In practice, a dual-domain watermark embedding scheme is developed for this purpose.   In our study, the dual-domain watermarking employs the Discrete Wavelet Transform (DWT) and the Discrete Fourier Transform (DFT). The DWT has multi-band resolution properties, so the watermark can be embedded in the low-frequency band that is less susceptible to damage. On the other hand, the DFT has geometric-invariance properties that can withstand geometric attacks. Our scheme embeds a trial template in the DFT amplitude for judging the possibility of geometric distortion during watermark extraction.   In addition, in order to enhance the obscurity and security of the watermarks, we use the spread spectrum technique along with chaos theory. The experimental results show that our dual-domain watermark framework is superior to many others. Not only can its PSNR reach 45 dB, but it can effectively shield against general attacks (e.g. salt-and-pepper noise, Gaussian noise, JPEG compression, etc.) as well as geometric attacks (e.g. rotation, zooming, cropping, etc.). This study proves that our scheme constitutes a very effective and robust watermarking algorithm.
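The spread-spectrum component mentioned above is, in its simplest form, additive embedding of a key-dependent pseudo-random sequence into transform coefficients, followed by correlation detection. A minimal sketch of that general idea (not the thesis's dual-domain DWT/DFT scheme; the coefficient model, key, and embedding strength are all illustrative assumptions):

```python
import numpy as np

key = 7                                          # secret seed shared by embedder and detector
rng = np.random.default_rng(0)
coeffs = rng.normal(0.0, 10.0, 4096)             # stand-in for mid-band transform coefficients

# The key deterministically regenerates the same ±1 pseudo-random sequence.
pn = np.random.default_rng(key).choice([-1.0, 1.0], size=coeffs.size)

alpha = 2.0                                      # embedding strength (invisibility trade-off)
watermarked = coeffs + alpha * pn                # additive spread-spectrum embedding

# Blind detection: correlate with the key's sequence. The host coefficients
# average out, leaving approximately alpha when the watermark is present.
corr = float(watermarked @ pn) / pn.size
print(corr > alpha / 2)                          # → True
```

Without the key, the sequence cannot be regenerated, which is what gives the embedded mark its obscurity; chaos-based scrambling, as in the thesis, serves the same key-dependence role.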
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Chao-Chin, and 張朝欽. "Digital Watermarking in the Domain of Vector Quantization." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/00107484002528323548.

Full text
Abstract:
Master's
National Kaohsiung University of Applied Sciences
Master's Program, Institute of Electronic and Information Engineering
94
This thesis summarizes our study of, and contribution to, compression and digital watermarking techniques in the domain of vector quantization (VQ). After an intensive and comprehensive review of related work, numerous improvements were developed that contribute to the above issues. Three of the main results are presented in this thesis. Firstly, a novel compression technique, named classified side-match VQ, was developed. The proposed scheme integrates positive attributes from both classified VQ and side-match VQ, and results in a higher compression ratio at the same image quality. Secondly, a pioneering steganography scheme in the domain of side-match vector quantization was proposed. The challenge associated with dynamic state codebooks was resolved by two possible alternatives, namely codebook partition by codewords' means and codebook partition by a pseudo-random sequence. Experimental results reveal that the imperceptibility required for secret communication can be ensured with the proposed approaches. Finally, for digital watermarking, an approach using genetic algorithms to reassign the indices of codewords was suggested. As a result, embedded information is diffused more evenly across the image to be protected, and therefore possible security leakage can be avoided. Experimental results reveal that the proposed scheme is free from the potential limitations of previous approaches, while maintaining robustness against various kinds of attacks.
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Hao Jen, and 楊浩任. "A Color Digital Watermark Based on Spatial Domain." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/67151348969827077443.

Full text
Abstract:
Master's
Taichung Healthcare and Management University
Master's Program, Department of Computer Science and Information Engineering
93
Today, technology advances at a tremendous pace, software and hardware prices are affordable, and with the popularization of the Internet, color images can be obtained easily. Affirming copyright is a difficult problem, and digital watermarking is a well-known technology for settling copyright disputes. However, most watermarking techniques embed a binary watermark in a gray-scale image; in the multimedia world, such solutions are insufficient. This thesis proposes a method that uses little computation to embed a color watermark in a color image and does not need the original image to extract the embedded color watermark. First, this study finds the 24 most frequent colors of the color watermark to be embedded and uses these colors as the representative of the color watermark. It then uses a pseudo-random number generator to scramble the represented color watermark and saves the random seed as the secret key. The proposed method embeds the color watermark in the B channel of the RGB color space. We utilize the relation among adjacent pixels of similar gray-scale level, using different permutation orders of the pixel values in each 2x2 block, to represent 24 different colors, and can enlarge the distances among pixels with a parameter d to increase the robustness of the watermark. When extracting the embedded watermark, one only needs to scan the permutation order of each 2x2 block of the disputed image, without comparison to the original image. The experimental results show that the method proposed in this thesis outperforms methods previously proposed in the literature.
APA, Harvard, Vancouver, ISO, and other styles
49

Hu, Yih-Shin, and 胡藝薰. "Retrieving Digital Images from Frequency Domain Transformed Databases." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/31320233318561467827.

Full text
Abstract:
Master's
National Chung Cheng University
Institute of Computer Science and Information Engineering
92
In this thesis, two novel image retrieval schemes based on the frequency domain are presented; that is, users can retrieve digital images from frequency-domain transformed databases. The first scheme is built on a block-based query system. It employs the discrete wavelet transform to map each block from the spatial domain to the wavelet domain. Then, from each transformed block, the mean value and the edge types are extracted. These extracted features are used to compute the similarity between a query image and the images in the database. In order to increase the accuracy of the query result, the current block can be further divided into many sub-blocks, and features can be extracted from these sub-blocks. Finally, the query result is a set of database images ranked with respect to the query. The second scheme improves the efficiency of retrieving Joint Photographic Experts Group (JPEG) compressed images. Its feature extraction can be applied directly to JPEG compressed images. We extract two features, a DC feature and an AC feature, from a JPEG compressed image. Then, we measure the distances between the query image and the images in a database in terms of these two features. This image retrieval scheme gives each retrieved image a rank defining its similarity to the query image. Furthermore, instead of fully decompressing JPEG images, this scheme only needs to perform partial entropy decoding, which accelerates retrieval. According to our experimental results, these two schemes are not only highly efficient but also perform satisfactorily.
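What makes compressed-domain retrieval cheap is that the DC coefficient of each JPEG 8x8 block is just the (scaled) block mean. A minimal sketch of DC-feature ranking, computed here from raw pixel arrays for self-containment (illustrative only; the thesis's scheme additionally uses an AC feature and reads the coefficients from partially entropy-decoded JPEG data):

```python
import numpy as np

def dc_feature(img, block=8):
    # Per-block means: equivalent (up to scaling) to the DC coefficients
    # a JPEG decoder would obtain after partial entropy decoding.
    h, w = img.shape
    trimmed = img[: h - h % block, : w - w % block]
    means = trimmed.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return means.ravel()

def rank_by_similarity(query, database):
    # Rank database images by Euclidean distance between DC feature vectors.
    qf = dc_feature(query)
    dists = [np.linalg.norm(qf - dc_feature(img)) for img in database]
    return np.argsort(dists)

rng = np.random.default_rng(0)
db = [rng.uniform(0, 255, (64, 64)) for _ in range(5)]
query = db[3] + rng.normal(0, 2, (64, 64))    # a slightly noisy copy of image 3
print(rank_by_similarity(query, db)[0])       # nearest image: 3
```

Returning a full ranking rather than a single match is what lets such a scheme present results ordered by similarity, as the abstract describes.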
APA, Harvard, Vancouver, ISO, and other styles
50

Soewito, Atmadji Wiseso. "Least square digital filter design in the frequency domain." Thesis, 1991. http://hdl.handle.net/1911/16483.

Full text
Abstract:
This thesis develops new methods for obtaining optimal frequency domain approximations in the design of digital filters. The approach uses a squared error approximation criterion and allows a transition band in the desired frequency response specification. Four particular cases are considered. The first case considers FIR filters whose frequency responses include transition bands and whose errors are uniformly weighted. The new technique defines the exact transition edges, maintains optimality, reduces the Gibbs phenomenon, and has low computational requirements. Two error measures are investigated: the discrete squared error and the integral squared error. The second case involves the near-singularity problems which occur in the design of FIR filters with zero transition band error. A new approach is developed and a considerable improvement over existing techniques is obtained. This new design method can accommodate almost any weighting function, and has computational complexity comparable to that of existing methods. The third case involves complex frequency domain approximation in IIR filter design. A technique based on the quasilinearization method is developed, and comparison shows that in most design examples the new approach converges more rapidly than other competitive algorithms. Unlike other frequency domain linearization methods, the new algorithm does not modify the error criterion. The fourth case is an IIR filter design method with a magnitude specification in the frequency domain. The problem is formulated as a successive complex frequency domain approximation. A new algorithm to solve this problem is developed, and experiments indicate that it converges faster than its competitors for most filter design examples.
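The first case above, a uniformly weighted FIR fit with the transition band excluded from the error, can be sketched as an ordinary linear least-squares problem over a frequency grid. This is a generic illustration, not the thesis's specific algorithm; the tap count, cutoff, and transition width are assumptions.

```python
import numpy as np

def ls_lowpass(num_taps=31, cutoff=0.25, grid=256):
    # Discrete least-squares fit of a linear-phase FIR filter to a desired
    # frequency response, leaving the transition band out of the error measure.
    w = np.linspace(0, np.pi, grid)                  # frequency grid
    desired = (w <= cutoff * np.pi).astype(float)    # ideal low-pass magnitude
    keep = (w <= cutoff * np.pi) | (w >= (cutoff + 0.1) * np.pi)  # skip transition band
    n = np.arange(num_taps)
    delay = (num_taps - 1) / 2
    # Factor out the linear phase e^{-j w delay}; a symmetric real filter then
    # reduces the complex fit to a real cosine-basis least-squares problem.
    A = np.cos(np.outer(w[keep], n - delay))
    h, *_ = np.linalg.lstsq(A, desired[keep], rcond=None)
    return h

h = ls_lowpass()
H = np.abs(np.fft.rfft(h, 1024))
print(bool(H[0] > 0.9 and H[-1] < 0.1))   # near 1 in the passband, near 0 at Nyquist
```

Because columns n and N-1-n of the cosine matrix coincide, `lstsq`'s minimum-norm solution comes out symmetric, giving exactly linear phase; the transition-band "hole" in the grid is what suppresses the Gibbs overshoot relative to a brick-wall fit.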
APA, Harvard, Vancouver, ISO, and other styles
We offer discounts on all premium plans for authors whose works are included in thematic literature selections. Contact us to get a unique promo code!

To the bibliography