
Dissertations / Theses on the topic 'Frame Processing'

Consult the top 50 dissertations / theses for your research on the topic 'Frame Processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Saeedi, Mohammed Hashem. "Parallel processing of frame-based networks." Thesis, Sheffield Hallam University, 1993. http://shura.shu.ac.uk/20308/.

Full text
Abstract:
This project involved the development of a simulation of a rectangular array of Processing Elements (PEs), with a dedicated frame-based knowledge representation language. The main objective of the project was to analyse and quantify the gain in speed of execution in a parallel environment, as compared with serial processing. The computational model of the language consisted of two main components: the knowledge base and the replicated/distributed inference engine. The knowledge base was assumed to represent real-world knowledge, in that it consisted of a large volume of information, which was divided into domains and hierarchies. When a query is made, appropriate portions of the knowledge base are mapped to the array of PEs on a one-to-one basis (one frame per PE), where each PE is capable of performing any relevant operations itself. The execution of a query is based on the propagation of messages across the array of PEs, where each message is contained in a data packet. Each packet holds the query frame, created by interacting with the user, together with other relevant information used for knowledge manipulation. The main inference mechanism in the system is based on the parallel inheritance of properties, where each data packet carries inherited data from higher-level to lower-level frames within the appropriate hierarchies. As each packet arrives at a PE which contains a relevant frame, a series of matching and, consequently, inheritance operations are performed. An algorithm, superimposed at the highest level of the system, computes time delays in relation to the overall architecture of the machine. There are two main operations for which time penalties are calculated: frame processing and communication. The frame processing involves matching and inheritance operations, and the communication operation involves message passing and data packet traversal.
During each execution cycle, the time penalties for both processing and communication are computed and stored in a file. These files are then used by a graphics package which transforms the numerical data into a set of graphs. These graphs are utilised in the analysis of the behaviour of the simulation. The analysis of the test runs, and of their associated graphs, has yielded positive and encouraging results, demonstrating that there can be an average of a 35-fold gain in the speed of execution.
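The parallel inheritance of properties described in this abstract can be illustrated with a small serial sketch: frames arranged in a hierarchy, with lower-level frames overriding inherited slot values. The frame and slot names below are hypothetical, and the real system performs this per-frame work in parallel across the PE array.

```python
# Serial sketch of frame-based property inheritance (the thesis maps one
# frame per PE and performs these operations in parallel). All frame and
# slot names below are illustrative, not taken from the thesis.
frames = {
    "vehicle": {"parent": None,      "slots": {"wheels": 4, "powered": True}},
    "car":     {"parent": "vehicle", "slots": {"doors": 4}},
    "coupe":   {"parent": "car",     "slots": {"doors": 2}},
}

def resolve(frame_name):
    """Collect slots from root to leaf, letting lower-level frames
    override inherited values -- the 'inheritance of properties'."""
    chain = []
    while frame_name is not None:
        chain.append(frame_name)
        frame_name = frames[frame_name]["parent"]
    slots = {}
    for name in reversed(chain):   # root first, leaf last
        slots.update(frames[name]["slots"])
    return slots
```

In the simulated machine, the equivalent of `resolve` is distributed: a query packet visits the PEs holding each frame in the chain and accumulates the inherited data as it propagates.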
APA, Harvard, Vancouver, ISO, and other styles
2

Pan, Min-Cheng. "Frame to frame integration and recognition for dynamic imagery." Thesis, University of Reading, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.286022.

Full text
3

Toal, C. J. "Exploration of high performance frame processing architectures." Thesis, Queen's University Belfast, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.431455.

Full text
4

Dane, Gökçe. "Temporal frame interpolation by motion analysis and processing /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3191984.

Full text
5

Park, Joonam. "A visualization system for nonlinear frame analysis." Thesis, Georgia Institute of Technology, 1999. http://hdl.handle.net/1853/19172.

Full text
6

Engan, Kjersti. "Frame based signal representation and compression." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2000. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-459.

Full text
Abstract:

The demand for efficient communication and data storage is continuously increasing and signal representation and compression are important factors in digital communication and storage systems.

This work deals with Frame based signal representation and compression. The emphasis is on the design of frames suited for efficient representation, or for low bit rate compression, of classes of signals.

Traditional signal decompositions such as transforms, wavelets, and filter banks generate expansions using an analysis-synthesis setting. In this thesis we concentrate on the synthesis or reconstruction part of the signal expansion, having a system with no explicit analysis stage. We want to investigate the use of an overcomplete set of vectors, a frame or an overcomplete dictionary, for signal representation, and to allow sparse representations. Effective signal representations are desirable in many applications, signal compression being one example; others include signal analysis for different purposes, reconstruction of signals from a limited observation set, feature extraction in pattern recognition, and so forth.

The lack of an explicit analysis stage raises some questions about finding the optimal representation. Finding an optimal sparse representation from an overcomplete set of vectors is NP-complete, so suboptimal vector selection methods are more practical. We have used some existing methods, such as different variations of the Matching Pursuit (MP) [52] algorithm, and we developed a robust regularized FOCUSS to be able to use FOCUSS (FOCal Underdetermined System Solver [29]) under lossy conditions.

In this work we develop techniques for frame design, the Method of Optimal Directions (MOD), and propose methods by which such frames can successfully be used in frame-based signal representation and in compression schemes. A Multi Frame Compression (MFC) scheme is presented, and experiments with several signal classes show that the MFC scheme works well at low bit rates using MOD-designed frames. Reconstruction experiments provide complementary evidence of the good properties of the MOD algorithm.
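The suboptimal vector selection the abstract refers to can be sketched with a basic Matching Pursuit loop over an overcomplete dictionary; this is a generic illustration of MP, not the MOD frame design algorithm itself.

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy sparse approximation of x over a dictionary D whose
    columns are unit-norm atoms (an overcomplete frame)."""
    r = x.astype(float).copy()          # residual
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        inner = D.T @ r                 # correlate residual with every atom
        k = int(np.argmax(np.abs(inner)))
        coeffs[k] += inner[k]           # keep the best-matching atom
        r = r - inner[k] * D[:, k]      # remove its contribution
    return coeffs, r
```

For a signal that is an exact combination of a few dictionary atoms, the residual shrinks to zero after that many greedy selections.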

7

Ghuman, Parminder, Toby Bennett, and Jeff Solomon. "Shrinking the Cost of Telemetry Frame Synchronization." International Foundation for Telemetering, 1995. http://hdl.handle.net/10150/611605.

Full text
Abstract:
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada
To support initiatives for cheaper, faster, better ground telemetry systems, the Data Systems Technology Division (DSTD) at NASA Goddard Space Flight Center is developing a new Very Large Scale Integration (VLSI) Application Specific Integrated Circuit (ASIC) targeted to dramatically lower the cost of telemetry frame synchronization. This single VLSI device, known as the Parallel Integrated Frame Synchronizer (PIFS) chip, integrates most of the functionality contained in the high-density 9U VME card frame synchronizer subsystems currently in use. In 1987, a first-generation 20 Mbps VMEBus frame synchronizer based on 2.0 micron CMOS VLSI technology was developed by the Data Systems Technology Division. In 1990, this subsystem architecture was recast using 0.8 micron ECL & GaAs VLSI to achieve 300 Mbps performance. The PIFS chip, based on 0.7 micron CMOS technology, will provide a superset of the current VMEBus subsystem functions at rates up to 500 Mbps at approximately one-tenth of current replication costs. Functions performed by this third-generation device include true and inverted 64-bit marker correlation with programmable error tolerances, programmable frame length and marker patterns, a programmable search-check-lock-flywheel acquisition strategy, slip detection, and CRC error detection. Acquired frames can optionally be annotated with a quality trailer and time stamp. A comprehensive set of cumulative accounting registers is provided on-chip for data quality monitoring. Prototypes of the PIFS chip are expected in October 1995. This paper describes the architecture and implementation of this new low-cost, high-functionality device.
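The marker-correlation function described above can be sketched in software: slide a window over the bit stream and accept true or inverted marker matches within a programmable error tolerance. This is an illustrative bit-level model, not the PIFS hardware design.

```python
def correlate_marker(bits, marker, max_errors):
    """Find positions where the marker (or its inversion) matches the
    bit stream within max_errors bit errors -- the search step of
    frame synchronization."""
    inverted = [1 - b for b in marker]
    m = len(marker)
    hits = []
    for i in range(len(bits) - m + 1):
        window = bits[i:i + m]
        if sum(a != b for a, b in zip(window, marker)) <= max_errors:
            hits.append((i, "true"))
        elif sum(a != b for a, b in zip(window, inverted)) <= max_errors:
            hits.append((i, "inverted"))
    return hits
```

The real device does this at up to 500 Mbps with a 64-bit marker; raising `max_errors` trades false alarms for robustness to channel bit errors.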
8

Ryen, Tom. "Rate-distortion optimal vector selection in frame based compression." Doctoral thesis, Norwegian University of Science and Technology, Faculty of Information Technology, Mathematics and Electrical Engineering, 2005. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-575.

Full text
Abstract:

In signal compression we distinguish between lossless and lossy compression. In lossless compression, the encoded signal is more bit-efficient than the original signal and is exactly the same as the original one when decoded. In lossy compression, the encoded signal represents an approximation of the original signal, but it requires fewer bits. In the latter situation, the major issue is to find the best possible rate-distortion (RD) tradeoff. The rate-distortion function (RDF) represents the theoretical lower bound of the distortion between the original and the reconstructed signal, subject to a given total bit rate for the compressed signal, with respect to any compression scheme. If the compression scheme is given, we can find its operational RDF (ORDF).

The main contribution of this dissertation is the presentation of a method that finds the operational rate-distortion optimal solution for an overcomplete signal decomposition. The idea of using overcomplete dictionaries, or frames, is to get a sparse representation of the signal. Traditionally, suboptimal algorithms, such as Matching Pursuit (MP), are used for this purpose. Given the frame and the Variable Length Codeword (VLC) table embedded in the entropy coder, the solution of the problem of establishing the best RD trade-off has a very high complexity. The proposed method reduces this complexity significantly by structuring the solution approach such that the dependent quantizer allocation problem reduces into an independent one. In addition, the use of a solution tree further reduces the complexity. It is important to note that this large reduction in complexity is achieved without sacrificing optimality. The optimal rate-distortion solution depends on the frame selection and the VLC table embedded in the entropy coder. Thus, frame design and VLC optimization is part of this work.

Extensive coding experiments are presented, where Gaussian AR(1) processes and various electrocardiogram (ECG) signals are used as input signals. The experiments demonstrate that the new approach outperforms Rate-Distortion Optimized (RDO) Matching Pursuit, previously proposed in [17], in the rate-distortion sense.

9

Koh, Kwang-Ryul, Sang-Bum Lee, Taek-Joon Yi, and Whan-Woo Kim. "PC-Based Frame Optimizer Using Multiple PCM Files." International Foundation for Telemetering, 2011. http://hdl.handle.net/10150/595769.

Full text
Abstract:
ITC/USA 2011 Conference Proceedings / The Forty-Seventh Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2011 / Bally's Las Vegas, Las Vegas, Nevada
Many engineers have tried to detect and correct erroneous data in telemetry communications. A best source selector can be used to combine data from two or more bit synchronizers to reduce frame error rates, and an error-correcting code can be used as well. These techniques are certainly helpful for obtaining reliable telemetry data. However, some errors still remain and must be removed. This paper introduces a way to effectively merge multiple PCM files saved at different receiving sites, and shows the nearly error-free data that results from merging flight test data using a PC-based frame optimizer, a program developed for this purpose.
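A simple model of merging redundant recordings is a per-position majority vote across time-aligned copies of the same PCM stream: independent errors at different receiving sites then cancel out. The sketch below assumes the files are already aligned and is not the paper's specific algorithm.

```python
from collections import Counter

def merge_streams(streams):
    """Majority-vote each bit position across aligned recordings
    of the same PCM stream."""
    return [Counter(column).most_common(1)[0][0] for column in zip(*streams)]
```

With three sites and independent single-bit errors, the merged output reproduces the transmitted stream wherever at least two copies agree.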
10

Gunturk, Bahadir K. "Multi-frame information fusion for image and video enhancement." Diss., Available online, Georgia Institute of Technology, 2004:, 2003. http://etd.gatech.edu/theses/available/etd-04072004-180015/unrestricted/gunturk%5Fbahadir%5Fk%5F200312%5Fphd.pdf.

Full text
11

Strohbeck, Uwe. "A new approach in image data compression by multiple resolution frame-processing." Thesis, Northumbria University, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245827.

Full text
12

Oddy, Robert J. "The Animachine renderer : an accurate system for cartoon frame generation." Thesis, University of Bath, 1992. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314526.

Full text
13

Stodart, NP. "The development of a video frame grabber for a PC." Thesis, Cape Technikon, 1993. http://hdl.handle.net/20.500.11838/1159.

Full text
Abstract:
Thesis (Masters Diploma (Electrical Engineering))--Cape Technikon, Cape Town, 1993.
This thesis describes the design and development of a computer vision system. The system (Video Frame Grabber) will give PC users the ability to capture any visual image into the memory of a computer. This computer-intelligible image opens the way for new developments in computer photography, image recognition, and desktop publishing.
14

Alfadda, Abdullah Ibrahim A. "Temporal Frame Difference Using Averaging Filter for Maritime Surveillance." Thesis, Virginia Tech, 2015. http://hdl.handle.net/10919/56583.

Full text
Abstract:
Video surveillance is an active research area in computer vision and machine learning, and it has received a lot of attention in the last few decades. Maritime surveillance is the act of effective detection/recognition of all maritime activities that have an impact on the economy, security, or the environment. The maritime environment is a dynamic environment: factors such as the constant motion of waves, sun reflection over the sea surface, rapid changes in lighting due to that reflection, the movement of clouds, and the presence of moving objects such as airplanes or birds make it very challenging. In this work, we propose a method for detecting the motion generated by a maritime vehicle and then identifying the type of this vehicle using classification methods. A new maritime video database was created and tested. Classification of vehicle type was tested by comparing 13 image features and two SVM solving algorithms. In the motion detection part, multiple smoothing filters were tested in order to minimize the false positive rate generated by water surface movement, and the results were compared to optical flow, a well-known method for motion detection.
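The combination of a smoothing filter with temporal frame differencing can be sketched as follows: both frames are averaged so that small water-surface fluctuations fall below the detection threshold before the frames are differenced. This is a generic illustration of the idea, with an assumed 3x3 filter and threshold, not the author's exact pipeline.

```python
import numpy as np

def mean_filter3(img):
    """3x3 averaging (box) filter with edge padding."""
    p = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + h, dx:dx + w]
    return out / 9.0

def motion_mask(prev_frame, curr_frame, thresh=10.0):
    """Temporal frame difference on smoothed frames: True where the
    absolute difference exceeds the threshold."""
    diff = np.abs(mean_filter3(curr_frame) - mean_filter3(prev_frame))
    return diff > thresh
```

Stronger smoothing suppresses more wave-induced false positives but also blurs the boundary of the detected vehicle, which is exactly the trade-off the thesis evaluates against optical flow.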
Master of Science
15

Levy, Alfred K. "Object tracking in low frame-rate video sequences." Honors in the Major Thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETH/id/339.

Full text
Abstract:
This item is only available in print in the UCF Libraries.
Bachelors
Engineering
Computer Science
16

Dong, Liqin. "Compressed voice in integrated services frame relay networks." Dissertation, Carleton University, Electrical Engineering, Ottawa, 1992.

Find full text
17

Karri, Venkata Praveen. "Effective and Accelerated Informative Frame Filtering in Colonoscopy Videos Using Graphic Processing Units." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc31536/.

Full text
Abstract:
Colonoscopy is an endoscopic technique that allows a physician to inspect the mucosa of the human colon. Previous methods and software solutions for detecting informative frames in a colonoscopy video (a process called informative frame filtering, or IFF) have been largely ineffective in (1) covering the proper definition of an informative frame in the broadest sense and (2) striking an optimal balance between accuracy and speed of classification in both real-time and non-real-time medical procedures. In my thesis, I propose a more effective method and faster software solutions for IFF. The method is more effective due to the introduction of a heuristic algorithm (derived from experimental analysis of typical colon features) for classification, which contributed a 5-10% boost in various performance metrics for IFF. The software modules are faster due to the incorporation of sophisticated parallel-processing-oriented coding techniques on modern microprocessors. Two IFF modules were created, one for post-procedure use and the other for real-time use. Code optimizations through NVIDIA CUDA for GPU processing and/or CPU multi-threading concepts embedded in two significant microprocessor design philosophies (multi-core design and many-core design) resulted in a 5-fold acceleration for the post-procedure module and a 40-fold acceleration for the real-time module. Some innovative software modules, which are still in the testing phase, have recently been created to exploit the power of multiple GPUs together.
18

Tarassu, Jonas. "GPU-Accelerated Frame Pre-Processing for Use in Low Latency Computer Vision Applications." Thesis, Linköpings universitet, Informationskodning, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-142019.

Full text
Abstract:
Attention to low-latency computer vision and video processing applications is growing every year, not least for VR and AR applications. In this thesis the Contrast Limited Adaptive Histogram Equalization (CLAHE) and radial distortion algorithms are implemented using both CUDA and OpenCL to determine whether these types of algorithms are suitable for implementations aimed to run on GPUs when low latency is of utmost importance. The result is an implementation of the block version of the CLAHE algorithm which utilizes the built-in interpolation hardware that resides on the GPU to reduce block effects, and an implementation of the radial distortion algorithm that corrects a 1920x1080 frame in 0.3 ms. Further, this thesis concludes that the GPU platform can be a good choice if the data to be processed can be transferred to, and possibly from, the GPU fast enough, and that the choice of compute API is mostly a matter of taste.
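The radial distortion correction mentioned above can be illustrated with the common one-parameter polynomial model x_d = x_u(1 + k1 r^2); correcting a frame amounts to inverting this mapping per pixel (which the thesis does on the GPU). The fixed-point inversion below is a plain-Python sketch with an assumed coefficient k1, not the thesis implementation.

```python
def distort(x, y, k1):
    """Forward one-parameter radial distortion of normalized coordinates."""
    s = 1.0 + k1 * (x * x + y * y)
    return x * s, y * s

def undistort(xd, yd, k1, iters=10):
    """Invert the model by fixed-point iteration (converges for small k1)."""
    x, y = xd, yd
    for _ in range(iters):
        s = 1.0 + k1 * (x * x + y * y)
        x, y = xd / s, yd / s
    return x, y
```

On a GPU this per-pixel inverse mapping is embarrassingly parallel, which is why a 1920x1080 frame can be corrected in well under a millisecond.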
19

Kei, Chun-Ling. "Efficient complexity reduction methods for short-frame iterative decoding /." View Abstract or Full-Text, 2002. http://library.ust.hk/cgi/db/thesis.pl?ELEC%202002%20KEI.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2002.
Includes bibliographical references (leaves 86-91). Also available in electronic version. Access restricted to campus users.
20

Catelli, Ezio. "Representation functions in Signal Processing." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13530/.

Full text
Abstract:
The purpose of this work is to present the windowed transform, following a mathematical modelling approach. The theoretical part covers the fundamental content, of specific interest for signal analysis, of the short-time Fourier transform and the Wigner-Ville distribution. The practical part presents examples worked out on a computer.
21

Lee, Yen-Lin. "Method and architecture design for motion compensated frame interpolation in high-definition video processing." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3373109.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed Oct. 19, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 100-104).
22

Huang, Ai-Mei. "Motion vector processing in compressed video and its applications to motion compensated frame interpolation." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3359508.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed July 7, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 108-112).
23

Arici, Tarik. "Single and multi-frame video quality enhancement." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29722.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Yucel Altunbasak; Committee Member: Brani Vidakovic; Committee Member: Ghassan AlRegib; Committee Member: James Hamblen; Committee Member: Russ Mersereau. Part of the SMARTech Electronic Thesis and Dissertation Collection.
24

Thompson, Kinney. "Frames for Hilbert spaces and an application to signal processing." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2735.

Full text
Abstract:
The goal of this paper is to study how frame theory is applied within the field of signal processing. A frame is a redundant (i.e. not linearly independent) coordinate system for a vector space that satisfies a certain Parseval-type norm inequality. Frames provide a means for transmitting data and, when a certain amount of loss is anticipated, their redundancy allows for better signal reconstruction. We start with the basics of frame theory, give examples of frames, and present an application that illustrates how this redundancy can be exploited to achieve better signal reconstruction. We also include an introduction to the theory of frames in infinite-dimensional Hilbert spaces, as well as an interesting example.
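A concrete finite-dimensional example of the redundancy the abstract describes is the three-vector "Mercedes-Benz" tight frame in R^2: three unit vectors whose frame operator is (3/2)I, so reconstruction is exact, and any one coefficient can be lost with the signal still recoverable from the remaining two.

```python
import numpy as np

# Rows are the three unit vectors of the Mercedes-Benz frame in R^2;
# together they form a tight frame with frame bound 3/2.
F = np.array([[0.0, 1.0],
              [-np.sqrt(3) / 2, -0.5],
              [np.sqrt(3) / 2, -0.5]])

def analyze(x):
    """Frame coefficients <x, f_i>."""
    return F @ x

def synthesize(c):
    """Exact reconstruction for this tight frame: x = (2/3) F^T c."""
    return (2.0 / 3.0) * F.T @ c
```

If one coefficient is erased, the other two frame vectors still span R^2, so a least-squares solve on the remaining rows recovers the signal exactly.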
25

Ofoghi, Bahadorreza. "Enhancing factoid question answering using frame semantic-based approaches." University of Ballarat, 2009. http://innopac.ballarat.edu.au/record=b1503070.

Full text
Abstract:
FrameNet is used to enhance the performance of semantic QA systems. FrameNet is a linguistic resource that encapsulates Frame Semantics and provides scenario-based generalizations over lexical items that share similar semantic backgrounds.
Doctor of Philosophy
26

McMurdie, Andrew Dennis. "Frame Synchronization Techniques for iNET-Formatted SOQPSK-TG Communications." BYU ScholarsArchive, 2015. https://scholarsarchive.byu.edu/etd/4452.

Full text
Abstract:
In this thesis, frame synchronization for iNET-formatted SOQPSK-TG communications is considered. Frame synchronization for M-ary linear modulations (MQAM, MPSK, etc.) is known in the literature using pilot detection methods, but these are based on a signal model that does not apply to SOQPSK-TG. Maximum likelihood frame synchronizers are derived for an SOQPSK-TG system following assumptions found in the literature. The analysis shows that a reinterpretation of known detectors, operating on the samples of the received waveform and locally stored samples of the pilot, is the optimum approach for this case. Simulation results for an AWGN channel and several multipath channels verify the performance of the synchronizers.
27

Ekroll, Ingvild Kinn. "Ultrasound imaging of blood flow based on high frame rate acquisition and adaptive signal processing." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for sirkulasjon og bildediagnostikk, 2013. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-20512.

Full text
Abstract:
Ultrasound imaging of blood flow is in widespread use for the assessment of atherosclerotic disease. Imaging of the carotid arteries is of special interest, as blood clots from atherosclerotic plaques may follow the blood stream to the brain with fatal consequences. Color flow imaging and PW Doppler are important tools during patient examination, providing a map of the mean velocities in an image region and the full velocity spectrum in a small region of interest, respectively. However, they both suffer from limitations which may hamper patient diagnostics. Recent technological advances have enabled an increased acquisition rate of ultrasound images, providing possibilities for further improvement in the robustness and accuracy of color flow and PW Doppler imaging. Based on these advances, we aimed to utilize the high acquisition rate to enable robust vector Doppler imaging, where both velocity magnitude and direction are estimated. Additionally, we wanted to incorporate information from several parallel receive beams in spectral Doppler, which is currently limited to velocity estimation in a limited region of a single beam. Two limitations of conventional PW Doppler are especially considered, namely the trade-off between temporal and spectral resolution, and the increased spectral broadening in situations of high velocity or large beam-to-flow angles. By utilizing information from several parallel receive beams, we show that, by applying adaptive spectral estimation techniques, it is possible to obtain high-quality PW Doppler spectra from ensembles similar to those found in conventional color flow imaging. A new method to limit spectral broadening is also presented, and we show spectra with improved resolution and signal-to-noise ratio over a large span of beam-to-flow angles. Plane wave vector Doppler imaging was investigated using both realistic simulations of flow in a (diseased) carotid artery bifurcation and in vivo studies.
It was found that the plane wave approach could provide robust vector velocity estimates at frame rates significantly higher than those found in conventional blood flow imaging. The technique was implemented in a research ultrasound system, and a feasibility study was performed in patients with carotid artery disease. Promising results were found, showing an increased velocity span and the successful capture of complex flow patterns. Altogether, the proposed techniques may provide more efficient clinical tools for vascular imaging, as well as quantitative information for research into new markers of cardiovascular disease.
28

He, Xiaochen. "Feature extraction from two consecutive traffic images for 3D wire frame reconstruction of vehicle." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B3786791X.

Full text
29

Zeileis, Achim, and Yves Croissant. "Extended Model Formulas in R. Multiple Parts and Multiple Responses." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2009. http://epub.wu.ac.at/1056/1/document.pdf.

Full text
Abstract:
Model formulas are the standard approach for specifying the variables in statistical models in the S language. Although eminently useful in an extremely wide class of applications, they have certain limitations, including being confined to single responses and not providing convenient support for processing formulas with multiple parts. The latter is relevant for models with two or more sets of variables, e.g., regressors/instruments in instrumental variable regressions, two-part models such as hurdle models, or alternative-specific and individual-specific variables in choice models, among many others. The R package Formula addresses these two problems by providing a new class "Formula" (inheriting from "formula") that accepts an additional formula operator | separating multiple parts, and by allowing all formula operators (including the new |) on the left-hand side to support multiple responses.
Series: Research Report Series / Department of Statistics and Mathematics
30

He, Xiaochen, and 何小晨. "Feature extraction from two consecutive traffic images for 3D wire frame reconstruction of vehicle." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B3786791X.

Full text
31

Grunden, Beverly K. "On the Characteristics of a Data-driven Multi-scale Frame Convergence Algorithm." Wright State University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=wright1622208959661057.

Full text
32

Ghabra, Fawwaz I. "Processor and postprocessor for a plane frame analysis program on the IBM PC." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/45725.

Full text
Abstract:
In this thesis, a PROCESSOR and a POSTPROCESSOR are developed for a plane frame analysis computer program on the IBM PC. The PROCESSOR reads the data prepared by a PREPROCESSOR and solves for the unknown joint displacements using the matrix displacement method. The POSTPROCESSOR uses the results of the PROCESSOR to obtain the required responses of the structure. A chapter on testing procedures is also provided.
Master of Science
33

Dikbas, Salih. "A low-complexity approach for motion-compensated video frame rate up-conversion." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42730.

Full text
Abstract:
Video frame rate up-conversion is an important issue for multimedia systems in achieving better video quality and motion portrayal. Motion-compensated methods offer better quality interpolated frames since the interpolation is performed along the motion trajectory. In addition, computational complexity, regularity, and memory bandwidth are important for a real-time implementation. Motion-compensated frame rate up-conversion (MC-FRC) is composed of two main parts: motion estimation (ME) and motion-compensated frame interpolation (MCFI). Since ME is an essential part of MC-FRC, a new fast motion estimation (FME) algorithm capable of producing sub-sample motion vectors at low computational-complexity has been developed. Unlike existing FME algorithms, the developed algorithm considers the low complexity sub-sample accuracy in designing the search pattern for FME. The developed FME algorithm is designed in such a way that the block distortion measure (BDM) is modeled as a parametric surface in the vicinity of the integer-sample motion vector; this modeling enables low computational-complexity sub-sample motion estimation without pixel interpolation. MC-FRC needs more accurate motion trajectories for better video quality; hence, a novel true-motion estimation (TME) algorithm targeting to track the projected object motion has been developed for video processing applications, such as motion-compensated frame interpolation (MCFI), deinterlacing, and denoising. Developed TME algorithm considers not only the computational complexity and regularity but also memory bandwidth. TME is obtained by imposing implicit and explicit smoothness constraints on block matching algorithm (BMA). In addition, it employs a novel adaptive clustering algorithm to keep the low-complexity at reasonable levels yet enable exploiting more spatiotemporal neighbors. 
To produce better-quality interpolated frames, dense motion fields at the interpolation instants are obtained for both forward and backward motion vectors (MVs); then, bidirectional motion compensation is applied by blending the forward and backward predictions.
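The parametric-surface idea can be illustrated, in one dimension, by the standard parabola-vertex refinement: fit a parabola through the block distortion values at the best integer position and its two neighbors, and take the vertex as the sub-sample offset. This is a minimal sketch of that principle, not the thesis's exact surface model:

```python
def subsample_offset(c_m1, c_0, c_p1):
    """Refine an integer-sample motion vector component to sub-sample
    accuracy by fitting a parabola through three block-distortion values:
    c_m1 (one sample left of the best integer match), c_0 (best integer
    match), c_p1 (one sample right). Returns the vertex offset in (-1, 1),
    without interpolating any pixels."""
    denom = c_m1 - 2.0 * c_0 + c_p1
    if denom <= 0:        # flat or non-convex fit: keep the integer MV
        return 0.0
    return 0.5 * (c_m1 - c_p1) / denom

# Distortions sampled from d(x) = (x - 0.25)**2, true minimum at +0.25:
print(subsample_offset(1.5625, 0.0625, 0.5625))  # 0.25
```

Because only three already-computed distortion values are reused, the refinement adds a handful of arithmetic operations per vector, which is what makes sub-sample accuracy cheap.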
APA, Harvard, Vancouver, ISO, and other styles
34

Abou-Rayan, Ashraf M. "A study of full displacement design of frame structures using displacement sensitivity analysis." Thesis, Virginia Tech, 1985. http://hdl.handle.net/10919/45557.

Full text
Abstract:

The intent of this study is to develop an algorithm for structural design based on allowable displacements for structural members, independent of stresses caused by the configurations imposed. Structural design can be based on displacement constraints applied in the same basic format as stress constraints so that convergence is based on allowable displacements rather than on stresses.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
35

Comstedt, Erik. "Effect of additional compression features on h.264 surveillance video." Thesis, Mittuniversitetet, Avdelningen för informationssystem och -teknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-30901.

Full text
Abstract:
In the video surveillance business, a recurring topic of discussion is quality versus data usage. A higher quality allows for more details to be captured at the cost of a higher bit rate, and for cameras monitoring events 24 hours a day, limiting data usage can quickly become a factor to consider. The purpose of this thesis has been to apply additional compression features to an h.264 video stream, and evaluate their effects on the video's overall quality. Using a surveillance camera, recordings of video streams were obtained. These recordings had constant GOP and frame rates. By breaking down one of these videos into an image sequence, it was possible to encode the image sequence into video streams with variable GOP/FPS using the software Ffmpeg. Additionally, a user test was performed on these video streams, following the DSCQS standard from the ITU-R recommendation. The participants had to subjectively determine the quality of the video streams. The results from these tests showed that the participants did not notice any considerable difference in quality between the normal videos and the videos with variable GOP/FPS. Based on these results, the thesis has shown that additional compression features can be applied to h.264 surveillance streams without having a substantial effect on the video stream's overall quality.
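The workflow described (image sequence re-encoded to h.264 with a chosen frame rate and GOP length via Ffmpeg) can be sketched with standard ffmpeg/libx264 flags; the thesis does not list its exact command line, so the options below are illustrative:

```python
def ffmpeg_h264_args(pattern, fps, gop, out):
    """Build an ffmpeg command that encodes an image sequence to H.264
    with a fixed frame rate and GOP length. '-g' sets the keyframe
    interval for libx264; the flag names are standard ffmpeg usage,
    not taken from the thesis."""
    return ["ffmpeg",
            "-framerate", str(fps),   # input frame rate
            "-i", pattern,            # e.g. "frame%04d.png"
            "-c:v", "libx264",        # H.264 encoder
            "-g", str(gop),           # GOP length (keyframe interval)
            out]

cmd = ffmpeg_h264_args("frame%04d.png", 25, 50, "out.mp4")
print(" ".join(cmd))
```

Re-running the encode with different `fps`/`gop` values yields the variable-GOP/FPS streams that were compared in the DSCQS user test.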
APA, Harvard, Vancouver, ISO, and other styles
36

Bayer, Stefan [Verfasser], and Bernd [Gutachter] Edler. "Time Warped Filter Banks and their Application for Frame Based Processing of Harmonic Audio Signals / Stefan Bayer ; Gutachter: Bernd Edler." Erlangen : Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 2017. http://d-nb.info/1151399515/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Mirchandani, Chandru, David Fisher, and Parminder Ghuman. "Cost Beneficial Solution for High Rate Data Processing." International Foundation for Telemetering, 1999. http://hdl.handle.net/10150/606836.

Full text
Abstract:
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada
GSFC, in keeping with the tenets of NASA, has been aggressively investigating new technologies for spacecraft and ground communications and processing. The application of these technologies, together with standardized telemetry formats, makes it possible to build systems that provide high performance at low cost in a short development cycle. The High Rate Telemetry Acquisition System (HRTAS) Prototype is one such effort that has validated Goddard's push towards faster, better and cheaper. The HRTAS system architecture is based on the Peripheral Component Interconnect (PCI) bus and VLSI Application-Specific Integrated Circuits (ASICs). These ASICs perform frame synchronization, bit-transition density decoding, cyclic redundancy code (CRC) error checking, Reed-Solomon error detection/correction, data unit sorting, packet extraction, annotation and other service processing. This processing is performed at rates of up to and greater than 150 Mbps sustained, using a high-end performance workstation running a standard UNIX O/S (DEC 4100 with DEC UNIX or better). ASICs are also used for the digital reception of Intermediate Frequency (IF) telemetry as well as the spacecraft command interface for commands and data simulations. To improve the efficiency of the back-end processing, the level zero processing sorting element is being developed. This will provide a complete hardware solution to extracting and sorting source data units and making these available in separate files on a remote disk system. Research is ongoing to extend this development to higher levels of the science data processing pipeline. Because level 1 and higher processing is instrument dependent, an acceleration approach utilizing ASICs is not feasible. 
The advent of field programmable gate array (FPGA) based computing, referred to as adaptive or reconfigurable computing, provides processing performance close to ASIC levels while maintaining much of the programmability of traditional microprocessor based systems. This adaptive computing paradigm has been successfully demonstrated and its cost performance validated, making it a viable technology for the level one and higher processing element of the HRTAS. Higher levels of processing are defined as the extraction of useful information from source telemetry data. This information has to be made available to the science data user in a very short period of time. This paper will describe this low cost solution for high rate data processing at level one and higher processing levels. The paper will further discuss the cost-benefit of this technology in terms of cost, schedule, reliability and performance.
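One of the services listed above, CRC error checking, is small enough to sketch in software. CCSDS transfer frames carry a 16-bit frame error control field computed with the CRC-16-CCITT polynomial (0x1021, preset to all ones); a minimal bit-serial version is shown below, whereas the HRTAS ASICs compute the same function in parallel hardware:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bit-serial CRC-16 (polynomial 0x1021, preset 0xFFFF, MSB first),
    the frame error control computation used by CCSDS transfer frames.
    A software sketch; hardware implementations process many bits per
    clock."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"CCSDS frame payload"
fecf = crc16_ccitt(frame)            # value appended by the sender
assert crc16_ccitt(frame) == fecf    # receiver recomputes and compares
```

The well-known check value for this variant is `crc16_ccitt(b"123456789") == 0x29B1`, which is a quick way to validate any reimplementation.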
APA, Harvard, Vancouver, ISO, and other styles
38

Jin, Gongye. "High-quality Knowledge Acquisition of Predicate-argument Structures for Syntactic and Semantic Analysis." 京都大学 (Kyoto University), 2016. http://hdl.handle.net/2433/215677.

Full text
Abstract:
If the author of the published paper digitizes such paper and releases it to third parties using digital media such as computer networks or CD-ROMs, the volume, number, and pages of the Journal of Natural Language Processing of the publication must be indicated in a clear manner for all viewers.
Kyoto University (京都大学)
0048
新制・課程博士
博士(情報学)
甲第19850号
情博第601号
新制||情||105(附属図書館)
32886
京都大学大学院情報学研究科知能情報学専攻
(主査)准教授 河原 大輔, 教授 黒橋 禎夫, 教授 河原 達也
学位規則第4条第1項該当
APA, Harvard, Vancouver, ISO, and other styles
39

Zhang, Xianxian. "Robust speech processing based on microphone array, audio-visual, and frame selection for in-vehicle speech recognition and in-set speaker recognition." Diss., Connect to online resource, 2005. http://wwwlib.umi.com/cr/colorado/fullcit?p3190350.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Genrich, Thad J. "300 MBPS CCSDS Processing Using FPGA's." International Foundation for Telemetering, 1996. http://hdl.handle.net/10150/611415.

Full text
Abstract:
International Telemetering Conference Proceedings / October 28-31, 1996 / Town and Country Hotel and Convention Center, San Diego, California
This paper describes a 300 Mega Bit Per Second (MBPS) Front End Processor (FEP) prototype completed in early 1993. The FEP implements a patent pending parallel frame synchronizer (frame sync) design in 12 Actel 1240 Field Programmable Gate Arrays (FPGA's). The FEP also provides (255,223) Reed-Solomon (RS) decoding and a High Performance Parallel Interface (HIPPI) output interface. The recent introduction of large RAM based FPGA's allows greater high speed data processing integration and flexibility to be achieved. A proposed FEP implementation based on Altera 10K50 FPGA's is described. This design can be implemented on a single slot 6U VME module, and includes a PCI Mezzanine Card (PMC) for a commercial Fibre Channel or Asynchronous Transfer Mode (ATM) output interface module. Concepts for implementation of (255,223) RS and Landsat 7 Bose-Chaudhuri-Hocquenghem (BCH) decoding in FPGA's are also presented. The paper concludes with a summary of the advantages of high speed data processing in FPGA's over Application Specific Integrated Circuit (ASIC) based approaches. Other potential data processing applications are also discussed.
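The core task of a frame synchronizer is locating a known sync pattern in the incoming stream; for CCSDS telemetry this is the 32-bit attached sync marker 0x1ACFFC1D. The sketch below shows the search in its simplest, byte-aligned software form; the paper's parallel FPGA design performs the equivalent comparisons across many bit offsets per clock and tolerates bit errors, neither of which is modeled here:

```python
ASM = bytes.fromhex("1ACFFC1D")   # CCSDS 32-bit attached sync marker

def find_frames(stream: bytes, frame_len: int):
    """Locate frame boundaries by scanning for the sync marker.
    Simplified sketch: byte-aligned search, no bit slips, no error
    tolerance. frame_len counts the marker plus the frame body."""
    starts, i = [], stream.find(ASM)
    while i != -1:
        starts.append(i)
        i = stream.find(ASM, i + frame_len)   # skip past this frame
    return starts

data = ASM + b"\x00" * 10 + ASM + b"\x11" * 10   # two 14-byte frames
print(find_frames(data, 14))   # [0, 14]
```

Once lock is achieved, a real synchronizer switches from searching to verifying the marker at the expected offset, which is what makes the hardware pipeline regular and fast.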
APA, Harvard, Vancouver, ISO, and other styles
41

Dahal, Ashok. "Detection of Ulcerative Colitis Severity and Enhancement of Informative Frame Filtering Using Texture Analysis in Colonoscopy Videos." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc822759/.

Full text
Abstract:
There are several types of disorders that affect our colon’s ability to function properly, such as colorectal cancer, ulcerative colitis, diverticulitis, irritable bowel syndrome and colonic polyps. Automatic detection of these diseases would inform the endoscopist of possible sub-optimal inspection during the colonoscopy procedure as well as save time during post-procedure evaluation. However, existing systems detect only a few of these disorders, such as colonic polyps. In this dissertation, we address the automatic detection of another important disorder called ulcerative colitis. We propose a novel texture feature extraction technique to detect the severity of ulcerative colitis at the block, image, and video levels. We also enhance current informative frame filtering methods by detecting water and bubble frames using our proposed technique. Our feature extraction algorithm, based on accumulation of pixel value differences, provides better accuracy at faster speed than the existing methods, making it highly suitable for real-time systems. We also propose a hybrid approach in which our feature method is combined with existing feature method(s) to provide even better accuracy. We extend the block and image level detection method to video level severity score calculation and shot segmentation. Also, the proposed novel feature extraction method can detect water and bubble frames in colonoscopy videos with very high accuracy in significantly less processing time, even when clustering is used to reduce the training size by 10 times.
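The appeal of a pixel-value-difference feature is that it needs only additions and absolute values. The sketch below accumulates absolute differences between horizontally adjacent pixels in a block as a crude texture score; it is an illustration in the spirit of the dissertation's feature, not a reproduction of its exact accumulation scheme:

```python
def block_texture(block):
    """Accumulate absolute differences between horizontally adjacent
    pixels as a cheap texture measure. Smooth regions score low; busy
    textures score high. Illustrative only: the dissertation's actual
    accumulation scheme is not reproduced here."""
    total = 0
    for row in block:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
    return total

flat  = [[128, 128, 128, 128]] * 4   # smooth region -> low score
rough = [[0, 255, 0, 255]] * 4       # busy texture  -> high score
print(block_texture(flat), block_texture(rough))  # 0 3060
```

Because the score is a running sum over one pass of the block, it maps naturally onto the real-time constraint emphasized in the abstract.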
APA, Harvard, Vancouver, ISO, and other styles
42

Karch, Barry K. "Improved Super-Resolution Methods for Division-of-Focal-Plane Systems in Complex and Constrained Imaging Applications." University of Dayton / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1429032650.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

渡辺, 崇., Takashi WATANABE, 優樹 前田, and Yuki MAEDA. "人が放置する物体の動的認識." 日本機械学会, 2006. http://hdl.handle.net/2237/9197.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Marzinotto, Gabriel. "Semantic frame based analysis using machine learning techniques : improving the cross-domain generalization of semantic parsers." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0483.

Full text
Abstract:
Rendre les analyseurs sémantiques robustes aux variations lexicales et stylistiques est un véritable défi pour de nombreuses applications industrielles. De nos jours, l'analyse sémantique nécessite de corpus annotés spécifiques à chaque domaine afin de garantir des performances acceptables. Les techniques d'apprenti-ssage par transfert sont largement étudiées et adoptées pour résoudre ce problème de manque de robustesse et la stratégie la plus courante consiste à utiliser des représentations de mots pré-formés. Cependant, les meilleurs analyseurs montrent toujours une dégradation significative des performances lors d'un changement de domaine, mettant en évidence la nécessité de stratégies d'apprentissage par transfert supplémentaires pour atteindre la robustesse. Ce travail propose une nouvelle référence pour étudier le problème de dépendance de domaine dans l'analyse sémantique. Nous utilisons un nouveau corpus annoté pour évaluer les techniques classiques d'apprentissage par transfert et pour proposer et évaluer de nouvelles techniques basées sur les réseaux antagonistes. Toutes ces techniques sont testées sur des analyseurs sémantiques de pointe. Nous affirmons que les approches basées sur les réseaux antagonistes peuvent améliorer les capacités de généralisation des modèles. Nous testons cette hypothèse sur différents schémas de représentation sémantique, langages et corpus, en fournissant des résultats expérimentaux à l'appui de notre hypothèse
Making semantic parsers robust to lexical and stylistic variations is a real challenge with many industrial applications. Nowadays, semantic parsing requires the usage of domain-specific training corpora to ensure acceptable performances on a given domain. Transfer learning techniques are widely studied and adopted when addressing this lack of robustness, and the most common strategy is the usage of pre-trained word representations. However, the best parsers still show significant performance degradation under domain shift, evidencing the need for supplementary transfer learning strategies to achieve robustness. This work proposes a new benchmark to study the domain dependence problem in semantic parsing. We use this benchmark to evaluate classical transfer learning techniques and to propose and evaluate new techniques based on adversarial learning. All these techniques are tested on state-of-the-art semantic parsers. We claim that adversarial learning approaches can improve the generalization capacities of models. We test this hypothesis on different semantic representation schemes, languages and corpora, providing experimental results to support our hypothesis.
APA, Harvard, Vancouver, ISO, and other styles
45

Kopečný, Josef. "Optická metoda měření kontrakce izolované srdeční buňky." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219250.

Full text
Abstract:
In this master's thesis we first focus on the description of a cell in terms of structure as well as in terms of electrical and chemical processes. We examine the processes that produce a contraction and the processes at the cell membrane. We also analyse image processing methods and methods for measuring the contraction. A block diagram is designed and requirements for the measuring platform are specified. The programme is realized in the LabVIEW programming environment.
APA, Harvard, Vancouver, ISO, and other styles
46

Silva, Jonas dos Santos. "Implementação da compensação de movimento em vídeo entrelaçado no terminal de acesso do SBTVD." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2013. http://hdl.handle.net/10183/96500.

Full text
Abstract:
Uma sequencia de vídeo pode ser adquirida de forma progressiva ou entrelaçada. No padrão de codificação de vídeo H.264/AVC os campos de uma imagem entrelaçada podem ser codificados em modo frame (campos top e bottom entrelaçados) ou em modo field (campos top e bottom agrupados separadamente). Quando a escolha é adaptativa para cada par de macro blocos a codificação é chamada de Macroblock Adaptive Frame- Field (MBAFF). Inovações na predição inter-quadro do H.264/AVC contribuíram significantemente para a performance do padrão alcançar o dobro da taxa de compressão do seu antecessor (ITU, 1994), ao custo de um grande aumento de complexidade computacional do CODEC. Dentro da predição inter-quadro, o bloco de compensação de movimento (MC) é responsável pela reconstrução de um bloco de pixels. No decodificador apresentado em (BONATTO, 2012) está integrada uma solução em hardware para o MC que suporta a maior parte do conjunto de ferramentas do perfil Main do H.264/AVC. A compensação de movimento pode ser dividida em predição de vetores e processamento de amostras. No processamento de amostras é realizada a interpolação e a ponderação de amostras. O módulo de ponderação de amostras, ou predição ponderada, utiliza fatores de escala para escalonar as amostras na saída do MC. Isso é muito útil quando há esvanecimento no vídeo. Inicialmente este trabalho apresenta um estudo do processo de compensação de movimento, segundo o padrão de codificação de vídeo H.264/AVC. São abordadas todas as ferramentas da predição inter-quadro, incluindo o tratamento de vídeo entrelaçado e todos os possíveis modos de codificação para o mesmo. A seguir é apresentada uma arquitetura em hardware para a predição ponderada do MC. Esta arquitetura atende o perfil main do H.264/AVC, que prevê a decodificação de imagens frame, field ou MBAFF. 
A arquitetura apresentada é baseada no compensador de movimento contido no decodificador apresentado em (BONATTO, 2012), que não tem suporte a predição ponderada e a vídeo entrelaçado. A arquitetura proposta é composta por dois módulos: Scale Factor Prediction (SFP) e Weighted Samples Prediction (WSP) . A arquitetura foi desenvolvida em linguagem VHDL e a simulação temporal mostrou que a mesma pode decodificar imagens MBAFF em tempo real @60i. Dessa forma, tornando-se uma ferramenta muito útil ao desenvolvimento de sistemas de codificação e decodificação em HW. Não foi encontrada, na literatura atual, uma solução em hardware para compensação de movimento do padrão H.264/AVC com suporte a codificação MBAFF.
A video sequence can be acquired in a progressive or interlaced mode. In the H.264/AVC video coding standard, an interlaced picture can be encoded in frame mode (top and bottom fields interleaved) or field mode (top and bottom fields coded separately). When the coding choice is adaptive for each pair of macroblocks, it is called Macroblock Adaptive Frame-Field (MBAFF). The innovations in the inter-frame prediction of H.264/AVC contributed significantly to the performance of the standard, which achieved twice the compression ratio of its predecessor (ITU, 1994), at the cost of a large increase in the computational complexity of the CODEC. In the inter-frame prediction, the motion compensation (MC) module is responsible for the reconstruction of a block of pixels. The decoder shown in (BONATTO, 2012) includes an integrated hardware solution for the MC that supports most of the H.264/AVC main profile tools. Motion compensation can be divided into motion vector prediction and sample processing. In the sample processing part, sample interpolation and weighting are performed. The weighted samples prediction module uses scale factors to weight the samples when generating the output pixels. This is useful when the video contains fades. Initially, this work presents a study of the motion compensation process according to the H.264/AVC standard. It covers all of the inter-frame prediction tools, including all possible coding modes for interlaced video. A hardware architecture for the weighted samples prediction of the MC is shown next. It is in compliance with the main profile of the H.264/AVC standard, and can therefore decode frame, field and MBAFF pictures. The architecture presented is based on the motion compensator used in the (BONATTO, 2012) decoder, which does not support weighted prediction or interlaced video. The proposed architecture is composed of two modules: Scale Factor Prediction (SFP) and Weighted Samples Prediction (WSP). 
The hardware implementation was described in VHDL, and the timing simulation has shown that it can decode MBAFF pictures in real time @60i. Therefore, this is a useful tool for hardware CODEC development. A similar hardware solution for H.264/AVC weighted prediction that supports MBAFF coding was not found in previous works.
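The per-sample operation that the WSP module implements in hardware is the explicit weighted prediction of H.264/AVC: scale the motion-compensated sample by a weight with a rounding shift, add an offset, and clip to the sample range. A software sketch of the unidirectional case:

```python
def weighted_sample(sample, w, log_wd, offset, bit_depth=8):
    """Unidirectional explicit weighted prediction as in H.264/AVC:
    scale the motion-compensated sample by w with rounding shift
    log_wd, add offset, and clip to the valid sample range. Useful
    for fades, where unweighted prediction mispredicts brightness."""
    max_val = (1 << bit_depth) - 1
    if log_wd >= 1:
        pred = ((sample * w + (1 << (log_wd - 1))) >> log_wd) + offset
    else:
        pred = sample * w + offset
    return max(0, min(max_val, pred))   # Clip1

print(weighted_sample(100, 32, 5, 0))    # 100 (default weights: identity)
print(weighted_sample(100, 16, 5, 10))   # 60  (half brightness, +10 offset)
```

With the default weight (w equal to 2^log_wd) and zero offset the operation is the identity, which is why weighted prediction can be enabled per slice at no quality cost when no fade is present.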
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Yan. "Automatic syntactic analysis of learner English." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/285998.

Full text
Abstract:
Automatic syntactic analysis is essential for extracting useful information from large-scale learner data for linguistic research and natural language processing (NLP). Currently, researchers use standard POS taggers and parsers developed on native language to analyze learner language. Investigation of how such systems perform on learner data is needed to develop strategies for minimizing the cross-domain effects. Furthermore, POS taggers and parsers are developed for generic NLP purposes and may not be useful for identifying specific syntactic constructs such as subcategorization frames (SCFs). SCFs have attracted much research attention as they provide unique insight into the interplay between lexical and structural information. An automatic SCF identification system adapted for learner language is needed to facilitate research on L2 SCFs. In this thesis, we first provide a comprehensive evaluation of standard POS taggers and parsers on learner and native English. We show that the common practice of constructing a gold standard by manually correcting the output of a system can introduce bias to the evaluation, and we suggest a method to control for the bias. We also quantitatively evaluate the impact of fine-grained learner errors on POS tagging and parsing, identifying the most influential learner errors. Furthermore, we show that the performance of probabilistic POS taggers and parsers on native English can predict their performance on learner English. Secondly, we develop an SCF identification system for learner English. We train a machine learning model on both native and learner English data. The system can label individual verb occurrences in learner data for a set of 49 distinct SCFs. Our evaluation shows that the system reaches an F1 score of 84%. We then demonstrate that this level of accuracy is adequate for linguistic research. 
We design the first multidimensional SCF diversity metrics and investigate how SCF diversity changes with L2 proficiency on a large learner corpus. Our results show that as L2 proficiency develops, learners tend to use more diverse SCF types with greater taxonomic distance; more advanced learners also use different SCF types more evenly and locate the verb tokens of the same SCF type further away from each other. Furthermore, we demonstrate that the proposed SCF diversity metrics contribute a unique perspective to the prediction of L2 proficiency beyond existing syntactic complexity metrics.
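One of the dimensions described above, how evenly a learner spreads verb tokens across SCF types, can be captured by a standard evenness measure: Shannon entropy of the type distribution normalized by its maximum. This is an illustrative sketch of that single dimension, not the thesis's actual multidimensional metrics:

```python
from math import log

def scf_evenness(counts):
    """Pielou-style evenness over SCF type counts: Shannon entropy of
    the type distribution divided by log(number of types used).
    Returns 1.0 when all types are used equally, and approaches 0 as
    one type dominates. A sketch of one diversity dimension only."""
    total = sum(counts)
    h = -sum((c / total) * log(c / total) for c in counts if c)
    k = sum(1 for c in counts if c)   # number of SCF types actually used
    return h / log(k) if k > 1 else 0.0

print(scf_evenness([10, 10, 10, 10]))  # 1.0 (perfectly even usage)
print(scf_evenness([37, 1, 1, 1]))     # much lower: one SCF type dominates
```

Scores like this one, computed per learner text, are the kind of quantity that can then be correlated with proficiency level alongside existing syntactic complexity metrics.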
APA, Harvard, Vancouver, ISO, and other styles
48

Ren, Jinchang. "Semantic content analysis for effective video segmentation, summarisation and retrieval." Thesis, University of Bradford, 2009. http://hdl.handle.net/10454/4251.

Full text
Abstract:
This thesis focuses on four main research themes namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is proved to be insensitive to zero-mean-noise, and its gradient-based extension is even robust to non-zero-mean noise and can be used to deal with non-overlapped regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity level measurement, shot and sub-shot are detected for content-adaptive video summarisation. Fourthly, highlights based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. High-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is the achieved high efficiency, which will be useful for many online applications.
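The frame alignment theme builds on phase correlation: the inverse FFT of the normalized cross-power spectrum of two shifted signals peaks at the shift. The sketch below is the classical full-signal version in one dimension; the thesis's subspace phase correlation (SPC), which projects the images before correlating and adds a sub-pixel strategy, is not reproduced:

```python
import numpy as np

def phase_correlate(x, y):
    """Recover the circular shift between two 1-D signals by classical
    phase correlation: normalize the cross-power spectrum to keep only
    phase, and take the argmax of its inverse FFT. (The thesis's
    subspace variant is not implemented here.)"""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    cps = np.conj(X) * Y
    cps /= np.abs(cps) + 1e-12          # keep phase only
    return int(np.argmax(np.abs(np.fft.ifft(cps))))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = np.roll(x, 5)                        # shift x by 5 samples
print(phase_correlate(x, y))             # 5
```

Discarding the magnitude is what gives phase correlation its insensitivity to uniform brightness changes, and the zero-mean-noise insensitivity claimed for SPC extends this robustness further.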
APA, Harvard, Vancouver, ISO, and other styles
49

Dragotti, Pier Luigi. "Wavelet footprints and frames for signal processing and communication /." [S.l.] : [s.n.], 2002. http://library.epfl.ch/theses/?nr=2559.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Yun, Hee Cheol. "Compression of computer animation frames." Diss., Georgia Institute of Technology, 1996. http://hdl.handle.net/1853/13070.

Full text
APA, Harvard, Vancouver, ISO, and other styles