A selection of scholarly literature on the topic "Multiview2"

Format your source according to APA, MLA, Chicago, Harvard, and other citation styles

Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Multiview2".

Next to every work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a publication in .pdf format and read its abstract online, provided the corresponding data are available in the metadata.

Dissertations on the topic "Multiview2"

1

ZARDINI, Alessandro. "Gli impatti organizzativi delle piattaforme di Enterprise Content Management sui processi decisionali." Doctoral thesis, Università degli Studi di Verona, 2010. http://hdl.handle.net/11562/343376.

Full text of the source
Abstract:
The aim of this research thesis is to analyse the correlations between competitive advantage, associated with the improvement of the decision-making process, and the management of corporate content through Enterprise Content Management (ECM) platforms. This contribution is therefore intended to extend the Knowledge Management (KM) literature, in particular on the relationship between Knowledge Management systems, Enterprise Content Management systems, and the management of decision-making processes. Within the Knowledge Management literature, Enterprise Content Management platforms have so far been analysed only through Transaction-Cost Theory (Reimer, 2002; McKeever, 2003; Smith and McKeen, 2003; O'Callaghan and Smits, 2005; Tyrväinen et al., 2006) and are generally described as systems useful for reducing the cost of managing the corporate content held within the organization. Specifically, through empirical analyses various authors have shown that ECM tools can increase the efficiency of corporate information management, reducing the costs of managing and retrieving information. A review of the articles in the management literature shows that, to date, there is no universally accepted definition of ECM; examined jointly, however, the proposed definitions share some common traits, and the distinction depends not on content but on the focus adopted by the researcher to describe, analyse, and interpret ECM systems. Few researchers, however, have studied the impact of such Content Management tools on the organization and on business processes. In particular, no research has highlighted the strategic role of ECM platforms in the management of corporate content (Gupta et al., 2002; Helfat and Peteraf, 2003; Smith and McKeen, 2003; O'Callaghan and Smits, 2005). The Knowledge Based View is used to analyse and interpret the findings of the case study, since knowledge assets are regarded as the strategic resources needed to achieve and sustain a competitive advantage (Conner and Prahalad, 1996; Choi et al., 2008). ECM systems are not analysed from a purely managerial standpoint, i.e., in terms of the efficiency gains linked to better management of corporate information, but rather in terms of the evolution of company performance linked to the development of the decision-making process. The analysis examines whether the knowledge held within organizations is fundamental to company development and growth (Wernerfelt, 1984; Grant, 1991; Penrose, 1995; Grant, 1996; Prusak, 1996; Teece et al., 1997; Piccoli et al., 2000; Piccoli et al., 2002). Information, however, acquires real value only when it can be easily managed within the decision-making process in order to sustain a competitive advantage. To improve company performance, it is essential to transform the many "passive" corporate contents into "active" sources. The potential of Enterprise Content Management systems lies in their ability to process large volumes of information, providing the end user or a Decision Support System (DSS) with all the information relevant to decision making.
In this way, better decision-maker performance is achieved not only by increasing the quality and quantity of the information feeding the decision-making process, but also through a better formalization of the knowledge held in organizational memory. The research method adopted is the interpretive case study, which is particularly useful for examining a phenomenon in its natural evolution (Benbasat, 1984). The case study method was also chosen because it is an ideal vehicle for reaching a deeper understanding of explicit and implicit business processes and of the role of actors within organizational systems (Campbell, 1975; Hamel et al., 1993; Lee, 1999; Stake, 2000). The company is used as the unit of analysis (Yin, 1984), both when analysing its market relations and when examining the behaviour of the individual participants in a process (Zardini et al., 2010). The thesis first reviews some of the most significant definitions of knowledge in the literature, highlighting the strengths and weaknesses of each. It starts from the formulation proposed by Polanyi (Polanyi, 1958; Polanyi, 1967), which is then integrated with the studies by Nonaka, Takeuchi, and Konno (Nonaka, 1991; Nonaka and Takeuchi, 1995; Nonaka and Konno, 1998; Nonaka et al., 2000). The discussion then moves from the general concept of knowledge to the notion of knowledge assets, identified as intangible resources generated internally by the firm and difficult to purchase on the market. Having established that knowledge can be considered an important resource for obtaining a competitive advantage (Grant, 1996b; Prusak, 1996; Alavi and Leidner, 1999; Earl and Scott, 1999; Piccoli et al., 2002), the first chapter closes by situating the concept of knowledge assets within the Knowledge Based View. The second chapter describes the knowledge-creation process, identifies the three types of Knowledge Management Systems, and closes with a review of the main Knowledge Management systems used to create, analyse, and maintain the knowledge held in organizational memory. The third chapter examines the main components of the decision-making process and the KM tools specifically aimed at improving it, and concludes with a description and discussion of decision support systems. The fourth section defines the term "corporate content" and links it to the concept of dynamic capabilities (Teece et al., 1997; Eisenhardt and Martin, 2000; Helfat et al., 2007). It then analyses all the phases of the information life cycle, from the creation of a new content item to its cataloguing, storage, and possible modification or deletion. Having circumscribed the concept of content, it reviews the definitions found in the literature.
The chapter ends with a study of the main components of ECM systems and, in particular, with an analysis of the tools that support decision-making processes within organizations. The final chapter examines the Action Research methodology, analysing its strengths and limitations. It then follows the approach proposed by Baskerville (Baskerville, 1999), according to which the term "Action Research" on the one hand identifies an investigative method for the social sciences and on the other represents a sub-category that distinguishes it from the other sub-methods. The analysis then turns to the model of Baskerville and Wood-Harper (Baskerville and Wood-Harper, 1998), which identifies ten distinct forms of Action Research in the Information Systems literature; among these, Multiview, and in particular Multiview2, is the reference methodology used to test the theoretical framework within the case study.
The focus of this thesis is to analyze the correlations between the competitive advantage associated with the improvement of the decision-making process and the management of corporate content through Enterprise Content Management (ECM) platforms. One aim of this work is to extend the Knowledge Management (KM) literature, in particular by examining the relationship between ECM systems and Decision Support Systems. Enterprise Content Management platforms have largely been analyzed according to Transaction Cost Theory (Reimer, 2002; McKeever, 2003; Smith and McKeen, 2003; O'Callaghan and Smits, 2005; Tyrväinen et al., 2006) and are generally described as useful for reducing the cost of managing content inside an organization (McKeever, 2003). Through empirical analyses, various authors have stressed that ECM tools increase efficiency and reduce management and retrieval costs. Few studies consider the impact of these tools on the organization or on company processes. In particular, no research has highlighted the strategic role of ECM platforms in Enterprise Content Management (Gupta et al., 2002; Helfat and Peteraf, 2003; Smith and McKeen, 2003; O'Callaghan and Smits, 2005) as a means to improve and speed up the decision-making process. The case study is analyzed through the Knowledge Based View. Specifically, the knowledge-based view (KBV) is a fundamental extension of the resource-based view (RBV; Conner and Prahalad, 1996), reflecting the importance of knowledge assets. The knowledge and enterprise content generated can thus be interpreted not only as strategic resources to achieve or maintain a competitive advantage but also as useful tools for developing and expanding the company's ability to respond promptly to unexpected events in the external environment and thereby refine decision making within the organization. According to several authors (Barney, 1991; Amit and Schoemaker, 1993; Peteraf, 1993; Winter, 1995; Grover et al., 2009), the Resource Based View (RBV) cites knowledge as a resource that can generate information asymmetries and thus a competitive advantage for the enterprises that possess it. Reconsidering the general theory of the RBV and including knowledge assets among an enterprise's intangible resources leads naturally to the KBV.
If the term "acquired resources" from the general RBV proposed by Lippman and Rumelt (1982) and Barney (1986) is replaced by "knowledge," the result is KBV theory, and knowledge represents one of the strategic factors for maintaining a competitive advantage (Grant and Baden-Fuller, 1995; Grant, 1996c; Teece et al., 1997; Sambamurthy and Subramani, 2005; Bach et al., 2008; Choi et al., 2008). The availability of content is thus a necessary but not a sufficient condition for improving the decision-making process and company performance. Rather, the company also needs to transform "passive" contents, such as unused information within the boundaries of organizational memory, into "active" sources that are integral to the decision-making process. To improve the decision-making process and create value, enterprises must enrich the quality and quantity of all information that provides critical input to a decision. The goal therefore involves an ability to manage knowledge inside and outside the organization by transforming data into knowledge. In the case analyzed, decision-makers achieve the best performance not only by improving the quantity and quality of the information entering the decisional process but also through a better formalization of the knowledge included in all phases of the process. In this view, ECM platforms are advanced KM tools that are fundamental for the development of a competitive advantage, in that they simplify and speed up the management (creation, classification, storage, change, deletion) of information, increase the productivity of each member, and improve the efficiency of the system (McKeever, 2003; Nordheim and Päivärinta, 2004; O'Callaghan and Smits, 2005). By implementing an ECM system, the company not only gains an effective means for creating, tracking, managing, and archiving all company content but can also integrate business processes, develop collaborative actions through the systemic organization of work teams, and create a search engine with specialized "business logic views." Standardized content and layout, associated with a definition of content owners and users (i.e., management of authorizations) and with document processes, support the spread of updated, error-free information to the various organizational actors. Similar to business intelligence systems, ECM platforms support decision making inside organizations in terms of viewing and retrieving data and analyzing and sharing information (thereby increasing organizational memory), as well as its storage and continuous maintenance along the life cycle of the enterprise. For the analysis of the case study, this study employs the action research method (Lewin, 1946; Checkland, 1985; Checkland and Scholes, 1990), and specifically Multiview2 (Avison and Wood-Harper, 2003). The original Multiview concept assumed a continuous interaction between analysts and method, including the present situation and the future scenario that originates from applying the methodology. In some respects, the original definition was limited, in that it did not describe the function of each element or the range of possible interactions (Avison and Wood-Harper, 2003). Multiview2 fills these gaps by taking into consideration the action and reaction generated by the interactions of the elements. The three macro-categories therefore must be aligned to conduct an organizational, socio-technical, and technological analysis (Avison et al., 1998; Avison and Wood-Harper, 2003).
The researcher provides a clear contribution that matches the theoretical framework used as a reference and, in subsequent phases, measures and evaluates the results obtained from the implemented actions.
2

Vetro, Anthony, Emin Martinian, Jun Xin, Alexander Behrens, and Huifang Sun. "TECHNIQUES FOR MULTIVIEW VIDEO CODING." INTELLIGENT MEDIA INTEGRATION NAGOYA UNIVERSITY / COE, 2005. http://hdl.handle.net/2237/10361.

Full text of the source
3

Shafaei, Alireza. "Multiview depth-based pose estimation." Thesis, University of British Columbia, 2015. http://hdl.handle.net/2429/56180.

Full text of the source
Abstract:
Commonly used human motion capture systems require intrusive attachment of markers that are visually tracked with multiple cameras. In this work we present an efficient and inexpensive solution to markerless motion capture using only a few Kinect sensors. We use our system to design a smart home platform with a network of Kinects that are installed inside the house. Our first contribution is a multiview pose estimation system. Unlike previous work on 3D pose estimation using a single depth camera, we relax constraints on the camera location and do not assume a co-operative user. We apply recent image segmentation techniques with convolutional neural networks to depth images and use curriculum learning to train our system on purely synthetic data. Our method accurately localizes body parts without requiring an explicit shape model. The body joint locations are then recovered by combining evidence from multiple views in real-time. Our second contribution is a dataset of 6 million synthetic depth frames for pose estimation from multiple cameras with varying levels of complexity to make curriculum learning possible. We show the efficacy and applicability of our data generation process through various evaluations. Our final system exceeds the state-of-the-art results on multiview pose estimation on the Berkeley MHAD dataset. Our third contribution is a scalable software platform to coordinate Kinect devices in real-time over a network. We use various compression techniques and develop software services that allow communication with multiple Kinects through TCP/IP. The flexibility of our system allows real-time orchestration of up to 10 Kinect devices over Ethernet.
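The joint-recovery step described in this abstract (combining evidence from several calibrated depth views) can be illustrated with a minimal sketch. The code below is not taken from the thesis; it assumes each Kinect supplies a per-view 2D joint estimate with a depth value and a confidence score, together with known intrinsics and extrinsics, and simply back-projects every estimate to world coordinates and takes a confidence-weighted average. The function names `backproject` and `fuse_joint` and the parameter layout are illustrative assumptions.

```python
import numpy as np

def backproject(u, v, z, K, R, t):
    """Back-project pixel (u, v) with depth z (metres) into world coordinates.

    K is the 3x3 depth-camera intrinsics; R, t map camera to world coordinates.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return R @ p_cam + t

def fuse_joint(detections):
    """Confidence-weighted fusion of one joint across views.

    detections: list of dicts with keys u, v, z, conf, K, R, t (one per camera
    that saw the joint). Returns a single 3D world-space estimate.
    """
    points, weights = [], []
    for d in detections:
        points.append(backproject(d["u"], d["v"], d["z"], d["K"], d["R"], d["t"]))
        weights.append(d["conf"])
    points = np.stack(points)
    weights = np.asarray(weights, dtype=float)
    return (points * weights[:, None]).sum(axis=0) / weights.sum()
```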
4

Khattak, Shadan. "Low complexity multiview video coding." Thesis, De Montfort University, 2014. http://hdl.handle.net/2086/10511.

Full text of the source
Abstract:
3D video is a technology that has seen tremendous attention in recent years. Multiview Video Coding (MVC) is an extension of the popular H.264 video coding standard and is commonly used to compress 3D videos. It offers an improvement of 20% to 50% in compression efficiency over simulcast encoding of multiview videos using the conventional H.264 video coding standard. However, there are two important problems associated with it: (i) its superior compression performance comes at the cost of significantly higher computational complexity, which hampers the real-world realization of the MVC encoder in applications such as 3D live broadcasting and interactive Free Viewpoint Television (FTV), and (ii) compressed 3D videos can suffer from packet loss during transmission, which can degrade the viewing quality of the 3D video at the decoder. This thesis aims to solve these problems by presenting techniques to reduce the computational complexity of the MVC encoder and by proposing a consistent error concealment technique for frame losses in 3D video transmission. The thesis first analyses the complexity of the MVC encoder. It then proposes two novel techniques to reduce the complexity of motion and disparity estimation. The first method achieves complexity reduction in the disparity estimation process by exploiting the relationship between temporal levels, types of macroblocks, and search ranges, while the second method achieves it by exploiting the geometrical relationship between motion and disparity vectors in stereo frames. These two methods are then combined with other state-of-the-art methods in a unique framework where the gains add up. Experimental results show that the proposed low-complexity framework can reduce the encoding time of the standard MVC encoder by over 93% while maintaining similar compression efficiency. The addition of new View Synthesis Prediction (VSP) modes to the MVC encoding framework improves the compression efficiency of MVC. However, testing additional modes comes at the cost of increased encoding complexity. In order to reduce the encoding complexity, the thesis next proposes a Bayesian early mode decision technique for a VSP-enhanced MVC coder. It exploits the statistical similarities between the RD costs of the VSP SKIP mode in neighbouring views to terminate the mode decision process early. Results indicate that the proposed technique can reduce the encoding time of the enhanced MVC coder by over 33% at similar compression efficiency levels. Finally, compressed 3D videos usually have to be broadcast to a large number of users, where transmission errors can lead to frame losses that degrade the video quality at the decoder. A simple reconstruction of the lost frames can lead to an inconsistent reconstruction of the 3D scene, which may negatively affect the viewing experience. To solve this problem, the thesis proposes a consistency model for recovering frames lost during transmission. The proposed consistency model is used to evaluate inter-view and temporal consistencies while selecting candidate blocks for concealment. Experimental results show that the proposed technique is able to recover the lost frames with higher consistency and better quality than two standard error concealment methods and a baseline technique based on the boundary matching algorithm.
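The second complexity-reduction idea mentioned above, exploiting the geometrical relationship between motion and disparity vectors in stereo frames, can be sketched as follows. This is not the thesis implementation; it only illustrates the commonly used vector relation MV_right ≈ MV_left + D(t) − D(t−1), which lets the dependent-view motion search be centred on a predicted vector and restricted to a small window instead of a full search range. The function names and the margin value are illustrative assumptions.

```python
import numpy as np

def predict_right_view_mv(mv_left, disp_t, disp_prev):
    """Predict the motion vector of a right-view block from stereo geometry.

    For a scene point seen in both views, motion and disparity vectors close a
    loop: MV_right ≈ MV_left + D(t) - D(t-1). All vectors are 2D (dx, dy).
    """
    return np.asarray(mv_left) + np.asarray(disp_t) - np.asarray(disp_prev)

def narrowed_search_window(predicted_mv, margin=4):
    """Centre a small search window on the predicted vector instead of (0, 0).

    Returns (x_min, x_max, y_min, y_max) relative to the current block, so the
    encoder evaluates only (2*margin + 1)^2 candidates instead of a full range.
    """
    px, py = int(round(predicted_mv[0])), int(round(predicted_mv[1]))
    return (px - margin, px + margin, py - margin, py + margin)

# Example: the co-located left-view block moved by (6, -2); disparity barely changed.
mv_pred = predict_right_view_mv(mv_left=(6, -2), disp_t=(-38, 0), disp_prev=(-37, 0))
print(mv_pred, narrowed_search_window(mv_pred))  # [5 -2] and a 9x9 window around it
```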
5

Barba, Ferrer Pere. "Multiview Landmark Detection for Identity-Preserving Alignment." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142475.

Full text of the source
Abstract:
Face recognition is a fundamental task in computer vision and has been an important field of study for many years. Its importance in activities such as face recognition and classification, 3D animation, virtual modelling or biomedicine makes it a top-demanded activity, but finding accurate solutions still represents a great challenge nowadays. This report presents a unified process for automatically extracting a set of face landmarks and removing all differences related to pose, expression and environment by bringing faces to a neutral pose-centred state. Landmark detection is based on a multiple-viewpoint Pictorial Structure model, which specifies, first, a part for each landmark we want to extract; second, a tree structure to constrain its position within the face geometry; and third, multiple trees to model differences due to orientation. In this report we address both the problem of how to find a set of landmarks from a model and the problem of training such a model from a set of labelled examples. We show how such a model successfully captures a great range of deformations while needing far fewer training examples than common commercial face detectors. The alignment process aims to remove differences between multiple faces so that they can all be analysed under the same criteria. It is carried out with thin-plate splines to adjust the detected set of landmarks to the desired configuration. With this method we ensure smooth interpolations while the subject identity is preserved by modifying the originally extracted configuration of parts and creating a generic distribution with the help of a reference face dataset. We present results of our algorithms both in a constrained environment and on the challenging LFPW face database. Successful outcomes show that our method is a solid process for jointly recognising and warping faces in the wild and that it is on a par with other state-of-the-art procedures.
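The thin-plate-spline alignment step described in this abstract can be illustrated with a short sketch. The code below is not the author's implementation; it is a plain NumPy fit of a 2D thin-plate spline that maps a set of detected landmarks onto a reference configuration and can then warp arbitrary image points. The function names `fit_tps` and `warp_tps` are illustrative.

```python
import numpy as np

def _tps_kernel(r2):
    # Radial basis U(r) = r^2 * log(r^2), with U(0) = 0 by convention.
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-12)))

def fit_tps(src, dst):
    """Fit a 2D thin-plate spline that maps src landmarks onto dst landmarks.

    src, dst: (n, 2) arrays with n >= 3 non-collinear points.
    Returns the parameters consumed by warp_tps.
    """
    n = src.shape[0]
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = _tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(L, rhs)               # (n+3, 2): weights + affine
    return src, params

def warp_tps(points, tps):
    """Apply the fitted spline to arbitrary (m, 2) points."""
    src, params = tps
    d2 = np.sum((points[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = _tps_kernel(d2)
    P = np.hstack([np.ones((points.shape[0], 1)), points])
    return U @ params[:src.shape[0]] + P @ params[src.shape[0]:]
```

A typical use is to fit the spline from the detected landmark set to a mean (reference) landmark configuration and then warp the whole image grid with the same transform, which is how identity-preserving alignment is usually realised in practice.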
6

Mendonça, Paulo Ricardo dos Santos. "Multiview geometry : profiles and self-calibration." Thesis, University of Cambridge, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.621114.

Full text of the source
7

Aksay, Anil. "Error Resilient Multiview Video Coding And Streaming." PhD thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12611682/index.pdf.

Full text of the source
Abstract:
In this thesis, a number of novel techniques for error resilient coding and streaming of multiview video are presented. First of all, a novel coding technique for stereoscopic video is proposed, where additional coding gain is achieved by downsampling one of the views spatially or temporally, based on the well-known theory that the human visual system can perceive high frequencies in 3D from the higher quality view. With the proposed coding technique, stereoscopic videos can be coded at a rate of up to 1.2 times that of monoscopic videos with little degradation of visual quality. Next, a systematic method for the design and optimization of multi-threaded multiview video encoding/decoding algorithms using multi-core processors is proposed. The proposed multi-core decoding architectures are compliant with the current international standards and enable multi-threaded processing with negligible loss of encoding efficiency and minimal processing overhead. An end-to-end 3D streaming system over the Internet using current standards is implemented. A heuristic methodology for modeling the end-to-end rate-distortion characteristic of this system is suggested, and the parameters of the system are optimally selected using this model. An end-to-end 3D broadcasting system over DVB-H using current standards is also implemented. Extensive testing is employed to show the importance and characteristics of several error resilience tools. Finally, the end-to-end RD characteristics are modeled to optimize the encoding and protection parameters.
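The asymmetric stereoscopic coding idea described above (downsampling one view before encoding, on the assumption that the higher-quality view carries the high frequencies) can be illustrated with a minimal sketch. This is not the thesis code; it only shows the pre- and post-processing around a standard encoder: one view is box-filtered to half resolution before encoding and re-expanded after decoding. The names and array shapes are illustrative.

```python
import numpy as np

def downsample_half(view):
    """Spatially downsample one view by 2 in each dimension (box filter).

    view: (H, W) luma array with even H and W.
    """
    h, w = view.shape
    return view.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_double(view):
    """Restore full resolution at the decoder by pixel replication."""
    return np.repeat(np.repeat(view, 2, axis=0), 2, axis=1)

# Asymmetric coding sketch: the left view is encoded at full resolution, the
# right view with a quarter of the samples, so the stereo pair costs well
# under twice the monoscopic rate even before inter-view prediction.
left = np.random.randint(0, 256, (480, 640)).astype(float)
right = np.random.randint(0, 256, (480, 640)).astype(float)
right_low = downsample_half(right)      # what would be sent to the encoder
right_rec = upsample_double(right_low)  # what the decoder would display
```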
8

Richter, Stefan. "Compression and View Interpolation for Multiview Imagery." Thesis, KTH, Ljud- och bildbehandling, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-37699.

Full text of the source
9

Jutla, Dawn N. "Multiview model for protection and access control." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ31529.pdf.

Full text of the source
10

Seeling, Christian. "MultiView-Systeme zur explorativen Analyse unstrukturierter Information." Aachen Shaker, 2007. http://d-nb.info/1000271293/34.

Full text of the source
More sources